Generating Efficient MCMC Kernels from Probabilistic Programs

Lingfeng Yang, Patrick Hanrahan, Noah Goodman
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:1068-1076, 2014.

Abstract

Universal probabilistic programming languages (such as Church) trade performance for abstraction: any model can be represented compactly as an arbitrary stochastic computation, but costly online analyses are required for inference. We present a technique that recovers hand-coded levels of performance from a universal probabilistic language, for the Metropolis-Hastings (MH) MCMC inference algorithm. It takes a Church program as input and traces its execution to remove computation overhead. It then analyzes the trace for each proposal, using slicing, to identify the minimal computation needed to evaluate the MH acceptance probability. Generated incremental code is much faster than a baseline implementation (up to 600x) and usually as fast as hand-coded MH kernels.
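The abstract refers to the Metropolis-Hastings acceptance probability that the generated kernels evaluate incrementally. As background, here is a minimal generic MH step (not the authors' system, and without the trace-slicing optimization the paper describes): with a symmetric proposal, a move from x to x' is accepted with probability min(1, p(x')/p(x)), computed in log space. The target, proposal, and chain length below are illustrative choices.

```python
import math
import random

def mh_step(x, log_p, propose):
    """One Metropolis-Hastings transition with a symmetric proposal:
    accept x' with probability min(1, p(x') / p(x))."""
    x_new = propose(x)
    log_alpha = log_p(x_new) - log_p(x)   # log acceptance ratio
    if math.log(random.random()) < min(0.0, log_alpha):
        return x_new
    return x

# Example: random-walk MH targeting a standard normal distribution.
log_normal = lambda x: -0.5 * x * x           # log-density up to a constant
walk = lambda x: x + random.gauss(0.0, 1.0)   # symmetric Gaussian proposal

random.seed(0)
x, samples = 0.0, []
for i in range(21000):
    x = mh_step(x, log_normal, walk)
    if i >= 1000:                             # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
```

The paper's contribution is to avoid re-evaluating log_p over the whole program trace: slicing identifies the subset of the computation a given proposal can affect, so only those factors are recomputed when forming log_alpha.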

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-yang14d,
  title     = {{Generating Efficient MCMC Kernels from Probabilistic Programs}},
  author    = {Yang, Lingfeng and Hanrahan, Patrick and Goodman, Noah},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {1068--1076},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/yang14d.pdf},
  url       = {https://proceedings.mlr.press/v33/yang14d.html},
  abstract  = {Universal probabilistic programming languages (such as Church) trade performance for abstraction: any model can be represented compactly as an arbitrary stochastic computation, but costly online analyses are required for inference. We present a technique that recovers hand-coded levels of performance from a universal probabilistic language, for the Metropolis-Hastings (MH) MCMC inference algorithm. It takes a Church program as input and traces its execution to remove computation overhead. It then analyzes the trace for each proposal, using slicing, to identify the minimal computation needed to evaluate the MH acceptance probability. Generated incremental code is much faster than a baseline implementation (up to 600x) and usually as fast as hand-coded MH kernels.}
}
Endnote
%0 Conference Paper
%T Generating Efficient MCMC Kernels from Probabilistic Programs
%A Lingfeng Yang
%A Patrick Hanrahan
%A Noah Goodman
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-yang14d
%I PMLR
%P 1068--1076
%U https://proceedings.mlr.press/v33/yang14d.html
%V 33
%X Universal probabilistic programming languages (such as Church) trade performance for abstraction: any model can be represented compactly as an arbitrary stochastic computation, but costly online analyses are required for inference. We present a technique that recovers hand-coded levels of performance from a universal probabilistic language, for the Metropolis-Hastings (MH) MCMC inference algorithm. It takes a Church program as input and traces its execution to remove computation overhead. It then analyzes the trace for each proposal, using slicing, to identify the minimal computation needed to evaluate the MH acceptance probability. Generated incremental code is much faster than a baseline implementation (up to 600x) and usually as fast as hand-coded MH kernels.
RIS
TY  - CPAPER
TI  - Generating Efficient MCMC Kernels from Probabilistic Programs
AU  - Lingfeng Yang
AU  - Patrick Hanrahan
AU  - Noah Goodman
BT  - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
DA  - 2014/04/02
ED  - Samuel Kaski
ED  - Jukka Corander
ID  - pmlr-v33-yang14d
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 33
SP  - 1068
EP  - 1076
L1  - http://proceedings.mlr.press/v33/yang14d.pdf
UR  - https://proceedings.mlr.press/v33/yang14d.html
AB  - Universal probabilistic programming languages (such as Church) trade performance for abstraction: any model can be represented compactly as an arbitrary stochastic computation, but costly online analyses are required for inference. We present a technique that recovers hand-coded levels of performance from a universal probabilistic language, for the Metropolis-Hastings (MH) MCMC inference algorithm. It takes a Church program as input and traces its execution to remove computation overhead. It then analyzes the trace for each proposal, using slicing, to identify the minimal computation needed to evaluate the MH acceptance probability. Generated incremental code is much faster than a baseline implementation (up to 600x) and usually as fast as hand-coded MH kernels.
ER  -
APA
Yang, L., Hanrahan, P. & Goodman, N. (2014). Generating Efficient MCMC Kernels from Probabilistic Programs. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:1068-1076. Available from https://proceedings.mlr.press/v33/yang14d.html.