Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets

Diederik Kingma, Max Welling
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1782-1790, 2014.

Abstract

Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each parameterization is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
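
To make the abstract's central idea concrete, here is a minimal sketch (not code from the paper; the toy model, variable names, and parameter values are assumptions for illustration only). It contrasts a centered parameterization, where the latent variable z is sampled directly from its conditional distribution, with a differentiable non-centered parameterization, where z is written as a deterministic function of the parameters and auxiliary noise, so that a simple Monte Carlo estimate of the marginal likelihood becomes differentiable with respect to the parameters.

# Minimal sketch (not from the paper): centered vs. differentiable
# non-centered parameterization of a single Gaussian latent variable.
# Assumed toy model: z ~ N(mu, sigma^2), x | z ~ N(w * z, 1).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, w = 0.5, 1.2, 2.0   # model parameters (illustrative values)
x_obs = 1.3                    # a single observed data point

def log_gauss(x, mean, std):
    # log density of N(mean, std^2) evaluated at x
    return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((x - mean) / std) ** 2

# Centered parameterization: the latent z is sampled directly.
z_centered = rng.normal(mu, sigma, size=10000)

# Differentiable non-centered parameterization: z is a deterministic,
# differentiable function of the parameters and auxiliary noise eps ~ N(0, 1).
eps = rng.normal(0.0, 1.0, size=10000)
z_noncentered = mu + sigma * eps   # same distribution, but gradients w.r.t.
                                   # (mu, sigma) now flow through z

# Simple Monte Carlo estimate of the marginal likelihood p(x) = E_z[p(x | z)].
# In the non-centered form this estimator is differentiable in (mu, sigma, w).
p_x_hat = np.mean(np.exp(log_gauss(x_obs, w * z_noncentered, 1.0)))

# The centered draws give the same estimate in expectation, but one cannot
# differentiate it w.r.t. (mu, sigma) by backpropagating through the samples.
p_x_hat_centered = np.mean(np.exp(log_gauss(x_obs, w * z_centered, 1.0)))

print(f"Monte Carlo estimate of p(x), non-centered: {p_x_hat:.4f}")
print(f"Monte Carlo estimate of p(x), centered:     {p_x_hat_centered:.4f}")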

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-kingma14,
  title     = {Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets},
  author    = {Kingma, Diederik and Welling, Max},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1782--1790},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/kingma14.pdf},
  url       = {https://proceedings.mlr.press/v32/kingma14.html},
  abstract  = {Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each parameterization is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.}
}
Endnote
%0 Conference Paper
%T Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets
%A Diederik Kingma
%A Max Welling
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-kingma14
%I PMLR
%P 1782--1790
%U https://proceedings.mlr.press/v32/kingma14.html
%V 32
%N 2
%X Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each parameterization is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
RIS
TY  - CPAPER
TI  - Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets
AU  - Diederik Kingma
AU  - Max Welling
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-kingma14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 1782
EP  - 1790
L1  - http://proceedings.mlr.press/v32/kingma14.pdf
UR  - https://proceedings.mlr.press/v32/kingma14.html
AB  - Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each parameterization is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
ER  -
APA
Kingma, D. & Welling, M. (2014). Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1782-1790. Available from https://proceedings.mlr.press/v32/kingma14.html.

Related Material