Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines

Guillaume Desjardins, Aaron Courville, Yoshua Bengio, Pascal Vincent, Olivier Delalleau
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:145-152, 2010.

Abstract

Alternating Gibbs sampling is the most common scheme used for sampling from Restricted Boltzmann Machines (RBMs), a crucial component in deep architectures such as Deep Belief Networks. However, we find that it often does a very poor job of rendering the diversity of modes captured by the trained model. We suspect that this hinders the advantage that could in principle be brought by training algorithms relying on Gibbs sampling for uncovering spurious modes, such as the Persistent Contrastive Divergence algorithm. To alleviate this problem, we explore the use of tempered Markov Chain Monte Carlo for sampling in RBMs. We find both through visualization of samples and measures of likelihood on a toy dataset that it helps both sampling and learning.
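The tempered scheme the abstract refers to can be illustrated with a generic parallel-tempering sampler over an RBM's joint energy: several Gibbs chains run at different inverse temperatures, and states of adjacent-temperature chains are occasionally swapped with a Metropolis acceptance test. The following minimal NumPy sketch uses a toy RBM with random, untrained weights; all names and the temperature ladder are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary RBM with random (untrained) weights -- purely illustrative.
n_vis, n_hid = 6, 4
W = rng.normal(0.0, 1.0, (n_vis, n_hid))
b = np.zeros(n_vis)  # visible biases
c = np.zeros(n_hid)  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, beta):
    """One alternating Gibbs sweep at inverse temperature beta."""
    h = (rng.random(n_hid) < sigmoid(beta * (v @ W + c))).astype(float)
    v = (rng.random(n_vis) < sigmoid(beta * (h @ W.T + b))).astype(float)
    return v, h

def energy(v, h):
    """Joint RBM energy E(v, h) = -b'v - c'h - v'Wh."""
    return -(v @ b) - (h @ c) - v @ W @ h

# Ladder of inverse temperatures: beta near 0 mixes freely, beta = 1 is the target.
betas = np.linspace(0.1, 1.0, 5)
chains = [(rng.integers(0, 2, n_vis).astype(float), np.zeros(n_hid))
          for _ in betas]

for step in range(500):
    # Advance every chain at its own temperature.
    chains = [gibbs_step(v, beta) for (v, _), beta in zip(chains, betas)]
    # Replica-exchange move: propose swapping states of adjacent temperatures.
    # Acceptance ratio exp((beta_k - beta_{k+1}) * (E_k - E_{k+1})) leaves the
    # product of tempered distributions invariant.
    for k in range(len(betas) - 1):
        dE = energy(*chains[k]) - energy(*chains[k + 1])
        if np.log(rng.random()) < (betas[k] - betas[k + 1]) * dE:
            chains[k], chains[k + 1] = chains[k + 1], chains[k]

sample, _ = chains[-1]  # the beta = 1 chain targets the RBM distribution
```

The swap move is what lets the target-temperature chain escape isolated modes: a state that first mixed at high temperature (low beta) can migrate down the ladder to beta = 1.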

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-desjardins10a,
  title     = {Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines},
  author    = {Desjardins, Guillaume and Courville, Aaron and Bengio, Yoshua and Vincent, Pascal and Delalleau, Olivier},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {145--152},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/desjardins10a/desjardins10a.pdf},
  url       = {https://proceedings.mlr.press/v9/desjardins10a.html},
  abstract  = {Alternating Gibbs sampling is the most common scheme used for sampling from Restricted Boltzmann Machines (RBM), a crucial component in deep architectures such as Deep Belief Networks. However, we find that it often does a very poor job of rendering the diversity of modes captured by the trained model. We suspect that this hinders the advantage that could in principle be brought by training algorithms relying on Gibbs sampling for uncovering spurious modes, such as the Persistent Contrastive Divergence algorithm. To alleviate this problem, we explore the use of tempered Markov Chain Monte-Carlo for sampling in RBMs. We find both through visualization of samples and measures of likelihood on a toy dataset that it helps both sampling and learning.}
}
Endnote
%0 Conference Paper
%T Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines
%A Guillaume Desjardins
%A Aaron Courville
%A Yoshua Bengio
%A Pascal Vincent
%A Olivier Delalleau
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-desjardins10a
%I PMLR
%P 145--152
%U https://proceedings.mlr.press/v9/desjardins10a.html
%V 9
%X Alternating Gibbs sampling is the most common scheme used for sampling from Restricted Boltzmann Machines (RBM), a crucial component in deep architectures such as Deep Belief Networks. However, we find that it often does a very poor job of rendering the diversity of modes captured by the trained model. We suspect that this hinders the advantage that could in principle be brought by training algorithms relying on Gibbs sampling for uncovering spurious modes, such as the Persistent Contrastive Divergence algorithm. To alleviate this problem, we explore the use of tempered Markov Chain Monte-Carlo for sampling in RBMs. We find both through visualization of samples and measures of likelihood on a toy dataset that it helps both sampling and learning.
RIS
TY  - CPAPER
TI  - Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines
AU  - Guillaume Desjardins
AU  - Aaron Courville
AU  - Yoshua Bengio
AU  - Pascal Vincent
AU  - Olivier Delalleau
BT  - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA  - 2010/03/31
ED  - Yee Whye Teh
ED  - Mike Titterington
ID  - pmlr-v9-desjardins10a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 9
SP  - 145
EP  - 152
L1  - http://proceedings.mlr.press/v9/desjardins10a/desjardins10a.pdf
UR  - https://proceedings.mlr.press/v9/desjardins10a.html
AB  - Alternating Gibbs sampling is the most common scheme used for sampling from Restricted Boltzmann Machines (RBM), a crucial component in deep architectures such as Deep Belief Networks. However, we find that it often does a very poor job of rendering the diversity of modes captured by the trained model. We suspect that this hinders the advantage that could in principle be brought by training algorithms relying on Gibbs sampling for uncovering spurious modes, such as the Persistent Contrastive Divergence algorithm. To alleviate this problem, we explore the use of tempered Markov Chain Monte-Carlo for sampling in RBMs. We find both through visualization of samples and measures of likelihood on a toy dataset that it helps both sampling and learning.
ER  -
APA
Desjardins, G., Courville, A., Bengio, Y., Vincent, P. & Delalleau, O. (2010). Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:145-152. Available from https://proceedings.mlr.press/v9/desjardins10a.html.