Inductive Principles for Restricted Boltzmann Machine Learning

Benjamin Marlin, Kevin Swersky, Bo Chen, Nando Freitas
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:509-516, 2010.

Abstract

Recent research has seen the proposal of several new inductive principles designed specifically to avoid the problems associated with maximum likelihood learning in models with intractable partition functions. In this paper, we study learning methods for binary restricted Boltzmann machines (RBMs) based on ratio matching and generalized score matching. We compare these new RBM learning methods to a range of existing learning methods including stochastic maximum likelihood, contrastive divergence, and pseudo-likelihood. We perform an extensive empirical evaluation across multiple tasks and data sets.
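
For context, the following is standard binary RBM background rather than text taken from the paper itself. A binary RBM over visible units $\mathbf{v}$ and hidden units $\mathbf{h}$, with weights $W$ and biases $\mathbf{b}$ and $\mathbf{c}$, defines

$$E(\mathbf{v},\mathbf{h}) = -\mathbf{b}^\top\mathbf{v} - \mathbf{c}^\top\mathbf{h} - \mathbf{v}^\top W\mathbf{h}, \qquad P(\mathbf{v},\mathbf{h}) = \frac{\exp\bigl(-E(\mathbf{v},\mathbf{h})\bigr)}{Z}, \qquad Z = \sum_{\mathbf{v}',\mathbf{h}'} \exp\bigl(-E(\mathbf{v}',\mathbf{h}')\bigr).$$

The partition function $Z$ sums over every joint configuration of visible and hidden units, which makes exact maximum likelihood learning intractable for all but the smallest models; this is the difficulty that ratio matching, generalized score matching, pseudo-likelihood, contrastive divergence, and stochastic maximum likelihood each address in a different way.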

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-marlin10a,
  title     = {Inductive Principles for Restricted Boltzmann Machine Learning},
  author    = {Marlin, Benjamin and Swersky, Kevin and Chen, Bo and Freitas, Nando},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {509--516},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/marlin10a/marlin10a.pdf},
  url       = {https://proceedings.mlr.press/v9/marlin10a.html}
}
APA
Marlin, B., Swersky, K., Chen, B. & Freitas, N. (2010). Inductive Principles for Restricted Boltzmann Machine Learning. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:509-516. Available from https://proceedings.mlr.press/v9/marlin10a.html.
