Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images

Marc’Aurelio Ranzato, Alex Krizhevsky, Geoffrey Hinton
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:621-628, 2010.

Abstract

Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBMs to learn a deeper model.
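
As a rough illustration of the architecture the abstract describes (the notation below, and the sign/normalization constraints the paper places on the parameters, are assumptions here rather than quotes from the paper), a factored 3-way energy over a real-valued image patch v and binary hidden units h can be sketched as

E(v, h) = -\sum_{f} \Big(\sum_{i} C_{if}\, v_i\Big)^{2} \Big(\sum_{k} P_{fk}\, h_k\Big) \;-\; \sum_{k} b_k h_k

where each factor f applies a linear filter (a column of C) to the patch and the hidden units gate the squared filter responses through P. Because the hidden states modulate pairwise interactions among pixels rather than their means, a hidden unit changing state signals a departure from the expected local covariance structure, which is the sense in which the abstract says the hidden units represent “abnormalities”.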

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-ranzato10a,
  title     = {Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images},
  author    = {Ranzato, Marc’Aurelio and Krizhevsky, Alex and Hinton, Geoffrey},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {621--628},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/ranzato10a/ranzato10a.pdf},
  url       = {https://proceedings.mlr.press/v9/ranzato10a.html},
  abstract  = {Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBMs to learn a deeper model.}
}
Endnote
%0 Conference Paper
%T Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images
%A Marc’Aurelio Ranzato
%A Alex Krizhevsky
%A Geoffrey Hinton
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-ranzato10a
%I PMLR
%P 621--628
%U https://proceedings.mlr.press/v9/ranzato10a.html
%V 9
%X Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBMs to learn a deeper model.
RIS
TY - CPAPER
TI - Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images
AU - Marc’Aurelio Ranzato
AU - Alex Krizhevsky
AU - Geoffrey Hinton
BT - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA - 2010/03/31
ED - Yee Whye Teh
ED - Mike Titterington
ID - pmlr-v9-ranzato10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 9
SP - 621
EP - 628
L1 - http://proceedings.mlr.press/v9/ranzato10a/ranzato10a.pdf
UR - https://proceedings.mlr.press/v9/ranzato10a.html
AB - Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBMs to learn a deeper model.
ER -
APA
Ranzato, M., Krizhevsky, A. & Hinton, G. (2010). Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:621-628. Available from https://proceedings.mlr.press/v9/ranzato10a.html.
