Learning the Structure of Deep Sparse Graphical Models

Ryan P. Adams, Hanna Wallach, Zoubin Ghahramani
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:1-8, 2010.

Abstract

Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.
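The abstract builds on the Indian buffet process as a prior over sparse binary connectivity. As background, here is a minimal sketch of the standard (non-cascading) IBP "restaurant" construction that the CIBP extends; the function name, parameters, and use of NumPy are illustrative, and this is not the paper's cascading variant, which recursively applies the process to generate new layers.

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng=None):
    """Draw a binary matrix Z from the Indian buffet process.

    Row n is customer n; column k is dish (feature) k. Each customer
    samples existing dishes in proportion to their popularity m_k / n,
    then tries Poisson(alpha / n) brand-new dishes. The resulting Z is
    sparse and has an unbounded number of columns a priori.
    """
    rng = np.random.default_rng(rng)
    dish_counts = []  # m_k: how many customers have taken dish k so far
    rows = []         # per-customer dish selections (ragged)
    for n in range(1, num_customers + 1):
        # Take each existing dish with probability m_k / n.
        row = [rng.random() < m / n for m in dish_counts]
        for k, taken in enumerate(row):
            if taken:
                dish_counts[k] += 1
        # Try a Poisson(alpha / n) number of new dishes.
        new = rng.poisson(alpha / n)
        dish_counts.extend([1] * new)
        row.extend([True] * new)
        rows.append(row)
    # Pad the ragged rows into a dense binary matrix.
    Z = np.zeros((num_customers, len(dish_counts)), dtype=int)
    for n, row in enumerate(rows):
        Z[n, : len(row)] = row
    return Z

Z = sample_ibp(num_customers=20, alpha=2.0, rng=0)
print(Z.shape)  # (20, K) where K is random
```

In the single-hidden-layer setting mentioned in the abstract, a matrix like `Z` specifies which hidden units connect to which visible units; the cascading construction repeats this, with each layer's dishes becoming the next layer's customers, yielding a network unbounded in both width and depth.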

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-adams10a,
  title     = {Learning the Structure of Deep Sparse Graphical Models},
  author    = {Adams, Ryan P. and Wallach, Hanna and Ghahramani, Zoubin},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {1--8},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/adams10a/adams10a.pdf},
  url       = {https://proceedings.mlr.press/v9/adams10a.html},
  abstract  = {Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.}
}
Endnote
%0 Conference Paper
%T Learning the Structure of Deep Sparse Graphical Models
%A Ryan P. Adams
%A Hanna Wallach
%A Zoubin Ghahramani
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-adams10a
%I PMLR
%P 1--8
%U https://proceedings.mlr.press/v9/adams10a.html
%V 9
%X Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.
RIS
TY  - CPAPER
TI  - Learning the Structure of Deep Sparse Graphical Models
AU  - Ryan P. Adams
AU  - Hanna Wallach
AU  - Zoubin Ghahramani
BT  - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA  - 2010/03/31
ED  - Yee Whye Teh
ED  - Mike Titterington
ID  - pmlr-v9-adams10a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 9
SP  - 1
EP  - 8
L1  - http://proceedings.mlr.press/v9/adams10a/adams10a.pdf
UR  - https://proceedings.mlr.press/v9/adams10a.html
AB  - Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.
ER  -
APA
Adams, R.P., Wallach, H. & Ghahramani, Z. (2010). Learning the Structure of Deep Sparse Graphical Models. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:1-8. Available from https://proceedings.mlr.press/v9/adams10a.html.