Scalable Variational Inference in Log-supermodular Models

Josip Djolonga, Andreas Krause
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1804-1813, 2015.

Abstract

We consider the problem of approximate Bayesian inference in log-supermodular models. These models encompass regular pairwise MRFs with binary variables, but also capture higher-order interactions, which are intractable for existing approximate inference techniques such as belief propagation, mean field, and variants. We show that a recently proposed variational approach to inference in log-supermodular models – L-Field – reduces to the widely studied minimum-norm problem for submodular minimization. This insight allows us to leverage powerful existing tools and to solve the variational problem orders of magnitude more efficiently than previously possible. We then provide another natural interpretation of L-Field, demonstrating that it exactly minimizes a specific type of Rényi divergence measure. This insight sheds light on the nature of the variational approximations produced by L-Field. Furthermore, we show how to perform parallel inference as message passing in a suitable factor graph at a linear convergence rate, without having to sum over all configurations of a factor. Finally, we apply our approach to a challenging image segmentation task. Our experiments confirm the scalability of our approach, the high quality of the marginals, and the benefit of incorporating higher-order potentials.
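To make the setting concrete: a log-supermodular model places probability P(S) ∝ exp(−F(S)) on subsets S of a ground set, where F is submodular (e.g., a graph cut plus unary costs). The sketch below, with hypothetical costs and couplings chosen for illustration, computes exact marginals for a tiny n = 3 model by enumerating all 2^n subsets; this brute force is what becomes intractable at scale and what L-Field approximates efficiently.

```python
import itertools
import math

# Toy log-supermodular model P(S) ~ exp(-F(S)) over a ground set of n = 3
# elements. F is a cut-style function (modular unary costs plus nonnegative
# penalties on "cut" pairs), which is submodular. All numbers are
# hypothetical, chosen only for illustration.
n = 3
unary = [0.5, -0.2, 0.1]            # per-element (modular) costs
pairs = {(0, 1): 0.4, (1, 2): 0.3}  # nonnegative pairwise coupling strengths

def F(S):
    """Energy of subset S: unary costs plus a penalty for each separated pair."""
    cost = sum(unary[i] for i in S)
    for (i, j), w in pairs.items():
        if (i in S) != (j in S):  # the pair (i, j) is cut by S
            cost += w
    return cost

# Enumerate all 2^n subsets and normalize exp(-F(S)) to get the distribution.
subsets = [frozenset(c) for k in range(n + 1)
           for c in itertools.combinations(range(n), k)]
weights = {S: math.exp(-F(S)) for S in subsets}
Z = sum(weights.values())  # partition function

# Exact marginal P(i in S) for each element i, by summing over subsets.
marginals = [sum(w for S, w in weights.items() if i in S) / Z
             for i in range(n)]
print(marginals)
```

Exhaustive enumeration costs O(2^n), so it only works for toy problems; the paper's contribution is computing approximate marginals for such models via submodular minimization instead.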

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-djolonga15,
  title     = {Scalable Variational Inference in Log-supermodular Models},
  author    = {Djolonga, Josip and Krause, Andreas},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1804--1813},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/djolonga15.pdf},
  url       = {https://proceedings.mlr.press/v37/djolonga15.html},
  abstract  = {We consider the problem of approximate Bayesian inference in log-supermodular models. These models encompass regular pairwise MRFs with binary variables, but allow to capture high order interactions, which are intractable for existing approximate inference techniques such as belief propagation, mean field and variants. We show that a recently proposed variational approach to inference in log-supermodular models – L-Field – reduces to the widely studied minimum norm problem for submodular minimization. This insight allows to leverage powerful existing tools, and allows solving the variational problem orders of magnitude more efficiently than previously possible. We then provide another natural interpretation of L-Field, demonstrating that it exactly minimizes a specific type of Renyi divergence measure. This insight sheds light on the nature of the variational approximations produced by L-Field. Furthermore, we show how to perform parallel inference as message passing in a suitable factor graph at a linear convergence rate, without having to sum up over all the configurations of the factor. Finally, we apply our approach to a challenging image segmentation task. Our experiments confirm scalability of our approach, high quality of the marginals and the benefit of incorporating higher order potentials.}
}
EndNote
%0 Conference Paper
%T Scalable Variational Inference in Log-supermodular Models
%A Josip Djolonga
%A Andreas Krause
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-djolonga15
%I PMLR
%P 1804--1813
%U https://proceedings.mlr.press/v37/djolonga15.html
%V 37
%X We consider the problem of approximate Bayesian inference in log-supermodular models. These models encompass regular pairwise MRFs with binary variables, but allow to capture high order interactions, which are intractable for existing approximate inference techniques such as belief propagation, mean field and variants. We show that a recently proposed variational approach to inference in log-supermodular models – L-Field – reduces to the widely studied minimum norm problem for submodular minimization. This insight allows to leverage powerful existing tools, and allows solving the variational problem orders of magnitude more efficiently than previously possible. We then provide another natural interpretation of L-Field, demonstrating that it exactly minimizes a specific type of Renyi divergence measure. This insight sheds light on the nature of the variational approximations produced by L-Field. Furthermore, we show how to perform parallel inference as message passing in a suitable factor graph at a linear convergence rate, without having to sum up over all the configurations of the factor. Finally, we apply our approach to a challenging image segmentation task. Our experiments confirm scalability of our approach, high quality of the marginals and the benefit of incorporating higher order potentials.
RIS
TY  - CPAPER
TI  - Scalable Variational Inference in Log-supermodular Models
AU  - Josip Djolonga
AU  - Andreas Krause
BT  - Proceedings of the 32nd International Conference on Machine Learning
DA  - 2015/06/01
ED  - Francis Bach
ED  - David Blei
ID  - pmlr-v37-djolonga15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 37
SP  - 1804
EP  - 1813
L1  - http://proceedings.mlr.press/v37/djolonga15.pdf
UR  - https://proceedings.mlr.press/v37/djolonga15.html
AB  - We consider the problem of approximate Bayesian inference in log-supermodular models. These models encompass regular pairwise MRFs with binary variables, but allow to capture high order interactions, which are intractable for existing approximate inference techniques such as belief propagation, mean field and variants. We show that a recently proposed variational approach to inference in log-supermodular models – L-Field – reduces to the widely studied minimum norm problem for submodular minimization. This insight allows to leverage powerful existing tools, and allows solving the variational problem orders of magnitude more efficiently than previously possible. We then provide another natural interpretation of L-Field, demonstrating that it exactly minimizes a specific type of Renyi divergence measure. This insight sheds light on the nature of the variational approximations produced by L-Field. Furthermore, we show how to perform parallel inference as message passing in a suitable factor graph at a linear convergence rate, without having to sum up over all the configurations of the factor. Finally, we apply our approach to a challenging image segmentation task. Our experiments confirm scalability of our approach, high quality of the marginals and the benefit of incorporating higher order potentials.
ER  -
APA
Djolonga, J. & Krause, A. (2015). Scalable Variational Inference in Log-supermodular Models. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1804-1813. Available from https://proceedings.mlr.press/v37/djolonga15.html.
