Concave Gaussian Variational Approximations for Inference in Large-Scale Bayesian Linear Models

Edward Challis, David Barber
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:199-207, 2011.

Abstract

Two popular approaches to forming bounds in approximate Bayesian inference are local variational methods and minimal Kullback-Leibler divergence methods. For a large class of models we explicitly relate the two approaches, showing that the local variational method is equivalent to a weakened form of Kullback-Leibler Gaussian approximation. This gives a strong motivation to develop efficient methods for KL minimisation. An important and previously unproven property of the KL variational Gaussian bound is that it is a concave function in the parameters of the Gaussian for log concave sites. This observation, along with compact concave parametrisations of the covariance, enables us to develop fast scalable optimisation procedures to obtain lower bounds on the marginal likelihood in large scale Bayesian linear models.
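As a concrete illustration of the bound the abstract refers to, the sketch below (not the authors' code; the logistic likelihood, standard-normal prior, and Gauss-Hermite quadrature are assumptions made here for illustration) evaluates the KL variational Gaussian lower bound for Bayesian logistic regression, a linear model with log-concave sites. The Gaussian q(w) = N(m, C C^T) is parametrised by its mean m and Cholesky factor C, a parametrisation in which the bound is concave.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # Gauss-Hermite nodes for weight exp(-z^2/2)

def kl_vg_bound(m, C, X, y, n_quad=30):
    """Lower bound on log Z: E_q[log p(y|w) p(w)] + H[q], with q(w) = N(m, C C^T).

    Model (assumed here for illustration): prior w ~ N(0, I) and logistic sites
    p(y_n | w) = sigmoid(y_n x_n^T w), with labels y_n in {-1, +1}.
    """
    D = m.size
    mu = X @ m                                   # site means  x_n^T m
    s = np.sqrt(np.sum((X @ C) ** 2, axis=1))    # site std devs  sqrt(x_n^T C C^T x_n)
    z, wq = hermegauss(n_quad)
    wq = wq / np.sqrt(2.0 * np.pi)               # normalise weights for an N(0,1) expectation
    # E_q[log sigmoid(y_n x_n^T w)] reduces to a 1-D Gaussian expectation per site
    a = y[:, None] * (mu[:, None] + s[:, None] * z[None, :])
    site_term = np.sum(wq[None, :] * (-np.logaddexp(0.0, -a)))
    # Remaining terms: -KL(N(m, S) || N(0, I)) with S = C C^T
    logdet_S = 2.0 * np.sum(np.log(np.abs(np.diag(C))))
    kl_term = 0.5 * (logdet_S - np.sum(C ** 2) - m @ m + D)
    return site_term + kl_term

Maximising this bound over (m, C) with any gradient-based optimiser yields both a Gaussian posterior approximation and a lower bound on the marginal likelihood.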

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-challis11a,
  title     = {Concave Gaussian Variational Approximations for Inference in Large-Scale Bayesian Linear Models},
  author    = {Challis, Edward and Barber, David},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {199--207},
  year      = {2011},
  editor    = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume    = {15},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v15/challis11a/challis11a.pdf},
  url       = {https://proceedings.mlr.press/v15/challis11a.html},
  abstract  = {Two popular approaches to forming bounds in approximate Bayesian inference are local variational methods and minimal Kullback-Leibler divergence methods. For a large class of models we explicitly relate the two approaches, showing that the local variational method is equivalent to a weakened form of Kullback-Leibler Gaussian approximation. This gives a strong motivation to develop efficient methods for KL minimisation. An important and previously unproven property of the KL variational Gaussian bound is that it is a concave function in the parameters of the Gaussian for log concave sites. This observation, along with compact concave parametrisations of the covariance, enables us to develop fast scalable optimisation procedures to obtain lower bounds on the marginal likelihood in large scale Bayesian linear models.}
}
