On Measure Concentration of Random Maximum A-Posteriori Perturbations

Francesco Orabona, Tamir Hazan, Anand Sarwate, Tommi Jaakkola
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(1):432-440, 2014.

Abstract

The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high-dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from the Gibbs distribution. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. More efficient algorithms use sequential sampling strategies based on the expected value of low-dimensional MAP perturbations. This paper develops new measure concentration inequalities that bound the number of samples needed to estimate such expected values. Applying the general result to MAP perturbations can yield a more efficient algorithm for approximate sampling from the Gibbs distribution. The measure concentration result is of general interest and may be applicable to other areas involving Monte Carlo estimation of expectations.
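A quick illustration of the core mechanism (a sketch, not code from the paper): the perturb-and-MAP identity states that if each configuration's potential theta(x) is perturbed by an i.i.d. Gumbel random variable gamma(x), then argmax_x [theta(x) + gamma(x)] is distributed exactly according to the Gibbs distribution p(x) proportional to exp(theta(x)). The Python sketch below checks this on a made-up four-state potential; the values in theta are arbitrary, chosen only for the demonstration. It also hints at why the full scheme is expensive: one fresh Gumbel variable is needed per joint configuration, and the number of configurations grows exponentially with the number of variables.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy potential over a 4-state space; the Gibbs distribution is
    # p(x) proportional to exp(theta(x)). Values are arbitrary.
    theta = np.array([0.5, -1.0, 2.0, 0.0])
    gibbs = np.exp(theta) / np.exp(theta).sum()

    def perturb_and_map(theta, rng):
        # One i.i.d. Gumbel perturbation per configuration, then MAP:
        # argmax_x [theta(x) + gamma(x)] is an exact Gibbs sample.
        gamma = rng.gumbel(size=theta.shape)
        return np.argmax(theta + gamma)

    # Monte Carlo check: the empirical frequencies of the perturbed
    # MAP converge to the Gibbs probabilities.
    n = 100_000
    counts = np.zeros(theta.size)
    for _ in range(n):
        counts[perturb_and_map(theta, rng)] += 1

    print("Gibbs    :", np.round(gibbs, 3))
    print("Empirical:", np.round(counts / n, 3))

The concentration inequalities developed in the paper bound how many such Monte Carlo draws are needed before an empirical average of low-dimensional perturbed MAP values is close to its expectation, which is what makes the cheaper sequential sampling strategy practical.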

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-orabona14,
  title     = {On Measure Concentration of Random Maximum A-Posteriori Perturbations},
  author    = {Orabona, Francesco and Hazan, Tamir and Sarwate, Anand and Jaakkola, Tommi},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {432--440},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/orabona14.pdf},
  url       = {https://proceedings.mlr.press/v32/orabona14.html}
}

Related Material

Download PDF: http://proceedings.mlr.press/v32/orabona14.pdf