Linear Approximation to ADMM for MAP inference

Sholeh Forouzan, Alexander Ihler
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:48-61, 2013.

Abstract

Maximum a posteriori (MAP) inference is one of the fundamental inference tasks in graphical models. MAP inference is in general NP-hard, making approximate methods of interest for many problems. One successful class of approximate inference algorithms is based on linear programming (LP) relaxations. The augmented Lagrangian method can be used to overcome a lack of strict convexity in LP relaxations, and the Alternating Direction Method of Multipliers (ADMM) provides an elegant algorithm for finding the saddle point of the augmented Lagrangian. Here we present an ADMM-based algorithm to solve the primal form of the MAP-LP whose closed form updates are based on a linear approximation technique. Our technique gives efficient, closed form updates that converge to the global optimum of the LP relaxation. We compare our algorithm to two existing ADMM-based MAP-LP methods, showing that our technique is faster on general, non-binary or non-pairwise models.
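
For readers unfamiliar with the terms in the abstract, here is a minimal LaTeX sketch of the generic augmented Lagrangian and ADMM updates that the abstract refers to. This is standard background only, not the paper's specialized primal MAP-LP algorithm, and the symbols f, g, A, B, c, \rho, \eta below are generic placeholders rather than the paper's notation. For a problem

  \min_{x,z} \; f(x) + g(z) \quad \text{subject to} \quad Ax + Bz = c,

the augmented Lagrangian with penalty parameter \rho > 0 is

  L_\rho(x, z, y) = f(x) + g(z) + y^\top (Ax + Bz - c) + \tfrac{\rho}{2} \, \| Ax + Bz - c \|_2^2,

and ADMM alternates the updates

  x^{k+1} = \arg\min_x \; L_\rho(x, z^k, y^k),
  z^{k+1} = \arg\min_z \; L_\rho(x^{k+1}, z, y^k),
  y^{k+1} = y^k + \rho \, (A x^{k+1} + B z^{k+1} - c).

One common way to obtain closed-form updates when the x-minimization has no closed form is to linearize the quadratic penalty around the current iterate x^k and add a proximal term \tfrac{1}{2\eta} \| x - x^k \|_2^2. The paper develops a linear approximation of this general flavor for the primal MAP-LP; its exact updates are given in the paper itself.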

Cite this Paper


BibTeX
@InProceedings{pmlr-v29-Forouzan13,
  title     = {Linear Approximation to ADMM for MAP inference},
  author    = {Forouzan, Sholeh and Ihler, Alexander},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  pages     = {48--61},
  year      = {2013},
  editor    = {Ong, Cheng Soon and Ho, Tu Bao},
  volume    = {29},
  series    = {Proceedings of Machine Learning Research},
  address   = {Australian National University, Canberra, Australia},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v29/Forouzan13.pdf},
  url       = {https://proceedings.mlr.press/v29/Forouzan13.html},
  abstract  = {Maximum a posteriori (MAP) inference is one of the fundamental inference tasks in graphical models. MAP inference is in general NP-hard, making approximate methods of interest for many problems. One successful class of approximate inference algorithms is based on linear programming (LP) relaxations. The augmented Lagrangian method can be used to overcome a lack of strict convexity in LP relaxations, and the Alternating Direction Method of Multipliers (ADMM) provides an elegant algorithm for finding the saddle point of the augmented Lagrangian. Here we present an ADMM-based algorithm to solve the primal form of the MAP-LP whose closed form updates are based on a linear approximation technique. Our technique gives efficient, closed form updates that converge to the global optimum of the LP relaxation. We compare our algorithm to two existing ADMM-based MAP-LP methods, showing that our technique is faster on general, non-binary or non-pairwise models.}
}
APA
Forouzan, S. & Ihler, A. (2013). Linear Approximation to ADMM for MAP inference. Proceedings of the 5th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 29:48-61. Available from https://proceedings.mlr.press/v29/Forouzan13.html.

Related Material

Download PDF: http://proceedings.mlr.press/v29/Forouzan13.pdf