Distributed Learning of Gaussian Graphical Models via Marginal Likelihoods

Zhaoshi Meng, Dennis Wei, Ami Wiesel, Alfred Hero III
Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, PMLR 31:39-47, 2013.

Abstract

We consider distributed estimation of the inverse covariance matrix, also called the concentration matrix, in Gaussian graphical models. Traditional centralized estimation often requires iterative and expensive global inference and is therefore difficult in large distributed networks. In this paper, we propose a general framework for distributed estimation based on a maximum marginal likelihood (MML) approach. Each node independently computes a local estimate by maximizing a marginal likelihood defined with respect to data collected from its local neighborhood. Because the MML problem is non-convex, we derive and solve a convex relaxation. The local estimates are then combined into a global estimate without the need for iterative message-passing between neighborhoods. We prove that this relaxed MML estimator is asymptotically consistent. Through numerical experiments on several synthetic and real-world data sets, we demonstrate that the two-hop version of the proposed estimator significantly outperforms the one-hop version and, in many situations, nearly closes the gap to the centralized maximum likelihood estimator.
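To make the scheme described above concrete, here is a minimal NumPy sketch of the general neighborhood-based pipeline: each node estimates a concentration matrix from the sample covariance of its k-hop neighborhood and keeps only its own row, and the rows are then combined into a symmetric global estimate with no message-passing between neighborhoods. The plain local matrix inversion used here is a simplified stand-in for the paper's relaxed MML solver, and the function name distributed_ggm_estimate and the final symmetrization step are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def distributed_ggm_estimate(X, adjacency, hops=2):
        """Sketch of a neighborhood-based distributed GGM estimator.

        X: (n_samples, p) data matrix; adjacency: (p, p) 0/1 graph.
        Each node inverts its local sample covariance (a stand-in for
        the relaxed MML solve) and contributes its own row.
        """
        n_samples, p = X.shape
        Xc = X - X.mean(axis=0)                      # center the data
        S = (Xc.T @ Xc) / n_samples                  # sample covariance
        A = (adjacency != 0).astype(float)
        reach = np.eye(p) + A                        # 0- and 1-hop reachability
        for _ in range(hops - 1):
            reach = reach + reach @ A                # extend to k-hop neighborhoods
        K_hat = np.zeros((p, p))
        for i in range(p):                           # each node works independently
            nbrs = np.flatnonzero(reach[i] > 0)      # node i plus its k-hop neighbors
            # local estimate; requires n_samples >= len(nbrs) for invertibility
            K_local = np.linalg.inv(S[np.ix_(nbrs, nbrs)])
            row = int(np.flatnonzero(nbrs == i)[0])  # position of node i within nbrs
            K_hat[i, nbrs] = K_local[row]            # keep only node i's own row
        return (K_hat + K_hat.T) / 2                 # symmetrize the combined estimate

    # Hypothetical usage: chain-structured concentration matrix on p = 20 nodes
    rng = np.random.default_rng(0)
    p = 20
    K = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) + np.diag(0.4 * np.ones(p - 1), -1)
    adjacency = (np.abs(K) > 0).astype(int) - np.eye(p, dtype=int)
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K), size=500)
    K_hat = distributed_ggm_estimate(X, adjacency, hops=2)

In this toy setting, hops=2 gives each node a wider marginal than hops=1, mirroring the abstract's observation that the two-hop estimator comes much closer to the centralized maximum likelihood solution.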

Cite this Paper


BibTeX
@InProceedings{pmlr-v31-meng13a,
  title     = {Distributed Learning of Gaussian Graphical Models via Marginal Likelihoods},
  author    = {Meng, Zhaoshi and Wei, Dennis and Wiesel, Ami and Hero, III, Alfred},
  booktitle = {Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {39--47},
  year      = {2013},
  editor    = {Carvalho, Carlos M. and Ravikumar, Pradeep},
  volume    = {31},
  series    = {Proceedings of Machine Learning Research},
  address   = {Scottsdale, Arizona, USA},
  month     = {29 Apr--01 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v31/meng13a.pdf},
  url       = {https://proceedings.mlr.press/v31/meng13a.html},
  note      = {Notable paper award}
}
APA
Meng, Z., Wei, D., Wiesel, A., & Hero III, A. (2013). Distributed Learning of Gaussian Graphical Models via Marginal Likelihoods. Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 31:39-47. Available from https://proceedings.mlr.press/v31/meng13a.html. (Notable paper award.)
