Learning Scale Free Networks by Reweighted $\ell_1$ Regularization

Qiang Liu, Alexander Ihler
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:40-48, 2011.

Abstract

Methods for $\ell_1$-type regularization have been widely used in Gaussian graphical model selection tasks to encourage sparse structures. However, often we would like to include more structural information than mere sparsity. In this work, we focus on learning so-called “scale-free” models, a common feature that appears in many real-world networks. We replace the $\ell_1$ regularization with a power-law regularization and optimize the objective function by a sequence of iteratively reweighted $\ell_1$ regularization problems, where the regularization coefficients of nodes with high degree are reduced, encouraging the appearance of hubs with high degree. Our method can be easily adapted to improve any existing $\ell_1$-based method, such as graphical lasso, neighborhood selection, or JSRM, when the underlying networks are believed to be scale free or have dominating hubs. We demonstrate in simulation that our method significantly outperforms a baseline $\ell_1$ method at learning scale-free networks and hub networks, and also illustrate its behavior on gene expression data.
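To make the reweighting idea concrete, below is a minimal sketch of iteratively reweighted $\ell_1$ neighborhood selection in Python: each variable is lasso-regressed on the rest, and after every pass the penalty on edges touching high-degree nodes is lowered, following the 1/(degree + eps) update suggested by a log-degree (power-law-like) surrogate penalty. This is an illustration, not the paper's exact algorithm: the function name, the endpoint-averaged edge weights, the normalization step, and all parameter values are assumptions made for the sketch.

# Hypothetical sketch: iteratively reweighted l1 neighborhood selection.
# Not the authors' exact algorithm; the weighting scheme and parameters
# are illustrative assumptions mirroring the idea in the abstract.
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_neighborhood_selection(X, base_lambda=0.1, eps=1e-2, n_iters=5):
    """Estimate a sparse edge set by lasso-regressing each variable on the
    others, shrinking the penalty on nodes whose estimated degree is large."""
    n, p = X.shape
    node_weight = np.ones(p)   # per-node penalty multipliers
    B = np.zeros((p, p))       # B[i, j]: coefficient of X_j in the regression of X_i
    for _ in range(n_iters):
        for i in range(p):
            others = [j for j in range(p) if j != i]
            # Edge penalty = average of the two endpoint node weights, so
            # edges incident to a high-degree node are penalized less.
            w = 0.5 * (node_weight[others] + node_weight[i])
            # Weighted lasso via feature rescaling: a lasso fit on X_j / w_j,
            # with coefficients divided by w_j, applies per-feature penalties.
            model = Lasso(alpha=base_lambda, max_iter=10000)
            model.fit(X[:, others] / w, X[:, i])
            B[i, others] = model.coef_ / w
        # Reweighting: node penalty ~ 1 / (l1 "degree" + eps), the
        # majorize-minimize update for a log-sum (power-law-like) penalty.
        degree = 0.5 * (np.abs(B).sum(axis=1) + np.abs(B).sum(axis=0))
        node_weight = 1.0 / (degree + eps)
        node_weight *= p / node_weight.sum()   # keep the average weight at 1
    A = (np.abs(B) + np.abs(B).T) > 0          # OR-rule symmetrization
    np.fill_diagonal(A, False)
    return A

Calling reweighted_neighborhood_selection(X) on an n-by-p data matrix returns a boolean adjacency estimate. The same outer reweighting loop could wrap graphical lasso instead of per-node regressions, at the cost of requiring a solver that accepts per-edge penalty weights.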

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-liu11a,
  title = {Learning Scale Free Networks by Reweighted $\ell_1$ Regularization},
  author = {Liu, Qiang and Ihler, Alexander},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages = {40--48},
  year = {2011},
  editor = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume = {15},
  series = {Proceedings of Machine Learning Research},
  address = {Fort Lauderdale, FL, USA},
  month = {11--13 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v15/liu11a/liu11a.pdf},
  url = {https://proceedings.mlr.press/v15/liu11a.html},
  abstract = {Methods for $\ell_1$-type regularization have been widely used in Gaussian graphical model selection tasks to encourage sparse structures. However, often we would like to include more structural information than mere sparsity. In this work, we focus on learning so-called “scale-free” models, a common feature that appears in many real-world networks. We replace the $\ell_1$ regularization with a power-law regularization and optimize the objective function by a sequence of iteratively reweighted $\ell_1$ regularization problems, where the regularization coefficients of nodes with high degree are reduced, encouraging the appearance of hubs with high degree. Our method can be easily adapted to improve any existing $\ell_1$-based method, such as graphical lasso, neighborhood selection, or JSRM, when the underlying networks are believed to be scale free or have dominating hubs. We demonstrate in simulation that our method significantly outperforms a baseline $\ell_1$ method at learning scale-free networks and hub networks, and also illustrate its behavior on gene expression data.}
}
Endnote
%0 Conference Paper
%T Learning Scale Free Networks by Reweighted $\ell_1$ Regularization
%A Qiang Liu
%A Alexander Ihler
%B Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2011
%E Geoffrey Gordon
%E David Dunson
%E Miroslav Dudík
%F pmlr-v15-liu11a
%I PMLR
%P 40--48
%U https://proceedings.mlr.press/v15/liu11a.html
%V 15
%X Methods for $\ell_1$-type regularization have been widely used in Gaussian graphical model selection tasks to encourage sparse structures. However, often we would like to include more structural information than mere sparsity. In this work, we focus on learning so-called “scale-free” models, a common feature that appears in many real-world networks. We replace the $\ell_1$ regularization with a power-law regularization and optimize the objective function by a sequence of iteratively reweighted $\ell_1$ regularization problems, where the regularization coefficients of nodes with high degree are reduced, encouraging the appearance of hubs with high degree. Our method can be easily adapted to improve any existing $\ell_1$-based method, such as graphical lasso, neighborhood selection, or JSRM, when the underlying networks are believed to be scale free or have dominating hubs. We demonstrate in simulation that our method significantly outperforms a baseline $\ell_1$ method at learning scale-free networks and hub networks, and also illustrate its behavior on gene expression data.
RIS
TY - CPAPER
TI - Learning Scale Free Networks by Reweighted $\ell_1$ Regularization
AU - Qiang Liu
AU - Alexander Ihler
BT - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
DA - 2011/06/14
ED - Geoffrey Gordon
ED - David Dunson
ED - Miroslav Dudík
ID - pmlr-v15-liu11a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 15
SP - 40
EP - 48
L1 - http://proceedings.mlr.press/v15/liu11a/liu11a.pdf
UR - https://proceedings.mlr.press/v15/liu11a.html
AB - Methods for $\ell_1$-type regularization have been widely used in Gaussian graphical model selection tasks to encourage sparse structures. However, often we would like to include more structural information than mere sparsity. In this work, we focus on learning so-called “scale-free” models, a common feature that appears in many real-world networks. We replace the $\ell_1$ regularization with a power-law regularization and optimize the objective function by a sequence of iteratively reweighted $\ell_1$ regularization problems, where the regularization coefficients of nodes with high degree are reduced, encouraging the appearance of hubs with high degree. Our method can be easily adapted to improve any existing $\ell_1$-based method, such as graphical lasso, neighborhood selection, or JSRM, when the underlying networks are believed to be scale free or have dominating hubs. We demonstrate in simulation that our method significantly outperforms a baseline $\ell_1$ method at learning scale-free networks and hub networks, and also illustrate its behavior on gene expression data.
ER -
APA
Liu, Q. & Ihler, A. (2011). Learning Scale Free Networks by Reweighted $\ell_1$ Regularization. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:40-48. Available from https://proceedings.mlr.press/v15/liu11a.html.
