Handling Sparsity via the Horseshoe

Carlos M. Carvalho, Nicholas G. Polson, James G. Scott
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:73-80, 2009.

Abstract

This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness in handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
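As the abstract notes, the horseshoe is a scale mixture of normals: each coefficient gets a half-Cauchy local scale, which concentrates mass near zero while keeping heavy tails for large signals. A minimal sketch of drawing from this prior (names and the choice of global scale `tau` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_horseshoe(p, tau=1.0, rng=rng):
    """Draw one p-vector from the horseshoe prior:
    beta_j | lambda_j ~ N(0, lambda_j^2 * tau^2),
    lambda_j ~ C+(0, 1)  (half-Cauchy local scales).
    """
    # Half-Cauchy draw = absolute value of a standard Cauchy draw.
    lam = np.abs(rng.standard_cauchy(p))
    return rng.normal(0.0, lam * tau)

beta = sample_horseshoe(1000, tau=0.1)
# Most draws sit near zero (sparsity); a few are very large (heavy tails).
```

A smaller `tau` shrinks the bulk of coefficients harder toward zero while the Cauchy tails still let genuine signals escape, which is the robustness property the abstract describes.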

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-carvalho09a,
  title = {Handling Sparsity via the Horseshoe},
  author = {Carvalho, Carlos M. and Polson, Nicholas G. and Scott, James G.},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages = {73--80},
  year = {2009},
  editor = {van Dyk, David and Welling, Max},
  volume = {5},
  series = {Proceedings of Machine Learning Research},
  address = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month = {16--18 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v5/carvalho09a/carvalho09a.pdf},
  url = {https://proceedings.mlr.press/v5/carvalho09a.html},
  abstract = {This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness in handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.}
}
Endnote
%0 Conference Paper
%T Handling Sparsity via the Horseshoe
%A Carlos M. Carvalho
%A Nicholas G. Polson
%A James G. Scott
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-carvalho09a
%I PMLR
%P 73--80
%U https://proceedings.mlr.press/v5/carvalho09a.html
%V 5
%X This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness in handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
RIS
TY  - CPAPER
TI  - Handling Sparsity via the Horseshoe
AU  - Carlos M. Carvalho
AU  - Nicholas G. Polson
AU  - James G. Scott
BT  - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA  - 2009/04/15
ED  - David van Dyk
ED  - Max Welling
ID  - pmlr-v5-carvalho09a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 5
SP  - 73
EP  - 80
L1  - http://proceedings.mlr.press/v5/carvalho09a/carvalho09a.pdf
UR  - https://proceedings.mlr.press/v5/carvalho09a.html
AB  - This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness in handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
ER  -
APA
Carvalho, C.M., Polson, N.G. & Scott, J.G. (2009). Handling Sparsity via the Horseshoe. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:73-80. Available from https://proceedings.mlr.press/v5/carvalho09a.html.
