Sparse Additive Matrix Factorization for Robust PCA and Its Generalization

Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan
Proceedings of the Asian Conference on Machine Learning, PMLR 25:301-316, 2012.

Abstract

Principal component analysis (PCA) can be regarded as approximating a data matrix with a low-rank one by imposing sparsity on its singular values, and its robust variant further captures sparse noise. In this paper, we extend such sparse matrix learning methods and propose a novel unified framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity through the so-called model-induced regularization in the Bayesian framework. We propose an iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the globally optimal solution for a large subset of the parameters in each step. We demonstrate the usefulness of our method on artificial data and on foreground/background video separation.
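As a rough sketch of the additive model the abstract describes (using notation of our choosing, not necessarily that of the paper), SAMF writes the observed matrix as a sum of terms, each induced to be sparse in its own prescribed way; robust PCA is then the special case with one low-rank term and one element-wise sparse term:

\[
  V \;=\; \sum_{k=1}^{K} U^{(k)} \;+\; \mathcal{E},
  \qquad \text{each } U^{(k)} \text{ sparse in a prescribed sense (e.g., low-rank, row-wise, column-wise, or element-wise)},
\]
\[
  \text{robust PCA as a special case:}\quad V \;=\; U^{\text{low-rank}} \;+\; U^{\text{element-wise}} \;+\; \mathcal{E}.
\]

Here \(\mathcal{E}\) denotes dense observation noise, and under the variational Bayesian treatment the sparsity of each term arises from model-induced regularization rather than from an explicit sparsity penalty.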

Cite this Paper


BibTeX
@InProceedings{pmlr-v25-nakajima12,
  title     = {Sparse Additive Matrix Factorization for Robust {PCA} and Its Generalization},
  author    = {Nakajima, Shinichi and Sugiyama, Masashi and Babacan, S. Derin},
  booktitle = {Proceedings of the Asian Conference on Machine Learning},
  pages     = {301--316},
  year      = {2012},
  editor    = {Hoi, Steven C. H. and Buntine, Wray},
  volume    = {25},
  series    = {Proceedings of Machine Learning Research},
  address   = {Singapore Management University, Singapore},
  month     = {04--06 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v25/nakajima12/nakajima12.pdf},
  url       = {https://proceedings.mlr.press/v25/nakajima12.html},
  abstract  = {Principal component analysis (PCA) can be regarded as approximating a data matrix with a low-rank one by imposing sparsity on its singular values, and its robust variant further captures sparse noise. In this paper, we extend such sparse matrix learning methods, and propose a novel unified framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity by the so-called model-induced regularization in the Bayesian framework. We propose an iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the global optimal solution for a large subset of parameters in each step. We demonstrate the usefulness of our method on artificial data and the foreground/background video separation.}
}
Endnote
%0 Conference Paper
%T Sparse Additive Matrix Factorization for Robust PCA and Its Generalization
%A Shinichi Nakajima
%A Masashi Sugiyama
%A S. Derin Babacan
%B Proceedings of the Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2012
%E Steven C. H. Hoi
%E Wray Buntine
%F pmlr-v25-nakajima12
%I PMLR
%P 301--316
%U https://proceedings.mlr.press/v25/nakajima12.html
%V 25
%X Principal component analysis (PCA) can be regarded as approximating a data matrix with a low-rank one by imposing sparsity on its singular values, and its robust variant further captures sparse noise. In this paper, we extend such sparse matrix learning methods, and propose a novel unified framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity by the so-called model-induced regularization in the Bayesian framework. We propose an iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the global optimal solution for a large subset of parameters in each step. We demonstrate the usefulness of our method on artificial data and the foreground/background video separation.
RIS
TY - CPAPER
TI - Sparse Additive Matrix Factorization for Robust PCA and Its Generalization
AU - Shinichi Nakajima
AU - Masashi Sugiyama
AU - S. Derin Babacan
BT - Proceedings of the Asian Conference on Machine Learning
DA - 2012/11/17
ED - Steven C. H. Hoi
ED - Wray Buntine
ID - pmlr-v25-nakajima12
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 25
SP - 301
EP - 316
L1 - http://proceedings.mlr.press/v25/nakajima12/nakajima12.pdf
UR - https://proceedings.mlr.press/v25/nakajima12.html
AB - Principal component analysis (PCA) can be regarded as approximating a data matrix with a low-rank one by imposing sparsity on its singular values, and its robust variant further captures sparse noise. In this paper, we extend such sparse matrix learning methods, and propose a novel unified framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity by the so-called model-induced regularization in the Bayesian framework. We propose an iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the global optimal solution for a large subset of parameters in each step. We demonstrate the usefulness of our method on artificial data and the foreground/background video separation.
ER -
APA
Nakajima, S., Sugiyama, M., & Babacan, S. D. (2012). Sparse Additive Matrix Factorization for Robust PCA and Its Generalization. Proceedings of the Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 25:301-316. Available from https://proceedings.mlr.press/v25/nakajima12.html.
