Compressed Sensing with Very Sparse Gaussian Random Projections

Ping Li, Cun-Hui Zhang
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:617-625, 2015.

Abstract

We study the use of very sparse random projections (Li, Hastie, and Church, KDD 2006; Li, KDD 2007) for compressed sensing (sparse signal recovery) when the nonzero coordinates of signals can be either positive or negative. In our setting, the entries of a Gaussian design matrix are randomly sparsified so that only a very small fraction of the entries are nonzero. Our proposed decoding algorithm is simple and efficient in that its major cost is one linear scan of the coordinates. Using our proposed “tie estimator”, we are able to recover a K-sparse signal of length N using 1.551 eK log(K/δ) measurements (where δ ≤ 0.05 is the confidence) in one scan. The practical performance of our method, however, can be substantially better than this bound. The Gaussian design assumption is not essential, although it simplifies the analysis. Prior studies have shown that existing one-scan (or roughly one-scan) recovery algorithms using sparse matrices require substantially (e.g., one order of magnitude) more measurements than L1 decoding by linear programming when the nonzero coordinates of signals can be either negative or positive. In this paper, following a well-known experimental setup [wiki_sparse], we show that, at the same number of measurements, the recovery accuracy of our proposed method is similar to that of standard L1 decoding.
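To make the setup concrete, below is a minimal Python sketch (not the authors' reference code) of measuring a K-sparse signal with a randomly sparsified Gaussian design and decoding it in one scan with a tie-based rule: for each coordinate i, collect the ratios y_j / S[j, i] over the measurements whose design entry is nonzero, and take a repeated ("tied") ratio as the estimate. The sparsification rate gamma, the rounding tolerance, and all variable names are illustrative assumptions; only the measurement bound 1.551 eK log(K/δ) and the general one-scan tie idea come from the abstract.

```python
import numpy as np

# Hypothetical sketch (assumed names and parameters, not the paper's code).
rng = np.random.default_rng(0)

N, K, delta = 10_000, 10, 0.05                      # signal length, sparsity, confidence
gamma = 1.0 / K                                     # assumed sparsification rate of the design
M = int(np.ceil(1.551 * np.e * K * np.log(K / delta)))  # measurement count from the abstract's bound

# K-sparse signal whose nonzero coordinates may be positive or negative
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=K) * rng.uniform(1.0, 5.0, size=K)

# Very sparse Gaussian design: each entry is N(0,1) with probability gamma, otherwise 0
S = (rng.random((M, N)) < gamma) * rng.standard_normal((M, N))
y = S @ x                                           # M linear measurements

# One-scan decoding with a tie rule: if a measurement touches coordinate i and
# no other nonzero coordinate, then y_j / S[j, i] equals x_i exactly, so a
# repeated ratio identifies x_i; coordinates without a tie are left at 0.
x_hat = np.zeros(N)
for i in range(N):
    rows = np.flatnonzero(S[:, i])
    if rows.size < 2:
        continue
    ratios = np.round(y[rows] / S[rows, i], 8)      # rounding absorbs floating-point noise
    vals, counts = np.unique(ratios, return_counts=True)
    if counts.max() >= 2:
        x_hat[i] = vals[np.argmax(counts)]

print("exact support recovery:", np.array_equal(np.flatnonzero(x_hat), np.sort(support)))
```

A coordinate is declared nonzero only when at least two of its measurements agree (up to rounding), which is what makes a single linear scan over the N coordinates sufficient in this reading; the paper's actual estimator, its choice of gamma, and the accompanying analysis are in the full text.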

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-li15c,
  title     = {{Compressed Sensing with Very Sparse Gaussian Random Projections}},
  author    = {Li, Ping and Zhang, Cun-Hui},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {617--625},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/li15c.pdf},
  url       = {https://proceedings.mlr.press/v38/li15c.html}
}
Endnote
%0 Conference Paper
%T Compressed Sensing with Very Sparse Gaussian Random Projections
%A Ping Li
%A Cun-Hui Zhang
%B Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2015
%E Guy Lebanon
%E S. V. N. Vishwanathan
%F pmlr-v38-li15c
%I PMLR
%P 617--625
%U https://proceedings.mlr.press/v38/li15c.html
%V 38
RIS
TY - CPAPER
TI - Compressed Sensing with Very Sparse Gaussian Random Projections
AU - Ping Li
AU - Cun-Hui Zhang
BT - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
DA - 2015/02/21
ED - Guy Lebanon
ED - S. V. N. Vishwanathan
ID - pmlr-v38-li15c
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 38
SP - 617
EP - 625
L1 - http://proceedings.mlr.press/v38/li15c.pdf
UR - https://proceedings.mlr.press/v38/li15c.html
ER -
APA
Li, P. & Zhang, C. (2015). Compressed Sensing with Very Sparse Gaussian Random Projections. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:617-625. Available from https://proceedings.mlr.press/v38/li15c.html.
