Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing

Rongda Zhu, Quanquan Gu
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:739-747, 2015.

Abstract

In this paper, we propose a novel algorithm based on a nonconvex sparsity-inducing penalty for one-bit compressed sensing. We prove that our algorithm has a sample complexity of O(s/ε^2) for strong signals and O(s log d/ε^2) for weak signals, where s is the number of nonzero entries in the signal vector, d is the signal dimension, and ε is the recovery error. For general signals, the sample complexity of our algorithm lies between O(s/ε^2) and O(s log d/ε^2). This is a remarkable improvement over the existing best sample complexity of O(s log d/ε^2). Furthermore, we show that our algorithm achieves exact support recovery with high probability for strong signals. Our theory is verified by extensive numerical experiments, which clearly illustrate the superiority of our algorithm for both approximate signal recovery and support recovery in the noisy setting.
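For readers unfamiliar with the setup, the sketch below illustrates the one-bit compressed sensing model the abstract refers to: only the signs of random linear measurements of an s-sparse signal are observed, and noise takes the form of random sign flips. The recovery step is a naive back-projection-and-hard-thresholding baseline chosen purely for illustration; it is not the paper's nonconvex-penalty algorithm, and all parameter values here are hypothetical.

```python
import numpy as np

# Minimal sketch of the one-bit compressed sensing setup (illustration only;
# the recovery step is a naive baseline, NOT the paper's algorithm).
rng = np.random.default_rng(0)

d, s, m = 1000, 10, 2000   # signal dimension, sparsity, number of measurements
flip_prob = 0.05           # noise model: each sign flipped with this probability

# Generate an s-sparse, unit-norm signal x (scale is lost under sign measurements).
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# One-bit measurements y_i = sign(<a_i, x>), corrupted by random sign flips.
A = rng.standard_normal((m, d))
y = np.sign(A @ x)
y[rng.random(m) < flip_prob] *= -1

# Naive baseline recovery: back-project the signs and keep the s largest
# entries in magnitude, then renormalize. For Gaussian a_i and unit-norm x,
# E[A^T y / m] is proportional to x (attenuated by the sign flips).
proxy = A.T @ y / m
top = np.argsort(np.abs(proxy))[-s:]
x_hat = np.zeros(d)
x_hat[top] = proxy[top]
x_hat /= np.linalg.norm(x_hat)

print("recovery error ||x_hat - x||_2:", np.linalg.norm(x_hat - x))
print("support recovered:", set(top) == set(support))
```

In the paper's setting, the hard sparsity step above is replaced by a nonconvex sparsity-inducing penalty, which is what underlies the improved O(s/ε^2) sample complexity for strong signals stated in the abstract.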

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-zhua15,
  title = {Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing},
  author = {Zhu, Rongda and Gu, Quanquan},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages = {739--747},
  year = {2015},
  editor = {Bach, Francis and Blei, David},
  volume = {37},
  series = {Proceedings of Machine Learning Research},
  address = {Lille, France},
  month = {07--09 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v37/zhua15.pdf},
  url = {https://proceedings.mlr.press/v37/zhua15.html},
  abstract = {In this paper, we propose a novel algorithm based on nonconvex sparsity-inducing penalty for one-bit compressed sensing. We prove that our algorithm has a sample complexity of O(s/ε^2) for strong signals, and O(s\log d/ε^2) for weak signals, where s is the number of nonzero entries in the signal vector, d is the signal dimension and ε is the recovery error. For general signals, the sample complexity of our algorithm lies between O(s/ε^2) and O(s\log d/ε^2). This is a remarkable improvement over the existing best sample complexity O(s\log d/ε^2). Furthermore, we show that our algorithm achieves exact support recovery with high probability for strong signals. Our theory is verified by extensive numerical experiments, which clearly illustrate the superiority of our algorithm for both approximate signal and support recovery in the noisy setting.}
}
Endnote
%0 Conference Paper
%T Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing
%A Rongda Zhu
%A Quanquan Gu
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-zhua15
%I PMLR
%P 739--747
%U https://proceedings.mlr.press/v37/zhua15.html
%V 37
%X In this paper, we propose a novel algorithm based on nonconvex sparsity-inducing penalty for one-bit compressed sensing. We prove that our algorithm has a sample complexity of O(s/ε^2) for strong signals, and O(s\log d/ε^2) for weak signals, where s is the number of nonzero entries in the signal vector, d is the signal dimension and ε is the recovery error. For general signals, the sample complexity of our algorithm lies between O(s/ε^2) and O(s\log d/ε^2). This is a remarkable improvement over the existing best sample complexity O(s\log d/ε^2). Furthermore, we show that our algorithm achieves exact support recovery with high probability for strong signals. Our theory is verified by extensive numerical experiments, which clearly illustrate the superiority of our algorithm for both approximate signal and support recovery in the noisy setting.
RIS
TY - CPAPER
TI - Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing
AU - Rongda Zhu
AU - Quanquan Gu
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-zhua15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 739
EP - 747
L1 - http://proceedings.mlr.press/v37/zhua15.pdf
UR - https://proceedings.mlr.press/v37/zhua15.html
AB - In this paper, we propose a novel algorithm based on nonconvex sparsity-inducing penalty for one-bit compressed sensing. We prove that our algorithm has a sample complexity of O(s/ε^2) for strong signals, and O(s\log d/ε^2) for weak signals, where s is the number of nonzero entries in the signal vector, d is the signal dimension and ε is the recovery error. For general signals, the sample complexity of our algorithm lies between O(s/ε^2) and O(s\log d/ε^2). This is a remarkable improvement over the existing best sample complexity O(s\log d/ε^2). Furthermore, we show that our algorithm achieves exact support recovery with high probability for strong signals. Our theory is verified by extensive numerical experiments, which clearly illustrate the superiority of our algorithm for both approximate signal and support recovery in the noisy setting.
ER -
APA
Zhu, R. & Gu, Q. (2015). Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:739-747. Available from https://proceedings.mlr.press/v37/zhua15.html.

Related Material

Download PDF: http://proceedings.mlr.press/v37/zhua15.pdf