Convex Block-sparse Linear Regression with Expanders – Provably

Anastasios Kyrillidis, Bubacarr Bah, Rouzbeh Hasheminezhad, Quoc Tran Dinh, Luca Baldassarre, Volkan Cevher
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:19-27, 2016.

Abstract

Sparse matrices are favorable objects in machine learning and optimization. When such matrices are used in place of dense ones, the overall complexity requirements of optimization can be significantly reduced in practice, both in terms of space and run-time. Prompted by this observation, we study a convex optimization scheme for block-sparse recovery from linear measurements. To obtain linear sketches, we use expander matrices, i.e., sparse matrices containing only a few non-zeros per column. To the best of our knowledge, such algorithmic solutions have so far been studied only from a non-convex perspective. Our aim here is to theoretically characterize the performance of convex approaches in this setting. Our key novelty is the expression of the recovery error in terms of the model-based norm, while ensuring that the solution lives in the model. To achieve this, we show that sparse model-based matrices satisfy a group version of the null-space property. Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting, as opposed to using dense sensing matrices, while showing competitive recovery performance.
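The paper's exact convex program and guarantees are not reproduced here; the following is a minimal Python sketch of the overall pipeline under generic assumptions: a random d-left-regular binary matrix stands in for the expander sketch, and block-sparse recovery is performed by proximal-gradient (ISTA) iterations on a group-lasso objective. All names (expander_matrix, group_soft_threshold) and parameter values are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Random d-left-regular binary matrix: each column has exactly d ones at
# uniformly chosen rows. With high probability, such matrices are lossless
# expanders for suitable (m, n, d); here it is just a stand-in sketch.
def expander_matrix(m, n, d):
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, j] = 1.0
    return A

# Block-sparse ground truth: 2 active blocks out of n / block_size.
n, m, d = 200, 80, 7
block_size = 10
groups = [np.arange(g, g + block_size) for g in range(0, n, block_size)]
x_true = np.zeros(n)
for g in rng.choice(len(groups), size=2, replace=False):
    x_true[groups[g]] = rng.standard_normal(block_size)

A = expander_matrix(m, n, d)
y = A @ x_true  # noiseless linear sketch

# Proximal operator of t * sum_g ||x_g||_2: group-wise soft-thresholding.
def group_soft_threshold(x, groups, t):
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1 - t / norm) * x[g]
    return out

# ISTA for min_x 0.5 * ||A x - y||_2^2 + lam * sum_g ||x_g||_2.
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / L, with L = ||A||_2^2
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = group_soft_threshold(x - step * grad, groups, step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Because each column of A holds only d non-zeros, the matrix-vector products that dominate each iteration cost O(dn) rather than O(mn) operations when sparse storage is used, which is the source of the run-time savings the abstract refers to.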

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-kyrillidis16, title = {Convex Block-sparse Linear Regression with Expanders -- Provably}, author = {Kyrillidis, Anastasios and Bah, Bubacarr and Hasheminezhad, Rouzbeh and Tran Dinh, Quoc and Baldassarre, Luca and Cevher, Volkan}, booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics}, pages = {19--27}, year = {2016}, editor = {Gretton, Arthur and Robert, Christian C.}, volume = {51}, series = {Proceedings of Machine Learning Research}, address = {Cadiz, Spain}, month = {09--11 May}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v51/kyrillidis16.pdf}, url = {https://proceedings.mlr.press/v51/kyrillidis16.html}, abstract = {Sparse matrices are favorable objects in machine learning and optimization. When such matrices are used in place of dense ones, the overall complexity requirements of optimization can be significantly reduced in practice, both in terms of space and run-time. Prompted by this observation, we study a convex optimization scheme for block-sparse recovery from linear measurements. To obtain linear sketches, we use expander matrices, i.e., sparse matrices containing only a few non-zeros per column. To the best of our knowledge, such algorithmic solutions have so far been studied only from a non-convex perspective. Our aim here is to theoretically characterize the performance of convex approaches in this setting. Our key novelty is the expression of the recovery error in terms of the model-based norm, while ensuring that the solution lives in the model. To achieve this, we show that sparse model-based matrices satisfy a group version of the null-space property. Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting, as opposed to using dense sensing matrices, while showing competitive recovery performance.} }
Endnote
%0 Conference Paper %T Convex Block-sparse Linear Regression with Expanders – Provably %A Anastasios Kyrillidis %A Bubacarr Bah %A Rouzbeh Hasheminezhad %A Quoc Tran Dinh %A Luca Baldassarre %A Volkan Cevher %B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2016 %E Arthur Gretton %E Christian C. Robert %F pmlr-v51-kyrillidis16 %I PMLR %P 19--27 %U https://proceedings.mlr.press/v51/kyrillidis16.html %V 51 %X Sparse matrices are favorable objects in machine learning and optimization. When such matrices are used in place of dense ones, the overall complexity requirements of optimization can be significantly reduced in practice, both in terms of space and run-time. Prompted by this observation, we study a convex optimization scheme for block-sparse recovery from linear measurements. To obtain linear sketches, we use expander matrices, i.e., sparse matrices containing only a few non-zeros per column. To the best of our knowledge, such algorithmic solutions have so far been studied only from a non-convex perspective. Our aim here is to theoretically characterize the performance of convex approaches in this setting. Our key novelty is the expression of the recovery error in terms of the model-based norm, while ensuring that the solution lives in the model. To achieve this, we show that sparse model-based matrices satisfy a group version of the null-space property. Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting, as opposed to using dense sensing matrices, while showing competitive recovery performance.
RIS
TY - CPAPER TI - Convex Block-sparse Linear Regression with Expanders – Provably AU - Anastasios Kyrillidis AU - Bubacarr Bah AU - Rouzbeh Hasheminezhad AU - Quoc Tran Dinh AU - Luca Baldassarre AU - Volkan Cevher BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics DA - 2016/05/02 ED - Arthur Gretton ED - Christian C. Robert ID - pmlr-v51-kyrillidis16 PB - PMLR DP - Proceedings of Machine Learning Research VL - 51 SP - 19 EP - 27 L1 - http://proceedings.mlr.press/v51/kyrillidis16.pdf UR - https://proceedings.mlr.press/v51/kyrillidis16.html AB - Sparse matrices are favorable objects in machine learning and optimization. When such matrices are used in place of dense ones, the overall complexity requirements of optimization can be significantly reduced in practice, both in terms of space and run-time. Prompted by this observation, we study a convex optimization scheme for block-sparse recovery from linear measurements. To obtain linear sketches, we use expander matrices, i.e., sparse matrices containing only a few non-zeros per column. To the best of our knowledge, such algorithmic solutions have so far been studied only from a non-convex perspective. Our aim here is to theoretically characterize the performance of convex approaches in this setting. Our key novelty is the expression of the recovery error in terms of the model-based norm, while ensuring that the solution lives in the model. To achieve this, we show that sparse model-based matrices satisfy a group version of the null-space property. Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting, as opposed to using dense sensing matrices, while showing competitive recovery performance. ER -
APA
Kyrillidis, A., Bah, B., Hasheminezhad, R., Tran Dinh, Q., Baldassarre, L. & Cevher, V. (2016). Convex Block-sparse Linear Regression with Expanders – Provably. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:19-27. Available from https://proceedings.mlr.press/v51/kyrillidis16.html.
