Online Sparse Linear Regression

Dean Foster, Satyen Kale, Howard Karloff
29th Annual Conference on Learning Theory, PMLR 49:960-970, 2016.

Abstract

We consider the online sparse linear regression problem: the problem of sequentially making predictions while observing only a limited number of features in each round, with the goal of minimizing regret with respect to the best sparse linear regressor, where prediction accuracy is measured by square loss. We give an inefficient algorithm that obtains regret bounded by $\tilde{O}(\sqrt{T})$ after $T$ prediction rounds. We complement this result by showing that no algorithm running in polynomial time per iteration can achieve regret bounded by $O(T^{1-\delta})$ for any constant $\delta > 0$ unless $\mathsf{NP} \subseteq \mathsf{BPP}$. This computational hardness result resolves an open problem presented at COLT 2014 (Kale, 2014) and also posed by Zolghadr et al. (2013). The hardness result holds even if the algorithm is allowed to access more features than the best sparse linear regressor, up to a logarithmic factor in the dimension.
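
The setting described above is easiest to see as a protocol: in each round the learner chooses a small subset of coordinates to observe, predicts the label from those observations alone, and then incurs square loss against the revealed label; regret compares its cumulative loss to that of the best sparse linear regressor chosen in hindsight. The sketch below illustrates this protocol on synthetic data. The dimensions, the random feature-selection policy, and the averaging predictor are placeholder assumptions for illustration only, not the paper's notation or algorithm.

```python
import numpy as np

# Minimal sketch of the online sparse linear regression protocol from the
# abstract. All names and policies here are illustrative assumptions.

rng = np.random.default_rng(0)
d, k, T = 20, 3, 1000          # ambient dimension, sparsity budget, rounds

def learner_choose_features(t):
    """Placeholder policy: query k features uniformly at random each round."""
    return rng.choice(d, size=k, replace=False)

def learner_predict(indices, values):
    """Placeholder prediction from the observed feature values."""
    return float(np.mean(values))

# Synthetic data stream; labels come from a fixed k-sparse linear model.
w_star = np.zeros(d)
w_star[:k] = 1.0
cumulative_loss = 0.0
xs, ys = [], []

for t in range(T):
    x_t = rng.uniform(-1.0, 1.0, size=d)    # feature vector for this round
    idx = learner_choose_features(t)        # learner picks at most k coordinates
    y_hat = learner_predict(idx, x_t[idx])  # predicts from only those coordinates
    y_t = float(w_star @ x_t)               # true label is then revealed
    cumulative_loss += (y_hat - y_t) ** 2   # square loss
    xs.append(x_t)
    ys.append(y_t)

# Regret is measured against the best k-sparse linear regressor in hindsight;
# in this noise-free synthetic setup the comparator's loss is zero.
X, y = np.array(xs), np.array(ys)
comparator_loss = float(np.sum((X[:, :k] @ w_star[:k] - y) ** 2))
print("regret ~", cumulative_loss - comparator_loss)
```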

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-foster16,
  title     = {Online Sparse Linear Regression},
  author    = {Foster, Dean and Kale, Satyen and Karloff, Howard},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {960--970},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/foster16.pdf},
  url       = {https://proceedings.mlr.press/v49/foster16.html},
  abstract  = {We consider the online sparse linear regression problem: the problem of sequentially making predictions while observing only a limited number of features in each round, with the goal of minimizing regret with respect to the best sparse linear regressor, where prediction accuracy is measured by square loss. We give an \emph{inefficient} algorithm that obtains regret bounded by $\tilde{O}(\sqrt{T})$ after $T$ prediction rounds. We complement this result by showing that no algorithm running in polynomial time per iteration can achieve regret bounded by $O(T^{1-\delta})$ for any constant $\delta > 0$ unless $\mathsf{NP} \subseteq \mathsf{BPP}$. This computational hardness result resolves an open problem presented at COLT 2014 (Kale, 2014) and also posed by Zolghadr et al. (2013). The hardness result holds even if the algorithm is allowed to access more features than the best sparse linear regressor, up to a logarithmic factor in the dimension.}
}
APA
Foster, D., Kale, S. & Karloff, H. (2016). Online Sparse Linear Regression. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:960-970. Available from https://proceedings.mlr.press/v49/foster16.html.
