Online Learning: Beyond Regret

Alexander Rakhlin, Karthik Sridharan, Ambuj Tewari
Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:559-594, 2011.

Abstract

We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general $\Phi$-regret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if the future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study the complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have previously been derived via an algorithmic construction.
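As context for the baseline notion the abstract generalizes: external regret compares the learner's cumulative loss against the best single fixed action in hindsight. A minimal sketch (the loss values and function name below are illustrative, not from the paper):

```python
# Illustration of external regret: cumulative loss of the learner's
# chosen actions minus the cumulative loss of the best fixed action
# in hindsight. All numbers here are made up for illustration.

def external_regret(loss_matrix, plays):
    """loss_matrix[t][a] = loss of action a at round t; plays[t] = action chosen at t."""
    T = len(loss_matrix)
    num_actions = len(loss_matrix[0])
    # Loss actually incurred by the learner's sequence of plays.
    learner_loss = sum(loss_matrix[t][plays[t]] for t in range(T))
    # Loss of the best fixed action, evaluated in hindsight over all rounds.
    best_fixed_loss = min(
        sum(loss_matrix[t][a] for t in range(T)) for a in range(num_actions)
    )
    return learner_loss - best_fixed_loss

losses = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
plays = [1, 0, 1]  # the learner picks the worse action each round
print(external_regret(losses, plays))  # learner loss 3.0, best fixed 1.0 -> 2.0
```

Internal and $\Phi$-regret replace the single fixed comparator with a family of modification rules applied to the learner's own plays, which is one of the generalizations the paper's framework captures.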

Cite this Paper


BibTeX
@InProceedings{pmlr-v19-rakhlin11a,
  title     = {Online Learning: Beyond Regret},
  author    = {Rakhlin, Alexander and Sridharan, Karthik and Tewari, Ambuj},
  booktitle = {Proceedings of the 24th Annual Conference on Learning Theory},
  pages     = {559--594},
  year      = {2011},
  editor    = {Kakade, Sham M. and von Luxburg, Ulrike},
  volume    = {19},
  series    = {Proceedings of Machine Learning Research},
  address   = {Budapest, Hungary},
  month     = {09--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v19/rakhlin11a/rakhlin11a.pdf},
  url       = {https://proceedings.mlr.press/v19/rakhlin11a.html},
  abstract  = {We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general $\Phi$-regret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if the future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.}
}
Endnote
%0 Conference Paper
%T Online Learning: Beyond Regret
%A Alexander Rakhlin
%A Karthik Sridharan
%A Ambuj Tewari
%B Proceedings of the 24th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2011
%E Sham M. Kakade
%E Ulrike von Luxburg
%F pmlr-v19-rakhlin11a
%I PMLR
%P 559--594
%U https://proceedings.mlr.press/v19/rakhlin11a.html
%V 19
%X We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general $\Phi$-regret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if the future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.
RIS
TY  - CPAPER
TI  - Online Learning: Beyond Regret
AU  - Alexander Rakhlin
AU  - Karthik Sridharan
AU  - Ambuj Tewari
BT  - Proceedings of the 24th Annual Conference on Learning Theory
DA  - 2011/12/21
ED  - Sham M. Kakade
ED  - Ulrike von Luxburg
ID  - pmlr-v19-rakhlin11a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 19
SP  - 559
EP  - 594
L1  - http://proceedings.mlr.press/v19/rakhlin11a/rakhlin11a.pdf
UR  - https://proceedings.mlr.press/v19/rakhlin11a.html
AB  - We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general $\Phi$-regret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if the future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.
ER  -
APA
Rakhlin, A., Sridharan, K. & Tewari, A. (2011). Online Learning: Beyond Regret. Proceedings of the 24th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 19:559-594. Available from https://proceedings.mlr.press/v19/rakhlin11a.html.