Online Optimization : Competing with Dynamic Comparators

Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, Karthik Sridharan
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:398-406, 2015.

Abstract

Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks in which regret guarantee scales with regularity of the sequence of cost functions and comparators. Notably, the regret bound adapts to the smaller complexity measure in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games where both players achieve no regret guarantees against best sequences of actions in hindsight.

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-jadbabaie15,
  title     = {{Online Optimization : Competing with Dynamic Comparators}},
  author    = {Jadbabaie, Ali and Rakhlin, Alexander and Shahrampour, Shahin and Sridharan, Karthik},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {398--406},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/jadbabaie15.pdf},
  url       = {https://proceedings.mlr.press/v38/jadbabaie15.html},
  abstract  = {Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks in which regret guarantee scales with regularity of the sequence of cost functions and comparators. Notably, the regret bound adapts to the smaller complexity measure in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games where both players achieve no regret guarantees against best sequences of actions in hindsight.}
}
Endnote
%0 Conference Paper
%T Online Optimization : Competing with Dynamic Comparators
%A Ali Jadbabaie
%A Alexander Rakhlin
%A Shahin Shahrampour
%A Karthik Sridharan
%B Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2015
%E Guy Lebanon
%E S. V. N. Vishwanathan
%F pmlr-v38-jadbabaie15
%I PMLR
%P 398--406
%U https://proceedings.mlr.press/v38/jadbabaie15.html
%V 38
%X Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks in which regret guarantee scales with regularity of the sequence of cost functions and comparators. Notably, the regret bound adapts to the smaller complexity measure in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games where both players achieve no regret guarantees against best sequences of actions in hindsight.
RIS
TY  - CPAPER
TI  - Online Optimization : Competing with Dynamic Comparators
AU  - Ali Jadbabaie
AU  - Alexander Rakhlin
AU  - Shahin Shahrampour
AU  - Karthik Sridharan
BT  - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
DA  - 2015/02/21
ED  - Guy Lebanon
ED  - S. V. N. Vishwanathan
ID  - pmlr-v38-jadbabaie15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 38
SP  - 398
EP  - 406
L1  - http://proceedings.mlr.press/v38/jadbabaie15.pdf
UR  - https://proceedings.mlr.press/v38/jadbabaie15.html
AB  - Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks in which regret guarantee scales with regularity of the sequence of cost functions and comparators. Notably, the regret bound adapts to the smaller complexity measure in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games where both players achieve no regret guarantees against best sequences of actions in hindsight.
ER  -
APA
Jadbabaie, A., Rakhlin, A., Shahrampour, S. & Sridharan, K. (2015). Online Optimization : Competing with Dynamic Comparators. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:398-406. Available from https://proceedings.mlr.press/v38/jadbabaie15.html.