Accelerating Online Convex Optimization via Adaptive Prediction

Mehryar Mohri, Scott Yang
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:848-856, 2016.

Abstract

We present a powerful general framework for designing data-dependent online convex optimization algorithms, building upon and unifying recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization. We first establish a series of new regret guarantees that hold at any time and under minimal assumptions, and then show how different relaxations recover existing algorithms, both basic ones and more recent sophisticated ones. Finally, we show how combining adaptivity, optimism, and problem-dependent randomization can guide the design of algorithms that benefit from more favorable guarantees than recent state-of-the-art methods.
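To make these ingredients concrete, the sketch below shows a minimal adaptive and optimistic online learner in the spirit of the framework: it plays an optimistic point based on a gradient prediction (here, simply the last observed gradient) and scales each coordinate by the accumulated squared prediction errors, AdaGrad-style, so that accurate predictions translate into larger effective steps. This is an illustration only, not the paper's algorithm; the class name, the hint rule, and the step-size constants are assumptions made for the example.

```python
# Minimal sketch of adaptive + optimistic online gradient descent.
# NOT the paper's method: the hint rule (last gradient) and the
# AdaGrad-style scaling by prediction errors are illustrative choices.
import numpy as np

class AdaptiveOptimisticOGD:
    """Plays an optimistic point from a gradient prediction and adapts
    per-coordinate step sizes to the squared prediction errors."""

    def __init__(self, dim, lr=1.0, eps=1e-8):
        self.x = np.zeros(dim)        # base iterate
        self.sum_sq = np.zeros(dim)   # running sum of squared hint errors
        self.hint = np.zeros(dim)     # prediction of the next gradient
        self.lr = lr
        self.eps = eps                # avoids division by zero early on

    def predict(self):
        # Optimistic play: take a half-step along the predicted gradient.
        scale = self.lr / np.sqrt(self.sum_sq + self.eps)
        return self.x - scale * self.hint

    def update(self, grad):
        # Adapt the scaling to the prediction error, not the raw gradient:
        # perfect hints keep the denominator small and the steps large.
        err = grad - self.hint
        self.sum_sq += err ** 2
        scale = self.lr / np.sqrt(self.sum_sq + self.eps)
        self.x = self.x - scale * grad
        self.hint = grad.copy()       # simple hint: last observed gradient

# One round on the quadratic loss f_t(x) = 0.5 * ||x - z_t||^2,
# whose gradient at x is (x - z_t).
learner = AdaptiveOptimisticOGD(dim=3)
z_t = np.array([1.0, -2.0, 0.5])
x_t = learner.predict()              # point played in round t
learner.update(grad=x_t - z_t)       # observe gradient, adapt
```

Scaling by prediction errors rather than raw gradient magnitudes is what lets optimism pay off: when the predictions are accurate, the adaptive denominators grow slowly and the learner keeps moving aggressively, which is the source of the more favorable data-dependent guarantees described above.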

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-mohri16,
  title     = {Accelerating Online Convex Optimization via Adaptive Prediction},
  author    = {Mohri, Mehryar and Yang, Scott},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {848--856},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/mohri16.pdf},
  url       = {https://proceedings.mlr.press/v51/mohri16.html}
}
EndNote
%0 Conference Paper
%T Accelerating Online Convex Optimization via Adaptive Prediction
%A Mehryar Mohri
%A Scott Yang
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-mohri16
%I PMLR
%P 848--856
%U https://proceedings.mlr.press/v51/mohri16.html
%V 51
RIS
TY  - CPAPER
TI  - Accelerating Online Convex Optimization via Adaptive Prediction
AU  - Mehryar Mohri
AU  - Scott Yang
BT  - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA  - 2016/05/02
ED  - Arthur Gretton
ED  - Christian C. Robert
ID  - pmlr-v51-mohri16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 51
SP  - 848
EP  - 856
L1  - http://proceedings.mlr.press/v51/mohri16.pdf
UR  - https://proceedings.mlr.press/v51/mohri16.html
ER  -
APA
Mohri, M. & Yang, S. (2016). Accelerating Online Convex Optimization via Adaptive Prediction. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:848-856. Available from https://proceedings.mlr.press/v51/mohri16.html.
