Open Problem: Parameter-Free and Scale-Free Online Algorithms

Francesco Orabona, Dávid Pál
29th Annual Conference on Learning Theory, PMLR 49:1659-1664, 2016.

Abstract

Existing vanilla algorithms for online linear optimization have $O((\eta R(u) + 1/\eta) \sqrt{T})$ regret with respect to any competitor $u$, where $R(u)$ is a 1-strongly convex regularizer and $\eta > 0$ is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called \emph{parameter-free} algorithms have $\widetilde{O}(\sqrt{R(u) T})$ regret with respect to any competitor $u$. Vanilla algorithms can achieve the same bound only for a fixed competitor $u$ known ahead of time, by setting $\eta = 1/\sqrt{R(u)}$. A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist \emph{scale-free} algorithms that have $O((\eta R(u) + 1/\eta) \sqrt{T} \max_{1 \le t \le T} \|\ell_t\|)$ regret with respect to any competitor $u$ and for any sequence of loss vectors $\ell_1, \dots, \ell_T$. A parameter-free analogue of scale-free algorithms has never been designed. Is it possible to design algorithms that are simultaneously \emph{parameter-free} and \emph{scale-free}?
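
To make the tuning issue concrete, balancing the two terms of the vanilla bound shows where the setting $\eta = 1/\sqrt{R(u)}$ comes from; a short calculation, using only the bound stated above:

\[
\min_{\eta > 0} \left( \eta R(u) + \frac{1}{\eta} \right) \sqrt{T}
  = 2 \sqrt{R(u)} \, \sqrt{T},
  \qquad \text{attained at } \eta = \frac{1}{\sqrt{R(u)}},
\]

so an optimally tuned vanilla algorithm matches the $\widetilde{O}(\sqrt{R(u) T})$ parameter-free rate, but only for the one competitor $u$ used to choose $\eta$; a parameter-free algorithm achieves this rate for all $u$ simultaneously.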

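As an illustration of the scale-free property only (a minimal sketch with an AdaGrad-style step size, not the algorithm this open problem asks for): online gradient descent over an $\ell_2$ ball with $\eta_t \propto 1/\sqrt{\sum_{s \le t} \|\ell_s\|^2}$ is invariant to rescaling all loss vectors by a constant $c > 0$, since $\ell_t$ scales by $c$ while $\eta_t$ scales by $1/c$. The function name and the ball-radius parameter below are illustrative choices; the radius is precisely the comparator-scale tuning that a parameter-free method would remove.

import numpy as np

def scale_free_ogd(loss_vectors, dim, radius=1.0):
    """Online gradient descent on linear losses <l_t, x> over the L2 ball
    {x : ||x||_2 <= radius}, with an AdaGrad-style scale-free step size."""
    x = np.zeros(dim)
    sum_sq = 0.0                        # running sum of ||l_s||^2
    iterates = []
    for l in loss_vectors:
        iterates.append(x.copy())       # point played before seeing l
        l = np.asarray(l, dtype=float)
        sum_sq += float(l @ l)
        eta = radius / np.sqrt(sum_sq) if sum_sq > 0 else 0.0
        x = x - eta * l                 # gradient of <l, x> is l
        norm = np.linalg.norm(x)
        if norm > radius:               # project back onto the ball
            x *= radius / norm
    return iterates

# Rescaling all losses by c > 0 leaves the iterates unchanged:
losses = [np.array([1.0, -2.0]), np.array([0.5, 0.3])]
a = scale_free_ogd(losses, dim=2)
b = scale_free_ogd([10.0 * l for l in losses], dim=2)
assert all(np.allclose(p, q) for p, q in zip(a, b))

Doubling every loss halves every step size, so the played points are unchanged, while the regret scales with $\max_{1 \le t \le T} \|\ell_t\|$ as in the scale-free bound above.
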
Cite this Paper


BibTeX
@InProceedings{pmlr-v49-orabona16,
  title     = {Open Problem: Parameter-Free and Scale-Free Online Algorithms},
  author    = {Orabona, Francesco and Pál, Dávid},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {1659--1664},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/orabona16.pdf},
  url       = {https://proceedings.mlr.press/v49/orabona16.html},
  abstract  = {Existing vanilla algorithms for online linear optimization have $O((\eta R(u) + 1/\eta) \sqrt{T})$ regret with respect to any competitor $u$, where $R(u)$ is a 1-strongly convex regularizer and $\eta > 0$ is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called \emph{parameter-free} algorithms have $\widetilde{O}(\sqrt{R(u) T})$ regret with respect to any competitor $u$. Vanilla algorithms can achieve the same bound only for a fixed competitor $u$ known ahead of time, by setting $\eta = 1/\sqrt{R(u)}$. A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist \emph{scale-free} algorithms that have $O((\eta R(u) + 1/\eta) \sqrt{T} \max_{1 \le t \le T} \|\ell_t\|)$ regret with respect to any competitor $u$ and for any sequence of loss vectors $\ell_1, \dots, \ell_T$. A parameter-free analogue of scale-free algorithms has never been designed. Is it possible to design algorithms that are simultaneously \emph{parameter-free} and \emph{scale-free}?}
}
Endnote
%0 Conference Paper
%T Open Problem: Parameter-Free and Scale-Free Online Algorithms
%A Francesco Orabona
%A Dávid Pál
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-orabona16
%I PMLR
%P 1659--1664
%U https://proceedings.mlr.press/v49/orabona16.html
%V 49
%X Existing vanilla algorithms for online linear optimization have $O((\eta R(u) + 1/\eta) \sqrt{T})$ regret with respect to any competitor $u$, where $R(u)$ is a 1-strongly convex regularizer and $\eta > 0$ is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called \emph{parameter-free} algorithms have $\widetilde{O}(\sqrt{R(u) T})$ regret with respect to any competitor $u$. Vanilla algorithms can achieve the same bound only for a fixed competitor $u$ known ahead of time, by setting $\eta = 1/\sqrt{R(u)}$. A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist \emph{scale-free} algorithms that have $O((\eta R(u) + 1/\eta) \sqrt{T} \max_{1 \le t \le T} \|\ell_t\|)$ regret with respect to any competitor $u$ and for any sequence of loss vectors $\ell_1, \dots, \ell_T$. A parameter-free analogue of scale-free algorithms has never been designed. Is it possible to design algorithms that are simultaneously \emph{parameter-free} and \emph{scale-free}?
RIS
TY - CPAPER
TI - Open Problem: Parameter-Free and Scale-Free Online Algorithms
AU - Francesco Orabona
AU - Dávid Pál
BT - 29th Annual Conference on Learning Theory
DA - 2016/06/06
ED - Vitaly Feldman
ED - Alexander Rakhlin
ED - Ohad Shamir
ID - pmlr-v49-orabona16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 49
SP - 1659
EP - 1664
L1 - http://proceedings.mlr.press/v49/orabona16.pdf
UR - https://proceedings.mlr.press/v49/orabona16.html
AB - Existing vanilla algorithms for online linear optimization have $O((\eta R(u) + 1/\eta) \sqrt{T})$ regret with respect to any competitor $u$, where $R(u)$ is a 1-strongly convex regularizer and $\eta > 0$ is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called \emph{parameter-free} algorithms have $\widetilde{O}(\sqrt{R(u) T})$ regret with respect to any competitor $u$. Vanilla algorithms can achieve the same bound only for a fixed competitor $u$ known ahead of time, by setting $\eta = 1/\sqrt{R(u)}$. A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist \emph{scale-free} algorithms that have $O((\eta R(u) + 1/\eta) \sqrt{T} \max_{1 \le t \le T} \|\ell_t\|)$ regret with respect to any competitor $u$ and for any sequence of loss vectors $\ell_1, \dots, \ell_T$. A parameter-free analogue of scale-free algorithms has never been designed. Is it possible to design algorithms that are simultaneously \emph{parameter-free} and \emph{scale-free}?
ER -
APA
Orabona, F. & Pál, D. (2016). Open Problem: Parameter-Free and Scale-Free Online Algorithms. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:1659-1664. Available from https://proceedings.mlr.press/v49/orabona16.html.
