Towards Minimax Policies for Online Linear Optimization with Bandit Feedback

Sébastien Bubeck, Nicolò Cesa-Bianchi, Sham M. Kakade
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:41.1-41.14, 2012.

Abstract

We address the online linear optimization problem with bandit feedback. Our contribution is twofold. First, we provide an algorithm (based on exponential weights) with a regret of order $\sqrt{dn \log N}$ for any finite action set with $N$ actions, under the assumption that the instantaneous loss is bounded by 1. This shaves off an extraneous $\sqrt{d}$ factor compared to previous works, and gives a regret bound of order $d\sqrt{n \log n}$ for any compact set of actions. Without further assumptions on the action set, this last bound is minimax optimal up to a logarithmic factor. Interestingly, our result also shows that the minimax regret for bandit linear optimization with expert advice in $d$ dimensions is the same as for the basic $d$-armed bandit with expert advice. Our second contribution is to show how to use the Mirror Descent algorithm to obtain computationally efficient strategies with minimax optimal regret bounds in specific examples. More precisely, we study two canonical action sets: the hypercube and the Euclidean ball. In the former case, we obtain the first computationally efficient algorithm with a $d\sqrt{n}$ regret, thus improving by a factor $\sqrt{d \log n}$ over the best known result for a computationally efficient algorithm. In the latter case, our approach gives the first algorithm with a $\sqrt{dn \log n}$ regret, again shaving off an extraneous $\sqrt{d}$ factor compared to previous works.
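As an illustration of the exponential-weights approach described in the abstract, here is a minimal sketch of an Exp2-style update for a finite action set under bandit feedback. This is a hypothetical implementation for intuition only, not the paper's exact algorithm: the function name `exp2`, the uniform exploration mixture `gamma`, and the least-squares loss estimator are standard choices assumed here, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp2(actions, losses, eta, gamma):
    """Exp2-style exponential weights for bandit linear optimization.

    actions: (N, d) array, one action per row.
    losses:  (n, d) array, the adversary's loss vector for each round.
    Returns the total loss incurred over the n rounds.
    """
    N, d = actions.shape
    logw = np.zeros(N)  # log-weights, for numerical stability
    total = 0.0
    for z in losses:
        p = np.exp(logw - logw.max())
        p /= p.sum()
        # mix with uniform exploration so the covariance stays well conditioned
        q = (1 - gamma) * p + gamma / N
        i = rng.choice(N, p=q)
        a = actions[i]
        loss = a @ z          # bandit feedback: only this scalar is observed
        total += loss
        # unbiased least-squares estimate of the full loss vector z
        P = (actions.T * q) @ actions            # E[a a^T] under q
        zhat = np.linalg.pinv(P) @ a * loss      # P^+ a * observed loss
        logw -= eta * (actions @ zhat)           # exponential-weights update
    return total
```

The log-weight bookkeeping avoids overflow in the exponential update, and the pseudo-inverse guards against a singular covariance when the played actions do not span $\mathbb{R}^d$.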

Cite this Paper


BibTeX
@InProceedings{pmlr-v23-bubeck12a,
  title     = {Towards Minimax Policies for Online Linear Optimization with Bandit Feedback},
  author    = {Bubeck, Sébastien and Cesa-Bianchi, Nicolò and Kakade, Sham M.},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  pages     = {41.1--41.14},
  year      = {2012},
  editor    = {Mannor, Shie and Srebro, Nathan and Williamson, Robert C.},
  volume    = {23},
  series    = {Proceedings of Machine Learning Research},
  address   = {Edinburgh, Scotland},
  month     = {25--27 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v23/bubeck12a/bubeck12a.pdf},
  url       = {https://proceedings.mlr.press/v23/bubeck12a.html},
  abstract  = {We address the online linear optimization problem with bandit feedback. Our contribution is twofold. First, we provide an algorithm (based on exponential weights) with a regret of order $\sqrt{dn \log N}$ for any finite action set with $N$ actions, under the assumption that the instantaneous loss is bounded by 1. This shaves off an extraneous $\sqrt{d}$ factor compared to previous works, and gives a regret bound of order $d\sqrt{n \log n}$ for any compact set of actions. Without further assumptions on the action set, this last bound is minimax optimal up to a logarithmic factor. Interestingly, our result also shows that the minimax regret for bandit linear optimization with expert advice in $d$ dimensions is the same as for the basic $d$-armed bandit with expert advice. Our second contribution is to show how to use the Mirror Descent algorithm to obtain computationally efficient strategies with minimax optimal regret bounds in specific examples. More precisely, we study two canonical action sets: the hypercube and the Euclidean ball. In the former case, we obtain the first computationally efficient algorithm with a $d\sqrt{n}$ regret, thus improving by a factor $\sqrt{d \log n}$ over the best known result for a computationally efficient algorithm. In the latter case, our approach gives the first algorithm with a $\sqrt{dn \log n}$ regret, again shaving off an extraneous $\sqrt{d}$ factor compared to previous works.}
}
APA
Bubeck, S., Cesa-Bianchi, N. & Kakade, S.M. (2012). Towards Minimax Policies for Online Linear Optimization with Bandit Feedback. Proceedings of the 25th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 23:41.1-41.14. Available from https://proceedings.mlr.press/v23/bubeck12a.html.