Minimax Policies for Combinatorial Prediction Games

Jean-Yves Audibert, Sébastien Bubeck, Gábor Lugosi
Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:107-132, 2011.

Abstract

We address the online linear optimization problem when the actions of the forecaster are represented by binary vectors. Our goal is to understand the magnitude of the minimax regret for the worst possible set of actions. We study the problem under three feedback assumptions: full information, and the partial-information models known as the “semi-bandit” and “bandit” problems. We consider both $L_\infty$- and $L_2$-type restrictions on the losses assigned by the adversary. We formulate a general strategy, based on Bregman projections on top of a potential-based gradient descent, that generalizes the strategies studied in the series of papers György et al. (2007); Dani et al. (2008); Abernethy et al. (2008); Cesa-Bianchi and Lugosi (2009); Helmbold and Warmuth (2009); Koolen et al. (2010); Uchiya et al. (2010); Kale et al. (2010); and Audibert and Bubeck (2010). We provide simple proofs that recover most of the previous results. We propose new upper bounds for the semi-bandit game, and we derive lower bounds for all three feedback assumptions. Except for the bandit game, the upper and lower bounds match up to a constant factor. Finally, we answer a question raised by Koolen et al. (2010) by showing that the exponentially weighted average forecaster is suboptimal against $L_\infty$ adversaries.
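To make the "Bregman projection on top of a potential-based gradient descent" strategy concrete, here is a minimal illustrative sketch, not the paper's exact algorithm or notation. It instantiates the full-information game over m-sets (binary vectors in $\{0,1\}^d$ with exactly m ones) with the unnormalized negative-entropy potential, in which case the update is essentially the Component Hedge algorithm of Koolen et al. (2010). The function names (kl_project, sample_action, osmd_full_info) and the step-size parameter eta are ours, chosen for exposition.

```python
import numpy as np

def kl_project(w, m, tol=1e-12):
    """Bregman projection, under the unnormalized negative-entropy
    potential, of a positive weight vector w onto the convex hull of
    the m-sets, {x in [0,1]^d : sum(x) = m}.  The projection caps the
    largest coordinates at 1 and rescales the rest proportionally."""
    capped = np.zeros(w.size, dtype=bool)
    x = w * m / w.sum()
    while True:
        newly = (x > 1.0 + tol) & ~capped
        if not newly.any():
            break
        capped |= newly
        x[capped] = 1.0
        free = ~capped
        if free.any():
            x[free] = w[free] * (m - capped.sum()) / w[free].sum()
    return np.minimum(x, 1.0)

def sample_action(x, rng):
    """Draw a binary action a with sum(a) = m and E[a] = x, using
    systematic sampling (valid because the projection gives x_i <= 1)."""
    m = int(round(x.sum()))
    cum = np.concatenate(([0.0], np.cumsum(x)))
    u = rng.random()
    a = np.zeros(x.size)
    for k in range(m):
        i = np.searchsorted(cum, u + k, side="right") - 1
        a[min(i, x.size - 1)] = 1.0
    return a

def osmd_full_info(losses, m, eta, seed=0):
    """Potential-based gradient step followed by a Bregman projection,
    for the full-information m-set game.  `losses` is a (T, d) array of
    loss vectors chosen by the adversary."""
    T, d = losses.shape
    rng = np.random.default_rng(seed)
    x = np.full(d, m / d)                    # uniform point of the hull
    total_loss = 0.0
    for t in range(T):
        a = sample_action(x, rng)            # play a random m-set, E[a] = x
        total_loss += a @ losses[t]
        w = x * np.exp(-eta * losses[t])     # entropic (potential-based) step
        x = kl_project(w, m)                 # Bregman projection to the hull
    return total_loss
```

For instance, osmd_full_info(np.random.rand(1000, 20), m=5, eta=0.1) runs the strategy against i.i.d. uniform losses. The same two-step structure (gradient step in the dual of the potential, then Bregman projection onto the convex hull of the action set) covers the semi-bandit and bandit games once the loss vector is replaced by an appropriate unbiased estimate built from the observed feedback.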

Cite this Paper


BibTeX
@InProceedings{pmlr-v19-audibert11a,
  title     = {Minimax Policies for Combinatorial Prediction Games},
  author    = {Audibert, Jean-Yves and Bubeck, S\'ebastien and Lugosi, G\'abor},
  booktitle = {Proceedings of the 24th Annual Conference on Learning Theory},
  pages     = {107--132},
  year      = {2011},
  editor    = {Kakade, Sham M. and von Luxburg, Ulrike},
  volume    = {19},
  series    = {Proceedings of Machine Learning Research},
  address   = {Budapest, Hungary},
  month     = {09--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v19/audibert11a/audibert11a.pdf},
  url       = {https://proceedings.mlr.press/v19/audibert11a.html}
}
APA
Audibert, J.-Y., Bubeck, S., & Lugosi, G. (2011). Minimax Policies for Combinatorial Prediction Games. Proceedings of the 24th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 19:107-132. Available from https://proceedings.mlr.press/v19/audibert11a.html.
