Achievability of Asymptotic Minimax Regret in Online and Batch Prediction

Kazuho Watanabe, Teemu Roos, Petri Myllymäki
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:181-196, 2013.

Abstract

The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models, no online strategy can be asymptotically minimax. We prove that this holds under a slightly stronger definition of asymptotic minimaxity. Our numerical experiments support the conjecture about non-achievability by so-called last-step minimax algorithms, which are independent of n. On the other hand, we show that in the multinomial model, a Bayes mixture defined by the conjugate Dirichlet prior with a simple dependency on n achieves asymptotic minimaxity for all sequences, thus providing a simpler asymptotic minimax strategy compared to earlier work by Xie and Barron. The numerical results also demonstrate superior finite-sample behavior of a number of novel batch and online algorithms.
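For readers new to the setting, the following standard formulation (general background, not notation taken from the paper itself) makes the abstract's terms precise: the regret of a prediction strategy q on a sequence x^n is its excess code length over the maximized likelihood, and the normalized maximum likelihood (NML) model of Shtarkov minimizes the worst-case regret:

    \mathrm{regret}(q, x^n) = \log \frac{p(x^n \mid \hat{\theta}(x^n))}{q(x^n)},
    \qquad
    p_{\mathrm{NML}}(x^n) = \frac{p(x^n \mid \hat{\theta}(x^n))}{\sum_{y^n} p(y^n \mid \hat{\theta}(y^n))}.

The sketch below is a minimal illustration of these quantities, not the paper's algorithm: for the Bernoulli model (the one-dimensional case of the multinomial setting above), it computes the exact minimax regret via the log Shtarkov sum, and the worst-case regret of a Bayes mixture under a Beta(alpha, alpha) prior. The function names are ad hoc, and the fixed alpha = 1/2 is the horizon-independent Krichevsky-Trofimov choice, not the paper's n-dependent Dirichlet hyperparameter, which is given in the text itself.

    import math

    def log_ml(k, n):
        """Log maximized likelihood (k/n)^k ((n-k)/n)^(n-k) of a binary sequence with k ones."""
        ll = 0.0
        if k > 0:
            ll += k * math.log(k / n)
        if n - k > 0:
            ll += (n - k) * math.log((n - k) / n)
        return ll

    def minimax_regret(n):
        """Log Shtarkov sum: the exact minimax regret, achieved by NML (in nats)."""
        log_terms = [math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                     + log_ml(k, n) for k in range(n + 1)]
        m = max(log_terms)  # log-sum-exp for numerical stability
        return m + math.log(sum(math.exp(t - m) for t in log_terms))

    def log_beta(a, b):
        """Log of the Beta function B(a, b)."""
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

    def mixture_worst_regret(n, alpha):
        """Worst-case regret of the Beta(alpha, alpha) Bayes mixture; the regret of a
        sequence depends only on its count k of ones, so we maximize over k."""
        return max(
            log_ml(k, n) - (log_beta(k + alpha, n - k + alpha) - log_beta(alpha, alpha))
            for k in range(n + 1)
        )

    if __name__ == "__main__":
        n = 1000
        print(f"NML minimax regret:                  {minimax_regret(n):.4f} nats")
        print(f"Beta(1/2) mixture worst-case regret: {mixture_worst_regret(n, 0.5):.4f} nats")

For a fixed alpha, the mixture's worst-case regret is known to exceed the minimax value by a constant gap as n grows (attained at the all-zeros and all-ones sequences), which is what motivates letting the prior depend on n in the first place.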

Cite this Paper


BibTeX
@InProceedings{pmlr-v29-Watanabe13,
  title     = {Achievability of Asymptotic Minimax Regret in Online and Batch Prediction},
  author    = {Watanabe, Kazuho and Roos, Teemu and Myllymäki, Petri},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  pages     = {181--196},
  year      = {2013},
  editor    = {Ong, Cheng Soon and Ho, Tu Bao},
  volume    = {29},
  series    = {Proceedings of Machine Learning Research},
  address   = {Australian National University, Canberra, Australia},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v29/Watanabe13.pdf},
  url       = {https://proceedings.mlr.press/v29/Watanabe13.html},
  abstract  = {The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models, no online strategy can be asymptotically minimax. We prove that this holds under a slightly stronger definition of asymptotic minimaxity. Our numerical experiments support the conjecture about non-achievability by so called last-step minimax algorithms, which are independent of n. On the other hand, we show that in the multinomial model, a Bayes mixture defined by the conjugate Dirichlet prior with a simple dependency on n achieves asymptotic minimaxity for all sequences, thus providing a simpler asymptotic minimax strategy compared to earlier work by Xie and Barron. The numerical results also demonstrate superior finite-sample behavior by a number of novel batch and online algorithms.}
}
Endnote
%0 Conference Paper
%T Achievability of Asymptotic Minimax Regret in Online and Batch Prediction
%A Kazuho Watanabe
%A Teemu Roos
%A Petri Myllymäki
%B Proceedings of the 5th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Cheng Soon Ong
%E Tu Bao Ho
%F pmlr-v29-Watanabe13
%I PMLR
%P 181--196
%U https://proceedings.mlr.press/v29/Watanabe13.html
%V 29
%X The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models, no online strategy can be asymptotically minimax. We prove that this holds under a slightly stronger definition of asymptotic minimaxity. Our numerical experiments support the conjecture about non-achievability by so called last-step minimax algorithms, which are independent of n. On the other hand, we show that in the multinomial model, a Bayes mixture defined by the conjugate Dirichlet prior with a simple dependency on n achieves asymptotic minimaxity for all sequences, thus providing a simpler asymptotic minimax strategy compared to earlier work by Xie and Barron. The numerical results also demonstrate superior finite-sample behavior by a number of novel batch and online algorithms.
RIS
TY  - CPAPER
TI  - Achievability of Asymptotic Minimax Regret in Online and Batch Prediction
AU  - Kazuho Watanabe
AU  - Teemu Roos
AU  - Petri Myllymäki
BT  - Proceedings of the 5th Asian Conference on Machine Learning
DA  - 2013/10/21
ED  - Cheng Soon Ong
ED  - Tu Bao Ho
ID  - pmlr-v29-Watanabe13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 29
SP  - 181
EP  - 196
L1  - http://proceedings.mlr.press/v29/Watanabe13.pdf
UR  - https://proceedings.mlr.press/v29/Watanabe13.html
AB  - The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models, no online strategy can be asymptotically minimax. We prove that this holds under a slightly stronger definition of asymptotic minimaxity. Our numerical experiments support the conjecture about non-achievability by so called last-step minimax algorithms, which are independent of n. On the other hand, we show that in the multinomial model, a Bayes mixture defined by the conjugate Dirichlet prior with a simple dependency on n achieves asymptotic minimaxity for all sequences, thus providing a simpler asymptotic minimax strategy compared to earlier work by Xie and Barron. The numerical results also demonstrate superior finite-sample behavior by a number of novel batch and online algorithms.
ER  -
APA
Watanabe, K., Roos, T. & Myllymäki, P. (2013). Achievability of Asymptotic Minimax Regret in Online and Batch Prediction. Proceedings of the 5th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 29:181-196. Available from https://proceedings.mlr.press/v29/Watanabe13.html.
