Horizon-Independent Optimal Prediction with Log-Loss in Exponential Families

Peter Bartlett, Peter Grünwald, Peter Harremoës, Fares Hedayati, Wojciech Kotlowski
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:639-661, 2013.

Abstract

We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlett (2012) showed that a Bayesian prediction strategy with Jeffreys prior and sequential normalized maximum likelihood (SNML) coincide and are optimal if and only if the latter is exchangeable, which occurs if and only if the optimal strategy can be calculated without knowing the time horizon in advance. They raised the question of which families have exchangeable SNML strategies. We answer this question for one-dimensional exponential families: SNML is exchangeable only for three classes of natural exponential family distributions, namely the Gaussian, the gamma, and the Tweedie exponential family of order 3/2.
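For readers unfamiliar with the terminology, here is a brief sketch of the standard definitions involved, in our own notation rather than the paper's. Given a parametric family $\{p_\theta\}$ with maximum-likelihood estimator $\hat\theta(\cdot)$, the SNML strategy predicts the next outcome after observing $x^t = (x_1, \dots, x_t)$ by

$$p_{\mathrm{SNML}}(x_{t+1} \mid x^t) \;=\; \frac{p_{\hat\theta(x^t, x_{t+1})}(x^t, x_{t+1})}{\int p_{\hat\theta(x^t, y)}(x^t, y)\, \mathrm{d}y},$$

i.e. the maximized likelihood of the extended sequence, renormalized over the next outcome. SNML is called exchangeable when the joint distribution obtained by chaining these conditionals, $\prod_{t=0}^{n-1} p_{\mathrm{SNML}}(x_{t+1} \mid x^t)$, is invariant under permutations of $x_1, \dots, x_n$; the paper shows that, among one-dimensional exponential families, this holds exactly for the three classes listed above.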

Cite this Paper


BibTeX
@InProceedings{pmlr-v30-Bartlett13,
  title     = {Horizon-Independent Optimal Prediction with Log-Loss in Exponential Families},
  author    = {Bartlett, Peter and Grünwald, Peter and Harremoës, Peter and Hedayati, Fares and Kotlowski, Wojciech},
  booktitle = {Proceedings of the 26th Annual Conference on Learning Theory},
  pages     = {639--661},
  year      = {2013},
  editor    = {Shalev-Shwartz, Shai and Steinwart, Ingo},
  volume    = {30},
  series    = {Proceedings of Machine Learning Research},
  address   = {Princeton, NJ, USA},
  month     = {12--14 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v30/Bartlett13.pdf},
  url       = {https://proceedings.mlr.press/v30/Bartlett13.html},
  abstract  = {We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlett (2012) showed that a Bayesian prediction strategy with Jeffreys prior and sequential normalized maximum likelihood (SNML) coincide and are optimal if and only if the latter is exchangeable, which occurs if and only if the optimal strategy can be calculated without knowing the time horizon in advance. They raised the question of which families have exchangeable SNML strategies. We answer this question for one-dimensional exponential families: SNML is exchangeable only for three classes of natural exponential family distributions, namely the Gaussian, the gamma, and the Tweedie exponential family of order 3/2.}
}
APA
Bartlett, P., Grünwald, P., Harremoës, P., Hedayati, F. & Kotlowski, W. (2013). Horizon-Independent Optimal Prediction with Log-Loss in Exponential Families. Proceedings of the 26th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 30:639-661. Available from https://proceedings.mlr.press/v30/Bartlett13.html.
