Generalized Mixability via Entropic Duality

Mark D. Reid, Rafael M. Frongillo, Robert C. Williamson, Nishant Mehta
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1501-1522, 2015.

Abstract

Mixability is a property of a loss which characterizes when constant regret is possible in the game of prediction with expert advice. We show that a key property of mixability generalizes, and the exp and log operations present in the usual theory are not as special as one might have thought. In doing so we introduce a more general notion of Φ-mixability where Φ is a general entropy (i.e., any convex function on probabilities). We show how a property shared by the convex dual of any such entropy yields a natural algorithm (the minimizer of a regret bound) which, analogous to the classical Aggregating Algorithm, is guaranteed a constant regret when used with Φ-mixable losses. We characterize which Φ have non-trivial Φ-mixable losses and relate Φ-mixability and its associated Aggregating Algorithm to potential-based methods, a Blackwell-like condition, mirror descent, and risk measures from finance. We also define a notion of “dominance” between different entropies in terms of bounds they guarantee and conjecture that classical mixability gives optimal bounds, for which we provide some supporting empirical evidence.
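
As a point of reference for the abstract, the following LaTeX sketch records the classical notion of η-mixability and one natural way to write a Φ-generalization via a Bregman divergence. The notation (μ for the prior over experts, a_θ for the experts' predictions, ℓ_x for the loss vector on outcome x, D_Φ for the Bregman divergence of Φ) is introduced here for illustration, and the second inequality is only a plausible form consistent with the abstract, not necessarily the paper's exact definition.

% Classical \eta-mixability: for every prior \mu over experts and expert
% predictions (a_\theta), there exists a single prediction a^* satisfying,
% for all outcomes x,
\[
  \ell(a^*, x) \;\le\; -\frac{1}{\eta} \log \sum_{\theta} \mu_\theta \, e^{-\eta\, \ell(a_\theta, x)} .
\]
% An entropic generalization (illustrative form only): for a convex entropy
% \Phi on the simplex with Bregman divergence D_\Phi, a \Phi-mixability
% condition can be written as
\[
  \ell(a^*, x) \;\le\; \inf_{q \in \Delta_\Theta}
    \Big( \langle q, \ell_x \rangle + D_\Phi(q, \mu) \Big),
  \qquad \ell_x := \big(\ell(a_\theta, x)\big)_{\theta} .
\]
% Taking \Phi(\mu) = \frac{1}{\eta} \sum_\theta \mu_\theta \log \mu_\theta gives
% D_\Phi(q, \mu) = \frac{1}{\eta} \mathrm{KL}(q \,\|\, \mu), and by the Gibbs
% variational principle the infimum equals the mix loss above, so this form
% recovers classical \eta-mixability.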

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Reid15,
  title     = {Generalized Mixability via Entropic Duality},
  author    = {Reid, Mark D. and Frongillo, Rafael M. and Williamson, Robert C. and Mehta, Nishant},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {1501--1522},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Reid15.pdf},
  url       = {https://proceedings.mlr.press/v40/Reid15.html},
  abstract  = {Mixability is a property of a loss which characterizes when constant regret is possible in the game of prediction with expert advice. We show that a key property of mixability generalizes, and the exp and log operations present in the usual theory are not as special as one might have thought. In doing so we introduce a more general notion of Φ-mixability where Φ is a general entropy (i.e., any convex function on probabilities). We show how a property shared by the convex dual of any such entropy yields a natural algorithm (the minimizer of a regret bound) which, analogous to the classical Aggregating Algorithm, is guaranteed a constant regret when used with Φ-mixable losses. We characterize which Φ have non-trivial Φ-mixable losses and relate Φ-mixability and its associated Aggregating Algorithm to potential-based methods, a Blackwell-like condition, mirror descent, and risk measures from finance. We also define a notion of “dominance” between different entropies in terms of bounds they guarantee and conjecture that classical mixability gives optimal bounds, for which we provide some supporting empirical evidence.}
}
APA
Reid, M.D., Frongillo, R.M., Williamson, R.C. & Mehta, N. (2015). Generalized Mixability via Entropic Duality. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:1501-1522. Available from https://proceedings.mlr.press/v40/Reid15.html.