On-Line Learning Algorithms for Path Experts with Non-Additive Losses

Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Manfred Warmuth
Proceedings of The 28th Conference on Learning Theory, PMLR 40:424-447, 2015.

Abstract

We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.
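The paper's contribution is extending Follow-the-Perturbed-Leader and Randomized Weighted Majority to path experts with non-additive (rational and tropical) losses. As background only, here is a minimal sketch of the *standard* RWM algorithm over an explicit expert set with additive losses, which the paper generalizes; the learning rate `eta` and the loss data are illustrative assumptions, not parameters from the paper.

```python
import random

def rwm(expert_losses, eta=0.5, seed=0):
    """Standard Randomized Weighted Majority (background sketch).

    expert_losses: list of rounds; each round is a list of per-expert
    losses in [0, 1]. Returns the algorithm's total expected loss when
    an expert is drawn from the current weight distribution each round.
    """
    rng = random.Random(seed)  # would be used to actually sample an expert
    n = len(expert_losses[0])
    weights = [1.0] * n
    total = 0.0
    for losses in expert_losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected loss of sampling an expert from the current distribution.
        total += sum(p * l for p, l in zip(probs, losses))
        # Multiplicative update: downweight experts that incurred loss.
        weights = [w * (1 - eta) ** l for w, l in zip(weights, losses)]
    return total
```

With exponentially many path experts and a non-additive loss, this table of explicit weights is infeasible; the paper's rational-loss extensions achieve the analogous updates in polynomial time via weighted-automata operations.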

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Cortes15,
  title     = {On-Line Learning Algorithms for Path Experts with Non-Additive Losses},
  author    = {Cortes, Corinna and Kuznetsov, Vitaly and Mohri, Mehryar and Warmuth, Manfred},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {424--447},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Cortes15.pdf},
  url       = {https://proceedings.mlr.press/v40/Cortes15.html},
  abstract  = {We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.}
}
Endnote
%0 Conference Paper
%T On-Line Learning Algorithms for Path Experts with Non-Additive Losses
%A Corinna Cortes
%A Vitaly Kuznetsov
%A Mehryar Mohri
%A Manfred Warmuth
%B Proceedings of The 28th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2015
%E Peter Grünwald
%E Elad Hazan
%E Satyen Kale
%F pmlr-v40-Cortes15
%I PMLR
%P 424--447
%U https://proceedings.mlr.press/v40/Cortes15.html
%V 40
%X We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.
RIS
TY  - CPAPER
TI  - On-Line Learning Algorithms for Path Experts with Non-Additive Losses
AU  - Corinna Cortes
AU  - Vitaly Kuznetsov
AU  - Mehryar Mohri
AU  - Manfred Warmuth
BT  - Proceedings of The 28th Conference on Learning Theory
DA  - 2015/06/26
ED  - Peter Grünwald
ED  - Elad Hazan
ED  - Satyen Kale
ID  - pmlr-v40-Cortes15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 40
SP  - 424
EP  - 447
L1  - http://proceedings.mlr.press/v40/Cortes15.pdf
UR  - https://proceedings.mlr.press/v40/Cortes15.html
AB  - We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.
ER  -
APA
Cortes, C., Kuznetsov, V., Mohri, M. & Warmuth, M. (2015). On-Line Learning Algorithms for Path Experts with Non-Additive Losses. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:424-447. Available from https://proceedings.mlr.press/v40/Cortes15.html.