Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization

Mehrdad Mahdavi, Lijun Zhang, Rong Jin
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1305-1320, 2015.

Abstract

In this paper we derive high-probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning, such as the squared loss in linear regression, the logistic loss in classification, and the negative logarithm loss in portfolio management. We demonstrate an O(d log T / T) upper bound on the excess risk of the stochastic Online Newton Step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for the squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of the upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting in its own right, and the proof of the lower bound uses a probabilistic method and relies on an information-theoretic minimax analysis.
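For intuition, a loss ℓ is α-exp-concave when exp(-α ℓ(w)) is concave in w; the squared loss over a bounded domain is a standard example. Below is a minimal sketch of an Online Newton Step pass for squared loss, meant only to illustrate the kind of algorithm the upper bound concerns. The step parameter gamma, the regularization eps, the plain Euclidean projection (instead of the generalized projection ONS prescribes), and the averaged output are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def online_newton_step_squared_loss(X, y, gamma=0.5, eps=1.0, radius=1.0):
    """A basic Online Newton Step pass over (x_t, y_t) pairs with squared loss.

    gamma, eps, and radius are hypothetical illustrative defaults,
    not values taken from the paper.
    """
    T, d = X.shape
    w = np.zeros(d)
    A = eps * np.eye(d)      # running matrix A_t = eps*I + sum_s g_s g_s^T
    avg_w = np.zeros(d)      # average of iterates, one common output choice
    for t in range(T):
        x_t, y_t = X[t], y[t]
        g = (w @ x_t - y_t) * x_t               # gradient of 0.5*(w.x - y)^2 at w
        A += np.outer(g, g)
        w = w - np.linalg.solve(A, g) / gamma   # Newton-like step
        # ONS prescribes a generalized projection in the A-norm; a plain
        # Euclidean projection onto the L2 ball is used here as a simplification.
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
        avg_w += w
    return avg_w / T

# Toy usage: recover a planted weight vector from noisy linear observations.
rng = np.random.default_rng(0)
T, d = 2000, 5
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(T, d)) / np.sqrt(d)
y = X @ w_star + 0.1 * rng.normal(size=T)
w_hat = online_newton_step_squared_loss(X, y)
print("excess risk proxy:",
      np.mean((X @ w_hat - y) ** 2) - np.mean((X @ w_star - y) ** 2))
```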

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Mahdavi15, title = {Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization}, author = {Mahdavi, Mehrdad and Zhang, Lijun and Jin, Rong}, booktitle = {Proceedings of The 28th Conference on Learning Theory}, pages = {1305--1320}, year = {2015}, editor = {Grünwald, Peter and Hazan, Elad and Kale, Satyen}, volume = {40}, series = {Proceedings of Machine Learning Research}, address = {Paris, France}, month = {03--06 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v40/Mahdavi15.pdf}, url = {https://proceedings.mlr.press/v40/Mahdavi15.html}, abstract = {In this paper we derive high probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning such as squared loss in linear regression, logistic loss in classification, and negative logarithm loss in portfolio management. We demonstrate an O(d \log T/T) upper bound on the excess risk of stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting by its own right, and the proof technique used to achieve the lower bound is a probabilistic method and relies on an information-theoretic minimax analysis.} }
Endnote
%0 Conference Paper %T Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization %A Mehrdad Mahdavi %A Lijun Zhang %A Rong Jin %B Proceedings of The 28th Conference on Learning Theory %C Proceedings of Machine Learning Research %D 2015 %E Peter Grünwald %E Elad Hazan %E Satyen Kale %F pmlr-v40-Mahdavi15 %I PMLR %P 1305--1320 %U https://proceedings.mlr.press/v40/Mahdavi15.html %V 40 %X In this paper we derive high probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning such as squared loss in linear regression, logistic loss in classification, and negative logarithm loss in portfolio management. We demonstrate an O(d \log T/T) upper bound on the excess risk of stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting by its own right, and the proof technique used to achieve the lower bound is a probabilistic method and relies on an information-theoretic minimax analysis.
RIS
TY - CPAPER TI - Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization AU - Mehrdad Mahdavi AU - Lijun Zhang AU - Rong Jin BT - Proceedings of The 28th Conference on Learning Theory DA - 2015/06/26 ED - Peter Grünwald ED - Elad Hazan ED - Satyen Kale ID - pmlr-v40-Mahdavi15 PB - PMLR DP - Proceedings of Machine Learning Research VL - 40 SP - 1305 EP - 1320 L1 - http://proceedings.mlr.press/v40/Mahdavi15.pdf UR - https://proceedings.mlr.press/v40/Mahdavi15.html AB - In this paper we derive high probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning such as squared loss in linear regression, logistic loss in classification, and negative logarithm loss in portfolio management. We demonstrate an O(d \log T/T) upper bound on the excess risk of stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting by its own right, and the proof technique used to achieve the lower bound is a probabilistic method and relies on an information-theoretic minimax analysis. ER -
APA
Mahdavi, M., Zhang, L. & Jin, R. (2015). Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:1305-1320. Available from https://proceedings.mlr.press/v40/Mahdavi15.html.
