Margins, Shrinkage, and Boosting

Matus Telgarsky
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):307-315, 2013.

Abstract

This manuscript shows that AdaBoost and its immediate variants can produce approximately maximum margin classifiers simply by scaling their step size choices by a fixed small constant. In this way, when the unscaled step size is an optimal choice, these results provide guarantees for Friedman’s empirically successful “shrinkage” procedure for gradient boosting (Friedman, 2000). Guarantees are also provided for a variety of other step sizes, affirming the intuition that increasingly regularized line searches provide improved margin guarantees. The results hold for the exponential loss and similar losses, most notably the logistic loss.
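
To make the shrinkage procedure concrete, below is a minimal sketch (not the paper's pseudocode) of AdaBoost whose optimal line-search step alpha_t = (1/2) ln((1 - eps_t)/eps_t) is multiplied by a fixed shrinkage factor nu in (0, 1], the scaling the abstract refers to. The names `adaboost_shrinkage`, `weak_learners`, and `nu` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def adaboost_shrinkage(X, y, weak_learners, T, nu=0.1):
    """AdaBoost with shrunken steps (illustrative sketch).

    `weak_learners` is a hypothetical finite base class: a list of
    callables h(X) -> array of {-1, +1} predictions.  nu = 1 recovers
    plain AdaBoost; small nu implements Friedman-style shrinkage.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # example weights
    ensemble = []                        # list of (coefficient, hypothesis)
    for _ in range(T):
        # choose the weak hypothesis with smallest weighted error
        errs = [np.sum(w * (h(X) != y)) for h in weak_learners]
        i = int(np.argmin(errs))
        h, eps = weak_learners[i], errs[i]
        if eps >= 0.5:                   # no edge over random guessing; stop
            break
        # unscaled step is AdaBoost's exact line search; shrink it by nu
        alpha = nu * 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))
        ensemble.append((alpha, h))
        # exponential-loss reweighting, then renormalize
        w *= np.exp(-alpha * y * h(X))
        w /= w.sum()
    return ensemble

def normalized_margins(ensemble, X, y):
    """Margins y * F(X) / sum_t |alpha_t|; the paper's guarantee says
    shrinkage drives these toward the maximum achievable margin."""
    F = sum(a * h(X) for a, h in ensemble)
    return y * F / sum(abs(a) for a, _ in ensemble)
```

The same scaling applies verbatim to other losses in the paper's class (e.g. the logistic loss), with only the reweighting line changed to the corresponding loss derivative.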

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-telgarsky13,
  title     = {Margins, Shrinkage, and Boosting},
  author    = {Telgarsky, Matus},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {307--315},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/telgarsky13.pdf},
  url       = {https://proceedings.mlr.press/v28/telgarsky13.html}
}
APA
Telgarsky, M. (2013). Margins, Shrinkage, and Boosting. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(2):307-315. Available from https://proceedings.mlr.press/v28/telgarsky13.html.
