DART: Dropouts meet Multiple Additive Regression Trees

Rashmi Korlakai Vinayak, Ran Gilad-Bachrach
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:489-497, 2015.

Abstract

MART, an ensemble model of boosted regression trees, is known to deliver high prediction accuracy for diverse tasks and is widely used in practice. However, it suffers from an issue we call over-specialization, wherein trees added at later iterations tend to impact the prediction of only a few instances and make a negligible contribution towards the remaining instances. This negatively affects the performance of the model on unseen data, and also makes the model over-sensitive to the contributions of the few trees added at the initial iterations. We show that shrinkage, the tool commonly used to address this issue, alleviates the problem only to a certain extent; the fundamental issue of over-specialization remains. In this work, we explore a different approach to the problem: employing dropouts, a tool recently proposed in the context of learning deep neural networks. We propose a novel way of employing dropouts to tackle the issue of over-specialization in MART, resulting in the DART algorithm. We evaluate DART on ranking, regression and classification tasks, using large-scale, publicly available datasets, and show that DART outperforms MART on each of the tasks by a significant margin.
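The idea behind DART can be sketched compactly: at each boosting iteration, a random subset of the existing trees is temporarily dropped, the next tree is fit to the residuals of the reduced ensemble, and the new tree and the dropped trees are then rescaled so the combined prediction does not overshoot. Below is a minimal Python sketch of this idea for squared loss, not the authors' implementation: it assumes scikit-learn's DecisionTreeRegressor as the base learner, and the names dart_fit, dart_predict, p_drop, n_trees, and max_depth are illustrative. The normalization (new tree weighted 1/(k+1), the k dropped trees rescaled by k/(k+1)) follows the scheme described in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def dart_fit(X, y, n_trees=100, p_drop=0.1, max_depth=3, seed=0):
    rng = np.random.default_rng(seed)
    trees, weights = [], []  # base learners and their scale factors
    for _ in range(n_trees):
        # 1. Build the dropout set: drop each existing tree with
        #    probability p_drop; if nothing was dropped, drop one
        #    tree chosen uniformly at random.
        drop = [i for i in range(len(trees)) if rng.random() < p_drop]
        if trees and not drop:
            drop = [int(rng.integers(len(trees)))]
        keep = [i for i in range(len(trees)) if i not in drop]

        # 2. Fit the next tree to the residuals of the ensemble with the
        #    dropped trees removed (the negative gradient for squared loss).
        pred = np.zeros(len(y))
        for i in keep:
            pred += weights[i] * trees[i].predict(X)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y - pred)

        # 3. Normalize: the new tree fit the signal the k dropped trees
        #    used to cover, so give it weight 1/(k+1) and shrink the
        #    dropped trees by k/(k+1), keeping the total contribution
        #    roughly unchanged.
        k = len(drop)
        for i in drop:
            weights[i] *= k / (k + 1)
        trees.append(tree)
        weights.append(1.0 / (k + 1))
    return trees, weights

def dart_predict(trees, weights, X):
    return sum(w * t.predict(X) for w, t in zip(weights, trees))
```

Production implementations of DART are available in common gradient-boosting libraries: XGBoost exposes it via booster="dart" (with rate_drop and skip_drop parameters) and LightGBM via boosting="dart", both of which cite this paper.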

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-korlakaivinayak15,
  title     = {{DART: Dropouts meet Multiple Additive Regression Trees}},
  author    = {Korlakai Vinayak, Rashmi and Gilad-Bachrach, Ran},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {489--497},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/korlakaivinayak15.pdf},
  url       = {https://proceedings.mlr.press/v38/korlakaivinayak15.html}
}
APA
Korlakai Vinayak, R. & Gilad-Bachrach, R. (2015). DART: Dropouts meet Multiple Additive Regression Trees. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:489-497. Available from https://proceedings.mlr.press/v38/korlakaivinayak15.html.