Speed and Sparsity of Regularized Boosting

Yongxin Xi, Zhen Xiang, Peter Ramadge, Robert Schapire; JMLR W&CP 5:615-622, 2009.

Abstract

Boosting algorithms with ℓ1 regularization are of interest because ℓ1 regularization leads to sparser composite classifiers. Moreover, Rosset et al. have shown that for separable data, standard ℓp-regularized loss minimization results in a margin-maximizing classifier in the limit as regularization is relaxed. For the case p = 1, we extend these results by obtaining explicit convergence bounds on the regularization required to yield a margin within prescribed accuracy of the maximum achievable margin. We derive similar rates of convergence for the ε-AdaBoost algorithm, in the process providing a new proof that ε-AdaBoost is margin maximizing as ε converges to 0. Because both of these known algorithms are computationally expensive, we introduce a new hybrid algorithm, AdaBoost+L1, that combines the virtues of AdaBoost with the sparsity of ℓ1 regularization in a computationally efficient fashion. We prove that the algorithm is margin maximizing and empirically examine its performance on five datasets.
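
As a concrete illustration of the small-step boosting variant discussed in the abstract, the following is a minimal sketch of ε-AdaBoost using decision stumps as weak learners: each round adds the stump with the largest weighted edge with a fixed small coefficient ε, followed by the usual exponential reweighting. The function names (eps_adaboost, best_stump, predict), the choice of stumps, and the default parameters are illustrative assumptions and are not taken from the paper.

import numpy as np

def best_stump(X, y, w):
    # Select the decision stump (feature j, threshold t, sign s) with the
    # largest weighted edge sum_i w_i * y_i * h(x_i).
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                h = s * np.where(X[:, j] <= t, 1, -1)
                edge = np.sum(w * y * h)
                if best is None or edge > best[0]:
                    best = (edge, j, t, s)
    return best[1:]

def eps_adaboost(X, y, eps=0.01, rounds=500):
    # Labels y are in {-1, +1}. Returns a list of (coefficient, stump) pairs
    # defining the composite classifier sign(sum coef * stump(x)).
    w = np.ones(len(y)) / len(y)
    ensemble = []
    for _ in range(rounds):
        j, t, s = best_stump(X, y, w)
        h = s * np.where(X[:, j] <= t, 1, -1)
        ensemble.append((eps, (j, t, s)))
        # Exponential reweighting with the fixed small step eps instead of
        # the usual AdaBoost step size.
        w = w * np.exp(-eps * y * h)
        w = w / w.sum()
    return ensemble

def predict(ensemble, X):
    f = np.zeros(len(X))
    for coef, (j, t, s) in ensemble:
        f += coef * s * np.where(X[:, j] <= t, 1, -1)
    return np.sign(f)

In this regime, shrinking ε (with correspondingly more rounds) is what the abstract refers to: on separable data the resulting composite classifier becomes margin maximizing as ε converges to 0.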



