Learning Optimally Sparse Support Vector Machines

Andrew Cotter, Shai Shalev-Shwartz, Nati Srebro
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):266-274, 2013.

Abstract

We show how to train SVMs with an optimal guarantee on the number of support vectors (up to constants), and with sample complexity and training runtime bounds matching the best known for kernel SVM optimization (i.e. without any additional asymptotic cost beyond standard SVM training). Our method is simple to implement and works well in practice.
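The abstract only states the guarantee; the paper's own algorithm is not reproduced on this page. As a purely illustrative sketch (not the authors' method; the dataset, hyperparameters gamma, C and the support-vector budget are all hypothetical assumptions), the following Python snippet shows the quantity at stake: it trains a standard kernel SVM with scikit-learn, counts its support vectors, and then greedily re-approximates its decision function with a small, fixed budget of training points using a matching-pursuit-style heuristic.

# Illustrative sketch only -- this is NOT the algorithm from the paper.
# It trains an ordinary kernel SVM, reports how many support vectors it uses,
# and then greedily re-approximates its decision function with a small, fixed
# budget of training points (a matching-pursuit-style heuristic).
# The dataset, hyperparameters (gamma, C) and the budget are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
gamma, C, budget = 0.1, 1.0, 25

svm = SVC(kernel="rbf", gamma=gamma, C=C).fit(X, y)
print("support vectors in the standard SVM:", svm.n_support_.sum())

K = rbf_kernel(X, X, gamma=gamma)      # n x n kernel matrix
target = svm.decision_function(X)      # decision values we want to mimic
residual = target.copy()
coef = {}                              # index of chosen point -> coefficient
for _ in range(budget):
    corr = K @ residual                # correlation of each kernel column with the residual
    j = int(np.argmax(np.abs(corr)))   # greedily pick the best-matching training point
    step = corr[j] / (K[:, j] @ K[:, j])   # least-squares step along that column
    coef[j] = coef.get(j, 0.0) + step
    residual = residual - step * K[:, j]

sparse_scores = sum(a * K[:, j] for j, a in coef.items())
agreement = np.mean(np.sign(sparse_scores) == np.sign(target))
print(f"sparse predictor with {len(coef)} support vectors matches the SVM's "
      f"sign on {agreement:.1%} of the training points")

The paper's actual result concerns how few support vectors are needed to achieve a given excess error, together with matching sample-complexity and runtime bounds; the greedy post-processing above is only meant to make the notion of a sparse kernel expansion concrete.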

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-cotter13,
  title     = {Learning Optimally Sparse Support Vector Machines},
  author    = {Cotter, Andrew and Shalev-Shwartz, Shai and Srebro, Nati},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {266--274},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/cotter13.pdf},
  url       = {https://proceedings.mlr.press/v28/cotter13.html},
  abstract  = {We show how to train SVMs with an optimal guarantee on the number of support vectors (up to constants), and with sample complexity and training runtime bounds matching the best known for kernel SVM optimization (i.e. without any additional asymptotic cost beyond standard SVM training). Our method is simple to implement and works well in practice.}
}
Endnote
%0 Conference Paper
%T Learning Optimally Sparse Support Vector Machines
%A Andrew Cotter
%A Shai Shalev-Shwartz
%A Nati Srebro
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-cotter13
%I PMLR
%P 266--274
%U https://proceedings.mlr.press/v28/cotter13.html
%V 28
%N 1
%X We show how to train SVMs with an optimal guarantee on the number of support vectors (up to constants), and with sample complexity and training runtime bounds matching the best known for kernel SVM optimization (i.e. without any additional asymptotic cost beyond standard SVM training). Our method is simple to implement and works well in practice.
RIS
TY  - CPAPER
TI  - Learning Optimally Sparse Support Vector Machines
AU  - Andrew Cotter
AU  - Shai Shalev-Shwartz
AU  - Nati Srebro
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-cotter13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 1
SP  - 266
EP  - 274
L1  - http://proceedings.mlr.press/v28/cotter13.pdf
UR  - https://proceedings.mlr.press/v28/cotter13.html
AB  - We show how to train SVMs with an optimal guarantee on the number of support vectors (up to constants), and with sample complexity and training runtime bounds matching the best known for kernel SVM optimization (i.e. without any additional asymptotic cost beyond standard SVM training). Our method is simple to implement and works well in practice.
ER  -
APA
Cotter, A., Shalev-Shwartz, S. & Srebro, N. (2013). Learning Optimally Sparse Support Vector Machines. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):266-274. Available from https://proceedings.mlr.press/v28/cotter13.html.