Characterizing the Representer Theorem

Yaoliang Yu, Hao Cheng, Dale Schuurmans, Csaba Szepesvari
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):570-578, 2013.

Abstract

The representer theorem assures that kernel methods retain optimality under penalized empirical risk minimization. While a sufficient condition on the form of the regularizer guaranteeing the representer theorem has been known since the initial development of kernel methods, necessary conditions have only been investigated recently. In this paper we completely characterize the necessary and sufficient conditions on the regularizer that ensure the representer theorem holds. The results are surprisingly simple yet broaden the conditions where the representer theorem is known to hold. Extension to the matrix domain is also addressed.
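For context, a minimal sketch of the classical sufficient-condition form of the representer theorem follows. This background statement is standard and is not quoted from the abstract above; the notation ($\mathcal{H}$ for the reproducing kernel Hilbert space with kernel $k$, $L$ for an arbitrary loss on training points $(x_1, y_1), \dots, (x_n, y_n)$, and $\Omega$ for the regularizer) is assumed for illustration.

% Background sketch (standard statement, not taken from this paper's abstract).
If $\Omega$ is a nondecreasing function, then the penalized empirical risk minimization problem
\[
  \min_{f \in \mathcal{H}} \; L\bigl(f(x_1), \dots, f(x_n)\bigr) + \Omega\bigl(\|f\|_{\mathcal{H}}\bigr)
\]
admits (whenever it has any minimizer at all) a minimizer of the finite kernel-expansion form
\[
  \hat{f}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(x_i, \cdot), \qquad \alpha \in \mathbb{R}^{n}.
\]
This is the sufficiency direction known since the early development of kernel methods; the paper's contribution is to characterize exactly which regularizers guarantee such a finite representation, and to extend the analysis to the matrix domain.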

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-yu13,
  title     = {Characterizing the Representer Theorem},
  author    = {Yu, Yaoliang and Cheng, Hao and Schuurmans, Dale and Szepesvari, Csaba},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {570--578},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/yu13.pdf},
  url       = {https://proceedings.mlr.press/v28/yu13.html}
}
APA
Yu, Y., Cheng, H., Schuurmans, D., & Szepesvari, C. (2013). Characterizing the Representer Theorem. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):570-578. Available from https://proceedings.mlr.press/v28/yu13.html.
