Data dependent kernels in nearly-linear time

Guy Lever, Tom Diethe, John Shawe-Taylor
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:685-693, 2012.

Abstract

We propose a method to efficiently construct data-dependent kernels which can make use of large quantities of (unlabeled) data. Our method makes an approximation to the standard construction of semi-supervised kernels of Sindhwani et al. (2005). In typical cases these kernels can be computed in nearly-linear time (in the amount of data), improving on the cubic time of the standard construction and enabling large-scale semi-supervised learning in a variety of contexts. The methods are validated on semi-supervised and unsupervised problems on data sets containing up to 64,000 sample points.
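For orientation, the following is a minimal numpy sketch (not taken from this paper) of the standard data-dependent kernel construction of Sindhwani et al. (2005) that the abstract refers to. The base kernel k, the unlabeled sample X and the PSD deformation matrix M (typically a graph Laplacian built on X) are assumed inputs; the explicit n-by-n solve is the cubic-time step whose approximation is this paper's contribution.

import numpy as np

def deformed_kernel(k, X, M, x, z):
    """Standard semi-supervised kernel of Sindhwani et al. (2005):
        k~(x, z) = k(x, z) - k_x^T (I + M K)^{-1} M k_z,
    where K is the Gram matrix of the base kernel k on the sample X and
    M is a PSD deformation matrix (e.g. a graph Laplacian over X)."""
    n = len(X)
    K = np.array([[k(a, b) for b in X] for a in X])
    k_x = np.array([k(x, a) for a in X])
    k_z = np.array([k(z, a) for a in X])
    # Solving this dense n-by-n linear system costs O(n^3); this is the
    # step that the paper approximates to reach nearly-linear time.
    return k(x, z) - k_x @ np.linalg.solve(np.eye(n) + M @ K, M @ k_z)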

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-lever12,
  title     = {Data dependent kernels in nearly-linear time},
  author    = {Lever, Guy and Diethe, Tom and Shawe-Taylor, John},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {685--693},
  year      = {2012},
  editor    = {Lawrence, Neil D. and Girolami, Mark},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/lever12/lever12.pdf},
  url       = {https://proceedings.mlr.press/v22/lever12.html},
  abstract  = {We propose a method to efficiently construct data-dependent kernels which can make use of large quantities of (unlabeled) data. Our method makes an approximation to the standard construction of semi-supervised kernels of Sindhwani et al. (2005). In typical cases these kernels can be computed in nearly-linear time (in the amount of data), improving on the cubic time of the standard construction and enabling large-scale semi-supervised learning in a variety of contexts. The methods are validated on semi-supervised and unsupervised problems on data sets containing up to 64,000 sample points.}
}
APA
Lever, G., Diethe, T. & Shawe-Taylor, J. (2012). Data dependent kernels in nearly-linear time. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 22:685-693. Available from https://proceedings.mlr.press/v22/lever12.html.