Stability and Hypothesis Transfer Learning

Ilja Kuzborskij, Francesco Orabona
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):942-950, 2013.

Abstract

We consider the transfer learning scenario, where the learner does not have access to the source domain directly, but rather operates on the basis of hypotheses induced from it – the Hypothesis Transfer Learning (HTL) problem. In particular, we conduct a theoretical analysis of HTL by considering the algorithmic stability of a class of HTL algorithms based on Regularized Least Squares with biased regularization. We show that the relatedness of source and target domains accelerates the convergence of the Leave-One-Out error to the generalization error, thus enabling the use of the Leave-One-Out error to find the optimal transfer parameters, even in the presence of a small training set. In the case of unrelated domains, we also suggest a theoretically principled way to prevent negative transfer, so that in the limit we recover the performance of the algorithm that uses no knowledge from the source domain.
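
For intuition, the algorithms analyzed are variants of Regularized Least Squares (RLS) in which the usual penalty λ‖w‖² is replaced by the biased penalty λ‖w − w′‖², where w′ is the hypothesis transferred from the source domain; setting w′ = 0 recovers plain RLS, the no-transfer fallback. Below is a minimal sketch of such an estimator, together with a naive Leave-One-Out selection of the transfer scaling (illustrative code, not from the paper; the function names, candidate grid, and synthetic data are assumptions):

import numpy as np

def rls_biased(X, y, w_src, lam):
    # RLS with biased regularization:
    # minimize ||X w - y||^2 + lam * ||w - w_src||^2.
    # Setting the gradient to zero gives the closed form
    # (X^T X + lam I) w = X^T y + lam * w_src.
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_src
    return np.linalg.solve(A, b)

def loo_error(X, y, w_src, lam):
    # Naive leave-one-out squared error, refitting n times.
    # The paper's result is that this estimate converges to the
    # generalization error faster when source and target are related.
    n = X.shape[0]
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        w = rls_biased(X[mask], y[mask], w_src, lam)
        errs.append((X[i] @ w - y[i]) ** 2)
    return float(np.mean(errs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    w_true = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
    y = X @ w_true + 0.1 * rng.normal(size=30)
    w_src = w_true + 0.2  # a (related) source hypothesis
    lam = 1.0
    # Choose the transfer scaling beta by minimizing the LOO error;
    # beta = 0 discards the source hypothesis, which is the guard
    # against negative transfer when the domains are unrelated.
    best_err, best_beta = min(
        (loo_error(X, y, beta * w_src, lam), beta)
        for beta in (0.0, 0.25, 0.5, 1.0))
    print(best_beta, best_err)

The point of the paper's stability analysis is that this LOO-based choice remains reliable even for small training sets when the source hypothesis is close to the target task.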

Cite this Paper

BibTeX
@InProceedings{pmlr-v28-kuzborskij13,
  title     = {Stability and Hypothesis Transfer Learning},
  author    = {Kuzborskij, Ilja and Orabona, Francesco},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {942--950},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/kuzborskij13.pdf},
  url       = {https://proceedings.mlr.press/v28/kuzborskij13.html}
}
APA
Kuzborskij, I. & Orabona, F. (2013). Stability and Hypothesis Transfer Learning. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):942-950. Available from https://proceedings.mlr.press/v28/kuzborskij13.html.
