A Theoretical Analysis of Metric Hypothesis Transfer Learning

Michaël Perrot, Amaury Habrard
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1708-1717, 2015.

Abstract

We consider the problem of transferring a priori knowledge in the context of supervised metric learning. While this setting has been successfully applied in several empirical contexts, no theoretical evidence exists to justify the approach. In this paper, we provide a theoretical justification based on the notion of algorithmic stability, adapted to the regularized metric learning setting. We propose an on-average-replace-two-stability model that allows us to prove fast generalization rates when an auxiliary source metric is used to bias the regularizer. Moreover, we prove a consistency result from which we show the benefit of considering biased weighted regularized formulations, and we provide a method to estimate the associated weight. We also present experiments illustrating the benefits of the approach on standard metric learning tasks and on a transfer learning problem where few labelled examples are available.
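To make the biased weighted regularization concrete, the following is a minimal sketch of the kind of objective the abstract describes; the pairwise loss ℓ, trade-off parameter λ, and Frobenius-norm regularizer are illustrative assumptions drawn from standard regularized metric learning, not notation taken from this page:

\[
\hat{M} \in \operatorname*{arg\,min}_{M \succeq 0} \; \frac{1}{n^2} \sum_{i,j=1}^{n} \ell(M, z_i, z_j) \;+\; \lambda \,\lVert M - \beta M_S \rVert_{\mathcal{F}}^2
\]

Here \(M_S\) is the auxiliary source metric that biases the regularizer and \(\beta\) is the associated weight whose estimation the paper addresses; taking \(\beta = 0\) recovers the usual unbiased regularization \(\lVert M \rVert_{\mathcal{F}}^2\).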

Cite this Paper

BibTeX
@InProceedings{pmlr-v37-perrot15,
  title     = {A Theoretical Analysis of Metric Hypothesis Transfer Learning},
  author    = {Perrot, Michaël and Habrard, Amaury},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1708--1717},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/perrot15.pdf},
  url       = {https://proceedings.mlr.press/v37/perrot15.html},
  abstract  = {We consider the problem of transferring some a priori knowledge in the context of supervised metric learning approaches. While this setting has been successfully applied in some empirical contexts, no theoretical evidence exists to justify this approach. In this paper, we provide a theoretical justification based on the notion of algorithmic stability adapted to the regularized metric learning setting. We propose an on-average-replace-two-stability model allowing us to prove fast generalization rates when an auxiliary source metric is used to bias the regularizer. Moreover, we prove a consistency result from which we show the interest of considering biased weighted regularized formulations and we provide a solution to estimate the associated weight. We also present some experiments illustrating the interest of the approach in standard metric learning tasks and in a transfer learning problem where few labelled data are available.}
}
Endnote
%0 Conference Paper
%T A Theoretical Analysis of Metric Hypothesis Transfer Learning
%A Michaël Perrot
%A Amaury Habrard
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-perrot15
%I PMLR
%P 1708--1717
%U https://proceedings.mlr.press/v37/perrot15.html
%V 37
%X We consider the problem of transferring some a priori knowledge in the context of supervised metric learning approaches. While this setting has been successfully applied in some empirical contexts, no theoretical evidence exists to justify this approach. In this paper, we provide a theoretical justification based on the notion of algorithmic stability adapted to the regularized metric learning setting. We propose an on-average-replace-two-stability model allowing us to prove fast generalization rates when an auxiliary source metric is used to bias the regularizer. Moreover, we prove a consistency result from which we show the interest of considering biased weighted regularized formulations and we provide a solution to estimate the associated weight. We also present some experiments illustrating the interest of the approach in standard metric learning tasks and in a transfer learning problem where few labelled data are available.
RIS
TY - CPAPER
TI - A Theoretical Analysis of Metric Hypothesis Transfer Learning
AU - Michaël Perrot
AU - Amaury Habrard
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-perrot15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 1708
EP - 1717
L1 - http://proceedings.mlr.press/v37/perrot15.pdf
UR - https://proceedings.mlr.press/v37/perrot15.html
AB - We consider the problem of transferring some a priori knowledge in the context of supervised metric learning approaches. While this setting has been successfully applied in some empirical contexts, no theoretical evidence exists to justify this approach. In this paper, we provide a theoretical justification based on the notion of algorithmic stability adapted to the regularized metric learning setting. We propose an on-average-replace-two-stability model allowing us to prove fast generalization rates when an auxiliary source metric is used to bias the regularizer. Moreover, we prove a consistency result from which we show the interest of considering biased weighted regularized formulations and we provide a solution to estimate the associated weight. We also present some experiments illustrating the interest of the approach in standard metric learning tasks and in a transfer learning problem where few labelled data are available.
ER -
APA
Perrot, M. & Habrard, A. (2015). A Theoretical Analysis of Metric Hypothesis Transfer Learning. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1708-1717. Available from https://proceedings.mlr.press/v37/perrot15.html.