Two-Stage Metric Learning

Jun Wang, Ke Sun, Fei Sha, Stéphane Marchand-Maillet, Alexandros Kalousis
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):370-378, 2014.

Abstract

In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. We then define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metrics with unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, the method can also be interpreted as a local metric learning algorithm with a well-defined distance approximation. We evaluate its performance on a number of datasets; it significantly outperforms other metric learning methods and SVM.
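The two stages described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact construction: the Gaussian similarity, softmax-style normalization, and bandwidth `sigma` are assumed choices, while the closed-form geodesic (Fisher information) distance between multinomial distributions, d(p, q) = 2·arccos(Σᵢ √(pᵢ qᵢ)), is a standard identity on the probability simplex.

```python
import numpy as np

def to_simplex(x, anchors, sigma=1.0):
    # Stage 1: map an input point x to a probability distribution over
    # a fixed set of anchor points, using a (normalized) Gaussian
    # similarity. sigma is an illustrative bandwidth choice.
    d2 = np.sum((anchors - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()

def fisher_distance(p, q):
    # Stage 2: closed-form Fisher information (geodesic) distance
    # between two multinomial distributions on the simplex:
    #   d(p, q) = 2 * arccos( sum_i sqrt(p_i * q_i) )
    # The clip guards against floating-point values slightly above 1.
    bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)
    return 2.0 * np.arccos(bc)

# Usage: distances between inputs are taken between their images on
# the simplex, so no positive semi-definiteness of the similarity
# measure is needed.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = to_simplex(np.array([0.1, 0.1]), anchors)
q = to_simplex(np.array([0.9, 0.0]), anchors)
d = fisher_distance(p, q)
```

Note that `fisher_distance` is a proper metric on the simplex (symmetric, zero only for identical distributions), which is what lets the construction induce a distance back in the input space.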

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-wangc14,
  title     = {Two-Stage Metric Learning},
  author    = {Wang, Jun and Sun, Ke and Sha, Fei and Marchand-Maillet, Stéphane and Kalousis, Alexandros},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {370--378},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bejing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/wangc14.pdf},
  url       = {https://proceedings.mlr.press/v32/wangc14.html},
  abstract  = {In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metric which presents unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, it can also be interpreted as a local metric learning algorithm with well defined distance approximation. We evaluate its performance on a number of datasets. It outperforms significantly other metric learning methods and SVM.}
}
Endnote
%0 Conference Paper
%T Two-Stage Metric Learning
%A Jun Wang
%A Ke Sun
%A Fei Sha
%A Stéphane Marchand-Maillet
%A Alexandros Kalousis
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-wangc14
%I PMLR
%P 370--378
%U https://proceedings.mlr.press/v32/wangc14.html
%V 32
%N 2
%X In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metric which presents unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, it can also be interpreted as a local metric learning algorithm with well defined distance approximation. We evaluate its performance on a number of datasets. It outperforms significantly other metric learning methods and SVM.
RIS
TY  - CPAPER
TI  - Two-Stage Metric Learning
AU  - Jun Wang
AU  - Ke Sun
AU  - Fei Sha
AU  - Stéphane Marchand-Maillet
AU  - Alexandros Kalousis
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-wangc14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 370
EP  - 378
L1  - http://proceedings.mlr.press/v32/wangc14.pdf
UR  - https://proceedings.mlr.press/v32/wangc14.html
AB  - In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metric which presents unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, it can also be interpreted as a local metric learning algorithm with well defined distance approximation. We evaluate its performance on a number of datasets. It outperforms significantly other metric learning methods and SVM.
ER  -
APA
Wang, J., Sun, K., Sha, F., Marchand-Maillet, S. & Kalousis, A. (2014). Two-Stage Metric Learning. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):370-378. Available from https://proceedings.mlr.press/v32/wangc14.html.
