Half Transductive Ranking

Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Corinna Cortes, Mehryar Mohri
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:49-56, 2010.

Abstract

We study the standard retrieval task of ranking a fixed set of items given a previously unseen query and pose it as the half transductive ranking problem. The task is transductive as the set of items is fixed. Transductive representations (where the vector representation of each example is learned) allow the generation of highly nonlinear embeddings that capture object relationships without relying on a specific choice of features, and require only relatively simple optimization. Unfortunately, they have no direct out-of-sample extension. Inductive approaches on the other hand allow for the representation of unknown queries. We describe algorithms for this setting which have the advantages of both transductive and inductive approaches, and can be applied in unsupervised (either reconstruction-based or graph-based) and supervised ranking setups. We show empirically that our methods give strong performance on all three tasks.
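To make the half/half split concrete, here is a minimal sketch of one plausible instantiation of the idea described above: a learned linear map embeds query features inductively (so unseen queries can be handled), each item in the fixed corpus keeps its own free transductively learned embedding, relevance is a dot product, and both halves are trained with a pairwise margin ranking loss. The class name, dimensions, and the exact loss are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

class HalfTransductiveRanker:
    """Illustrative sketch (not the authors' exact model).
    Queries are embedded inductively via a learned linear map W, so previously
    unseen queries can be scored; each item in the fixed corpus keeps its own
    free, transductively learned embedding V[i]. Relevance is the dot product
    of the two embeddings, trained with a pairwise margin ranking loss."""

    def __init__(self, d_feat, d_emb, n_items, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((d_emb, d_feat))   # inductive query map
        self.V = 0.01 * rng.standard_normal((n_items, d_emb))  # free item embeddings

    def score(self, q_feat, i):
        """Relevance of item i for a query with feature vector q_feat."""
        return (self.W @ q_feat) @ self.V[i]

    def sgd_step(self, q_feat, pos, neg, lr=0.1, margin=1.0):
        """One pairwise update: push score(q, pos) above score(q, neg) by a margin."""
        loss = margin - self.score(q_feat, pos) + self.score(q_feat, neg)
        if loss <= 0:
            return 0.0
        q_emb = self.W @ q_feat
        # Hinge-loss gradients w.r.t. W and the two item embeddings.
        self.W += lr * np.outer(self.V[pos] - self.V[neg], q_feat)
        self.V[pos] += lr * q_emb
        self.V[neg] -= lr * q_emb
        return loss

# Toy usage on one labeled triple (query features, relevant item, irrelevant item).
model = HalfTransductiveRanker(d_feat=50, d_emb=10, n_items=100)
q = np.random.default_rng(1).standard_normal(50)
print(model.sgd_step(q, pos=3, neg=7))
```

At retrieval time only the inductive half is needed for the query; the item side is a fixed table of learned vectors that can be scored (or pre-indexed) directly.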

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-bai10a,
  title     = {Half Transductive Ranking},
  author    = {Bai, Bing and Weston, Jason and Grangier, David and Collobert, Ronan and Cortes, Corinna and Mohri, Mehryar},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {49--56},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/bai10a/bai10a.pdf},
  url       = {https://proceedings.mlr.press/v9/bai10a.html}
}
APA
Bai, B., Weston, J., Grangier, D., Collobert, R., Cortes, C. & Mohri, M. (2010). Half Transductive Ranking. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:49-56. Available from https://proceedings.mlr.press/v9/bai10a.html.