Improved Bounds for Online Learning Over the Permutahedron and Other Ranking Polytopes

Nir Ailon
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:29-37, 2014.

Abstract

Consider the following game: there is a fixed set V of n items. At each step an adversary chooses a score function s_t: V \to [0,1], the learner outputs a ranking of V, and then s_t is revealed. The learner's loss is the sum over v ∈ V of s_t(v) times v's position (0th, 1st, 2nd, ...) in the ranking. This problem captures, for example, online systems that iteratively present ranked lists of items to users, who then respond by choosing one (or more) sought items. The loss measures the users' burden, which increases the further the sought items are from the top. It also captures a version of online rank aggregation. We present an algorithm with expected regret O(n\sqrt{OPT} + n^2), where OPT is the loss of the best (single) ranking in hindsight. This improves on the previously best known algorithm of Suehiro et al. (2012), saving a factor of Ω(\sqrt{\log n}). We also reduce the per-step running time from O(n^2) to O(n \log n). We provide matching lower bounds.
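To make the loss concrete, the following is a minimal sketch in Python (the function name step_loss and the toy scores are illustrative, not from the paper) of the per-step loss defined above: the learner pays s_t(v) times the zero-based position of v in its ranking, summed over all items.

# Per-step loss from the abstract (not the paper's algorithm):
# loss = sum over items v of s_t(v) * (zero-based position of v in the ranking).

def step_loss(ranking, scores):
    """ranking: list of items from top (position 0) to bottom.
    scores: dict mapping each item to its score s_t(v) in [0, 1]."""
    return sum(pos * scores[item] for pos, item in enumerate(ranking))

# Toy example with V = {a, b, c}: placing the high-score item near the top is cheap,
# placing it at the bottom is costly.
scores = {"a": 1.0, "b": 0.2, "c": 0.0}
print(step_loss(["a", "b", "c"], scores))  # 0*1.0 + 1*0.2 + 2*0.0 = 0.2
print(step_loss(["c", "b", "a"], scores))  # 0*0.0 + 1*0.2 + 2*1.0 = 2.2

The regret bound in the abstract compares the learner's cumulative step_loss over all rounds to that of the single best ranking in hindsight (OPT).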

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-ailon14,
  title     = {{Improved Bounds for Online Learning Over the Permutahedron and Other Ranking Polytopes}},
  author    = {Ailon, Nir},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {29--37},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/ailon14.pdf},
  url       = {https://proceedings.mlr.press/v33/ailon14.html},
  abstract  = {Consider the following game: there is a fixed set V of n items. At each step an adversary chooses a score function s_t: V \to [0,1], the learner outputs a ranking of V, and then s_t is revealed. The learner's loss is the sum over v ∈ V of s_t(v) times v's position (0th, 1st, 2nd, ...) in the ranking. This problem captures, for example, online systems that iteratively present ranked lists of items to users, who then respond by choosing one (or more) sought items. The loss measures the users' burden, which increases the further the sought items are from the top. It also captures a version of online rank aggregation. We present an algorithm with expected regret O(n\sqrt{OPT} + n^2), where OPT is the loss of the best (single) ranking in hindsight. This improves on the previously best known algorithm of Suehiro et al. (2012), saving a factor of Ω(\sqrt{\log n}). We also reduce the per-step running time from O(n^2) to O(n \log n). We provide matching lower bounds.}
}
Endnote
%0 Conference Paper
%T Improved Bounds for Online Learning Over the Permutahedron and Other Ranking Polytopes
%A Nir Ailon
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-ailon14
%I PMLR
%P 29--37
%U https://proceedings.mlr.press/v33/ailon14.html
%V 33
%X Consider the following game: there is a fixed set V of n items. At each step an adversary chooses a score function s_t: V \to [0,1], the learner outputs a ranking of V, and then s_t is revealed. The learner's loss is the sum over v ∈ V of s_t(v) times v's position (0th, 1st, 2nd, ...) in the ranking. This problem captures, for example, online systems that iteratively present ranked lists of items to users, who then respond by choosing one (or more) sought items. The loss measures the users' burden, which increases the further the sought items are from the top. It also captures a version of online rank aggregation. We present an algorithm with expected regret O(n\sqrt{OPT} + n^2), where OPT is the loss of the best (single) ranking in hindsight. This improves on the previously best known algorithm of Suehiro et al. (2012), saving a factor of Ω(\sqrt{\log n}). We also reduce the per-step running time from O(n^2) to O(n \log n). We provide matching lower bounds.
RIS
TY - CPAPER
TI - Improved Bounds for Online Learning Over the Permutahedron and Other Ranking Polytopes
AU - Nir Ailon
BT - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
DA - 2014/04/02
ED - Samuel Kaski
ED - Jukka Corander
ID - pmlr-v33-ailon14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 33
SP - 29
EP - 37
L1 - http://proceedings.mlr.press/v33/ailon14.pdf
UR - https://proceedings.mlr.press/v33/ailon14.html
AB - Consider the following game: there is a fixed set V of n items. At each step an adversary chooses a score function s_t: V \to [0,1], the learner outputs a ranking of V, and then s_t is revealed. The learner's loss is the sum over v ∈ V of s_t(v) times v's position (0th, 1st, 2nd, ...) in the ranking. This problem captures, for example, online systems that iteratively present ranked lists of items to users, who then respond by choosing one (or more) sought items. The loss measures the users' burden, which increases the further the sought items are from the top. It also captures a version of online rank aggregation. We present an algorithm with expected regret O(n\sqrt{OPT} + n^2), where OPT is the loss of the best (single) ranking in hindsight. This improves on the previously best known algorithm of Suehiro et al. (2012), saving a factor of Ω(\sqrt{\log n}). We also reduce the per-step running time from O(n^2) to O(n \log n). We provide matching lower bounds.
ER -
APA
Ailon, N. (2014). Improved Bounds for Online Learning Over the Permutahedron and Other Ranking Polytopes. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:29-37. Available from https://proceedings.mlr.press/v33/ailon14.html.