Ranking by calibrated AdaBoost

R. Busa-Fekete, B. Kégl, T. Éltető & G. Szarvas; JMLR W&CP 14:37–48, 2011.

Abstract

This paper describes the ideas and methodologies that we used in the Yahoo Learning-to-Rank Challenge. Our technique is essentially pointwise, with a listwise touch at the final combination step. The main ingredients of our approach are (1) preprocessing (query-wise normalization), (2) multi-class AdaBoost.MH, (3) regression calibration, and (4) an exponentially weighted forecaster for model combination. In a post-challenge analysis we found that preprocessing and training AdaBoost with a wide variety of hyperparameters improved the individual models significantly and that the final listwise ensemble step was crucial, whereas calibration helped only in creating diversity.
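As a rough illustration of two of the ingredients named above, the sketch below shows query-wise feature normalization and an exponentially weighted combination of per-model scores. It is a minimal sketch under assumptions of our own: the function names, the choice of per-query z-score normalization, and the validation-loss-based weights are illustrative, not the exact recipe used by the authors in the challenge.

```python
import numpy as np

def querywise_normalize(X, query_ids):
    """Normalize each feature to zero mean / unit variance within each query group
    (one assumed variant of query-wise normalization)."""
    Xn = np.array(X, dtype=float)
    query_ids = np.asarray(query_ids)
    for q in np.unique(query_ids):
        mask = query_ids == q
        mu = Xn[mask].mean(axis=0)
        sigma = Xn[mask].std(axis=0)
        sigma[sigma == 0.0] = 1.0          # leave constant features unscaled
        Xn[mask] = (Xn[mask] - mu) / sigma
    return Xn

def exponentially_weighted_combination(model_scores, model_losses, eta=1.0):
    """Combine per-document scores of several models, weighting each model
    by exp(-eta * loss), i.e. an exponentially weighted forecaster over models."""
    losses = np.asarray(model_losses, dtype=float)
    w = np.exp(-eta * (losses - losses.min()))   # shift by the minimum for numerical stability
    w /= w.sum()
    return np.average(np.asarray(model_scores, dtype=float), axis=0, weights=w)

# Toy usage with hypothetical data: 4 documents from 2 queries, scored by 3 models.
X = np.random.rand(4, 5)
qids = np.array([1, 1, 2, 2])
Xn = querywise_normalize(X, qids)
scores = np.random.rand(3, 4)                     # one row of scores per model
combined = exponentially_weighted_combination(scores, model_losses=[0.31, 0.35, 0.30])
```

In this sketch the combination weights are static, computed once from each model's validation loss; a sequential forecaster would instead update the weights as losses accumulate.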
