Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions

Yuyang Wang, Roni Khardon, Dmitry Pechyony, Rosie Jones
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:13.1-13.22, 2012.

Abstract

Efficient online learning with pairwise loss functions is a crucial component in building large-scale learning systems that maximize the area under the Receiver Operating Characteristic (ROC) curve. In this paper we investigate the generalization performance of online learning algorithms with pairwise loss functions. We show that the existing proof techniques for generalization bounds of online algorithms with a pointwise loss cannot be directly applied to pairwise losses. Using the Hoeffding-Azuma inequality and various proof techniques for risk bounds in batch learning, we derive data-dependent bounds for the average risk of the sequence of hypotheses generated by an arbitrary online learner in terms of an easily computable statistic, and show how to extract a low-risk hypothesis from the sequence. In addition, we analyze a natural extension of the perceptron algorithm for the bipartite ranking problem, providing a bound on the empirical pairwise loss. Combining these results, we obtain a complete risk analysis of the proposed algorithm.
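To illustrate the setting the abstract describes, the sketch below implements a simple online perceptron operating on pairwise losses for bipartite ranking: each incoming example is compared against previously seen examples of the opposite label, and the weight vector is updated on every misranked pair. This is a generic sketch of the pairwise-perceptron idea, not the exact algorithm analyzed in the paper; the function names and the all-pairs comparison are illustrative choices.

```python
import numpy as np

def pairwise_perceptron(stream, dim):
    """Online perceptron with a pairwise update (illustrative sketch).

    At each step the new point (x, y) with y in {+1, -1} is compared
    against all previously seen points of the opposite label; whenever a
    positive does not score strictly above a negative, a perceptron-style
    update is applied to the weight vector w.
    """
    w = np.zeros(dim)
    seen = []  # examples observed so far, as (x, y) pairs
    for x, y in stream:
        for xp, yp in seen:
            if y != yp:
                # The positive example should score higher than the negative.
                diff = (x - xp) if y == 1 else (xp - x)
                if w @ diff <= 0:   # pair misranked: perceptron step
                    w = w + diff
        seen.append((x, y))
    return w

def auc(w, xs, ys):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly,
    which is 1 minus the empirical 0-1 pairwise loss."""
    pos = [w @ x for x, y in zip(xs, ys) if y == 1]
    neg = [w @ x for x, y in zip(xs, ys) if y == -1]
    return sum(s > t for s in pos for t in neg) / (len(pos) * len(neg))
```

On linearly separable data the learned scorer orders every positive above every negative, so the empirical AUC reaches 1; the paper's results concern how the pairwise risk of such an online sequence of hypotheses relates to this empirical quantity.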

Cite this Paper


BibTeX
@InProceedings{pmlr-v23-wang12,
  title     = {Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions},
  author    = {Wang, Yuyang and Khardon, Roni and Pechyony, Dmitry and Jones, Rosie},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  pages     = {13.1--13.22},
  year      = {2012},
  editor    = {Mannor, Shie and Srebro, Nathan and Williamson, Robert C.},
  volume    = {23},
  series    = {Proceedings of Machine Learning Research},
  address   = {Edinburgh, Scotland},
  month     = {25--27 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v23/wang12/wang12.pdf},
  url       = {https://proceedings.mlr.press/v23/wang12.html},
  abstract  = {Efficient online learning with pairwise loss functions is a crucial component in building large-scale learning systems that maximize the area under the Receiver Operating Characteristic (ROC) curve. In this paper we investigate the generalization performance of online learning algorithms with pairwise loss functions. We show that the existing proof techniques for generalization bounds of online algorithms with a pointwise loss cannot be directly applied to pairwise losses. Using the Hoeffding-Azuma inequality and various proof techniques for risk bounds in batch learning, we derive data-dependent bounds for the average risk of the sequence of hypotheses generated by an arbitrary online learner in terms of an easily computable statistic, and show how to extract a low-risk hypothesis from the sequence. In addition, we analyze a natural extension of the perceptron algorithm for the bipartite ranking problem, providing a bound on the empirical pairwise loss. Combining these results, we obtain a complete risk analysis of the proposed algorithm.}
}
Endnote
%0 Conference Paper
%T Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions
%A Yuyang Wang
%A Roni Khardon
%A Dmitry Pechyony
%A Rosie Jones
%B Proceedings of the 25th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2012
%E Shie Mannor
%E Nathan Srebro
%E Robert C. Williamson
%F pmlr-v23-wang12
%I PMLR
%P 13.1--13.22
%U https://proceedings.mlr.press/v23/wang12.html
%V 23
%X Efficient online learning with pairwise loss functions is a crucial component in building large-scale learning systems that maximize the area under the Receiver Operating Characteristic (ROC) curve. In this paper we investigate the generalization performance of online learning algorithms with pairwise loss functions. We show that the existing proof techniques for generalization bounds of online algorithms with a pointwise loss cannot be directly applied to pairwise losses. Using the Hoeffding-Azuma inequality and various proof techniques for risk bounds in batch learning, we derive data-dependent bounds for the average risk of the sequence of hypotheses generated by an arbitrary online learner in terms of an easily computable statistic, and show how to extract a low-risk hypothesis from the sequence. In addition, we analyze a natural extension of the perceptron algorithm for the bipartite ranking problem, providing a bound on the empirical pairwise loss. Combining these results, we obtain a complete risk analysis of the proposed algorithm.
RIS
TY  - CPAPER
TI  - Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions
AU  - Yuyang Wang
AU  - Roni Khardon
AU  - Dmitry Pechyony
AU  - Rosie Jones
BT  - Proceedings of the 25th Annual Conference on Learning Theory
DA  - 2012/06/16
ED  - Shie Mannor
ED  - Nathan Srebro
ED  - Robert C. Williamson
ID  - pmlr-v23-wang12
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 23
SP  - 13.1
EP  - 13.22
L1  - http://proceedings.mlr.press/v23/wang12/wang12.pdf
UR  - https://proceedings.mlr.press/v23/wang12.html
AB  - Efficient online learning with pairwise loss functions is a crucial component in building large-scale learning systems that maximize the area under the Receiver Operating Characteristic (ROC) curve. In this paper we investigate the generalization performance of online learning algorithms with pairwise loss functions. We show that the existing proof techniques for generalization bounds of online algorithms with a pointwise loss cannot be directly applied to pairwise losses. Using the Hoeffding-Azuma inequality and various proof techniques for risk bounds in batch learning, we derive data-dependent bounds for the average risk of the sequence of hypotheses generated by an arbitrary online learner in terms of an easily computable statistic, and show how to extract a low-risk hypothesis from the sequence. In addition, we analyze a natural extension of the perceptron algorithm for the bipartite ranking problem, providing a bound on the empirical pairwise loss. Combining these results, we obtain a complete risk analysis of the proposed algorithm.
ER  -
APA
Wang, Y., Khardon, R., Pechyony, D. & Jones, R. (2012). Generalization Bounds for Online Learning Algorithms with Pairwise Loss Functions. Proceedings of the 25th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 23:13.1-13.22. Available from https://proceedings.mlr.press/v23/wang12.html.