Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees

Jean Honorio, Tommi Jaakkola
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:384-392, 2014.

Abstract

We analyze the expected risk of linear classifiers for a fixed weight vector in the “minimax” setting. That is, we analyze the worst-case risk among all data distributions with a given mean and covariance. We provide a simpler proof of the tight polynomial-tail bound for general random variables. For sub-Gaussian random variables, we derive a novel tight exponential-tail bound. We also provide new PAC-Bayes finite-sample guarantees when training data is available. Our “minimax” generalization bounds are dimensionality-independent and $\mathcal{O}(\sqrt{1/m})$ for $m$ samples.
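For orientation, the tight polynomial-tail bound for a fixed weight vector $w$ coincides with the classical one-sided Chebyshev (Cantelli) bound on the worst-case risk; the display below is a paraphrase of that classical statement for illustration, not the paper's exact formulation:

% Worst-case misclassification risk of a fixed linear classifier w, taken over
% all distributions D with mean \mu and covariance \Sigma (one-sided Chebyshev /
% Cantelli form; an illustrative paraphrase, not the paper's exact statement).
\[
  \sup_{D:\ \mathbb{E}[x]=\mu,\ \mathrm{Cov}[x]=\Sigma}
    \Pr_{x \sim D}\!\bigl[\, w^\top x \le 0 \,\bigr]
  \;=\; \frac{w^\top \Sigma w}{\,w^\top \Sigma w + (w^\top \mu)^2\,}
  \qquad \text{whenever } w^\top \mu > 0.
\]

The supremum is attained by a two-point distribution, which is why the bound is tight; the right-hand side decays only polynomially in the margin $w^\top \mu / \sqrt{w^\top \Sigma w}$, hence “polynomial-tail,” in contrast to the exponential-tail bound the paper derives under a sub-Gaussian assumption.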

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-honorio14,
  title     = {{Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees}},
  author    = {Honorio, Jean and Jaakkola, Tommi},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {384--392},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/honorio14.pdf},
  url       = {https://proceedings.mlr.press/v33/honorio14.html}
}
APA
Honorio, J. & Jaakkola, T. (2014). Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:384-392. Available from https://proceedings.mlr.press/v33/honorio14.html.