Sharp Generalization Error Bounds for Randomly-projected Classifiers

Robert Durrant, Ata Kaban
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):693-701, 2013.

Abstract

We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’ and we therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.
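The central quantity in the abstract is the probability that a Gaussian random projection flips the predicted label, i.e. the sign of the classifier's decision value. The paper derives this probability exactly; as a rough empirical illustration only (not the paper's derivation), the sketch below estimates it by Monte Carlo. The function name and parameters are invented for this example. For a projection to k = 1 dimension, the estimate should approach the classical result that two vectors at angle θ are separated by a random hyperplane with probability θ/π.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_probability(h, x, k, n_trials=200_000, rng=rng):
    """Monte Carlo estimate of the 'label flipping' probability:
    P( sign((Rh)^T (Rx)) != sign(h^T x) ) for a Gaussian random
    projection R with k rows, where h is a linear classifier's
    weight vector and x is a data point."""
    d = len(h)
    # One i.i.d. N(0, 1) projection matrix per trial: shape (n_trials, k, d).
    R = rng.standard_normal((n_trials, k, d))
    Rh = R @ h  # projected weight vectors, shape (n_trials, k)
    Rx = R @ x  # projected data points, shape (n_trials, k)
    # Decision value after projection, one per trial.
    projected = np.einsum('ij,ij->i', Rh, Rx)
    return np.mean(np.sign(projected) != np.sign(h @ x))

# Two unit vectors at angle pi/3; for k = 1 the flip probability
# should be close to (pi/3)/pi = 1/3.
h = np.array([1.0, 0.0])
x = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])
print(flip_probability(h, x, k=1))
```

Increasing k makes flips rarer, which is why the projection dimension appears in the paper's bounds.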

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-durrant13,
  title     = {Sharp Generalization Error Bounds for Randomly-projected Classifiers},
  author    = {Durrant, Robert and Kaban, Ata},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {693--701},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/durrant13.pdf},
  url       = {https://proceedings.mlr.press/v28/durrant13.html},
  abstract  = {We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’ and we therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.}
}
Endnote
%0 Conference Paper
%T Sharp Generalization Error Bounds for Randomly-projected Classifiers
%A Robert Durrant
%A Ata Kaban
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-durrant13
%I PMLR
%P 693--701
%U https://proceedings.mlr.press/v28/durrant13.html
%V 28
%N 3
%X We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’ and we therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.
RIS
TY  - CPAPER
TI  - Sharp Generalization Error Bounds for Randomly-projected Classifiers
AU  - Robert Durrant
AU  - Ata Kaban
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-durrant13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 693
EP  - 701
L1  - http://proceedings.mlr.press/v28/durrant13.pdf
UR  - https://proceedings.mlr.press/v28/durrant13.html
AB  - We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’ and we therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.
ER  -
APA
Durrant, R., & Kaban, A. (2013). Sharp Generalization Error Bounds for Randomly-projected Classifiers. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):693-701. Available from https://proceedings.mlr.press/v28/durrant13.html.