Learning with Marginalized Corrupted Features

Laurens van der Maaten, Minmin Chen, Stephen Tyree, Kilian Weinberger
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):410-418, 2013.

Abstract

The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples – which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution – essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.
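To make the marginalization concrete: for the quadratic loss with blankout (dropout-style) corruption, in which each feature is independently set to zero with probability q, the corrupting distribution gives E[x̃_d] = (1 - q) x_d and Var[x̃_d] = q (1 - q) x_d^2, so the expected loss depends on the data only through these moments and its minimizer has a closed form. The sketch below illustrates this special case; it is not code from the paper, and the function name, the ridge term lam, and the NumPy implementation are assumptions for illustration.

import numpy as np

def mcf_quadratic_blankout(X, y, q=0.5, lam=1e-3):
    """Sketch: linear regression with the quadratic loss marginalized over
    blankout corruption (each feature zeroed independently with probability q).

    Minimizes  sum_n E[(y_n - w^T x_tilde_n)^2]  in closed form, using
      E[x_tilde]     = (1 - q) * x
      Var[x_tilde_d] = q * (1 - q) * x_d^2.
    A small ridge term lam keeps the linear system well-conditioned.
    """
    EX = (1.0 - q) * X                    # expected corrupted inputs, E[x_tilde]
    # sum_n E[x_tilde_n x_tilde_n^T]
    #   = sum_n E[x_tilde_n] E[x_tilde_n]^T + diag(sum_n Var[x_tilde_n])
    S = EX.T @ EX + np.diag((q * (1.0 - q) * X ** 2).sum(axis=0))
    b = EX.T @ y                          # sum_n E[x_tilde_n] * y_n
    return np.linalg.solve(S + lam * np.eye(X.shape[1]), b)

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
w = mcf_quadratic_blankout(X, y, q=0.3)

Because the expectation over corruptions is computed analytically, no corrupted copies of the data are ever materialized, which is the sense in which MCF trains on "infinitely many (corrupted) training examples."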

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-vandermaaten13,
  title     = {Learning with Marginalized Corrupted Features},
  author    = {van der Maaten, Laurens and Chen, Minmin and Tyree, Stephen and Weinberger, Kilian},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {410--418},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/vandermaaten13.pdf},
  url       = {https://proceedings.mlr.press/v28/vandermaaten13.html},
  abstract  = {The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples – which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution – essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.}
}
Endnote
%0 Conference Paper
%T Learning with Marginalized Corrupted Features
%A Laurens van der Maaten
%A Minmin Chen
%A Stephen Tyree
%A Kilian Weinberger
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-vandermaaten13
%I PMLR
%P 410--418
%U https://proceedings.mlr.press/v28/vandermaaten13.html
%V 28
%N 1
%X The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples – which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution – essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.
RIS
TY - CPAPER
TI - Learning with Marginalized Corrupted Features
AU - Laurens van der Maaten
AU - Minmin Chen
AU - Stephen Tyree
AU - Kilian Weinberger
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-vandermaaten13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 1
SP - 410
EP - 418
L1 - http://proceedings.mlr.press/v28/vandermaaten13.pdf
UR - https://proceedings.mlr.press/v28/vandermaaten13.html
AB - The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples – which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution – essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.
ER -
APA
van der Maaten, L., Chen, M., Tyree, S. & Weinberger, K. (2013). Learning with Marginalized Corrupted Features. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):410-418. Available from https://proceedings.mlr.press/v28/vandermaaten13.html.
