Marginalized Denoising Auto-encoders for Nonlinear Representations

Minmin Chen, Kilian Weinberger, Fei Sha, Yoshua Bengio
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1476-1484, 2014.

Abstract

Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partially corrupted inputs generated by a pre-specified corrupting distribution. This process learns robust representations, though at the expense of many training epochs in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and is therefore able to match or outperform the DAE with far fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains a 1-2 order-of-magnitude speedup in training time over other competing approaches.
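To make the marginalization concrete, the following is a minimal sketch (not quoted from the paper) of the underlying idea, assuming an encoder f, a decoder g, a reconstruction loss \ell, and a corrupting distribution p(\tilde{\mathbf{x}} \mid \mathbf{x}) with mean \boldsymbol{\mu}_{\mathbf{x}} and covariance \boldsymbol{\Sigma}_{\mathbf{x}}; expanding the loss to second order in \tilde{\mathbf{x}} around \boldsymbol{\mu}_{\mathbf{x}} and taking the expectation (the first-order term vanishes because \mathbb{E}[\tilde{\mathbf{x}}] = \boldsymbol{\mu}_{\mathbf{x}}) gives

\[
\mathbb{E}_{p(\tilde{\mathbf{x}} \mid \mathbf{x})}\!\left[ \ell\big(\mathbf{x}, g(f(\tilde{\mathbf{x}}))\big) \right]
\;\approx\;
\ell\big(\mathbf{x}, g(f(\boldsymbol{\mu}_{\mathbf{x}}))\big)
\;+\;
\tfrac{1}{2}\,\operatorname{tr}\!\left( \boldsymbol{\Sigma}_{\mathbf{x}} \,
\frac{\partial^{2} \ell}{\partial \tilde{\mathbf{x}}\, \partial \tilde{\mathbf{x}}^{\top}} \bigg|_{\tilde{\mathbf{x}} = \boldsymbol{\mu}_{\mathbf{x}}} \right).
\]

The first term is the loss of an ordinary auto-encoder evaluated at the mean (for unbiased corruptions, the clean) input, while the trace term penalizes sensitivity of the loss along the corrupted directions; this is the sense in which the mDAE can be read as a classic auto-encoder with a special form of regularization. In practice the Hessian in the trace term would need further approximation to keep training tractable.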

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-cheng14,
  title     = {Marginalized Denoising Auto-encoders for Nonlinear Representations},
  author    = {Chen, Minmin and Weinberger, Kilian and Sha, Fei and Bengio, Yoshua},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1476--1484},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/cheng14.pdf},
  url       = {https://proceedings.mlr.press/v32/cheng14.html},
  abstract  = {Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution. This process learns robust representation, though at the expense of requiring many training epochs, in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and therefore is able to match or outperform the DAE with much fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains 1-2 order-of-magnitude speedup in training time over other competing approaches.}
}
Endnote
%0 Conference Paper
%T Marginalized Denoising Auto-encoders for Nonlinear Representations
%A Minmin Chen
%A Kilian Weinberger
%A Fei Sha
%A Yoshua Bengio
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-cheng14
%I PMLR
%P 1476--1484
%U https://proceedings.mlr.press/v32/cheng14.html
%V 32
%N 2
%X Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution. This process learns robust representation, though at the expense of requiring many training epochs, in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and therefore is able to match or outperform the DAE with much fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains 1-2 order-of-magnitude speedup in training time over other competing approaches.
RIS
TY - CPAPER
TI - Marginalized Denoising Auto-encoders for Nonlinear Representations
AU - Minmin Chen
AU - Kilian Weinberger
AU - Fei Sha
AU - Yoshua Bengio
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-cheng14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 1476
EP - 1484
L1 - http://proceedings.mlr.press/v32/cheng14.pdf
UR - https://proceedings.mlr.press/v32/cheng14.html
AB - Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution. This process learns robust representation, though at the expense of requiring many training epochs, in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and therefore is able to match or outperform the DAE with much fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains 1-2 order-of-magnitude speedup in training time over other competing approaches.
ER -
APA
Chen, M., Weinberger, K., Sha, F. & Bengio, Y. (2014). Marginalized Denoising Auto-encoders for Nonlinear Representations. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1476-1484. Available from https://proceedings.mlr.press/v32/cheng14.html.
