Deep Learning of Representations for Unsupervised and Transfer Learning

Yoshua Bengio
Proceedings of ICML Workshop on Unsupervised and Transfer Learning, PMLR 27:17-36, 2012.

Abstract

Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution $P(x)$ is structurally related to some task of interest, say predicting $P(y|x)$. This paper focuses on the context of the Unsupervised and Transfer Learning Challenge, on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.
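As a concrete illustration of the pre-training-then-transfer recipe summarized above, the following is a minimal NumPy sketch (not the paper's experimental setup; the synthetic data, layer sizes, and hyper-parameters are illustrative assumptions): a single autoencoder layer is first trained on unlabelled data from the source distribution, and its encoder is then reused as the fixed representation for a softmax classifier trained on a small labelled target set.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes, lr = 20, 10, 3, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: unsupervised pre-training of one autoencoder layer on unlabelled data.
X_src = rng.normal(size=(500, n_in))          # unlabelled "source" examples (synthetic)
W = rng.normal(scale=0.1, size=(n_in, n_hidden))
b, c = np.zeros(n_hidden), np.zeros(n_in)
n = X_src.shape[0]
for _ in range(200):
    h = sigmoid(X_src @ W + b)                # learned representation (encoder)
    x_hat = h @ W.T + c                       # tied-weight linear decoder
    err = (x_hat - X_src) / n                 # reconstruction error, averaged over the batch
    dh = (err @ W) * h * (1.0 - h)            # back-propagation through the encoder
    W -= lr * (err.T @ h + X_src.T @ dh)      # decoder + encoder gradient contributions
    b -= lr * dh.sum(axis=0)
    c -= lr * err.sum(axis=0)

# Stage 2: transfer -- reuse the pre-trained encoder as fixed features for a
# softmax classifier trained on a small labelled "target" set.
X_tgt = rng.normal(size=(50, n_in))
y_tgt = rng.integers(n_classes, size=50)
V = rng.normal(scale=0.1, size=(n_hidden, n_classes))
d = np.zeros(n_classes)
for _ in range(200):
    h = sigmoid(X_tgt @ W + b)                # pre-trained representation, reused
    logits = h @ V + d
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # softmax probabilities
    grad = p.copy()
    grad[np.arange(len(y_tgt)), y_tgt] -= 1.0 # gradient of cross-entropy w.r.t. logits
    grad /= len(y_tgt)
    V -= lr * h.T @ grad
    d -= lr * grad.sum(axis=0)
print("training accuracy on the target task:", (p.argmax(axis=1) == y_tgt).mean())

In the paper's setting the pre-trained layers would typically also be fine-tuned (and stacked several layers deep); only the top classifier is trained here to keep the sketch short.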

Cite this Paper


BibTeX
@InProceedings{pmlr-v27-bengio12a,
  title     = {Deep Learning of Representations for Unsupervised and Transfer Learning},
  author    = {Bengio, Yoshua},
  booktitle = {Proceedings of ICML Workshop on Unsupervised and Transfer Learning},
  pages     = {17--36},
  year      = {2012},
  editor    = {Guyon, Isabelle and Dror, Gideon and Lemaire, Vincent and Taylor, Graham and Silver, Daniel},
  volume    = {27},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bellevue, Washington, USA},
  month     = {02 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v27/bengio12a/bengio12a.pdf},
  url       = {https://proceedings.mlr.press/v27/bengio12a.html}
}
EndNote
%0 Conference Paper
%T Deep Learning of Representations for Unsupervised and Transfer Learning
%A Yoshua Bengio
%B Proceedings of ICML Workshop on Unsupervised and Transfer Learning
%C Proceedings of Machine Learning Research
%D 2012
%E Isabelle Guyon
%E Gideon Dror
%E Vincent Lemaire
%E Graham Taylor
%E Daniel Silver
%F pmlr-v27-bengio12a
%I PMLR
%P 17--36
%U https://proceedings.mlr.press/v27/bengio12a.html
%V 27
RIS
TY - CPAPER
TI - Deep Learning of Representations for Unsupervised and Transfer Learning
AU - Yoshua Bengio
BT - Proceedings of ICML Workshop on Unsupervised and Transfer Learning
DA - 2012/06/27
ED - Isabelle Guyon
ED - Gideon Dror
ED - Vincent Lemaire
ED - Graham Taylor
ED - Daniel Silver
ID - pmlr-v27-bengio12a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 27
SP - 17
EP - 36
L1 - http://proceedings.mlr.press/v27/bengio12a/bengio12a.pdf
UR - https://proceedings.mlr.press/v27/bengio12a.html
ER -
APA
Bengio, Y. (2012). Deep Learning of Representations for Unsupervised and Transfer Learning. Proceedings of ICML Workshop on Unsupervised and Transfer Learning, in Proceedings of Machine Learning Research 27:17-36. Available from https://proceedings.mlr.press/v27/bengio12a.html.