Domain Generalization via Invariant Feature Representation

Krikamol Muandet, David Balduzzi, Bernhard Schölkopf
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):10-18, 2013.

Abstract

This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
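The core idea in the abstract — learn a transformation that minimizes dissimilarity across domains while preserving the input–output relationship — can be illustrated with a much-simplified *linear* analogue. The sketch below is not the paper's kernel-based DICA (which works in an RKHS and solves a generalized eigenproblem over kernel matrices); it merely finds directions that keep overall variance high while shrinking the spread of per-domain means. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def invariant_projection(domains, dim, eps=1e-3):
    """Simplified linear analogue of the DICA idea (sketch only).

    domains : list of (n_k, d) arrays, one per training domain
    dim     : number of invariant directions to return
    eps     : regularizer keeping the dissimilarity matrix invertible
    """
    X = np.vstack(domains)
    d = X.shape[1]

    # Total covariance: directions we want to preserve.
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(X)

    # Between-domain covariance of the domain means: a crude stand-in
    # for the paper's distributional variance, which we want to shrink.
    means = np.stack([dom.mean(axis=0) for dom in domains])
    Mc = means - means.mean(axis=0)
    D = Mc.T @ Mc / len(domains) + eps * np.eye(d)

    # Maximize w'Cw / w'Dw via the symmetric generalized eigenproblem:
    # whiten by D^{-1/2}, then take the top eigenvectors.
    w, U = np.linalg.eigh(D)
    D_inv_half = U @ np.diag(w ** -0.5) @ U.T
    _, V = np.linalg.eigh(D_inv_half @ C @ D_inv_half)
    return D_inv_half @ V[:, -dim:]   # (d, dim) projection matrix
```

On synthetic data where one coordinate's mean shifts across domains and another is stable, the returned direction concentrates on the stable coordinate — the "invariant feature" in this toy setting.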

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-muandet13,
  title     = {Domain Generalization via Invariant Feature Representation},
  author    = {Muandet, Krikamol and Balduzzi, David and Schölkopf, Bernhard},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {10--18},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/muandet13.pdf},
  url       = {https://proceedings.mlr.press/v28/muandet13.html},
  abstract  = {This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.}
}
Endnote
%0 Conference Paper
%T Domain Generalization via Invariant Feature Representation
%A Krikamol Muandet
%A David Balduzzi
%A Bernhard Schölkopf
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-muandet13
%I PMLR
%P 10--18
%U https://proceedings.mlr.press/v28/muandet13.html
%V 28
%N 1
%X This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
RIS
TY - CPAPER
TI - Domain Generalization via Invariant Feature Representation
AU - Krikamol Muandet
AU - David Balduzzi
AU - Bernhard Schölkopf
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-muandet13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 1
SP - 10
EP - 18
L1 - http://proceedings.mlr.press/v28/muandet13.pdf
UR - https://proceedings.mlr.press/v28/muandet13.html
AB - This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
ER -
APA
Muandet, K., Balduzzi, D. & Schölkopf, B. (2013). Domain Generalization via Invariant Feature Representation. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):10-18. Available from https://proceedings.mlr.press/v28/muandet13.html.