Asymmetric Multi-task Learning Based on Task Relatedness and Loss

Giwoong Lee, Eunho Yang, Sung Hwang
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:230-238, 2016.

Abstract

We propose a novel multi-task learning method that minimizes the effect of negative transfer by allowing asymmetric transfer between tasks based on task relatedness as well as the magnitude of the individual task losses; we refer to this method as Asymmetric Multi-task Learning (AMTL). To tackle this problem, we couple the tasks via a sparse, directed regularization graph that enforces each task's parameters to be reconstructed as a sparse combination of the parameters of other tasks, where those tasks are selected based on their task-wise losses. We present two algorithms for this joint learning of the task predictors and the regularization graph. The first solves the original learning objective using alternating optimization, and the second solves an approximation of it using a curriculum learning strategy that learns one task at a time. We perform experiments on multiple classification and regression datasets, on which we obtain significant performance improvements over single-task learning and symmetric multi-task learning baselines.
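
To make the setup concrete, below is a minimal sketch in Python of the kind of objective and alternating optimization the abstract describes. This is our illustration, not the authors' code: the function names, the exact penalty form, and all hyperparameters are assumptions. Each task's weight vector (a column of W) is softly reconstructed as a sparse combination of the other tasks' weights through a directed transfer matrix B, and the l1 penalty on each task's outgoing transfer is scaled by that task's current loss, so poorly-performing tasks transfer out less.

import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amtl_sketch(Xs, ys, mu=0.1, lam=0.1, lr=1e-2, n_iters=200):
    # Illustrative objective (not necessarily the paper's exact form):
    #   sum_t (1 + mu * ||B[t, :]||_1) * L_t(w_t)  +  lam * ||W - W @ B||_F^2
    # where L_t is task t's mean squared error, W is d x T (one column per
    # task), and B is T x T with zero diagonal; B[s, t] is the directed
    # transfer weight from task s to task t, so row t holds task t's
    # outgoing transfer and its l1 penalty is scaled by task t's loss.
    T = len(Xs)
    d = Xs[0].shape[1]
    W = np.zeros((d, T))
    B = np.zeros((T, T))

    def task_losses(W):
        return np.array([np.mean((Xs[t] @ W[:, t] - ys[t]) ** 2)
                         for t in range(T)])

    for _ in range(n_iters):
        # W-step: gradient descent on W with the transfer graph B fixed.
        out_w = 1.0 + mu * np.abs(B).sum(axis=1)    # per-task loss weights
        R = W - W @ B                               # reconstruction residual
        G = 2.0 * lam * R @ (np.eye(T) - B).T       # grad of the Frobenius term
        for t in range(T):
            n_t = Xs[t].shape[0]
            grad_loss = (2.0 / n_t) * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
            W[:, t] -= lr * (out_w[t] * grad_loss + G[:, t])

        # B-step: one proximal-gradient step on B with W fixed. The l1
        # threshold for row t grows with task t's current loss, which is
        # what makes the transfer asymmetric.
        losses = task_losses(W)
        grad_B = -2.0 * lam * W.T @ (W - W @ B)
        B = soft_threshold(B - lr * grad_B, lr * mu * losses[:, None])
        np.fill_diagonal(B, 0.0)                    # no self-transfer

    return W, B

This sketch corresponds to the first, alternating algorithm; the curriculum variant mentioned in the abstract would instead order the tasks (e.g., by loss) and fit them one at a time, fixing already-learned tasks as reconstruction sources for later ones.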

Cite this Paper

BibTeX
@InProceedings{pmlr-v48-leeb16,
  title     = {Asymmetric Multi-task Learning Based on Task Relatedness and Loss},
  author    = {Lee, Giwoong and Yang, Eunho and Hwang, Sung},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {230--238},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/leeb16.pdf},
  url       = {https://proceedings.mlr.press/v48/leeb16.html}
}

Endnote
%0 Conference Paper
%T Asymmetric Multi-task Learning Based on Task Relatedness and Loss
%A Giwoong Lee
%A Eunho Yang
%A Sung Hwang
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-leeb16
%I PMLR
%P 230--238
%U https://proceedings.mlr.press/v48/leeb16.html
%V 48

RIS
TY - CPAPER
TI - Asymmetric Multi-task Learning Based on Task Relatedness and Loss
AU - Giwoong Lee
AU - Eunho Yang
AU - Sung Hwang
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-leeb16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 230
EP - 238
L1 - http://proceedings.mlr.press/v48/leeb16.pdf
UR - https://proceedings.mlr.press/v48/leeb16.html
ER -

APA
Lee, G., Yang, E. & Hwang, S. (2016). Asymmetric Multi-task Learning Based on Task Relatedness and Loss. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:230-238. Available from https://proceedings.mlr.press/v48/leeb16.html.

Related Material

Download PDF: http://proceedings.mlr.press/v48/leeb16.pdf