Kernel Multi-task Learning using Task-specific Features

Edwin V. Bonilla, Felix V. Agakov, Christopher K. I. Williams
Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, PMLR 2:43-50, 2007.

Abstract

In this paper we are concerned with multitask learning when task-specific features are available. We describe two ways of achieving this using Gaussian process predictors: in the first method, the data from all tasks is combined into one dataset, making use of the task-specific features. In the second method we train specific predictors for each reference task, and then combine their predictions using a gating network. We demonstrate these methods on a compiler performance prediction problem, where a task is defined as predicting the speed-up obtained when applying a sequence of code transformations to a given program.
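
To make the two approaches concrete, the sketch below illustrates them on toy data, assuming Python with scikit-learn's GaussianProcessRegressor. Everything here is an illustrative assumption: the data, the dimensions, and the distance-based softmax "gate", which merely stands in for the learned gating network described in the paper; this is not the authors' implementation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy setup: 3 reference tasks, each with inputs x (5-dim, standing in for
# code-transformation descriptors) and a task-specific feature vector t
# (2-dim). All names and dimensions are hypothetical.
n_tasks, n_per_task, dx, dt = 3, 30, 5, 2
task_feats = rng.normal(size=(n_tasks, dt))

X, T, y = [], [], []
for j in range(n_tasks):
    Xj = rng.normal(size=(n_per_task, dx))
    # Hypothetical task-dependent response standing in for speed-up.
    yj = np.sin(Xj @ rng.normal(size=dx)) + task_feats[j].sum()
    X.append(Xj)
    T.append(np.tile(task_feats[j], (n_per_task, 1)))
    y.append(yj)
X, T, y = np.vstack(X), np.vstack(T), np.concatenate(y)

# Method 1: pool all tasks into one dataset by concatenating the
# task-specific features onto each input, then fit a single GP.
gp_pooled = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp_pooled.fit(np.hstack([X, T]), y)

# Method 2: one GP per reference task, combined by a gate. Here the gate
# is a softmax over negative distances between a new task's features and
# each reference task's features -- a stand-in for a learned gating network.
gps = [GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-2)
           .fit(X[j * n_per_task:(j + 1) * n_per_task],
                y[j * n_per_task:(j + 1) * n_per_task])
       for j in range(n_tasks)]

def gated_predict(x_new, t_new):
    d = np.linalg.norm(task_feats - t_new, axis=1)
    w = np.exp(-d) / np.exp(-d).sum()  # softmax gate weights over tasks
    preds = np.array([gp.predict(x_new[None, :])[0] for gp in gps])
    return w @ preds

x_star, t_star = rng.normal(size=dx), task_feats[0] + 0.1
print(gp_pooled.predict(np.hstack([x_star, t_star])[None, :]))
print(gated_predict(x_star, t_star))

In this sketch, the pooled model shares information across tasks through the appended task descriptors, while the gated model keeps independent per-task predictors and weights them by how closely a new task's features match each reference task's.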

Cite this Paper


BibTeX
@InProceedings{pmlr-v2-bonilla07a,
  title     = {Kernel Multi-task Learning using Task-specific Features},
  author    = {Bonilla, Edwin V. and Agakov, Felix V. and Williams, Christopher K. I.},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  pages     = {43--50},
  year      = {2007},
  editor    = {Meila, Marina and Shen, Xiaotong},
  volume    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Juan, Puerto Rico},
  month     = {21--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v2/bonilla07a/bonilla07a.pdf},
  url       = {https://proceedings.mlr.press/v2/bonilla07a.html},
  abstract  = {In this paper we are concerned with multitask learning when task-specific features are available. We describe two ways of achieving this using Gaussian process predictors: in the first method, the data from all tasks is combined into one dataset, making use of the task-specific features. In the second method we train specific predictors for each reference task, and then combine their predictions using a gating network. We demonstrate these methods on a compiler performance prediction problem, where a task is defined as predicting the speed-up obtained when applying a sequence of code transformations to a given program.}
}
Endnote
%0 Conference Paper
%T Kernel Multi-task Learning using Task-specific Features
%A Edwin V. Bonilla
%A Felix V. Agakov
%A Christopher K. I. Williams
%B Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2007
%E Marina Meila
%E Xiaotong Shen
%F pmlr-v2-bonilla07a
%I PMLR
%P 43--50
%U https://proceedings.mlr.press/v2/bonilla07a.html
%V 2
%X In this paper we are concerned with multitask learning when task-specific features are available. We describe two ways of achieving this using Gaussian process predictors: in the first method, the data from all tasks is combined into one dataset, making use of the task-specific features. In the second method we train specific predictors for each reference task, and then combine their predictions using a gating network. We demonstrate these methods on a compiler performance prediction problem, where a task is defined as predicting the speed-up obtained when applying a sequence of code transformations to a given program.
RIS
TY - CPAPER
TI - Kernel Multi-task Learning using Task-specific Features
AU - Edwin V. Bonilla
AU - Felix V. Agakov
AU - Christopher K. I. Williams
BT - Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics
DA - 2007/03/11
ED - Marina Meila
ED - Xiaotong Shen
ID - pmlr-v2-bonilla07a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 2
SP - 43
EP - 50
L1 - http://proceedings.mlr.press/v2/bonilla07a/bonilla07a.pdf
UR - https://proceedings.mlr.press/v2/bonilla07a.html
AB - In this paper we are concerned with multitask learning when task-specific features are available. We describe two ways of achieving this using Gaussian process predictors: in the first method, the data from all tasks is combined into one dataset, making use of the task-specific features. In the second method we train specific predictors for each reference task, and then combine their predictions using a gating network. We demonstrate these methods on a compiler performance prediction problem, where a task is defined as predicting the speed-up obtained when applying a sequence of code transformations to a given program.
ER -
APA
Bonilla, E.V., Agakov, F.V. & Williams, C.K.I. (2007). Kernel Multi-task Learning using Task-specific Features. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 2:43-50. Available from https://proceedings.mlr.press/v2/bonilla07a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v2/bonilla07a/bonilla07a.pdf