Collaborative hyperparameter tuning

Rémi Bardenet, Mátyás Brendel, Balázs Kégl, Michèle Sebag
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):199-207, 2013.

Abstract

Hyperparameter learning has traditionally been a manual task because of the limited number of trials. Today’s computing infrastructures allow bigger evaluation budgets, thus opening the way for algorithmic approaches. Recently, surrogate-based optimization was successfully applied to hyperparameter learning for deep belief networks and to WEKA classifiers. The methods combined brute force computational power with model building about the behavior of the error function in the hyperparameter space, and they could significantly improve on manual hyperparameter tuning. What may make experienced practitioners even better at hyperparameter optimization is their ability to generalize across similar learning problems. In this paper, we propose a generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand. To this end, we combine surrogate-based ranking and optimization techniques for surrogate-based collaborative tuning (SCoT). We demonstrate SCoT in two experiments where it outperforms standard tuning techniques and single-problem surrogate-based optimization.
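
For readers unfamiliar with the single-problem baseline the paper builds on, below is a minimal sketch of surrogate-based hyperparameter optimization: a Gaussian-process surrogate of the validation error plus the expected-improvement acquisition criterion. The objective validation_error, the one-dimensional log10(C) search space, and all other names are illustrative assumptions, not the paper's actual setup; SCoT itself goes further by sharing a surrogate-based ranking model across several learning problems.

# Generic sketch of surrogate-based hyperparameter optimization
# (GP surrogate + expected improvement); illustrative only, not the
# authors' SCoT method.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def validation_error(log10_C):
    # Stand-in for training a classifier with this hyperparameter
    # and measuring its validation error (assumed objective).
    return (log10_C - 1.0) ** 2 + 0.1 * np.random.rand()

def expected_improvement(mu, sigma, best):
    # Expected improvement of a candidate over the current best error.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
candidates = np.linspace(-3, 3, 200).reshape(-1, 1)  # log10(C) grid
X = rng.uniform(-3, 3, size=(3, 1))                  # random initial trials
y = np.array([validation_error(x[0]) for x in X])

for _ in range(20):
    # Fit the surrogate to all trials so far; alpha adds jitter for
    # numerical stability with the noisy objective.
    gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(1.0),
                                  alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0]))

print("best log10(C):", X[np.argmin(y)][0], "error:", y.min())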

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-bardenet13,
  title     = {Collaborative hyperparameter tuning},
  author    = {Bardenet, Rémi and Brendel, Mátyás and Kégl, Balázs and Sebag, Michèle},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {199--207},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/bardenet13.pdf},
  url       = {https://proceedings.mlr.press/v28/bardenet13.html},
  abstract  = {Hyperparameter learning has traditionally been a manual task because of the limited number of trials. Today’s computing infrastructures allow bigger evaluation budgets, thus opening the way for algorithmic approaches. Recently, surrogate-based optimization was successfully applied to hyperparameter learning for deep belief networks and to WEKA classifiers. The methods combined brute force computational power with model building about the behavior of the error function in the hyperparameter space, and they could significantly improve on manual hyperparameter tuning. What may make experienced practitioners even better at hyperparameter optimization is their ability to generalize across similar learning problems. In this paper, we propose a generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand. To this end, we combine surrogate-based ranking and optimization techniques for surrogate-based collaborative tuning (SCoT). We demonstrate SCoT in two experiments where it outperforms standard tuning techniques and single-problem surrogate-based optimization.}
}
Endnote
%0 Conference Paper
%T Collaborative hyperparameter tuning
%A Rémi Bardenet
%A Mátyás Brendel
%A Balázs Kégl
%A Michèle Sebag
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-bardenet13
%I PMLR
%P 199--207
%U https://proceedings.mlr.press/v28/bardenet13.html
%V 28
%N 2
%X Hyperparameter learning has traditionally been a manual task because of the limited number of trials. Today’s computing infrastructures allow bigger evaluation budgets, thus opening the way for algorithmic approaches. Recently, surrogate-based optimization was successfully applied to hyperparameter learning for deep belief networks and to WEKA classifiers. The methods combined brute force computational power with model building about the behavior of the error function in the hyperparameter space, and they could significantly improve on manual hyperparameter tuning. What may make experienced practitioners even better at hyperparameter optimization is their ability to generalize across similar learning problems. In this paper, we propose a generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand. To this end, we combine surrogate-based ranking and optimization techniques for surrogate-based collaborative tuning (SCoT). We demonstrate SCoT in two experiments where it outperforms standard tuning techniques and single-problem surrogate-based optimization.
RIS
TY - CPAPER
TI - Collaborative hyperparameter tuning
AU - Rémi Bardenet
AU - Mátyás Brendel
AU - Balázs Kégl
AU - Michèle Sebag
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-bardenet13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 2
SP - 199
EP - 207
L1 - http://proceedings.mlr.press/v28/bardenet13.pdf
UR - https://proceedings.mlr.press/v28/bardenet13.html
AB - Hyperparameter learning has traditionally been a manual task because of the limited number of trials. Today’s computing infrastructures allow bigger evaluation budgets, thus opening the way for algorithmic approaches. Recently, surrogate-based optimization was successfully applied to hyperparameter learning for deep belief networks and to WEKA classifiers. The methods combined brute force computational power with model building about the behavior of the error function in the hyperparameter space, and they could significantly improve on manual hyperparameter tuning. What may make experienced practitioners even better at hyperparameter optimization is their ability to generalize across similar learning problems. In this paper, we propose a generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand. To this end, we combine surrogate-based ranking and optimization techniques for surrogate-based collaborative tuning (SCoT). We demonstrate SCoT in two experiments where it outperforms standard tuning techniques and single-problem surrogate-based optimization.
ER -
APA
Bardenet, R., Brendel, M., Kégl, B. & Sebag, M. (2013). Collaborative hyperparameter tuning. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(2):199-207. Available from https://proceedings.mlr.press/v28/bardenet13.html.
