Unbounded Bayesian Optimization via Regularization

Bobak Shahriari, Alexandre Bouchard-Côté, Nando de Freitas
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1168-1176, 2016.

Abstract

Bayesian optimization has recently emerged as a powerful and flexible tool in machine learning for hyperparameter tuning and, more generally, for the efficient global optimization of expensive black-box functions. Established practice requires a user-defined bounded domain that is assumed to contain the global optimizer. However, when little is known about the objective function being probed, prescribing such a domain can be difficult. In this work, we modify the standard Bayesian optimization framework in a principled way to allow unconstrained exploration of the search space. We introduce a new alternative method and compare it to a volume-doubling baseline on two common synthetic benchmark functions. Finally, we apply our proposed methods to the task of tuning the stochastic gradient descent optimizer for both a multi-layer perceptron and a convolutional neural network on the MNIST dataset.
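The abstract describes the mechanism only at a high level, so the following minimal Python sketch illustrates one plausible reading of it: a standard Gaussian-process surrogate with expected improvement, where the acquisition value is damped by a soft penalty that grows with distance from the observed data, discouraging far-away candidates rather than excluding them with a hard box. The Gaussian penalty, its width sigma_reg, the data-centred candidate sampling, and the toy objective are all illustrative assumptions of this sketch, not the paper's exact regularizer.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, f_best):
    # Standard expected improvement for minimization.
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def regularized_acquisition(X_cand, gp, f_best, x_bar, sigma_reg):
    # EI damped by a Gaussian factor centred on the data: distant
    # candidates are penalized but never strictly ruled out, so the
    # effective search domain is unbounded. The penalty form is an
    # assumption of this sketch, not the paper's formulation.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sq_dist = np.sum((X_cand - x_bar) ** 2, axis=1)
    penalty = np.exp(-0.5 * sq_dist / sigma_reg ** 2)
    return expected_improvement(mu, sigma, f_best) * penalty

# Toy 1-D objective whose minimizer (x = 2.5) lies outside a naive
# [0, 1] initial box.
rng = np.random.default_rng(0)
f = lambda x: (x[..., 0] - 2.5) ** 2
X = rng.uniform(0.0, 1.0, size=(5, 1))   # initial design in a small box
y = f(X)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    x_bar = X.mean(axis=0)
    # Candidates are sampled around the current data, not from a fixed box.
    X_cand = x_bar + rng.normal(scale=3.0 * X.std() + 1.0, size=(512, 1))
    acq = regularized_acquisition(X_cand, gp, y.min(), x_bar, sigma_reg=2.0)
    x_next = X_cand[np.argmax(acq)]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("best x:", X[np.argmin(y)], "best f:", y.min())

Because candidates are drawn around the accumulated observations, the effective search region expands as evidence accumulates. A hard-bounded alternative such as the volume-doubling baseline mentioned in the abstract would instead keep an explicit box and enlarge it over time.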

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-shahriari16,
  title     = {Unbounded Bayesian Optimization via Regularization},
  author    = {Shahriari, Bobak and Bouchard-Cote, Alexandre and Freitas, Nando},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {1168--1176},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/shahriari16.pdf},
  url       = {https://proceedings.mlr.press/v51/shahriari16.html},
  abstract  = {Bayesian optimization has recently emerged as a powerful and flexible tool in machine learning for hyperparameter tuning and more generally for the efficient global optimization of expensive black box functions. The established practice requires a user-defined bounded domain, which is assumed to contain the global optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such a domain. In this work, we modify the standard Bayesian optimization framework in a principled way to allow for unconstrained exploration of the search space. We introduce a new alternative method and compare it to a volume doubling baseline on two common synthetic benchmarking test functions. Finally, we apply our proposed methods on the task of tuning the stochastic gradient descent optimizer for both a multi-layered perceptron and a convolutional neural network on the MNIST dataset.}
}
Endnote
%0 Conference Paper
%T Unbounded Bayesian Optimization via Regularization
%A Bobak Shahriari
%A Alexandre Bouchard-Cote
%A Nando Freitas
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-shahriari16
%I PMLR
%P 1168--1176
%U https://proceedings.mlr.press/v51/shahriari16.html
%V 51
%X Bayesian optimization has recently emerged as a powerful and flexible tool in machine learning for hyperparameter tuning and more generally for the efficient global optimization of expensive black box functions. The established practice requires a user-defined bounded domain, which is assumed to contain the global optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such a domain. In this work, we modify the standard Bayesian optimization framework in a principled way to allow for unconstrained exploration of the search space. We introduce a new alternative method and compare it to a volume doubling baseline on two common synthetic benchmarking test functions. Finally, we apply our proposed methods on the task of tuning the stochastic gradient descent optimizer for both a multi-layered perceptron and a convolutional neural network on the MNIST dataset.
RIS
TY - CPAPER
TI - Unbounded Bayesian Optimization via Regularization
AU - Bobak Shahriari
AU - Alexandre Bouchard-Cote
AU - Nando Freitas
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-shahriari16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 51
SP - 1168
EP - 1176
L1 - http://proceedings.mlr.press/v51/shahriari16.pdf
UR - https://proceedings.mlr.press/v51/shahriari16.html
AB - Bayesian optimization has recently emerged as a powerful and flexible tool in machine learning for hyperparameter tuning and more generally for the efficient global optimization of expensive black box functions. The established practice requires a user-defined bounded domain, which is assumed to contain the global optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such a domain. In this work, we modify the standard Bayesian optimization framework in a principled way to allow for unconstrained exploration of the search space. We introduce a new alternative method and compare it to a volume doubling baseline on two common synthetic benchmarking test functions. Finally, we apply our proposed methods on the task of tuning the stochastic gradient descent optimizer for both a multi-layered perceptron and a convolutional neural network on the MNIST dataset.
ER -
APA
Shahriari, B., Bouchard-Cote, A. & Freitas, N. (2016). Unbounded Bayesian Optimization via Regularization. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:1168-1176. Available from https://proceedings.mlr.press/v51/shahriari16.html.
