Sample-based approximate regularization

Philip Bachman, Amir-Massoud Farahmand, Doina Precup
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1926-1934, 2014.

Abstract

We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
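The abstract describes penalizing derivatives of a linearly parameterized function by combining sampling with finite-difference derivative estimates. As a rough illustration only (not the paper's actual SAR construction), the sketch below penalizes the average squared first derivative of a model f(x) = w·φ(x), estimated by forward differences at points sampled from an assumed input distribution; the feature map `phi`, the uniform sampling distribution, and all parameter values are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Hypothetical feature map for a linearly parameterized model f(x) = w @ phi(x).
    return np.array([1.0, x, x**2, np.sin(x)])

def derivative_penalty(w, n_samples=200, h=1e-3):
    """Sample-based finite-difference sketch of a derivative penalty:
    average the squared forward-difference estimate of f'(x) over points
    drawn from an (assumed) uniform input distribution on [-1, 1]."""
    xs = rng.uniform(-1.0, 1.0, size=n_samples)
    diffs = np.array([(w @ phi(x + h) - w @ phi(x)) / h for x in xs])
    return np.mean(diffs**2)

w = np.array([0.5, 1.0, -0.3, 0.2])
penalty = derivative_penalty(w)
```

Because f is linear in w, a penalty of this form is quadratic in the parameters, so it can be folded into a standard regularized least-squares objective; the paper's results concern how well such sampled approximations converge to the exact derivative-based regularizer.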

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-bachman14,
  title     = {Sample-based approximate regularization},
  author    = {Bachman, Philip and Farahmand, Amir-Massoud and Precup, Doina},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1926--1934},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/bachman14.pdf},
  url       = {https://proceedings.mlr.press/v32/bachman14.html},
  abstract  = {We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.}
}
Endnote
%0 Conference Paper
%T Sample-based approximate regularization
%A Philip Bachman
%A Amir-Massoud Farahmand
%A Doina Precup
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-bachman14
%I PMLR
%P 1926--1934
%U https://proceedings.mlr.press/v32/bachman14.html
%V 32
%N 2
%X We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
RIS
TY  - CPAPER
TI  - Sample-based approximate regularization
AU  - Philip Bachman
AU  - Amir-Massoud Farahmand
AU  - Doina Precup
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-bachman14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 1926
EP  - 1934
L1  - http://proceedings.mlr.press/v32/bachman14.pdf
UR  - https://proceedings.mlr.press/v32/bachman14.html
AB  - We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
ER  -
APA
Bachman, P., Farahmand, A.-M., & Precup, D. (2014). Sample-based approximate regularization. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1926-1934. Available from https://proceedings.mlr.press/v32/bachman14.html.