Finito: A faster, permutable incremental gradient method for big data problems

Aaron Defazio, Justin Domke, Tiberio Caetano
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1125-1133, 2014.

Abstract

Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
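
The abstract does not spell out the update rule, but the method class it refers to (incremental gradient methods for smooth strongly convex finite sums, run with per-epoch sampling without replacement) can be illustrated with a short sketch. The sketch below keeps a table of past iterates and their component gradients, one per term of the sum, and updates a single entry per step while maintaining the table averages incrementally. It is a minimal illustration of the setting, not the paper's stated algorithm: the names (finito_style_sketch, grad_fns, alpha, mu) and the exact form of the step are assumptions.

import numpy as np

def finito_style_sketch(grad_fns, n, dim, mu, epochs=50, alpha=2.0, seed=0):
    """Sketch of a gradient-table incremental method for
    min_w (1/n) * sum_i f_i(w), with each f_i smooth and the sum strongly convex.

    NOTE: illustrative only; the step below and the constant alpha are assumed,
    not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros((n, dim))                         # one stored iterate per term
    grads = np.array([grad_fns[i](phi[i]) for i in range(n)])
    phi_bar = phi.mean(axis=0)                       # running averages, patched in O(dim) per step
    grad_bar = grads.mean(axis=0)

    for _ in range(epochs):
        order = rng.permutation(n)                   # sampling *without* replacement each epoch
        for j in order:
            w = phi_bar - grad_bar / (alpha * mu)    # table-average step (assumed form)
            new_grad = grad_fns[j](w)
            phi_bar += (w - phi[j]) / n              # patch averages using the old entry j
            grad_bar += (new_grad - grads[j]) / n
            phi[j], grads[j] = w, new_grad           # overwrite entry j with the new point
    return phi_bar

Each f_i would typically be the regularized loss on one training example; the appeal of this class of methods is that each step costs one component gradient plus O(dim) bookkeeping, at the price of O(n * dim) storage for the table.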

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-defazio14,
  title     = {Finito: A faster, permutable incremental gradient method for big data problems},
  author    = {Defazio, Aaron and Domke, Justin and Caetano, Tiberio},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1125--1133},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/defazio14.pdf},
  url       = {https://proceedings.mlr.press/v32/defazio14.html},
  abstract  = {Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.}
}
Endnote
%0 Conference Paper
%T Finito: A faster, permutable incremental gradient method for big data problems
%A Aaron Defazio
%A Justin Domke
%A Tiberio Caetano
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-defazio14
%I PMLR
%P 1125--1133
%U https://proceedings.mlr.press/v32/defazio14.html
%V 32
%N 2
%X Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
RIS
TY - CPAPER
TI - Finito: A faster, permutable incremental gradient method for big data problems
AU - Aaron Defazio
AU - Justin Domke
AU - Tiberio Caetano
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-defazio14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 1125
EP - 1133
L1 - http://proceedings.mlr.press/v32/defazio14.pdf
UR - https://proceedings.mlr.press/v32/defazio14.html
AB - Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
ER -
APA
Defazio, A., Domke, J. & Caetano, T. (2014). Finito: A faster, permutable incremental gradient method for big data problems. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1125-1133. Available from https://proceedings.mlr.press/v32/defazio14.html.