Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields

Mark Schmidt, Reza Babanezhad, Mohamed Ahmed, Aaron Defazio, Ann Clifton, Anoop Sarkar
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:819-828, 2015.

Abstract

We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.
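To make the abstract's algorithmic claim concrete, below is a minimal sketch (not taken from the paper) of the SAGA variant under non-uniform sampling, applied to l2-regularized logistic regression rather than a CRF. The Lipschitz-proportional sampling rule, the step size, and all names are illustrative assumptions; the paper's actual method additionally exploits structure in the CRF gradient to cut the method's memory cost, which this toy version does not attempt.

import numpy as np

def saga_nonuniform(X, y, lam=1e-2, n_iters=10000, seed=None):
    """Sketch of SAGA with non-uniform sampling on l2-regularized
    logistic regression (labels y in {-1, +1}). Illustrative only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Per-example Lipschitz constants for the logistic loss gradient:
    # L_i = ||x_i||^2 / 4, plus lam for the regularizer.
    L = 0.25 * np.sum(X ** 2, axis=1) + lam
    p = L / L.sum()                 # sample example i with prob p_i ∝ L_i
    step = 1.0 / (3.0 * L.max())    # conservative step size (assumption)

    w = np.zeros(d)
    g = np.zeros((n, d))            # gradient memory, one slot per example
    g_bar = np.zeros(d)             # running average of stored gradients

    for _ in range(n_iters):
        i = rng.choice(n, p=p)
        # Gradient of the i-th regularized logistic loss at the current w.
        s = 1.0 / (1.0 + np.exp(-y[i] * (X[i] @ w)))
        g_new = (s - 1.0) * y[i] * X[i] + lam * w
        # Importance-weighted SAGA step: dividing the innovation by
        # (n * p_i) keeps the search direction an unbiased estimate of
        # the full gradient despite the biased sampling distribution.
        w -= step * ((g_new - g[i]) / (n * p[i]) + g_bar)
        g_bar += (g_new - g[i]) / n  # maintain the average incrementally
        g[i] = g_new
    return w

The division by n * p_i is the standard correction that makes the update direction unbiased under any sampling distribution with p_i > 0; sampling examples in proportion to their Lipschitz constants is one common choice that this correction supports.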

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-schmidt15,
  title     = {{Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields}},
  author    = {Schmidt, Mark and Babanezhad, Reza and Ahmed, Mohamed and Defazio, Aaron and Clifton, Ann and Sarkar, Anoop},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {819--828},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/schmidt15.pdf},
  url       = {https://proceedings.mlr.press/v38/schmidt15.html},
  abstract  = {We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.}
}
Endnote
%0 Conference Paper
%T Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields
%A Mark Schmidt
%A Reza Babanezhad
%A Mohamed Ahmed
%A Aaron Defazio
%A Ann Clifton
%A Anoop Sarkar
%B Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2015
%E Guy Lebanon
%E S. V. N. Vishwanathan
%F pmlr-v38-schmidt15
%I PMLR
%P 819--828
%U https://proceedings.mlr.press/v38/schmidt15.html
%V 38
%X We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.
RIS
TY - CPAPER
TI - Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields
AU - Mark Schmidt
AU - Reza Babanezhad
AU - Mohamed Ahmed
AU - Aaron Defazio
AU - Ann Clifton
AU - Anoop Sarkar
BT - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
DA - 2015/02/21
ED - Guy Lebanon
ED - S. V. N. Vishwanathan
ID - pmlr-v38-schmidt15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 38
SP - 819
EP - 828
L1 - http://proceedings.mlr.press/v38/schmidt15.pdf
UR - https://proceedings.mlr.press/v38/schmidt15.html
AB - We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.
ER -
APA
Schmidt, M., Babanezhad, R., Ahmed, M., Defazio, A., Clifton, A. & Sarkar, A. (2015). Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:819-828. Available from https://proceedings.mlr.press/v38/schmidt15.html.