Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets

Dan Garber, Elad Hazan
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:541-549, 2015.

Abstract

The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large-scale optimization and machine learning. A key advantage of the method is that it avoids projections, the computational bottleneck in many applications, replacing them with a linear optimization step. Despite this advantage, the known convergence rates of the FW method fall behind standard first-order methods for most settings of interest. It is an active line of research to derive faster linear-optimization-based algorithms for various settings of convex optimization. In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of \frac{1}{t^2}. This gives a quadratic improvement in convergence rate compared to the general case, in which convergence is of the order \frac{1}{t} and known to be tight. We show that various balls induced by \ell_p norms, Schatten norms, and group norms are strongly convex, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution. We further show how several previous fast-rate results for the FW method follow easily from our analysis.
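As an illustrative sketch (not taken from the paper), the following Python snippet shows the vanilla Frank-Wolfe iteration over a Euclidean ball, one of the strongly convex sets covered by the abstract: the linear optimization step over such a ball has the closed form -r g / ||g||_2, so no projection is ever required. The function names, the 2/(t+2) step size, and the toy quadratic objective are assumptions made for illustration only; the paper's faster-rate analysis may rely on a different (e.g., line-search) step-size rule.

import numpy as np

def lmo_l2_ball(grad, radius=1.0):
    # Linear minimization oracle over the Euclidean ball of the given radius:
    # argmin_{||v||_2 <= radius} <grad, v> = -radius * grad / ||grad||_2.
    # (The Euclidean ball is a strongly convex set.)
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return -radius * grad / norm

def frank_wolfe(grad_f, x0, radius=1.0, num_iters=100):
    # Vanilla Frank-Wolfe (conditional gradient) with the standard 2/(t+2)
    # step size: each iteration uses only a linear optimization step, never
    # a projection. (Illustrative step-size choice, not the paper's analysis.)
    x = x0.copy()
    for t in range(num_iters):
        g = grad_f(x)
        v = lmo_l2_ball(g, radius)      # linear optimization step
        gamma = 2.0 / (t + 2.0)         # standard FW step size
        x = x + gamma * (v - x)         # convex combination stays feasible
    return x

# Toy usage: minimize 0.5 * ||x - b||^2 over the unit Euclidean ball.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.normal(size=5)
    grad_f = lambda x: x - b
    x_star = frank_wolfe(grad_f, x0=np.zeros(5), radius=1.0, num_iters=200)
    print(x_star)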

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-garbera15, title = {Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets}, author = {Garber, Dan and Hazan, Elad}, booktitle = {Proceedings of the 32nd International Conference on Machine Learning}, pages = {541--549}, year = {2015}, editor = {Bach, Francis and Blei, David}, volume = {37}, series = {Proceedings of Machine Learning Research}, address = {Lille, France}, month = {07--09 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v37/garbera15.pdf}, url = {https://proceedings.mlr.press/v37/garbera15.html}, abstract = {The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large-scale optimization and machine learning. A key advantage of the method is that it avoids projections, the computational bottleneck in many applications, replacing them with a linear optimization step. Despite this advantage, the known convergence rates of the FW method fall behind standard first-order methods for most settings of interest. It is an active line of research to derive faster linear-optimization-based algorithms for various settings of convex optimization. In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of \frac{1}{t^2}. This gives a quadratic improvement in convergence rate compared to the general case, in which convergence is of the order \frac{1}{t} and known to be tight. We show that various balls induced by \ell_p norms, Schatten norms, and group norms are strongly convex, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution. We further show how several previous fast-rate results for the FW method follow easily from our analysis.} }
Endnote
%0 Conference Paper %T Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets %A Dan Garber %A Elad Hazan %B Proceedings of the 32nd International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2015 %E Francis Bach %E David Blei %F pmlr-v37-garbera15 %I PMLR %P 541--549 %U https://proceedings.mlr.press/v37/garbera15.html %V 37 %X The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large-scale optimization and machine learning. A key advantage of the method is that it avoids projections, the computational bottleneck in many applications, replacing them with a linear optimization step. Despite this advantage, the known convergence rates of the FW method fall behind standard first-order methods for most settings of interest. It is an active line of research to derive faster linear-optimization-based algorithms for various settings of convex optimization. In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of \frac{1}{t^2}. This gives a quadratic improvement in convergence rate compared to the general case, in which convergence is of the order \frac{1}{t} and known to be tight. We show that various balls induced by \ell_p norms, Schatten norms, and group norms are strongly convex, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution. We further show how several previous fast-rate results for the FW method follow easily from our analysis.
RIS
TY - CPAPER TI - Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets AU - Dan Garber AU - Elad Hazan BT - Proceedings of the 32nd International Conference on Machine Learning DA - 2015/06/01 ED - Francis Bach ED - David Blei ID - pmlr-v37-garbera15 PB - PMLR DP - Proceedings of Machine Learning Research VL - 37 SP - 541 EP - 549 L1 - http://proceedings.mlr.press/v37/garbera15.pdf UR - https://proceedings.mlr.press/v37/garbera15.html AB - The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large-scale optimization and machine learning. A key advantage of the method is that it avoids projections, the computational bottleneck in many applications, replacing them with a linear optimization step. Despite this advantage, the known convergence rates of the FW method fall behind standard first-order methods for most settings of interest. It is an active line of research to derive faster linear-optimization-based algorithms for various settings of convex optimization. In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of \frac{1}{t^2}. This gives a quadratic improvement in convergence rate compared to the general case, in which convergence is of the order \frac{1}{t} and known to be tight. We show that various balls induced by \ell_p norms, Schatten norms, and group norms are strongly convex, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution. We further show how several previous fast-rate results for the FW method follow easily from our analysis. ER -
APA
Garber, D. & Hazan, E. (2015). Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:541-549. Available from https://proceedings.mlr.press/v37/garbera15.html.
