How Does Momentum Help Frank Wolfe?

by Bingcong Li, et al.

We unveil the connections between Frank Wolfe (FW) type algorithms and the momentum used in Accelerated Gradient Methods (AGM). On the negative side, these connections illustrate why momentum is unlikely to be effective for FW type algorithms in general. The encouraging message behind this link, on the other hand, is that momentum is useful for FW on a class of problems. In particular, we prove that a momentum variant of FW, which we term accelerated Frank Wolfe (AFW), converges at a faster rate Õ(1/k^2) on certain constraint sets, while retaining the same O(1/k) rate as FW in the general case. Since AFW achieves this possible acceleration at almost no extra cost, it is a competitive alternative to FW. Numerical experiments on benchmark machine learning tasks further validate our theoretical findings.
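To make the idea concrete, here is a minimal sketch of a momentum-style Frank Wolfe iteration on a simple problem. This is not the paper's exact AFW algorithm: the gradient-averaging weight `delta = 2/(k+1)`, the ℓ1-ball constraint, and the least-squares objective are all illustrative assumptions. The key ingredient it shows is that momentum enters by averaging past gradients before they are passed to the linear minimization oracle (LMO), rather than by extrapolating iterates as in AGM.

```python
import numpy as np

def lmo_l1(g, radius):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <g, s>, attained at a signed vertex."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def momentum_fw(grad, x0, radius, iters=2000):
    """Frank Wolfe with a momentum-style averaged gradient (illustrative).

    grad: callable returning the gradient of the objective at x.
    The weight schedule delta = 2/(k+1) is an assumed choice, not
    necessarily the one analyzed in the paper.
    """
    x = x0.copy()
    g_bar = np.zeros_like(x0)  # running average of past gradients
    for k in range(1, iters + 1):
        delta = 2.0 / (k + 1)
        # Momentum step: blend the new gradient into the running average.
        g_bar = (1 - delta) * g_bar + delta * grad(x)
        # Call the LMO on the averaged gradient instead of the fresh one.
        s = lmo_l1(g_bar, radius)
        # Standard FW convex-combination update toward the atom s.
        x = (1 - delta) * x + delta * s
    return x
```

For example, minimizing 0.5·||x − b||² over the ℓ1 ball of radius 1 with b inside the ball drives the iterate toward b, at the usual O(1/k) rate in this general setting.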


