Convergence Rate of Frank-Wolfe for Non-Convex Objectives

07/01/2016
by   Simon Lacoste-Julien, et al.

We give a simple proof that the Frank-Wolfe algorithm converges to a stationary point at a rate of O(1/√t) on non-convex objectives with a Lipschitz continuous gradient. Our analysis is affine invariant and is, to the best of our knowledge, the first to give a rate matching what was already proven for projected gradient methods (though with respect to a slightly different measure of stationarity).
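
The measure of stationarity here is the Frank-Wolfe gap g_t = max_{s in C} <x_t - s, grad f(x_t)>, which is nonnegative, vanishes exactly at stationary points, and is itself affine invariant. Below is a minimal sketch of the algorithm tracking this gap on a toy non-convex problem; the indefinite quadratic objective, the simplex domain, and the diminishing step size are illustrative assumptions, not the step-size rule analyzed in the paper.

```python
import numpy as np

def f(x, A):
    # Illustrative non-convex objective: an indefinite quadratic form.
    return 0.5 * x @ A @ x

def grad_f(x, A):
    return A @ x

def lmo_simplex(g):
    # Linear minimization oracle over the probability simplex:
    # argmin_{s in simplex} <g, s> is the vertex at the smallest coordinate of g.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

def frank_wolfe(A, T=1000):
    d = A.shape[0]
    x = np.full(d, 1.0 / d)           # start at the barycenter of the simplex
    gaps = []
    for t in range(T):
        g = grad_f(x, A)
        s = lmo_simplex(g)
        gap = g @ (x - s)             # Frank-Wolfe gap: >= 0, zero iff x is stationary
        gaps.append(gap)
        gamma = 1.0 / np.sqrt(t + 2)  # simple diminishing step (an assumption,
                                      # not the adaptive rule from the paper)
        x = (1.0 - gamma) * x + gamma * s
    return x, gaps

# An indefinite symmetric A makes f non-convex on the simplex.
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
A = (B + B.T) / 2.0
x, gaps = frank_wolfe(A)
print("f(x) =", f(x, A))
print("min Frank-Wolfe gap over 1000 iterations:", min(gaps))
```

The quantity the O(1/√t) bound controls is the smallest gap seen up to iteration t, which is why the sketch records the whole gap trajectory rather than only the final value.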
