Catalyst Acceleration for Gradient-Based Non-Convex Optimization

03/31/2017
by Courtney Paquette, et al.

We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions. When the objective is convex, the proposed approach enjoys the same properties as the Catalyst approach of Lin et al. [22]. When the objective is nonconvex, it achieves the best known convergence rate to stationary points for first-order methods. Specifically, the proposed algorithm does not require knowledge about the convexity of the objective; yet, it obtains an overall worst-case efficiency of Õ(ε^-2) and, if the function is convex, the complexity reduces to the near-optimal rate Õ(ε^-2/3). We conclude the paper by showing promising experimental results obtained by applying the proposed approach to SVRG and SAGA for sparse matrix factorization and for learning neural networks.
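
To make the scheme concrete, the following is a minimal Python sketch of a Catalyst-style outer loop: each outer iteration approximately minimizes the regularized surrogate f(x) + (kappa/2)*||x - y||^2 with an inner first-order method, then extrapolates the prox-center y. This is an illustrative simplification, not the authors' implementation: it omits the paper's automatic adaptation to unknown convexity and the inner stopping criteria, and the names here (catalyst_outer_loop, gd_inner_solver, kappa) are hypothetical. In the paper, the inner solver would be SVRG or SAGA.

```python
import numpy as np

def catalyst_outer_loop(f_grad, inner_solver, x0, kappa=1.0,
                        n_outer=100, n_inner=50):
    """Sketch of a Catalyst-style outer loop (hypothetical simplification).

    Each outer iteration approximately minimizes the surrogate
        h(z) = f(z) + (kappa/2) * ||z - y||^2,
    which is convex whenever kappa exceeds the weak-convexity modulus
    of f, then extrapolates the prox-center y (Nesterov-style).
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()      # prox-center / extrapolation point
    alpha = 1.0       # momentum coefficient (non-strongly-convex schedule)
    for _ in range(n_outer):
        y_k = y.copy()
        # Gradient of the regularized surrogate around the current center.
        h_grad = lambda z: f_grad(z) + kappa * (z - y_k)
        # Approximately solve the subproblem with the inner method
        # (SVRG or SAGA in the paper; plain gradient descent here).
        x_new = inner_solver(h_grad, x, n_inner)
        # Extrapolation coefficients from the Catalyst schedule:
        # alpha_{k+1}^2 = (1 - alpha_{k+1}) * alpha_k^2.
        alpha_new = 0.5 * (np.sqrt(alpha**4 + 4 * alpha**2) - alpha**2)
        beta = alpha * (1 - alpha) / (alpha**2 + alpha_new)
        y = x_new + beta * (x_new - x)
        x, alpha = x_new, alpha_new
    return x

def gd_inner_solver(grad, x0, n_steps, lr=0.1):
    """Plain gradient descent as a stand-in linearly convergent inner solver."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - lr * grad(x)
    return x

# Toy nonconvex objective f(x) = sum(x^2/4 + cos(x)); its Hessian ranges
# over [-0.5, 1.5], so kappa = 1.0 makes every surrogate convex.
f_grad = lambda x: 0.5 * x - np.sin(x)
x_star = catalyst_outer_loop(f_grad, gd_inner_solver, np.ones(5), kappa=1.0)
```

The key design point, as hinted in the abstract, is that each surrogate is convex once kappa dominates the curvature of f from below, which is what lets algorithms designed for convex minimization be reused on nonconvex objectives.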


Related research

02/15/2023 · Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization
Quasar convexity is a condition that allows some first-order methods to ...

11/07/2017 · Convex Optimization with Nonconvex Oracles
In machine learning and optimization, one often wants to minimize a conv...

02/27/2020 · Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?
It is well-known that given a bounded, smooth nonconvex function, standa...

08/29/2015 · Generalized Uniformly Optimal Methods for Nonlinear Programming
In this paper, we present a generic framework to extend existing uniform...

05/25/2021 · Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums
The problem of finding near-stationary points in convex optimization has...

09/01/2016 · Understanding Trainable Sparse Coding via Matrix Factorization
Sparse coding is a core building block in many data analysis and machine...

03/29/2022 · Convergence and Complexity of Stochastic Subgradient Methods with Dependent Data for Nonconvex Optimization
We show that under a general dependent data sampling scheme, the classic...
