Adaptive First- and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems

05/19/2020
by Parvin Nazari et al.

In this paper, we design and analyze a new family of adaptive subgradient methods for solving an important class of weakly convex (possibly nonsmooth) stochastic optimization problems. Adaptive methods that use exponential moving averages of past gradients to update search directions and learning rates have recently attracted considerable attention for solving optimization problems that arise in machine learning. Their convergence analysis, however, almost exclusively requires smoothness and/or convexity of the objective function. In contrast, we establish non-asymptotic rates of convergence for first- and zeroth-order adaptive methods and their proximal variants on a reasonably broad class of nonsmooth and nonconvex optimization problems. Experimental results indicate that the proposed algorithms empirically outperform stochastic gradient descent and its zeroth-order variant on such problems.
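The abstract's core ingredients can be illustrated in a small sketch: an Adam-style update that keeps exponential moving averages of past (sub)gradients and their squares, driven here by a two-point Gaussian-smoothing zeroth-order gradient estimate instead of an exact subgradient. This is an illustrative toy, not the paper's algorithm: the function names (`zo_gradient`, `adam_step`), the hyperparameters, and the weakly convex test objective f(x) = |‖x‖² − 1| are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-4, n_samples=10):
    """Two-point zeroth-order gradient estimate via Gaussian smoothing.

    Averages (f(x + mu*u) - f(x)) / mu * u over random directions u,
    which approximates the gradient of a smoothed version of f.
    """
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_samples

def adam_step(x, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style adaptive step driven by a (sub)gradient g.

    m and v are exponential moving averages of g and g**2; they shape
    both the search direction and the per-coordinate learning rate.
    """
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Weakly convex, nonsmooth toy objective (phase-retrieval flavor):
# f(x) = |x'x - 1| is nondifferentiable on the unit circle and nonconvex.
f = lambda x: abs(x @ x - 1.0)

x = np.array([2.0, 0.5])
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 3001):
    g = zo_gradient(f, x)   # a first-order variant would use a true subgradient here
    x, m, v = adam_step(x, g, m, v, t)
```

After the loop, `f(x)` is driven close to its minimum value 0, despite f being nonsmooth and the gradient information being purely zeroth-order.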

Related research:

- 04/26/2021: Solving a class of non-convex min-max games using adaptive momentum methods
- 06/11/2020: Convergence of adaptive algorithms for weakly convex constrained optimization
- 11/11/2019: Stochastic Difference-of-Convex Algorithms for Solving nonconvex optimization problems
- 01/13/2022: Consistent Approximations in Composite Optimization
- 08/08/2018: On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization
- 09/22/2013: Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
- 07/06/2021: Distributed stochastic optimization with large delays
