A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization

07/08/2019
by Quoc Tran-Dinh, et al.

In this paper, we introduce a new approach to developing stochastic optimization algorithms for solving stochastic composite and possibly nonconvex optimization problems. The main idea is to combine two stochastic estimators to form a new hybrid estimator. We first introduce this hybrid estimator and then investigate its fundamental properties to build a foundation for algorithmic development. Next, we apply our theory to develop several variants of stochastic gradient methods for both expectation and finite-sum composite optimization problems. Our first algorithm can be viewed as a single-loop variant of proximal stochastic gradient methods, yet it achieves an O(σ^3ε^-1 + σε^-3) complexity bound that is significantly better than the O(σ^2ε^-4) complexity of state-of-the-art stochastic gradient methods, where σ^2 is the variance of the stochastic gradient estimator and ε is the desired accuracy. We then consider two further variants of our method, an adaptive step-size scheme and a double-loop scheme, which enjoy the same theoretical guarantees as the first algorithm. We also study two mini-batch variants and develop two hybrid SARAH-SVRG algorithms for the finite-sum setting. In all cases, we achieve the best-known complexity bounds under standard assumptions. We test our methods on several numerical examples with real datasets and compare them with state-of-the-art algorithms. Our numerical experiments show that the new methods are competitive and, in many cases, outperform their competitors.
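The abstract does not spell out the hybrid estimator, but judging from the companion papers listed below, a reasonable reading is a convex combination of a SARAH-style recursive estimator and a plain unbiased stochastic gradient, followed by a proximal step. The following is a minimal single-loop sketch under that assumption; `grad_sample`, `beta`, `eta`, `lam`, and the l1 regularizer are illustrative choices, not the paper's exact algorithm or parameter settings.

```python
import numpy as np

# Assumed setup: minimize F(x) = (1/n) * sum_i f_i(x) + lam * ||x||_1,
# where grad_sample(x, i) returns the stochastic gradient of f_i at x.

def prox_l1(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hybrid_prox_sgd(grad_sample, n, x0, beta=0.9, eta=0.1, lam=1e-3,
                    iters=1000, seed=0):
    """Single-loop proximal stochastic gradient method with a hybrid estimator:

        v_t = beta * (v_{t-1} + g(x_t; xi) - g(x_{t-1}; xi))
              + (1 - beta) * g(x_t; zeta),

    i.e. a convex combination of a SARAH-style recursive term (biased,
    variance-reduced) and an independent unbiased stochastic gradient.
    """
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    v = grad_sample(x0, rng.integers(n))   # initialize with one stochastic gradient
    x = prox_l1(x0 - eta * v, eta * lam)
    for _ in range(iters):
        xi, zeta = rng.integers(n), rng.integers(n)  # independent samples
        v = beta * (v + grad_sample(x, xi) - grad_sample(x_prev, xi)) \
            + (1.0 - beta) * grad_sample(x, zeta)
        x_prev, x = x, prox_l1(x - eta * v, eta * lam)
    return x
```

The (1 - beta) unbiased term keeps the estimator anchored without the periodic full-gradient snapshots that SARAH and SVRG require, which is what makes a single-loop scheme possible; the adaptive step-size and double-loop variants mentioned in the abstract modify `eta` and the restart structure rather than the estimator itself.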


Related research

05/15/2019
Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization
We introduce a hybrid stochastic estimator to design stochastic gradient...

02/15/2019
ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
In this paper, we propose a new stochastic algorithmic framework to solv...

08/20/2020
An Optimal Hybrid Variance-Reduced Algorithm for Stochastic Composite Nonconvex Optimization
In this note we propose a new variant of the hybrid variance-reduced pro...

12/19/2022
Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization
Many real-world problems not only have complicated nonconvex functional ...

02/17/2020
Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization
We develop two new stochastic Gauss-Newton algorithms for solving a clas...

03/30/2020
Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models
Stochastic optimization lies at the core of most statistical learning mo...

06/27/2020
Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax Problems
We develop a novel variance-reduced algorithm to solve a stochastic nonc...
