Better Theory for SGD in the Nonconvex World

02/09/2020
by Ahmed Khaled, et al.

Large-scale nonconvex optimization problems are ubiquitous in modern machine learning, and among practitioners interested in solving them, Stochastic Gradient Descent (SGD) reigns supreme. We revisit the analysis of SGD in the nonconvex setting and propose a new variant of the recently introduced expected smoothness assumption, which governs the behaviour of the second moment of the stochastic gradient. We show that our assumption is both more general and more reasonable than the assumptions made in all prior work. Moreover, our results yield the optimal O(ε^-4) rate for finding a stationary point of nonconvex smooth functions, and recover the optimal O(ε^-1) rate for finding a global solution if the Polyak-Łojasiewicz condition is satisfied. We compare against convergence rates under convexity and prove a theorem on the convergence of SGD under Quadratic Functional Growth and convexity, which might be of independent interest. Furthermore, we perform our analysis in a framework which allows for a detailed study of the effects of a wide array of sampling strategies and minibatch sizes for finite-sum optimization problems. We corroborate our theoretical results with experiments on real and synthetic data.
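To make the setting concrete, below is a minimal sketch of minibatch SGD on a synthetic nonconvex finite-sum objective f(x) = (1/n) Σ_i f_i(x). The particular loss (a sigmoid least-squares fit), the synthetic data, and the step size, batch size, and iteration budget are illustrative assumptions rather than the paper's experimental setup; uniform sampling without replacement is just one of the sampling strategies the paper's framework covers. The printed quantity, the squared norm of the full gradient, is the stationarity measure behind the O(ε^-4) rate.

```python
# Minimal sketch: minibatch SGD on a synthetic nonconvex finite-sum problem
# f(x) = (1/n) * sum_i f_i(x), with f_i(x) = 0.5 * (sigmoid(a_i^T x) - b_i)^2
# ("sigmoid least squares", nonconvex in x). All choices below (data, step
# size, batch size, iteration budget) are illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10                                 # number of components f_i and dimension
A = rng.standard_normal((n, d))                # synthetic features a_i (rows of A)
b = rng.integers(0, 2, size=n).astype(float)   # synthetic 0/1 targets

def minibatch_grad(x, idx):
    """Gradient of the average of f_i over the minibatch `idx`."""
    z = A[idx] @ x
    p = 1.0 / (1.0 + np.exp(-z))               # sigmoid(a_i^T x)
    r = (p - b[idx]) * p * (1.0 - p)           # chain rule: residual * sigmoid'
    return A[idx].T @ r / len(idx)

def sgd(x0, stepsize=0.5, batch_size=8, iters=5000):
    """Plain SGD with uniform minibatch sampling without replacement."""
    x = x0.copy()
    for _ in range(iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        x = x - stepsize * minibatch_grad(x, idx)
    return x

x_out = sgd(np.zeros(d))
# Stationarity measure underlying the O(eps^-4) rate: squared full-gradient norm.
full_grad = minibatch_grad(x_out, np.arange(n))
print("||grad f(x)||^2 at output:", np.linalg.norm(full_grad) ** 2)
```

Varying `batch_size` and the rule used to draw `idx` is the kind of knob the paper's sampling framework is designed to analyze; the sketch fixes one simple choice for readability.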

Related research

- Stochastic Variance Reduction for Nonconvex Optimization (03/19/2016): We study nonconvex finite-sum problems and analyze stochastic variance r...
- A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization (06/12/2020): In this paper, we study the performance of a large family of SGD variant...
- Scenario approach for minmax optimization with emphasis on the nonconvex case: positive results and caveats (06/04/2019): We treat the so-called scenario approach, a popular probabilistic approx...
- SGD with shuffling: optimal rates without component convexity and large epoch requirements (06/12/2020): We study without-replacement SGD for solving finite-sum optimization pro...
- Dropping Convexity for More Efficient and Scalable Online Multiview Learning (02/27/2017): Multiview representation learning is very popular for latent factor anal...
- PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization (08/25/2020): In this paper, we propose a novel stochastic gradient estimator—ProbAbil...
- Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes (08/29/2018): Solving statistical learning problems often involves nonconvex optimizat...
