Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints

07/19/2021
by Shaojie Li, et al.

The generalization performance of stochastic optimization occupies a central place in learning theory. In this paper, we investigate the excess risk performance of, and work towards improved learning rates for, two popular approaches to stochastic optimization: empirical risk minimization (ERM) and stochastic gradient descent (SGD). Although there exist plentiful generalization analyses of ERM and SGD for supervised learning, current theoretical understandings of ERM and SGD either rely on strong assumptions in convex learning, e.g., strong convexity, or yield slow rates and remain less studied in nonconvex learning. Motivated by these problems, we aim to provide improved rates under milder assumptions in convex learning and to derive faster rates in nonconvex learning. Notably, our analysis spans two popular theoretical viewpoints: stability and uniform convergence. Specifically, in the stability regime, we present high-probability learning rates of order 𝒪(1/n) w.r.t. the sample size n for ERM and SGD under milder assumptions in convex learning, and similar high-probability rates of order 𝒪(1/n) in nonconvex learning, rather than rates that hold only in expectation. Furthermore, this type of learning rate is improved to the faster order 𝒪(1/n^2) in the uniform convergence regime. To the best of our knowledge, the learning rates presented in this paper for ERM and SGD are all state-of-the-art.
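For reference, the following is a minimal LaTeX sketch of the standard quantities behind the abstract: the population and empirical risks, the excess risk that the stated 𝒪(1/n) and 𝒪(1/n^2) rates bound, the ERM solution, and a projected SGD update. The notation (loss f, parameter domain W, step sizes η_t) is assumed here for illustration and follows common usage in this literature rather than the paper's exact statement.

% Standard definitions (assumed notation, not verbatim from the paper).
% Population risk and empirical risk over a sample S = {z_1, ..., z_n} drawn i.i.d. from D:
\[
  F(\mathbf{w}) = \mathbb{E}_{z \sim \mathcal{D}}\,[f(\mathbf{w}; z)],
  \qquad
  F_S(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n} f(\mathbf{w}; z_i).
\]
% Excess risk of an algorithm's output w_S, the quantity the learning rates bound:
\[
  F(\mathbf{w}_S) - \min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w}).
\]
% Empirical risk minimization (ERM):
\[
  \mathbf{w}_S^{\mathrm{ERM}} \in \operatorname*{arg\,min}_{\mathbf{w} \in \mathcal{W}} F_S(\mathbf{w}).
\]
% Projected SGD: at step t, draw an index i_t uniformly from {1, ..., n} and update
\[
  \mathbf{w}_{t+1} = \Pi_{\mathcal{W}}\bigl(\mathbf{w}_t - \eta_t \nabla f(\mathbf{w}_t; z_{i_t})\bigr),
\]
% where Pi_W denotes the Euclidean projection onto W.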

