Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions

05/11/2018
by Mingrui Liu, et al.

Error bound conditions (EBC) are properties that characterize the growth of an objective function as a point moves away from the optimal set. They have recently received increasing attention in the field of optimization for developing algorithms with fast convergence. However, studies of EBC in statistical learning have so far been limited. The main contributions of this paper are two-fold. First, we develop fast and intermediate rates of empirical risk minimization (ERM) under EBC for risk minimization with Lipschitz continuous random functions and with smooth convex random functions. Second, we establish fast and intermediate rates of an efficient stochastic approximation (SA) algorithm for risk minimization with Lipschitz continuous random functions, which requires only one pass over the n samples and adapts to EBC. For both approaches, the convergence rates span a full spectrum between O(1/√n) and O(1/n) depending on the power constant in the EBC, and can even be faster than O(1/n) in special cases for ERM. Moreover, these convergence rates are achieved adaptively, without using any knowledge of the EBC. Overall, this work not only strengthens the understanding of ERM for statistical learning but also brings new fast stochastic algorithms for solving a broad range of statistical learning problems.
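As a point of reference, the sketch below spells out one commonly used form of an EBC and the rate interpolation it yields. The specific convention (squared distance to the optimal set Ω_* bounded by the optimality gap raised to a power θ ∈ (0, 1]) is assumed from standard usage in this literature and is not quoted verbatim from the paper.

```latex
% One standard form of an error bound condition (EBC): the squared
% distance from w to the optimal set \Omega_* is bounded by the
% optimality gap raised to a power \theta.  (Convention assumed here,
% not quoted from the paper.)
\[
  \operatorname{dist}^2(w, \Omega_*) \;\le\; c \,\bigl( F(w) - F_* \bigr)^{\theta},
  \qquad \theta \in (0, 1],\; c > 0.
\]
% Under such a condition, an excess-risk rate of the form
\[
  F(\widehat{w}_n) - F_* \;=\; O\!\bigl( n^{-1/(2-\theta)} \bigr)
\]
% interpolates between O(1/\sqrt{n}) as \theta \to 0 and O(1/n) at
% \theta = 1, matching the spectrum described in the abstract.
```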

research · 12/12/2017
Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity
We generalize the classic convergence rate theory for subgradient method...

research · 02/01/2021
Fast rates in structured prediction
Discrete supervised learning problems such as classification are often t...

research · 08/13/2019
Distributionally Robust Optimization: A Review
The concepts of risk-aversion, chance-constrained optimization, and robu...

research · 11/24/2015
Performance Limits of Stochastic Sub-Gradient Learning, Part I: Single Agent Case
In this work and the supporting Part II, we examine the performance of s...

research · 07/16/2021
Adaptive first-order methods revisited: Convex optimization without Lipschitz requirements
We propose a new family of adaptive first-order methods for a class of c...

research · 12/16/2021
Analysis of Generalized Bregman Surrogate Algorithms for Nonsmooth Nonconvex Statistical Learning
Modern statistical applications often involve minimizing an objective fu...
