Accelerated Stochastic Subgradient Methods under Local Error Bound Condition

07/04/2016
by   Yi Xu, et al.

In this paper, we propose two accelerated stochastic subgradient methods for stochastic non-strongly convex optimization problems by leveraging a generic local error bound condition. The novelty of the proposed methods lies in leveraging a recent historical solution to tame the variance of the stochastic subgradient. The key idea of both methods is to iteratively solve the original problem approximately in a local region around a recent historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. The two methods differ in how the local region is constructed: the first uses an explicit ball constraint, while the second uses an implicit regularization approach. For both methods, we establish an improved iteration complexity that holds with high probability for achieving an ϵ-optimal solution. Besides the improved order of iteration complexity with high probability, the proposed algorithms also enjoy a logarithmic dependence on the distance of the initial solution to the optimal set. We also consider applications in machine learning and demonstrate that the proposed algorithms converge faster than the traditional stochastic subgradient method. For example, when applied to ℓ_1 regularized polyhedral loss minimization (e.g., hinge loss, absolute loss), the proposed stochastic methods have a logarithmic iteration complexity.
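To make the ball-constrained construction concrete, below is a minimal sketch of a restarted, projected stochastic subgradient loop in the spirit of the first method. The function names (sgd_in_ball, assg_ball), the averaging of iterates, and the geometric halving of the region size and step size are illustrative assumptions, not the paper's exact algorithm or constants.

```python
# Minimal sketch (assumed names and schedule; not the authors' code).
import numpy as np

def sgd_in_ball(stoch_subgrad, x_ref, radius, eta, num_iters, rng):
    """Run projected stochastic subgradient steps inside a ball
    centered at a recent historical solution x_ref."""
    x = x_ref.copy()
    avg = np.zeros_like(x)
    for _ in range(num_iters):
        g = stoch_subgrad(x, rng)          # noisy subgradient at x
        x = x - eta * g                    # subgradient step
        # project back onto the ball {x : ||x - x_ref|| <= radius}
        d = x - x_ref
        norm = np.linalg.norm(d)
        if norm > radius:
            x = x_ref + radius * d / norm
        avg += x
    return avg / num_iters                 # averaged iterate of this stage

def assg_ball(stoch_subgrad, x0, radius0, eta0, num_stages, iters_per_stage,
              seed=0):
    """Restarted scheme: each stage solves the problem approximately in a
    local region around the previous stage's output, then shrinks both the
    region size and the step size (halving is one common schedule)."""
    rng = np.random.default_rng(seed)
    x, radius, eta = x0.copy(), radius0, eta0
    for _ in range(num_stages):
        x = sgd_in_ball(stoch_subgrad, x, radius, eta, iters_per_stage, rng)
        radius /= 2.0
        eta /= 2.0
    return x
```

The second (implicit regularization) variant would replace the explicit ball projection with a proximal/regularization term anchored at the historical solution; the restart-and-shrink structure is the same.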


