Stochastic gradient descent algorithms for strongly convex functions at O(1/T) convergence rates

by Shenghuo Zhu, et al.

With a weighting scheme proportional to t, a traditional stochastic gradient descent (SGD) algorithm achieves a high-probability convergence rate of O(κ/T) for strongly convex functions, instead of O(κ ln(T)/T). We also prove that an accelerated SGD algorithm achieves the O(κ/T) rate.





1 Introduction

Consider a stochastic optimization problem
\[
\min_{x \in X} F(x) := \mathbb{E}_{\xi}[f(x, \xi)],
\]
where $X$ is a nonempty bounded closed convex set, $\xi$ is a random variable, $f(\cdot, \xi)$ is a smooth convex function, and $F$ is a smooth strongly convex function. The requirement of smoothness simplifies the analysis. If the objective function is nonsmooth but Lipschitz continuous, stochastic gradient descent algorithms can replace gradients with subgradients, but the analysis has to introduce an additional term of the same order as the variance term. Some nonsmooth cases have been studied in

[lan08:_effic_method_stoch_compos_optim] and [ghadimi12:_optim_stoch_approx_algor_stron].

Assume that the domain is bounded, i.e., $\max_{x,y \in X} \|x - y\|^2 \le D$. Let $g(x, \xi)$ be a stochastic gradient of function $f$ at $x$ with a random variable $\xi$. Then $G(x) := \mathbb{E}_{\xi}[g(x, \xi)]$ is a gradient of $F$. Assume that $\|G(x) - G(y)\| \le L\|x - y\|$, where $L$ is known as the Lipschitz constant. We only consider strongly convex functions in this note, thus assume that there is $\mu > 0$ such that $\langle G(x) - G(y), x - y \rangle \ge \mu \|x - y\|^2$. We assume that stochastic gradients are bounded, i.e., there exists $\sigma$, such that
\[
\|g(x, \xi) - G(x)\| \le \sigma .
\]
We are interested in the condition number $\kappa$, which is defined as $\kappa = L/\mu$. The condition number $\kappa$ could be as large as $O(\sqrt{n})$, where $n$ is the number of samples. One reference case is regularized linear classifiers [smale03:_estim_approx_error_learn_theor], where the regularization factor could be as small as $O(1/\sqrt{n})$. The other reference case is the condition number of a random matrix [rudelson09:_small], where the smallest singular value is of order $1/\sqrt{n}$. When $\kappa = \Theta(\sqrt{T})$, $O(\kappa/T) = O(1/\sqrt{T})$, which bridges the gap between the $O(1/T)$ convergence rate for strongly convex functions and the $O(1/\sqrt{T})$ rate for functions without the strong convexity condition. In this note, we assume $\kappa \le T$. We use big-$O$ notation in terms of $T$ and $\kappa$, and hide the factors $\mu$, $L$, $D$ and $\sigma$ besides constants.
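To make the scale of $\kappa$ concrete, here is a small numerical sketch (our own illustration, not from this note) for $\ell_2$-regularized least squares, where the strong convexity constant guaranteed by the regularizer is the regularization factor $\lambda$, so with $\lambda = O(1/\sqrt{n})$ the condition number grows like $\sqrt{n}$:

```python
import numpy as np

# F(x) = ||A x - b||^2 / (2 n) + (lam / 2) ||x||^2.
# The data term is convex but may be degenerate, so the strong convexity
# constant guaranteed for F by the regularizer is mu = lam; the smoothness
# constant is L = lambda_max(A^T A / n) + lam. Hence kappa = L / mu ~ 1 / lam.
rng = np.random.default_rng(0)
n, d = 10_000, 20
A = rng.standard_normal((n, d))
lam = 1.0 / np.sqrt(n)              # regularization factor as small as 1/sqrt(n)

H = A.T @ A / n                     # Hessian of the data term
L = np.linalg.eigvalsh(H).max() + lam
mu = lam
kappa = L / mu
print(f"kappa = {kappa:.1f}, sqrt(n) = {np.sqrt(n):.1f}")
```

With standard normal data, $\lambda_{\max}(A^\top A / n)$ is close to $1$, so $\kappa$ comes out on the order of $\sqrt{n}$.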


Denote by $x_*$ the optimal solution. Let $\{\xi_t : t = 1, \dots, T\}$ be a sequence of independent random variables. Denote $\delta_t = g(x_t, \xi_t) - G(x_t)$. We define $\mathbb{E}_t[\cdot] = \mathbb{E}[\cdot \mid \xi_1, \dots, \xi_{t-1}]$. Then $\mathbb{E}_t[\delta_t] = 0$, and for $t = 1, \dots, T$, $\|\delta_t\| \le \sigma$.

2 Stochastic gradient descent algorithm

1:  Input: initial solution $x_1$, step sizes $\{\gamma_t\}$ and averaging factors $\{\theta_t\}$.
2:  for $t = 1, \dots, T$ do
3:     Let sample gradient $g_t = g(x_t, \xi_t)$, where $\xi_t$ is independent from $x_t$.
4:     Let $x_{t+1} = \Pi_X(x_t - \gamma_t g_t)$;
5:     Set $\bar{x}_{t+1} = (1 - \theta_t)\bar{x}_t + \theta_t x_{t+1}$;
6:  end for
7:  Output: $\bar{x}_{T+1}$.
Algorithm 1 Stochastic gradient descent algorithm
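As a sketch of this method (our own illustration; the ball-shaped projection set, the step size $\gamma_t = 2/(\mu(t+2))$ and the averaging factor $\theta_t = 2/(t+2)$ are assumptions, not taken verbatim from this note):

```python
import numpy as np

def weighted_sgd(grad, x0, mu, T, radius, rng):
    """Projected SGD with averaging factors theta_t = 2/(t+2), which weight
    iterate x_{t+1} proportionally to t+1 in the final average."""
    x = np.asarray(x0, dtype=float).copy()
    x_bar = x.copy()
    for t in range(1, T + 1):
        g = grad(x, rng)                        # sample a stochastic gradient
        x = x - (2.0 / (mu * (t + 2))) * g      # gradient step
        nrm = np.linalg.norm(x)
        if nrm > radius:                        # project onto the ball ||x|| <= radius
            x = x * (radius / nrm)
        theta = 2.0 / (t + 2)
        x_bar = (1.0 - theta) * x_bar + theta * x   # weighted running average
    return x_bar

# Usage: minimize F(x) = (mu/2) ||x - x_*||^2 with additive gradient noise.
rng = np.random.default_rng(0)
mu, x_star = 0.5, np.array([1.0, -2.0])
grad = lambda x, rng: mu * (x - x_star) + 0.1 * rng.standard_normal(2)
x_bar = weighted_sgd(grad, np.zeros(2), mu, T=2000, radius=5.0, rng=rng)
print(np.linalg.norm(x_bar - x_star))
```

The running average is kept in recursive form so no history of iterates needs to be stored.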

Algorithm 1 shows the stochastic gradient descent method. Unlike the conventional averaging by equal weights, we use a weighting scheme in which the weight of $x_{t+1}$ is proportional to $t+1$, obtained by setting $\theta_t = 2/(t+2)$. Theorem 1 shows a convergence rate of $O(\kappa/T)$, assuming that $\kappa \le T$. The informal argument is that the weighting scheme equalizes the variance contributed by each iteration, since the variance of iteration $t$ decays at the rate $O(1/t)$ when $\gamma_t = O(1/(\mu t))$.

Theorem 1. Assume that the underlying function is strongly convex, i.e., $\langle G(x) - G(y), x - y \rangle \ge \mu \|x - y\|^2$. If $\gamma_t = 2/(\mu(t+2))$ and $\theta_t = 2/(t+2)$, then it holds for Algorithm 1 that for $\epsilon > 0$,



Similarly, with the traditional equal weighting scheme, $\theta_t = 1/(t+1)$, we have a convergence rate of $O(\kappa \ln(T)/T)$ in Proposition 2. Informally, $\sum_{t=1}^{T} 1/t = O(\ln T)$ implies a convergence rate of $O(\kappa \ln(T)/T)$.

Proposition 2. Assume that $\kappa \le T$. If $\gamma_t = 2/(\mu(t+2))$ and $\theta_t = 1/(t+1)$, then for $\epsilon > 0$,


Proposition 1 shows that if the optimal solution is an interior point, it is possible to simply take the non-averaged solution, $x_{T+1}$. The convergence rate is $O(\kappa^2/T)$. However, if $\kappa = \Omega(\sqrt{T})$, then $O(\kappa^2/T) = \Omega(1)$, which does not imply convergence, just like the non-averaged SGD solution without strong convexity conditions.

Proposition 1. Assume that $\kappa \le T$ and the optimal solution is an interior point. If $\gamma_t = 2/(\mu(t+2))$, then for $\epsilon > 0$,


There are studies on the high-probability convergence rate of stochastic algorithms on strongly convex functions, such as [rakhlin12:_makin_gradien_descen_optim_stron]. The convergence rate is usually $O(\kappa \ln(T)/T)$. Here, we prove a convergence rate of $O(\kappa/T)$ with a proper weighting scheme.

3 Accelerated Stochastic Gradient Descent Algorithm

1:  Input: $x_1$, $\{\gamma_t\}$, $\{\alpha_t\}$, $\{\theta_t\}$;
2:  Let $z_1 = x_1$;
3:  for $t = 1, \dots, T$ do
4:     Let $y_t = (1 - \alpha_t) x_t + \alpha_t z_t$;
5:     Let $g_t = g(y_t, \xi_t)$, where $\xi_t$ is a sample;
6:     Let $z_{t+1} = \Pi_X(z_t - \gamma_t g_t)$;
7:     Set $x_{t+1} = (1 - \theta_t) x_t + \theta_t z_{t+1}$;
8:  end for
9:  Output: $x_{T+1}$.
Algorithm 2 Accelerated Stochastic Gradient Descent algorithm
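A minimal sketch of the accelerated variant, assuming a two-sequence Nesterov-style coupling with $\alpha_t = \theta_t = 2/(t+2)$ and $\gamma_t = 2/(\mu(t+2))$ (illustrative parameter choices, not the note's exact schedule):

```python
import numpy as np

def accelerated_sgd(grad, x0, mu, T, radius, rng):
    """Two-sequence accelerated stochastic gradient sketch: query at the
    extrapolated point y_t, step on z_t, and average into x_t."""
    x = np.asarray(x0, dtype=float).copy()
    z = x.copy()
    for t in range(1, T + 1):
        alpha = 2.0 / (t + 2)                   # coupling / averaging factor
        gamma = 2.0 / (mu * (t + 2))            # step size for the z-sequence
        y = (1.0 - alpha) * x + alpha * z       # extrapolated query point
        g = grad(y, rng)                        # stochastic gradient sampled at y
        z = z - gamma * g
        nrm = np.linalg.norm(z)
        if nrm > radius:                        # projection onto the feasible ball
            z = z * (radius / nrm)
        x = (1.0 - alpha) * x + alpha * z       # averaged iterate
    return x

rng = np.random.default_rng(1)
mu, x_star = 0.5, np.array([1.0, -2.0])
grad = lambda x, rng: mu * (x - x_star) + 0.1 * rng.standard_normal(2)
x_out = accelerated_sgd(grad, np.zeros(2), mu, T=2000, radius=5.0, rng=rng)
print(np.linalg.norm(x_out - x_star))
```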

Algorithm 2 is a stochastic variant of Nesterov's accelerated methods. The convergence rate is also $O(\kappa/T)$. Compared with Theorem 1, the deterministic part of the bound in Theorem 3 has a better rate.

Theorem 3. Assume that $\kappa \le T$. With suitable choices of $\gamma_t$, $\alpha_t$ and $\theta_t$, it holds for Algorithm 2 that for $\epsilon > 0$,


The paper [ghadimi12:_optim_stoch_approx_algor_stron] gives a strongly convex version of AC-SA under a sub-Gaussian gradient assumption, but its proof relies on a multi-stage algorithm.

Although SAGE [hu09:_accel_gradien_method_stoch_optim_onlin_learn] also provides a stochastic algorithm based on Nesterov's method for strong convexity, a high-probability bound was not given in that paper.

4 A note on weighting schemes

In this study, we find an interesting property of the weighting scheme with weights proportional to $t$, i.e. $\theta_t = 2t/(T(T+1))$. The scheme takes advantage of a sequence whose variance decays at the rate $O(1/t)$. Now let us informally investigate a sequence with homogeneous variance, say $\mathrm{Var}[x_t] = \sigma^2$ for all $t$. With a constant weighting scheme, $\theta_t = 1/T$, the averaged variance is $\sigma^2/T$. With an exponential weighting scheme, $\theta_t \propto \beta^{T-t}$ for some $0 < \beta < 1$, the averaged variance is about $\frac{1-\beta}{1+\beta}\sigma^2$, which translates to a constant number of effective tail samples, about $(1+\beta)/(1-\beta)$. With the weighting scheme $\theta_t \propto t$, the averaged variance is about $\frac{4}{3}\sigma^2/T$, which translates to about $3T/4$ effective tail samples. This is a trade-off between sample efficiency and recency. To make other trade-offs, we can use a generalized scheme.¹

¹ An alternative scheme is $\theta_t \propto t^{\alpha}$ with $\alpha \ge 0$. Then the averaged variance is approximately $\frac{(\alpha+1)^2}{2\alpha+1}\,\sigma^2/T$.
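The variance bookkeeping above can be checked numerically. The snippet below (our own illustration) computes the averaged variance $\sum_t \theta_t^2 \sigma^2$ with $\sigma = 1$ for three weighting schemes, and the implied number of effective tail samples $1/\sum_t \theta_t^2$:

```python
import numpy as np

T = 1000
t = np.arange(1, T + 1)

def averaged_variance(w):
    theta = w / w.sum()              # normalize weights to sum to one
    return np.sum(theta ** 2)        # variance of the weighted average (sigma = 1)

uniform = averaged_variance(np.ones(T))           # ~ 1/T
linear = averaged_variance(t.astype(float))       # ~ (4/3)/T
beta = 0.99
exponential = averaged_variance(beta ** (T - t))  # ~ (1 - beta)/(1 + beta)

for name, v in [("uniform", uniform), ("theta ~ t", linear), ("exponential", exponential)]:
    print(f"{name}: variance {v:.5f}, effective samples {1 / v:.0f}")
```

For $T = 1000$ this reports roughly $T$, $3T/4$ and $(1+\beta)/(1-\beta) \approx 199$ effective samples, respectively.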

5 Proofs

The proof strategy is first to construct inequalities from the algorithms in the two supporting lemmas, and then to apply the following concentration lemma to derive the probability inequalities. Assume that $\{\delta_t\}$ is a martingale difference sequence, and


If the following conditions hold

  1. for ,

  2. for ,


then for ,


We will prove the following inequality by induction,


Eq. (4) implies that Eq. (7) holds for $t = 1$. For $t > 1$,


where Eq. (8) is due to the induction hypothesis; Eq. (9) is due to Eqs. (2) and (3); Eq. (10) is due to the conditions of the lemma; Eq. (11) is due to the martingale difference property and Hoeffding's lemma; Eq. (12) is due to the conditions of the lemma; Eq. (13) is due to Eq. (5). Then

Eq. (6) follows from the supporting lemma. ∎

We prove Lemma 5, which is the same as Lemma 7 of [lan08:_effic_method_stoch_compos_optim] except for the strong convexity term. It holds for Algorithm 1 that


Let .


Eq. (14) is due to the Lipschitz continuity of $G$, Eq. (15) is due to the strong convexity of $F$, and Eq. (16) is due to the optimality of Step 4. ∎

Proof of Theorem 1.

It follows from Lemma 5 that

It then follows from Lemma 5 that

where , , and . Let . Assume that and . Then

Note that we use the factor for simplicity. Let , , , , and