1 Introduction
Consider a stochastic optimization problem
where is a nonempty bounded closed convex set,
is a random variable,
is a smooth convex function,is a smooth stronglyconvex function. The requirement of smoothness simplifies the analysis. If the objective function is nonsmooth but satisfies Lipschitz continuity, stochastic gradient descent algorithms can replace gradients with subgradients, but the analysis has to introduce an additional term in the same order as the variance term. Some nonsmooth cases have been studied in
[lan08:_effic_method_stoch_compos_optim] and [ghadimi12:_optim_stoch_approx_algor_stron].Assume that the domain is bounded, i.e. . Let be a stochastic gradient of function at with a random variable . Then is a gradient of . Assume that , where is known as the Lipschitz constant. We only consider strongly convex function in this note, thus assume that there is , such that . We assume that stochastic gradients are bounded, i.e., there exists , such that
We are interested in the conditional number , which is defined as . The conditional number, , could be as large as , where is the number of samples and
. One reference case is regularized linear classifiers
[smale03:_estim_approx_error_learn_theor], where the regularization factor could be as large as . The other reference case is the conditional number of a random matrix [rudelson09:_small], where the smallest singular value is
. When , , which bridges the gap between the convergence rate for strongly convex functions and that for those without strongly convex condition. In this note, we assume . We use big notation in term of and and hide the factors , and besides constants.Notation
Denote by . Let be a sequence of independent random variables. Denote . We define . Then , and for , .
2 Stochastic gradient descent algorithm
Algorithm 1 shows the stochastic gradient descent method. Unlike the conventional averaging by equal weights , we use a weighting scheme , where . Theorem 1 shows a convergence rate of , assuming that . Let , , , and the coefficients and . The informal argument is that the weighting scheme equalizes the variance of each iteration, since and are assuming that . Assume that the underlying function is strongly convex, i.e., . Let . If , , then it holds for Algorithm 1 that for ,
(1) 
where
Similarly with traditional equal weighting scheme, , we have a convergence rate of in Proposition 2. Informally, implies a convergence rate of . Assume that . Let . If , , then for ,
where
Proposition 1 shows that if the optimal solution is an interior point, it is possible to simply take the nonaveraged solution, . The convergence rate is . However, if , means not convergent, just like the nonaveraged SGD solution without strongly convex conditions. Assume that and the optimal solution is an interior point. Let . If , then for ,
where
There are studies on the high probability convergence rate of stochastic algorithm on strongly convex functions, such as [rakhlin12:_makin_gradien_descen_optim_stron]. The convergence rate usefully is . Here, we prove a convergence rate of with proper weighting scheme.
3 Accelerated Stochastic Gradient Descent Algorithm
Algorithm 2 is a stochastic variant of Nesterov’s accelerated methods. The convergence rate is also . Comparing with Theorem 1, the determinant part in Theorem 3 have a better rate, i.e. . Assume that . If , , then for ,
where
The paper [ghadimi12:_optim_stoch_approx_algor_stron] has its strongly convex version for ACSA for subGaussian gradient assumption, but its proof relies on a multistage algorithm.
Although SAGE [hu09:_accel_gradien_method_stoch_optim_onlin_learn] also provided a stochastic algorithm based on Nesterov’s method for strongly convexity, the high probability bound was not given in the paper.
4 A note on weighting schemes
In this study, we find the interesting property of weighting scheme with , i.e. . The scheme takes advantage of a sequence with variance at the decay rate of . Now let informally investigate a sequence with homogeneous variance, say . With a constant weighting scheme, , i.e. , the averaged variance is . With an exponential weighting scheme, , , i.e. and , the averaged variance is , which is translated to that the number of effective tail samples is a constant . With the weighting scheme or , the averaged variance is , which is translated to effective tail samples. This is a tradeoff between sample efficiency and recency. To make other tradeoffs, We can use a generalized scheme^{1}^{1}1An alternative scheme is or , where , or . Then the averaged variance is approximately .
5 Proofs
The proof strategy is first to construct inequalities from the algorithms in Lemma 5 and 5, then to apply Lemma 5 to derive the probability inequalities. Assume that is martingale difference, , , , , , , , , and
(2)  
(3)  
If the following conditions hold

for ,
(4) 
for ,
(5)
then for ,
(6) 
Proof.
We will prove the following inequality by induction,
(7) 
Eq. 4 implies that Eq. (7) holds for . For ,
(8)  
(9)  
(10)  
(11)  
(12)  
(13) 
where Eq. (8) is due to the assumption of induction; Eq. (9) is due to Eq. (2,3); Eq. (10) is due to ; Eq. (11) is due to , , and Hoeffding’s lemma, thus ; Eq. (12) is due to ; Eq. (13) is due to Eqs. (5). Then for ,
Eq. (6) follows Lemma Supporting lemma. ∎
We prove Lemma 5, which is the same as Lemma 7 of [lan08:_effic_method_stoch_compos_optim] except for the strong convexity. Let , , , . If and , it holds for Algorithm 1 that