## 1 Introduction

Many problems in data science (e.g., machine learning, optimization and statistics) can be cast as loss minimization problems of the form

$$\min_{x\in\mathbb{R}^d} f(x), \qquad (1)$$

where

$$f(x) \stackrel{\text{def}}{=} \frac{1}{n}\sum_{i=1}^n f_i(x). \qquad (2)$$

Here $d$ typically denotes the number of features / coordinates, $n$ the number of examples, and $f_i(x)$ is the loss incurred on example $i$. That is, we are seeking to find a predictor $x \in \mathbb{R}^d$ minimizing the average loss $f(x)$. In big data applications, $n$ is typically very large; in particular, $n \gg d$.

Note that this formulation includes the more typical formulation of $\ell_2$-regularized objectives; we hide the regularizer in the functions $f_i$ for the sake of simplicity of the resulting analysis.

### 1.1 Motivation

Let us now briefly review two basic approaches to solving problem (1).

**Gradient Descent (GD).** Gradient descent performs the iteration

$$x_{k+1} = x_k - h f'(x_k),$$

where $h$ is a stepsize parameter and $f'(x_k)$ is the gradient of $f$ at $x_k$. We will refer to $f'$ by the name full gradient. In order to compute $f'(x_k)$, we need to compute the gradients of all $n$ functions. Since $n$ is big, it is prohibitive to do this at every iteration.

**Stochastic Gradient Descent (SGD).** Instead, SGD picks an index $i \in \{1, \dots, n\}$ uniformly at random and performs the update

$$x_{k+1} = x_k - h f'_i(x_k).$$

Note that this strategy drastically reduces the amount of work that needs to be done in each iteration (by a factor of $n$). Since

$$\mathbf{E}\big(f'_i(x_k)\big) = f'(x_k),$$

we have an unbiased estimator of the full gradient. Hence, the gradients of the component functions $f_1, \dots, f_n$ will be referred to as stochastic gradients. A practical issue with SGD is that consecutive stochastic gradients may vary a lot or even point in opposite directions. This slows down the performance of SGD. On balance, however, SGD is preferable to GD in applications where low-accuracy solutions are sufficient. In such cases usually only a small number of passes through the data (i.e., work equivalent to a small number of full gradient evaluations) is needed to find an acceptable $x$. For this reason, SGD is extremely popular in fields such as machine learning.
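As a concrete (toy) illustration of this work trade-off, the following sketch compares one full-gradient step of GD with one pass of SGD (the same budget of $n$ stochastic gradients) on a synthetic least-squares problem; the problem instance and all names here are ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

def f(x):
    # f(x) = (1/n) sum_i f_i(x), with f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return 0.5 * np.mean((A @ x - b) ** 2)

def full_grad(x):        # one evaluation costs n component gradients
    return A.T @ (A @ x - b) / n

def stoch_grad(x, i):    # one component gradient f'_i(x)
    return A[i] * (A[i] @ x - b[i])

h = 0.1
x_sgd = np.zeros(d)
for _ in range(n):       # one pass over the data: n cheap steps
    i = rng.integers(n)
    x_sgd -= h * stoch_grad(x_sgd, i)

# Same total work as the SGD pass above: a single full-gradient step.
x_gd = np.zeros(d) - h * full_grad(np.zeros(d))

print(f(np.zeros(d)), f(x_gd), f(x_sgd))
```

After equal work, the single pass of SGD typically makes far more progress than the one GD step it is equivalent to, at the price of a noise floor around the solution.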

In order to improve upon GD, one needs to reduce the cost of computing a gradient. In order to improve upon SGD, one has to reduce the variance of the stochastic gradients. In this paper we propose and analyze a

Semi-Stochastic Gradient Descent (S2GD) method. Our method combines GD and SGD steps and reaps the benefits of both algorithms: it inherits the stability and speed of GD and at the same time retains the work-efficiency of SGD.

### 1.2 Brief literature review

Several recent papers, e.g., Richtárik & Takáč [8], Le Roux, Schmidt & Bach [11, 12], Shalev-Shwartz & Zhang [13] and Johnson & Zhang [3] proposed methods which achieve such a variance-reduction effect, directly or indirectly. These methods enjoy linear convergence rates when applied to minimizing smooth strongly convex loss functions.

The method in [8] is known as Random Coordinate Descent for Composite functions (RCDC), and can be either applied directly to (1)—in which case a single iteration requires work for a dense problem, and iterations in total—or to a dual version of (1), which requires work per iteration and iterations in total. Application of a coordinate descent method to a dual formulation of (1) is generally referred to as Stochastic Dual Coordinate Ascent (SDCA) [2]. The algorithm in [13] exhibits this duality, and the method in [14] extends the primal-dual framework to the parallel / mini-batch setting. Parallel and distributed stochastic coordinate descent methods were studied in [9, 1, 10].

Stochastic Average Gradient (SAG) [11] is one of the first SGD-type methods, other than coordinate descent methods, which were shown to exhibit linear convergence. The method of Johnson and Zhang [3], called Stochastic Variance Reduced Gradient (SVRG), arises as a special case in our setting for a suboptimal choice of a single parameter of our method. The Epoch Mixed Gradient Descent (EMGD) method [16] is similar in spirit to SVRG, but achieves a quadratic dependence on the condition number instead of a linear dependence, as is the case with SAG, SVRG and with our method.

For classical work on semi-stochastic gradient descent methods we refer the reader³ to the papers of Murti and Fuchs [4, 5]. (³We thank Zaid Harchaoui, who pointed us to these papers a few days before we posted our work to arXiv.)

### 1.3 Outline

We start in Section 2 by describing two algorithms: S2GD, which we analyze, and S2GD+, which we do not analyze, but which exhibits superior performance in practice. We then move to summarizing some of the main contributions of this paper in Section 3. Section 4 is devoted to establishing expectation and high-probability complexity results for S2GD in the case of a strongly convex loss. The results are generic in that the parameters of the method are set arbitrarily. Hence, in Section 5 we study the problem of choosing the parameters optimally, with the goal of minimizing the total workload (# of processed examples) sufficient to produce a result of sufficient accuracy. In Section 6 we establish high-probability complexity bounds for S2GD applied to a non-strongly convex loss function. Finally, in Section 7 we perform very encouraging numerical experiments on real and artificial problem instances. A brief conclusion can be found in Section 8.

## 2 Semi-Stochastic Gradient Descent

In this section we describe two novel algorithms: S2GD and S2GD+. We analyze the former only. The latter, however, has superior convergence properties in our experiments.

We assume throughout the paper that the functions $f_i$ are convex and $L$-smooth.

###### Assumption 1.

The functions $f_1, \dots, f_n$ have Lipschitz continuous gradients with constant $L > 0$ (in other words, they are $L$-smooth). That is, for all $x, z \in \mathbb{R}^d$ and all $i = 1, 2, \dots, n$,

$$f_i(z) \le f_i(x) + \langle f'_i(x), z - x \rangle + \frac{L}{2}\|z - x\|^2.$$

(This implies that the gradient of $f$ is Lipschitz with constant $L$, and hence $f$ satisfies the same inequality.)

In one part of the paper (Section 4) we also make the following additional assumption:

###### Assumption 2.

The average loss $f$ is $\mu$-strongly convex, $\mu > 0$. That is, for all $x, z \in \mathbb{R}^d$,

$$f(z) \ge f(x) + \langle f'(x), z - x \rangle + \frac{\mu}{2}\|z - x\|^2. \qquad (3)$$

(Note that, necessarily, $\mu \le L$.)

### 2.1 S2GD

Algorithm 1 (S2GD) depends on three parameters: the stepsize $h$, the constant $m$ limiting the number of stochastic gradients computed in a single epoch, and a $\nu \in [0, \mu]$, where $\mu$ is the strong convexity constant of $f$. In practice, $\nu$ would be a known lower bound on $\mu$. Note that the algorithm works also without any knowledge of the strong convexity parameter; this is the case of $\nu = 0$.

The method has an outer loop, indexed by an epoch counter $j$, and an inner loop, indexed by $t$. In each epoch $j$, the method first computes $g_j = f'(x_j)$, the full gradient of $f$ at $x_j$. Subsequently, the method produces a random number $t_j \in \{1, \dots, m\}$ of inner steps, following the geometric law

$$P(t_j = t) = \frac{(1-\nu h)^{m-t}}{\beta}, \quad t = 1, \dots, m, \qquad \text{where} \qquad \beta \stackrel{\text{def}}{=} \sum_{t=1}^m (1-\nu h)^{m-t}, \qquad (4)$$

with only two stochastic gradients computed in each step.⁴ For each $t$, the stochastic gradient $f'_i(x_j)$ is subtracted from $g_j$, and $f'_i(y_{j,t-1})$ is added to $g_j$, which ensures that one has

$$\mathbf{E}\big(g_j + f'_i(y_{j,t-1}) - f'_i(x_j)\big) = f'(y_{j,t-1}),$$

where the expectation is with respect to the random variable $i$, chosen uniformly at random from $\{1, \dots, n\}$.

(⁴It is possible to get away with computing only a single stochastic gradient per inner iteration, namely $f'_i(y_{j,t-1})$, at the cost of having to store the gradients $f'_i(x_j)$ in memory for $i = 1, \dots, n$. This, however, will be impractical for big $n$.)

Hence, the algorithm is stochastic gradient descent – albeit executed in a nonstandard way (compared to the traditional implementation described in the introduction).

Note that for all $j$, the expected number of iterations of the inner loop, $\mathbf{E}(t_j)$, is equal to

$$\xi = \xi(m,h) \stackrel{\text{def}}{=} \sum_{t=1}^m t\,\frac{(1-\nu h)^{m-t}}{\beta}. \qquad (5)$$

Also note that $\frac{m+1}{2} \le \xi \le m$, with the lower bound attained for $\nu = 0$, and the upper bound approached as $\nu h \to 1$.
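A minimal sketch of the S2GD loop just described, on a toy least-squares instance (the objective and all names are ours; this illustrates the algorithm's structure, not the authors' implementation):

```python
import numpy as np

def s2gd(grad_i, n, x0, h, m, nu, epochs, rng):
    """Sketch of S2GD (Algorithm 1): outer epochs with a full gradient,
    inner loop of random length following the geometric law (4)."""
    x = x0.copy()
    # P(t_j = t) proportional to (1 - nu*h)^(m - t), t = 1..m; beta normalizes.
    weights = (1.0 - nu * h) ** (m - np.arange(1, m + 1))
    probs = weights / weights.sum()
    for _ in range(epochs):
        g = np.mean([grad_i(x, i) for i in range(n)], axis=0)  # full gradient f'(x_j)
        t_j = rng.choice(np.arange(1, m + 1), p=probs)         # random inner-loop length
        y = x.copy()
        for _ in range(t_j):
            i = rng.integers(n)
            # Two stochastic gradients per inner step; the direction is unbiased.
            y = y - h * (g + grad_i(y, i) - grad_i(x, i))
        x = y
    return x

# Toy instance: f_i(x) = 0.5*(a_i^T x - b_i)^2, with exact solution x* = 1.
rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ np.ones(d)
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])
x = s2gd(grad_i, n, np.zeros(d), h=0.02, m=2 * n, nu=0.0, epochs=20, rng=rng)
print(np.linalg.norm(x - np.ones(d)))
```

Because the stochastic correction $f'_i(y) - f'_i(x_j)$ vanishes as both points approach the optimum, the iterates converge linearly rather than stalling at an SGD-style noise floor.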

### 2.2 S2GD+

We also implement Algorithm 2, which we call S2GD+. In our experiments, the performance of this method is superior to all methods we tested, including S2GD. However, we do not analyze the complexity of this method and leave this as an open problem.

In brief, S2GD+ starts by running SGD for 1 epoch (1 pass over the data) and then switches to a variant of S2GD in which the number of inner iterations, $m$, is not random, but fixed to be $n$ or a small multiple of $n$.

The motivation for this method is the following. It is common knowledge that SGD is able to progress much more in one pass over the data than GD (where this would correspond to a single gradient step). However, the very first step of S2GD is the computation of the full gradient of $f$. Hence, by starting with a single pass over the data using SGD and then switching to S2GD, we obtain a superior method in practice.⁵ (⁵Using a single pass of SGD as an initialization strategy was already considered in [11]. However, the authors claim that their implementation of vanilla SAG did not benefit from it. S2GD does benefit from such an initialization because, in theory, it starts with a (heavy) full gradient computation.)
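The two-phase idea can be sketched as follows, again on a toy least-squares instance of ours (`s2gd_plus` and all other names are hypothetical, not the authors' code):

```python
import numpy as np

def s2gd_plus(grad_i, n, x0, h, epochs, rng):
    """Sketch of S2GD+: one pass of plain SGD, then S2GD-style epochs
    with the inner-loop length fixed to m = n (not random)."""
    x = x0.copy()
    for _ in range(n):                       # phase 1: one SGD pass over the data
        i = rng.integers(n)
        x -= h * grad_i(x, i)
    for _ in range(epochs):                  # phase 2: variance-reduced epochs
        g = np.mean([grad_i(x, i) for i in range(n)], axis=0)
        y = x.copy()
        for _ in range(n):                   # m fixed to n
            i = rng.integers(n)
            y -= h * (g + grad_i(y, i) - grad_i(x, i))
        x = y
    return x

rng = np.random.default_rng(2)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ np.ones(d)                           # exact solution x* = 1
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])
x = s2gd_plus(grad_i, n, np.zeros(d), h=0.02, epochs=10, rng=rng)
print(np.linalg.norm(x - np.ones(d)))
```

The cheap SGD warm start moves the iterate most of the way before the first (expensive) full-gradient computation is ever paid for.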

## 3 Summary of Results

In this section we summarize some of the main results and contributions of this work.

1. Complexity for strongly convex $f$. If $f$ is strongly convex, S2GD needs

$$W = O\big((n+\kappa)\log(1/\varepsilon)\big) \qquad (6)$$

work (measured as the total number of evaluations of the stochastic gradient, accounting for the full gradient evaluations as well) to output an $\varepsilon$-approximate solution (in expectation or in high probability), where $\kappa = L/\mu$ is the condition number. This is achieved by running S2GD with stepsize $h = O(1/L)$, $j = O(\log(1/\varepsilon))$ epochs (this is also equal to the number of full gradient evaluations) and $m = O(\kappa)$ (this is also roughly equal to the number of stochastic gradient evaluations in a single epoch). The complexity results are stated in detail in Sections 4 and 5 (see Theorems 4, 5 and 6; see also (27) and (26)).

2. Comparison with existing results. This complexity result (6) matches the best-known results obtained for strongly convex losses in recent work such as [11], [3] and [16]. Our treatment is most closely related to [3], and contains their method (SVRG) as a special case. However, our complexity results have better constants, which has a discernible effect in practice. In Table 1 we compare our results in the strongly convex case with other existing results for different algorithms.

We should note that the rate of convergence of Nesterov’s algorithm [7] is a deterministic result. EMGD and S2GD results hold with high probability. The remaining results hold in expectation. Complexity results for stochastic coordinate descent methods are also typically analyzed in the high probability regime [8].

3. Complexity for convex $f$. If $f$ is not strongly convex, then we propose that S2GD be applied to a perturbed version of the problem, with strong convexity constant $\mu = O(\varepsilon)$. An $\varepsilon$-accurate solution of the original problem is recovered with arbitrarily high probability (see Theorem 8 in Section 6). The total work in this case is

$$W = O\big((n + L/\varepsilon)\log(1/\varepsilon)\big),$$

that is, $\tilde{O}(n + L/\varepsilon)$, which is better than the standard rate of SGD.

4. Optimal parameters. We derive formulas for optimal parameters of the method which (approximately) minimize the total workload, measured in the number of stochastic gradients computed (counting a single full gradient evaluation as $n$ evaluations of the stochastic gradient). In particular, we show that the method should be run for $j = O(\log(1/\varepsilon))$ epochs, with stepsize $h = O(1/L)$ and $m = O(\kappa)$. No such results were derived for SVRG in [3].

5. One epoch. In the case when S2GD is run for 1 epoch only, effectively limiting the number of full gradient evaluations to 1, we show that S2GD with $\nu = \mu$ needs

$$O\big(n + (\kappa/\varepsilon)\log(1/\varepsilon)\big)$$

work only (see Table 2). This compares favorably with the optimal complexity in the $\nu = 0$ case (which reduces to SVRG), where the work needed is

$$O(n + \kappa/\varepsilon^2).$$

For two epochs one could just say that we need a decrease by a factor of $\varepsilon^{1/2}$ in each epoch, thus obtaining a complexity of $O\big(n + (\kappa/\sqrt{\varepsilon})\log(1/\varepsilon)\big)$. This is already better than the general rate of SGD.

6. Special cases. GD and SVRG arise as special cases of S2GD, for $m = 1$ and $\nu = 0$, respectively.⁶ (⁶While S2GD reduces to GD for $m = 1$, our analysis does not say anything meaningful in the $m = 1$ case; it is too coarse to cover it. This is also the reason behind the empty space in the “Complexity” column for GD in Table 2.)

7. Low memory requirements. Note that SDCA and SAG, unlike SVRG and S2GD, need to store all $n$ gradients (or dual variables) throughout the iterative process. While this may not be a problem for a modest-sized optimization task, this requirement makes such methods less suitable for problems with very large $n$.

8. S2GD+. We propose a “boosted” version of S2GD, called S2GD+, which we do not analyze. In our experiments, however, it performs vastly better than all other methods we tested, including GD, SGD, SAG and S2GD. S2GD alone is better than both GD and SGD if a highly accurate solution is required. The performance of S2GD and SAG is roughly comparable, even though in our experiments S2GD turned out to have an edge.

## 4 Complexity Analysis: Strongly Convex Loss

For the purpose of the analysis, let

$$\mathcal{F}_{j,t} \stackrel{\text{def}}{=} \sigma\big(x_1, x_2, \dots, x_j;\; y_{j,1}, y_{j,2}, \dots, y_{j,t}\big) \qquad (7)$$

be the $\sigma$-algebra generated by the relevant history of S2GD. We first isolate an auxiliary result.

###### Lemma 3.

Consider the S2GD algorithm. For any fixed epoch number $j$, the following identity holds:

$$\mathbf{E}\big(f(x_{j+1})\big) = \frac{1}{\beta}\sum_{t=1}^m (1-\nu h)^{m-t}\,\mathbf{E}\big(f(y_{j,t-1})\big). \qquad (8)$$
###### Proof.

By the tower property of expectations and the definition of $x_{j+1}$ in the algorithm, we obtain

$$\mathbf{E}\big(f(x_{j+1})\big) = \mathbf{E}\Big(\mathbf{E}\big(f(x_{j+1}) \mid \mathcal{F}_{j,m}\big)\Big) = \mathbf{E}\left(\sum_{t=1}^m \frac{(1-\nu h)^{m-t}}{\beta}\, f(y_{j,t-1})\right) = \frac{1}{\beta}\sum_{t=1}^m (1-\nu h)^{m-t}\,\mathbf{E}\big(f(y_{j,t-1})\big).$$

We now state and prove the main result of this section.

###### Theorem 4.

Let Assumptions 1 and 2 be satisfied. Consider the S2GD algorithm applied to solving problem (1). Choose $0 \le \nu \le \mu$, stepsize $0 < h < \frac{1}{2L}$, and let $m$ be sufficiently large so that

$$c \stackrel{\text{def}}{=} \frac{(1-\nu h)^m}{\beta \mu h (1-2Lh)} + \frac{2(L-\mu)h}{1-2Lh} < 1. \qquad (9)$$

Then we have the following convergence in expectation:

$$\mathbf{E}\big(f(x_j) - f(x_*)\big) \le c^j\big(f(x_0) - f(x_*)\big). \qquad (10)$$

Before we proceed to proving the theorem, note that in the special case with $\nu = 0$, we recover the result of Johnson and Zhang [3] (with a minor improvement in the second term of $c$, where $L$ is replaced by $L - \mu$), namely

$$c = \frac{1}{\mu h(1-2Lh)m} + \frac{2(L-\mu)h}{1-2Lh}. \qquad (11)$$

If we set $\nu = \mu$, then $c$ can be written in the form (see (4))

$$c = \frac{(1-\mu h)^m}{\big(1 - (1-\mu h)^m\big)(1-2Lh)} + \frac{2(L-\mu)h}{1-2Lh}. \qquad (12)$$

Clearly, the latter is a major improvement on the former. We shall elaborate on this further later.

###### Proof.

It is well known [7, Theorem 2.1.5] that, since the functions $f_i$ are $L$-smooth, they necessarily satisfy the following inequality:

$$\|f'_i(x) - f'_i(x_*)\|^2 \le 2L\big[f_i(x) - f_i(x_*) - \langle f'_i(x_*), x - x_*\rangle\big].$$

By summing these inequalities for $i = 1, \dots, n$, and using $f'(x_*) = 0$, we get

$$\frac{1}{n}\sum_{i=1}^n \|f'_i(x) - f'_i(x_*)\|^2 \le 2L\big[f(x) - f(x_*) - \langle f'(x_*), x - x_*\rangle\big] = 2L\big(f(x) - f(x_*)\big). \qquad (13)$$

Let $G_{j,t} \stackrel{\text{def}}{=} g_j + f'_i(y_{j,t-1}) - f'_i(x_j)$ be the direction of update at iteration $j$ in the outer loop and iteration $t$ in the inner loop. Taking expectation with respect to $i$, conditioned on the $\sigma$-algebra $\mathcal{F}_{j,t-1}$ (7), we obtain⁷

$$\begin{aligned}
\mathbf{E}\big(\|G_{j,t}\|^2\big) &= \mathbf{E}\big(\|f'_i(y_{j,t-1}) - f'_i(x_*) - [f'_i(x_j) - f'_i(x_*)] + g_j\|^2\big) \\
&\le 2\,\mathbf{E}\big(\|f'_i(y_{j,t-1}) - f'_i(x_*)\|^2\big) + 2\,\mathbf{E}\big(\|[f'_i(x_j) - f'_i(x_*)] - f'(x_j)\|^2\big) \\
&= 2\,\mathbf{E}\big(\|f'_i(y_{j,t-1}) - f'_i(x_*)\|^2\big) + 2\,\mathbf{E}\big(\|f'_i(x_j) - f'_i(x_*)\|^2\big) \\
&\qquad - 4\,\mathbf{E}\big(\langle f'(x_j),\, f'_i(x_j) - f'_i(x_*)\rangle\big) + 2\|f'(x_j)\|^2 \\
&\le 4L\big[f(y_{j,t-1}) - f(x_*) + f(x_j) - f(x_*)\big] - 2\|f'(x_j)\|^2 + 4\big\langle f'(x_j), f'(x_*)\big\rangle \\
&\le 4L\big[f(y_{j,t-1}) - f(x_*)\big] + 4(L-\mu)\big[f(x_j) - f(x_*)\big]. \qquad (14)
\end{aligned}$$

Above we have used the bound $\|f'(x_j)\|^2 \ge 2\mu\big(f(x_j) - f(x_*)\big)$, the fact that $f'(x_*) = 0$, and the fact that

$$\mathbf{E}\big(G_{j,t} \mid \mathcal{F}_{j,t-1}\big) = f'(y_{j,t-1}). \qquad (15)$$

(⁷For simplicity, we suppress the conditioning in the expectation notation here.)

We now study the expected distance to the optimal solution (a standard approach in the analysis of gradient methods):

$$\begin{aligned}
\mathbf{E}\big(\|y_{j,t} - x_*\|^2 \mid \mathcal{F}_{j,t-1}\big) &= \|y_{j,t-1} - x_*\|^2 - 2h\big\langle \mathbf{E}(G_{j,t} \mid \mathcal{F}_{j,t-1}),\, y_{j,t-1} - x_*\big\rangle + h^2\,\mathbf{E}\big(\|G_{j,t}\|^2 \mid \mathcal{F}_{j,t-1}\big) \\
&\le \|y_{j,t-1} - x_*\|^2 - 2h\big\langle f'(y_{j,t-1}),\, y_{j,t-1} - x_*\big\rangle \\
&\qquad + 4Lh^2\big[f(y_{j,t-1}) - f(x_*)\big] + 4(L-\mu)h^2\big[f(x_j) - f(x_*)\big] \\
&\le \|y_{j,t-1} - x_*\|^2 - 2h\big[f(y_{j,t-1}) - f(x_*)\big] - \nu h\|y_{j,t-1} - x_*\|^2 \\
&\qquad + 4Lh^2\big[f(y_{j,t-1}) - f(x_*)\big] + 4(L-\mu)h^2\big[f(x_j) - f(x_*)\big] \\
&= (1-\nu h)\|y_{j,t-1} - x_*\|^2 - 2h(1-2Lh)\big[f(y_{j,t-1}) - f(x_*)\big] \\
&\qquad + 4(L-\mu)h^2\big[f(x_j) - f(x_*)\big]. \qquad (16)
\end{aligned}$$

By rearranging the terms in (16) and taking expectation over the $\sigma$-algebra $\mathcal{F}_{j,t-1}$, we get the following inequality:

$$\mathbf{E}\big(\|y_{j,t} - x_*\|^2\big) + 2h(1-2Lh)\,\mathbf{E}\big(f(y_{j,t-1}) - f(x_*)\big) \le (1-\nu h)\,\mathbf{E}\big(\|y_{j,t-1} - x_*\|^2\big) + 4(L-\mu)h^2\,\mathbf{E}\big(f(x_j) - f(x_*)\big). \qquad (17)$$

Finally, we can analyze what happens after one iteration of the outer loop of S2GD, i.e., between two computations of the full gradient. By summing up inequalities (17) for $t = 1, \dots, m$, with inequality $t$ multiplied by the factor $(1-\nu h)^{m-t}$, we get the left-hand side

$$\begin{aligned}
LHS &= \mathbf{E}\big(\|y_{j,m} - x_*\|^2\big) + 2h(1-2Lh)\sum_{t=1}^m (1-\nu h)^{m-t}\,\mathbf{E}\big(f(y_{j,t-1}) - f(x_*)\big) \\
&= \mathbf{E}\big(\|y_{j,m} - x_*\|^2\big) + 2\beta h(1-2Lh)\,\mathbf{E}\big(f(x_{j+1}) - f(x_*)\big),
\end{aligned}$$

and the right-hand side

$$\begin{aligned}
RHS &= (1-\nu h)^m\,\mathbf{E}\big(\|x_j - x_*\|^2\big) + 4\beta(L-\mu)h^2\,\mathbf{E}\big(f(x_j) - f(x_*)\big) \\
&\le \frac{2(1-\nu h)^m}{\mu}\,\mathbf{E}\big(f(x_j) - f(x_*)\big) + 4\beta(L-\mu)h^2\,\mathbf{E}\big(f(x_j) - f(x_*)\big) \\
&= 2\left(\frac{(1-\nu h)^m}{\mu} + 2\beta(L-\mu)h^2\right)\mathbf{E}\big(f(x_j) - f(x_*)\big).
\end{aligned}$$

Since $LHS \le RHS$, we finally conclude with

$$\mathbf{E}\big(f(x_{j+1}) - f(x_*)\big) \le c\,\mathbf{E}\big(f(x_j) - f(x_*)\big) - \frac{\mathbf{E}\big(\|y_{j,m} - x_*\|^2\big)}{2\beta h(1-2Lh)} \le c\,\mathbf{E}\big(f(x_j) - f(x_*)\big).$$

Since we have established linear convergence of expected values, a high-probability result can be obtained in a straightforward way using the Markov inequality.

###### Theorem 5.

Consider the setting of Theorem 4. Then, for any $0 < \rho < 1$, $0 < \varepsilon < 1$ and

$$j \ge \frac{\log\big(\tfrac{1}{\varepsilon\rho}\big)}{\log\big(\tfrac{1}{c}\big)}, \qquad (18)$$

we have

$$P\left(\frac{f(x_j) - f(x_*)}{f(x_0) - f(x_*)} \le \varepsilon\right) \ge 1 - \rho. \qquad (19)$$
###### Proof.

This follows directly from the Markov inequality and Theorem 4:

$$P\Big(f(x_j) - f(x_*) > \varepsilon\big(f(x_0) - f(x_*)\big)\Big) \le \frac{\mathbf{E}\big(f(x_j) - f(x_*)\big)}{\varepsilon\big(f(x_0) - f(x_*)\big)} \le \frac{c^j}{\varepsilon} \le \rho.$$

This result will be also useful when treating the non-strongly convex case.

## 5 Optimal Choice of Parameters

The goal of this section is to provide insight into the choice of the parameters of S2GD; that is, the number of epochs (equivalently, full gradient evaluations) $j$, the maximal number of steps in each epoch $m$, and the stepsize $h$. The remaining parameters ($L$, $\mu$, $n$) are inherent in the problem and we will hence treat them in this section as given.

In particular, ideally we wish to find parameters , and solving the following optimization problem:

$$\min_{j,m,h}\; \tilde{W}(j,m,h) \stackrel{\text{def}}{=} j\big(n + 2\xi(m,h)\big), \qquad (20)$$

subject to

$$\mathbf{E}\big(f(x_j) - f(x_*)\big) \le \varepsilon\big(f(x_0) - f(x_*)\big). \qquad (21)$$

Note that $\tilde{W}$ is the expected work, measured by the number of stochastic gradient evaluations, performed by S2GD when running for $j$ epochs. Indeed, the evaluation of the full gradient $f'(x_j)$ is equivalent to $n$ stochastic gradient evaluations, and each epoch further computes on average $2\xi(m,h)$ stochastic gradients (see (5)). Since $\frac{m+1}{2} \le \xi \le m$, we can simplify and solve the problem with $2\xi$ replaced by the conservative upper estimate $2m$.

In view of (10), accuracy constraint (21) is satisfied if $c$ (which depends on $m$ and $h$) and $j$ satisfy

$$c^j \le \varepsilon. \qquad (22)$$

We therefore instead consider the parameter fine-tuning problem

$$\min_{j,m,h}\; W(j,m,h) \stackrel{\text{def}}{=} j(n+2m) \quad \text{subject to} \quad c \le \varepsilon^{1/j}. \qquad (23)$$

In the following we (approximately) solve this problem in two steps. First, we fix $j$ and find (nearly) optimal $h = h(j)$ and $m = m(j)$. The problem reduces to minimizing $m$ subject to $c \le \varepsilon^{1/j}$ by fine-tuning $h$. While in the $\nu = 0$ case it is possible to obtain a closed-form solution, this is not possible for $\nu > 0$.

However, it is still possible to obtain a good formula for $h(j)$ leading to an expression for a good $m(j)$ which depends on $\varepsilon$ in the correct way. We then plug the formula for $m(j)$ obtained this way back into (23), and study the quantity $W(j, m(j), h(j))$ as a function of $j$, over which we optimize at the end.

###### Theorem 6 (Choice of parameters).

Fix the number of epochs $j \ge 1$, error tolerance $0 < \varepsilon < 1$, and let $\Delta \stackrel{\text{def}}{=} \varepsilon^{1/j}$. If we run S2GD with the stepsize

$$h = h(j) \stackrel{\text{def}}{=} \frac{1}{\frac{4}{\Delta}(L-\mu) + 2L} \qquad (24)$$

and

$$m \ge m(j) \stackrel{\text{def}}{=} \begin{cases} \left(\dfrac{4(\kappa-1)}{\Delta} + 2\kappa\right)\log\left(\dfrac{2}{\Delta} + \dfrac{2\kappa-1}{\kappa-1}\right), & \text{if } \nu=\mu, \\[8pt] \dfrac{8(\kappa-1)}{\Delta^2} + \dfrac{8\kappa}{\Delta} + \dfrac{2\kappa^2}{\kappa-1}, & \text{if } \nu=0, \end{cases} \qquad (25)$$

then $\mathbf{E}\big(f(x_j) - f(x_*)\big) \le \varepsilon\big(f(x_0) - f(x_*)\big)$.

In particular, if we choose $j^* = \lceil \log(1/\varepsilon) \rceil$, then $\Delta = \varepsilon^{1/j^*} \ge e^{-1}$, and hence $m(j^*) = O(\kappa)$, leading to the workload

$$W\big(j^*, m(j^*), h(j^*)\big) = \left\lceil \log\left(\frac{1}{\varepsilon}\right) \right\rceil \big(n + O(\kappa)\big) = O\left((n+\kappa)\log\left(\frac{1}{\varepsilon}\right)\right). \qquad (26)$$
###### Proof.

We only need to show that $c \le \Delta$, where $c$ is given by (12) for $\nu = \mu$ and by (11) for $\nu = 0$. We denote the two summands in the expressions for $c$ as $c_1$ and $c_2$. We choose $h$ and $m$ so that both $c_1$ and $c_2$ are smaller than $\frac{\Delta}{2}$, resulting in $c = c_1 + c_2 \le \Delta$.

The stepsize is chosen so that

$$c_2 \stackrel{\text{def}}{=} \frac{2(L-\mu)h}{1-2Lh} = \frac{\Delta}{2},$$

and hence it only remains to verify that $c_1 \le \frac{\Delta}{2}$. In the $\nu = 0$ case, $m$ is chosen so that $c_1 = \frac{\Delta}{2}$ holds exactly. In the $\nu = \mu$ case, $c_1 \le \frac{\Delta}{2}$ holds for all $m \ge m(j)$. We only need to observe that $c_1$ decreases as $m$ increases, and apply the inequality $(1-\mu h)^m \le e^{-\mu h m}$.

We now comment on the above result:

1. Workload. Notice that for the choice of parameters $j^*$, $m(j^*)$, $h(j^*)$ and any $\varepsilon$, the method needs $\lceil \log(1/\varepsilon)\rceil$ computations of the full gradient (note this is independent of $\kappa$), and $O(\kappa \log(1/\varepsilon))$ computations of the stochastic gradient. This result, and special cases thereof, are summarized in Table 2.

2. Simpler formulas for $m$. One can, instead of (25), use the following (slightly worse but) simpler expressions for $m(j)$, obtained from (25) by bounding its terms in appropriate places:

$$m \ge \tilde{m}(j) \stackrel{\text{def}}{=} \begin{cases} \dfrac{6\kappa}{\Delta}\log\left(\dfrac{5}{\Delta}\right), & \text{if } \nu = \mu, \\[8pt] \dfrac{20\kappa}{\Delta^2}, & \text{if } \nu = 0. \end{cases} \qquad (27)$$
3. Optimal stepsize in the $\nu = 0$ case. Theorem 6 does not claim to have solved problem (23); the problem in general does not have a closed-form solution. However, in the $\nu = 0$ case a closed-form formula can easily be obtained:

$$h(j) = \frac{1}{\frac{4}{\Delta}(L-\mu) + 4L}, \qquad m \ge m(j) \stackrel{\text{def}}{=} \frac{8(\kappa-1)}{\Delta^2} + \frac{8\kappa}{\Delta}. \qquad (28)$$

Indeed, for fixed $j$, (23) is equivalent to finding $h$ that minimizes $m$ subject to the constraint $c \le \Delta$. In view of (11), this is equivalent to searching for $h$ maximizing the quadratic $h \mapsto h\big(\Delta - 2(\Delta L + L - \mu)h\big)$, which leads to (28).

Note that both the stepsize $h(j)$ and the resulting $m(j)$ are slightly larger in Theorem 6 than in (28). This is because in the theorem the stepsize was, for simplicity, chosen to satisfy $c_2 = \frac{\Delta}{2}$, and hence is (slightly) suboptimal. Nevertheless, the dependence of $m(j)$ on $\Delta$ is of the correct (optimal) order in both cases. That is, $m(j) = O\big(\frac{\kappa}{\Delta}\log\frac{1}{\Delta}\big)$ for $\nu = \mu$ and $m(j) = O\big(\frac{\kappa}{\Delta^2}\big)$ for $\nu = 0$.

4. Stepsize choice. In cases when one does not have a good estimate of the strong convexity constant $\mu$ needed to determine the stepsize via (24), one may choose a suboptimal stepsize that does not depend on $\mu$ and derive similar results to those above; for instance, a stepsize proportional to $\frac{\Delta}{L}$ (compare with (24)).

In Table 3 we provide a comparison of the work needed for small values of $j$ and different values of $\kappa$ and $n$. Note, for instance, that for problems with very large $n$ and moderate $\kappa$, S2GD outputs a highly accurate solution in an amount of work equivalent to a small number of evaluations of the full gradient of $f$!
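The parameter recipe of this section can be turned into a small numeric helper; this sketch uses $j^* = \lceil\log(1/\varepsilon)\rceil$, the stepsize (24), and the simplified epoch lengths (27) (the function name and the toy problem constants are ours):

```python
import math

def s2gd_parameters(n, L, mu, eps, nu_equals_mu=True):
    """Pick epochs j, stepsize h, and epoch length m per Section 5.

    Sketch: j* = ceil(log(1/eps)), Delta = eps^(1/j), stepsize (24),
    simplified epoch lengths (27), expected workload ~ (20) with xi ~ m."""
    kappa = L / mu
    j = math.ceil(math.log(1.0 / eps))
    delta = eps ** (1.0 / j)                          # Delta = eps^(1/j) >= 1/e
    h = 1.0 / ((4.0 / delta) * (L - mu) + 2.0 * L)    # equation (24)
    if nu_equals_mu:
        m = (6.0 * kappa / delta) * math.log(5.0 / delta)   # (27), nu = mu
    else:
        m = 20.0 * kappa / delta ** 2                        # (27), nu = 0
    m = math.ceil(m)
    workload = j * (n + 2 * m)     # j epochs: n for the full gradient, ~2m stochastic
    return j, h, m, workload

j, h, m, W = s2gd_parameters(n=10**5, L=1.0, mu=1e-3, eps=1e-6)
print(j, h, m, W)
```

For these toy constants the epoch length comes out as a modest multiple of $\kappa$, so the total work is dominated by the $j \cdot n$ term, in line with (26).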

## 6 Complexity Analysis: Convex Loss

If $f$ is convex but not strongly convex, we define $\hat{f}_i(x) \stackrel{\text{def}}{=} f_i(x) + \frac{\mu}{2}\|x - x_0\|^2$, for a small enough $\mu > 0$ (we shall see below how the choice of $\mu$ affects the results), and consider the perturbed problem

$$\min_{x\in\mathbb{R}^d} \hat{f}(x), \qquad (29)$$

where

$$\hat{f}(x) \stackrel{\text{def}}{=} \frac{1}{n}\sum_{i=1}^n \hat{f}_i(x) = f(x) + \frac{\mu}{2}\|x - x_0\|^2. \qquad (30)$$

Note that $\hat{f}$ is $\mu$-strongly convex and