Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting

04/16/2015 · Jakub Konečný et al.

We propose mS2GD: a method incorporating a mini-batching scheme for improving the theoretical complexity and practical performance of semi-stochastic gradient descent (S2GD). We consider the problem of minimizing a strongly convex function represented as the sum of an average of a large number of smooth convex functions and a simple nonsmooth convex regularizer. Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps. The process is repeated a few times, with the last iterate becoming the new starting point. The novelty of our method lies in the introduction of mini-batching into the computation of the stochastic steps. In each step, instead of choosing a single function, we sample b functions, compute their gradients, and form the search direction from these. We analyze the complexity of the method and show that it benefits from two speedup effects. First, we prove that as long as b is below a certain threshold, we can reach any predefined accuracy with less overall work than without mini-batching. Second, our mini-batching scheme admits a simple parallel implementation, and hence is suitable for further acceleration by parallelization.


I Introduction

In this work we are concerned with the problem of minimizing the sum of two convex functions,

(1)   $\min_{x \in \mathbb{R}^d} \left\{ P(x) := f(x) + R(x) \right\}$,

where the first component, $f$, is smooth, and the second component, $R$, is possibly nonsmooth (and extended real-valued, which allows for the modeling of constraints).

In the last decade, a great deal of research has been devoted to algorithms for solving problems of the form (1), largely motivated by the realization that the underlying problem has considerable modeling power. One of the most popular and practical methods for (1) is the accelerated proximal gradient method of Nesterov [1], with its most successful variant being FISTA [2].

In many applications in optimization, signal processing and machine learning, $f$ has an additional structure. In particular, it is often the case that $f$ is the average of a number of convex functions:

(2)   $f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$.

Indeed, even one of the most basic optimization problems—least squares regression—lends itself to a natural representation of the form (2).

I-A Stochastic methods

For problems of the form (1)+(2), and especially when $n$ is large and when a solution of low to medium accuracy is sufficient, deterministic methods do not perform as well as classical stochastic^1 methods. The prototype method in this category is stochastic gradient descent (SGD), dating back to the 1951 seminal work of Robbins and Monro [3]. SGD selects an index $i \in \{1,\dots,n\}$ uniformly at random, and then updates the variable $x$ using $\nabla f_i(x)$, a stochastic estimate of $\nabla f(x)$. Note that the computation of $\nabla f_i(x)$ is $n$ times cheaper than the computation of the full gradient $\nabla f(x)$. For problems where $n$ is very large, the per-iteration savings can be extremely large, spanning several orders of magnitude.

^1 Depending on conventions used in different communities, the terms randomized or sketching are used instead of the word stochastic. In signal processing, numerical linear algebra and theoretical computer science, for instance, the terms sketching and randomized are used more often. In machine learning and optimization, the terms stochastic and randomized are used more often. In this paper, stochasticity does not refer to a data generation process, but to randomization embedded in an algorithm which is applied to a deterministic problem. Having said that, the deterministic problem can and often does arise as a sample average approximation of a stochastic problem (an average replaces an expectation), which further blurs the lines between the terms.

These savings do not come for free, however (modern methods, such as the one we propose, overcome this; more on that below). Indeed, the stochastic estimate of the gradient embodied by $\nabla f_i(x)$ has non-vanishing variance. To see this, notice that even when started from an optimal solution $x_*$, there is no reason for $\nabla f_i(x_*)$ to be zero, which means that SGD drives $x$ away from the optimal point. Traditionally, there have been two ways of dealing with this issue. The first consists in choosing a decreasing sequence of stepsizes; however, this means that a much larger number of iterations is needed. The second approach is to use a subset ("mini-batch") of indices $i$, as opposed to a single index, in order to form a better stochastic estimate of the gradient; however, this results in a method which performs more work per iteration. In summary, while the traditional approaches manage to decrease the variance of the stochastic estimate, they do so at a cost.
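To make the preceding paragraph concrete, here is a minimal NumPy sketch of plain SGD with a decreasing stepsize on a synthetic least-squares instance; the data, the stepsize schedule and all names are illustrative choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

# f(x) = (1/n) sum_i f_i(x), with f_i(x) = 0.5 * (a_i^T x - b_i)^2
def grad_fi(x, i):
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)           # pick one index uniformly at random
    h = 1.0 / (0.25 * t + 10.0)   # decreasing stepsize, needed to tame the variance
    x -= h * grad_fi(x, i)        # one step costs a single component gradient

print("distance to x_true:", np.linalg.norm(x - x_true))
```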

I-B Modern stochastic methods

Very recently, starting with the SAG [4], SDCA [5], SVRG [6] and S2GD [7] algorithms from the year 2013, it has transpired that neither decreasing stepsizes nor mini-batching are necessary to resolve the non-vanishing variance issue inherent in the vanilla SGD method. Instead, these modern stochastic^2 methods are able to dramatically improve upon SGD in various ways, without having to resort to the usual variance-reduction techniques (such as decreasing stepsizes or mini-batching), which carry considerable costs that drastically reduce their power; the improvement comes without any unwelcome side effects. This development led to a revolution in the area of first-order methods for solving problem (1)+(2). Both the theoretical complexity and the practical efficiency of these modern methods vastly outperform prior gradient-type methods.

^2 These methods are randomized algorithms. However, the term "stochastic" (somewhat incorrectly) appears in their names for historical reasons, and quite possibly due to their aspiration to improve upon stochastic gradient descent (SGD).

In order to achieve $\epsilon$-accuracy, that is,

(3)   $\mathbf{E}\left[P(x_k) - P(x_*)\right] \le \epsilon$,

modern stochastic methods such as SAG, SDCA, SVRG and S2GD require only

(4)   $O\left((n + \kappa)\log(1/\epsilon)\right)$

units of work, where $\kappa$ is a condition number associated with $f$, and one unit of work corresponds to the computation of the gradient of $f_i$ for a random index $i$, followed by a call to a prox-mapping involving $R$. More specifically, $\kappa = L/\mu$, where $L$ is a uniform bound on the Lipschitz constants of the gradients of the functions $f_i$ and $\mu$ is the strong convexity constant of $P$. These quantities will be defined precisely in Section IV.

The complexity bound (4) should be contrasted with that of proximal gradient descent (e.g., ISTA), which requires $O(n\kappa \log(1/\epsilon))$ units of work, or FISTA, which requires $O(n\sqrt{\kappa} \log(1/\epsilon))$ units of work.^3 Note that while all these methods enjoy a linear convergence rate, the modern stochastic methods can be many orders of magnitude faster than classical deterministic methods. Indeed, one can have $n + \kappa \ll \min\{n\kappa,\ n\sqrt{\kappa}\}$.

Based on this, we see that these modern methods always beat (proximal) gradient descent (since $n + \kappa \le n\kappa$), and also outperform FISTA as long as $\kappa \le n^2$. In machine learning, for instance, one usually has $\kappa \approx n$, in which case the improvement is by a factor of roughly $\sqrt{n}$ when compared to FISTA, and by a factor of roughly $n$ over ISTA. For applications where $n$ is massive, these improvements are indeed dramatic.

^3 However, it should be remarked that the condition number in these latter methods is slightly different from that appearing in the bound (4).
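As a rough numerical illustration (the values $n = \kappa = 10^6$ are invented for this example and do not come from the paper), dropping the common $\log(1/\epsilon)$ factor the three bounds compare as

$$n + \kappa = 2 \times 10^6, \qquad n\sqrt{\kappa} = 10^6 \cdot 10^3 = 10^9, \qquad n\kappa = 10^{12},$$

i.e., in this regime the modern stochastic methods need roughly $500$ times less work than FISTA and about $5 \times 10^5$ times less than ISTA.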

For more information about modern dual and primal methods we refer the reader to the literature on randomized coordinate descent methods [8, 9, 10, 11, 12, 5, 13, 14, 15, 16, 17, 18] and stochastic gradient methods [4, 19, 20, 21, 22, 23, 17, 24], respectively.

I-C Linear systems and sketching.

In the case when $R \equiv 0$, all stationary points (i.e., points satisfying $\nabla f(x) = 0$) are optimal for (1)+(2). In the special case when the functions $f_i$ are convex quadratics of the form $f_i(x) = \frac{1}{2}(a_i^\top x - b_i)^2$, the equation $\nabla f(x) = 0$ reduces to the linear system $Ax = b$, where $A = \frac{1}{n}\sum_i a_i a_i^\top$ and $b = \frac{1}{n}\sum_i b_i a_i$. Recently, there has been considerable interest in designing and analyzing randomized methods for solving linear systems, also known under the name of sketching methods. Much of this work was done independently from the developments in (non-quadratic) optimization, despite the above connection between optimization and linear systems. A randomized version of the classical Kaczmarz method was studied in a seminal paper by Strohmer and Vershynin [25]. Subsequently, the method was extended and improved upon in several ways [26, 27, 28, 29]. The randomized Kaczmarz method is equivalent to SGD with a specific stepsize choice [30, 31]. The first randomized coordinate descent method, for linear systems, was analyzed by Leventhal and Lewis [32], and subsequently generalized in various ways by numerous authors (we refer the reader to [17] and the references therein). Gower and Richtárik [31] have recently studied randomized iterative methods for linear systems in a general sketch-and-project framework, which in special cases includes randomized Kaczmarz, randomized coordinate descent, Gaussian descent, randomized Newton, their block variants, variants with importance sampling, and also an infinite array of new specific methods. For approaches of a combinatorial flavour, specific to diagonally dominant systems, we refer to the influential work of Spielman and Teng [33].

II Contributions

In this paper we equip modern stochastic methods, i.e., methods which already enjoy the fast rate (4), with the ability to process data in mini-batches. None of the primal^4 modern methods have been analyzed in the mini-batch setting. This paper fills this gap in the literature.

^4 By a primal method we refer to an algorithm which operates directly to solve (1)+(2) without explicitly operating on the dual problem. Dual methods have very recently been analyzed in the mini-batch setting; for a review of such methods we refer the reader to the paper describing the QUARTZ method [34] and the references therein.

While we have argued above that the modern methods, S2GD included, do not suffer from the "non-vanishing variance" issue that SGD does, and hence do not need mini-batching for that purpose, mini-batching is still useful. In particular, we develop and analyze the complexity of mS2GD (Algorithm 1), a mini-batch proximal variant of semi-stochastic gradient descent (S2GD) [7]. While the S2GD method was analyzed in the case $R \equiv 0$ only, we develop and analyze our method in the proximal^5 setting (1). We show that mS2GD enjoys several benefits when compared to previous modern methods. First, it trivially admits a parallel implementation, and hence enjoys a speedup in clock time in an HPC environment. This is critical for applications with massive datasets and is the main motivation and advantage of our method. Second, our results show that in order to attain a specified accuracy $\epsilon$, mS2GD can get by with fewer gradient evaluations than S2GD. This is formalized in Theorem 2, which predicts more than linear speedup up to a certain threshold mini-batch size, after which the complexity deteriorates. Third, compared to [35], our method does not need to average the iterates produced in each inner loop; we instead simply continue from the last one. This is the approach employed in S2GD [7].

^5 Note that the Prox-SVRG method [35] can also handle the composite problem (1).

III The Algorithm

In this section we first briefly motivate the mathematical setup of deterministic and stochastic proximal gradient methods in Section III-A, followed by the introduction of semi-stochastic gradient descent in Section III-B. We will then be ready to describe the mS2GD method in Section III-C.

III-A Deterministic and stochastic proximal gradient methods

The classical deterministic proximal gradient approach [2, 36, 37] to solving (1) is to form a sequence $\{x_t\}$ via

$x_{t+1} = \arg\min_{u \in \mathbb{R}^d} U_t(u)$,

where $U_t(u) := f(x_t) + \nabla f(x_t)^\top (u - x_t) + \frac{1}{2h}\|u - x_t\|^2 + R(u)$. Note that in view of Assumption 1, which we shall use in our analysis in Section IV, $U_t$ is an upper bound on $P$ whenever $h > 0$ is a stepsize parameter satisfying $h \le 1/L$. This procedure can be compactly written using the proximal operator as follows:

$x_{t+1} = \mathrm{prox}_{hR}\left(x_t - h\nabla f(x_t)\right)$,

where

$\mathrm{prox}_{hR}(z) := \arg\min_{u \in \mathbb{R}^d} \left\{ \tfrac{1}{2}\|u - z\|^2 + hR(u) \right\}$.

In a large-scale setting it is more efficient to instead consider the stochastic proximal gradient approach, in which the proximal operator is applied to a stochastic gradient step:

(5)   $x_{t+1} = \mathrm{prox}_{hR}\left(x_t - h\,G_t\right)$,

where $G_t$ is a stochastic estimate of the gradient $\nabla f(x_t)$.
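For concreteness, here is a small NumPy sketch of a single stochastic proximal gradient step of the form (5), using the soft-thresholding prox of $R(x) = \lambda\|x\|_1$ as an example; the quadratic losses, the stepsize and the value of $\lambda$ are illustrative assumptions rather than choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 30
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam, h = 0.1, 0.01

def prox_l1(z, t):
    # prox of t * lam * ||.||_1: componentwise soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def grad_fi(x, i):
    # gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
i = rng.integers(n)           # sample one random index
G = grad_fi(x, i)             # stochastic estimate of grad f(x)
x = prox_l1(x - h * G, h)     # the update (5)
```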

III-B Semi-stochastic methods

Of particular relevance to our work are the SVRG [6], S2GD [7] and Prox-SVRG [35] methods, where the stochastic estimate of $\nabla f(x_t)$ is of the form

(6)   $G_t = \nabla f_{i_t}(x_t) - \nabla f_{i_t}(\bar{x}) + \nabla f(\bar{x})$,

where $\bar{x}$ is an "old" reference point for which the gradient $\nabla f(\bar{x})$ was already computed in the past, and $i_t \in \{1,2,\dots,n\}$ is a random index equal to $i$ with probability $1/n$. Notice that $G_t$ is an unbiased estimate of the gradient of $f$ at $x_t$:

$\mathbf{E}\left[G_t \mid x_t\right] = \nabla f(x_t)$.

Methods such as S2GD, SVRG, and Prox-SVRG update the points $x_t$ in an inner loop, and the reference point $\bar{x}$ in an outer loop ("epoch") indexed by $k$. With this new outer iteration counter we will have $x_k$ instead of $\bar{x}$, $y_{k,t}$ instead of $x_t$, and $G_{k,t}$ instead of $G_t$. This is the notation we will use in the description of our algorithm in Section III-C. The outer loop ensures that the squared norm of $G_{k,t}$ approaches zero as $k \to \infty$ (it is easy to see that this is equivalent to saying that the stochastic estimate $G_{k,t}$ has a diminishing variance), which ultimately leads to extremely fast convergence.
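The following NumPy sketch checks these properties numerically for the estimate (6) on an illustrative least-squares instance: averaging over all indices recovers the full gradient (unbiasedness), and the error's second moment is small whenever the reference point $\bar{x}$ is close to the current iterate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]    # gradient of f_i(x) = 0.5*(a_i^T x - b_i)^2
grad_f = lambda x: A.T @ (A @ x - b) / n          # full gradient of f

x = rng.standard_normal(d)                 # current iterate
x_ref = x + 0.1 * rng.standard_normal(d)   # "old" reference point
g_ref = grad_f(x_ref)                      # full gradient stored at the reference point

# Exact expectation over the uniformly random index i
G = np.array([grad_i(x, i) - grad_i(x_ref, i) + g_ref for i in range(n)])
print("bias:    ", np.linalg.norm(G.mean(axis=0) - grad_f(x)))       # ~1e-15: unbiased
print("variance:", np.mean(np.sum((G - grad_f(x)) ** 2, axis=1)))    # small since x_ref is close to x
```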

III-C Mini-batch S2GD

We are now ready to describe the mS2GD method^6 (Algorithm 1).

^6 A more detailed algorithm and the associated analysis (in which we benefit from knowledge of lower bounds on the strong convexity parameters of the functions $f$ and $R$) can be found in the arXiv preprint [38]. The more general algorithm mainly differs in the number of inner steps being chosen according to a geometric probability law which depends on the estimates of the convexity constants.

1: Input: $m$ (max # of stochastic steps per epoch); $h > 0$ (stepsize); $x_0$ (starting point); mini-batch size $b \in \{1,\dots,n\}$
2: for $k = 0, 1, 2, \dots$ do
3:     Compute and store $g_k = \nabla f(x_k) = \frac{1}{n}\sum_i \nabla f_i(x_k)$
4:     Initialize the inner loop: $y_{k,0} = x_k$
5:     Choose $t_k \in \{1, 2, \dots, m\}$ uniformly at random
6:     for $t = 0$ to $t_k - 1$ do
7:         Choose mini-batch $A_{kt} \subseteq \{1,\dots,n\}$ of size $b$, uniformly at random
8:         Compute a stochastic estimate of $\nabla f(y_{k,t})$:
           $G_{k,t} = g_k + \frac{1}{b}\sum_{i \in A_{kt}} \left(\nabla f_i(y_{k,t}) - \nabla f_i(x_k)\right)$
9:         $y_{k,t+1} = \mathrm{prox}_{hR}\left(y_{k,t} - h\, G_{k,t}\right)$
10:     end for
11:     Set $x_{k+1} = y_{k,t_k}$
12: end for
Algorithm 1 mS2GD

The algorithm includes an outer loop, indexed by the epoch counter $k$, and an inner loop, indexed by $t$. Each epoch starts by computing $g_k$, the (full) gradient of $f$ at $x_k$, and then immediately proceeds to the inner loop. The number of inner iterations, $t_k$, is chosen uniformly at random from $\{1,\dots,m\}$; we then run $t_k$ iterations of the inner loop (corresponding to Steps 6-10). Each new iterate is given by the proximal update (5), however with the mini-batch generalization of the stochastic gradient estimate (6), formed by using a mini-batch of examples of size $b$. Each inner iteration requires $2b$ units of work.^7

^7 It is possible to finish each iteration with only $b$ evaluations of component gradients, namely $\nabla f_i(y_{k,t})$ for $i \in A_{kt}$, at the cost of having to store the gradients $\nabla f_i(x_k)$, which is exactly the way that SAG [4] works. This speeds up the algorithm; nevertheless, it is impractical for big $n$.
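Below is a compact NumPy sketch of Algorithm 1 on an illustrative L2-regularized least-squares instance (quadratic $f_i$ and $R(x) = \frac{\lambda}{2}\|x\|_2^2$); the data, the parameter values and the helper names are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 1000, 50
A = rng.standard_normal((n, d)) / np.sqrt(d)
bvec = rng.standard_normal(n)
lam = 1e-3

grad_i = lambda x, i: (A[i] @ x - bvec[i]) * A[i]     # gradient of f_i
grad_f = lambda x: A.T @ (A @ x - bvec) / n           # full gradient of f
prox = lambda z, h: z / (1.0 + h * lam)               # prox of h*R for R(x) = (lam/2)*||x||^2

def ms2gd(x0, h, m, b, epochs):
    x = x0.copy()
    for k in range(epochs):
        g = grad_f(x)                                 # Step 3: full gradient at the reference point
        y = x.copy()                                  # Step 4
        tk = rng.integers(1, m + 1)                   # Step 5: uniform on {1, ..., m}
        for t in range(tk):
            S = rng.choice(n, size=b, replace=False)  # Step 7: mini-batch of size b
            G = g + sum(grad_i(y, i) - grad_i(x, i) for i in S) / b   # Step 8
            y = prox(y - h * G, h)                    # Step 9: proximal step (5)
        x = y                                         # Step 11: continue from the last inner iterate
    return x

x_out = ms2gd(np.zeros(d), h=0.1, m=2 * n, b=8, epochs=10)
```

Within one epoch, the full gradient costs $n$ component-gradient evaluations and each inner step costs at most $2b$, which is the work accounting used in the analysis below.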

IV Analysis

In this section, we lay down the assumptions, state our main complexity result, and comment on how to optimally choose the parameters of the method.

IV-A Assumptions

Our analysis is performed under the following two assumptions.

Assumption 1.

The function $R$ (regularizer/proximal term) is convex and closed. The functions $f_i$ have Lipschitz continuous gradients with constant $L > 0$. That is, $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|$ for all $x, y \in \mathbb{R}^d$, where $\|\cdot\|$ is the $\ell_2$-norm.

Hence, the gradient of $f$ is also Lipschitz continuous with the same constant $L$.

Assumption 2.

$P$ is strongly convex with parameter $\mu > 0$. That is, for all $x, y \in \mathrm{dom}(R)$ and all $\xi \in \partial P(x)$,

(7)   $P(y) \ge P(x) + \xi^\top (y - x) + \frac{\mu}{2}\|y - x\|^2$,

where $\partial P(x)$ is the subdifferential of $P$ at $x$.

Lastly, by $\mu_f$ and $\mu_R$ we denote the strong convexity constants of $f$ and $R$, respectively. We allow both of these quantities to be equal to $0$, which simply means that the functions are merely convex (which we already assumed above). Hence, this is not necessarily an additional assumption.

IV-B Main result

We are now ready to formulate our complexity result.

Theorem 1.

Let Assumptions 1 and 2 be satisfied, let and choose . Assume that , and that are further chosen so that

(8)

where . Then mS2GD has linear convergence in expectation with rate :

Notice that for any fixed $b$, by properly adjusting the parameters $h$ and $m$ we can force $\rho$ to be arbitrarily small. Indeed, the second term can be made arbitrarily small by choosing $h$ small enough. Fixing the resulting $h$, the first term can then be made arbitrarily small by choosing $m$ large enough. This may look surprising, since it means that only a single outer loop ($k = 1$) is needed in order to obtain a solution of any prescribed accuracy. While this is indeed the case, such a choice of the parameters of the method ($m$, $h$, $k$) would not be optimal: the resulting workload would be too high, as the complexity of the method would no longer depend logarithmically on $1/\epsilon$. In order to obtain a logarithmic dependence on $1/\epsilon$, i.e., in order to obtain linear convergence, one needs to perform $k = O(\log(1/\epsilon))$ outer loops, and set the parameters $h$ and $m$ to appropriate values.

IV-C Special cases: $b = 1$ and $b = n$

In the special case with $b = 1$ (no mini-batching), the rate given by (8) exactly recovers the rate achieved by Prox-SVRG [35] (in the case when the Lipschitz constants of the gradients of the functions $f_i$ are all equal). The rate is also identical to the rate of S2GD [7] (in the case of $R \equiv 0$, since S2GD was only analyzed in that case). If we set the number of outer iterations to $k = O(\log(1/\epsilon))$, choose the stepsize $h$ proportional to $1/L$, and choose $m$ proportional to the condition number $\kappa$, then the total workload of mS2GD for achieving (3) is $O\left((n + \kappa)\log(1/\epsilon)\right)$ units of work. Note that this recovers the fast rate (4).

In the batch setting, that is, when $b = n$, the stochastic estimate reduces to the full gradient and the variance term disappears. By choosing $k$, $h$ and $m$ appropriately, we obtain the rate $O(n\kappa\log(1/\epsilon))$. This is the standard rate of (proximal) gradient descent.
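A one-line check of why the batch case collapses to a full gradient step (this follows directly from the definition of $G_{k,t}$ in Algorithm 1 and is not an additional assumption):

$$G_{k,t} = g_k + \frac{1}{n}\sum_{i=1}^{n}\left(\nabla f_i(y_{k,t}) - \nabla f_i(x_k)\right) = \nabla f(x_k) + \nabla f(y_{k,t}) - \nabla f(x_k) = \nabla f(y_{k,t}).$$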

Hence, by modifying the mini-batch size $b$ in mS2GD, we interpolate between the fast rate of S2GD and the slow rate of GD.

IV-D Mini-batch speedup

In this section we derive formulas for good choices of the parameters $m$ and $h$ of our method as a function of $b$. Hence, throughout this section we shall consider $b$ fixed.

Fixing the target rate $\rho$, it is easy to see that in order for $x_k$ to be an $\epsilon$-accurate solution (i.e., in order for (3) to hold), it suffices to take $k = O(\log(1/\epsilon))$ outer iterations. Notice that the total workload mS2GD performs in order to arrive at $x_k$ is then at most

$k\,(n + 2bm)$

units of work. If we now consider $k$ fixed (we may try to optimize over it later), then clearly the total workload is proportional to $n + 2bm$. The free parameters of the method are the stepsize $h$ and the inner loop size $m$. Hence, in order to set the parameters so as to minimize the workload (i.e., to optimize the complexity bound), we would like to (approximately) minimize $n + 2bm$ over $h$ and $m$, subject to the constraint that the rate $\rho$ in (8) does not exceed the chosen target.

Let $(h_b, m_b)$ denote the optimal pair (we highlight the dependence on $b$, as it will be useful). Note that if $b\,m_b < m_1$ for some $b > 1$, then mini-batching can help us reach the $\epsilon$-solution with a smaller overall workload. The following theorem presents the formulas for $h_b$ and $m_b$.

Theorem 2.

Fix and and let

If , then and

(9)

where is the condition number. If , then and

(10)

Note that if , then

Equation (9) suggests that as long as the stated condition on $b$ holds, $m_b$ decreases at a rate faster than $1/b$. Hence, we can find the solution with less overall work when using a mini-batch of size $b$ than when using a mini-batch of size $1$.

IV-E Convergence rate

In this section we study the total workload of mS2GD in the regime of small mini-batch sizes.

Corollary 3.

Fix , choose the number of outer iterations equal to

and fix the target decrease in Theorem 2 to satisfy . Further, pick a mini-batch size satisfying , let the stepsize be as in (34) and let be as in (33). Then in order for mS2GD to find satisfying (3), mS2GD needs at most

(11)

units of work, where , which leads to the overall complexity of

units of work.

Proof.

Available in Appendix B-D. ∎

This result shows that as long as the mini-batch size is small enough, the total work performed by mS2GD is the same as in the $b = 1$ case. If the $b$ component gradients in each inner iteration are evaluated in parallel, this leads to linear speedup.

Fig. 1: Complexity of Acc-Prox-SVRG and mS2GD in terms of total work done, as a function of the mini-batch size $b$, for a small (top row) and a large (bottom row) condition number.

IV-F Comparison with Acc-Prox-SVRG

The Acc-Prox-SVRG [23] method of Nitanda, which was not available online before the first version of this paper appeared on arXiv, incorporates both a mini-batch scheme and Nesterov’s acceleration [39, 1]. The author claims that when , with the threshold defined as , the overall complexity of the method is

and otherwise it is

This suggests that acceleration will only be realized when the mini-batch size is large, while for a small mini-batch size $b$, Acc-Prox-SVRG achieves the same overall complexity as mS2GD.

We will now take a closer look at the theoretical complexity bounds of Acc-Prox-SVRG and mS2GD for each mini-batch size $b$. In particular, we shall numerically minimize the total work of mS2GD over the parameters $h$ and $m$ (compare this with (11)), and compare the result with the analogously fine-tuned quantity for Acc-Prox-SVRG.^8

^8 In each case, the inner loop size is set to its best choice for Acc-Prox-SVRG and mS2GD, respectively, while the stepsize is kept within the safe upper bounds for both methods.

Fig. 1 illustrates these theoretical complexity bounds for both ill-conditioned and well-conditioned data. With a small enough mini-batch size $b$, mS2GD is better than Acc-Prox-SVRG. However, for a large mini-batch size $b$, the situation reverses because of the acceleration inherent in Acc-Prox-SVRG.^9

^9 We have experimented with different values of the parameters, and this result always holds. The remaining plots illustrate the cases where we cannot observe any difference between the methods.

Note, however, that accelerated methods are very prone to error accumulation. Moreover, it is not clear whether an efficient implementation of Acc-Prox-SVRG is possible for sparse data. As we shall show in the next section, mS2GD allows for such an implementation.

V Efficient implementation for sparse data

Let us make the following assumption about the structure of the functions $f_i$ in (2).

Assumption 3.

The functions $f_i$ arise as the composition of a univariate smooth function $\phi_i$ and an inner product with a datapoint/example $a_i \in \mathbb{R}^d$: $f_i(x) = \phi_i(a_i^\top x)$ for $i = 1, \dots, n$.

Many functions of common practical interest satisfy this assumption, including those arising in linear and logistic regression. Very often, especially for large-scale datasets, the data are extremely sparse, i.e., the vectors $a_i$ contain many zeros. Let us denote the number of non-zero coordinates of $a_i$ by $\omega_i$ and the set of indices corresponding to non-zero coordinates by $\mathrm{nnz}(a_i) = \{j : a_i^{(j)} \neq 0\}$, where $x^{(j)}$ denotes the $j$-th coordinate of vector $x$.

Assumption 4.

The regularization function $R$ is separable, i.e., $R(x) = \sum_{j=1}^{d} R^{(j)}(x^{(j)})$.

This includes the most commonly used regularization functions, such as $R(x) = \lambda\|x\|_1$ or $R(x) = \frac{\lambda}{2}\|x\|_2^2$.

Let us take a brief detour and look at the classical SGD algorithm applied to this problem with $R \equiv 0$. The update would be of the form

(12)   $x_{t+1} = x_t - h\,\nabla f_{i}(x_t) = x_t - h\,\phi_{i}'(a_{i}^\top x_t)\, a_{i}$.

If one evaluation of the univariate derivative $\phi_i'$ takes $O(1)$ work, the computation of $\nabla f_i(x)$ accounts for $O(\omega_i)$ work. The update (12) then costs $O(\omega_i)$ too, which implies that the classical SGD method can naturally benefit from sparsity of the data.
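The following SciPy/NumPy sketch illustrates this point: with a CSR row, one SGD update touches only the nonzero coordinates of the sampled example. The quadratic choice $\phi_i(t) = \frac{1}{2}(t - b_i)^2$ and all sizes are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(4)
n, d = 10000, 100000
A = sp.random(n, d, density=1e-4, format="csr", random_state=4)   # very sparse data
bvec = rng.standard_normal(n)
h = 0.1

x = np.zeros(d)
i = rng.integers(n)
row = A.getrow(i)                              # the sparse example a_i
t = (row @ x)[0]                               # inner product: O(omega_i) work
phi_prime = t - bvec[i]                        # phi_i'(a_i^T x) for phi_i(t) = 0.5*(t - b_i)^2
x[row.indices] -= h * phi_prime * row.data     # update touches only the nnz(a_i) coordinates
```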

Now, let us get back to Algorithm 1. Even under the sparsity assumption and the structural Assumption 3, Algorithm 1 suggests that each inner iteration will cost $O(d)$, because $g_k = \nabla f(x_k)$ is in general fully dense, and hence in Step 9 of Algorithm 1 we have to update all $d$ coordinates of $y_{k,t}$.

However, in this section we introduce an implementation trick based on "lazy/delayed" updates. The main idea of this trick is not to perform Step 9 of Algorithm 1 for all coordinates, but only for the coordinates $j \in \bigcup_{i \in A_{kt}} \mathrm{nnz}(a_i)$. The procedure is described in Algorithm 2.

1: Set $\chi_j = 0$ for $j = 1, \dots, d$
2: for $t = 0$ to $t_k - 1$ do
3:     Choose mini-batch $A_{kt} \subseteq \{1,\dots,n\}$ of size $b$, uniformly at random
4:     for $i \in A_{kt}$ do
5:         for $j \in \mathrm{nnz}(a_i)$ do
6:              $y_t^{(j)} = p_j^{\,t - \chi_j}\big(y_{\chi_j}^{(j)}\big)$   (the operator $p_j^s$ is given in Algorithm 3)
7:              $\chi_j = t$
8:         end for
9:     end for
10:     $y_{t+1}^{(j)} = \mathrm{prox}_{hR^{(j)}}\big(y_t^{(j)} - h\,G_{kt}^{(j)}\big)$ for all $j \in \bigcup_{i\in A_{kt}} \mathrm{nnz}(a_i)$
11: end for
12: for $j = 1$ to $d$ do
13:     $y_{t_k}^{(j)} = p_j^{\,t_k - \chi_j}\big(y_{\chi_j}^{(j)}\big)$
14: end for
Algorithm 2 "Lazy" updates for mS2GD
(these replace Steps 6-10 in Algorithm 1)

To explain the main idea behind the lazy/delayed updates, suppose that during the first $s$ inner iterations, the value of the first coordinate in all the datapoints we sampled was 0. Then, given the values of $y_0^{(1)}$ and $g_k^{(1)}$, we can compute the true value of $y_s^{(1)}$ easily: we just need to apply the corresponding single-coordinate update $s$ times, i.e., $y_s^{(1)} = p_1^s\big(y_0^{(1)}\big)$, where the operator $p_j^s$ is described in Algorithm 3.

for $i = 1$ to $s$ do
     $x = \mathrm{prox}_{hR^{(j)}}\big(x - h\, g_k^{(j)}\big)$
end for
return $x$
Algorithm 3  The operator $p_j^s(x)$

The vector $\chi$ in Algorithm 2 enables us to keep track of the iteration at which each coordinate of $y$ was last updated. For example, if in iteration $t$ we update coordinate $j$ for the first time since iteration $\chi_j$, then after we compute and update the true value of $y_t^{(j)}$, the entry $\chi_j$ is set to $t$. Lines 5-8 in Algorithm 2 make sure that the coordinates of $y$ which will be read and used afterwards are up to date. At the end of the inner loop, we update all coordinates of $y$ to their most recent values (lines 12-14). Therefore, those lines ensure that the output of Algorithms 1 and 2 is the same.

However, one could object that we are not saving any work, since when a coordinate does need to be brought up to date, we still have to apply the proximal operator many times. Although this is true for a general function $R$, for the particular cases $R(x) = \frac{\lambda}{2}\|x\|_2^2$ and $R(x) = \lambda\|x\|_1$ we provide the following lemmas, which give closed-form expressions for the repeated application of the operator.

Fig. 2: Comparison of mS2GD with different mini-batch sizes on rcv1 (left) and astro-ph (right).
Fig. 3: Parallelism speedup for rcv1 (left) and astro-ph (right) in theory (unachievable in practice).
Lemma 1 (Proximal Lazy Updates with $\ell_2$-Regularizer).

If $R(x) = \frac{\lambda}{2}\|x\|_2^2$ with $\lambda > 0$, then

$p_j^s(x) = \beta^s x - \frac{g_k^{(j)}}{\lambda}\left(1 - \beta^s\right)$,

where $\beta = 1/(1 + \lambda h)$.
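A quick NumPy check of a closed form of this type, assuming the scalar regularizer $R^{(j)}(u) = \frac{\lambda}{2}u^2$ and placeholder values for all quantities: the $s$-fold replay of the skipped updates agrees with the $O(1)$ formula.

```python
import numpy as np

lam, h = 0.1, 0.05
g_j = 0.7    # j-th coordinate of the stored full gradient g_k (dense, hence applied in every inner step)
y_j = 2.0    # value of coordinate j at the iteration when it was last brought up to date
s = 25       # number of inner iterations during which coordinate j was "skipped"

# Naive catch-up: replay the s skipped scalar updates one by one,
# each being y <- prox_{h R^(j)}(y - h*g_j) = (y - h*g_j) / (1 + lam*h).
y_naive = y_j
for _ in range(s):
    y_naive = (y_naive - h * g_j) / (1.0 + lam * h)

# O(1) catch-up via the closed form: with beta = 1/(1 + lam*h),
# y_s = beta^s * y_0 - (g_j/lam) * (1 - beta^s).
beta = 1.0 / (1.0 + lam * h)
y_lazy = beta ** s * y_j - (g_j / lam) * (1.0 - beta ** s)

print(abs(y_naive - y_lazy))   # ~1e-16: the two agree
```

Lemma 2 provides the analogous closed form for the $\ell_1$ regularizer, which requires the case analysis below.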

Lemma 2 (Proximal Lazy Updates with $\ell_1$-Regularizer).

Assume that with . Let us define and as follows,

and let . Then the value of can be expressed based on one of the 3 situations described below:

  1. If , then by letting , the operator can be defined as

  2. If , then the operator can be defined as

  3. If , then by letting , the operator can be defined as

The proofs of Lemmas 1 and 2 are available in Appendix C.

Remark: Upon completion of the paper, we learned that similar ideas of lazy updates were proposed in [40] and [41] for online learning and multinomial logistic regression, respectively. However, our approach can be seen as a more general result, applied here to a stochastic gradient method and its variants under Assumptions 3 and 4.

Fig. 4: Comparison of several algorithms on four datasets: rcv1 (top left), news20 (top right), covtype (bottom left) and astro-ph (bottom right). We have used mS2GD with the parameters reported in Table II.

VI Experiments

In this section we perform numerical experiments to illustrate the properties and performance of our algorithm. In Section VI-A we study the total workload and the parallelization speedup of mS2GD as a function of the mini-batch size $b$. In Section VI-B we compare mS2GD with several other algorithms. Finally, in Section VI-C we briefly illustrate that our method can be efficiently applied to a deblurring problem.

In Sections VI-A and VI-B we conduct experiments with $R(x) = \frac{\lambda}{2}\|x\|_2^2$ and $f$ of the form (2), where $f_i$ is the logistic loss function:

(13)   $f_i(x) = \log\left(1 + \exp\left(-b_i\, a_i^\top x\right)\right)$.

These functions are often used in machine learning, with $(a_i, b_i) \in \mathbb{R}^d \times \{-1, +1\}$, $i = 1,\dots,n$, being a training dataset of example-label pairs. The resulting optimization problem (1)+(2) takes the form

(14)   $\min_{x \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + \exp\left(-b_i\, a_i^\top x\right)\right) + \frac{\lambda}{2}\|x\|_2^2$,

and is used in machine learning for binary classification. In these sections we have performed experiments on four publicly available binary classification datasets, namely rcv1, news20, covtype^10 and astro-ph.^11

^10 rcv1, covtype and news20 are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
^11 Available at http://users.cecs.anu.edu.au/~xzhang/data/.

In the logistic regression problem, the Lipschitz constant of the gradient of $f_i$ is equal to $\|a_i\|_2^2/4$. Our analysis assumes (Assumption 1) the same constant $L$ for all functions; hence we take $L = \max_i \|a_i\|_2^2/4$. We set the regularization parameter $\lambda$ in our experiments, resulting in the problem having condition number $\kappa = L/\lambda$. In Table I we summarize the four datasets, including the sizes $n$, dimensions $d$, their sparsity levels as a proportion of nonzero elements, and the Lipschitz constants $L$.

Dataset     n        d          Sparsity   L
rcv1        20,242   47,236     0.1568%    0.2500
news20      19,996   1,355,191  0.0336%    0.2500
covtype     581,012  54         22.1212%   1.9040
astro-ph    62,369   99,757     0.0767%    0.2500
TABLE I: Summary of datasets used for experiments.
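A small NumPy sketch of the loss (13), its component gradients, and the Lipschitz-constant computation described above; the synthetic data are illustrative, and on a real dataset one would instead load the matrix of examples and the $\pm 1$ labels.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 200, 40
A = rng.standard_normal((n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # normalize rows so that ||a_i|| = 1
bvec = rng.choice([-1.0, 1.0], size=n)           # labels in {-1, +1}
lam = 1.0 / n                                    # illustrative regularization parameter

def f_i(x, i):
    # logistic loss (13) of example i
    return np.log1p(np.exp(-bvec[i] * (A[i] @ x)))

def grad_f_i(x, i):
    # gradient of the logistic loss of example i
    z = bvec[i] * (A[i] @ x)
    return -bvec[i] * A[i] / (1.0 + np.exp(z))

L = np.max(np.sum(A ** 2, axis=1)) / 4.0   # uniform Lipschitz bound: max_i ||a_i||^2 / 4
kappa = L / lam                            # condition number L / mu, with mu = lam here
print(L, kappa)                            # L = 0.25 for normalized data
```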

VI-A Speedup of mS2GD

Mini-batches allow mS2GD to be accelerated on a computer with a parallel processor. In Section IV-D we have shown that, up to some threshold mini-batch size, the total workload of mS2GD remains unchanged. Figure 2 compares the best performance of mS2GD with various mini-batch sizes on the rcv1 and astro-ph datasets. An effective pass (through the data) corresponds to $n$ units of work; hence, the evaluation of one full gradient of $f$ counts as one effective pass. In both cases, for moderate mini-batch sizes, the performance of mS2GD is the same as or better than that of S2GD ($b = 1$), even without any parallelism.

Although for larger mini-batch sizes mS2GD is naturally slower in terms of total work, the results are still promising once parallelism is taken into account. In Figure 3 we show the ideal speedup: the one that would be achievable if we could always evaluate the $b$ gradients in parallel in exactly the same amount of time as it takes to evaluate a single gradient.^12

^12 In practice, it is impossible to ensure that the times of evaluating different component gradients are the same.

VI-B mS2GD vs other algorithms

In this part, we implemented the following algorithms to conduct a numerical comparison:
1) SGDcon: Proximal stochastic gradient descent with a constant step-size which gave the best performance in hindsight.
2) SGD+: Proximal stochastic gradient descent with a variable step-size $h_t = h_0/(t+1)$, where $t$ is the number of effective passes and $h_0$ is some initial constant step-size.
3) FISTA: Fast iterative shrinkage-thresholding algorithm proposed in [2].
4) SAG: Proximal version of the stochastic average gradient algorithm [4]. Instead of the step size analyzed in the reference, we used a constant step size.
5) S2GD: Semi-stochastic gradient descent method proposed in [7]. We applied the proximal setting to the algorithm and used a constant stepsize.
6) mS2GD: mS2GD with a fixed mini-batch size $b$. Although a safe step-size is given by our theoretical analysis in Theorem 1, we ignored the bound and used a constant step size.

In all cases, unless otherwise stated, we have used the best constant stepsizes in hindsight.

Figure 4 demonstrates the superiority of mS2GD over the other algorithms in the test pool on the four datasets described above. For mS2GD, the best choices of parameters are given in Table II.

Parameter   rcv1   news20   covtype   astro-ph
h           0.11   0.10     0.07      0.08
TABLE II: Best choices of parameters in mS2GD.

VI-C Image deblurring

In this section we utilize the Regularization Toolbox [42].^13 We use the blur function available therein to obtain the original image and to generate a blurred image (we choose the following parameter values for the blur function: band = 9, sigma = 10). The purpose of the blur function is to generate a test problem with an atmospheric turbulence blur. In addition, additive Gaussian white noise with a small standard deviation is added to the blurred image. This forms our test image, represented as a vector $b \in \mathbb{R}^n$. Since the blur operator is square, the number of observations equals the number of unknowns, i.e., $n = d$. We would not expect our method to work particularly well on this problem, since mS2GD works best when $n \gg d$. However, as we shall see, the method's performance is on par with the performance of the best methods in our test pool.

^13 The Regularization Toolbox for Matlab can be obtained from http://www.imm.dtu.dk/~pcha/Regutools/.

Our goal is to reconstruct (de-blur) the original image by solving the LASSO problem $\min_x \frac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1$. We have chosen a fixed value of $\lambda$. In our implementation, we normalized the objective function so that it takes the form (1)+(2), with $f$ being an average of $n$ quadratics and with a correspondingly rescaled regularization parameter, similarly as was done in [2].
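A brief sketch of how the normalized LASSO objective fits the template (1)+(2); the data and the value of the regularization parameter are placeholders chosen only to make the snippet self-contained.

```python
import numpy as np

rng = np.random.default_rng(6)
n = d = 64                                      # for the blur test problem, A is square
A = rng.standard_normal((n, d)) / np.sqrt(d)
b = rng.standard_normal(n)
lam = 1e-3                                      # placeholder regularization parameter

# Composite form (1)+(2):
#   f(x) = (1/n) * sum_i f_i(x), with f_i(x) = 0.5*(a_i^T x - b_i)^2, i.e. f(x) = ||Ax - b||^2 / (2n)
#   R(x) = lam * ||x||_1, handled through its prox (soft-thresholding)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) / n
R = lambda x: lam * np.sum(np.abs(x))
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
prox = lambda z, h: np.sign(z) * np.maximum(np.abs(z) - h * lam, 0.0)

# Any of the methods compared in Figure 6 can be run on this (f, R) pair;
# e.g. one stochastic proximal step of the form (5):
x = np.zeros(d)
i = rng.integers(n)
x = prox(x - 0.1 * grad_i(x, i), 0.1)
```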

Figure 5 shows the original test image (left) and the blurred image with added Gaussian noise (right). Figure 6 compares the mS2GD algorithm with SGD+, S2GD and FISTA. We run all algorithms for 100 epochs and plot the error. The plot suggests that SGD+ decreases the objective function very rapidly in the beginning, but slows down after 10-20 epochs.

Fig. 5: Original (left) and blurred & noisy (right) test image.
Fig. 6: Comparison of several algorithms for the de-blurring problem.

Finally, Fig. 7 shows the images reconstructed by the individual methods.

Fig. 7: Reconstruction of the test image from Figure 5 via FISTA, SGD+, S2GD and mS2GD (panels from left to right) after the same number of epochs (one epoch corresponds to work equivalent to the computation of one full gradient).

VII Conclusion

We have proposed mS2GD, a mini-batch semi-stochastic gradient method for minimizing a strongly convex composite function. Such optimization problems arise frequently in inverse problems in signal processing and in statistics. Similarly to SAG, SVRG, SDCA and S2GD, our algorithm outperforms existing deterministic methods such as ISTA and FISTA. Moreover, we have shown that the method is by design amenable to a simple parallel implementation. Comparisons to state-of-the-art algorithms suggest that mS2GD, with a small enough mini-batch size, is competitive in theory and faster in practice than the other competing methods, even without parallelism. The method can be efficiently implemented for sparse datasets.

Appendix A Technical Results

Lemma 3 (Lemma 3.6 in [35]).

Let $R$ be a closed convex function on $\mathbb{R}^d$ and $x, y \in \mathbb{R}^d$. Then

$\left\| \mathrm{prox}_{hR}(x) - \mathrm{prox}_{hR}(y) \right\| \le \|x - y\|$.

Note that this non-expansiveness of the proximal operator is a standard result in the optimization literature [43, 44].

Lemma 4.

Let $\xi_1, \dots, \xi_n$ be vectors in $\mathbb{R}^d$ and let $\bar{\xi} := \frac{1}{n}\sum_{i=1}^{n}\xi_i$. Let $\hat{S}$ be a random subset of $\{1,\dots,n\}$ of size $b$, chosen uniformly at random from all subsets of this cardinality. Taking expectation with respect to $\hat{S}$, we have

(15)
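Since the statement (15) is not reproduced above, the following snippet independently verifies, by exhaustive enumeration, the standard without-replacement variance identity that results of this type rely on, namely $\mathbf{E}\big\|\frac{1}{b}\sum_{i\in \hat{S}}\xi_i - \bar{\xi}\big\|^2 = \frac{n-b}{b(n-1)} \cdot \frac{1}{n}\sum_{i}\|\xi_i - \bar{\xi}\|^2$; the exact constants in the paper's (15) may be stated differently.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, b, d = 6, 2, 3
xi = rng.standard_normal((n, d))
xbar = xi.mean(axis=0)

# Left-hand side: exact expectation over all subsets S of size b
subsets = itertools.combinations(range(n), b)
lhs = np.mean([np.sum((xi[list(S)].mean(axis=0) - xbar) ** 2) for S in subsets])

# Right-hand side: finite-population correction factor times the population variance
rhs = (n - b) / (b * (n - 1)) * np.mean(np.sum((xi - xbar) ** 2, axis=1))

print(abs(lhs - rhs))   # ~1e-16
```

The factor $\frac{n-b}{b(n-1)}$ decays roughly like $1/b$, which is the kind of decay that drives the mini-batch speedup analyzed in Section IV-D.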

Following the proof of Corollary 3.5 in [35], and applying Lemma 4, we obtain the following bound on the variance.

Theorem 4 (Bounding Variance).

Considering the definition of $G_{k,t}$ in Algorithm 1, and conditioning on $y_{k,t}$, we have $\mathbf{E}[G_{k,t}] = \nabla f(y_{k,t})$, and the variance satisfies

(16)

Appendix B Proofs

B-A Proof of Lemma 4

As in the statement of the lemma, by $\mathbf{E}$ we denote expectation with respect to the random set $\hat{S}$. First, note that

If we let , we can thus write

where in the last step we have used the bound

B-B Proof of Theorem 1

The proof follows the steps in [35]. For convenience, let us define the stochastic gradient mapping

then the iterate update can be written compactly in terms of this mapping. Let us estimate the change of the distance of the iterates to the optimum. It holds that

(17)

Applying Lemma 3.7 in [35] (this is why we need to assume that ) with , , , , and , we get

(18)

and therefore,

(19)

In order to bound