Linearly convergent stochastic heavy ball method for minimizing generalization error

10/30/2017 · by Nicolas Loizou, et al.

In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.


1 Introduction

In this paper we study the stochastic optimization problem

min_{x ∈ R^n} f(x) := E_{S∼D}[f_S(x)],    (1)

where A ∈ R^{m×n} is a data matrix, b ∈ R^m is a vector of labels, S is a random matrix with m rows (and an arbitrary number of columns, e.g., 1), D is a distribution over such matrices, and f_S is a least-squares function with respect to a random pseudo-norm defined by a specific symmetric positive semidefinite matrix H which depends on A and the random matrix S. In particular,

f_S(x) := (1/2)‖Ax − b‖²_H = (1/2)(Ax − b)^⊤ H (Ax − b),    where    H := S(S^⊤ A A^⊤ S)^† S^⊤,

and † denotes the Moore-Penrose pseudoinverse. Note that the function f is finite if and only if E_{S∼D}[H] exists and is finite. Hence, we assume this throughout the paper.
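To make the objective concrete, the following minimal NumPy sketch evaluates f_S(x) and its stochastic gradient ∇f_S(x) = A^⊤H(Ax − b) for a sampled sketch matrix S (an m × q array), assuming the definition of H given above; the helper names (H_of, f_S, grad_f_S) are illustrative and not from the paper.

    import numpy as np

    def H_of(A, S):
        """H = S (S^T A A^T S)^+ S^T, the random psd matrix defining the pseudo-norm (assumed definition)."""
        M = S.T @ A @ A.T @ S
        return S @ np.linalg.pinv(M) @ S.T       # Moore-Penrose pseudoinverse

    def f_S(A, b, S, x):
        """f_S(x) = 1/2 (Ax - b)^T H (Ax - b)."""
        r = A @ x - b
        return 0.5 * r @ H_of(A, S) @ r

    def grad_f_S(A, b, S, x):
        """Stochastic gradient: grad f_S(x) = A^T H (Ax - b)."""
        return A.T @ (H_of(A, S) @ (A @ x - b))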

Problem (1) was first proposed in [ASDA], where the authors focus on stochastic reformulations of a consistent linear system Ax = b. The authors further give necessary and sufficient conditions on D for the set of solutions of (1) to be equal to the set X of solutions of the linear system Ax = b; a property for which the term exactness was coined in [ASDA]. Exactness conditions are very weak, allowing D to be virtually any distribution of random matrices. For instance, a sufficient condition for exactness is for the matrix E_{S∼D}[H] to be positive definite. This is indeed a weak condition, since this matrix is symmetric and positive semidefinite by design, without the need to invoke any assumptions. We refer the reader to [ASDA] for more insights into the reformulation (1), its properties and other equivalent reformulations (e.g., stochastic fixed point problem, probabilistic intersection problem, and stochastic linear system).

In [ASDA], the authors consider solving (1) via stochastic gradient descent (SGD):

x^{k+1} = x^k − ω ∇f_{S_k}(x^k),    (2)

where ω > 0 is a fixed stepsize and S_k is sampled afresh in each iteration from D. It is shown that SGD converges to the point x_* given by

x_* = Π_X(x^0) := argmin_{x ∈ X} ‖x − x^0‖,    (3)

i.e., the projection of the starting point x^0 onto the solution set X of the linear system. It was observed that, surprisingly, SGD is in this setting equivalent to the stochastic (pseudo)-Newton method and the stochastic proximal point method, and that it converges at a linear rate despite the following obstacles: f is not necessarily strongly convex, (1) is not a finite-sum problem, and a fixed stepsize is used.

1.1 Contributions

In this paper we take an alternative route and develop a stochastic variant of the heavy ball method for solving the stochastic optimization problem (1). Applied to (1), the classical heavy ball method of Polyak [polyak1964some, polyak1987introduction], with constant stepsize ω > 0 and constant momentum parameter β ≥ 0, takes the form

x^{k+1} = x^k − ω ∇f(x^k) + β(x^k − x^{k−1}).    (4)

This method introduces the momentum term β(x^k − x^{k−1}) into the gradient descent method to achieve acceleration.

Our stochastic variant of the heavy ball method, which we henceforth simply refer to as the stochastic heavy ball method (SHB), replaces the (costly) computation of the gradient ∇f(x^k) by an unbiased estimator ∇f_{S_k}(x^k) of the gradient (a "stochastic gradient"), which is hopefully much cheaper to compute:

x^{k+1} = x^k − ω ∇f_{S_k}(x^k) + β(x^k − x^{k−1}).    (5)
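To make the iteration concrete, here is a minimal sketch of an SHB loop built on the hypothetical grad_f_S helper from the earlier snippet and a user-supplied sampler for S; names and default parameter values are illustrative, not the paper's.

    import numpy as np

    def shb(A, b, sample_S, omega=0.5, beta=0.3, iters=1000):
        """Stochastic heavy ball (5): x^{k+1} = x^k - omega * grad f_{S_k}(x^k) + beta * (x^k - x^{k-1})."""
        n = A.shape[1]
        x_prev = np.zeros(n)
        x = np.zeros(n)                            # start with x^0 = x^1
        for _ in range(iters):
            S = sample_S()                         # draw a fresh sketch S_k ~ D
            g = grad_f_S(A, b, S, x)               # stochastic gradient at x^k
            x, x_prev = x - omega * g + beta * (x - x_prev), x
        return x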

We establish global linear convergence (in expectation) of the iterates and function values: E[‖x^k − x_*‖²] → 0 (L2 convergence) and E[f(x^k)] → 0. Without the exactness assumption we prove that E[f(x̂^k)] → 0, where x̂^k := (1/k) ∑_{t=0}^{k−1} x^t is the Cesaro average of the iterates. Finally, we study the convergence of the expected iterates (L1 convergence), ‖E[x^k − x_*]‖, and establish a global accelerated linear rate. That is, this quantity falls below ε after O(√(λ_max/λ_min^+) log(1/ε)) iterations, where λ_max (resp. λ_min^+) is the largest (resp. smallest nonzero) eigenvalue of the matrix

W := ∇²f(x) = E_{S∼D}[A^⊤ H A].

It turns out that all eigenvalues of W belong to the interval [0, 1].
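The last claim is easy to check in the special case of coordinate sketches S = e_i sampled with probability p_i = ‖A_{i:}‖²/‖A‖²_F (the setup used later in the experiments of Section 3), where W admits the closed form A^⊤A/‖A‖²_F. The snippet below is an illustrative check under that assumption, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))

    # Coordinate sketches S = e_i with p_i proportional to ||A_i:||^2 give
    # W = E[A^T H A] = sum_i p_i A_i:^T A_i: / ||A_i:||^2 = A^T A / ||A||_F^2.
    W = A.T @ A / np.linalg.norm(A, "fro") ** 2

    eigs = np.linalg.eigvalsh(W)
    print(eigs.min(), eigs.max())   # all eigenvalues lie in [0, 1]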

1.2 Related Work

Stochastic variants of the heavy ball method have been employed widely in practice, especially in the area of deep learning [sutskever2013importance, szegedy2015going, krizhevsky2012imagenet]. Despite the popularity of the method in both convex and non-convex optimization, its convergence properties are not very well understood. Recent papers that provide a complexity analysis of SHB (in a different setting than ours) include [yang2016unified] and [gadat2016stochastic]. In [yang2016unified], the authors analyzed SHB for general Lipschitz-continuous convex objective functions (with bounded variance) and proved a sublinear rate. In [gadat2016stochastic], a complexity analysis is provided for the case of quadratic strongly convex smooth coercive functions, and a sublinear convergence rate was proved. In contrast to our results, where we assume a fixed stepsize ω, both papers analyze SHB with diminishing stepsizes. For our problem, variance reduction methods like SVRG [johnson2013accelerating], S2GD [S2GD], mS2GD [mS2GD], SAG [schmidt2017minimizing] and SAGA [defazio2014saga] are not necessary. To the best of our knowledge, our work provides the first linear convergence rate for SHB in any setting.

2 Convergence Results

In this section we state our convergence results for SHB.

2.1 L2 convergence: linear rate

We study L2 convergence of SHB; that is, we study the convergence of the quantity E[‖x^k − x_*‖²] to zero. We show that for a range of stepsize parameters ω > 0 and momentum parameters β ≥ 0, SHB enjoys a global non-asymptotic linear convergence rate. As a corollary of L2 convergence, we obtain convergence of the expected function values.

Theorem 1.

Choose x^0 = x^1 ∈ R^n. Assume exactness. Let {x^k}_{k≥0} be the sequence of random iterates produced by SHB. Assume 0 < ω < 2 and β ≥ 0 and that the expressions

a_1 := 1 + 3β + 2β² − (ω(2 − ω) + ωβ) λ_min^+    and    a_2 := β + 2β² + ωβ λ_max

satisfy a_1 + a_2 < 1. Let x_* be the projection of x^0 onto X. Then

E[‖x^k − x_*‖²] ≤ q^k (1 + δ) ‖x^0 − x_*‖²    (6)

and

E[f(x^k)] ≤ (λ_max/2) q^k (1 + δ) ‖x^0 − x_*‖²,

where q = (a_1 + √(a_1² + 4a_2))/2 and δ = q − a_1. Moreover, a_1 + a_2 ≤ q < 1.

Remark 1.

In the above theorem we obtain a global linear rate. To the best of our knowledge, this is the first time a linear rate has been established for a stochastic variant of the heavy ball method in any setting. All existing results are sublinear.

Remark 2.

If we choose ω = 1, then the condition a_1 + a_2 < 1 is satisfied for all sufficiently small momentum parameters β ≥ 0: at β = 0 we have a_1 + a_2 = 1 − λ_min^+ < 1, and a_1 + a_2 depends continuously on β.

Remark 3.

If β = 0, SHB reduces to the "basic method" in [ASDA] (SGD with a constant stepsize). In this special case, q = 1 − ω(2 − ω) λ_min^+, which is the rate established in [ASDA]. Hence, our result is more general.

Remark 4.

Let q(β) denote the rate from Theorem 1 viewed as a function of β (with ω fixed). Note that since β ≥ 0, we have

q(β) ≥ a_1(β) ≥ a_1(0) = 1 − ω(2 − ω) λ_min^+ = q(0).    (7)

Clearly, the lower bound on q is an increasing function of β. Also, for any β > 0 the rate is always inferior to that of SGD (β = 0). It is an open problem whether one can prove a strictly better rate for SHB than for SGD.
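For intuition, here is a small Python helper that evaluates the rate q of Theorem 1 from the quantities a_1 and a_2 as reconstructed above (the explicit formulas for a_1 and a_2 should be checked against the paper; they are reproduced here as assumptions) and compares it with the SGD rate q(0) = 1 − ω(2 − ω)λ_min^+.

    import math

    def shb_rate(omega, beta, lam_min_plus, lam_max):
        """Rate q from Theorem 1 (using a_1, a_2 as stated above); returns None if a_1 + a_2 >= 1."""
        a1 = 1 + 3 * beta + 2 * beta**2 - (omega * (2 - omega) + omega * beta) * lam_min_plus
        a2 = beta + 2 * beta**2 + omega * beta * lam_max
        if a1 + a2 >= 1:
            return None                     # Theorem 1 does not apply for this (omega, beta)
        return (a1 + math.sqrt(a1**2 + 4 * a2)) / 2

    omega, lam_min_plus, lam_max = 1.0, 0.05, 0.9
    print(shb_rate(omega, 0.00, lam_min_plus, lam_max))   # equals 1 - lam_min_plus = 0.95 (SGD rate)
    print(shb_rate(omega, 0.01, lam_min_plus, lam_max))   # larger, i.e. a slower guaranteed rate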

2.2 Cesaro average: sublinear rate without exactness assumption

In this section we present convergence results for the function values computed at the Cesaro average of all past iterates. Again, our results are global in nature. To the best of our knowledge, an O(1/k) convergence analysis of the Cesaro average for SHB was not established before for any class of functions. Moreover, the result holds without the exactness assumption.

Theorem 2.

Choose x^0 = x^1 ∈ R^n and let {x^k}_{k≥0} be the random iterates produced by SHB, where the momentum parameter β ≥ 0 and the relaxation parameter (stepsize) ω > 0 satisfy ω + 2β < 2. Let x_* be any vector satisfying f(x_*) = 0. If we let x̂^k := (1/k) ∑_{t=0}^{k−1} x^t, then E[f(x̂^k)] = O(1/k), where the constant hidden in the O(·) notation depends only on ω, β, ‖x^0 − x_*‖ and f(x^0).
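The Cesaro average can be maintained incrementally alongside the SHB iterates rather than by storing the whole history; a minimal sketch (illustrative code, not the paper's):

    import numpy as np

    def cesaro_averages(iterates):
        """Yield hat_x^k = (1/k) * (x^0 + ... + x^{k-1}), updated incrementally as iterates arrive."""
        x_hat, count = None, 0
        for x in iterates:                 # e.g., the SHB iterates x^0, x^1, x^2, ...
            count += 1
            x_hat = np.array(x, dtype=float) if x_hat is None else x_hat + (x - x_hat) / count
            yield x_hat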

Remark 5.

In the special case of β = 0, the bound reduces to the O(1/k) convergence rate for the Cesaro average of the "basic method" (SGD) analyzed in [ASDA].

2.3 L1 convergence: accelerated linear rate

In this section we show that, by a proper combination of the stepsize parameter ω and the momentum parameter β, the proposed algorithm enjoys an accelerated linear convergence rate with respect to the expected iterates.

Theorem 3.

Assume exactness. Let {x^k}_{k≥0} be the sequence of random iterates produced by SHB, started with x^0, x^1 ∈ R^n satisfying x^1 = x^0, with stepsize parameter 0 < ω ≤ 1/λ_max and momentum parameter β = (1 − √(ω λ_min^+))². Then there exists a constant C > 0 such that for all k ≥ 0 we have

‖E[x^k − x_*]‖² ≤ β^k C.

  • If we choose ω = 1 and β = (1 − √(λ_min^+))², then ‖E[x^k − x_*]‖² ≤ (1 − √(λ_min^+))^{2k} C, and the iteration complexity becomes O((1/√(λ_min^+)) log(1/ε)).

  • If we choose ω = 1/λ_max and β = (1 − √(λ_min^+/λ_max))², then ‖E[x^k − x_*]‖² ≤ (1 − √(λ_min^+/λ_max))^{2k} C, and the iteration complexity becomes O(√(λ_max/λ_min^+) log(1/ε)).

Note that the convergence factor is precisely equal to the value β of the momentum parameter.
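A small helper for picking the parameters in the second bullet above might look as follows; it is a sketch under the assumption, stated in the reconstruction of Theorem 3, that β = (1 − √(ω λ_min^+))² with ω = 1/λ_max.

    import math

    def accelerated_parameters(lam_min_plus, lam_max):
        """Stepsize and momentum for the accelerated L1 rate: omega = 1/lam_max, beta = (1 - sqrt(omega*lam_min_plus))^2."""
        omega = 1.0 / lam_max
        beta = (1.0 - math.sqrt(omega * lam_min_plus)) ** 2
        return omega, beta

    omega, beta = accelerated_parameters(lam_min_plus=0.05, lam_max=0.9)
    print(omega, beta)   # the convergence factor equals beta, i.e. (1 - sqrt(lam_min_plus/lam_max))^2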

Remark 6.

Let x be any random vector in R^n with finite mean E[x], and let x_* ∈ R^n be any reference vector (for instance, any solution of Ax = b). Then we have the identity (see, for instance, [gower2015randomized])

E[‖x − x_*‖²] = ‖E[x − x_*]‖² + E[‖x − E[x]‖²].

This means that the quantity E[‖x^k − x_*‖²] appearing in our L2 convergence result (Theorem 1) is larger than ‖E[x^k − x_*]‖² appearing in the L1 convergence result (Theorem 3), and hence harder to push to zero. As a corollary, L2 convergence implies L1 convergence. However, note that in Theorem 3 we have established an accelerated rate.
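The identity is the standard bias-variance decomposition and is easy to verify numerically, as in the illustrative snippet below (it holds exactly for the empirical distribution of the samples).

    import numpy as np

    rng = np.random.default_rng(1)
    x_star = rng.standard_normal(5)
    samples = x_star + rng.standard_normal((10000, 5))   # random vectors x

    lhs = np.mean(np.sum((samples - x_star) ** 2, axis=1))                  # E ||x - x_*||^2
    bias = np.sum((samples.mean(axis=0) - x_star) ** 2)                     # ||E[x - x_*]||^2
    var = np.mean(np.sum((samples - samples.mean(axis=0)) ** 2, axis=1))    # E ||x - E[x]||^2
    print(lhs, bias + var)   # the two numbers agree up to floating point error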

3 Experiments

In this section we present a preliminary experiment to evaluate the performance of SHB for solving the stochastic optimization problem (1). The data matrices A are taken from the LIBSVM library [chang2011libsvm]. To ensure consistency of the linear system, we pick an optimal solution x_* with i.i.d. entries and set the right-hand side to b = Ax_*. In each iteration, the random matrix is chosen as S = e_i with probability p_i = ‖A_{i:}‖²/‖A‖²_F, where e_i is the i-th unit coordinate vector in R^m and A_{i:} denotes the i-th row of A. In this setup the update rule (5) of SHB simplifies to

x^{k+1} = x^k − ω ((A_{i:} x^k − b_i)/‖A_{i:}‖²) A_{i:}^⊤ + β(x^k − x^{k−1}).

This is a randomized Kaczmarz method with momentum. Note that for β = 0 and ω = 1 this reduces to the celebrated randomized Kaczmarz method (RK) of Strohmer and Vershynin [RK]. In Figure 1, RK with momentum is tested for several values of the momentum parameter β and fixed stepsize ω = 1. For the evaluation we use both the relative error measure ‖x^k − x_*‖²/‖x^0 − x_*‖² and the function suboptimality f(x^k). The starting point is chosen as x^0 = 0. For the horizontal axis we use either the number of iterations or the wall-clock time measured using the tic-toc Julia function. It is clear that in this setting the addition of the momentum term is beneficial and leads to faster convergence.
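For completeness, a minimal NumPy implementation of the randomized Kaczmarz method with momentum described above might look as follows; this is an illustrative sketch, and the parameter values and helper names are not from the paper.

    import numpy as np

    def kaczmarz_momentum(A, b, omega=1.0, beta=0.3, iters=10000, seed=0):
        """Randomized Kaczmarz with momentum: rows sampled with probability proportional to ||A_i:||^2."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_norms_sq = np.sum(A**2, axis=1)
        probs = row_norms_sq / row_norms_sq.sum()
        x_prev = np.zeros(n)
        x = np.zeros(n)                            # x^0 = x^1 = 0
        for _ in range(iters):
            i = rng.choice(m, p=probs)
            residual = A[i] @ x - b[i]
            x, x_prev = x - omega * residual / row_norms_sq[i] * A[i] + beta * (x - x_prev), x
        return x

    # Example on a small consistent system b = A @ x_star.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((300, 50))
    x_star = rng.standard_normal(50)
    b = A @ x_star
    x = kaczmarz_momentum(A, b)
    print(np.linalg.norm(x - x_star) / np.linalg.norm(x_star))   # relative error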

Figure 1: The performance of RK with momentum for several momentum parameters β on real data (the mushrooms and splice datasets) from LIBSVM [chang2011libsvm]. The graphs in the first (second) column plot iterations (time) against residual error, while those in the third (fourth) column plot iterations (time) against function values. The "Error" on the vertical axis represents the relative error ‖x^k − x_*‖²/‖x^0 − x_*‖², and the function values refer to the objective (1).

References