Nesterov's Accelerated Gradient and Momentum as approximations to Regularised Update Descent

07/07/2016 ∙ by Aleksandar Botev, et al.

We present a unifying framework for adapting the update direction in gradient-based iterative optimisation methods. As natural special cases we re-derive classical momentum and Nesterov's accelerated gradient method, lending a new intuitive interpretation to the latter algorithm. We show that a new algorithm, which we term Regularised Update Descent, can converge more quickly than either Nesterov's algorithm or the classical momentum algorithm.


1 Introduction

We present a framework for optimisation that directly sets the parameter update so as to optimise the objective function. Under natural approximations, two special cases of this framework recover Nesterov's Accelerated Gradient (NAG) descent [3] and the classical momentum method (MOM) [5]. This is particularly interesting in the case of NAG since, though popular and theoretically principled, it has largely defied intuitive interpretation. We show that (at least for a special quadratic objective) our algorithm can converge more quickly than either NAG or MOM.

Given a continuous objective $f(\theta)$ we consider iterative algorithms to minimise $f$. We write $\nabla f(\theta_t)$ for the gradient of the function evaluated at $\theta_t$, and similarly $\nabla^2 f(\theta_t)$ for the second derivative (these definitions extend in the obvious way to the gradient vector and Hessian in the vector case). Our focus is on first-order methods, namely those that form the parameter update on the basis of only first-order gradient information.

1.1 Gradient Descent

Perhaps the simplest optimisation approach is Gradient Descent (GD) which, starting from the current parameters $\theta_t$, locally modifies the parameter at iteration $t$ to reduce $f(\theta_t)$. Based on the Taylor series expansion

f(\theta_t + \Delta) \approx f(\theta_t) + \Delta^\top \nabla f(\theta_t)    (1)

for a small learning rate $\eta$, setting $\Delta = -\eta \nabla f(\theta_t)$ reduces $f$. This motivates the GD update $\theta_{t+1} = \theta_t - \eta \nabla f(\theta_t)$. For convex Lipschitz $f$, GD converges to the optimum value as $O(1/t)$ [4]. Whilst gradient descent is universally popular, alternative methods such as momentum and Nesterov's Accelerated Gradient (NAG) can result in significantly faster convergence to the optimum.
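As a concrete reference point for the methods below, here is a minimal NumPy sketch of the GD update; the toy quadratic gradient and all hyperparameter values are our own illustrative choices rather than settings from the paper.

```python
import numpy as np

def gd(grad, theta0, eta=0.1, iters=200):
    """Plain gradient descent: theta <- theta - eta * grad(theta)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        theta = theta - eta * grad(theta)
    return theta

# Toy objective f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
print(gd(lambda th: th, theta0=[1.0, -2.0]))
```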

1.2 Momentum

The intuition behind momentum (MOM) is to continue updating the parameter along the previous update direction. This gives the algorithm (see for example [5])

v_{t+1} = \mu v_t - \eta \nabla f(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}    (2)

where $0 \le \mu < 1$ is the momentum parameter. It is well known that GD can suffer from plateauing when the objective landscape has ridges (due to poor scaling of the objective, for instance), causing the optimisation path to zig-zag. Momentum can alleviate this since persistent descent directions accumulate in (2), whereas directions in which the gradient is quickly changing tend to cancel each other out. The algorithm is also useful in the stochastic setting, when only a sample of the gradient is available: by averaging the gradient over several minibatches/samples, the averaged gradient better approximates the full-batch gradient. In a different setting, when the objective function becomes flat, momentum is useful for maintaining progress along directions of shallow gradient. As far as we are aware, relatively little is known about the convergence properties of momentum. We show below, at least for a special quadratic objective, that momentum indeed converges.
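A minimal NumPy sketch of the momentum update (2); the fixed value of μ and the toy gradient are illustrative assumptions, not settings prescribed by the paper.

```python
import numpy as np

def momentum(grad, theta0, eta=0.1, mu=0.9, iters=200):
    """Classical momentum: v <- mu*v - eta*grad(theta); theta <- theta + v."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(iters):
        v = mu * v - eta * grad(theta)
        theta = theta + v
    return theta

# Same toy quadratic as before: grad f(theta) = theta.
print(momentum(lambda th: th, theta0=[1.0, -2.0]))
```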

1.3 Nesterov’s Accelerated Gradient

Nesterov's Accelerated Gradient (NAG) [3] is given by

\theta_{t+1} = \hat{\theta}_t - \eta \nabla f(\hat{\theta}_t), \qquad \hat{\theta}_t = \theta_t + \mu_t (\theta_t - \theta_{t-1})    (3)

NAG has the interpretation that the previous two parameter values are smoothed and a gradient descent step is taken from this smoothed value. For Lipschitz convex functions (and a suitable schedule for $\mu_t$ and $\eta$), NAG converges at rate $O(1/t^2)$. Nesterov proved that this is the best possible rate for any method based on first-order gradients (this is a 'worst case' result; for quadratic functions, for example, convergence is exponentially fast, leaving open the possibility that other algorithms may have superior convergence on 'benign' problems) [3]. Nesterov proposed an increasing schedule for $\mu_t$ (tending to 1) together with a fixed learning rate $\eta$, which we adopt in the experiments.

Recently, [6] showed that, by setting $v_t = \theta_t - \theta_{t-1}$, equation (3) can be rewritten as

v_{t+1} = \mu_t v_t - \eta \nabla f(\theta_t + \mu_t v_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}    (4)

This formulation reveals the relation of NAG to the classical momentum algorithm, equation (2), which uses $\nabla f(\theta_t)$ in place of $\nabla f(\theta_t + \mu_t v_t)$ in equation (4). In both cases, NAG and MOM tend to continue updating the parameters along the previous update direction.
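In code, the reformulation (4) differs from the momentum sketch above only in where the gradient is evaluated; a minimal NumPy sketch with a fixed μ (an illustrative simplification of the schedule μ_t):

```python
import numpy as np

def nag(grad, theta0, eta=0.1, mu=0.9, iters=200):
    """NAG in the form of equation (4): the gradient is taken at the lookahead point theta + mu*v."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(iters):
        v = mu * v - eta * grad(theta + mu * v)
        theta = theta + v
    return theta

# Same toy quadratic: grad f(theta) = theta.
print(nag(lambda th: th, theta0=[1.0, -2.0]))
```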

In the machine learning community, NAG is largely viewed as somewhat mysterious and is typically explained as performing a lookahead gradient evaluation followed by a correction [6]. The closely related momentum method is often loosely motivated by analogy with a physical system [5]. One contribution of our work, presented in section(2), is to show that these algorithms can be intuitively understood from the perspective of optimising the objective with respect to the update itself.

2 Regularised Update Descent

We consider a separable objective

L(\theta, v) = f(\theta + v) + \frac{\gamma}{2\eta} v^\top v    (5)

for which the $\theta$ that minimises $L$ is clearly the same as the one that minimises $f$, with $v = 0$ at the minimum. We propose to update $\theta$ to $\theta + v$ so as to reduce $L$ (previous authors have also considered optimising the update itself, for example [2]). To do this we update $v$ to reduce

f(\theta_t + v) + \frac{\gamma}{2\eta} v^\top v    (6)

with $\theta_t$ held fixed. (Note that the regulariser term is necessary: for the objective $f(\theta + v)$ alone, the update would be $v_{t+1} = v_t - \eta \nabla f(\theta_t + v_t)$; convergence of $v$ would then occur when $\nabla f(\theta + v) = 0$, for which $v$ need not be zero. Using the update $\theta_{t+1} = \theta_t + v_{t+1}$ would then result in the parameter never converging; the parameter would pass through the minimum and continue beyond it, never to return.) We note that the optimum of $L(\theta, v)$ occurs when

\frac{\partial L}{\partial \theta} = \nabla f(\theta + v) = 0, \qquad \frac{\partial L}{\partial v} = \nabla f(\theta + v) + \frac{\gamma}{\eta} v = 0    (7)

These two conditions give

\frac{\gamma}{\eta} v = 0    (8)

which implies that $v = 0$ at the optimum, and therefore that $\nabla f(\theta) = 0$ when we have found the optimum of $L$. Hence, the $\theta$ that minimises $L$ also minimises $f$.

Differentiating (6) with respect to $v$ we obtain

\nabla f(\theta_t + v) + \frac{\gamma}{\eta} v    (9)

We thus make a gradient descent update in the direction that lowers (6):

v_{t+1} = v_t - \eta \left( \nabla f(\theta_t + v_t) + \frac{\gamma}{\eta} v_t \right) = (1 - \gamma) v_t - \eta \nabla f(\theta_t + v_t)    (10)

We initially proposed to optimise $L$ via the update $\theta_{t+1} = \theta_t + v_t$ by performing gradient descent on $L$ with respect to $v$. However, we have now improved $v_t$ to $v_{t+1}$, which suggests that a superior update for $\theta$ is $\theta_{t+1} = \theta_t + v_{t+1}$. The complete Regularised Update Descent (RUD) algorithm is therefore given by (see also algorithm(1))

v_{t+1} = \mu v_t - \eta \nabla f(\theta_t + v_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}    (11)

where $\mu = 1 - \gamma$. As we converge towards a minimum, the update becomes small (since the gradient is small) and the regularisation term can be safely tuned down. This means that $\mu$ should be set so that it tends to 1 with increasing iterations. As we show below, one can view MOM and NAG as approximations to RUD based on a first-order expansion (for MOM) and a more accurate second-order expansion (for NAG).

1: Initial guess $\theta_0$, learning rate $\eta$ and increasing momentum schedule $\mu_t$
2: $v_0 \leftarrow 0$
3: for $t = 0, \dots, T-1$ do
4:     $v_{t+1} \leftarrow \mu_t v_t - \eta \nabla f(\theta_t + v_t)$
5:     $\theta_{t+1} \leftarrow \theta_t + v_{t+1}$
6: end for
7: return $\theta_T$
Algorithm 1 Regularised Update Descent for $T$ iterations
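A direct NumPy transcription of algorithm(1) might look as follows; the particular increasing schedule used for μ_t below is an illustrative placeholder (the text only requires that μ_t tends to 1), as is the toy gradient in the usage line.

```python
import numpy as np

def rud(grad, theta0, eta=0.1, mu_schedule=lambda t: 1.0 - 3.0 / (t + 5.0), iters=200):
    """Regularised Update Descent: v <- mu_t*v - eta*grad(theta + v); theta <- theta + v."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for t in range(iters):
        # The gradient is evaluated at the looked-ahead point theta + v (not theta + mu*v as in NAG).
        v = mu_schedule(t) * v - eta * grad(theta + v)
        theta = theta + v
    return theta

# Toy quadratic f(theta) = 0.5 * ||theta||^2, grad f(theta) = theta.
print(rud(lambda th: th, theta0=[1.0, -2.0]))
```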

2.1 Deriving MOM from RUD

We consider an update $v$ at the current parameter $\theta_t$. Assuming $v$ is small,

f(\theta_t + v) \approx f(\theta_t) + v^\top \nabla f(\theta_t)    (12)

Under this first-order approximation, the RUD objective (6) becomes

f(\theta_t) + v^\top \nabla f(\theta_t) + \frac{\gamma}{2\eta} v^\top v    (13)

Differentiating with respect to $v$ we get

\nabla f(\theta_t) + \frac{\gamma}{\eta} v    (14)

We thus make an update in this direction:

v_{t+1} = v_t - \eta \left( \nabla f(\theta_t) + \frac{\gamma}{\eta} v_t \right)    (15)
        = \mu v_t - \eta \nabla f(\theta_t)    (16)

where $\mu = 1 - \gamma$ should be close to 1. We then make a parameter update

\theta_{t+1} = \theta_t + v_{t+1}    (17)

which recovers the momentum algorithm, equation (2). We can therefore view momentum as optimising, with respect to the update, a first-order approximation of the RUD objective.

2.2 Deriving NAG from RUD

Expanding to the next order, we obtain

f(\theta_t + v) \approx f(\theta_t) + v^\top \nabla f(\theta_t) + \frac{1}{2} v^\top \nabla^2 f(\theta_t) v    (18)

Since $v$ is not infinitesimally small, we cannot 'trust' the higher-order terms as we move away from $\theta_t$: as we move further from $\theta_t$ we are trying to approximate the function based on curvature information at $\theta_t$, rather than at the current point $\theta_t + v$. This is analogous to the idea of trust regions in quasi-Newton approaches, which limit the extent to which the Taylor expansion is trusted away from the origin [4]. To encode this lack of trust, we reduce the second-order term by a factor $\mu < 1$ and add another term $\frac{\beta}{2\eta} v^\top v$ to encourage $v$ to be small. This gives the modified approximate RUD objective

f(\theta_t) + v^\top \nabla f(\theta_t) + \frac{\mu}{2} v^\top \nabla^2 f(\theta_t) v + \frac{\gamma}{2\eta} v^\top v + \frac{\beta}{2\eta} v^\top v    (19)

Differentiating (19) with respect to $v$ we get

\nabla f(\theta_t) + \mu \nabla^2 f(\theta_t) v + \frac{\gamma + \beta}{\eta} v    (20)

We then update $v$ to reduce this approximate RUD objective:

v_{t+1} = v_t - \eta \left( \nabla f(\theta_t) + \mu \nabla^2 f(\theta_t) v_t + \frac{\gamma + \beta}{\eta} v_t \right)    (21)
        \approx (1 - \gamma - \beta) v_t - \eta \nabla f(\theta_t + \mu v_t)    (22)

where in (22) we used the first-order approximation $\nabla f(\theta_t + \mu v_t) \approx \nabla f(\theta_t) + \mu \nabla^2 f(\theta_t) v_t$. We are free to choose $\gamma$ and $\beta$, which should both be small, and ideally $\mu$ should be close to 1. Hence, it is reasonable to set $\mu = 1 - \gamma - \beta$ and choose $\mu$ to be close to 1. This setting recovers the NAG algorithm:

v_{t+1} = \mu v_t - \eta \nabla f(\theta_t + \mu v_t)    (23)
\theta_{t+1} = \theta_t + v_{t+1}    (24)

and explains why we want $\mu$ to tend to 1 as we converge: as we zoom in to the minimum, we can trust a quadratic approximation to the objective more. An alternative interpretation of NAG (as a two-stage optimisation process) and its relation to RUD is outlined in Appendix (A).

From the perspective that NAG and MOM are approximations to RUD, NAG is preferable to MOM since it is based on a more accurate expansion. In terms of RUD versus NAG, the difference is that NAG evaluates the gradient at $\theta_t + \mu v_t$, whereas RUD evaluates it at $\theta_t + v_t$. This means that RUD 'looks further forward' than NAG (since $\mu < 1$), in a manner more consistent with the eventual parameter update $\theta_{t+1} = \theta_t + v_{t+1}$. This tentatively explains why RUD can outperform NAG.

3 Comparison on a Quadratic function

An interesting question is whether, and under what conditions, RUD may converge more quickly than NAG for convex Lipschitz functions. To date we have not been able to fully analyse this. In lieu of a more complete understanding, we consider the simple quadratic objective (for which convergence is exponentially fast in the number of iterations; this is clearly a very special case compared with the more general convex Lipschitz scenario, but the analysis nevertheless gives some insight that improvement over NAG might be possible)

f(\theta) = \frac{1}{2} \theta^\top \theta    (25)

For this simple function the gradient is given by $\nabla f(\theta) = \theta$ and, for fixed $\eta$ and $\mu$, we are able to fully compute the update trajectories of NAG, RUD and MOM.

3.1 NAG

For this objective, the NAG algorithm is given by

v_{t+1} = \mu v_t - \eta (\theta_t + \mu v_t)    (26)
\theta_{t+1} = \theta_t + v_{t+1}    (27)

Assuming $v_0 = 0$ and a given value for $\theta_0$, this gives $\theta_1 = (1 - \eta) \theta_0$. Similarly, for both MOM and RUD, $\theta_1$ is given by the same value.

Figure 1: (a) Shaded is the parameter region for which RUD converges for the simple quadratic function (25). (b) Shaded is the parameter region for which RUD converges more quickly than NAG. (c) Shaded is the region in which MOM converges more quickly than NAG. (d) Shaded is the region in which MOM converges more quickly than RUD.

We can write equations (26, 27) as a single second-order difference equation

\theta_{t+1} = a_1 \theta_t + a_2 \theta_{t-1}    (28)

where

a_1 = (1 + \mu)(1 - \eta)    (29)
a_2 = -\mu (1 - \eta)    (30)

For the scalar case $\theta \in \mathbb{R}$, assuming a solution of the form $\theta_t = r^t$ gives

r^2 = a_1 r + a_2    (31)

which defines two values $r_1$ and $r_2$, so that the general solution is given by

\theta_t = c_1 r_1^t + c_2 r_2^t    (32)

where $c_1$ and $c_2$ are determined by the linear equations

c_1 + c_2 = \theta_0    (33)
c_1 r_1 + c_2 r_2 = \theta_1    (34)

A sufficient condition for NAG to converge is that $|r_1| < 1$ and $|r_2| < 1$, which is equivalent to the conditions $|a_1| < 1 - a_2$ and $|a_2| < 1$ [7]. For any learning rate $0 < \eta < 1$ and momentum $0 \le \mu < 1$, it is straightforward to show that these conditions hold and thus that NAG converges to the minimum $\theta = 0$.
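The root condition can be checked numerically from the coefficients (29, 30); a small sketch, with the particular (η, μ) values chosen purely for illustration:

```python
import numpy as np

def roots_and_radius(a1, a2):
    """Roots of r^2 - a1*r - a2 = 0 and the spectral radius max(|r1|, |r2|)."""
    r = np.roots([1.0, -a1, -a2])
    return r, float(np.max(np.abs(r)))

eta, mu = 0.5, 0.9
a1, a2 = (1 + mu) * (1 - eta), -mu * (1 - eta)   # NAG coefficients (29, 30)
roots, rho = roots_and_radius(a1, a2)
print(roots, rho, rho < 1)   # rho < 1 confirms convergence for this (eta, mu)
```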

Figure 2: Optimising a 1000-dimensional quadratic function using different algorithms, all with the same learning rate and momentum schedule. (a) The log objective for Gradient Descent, Momentum, Nesterov's Accelerated Gradient and Regularised Update Descent. (b) Trajectories of the different algorithms plotted for the first two components of $\theta$. The behaviour demonstrated is typical in that momentum tends to overshoot the minimum more significantly than RUD or NAG, with RUD typically outperforming NAG.

3.2 MOM

The above analysis carries over directly to the MOM algorithm, with the only change being

a_1 = 1 + \mu - \eta    (35)
a_2 = -\mu    (36)

It is straightforward to show that for any learning rate $0 < \eta < 1$ and momentum $0 \le \mu < 1$, the corresponding conditions $|a_1| < 1 - a_2$ and $|a_2| < 1$ are always satisfied. Therefore the MOM algorithm (at least for this problem) always converges. For MOM to have a better asymptotic convergence rate than NAG, we need the spectral radius $\max(|r_1|, |r_2|)$ of MOM to be smaller than that of NAG. From fig(1) we see that MOM only outperforms NAG (and RUD) when the momentum is small. This is essentially the uninteresting regime since, in practice, we typically use a value of momentum close to 1. For this simple quadratic case, for practical purposes, MOM therefore performs worse than RUD or NAG.

3.3 RUD

For the RUD algorithm, the corresponding coefficients are given by setting

a_1 = 1 + \mu - 2\eta    (37)
a_2 = \eta - \mu    (38)

RUD has more complex convergence behaviour than NAG or MOM. The conditions $|a_1| < 1 - a_2$ and $|a_2| < 1$ are satisfied only within the region shown in fig(1a), which is determined by

0 < \eta < \frac{2}{3}(1 + \mu)    (39)

The main requirement is that the learning rate should not be too high, at least for values of the momentum less than 0.5. Unlike NAG and MOM, RUD therefore has the possibility of diverging.

In fig(1b) we show the region for which the asymptotic convergence of RUD is faster than that of NAG. The main requirement is that the momentum needs to be high (say above 0.8); the region is otherwise largely independent of the learning rate (provided it is not too large).
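The region comparisons of fig(1) can be reproduced numerically by comparing the spectral radius $\max(|r_1|, |r_2|)$ of each method over a grid of $(\eta, \mu)$ values, using the coefficients (29, 30), (35, 36) and (37, 38); a minimal sketch (the grid resolution and the printed summary are our own choices):

```python
import numpy as np

def radius(a1, a2):
    """Spectral radius of the recursion theta_{t+1} = a1*theta_t + a2*theta_{t-1}."""
    return float(np.max(np.abs(np.roots([1.0, -a1, -a2]))))

def coeffs(method, eta, mu):
    if method == "NAG":
        return (1 + mu) * (1 - eta), -mu * (1 - eta)   # (29, 30)
    if method == "MOM":
        return 1 + mu - eta, -mu                       # (35, 36)
    if method == "RUD":
        return 1 + mu - 2 * eta, eta - mu              # (37, 38)
    raise ValueError(method)

rud_faster, rud_convergent = 0, 0
for eta in np.linspace(0.01, 0.99, 50):
    for mu in np.linspace(0.0, 0.99, 50):
        r_nag = radius(*coeffs("NAG", eta, mu))
        r_rud = radius(*coeffs("RUD", eta, mu))
        if r_rud < 1:                        # RUD converges at this (eta, mu)
            rud_convergent += 1
            rud_faster += int(r_rud < r_nag)
print(rud_faster, "of", rud_convergent, "convergent (eta, mu) points favour RUD over NAG")
```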

4 Experiments

4.1 A toy high-dimensional quadratic function

In fig(2) we show the progress of the different algorithms, using the same learning rate and momentum schedule, on a toy 1000-dimensional quadratic function with randomly chosen coefficients. This simple experiment shows that the theoretical property derived in section(3), namely that RUD can outperform NAG and MOM, carries over to the more general quadratic setting. Indeed, in our experience, the improved convergence of RUD over NAG on quadratic objective functions is typical behaviour.
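The following sketch runs the four update rules on a random quadratic in the spirit of fig(2); the dimension, conditioning, hyperparameters and random seed are illustrative choices, not the paper's exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
M = rng.standard_normal((d, d))
A = M @ M.T / d + 0.1 * np.eye(d)      # random positive definite curvature
b = rng.standard_normal(d)

def f(theta):
    return 0.5 * theta @ A @ theta + b @ theta

def grad(theta):
    return A @ theta + b

def run(method, eta=0.05, mu=0.9, iters=500):
    theta, v = np.zeros(d), np.zeros(d)
    for _ in range(iters):
        if method == "GD":
            theta = theta - eta * grad(theta)
            continue
        if method == "MOM":
            v = mu * v - eta * grad(theta)
        elif method == "NAG":
            v = mu * v - eta * grad(theta + mu * v)
        elif method == "RUD":
            v = mu * v - eta * grad(theta + v)
        theta = theta + v
    return f(theta)

for method in ["GD", "MOM", "NAG", "RUD"]:
    print(method, run(method))
```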

4.2 Deep Learning: MNIST

Whilst RUD has interesting convergence behaviour for quadratic functions, in practice it is of course important to see how it behaves on more general non-convex functions. In fig(3) we look at a classical deep learning problem: training an autoencoder for handwritten digit reconstruction [1]. The dataset consists of black and white images of size 28x28; we used 50000 training images, with pixel values scaled to lie in the 0 to 1 range. The task is for the network to learn to reduce the dimensionality of the input to a 30-dimensional vector and then to reconstruct the input. The nonlinearity at each layer is the hyperbolic tangent (we also tried rectified linear units and leaky rectified linear units, but they did not affect the relative performance of the algorithms) and for the last layer we used the binary cross-entropy loss.
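For concreteness, a model of this type could be defined as below; the layer widths follow the classic architecture of [1] and are an assumption on our part (they are not stated in the text above), and training with RUD would follow the update loop sketched after algorithm(1).

```python
import torch.nn as nn

# Autoencoder with a 30-dimensional bottleneck, tanh nonlinearities and a final
# sigmoid so that outputs lie in [0, 1] for the binary cross-entropy loss.
sizes = [784, 1000, 500, 250, 30]          # assumed widths, following [1]
layers = []
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    layers += [nn.Linear(d_in, d_out), nn.Tanh()]
for d_in, d_out in zip(sizes[::-1][:-1], sizes[::-1][1:]):
    layers += [nn.Linear(d_in, d_out), nn.Tanh()]
layers[-1] = nn.Sigmoid()                  # replace the final tanh
model = nn.Sequential(*layers)
loss_fn = nn.BCELoss()
```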

Since NAG and RUD are closely related, we use the same momentum schedule $\mu_t$ for both algorithms. All remaining hyperparameters for each method (the learning rates) were set optimally based on a grid search over a set of reasonable values for each algorithm. For this problem there is little difference between NAG and RUD, with RUD slightly outperforming NAG.

Figure 3: The negative log loss for the classical MNIST autoencoder network [1], trained using minibatches of 200 examples. As in the quadratic objective experiments, we see that on this much larger problem NAG and RUD perform very similarly, as expected, with RUD slightly outperforming NAG. All methods used the same learning rate and momentum parameter schedule.

5 Conclusion

We described a general approach to first-order optimisation based on optimising the objective with respect to the updates themselves. This gives a simple optimisation algorithm, which we termed Regularised Update Descent, and we showed that this algorithm can converge more quickly than Nesterov's Accelerated Gradient. In addition to being a potentially useful optimisation algorithm in its own right, the main contribution of this work is to show that the Nesterov and momentum algorithms can be viewed as approximations to the Regularised Update Descent algorithm.

References

  • [1] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • [2] P-Y. Massé and Y. Ollivier. Speed learning on the fly. arXiv preprint arXiv:1511.02540, 2015.
  • [3] Y. Nesterov. A method of solving a convex programming problem with convergence rate $O(1/k^2)$. In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
  • [4] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, 2006.
  • [5] N. Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.
  • [6] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1139–1147, 2013.
  • [7] K. Sydsaeter and P. Hammond. Essential Mathematics for Economic Analysis. Prentice Hall, 2008.

Appendix A Alternative NAG derivation

For the RUD objective

L(\theta, v) = f(\theta + v) + \frac{\gamma}{2\eta} v^\top v    (40)

we consider a two-stage process for optimising with respect to $v$. The algorithm proceeds as follows: given $\theta_t$ and $v_t$, we first perform a descent step only on the regulariser $\frac{\gamma}{2\eta} v^\top v$, followed by a descent step on the 'lookahead' term $f(\theta_t + v)$. After this we perform the usual step on $\theta$ based on the final updated $v$. The procedure is summarised below:

v_{t+\frac{1}{2}} = v_t - \eta \frac{\gamma}{\eta} v_t = (1 - \gamma) v_t
v_{t+1} = v_{t+\frac{1}{2}} - \eta \nabla f(\theta_t + v_{t+\frac{1}{2}})    (41)
\theta_{t+1} = \theta_t + v_{t+1}

Setting $\mu = 1 - \gamma$ recovers the NAG formulation, equation (4), as in [6]. RUD therefore differs from NAG in that it does not perform the initial descent step on the regulariser term, so that for RUD the lookahead gradient is evaluated at $\theta_t + v_t$ rather than at $\theta_t + \mu v_t$.
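In code, the two-stage view of (41) and the RUD step of (11) differ only in whether the regulariser shrinkage is applied before the lookahead gradient is evaluated; a small illustrative sketch (symbols as above, toy gradient and values our own):

```python
def nag_step(theta, v, grad, eta, mu):
    """Two-stage step of (41): shrink v on the regulariser, then descend on the lookahead term."""
    v_half = mu * v                                   # descent step on the regulariser only
    v_new = v_half - eta * grad(theta + v_half)       # descent step on the lookahead f(theta + v)
    return theta + v_new, v_new

def rud_step(theta, v, grad, eta, mu):
    """RUD step (11): the lookahead gradient is taken at theta + v, before any shrinkage."""
    v_new = mu * v - eta * grad(theta + v)
    return theta + v_new, v_new

# On the toy scalar quadratic grad(theta) = theta, the two updates differ by eta*(1 - mu)*v.
theta, v = 1.0, -0.1
print(nag_step(theta, v, lambda th: th, eta=0.1, mu=0.9))
print(rud_step(theta, v, lambda th: th, eta=0.1, mu=0.9))
```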