
# Analysis of Momentum Methods

Gradient descent-based optimization methods underpin the parameter training which results in the impressive results now found when testing neural networks. Introducing stochasticity is key to their success in practical problems, and there is some understanding of the role of stochastic gradient descent in this context. Momentum modifications of gradient descent, such as Polyak's Heavy Ball method (HB) and Nesterov's method of accelerated gradients (NAG), are widely adopted. In this work, our focus is on understanding the role of momentum in the training of neural networks, concentrating on the common situation in which the momentum contribution is fixed at each step of the algorithm; to expose the ideas simply we work in the deterministic setting. We show that, contrary to popular belief, standard implementations of fixed momentum methods do no more than act to rescale the learning rate. We achieve this by showing that the momentum method converges to a gradient flow, with a momentum-dependent time-rescaling, using the method of modified equations from numerical analysis. Further, we show that the momentum method admits an exponentially attractive invariant manifold on which the dynamics reduce to a gradient flow with respect to a modified loss function, equal to the original one plus a small perturbation.


## 1 Introduction

### 1.1 Background and Literature Review

At the core of many machine learning tasks is the solution of the optimization problem

 \operatorname{argmin}_{u \in \mathbb{R}^d} \Phi(u) \qquad (1)

where Φ : ℝ^d → ℝ is an objective (or loss) function that is, in general, non-convex and differentiable. Finding global minima of such objective functions is an important and challenging task with a long history, one in which the use of stochasticity has played a prominent role for many decades, with papers in the early development of machine learning Geman and Geman (1987); Styblinski and Tang (1990), together with concomitant theoretical analyses for both discrete Bertsimas et al. (1993) and continuous problems Kushner (1987); Kushner and Clark (2012). Recent successes in the training of deep neural networks have built on this older work, leveraging the enormous computer power now available, together with empirical experience about good design choices for the architecture of the networks; reviews may be found in Goodfellow et al. (2016); LeCun et al. (2015). Gradient descent plays a prominent conceptual role in many algorithms, following from the observation that the equation

 \frac{du}{dt} = -\nabla\Phi(u) \qquad (2)

causes Φ(u) to decrease along trajectories. The most widely adopted methods use stochastic gradient descent (SGD), a concept introduced in Robbins and Monro (1951); the basic idea is to use gradient descent steps based on a noisy approximation to the gradient of Φ. Building on deep work in the convex optimization literature, momentum-based modifications to stochastic gradient descent have also become widely used in optimization. Most notable amongst these momentum-based methods are the Heavy Ball method (HB), due to Polyak (1964), and Nesterov's method of accelerated gradients (NAG) Nesterov (1983). To the best of our knowledge, the first application of HB to neural network training appears in Rumelhart et al. (1986). More recent work, such as Sutskever et al. (2013), has even argued for the indispensability of such momentum-based methods for the field of deep learning.

From these two basic variants on gradient descent there has come a plethora of adaptive methods incorporating momentum-like ideas, such as Adam Kingma and Ba (2014), Adagrad Duchi et al. (2011), and RMSProp Tieleman and Hinton (2012). There is no consensus on which method performs best, and results vary based on application. The recent work of Wilson et al. (2017) argues that the rudimentary, non-adaptive schemes SGD, HB, and NAG result in solutions with the greatest generalization performance for supervised learning applications with deep neural network models.

There is a natural physical analogy for HB methods, namely that they relate to a damped second order Hamiltonian dynamic with potential Φ:

 m\frac{d^2u}{dt^2} + \gamma(t)\frac{du}{dt} + \nabla\Phi(u) = 0. \qquad (3)

This perspective was introduced in Qian (1999), although no proof was given. For NAG, the work of Su et al. (2014) shows that the method approximates a damped Hamiltonian system of precisely this form, with a time-dependent damping coefficient. The analysis holds when the momentum factor is step-dependent and chosen as in the original work of Nesterov (1983). However, this is not the way that NAG is usually used for machine learning applications, especially for deep learning: in many situations the method is employed with a constant momentum factor. In fact, popular books on the subject such as Goodfellow et al. (2016) introduce the method in this way, and popular articles, such as He et al. (2016) to name one of many, simply state the value of the constant momentum factor used in their experiments. Widely used deep learning libraries such as TensorFlow and PyTorch Paszke et al. (2017) implement the method with a fixed choice of momentum factor.

Momentum-based methods, as practically implemented in many machine learning optimization tasks, with fixed momentum, have not been carefully analyzed. We will undertake such an analysis, using ideas from numerical analysis, in particular the concept of modified equations Griffiths and Sanz-Serna (1986); Chartier et al. (2007), and from the theory of attractive invariant manifolds Hirsch et al. (2006); Wiggins (2013). Both ideas are explained in the text Stuart and Humphries (1998).

### 1.2 Our Contribution

We study momentum-based optimization algorithms for the minimization task (1), with fixed momentum, focussing on deterministic methods for clarity of exposition. We make the following contributions to their understanding.

• We show that momentum-based methods as used by machine learning practitioners, with fixed momentum, satisfy, in the continuous-time limit, a rescaled version of the gradient flow equation (2).

• We show that such methods also approximate a damped Hamiltonian system of the form (3), with small mass (on the order of the learning rate) and constant damping (1−λ); this approximation has the same order of accuracy as the approximation of the rescaled equation (2), but can provide a better qualitative approximation.

• Furthermore, for the approximate Hamiltonian system, we show that the dynamics admit an exponentially attractive invariant manifold which is locally representable as a graph mapping co-ordinates to their velocities. The map generating this graph describes a gradient flow in a potential which is a small (on the order of the learning rate) perturbation of Φ.

• On the invariant manifold, we show that momentum methods are approximated by the perturbed gradient flow (18) to second order accuracy.

• We provide numerical experiments which illustrate the foregoing considerations.

Taken together, our results are interesting because they demonstrate that the popular belief that (fixed) momentum methods resemble the dynamics induced by (3) is misleading. Whilst it is true, the mass in the approximating equation is small, and as a consequence understanding the dynamics as gradient flows (2), with modified potential, is more instructive. In fact, in the first application of HB to neural networks by Rumelhart et al. (1986), the authors state that [their] experience has been that [one] get[s] the same solutions by setting [the momentum factor to zero] and reducing the size of [the learning rate]. While our analysis is confined to the non-stochastic case to simplify the exposition, the results will, with some care, extend to the stochastic setting using ideas from averaging and homogenization Pavliotis and Stuart (2008) as well as continuum analyses of SGD as in Li et al. (2017); Feng et al. (2018). Furthermore, we also confine our analysis to a fixed learning rate and impose global bounds on the relevant derivatives of Φ; this further simplifies the exposition of the key ideas but is not essential to them; with considerably more analysis, the ideas exposed in this paper will transfer to adaptive time-stepping methods.
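The practical content of this observation can be illustrated in a few lines of code. The sketch below is ours, not the paper's: it runs deterministic HB with momentum factor λ and learning rate h against plain gradient descent with the rescaled learning rate h/(1−λ) on the assumed toy loss Φ(u) = u²/2, and the two trajectories nearly coincide after a short transient.

```python
# Illustrative sketch (ours): fixed-momentum HB versus gradient descent
# with learning rate rescaled by 1/(1 - lam). Assumed toy loss
# Phi(u) = u^2/2, so f(u) = -grad(Phi)(u) = -u.

def heavy_ball(f, u0, h, lam, n_steps):
    # One-step form of HB: u_{n+1} = u_n + lam*(u_n - u_{n-1}) + h*f(u_n),
    # started with u_1 = u_0 + h*f(u_0).
    u_prev, u = u0, u0 + h * f(u0)
    for _ in range(n_steps - 1):
        u_prev, u = u, u + lam * (u - u_prev) + h * f(u)
    return u

def gradient_descent(f, u0, h, n_steps):
    u = u0
    for _ in range(n_steps):
        u = u + h * f(u)
    return u

f = lambda u: -u
lam, h, n = 0.5, 0.01, 100
u_hb = heavy_ball(f, 1.0, h, lam, n)
u_gd = gradient_descent(f, 1.0, h / (1.0 - lam), n)  # rescaled learning rate
print(abs(u_hb - u_gd))  # small: the two methods track each other
```

After the transient, the discrepancy is O(h), consistent with Theorem 2.1 below.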

The paper is organized as follows. Section 2 introduces the optimization procedures and states the convergence result to a rescaled gradient flow. In Section 3 we derive the modified, second-order equation and state convergence of the schemes to this equation. Section 4 asserts the existence of an attractive invariant manifold, demonstrating that it results in a gradient flow with respect to a small perturbation of Φ. We conclude in Section 5. All proofs of theorems are given in the appendices so that the ideas of the theorems can be presented clearly within the main body of the text.

### 1.3 Notation

We use |·| to denote the Euclidean norm on ℝ^d. We define f : ℝ^d → ℝ^d by f(u) = −∇Φ(u) for any u ∈ ℝ^d. Given a parameter λ ∈ [0,1), we define λ̄ = (1−λ)^{−1}.

For two Banach spaces X, Y and a subset A ⊆ X, we denote by C^k(A; Y) the set of k-times continuously differentiable functions with domain A and range Y. For a function F ∈ C^k(A; Y), we let D^jF denote its j-th (total) Fréchet derivative for j = 1, …, k. For a function u of time, we denote its derivatives by du/dt, d²u/dt², etc., or equivalently by u̇, ü, etc.

To simplify our proofs, we make the following assumption about the objective function.

###### Assumption 1.3

Suppose Φ ∈ C^k(ℝ^d; ℝ) with uniformly bounded derivatives. Namely, there exist constants B_{j−1} > 0 such that

 \|D^{j-1}f\| = \|D^j\Phi\| \le B_{j-1}

for j = 1, …, k, where ∥·∥ denotes any appropriate operator norm.

Finally, we observe that the nomenclature “learning rate” is now prevalent in machine learning, and so we use it in this paper; it refers to the object commonly referred to as “time-step” in the field of numerical analysis.

## 2 Momentum Methods and Convergence to Gradient Flow

In subsection 2.1 we state Theorem 2.1 concerning the convergence of a class of momentum methods to a rescaled gradient flow. Subsection 2.2 demonstrates that the HB and NAG methods are special cases of our general class of momentum methods, and gives intuition for proof of Theorem 2.1; the proof itself is given in Appendix A. Subsection 2.3 contains a numerical illustration of Theorem 2.1.

### 2.1 Main Result

The standard Euler discretization of (2) gives the discrete time optimization scheme

 u_{n+1} = u_n + hf(u_n), \quad n = 0, 1, 2, \dots \qquad (4)

Implementation of this scheme requires an initial guess u_0 ∈ ℝ^d. For simplicity we consider a fixed learning rate h > 0. Equation (2) has a unique solution u under Assumption 1.3 and, for any T > 0,

 \sup_{0 \le nh \le T}|u_n - u(nh)| \le C(T)h;

see Stuart and Humphries (1998), for example.
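As a sanity check of this first-order error bound, the following sketch (ours; Φ(u) = u²/2 is an assumed test problem) applies the Euler scheme (4) to f(u) = −u, for which the gradient flow has the exact solution u(t) = u₀e^{−t}, and observes the error halving as h halves.

```python
import math

# Explicit Euler discretization (4) of du/dt = f(u), f = -grad(Phi).
# Assumed test problem: Phi(u) = u^2/2, so f(u) = -u and the exact
# solution of (2) is u(t) = u0 * exp(-t).

def euler(f, u0, h, n_steps):
    u = u0
    for _ in range(n_steps):
        u = u + h * f(u)
    return u

f = lambda u: -u
u0, T = 1.0, 1.0
errors = [abs(euler(f, u0, h, int(round(T / h))) - u0 * math.exp(-T))
          for h in (0.1, 0.05, 0.025)]
print(errors)  # each error is roughly half the previous: O(h) convergence
```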

In this section we consider a general class of momentum methods for the minimization task (1) which can be written in the form, for some momentum factor λ ∈ (0,1) and parameter a ≥ 0,

 u_{n+1} = u_n + \lambda(u_n - u_{n-1}) + hf\big(u_n + a(u_n - u_{n-1})\big), \quad n = 1, 2, \dots,
 u_1 = u_0 + hf(u_0). \qquad (5)

Again, implementation of this scheme requires an initial guess u_0 ∈ ℝ^d. The parameter choice a = 0 gives HB and a = λ gives NAG. In Appendix A we prove the following:

Suppose Assumption 1.3 holds and let u be the solution to

 \frac{du}{dt} = -(1-\lambda)^{-1}\nabla\Phi(u), \quad u(0) = u_0 \qquad (6)

with λ ∈ (0,1). For u_0 ∈ ℝ^d, let {u_n} be the sequence given by (5). Then for any T > 0, there is a constant C = C(T) > 0 such that

 \sup_{0 \le nh \le T}|u_n - u(nh)| \le Ch.

Note that (6) is simply a sped-up version of (2): if u solves (2) and v solves (6) then v(t) = u((1−λ)^{−1}t) for any t ≥ 0. This demonstrates that introduction of momentum in the form used within both HB and NAG results in numerical methods that do not differ substantially from gradient descent.
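Theorem 2.1 is easy to probe numerically. The sketch below (ours; the quadratic Φ and all parameter values are assumptions) iterates the general scheme (5) for both the HB (a = 0) and NAG (a = λ) parameter choices and compares with the exact solution u(t) = u₀e^{−t/(1−λ)} of (6).

```python
import math

# General momentum scheme (5):
#   u_{n+1} = u_n + lam*(u_n - u_{n-1}) + h*f(u_n + a*(u_n - u_{n-1})),
#   u_1 = u_0 + h*f(u_0).
# a = 0 gives HB, a = lam gives NAG.

def momentum(f, u0, h, lam, a, n_steps):
    u_prev, u = u0, u0 + h * f(u0)
    for _ in range(n_steps - 1):
        d = u - u_prev
        u_prev, u = u, u + lam * d + h * f(u + a * d)
    return u

# Assumed test problem: Phi(u) = u^2/2, f(u) = -u; the rescaled flow (6)
# then has exact solution u(t) = u0 * exp(-t / (1 - lam)).
lam, h, T, u0 = 0.5, 0.01, 1.0, 1.0
exact = u0 * math.exp(-T / (1.0 - lam))
for a, name in ((0.0, "HB"), (lam, "NAG")):
    err = abs(momentum(lambda u: -u, u0, h, lam, a, int(round(T / h))) - exact)
    print(name, err)  # both errors are O(h)
```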

### 2.2 Link to HB and NAG

The HB method is usually written as the two-step scheme (Sutskever et al. (2013))

 v_{n+1} = \lambda v_n + hf(u_n)
 u_{n+1} = u_n + v_{n+1}

with λ ∈ (0,1) the momentum factor and h > 0 the learning rate. Noting that, with v_0 = 0, we have v_n = u_n − u_{n−1}, we can re-write this update as

 u_{n+1} = u_n + \lambda v_n + hf(u_n) = u_n + \lambda(u_n - u_{n-1}) + hf(u_n),

so that the method becomes

 u_{n+1} = u_n + \lambda(u_n - u_{n-1}) + hf(u_n), \quad u_1 = u_0 + hf(u_0). \qquad (7)

Similarly, NAG is usually written as (Sutskever et al. (2013))

 v_{n+1} = \lambda v_n + hf(u_n + \lambda v_n)
 u_{n+1} = u_n + v_{n+1}

with v_0 = 0. Define w_n := u_n + \lambda v_n; then

 w_{n+1} = u_{n+1} + \lambda v_{n+1} = u_{n+1} + \lambda(u_{n+1} - u_n)

and

 u_{n+1} = u_n + \lambda v_n + hf(u_n + \lambda v_n) = u_n + (w_n - u_n) + hf(w_n) = w_n + hf(w_n).

Hence the method may be written as

 u_{n+1} = u_n + \lambda(u_n - u_{n-1}) + hf\big(u_n + \lambda(u_n - u_{n-1})\big), \quad u_1 = u_0 + hf(u_0). \qquad (8)
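The algebra above can be checked mechanically: with v₀ = 0, the two-step implementations of HB and NAG reproduce the unified scheme (5) with a = 0 and a = λ respectively. The sketch below is ours; the cubic f is an arbitrary assumed test function.

```python
# Check that the two-step forms of HB and NAG coincide with the
# one-step scheme (5) for a = 0 and a = lam respectively.

def unified(f, u0, h, lam, a, n_steps):
    us = [u0, u0 + h * f(u0)]
    for _ in range(n_steps - 1):
        d = us[-1] - us[-2]
        us.append(us[-1] + lam * d + h * f(us[-1] + a * d))
    return us

def hb(f, u0, h, lam, n_steps):
    us, u, v = [u0], u0, 0.0
    for _ in range(n_steps):
        v = lam * v + h * f(u)        # v_{n+1} = lam*v_n + h*f(u_n)
        u = u + v                     # u_{n+1} = u_n + v_{n+1}
        us.append(u)
    return us

def nag(f, u0, h, lam, n_steps):
    us, u, v = [u0], u0, 0.0
    for _ in range(n_steps):
        v = lam * v + h * f(u + lam * v)   # gradient at the look-ahead point
        u = u + v
        us.append(u)
    return us

f = lambda u: -u ** 3                 # assumed test function
u0, h, lam, n = 1.0, 0.1, 0.5, 50
assert max(abs(x - y) for x, y in zip(hb(f, u0, h, lam, n), unified(f, u0, h, lam, 0.0, n))) < 1e-10
assert max(abs(x - y) for x, y in zip(nag(f, u0, h, lam, n), unified(f, u0, h, lam, lam, n))) < 1e-10
```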

It is clear that (7) and (8) are special cases of (5), with a = 0 giving HB and a = λ giving NAG. To intuitively understand Theorem 2.1, re-write (6) as

 \frac{du}{dt} - \lambda\frac{du}{dt} = f(u).

If we discretize the first du/dt term using forward differences and the second using backward differences, we obtain

 \frac{u(t+h) - u(t)}{h} - \lambda\,\frac{u(t) - u(t-h)}{h} \approx f(u(t)) \approx f\Big(u(t) + ha\,\frac{u(t) - u(t-h)}{h}\Big)

with the second approximate equality coming from the Taylor expansion of f. This can be rearranged as

 u(t+h) \approx u(t) + \lambda\big(u(t) - u(t-h)\big) + hf\big(u(t) + a(u(t) - u(t-h))\big)

which has the form of (5) with the identification u_n ≈ u(nh).

### 2.3 Numerical Illustration

Figure 1 compares trajectories of the momentum numerical method (5) with the rescaled gradient flow (6) for a one-dimensional problem. Panels (a) and (d) show that, for fixed λ, the trajectories of the numerical method match those of the gradient flow increasingly well as the learning rate is decreased; however, some initial transient oscillations are present. The same phenomenon is clear in panels (b) and (e), but because λ is increased, the transient oscillations are more pronounced; however, convergence to the gradient flow is still apparent as the learning rate is decreased. Panels (c) and (f) estimate the rate of convergence as a function of h, which is defined as

 \Delta = \log_2\frac{\|u^{(h)} - u\|_\infty}{\|u^{(h/2)} - u\|_\infty}

where u^{(h)} is the numerical solution using timestep h, and show that it is close to 1, as predicted by our theory. In summary, the behaviour of the momentum methods is precisely that of a rescaled gradient flow, but with initial transient oscillations which capture momentum effects and disappear as the learning rate is decreased. We model these oscillations in the next section via use of a modified equation.
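The diagnostic Δ can be reproduced in a few lines. The sketch below is ours, with Φ(u) = u²/2 and the parameter values as assumptions (not the figure's actual setup):

```python
import math

# Empirical convergence rate
#   Delta = log2( sup_n |u^(h)_n - u(nh)| / sup_n |u^(h/2)_n - u(nh)| )
# for scheme (5) against the rescaled gradient flow (6). Assumed test
# problem: Phi(u) = u^2/2, so (6) has solution u(t) = u0*exp(-t/(1-lam)).

def trajectory(f, u0, h, lam, a, T):
    us = [u0, u0 + h * f(u0)]
    for _ in range(int(round(T / h)) - 1):
        d = us[-1] - us[-2]
        us.append(us[-1] + lam * d + h * f(us[-1] + a * d))
    return us

def sup_error(h, lam=0.5, a=0.0, T=1.0, u0=1.0):
    us = trajectory(lambda u: -u, u0, h, lam, a, T)
    return max(abs(u - u0 * math.exp(-n * h / (1.0 - lam)))
               for n, u in enumerate(us))

delta = math.log2(sup_error(0.01) / sup_error(0.005))
print(delta)  # close to 1, i.e. first-order convergence
```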

## 3 Modified Equations

The previous section demonstrates how the momentum methods approximate a time-rescaled version of the gradient flow (2). In this section we show how the same methods may also be viewed as approximations of the damped Hamiltonian system (3), with mass on the order of the learning rate, using the method of modified equations. In subsection 3.1 we state and discuss the main result of the section, Theorem 3.1. Subsection 3.2 gives intuition for the proof of Theorem 3.1; the proof itself is given in Appendix B. The subsection also contains comments on generalizing the idea of modified equations. In subsection 3.3 we describe a numerical illustration of Theorem 3.1.

### 3.1 Main Result

The main result of this section quantifies the sense in which momentum methods do, in fact, approximate a damped Hamiltonian system; it is proved in Appendix B.

Fix α > 0 and assume that λ ∈ [0,1) is chosen so that 1 − λ is strictly positive. Suppose Assumption 1.3 holds and let u be the solution to

 h\alpha\frac{d^2u}{dt^2} + (1-\lambda)\frac{du}{dt} = -\nabla\Phi(u), \quad u(0) = u_0, \quad \frac{du}{dt}(0) = u_0'. \qquad (9)

Suppose further that u_0' ∈ ℝ^d. For u_0 ∈ ℝ^d, let {u_n} be the sequence given by (5). Then for any T > 0, there is a constant C > 0 such that

 \sup_{0 \le nh \le T}|u_n - u(nh)| \le Ch.

Theorem 2.1 demonstrates the same order of convergence, namely O(h), to the rescaled gradient flow equation (6), obtained from (9) simply by setting α = 0. In the standard method of modified equations, the limit system (here (6)) is perturbed by small terms (in terms of the assumed small learning rate) and an increased rate of convergence is obtained to the modified equation (here (9)). In our setting, however, because the small modification is to a higher derivative (here second) than appears in the limit equation (here first order), an increased rate of convergence is not obtained. This is due to the nature of the modified equation, whose solution has derivatives that are inversely proportional to powers of h; this fact is quantified in a lemma in Appendix B. It is precisely because the modified equation does not lead to a higher rate of convergence that the initial parameter u_0' is arbitrary; the same rate of convergence is obtained no matter what value it takes.

It is natural to ask, therefore, what is learned from the convergence result in Theorem 3.1. The answer is that, although the modified equation (9) is approximated at the same order as the limit equation (6), it actually contains considerably more qualitative information about the dynamics of the system, particularly in the early transient phase of the algorithm; this will be illustrated in subsection 3.3. Indeed, we will make a specific choice of u_0' in our numerical experiments, namely

 \frac{du}{dt}(0) = \frac{1 - 2\alpha}{2\alpha - \lambda + 1}\,f(u_0), \qquad (10)

to better match the transient dynamics.

### 3.2 Intuition and Wider Context

#### 3.2.1 Idea Behind The Modified Equations

In this subsection, we show that the scheme (5) exhibits momentum, in the sense of approximating a momentum equation, but the size of the momentum term is on the order of the step size h. To see this intuitively, we add and subtract (u_n − u_{n−1}) on the right hand side of (5) and rearrange to obtain

 h\,\frac{u_{n+1} - 2u_n + u_{n-1}}{h^2} + (1-\lambda)\frac{u_n - u_{n-1}}{h} = f\big(u_n + a(u_n - u_{n-1})\big).

This can be seen as a second order central difference and first order backward difference discretization of the momentum equation

 h\frac{d^2u}{dt^2} + (1-\lambda)\frac{du}{dt} = f(u),

noting that the second derivative term has a size of order h.
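Note that the rearrangement leading to the finite-difference form above is exact, not asymptotic: every iterate of (5) satisfies it to machine precision. A sketch (ours; the test function and parameters are assumptions):

```python
# Verify that iterates of (5) satisfy, exactly,
#   h*(u_{n+1} - 2*u_n + u_{n-1})/h^2 + (1 - lam)*(u_n - u_{n-1})/h
#     = f(u_n + a*(u_n - u_{n-1})),
# a discretization of h*u'' + (1 - lam)*u' = f(u).

def trajectory(f, u0, h, lam, a, n_steps):
    us = [u0, u0 + h * f(u0)]
    for _ in range(n_steps - 1):
        d = us[-1] - us[-2]
        us.append(us[-1] + lam * d + h * f(us[-1] + a * d))
    return us

f = lambda u: -u ** 3               # assumed test function
h, lam, a = 0.1, 0.5, 0.5
us = trajectory(f, 1.0, h, lam, a, 30)
residuals = [abs(h * (us[n + 1] - 2 * us[n] + us[n - 1]) / h ** 2
                 + (1 - lam) * (us[n] - us[n - 1]) / h
                 - f(us[n] + a * (us[n] - us[n - 1])))
             for n in range(1, len(us) - 1)]
print(max(residuals))  # at the level of round-off error
```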

#### 3.2.2 Higher Order Modified Equations For HB

We will now show that, for HB, we may derive higher order modified equations that are consistent with (7). Taking the limit of these equations yields an operator that agrees with our intuition for discretizing (6). To this end, suppose p ∈ ℕ and consider the ODE(s)

 \sum_{k=1}^{p}\frac{h^{k-1}(1 + (-1)^k\lambda)}{k!}\frac{d^ku}{dt^k} = f(u), \qquad (11)

noting that p = 1 gives (6) and p = 2 gives (9) (with α = (1+λ)/2). Let u be the solution to (11) and define u_n := u(nh) and u_n^{(k)} := (d^ku/dt^k)(nh), for n ∈ ℕ and k = 1, …, p. Taylor expanding yields

 u_{n\pm 1} = u_n + \sum_{k=1}^{p}\frac{(\pm 1)^k h^k}{k!}u_n^{(k)} + h^{p+1}I_n^{\pm}

where

 I_n^{\pm} = \frac{(\pm 1)^{p+1}}{p!}\int_0^1(1-s)^p\,\frac{d^{p+1}u}{dt^{p+1}}\big((n \pm s)h\big)\,ds.

Then

 u_{n+1} - u_n - \lambda(u_n - u_{n-1}) = \sum_{k=1}^{p}\frac{h^k}{k!}u_n^{(k)} + \lambda\sum_{k=1}^{p}\frac{(-1)^k h^k}{k!}u_n^{(k)} + h^{p+1}(I_n^+ + \lambda I_n^-)
 = h\sum_{k=1}^{p}\frac{h^{k-1}(1 + (-1)^k\lambda)}{k!}u_n^{(k)} + h^{p+1}(I_n^+ + \lambda I_n^-)
 = hf(u_n) + h^{p+1}(I_n^+ + \lambda I_n^-),

showing consistency to order p. As is the case with (9), however, the terms I_n^± will be inversely proportional to powers of h, hence global accuracy will not improve.

We now study the differential operator on the l.h.s. of (11) as p → ∞. Define the sequence of differential operators {T_p} by

 T_p u = \sum_{k=1}^{p}\frac{h^{k-1}(1 + (-1)^k\lambda)}{k!}\frac{d^ku}{dt^k}, \quad \forall u \in C^\infty([0,\infty), \mathbb{R}^d).

Taking the Fourier transform yields

 \mathcal{F}(T_p u)(\omega) = \sum_{k=1}^{p}\frac{h^{k-1}(1 + (-1)^k\lambda)(i\omega)^k}{k!}\mathcal{F}(u)(\omega)

where i denotes the imaginary unit. Suppose there is a limiting operator T as p → ∞; then taking the limit yields

 \mathcal{F}(Tu)(\omega) = \frac{1}{h}\big(e^{ih\omega} + \lambda e^{-ih\omega} - \lambda - 1\big)\mathcal{F}(u)(\omega).

Taking the inverse transform and using the convolution theorem, we obtain

 (Tu)(t) = \frac{1}{h}\mathcal{F}^{-1}\big(e^{ih\omega} + \lambda e^{-ih\omega} - \lambda - 1\big)(t) * u(t)
 = \frac{1}{h}\big(-(1+\lambda)\delta(t) + \lambda\delta(t+h) + \delta(t-h)\big) * u(t)
 = \frac{1}{h}\int_{-\infty}^{\infty}\big(-(1+\lambda)\delta(t-\tau) + \lambda\delta(t-\tau+h) + \delta(t-\tau-h)\big)u(\tau)\,d\tau
 = \frac{1}{h}\big(-(1+\lambda)u(t) + \lambda u(t-h) + u(t+h)\big)
 = \frac{u(t+h) - u(t)}{h} - \lambda\Big(\frac{u(t) - u(t-h)}{h}\Big)

where δ denotes the Dirac delta distribution and we abuse notation by writing its action as an integral. The above calculation does not prove convergence of T_p to T, but simply confirms our intuition that (7) is a forward and backward discretization of (6).
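This intuition can also be confirmed numerically on a smooth test function: the forward–backward difference above agrees with its p = 2 truncation (1−λ)u′(t) + h(1+λ)/2 · u″(t) up to O(h²). The sketch below is ours; u(t) = sin t is an assumed test function.

```python
import math

# Compare (Tu)(t) = [u(t+h) - u(t)]/h - lam*[u(t) - u(t-h)]/h
# with the p = 2 truncation (1 - lam)*u'(t) + h*(1 + lam)/2 * u''(t).
# Assumed smooth test function: u(t) = sin(t).

u, du, d2u = math.sin, math.cos, lambda t: -math.sin(t)
t, lam = 1.0, 0.5
for h in (1e-2, 1e-3):
    Tu = (u(t + h) - u(t)) / h - lam * (u(t) - u(t - h)) / h
    trunc = (1 - lam) * du(t) + h * (1 + lam) / 2 * d2u(t)
    print(h, abs(Tu - trunc))  # the gap shrinks like h^2
```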

### 3.3 Numerical Illustration

Figure 2 shows trajectories of (5) and (9) for different values of λ, h, and a on a one-dimensional problem. We make the specific choice of u_0' implied by the initial condition (10). Panels (c) and (f) show the numerical order of convergence as a function of h, as defined in Section 2.3, which is near 1, matching our theory. We note that the oscillations in HB are captured well by (9), except for a slight shift when h is large. This is due to our choice of initial condition, which cancels the maximum number of terms in the Taylor expansion initially, but the overall rate of convergence remains O(h) due to the lemma in Appendix B. Other choices of u_0' also result in convergence and can be picked on a case-by-case basis to obtain consistency with different qualitative phenomena of interest in the dynamics. Note also that the choice a = λ in NAG introduces additional damping relative to HB (a = 0). As a result, the transient oscillations in (9) are more quickly damped in the NAG case than in the HB case; this is consistent with the numerical results. Indeed, panels (d) and (e) show that (9) is not able to adequately capture the oscillations of NAG when h is relatively large.

## 4 Invariant Manifold

The key lessons of the previous two sections are that the momentum methods approximate both a rescaled gradient flow of the form (2) and a damped Hamiltonian system of the form (3), with small mass which scales with the learning rate, and constant damping. Both approximations hold with the same order of accuracy in terms of the learning rate, and numerics demonstrate that the Hamiltonian system is particularly useful in providing intuition for the transient regime of the algorithm. In this section we link the two theorems from the two preceding sections by showing that the Hamiltonian dynamics with small mass from Section 3 has an exponentially attractive invariant manifold on which the dynamics is, to leading order, a gradient flow. That gradient flow is a small (in terms of the learning rate) perturbation of the time-rescaled gradient flow from Section 2.

### 4.1 Main Result

Define

 v_n := (u_n - u_{n-1})/h, \qquad (12)

noting that v_1 = f(u_0); then (5) becomes

 u_{n+1} = u_n + h\lambda v_n + hf(u_n + hav_n)

and

 v_{n+1} = \frac{u_{n+1} - u_n}{h} = \lambda v_n + f(u_n + hav_n).

Hence we can re-write (5) as

 u_{n+1} = u_n + h\lambda v_n + hf(u_n + hav_n),
 v_{n+1} = \lambda v_n + f(u_n + hav_n). \qquad (13)
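The reformulation (13) is an exact change of variables, as the following sketch confirms (ours; the test function and parameter values are assumptions): iterating (13) from (u₁, v₁) with v₁ = f(u₀) and reading off the u-component reproduces the iterates of (5) up to round-off.

```python
import math

# Check that the first-order system (13) reproduces the scheme (5)
# when started from u_1 = u_0 + h*f(u_0), v_1 = f(u_0).

def scheme5(f, u0, h, lam, a, n_steps):
    us = [u0, u0 + h * f(u0)]
    for _ in range(n_steps - 1):
        d = us[-1] - us[-2]
        us.append(us[-1] + lam * d + h * f(us[-1] + a * d))
    return us

def system13(f, u0, h, lam, a, n_steps):
    u, v = u0 + h * f(u0), f(u0)          # (u_1, v_1)
    us = [u0, u]
    for _ in range(n_steps - 1):
        u, v = (u + h * lam * v + h * f(u + h * a * v),
                lam * v + f(u + h * a * v))
        us.append(u)
    return us

f = lambda u: -math.sin(u)                # assumed test function
args = (1.0, 0.1, 0.5, 0.5, 50)           # u0, h, lam, a, n_steps
gap = max(abs(x - y) for x, y in zip(scheme5(f, *args), system13(f, *args)))
print(gap)  # at the level of round-off error
```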

Note that if h = 0 then (13) shows that u_n is constant in n, and that v_n converges to λ̄f(u_0). This suggests that, for h small, there is an invariant manifold which is a small perturbation of the relation v = λ̄f(u) and is representable as a graph. Motivated by this, we look for a function g such that the manifold

 v = \bar\lambda f(u) + hg(u) \qquad (14)

is invariant for the dynamics of the numerical method:

 v_n = \bar\lambda f(u_n) + hg(u_n) \iff v_{n+1} = \bar\lambda f(u_{n+1}) + hg(u_{n+1}). \qquad (15)

We will prove the existence of such a function g by use of the contraction mapping theorem, finding a fixed point of a mapping T defined in subsection 4.2 below. We seek this fixed point in a set Γ which we now define:

Let γ, δ > 0 be as in the lemmas of Appendix C. Define Γ to be the closed subset of C^0(ℝ^d; ℝ^d) consisting of γ-bounded functions:

 \|g\|_\Gamma := \sup_{\xi\in\mathbb{R}^d}|g(\xi)| \le \gamma, \quad \forall g \in \Gamma,

that are δ-Lipschitz:

 |g(\xi) - g(\eta)| \le \delta|\xi - \eta|, \quad \forall g \in \Gamma,\ \xi, \eta \in \mathbb{R}^d.

Fix γ, δ > 0. Suppose that h is chosen small enough so that the assumption stated in Appendix C holds. For (u_0, v_0) ∈ ℝ^d × ℝ^d, let {u_n}, {v_n} be the sequences given by (13). Then there is an h_c > 0 such that, for all h ∈ (0, h_c), there is a unique g ∈ Γ such that (15) holds. Furthermore,

 |v_n - \bar\lambda f(u_n) - hg(u_n)| \le (\lambda + h^2\lambda\delta)^n\,|v_0 - \bar\lambda f(u_0) - hg(u_0)|

where λ̄ = (1−λ)^{−1}.

The statement of the assumption, and the proof of the preceding theorem, are given in Appendix C. The assumption appears somewhat involved at first glance, but inspection reveals that it simply places an upper bound on the learning rate, as detailed in the lemmas of Appendix C. The proof of the theorem rests on lemmas in Appendix C which establish that the operator T is well-defined, maps Γ to Γ, and is a contraction on Γ. The operator T is defined, and expressed in a helpful form for the purposes of analysis, in the next subsection.

In the next subsection we obtain the leading order approximation for g, given in equation (29). Theorem 4.1 implies that the large-time dynamics are governed by the dynamics on the invariant manifold. Substituting the leading order approximation for g into the invariant manifold (14) and using this expression in the definition (12) shows that

 v_n = -(1-\lambda)^{-1}\nabla\Big(\Phi(u_n) + \tfrac{1}{2}h\bar\lambda(\bar\lambda - a)|\nabla\Phi(u_n)|^2\Big), \qquad (16a)
 u_n = u_{n-1} - h(1-\lambda)^{-1}\nabla\Big(\Phi(u_n) + \tfrac{1}{2}h\bar\lambda(\bar\lambda - a)|\nabla\Phi(u_n)|^2\Big). \qquad (16b)

Setting

 c = \bar\lambda\big(\bar\lambda - a + \tfrac{1}{2}\big), \qquad (17)

we see that for large time the dynamics of momentum methods, including HB and NAG, are approximately those of the modified gradient flow

 \frac{du}{dt} = -(1-\lambda)^{-1}\nabla\Phi_h(u) \qquad (18)

with

 \Phi_h(u) = \Phi(u) + \tfrac{1}{2}hc\,|\nabla\Phi(u)|^2. \qquad (19)

To see this we proceed as follows. Note that from (18)

 \frac{d^2u}{dt^2} = -\tfrac{1}{2}(1-\lambda)^{-2}\nabla|\nabla\Phi(u)|^2 + O(h);

then Taylor expansion shows that, for h sufficiently small,

 u_n = u_{n-1} + h\dot u_n - \tfrac{h^2}{2}\ddot u_n + O(h^3)
 = u_{n-1} - h\bar\lambda\Big(\nabla\Phi(u_n) + \tfrac{1}{2}hc\nabla|\nabla\Phi(u_n)|^2\Big) + \tfrac{1}{4}h^2\bar\lambda^2\nabla|\nabla\Phi(u_n)|^2 + O(h^3)

where we have used that

 Df(u)f(u) = \tfrac{1}{2}\nabla\big(|\nabla\Phi(u)|^2\big).

Choosing c as in (17), we see that

 u_n = u_{n-1} - h(1-\lambda)^{-1}\nabla\Big(\Phi(u_n) + \tfrac{1}{2}h\bar\lambda(\bar\lambda - a)|\nabla\Phi(u_n)|^2\Big) + O(h^3). \qquad (20)

Notice that comparison of (16b) and (20) shows that, on the invariant manifold, the dynamics agree with those of equation (18) up to third order; this is because the truncation error between (16b) and (20) is O(h³).
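Two ingredients of the above derivation are easily sanity-checked: the gradient identity Df(u)f(u) = ½∇(|∇Φ(u)|²), and the evaluation of the modified loss Φ_h of (19). The sketch below is ours, with the quartic Φ an assumed one-dimensional test problem.

```python
# Finite-difference check of Df(u)*f(u) = (1/2) * d/du |Phi'(u)|^2 in 1d,
# plus evaluation of the modified loss Phi_h(u) = Phi(u) + (h*c/2)*|Phi'(u)|^2.
# Assumed test problem: Phi(u) = u^4/4, so Phi'(u) = u^3 and f(u) = -u^3.

phi = lambda u: 0.25 * u ** 4
grad_phi = lambda u: u ** 3
f = lambda u: -grad_phi(u)
df = lambda u: -3 * u ** 2          # Df is the negative Hessian of Phi

def phi_h(u, h, c):
    return phi(u) + 0.5 * h * c * grad_phi(u) ** 2

u, eps = 0.7, 1e-6
lhs = df(u) * f(u)                  # Df(u) f(u)  (= 3*u^5 here)
sq = lambda x: grad_phi(x) ** 2
rhs = 0.5 * (sq(u + eps) - sq(u - eps)) / (2 * eps)   # (1/2) d/du |Phi'|^2
print(abs(lhs - rhs))  # small: the identity holds
print(phi_h(1.0, 0.01, 0.5))       # the modified loss at u = 1
```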

Thus we have proved:

Suppose that the conditions of Theorem 4.1 hold. Then for initial data started on the invariant manifold and any T > 0, there is a constant C > 0 such that

 \sup_{0 \le nh \le T}|u_n - u(nh)| \le Ch^2,

where u solves the modified equation (18) with u(0) = u_0.

### 4.2 Intuition

We will define the mapping T via the equations

 p = \xi + h\lambda\big(\bar\lambda f(\xi) + hg(\xi)\big) + hf\big(\xi + ha(\bar\lambda f(\xi) + hg(\xi))\big), \qquad (21a)
 \bar\lambda f(p) + h(Tg)(p) = \lambda\big(\bar\lambda f(\xi) + hg(\xi)\big) + f\big(\xi + ha(\bar\lambda f(\xi) + hg(\xi))\big). \qquad (21b)

A fixed point of the mapping T will give a function g so that, under (21), the identity (15) holds. Later we will show that, for g in Γ and all h sufficiently small, p can be found from (21a) for every ξ ∈ ℝ^d, and that thus (21b) defines a mapping T from Γ into C^0(ℝ^d; ℝ^d). We will then show that, for h sufficiently small, T maps Γ into Γ and is a contraction.

For any g ∈ Γ and ξ ∈ ℝ^d, define

 w_g(\xi) := \bar\lambda f(\xi) + hg(\xi), \qquad (22)
 z_g(\xi) := \lambda w_g(\xi) + f\big(\xi + haw_g(\xi)\big). \qquad (23)

With this notation, the fixed point mapping (21) for T may be written

 p = \xi + hz_g(\xi), \qquad \bar\lambda f(p) + h(Tg)(p) = z_g(\xi). \qquad (24)

Then, by Taylor expansion,

 f\big(\xi + ha(\bar\lambda f(\xi) + hg(\xi))\big) = f\big(\xi + haw_g(\xi)\big) = f(\xi) + ha\int_0^1 Df\big(\xi + shaw_g(\xi)\big)w_g(\xi)\,ds = f(\xi) + haI_g^{(1)}(\xi), \qquad (25)

where the last equality defines I_g^{(1)}. Similarly,

 f(p) = f\big(\xi + hz_g(\xi)\big) = f(\xi) + h\int_0^1 Df\big(\xi + shz_g(\xi)\big)z_g(\xi)\,ds = f(\xi) + hI_g^{(2)}(\xi), \qquad (26)

where the last equality now defines I_g^{(2)}. Then (21b) becomes

 \bar\lambda\big(f(\xi) + hI_g^{(2)}(\xi)\big) + h(Tg)(p) = \lambda\bar\lambda f(\xi) + h\lambda g(\xi) + f(\xi) + haI_g^{(1)}(\xi)

and we see that

 (Tg)(p) = \lambda g(\xi) + aI_g^{(1)}(\xi) - \bar\lambda I_g^{(2)}(\xi).

In this light, we can rewrite the defining equations (21) for T as

 p = \xi + hz_g(\xi), \qquad (27)
 (Tg)(p) = \lambda g(\xi) + aI_g^{(1)}(\xi) - \bar\lambda I_g^{(2)}(\xi), \qquad (28)

for any ξ ∈ ℝ^d.

Perusal of the above definitions reveals that, to leading order in h,

 w_g(\xi) = z_g(\xi) = \bar\lambda f(\xi), \qquad I_g^{(1)}(\xi) = I_g^{(2)}(\xi) = \bar\lambda Df(\xi)f(\xi).

Thus setting Tg = g in (27), (28) shows that, to leading order in h,

 g(p) = \bar\lambda^2(a - \bar\lambda)Df(p)f(p). \qquad (29)

Note that since f = −∇Φ, Df is the negative Hessian of Φ and is thus symmetric. Hence we can write Df f in gradient form, leading to

 g(p) = \tfrac{1}{2}\bar\lambda^2(a - \bar\lambda)\nabla\big(|\nabla\Phi(p)|^2\big). \qquad (30)

This modified potential (19) also arises in the construction of Lyapunov functions for the one-stage theta method – see Corollary 5.6.2 in Stuart and Humphries (1998).

### 4.3 Numerical Illustration

In Figure 3, panels (a) and (b), we plot the components u_n and v_n found by solving (13) in the case where the initial conditions correspond to initializing the map off the invariant manifold. To leading order in h, the invariant manifold is given by (see equation (16))

 v = -(1-\lambda)^{-1}\nabla\Big(\Phi(u) + \tfrac{1}{2}h\bar\lambda(\bar\lambda - a)|\nabla\Phi(u)|^2\Big). \qquad (31)

To measure the distance of the trajectory shown in panels (a), (b) from the invariant manifold, we define

 d_n := \Big|v_n + (1-\lambda)^{-1}\nabla\Big(\Phi(u_n) + \tfrac{1}{2}h\bar\lambda(\bar\lambda - a)|\nabla\Phi(u_n)|^2\Big)\Big|. \qquad (32)

Panel (c) shows the evolution of d_n, as well as the (approximate) bound on it found from substituting the leading order approximation of g into the following upper bound from Theorem 4.1:

 (\lambda + h^2\lambda\delta)^n\,|v_0 - \bar\lambda f(u_0) - hg(u_0)|.
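The exponential attraction is straightforward to observe with the leading-order approximation (29) of g. In the sketch below (ours; the quadratic Φ and all parameter values are assumptions), the map (13) is initialized off the manifold and the distance d_n decays geometrically, at a rate close to λ, until it saturates at the small residual error of the leading-order approximation.

```python
# Distance to the leading-order invariant manifold
#   v = lam_bar*f(u) + h*g(u),   g(u) = lam_bar^2*(a - lam_bar)*Df(u)*f(u),
# under the map (13). Assumed test problem: Phi(u) = u^2/2, so
# f(u) = -u, Df(u) = -1 and Df(u)*f(u) = u.

lam, a, h = 0.5, 0.0, 0.01
lam_bar = 1.0 / (1.0 - lam)
f = lambda u: -u
g = lambda u: lam_bar ** 2 * (a - lam_bar) * u

def dist(u, v):
    return abs(v - lam_bar * f(u) - h * g(u))

u, v = 1.0, 0.0                      # v_0 = 0: initialized off the manifold
ds = [dist(u, v)]
for _ in range(30):
    u, v = (u + h * lam * v + h * f(u + h * a * v),
            lam * v + f(u + h * a * v))
    ds.append(dist(u, v))
print(ds[0], ds[-1])  # geometric decay down to a small residual floor
```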

## 5 Conclusion

Together, equations (6), (9) and (18) describe the dynamical systems which are approximated by momentum methods, when implemented with fixed momentum, in a manner made precise by the four theorems in this paper. The insight obtained from these theorems sheds light on how momentum methods perform optimization tasks.

Both authors are supported, in part, by the US National Science Foundation (NSF) grant DMS 1818977, the US Office of Naval Research (ONR) grant N00014-17-1-2079, and the US Army Research Office (ARO) grant W911NF-12-2-0022.

## Appendix A

Proof [of Theorem 2.1]. Write \underline{u}_n := u(nh), where u solves (6). Taylor expanding yields

 \underline{u}_{n+1} = \underline{u}_n + h\bar\lambda f(\underline{u}_n) + O(h^2)

and

 \underline{u}_n = \underline{u}_{n-1} + h\bar\lambda f(\underline{u}_n) + O(h^2).

Hence

 (1+\lambda)\underline{u}_n - \lambda\underline{u}_{n-1} = \underline{u}_n + h\lambda\bar\lambda f(\underline{u}_n) + O(h^2).

Subtracting the third identity from the first, we find that

 \underline{u}_{n+1} - \big((1+\lambda)\underline{u}_n - \lambda\underline{u}_{n-1}\big) = hf(\underline{u}_n) + O(h^2)

by noting that (1−λ)λ̄ = 1. Similarly,

 a(\underline{u}_n - \underline{u}_{n-1}) = ha\bar\lambda f(\underline{u}_n) + O(h^2),

hence Taylor expanding yields

 f\big(\underline{u}_n + a(\underline{u}_n - \underline{u}_{n-1})\big) = f(\underline{u}_n) + O(h).

From this, we conclude that

 hf\big(\underline{u}_n + a(\underline{u}_n - \underline{u}_{n-1})\big) = hf(\underline{u}_n) + O(h^2),

hence

 \underline{u}_{n+1} = (1+\lambda)\underline{u}_n - \lambda\underline{u}_{n-1} + hf\big(\underline{u}_n + a(\underline{u}_n - \underline{u}_{n-1})\big) + O(h^2).

Define the error e_n := u_n − \underline{u}_n; then

 e_{n+1} = (1+\lambda)e_n - \lambda e_{n-1} + h\Big(f\big(u_n + a(u_n - u_{n-1})\big) - f\big(\underline{u}_n + a(\underline{u}_n - \underline{u}_{n-1})\big)\Big) + O(h^2)
 = (1+\lambda)e_n - \lambda e_{n-1} + hM_n\big((1+a)e_n - ae_{n-1}\big) + O(h^2)

where, from the mean value theorem, we have

 M_n = \int_0^1 Df\Big(s\big(u_n + a(u_n - u_{n-1})\big) + (1-s)\big(\underline{u}_n + a(\underline{u}_n - \underline{u}_{n-1})\big)\Big)\,ds.

Now define the concatenation E_n := (e_n, e_{n-1}); then

 E_{n+1} = A^{(\lambda)}E_n + hA_n^{(a)}E_n + O(h^2)

where A^{(\lambda)}, A_n^{(a)} are the block matrices

 A^{(\lambda)} := \big[(1+\lambda)I