On the Convergence of Adam and Adagrad

by Alexandre Défossez, et al.

We provide a simple proof of the convergence of the optimization algorithms Adam and Adagrad with the assumptions of smooth gradients and almost sure uniform bound on the ℓ_∞ norm of the gradients. This work builds on the techniques introduced by Ward et al. (2019) and extends them to the Adam optimizer. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper-bound which is explicit in the constants of the problem, parameters of the optimizer and the total number of iterations N. This bound can be made arbitrarily small. In particular, Adam with a learning rate α=1/√(N) and a momentum parameter on squared gradients β_2=1 - 1/N achieves the same rate of convergence O(ln(N)/√(N)) as Adagrad. Thus, it is possible to use Adam as a finite horizon version of Adagrad, much like constant step size SGD can be used instead of its asymptotically converging decaying step size version.





1 Introduction

First order methods with adaptive step sizes have proved useful in many fields of machine learning, be it for sparse optimization (Duchi et al., 2013), tensor factorization (Lacroix et al., 2018) or deep learning (Goodfellow et al., 2016).

Adagrad (Duchi et al., 2011) rescales each coordinate by a sum of squared past gradient values. While Adagrad proved effective for sparse optimization (Duchi et al., 2013), experiments showed that it under-performed when applied to deep learning (Wilson et al., 2017). The large impact of past gradients prevents it from adapting to local changes in the smoothness of the function. With RMSProp, Tieleman and Hinton (2012) proposed an exponential moving average instead of a cumulative sum in order to forget past gradients. Adam (Kingma and Ba, 2014), currently one of the most popular adaptive algorithms in deep learning, built upon RMSProp, adding a corrective term to the step sizes at the beginning of training, together with heavy-ball style momentum.
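The contrast between Adagrad's cumulative sum and the exponential moving average of RMSProp/Adam can be sketched in a few lines of Python. This is a toy illustration using the standard (1 − β_2)-weighted EMA form; the function names are ours, not from any of the cited papers:

```python
import math

def adagrad_denom(grads, eps=1e-8):
    """Adagrad-style denominator: accumulate all past squared gradients.

    The accumulator only grows, so old gradients are never forgotten."""
    v = 0.0
    for g in grads:
        v += g * g
    return math.sqrt(eps + v)

def ema_denom(grads, beta2=0.999, eps=1e-8):
    """RMSProp/Adam-style denominator: exponential moving average of squared
    gradients, which forgets gradients older than roughly 1/(1 - beta2)."""
    v = 0.0
    for g in grads:
        v = beta2 * v + (1.0 - beta2) * g * g
    return math.sqrt(eps + v)
```

For a long stream of constant gradients, the Adagrad denominator grows without bound, shrinking the effective step size toward zero, while the EMA denominator stabilizes; this is what lets RMSProp and Adam adapt to local changes in smoothness.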

In the online convex optimization setting, Adagrad was shown to achieve minimal regret (Duchi et al., 2011). In the original Adam paper, Kingma and Ba (2014) offered a proof that it would converge to the optimum with a step size decaying as O(1/√(N)), where N is the number of iterations, even though this proof was later questioned by Reddi et al. (2019). In the non-convex setting, Ward et al. (2019) showed convergence with rate O(ln(N)/√(N)) to a critical point for the scalar, i.e., single step size, version of Adagrad. Zou et al. (2019b) extended this proof to the vector case, while Zou et al. (2019a) proved the convergence of Adam when the decay of the exponential moving average scales as 1 − 1/N and the learning rate scales as 1/√(N). Moreover, compared to plain stochastic gradient descent, adaptive algorithms are known to be less sensitive to hyperparameter settings. The theoretical results above confirm this observation by showing convergence for a step size parameter that does not depend on the regularity parameters of the objective function or on the bound on the variance of the stochastic gradients.

In this paper, we present a new proof of convergence to a critical point for Adagrad and Adam for stochastic non-convex smooth optimization, under the assumption that the stochastic gradients of the iterates are almost surely bounded. These assumptions are weaker and more realistic than those of prior work on these algorithms. In particular, we show that for fully connected feed-forward neural networks with sigmoid activations trained with ℓ_2 regularization, the iterates of Adam or Adagrad almost surely stay bounded, which in turn implies a bound on the stochastic gradients as long as the training input data is also bounded. We recover the standard O(ln(N)/√(N)) convergence rate for Adagrad for all step sizes, and the same rate for Adam with an appropriate rescaling of the step sizes and decay parameters. Compared to previous work, our bound significantly improves the dependency on the momentum parameter β_1. The best known bounds for Adagrad and Adam are respectively in O((1 − β_1)^(−3)) and O((1 − β_1)^(−5)) (see Section 3), while our result is in O((1 − β_1)^(−1)) for both algorithms.

Another important contribution of this work is a significantly simpler proof than previous ones. The reason is that in our approach, the main technical steps are carried out jointly for Adagrad and Adam with constant parameters, while previous attempts at unified proofs required varying parameters through the iterations (Chen et al., 2018; Zou et al., 2019a, b).

The precise setting and assumptions are stated in the next section, and previous work is described in Section 3. Next, we discuss the relevance of our assumptions in the context of deep learning, using containment arguments inspired by Bottou (1999). The main theorems are presented in Section 5, followed by a full proof for the case without momentum in Section 6. The full proof of convergence with momentum is deferred to the supplementary material.

2 Setup

2.1 Notation

Let d ∈ ℕ be the dimension of the problem and let [d] = {1, …, d}. Given a function f : ℝ^d → ℝ, we note ∇f its gradient and ∇_i f the i-th component of the gradient. In the entire paper, ε represents a small constant, e.g., 10⁻⁸, used for numerical stability. Given a sequence (u_n) with u_n ∈ ℝ^d for all n, we note u_{n,i} the i-th component of the n-th element of the sequence.

We want to optimize a function F : ℝ^d → ℝ. We assume there exists a random function f such that E[∇f(x)] = ∇F(x) for all x, and that we have access to an oracle providing i.i.d. samples f_1, f_2, … In machine learning, x ∈ ℝ^d typically represents the weights of a linear or deep model, f represents the loss from individual training examples or minibatches, and F is the full training objective function. The goal is to find a critical point of F.

2.2 Adaptive methods

We study a family of algorithms that covers both Adagrad (Duchi et al., 2011) and Adam (Kingma and Ba, 2014). We assume we have an infinite stream of i.i.d. copies f_1, f_2, … of f, two parameters 0 ≤ β_1 < 1 and 0 < β_2 ≤ 1, and a non-negative sequence of step sizes (α_n).

Given our starting point x_0 ∈ ℝ^d and m_0 = 0, v_0 = 0, we iterate, for every n ≥ 1 and every coordinate i ∈ [d],

m_{n,i} = β_1 m_{n−1,i} + ∇_i f_n(x_{n−1}),      (2.1)
v_{n,i} = β_2 v_{n−1,i} + (∇_i f_n(x_{n−1}))²,      (2.2)
x_{n,i} = x_{n−1,i} − α_n m_{n,i} / √(ε + v_{n,i}).      (2.3)

The real number β_1 is a heavy-ball style momentum parameter (Polyak, 1964), while β_2 controls the rate at which the scale of past gradients is forgotten.

Taking β_1 = 0, β_2 = 1 and α_n = α for some α > 0 gives Adagrad. While the original Adagrad algorithm (Duchi et al., 2011) did not include a heavy-ball-like momentum, our analysis also applies to the case β_1 > 0. On the other hand, when 0 < β_2 < 1, taking

α_n = α (1 − β_1) √( Σ_{j=0}^{n−1} β_2^j )      (2.4)

leads to an algorithm close to Adam. Indeed, the step size in (2.4) is rescaled based on the number of past gradients that were accumulated. This is equivalent to the correction performed by Adam, which compensates for the possibly smaller scale of v_n when only few gradients have been accumulated.¹ When there is no momentum (β_1 = 0), the only difference with Adam is that ε in (2.3) is inside the square root, while it is outside the square root in the original algorithm. When β_1 > 0, an additional difference is that we do not compensate for m_n being smaller during the first few iterations.

¹Adam updates are usually written with extra (1 − β_1) and (1 − β_2) factors in the moving averages. These are equivalent to ours because the factor 1 − β_1 is transferred to a multiplication of α_n by 1 − β_1, and the same applies to the factor 1 − β_2.

The slight difference in step size when β_2 < 1 simplifies the proof at a minimal practical cost: the first few iterations of Adam are usually noisy, in particular due to having seen few samples, and (2.4) is equivalent to taking a smaller step size during the first iterations. Since Kingma and Ba (2014) suggested a default value of β_2 = 0.999, our update rule differs significantly from the original Adam only during the first few tens of iterations.
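The iteration of Section 2.2 can be sketched on a scalar parameter as follows. This is a minimal one-dimensional illustration assuming the standard form of the recursions (2.1)-(2.3) and the rescaling (2.4); the function name and default values are ours, not from the paper:

```python
import math

def adaptive_step(x0, grad_fn, n_steps, alpha=0.001, beta1=0.0, beta2=1.0, eps=1e-8):
    """Sketch of the unified Adagrad/Adam iteration on a scalar parameter.

    beta1 = 0, beta2 = 1 with a constant step size gives Adagrad;
    0 < beta2 < 1 with the rescaled step size below gives a variant of Adam.
    """
    x, m, v = x0, 0.0, 0.0
    for n in range(1, n_steps + 1):
        g = grad_fn(x)
        m = beta1 * m + g        # heavy-ball momentum, no (1 - beta1) factor
        v = beta2 * v + g * g    # accumulated squared gradients
        if beta2 < 1.0:
            # Adam-style rescaling: alpha_n = alpha (1 - beta1) sqrt(sum_{j<n} beta2^j)
            alpha_n = alpha * (1.0 - beta1) * math.sqrt((1.0 - beta2 ** n) / (1.0 - beta2))
        else:
            alpha_n = alpha      # Adagrad: constant step size
        x -= alpha_n * m / math.sqrt(eps + v)
    return x
```

On the toy objective F(x) = x², both settings drive x toward the critical point 0, with the Adagrad variant's effective step size decaying like 1/√n as the accumulator grows.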

2.3 Assumptions

We make four assumptions. We first assume F is bounded below by F_*, that is,

F(x) ≥ F_* for all x ∈ ℝ^d.      (2.5)

We assume the iterates are contained within an ℓ_∞ ball of radius R almost surely:

‖x_n‖_∞ ≤ R almost surely, for all n.      (2.6)

We then assume the ℓ_∞ norm of the stochastic gradients is almost surely bounded over this ball: for all x such that ‖x‖_∞ ≤ R,

‖∇f(x)‖_∞ ≤ G almost surely,      (2.7)

and finally, the smoothness of the objective function over this ball, i.e., its gradient is L-Lipschitz-continuous with respect to the ℓ_2 norm: for all x, y such that ‖x‖_∞ ≤ R and ‖y‖_∞ ≤ R,

‖∇F(x) − ∇F(y)‖_2 ≤ L ‖x − y‖_2.      (2.8)

Note that, if F is L-smooth over ℝ^d and the stochastic gradients are uniformly almost surely bounded over ℝ^d, then one can take R = +∞, and (2.6) is then trivially verified. This case matches more usual assumptions, but it is rarely met in practice, as explained in Section 3. However, note that (2.6) is verified with R < +∞ for some cases of deep neural network training, as proven in Section 4.

3 Related work

Work on adaptive optimization methods started with the seminal papers of McMahan and Streeter (2010) and Duchi et al. (2011). They showed that adaptive methods like Adagrad achieve an optimal rate of convergence of O(1/√(N)) for convex optimization (Agarwal et al., 2009). Practical experience with training deep neural networks led to the development of adaptive methods using an exponential moving average of past squared gradients, like RMSProp (Tieleman and Hinton, 2012) or Adam (Kingma and Ba, 2014).

Kingma and Ba (2014) claimed that Adam with decreasing step sizes converges to an optimal solution for convex objectives. However, the proof contained a mistake spotted by Reddi et al. (2019), who also gave examples of convex problems where Adam does not converge to an optimal solution. They proposed AMSGrad as a convergent variant of Adam, which consists in retaining the maximum value of the exponential moving average. The examples given by Reddi et al. (2019) illustrate a behavior of Adam that is coherent with our results and previous work (Zou et al., 2019a), because they use a small exponential decay parameter β_2. Under our assumptions, Adam with a constant β_2 is guaranteed not to diverge, but it is not guaranteed to converge to a stationary point.

Regarding the non-convex setting, Li and Orabona (2019) showed the convergence of Adagrad, but under impractical conditions; in particular, the step size must be chosen as a function of the smoothness constant. Ward et al. (2019) showed the convergence of a variant of Adagrad (in the sense of the expected squared gradient norm at a random iterate) for any value of the step size, but only for the "scalar", single step size, version of Adagrad, with a rate of O(ln(N)/√(N)). While our approach builds on this work, we significantly extend it so that it applies to both Adagrad and Adam, in the coordinate-wise versions used in practice, while also supporting heavy-ball momentum.

Zou et al. (2019b) showed the convergence of Adagrad with either heavy-ball or Nesterov style momentum. We recover a similar result for Adagrad with heavy-ball momentum, under different but interchangeable hypotheses, as explained in Section 5.2. Their proof technique works with a variety of averaging schemes for the past squared gradients, including that of Adagrad. In that case, we obtain the same rate as them as a function of N (i.e., O(ln(N)/√(N))), but we improve the dependence on the momentum parameter from O((1 − β_1)^(−3)) to O((1 − β_1)^(−1)). Chen et al. (2019) also present bounds for Adagrad and Adam, without convergence guarantees for Adam. The dependence of their bounds on β_1 is worse than that of Zou et al. (2019b).

Zou et al. (2019a) propose unified convergence bounds for Adagrad and Adam. We recover the same scaling of the bound with respect to N. However, their bound has a dependency in O((1 − β_1)^(−5)) with respect to the momentum parameter, while we prove O((1 − β_1)^(−1)), a significant reduction.

In previous work (Zou et al., 2019a, b), the assumption given by (2.7) is replaced by a bound, in expectation, on the squared stochastic gradients over all of ℝ^d:

(3.1)

First, notice that we assume an almost sure bound instead of a bound on the expectation of the squared stochastic gradients. A bound in expectation leads to a weaker convergence result, e.g., a bound on the expected norm of the full gradient at the iterates raised to a power smaller than 2, as explained in Section 5.2. The proof remains mostly identical whether we assume an almost sure bound or a bound in expectation of the squared stochastic gradients. Given that, for a fixed model, the variance of the stochastic gradients comes from the variance of the training data, going from a bound in expectation of the squared gradients to an almost sure bound is easily accomplished by the removal of outliers in the training set.

Second, assumption (3.1) rarely holds in practice, as it assumes boundedness of the gradient over all of ℝ^d. It is not verified by any deep learning network with more than one layer, nor by linear regression, nor by logistic regression with ℓ_2 regularization. In fact, a deep learning network with two layers is not even L-smooth over ℝ^d, as the gradient with respect to the first layer scales with the norm of the weights of the second layer. We show in the next section that for deep neural networks with sigmoid activations and ℓ_2 regularization, (2.6) is verified as long as the data in the training set is bounded, which in turn implies both (2.7) and (2.8).

4 Containment of the iterates

Following Bottou (1999), we show in this section that (2.6) is verified for a fully connected feed-forward neural network with sigmoid activations and ℓ_2 regularization. The goal of this section is to show that there is an upper bound on the weights of this neural network when trained with Adam or Adagrad, even though the bound we obtain grows super-exponentially with the depth.

We assume that for simplicity, so that for any iteration and coordinate , . We assume is the concatenation of , where is the number of layers and for all , is the weight of the -th layer, being the dimension of the input data. For clarity, we assume , i.e. the neural network has a single output. The fully connected network is represented by the function,

Then, the stochastic objective function is given by the loss of the network prediction plus an ℓ_2 penalty on the weights, where the input training data is a random variable with almost surely bounded norm, its label belongs to a fixed set, ℓ is the loss function, and λ > 0 is the regularization parameter. We assume that the derivative of ℓ with respect to its first argument is bounded for any label; this is verified for the Huber loss or the cross-entropy loss. When writing the derivative of ℓ, we always mean the derivative with respect to its first argument. Finally, we denote by z_l the output of the l-th layer, with z_0 the input and z_L the output of the network.

We will prove the bound on the iterates by induction, starting from the output layer and going backward to the input layer. We assume all the weights are initialized with magnitudes much smaller than the bound we will derive.

4.1 Containment of the last layer

In the following, is the Jacobian operator with respect to the weights of a specific layer . Taking the derivative of with respect to , we get,

Given that , we have,


Updates of Adam or Adagrad are bounded

For any iteration , we have for Adam and for Adagrad . We note . Besides, for any coordinate , we have , so that


Bound on

Let us assume that there exist an iteration and a coordinate corresponding to a weight of the last layer such that the corresponding weight exceeds the bound. Given (4.1), we have,

Thus and using (4.2),

so that the update pushes the weight back toward zero. So if at any point a weight of the last layer goes over this threshold, the next iterates decrease until it goes back below it. Given that the maximum increase between two updates is bounded by the step size, we have, for any iteration and any coordinate corresponding to a weight of the last layer,

Applying the same technique we can show that and finally,

In particular, this implies that the Frobenius norm of the weights of the last layer stays bounded for all iterates.
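The mechanism above can be illustrated on a single scalar weight. This is a toy sketch, not the paper's multi-layer argument: the stochastic gradient is a bounded data term plus the regularization term λw; Adagrad-normalized steps have magnitude at most α, and once |w| exceeds (bound on data term)/λ, the regularization dominates and pushes the weight back. The function and constants are ours:

```python
import math
import random

def bounded_iterates(w0, steps, alpha=0.1, lam=0.5, G=1.0, seed=0):
    """Scalar illustration of the containment argument: stochastic gradient
    = (data term in [-G, G]) + lam * w.  The Adagrad-normalized step has
    magnitude at most alpha, and whenever |w| > G / lam the total gradient
    points back toward 0, so |w| never exceeds max(|w0|, G / lam + alpha).
    Returns the largest |w| seen along the trajectory."""
    rng = random.Random(seed)
    w, v, peak = w0, 0.0, abs(w0)
    for _ in range(steps):
        g = rng.uniform(-G, G) + lam * w
        v += g * g                            # Adagrad accumulator (beta2 = 1)
        w -= alpha * g / math.sqrt(1e-8 + v)  # |step| <= alpha since v >= g * g
        peak = max(peak, abs(w))
    return peak
```

With G = 1 and λ = 0.5, the threshold G/λ is 2, and the trajectory never exceeds 2 + α regardless of the number of steps.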

4.2 Containment of the previous layers

Now taking a layer , we have,

Let us assume we have shown that the weights of all later layers stay bounded; then we can immediately derive that the above gradient is bounded in norm. Applying the same method as in Section 4.1, we can then show that the weights of the current layer stay bounded as well with respect to the ℓ_∞ norm. Thus, by induction, the weights of all layers stay bounded for all iterations, albeit with a bound growing more than exponentially with depth.

5 Main results

For any total number of iterations N, we define a random index τ with values in {0, …, N − 1}, verifying

P[τ = n] ∝ 1 − β_1^(N−n).      (5.1)

If β_1 = 0, this is equivalent to sampling τ uniformly in {0, …, N − 1}. If β_1 > 0, the last few iterations are sampled rarely, and all iterations older than a few times 1/(1 − β_1) are sampled almost uniformly. All our results bound the expected squared norm of the total gradient at iteration τ, which is standard for non-convex stochastic optimization (Ghadimi and Lan, 2013).
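As a hedged sketch of the sampling in (5.1), assuming weights proportional to 1 − β_1^(N−n), which matches the description above (uniform when β_1 = 0, recent iterations down-weighted when β_1 > 0); the function name and the normalization are ours:

```python
import random

def sample_tau(N, beta1=0.0, rng=random):
    """Sample tau in {0, ..., N-1} with P[tau = n] proportional to
    1 - beta1 ** (N - n); for beta1 = 0 this is the uniform distribution."""
    weights = [1.0 - beta1 ** (N - n) for n in range(N)]
    total = sum(weights)
    u = rng.random() * total
    acc = 0.0
    for n, w in enumerate(weights):
        acc += w
        if u <= acc:
            return n
    return N - 1  # numerical safety for floating-point edge cases
```

With β_1 close to 1, the weight 1 − β_1^(N−n) is small for n near N and close to 1 for much older iterations, reproducing the behavior described in the text.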

5.1 Convergence bounds

For simplicity, we first give convergence results for β_1 = 0, along with a complete proof in Section 6.1 and Section 6.2. We show convergence for any β_1 ∈ [0, 1); however, the theoretical bound is always worse than for β_1 = 0, while the proof becomes significantly more complex. Therefore, we defer the complete proof with momentum to the Appendix, Section A.5, but still provide the results with momentum in the second part of this section. Note that the disadvantageous dependency of the bound on β_1 is not specific to our proof, but can be observed in previous bounds for adaptive methods (Chen et al., 2019; Zou et al., 2019b).

Theorem 1 (Convergence of Adam without momentum).

Given the assumptions introduced in Section 2.3, the iterates defined in Section 2.2 with hyper-parameters verifying , with and , we have for any , taking defined by (5.1),



Theorem 2 (Convergence of Adagrad without momentum).

Given the assumptions introduced in Section 2.3, the iterates defined in Section 2.2 with hyper-parameters verifying , with and , we have for any , taking as defined by (5.1),

Theorem 3 (Convergence of Adam with momentum).

Given the assumptions introduced in Section 2.3, the iterates defined in Section 2.2 with hyper-parameters verifying , with and , we have for any such that , taking defined by (5.1),




Theorem 4 (Convergence of Adagrad with momentum).

Given the assumptions introduced in Section 2.3, the iterates defined in Section 2.2 with hyper-parameters verifying , with and , we have for any such that , taking as defined by (5.1),






5.2 Analysis of the bounds

Dependency on the dimension d.

Looking at the bounds introduced in the previous section, one can notice the presence of two terms: the forgetting of the initial condition, proportional to F(x_0) − F_*, and a second term that scales with the dimension d. The scaling with d is inevitable given our hypotheses, in particular the use of a bound on the ℓ_∞ norm of the gradients. Indeed, given any bound valid for a function f of a single variable, we can build a new function of d variables by replicating d times the same optimization problem. The Hessian of the new function is diagonal, with each diagonal element equal to the second derivative of f, so the smoothness constant is unchanged, and so is the ℓ_∞ bound on the stochastic gradients. Each dimension is independent from the others and equivalent to the single-dimension problem given by f, so the bound necessarily scales as d.

Almost sure bound on the gradient.

We chose to assume the existence of an almost sure ℓ_∞ bound on the gradients, given by (2.7). We use it only in (6.16) and (6.18). It is possible instead to use the Hölder inequality, which is the choice made by Ward et al. (2019) and Zou et al. (2019b). This however deteriorates the bound: instead of a bound on the expected squared norm of the gradient, this would give a bound on a smaller power of that norm. We also used the bound on the gradients in Lemma 6.1, to obtain (6.9) and (6.12); however, in that case, a bound on the expected squared norm of the gradients is sufficient.

Impact of heavy-ball momentum.

Looking at Theorems 3 and 4, we see that increasing β_1 always deteriorates the bound. Taking β_1 = 0 in those theorems gives us almost exactly the bound without heavy-ball momentum from Theorems 1 and 2, up to a factor of 3 in some of the terms. As discussed in the related work, Section 3, we significantly improve the dependency on 1 − β_1 compared with previous work (Zou et al., 2019b, a). We provide a more detailed analysis in the Appendix, Section A.

5.3 Optimal finite horizon Adam is Adagrad

Let us take a closer look at the result from Theorem 1. It might seem like some quantities could explode, but this is actually not the case for any reasonable values of α, β_2 and N. Let us consider step sizes and decay parameters that scale polynomially with the horizon N. Putting the resulting terms together and ignoring the log terms for now, the best overall rate we can obtain is O(1/√(N)), and it is only achieved for α ∝ 1/√(N) and 1 − β_2 ∝ 1/N, i.e., α = α_1/√(N) and β_2 = 1 − 1/N for some constant α_1 > 0. We can see the resemblance between Adagrad and Adam with a finite horizon and such parameters, as the exponential moving average in the denominator then has a typical averaging window length of 1/(1 − β_2) = N. In particular, the bound for Adam then becomes the same O(ln(N)/√(N)) bound as for Adagrad, differing from (5.3) only by an additive constant next to the log term.
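The "averaging window" claim can be checked numerically. This is a toy sketch using the parameterization of this paper (no 1 − β_2 factor in the accumulator): with β_2 = 1 − 1/N, after N steps the Adam accumulator stays within a constant factor, about 1 − 1/e for constant gradients, of the Adagrad cumulative sum:

```python
def accumulators(grads, beta2):
    """Run the Adagrad (cumulative sum) and Adam (EMA without a 1 - beta2
    factor, as in this paper) squared-gradient accumulators side by side."""
    v_ada, v_adam = 0.0, 0.0
    for g in grads:
        v_ada += g * g
        v_adam = beta2 * v_adam + g * g
    return v_ada, v_adam

N = 1000
v_ada, v_adam = accumulators([1.0] * N, beta2=1.0 - 1.0 / N)
ratio = v_adam / v_ada  # close to 1 - 1/e for constant gradients
```

For β_2 = 1, the two accumulators coincide exactly, recovering Adagrad as the limiting case.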

Adam and Adagrad are twins.

We discovered an important fact from the bounds introduced in Section 5.1: Adam is to Adagrad what constant step size SGD is to decaying step size SGD. While Adagrad is asymptotically optimal, it forgets the initial condition F(x_0) − F_* more slowly, as 1/√(N) instead of 1/N for Adam with constant parameters. Furthermore, Adam adapts to local changes of the smoothness faster than Adagrad, as it eventually forgets about past gradients. This fast forgetting of the initial condition and improved adaptivity come at a cost: Adam does not converge in general. It is however possible to choose the parameters α and β_2 so as to make the limiting expected gradient norm arbitrarily small, and in particular, for a known time horizon, they can be chosen to obtain the exact same bound as Adagrad.

6 Proofs for β_1 = 0 (no momentum)

We assume here, for simplicity, that β_1 = 0, i.e., there is no heavy-ball style momentum. The recursions introduced in Section 2.2 can then be simplified into

v_{n,i} = β_2 v_{n−1,i} + (∇_i f_n(x_{n−1}))²,      (6.1)
x_{n,i} = x_{n−1,i} − α_n ∇_i f_n(x_{n−1}) / √(ε + v_{n,i}).      (6.2)

Throughout the proof, we note E_{n−1}[·] the conditional expectation with respect to f_1, …, f_{n−1}. In particular, x_{n−1} and v_{n−1} are deterministic knowing f_1, …, f_{n−1}. For all n, we also define ṽ_n so that for all i ∈ [d],

ṽ_{n,i} = β_2 v_{n−1,i} + E_{n−1}[(∇_i f_n(x_{n−1}))²],      (6.3)

i.e., ṽ_n is obtained from v_n by replacing the last gradient contribution by its expected value knowing f_1, …, f_{n−1}.

6.1 Technical lemmas

A problem posed by the update (6.2) is the correlation between the numerator and the denominator. This prevents us from easily computing the conditional expectation of the update and, as noted by Reddi et al. (2019), the expected update direction can even point away from a descent direction. It is however possible to control the deviation from the descent direction, following Ward et al. (2019), with this first lemma.

Lemma 6.1 (adaptive update approximately follow a descent direction).

For all and , we have:


We take and note , , and .


Given that and are independent given , we immediately have


Now we need to control the size of ,

the last inequality coming from the fact that and .

Following Ward et al. (2019), we can use the following inequality to bound and ,


First applying (6.7) to with

we obtain

Given that and taking the conditional expectation, we can simplify as


Given that and , we can simplify (6.8) as


Now turning to , we use (6.7) with


we obtain

Given that and taking the conditional expectation we obtain


which we simplify using the same argument as for (6.9) into


Notice that in (6.10) we possibly divide by zero. It suffices to notice that if the denominator is zero, then the corresponding gradient is zero almost surely, so that (6.12) is still verified.

Summing (6.9) and (6.12) we can bound


Injecting (6.13) and (6.6) into (6.5) finishes the proof. ∎

Anticipating on Section 6.2, we can see that for a coordinate and iteration , the deviation from a descent direction is at most

While for any specific iteration this deviation can take us away from a descent direction, the next lemma tells us that when we sum those deviations over all iterations, the total cannot grow larger than a logarithmic term. This key insight, introduced by Ward et al. (2019), is what makes the proof work.

Lemma 6.2 (sum of ratios with the denominator increasing as the numerator).

We assume we have 0 < β ≤ 1 and a non-negative sequence (a_n), and we define b_n = Σ_{j=1}^{n} β^(n−j) a_j, with the convention b_0 = 0. Then we have, for any N ≥ 1,

Σ_{n=1}^{N} a_n / (ε + b_n) ≤ ln(1 + b_N/ε) − N ln(β).

Given the concavity of ln, and the fact that b_n = β b_{n−1} + a_n, we have for all n ≥ 1,

a_n / (ε + b_n) ≤ ln(ε + b_n) − ln(ε + β b_{n−1}) ≤ ln((ε + b_n)/(ε + b_{n−1})) − ln(β),

where the last inequality uses ε + β b_{n−1} ≥ β (ε + b_{n−1}). The first term on the right-hand side forms a telescoping series, while the second is the constant −ln(β). Summing over all n from 1 to N gives the desired result. ∎
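For the β = 1 (Adagrad) case, the bound of Lemma 6.2 reduces to Σ_n a_n/(ε + b_n) ≤ ln(1 + b_N/ε) with b_n the cumulative sum. This is easy to verify numerically; the sketch below is a sanity check, not a proof, and the function name is ours:

```python
import math

def lemma_check(a, eps=1e-8):
    """Compute both sides of the beta = 1 case of Lemma 6.2:
    lhs = sum_n a_n / (eps + b_n) with b_n = a_1 + ... + a_n,
    rhs = ln(1 + b_N / eps)."""
    b, lhs = 0.0, 0.0
    for a_n in a:
        b += a_n
        lhs += a_n / (eps + b)
    rhs = math.log(1.0 + b / eps)
    return lhs, rhs
```

Even though each individual ratio can be close to 1, their sum only grows logarithmically in b_N/ε, which is exactly the quantity controlling the accumulated deviations from a descent direction in the main proof.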

6.2 Proof of Adam and Adagrad without momentum

For all iterations , we define the update ,



Let us take an iteration n ≥ 1, and recall the step size α_n defined in (2.4) in Section 2.2. Using the smoothness of F defined in (2.8), we have

Notice that due to the a.s. bound on the gradients (2.7), we have for any , , so that,


Taking the conditional expectation with respect to we can apply the descent Lemma 6.1 and use (6.16) to obtain,

Given that , we have . Summing the previous inequality for all and taking the complete expectation yields


The application of Lemma 6.2 immediately gives, for all coordinates i ∈ [d],

Injecting this into (6.17) and rearranging the terms, the result of Theorem 1 follows immediately.


Let us now take β_2 = 1 and α_n = α to recover Adagrad. Using again the smoothness of F defined in (2.8), we have