1 Introduction
First order optimization methods, which access a function to be optimized through its gradient or an unbiased approximation of its gradient, are the workhorses for modern large-scale optimization problems, which include training the current state-of-the-art deep neural networks. Gradient descent
(Cauchy, 1847) is the simplest first order method and is used heavily in practice. However, it is known that for the class of smooth convex functions, as well as some simple nonsmooth problems (Nesterov, 2012a), gradient descent is suboptimal (Nesterov, 2004), and there exists a class of algorithms, called fast gradient/momentum based methods, which achieve optimal convergence guarantees. The heavy ball method (Polyak, 1964) and Nesterov’s accelerated gradient descent (Nesterov, 1983) are two of the most popular methods in this category. On the other hand, training deep neural networks on large-scale datasets has been possible through the use of Stochastic Gradient Descent (SGD) (Robbins and Monro, 1951)
, which samples a random subset of the training data to compute gradient estimates that are then used to optimize the objective function. The advantages of SGD for large-scale optimization and the related tradeoffs between computational and statistical efficiency were highlighted in
Bottou and Bousquet (2007). The above-mentioned theoretical advantages of fast gradient methods (Polyak, 1964; Nesterov, 1983) (albeit for smooth convex problems), coupled with cheap-to-compute stochastic gradient estimates, led to the influential work of Sutskever et al. (2013), which demonstrated the empirical advantages of SGD when augmented with the momentum machinery. This work has led to widespread adoption of momentum methods for training deep neural nets; so much so that, in the context of neural network training, gradient descent often refers to momentum methods.
But, there is a subtle difference between classical momentum methods and their implementation in practice – classical momentum methods work in the exact first order oracle model (Nesterov, 2004), i.e., they employ exact gradients (computed on the full training dataset), while in practice (Sutskever et al., 2013), they are implemented with stochastic gradients (estimated from a randomly sampled minibatch of training data). This leads to a natural question:
“Are momentum methods optimal even in the stochastic first order oracle (SFO) model, where we access stochastic gradients computed on small constant-sized minibatches (or a batch size of 1)?”
Even disregarding the question of optimality of momentum methods in the SFO model, it is not even known whether momentum methods (say, Polyak (1964); Nesterov (1983)) provide any provable improvement over SGD in this model. While these are open questions, a recent effort of Jain et al. (2017) showed that improving upon SGD in the SFO model is rather subtle, as there exist problem instances where it is not possible to improve upon SGD, even information theoretically. Jain et al. (2017) studied a variant of Nesterov’s accelerated gradient updates (Nesterov, 2012b)
for stochastic linear regression and showed that their method improves upon SGD wherever it is information theoretically admissible. Throughout this paper, we refer to the algorithm of
Jain et al. (2017) as Accelerated Stochastic Gradient Method (ASGD), while we refer to a stochastic version of the most widespread form of Nesterov’s method (Nesterov, 1983) as NAG; HB denotes a stochastic version of the heavy ball method (Polyak, 1964). Critically, while Jain et al. (2017) show that ASGD improves on SGD in any information-theoretically admissible regime, it is still not known whether HB and NAG can achieve a similar performance gain. A key contribution of this work is to show that HB does not provide similar performance gains over SGD even when it is information-theoretically admissible. That is, we provide a problem instance where it is indeed possible to improve upon SGD (and ASGD achieves this improvement), but HB cannot achieve any improvement over SGD. We validate this claim empirically as well. In fact, we provide empirical evidence that NAG also does not achieve any improvement over SGD for several problems where ASGD can still achieve better rates of convergence.
This raises the question of why HB and NAG provide better performance than SGD in practice (Sutskever et al., 2013), especially for training deep networks. Our conclusion (which is well supported by our theoretical result) is that the improved performance of HB and NAG is attributable to minibatching, and hence these methods will often struggle to improve over SGD with small constant batch sizes. This is in stark contrast to methods like ASGD, which is designed to improve over SGD across both small and large minibatch sizes. In fact, in our experiments on training deep residual networks (He et al., 2016a) on the cifar-10 dataset, we observe that ASGD achieves better test error than HB and NAG, even with commonly used batch sizes, during the initial stages of the optimization.
1.1 Contributions
The contributions of this paper are as follows.

In Section 3, we prove that HB is not optimal in the SFO model. In particular, there exist linear regression problems for which the performance of HB (with any step size and momentum) is either the same as or worse than that of SGD, while ASGD improves upon both of them.

Experiments on several linear regression problems suggest that the suboptimality of HB in the SFO model is not restricted to special cases – it is rather widespread. Empirically, the same holds true for NAG as well (Section 5).

The above observations suggest that the only reason for the superiority of momentum methods in practice is minibatching, which reduces the variance in stochastic gradients and moves the SFO closer to the exact first order oracle. This conclusion is supported by empirical evidence from training deep residual networks on cifar-10 with small batch sizes (see Section 5.3).
We present an intuitive and easier-to-tune version of ASGD (see Section 4) and show that ASGD can provide significantly faster convergence to a reasonable accuracy than SGD, HB and NAG, while still providing favorable or comparable asymptotic accuracy, particularly on several deep learning problems.
Hence, the take-home message of this paper is: HB and NAG are not optimal in the SFO model, and the only reason for the superiority of momentum methods in practice is minibatching; ASGD provides a distinct advantage in training deep networks over SGD, HB and NAG.
2 Notation
We denote matrices by boldface capital letters and vectors by lowercase letters.
f(w) denotes the function to optimize with respect to model parameters w; ∇f(w) denotes the exact gradient of f at w, while a stochastic gradient is given by ∇f_i(w), where i is sampled uniformly at random from the n training examples. For linear regression, f_i(w) = (y_i − ⟨x_i, w⟩)², where y_i is the target and x_i is the covariate, and f(w) is the average of the f_i(w). In this case, H denotes the Hessian of f and κ denotes its condition number. Algorithm 1 provides pseudocode for the HB method (Polyak, 1964): a momentum term accumulates a discounted sum of past gradients, and the next iterate is obtained by a linear combination of the SGD update and the momentum term. Algorithm 2 provides pseudocode for a stochastic version of the most commonly used form of Nesterov’s accelerated gradient descent (Nesterov, 1983).
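Concretely, the two stochastic momentum updates can be sketched as follows (a minimal sketch rather than the paper's exact Algorithms 1 and 2; `grad_fn` stands for a stochastic gradient oracle, and the NAG form shown is one common variant):

```python
def hb_step(w, v, grad_fn, alpha, beta):
    """One stochastic heavy-ball (HB) update: the momentum term v accumulates
    a discounted sum of past gradients, and the next iterate combines the
    SGD step with the momentum term."""
    v = beta * v + grad_fn(w)
    return w - alpha * v, v

def nag_step(w, v, grad_fn, alpha, beta):
    """One stochastic Nesterov (NAG) update: same as HB except that the
    gradient is evaluated at the look-ahead point w - alpha * beta * v."""
    v = beta * v + grad_fn(w - alpha * beta * v)
    return w - alpha * v, v
```

On a one-dimensional quadratic such as f(w) = w², both updates with a suitable (alpha, beta) drive w to zero geometrically.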
3 Suboptimality of Heavy Ball Method
In this section, we show that there exist linear regression problems where the performance of HB (Algorithm 1) is no better than that of SGD, while ASGD significantly improves upon SGD’s performance. Let us now describe the problem instance.
Fix and let be a sample from the distribution such that:
where are canonical basis vectors, . Let
be a random variable such that
and . Hence, we have: for . Now, our goal is to minimize this objective. Consider the computational and statistical condition numbers of this problem – see Jain et al. (2017) for definitions. Then we obtain the following convergence rates for SGD and ASGD when applied to the problem instance given above:
Corollary 1 (of a theorem of Jain et al. (2016)).
Let be the iterate of SGD on the above problem with starting point and stepsize . The error of can be bounded as,
On the other hand, ASGD achieves the following superior rate.
Corollary 2 (of a theorem of Jain et al. (2017)).
Let be the iterate of ASGD on the above problem with starting point and appropriate parameters. The error of can be bounded as,
Note that for the problem above, the statistical condition number is a constant while the computational condition number can be arbitrarily large; hence, ASGD improves upon the rate of SGD. The following proposition, which is the main result of this section, establishes that HB (Algorithm 1) cannot provide a similar improvement over SGD as the one ASGD offers. In fact, we show that, no matter the choice of parameters of HB, its performance does not improve over SGD by more than a constant factor.
Proposition 3.
Let be the iterate of HB (Algorithm 1) on the above problem with starting point . For any choice of stepsize and momentum , large enough such that , we have,
where depends on and (but not on ).
Thus, to obtain a given accuracy, HB requires many more samples and iterations than ASGD does. We note that the gains offered by ASGD are meaningful when the computational condition number is much larger than the statistical one (Jain et al., 2017); otherwise, all the algorithms, including SGD, achieve nearly the same rates (up to constant factors). While we do not prove it theoretically, we observe empirically that for the same problem instance, NAG also obtains nearly the same rate as HB and SGD. We conjecture that a lower bound for NAG can be established using a similar proof technique as that for HB (i.e., Proposition 3). We also believe that the constant in the lower bound described in Proposition 3 can be improved to a small number.
4 Algorithm
We will now present and explain an intuitive version of ASGD (pseudocode in Algorithm 3). The algorithm takes three inputs: a short step, a long step parameter and a statistical advantage parameter. The short step is precisely the same as the step size in SGD, HB or NAG; for convex problems, it scales inversely with the smoothness of the function. The long step parameter is intended to give an estimate of the ratio of the largest and smallest curvatures of the function; for convex functions, this is just the condition number. The statistical advantage parameter captures the trade-off between the statistical and computational condition numbers – in the deterministic case, ASGD is equivalent to NAG, while in the high-stochasticity regime, the advantage parameter is much smaller. The algorithm maintains two iterates: a descent iterate and a running average. The running average is a weighted average of the previous average and a long gradient step from the descent iterate, while the descent iterate is updated as a convex combination of a short gradient step from the descent iterate and the running average. The idea is that since the algorithm takes a long step as well as a short step, and an appropriate average of both of them, it can make progress in different directions at a similar pace. Appendix B shows the equivalence between Algorithm 3 and ASGD as proposed in Jain et al. (2017). Note that the constant appearing in Algorithm 3 has no special significance; Jain et al. (2017) require it to be smaller than a certain threshold, but any sufficiently small constant seems to work in practice.
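As an illustration, one step of the update just described can be sketched as follows. The parameter names, the constant c = 0.7 and the exact combination weights are our assumptions for illustration; Algorithm 3 in the paper is the authoritative form.

```python
import numpy as np

def asgd_step(w, w_avg, grad_fn, delta, kappa, xi, c=0.7):
    """One sketched ASGD step.

    delta : short step (plays the role of the SGD step size)
    kappa : long step parameter (an estimate of the condition number)
    xi    : statistical advantage parameter (xi <= sqrt(kappa))
    c     : a constant < 1; its exact value has no special significance
    """
    g = grad_fn(w)
    alpha = 1.0 - c * c * xi / kappa  # averaging weight for the running average
    # Running average: weighted average of the old average and a *long*
    # gradient step taken from the descent iterate.
    w_avg = alpha * w_avg + (1.0 - alpha) * (w - (kappa * delta / c) * g)
    # Descent iterate: convex combination of a *short* gradient step from the
    # descent iterate and the running average.
    tau = c / (c + (1.0 - alpha))
    w = tau * (w - delta * g) + (1.0 - tau) * w_avg
    return w, w_avg
```

On an ill-conditioned quadratic, the long step makes progress along flat directions while the short step keeps the sharp directions stable, which is the intuition given in the text.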
5 Experiments
We now present our experimental results exploring the performance of SGD, HB, NAG and ASGD. Our experiments are geared towards answering the following questions:

Even for linear regression, is the suboptimality of HB restricted to specific distributions in Section 3 or does it hold for more general distributions as well? Is the same true of NAG?

What is the reason for the superiority of HB and NAG in practice? Is it because momentum methods have better performance than SGD with stochastic gradients, or due to minibatching? Does this superiority hold even for small minibatches?

How does the performance of ASGD compare to that of SGD, HB and NAG, when training deep networks?
Section 5.1 and parts of Section 5.2 address the first two questions; Sections 5.2 and 5.3 address the remaining questions. We use Matlab to conduct the experiments presented in Section 5.1 and PyTorch (pytorch, 2017) for our deep network experiments. PyTorch code implementing the ASGD algorithm can be found at https://github.com/rahulkidambi/AccSGD.
5.1 Linear Regression
In this section, we present results on the performance of the four optimization methods (SGD, HB, NAG and ASGD) for linear regression problems. We consider two different classes of linear regression problems, both in two dimensions. For a given condition number, we consider the following two distributions:
Discrete: w.p. and with ; is the standard basis vector.
Gaussian : is distributed as a Gaussian random vector with covariance matrix .
We fix a randomly generated optimum for both the distributions above. We vary the condition number over a range of values and, for each value, run 100 independent runs of all four methods. We define the algorithm to have converged if no error in the second half of the run exceeds the starting error – this is reasonable since we expect geometric convergence of the initial error.
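The convergence criterion just described is simple to state in code (a sketch; `errors` is assumed to be the per-iteration error trace of one run):

```python
import numpy as np

def converged(errors):
    """A run 'converges' if no error in the second half of the run
    exceeds the starting error."""
    errors = np.asarray(errors)
    half = len(errors) // 2
    return bool(np.all(errors[half:] <= errors[0]))
```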
Unlike for ASGD and SGD, we do not know the optimal learning rate and momentum parameters for NAG and HB in the stochastic gradient model. So, we perform a grid search over the values of the learning rate and momentum parameters. In particular, we lay a grid over learning rate and momentum and run NAG and HB at each grid point. Then, for each grid point, we consider the subset of trials that converged and compute the final error using these. Finally, the parameters that yield the minimal error are chosen for NAG and HB, and these numbers are reported. We measure the convergence performance of a method using:
(1) 
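The tuning protocol described above can be sketched as follows. The grid values are left as inputs since the text elides them, and `run_trial` is a hypothetical helper that runs one seeded trial and reports whether it converged along with its final error:

```python
import itertools
import numpy as np

def grid_search(run_trial, lrs, momenta, n_runs=100):
    """For each (learning rate, momentum) grid point, run n_runs independent
    trials, keep only those that converged, and score the point by the mean
    final error of the converged trials; return the best point and score."""
    best_params, best_err = None, np.inf
    for lr, mom in itertools.product(lrs, momenta):
        errs = [err for ok, err in (run_trial(lr, mom, seed) for seed in range(n_runs)) if ok]
        if errs and np.mean(errs) < best_err:
            best_params, best_err = (lr, mom), np.mean(errs)
    return best_params, best_err
```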
Algorithm   Slope – discrete   Slope – Gaussian
SGD         0.9302             0.8745
HB          0.8522             0.8769
NAG         0.98               0.9494
ASGD        0.5480             0.5127
We compute the rate (1) for all the algorithms with varying condition number. Given a rate vs condition number plot for a method, we compute its slope using linear regression. Table 1 presents the estimated slopes for the various methods for both the discrete and the Gaussian case. The slope values clearly show that the rates of SGD, HB and NAG have a nearly linear dependence on the condition number, while that of ASGD seems to scale linearly with its square root.
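The slope estimate reported in Table 1 amounts to an ordinary least-squares fit of log(rate) against log(condition number); a minimal sketch:

```python
import numpy as np

def estimate_slope(kappas, rates):
    """Fit log(rate) = slope * log(kappa) + intercept by least squares;
    a slope near 1 means the rate scales linearly with the condition
    number, and a slope near 0.5 means it scales with its square root."""
    slope, _intercept = np.polyfit(np.log(kappas), np.log(rates), 1)
    return float(slope)
```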
5.2 Deep Autoencoders for MNIST
In this section, we present experimental results on training deep autoencoders for the mnist dataset, following the setup of Hinton and Salakhutdinov (2006). This problem is a standard benchmark for evaluating optimization algorithms, e.g., Martens (2010); Sutskever et al. (2013); Martens and Grosse (2015); Reddi et al. (2017). The network architecture follows previous work (Hinton and Salakhutdinov, 2006), with the first and last layers representing the input and output respectively. All hidden/output nodes employ sigmoid activations except for the bottleneck layer, which employs linear activations, and we use the MSE loss. We use the initialization scheme of Martens (2010), also employed in Sutskever et al. (2013); Martens and Grosse (2015). We perform training with two minibatch sizes, running the two settings for different numbers of epochs. For each of SGD, HB, NAG and ASGD, a grid search over learning rate, momentum and long step parameter (whichever is applicable) was done, and the best parameters were chosen based on achieving the smallest training error, following the protocol of Sutskever et al. (2013). The grid was extended whenever the best parameter fell at the edge of a grid. For the parameters chosen by grid search, we performed runs with different seeds and averaged the results. The results are presented in Figures 2 and 3. Note that the final loss values reported are suboptimal compared to the published literature, e.g., Sutskever et al. (2013): they report results after many more updates with a much larger batch size, which implies many more total gradient evaluations than ours.
Effect of minibatch sizes: While HB and NAG decay the loss faster than SGD for the larger minibatch size (Figure 2), this superior decay rate does not hold for the smaller minibatch size (Figure 3).
This supports our intuitions from the stochastic linear regression setting, where we demonstrate that HB and NAG are suboptimal in the stochastic first order oracle model.
5.3 Deep Residual Networks for CIFAR10
We will now present experimental results on training deep residual networks (He et al., 2016b) with preactivation blocks (He et al., 2016a) for classifying images in cifar-10 (Krizhevsky and Hinton, 2009); the network we use has 44 layers (dubbed preresnet-44). The code for this section was downloaded from preresnet (2017). One of the most distinct characteristics of this experiment compared to our previous experiments is learning rate decay. We use a validation set based decay scheme, wherein, after a fixed number of epochs, we decay the learning rate by a certain factor (which we grid search over) if the validation zero-one error does not decrease by at least a certain amount (precise numbers are provided in the appendix since they vary across batch sizes). Due to space constraints, we present only a subset of training error plots; please see Appendix C.3 for more plots of training errors.
Effect of minibatch sizes: Our first experiment tries to understand how the performance of HB and NAG compares with that of SGD and how it varies with minibatch size. Figure 4 presents the test zero-one error for two minibatch sizes; training with the two batch sizes was done for different numbers of epochs. We perform a grid search over all parameters for each of these algorithms; see Appendix C.3 for details on the grid search parameters. We observe that the final errors achieved by SGD, HB and NAG are all very close for both batch sizes. While NAG exhibits a superior rate of convergence compared to SGD and HB for the larger batch size, this superiority disappears for the smaller batch size.
Comparison of ASGD with momentum methods: The next experiment tries to understand how ASGD compares with HB and NAG. The errors achieved by the various methods when we grid search over all parameters are presented in Table 2. Note that the final test errors differ across the two batch sizes partly because the runs used different numbers of epochs (due to time constraints).
Algorithm  Final test error – batch size  Final test error – batch size 

SGD  
HB  
NAG  
ASGD
The hyperparameters have been chosen by grid search.
While the final error achieved by ASGD is similar or favorable compared to all other methods, we are also interested in understanding whether ASGD has a superior convergence speed. For this experiment, we need to address the issue of differing learning rates used by the various algorithms and the different iterations at which they decay learning rates. So, for each of HB and NAG, we choose the learning rate and decay factors by grid search, use these values for ASGD, and grid search only over the long step parameter and momentum for ASGD. The results are presented in Figures 5 and 6. For one of the batch sizes, ASGD decays the error at a faster rate compared to both HB and NAG. For the other batch size, while we see a superior convergence of ASGD compared to NAG, we do not see this superiority over HB. The reason is that the learning rate for HB, which we also use for ASGD, is quite suboptimal for ASGD. So, for this batch size, we also compare a fully optimized ASGD (i.e., with grid search over the learning rate as well) with HB; the superiority of ASGD over HB is clear from this comparison. These results suggest that ASGD decays the error at a faster rate compared to HB and NAG across different batch sizes.
6 Related Work
First order oracle methods: The primary method in this family is Gradient Descent (GD) (Cauchy, 1847). As mentioned previously, GD is suboptimal for smooth convex optimization (Nesterov, 2004), and this is addressed using momentum methods such as the Heavy Ball method (Polyak, 1964) (for quadratics), and Nesterov’s Accelerated gradient descent (Nesterov, 1983).
Stochastic first order methods and noise stability: The simplest method employing the SFO is SGD (Robbins and Monro, 1951); the effectiveness of SGD has been immense, and its applicability goes well beyond optimizing convex objectives. Accelerating SGD is a tricky proposition given the instability of fast gradient methods in dealing with noise, as evidenced by several negative results considering statistical (Proakis, 1974; Polyak, 1987; Roy and Shynk, 1990), numerical (Paige, 1971; Greenbaum, 1989) and adversarial errors (d’Aspremont, 2008; Devolder et al., 2014). Jain et al. (2017) developed the first provably accelerated SGD method for linear regression achieving minimax rates, inspired by a method of Nesterov (2012b). The schemes of Ghadimi and Lan (2012, 2013) and Dieuleveut et al. (2016), which indicate that acceleration is possible with noisy gradients, do not apply in the SFO model under which algorithms are run in practice (see Jain et al. (2017) for more details).
While HB (Polyak, 1964) and NAG (Nesterov, 1983) are known to be effective in the case of an exact first order oracle, their theoretical performance in the SFO model is not well understood.
Understanding Stochastic Heavy Ball: Understanding HB’s performance with inexact gradients has been considered in efforts spanning several decades, in communities such as controls, optimization and signal processing. Polyak (1987) considered HB with noisy gradients and concluded that the improvements offered by HB vanish unless strong assumptions on the inexactness are made; an instance of this is when the variance of the inexactness decreases as the iterates approach the minimizer. Proakis (1974); Roy and Shynk (1990); Sharma et al. (1998) suggest that the improved nonasymptotic rates offered by stochastic HB arise at the cost of worse asymptotic behavior. We resolve these unquantified improvements in rates as being just constant factors over SGD, in stark contrast to the gains offered by ASGD. Loizou and Richtárik (2017) state their method as stochastic HB but require stochastic gradients that nearly behave as exact gradients; indeed, their rates match those of the standard HB method (Polyak, 1964). Such rates are not information theoretically possible (see Jain et al. (2017)), especially with a batch size of 1 or even with constant-sized minibatches.
Accelerated and fast methods for finite-sums: There have been developments pertaining to faster methods for finite-sums (also known as offline stochastic optimization): amongst these are methods such as SDCA (Shalev-Shwartz and Zhang, 2012), SAG (Roux et al., 2012), SVRG (Johnson and Zhang, 2013) and SAGA (Defazio et al., 2014), which offer linear convergence rates for strongly convex finite-sums, improving over SGD’s sublinear rates (Rakhlin et al., 2012). These methods have been improved using accelerated variants (Shalev-Shwartz and Zhang, 2014; Frostig et al., 2015a; Lin et al., 2015; Defazio, 2016; Allen-Zhu, 2016). Note that these methods require storing the entire training set in memory and taking multiple passes over it for guaranteed progress. Furthermore, they require computing a batch gradient or have memory requirements that grow with the dataset size. For deep learning problems, data augmentation is often deemed necessary for achieving good performance; this implies that computing batch gradients (or meeting storage requirements) over the augmented dataset is often infeasible. Such requirements are mitigated by the use of simple streaming methods such as SGD, ASGD, HB and NAG. For other technical distinctions between offline and online stochastic methods, refer to Frostig et al. (2015b).
Practical methods for training deep networks: Momentum based methods employed with stochastic gradients (Sutskever et al., 2013) have become standard and popular in practice, and tend to outperform standard SGD on several important practical problems. As previously mentioned, we attribute this improvement to the effect of minibatching rather than to an improvement offered by HB or NAG in the SFO model. Schemes such as Adagrad (Duchi et al., 2011), RMSProp (Tieleman and Hinton, 2012) and Adam (Kingma and Ba, 2014) represent an important and useful class of algorithms. The advantages offered by these methods are orthogonal to the advantages offered by fast gradient methods; it is an important direction to explore augmenting these methods with ASGD as opposed to standard HB or NAG based acceleration.
Chaudhari et al. (2017) proposed Entropy-SGD, which optimizes an altered objective that adds a local strong convexity term to the empirical risk, with an aim to improve generalization. However, convergence rates for convex problems and the generalization ability of this technique are not rigorously understood. Chaudhari et al. (2017) propose to use SGD in their procedure but mention that they employ the HB/NAG method in their implementation for achieving better performance; naturally, ASGD can be used in this context as well. Path normalized SGD (Neyshabur et al., 2015) is a variant of SGD that alters the metric on which the weights are optimized. As noted in their paper, path normalized SGD could be improved using HB/NAG (or even the ASGD method).
7 Conclusions and Future Directions
In this paper, we show that the performance gain of HB over SGD in the stochastic setting is attributable to minibatching rather than to the algorithm’s ability to accelerate with stochastic gradients. Concretely, we provide a formal proof that for several easy problem instances, HB does not outperform SGD despite a large condition number; we observe the same trend for NAG in our experiments. In contrast, ASGD (Jain et al., 2017) provides a significant improvement over SGD for these problem instances. We observe similar trends when training a resnet on cifar-10 and an autoencoder on mnist. This work motivates several directions, such as understanding the behavior of ASGD on domains such as NLP and developing automatic momentum tuning schemes (Zhang et al., 2017).
Acknowledgments
Sham Kakade acknowledges funding from Washington Research Foundation Fund for Innovation in DataIntensive Discovery and the NSF through awards CCF, CCF and CCF.
References
 Allen-Zhu (2016) Z. Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. CoRR, abs/1603.05953, 2016.
 Bottou and Bousquet (2007) L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS 20, 2007.
 Cauchy (1847) A. L. Cauchy. Méthode générale pour la résolution des systèmes d’équations simultanées. C. R. Acad. Sci. Paris, 1847.
 Chaudhari et al. (2017) P. Chaudhari, A. Choromanska, S. Soatto, Y. LeCun, C. Baldassi, C. Borgs, J. Chayes, L. Sagun, and R. Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. CoRR, abs/1611.01838, 2017.
 d’Aspremont (2008) A. d’Aspremont. Smooth optimization with approximate gradient. SIAM Journal on Optimization, 19(3):1171–1183, 2008.
 Defazio (2016) A. Defazio. A simple practical accelerated method for finite sums. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016.
 Defazio et al. (2014) A. Defazio, F. R. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS 27, 2014.
 Devolder et al. (2014) O. Devolder, F. Glineur, and Y. E. Nesterov. First-order methods of smooth convex optimization with inexact oracle. Mathematical Programming, 146:37–75, 2014.
 Dieuleveut et al. (2016) A. Dieuleveut, N. Flammarion, and F. R. Bach. Harder, better, faster, stronger convergence rates for leastsquares regression. CoRR, abs/1602.05419, 2016.

 Duchi et al. (2011) J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
 Frostig et al. (2015a) R. Frostig, R. Ge, S. Kakade, and A. Sidford. Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization. In ICML, 2015a.
 Frostig et al. (2015b) R. Frostig, R. Ge, S. M. Kakade, and A. Sidford. Competing with the empirical risk minimizer in a single pass. In COLT, 2015b.
 Ghadimi and Lan (2012) S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization i: A generic algorithmic framework. SIAM Journal on Optimization, 2012.
 Ghadimi and Lan (2013) S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, ii: shrinking procedures and optimal algorithms. SIAM Journal on Optimization, 2013.
 Greenbaum (1989) A. Greenbaum. Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences. Linear Algebra and its Applications, 1989.
 He et al. (2016a) K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV (4), Lecture Notes in Computer Science, pages 630–645. Springer, 2016a.
 He et al. (2016b) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016b.
 Hinton and Salakhutdinov (2006) G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504–507, 2006.
 Jain et al. (2016) P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli, and A. Sidford. Parallelizing stochastic approximation through minibatching and tailaveraging. arXiv preprint arXiv:1610.03774, 2016.
 Jain et al. (2017) P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli, and A. Sidford. Accelerating stochastic gradient descent. arXiv preprint arXiv:1704.08227, 2017.
 Johnson and Zhang (2013) R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS 26, 2013.
 Kingma and Ba (2014) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
 Krizhevsky and Hinton (2009) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
 Lin et al. (2015) H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for firstorder optimization. In NIPS, 2015.
 Loizou and Richtárik (2017) N. Loizou and P. Richtárik. Linearly convergent stochastic heavy ball method for minimizing generalization error. 2017.
 Martens (2010) J. Martens. Deep learning via hessianfree optimization. In International conference on machine learning, 2010.
 Martens and Grosse (2015) J. Martens and R. Grosse. Optimizing neural networks with kroneckerfactored approximate curvature. In International conference on machine learning, 2015.
 Nesterov (1983) Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
 Nesterov (2012a) Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming Series B, 2012a.
 Nesterov (2004) Y. E. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87 of Applied Optimization. Kluwer Academic Publishers, 2004.
 Nesterov (2012b) Y. E. Nesterov. Efficiency of coordinate descent methods on hugescale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012b.
 Neyshabur et al. (2015) B. Neyshabur, R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in deep neural networks. CoRR, abs/1506.02617, 2015.

 Paige (1971) C. C. Paige. The computation of eigenvalues and eigenvectors of very large sparse matrices. PhD thesis, University of London, 1971.
 Polyak (1964) B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
 Polyak (1987) B. T. Polyak. Introduction to Optimization. Optimization Software, 1987.
 preresnet (2017) preresnet. PreResNet-44 for CIFAR-10. https://github.com/DXY/ResNeXtDenseNet, 2017. Accessed: 2017-10-25.
 Proakis (1974) J. G. Proakis. Channel identification for high speed digital communications. IEEE Transactions on Automatic Control, 1974.
 pytorch (2017) pytorch. PyTorch. https://github.com/pytorch, 2017. Accessed: 2017-10-25.
 Rakhlin et al. (2012) A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
 Reddi et al. (2017) S. Reddi, M. Zaheer, S. Sra, B. Poczos, F. Bach, R. Salakhutdinov, and A. Smola. A generic approach for escaping saddle points. arXiv preprint arXiv:1709.01434, 2017.
 Robbins and Monro (1951) H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
 Roux et al. (2012) N. L. Roux, M. Schmidt, and F. R. Bach. A stochastic gradient method with an exponential convergence rate for stronglyconvex optimization with finite training sets. In NIPS 25, 2012.
 Roy and Shynk (1990) S. Roy and J. J. Shynk. Analysis of the momentum lms algorithm. IEEE Transactions on Acoustics, Speech and Signal Processing, 1990.
 Shalev-Shwartz and Zhang (2012) S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. CoRR, abs/1209.1873, 2012.
 Shalev-Shwartz and Zhang (2014) S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, 2014.
 Sharma et al. (1998) R. Sharma, W. A. Sethares, and J. A. Bucklew. Analysis of momentum adaptive filtering algorithms. IEEE Transactions on Signal Processing, 1998.
 Sutskever et al. (2013) I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147, 2013.
 Tieleman and Hinton (2012) T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
 Zhang et al. (2017) J. Zhang, I. Mitliagkas, and C. Ré. Yellowfin and the art of momentum tuning. CoRR, abs/1706.03471, 2017.
Appendix A Suboptimality of HB: Proof of Proposition 3
Before proceeding to the proof, we introduce some additional notation. Let denote the concatenated and centered estimates in the direction for .
Since the distribution over is such that the coordinates are decoupled, we see that can be written in terms of as:
Let denote the covariance matrix of . We have with, defined as
We prove Proposition 3 by showing that, for any choice of stepsize and momentum, at least one of the following holds:

has an eigenvalue larger than , or,

the largest eigenvalue of is greater than .
This is formalized in the following two lemmas.
Lemma 4.
If the stepsize is such that , then has an eigenvalue .
Lemma 5.
If the stepsize is such that , then has an eigenvalue of magnitude .
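The dichotomy in these two lemmas concerns the spectrum of an expected update operator. As a numerical companion, the sketch below builds a 3×3 second-moment operator for stochastic heavy ball on a scalar quadratic; its entries follow from an assumed noise model (curvature estimate with mean h and variance sigma^2) and are this sketch's reconstruction, not the matrix in the omitted displays:

```python
import numpy as np

def moment_operator(alpha, beta, h, sigma):
    """Map advancing the moments (E[x_t^2], E[x_t x_{t-1}], E[x_{t-1}^2])
    by one step of x_{t+1} = (1 + beta - alpha*hhat) x_t - beta x_{t-1},
    where hhat has mean h and variance sigma^2 (an assumed noise model)."""
    m = 1 + beta - alpha * h
    return np.array([
        [m**2 + (alpha * sigma)**2, -2 * beta * m, beta**2],
        [m, -beta, 0.0],
        [1.0, 0.0, 0.0],
    ])

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Sanity check: with beta = 0 this collapses to plain SGD on the same
# quadratic, whose moment contraction factor is (1 - alpha*h)^2 + (alpha*sigma)^2.
print(spectral_radius(moment_operator(0.1, 0.0, 1.0, 1.0)))  # 0.82
```

The spectral radius of this operator governs the decay (or growth) of the expected squared error, which is the quantity the two lemmas bound from below.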
Given this notation, we can now consider the dimension without the superscripts; when needed, they will be made clear in the exposition. Denoting and , we have:
A.1 Proof
The analysis proceeds by computing the characteristic polynomial of and evaluating it at different points to obtain bounds on its roots.
Lemma 6.
The characteristic polynomial of is:
Proof.
We first begin by writing out the expression for the determinant:
Expanding along the first column, we have:
Expanding the terms yields the expression in the lemma. ∎
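This determinant expansion can be reproduced with a computer algebra system. The matrix below is an illustrative second-moment operator for stochastic heavy ball on a scalar quadratic; its entries are an assumption of this sketch (the paper's matrix is defined in an omitted display), but the mechanics of the computation are the same:

```python
import sympy as sp

alpha, beta, sigma, h = sp.symbols('alpha beta sigma h', positive=True)
x = sp.symbols('x')
m = 1 + beta - alpha * h

# Illustrative second-moment operator (assumed form for this sketch):
B = sp.Matrix([
    [m**2 + alpha**2 * sigma**2, -2 * beta * m, beta**2],
    [m, -beta, 0],
    [1, 0, 0],
])

p = B.charpoly(x).as_expr()  # det(x*I - B), a monic cubic in x
print(sp.factor(B.det()))    # the product of the eigenvalues is beta**3
```

Evaluating the resulting polynomial at chosen points (as in the lemmas below) then bounds the roots, i.e., the eigenvalues of the operator.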
The next corollary follows by simple arithmetic manipulation.
Corollary 7.
Substituting in the characteristic equation of Lemma 6, we have:
(2) 
Proof of Lemma 4.
The first observation necessary to prove the lemma is that the characteristic polynomial approaches as , i.e., .
Remark 8.
The above characterization is striking: for any , increasing the momentum parameter requires reducing the step size for the algorithm to converge, a phenomenon that does not arise when fast gradient methods are employed in deterministic optimization. For instance, in deterministic optimization, setting yields . On the other hand, when employing the stochastic heavy ball method with , we have the condition , which implies .
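This trade-off can be seen concretely in a hypothetical scalar-quadratic model (the matrices below are this sketch's assumption, not the paper's displays): a (step size, momentum) pair for which deterministic heavy ball converges makes the second moment diverge once the gradient's curvature estimate is noisy.

```python
import numpy as np

alpha, beta, h, sigma = 1.0, 0.9, 1.0, 1.0
m = 1 + beta - alpha * h

# Deterministic HB iteration matrix on the state (x_t, x_{t-1}):
A = np.array([[m, -beta], [1.0, 0.0]])
# Assumed second-moment operator when the curvature estimate has variance sigma^2:
B = np.array([
    [m**2 + (alpha * sigma)**2, -2 * beta * m, beta**2],
    [m, -beta, 0.0],
    [1.0, 0.0, 0.0],
])

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(A) < 1, rho(B) > 1)  # True True: stable without noise, unstable with it
```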
We now prove Lemma 5. We first consider the large momentum setting.
Lemma 9.
When the momentum parameter is set such that , has an eigenvalue of magnitude .
Proof.
This follows easily from the fact that , thus implying . ∎
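The flavor of this lemma is easy to verify numerically under an illustrative scalar-quadratic model of stochastic heavy ball (the matrix entries below are this sketch's assumption): the second-moment operator has determinant beta^3, so the product of its three eigenvalues forces the spectral radius to be at least beta for every step size.

```python
import numpy as np

def hb_moment_operator(alpha, beta, h=1.0, sigma=1.0):
    # Assumed map advancing (E[x_t^2], E[x_t x_{t-1}], E[x_{t-1}^2]) one
    # step of stochastic HB on f(x) = h*x^2/2 with curvature noise sigma^2.
    m = 1 + beta - alpha * h
    return np.array([
        [m**2 + (alpha * sigma)**2, -2 * beta * m, beta**2],
        [m, -beta, 0.0],
        [1.0, 0.0, 0.0],
    ])

beta = 0.99
for alpha in (1e-4, 1e-3, 1e-2, 1e-1):
    B = hb_moment_operator(alpha, beta)
    rho = max(abs(np.linalg.eigvals(B)))
    # |det B| = beta^3 is the product of |eigenvalues|, hence rho >= beta:
    print(f"alpha={alpha:g}: rho={rho:.6f} (beta={beta})")
```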
Remark 10.
Note that the above lemma holds for any value of the learning rate , and for every eigendirection of . Thus, for “large” values of momentum, the behavior of stochastic heavy ball degenerates to that of stochastic gradient descent.
We now consider the setting where momentum is bounded away from .
Corollary 11.
Consider . Substituting in equation (7) and collecting terms in powers of , we obtain:
(3) 
Lemma 12.
Let , , . Then, .
Proof.
Since , we have , which implies .
Substituting the value of in equation (11), the coefficient of is .
We will bound this term along with to obtain:
where we use the fact that , . The natural implication of this bound is that the lower-order terms, such as and , will be negative owing to the large constant above. Let us verify that this is indeed the case by considering the terms having powers of and from equation (11):
The expression above evaluates to given an upper bound on the value of ; this follows from the fact that .
Next, consider the terms involving and , in particular,
Next,
In both these cases, we used the fact that implying . Finally, other remaining terms are negative. ∎
Before wrapping up the proof of the proposition, we need the following lemma to ensure that our lower bounds on the largest eigenvalue of indeed govern the algorithm’s rate, irrespective of where the algorithm is begun. Note that this makes our result much stronger than typical optimization lower bounds, which rely on specific initializations to ensure a component along the largest eigendirection of the update operator.
Lemma 13.
For any starting iterate , the HB method produces a nonzero component along the largest eigendirection of .
Proof.
As in the other proofs, it suffices to argue about each dimension of the problem separately. Before doing so, let us consider the dimension and outline the approach used to prove the claim: the idea is to examine the subspace spanned by the covariance of the iterates , for every starting iterate , and to show that the largest eigenvector of the expected operator is not orthogonal to this subspace. This implies that there exists a nonzero component of along the largest eigendirection of , and this component decays at a rate that is at best .
Since , we begin by examining the expected covariance spanned by the iterates . Let . Now, this implies . Then,
This implies that appears only as a scale factor. In turn, in order to analyze the subspace spanned by the covariance of the iterates , we may assume without loss of generality. This implies . With this in place, we can drop the superscript representing the dimension, since the analysis decouples across the dimensions . Furthermore, let the entries of the vector be represented as . Next, denote . This implies,
Furthermore,
(4) 
Let us consider the vectorized form of , and we denote this as . Note that makes become a column vector of size . Now, consider for and concatenate these to form a matrix that we denote as , i.e.
Now, since is a symmetric matrix, contains two identical rows, implying that it has a zero eigenvalue with corresponding eigenvector . It turns out that this is also an eigenvector of , with eigenvalue . Note that . This leaves two cases to consider. (i) All eigenvalues of have the same magnitude (). In this case we are done: there exists at least one nonzero eigenvalue of , which must have a component along one of the eigenvectors of , and all of these eigenvectors have eigenvalues of magnitude ; thus there exists an iterate with a nonzero component along the largest eigendirection of . (ii) The eigenvalues have different magnitudes. In this case, note that , implying . We then need to prove that spans a three-dimensional subspace; if it does, it contains a component along the largest eigendirection of , which completes the proof. To determine whether spans a three-dimensional subspace, we consider a different (yet related) matrix, which we call , defined as:
Given the expressions for (by definition of
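The structure of this vectorized operator can be probed numerically. In the hedged sketch below (scalar quadratic, two-point curvature noise; the iteration matrix is this sketch's assumption), the antisymmetric direction is an exact eigenvector of the expected Kronecker operator with eigenvalue beta, since every sampled iteration matrix has determinant beta; this mirrors the eigenvector-with-known-eigenvalue observation made above.

```python
import numpy as np

alpha, beta, h, sigma = 0.1, 0.5, 1.0, 1.0

def hb_matrix(hhat):
    # One-step HB iteration matrix on the state (x_t, x_{t-1}) for a
    # scalar quadratic with sampled curvature hhat (an assumed model).
    return np.array([[1 + beta - alpha * hhat, -beta], [1.0, 0.0]])

# Two-point curvature distribution with mean h and variance sigma^2:
K = 0.5 * (np.kron(hb_matrix(h - sigma), hb_matrix(h - sigma))
           + np.kron(hb_matrix(h + sigma), hb_matrix(h + sigma)))

# Row-major vectorization of covariances: vec(C_{t+1}) = K @ vec(C_t).
# For an antisymmetric M, A M A^T = det(A) * M, and det(A) = beta for
# every sample, so the antisymmetric direction is an exact eigenvector:
v = np.array([[0.0, 1.0], [-1.0, 0.0]]).flatten()
print(np.allclose(K @ v, beta * v))  # True
```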