Stochastic optimization is the process of minimizing a deterministic objective function via the simulation of random elements, and it is one of the most successful methods for optimizing complex or unknown objectives. Relatively simple stochastic optimization procedures—in particular, stochastic gradient descent (SGD)—have become the backbone of modern machine learning (ML). To improve understanding of stochastic optimization in ML, and particularly why SGD and its extensions work so well, recent theoretical work has sought to study its properties and dynamics.
Such analyses typically approach the problem through one of two perspectives. The first perspective, an optimization (or quenching) perspective, examines convergence either in expectation [11, 84, 28, 60, 20] or with some positive (high) probability [66, 19, 41, 77] through the lens of a deterministic counterpart. This perspective inherits some limitations of deterministic optimizers, including assumptions (e.g., convexity, the Polyak-Łojasiewicz criterion, etc.) that are either not satisfied by state-of-the-art problems, or not strong enough to imply convergence to a quality (e.g., global) optimum. More concerning, however, is the inability to explain what has come to be known as the “generalization gap” phenomenon: increasing stochasticity by reducing batch size appears to improve generalization performance [38, 55]. Empirically, existing strategies do tend to break down for inference tasks when using large batch sizes. The second perspective, a probabilistic (annealing) perspective, examines algorithms through the lens of Markov process theory [21, 34, 52, 18]. Here, stochastic optimizers are interpreted as samplers from probability distributions concentrated around optima; annealing the optimizer (by reducing the step size) increasingly concentrates probability mass around global optima. Traditional analyses trade restrictions on the objective for precise annealing schedules that guarantee adequate mixing and ensure convergence. However, it is uncommon in practice to consider step size schedules that decrease sufficiently slowly as to guarantee convergence to global optima with probability one. In fact, SGD-based methods with poor initialization can easily get stuck near poor local minima under a typical step decay schedule.
More recent efforts conduct a distributional analysis, directly examining the probability distribution that a stochastic optimizer targets for each fixed set of hyperparameters [52, 4, 18]. Here, one can assess a stochastic optimizer according to its capacity to reach and then occupy neighbourhoods of high-quality optima in the initial stages of training, where the step size is large and constant. As the step size is then rapidly reduced, tighter neighbourhoods with higher probability mass surrounding nearby minima become achievable. This is most easily accomplished using a variational approach by appealing to continuous-time Langevin approximations [52, 10], whose stationary distributions are known explicitly; but these approaches also require restrictive assumptions, such as constant or bounded volatility [53, 61]. Interestingly, these assumptions parallel the common belief that the predominant part of the stochastic component of an optimizer is an additive perturbation [41, 82]. Such analyses have been called into question by recent discoveries of non-Gaussian noise that leads to heavy-tailed stationary behaviour (whose distributions have infinite Laplace transform). This behaviour implies stronger exploratory properties and a greater tendency to rapidly reach faraway basins than earlier Langevin-centric analyses suggest; but even these recent non-Gaussian analyses have relied on strong assumptions, additive noise, and continuous-time approximations of their own.
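For intuition on why these Langevin stationary distributions are explicit: for a quadratic objective, the overdamped diffusion with constant volatility has a Gaussian (light-tailed) stationary law. The sketch below is our own minimal illustration (the quadratic objective, inverse temperature, and Euler–Maruyama discretization are our choices, not a construction from this paper):

```python
import numpy as np

def euler_maruyama(grad, w0, step, beta, n_steps, rng):
    """Simulate dW = -f'(W) dt + sqrt(2/beta) dB with the Euler-Maruyama scheme."""
    w, traj = w0, np.empty(n_steps)
    for k in range(n_steps):
        w += -step * grad(w) + np.sqrt(2.0 * step / beta) * rng.standard_normal()
        traj[k] = w
    return traj

rng = np.random.default_rng(0)
beta = 4.0  # inverse temperature (illustrative choice)
traj = euler_maruyama(lambda w: w, w0=0.0, step=1e-2, beta=beta, n_steps=200_000, rng=rng)
burned = traj[50_000:]  # discard burn-in
# For f(w) = w^2/2, the stationary law is N(0, 1/beta), i.e. variance 0.25 here.
print(round(float(np.var(burned)), 3))
```

The empirical variance of the chain matches the Gibbs prediction, with all moments finite; the point of the present paper is that discrete-time stochastic optimizers need not behave this way.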
We model stochastic optimizers as Markov random recurrence relations, thereby avoiding continuous-time approximations, and we examine their stationary distributions. The formulation of this model is described in §2. We show that multiplicative noise, frequently assumed away in favour of more convenient additive noise in continuous analyses, can often lead to heavy-tailed stationary behaviour, playing a critical role in the dynamics of a stochastic optimizer, and influencing its capacity to hop between basins in the loss landscape. In this paper, we consider heavy-tailed behaviour that assumes a power law functional form. We say that the stationary distribution of the parameters/weights $W_\infty$ exhibits a power law, with tail probabilities $\mathbb{P}(\|W_\infty\| > t) \sim c\,t^{-\alpha}$ as $t \to \infty$, for some constants $c > 0$ and $\alpha > 0$, where $\alpha$ is called the tail exponent (smaller tail exponents correspond to heavier tails) — further details, including precise definitions, are in Appendix A. To inform our analysis, in §3, we examine the special case of constant step-size SGD applied to linear least squares, and we find it obeys a random linear recurrence relation displaying both multiplicative and additive noise. Using well-known results on such recurrences, we isolate three regimes determining the tail behaviour of SGD (shown in Table 1, discussed in detail in §3), finding stationary behaviour always exhibits a precise power law in an infinite data regime.
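Tail exponents of this kind can be estimated empirically from samples; the Hill estimator is one standard choice. The sketch below is our own illustration on synthetic Pareto data with known exponent $\alpha = 2$ (the estimator and data are not taken from this paper):

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of the tail exponent from the k largest order statistics."""
    order = np.sort(np.abs(np.asarray(sample, dtype=float)))
    tail, threshold = order[-k:], order[-k - 1]
    return k / float(np.sum(np.log(tail / threshold)))

rng = np.random.default_rng(1)
# Inverse-CDF sampling of a Pareto law with P(X > t) = t^(-2) for t >= 1.
pareto = rng.uniform(size=100_000) ** (-1.0 / 2.0)
alpha_hat = hill_estimator(pareto, k=2000)
print(round(alpha_hat, 2))  # close to the true exponent 2
```

Applied to the iterates of a stochastic optimizer, such an estimate gives a practical handle on the regimes of Table 1.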
Table 1: Tail regimes for the random linear recurrence $W_{k+1} = A_{k+1} W_k + B_{k+1}$; precise conditions appear in Appendix A, and $\alpha, \beta$ denote strictly positive tail exponents.

|Regime|Condition on $(A, B)$|Tails for $B$|Tails for $W_\infty$|
|Light-tailed (§A.1)|$\mathbb{P}(\|A\| > 1) = 0$ and $B$ light-tailed|light|light|
|Heavy-tailed (multiplicative) (§A.2)|$\mathbb{E}\|A\|^{\alpha} = 1$ with $\mathbb{P}(\|A\| > 1) > 0$|light|power law, exponent $\alpha$|
|Heavy-tailed (additive) (§A.3)|—|power law, exponent $\beta$|power law, exponent $\beta$|
Table 2: Continuous-time analogue of Table 1 (see §6), with example choices of the volatility $\sigma$ when the objective is quadratic.

|Regime|Noise|Condition on $\sigma$|Quadratic Example|
|Light-tailed|White noise|$\sigma$ bounded|constant $\sigma$|
|Heavy-tailed (multiplicative)|White noise|$\sigma$ unbounded|$\sigma$ growing with $\|w\|$|
|Heavy-tailed (additive)|Lévy noise|—|arbitrary|
In §4, these observations are extended by providing sufficient conditions for the existence of power laws arising in arbitrary iterative stochastic optimization algorithms on both convex and non-convex problems, with more precise results when updates are Lipschitz. Using our results, factors influencing tail behaviour are examined, with existing empirical findings supporting the hypothesis that heavier tails coincide with improved generalization performance. Numerical experiments are conducted in §5, illustrating how multiplicative noise and heavy-tailed stationary behaviour improve capacity for basin hopping (relative to light-tailed stationary behaviour) in the exploratory phase of learning. We finish by discussing the impact on related work in §6, including a continuous-time analogue of Table 1 (shown in Table 2).
There is a large body of related work, and we review only the most directly related here. Analysis of stochastic optimizers via stationary distributions of Markov processes was recently considered in [52, 4, 18]. The latter, in particular, examined first and second moments of the stationary distribution, although these can be ineffective measures of concentration in heavy-tailed settings. Heavy tails in machine learning have been observed and empirically examined in spectral distributions of weights [54, 55, 56, 57, 58] and in the weights themselves [69, 64], but (ML-style) theoretical analyses on the subject seem limited to continuous-time examinations [69, 70]. (Immediately prior to uploading this article to the arXiv, we became aware of a concurrent article, submitted to the arXiv a few days earlier, which conducts a detailed analysis of heavy tails in the stationary distribution of SGD applied to least-squares linear regression. Our discussion in §3 and Appendix A, which for us serves predominantly as motivation in this special case, recovers a number of their results. Our primary contributions in §2 and §4 provide extensions of these results to more general stochastic optimization algorithms and more general stochastic optimization problems. We encourage readers to see that article for details in the linear least-squares setting.) Connections between multiplicative noise and heavy-tailed fluctuations can be seen throughout the wider literature [15, 22, 73, 9]. From a physical point of view, multiplicative noise acts as an external environmental field, capable of exhibiting drift towards low energy states. Hysteretic optimization is one example of a stochastic optimization algorithm taking advantage of this property. To our knowledge, no theoretical analysis of this phenomenon has been conducted in a general optimization setting. Indeed, while multiplicative noise in stochastic optimization has been explored in some recent empirical analyses [79, 83, 36], its impact appears underappreciated, relative to the well-studied and exploited effects of additive noise [23, 37, 19, 41]. Section 6 contains a discussion of additional related work in light of our results.
We let $I$ denote the identity matrix. Unless otherwise stated, we default to the following norms: for a vector $x$, we let $\|x\|$ denote the Euclidean norm of $x$, and for a matrix $A$, the norm $\|A\|$ denotes the $\ell^2$-induced (spectral) norm. Also, for a matrix $A$, $\|A\|_F$ is the Frobenius norm, and $\sigma_{\min}(A)$, $\sigma_{\max}(A)$ are its smallest and largest singular values, respectively. For two vectors/matrices $A, B$, we let $A \otimes B$ denote their tensor (or Kronecker) product. Knuth asymptotic notation is adopted. The notation $(\Omega, \mathcal{F}, \mathbb{P})$ is reserved to denote an appropriate underlying probability space. For two random elements $X, Y$, we write $X \stackrel{d}{=} Y$ if $X$ and $Y$ have the same distribution. Finally, for a random vector/matrix $X$, $\|X\|_p = (\mathbb{E}\|X\|^p)^{1/p}$ for $p \geq 1$. All proofs are relegated to Appendix C.
2 Stochastic optimization as a Markov chain
In this section, we describe how to model a stochastic optimization algorithm as a Markov chain — in particular, a random recurrence relation. This formulation is uncommon in the ML literature, but will be important for our analysis. Consider a general single-objective stochastic optimization problem, where the goal is to solve problems of the form
$$\min_{w} f(w), \qquad f(w) = \mathbb{E}\,\ell(w; X),$$
for some scalar loss function $\ell$ and random element $X$ (the data) [45, §12]. In the sequel, we shall assume the weights $w$ occupy a vector space with norm $\|\cdot\|$. To minimize $f$ with respect to $w$, some form of fixed point iteration is typically adopted. Supposing that there exists some continuous map $\Psi(w) = \mathbb{E}\,\Psi_X(w)$ such that any fixed point of $\Psi$ is a minimizer of $f$, the sequence of iterations
$$W_{k+1} = \Psi(W_k), \qquad k = 0, 1, 2, \ldots \tag{1}$$
either diverges, or converges to a minimizer of $f$. In practice, this expectation may not be easily computed, so one could instead consider the sequence of iterated random functions
$$W_{k+1} = \Psi_{X_{k+1}}(W_k), \qquad k = 0, 1, 2, \ldots \tag{2}$$
where $X_1, X_2, \ldots$ are independent and identically distributed copies of $X$. For example, one could consider the Monte Carlo approximation of (1):
$$\Psi_X(w) = \frac{1}{n}\sum_{i=1}^{n} \psi(w; X^{(i)}), \qquad X = (X^{(1)}, \ldots, X^{(n)}), \tag{3}$$
which can be interpreted in the context of randomized subsampling with replacement, where each $X_k$ is a batch of $n$ subsamples drawn from a large dataset $\mathcal{D}$. Subsampling without replacement can also be treated in the form (3) via the Markov chain $W_k' = W_{km}$, where $m$ is the number of minibatches in each epoch (each epoch applies an independent random permutation of the dataset, so the per-epoch composition of updates is independent and identically distributed across epochs).
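The iterated-random-function viewpoint can be sketched in a few lines. The example below is our own minimal instantiation (one-dimensional least squares with synthetic data; the step size and batch size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic dataset: y = 2x + noise, so the least-squares solution is near 2.
X_data = rng.standard_normal(1000)
Y_data = 2.0 * X_data + 0.1 * rng.standard_normal(1000)

def Psi(w, batch_idx, gamma=0.1):
    """One random map: a minibatch SGD step on the squared loss (Monte Carlo form)."""
    xb, yb = X_data[batch_idx], Y_data[batch_idx]
    return w - gamma * np.mean((w * xb - yb) * xb)

# The Markov chain W_{k+1} = Psi_{X_{k+1}}(W_k), with batches drawn with replacement.
w = 0.0
for _ in range(2000):
    w = Psi(w, rng.integers(0, len(X_data), size=32))
print(round(w, 2))  # fluctuates around the least-squares solution
```

With a fixed step size, the iterates never settle at a point; they form a Markov chain whose stationary distribution concentrates around the minimizer.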
Since (2) will not, in general, converge to a single point, the stochastic approximation (SA) approach of Robbins–Monro considers a corresponding sequence of maps given by $\Psi_k(w) = w + \gamma_k(\Psi_{X_k}(w) - w)$, where $(\gamma_k)$ is a decreasing sequence of step sizes. Provided the noise is uniformly bounded, and $\sum_k \gamma_k = \infty$ with $\sum_k \gamma_k^2 < \infty$, the sequence of iterates converges in $L^2$ and almost surely to a minimizer of $f$. Note that this differs from the sample average approximation (SAA) approach, where a deterministic optimization algorithm is applied to a random objective (e.g., gradient descent on subsampled empirical risk).
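To contrast the two regimes, the toy experiment below (our own illustration: noisy gradients of a quadratic, with hypothetical schedules) runs the same iteration with a Robbins–Monro step size $\gamma_k = 1/k$ and with a constant step size; the former converges to the minimizer, while the latter merely fluctuates around it:

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_grad(w):
    # Gradient of f(w) = w^2 / 2, observed with additive unit-variance noise.
    return w + rng.standard_normal()

def run(step_fn, n_steps=20_000, w0=5.0):
    w = w0
    for k in range(1, n_steps + 1):
        w -= step_fn(k) * noisy_grad(w)
    return w

w_sa = run(lambda k: 1.0 / k)  # Robbins-Monro: sum = inf, sum of squares < inf
w_const = np.array([run(lambda k: 0.1, n_steps=2000) for _ in range(200)])
print(round(abs(w_sa), 3), round(float(np.std(w_const)), 3))
```

The spread of the constant-step runs is the object our analysis targets: the stationary distribution at fixed hyperparameters.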
Here are examples of how popular ML stochastic optimization algorithms fit into this framework.
Example 1 (SGD & SGD with momentum).
Minibatch SGD with step size $\gamma$ coincides with (3) under $\psi(w; x) = w - \gamma \nabla_w \ell(w; x)$. Incorporating momentum is possible by considering the augmented space of weights and velocities together. In the standard setup, letting $v_k$ denote the velocity and $w_k$ the weights, $v_{k+1} = \beta v_k - \gamma \nabla_w \hat{f}_{X_{k+1}}(w_k)$ and $w_{k+1} = w_k + v_{k+1}$, where $\hat{f}_X$ denotes the minibatch average loss, so that $(w_k, v_k)$ is a Markov chain of the form (2).
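A sketch of the augmented-space view (again a toy least-squares example of our own; the momentum parameter and batch size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
X_data = rng.standard_normal(500)
Y_data = 2.0 * X_data + 0.1 * rng.standard_normal(500)

def Psi_momentum(state, batch_idx, gamma=0.05, beta=0.9):
    """Heavy-ball SGD as a single random map on the augmented state (w, v)."""
    w, v = state
    xb, yb = X_data[batch_idx], Y_data[batch_idx]
    v = beta * v - gamma * np.mean((w * xb - yb) * xb)
    return (w + v, v)

state = (0.0, 0.0)
for _ in range(3000):
    state = Psi_momentum(state, rng.integers(0, len(X_data), size=16))
w, v = state
print(round(w, 2), round(v, 3))
```

On the augmented state, the update is again an iterated random function, so the same stationary-distribution analysis applies.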
Example 2 (Adam). Adaptive optimizers such as Adam can also be cast in the form (2), on the space of weights augmented with the first- and second-moment estimates of the stochastic gradients that the algorithm maintains.
Example 3 (Stochastic Newton). Stochastic Newton methods take the form (2) with $\Psi_X(w) = w - \gamma H_X(w)^{-1} \nabla \hat{f}_X(w)$, where $H_X$ and $\nabla \hat{f}_X$ denote a subsampled Hessian and gradient, respectively.
The iterations (2) are also sufficiently general to incorporate other random effects, e.g., dropout. Instead of taking the SA approach, in our analysis we will examine the original Markov chain (2), where hyperparameters (including the step size) are fixed. This will provide a clearer picture of the ongoing dynamics at each stage within an arbitrary learning rate schedule. Also, assuming reasonably rapid mixing, global tendencies of a stochastic optimization algorithm, such as the degree of exploration in weight space, can be examined through the stationary distribution of (2).
3 The linear case with SGD
In this section, we consider the special case of ridge regression, i.e., least squares linear regression with $\ell^2$ regularization, using vanilla SGD. Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ denote a dataset comprised of inputs $x_i \in \mathbb{R}^d$ and corresponding labels $y_i \in \mathbb{R}^m$. Supposing that $(X, Y)$ is a pair of random vectors uniformly drawn from $\mathcal{D}$, for $\lambda \geq 0$, we seek a solution $W^*$ to
$$\min_{W}\ \mathbb{E}\,\tfrac{1}{2}\|Y - WX\|^2 + \tfrac{\lambda}{2}\|W\|_F^2. \tag{4}$$
One can show that the solution to (4) is
$$W^* = \mathbb{E}[YX^\top]\big(\mathbb{E}[XX^\top] + \lambda I\big)^{-1}.$$
(Note that $\lambda$ need not be strictly positive for the inverse to exist.) On the other hand, applying minibatch SGD to solving (4) with constant (sufficiently small) step size $\gamma$ results in a Markov chain $(W_k)$ in the estimated parameter matrix. We start with the following observation, which is immediate but important.
Let $n$ denote the size of each minibatch $\{(X_i^k, Y_i^k)\}_{i=1}^{n}$, comprised of independent and identically distributed copies of $(X, Y)$, for $k = 1, 2, \ldots$. For $W_k$ the vectorization of the estimated parameter matrix, iterations of minibatch SGD undergo the following random linear recurrence relation
$$W_{k+1} = A_{k+1} W_k + B_{k+1}, \tag{5}$$
where $A_k = \big(I - \gamma(\hat{\Sigma}_k + \lambda I)\big) \otimes I$ with $\hat{\Sigma}_k = \frac{1}{n}\sum_{i=1}^{n} X_i^k (X_i^k)^\top$, and $B_k = \gamma\,\mathrm{vec}\big(\frac{1}{n}\sum_{i=1}^{n} Y_i^k (X_i^k)^\top\big)$. If $A_1, B_1$ are non-atomic, $\log\|A_1\|$, $\log^+\|B_1\|$ are integrable, and $\mathbb{E}\log\|A_1\| < 0$, then (5) is ergodic.
From here, it immediately follows that, even in this simple setting, SGD possesses multiplicative noise in the form of the factor $A_k$, as well as additive noise in the form of the factor $B_k$. Under the conditions of Lemma 1, the expectations of (5) converge to the ridge solution $W^*$. Although the dynamics of this process, as well as the shape of its stationary distribution, are not as straightforward, random linear recurrence relations are among the most well-studied discrete-time processes, and as multiplicative processes, they are well known to exhibit heavy-tailed behaviour. In Appendix A, we discuss classical results on the topic. The three primary tail regimes are summarized in Table 1, where each constant denotes a strictly positive value. In particular, we have the following result.
Assume that the distribution of $X$ has full support on $\mathbb{R}^d$. Then any stationary distribution of (5) is heavy-tailed.
Lemma 2 suggests that heavy-tailed fluctuations could be more common than previously considered. This is perhaps surprising, given that common Langevin approximations of SGD applied to this problem exhibit light-tailed stationary distributions [52, 61]. In (5), light-tailed stationary distributions can only occur when $X$ does not have full support on $\mathbb{R}^d$ and the step size is taken sufficiently small. Referring to Table 1, there are two possible mechanisms by which the stationary distribution of (5) can be heavy-tailed. Most discussions about SGD focus on the additive noise component (presumably since the analysis is simpler). In this case, if $B_1$ is heavy-tailed, then the stationary distribution of (5) is also heavy-tailed. This is the assumption considered in [69, 70].
However, this is not the only way heavy-tailed noise can arise. Indeed, the multiplicative heavy-tailed regime illustrates how a power law can arise from light-tailed data (in fact, even from data with finite support). One of the most important attributes here is that the tail exponent of the stationary distribution is entirely due to the recursion and the properties of the multiplicative noise $A_k$. In fact, this is true even if $B_1$ displays (not too significant) heavy-tailed behaviour. Here, the heavy-tailed behaviour arises due to factors intrinsic to the stochastic optimization, and these tend to dominate over time. Later, we shall go further and analyze factors influencing the heaviness of these tails, or more precisely, the tail exponent, denoted by $\alpha$ in Table 1. Evidently, $\alpha$ depends on the dispersal of the distribution of $A_1$, which is itself dependent (in differing ways) on the step size $\gamma$, the batch size $n$, and the dispersal and condition number of the covariance matrices of the input data. These three factors have been noted to be integral to the practical performance of SGD [71, 80].
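This mechanism can be reproduced in a few lines. In the sketch below (a one-dimensional caricature of (5) with our own synthetic data and a deliberately large step size), Gaussian inputs make the multiplicative factor $1 - \gamma x^2$ exceed one in magnitude with positive probability, producing heavy tails, while uniformly bounded inputs keep the factor contractive and the tails light:

```python
import numpy as np

rng = np.random.default_rng(5)
gamma, n_iter = 0.6, 200_000

def sgd_chain(draw_x):
    """W_{k+1} = A_{k+1} W_k + B_{k+1} for 1-D least squares with y = x + noise."""
    w, traj = 0.0, np.empty(n_iter)
    for k in range(n_iter):
        x = draw_x()
        y = x + 0.5 * rng.standard_normal()
        a = 1.0 - gamma * x * x  # multiplicative noise
        b = gamma * x * y        # additive noise
        w = a * w + b
        traj[k] = w
    return traj

heavy = sgd_chain(rng.standard_normal)             # Gaussian data: P(|A| > 1) > 0
light = sgd_chain(lambda: rng.uniform(-1.0, 1.0))  # bounded data: |A| <= 1 always
# Frequency with which the multiplicative factor is expanding for Gaussian data.
frac_expand = float(np.mean(np.abs(1.0 - gamma * rng.standard_normal(100_000) ** 2) > 1.0))
print(frac_expand, float(np.max(np.abs(heavy))), float(np.max(np.abs(light))))
```

Both chains are ergodic and target the same mean, yet the Gaussian-input chain exhibits far larger excursions, despite its data being light-tailed.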
4 Power laws for general objectives and optimizers
In this section, we consider the general case (2), and we examine how heavy-tailed stationary behaviour can arise for any stochastic optimizer, applied to both convex and non-convex problems. As in Section 3, here we are most interested in the presence of heavy-tailed fluctuations due to multiplicative factors. (The case when heavy-tailed fluctuations arise from the additive noise is clear: if, for every $w$, $\Psi_X(w)$ is heavy-tailed/has infinite $p$-th moment, then for each $k$, $W_{k+1}$ is heavy-tailed/has infinite $p$-th moment also, irrespective of the dynamics of the chain.)
In our main result (Theorem 1), we illustrate how power laws arise in general Lipschitz stochastic optimization algorithms that are contracting on average and also strongly convex near infinity with positive probability.
Let $(\mathcal{X}, \|\cdot\|)$ be a separable Banach space. Assume that $\Psi_X$ is a random function on $\mathcal{X}$ such that $\Psi_X$ is almost surely Lipschitz continuous and the probability measure of $\Psi_X$ is non-atomic. We let $K_X$ denote a random variable such that $K_X$ is the Lipschitz constant of $\Psi_X$ for each realization of $X$. Assume that $\log K_X$ is integrable, and $\|\Psi_X(w_0)\|$ is integrable for some $w_0 \in \mathcal{X}$. Suppose there exist non-negative random variables $M, N$ such that, with probability one,
$$\|\Psi_X(w)\| \geq M\|w\| - N \qquad \text{for all } w \in \mathcal{X}. \tag{6}$$
If $\mathbb{E}\log K_X < 0$, then the Markov chain (2), where each $X_k$ is an independent and identically distributed copy of $X$, is geometrically ergodic with stationary distribution $W_\infty$ satisfying the distributional fixed point equation $W_\infty \stackrel{d}{=} \Psi_X(W_\infty)$, with $X$ independent of $W_\infty$. Furthermore, if $M > 1$ with positive probability and $M, N$ have finite moments of every order, then:
- There exist $C, \alpha > 0$ such that $\mathbb{P}(\|W_\infty\| > t) \leq C t^{-\alpha}$ for all $t > 0$.
- There exist $c, t_0 > 0$ and $\beta \geq \alpha$ such that $\mathbb{P}(\|W_\infty\| > t) \geq c t^{-\beta}$ for all $t \geq t_0$.
- For any $p \in (0, \alpha)$ and $q > \beta$, the moments of $W_\infty$ are bounded above and below by $\|W_\infty\|_p < \infty$ and $\|W_\infty\|_q = \infty$, implying that the tails of $\|W_\infty\|$ are bounded between power laws, where the exponents $\alpha \leq \beta$ depend on the laws of $K_X$, $M$, and $N$.
Geometric rates of convergence in the Prohorov and total variation metrics to stationarity are discussed in [16, 1]. From Theorem 1, we find that the presence of expanding multiplicative noise implies that the stationary distribution of (2) is stochastically bounded between two power laws. In particular, this suggests that Lipschitz stochastic optimizers satisfying (6) with $\mathbb{P}(M > 1) > 0$ can conduct wide exploration of the loss landscape. As an example, for SGD satisfying (6), we have that
$$K_X = \sup_{w}\,\big\|I - \gamma \nabla^2 \hat{f}_X(w)\big\|. \tag{7}$$
Therefore, for the linear case (5), using Theorem 1, we can recover the same tail exponent seen in Theorem 2. To treat other stochastic optimizers that are not Lipschitz, or do not satisfy the conditions of Theorem 1, we present in Lemma 3 an abstract sufficient condition for heavy-tailed stationary distributions of ergodic Markov chains.
For a general metric space $(\mathcal{X}, d)$, suppose that $(W_k)$ is an ergodic Markov chain on $\mathcal{X}$ with $W_k \Rightarrow W_\infty$ as $k \to \infty$, and let $\varphi : \mathcal{X} \to \mathbb{R}_+$ be some scalar-valued function. If there exists some $c > 1$ such that $\inf_{w} \mathbb{P}\big(\varphi(W_{k+1}) > c\,\varphi(W_k) \,\big|\, W_k = w\big) > 0$, then $\varphi(W_\infty)$ is heavy-tailed.
Under our model, Lemma 3 says that convergent constant step-size stochastic optimizers will exhibit heavy-tailed stationary behaviour if there is some positive probability of the optimizer moving further away from an optimum, irrespective of how near or far the iterates are from it. Analyses concerning stochastic optimization algorithms almost always consider rates of contraction towards a nearby optimum, quantifying the exploitative behaviour of a stochastic optimizer. To our knowledge, this is the first time that the consequences of the fact that stochastic optimization algorithms could, at any stage, move away from any optimum have been examined. In the convex setting, this behaviour appears detrimental, but it is critical in non-convex settings, where exploration (rather than exploitation of a local optimum) is important. Indeed, we find that it is this behaviour that directly determines the tails of the stochastic optimizer's stationary distribution, and therefore its exploratory behaviour. Altogether, our results suggest that heavy-tailed noise could be more common and more beneficial in stochastic optimization than previously acknowledged.
Factors influencing the tail exponent.
Recent analyses have associated heavier-tailed structures (i.e., smaller tail exponents) with improved generalization performance [55, 58, 69]. From the bounds in Theorem 1, and established theory for the special linear case (5) (see Appendix A.2), the following factors play a role in decreasing the tail exponent (generally speaking, increasing the multiplicative noise in expectation or dispersion implies a decreased tail exponent $\alpha$), resulting in heavier-tailed noise.
Decreasing batch size: For fixed hyperparameters, the multiplicative factor $A_k$ becomes less deterministic as the batch size decreases, resulting in a decreased tail exponent (i.e., heavier tails) for ergodic stochastic optimizers. This is in line with [81, 55, 57, 58], and it suggests a relationship to the generalization gap phenomenon.
Increasing step size: Also in line with [55, 57, 58], increased step sizes exacerbate fluctuations and thus result in decreased tail exponents (heavier tails). However, step sizes also affect the stability of the algorithm. The relationship between step and batch sizes has received attention [5, 72]; choosing these parameters to increase heavy-tailed fluctuations while keeping variance sensible could yield a valuable exploration strategy.
More dispersed data: Increasing dispersion in the data implies increased dispersion for the distribution of $A_k$, and hence heavier tails. For the same model trained on different datasets, a smaller tail exponent may be indicative of richer data: not necessarily data of higher variance, but data exhibiting a larger moment of some order. Data augmentation is one strategy to achieve this.
Increasing regularization: The addition of an explicit $\ell^2$-regularizer to the objective function (known to help in avoiding bad minima) results in a larger multiplicative factor $\|A_k\|$ in the expanding regime, and hence, heavier-tailed noise.
Increasing average condition number of the Hessian: The Hessian is known to play an important role in the performance of stochastic optimizers [81, 80]. As is evident in (7) in the case of SGD, and potentially more generally, a larger condition number of the Hessian can allow for smaller tail exponents (heavier tails) while the chain remains ergodic.
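The batch-size mechanism in particular is easy to verify directly. The sketch below (our own Monte Carlo illustration for one-dimensional Gaussian inputs and a hypothetical step size) estimates the probability that the multiplicative factor of a minibatch SGD step is expanding, i.e., $\mathbb{P}(|1 - \gamma\,\overline{x^2}| > 1)$; averaging over larger batches makes the factor more deterministic, and the expansion probability vanishes:

```python
import numpy as np

rng = np.random.default_rng(6)
gamma, n_draws = 0.6, 100_000

def expansion_prob(batch_size):
    """Monte Carlo estimate of P(|1 - gamma * mean(x_i^2)| > 1) for x_i ~ N(0, 1)."""
    x2 = rng.standard_normal((n_draws, batch_size)) ** 2
    a = 1.0 - gamma * x2.mean(axis=1)
    return float(np.mean(np.abs(a) > 1.0))

probs = {b: expansion_prob(b) for b in (1, 4, 16, 64)}
print(probs)  # expansion probability decays as the batch size grows
```

By Lemma 3, a vanishing expansion probability removes the mechanism for multiplicative heavy tails, consistent with larger batches producing lighter-tailed (and empirically worse-generalizing) behaviour.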
In each case (excluding step size, which is complicated, since step size also affects stability), the literature supports the hypothesis that factors influencing heavier tails coincide with improved generalization performance; see the references above. Vanilla SGD and stochastic Newton both exhibit heavy-tailed behaviour when the minibatch second-moment matrix and the Jacobian of the update map, respectively, have unbounded spectral distributions (as might be the case when $X$ has full support on $\mathbb{R}^d$). However, adaptive optimizers such as momentum and Adam incorporate geometric decay that can prevent heavy-tailed fluctuations, potentially limiting exploration while excelling at exploitation of nearby optima. It has recently been suggested that these adaptive aspects should be turned off during an initial warmup phase, implying that exacerbating heavy-tailed fluctuations and decreasing tail exponents could aid exploratory behaviour in the initial stages of training.
5 Numerical experiments
To illustrate the advantages that multiplicative noise and heavy tails offer in non-convex optimization, in particular for exploration (of the entire loss surface) versus exploitation (convergence to a local optimum), we first consider stochastic optimization algorithms in the non-stationary regime. To begin, in one dimension, we compare perturbed gradient descent (GD) with additive light-tailed and heavy-tailed noise against a version with additional multiplicative noise. That is, we consider iterations of the form
$$w_{k+1} = w_k - \gamma\,(1 + \eta_{k+1})\,f'(w_k) + \gamma\,\varepsilon_{k+1},$$
where $f$ is the objective function; the additive noises $\varepsilon_k$ in (a) and (c) are i.i.d. Gaussian, and in (b) are i.i.d. Student-$t$-distributed (heavy-tailed) with 3 degrees of freedom, normalized to have unit variance; the multiplicative noise $\eta_k$ is zero in (a) and (b), and i.i.d. Gaussian in (c). Algorithms (a)-(c) are applied to a one-dimensional non-convex objective which achieves a global (wide) minimum at zero. Iterations of (a)-(c) have expectation on par with classical GD. For fixed step size and initial point, the distributions of successive iterates are presented in Figure 1 for small, moderate, and strong noise. Both (b) and (c) readily enable jumps between basins. However, while (a) and (b) smooth the effective loss landscape, making it easier to jump between basins, this smoothing also has the side effect of reducing resolution in the vicinity of minima. On the other hand, (c) maintains close proximity (peaks in the distribution) to critical points.
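The three variants can be sketched as follows. This is our own implementation, assuming the generic iteration $w \leftarrow w - \gamma(1 + \eta)f'(w) + \gamma\varepsilon$ with $\eta = 0$ in the purely additive versions; the objective $f(w) = w^2/2 - \cos(4w)$ and all noise scales are illustrative stand-ins for the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def grad(w):
    # f(w) = w^2/2 - cos(4w): non-convex, with its global (wide) minimum at zero.
    return w + 4.0 * np.sin(4.0 * w)

def perturbed_gd(mult_scale, add_noise, gamma=0.1, n_steps=50_000, w0=3.0):
    """w <- w - gamma * (1 + eta) * f'(w) + gamma * eps."""
    w, traj = w0, np.empty(n_steps)
    for k in range(n_steps):
        eta = mult_scale * rng.standard_normal()
        w = w - gamma * (1.0 + eta) * grad(w) + gamma * add_noise()
        traj[k] = w
    return traj

gauss = lambda: rng.standard_normal()
t3 = lambda: rng.standard_t(3) / np.sqrt(3.0)  # heavy-tailed, unit variance
run_a = perturbed_gd(0.0, gauss)  # (a) additive Gaussian noise
run_b = perturbed_gd(0.0, t3)     # (b) additive heavy-tailed noise
run_c = perturbed_gd(2.0, gauss)  # (c) additional multiplicative noise
# At a critical point (grad = 0), the multiplicative perturbation vanishes entirely,
# which is why (c) retains sharp resolution near minima.
```

The defining contrast is visible in the update itself: the stochastic term of (c) scales with $|f'(w)|$, so it is large far from critical points (aiding exploration) and vanishes at them (aiding exploitation).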
Figure 1 also illustrates the exploration/exploitation benefits of multiplicative noise (and associated heavier tails) over additive noise (and associated lighter tails). While rapid exploration may be achieved using heavy-tailed additive noise, since reducing the step size may not reduce the tail exponent, efficient exploitation of local minima can become challenging, even for small step sizes. On the other hand, multiplicative noise has the benefit of behaving similarly to Gaussian additive noise for small step sizes. We can see evidence of this behaviour in the leftmost column of Figure 1, where the size of the multiplicative noise is small. As the step size is annealed to zero, multiplicative noise resembles the convolutional nature of additive noise.
This same basin hopping behaviour can be observed heuristically in SGD for higher-dimensional models with the aid of principal component analysis (PCA), although jumps become much more frequent. To see this, we consider fitting a two-layer neural network with 16 hidden units for classification of the Musk dataset (168 attributes; 6598 instances) with cross-entropy loss, without regularization, and with a fixed step size. Two stochastic optimizers are compared: (a) SGD with a single sample per batch (without replacement), and (b) perturbed GD, where the covariance of the iterations in (b) is chosen to approximate that of (a) on average. PCA-projected trajectories are presented in Figure 2. SGD frequently exhibits jumps between basins, a feature not shared by perturbed GD with only additive noise. Further numerical analysis is conducted in Appendix B to support claims in §4.
Lazy training and implicit renewal theory.
Theory concerning random linear recurrence relations can be extended directly to SGD applied to objective functions whose gradients are “almost” linear, such as those seen in lazy training, due to the implicit renewal theory of Goldie. However, it seems unlikely that multilayer networks should exhibit the same tail exponent between each layer, where the conditions of Breiman's lemma (see Appendix A) break down. Significant discrepancies between tail exponents across layers were observed in other appearances of heavy-tailed noise [58, 69]. From the point of view of tail exponents, it appears that most practical finite-width deep neural networks do not exhibit lazy training. This observation agrees with the poor relative generalization performance exhibited by linearized neural networks [12, 62].
Continuous-time analyses. Stochastic optimizers are frequently examined through continuous-time approximations, taking the form of a stochastic differential equation
$$\mathrm{d}W_t = -\nabla f(W_t)\,\mathrm{d}t + \sigma(W_t)\,\mathrm{d}L_t,$$
where $\sigma$ is the volatility and $L_t$ is a driving noise process. The most common of these are Langevin models where $L_t$ is Brownian motion (white noise), although heavy-tailed noise has also been considered [69, 70]. In the case where $\sigma$ is diagonal, as a corollary of [67, 51], we may determine conditions for heavy-tailed stationary behaviour. In Table 2, we present a continuous-time analogue of Table 1 with examples of choices of $\sigma$ that result in each regime when $f$ is quadratic. Langevin algorithms commonly seen in the literature [52, 61] assume constant or bounded volatility $\sigma$, resulting in light-tailed stationary distributions for $\ell^2$-regularized loss functions. However, even a minute presence of multiplicative noise (unbounded $\sigma$) can yield a heavy-tailed stationary distribution. Fortunately, more recent analyses are allowing for the presence of multiplicative noise in their approximations. Based on our results, it seems clear that this trend is critical for an adequate investigation of stochastic optimizers using the continuous-time approach. On the other hand, [69, 70] invoked the generalized central limit theorem (CLT) to propose a continuous-time model with heavy-tailed (additive) Lévy noise, in response to the observation that stochastic gradient noise is frequently heavy-tailed. In their model, the heaviness of the noise does not change throughout the optimization. Furthermore, it has been observed that stochastic gradient noise is typically Gaussian for large batch sizes, which is incompatible with the generalized CLT. We have illustrated how multiplicative noise also yields heavy-tailed fluctuations. Alternatively, heavy tails in the stochastic gradient noise for fixed weights could instead be log-normal, which is known to arise in deep neural networks. A notable consequence of the Lévy noise model is that exit times for basin hopping are essentially independent of basin height, and dependent instead on basin width, which is commonly observed with SGD. In the absence of heavy-tailed additive noise, we observed similar behaviour in our experiments for large multiplicative noise, although a precise theoretical treatment remains the subject of future work.
Stochastic gradient MCMC.
The same principles discussed here also apply to stochastic gradient MCMC (SG-MCMC) algorithms. For example, the presence of multiplicative noise could suggest why SG-MCMC achieves superior mixing rates to classical Metropolis–Hastings MCMC in high dimensions, since the latter is known to struggle with multimodality, although there is some skepticism in this regard. Theorem 1 also implies that SG-MCMC yields excellent samples for target distributions satisfying the tail growth rate conditions required in importance sampling.
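As a concrete sketch of the SG-MCMC setting (our own toy example, not from this paper: stochastic gradient Langevin dynamics targeting a Gaussian law $\mathcal{N}(\bar{x}, 1)$ built from synthetic data, with illustrative step and batch sizes):

```python
import numpy as np

rng = np.random.default_rng(8)
data = rng.standard_normal(1000) + 0.5  # synthetic observations
target_mean = float(np.mean(data))      # target here is N(mean(data), 1)

def sgld_step(w, gamma=1e-2, batch=32):
    """One SGLD update: minibatch gradient of U(w) plus sqrt(2*gamma) Gaussian noise."""
    idx = rng.integers(0, len(data), size=batch)
    grad_hat = w - float(np.mean(data[idx]))  # unbiased for U'(w) = w - mean(data)
    return w - gamma * grad_hat + np.sqrt(2.0 * gamma) * rng.standard_normal()

w, samples = 0.0, np.empty(100_000)
for k in range(samples.size):
    w = sgld_step(w)
    samples[k] = w
samples = samples[20_000:]  # discard burn-in
print(round(float(np.mean(samples)), 2), round(float(np.var(samples)), 2))
```

Here the injected noise dominates and the chain is light-tailed; the discussion above suggests that the multiplicative component of the stochastic gradient becomes the interesting object in higher dimensions and on multimodal targets.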
Geometric properties of multiplicative noise.
It has been suggested that increased noise in SGD acts as a form of convolution, smoothing the effective landscape. This appears to be only partially true. As seen in Figure 1, smoothing behaviour is common for additive noise, and it reduces resolution in the troughs. Multiplicative noise, which SGD also exhibits, has a different effect: in the continuous-time setting, multiplicative noise equates to conducting additive noise on a modified (via the Lamperti transform) loss landscape. There is also a natural interpretation of multiplicative noise through the choice of geometry in Riemannian Langevin diffusion [24, 51]. Under either interpretation, it appears multiplicative noise shrinks the width of peaks and widens troughs in the effective loss landscape, potentially negating some of the undesirable side effects of additive noise.
Heavy-tailed mechanistic universality.
In a recent series of papers [55, 56, 57, 58], an empirical (or phenomenological) theory of heavy-tailed self-regularization is developed, proposing that sufficiently complex and well-trained deep neural networks exhibit heavy-tailed mechanistic universality: the spectral distributions of large weight matrices display a power law whose tail exponent is negatively correlated with generalization performance. If true, examination of this tail exponent provides an indication of model quality, and of factors that are positively associated with improved test performance, such as decreased batch size. From random matrix theory, these heavy-tailed spectral distributions may arise either due to strong correlations arising in the weight matrices, or due to heavy-tailed distributions arising in each of the weights over time. The reality may be some combination of both; here, we have illustrated that the second avenue is at least possible, and that the relationships between factors of the optimization and the tail exponent agree with the present findings. Proving that heavy-tailed behaviour can arise in spectral distributions of the weights (as opposed to their probability distributions, which we have treated here) is a challenging problem, and remains the subject of future work. Casting the evolution of spectral distributions into the framework of iterated random functions could prove fruitful in this respect.
We have conducted a theoretical analysis of the relationship between multiplicative noise and heavy-tailed fluctuations in stochastic optimization algorithms. We propose that viewing stochastic optimizers through the lens of Markov process theory and examining their stationary behaviour is key to understanding exploratory behaviour on non-convex landscapes in the initial phase of training. Our results suggest that heavy-tailed fluctuations may be more common than previous analyses have suggested, and they further support the hypothesis that such fluctuations are correlated with improved generalization performance. From this viewpoint, we maintain that multiplicative noise should not be overlooked in future analyses on the subject.
We would like to acknowledge DARPA, NSF, and ONR for providing partial support of this work.
-  G. Alsmeyer. On the Harris recurrence of iterated random Lipschitz functions and related convergence rate results. Journal of Theoretical Probability, 16(1):217–247, 2003.
-  G. Alsmeyer. On the stationary tail index of iterated random Lipschitz functions. Stochastic Processes and their Applications, 126(1):209–233, 2016.
-  J. Alstott, E. Bullmore, and D. Plenz. powerlaw: a Python package for analysis of heavy-tailed distributions. PLOS One, 9(1), 2014.
-  D. Babichev and F. Bach. Constant step size stochastic gradient descent for probabilistic modeling. Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI 2018), 2018.
-  L. Balles, J. Romero, and P. Hennig. Coupling adaptive batch sizes with learning rates. Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI 2017), 2017.
-  N. H. Bingham, C. M. Goldie, and J. L. Teugels. Regular variation, volume 27. Cambridge University Press, 1989.
-  T. S. Biró and A. Jakovác. Power-law tails from multiplicative noise. Physical Review Letters, 94(13):132302, 2005.
-  J. R. Blum. Approximation methods which converge with probability one. The Annals of Mathematical Statistics, 25(2):382–386, 1954.
-  D. Buraczewski, E. Damek, and T. Mikosch. Stochastic models with power-law tails. Springer, 2016.
-  P. Chaudhari and S. Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. In 2018 Information Theory and Applications Workshop (ITA), pages 1–10. IEEE, 2018.
-  X. Chen, S. Liu, R. Sun, and M. Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. Proceedings of the 7th International Conference on Learning Representations (ICLR 2019), 2019.
-  L. Chizat, E. Oyallon, and F. Bach. On lazy training in differentiable programming. In Advances in Neural Information Processing Systems, pages 2933–2943, 2019.
-  A. Clauset, C. R. Shalizi, and M. E. Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661–703, 2009.
-  P. Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Modeling wine preferences by data mining from physicochemical properties. Decision Support Systems, 47(4):547–553, 2009.
-  J. M. Deutsch. Generic behavior in linear systems with multiplicative noise. Physical Review E, 48(6):R4179, 1993.
-  P. Diaconis and D. Freedman. Iterated random functions. SIAM Review, 41(1):45–76, 1999.
-  T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31–71, 1997.
-  A. Dieuleveut, A. Durmus, and F. Bach. Bridging the gap between constant step size stochastic gradient descent and Markov chains. arXiv preprint arXiv:1707.06386, 2017.
-  S. S. Du, C. Jin, J. D. Lee, M. I. Jordan, A. Singh, and B. Poczos. Gradient descent can take exponential time to escape saddle points. In Advances in Neural Information Processing Systems, pages 1067–1077, 2017.
-  X. Fontaine, V. De Bortoli, and A. Durmus. Continuous and Discrete-Time Analysis of Stochastic Gradient Descent for Convex and Non-Convex Functions. arXiv preprint arXiv:2004.04193, 2020.
-  M. I. Freidlin and A. D. Wentzell. Random perturbations. In Random perturbations of dynamical systems, pages 15–43. Springer, 1998.
-  U. Frisch and D. Sornette. Extreme deviations and applications. Journal de Physique I, 7(9):1155–1171, 1997.
-  R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In JMLR: Workshop and Conference Proceedings, volume 40, pages 797–842, 2015.
-  M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123–214, 2011.
-  C. M. Goldie. Implicit renewal theory and tails of solutions of random equations. The Annals of Applied Probability, 1(1):126–166, 1991.
-  C. M. Goldie and R. Grübel. Perpetuities with thin tails. Advances in Applied Probability, 28(2):463–480, 1996.
-  N. Golmant, N. Vemuri, Z. Yao, V. Feinberg, A. Gholami, K. Rothauge, M. W. Mahoney, and J. Gonzalez. On the computational inefficiency of large batch sizes for stochastic gradient descent. arXiv preprint arXiv:1811.12941, 2018.
-  R. M. Gower, N. Loizou, X. Qian, A. Sailanbayev, E. Shulgin, and P. Richtárik. SGD: General analysis and improved rates. Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019.
-  A. Granas and J. Dugundji. Fixed point theory. Springer Science & Business Media, 2013.
-  D. R. Grey. Regular variation in the tail behaviour of solutions of random difference equations. The Annals of Applied Probability, pages 169–183, 1994.
-  A. K. Grincevičius. One limit distribution for a random walk on the line. Lithuanian Mathematical Journal, 15(4):580–589, 1975.
-  M. Gürbüzbalaban, U. Şimşekli, and L. Zhu. The Heavy-Tail Phenomenon in SGD. arXiv preprint arXiv:2006.04740, 2020.
-  B. Hanin and M. Nica. Products of many large random matrices and gradients in deep neural networks. Communications in Mathematical Physics, pages 1–36, 2019.
-  D. Henderson, S. H. Jacobson, and A. W. Johnson. The theory and practice of simulated annealing. In Handbook of Metaheuristics, pages 287–319. Springer, 2003.
-  L. Hodgkinson, R. Salomone, and F. Roosta. The reproducing Stein kernel approach for post-hoc corrected sampling. arXiv preprint arXiv:2001.09266, 2020.
-  M. J. Holland. Robust descent using smoothed multiplicative noise. Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019), 2019.
-  C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan. How to escape saddle points efficiently. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pages 1724–1732. JMLR.org, 2017.
-  N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: generalization gap and sharp minima. Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), 2017.
-  H. Kesten. Random difference equations and renewal theory for products of random matrices. Acta Mathematica, 131:207–248, 1973.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), 2015.
-  R. Kleinberg, Y. Li, and Y. Yuan. An alternative view: When does SGD escape local minima? Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
-  A. J. Kleywegt, A. Shapiro, and T. Homem-de Mello. The sample average approximation method for stochastic discrete optimization. SIAM Journal on Optimization, 12(2):479–502, 2002.
-  D. E. Knuth. Big omicron and big omega and big theta. ACM Sigact News, 8(2):18–24, 1976.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
-  D. P. Kroese, T. Taimre, and Z. I. Botev. Handbook of Monte Carlo methods, volume 706. John Wiley & Sons, 2013.
-  M. Li, E. Yumer, and D. Ramanan. Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020), 2020.
-  G.-H. Liu and E. A. Theodorou. Deep learning theory review: An optimal control and dynamical systems perspective. arXiv preprint arXiv:1908.10920, 2019.
-  L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020), 2020.
-  S. Liu, D. Papailiopoulos, and D. Achlioptas. Bad global minima exist and SGD can reach them. arXiv preprint arXiv:1906.02613, 2019.
-  S. Ma, R. Bassily, and M. Belkin. The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning. Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
-  Y.-A. Ma, T. Chen, and E. Fox. A complete recipe for stochastic gradient MCMC. In Advances in Neural Information Processing Systems, pages 2917–2925, 2015.
-  S. Mandt, M. Hoffman, and D. Blei. A variational analysis of stochastic gradient algorithms. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), pages 354–363, 2016.
-  S. Mandt, M. D. Hoffman, and D. M. Blei. Stochastic gradient descent as approximate Bayesian inference. Journal of Machine Learning Research, 18(1):4873–4907, 2017.
-  C. H. Martin and M. W. Mahoney. Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior. arXiv preprint arXiv:1710.09553, 2017.
-  C. H. Martin and M. W. Mahoney. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. arXiv preprint arXiv:1810.01075, 2018.
-  C. H. Martin and M. W. Mahoney. Traditional and heavy-tailed self regularization in neural network models. Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019.
-  C. H. Martin and M. W. Mahoney. Heavy-tailed Universality predicts trends in test accuracies for very large pre-trained deep neural networks. In Proceedings of the 2020 SIAM International Conference on Data Mining, pages 505–513. SIAM, 2020.
-  C. H. Martin and M. W. Mahoney. Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data. arXiv preprint arXiv:2002.06716, 2020.
-  S. P. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
-  D. Nagaraj, P. Netrapalli, and P. Jain. SGD without replacement: Sharper rates for general smooth convex functions. Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019.
-  A. Orvieto and A. Lucchi. Continuous-time models for stochastic optimization algorithms. In Advances in Neural Information Processing Systems, pages 12589–12601, 2019.
-  S. Oymak, Z. Fabian, M. Li, and M. Soltanolkotabi. Generalization, Adaptation and Low-Rank Representation in Neural Networks. In 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pages 581–585. IEEE, 2019.
-  K. F. Pál. Hysteretic optimization, faster and simpler. Physica A: Statistical Mechanics and its Applications, 360(2):525–533, 2006.
-  A. Panigrahi, R. Somani, N. Goyal, and P. Netrapalli. Non-gaussianity of stochastic gradient noise. Science meets Engineering of Deep Learning (SEDL) workshop, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019.
-  H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
-  F. Roosta-Khorasani and M. W. Mahoney. Sub-sampled Newton methods II: Local convergence rates. arXiv preprint arXiv:1601.04738, 2016.
-  K. J. Rubin, G. Pruessner, and G. A. Pavliotis. Mapping multiplicative to additive noise. Journal of Physics A: Mathematical and Theoretical, 47(19):195001, 2014.
-  M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. MobileNetV2: Inverted residuals and linear bottlenecks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
-  U. Şimşekli, L. Sagun, and M. Gürbüzbalaban. A tail-index analysis of stochastic gradient noise in deep neural networks. Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019.
-  U. Şimşekli, L. Zhu, Y. W. Teh, and M. Gürbüzbalaban. Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. arXiv preprint arXiv:2002.05685, 2020.
-  L. N. Smith. A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay. US Naval Research Laboratory Technical Report 5510-026, 2018.
-  S. L. Smith, P.-J. Kindermans, C. Ying, and Q. V. Le. Don’t decay the learning rate, increase the batch size. Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), 2018.
-  D. Sornette and R. Cont. Convergent multiplicative processes repelled from zero: power laws and truncated power laws. Journal de Physique I, 7(3):431–444, 1997.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
-  G. Volpe and J. Wehr. Effective drifts in dynamical systems with multiplicative noise: a review of recent progress. Reports on Progress in Physics, 79(5):053901, 2016.
-  J. Wang and L. Perez. The effectiveness of data augmentation in image classification using deep learning. Convolutional Neural Networks Vis. Recognit, page 11, 2017.
-  R. Ward, X. Wu, and L. Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization. Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019.
-  F. Wenzel, K. Roth, B. S. Veeling, J. Świątkowski, L. Tran, S. Mandt, J. Snoek, T. Salimans, R. Jenatton, and S. Nowozin. How good is the bayes posterior in deep neural networks really? arXiv preprint arXiv:2002.02405, 2020.
-  J. Wu, W. Hu, H. Xiong, J. Huan, V. Braverman, and Z. Zhu. On the Noisy Gradient Descent that Generalizes as SGD. arXiv preprint arXiv:1906.07405, 2020.
-  C. Xing, D. Arpit, C. Tsirigotis, and Y. Bengio. A Walk with SGD. arXiv preprint arXiv:1802.08770, 2018.
-  Z. Yao, A. Gholami, Q. Lei, K. Keutzer, and M. W. Mahoney. Hessian-based analysis of large batch training and robustness to adversaries. In Advances in Neural Information Processing Systems, pages 4949–4959, 2018.
-  G. Zhang, L. Li, Z. Nado, J. Martens, S. Sachdeva, G. Dahl, C. Shallue, and R. B. Grosse. Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model. In Advances in Neural Information Processing Systems, pages 8194–8205, 2019.
-  Z. Zhang, Y. Zhang, and Z. Li. Removing the feature correlation effect of multiplicative noise. In Advances in Neural Information Processing Systems, pages 627–636, 2018.
-  D. Zhou, Y. Tang, Z. Yang, Y. Cao, and Q. Gu. On the convergence of adaptive gradient methods for nonconvex optimization. arXiv preprint arXiv:1808.05671, 2018.
Appendix A Random linear recurrence relations
Here, we shall discuss existing theory concerning the random linear recurrence relation that arises in (5). Because $(A_k, B_k)$ is independent and identically distributed for each $k$, we let $(A, B) \stackrel{d}{=} (A_1, B_1)$, noting that $(A_k, B_k) \stackrel{d}{=} (A, B)$ for all $k$. First, we state conditions under which (5) yields an ergodic Markov chain. The following lemma combines [9, Theorem 4.1.4 and Proposition 4.2.1] and implies Lemma 1.
The intuition behind the presence of heavy-tailed behaviour is easily derived from the Breiman lemma concerning regularly varying random vectors. Recall that a random vector $X$ is regularly varying if there exists a measure $\mu$ with zero mass at infinity such that $\Pr(t^{-1} X \in \cdot\,)/\Pr(\|X\| > t) \to \mu(\cdot)$ as $t \to \infty$. By Karamata's characterization theorem [6, Theorem 1.4.1], for any regularly varying random vector $X$, there exists an $\alpha > 0$ such that $t^{\alpha}\Pr(\|X\| > t)$ converges as $t \to \infty$ up to a slowly varying factor; such an $X$ is said to be $\alpha$-RV. In particular, $\|X\|$ obeys a power law with tail exponent $\alpha$, subject to a slowly varying function $L$ (recall that a function $L$ is slowly varying if $L(\gamma t)/L(t) \to 1$ as $t \to \infty$, for any $\gamma > 0$): $\Pr(\|X\| > t) = L(t)\, t^{-\alpha}$.
Lemma 5 (Breiman's lemma).
Let $X$ be an $\alpha$-RV random vector, $A$ a random matrix such that $\mathbb{E}\|A\|^{\alpha+\epsilon} < \infty$ for some $\epsilon > 0$ (for example, $A$ could be $\beta$-RV with $\beta > \alpha$), and $B$ a random vector such that $\Pr(\|B\| > t)/\Pr(\|X\| > t) \to 0$ as $t \to \infty$. Then $AX + B$ is $\alpha$-RV.
In other words, the index of regular variation is preserved under random linear operations, and so regularly varying random vectors are distributional fixed points of random linear recurrence relations. Conditions for the converse statement are well-known in the literature. Here, we provide brief expositions of the three primary regimes dictating the tails of any stationary distribution of (5). It is worth noting that other corner cases do exist, including super-heavy tails (see [9, Section 5.5] for example), but these are outside the scope of this paper.
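Breiman's lemma is easy to check numerically. The following self-contained sketch (not part of the original experiments; it uses synthetic Pareto data, illustrative parameters of our own choosing, and a textbook Hill estimator) verifies that a random linear map $Y = AX + B$ with light-tailed $A$ and $B$ preserves the tail index of an $\alpha$-RV input $X$:

```python
import math
import random

random.seed(0)

def pareto_sample(alpha, n):
    # Inverse-CDF sampling: U^(-1/alpha) has survival function t^(-alpha), t >= 1
    return [random.random() ** (-1.0 / alpha) for _ in range(n)]

def hill_estimator(xs, k):
    # Hill estimator of the tail index, using the k largest order statistics
    xs = sorted(xs, reverse=True)
    return k / sum(math.log(xs[i] / xs[k]) for i in range(k))

alpha, n = 2.0, 200_000
x = pareto_sample(alpha, n)

# Random linear map Y = A*X + B with light-tailed A ~ Uniform(0, 2), B ~ N(0, 1)
y = [random.uniform(0.0, 2.0) * xi + random.gauss(0.0, 1.0) for xi in x]

print(hill_estimator(x, 2000))                    # near the true index alpha = 2
print(hill_estimator([abs(v) for v in y], 2000))  # tail index is preserved
```

Here $A$ has all moments finite and $B$ has a lighter tail than $X$, so both hypotheses of the lemma hold, and the two estimated tail indices agree up to sampling error.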
A.1 The Goldie–Grübel (light-tailed) regime
To start, consider the case where neither $A$ nor $B$ is heavy-tailed and the stochastic optimization dynamics are such that $W_\infty$ is light-tailed. In particular, assume that all moments of $\|A\|$ and $\|B\|$ are finite. By applying the triangle inequality to (5), one immediately finds $\|W_{k+1}\| \leq \|A_{k+1}\|\,\|W_k\| + \|B_{k+1}\|$.
Therefore, if $\|A\| \leq 1 - \delta$ almost surely for some $\delta > 0$, then $\mathbb{E}\|W_k\|^p$ is bounded in $k$ for any $p \geq 1$, and so $W_\infty$ is bounded in $L^p$. The Markov chain (5) is clearly ergodic, and the existence of all moments suggests that the limiting distribution of $W_k$ cannot satisfy a power law. With significant effort, one can show that more is true: Goldie and Grübel proved in [26, Theorem 2.1] that if $B$ is also light-tailed, then $W_\infty$ is light-tailed. To our knowledge, this is the only setting in which one can prove that the Markov chain (5) possesses a light-tailed limiting distribution, and it requires contraction (and therefore, consistent linear convergence) at every step with probability one. In the stochastic optimization setting, the Goldie–Grübel regime coincides with optimizers that have purely exploitative (no explorative) behaviour. Should the chain fail to contract even once, we move outside of this regime and enter the territory of heavy-tailed stationary distributions.
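The contrast between the Goldie–Grübel regime and its failure can be simulated directly. In the sketch below (illustrative parameters of our own choosing, not those used in the paper), a univariate chain that contracts at every step produces only modest excursions, while one that expands with positive probability produces the dramatic spikes characteristic of heavy tails:

```python
import random

random.seed(1)

def run_chain(a_sampler, n_steps=100_000, burn_in=1_000):
    # Iterate w <- a*w + b with b ~ N(0, 1); return post-burn-in |w| samples
    w, samples = 0.0, []
    for k in range(n_steps):
        w = a_sampler() * w + random.gauss(0.0, 1.0)
        if k >= burn_in:
            samples.append(abs(w))
    return samples

# Goldie-Gruebel regime: a <= 0.9 < 1 almost surely, so the chain
# contracts at every step and the stationary law is light-tailed.
light = run_chain(lambda: random.uniform(0.0, 0.9))

# Failure of the regime: P(a > 1) > 0 (here a ~ Uniform(0, 1.8), still
# ergodic since E[log a] = log(1.8) - 1 < 0), giving a power-law tail.
heavy = run_chain(lambda: random.uniform(0.0, 1.8))

print(max(light), max(heavy))  # the expanding chain has far larger excursions
```

A single factor exceeding one in magnitude is enough to leave the light-tailed regime, which is why the contraction condition must hold with probability one.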
A.2 The Kesten–Goldie (heavy-tailed due to intrinsic factors) regime
Next, consider the case where neither $A$ nor $B$ is heavy-tailed, but the stochastic optimization dynamics are such that $W_\infty$ is heavy-tailed. To consider a lower bound, recall that the smallest singular value of $A$, $\sigma_{\min}(A)$, satisfies $\|Ax\| \geq \sigma_{\min}(A)\|x\|$ for all $x$. Therefore, once again from (5), $\|W_{k+1}\| \geq \sigma_{\min}(A_{k+1})\|W_k\| - \|B_{k+1}\|$.
Assuming that the Markov chain (5) is ergodic with limiting distribution $W_\infty$, by the $f$-norm ergodic theorem [59, Theorem 14.0.1], $\mathbb{E}\|W_\infty\|^p$ is finite if and only if $\mathbb{E}\|W_k\|^p$ is bounded in $k$ for any initial $W_0$. However, if $\Pr(\sigma_{\min}(A) > 1) > 0$, then there exists some $p > 0$ such that $\mathbb{E}[\sigma_{\min}(A)^p] > 1$. If $\mathbb{E}\|B\|^p$ is finite, then $\mathbb{E}\|W_k\|^p$ is unbounded in $k$ when $p$ is sufficiently large, implying that $W_\infty$ is heavy-tailed.
This suggests that the tails of the distribution of $W_\infty$ are at least as heavy as a power law. To show that they are dictated precisely by a power law, that is, that $W_\infty$ is $\alpha$-RV for some $\alpha > 0$, is more challenging. The following theorem is a direct corollary of Kesten's celebrated theorem [39, Theorem 6], and Goldie's generalizations thereof in [25].
Theorem 2 (Kesten–Goldie theorem).
Assume the following:

-  The Markov chain (5) is ergodic with $W_k \to W_\infty$ (in distribution).
-  The distribution of $A$ has an absolutely continuous component with respect to Lebesgue measure whose support contains the zero matrix, and $B$ is non-zero with positive probability.
-  There exists $s > 0$ such that $\mathbb{E}[\sigma_{\min}(A)^s] \geq 1$.
-  $A$ is almost surely invertible and $\mathbb{E}[\|A\|^s \log^+ \|A\|] < \infty$.

Then $W_\infty$ is $\alpha$-RV for some $\alpha \in (0, s]$.
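In the univariate case, the exponent appearing in the Kesten–Goldie theorem can be computed directly by solving $\mathbb{E}[a^{\alpha}] = 1$ for the multiplicative factor $a$. The sketch below (again with illustrative parameters, not drawn from the paper's experiments) solves this equation by bisection for $a \sim \mathrm{Uniform}(0, 1.8)$, where $\mathbb{E}[a^{s}] = 1.8^{s}/(s+1)$ in closed form, and compares the prediction against a Hill estimate computed from a simulated chain:

```python
import math
import random

random.seed(2)

def kesten_exponent(hi=1.8):
    # Solve E[a^s] = hi^s / (s + 1) = 1 for s by bisection, a ~ Uniform(0, hi)
    f = lambda s: hi ** s / (s + 1.0) - 1.0
    lo_s, hi_s = 0.5, 5.0  # f(0.5) < 0 < f(5.0)
    for _ in range(60):
        mid = 0.5 * (lo_s + hi_s)
        if f(mid) < 0:
            lo_s = mid
        else:
            hi_s = mid
    return 0.5 * (lo_s + hi_s)

def hill_estimator(xs, k):
    # Hill estimator of the tail index from the k largest order statistics
    xs = sorted(xs, reverse=True)
    return k / sum(math.log(xs[i] / xs[k]) for i in range(k))

alpha_star = kesten_exponent()  # predicted tail exponent, about 1.68

# Simulate w <- a*w + b: ergodic since E[log a] = log(1.8) - 1 < 0,
# yet P(a > 1) > 0, so the stationary law exhibits a power-law tail.
w, samples = 0.0, []
for k in range(500_000):
    w = random.uniform(0.0, 1.8) * w + random.gauss(0.0, 1.0)
    if k >= 1_000:
        samples.append(abs(w))

alpha_hat = hill_estimator(samples, 5000)
print(alpha_star, alpha_hat)  # the two estimates roughly agree
```

Note that $a$ and the noise here are both light-tailed; the heavy tail of the stationary law is generated entirely by the intrinsic dynamics, as the regime's name suggests.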
A.3 The Grincevičius–Grey (heavy-tailed due to extrinsic factors) regime
Finally, consider the case where $B$ is heavy-tailed; in particular, that $B$ is $\alpha$-RV. In this case, the arguments seen in the Kesten–Goldie regime can no longer hold, since $\mathbb{E}\|B\|^{p}$ would be infinite for any $p$ such that $p > \alpha$. Instead, by [9, Theorem 4.4.24], a limiting distribution of (5) is necessarily $\alpha$-RV, provided that $\mathbb{E}\|A\|^{\alpha+\epsilon}$ is finite for some $\epsilon > 0$. This was proved in the univariate case by Grincevičius [31, Theorem 1], and later updated by Grey [30] to include the converse result: if the limiting distribution of (5) is $\alpha$-RV and $\mathbb{E}\|A\|^{\alpha+\epsilon} < \infty$, then $B$ is $\alpha$-RV.
Contrary to the Kesten–Goldie regime, here neither $A$ nor the recursion itself plays much of a role. The optimization procedure itself is fairly irrelevant: from Breiman's lemma, the distribution of the iterates is heavy-tailed after only the first iteration, and the tail exponent remains constant thereafter, i.e., every $W_k$ is heavy-tailed. Therefore, in the Grincevičius–Grey regime, the dynamics of the stochastic optimization are dominated by extrinsic factors.
Appendix B Numerical examinations of heavy tails
Power laws are notoriously treacherous to investigate empirically, especially in higher dimensions, and this plays a significant role in our focus on establishing mathematical theory. Nevertheless, due to the mystique surrounding heavy tails, and since our discussion in §4 concerning the impact of various factors on the tail exponent is predominantly informal, we also recognize the value of empirical confirmation. Here, we shall conduct a few numerical examinations to complement our main discussion. For further empirical analyses concerning non-Gaussian fluctuations in stochastic optimization, we refer to [69, 64].
As a quick illustration, in Figure 3, we contrast tail behaviour in the stationary distributions of the Markov chains induced by optimizers (a) (additive) and (c) (multiplicative) introduced in §5. Three different step sizes are used, with all other factors held constant. To exacerbate multimodality in the stationary distribution, we consider an objective whose derivative is visualized in the upper part of Figure 3. Accurately visualizing the stationary distribution, especially its tails, is challenging: to do so, we apply a low-bandwidth kernel density estimate to a long trajectory of iterates. As expected, multiplicative noise exhibits slowly decaying heavy tails, in contrast to the rapidly decaying Gaussian tails seen with additive noise. Furthermore, the heaviness of the tails increases with the step size.
To estimate the power law from empirically obtained data, in the sequel we shall make use of the powerlaw Python package [3], which applies a combination of maximum likelihood estimation and Kolmogorov–Smirnov techniques (see [13]) to fit a Pareto distribution to data. Recall that a Pareto distribution has density $p(x) \propto x^{-\beta}$ for $x \geq x_{\min}$, where $x_{\min}$ is the scale parameter (that is, where the power law in the tail begins), and $\beta$ is the tail exponent in the density. Note that $\beta$ is related to our definition of the tail exponent by $\alpha = \beta - 1$. Unbiased estimates of this tail exponent obtained from the powerlaw package will be denoted by $\hat{\alpha}$.
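For reference, the maximum likelihood estimator underlying such Pareto fits has a simple closed form, $\hat{\beta} = 1 + n\,\big(\sum_i \ln(x_i/x_{\min})\big)^{-1}$, with asymptotic standard error $(\hat{\beta} - 1)/\sqrt{n}$ [13]. The following stand-alone sketch (not the powerlaw package itself, and with $x_{\min}$ fixed in advance rather than selected by Kolmogorov–Smirnov distance) recovers the exponent of synthetic Pareto data:

```python
import math
import random

random.seed(4)

def pareto_mle(xs, x_min):
    # Maximum likelihood fit of a Pareto density p(x) ~ x^(-beta), x >= x_min;
    # returns the estimated density exponent and its asymptotic standard error.
    tail = [x for x in xs if x >= x_min]
    n = len(tail)
    beta_hat = 1.0 + n / sum(math.log(x / x_min) for x in tail)
    stderr = (beta_hat - 1.0) / math.sqrt(n)
    return beta_hat, stderr

# Synthetic Pareto data with density exponent beta = 3:
# inverse-CDF sampling of P(X > t) = t^(-2) for t >= 1.
beta = 3.0
xs = [random.random() ** (-1.0 / (beta - 1.0)) for _ in range(100_000)]

beta_hat, se = pareto_mle(xs, x_min=1.0)
print(f"{beta_hat:.3f} +/- {1.96 * se:.3f}")  # 95% interval around beta = 3
```

Under the convention above, a fitted density exponent $\hat{\beta} \approx 3$ corresponds to a tail exponent $\alpha = \hat{\beta} - 1 \approx 2$.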
B.1 The linear case with SGD
Let us reconsider the simple case discussed in §3 and illustrate power laws arising from SGD on ridge regression. As a particularly simple illustration, first consider the one-dimensional case of (5) with fixed step size, regularization parameter, and minibatch size, and standard normal synthetic data. The resulting Markov chain (11) is a one-dimensional random linear recurrence whose multiplicative and additive coefficients are generated from the minibatches. Starting from a fixed initial point, Figure 4 shows a trace plot of iterations of (11). One can observe the sporadic spikes that are indicative of heavy-tailed fluctuations. Also in Figure 4 is an approximation of the probability density of magnitudes of the iterates. Both exponential and log-normal fits are obtained via maximum likelihood estimation and compared with the power law predicted from Theorem 2. Visually, the power law certainly provides the best fit. Using the Python package powerlaw, a Pareto distribution was fitted to the iterates; the theoretical tail exponent falls within the 95% confidence interval for the estimated tail exponent. However, even for this simple case, where the stationary distribution is known to exhibit a power law and a significant number of samples are available, a likelihood ratio test was found incapable of refuting a log-normal fit to the tail.
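To make this setting concrete, the sketch below simulates single-sample SGD for one-dimensional ridge regression with standard normal data, $w_{k+1} = (1 - \eta(x_k^2 + \lambda))\,w_k + \eta x_k y_k$, a chain of the form (5). The values $\eta = 0.7$ and $\lambda = 0.1$ are illustrative choices of our own (the values used in the experiment above were not preserved):

```python
import math
import random

random.seed(5)

def hill_estimator(xs, k):
    # Hill estimator of the tail index from the k largest order statistics
    xs = sorted(xs, reverse=True)
    return k / sum(math.log(xs[i] / xs[k]) for i in range(k))

# Single-sample SGD for 1-d ridge regression with standard normal data:
#   w <- w - eta * (x * (x * w - y) + lam * w)
#     =  (1 - eta * (x^2 + lam)) * w + eta * x * y.
# The factor 1 - eta*(x^2 + lam) exceeds 1 in magnitude with positive
# probability, so the chain sits in the Kesten-Goldie regime.
eta, lam = 0.7, 0.1  # illustrative values only
w, samples = 0.0, []
for k in range(500_000):
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    w = (1.0 - eta * (x * x + lam)) * w + eta * x * y
    if k >= 1_000:
        samples.append(abs(w))

print(max(samples))                   # sporadic heavy-tailed spikes
print(hill_estimator(samples, 5000))  # estimated tail exponent
```

With these values, $\mathbb{E}[(1-\eta(x^2+\lambda))^2] > 1$, so the stationary second moment is infinite, and the trace exhibits exactly the sporadic spikes described above.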
As the dimension increases, the upper bound on the power law from Theorem 2 becomes increasingly less tight. To see this, we conduct least-squares linear regression on the Wine Quality dataset [14] (12 attributes; 4898 instances) using vanilla SGD. The step size, regularization parameter, and minibatch size are chosen to ensure that the resulting sequence of iterates satisfying (5) is just barely ergodic, and exhibits a profoundly heavy tail. Starting from a standard normal initialization, Figure 5 shows a long trace plot of the iterations, together with an approximation of the probability density of magnitudes of the iterates. A Pareto distribution fit obtained using the powerlaw package is also drawn, and can be seen to be an excellent match to the data; the corresponding estimated tail exponent is reported to 95% confidence. However, applying Theorem 2 to the chain formed from taking multiple iterations at a time reveals a much larger upper bound on the tail exponent.
B.2 Factors influencing tail exponents
The effect of changing each of these hyperparameters individually was examined. Three other factors were also considered: (i) the effect of smoothing input data by adding Gaussian noise with increasing standard deviation; (ii) the effect of adding momentum (and increasing this hyperparameter); and (iii) changing the optimizer between SGD, Adagrad, Adam, and subsampled Newton (SSN).
| step size | $\hat{\alpha}$ | minibatch size | $\hat{\alpha}$ | regularization | $\hat{\alpha}$ | data smoothing | $\hat{\alpha}$ |
|---|---|---|---|---|---|---|---|
| 0.001 | 4.12 ± 0.04 | 10 | 5.99 ± 0.05 |  | 2.97 ± 0.03 | 0 | 2.97 ± 0.03 |
| 0.005 | 3.70 ± 0.02 | 5 | 4.98 ± 0.07 | 0.01 | 3.02 ± 0.02 | 0.1 | 2.96 ± 0.05 |
| 0.01 | 3.71 ± 0.04 | 2 | 3.62 ± 0.03 | 0.1 | 2.77 ± 0.01 | 0.5 | 3.05 ± 0.02 |
| 0.025 | 2.97 ± 0.03 | 1 | 2.97 ± 0.03 | 0.2 | 2.55 ± 0.01 | 1 | 2.36 ± 0.13 |

| momentum | $\hat{\alpha}$ | optimizer | $\hat{\alpha}$ |
|---|---|---|---|
| 0.5 | 4.99 ± 0.03 | Adagrad | 3.2 ± 0.1 |
| 0.25 | 2.48 ± 0.02 | Adam | 2.119 ± 0.005 |
| 0.1 | 2.84 ± 0.02 | SGD | 2.93 ± 0.03 |
| 0 | 2.93 ± 0.03 | SSN | 0.79 ± 0.04 |
In each case, we ran the stochastic optimizer for a fixed number of epochs (roughly 2.5 million iterations). Instead of directly measuring norms of the weights, we prefer to look at norms of the steps $\|W_{k+1} - W_k\|$. There are two reasons for this: (1) unlike the steps, the mode of the stationary distribution of the norms of the weights will not be close to zero, incurring a significant challenge to the estimation of tail exponents; and (2) if steps at stationarity are heavy-tailed in the sense of having infinite $p$-th moment, then the stationary distribution of the norms of the weights will have infinite $p$-th moment also. This is due to the triangle inequality: $\|W_{k+1} - W_k\| \leq \|W_k\| + \|W_{k+1}\|$, so if $\mathbb{E}\|W_\infty\|^p$ were finite, then the $p$-th moment of the steps would be finite as well.

Density estimates for the steps of each run, varying each factor individually, are displayed in Figure 6. Using the powerlaw package, tail exponents were estimated in each case, and are presented in Table 3 as 95% confidence intervals. As expected, both increasing step size and decreasing minibatch size can be seen to decrease tail exponents, resulting in heavier tails. Unfortunately, the situation is not as clear for the other factors; from Figure 6, we can see that this is possibly due in part to the unique shapes of the distributions, preventing effective estimates of the scale parameter, upon which the tail exponent (and its confidence intervals) are dependent. Nevertheless, there are a few comments that we can make. Firstly, the inclusion of momentum does not seem to prohibit heavy-tailed behaviour, even though the theory breaks down in these cases. On the other hand, as can be seen in Figure 6, Adam appears to exhibit very light tails compared to other optimizers. Adagrad exhibits heavy-tailed behaviour despite taking smaller steps on average. SSN shows the strongest heavy-tailed behaviour among all the stochastic optimizers considered. Increasing regularization does increase the variance of the steps, but does not appear to make a significant difference to the tails in this test case.
Similarly, the effect of adding noise to the data is unclear, although our claim that increasing dispersion of the data (which the addition of large amounts of noise would certainly do) results in heavier-tailed behaviour is supported by the largest smoothing level considered.
B.3 Non-Gaussian stationary behaviour in MobileNetV2
We fine-tune MobileNetV2 on a 10-class image dataset, with images upscaled using bicubic filtering by a factor of 7. Applying transfer learning, we begin with pretrained weights for ImageNet, replacing the final fully-connected layer with a 10-class classifier and fine-tuning the model, fixing all weights except those of the final layer. Norms of the steps for three epochs of vanilla SGD, with fixed minibatch size and regularization parameter, and a step size far larger than common choices for training MobileNetV2 (instead chosen to exacerbate heavy tails for the purpose of illustration), were recorded. These are displayed in Figure 7: on the left side is a density estimate obtained via pseudo-Gaussian smoothing with large bandwidth; on the right side is the histogram of steps on log-log scale, together with the densities of corresponding exponential and log-normal fits obtained via maximum likelihood estimation. A maximum likelihood estimate of the tail exponent for a power law distributional fit was also obtained to 95% confidence. Evidently, the steps are not light-tailed: a likelihood ratio test strongly supports rejection of the light-tailed (exponential) null hypothesis. Once again, we are unable to refute the hypothesis that the data comes from a log-normal distribution.
Appendix C Proofs
Proof of Lemma 2.
Observe that $A$ has full support on the space of square matrices when the minibatch size is at least the dimension, and full support only on a lower-dimensional set of matrices otherwise. In the former (over)determined case, both the smallest and largest singular values of $A$ have full support on $[0, \infty)$, and hence the stationary distribution of (5) decays as a power law. The latter underdetermined case is not so straightforward: while $\|A\|$ still has full support on $[0, \infty)$, the distribution of $A$ almost surely lacks the support required by Theorem 2. Instead, observe that the Markov chain that arises from taking $n$ steps at a time is also a random linear recurrence relation with the same stationary distribution as (5):
$$W_{(k+1)n} = \tilde{A}_{k+1} W_{kn} + \tilde{B}_{k+1}, \qquad \tilde{A}_{k+1} = \prod_{i=kn+1}^{(k+1)n} A_i, \qquad \tilde{B}_{k+1} = \sum_{j=kn+1}^{(k+1)n} \Bigg( \prod_{i=j+1}^{(k+1)n} A_i \Bigg) B_j.$$
One may verify that, for $n$ sufficiently large, $\tilde{A}$ has full support on the space of all square matrices, and hence (5) will also exhibit a stationary distribution with heavy tails. ∎
For the general case, our strategy is to prove that, for some $p > 0$, $\mathbb{E}\|W_k\|^p$ diverges as $k \to \infty$. If we assume that $W_k$ is ergodic with limiting distribution $W_\infty$, then the $f$-norm ergodic theorem [59, Theorem 14.0.1] implies that $\mathbb{E}\|W_k\|^p$ converges if and only if $\mathbb{E}\|W_\infty\|^p$ is finite; hence, the divergence of $\mathbb{E}\|W_k\|^p$ implies that $W_\infty$ has infinite $p$-th moment. In particular, here is the proof of Lemma 3.
Proof of Lemma 3.
It suffices to show that for any , where
Let be arbitrary. For , let be the event that . Also, let from the hypotheses. Since is independent of , by laws of conditional expectation, for any ,
Hence $\mathbb{E}\|W_k\|^p$ diverges as $k \to \infty$. By the $f$-norm ergodic theorem, $\|W_\infty\|^p$ cannot be integrable; if it were, then $\mathbb{E}\|W_k\|^p$ would converge to $\mathbb{E}\|W_\infty\|^p$. ∎
Existing arguments in the literature almost imply Theorem 1, but they are incompatible with the conditions that the multiplicative factor is non-negative and can be zero. Instead, more subtle arguments are required; for these, we draw inspiration from [25, 26].
Proof of Theorem 1.
Since the underlying probability measure is non-atomic, [1, Theorem 2.1] implies that the chain is Harris recurrent. Together with [1, Theorem 3.2], we find that the chain is geometrically ergodic. That the limiting distribution satisfies the distributional fixed point equation is precisely Letac's principle [25, Theorem 2.1].
We shall begin by proving (2). Recall the following limit, which is readily shown using L'Hôpital's rule, and so
Therefore, there exists some such that . Using Hölder's inequality, one finds that for any . Likewise, since with positive probability, with positive probability also, and hence, with positive probability. Therefore, there exists such that , and so for any . We now consider similar arguments to the proof of Lemma 3. Since the chain is geometrically ergodic, by the $f$-norm ergodic theorem, for any , is finite if and only if is bounded in . Letting , for each ,