Entropy-SGD: Biasing Gradient Descent Into Wide Valleys

This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD, where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.


1 Introduction

This paper presents a new optimization tool for deep learning designed to exploit the local geometric properties of the objective function. Consider the histogram in Fig. 1, which shows the spectrum of the Hessian at an extremum discovered by Adam (Kingma & Ba, 2014) for a small LeNet-style network (LeCun et al., 1998) trained on MNIST (cf. Sec. 5.1). It is evident that:

1. a large number of directions have eigenvalues of near-zero magnitude,

2. positive eigenvalues (right inset) have a long tail,

3. negative eigenvalues (left inset), which are directions of descent that the optimizer missed, decay much faster and are far smaller in magnitude.

Interestingly, this trend is not unique to this particular network. Rather, its qualitative properties are shared across a variety of network architectures, network sizes, datasets and optimization algorithms (refer to Sec. 5 for more experiments). Local minima that generalize well and are discovered by gradient descent lie in "wide valleys" of the energy landscape, rather than in sharp, isolated minima. For an intuitive understanding of this phenomenon, imagine a Bayesian prior concentrated at the minimizer of the expected loss: the marginal likelihood of wide valleys under this prior is much higher than that of narrow, sharp valleys, even if the latter are close to the global minimum of the training loss. Almost-flat regions of the energy landscape are robust to data perturbations, noise in the activations, and perturbations of the parameters, all of which are widely-used techniques to achieve good generalization. This suggests that wide valleys should result in better generalization and, indeed, standard optimization algorithms in deep learning seem to discover exactly such regions, without being explicitly tailored to do so. For another recent analysis of the Hessian, see the parallel work of Sagun et al. (2016).

Based on this understanding of how the local geometry looks at the end of optimization, can we modify SGD to actively seek such regions? Motivated by the work of Baldassi et al. (2015) on shallow networks, instead of minimizing the original loss f(x), we propose to maximize the local entropy

F(x, \gamma) \;=\; \log \int_{x' \in \mathbb{R}^n} \exp\Big( -f(x') - \frac{\gamma}{2}\, \| x - x' \|_2^2 \Big)\, dx'.

The above expression is a log-partition function that measures both the depth of a valley at a location x and its flatness; we call it "local entropy" in analogy to the free entropy used in statistical physics. The algorithm presented in this paper, Entropy-SGD, employs stochastic gradient Langevin dynamics (SGLD) to approximate the gradient of local entropy. Our algorithm resembles two nested loops of SGD: the inner loop consists of SGLD iterations while the outer loop updates the parameters. We show that the modified loss function results in a smoother energy landscape defined by the hyper-parameter γ, which we can think of as a "scope" that seeks out valleys of specific widths. Actively biasing the optimization towards wide valleys in the energy landscape results in better generalization error. We present experimental results on fully-connected and convolutional neural networks (CNNs) on the MNIST and CIFAR-10 (Krizhevsky, 2009) datasets, and on recurrent neural networks (RNNs) on the Penn Tree Bank (PTB) dataset (Marcus et al., 1993) as well as character-level text prediction. Our experiments show that Entropy-SGD scales to deep networks used in practice, obtains generalization error comparable to competitive baselines, and also trains much more quickly than SGD, with a marked speed-up on RNNs.

2 Related work

Our observation about the spectrum of the Hessian (further discussed in Sec. 5) is similar to results on a perceptron model in Dauphin et al. (2014), where the authors connect the loss function of a deep network to a high-dimensional Gaussian random field. They also relate it to earlier studies such as Baldi & Hornik (1989); Fyodorov & Williams (2007); Bray & Dean (2007), which show that critical points with high training error are exponentially likely to be saddle points with many negative directions, and that all local minima are likely to have error very close to that of the global minimum. The authors also argue that convergence of gradient descent is affected by the proliferation of saddle points surrounded by high-error plateaus, as opposed to multiple local minima. One can also see this via an application of Kramers' law: the time spent by diffusion is inversely proportional to the smallest negative eigenvalue of the Hessian at a saddle point (Bovier & den Hollander, 2006).

The existence of multiple, almost equivalent, local minima in deep networks has been predicted using a wide variety of theoretical analyses and empirical observations, e.g., papers such as Choromanska et al. (2015a, b); Chaudhari & Soatto (2015) that build upon results from statistical physics, as well as others such as Haeffele & Vidal (2015) and Janzamin et al. (2015) that obtain similar results for matrix and tensor factorization problems. Although the assumptions in these works are somewhat unrealistic in the context of deep networks used in practice, similar results also hold for linear networks, which afford a more thorough analytical treatment (Saxe et al., 2014). For instance, Soudry & Carmon (2016) show that with mild over-parameterization and dropout-like noise, the training error of a neural network with one hidden layer and piece-wise linear activations is zero at every local minimum. All these results suggest that the energy landscape of deep neural networks should be easy to optimize, and they more or less hold in practice: it is easy to optimize a prototypical deep network to near-zero loss on the training set (Hardt et al., 2015; Goodfellow & Vinyals, 2015).

Obtaining good generalization error, however, is challenging: complex architectures are sensitive to initial conditions and learning rates (Sutskever et al., 2013), and even linear networks (Kawaguchi, 2016) may have degenerate saddle points that are hard to escape (Ge et al., 2015; Anandkumar & Ge, 2016). Techniques such as adaptive (Duchi et al., 2011) and annealed learning rates, momentum (Tieleman & Hinton, 2012), as well as architectural modifications like dropout (Srivastava et al., 2014), batch-normalization (Ioffe & Szegedy, 2015; Cooijmans et al., 2016) and weight scaling (Salimans & Kingma, 2016), are different ways of tackling this issue by making the underlying landscape more amenable to first-order algorithms. However, the training process often requires a combination of such techniques and it is unclear beforehand to what extent each one of them helps.

Closer to the subject of this paper are results by Baldassi et al. (2015, 2016a, 2016b), who show that the energy landscape of shallow networks with discrete weights is characterized by an exponential number of isolated minima and a few very dense regions with lots of local minima close to each other. These dense local minima can be shown to generalize well for random input data; more importantly, they are also accessible by efficient algorithms using a novel measure called the "robust ensemble" that amplifies the weight of such dense regions. The authors use belief propagation to estimate local entropy for simpler models such as the committee machines considered there. A related work in this context is EASGD (Zhang et al., 2015), which trains multiple deep networks in parallel and modulates the distance of each worker from the ensemble average. Such an ensemble training procedure improves generalization by ensuring that different workers land in the same wide valley and, indeed, it turns out to be closely related to the replica-theoretic analysis of Baldassi et al. (2016a).

Our work generalizes the local entropy approach above to modern deep networks with continuous weights. It exploits the same phenomenon of wide valleys in the energy landscape but does so without incurring the hardware and communication complexity of replicated training or being limited to models where one can estimate local entropy using belief propagation. The enabling technique in our case is using Langevin dynamics for estimating the gradient of local entropy, which can be done efficiently even for large deep networks using mini-batch updates.

Motivated by the same final goal, viz. flat local minima, the authors in Hochreiter & Schmidhuber (1997b) introduce hard constraints on the training loss and the width of local minima, and show using the Gibbs formalism (Haussler & Opper, 1997) that this leads to improved generalization. As the authors discuss, the effects of the hyper-parameters for the constraints are intricately tied together and they are difficult to choose even for small problems. Our local-entropy-based objective instead naturally balances the energetic term (training loss) and the entropic term (width of the valley). The role of γ is clear as a focusing parameter (cf. Sec. 4.3) and effectively exploiting it provides significant computational advantages. Among other conceptual similarities with our work, let us note that local entropy in a flat valley is a direct measure of the width of the valley, which parallels their usage of the Hessian, while the Gibbs variant of averaging in weight space (Eqn. 33 of Hochreiter & Schmidhuber (1997b)) is similar to our Eqn. (7). Indeed, the Gibbs formalism used in their analysis is a very promising direction for understanding generalization in deep networks (Zhang et al., 2016).

Homotopy continuation methods convolve the loss function to solve sequentially refined optimization problems (Allgower & Georg, 2012; Mobahi & Fisher III, 2015); similarly, methods that perturb the weights or activations to average the gradient (Gulcehre et al., 2016) aim to smooth the rugged energy landscape. Such smoothing is, however, very different from local entropy. For instance, the latter places more weight on wide local minima even if they are much shallower than the global minimum (cf. Fig. 2); this effect cannot be obtained by smoothing. In fact, smoothing can introduce an artificial minimum between two nearby sharp valleys, which is detrimental to generalization. In order to be effective, continuation techniques also require that the minimizers of successively smaller convolutions of the loss function lie close to each other (Hazan et al., 2016); it is not clear whether this is true for deep networks. Local entropy, on the other hand, exploits wide minima, which have been shown to exist in a variety of learning problems (Monasson & Zecchina, 1995; Cocco et al., 1996). Please refer to Appendix C for a more elaborate discussion as well as possible connections to stochastic variational inference (Blei et al., 2016).

3 Local entropy

We first provide a simple intuition for the concept of local entropy of an energy landscape. The discussion in this section builds upon the results of Baldassi et al. (2016a) and extends them to the case of continuous variables. Consider the cartoon energy landscape in Fig. 2, where the x-axis denotes the configuration space of the parameters. We have constructed two local minima: a shallower although wider one on the left, and a very sharp global minimum on the right. Under a Bayesian prior on the parameters, say a Gaussian of fixed variance centered at each of the two locations respectively, the wider local minimum has a higher marginalized likelihood than the sharp valley on the right.

The above discussion suggests that parameters lying in wider local minima, which may possibly have a higher loss than the global minimum, should generalize better than parameters that are simply at the global minimum. In a neighborhood of the wide minimum, the "local entropy" introduced in Sec. 1 is large because it includes contributions from a large region of good parameters; conversely, near the sharp minimum, there are fewer such contributions and the resulting local entropy is low. Local entropy thus provides a way of picking large, approximately flat regions of the landscape over sharp, narrow valleys, in spite of the latter possibly having a lower loss. Quite conveniently, the local entropy is also computed from the partition function with a local re-weighting term.

Formally, for a parameter vector x \in \mathbb{R}^n, consider the Gibbs distribution corresponding to a given energy landscape f(x):

P(x;\, \beta) \;=\; Z_\beta^{-1}\, \exp\big( -\beta\, f(x) \big); \qquad (1)

where \beta is known as the inverse temperature and Z_\beta is a normalizing constant, also known as the partition function. As \beta \to \infty, the probability distribution above concentrates on the global minimum of f(x) (assuming it is unique), given as:

x^* \;=\; \operatorname{argmin}_x\ f(x), \qquad (2)

which establishes the link between the Gibbs distribution and the generic optimization problem (2). We would instead like the probability distribution, and therefore the underlying optimization problem, to focus on flat regions such as the wide valley in Fig. 2. With this in mind, let us construct a modified Gibbs distribution:

P(x';\, x, \beta, \gamma) \;=\; Z_{x,\beta,\gamma}^{-1}\, \exp\Big( -\beta\, f(x') - \frac{\beta\gamma}{2}\, \| x - x' \|_2^2 \Big). \qquad (3)

The distribution in (3) is a function of a dummy variable x' and is parameterized by the original location x. The parameter \gamma biases the modified distribution towards x: a large \gamma results in a distribution with all its mass near x, irrespective of the energy term f(x'). For small values of \gamma, the energy term dominates the exponent and the modified distribution is similar to the original Gibbs distribution in (1). We will set the inverse temperature \beta to 1, because \gamma affords us similar control on the Gibbs distribution.

Definition 1 (Local entropy).

The local free entropy of the Gibbs distribution in (1), colloquially called "local entropy" in the sequel and denoted by F(x, \gamma), is defined as the log-partition function of the modified Gibbs distribution (3), i.e.,

F(x, \gamma) \;=\; \log Z_{x,1,\gamma} \;=\; \log \int_{x'} \exp\Big( -f(x') - \frac{\gamma}{2}\, \| x - x' \|_2^2 \Big)\, dx'. \qquad (4)

The parameter \gamma is used to focus the modified Gibbs distribution on a local neighborhood of x, and we call it a "scope" henceforth.
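As a sanity check of the definition (not part of the original analysis), for a quadratic loss f(x) = a x^2 / 2 the integral in (4) is Gaussian and local entropy has a closed form:

```latex
F(x,\gamma) \;=\; \frac{1}{2}\log\frac{2\pi}{a+\gamma} \;-\; \frac{1}{2}\,\frac{a\gamma}{a+\gamma}\,x^2 ,
```

so that -F is again quadratic, but with effective curvature a\gamma/(a+\gamma) < \min(a, \gamma): the landscape is smoothed, and the scope \gamma caps the curvature that survives.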

Effect on the energy landscape:

Fig. 2 shows the negative local entropy -F(x, \gamma) for two different values of \gamma. We expect the wide minimum to be more robust than the sharp one to perturbations of data or parameters, and thus to generalize well; indeed, the negative local entropy in Fig. 2 has its global minimum near the wide valley. For low values of \gamma, the energy landscape is significantly smoother than the original landscape and still maintains our desired characteristic, viz. a global minimum at a wide valley. As \gamma increases, the local entropy energy landscape gets closer to the original energy landscape, and the two become equivalent in the limit \gamma \to \infty. On the other hand, for very small values of \gamma, the local entropy energy landscape is almost uniform.
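This effect of \gamma can be checked numerically on a cartoon one-dimensional landscape. The double-well below, the choice of \gamma, and the integration grid are illustrative assumptions of this sketch, not the construction used for Fig. 2:

```python
import numpy as np

# Toy 1-D landscape: a wide, shallow valley near x = -2 and a very
# sharp global minimum near x = +2 (an illustrative construction).
def f(x):
    return (1.0
            - 0.8 * np.exp(-(x + 2.0) ** 2 / 2.0)     # wide valley
            - 1.0 * np.exp(-(x - 2.0) ** 2 / 0.01))   # sharp valley

def neg_local_entropy(xs, gamma, grid):
    """-F(x, gamma) = -log integral exp(-f(x') - gamma/2 (x - x')^2) dx',
    computed with a simple Riemann sum over a fixed grid."""
    dx = grid[1] - grid[0]
    return np.array([
        -np.log(np.sum(np.exp(-f(grid) - 0.5 * gamma * (x - grid) ** 2)) * dx)
        for x in xs
    ])

grid = np.linspace(-8.0, 8.0, 4001)
xs = np.linspace(-4.0, 4.0, 801)

x_orig = xs[np.argmin(f(xs))]                            # sharp global minimum
x_ent = xs[np.argmin(neg_local_entropy(xs, 0.5, grid))]  # wide valley
print(x_orig, x_ent)
```

For this landscape the original loss is minimized at the sharp valley, while the negative local entropy with a small \gamma is minimized near the wide one; increasing \gamma moves its minimizer back towards the original minimum.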

Connection to classical entropy:

The quantity we have defined as local entropy in Def. 1 is different from classical entropy, which counts the number of likely configurations under a given distribution. For a continuous parameter space, the latter is given by

S(x, \beta, \gamma) \;=\; -\int_{x'} \log P(x';\, x, \beta, \gamma)\; dP(x';\, x, \beta, \gamma)

for the Gibbs distribution in (3). Maximizing classical entropy, however, does not differentiate between flat regions with very high loss and flat regions that lie deeper in the energy landscape. For instance, in Fig. 2, classical entropy is largest in a wide region with very high loss on the training dataset, which is unlikely to generalize well.

4 Entropy-guided SGD

Simply speaking, our algorithm minimizes the negative local entropy from Sec. 3. This section discusses how the gradient of local entropy can be computed via Langevin dynamics. The reader will see that the resulting algorithm has a strong flavor of “SGD-inside-SGD”: the outer SGD updates the parameters, while an inner SGD estimates the gradient of local entropy.

Consider a typical classification setting: let x be the weights of a deep neural network and \xi_k be samples from a dataset \Xi of size N. Let f(x;\, \xi) be the loss function, e.g., the cross-entropy of the classifier on a sample \xi. The original optimization problem is:

x^* \;=\; \operatorname{argmin}_x\ \frac{1}{N} \sum_{k=1}^{N} f(x;\, \xi_k); \qquad (5)

where the objective is typically a non-convex function of both the weights x and the samples \xi_k. The Entropy-SGD algorithm instead solves the problem

x^*_{\text{Entropy-SGD}} \;=\; \operatorname{argmin}_x\ -F(x, \gamma;\, \Xi); \qquad (6)

where we have made the dependence of local entropy on the dataset explicit.

The gradient of local entropy over a randomly sampled mini-batch of m samples, denoted by \Xi_\ell, is easy to derive and is given by

-\nabla_x F(x, \gamma;\, \Xi_\ell) \;=\; \gamma\, \big( x - \langle x';\, \Xi_\ell \rangle \big); \qquad (7)

where the notation \langle x';\, \Xi_\ell \rangle denotes the expectation of x' (we have again made the dependence on the data explicit) over a Gibbs distribution of the original optimization problem modified to focus on the neighborhood of x; this is given by

P(x';\, x, \gamma) \;\propto\; \exp\Big[ -\frac{1}{m} \sum_{i=1}^{m} f(x';\, \xi_{\ell_i}) \;-\; \frac{\gamma}{2}\, \| x - x' \|_2^2 \Big]. \qquad (8)

Computationally, the gradient in (7) involves estimating \langle x' \rangle with the current weights fixed to x. This is an expectation over a Gibbs distribution and is hard to compute. We can, however, approximate it using Markov chain Monte Carlo (MCMC) techniques. In this paper, we use stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), an MCMC algorithm for drawing samples from a Bayesian posterior that scales to large datasets using mini-batch updates. Please see Appendix A for a brief overview of SGLD. For our application, as lines 3-6 of Alg. 1 show, SGLD resembles a few iterations of SGD with an extra forcing term and additive gradient noise.

We can obtain some intuition on how Entropy-SGD works using the expression for the gradient: the term \langle x' \rangle is the average over a locally focused Gibbs distribution, and for two local minima in the neighborhood of x that are roughly equivalent in loss, this term points towards the wider one because \langle x' \rangle is closer to it. This results in a net gradient that takes SGD towards wider valleys. Moreover, if we unroll the SGLD steps used to compute \langle x' \rangle (cf. line 5 in Alg. 1), the update resembles one large step in the direction of the (noisy) average gradient around the current weights, and thus looks similar to averaged SGD in the literature (Polyak & Juditsky, 1992; Bottou, 2012). These two phenomena intuitively explain the improved generalization performance of Entropy-SGD.
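The nested-loop structure can be sketched on a toy problem as follows. The quadratic loss, step sizes, loop length and averaging constant below are illustrative placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

x_min = np.array([1.0, -2.0])

def grad_f(x):
    # Gradient of a toy quadratic loss f(x) = 0.5 ||x - x_min||^2,
    # a stand-in for the mini-batch gradient of a network.
    return x - x_min

def entropy_sgd_step(x, grad_f, gamma, eta=0.1, eta_prime=0.1,
                     L=20, eps=1e-4, alpha=0.75):
    """One outer iteration of Entropy-SGD (a sketch of Alg. 1).

    Inner loop: L steps of SGLD draw x' from the modified Gibbs
    distribution (8); mu is an exponential average of the x' iterates.
    Outer update: x <- x - eta * gamma * (x - mu), i.e. a step along
    the negative gradient of local entropy from Eqn. (7).
    """
    x_prime = x.copy()
    mu = x.copy()
    for _ in range(L):
        dx = grad_f(x_prime) - gamma * (x - x_prime)
        x_prime = x_prime - eta_prime * dx \
                  + np.sqrt(eta_prime) * eps * rng.standard_normal(x.shape)
        mu = (1.0 - alpha) * mu + alpha * x_prime
    return x - eta * gamma * (x - mu)

x = np.zeros(2)
for _ in range(200):
    x = entropy_sgd_step(x, grad_f, gamma=1.0)
print(x)  # approaches x_min
```

On a real network, `grad_f` would be the back-propagated mini-batch gradient and each inner step would draw a fresh mini-batch.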

4.2 Algorithm and Implementation details

Alg. 1 provides the pseudo-code for one iteration of the algorithm. At each iteration, lines 3-6 perform a number of iterations of Langevin dynamics to estimate \langle x' \rangle. The weights are updated with the modified gradient on line 7.

Let us now discuss a few implementation details. Although we have written Alg. 1 in the classical SGD setup, we can easily modify it to include techniques such as momentum and gradient pre-conditioning (Duchi et al., 2011) by changing the two update lines. In our experiments, we have used SGD with Nesterov's momentum (Sutskever et al., 2013) and Adam for the outer and inner loops, with similar qualitative results. We use exponential averaging to estimate \langle x' \rangle in the SGLD loop (line 6), so as to put more weight on the later samples; this is akin to the burn-in in standard SGLD.

We set the number of SGLD iterations depending upon the complexity of the dataset. The SGLD learning rate \eta' is fixed to a small value for all our experiments. We found that annealing \eta' (for instance, setting it to be the same as the outer learning rate \eta) is detrimental; indeed, a small SGLD learning rate leads to poor estimates of local entropy towards the end of training, where they are most needed. The parameter \epsilon in SGLD on line 5 is the thermal noise, and we keep it fixed as well. Having thus fixed \eta' and \epsilon, an effective heuristic to tune the remaining parameter \gamma is to match the magnitude of the gradient of the local entropy term, viz. \gamma\,(x - \langle x' \rangle), to the gradient of vanilla SGD, viz. \nabla f(x).

4.3 Scoping of γ

The scope \gamma is fixed in Alg. 1. For large values of \gamma, the SGLD updates happen in a small neighborhood of the current parameters x, while low values of \gamma allow the inner SGLD to explore further away from x. In the context of the discussion in Sec. 3, a "reverse-annealing" of the scope \gamma, i.e. increasing \gamma as training progresses, has the effect of exploring the energy landscape on progressively finer scales. We call this process "scoping"; it is similar to that of Baldassi et al. (2016a), and we use a simple exponential schedule given by

\gamma(t) \;=\; \gamma_0\, (1 + \gamma_1)^t

for the parameter update t. We have experimented with linear, quadratic and bounded exponential scoping schedules and obtained qualitatively similar results.
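As a sketch, the exponential schedule amounts to one multiplicative factor per parameter update; the values of \gamma_0 and \gamma_1 below are placeholders, not the tuned settings from the experiments:

```python
def gamma_schedule(t, gamma0=1e-4, gamma1=1e-3):
    """Exponential scoping: gamma(t) = gamma0 * (1 + gamma1)**t, where t
    counts parameter updates (gamma0, gamma1 are placeholder values)."""
    return gamma0 * (1.0 + gamma1) ** t

print(gamma_schedule(0))     # 1e-4
print(gamma_schedule(2000))  # ~7.4e-4: the scope grows slowly but steadily
```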

Scoping of \gamma unfortunately interferes with the learning rate annealing that is popular in deep learning; this is a direct consequence of the update step on line 7 of Alg. 1. In practice, we therefore scale down the local entropy gradient by \gamma before the weight update and modify the line to read

x \;\leftarrow\; x - \eta\, (x - \mu).

Our general strategy during hyper-parameter tuning is to set the initial scope \gamma_0 to be very small, pick a large value of \gamma_1, and set the learning rate \eta such that the magnitude of the local entropy gradient is comparable to that of SGD. We can use much larger learning rates than SGD in our experiments because the local entropy gradient is less noisy than the original back-propagated gradient. This also enables very fast progress in the beginning, with the smooth landscape of a small \gamma.

4.4 Theoretical Properties

We can show that Entropy-SGD results in a smoother loss function and obtains better generalization error than the original objective (5). With some overload of notation, we assume that the original loss f(x) is \beta-smooth, i.e., for all x and y we have \| \nabla f(x) - \nabla f(y) \| \le \beta\, \| x - y \|. We additionally assume, for the purpose of the analysis, that no eigenvalue of the Hessian \nabla^2 f(x) lies in a small excluded neighborhood of zero parameterized by a constant c > 0.

Lemma 2.

The objective -F(x, \gamma) in (6) is Lipschitz and smooth, with a smaller smoothness constant than that of the original objective f(x).

Please see Appendix B for the proof. The local entropy objective is thus smoother than the original objective. Let us now obtain a bound on the improvement in generalization error. We denote an optimization algorithm, viz. SGD or Entropy-SGD, by A(\Xi); it is a function of the dataset \Xi and outputs the parameters x^* upon termination. Uniform stability (Bousquet & Elisseeff, 2002) is then a notion of how much the output of the algorithm differs in loss upon being presented with two datasets, \Xi and \Xi', that differ in at most one sample:

\sup_{\xi \in \Xi \cup \Xi'}\ \big[ f(A(\Xi),\, \xi) - f(A(\Xi'),\, \xi) \big] \;\le\; \epsilon.

Hardt et al. (2015) connect uniform stability to generalization error and show that an \epsilon-stable algorithm A has generalization error bounded by \epsilon, i.e., if A terminates with parameters x^*,

\big|\, \mathbb{E}_\Xi \big( R_\Xi(x^*) - R(x^*) \big) \,\big| \;\le\; \epsilon;

where the left-hand side is the generalization error: the difference between the empirical loss R_\Xi(x^*) and the population loss R(x^*). We now employ the following theorem, which bounds the stability of an optimization algorithm through the smoothness of its loss function and the number of iterations on the training set.

Theorem 3 (Hardt et al. (2015)).

For an \alpha-Lipschitz and \beta-smooth loss function, if SGD converges in T iterations on N samples with a decreasing learning rate, the stability is bounded by

\epsilon \;\lessapprox\; \frac{1}{N}\, \alpha^{1/(1+\beta)}\, T^{\,1 - 1/(1+\beta)}.

Using Lemma 2 and Theorem 3 we have

\epsilon_{\text{Entropy-SGD}} \;\lessapprox\; \big( \alpha\, T^{-1} \big)^{\left(1 - \frac{1}{1 + \gamma^{-1} c}\right)\beta}\ \epsilon_{\text{SGD}}, \qquad (9)

which suggests that Entropy-SGD generalizes better than SGD if both algorithms converge after T passes over the N samples.

Let us note that while the number of passes over the dataset for Entropy-SGD and SGD is similar in our experiments on CNNs, Entropy-SGD makes only half as many passes as SGD in our experiments on RNNs. As an aside, it is easy to see from the proof of Lemma 2 that for a convex loss function f, the local entropy objective does not change the minimizer of the original problem.

Remark 4.

The above analysis hinges upon the assumption that the Hessian does not have eigenvalues in a small excluded set around zero determined by the constant c. This is admittedly unrealistic; for instance, the eigenspectrum of the Hessian at a local minimum in Fig. 1 has a large fraction of its eigenvalues almost zero. Let us, however, remark that the result in Thm. 3 by Hardt et al. (2015) assumes global conditions on the smoothness of the loss function; one imagines that Eqn. (9) remains qualitatively the same even if this assumption is violated to some extent before convergence happens. Obtaining a rigorous generalization bound without this assumption would require a dynamical analysis of SGD and seems out of reach currently.

5 Experiments

In Sec. 5.1, we discuss experiments which suggest that the characteristics of the energy landscape around local minima accessible by SGD are universal across deep architectures. We then present experimental results on two standard image classification datasets, viz. MNIST and CIFAR-10, and two datasets for text prediction, viz. PTB and the text of War and Peace. Table 1 summarizes the results of these experiments on deep networks.

5.1 Universality of the Hessian at local minima

We use automatic differentiation to compute the Hessian at a local minimum obtained at the end of training for the following networks:

1. small-LeNet on MNIST: This network is similar to LeNet, but with fewer channels in the first two convolutional layers and fewer hidden units in the fully-connected layer. We train it to convergence with Adam.

2. small-mnistfc on MNIST: A fully-connected network with one hidden layer, ReLU non-linearities and the cross-entropy loss; it converges under momentum-based SGD.

3. char-lstm for text generation: This is a recurrent network with the Long Short-Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997a). We train it with Adam to re-generate a small piece of text consisting of short lines of one-hot encoded characters.

4. All-CNN-BN on CIFAR-10: This is similar to the All-CNN-C network (Springenberg et al., 2014) (cf. Sec. 5.3), which we train using Adam. Exact Hessian computation is in this case expensive, so we instead compute the diagonal of the Fisher information matrix (Wasserman, 2013) using the element-wise first and second moments of the gradients that Adam maintains, i.e., E[g^2] - (E[g])^2, where g is the back-propagated gradient. Fisher information measures the sensitivity of the log-likelihood of the data given the parameters in the neighborhood of a local minimum, where it is exactly equal to the Hessian of the negative log-likelihood. We consider the diagonal of the empirical Fisher information matrix as a proxy for the eigenvalues of the Hessian, as is common in the literature.
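A moment-based Fisher diagonal of this kind can be sketched as follows; the buffers mirror Adam's exponential moving averages, but the synthetic gradients and the decay constants are assumptions of this illustration:

```python
import numpy as np

def fisher_diagonal(grads, beta1=0.9, beta2=0.999):
    """Approximate the diagonal of the empirical Fisher information from
    element-wise first and second moment buffers like those Adam keeps,
    returning the bias-corrected E[g^2] - (E[g])^2 (a sketch)."""
    m = np.zeros_like(grads[0])
    v = np.zeros_like(grads[0])
    for g in grads:
        m = beta1 * m + (1.0 - beta1) * g          # first-moment buffer
        v = beta2 * v + (1.0 - beta2) * g * g      # second-moment buffer
    T = len(grads)
    m_hat = m / (1.0 - beta1 ** T)                 # bias correction
    v_hat = v / (1.0 - beta2 ** T)
    return v_hat - m_hat ** 2

# Synthetic gradients: coordinate 0 is noisy (a proxy for curvature),
# coordinate 1 is nearly constant and tiny.
rng = np.random.default_rng(1)
grads = [np.array([rng.normal(0.0, 1.0), 1e-3]) for _ in range(500)]
print(fisher_diagonal(grads))
```

The noisy coordinate receives a large diagonal entry, while the nearly constant one receives an entry close to zero.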

We choose to compute the exact Hessian for the first three networks and, to keep the computational and memory requirements manageable, they are smaller than standard deep networks used in practice. For the last network, we sacrifice exact computation and instead approximate the Hessian of a large deep network. We note that recovering an approximate Hessian from Hessian-vector products (Pearlmutter, 1994) could be a viable strategy for large networks.

Fig. 1 in the introductory Sec. 1 shows the eigenspectrum of the Hessian for small-LeNet, while Fig. 3 shows the eigenspectra for the other three networks. A large proportion of the eigenvalues of the Hessian are very close to zero, or positive with a very small (relative) magnitude. This suggests that the local geometry of the energy landscape is almost flat at the local minima discovered by gradient descent. This agrees with theoretical results such as Baldassi et al. (2016c), where the authors predict that flat regions of the landscape generalize better. Standard regularizers in deep learning such as convolutions, max-pooling and dropout seem to bias SGD towards flatter regions of the energy landscape. Away from the origin, the right tails of the eigenspectra are much longer than the left tails. Indeed, as discussed in numerous places in the literature (Bray & Dean, 2007; Dauphin et al., 2014; Choromanska et al., 2015a), SGD finds low-index critical points, i.e., optimizers with few negative eigenvalues of the Hessian. What is interesting and novel is that the directions of descent that SGD misses do not have large curvature.

5.2 MNIST

We consider two prototypical networks: the first, called mnistfc, is a fully-connected network with two hidden layers, and the second is a convolutional neural network of the same size as LeNet but with batch-normalization (Ioffe & Szegedy, 2015); both use dropout after each layer. As a baseline, we train with Adam using a learning rate that drops by a fixed factor at regular intervals, and report the average error over several independent runs for mnistfc and LeNet.

For both these networks, we train with Entropy-SGD using a reduced dropout probability. The learning rate of the SGLD updates is fixed, while the outer loop's learning rate drops by a fixed factor after the second epoch; we use Nesterov's momentum for both loops, and the thermal noise in the SGLD updates (line 5 of Alg. 1) is kept fixed. We use an exponentially increasing value of \gamma for scoping: the scope starts at a small initial value and increases by a fixed factor after each parameter update. The results in Fig. 3(a) and Fig. 3(b) show that Entropy-SGD obtains a generalization error comparable to the baseline for both mnistfc and LeNet. Entropy-SGD trains slightly faster in wall-clock time for LeNet; it is marginally slower for mnistfc, which is a small network that trains in about two minutes anyway.

Remark on the computational complexity: Since Entropy-SGD runs L steps of SGLD before each parameter update, the effective number of passes over the dataset is L times that of SGD or Adam for the same number of parameter updates. We therefore plot the error curves of Entropy-SGD in Figs. 4, 5 and 6 against the "effective number of epochs", i.e. by multiplying the abscissae by a factor of L (we define L = 1 for SGD or Adam). Modulo the time required for the actual parameter updates (which are fewer for Entropy-SGD), this is a direct measure of wall-clock time, agnostic to the underlying hardware and implementation.
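As a rough sketch of this accounting, with L denoting the number of SGLD steps per parameter update (the convention L = 1 for plain SGD/Adam is an assumption of this illustration):

```python
def effective_epochs(epochs, L=1):
    """Effective passes over the dataset: Entropy-SGD performs L SGLD
    steps per parameter update, so an epoch of updates costs roughly L
    passes; L = 1 recovers plain SGD or Adam (accounting convention)."""
    return epochs * L

print(effective_epochs(10))        # SGD/Adam: 10
print(effective_epochs(10, L=20))  # Entropy-SGD with 20 SGLD steps: 200
```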

5.3 CIFAR-10

We train on CIFAR-10 without data augmentation after performing global contrast normalization and ZCA whitening (Goodfellow et al., 2013). We consider the All-CNN-C network of Springenberg et al. (2014) with the only difference that a batch-normalization layer is added after each convolutional layer; all other architectural choices and hyper-parameters are kept the same. We train with SGD and Nesterov’s momentum, with an initial learning rate that decreases by a constant factor at regular intervals. Our average baseline error is lower than the error reported by the authors in Springenberg et al. (2014) and is obtained in fewer epochs; this is thus a very competitive baseline for this network. Let us note that the best result in the literature on non-augmented CIFAR-10 is the ELU-network of Clevert et al. (2015).

We train with Entropy-SGD, keeping the original dropout. The initial learning rate of the outer loop drops by a constant factor at regular intervals, while the learning rate and thermal noise of the SGLD updates are fixed. As the scoping scheme, the scope γ starts at a small initial value and increases by a constant factor after each parameter update. Fig. 4 shows the training and validation error curves for Entropy-SGD compared with SGD. It shows that local entropy performs as well as SGD on a large CNN; we reach a comparable validation error in a comparable number of effective epochs.

We see almost no plateauing of the training loss or validation error for Entropy-SGD in Fig. 4(a); this trait is shared across different networks and datasets in our experiments and is an indicator of the additional smoothness of the local entropy landscape, coupled with a good scoping schedule for γ.

5.4 Recurrent neural networks

5.4.1 Penn Tree Bank

We train an LSTM network on the Penn Tree Bank (PTB) dataset for word-level text prediction. This dataset contains about one million words divided into training, validation and test sets with a fixed vocabulary. Our network, called PTB-LSTM, consists of two layers of hidden units, each unrolled for a fixed number of time steps; note that this is a large network with millions of weights. We recreated the training pipeline of Zaremba et al. (2014) for this network (SGD without momentum) and obtained validation and test word perplexities that closely match the results of the original authors.

We run Entropy-SGD on PTB-LSTM for a few epochs; note that, counting the inner SGLD steps, this still results in only a modest number of effective epochs. We do not use scoping for this network and instead fix γ. The initial learning rate of the outer loop reduces by a constant factor at each epoch after the third epoch, while the SGLD learning rate and thermal noise are fixed. As Fig. 5(a) shows, Entropy-SGD trains significantly faster than SGD in terms of effective epochs and also achieves a slightly better word perplexity on both the validation and test sets.

5.4.2 char-LSTM on War and Peace

Next, we train an LSTM to perform character-level text prediction. As a dataset, following the experiments of Karpathy et al. (2015), we use the text of War and Peace by Leo Tolstoy, which contains a few million characters divided into training, validation and test sets. We use an LSTM consisting of two layers of hidden units unrolled for a fixed number of time steps over a small character vocabulary. We train the baseline with Adam, with an initial learning rate that decays by a constant factor at regular intervals, to obtain baseline validation and test perplexities.

As noted in Sec. 4.2, we can easily wrap Alg. 1 inside other variants of SGD such as Adam; this involves simply substituting the local entropy gradient in place of the usual back-propagated gradient. For this experiment, we constructed a variant of Adam that uses the local entropy gradient (computed via SGLD). We run it with a fixed γ and an initial learning rate that decreases by a constant factor at each epoch. Note that, counting SGLD steps, this again results in about half as much wall-clock time as SGD or Adam. Averaged over independent runs, we obtain validation and test perplexities that are better than the baseline. Fig. 5(b) shows the error curves for this experiment.
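A minimal sketch of this substitution, assuming NumPy and a generic `grad_fn` callable (our names, not the paper’s reference implementation): Adam’s update rule is untouched, only the source of the gradient changes.

```python
import numpy as np

def adam_step(x, grad_fn, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; `grad_fn` may return either the usual
    back-propagated gradient or an SGLD-based estimate of the
    local entropy gradient."""
    g = grad_fn(x)
    state['t'] += 1
    state['m'] = beta1 * state['m'] + (1 - beta1) * g
    state['v'] = beta2 * state['v'] + (1 - beta2) * g * g
    m_hat = state['m'] / (1 - beta1 ** state['t'])     # bias correction
    v_hat = state['v'] / (1 - beta2 ** state['t'])
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), state
```

Since the optimizer only consumes a gradient vector, swapping in the local entropy gradient is a drop-in change.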

Tuning the momentum in Entropy-SGD was crucial to getting good results on RNNs. While the SGD baseline on PTB-LSTM does not use momentum (and, in fact, does not train well with momentum), we used a non-zero momentum for Entropy-SGD. On the other hand, the baseline for char-LSTM was trained with Adam’s standard β₁ (β₁ in Adam controls the moving average of the gradient), while we used a smaller β₁ for Entropy-SGD. In contrast to this observation about RNNs, all our experiments on CNNs used the same momentum throughout. In order to investigate this difficulty, we monitored the angle between the local entropy gradient and the vanilla SGD gradient during training. This angle changes much more rapidly for RNNs than for CNNs, which suggests a more rugged energy landscape for the former; the local entropy gradient is highly uncorrelated with the SGD gradient in this case.
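The monitoring described above is just a cosine computation between the two gradient vectors; a sketch, assuming flattened NumPy gradients:

```python
import numpy as np

def gradient_angle(g_entropy, g_sgd, eps=1e-12):
    """Angle in radians between the local entropy gradient and the
    vanilla SGD gradient (both flattened into vectors)."""
    cos = np.dot(g_entropy, g_sgd) / (
        np.linalg.norm(g_entropy) * np.linalg.norm(g_sgd) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Logging this quantity every few updates is enough to see how quickly the two descent directions decorrelate.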

6 Discussion

In our experiments, Entropy-SGD results in generalization error comparable to SGD, but always with a lower cross-entropy loss on the training set. This suggests the following, in the context of energy landscapes of deep networks: roughly, the wide valleys favored by Entropy-SGD lie deeper in the landscape, with a lower empirical loss, than the local minima discovered by SGD, where it presumably gets stuck. Such an interpretation is in contrast to theoretical models of deep networks (cf. Sec. 2) which predict multiple equivalent local minima with the same loss. Our work suggests that geometric properties of the energy landscape are crucial for generalizing well and provides algorithmic approaches to exploit them. However, the literature lacks general results about the geometry of the loss functions of deep networks, convolutional neural networks in particular, and this is a promising direction for future work.

If we focus on the inner loop of the algorithm, the SGLD updates compute the average gradient (with Langevin noise) in a neighborhood of the current parameters while maintaining a Polyak average of the iterates. Such an interpretation is very close to the averaged SGD of Polyak & Juditsky (1992) and Bottou (2012) and is worth further study. Our experiments show that while Entropy-SGD trains significantly faster than SGD for recurrent networks, it obtains relatively minor gains in terms of wall-clock time for CNNs. Estimating the gradient of local entropy cheaply, with few SGLD iterations, or by using a smaller network to estimate it in a teacher-student framework (Balan et al., 2015), is another avenue for extensions of this work.
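The running average maintained in the inner loop can be sketched as an exponential moving average of the SGLD iterates; the smoothing constant `alpha` below is an illustrative value, not a tuned hyper-parameter.

```python
def polyak_update(mu, z, alpha=0.25):
    """One step of the running average of SGLD iterates z maintained
    in the inner loop of Alg. 1; `alpha` is an illustrative smoothing
    constant."""
    return (1.0 - alpha) * mu + alpha * z
```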

7 Conclusions

We introduced an algorithm called Entropy-SGD for the optimization of deep networks. It was motivated by the observation that the energy landscape near a local minimum discovered by SGD is almost flat for a wide variety of deep networks, irrespective of their architecture, input data or training methods. We connected this observation to the concept of local entropy, which we used to bias the optimization towards flat regions that have low generalization error. Our experiments showed that Entropy-SGD is applicable to the large convolutional and recurrent deep networks used in practice.

8 Acknowledgments

This work was supported by ONR N00014-13-1-034, AFOSR F9550-15-1-0229 and ARO W911NF-15-1-0564/66731-CS.

Appendix A Stochastic gradient Langevin dynamics (SGLD)

Local entropy in Def. (1) is an expectation over the entire configuration space and is hard to compute; we can, however, approximate its gradient using Markov chain Monte-Carlo (MCMC) techniques. In this section, we briefly review stochastic gradient Langevin dynamics (Welling & Teh, 2011), an MCMC algorithm designed to draw samples from a Bayesian posterior that scales to large datasets by using mini-batch updates.

For a parameter vector x with a prior distribution p(x), if the probability of generating a data item ξ_k given a model parameterized by x is p(ξ_k | x), the posterior distribution of the parameters based on N data items can be written as

 p(x | ξ_{k≤N}) ∝ p(x) ∏_{k=1}^{N} p(ξ_k | x).   (10)

Langevin dynamics (Neal, 2011) injects Gaussian noise into maximum-a-posteriori (MAP) updates to prevent over-fitting to the MAP solution of the above equation. The updates can be written as

 Δx_t = (η/2) (∇log p(x_t) + ∑_{k=1}^{N} ∇log p(ξ_k | x_t)) + √η ε_t;   (11)

where ε_t ∼ N(0, I) is Gaussian noise and η is the learning rate. In this form, Langevin dynamics faces two major hurdles for applications to large datasets. First, computing the gradient over all N samples for each update becomes prohibitive. However, as Welling & Teh (2011) show, one can instead simply use the average gradient over m data samples (a mini-batch), as follows:

 Δx_t = (η_t/2) (∇log p(x_t) + (N/m) ∑_{k=1}^{m} ∇log p(ξ_k | x_t)) + √η_t ε_t.   (12)

Secondly, Langevin dynamics in (11) is a discrete-time approximation of a continuous-time stochastic differential equation (Mandt et al., 2016), thereby necessitating a Metropolis-Hastings (MH) rejection step (Roberts & Stramer, 2002), which again requires computing the likelihood over the entire dataset. However, if the learning rate η_t → 0, we can also forgo the MH step (Chen et al., 2014). Welling & Teh (2011) also argue that the sequence of samples generated by iterating (12) converges to the correct posterior (10), and one can hence compute the statistics of any function g(x) of the parameters using these samples. Concretely, the posterior expectation E[g(x)] is approximated by (∑_t η_t g(x_t)) / (∑_t η_t), i.e., the average computed by weighing each sample by the corresponding learning rate in (12). In this paper, we consider a uniform prior on the parameters, and hence the first term in (12), viz. ∇log p(x_t), vanishes.
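A sketch of one update of (12) with a uniform prior, in NumPy; `sum_grad_log_lik` stands for the sum over the mini-batch of the gradients ∇log p(ξ_k | x_t) (our naming).

```python
import numpy as np

def sgld_step(x, sum_grad_log_lik, N, m, eta, rng):
    """One SGLD update, Eqn. (12), with a uniform prior so that the
    prior-gradient term vanishes; `eta` is the learning rate and
    `rng` a NumPy Generator supplying the Gaussian noise."""
    noise = np.sqrt(eta) * rng.standard_normal(x.shape)
    return x + 0.5 * eta * (N / m) * sum_grad_log_lik + noise
```

For instance, with N = m = 1 and log-likelihood gradient −x (a standard Gaussian target), the empirical variance of the chain approaches one for small η.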

Let us note that there is a variety of increasingly sophisticated MCMC algorithms applicable to our problem, e.g., Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) by Chen et al. (2014) based on volume preserving flows in the “parameter-momentum” space, stochastic annealing thermostats (Santa) by Chen et al. (2015) etc. We can also employ these techniques, although we use SGLD for ease of implementation; the authors in Ma et al. (2015) provide an elaborate overview.

Appendix B Proofs

Proof of Lemma 2.

The gradient of local entropy is computed in Sec. 4.1 to be ∇F(x,γ) = −γ (x − ⟨x′; x⟩). Consider the term

 x − ⟨x′; x⟩
  = x − Z_{x,γ}^{−1} ∫ x′ e^{−f(x′) − (γ/2) ∥x−x′∥²} dx′
  ≈ x − Z_{x,γ}^{−1} ∫ (x + s) e^{−f(x) − ∇f(x)⊤ s − (1/2) s⊤ (γ I + ∇²f(x)) s} ds
  = x (1 − Z_{x,γ}^{−1} ∫ e^{−f(x) − ∇f(x)⊤ s − (1/2) s⊤ (γ I + ∇²f(x)) s} ds) − Z_{x,γ}^{−1} ∫ s e^{−f(x) − ∇f(x)⊤ s − (1/2) s⊤ (γ I + ∇²f(x)) s} ds
  = −Z_{x,γ}^{−1} e^{−f(x)} ∫ s e^{−∇f(x)⊤ s − (1/2) s⊤ (γ I + ∇²f(x)) s} ds.

The above expression is the mean of a distribution over s that is proportional to e^{−∇f(x)⊤ s − (1/2) s⊤ (γ I + ∇²f(x)) s}. We can approximate this mean using the saddle point method as the value of s that minimizes the exponent, to get

 x − ⟨x′; x⟩ ≈ (∇²f(x) + γ I)^{−1} ∇f(x).
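As a sanity check of this approximation in one dimension, where the integral can be computed numerically, note that for a quadratic f the formula is exact; `lam`, `gamma` and `x0` below are arbitrary illustrative values.

```python
import numpy as np

# For f(x) = lam * x**2 / 2 the Gibbs average <x'> is Gaussian, so
# x - <x'> should equal (f''(x) + gamma)^(-1) f'(x) = lam*x/(lam+gamma).
lam, gamma, x0 = 2.0, 0.5, 1.3
xp = np.linspace(-20.0, 20.0, 200001)            # integration grid
w = np.exp(-0.5 * lam * xp**2 - 0.5 * gamma * (x0 - xp)**2)
mean = float((xp * w).sum() / w.sum())           # numerical <x'; x0>
approx = lam * x0 / (lam + gamma)
assert abs((x0 - mean) - approx) < 1e-6
```

For non-quadratic f the same comparison quantifies the error of the saddle point step.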

Let us denote A(x) = γ (∇²f(x) + γ I)^{−1}. Plugging this into the condition for smoothness, we have

 ∥∇F(x,γ) − ∇F(y,γ)∥ = ∥A(x) ∇f(x) − A(y) ∇f(y)∥ ≤ (sup_x ∥A(x)∥) β ∥x−y∥.

Unfortunately, we can only get a uniform bound if we assume that, for a small constant c > 0, no eigenvalue of ∇²f(x) lies in the set [−2γ − c, c]. This gives

 sup_x ∥A(x)∥ ≤ 1 / (1 + γ^{−1} c).

This shows that a smaller value of γ results in a smoother energy landscape, except at places with very flat directions. The Lipschitz constant also decreases by the same factor.
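A small numeric check of this bound, assuming (our reading of the condition) that the excluded set of Hessian eigenvalues is [−2γ − c, c]; the values of `gamma`, `c` and the eigenvalues below are illustrative.

```python
import numpy as np

gamma, c = 0.1, 0.05
# Hessian eigenvalues, all outside the excluded set [-2*gamma - c, c]:
eigs = np.array([-5.0, -0.25, 0.05, 1.0, 10.0])
assert np.all((eigs >= c) | (eigs <= -2.0 * gamma - c))
# Eigenvalues of A(x) = gamma * (Hessian + gamma * I)^(-1):
a_eigs = np.abs(gamma / (eigs + gamma))
assert a_eigs.max() <= 1.0 / (1.0 + c / gamma) + 1e-12
```

The bound is attained at the boundary eigenvalues λ = c and λ = −2γ − c, which is why the exclusion of the interval around −γ is necessary.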

Appendix C Connection to variational inference

The fundamental motivations of (stochastic) variational inference (SVI) and local entropy are similar: they both aim to generalize well by constructing a distribution on the weight space. In this section, we explore whether they are related and how one might reconcile the theoretical and algorithmic implications of the local entropy objective with that of SVI.

Let Ξ denote the entire dataset, z denote the weights of a deep neural network, and x be the parameters of a variational distribution q_x(z). The Evidence Lower Bound (ELBO) can then be written as

 log p(Ξ) ≥ E_{z∼q_x(z)} [log p(Ξ | z)] − KL(q_x(z) ∥ p(z));   (13)

where p(z) denotes a parameter-free prior on the weights, and the KL-divergence term controls how closely the posited posterior q_x(z) must match this prior while fitting the data. Stochastic variational inference involves maximizing the right-hand side of the above equation with respect to x, after choosing a suitable prior p(z) and a family of distributions q_x(z). These choices are typically dictated by the ease of sampling z ∼ q_x(z), e.g., a mean-field model where q_x(z) factorizes over the coordinates of z, and by being able to compute the KL-divergence term, e.g., for a mixture of Gaussians.

On the other hand, if we define the loss as the negative log-likelihood of the data, viz. f(z; Ξ) = −log p(Ξ | z), we can write the logarithm of the local entropy in Eqn. (4) as

 log F(x,γ) = log ∫_{z ∈ R^n} exp[−f(z; Ξ) − (γ/2) ∥x−z∥²] dz
            ≥ ∫_{z ∈ R^n} [log p(Ξ | z) − (γ/2) ∥x−z∥²] dz;   (14)

by an application of Jensen’s inequality. It is thus clear that Eqns. (13) and (14) are very different in general, and one cannot choose a prior or a variational family that makes them equivalent and thereby interpret local entropy as an ELBO.

Eschewing rigor, if we modify Eqn. (13) to allow the prior to depend upon x, we can see that the two lower bounds above are equivalent iff q_x(z) belongs to a “flat variational family”, i.e., uniform distributions with x as the mean, with the prior taken to be p_x(z) ∝ exp(−(γ/2) ∥x−z∥²). We emphasize that the distribution p_x(z) depends on the parameters x themselves and is thus not really a prior, or one that can be derived using the ELBO.

This “moving prior” is absent in variational inference and is, indeed, a crucial feature of the local entropy objective. The gradient of local entropy in Eqn. (7) clarifies this point:

 ∇F(x,γ) = −γ (x − ⟨z; Ξ⟩) = −γ (x − E_{z∼r(z;x)} [z]);

where the distribution r(z; x) is given by

 r(z; x) ∝ p(Ξ | z) exp(−(γ/2) ∥x−z∥²);

it thus contains a data likelihood term along with a prior that “moves” with the current iterate x.

Let us remark that methods in the deep learning literature that average the gradient over perturbations in a neighborhood of x (Mobahi, 2016) or over noisy activation functions (Gulcehre et al., 2016) can be interpreted as computing the data likelihood term of the ELBO (without the KL term); such averaging is thus different from local entropy.

C.1 Comparison with SGLD

We use stochastic gradient Langevin dynamics (cf. Appendix A) to estimate the gradient of local entropy in Alg. 1. It is natural, then, to ask whether vanilla SGLD performs as well as local entropy. To this end, we compare the performance of SGLD on two prototypical networks: LeNet on MNIST and All-CNN-BN on CIFAR-10. We follow the experiments in Welling & Teh (2011) and Chen et al. (2015) and use their polynomially decaying learning rate schedule, whose initial value and decay constants are hyper-parameters. We make sure that other architectural aspects (dropout, batch-normalization) and regularization (weight decay) are consistent with the experiments in Sec. 5.
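The schedule of Welling & Teh (2011) has the polynomially decaying form η_t = a (b + t)^(−γ); a sketch with illustrative constants in place of the tuned values from our search (the decay exponent here is unrelated to the scope parameter γ of local entropy):

```python
def sgld_lr(t, a=0.05, b=10.0, decay=0.55):
    """Polynomially decaying SGLD step size eta_t = a * (b + t)**(-decay),
    the schedule of Welling & Teh (2011); a, b and decay are
    illustrative hyper-parameters."""
    return a * (b + t) ** (-decay)
```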

Even after a hyper-parameter search, the test errors we obtained with SGLD on LeNet and All-CNN-BN were much worse than our results with Entropy-SGD, and SGLD also converged considerably more slowly. For comparison, the authors in Chen et al. (2015) report results on MNIST with a slightly larger network. Our results with local entropy on RNNs are also much better than those reported in Gan et al. (2016) for SGLD: we obtain a lower test perplexity on the PTB dataset for the same model, as well as a lower test perplexity for char-LSTM on the War and Peace dataset.

Training deep networks with SGLD, or with more sophisticated MCMC algorithms such as SGHMC or SGNHT (Chen et al., 2014; Neal, 2011), to errors similar to those of SGD is difficult, and the lack of such results in the literature corroborates our experimental experience. Roughly speaking, local entropy is effective because it operates on a transformation of the energy landscape that exploits entropic effects. Conventional MCMC techniques such as SGLD or Nosé-Hoover thermostats (Ding et al., 2014) can only trade energy for entropy via the temperature parameter, which does not allow direct use of the geometric information of the energy landscape and does not help with narrow minima.