1 Introduction
The development and assessment of optimization methods for the training of deep neural networks has brought forward novel questions that call for new theoretical insights and computational techniques [3]. The performance of a network is determined by its ability to generalize, and choosing the network parameters by finding the global minimizer of the loss may be not only infeasible, but also undesirable. In fact, training to a prescribed accuracy with competing optimization schemes may lead, consistently, to different generalization errors [11]. A possible explanation is that parameters in flat local minima of the loss give better generalization [10], [11], [5], [4]
and that certain schemes favor convergence to wide valleys of the loss function. These observations have led to the design of algorithms that employ gradient descent on a regularized loss, actively seeking minima located in wide valleys of the original loss
[4]. While it has been demonstrated that the flatness of minima cannot fully explain generalization in deep learning [6], [15], there are various heuristic
[2], theoretical [5], and empirical [4] arguments that support regularizing the loss. In this paper we aim to provide new understanding of two such regularizations, referred to as local entropy and heat regularization. Our first contribution is to introduce variational characterizations for both regularized loss functions. These characterizations, drawn from the literature on large deviations [7], naturally suggest a two-step scheme for their optimization, based on the iterative shift of a probability density and the calculation of a best Gaussian approximation in Kullback–Leibler divergence. The schemes for both regularized losses differ only in the argument of the (asymmetric) Kullback–Leibler divergence that they minimize. Local entropy minimizes over the second argument, and the solution is given by moment matching; heat regularization minimizes over the first argument, and its solution is defined implicitly.
The second contribution of this paper is to investigate some theoretical and computational implications of the variational characterizations. On the theoretical side, we prove that if the best Kullback–Leibler approximations could be computed exactly, then the regularized losses would decrease monotonically along the sequence of optimization iterates. This monotonic behavior suggests that the two-step iterative optimization schemes have the potential of being stable provided that the Kullback–Leibler minimizers can be computed accurately. On the computational side, we show that the two-step iterative optimization of local entropy agrees with gradient descent on the regularized loss provided that the learning rate matches the regularization parameter. Thus, the two-step iterative optimization of local entropy computes gradients implicitly in terms of expected values; this observation opens an avenue for gradient-free, parallelizable training of neural networks based on sampling. In contrast, the scheme for heat regularization finds the best Kullback–Leibler Gaussian approximation over the first argument, and its computation via stochastic optimization [17], [16] involves evaluation of gradients of the original loss.
Finally, our third contribution is to perform a numerical case study assessing the performance of various implementations of the two-step iterative optimization of local entropy and heat-regularized functionals. These implementations differ in how the Kullback–Leibler minimization is computed and in the argument that is minimized. Our experiments suggest, on the one hand, that the computational overhead of the regularized methods far exceeds the cost of performing stochastic gradient descent on the original loss. On the other hand, they also suggest that for moderate-size architectures, where the best Kullback–Leibler Gaussian approximations can be computed effectively, the generalization error with regularized losses is more stable than for stochastic gradient descent over the original loss. For this reason, we investigate using stochastic gradient descent on the original loss for the first parameter updates, and then switching to optimization over a regularized loss. We also investigate numerically the choice and scoping of the regularization parameter. Our understanding, after conducting thorough numerical experiments, is that while sampling-based optimization of local entropy has the potential of being practical if parallelization is exploited and backpropagation gradient calculations are expensive, existing implementations of regularized methods in standard architectures are more expensive than stochastic gradient descent and do not clearly outperform it.
Several research directions stem from this work. A broad one is to explore the use of local entropy and heat regularizations in complex optimization problems outside of deep learning, e.g. in the computation of maximum a posteriori estimates in high-dimensional Bayesian inverse problems. A more concrete direction is to generalize the Gaussian approximations within our two-step iterative schemes, updating both the mean and the covariance of the Gaussian measures.
The rest of the paper is organized as follows. Section 2 provides background on optimization problems arising in deep learning, and reviews various analytical and statistical interpretations of local entropy and heat-regularized losses. In Section 3 we introduce the variational characterization of local entropy, and derive from it a two-step iterative optimization scheme. Section 4 contains analogous developments for heat regularization. Our presentation in Section 4 is parallel to that in Section 3, as we aim to showcase the unity that comes from the variational characterizations of both loss functions. Section 5 reviews various algorithms for Kullback–Leibler minimization, and we conclude in Section 6 with a numerical case study.
2 Background
Neural networks are revolutionizing numerous fields including image and speech recognition, language processing, and robotics [13], [9]. Broadly, neural networks are parametric families of functions used to assign outputs to inputs. The parameters of a network are chosen by solving a nonconvex optimization problem of the form
(2.1) \[ \min_{\theta \in \mathbb{R}^d} L(\theta), \qquad L(\theta) := \sum_{i=1}^{N} \ell_i(\theta), \]
where each $\ell_i$ is a loss associated with a training example. Most popular training methods employ backpropagation (i.e. automatic differentiation) to perform some variant of gradient descent over the loss $L$. In practice, gradients are approximated using a random subsample of the training data known as a minibatch. Importantly, accurate solution of the optimization problem (2.1) is not the end goal of neural networks; their performance is rather determined by their generalization or testing error, that is, by their ability to accurately assign outputs to unseen examples. A substantial body of literature [4], [15], [3] has demonstrated that optimization procedures with similar training error may consistently lead to different testing error. For instance, large minibatch sizes have been shown to result in poor generalization [11]. Several explanations have been set forth, including overfitting, attraction to saddle points, and explorative properties [11]. A commonly accepted theory is that flat local minima of the loss lead to better generalization than sharp minima [10], [4], [11], [5]. As noted in [6] and [15], this explanation is not fully convincing: due to the high number of symmetries in deep networks one can typically find many parameters that have different flatness but define the same network, and reparameterization may alter the flatness of minima. While a complete understanding is missing, the observations above have prompted the development of new algorithms that actively seek minima in wide valleys of the loss. In this paper we provide new insights on potential advantages of two such approaches, based on local entropy and heat regularization.
2.1 Background on Local-Entropy Regularization
We will first study optimization of networks performed on a regularization of the loss known as local entropy, given by
(2.2) \[ F_\gamma(\theta) := -\log \int_{\mathbb{R}^d} \exp\bigl(-L(u)\bigr)\, N(u;\theta,\gamma I)\, du, \]
where here and throughout $N(\cdot\,;\theta,\gamma I)$ denotes the Gaussian density in $\mathbb{R}^d$ with mean $\theta$ and covariance $\gamma I$. For given $\theta$, the local entropy $F_\gamma(\theta)$ averages values of $L$, focusing on a neighborhood of $\theta$ whose size is determined by $\gamma$. Thus, for $F_\gamma(\theta)$ to be small it is required that $L$ is small throughout a neighborhood of $\theta$. Note that $F_\gamma$ is equivalent to $L$ as $\gamma \to 0$, and becomes constant as $\gamma \to \infty$. Figure 1 shows that local entropy flattens sharp isolated minima, and deepens wider minima.

A natural statistical interpretation of minimizing the loss $L$ is in terms of maximum likelihood estimation. Given training data, one may define the likelihood function
(2.3) \[ \mathcal{L}(\theta) := \exp\bigl(-L(\theta)\bigr). \]
Thus, minimizing $L$ corresponds to maximizing the likelihood $\mathcal{L}$. In what follows we assume that $\mathcal{L}$ is normalized to integrate to $1$. Minimization of local entropy can also be interpreted in statistical terms, now as computing a maximum marginal likelihood. Consider a Gaussian prior distribution $N(\cdot\,;\theta,\gamma I)$, indexed by a hyperparameter $\theta$, on the parameters of the neural network. Moreover, assume a likelihood as in equation (2.3). Then, minimizing local entropy corresponds to maximizing the marginal likelihood
(2.4) \[ \int_{\mathbb{R}^d} \mathcal{L}(u)\, N(u;\theta,\gamma I)\, du. \]
We remark that the right-hand side of equation (2.4) is the convolution of the likelihood with a Gaussian, and so we have
(2.5) \[ \exp\bigl(-F_\gamma(\theta)\bigr) = \bigl(\mathcal{L} * N(\cdot\,;0,\gamma I)\bigr)(\theta). \]
Thus, local entropy can be interpreted as a regularization of the likelihood $\mathcal{L}$.
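To make the flattening effect concrete, note that by (2.2) the quantity $\exp(-F_\gamma(\theta))$ is the average of the likelihood $\exp(-L)$ under the smoothing Gaussian, so local entropy can be estimated by plain Monte Carlo. The following sketch (Python with NumPy; the 1-D loss is a hypothetical illustration, not one of the losses studied later) shows the raw loss preferring a sharp, deep minimum while its local entropy prefers the wide one:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(u):
    # Hypothetical 1-D loss (not from the paper): a wide minimum near u = -2
    # and a sharper but deeper minimum near u = +2.
    return 1.0 - np.exp(-0.5 * (u + 2.0) ** 2) - 1.5 * np.exp(-50.0 * (u - 2.0) ** 2)

def local_entropy(theta, gamma, n=200_000):
    # Monte Carlo estimate of the local entropy (2.2):
    #   F_gamma(theta) = -log E_{u ~ N(theta, gamma)}[exp(-L(u))],
    # using samples from the smoothing Gaussian.
    u = theta + np.sqrt(gamma) * rng.standard_normal(n)
    return -np.log(np.mean(np.exp(-loss(u))))
```

With $\gamma = 0.5$ the sharp well at $u = 2$ is averaged away, while the wide well at $u = -2$ survives the smoothing.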
2.2 Background on Heat Regularization
We will also consider smoothing of the loss through the heat regularization, defined by
\[ G_\gamma(\theta) := \int_{\mathbb{R}^d} L(u)\, N(u;\theta,\gamma I)\, du. \]
Note that $G_\gamma$ regularizes the loss $L$ directly, rather than the likelihood $\mathcal{L} = \exp(-L)$:
\[ G_\gamma = L * N(\cdot\,;0,\gamma I). \]
Local entropy and heat regularization are, clearly, rather different. Figure 2
shows that while heat regularization smooths the energy landscape, the relative macroscopic depth of local minima is marginally modified. Our paper highlights, however, the common underlying structure of the resulting optimization problems. Further analytical insights on both regularizations in terms of partial differential equations and optimal control can be found in
[5].

2.3 Notation
For any $\theta \in \mathbb{R}^d$ and $\gamma > 0$ we define the probability density
(2.6) \[ p_\theta(u) := \frac{1}{Z_\theta} \exp\bigl(-L(u)\bigr)\, N(u;\theta,\gamma I), \]
where $Z_\theta$ is a normalization constant. These densities will play an important role throughout.
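For a quadratic loss the density (2.6) is Gaussian and everything is available in closed form, which makes a convenient numerical sanity check. The sketch below (the choice $L(u) = u^2/2$ is purely illustrative) tabulates $p_\theta$ on a grid and verifies its normalization, its mean $\theta/(1+\gamma)$, and the identity $Z_\theta = \exp(-F_\gamma(\theta))$:

```python
import numpy as np

# Grid-based sanity check of the density (2.6) for the illustrative quadratic
# loss L(u) = u^2 / 2, for which p_theta is Gaussian with mean theta/(1+gamma).
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

theta, gamma = 1.5, 0.5
gaussian = np.exp(-0.5 * (x - theta) ** 2 / gamma) / np.sqrt(2 * np.pi * gamma)
unnorm = np.exp(-0.5 * x**2) * gaussian      # exp(-L(u)) N(u; theta, gamma)
Z = unnorm.sum() * dx                        # normalization constant Z_theta
p = unnorm / Z                               # the density p_theta

mean_p = (x * p).sum() * dx                  # closed form: theta / (1 + gamma)
```

For this loss the local entropy is $F_\gamma(\theta) = \theta^2/(2(1+\gamma)) + \log(1+\gamma)/2$, and indeed $-\log Z_\theta$ recovers it on the grid.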
We denote the Kullback–Leibler divergence between densities $p$ and $q$ in $\mathbb{R}^d$ by
(2.7) \[ D_{\mathrm{KL}}(p \,\|\, q) := \int_{\mathbb{R}^d} p(u)\, \log\frac{p(u)}{q(u)}\, du. \]
Kullback–Leibler is a divergence in that $D_{\mathrm{KL}}(p\,\|\,q) \ge 0$, with equality if and only if $p = q$. However, the Kullback–Leibler divergence is not a distance; in particular, it is not symmetric. This fact will be relevant in the rest of this paper.
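For one-dimensional Gaussians the divergence (2.7) is available in closed form, $D_{\mathrm{KL}}\bigl(N(m_1,s_1^2)\,\|\,N(m_2,s_2^2)\bigr) = \log(s_2/s_1) + \bigl(s_1^2 + (m_1-m_2)^2\bigr)/(2 s_2^2) - 1/2$, which makes the asymmetry easy to quantify:

```python
import math

def kl_gauss(m1, s1, m2, s2):
    # Closed-form D_KL(N(m1, s1^2) || N(m2, s2^2)) between 1-D Gaussians.
    return math.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2.0 * s2**2) - 0.5

forward = kl_gauss(0.0, 1.0, 1.0, 2.0)   # D( N(0,1) || N(1,4) )
reverse = kl_gauss(1.0, 2.0, 0.0, 1.0)   # D( N(1,4) || N(0,1) )
```

Swapping the arguments changes the value substantially; this asymmetry is precisely what distinguishes the two regularized schemes studied below.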
3 Local Entropy: Variational Characterization and Optimization
In this section we introduce a variational characterization of local entropy. We will employ this characterization to derive a monotonic algorithm for its minimization. The following result is well known in large deviation theory [7]. We present its proof for completeness.
Theorem 3.1. The local entropy admits the following variational characterization:
(3.8) \[ F_\gamma(\theta) = \min_{p} \Bigl\{ \int L\, dp + D_{\mathrm{KL}}\bigl(p \,\|\, N(\cdot\,;\theta,\gamma I)\bigr) \Bigr\}, \]
where the minimum is taken over probability densities in $\mathbb{R}^d$. Moreover, the density $p_\theta$ defined in equation (2.6) achieves the minimum in (3.8).
Proof. For any density $p$,
(3.9) \[ \int L\, dp + D_{\mathrm{KL}}\bigl(p \,\|\, N(\cdot\,;\theta,\gamma I)\bigr) = F_\gamma(\theta) + D_{\mathrm{KL}}(p \,\|\, p_\theta). \]
Hence,
\[ \int L\, dp_\theta + D_{\mathrm{KL}}\bigl(p_\theta \,\|\, N(\cdot\,;\theta,\gamma I)\bigr) = F_\gamma(\theta), \]
showing that $p_\theta$ achieves the minimum. To conclude, note that $D_{\mathrm{KL}}(p\,\|\,p_\theta) \ge 0$, and so taking the minimum over $p$ on both sides of equation (3.9) and rearranging gives equation (3.8).
3.1 Two-Step Iterative Optimization
From the variational characterization (3.8) it follows that
(3.10) \[ \min_\theta F_\gamma(\theta) = \min_\theta \min_{p} \Bigl\{ \int L\, dp + D_{\mathrm{KL}}\bigl(p \,\|\, N(\cdot\,;\theta,\gamma I)\bigr) \Bigr\}. \]
Thus, a natural iterative approach to finding the minimizer of $F_\gamma$ is to alternate between i) minimization of the term in curly brackets over densities $p$, and ii) finding the associated minimizer over $\theta$. For the former we can employ the explicit formula given by equation (2.6), while for the latter we note that the integral term does not depend on the variable $\theta$, and that the minimizer of the map
\[ \theta \mapsto D_{\mathrm{KL}}\bigl(p \,\|\, N(\cdot\,;\theta,\gamma I)\bigr) \]
is unique, and given by the expected value of $p$. The statistical interpretation of these two steps is perhaps most natural through the variational formulation of the Bayesian update [8]: the first step finds a posterior distribution associated with likelihood $\exp(-L)$ and prior $N(\cdot\,;\theta,\gamma I)$; the second computes the posterior expectation, which is used to define the prior mean in the next iteration. It is worth noting the parallel between this two-step optimization procedure and the empirical Bayes interpretation of local entropy mentioned in Section 2.
In short, the expression (3.10) suggests the following simple scheme (Algorithm 1) for minimizing local entropy: given $\theta_k$, i) define $p_{\theta_k}$ as in equation (2.6); ii) set $\theta_{k+1} = \mathbb{E}_{p_{\theta_k}}[u]$.
In practice, the expectation in the second step needs to be approximated. We will explore the potential use of gradient-free sampling schemes in Subsection 5.1.2 and in our numerical experiments.
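The two steps can be sketched in a few lines of gradient-free code, estimating the expectation $\mathbb{E}_{p_\theta}[u]$ by self-normalized importance sampling with the Gaussian $N(\cdot\,;\theta,\gamma I)$ as proposal (one of the options discussed later in Section 5). The quadratic test loss is an illustrative assumption with a known answer, not one of the paper's experimental objectives:

```python
import numpy as np

rng = np.random.default_rng(1)

def two_step_local_entropy(loss, theta0, gamma, n_iters=50, n_samples=5000):
    # Gradient-free sketch of the two-step scheme: at each iteration,
    #  (i)  form p_theta(u) ∝ exp(-L(u)) N(u; theta, gamma I)  [equation (2.6)],
    #  (ii) set theta <- E_{p_theta}[u],
    # estimating the expectation by self-normalized importance sampling
    # with proposal N(theta, gamma I).
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iters):
        u = theta + np.sqrt(gamma) * rng.standard_normal((n_samples, theta.size))
        w = np.exp(-loss(u))                       # unnormalized weights exp(-L)
        theta = (w[:, None] * u).sum(axis=0) / w.sum()
    return theta

quadratic = lambda u: 0.5 * (u**2).sum(axis=1)     # L(u) = |u|^2 / 2, minimum at 0
theta_final = two_step_local_entropy(quadratic, [3.0, -2.0], gamma=0.1)
```

For this loss the exact update is $\theta_{k+1} = \theta_k/(1+\gamma)$, so the iterates should contract toward the minimizer at the origin.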
A seemingly unrelated approach to minimizing the local entropy is to employ gradient descent and set
(3.11) \[ \theta_{k+1} = \theta_k - \tau\, \nabla F_\gamma(\theta_k), \]
where $\tau$ is a learning rate. We now show that the iterates given by Algorithm 1 agree with those given by gradient descent with learning rate $\tau = \gamma$.
By direct computation,
\[ \nabla F_\gamma(\theta) = \frac{1}{\gamma}\bigl(\theta - \mathbb{E}_{p_\theta}[u]\bigr). \]
Therefore,
(3.12) \[ \theta_k - \gamma\, \nabla F_\gamma(\theta_k) = \mathbb{E}_{p_{\theta_k}}[u], \]
establishing that Algorithm 1 performs gradient descent with learning rate $\tau = \gamma$. This choice of learning rate leads to monotonic decrease of the local entropy, as we show in the next subsection.
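The agreement between the two-step update and a gradient step with learning rate $\gamma$ can be checked numerically in a case with closed forms. For the illustrative quadratic loss $L(u) = u^2/2$, the local entropy is $F_\gamma(\theta) = \theta^2/(2(1+\gamma)) + \log(1+\gamma)/2$ and the mean of $p_\theta$ is $\theta/(1+\gamma)$:

```python
import numpy as np

def F(theta, gamma):
    # Closed-form local entropy for the illustrative quadratic loss L(u) = u^2/2.
    return theta**2 / (2.0 * (1.0 + gamma)) + 0.5 * np.log(1.0 + gamma)

theta, gamma, h = 1.7, 0.3, 1e-5
grad_F = (F(theta + h, gamma) - F(theta - h, gamma)) / (2.0 * h)   # finite difference
gd_update = theta - gamma * grad_F            # gradient descent step with rate gamma
ts_update = theta / (1.0 + gamma)             # two-step update: E_{p_theta}[u]
```

Up to floating-point error the two updates coincide, as equation (3.12) predicts.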
3.2 Majorization-Minimization and Monotonicity
We now show that Algorithm 1 is a majorization-minimization algorithm. Let
\[ M(\theta; \tilde{\theta}) := \int L\, dp_{\tilde{\theta}} + D_{\mathrm{KL}}\bigl(p_{\tilde{\theta}} \,\|\, N(\cdot\,;\theta,\gamma I)\bigr), \]
where $p_{\tilde{\theta}}$ is as in (2.6). It follows that $M(\theta;\theta) = F_\gamma(\theta)$ for all $\theta$, and that $M(\theta;\tilde{\theta}) \ge F_\gamma(\theta)$ for arbitrary $\tilde{\theta}$; in other words, $M$ is a majorizer for $F_\gamma$. In addition, it is easy to check that the updates
\[ \theta_{k+1} = \arg\min_\theta M(\theta; \theta_k) \]
coincide with the updates in Algorithm 1. As a consequence we have the following theorem.

Theorem 3.2 (Monotonicity and stationarity of Algorithm 1). The sequence $\{\theta_k\}$ generated by Algorithm 1 satisfies
\[ F_\gamma(\theta_{k+1}) \le F_\gamma(\theta_k). \]
Moreover, equality holds only when $\theta_k$ is a critical point of $F_\gamma$.

Proof. The monotonicity follows immediately from the fact that our algorithm can be interpreted as a majorization-minimization scheme. For the stationarity, note that equation (3.12) shows that $\theta_{k+1} = \theta_k$ if and only if $\nabla F_\gamma(\theta_k) = 0$.
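For the same illustrative quadratic loss used earlier, the guaranteed monotonic decrease can be observed directly: the exact Algorithm 1 update contracts $\theta$ by the factor $1/(1+\gamma)$, and the closed-form local entropy decreases strictly along the iterates:

```python
import numpy as np

def F(theta, gamma):
    # Closed-form local entropy for the illustrative quadratic loss L(u) = u^2/2.
    return theta**2 / (2.0 * (1.0 + gamma)) + 0.5 * np.log(1.0 + gamma)

gamma, theta = 0.5, 4.0
values = [F(theta, gamma)]
for _ in range(20):
    theta = theta / (1.0 + gamma)   # exact Algorithm 1 update: E_{p_theta}[u]
    values.append(F(theta, gamma))
```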
4 Heat Regularization: Variational Characterization and Optimization
In this section we consider direct regularization of the loss function $L$, as opposed to regularization of the density function $\exp(-L)$. The following result is analogous to Theorem 3.1. Its proof is similar and hence omitted.
Theorem 4.1. The heat regularization admits the following variational characterization:
(4.13) \[ G_\gamma(\theta) = \min_{p} \Bigl\{ D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I) \,\|\, p\bigr) + \log \int \exp\bigl(L(u)\bigr)\, p(u)\, du \Bigr\}, \]
where the minimum is taken over probability densities in $\mathbb{R}^d$. Moreover, the density $p_\theta$ defined in equation (2.6) achieves the minimum in (4.13).
4.1 Two-Step Iterative Optimization
From equation (4.13) it follows that
(4.14) \[ \min_\theta G_\gamma(\theta) = \min_\theta \min_{p} \Bigl\{ D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I) \,\|\, p\bigr) + \log \int \exp\bigl(L(u)\bigr)\, p(u)\, du \Bigr\}. \]
In complete analogy with Section 3, equation (4.14) suggests the following optimization scheme (Algorithm 2) to minimize $G_\gamma$: given $\theta_k$, i) define $p_{\theta_k}$ as in equation (2.6); ii) let $\theta_{k+1}$ be a minimizer of $\theta \mapsto D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I)\,\|\,p_{\theta_k}\bigr)$.
The key difference with Algorithm 1 is that the arguments of the Kullback–Leibler divergence are reversed. While $\theta \mapsto D_{\mathrm{KL}}\bigl(p \,\|\, N(\cdot\,;\theta,\gamma I)\bigr)$ has a unique minimizer given by $\mathbb{E}_p[u]$, minimizers of $\theta \mapsto D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I)\,\|\,p\bigr)$ need not be unique. Moreover, the latter minimization is implicitly defined via an expectation with respect to a distribution that depends on the parameter $\theta$, and its computation via a Robbins–Monro [17] approach requires repeated evaluation of the gradient of $L$. We will outline the practical implementation of this minimization in Section 5.2.
4.2 Majorization-Minimization and Monotonicity
By an argument parallel to that of Subsection 3.2, the updates of the two-step scheme for heat regularization monotonically decrease $G_\gamma$.
5 Gaussian Kullback–Leibler Minimization
In Sections 3 and 4 we considered the local entropy and heat regularized loss and introduced twostep iterative optimization schemes for both loss functions. We summarize these schemes here for comparison purposes:
Optimization of $F_\gamma$ (local entropy). Let $\theta_0 \in \mathbb{R}^d$ and for $k = 0, 1, \dots$ do:

1. Define $p_{\theta_k}$ as in equation (2.6).

2. Let $\theta_{k+1}$ be the minimizer of $\theta \mapsto D_{\mathrm{KL}}\bigl(p_{\theta_k} \,\|\, N(\cdot\,;\theta,\gamma I)\bigr)$.

Optimization of $G_\gamma$ (heat regularization). Let $\theta_0 \in \mathbb{R}^d$ and for $k = 0, 1, \dots$ do:

1. Define $p_{\theta_k}$ as in equation (2.6).

2. Let $\theta_{k+1}$ be a minimizer of $\theta \mapsto D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I) \,\|\, p_{\theta_k}\bigr)$.
Both schemes involve finding, at each iteration, the mean vector that gives the best approximation, in Kullback–Leibler, to a probability density. For local entropy the minimization is with respect to the second argument of the Kullback–Leibler divergence, while for heat regularization the minimization is with respect to the first argument. It is useful to compare, in intuitive terms, the two different minimization problems, both leading to a "best Gaussian". In what follows we drop the subscripts and denote by $p$ the target density and by $q$ the Gaussian over whose mean we optimize.
Note that in order to minimize $D_{\mathrm{KL}}(q\,\|\,q')$... in order to minimize $D_{\mathrm{KL}}(q\,\|\,p)$ we need the ratio $q/p$ to be controlled over the support of $q$, which can happen even when $q$ covers only part of the mass of $p$. This illustrates the fact that minimizing $D_{\mathrm{KL}}(q\,\|\,p)$ may miss components of $p$. For example, in the left panel of Figure 3, $p$ is a bimodal-like distribution, but minimizing $D_{\mathrm{KL}}(q\,\|\,p)$ over Gaussians $q$ can only give a single-mode approximation, which is achieved by matching one of the modes (minimizers are not guaranteed to be unique); we may think of this as "mode-seeking". In contrast, when minimizing $D_{\mathrm{KL}}(p\,\|\,q)$ over Gaussians $q$ we want the ratio $p/q$, in which $q$ appears as the denominator, to be controlled. This implies that wherever $p$ has some mass we must let $q$ also have some mass there, in order to keep $p/q$ as close as possible to one. Therefore the minimization is carried out by allocating the mass of $q$ so that, on average, the discrepancy between $p$ and $q$ is minimized, as shown in the right panel of Figure 3; hence the label "mean-seeking".
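The mode-seeking versus mean-seeking behavior can be reproduced with a small quadrature experiment on a bimodal target (an illustrative stand-in for the density of Figure 3, not data from our experiments):

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Bimodal target density p: two well-separated modes of equal weight.
p = 0.5 * gauss(x, -2.0, 0.5) + 0.5 * gauss(x, 2.0, 0.5)

def kl(a, b):
    # D_KL(a || b) by quadrature on the grid.
    mask = a > 1e-300
    return float(np.sum(a[mask] * np.log(a[mask] / b[mask])) * dx)

# Mean-seeking: the minimizer of q -> D(p || q) over Gaussians is moment matching.
mean_p = float(np.sum(x * p) * dx)
var_p = float(np.sum((x - mean_p) ** 2 * p) * dx)
q_mean = gauss(x, mean_p, np.sqrt(var_p))

# Mode-seeking: minimize q -> D(q || p) by grid search over the Gaussian mean
# (standard deviation fixed at 0.5 for simplicity); the optimum sits on a mode.
means = np.linspace(-4.0, 4.0, 161)
best_m = min(means, key=lambda m: kl(gauss(x, m, 0.5), p))
```

The moment-matched (mean-seeking) Gaussian spreads over both modes and gives a much smaller $D_{\mathrm{KL}}(p\,\|\,q)$ than a single-mode Gaussian, while the mode-seeking search settles on one of the two modes.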
In the following two sections we show that, in addition to giving rather different solutions, the argument of the KullbackLeibler divergence that is minimized has computational consequences.
5.1 Minimization of $D_{\mathrm{KL}}\bigl(p_{\theta_k} \,\|\, N(\cdot\,;\theta,\gamma I)\bigr)$
The solution to this minimization problem is unique and given by $\theta_{k+1} = \mathbb{E}_{p_{\theta_k}}[u]$. For notational convenience we drop the subscript $k$ and consider calculation of
(5.15) \[ \mathbb{E}_{p_\theta}[u] = \int_{\mathbb{R}^d} u\, p_\theta(u)\, du. \]
In our numerical experiments we will approximate these expectations using stochastic gradient Langevin dynamics and importance sampling. Both methods are reviewed in the next two subsections.
5.1.1 Stochastic Gradient Langevin Dynamics
The first method that we use to approximate the expectation (5.15), and thus the best-Gaussian approximation for local entropy optimization, is stochastic gradient Langevin dynamics (SGLD). The algorithm was introduced in [19] and its use for local entropy minimization was investigated in [4]. The SGLD algorithm is summarized below.

1. Define $u_0 = \theta$.

2. For $j = 0, \dots, J-1$ do:
\[ u_{j+1} = u_j - \frac{\epsilon_j}{2}\Bigl(\nabla L(u_j) + \frac{u_j - \theta}{\gamma}\Bigr) + \sqrt{\epsilon_j}\, \eta_j, \qquad \eta_j \sim N(0, I), \]
and approximate $\mathbb{E}_{p_\theta}[u]$ by the sample average of the iterates $u_j$.
When the loss $L$ is defined by a large sum over training data, minibatches can be used in the evaluation of the gradients $\nabla L(u_j)$.
In our numerical experiments we initialize the Langevin chain at the last iteration of the previous parameter update. Note that SGLD can be thought of as a modification of gradient-based Metropolis–Hastings Markov chain Monte Carlo algorithms, where the accept–reject mechanism is replaced by a suitable decay of the step sizes $\epsilon_j$.
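A minimal SGLD sketch for approximating (5.15), with a constant step size for simplicity and a full gradient in place of a minibatch gradient (both simplifying assumptions relative to the discussion above):

```python
import numpy as np

rng = np.random.default_rng(2)

def sgld_mean(grad_L, theta, gamma, n_steps=20_000, eps=1e-2, burn=2_000):
    # SGLD targeting p_theta(u) ∝ exp(-L(u)) N(u; theta, gamma I) of (2.6).
    # The drift combines the loss gradient with the Gaussian term (u - theta)/gamma;
    # in practice grad_L would be evaluated on a minibatch, and the step size
    # would decay rather than stay constant.
    u = theta.copy()
    total = np.zeros_like(theta)
    for j in range(n_steps):
        drift = grad_L(u) + (u - theta) / gamma
        u = u - 0.5 * eps * drift + np.sqrt(eps) * rng.standard_normal(u.shape)
        if j >= burn:
            total += u
    return total / (n_steps - burn)   # estimate of E_{p_theta}[u]

# Illustrative quadratic loss L(u) = |u|^2/2: the exact mean is theta/(1+gamma).
theta, gamma = np.array([2.0]), 1.0
est = sgld_mean(lambda u: u, theta, gamma)
```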
5.1.2 Importance Sampling
We will also investigate the use of importance sampling [14] to approximate the expectations (5.15); our main motivation in doing so is to avoid gradient computations, and hence to give an example of a training scheme that does not involve backpropagation.
Importance sampling is based on the observation that
\[ \mathbb{E}_{p_\theta}[u] = \frac{\mathbb{E}_{N(\cdot;\theta,\gamma I)}\bigl[u\, \exp(-L(u))\bigr]}{\mathbb{E}_{N(\cdot;\theta,\gamma I)}\bigl[\exp(-L(u))\bigr]}, \]
and an approximation of the right-hand side may be obtained by standard Monte Carlo approximation of the numerator and the denominator. Crucially, these Monte Carlo simulations are performed by sampling the Gaussian $N(\cdot\,;\theta,\gamma I)$ rather than the original density $p_\theta$. The importance sampling algorithm is then given by:

1. Sample $u^{(1)}, \dots, u^{(J)}$ from the Gaussian density $N(\cdot\,;\theta,\gamma I)$.

2. Compute the (unnormalized) weights
(5.16) \[ w^{(j)} := \exp\bigl(-L(u^{(j)})\bigr), \qquad j = 1, \dots, J, \]
and approximate
\[ \mathbb{E}_{p_\theta}[u] \approx \frac{\sum_{j=1}^{J} w^{(j)} u^{(j)}}{\sum_{j=1}^{J} w^{(j)}}. \]
Importance sampling is easily parallelizable. If $P$ processors are available, then each of them can be used to produce an estimate from its own batch of Gaussian samples, and the associated estimates can be subsequently consolidated.
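The importance sampling steps can be sketched as follows; the chunked accumulation mimics the consolidation of per-processor estimates, and the quadratic test loss is an illustrative assumption with known answer $\theta/(1+\gamma)$:

```python
import numpy as np

rng = np.random.default_rng(3)

def importance_mean(loss, theta, gamma, n_samples=20_000, n_chunks=4):
    # Estimate E_{p_theta}[u] with the weights (5.16): sample u ~ N(theta, gamma I),
    # weight by exp(-L(u)), and form the self-normalized weighted average.
    # Each chunk's partial numerator/denominator could be computed on a
    # separate processor and consolidated afterwards.
    d = theta.size
    num = np.zeros(d)
    den = 0.0
    for _ in range(n_chunks):
        u = theta + np.sqrt(gamma) * rng.standard_normal((n_samples // n_chunks, d))
        w = np.exp(-loss(u))                  # unnormalized weights (5.16)
        num += (w[:, None] * u).sum(axis=0)
        den += w.sum()
    return num / den

theta, gamma = np.array([1.0, -1.0]), 0.5
est = importance_mean(lambda u: 0.5 * (u**2).sum(axis=1), theta, gamma)
```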
While the use of importance sampling opens an avenue for gradient-free, parallelizable training of neural networks, our numerical experiments will show that a naive implementation without parallelization gives poor performance relative to SGLD or plain stochastic gradient descent (SGD) on the original loss. A potential explanation is the so-called curse of dimension for importance sampling [18], [1]. Another explanation is that the iterative structure of SGLD allows reuse of the previous parameter update to approximate the following one, while importance sampling does not afford such iterative updating. Finally, SGLD with minibatches is known to asymptotically produce unbiased estimates, while the introduction of minibatches in importance sampling introduces a bias.
5.2 Minimization of $D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I) \,\|\, p_{\theta_k}\bigr)$
A direct calculation shows that the preconditioned Euler–Lagrange equation for minimizing $\theta \mapsto D_{\mathrm{KL}}\bigl(N(\cdot\,;\theta,\gamma I)\,\|\,p_{\theta_k}\bigr)$ is given by
\[ \theta - \theta_k + \gamma\, \mathbb{E}_{N(\cdot;\theta,\gamma I)}\bigl[\nabla L(u)\bigr] = 0. \]
Here $\theta$ is implicitly defined as an expected value with respect to a distribution that depends on the parameter $\theta$ itself. The Robbins–Monro algorithm [17] allows one to estimate zeroes of functions defined in this way.

1. Define $m_0 = \theta_k$.

2. For $j = 0, \dots, J-1$ do:
(5.17) \[ m_{j+1} = m_j - a_j\bigl(m_j - \theta_k + \gamma\, \nabla L(\xi_j)\bigr), \qquad \xi_j \sim N(\cdot\,; m_j, \gamma I). \]
The Robbins–Monro approach to computing the Gaussian approximation in Hilbert space was studied in [16]. A suitable choice of step sizes is $a_j \propto j^{-\alpha}$ with $\alpha \in (1/2, 1]$, so that the usual Robbins–Monro conditions are satisfied. Note that Algorithm 5 gives a form of spatially-averaged gradient descent, which involves repeated evaluation of the gradient of the original loss. The use of temporal gradient averages has also been studied as a way to reduce the noise level of stochastic gradient methods [3].
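A sketch of the Robbins–Monro iteration, with a single draw $\xi_j \sim N(\cdot\,;m_j,\gamma I)$ per step standing in for the expectation (the quadratic loss, for which the root is $\theta_k/(1+\gamma)$, is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)

def robbins_monro_update(grad_L, theta_k, gamma, n_steps=5000, a0=0.5, alpha=0.75):
    # Robbins-Monro sketch of (5.17): seek m solving the Euler-Lagrange equation
    #   m - theta_k + gamma * E_{N(m, gamma I)}[grad L(u)] = 0,
    # replacing the expectation with a single draw xi ~ N(m, gamma I) per step.
    m = theta_k.copy()
    for j in range(1, n_steps + 1):
        a_j = a0 / j**alpha     # decaying steps: sum a_j = inf, sum a_j^2 < inf
        xi = m + np.sqrt(gamma) * rng.standard_normal(m.shape)
        m = m - a_j * (m - theta_k + gamma * grad_L(xi))
    return m

# Illustrative quadratic loss, grad L(u) = u: the root is m = theta_k / (1 + gamma).
theta_k, gamma = np.array([2.0]), 0.5
m = robbins_monro_update(lambda u: u, theta_k, gamma)
```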
To conclude, we remark that an alternative approach could be to employ Robbins–Monro directly to optimize $G_\gamma$. Gradient calculations would still be needed.
6 Numerical Experiments
In the following numerical experiments we investigate the practical use of local entropy and heat regularization in the training of neural networks. We present experiments on dense multilayered networks applied to a basic image classification task, viz. MNIST [12]. We implement Algorithms 3, 4, and 5
in TensorFlow, analyzing the effectiveness of each in comparison to stochastic gradient descent (SGD). We investigate whether the theoretical monotonicity of the regularized losses translates into monotonicity of the error on held-out test data. Additionally, we explore various choices for the hyperparameter $\gamma$ to illustrate the effects of variable levels of regularization. In accordance with the algorithms specified above, we employ importance sampling (IS) and stochastic gradient Langevin dynamics (SGLD) to approximate the expectation in (5.15), and the Robbins–Monro algorithm for heat regularization (HR).

6.1 Network Specification
Our experiments are carried out using the following networks:

Small Dense Network: Consisting of an input layer with 784 units and a 10-unit output layer, this toy network contains 7850 total parameters and achieves a test accuracy of 91.2% when trained with SGD for 5 epochs over the 60,000-image MNIST dataset.

Single Hidden Layer Dense Network: Using the same input and output layers as the smaller network with an additional 200-unit hidden layer, this network provides an architecture with 159,010 parameters. We expect this architecture to achieve a best-case performance of 98.9% accuracy on MNIST, trained over the same data as the previous network.
6.2 Training Neural Networks From Random Initialization
Considering the computational burden of computing a Monte Carlo estimate for each weight update, we propose that Algorithms 3, 4, and 5 are potentially most useful when employed following SGD: although their per-update progress is on par with, or exceeds, that of SGD with step size (often called learning rate) equal to the value of $\gamma$, the computational load required makes the methods unsuited for end-to-end training. Though in this section we present an analysis of these algorithms used for the entirety of training, this approach is likely too expensive to be practical for contemporary deep networks.
Table 1: Held-out test accuracy.

Weight Updates | 100  | 200  | 300  | 400  | 500
SGD            | 0.75 | 0.80 | 0.85 | 0.87 | 0.87
IS             | 0.27 | 0.45 | 0.54 | 0.57 | 0.65
SGLD           | 0.72 | 0.81 | 0.84 | 0.86 | 0.88
HR             | 0.52 | 0.64 | 0.70 | 0.73 | 0.76
Table 1 and the associated Figure 4 demonstrate the comparative training behavior of each algorithm, displaying the held-out test accuracy for identical instantiations of the hidden-layer network trained with each algorithm for 500 parameter updates. Note that a minibatch size of 20 was used in each case to standardize the amount of training data available to the methods. Additionally, SGLD, IS, and HR each employed the same value of $\gamma$, while SGD utilized an equivalent step size, thus fixing the level of regularization in training. To establish computational equivalence between Algorithms 3, 4, and 5, we use the same number of Monte Carlo samples for Algorithms 3 and 4, and a matching number of updates of the Robbins–Monro chain in Algorithm 5. Testing accuracy was computed by classifying 1000 randomly selected images from the held-out MNIST test set. In related experiments we observed consistent training progress across the sampling-based algorithms, with IS and HR training more slowly, particularly during the parameter updates following initialization. From Figure 4 we can appreciate that while SGD attempts to minimize training error, it nonetheless behaves in a stable way when plotting held-out accuracy, especially towards the end of training. SGLD, on the other hand, is observed to be more stable throughout the whole of training. While SGD, SGLD, and HR utilize gradient information in performing parameter updates, IS does not. This difference contributes to IS's comparatively poor start: as the other methods advance quickly due to the large gradient of the loss landscape, IS's progress is isolated, leading to training that depends only on the choice of $\gamma$. When $\gamma$ is held constant, as shown in Figure 4, the rate of improvement remains nearly constant throughout. This suggests the need for dynamically updating $\gamma$, as is commonly performed with annealed learning rates for SGD. Moreover, SGD, SGLD, and HR are all schemes that depend linearly on the loss, making minibatching justifiable, something that is not true for IS.
Table 2: Average update runtime (seconds).

SGD  | 0.0032
IS   | 6.2504
SGLD | 7.0599
HR   | 3.3053
It is worth noting that the time to train differed drastically between methods. Table 2 shows the average runtime of each algorithm in seconds. SGD performs roughly three orders of magnitude faster than the others, an expected result considering that the most costly operation in training, filling the network weights, is performed once per Monte Carlo sample, and hence many times per parameter update for the sampling-based methods. Other factors contributing to the runtime discrepancy are the implementation specifications and the deep learning library; here, we use TensorFlow's implementation of SGD, a method for which the framework is optimized. More generally, the runtimes in Table 2 reflect the hyperparameter choices for the number of Monte Carlo samples, and will vary according to the number of samples considered.
6.3 Local Entropy Regularization after SGD
Considering the longer runtime of the sampling-based algorithms in comparison to SGD, it is appealing to utilize SGD to train networks initially, and then shift to more computationally intensive methods to identify local minima with favorable generalization properties. Figure 5 illustrates IS and SGLD performing better than HR when applied after SGD. HR smooths the loss landscape, a transformation which is advantageous for generating large steps early in training, but presents challenges as smaller features are lost. In Figure 5, this effect manifests as constant test accuracy after SGD, with no additional progress made. The contrast between the methods is notable since the algorithms use equivalent step sizes; this suggests that the methods, not the hyperparameter choices, dictate the behavior observed.
Presumably, SGD trains the network into a sharp local minimum or saddle point of the non-regularized loss landscape; transitioning to an algorithm which minimizes the local-entropy-regularized loss then finds an extremum which performs better on the test data. However, based on our experiments, in terms of held-out data accuracy, regularization in the later stages does not seem to provide significant improvement over training with SGD on the original loss.
6.4 Algorithm Stability & Monotonicity
Prompted by the guarantees of Theorems 3.2 and 4.2, which establish the good behavior of these methods when the expectation $\mathbb{E}_{p_\theta}[u]$ is approximated accurately, we also examine the stability of these algorithms under an inaccurate estimate of the expectation. To do so, we explore the empirical consequences of varying the number of samples used in the Monte Carlo and Robbins–Monro calculations.
Figure 6 shows how each algorithm responds to this change. We observe that IS performs better as we refine our estimate of the expectation, exhibiting less noise and faster training rates. This finding suggests that a highly parallel implementation of IS which leverages modern GPU architectures to efficiently compute the relevant expectation may be practical. SGLD also benefits from a more accurate approximation, displaying faster convergence and higher final testing accuracy when comparing 10 and 100 Monte Carlo samples. HR, however, performs more poorly when we employ longer Robbins–Monro chains, suffering from diminished step sizes and exchanging quickly realized progress for less oscillatory testing accuracy. Exploration of these hyperparameter choices for SGLD and HR remains a valuable avenue for future research, specifically with regard to the interplay between the hyperparameters and the variable accuracy in estimating the expectation.
6.5 Choosing $\gamma$
An additional consideration for these schemes is the choice of $\gamma$, the hyperparameter which dictates the level of regularization in Algorithms 3, 4, and 5. As noted in [4], large values of $\gamma$ correspond to a nearly uniform local-entropy-regularized loss, whereas small values of $\gamma$ yield a minimally regularized loss which is very similar to the original loss function. To explore the effects of small and large values of $\gamma$, we train our smaller network with IS and SGLD for many choices of $\gamma$, observing how regularization alters training rates.
The results, presented in Figure 7, illustrate differences between SGLD and IS, particularly in the small-$\gamma$ regime. As evidenced in the leftmost plots, SGLD trains successfully, albeit slowly, with small $\gamma$. For small values of $\gamma$, the held-out test accuracy improves almost linearly over parameter updates, appearing characteristically similar to SGD with a small learning rate. IS fails for small $\gamma$, with highly variable test accuracy improving only slightly during training. Increasing $\gamma$, we observe SGLD reach a point of saturation, as additional increases in $\gamma$ do not affect the training trajectory. We note that this behavior persists as $\gamma$ grows, recognizing that the regularization term in the SGLD algorithm approaches zero for growing $\gamma$. IS demonstrates improved training efficiency in the bottom-center panel, showing that increased $\gamma$ provides favorable algorithmic improvements. This trend dissipates for larger $\gamma$, with IS performing poorly as $\gamma$ becomes large. The observed behavior suggests there exists an optimal $\gamma$ which is architecture- and task-specific, opening opportunities to further develop a heuristic to tune this hyperparameter.
6.5.1 Scoping of $\gamma$
As suggested in [4], we anneal the scope of $\gamma$ from large to small values in order to examine the landscape of the loss function at different scales. Early in training, we use comparatively large values to ensure broad exploration, transitioning to smaller values for a comprehensive survey of the landscape surrounding a minimum. We use a geometric schedule for the $k$-th parameter update,
\[ \gamma_k = \gamma_0 (1 + \gamma_1)^{-k}, \]
where $\gamma_0$ is large and $\gamma_1$ is set so that the magnitude of the local entropy gradient is roughly equivalent to that of SGD.
As shown in Figure 8, annealing $\gamma$ proves to be useful, and provides a method by which training can focus on more localized features to improve test accuracy. We observe that SGLD with a smaller value of $\gamma$ achieves a final test accuracy close to that of SGD, whereas with a larger fixed $\gamma$ it is unable to identify the optimal minimum. Additionally, the plot shows that large-$\gamma$ SGLD trains faster than SGD in the initial 100 parameter updates, whereas small-$\gamma$ SGLD lags behind. When scoping, we consider both annealing and reverse-annealing, illustrating that increasing $\gamma$ over training produces a network which trains more slowly than SGD and is unable to achieve testing accuracy comparable to that of SGD. Scoping $\gamma$ via the schedule of Subsection 6.5.1 delivers advantageous results, yielding an algorithm which trains faster than SGD after initialization and achieves analogous testing accuracy.
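A geometric annealing schedule of this kind can be written in one line; the constants below are placeholders for illustration, not the values used in our runs:

```python
def gamma_schedule(k, gamma0=1.0, gamma1=0.01):
    # Geometric scoping gamma_k = gamma0 * (1 + gamma1)^(-k): gamma starts large
    # and decays across parameter updates. gamma0 and gamma1 are placeholder
    # values, not the constants used in our experiments.
    return gamma0 * (1.0 + gamma1) ** (-k)
```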
References
 [1] S. Agapiou, O. Papaspiliopoulos, D. Sanz-Alonso, and A. M. Stuart. Importance sampling: intrinsic dimension and computational cost. Statistical Science, 32(3):405–431, 2017.
 [2] C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, and R. Zecchina. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical Review Letters, 115(12):128101, 2015.
 [3] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
 [4] P. Chaudhari, A. Choromanska, S. Soatto, Y. LeCun, C. Baldassi, C. Borgs, J. T. Chayes, L. Sagun, and R. Zecchina. Entropy-SGD: biasing gradient descent into wide valleys. CoRR, abs/1611.01838, 2017.
 [5] P. Chaudhari, A. Oberman, S. Osher, S. Soatto, and G. Carlier. Deep relaxation: partial differential equations for optimizing deep neural networks. arXiv preprint arXiv:1704.04932, 2017.
 [6] L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.
 [7] P. Dupuis and R. S. Ellis. A weak convergence approach to the theory of large deviations, volume 902. John Wiley & Sons, 2011.
 [8] N. Garcia Trillos and D. Sanz-Alonso. The Bayesian update: variational formulations and gradient flows. To appear in Bayesian Analysis, 2018.
 [9] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep Learning, volume 1. MIT press Cambridge, 2016.
 [10] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
 [11] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
 [12] Y. LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
 [13] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436, 2015.
 [14] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer Science & Business Media, 2008.
 [15] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5947–5956, 2017.
 [16] F. J. Pinski, G. Simpson, A. M. Stuart, and H. Weber. Algorithms for Kullback–Leibler approximation of probability measures in infinite dimensions. SIAM Journal on Scientific Computing, 37(6):A2733–A2757, 2015.
 [17] H. Robbins. An empirical Bayes approach to statistics. Technical report, Columbia University, New York City United States, 1956.
 [18] D. Sanz-Alonso. Importance sampling and necessary sample size: an information theory approach. SIAM/ASA Journal on Uncertainty Quantification, 6(2):867–879, 2018.
 [19] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.