Variational Characterizations of Local Entropy and Heat Regularization in Deep Learning

01/29/2019 ∙ by Nicolas Garcia Trillos, et al.

The aim of this paper is to provide new theoretical and computational understanding of two loss regularizations employed in deep learning, known as local entropy and heat regularization. For both regularized losses we introduce variational characterizations that naturally suggest a two-step scheme for their optimization, based on the iterative shift of a probability density and the calculation of a best Gaussian approximation in Kullback-Leibler divergence. Under this unified light, the optimization schemes for the local entropy and heat regularized losses differ only over which argument of the Kullback-Leibler divergence is used to find the best Gaussian approximation. Local entropy corresponds to minimizing over the second argument, and the solution is given by moment matching. This makes it possible to replace the traditional back-propagation calculation of gradients by sampling algorithms, opening an avenue for gradient-free, parallelizable training of neural networks.


1 Introduction

The development and assessment of optimization methods for the training of deep neural networks has brought forward novel questions that call for new theoretical insights and computational techniques [3]. The performance of a network is determined by its ability to generalize, and choosing the network parameters by finding the global minimizer of the loss may be not only infeasible, but also undesirable. In fact, training to a prescribed accuracy with competing optimization schemes may consistently lead to different generalization errors [11]. A possible explanation is that parameters in flat local minima of the loss give better generalization [10], [11], [5], [4], and that certain schemes favor convergence to wide valleys of the loss function. These observations have led to the design of algorithms that employ gradient descent on a regularized loss, actively seeking minima located in wide valleys of the original loss [4]. While it has been demonstrated that the flatness of minima cannot fully explain generalization in deep learning [6], [15], there are various heuristic [2], theoretical [5], and empirical [4] arguments that support regularizing the loss. In this paper we aim to provide new understanding of two such regularizations, referred to as local entropy and heat regularization.

Our first contribution is to introduce variational characterizations for both regularized loss functions. These characterizations, drawn from the literature on large deviations [7], naturally suggest a two-step scheme for their optimization, based on the iterative shift of a probability density and the calculation of a best Gaussian approximation in Kullback-Leibler divergence. The schemes for both regularized losses differ only over the argument of the (asymmetric) Kullback-Leibler divergence that they minimize. Local entropy minimizes over the second argument, and the solution is given by moment matching; heat regularization minimizes over the first argument, and its solution is defined implicitly.

The second contribution of this paper is to investigate some theoretical and computational implications of the variational characterizations. On the theoretical side, we prove that if the best Kullback-Leibler approximations could be computed exactly, then the regularized losses are monotonically decreasing along the sequence of optimization iterates. This monotonic behavior suggests that the two-step iterative optimization schemes have the potential to be stable provided that the Kullback-Leibler minimizers can be computed accurately. On the computational side, we show that the two-step iterative optimization of local entropy agrees with gradient descent on the regularized loss provided that the learning rate matches the regularization parameter. Thus, the two-step iterative optimization of local entropy computes gradients implicitly in terms of expected values; this observation opens an avenue for gradient-free, parallelizable training of neural networks based on sampling. In contrast, the scheme for heat regularization finds the best Kullback-Leibler Gaussian approximation over the first argument, and its computation via stochastic optimization [17], [16] involves evaluation of gradients of the original loss.

Finally, our third contribution is to perform a numerical case-study to assess the performance of various implementations of the two-step iterative optimization of local entropy and heat regularized functionals. These implementations differ in how the Kullback-Leibler minimization is computed and in the argument that is minimized. Our experiments suggest, on the one hand, that the computational overhead of the regularized methods far exceeds the cost of performing stochastic gradient descent on the original loss. On the other hand, they also suggest that for moderate-size architectures, where the best Kullback-Leibler Gaussian approximations can be computed effectively, the generalization error with regularized losses is more stable than for stochastic gradient descent over the original loss. For this reason, we investigate using stochastic gradient descent on the original loss for the first parameter updates, and then switching to optimization of a regularized loss. We also investigate numerically the choice and scoping of the regularization parameter. Our understanding, after conducting thorough numerical experiments, is that while sampling-based optimization of local entropy has the potential to be practical if parallelization is exploited and back-propagation gradient calculations are expensive, existing implementations of regularized methods in standard architectures are more expensive than stochastic gradient descent and do not clearly outperform it.

Several research directions stem from this work. A broad one is to explore the use of local entropy and heat regularizations in complex optimization problems outside of deep learning, e.g. in the computation of maximum a posteriori estimates in high-dimensional Bayesian inverse problems. A more concrete direction is to generalize the Gaussian approximations within our two-step iterative schemes, allowing both the mean and the covariance of the Gaussian measures to be updated.

The rest of the paper is organized as follows. Section 2 provides background on optimization problems arising in deep learning, and reviews various analytical and statistical interpretations of local entropy and heat regularized losses. In Section 3 we introduce the variational characterization of local entropy, and derive from it a two-step iterative optimization scheme. Section 4 contains analogous developments for heat regularization. Our presentation in Section 4 is parallel to that in Section 3, as we aim to showcase the unity that comes from the variational characterizations of both loss functions. Section 5 reviews various algorithms for Kullback-Leibler minimization, and we conclude in Section 6 with a numerical case study.

2 Background

Neural networks are revolutionizing numerous fields including image and speech recognition, language processing, and robotics [13], [9]. Broadly, neural networks are parametric families of functions used to assign outputs to inputs. The parameters of a network are chosen by solving a non-convex optimization problem of the form

min_{θ ∈ R^d} L(θ),   L(θ) := ∑_{i=1}^n ℓ_i(θ),   (2.1)

where each ℓ_i is a loss associated with a training example. Most popular training methods employ backpropagation (i.e. automatic differentiation) to perform some variant of gradient descent over the loss L. In practice, gradients are approximated using a random subsample of the training data known as a minibatch. Importantly, accurate solution of the optimization problem (2.1) is not the end-goal of neural networks; their performance is rather determined by their generalization or testing error, that is, by their ability to accurately assign outputs to unseen examples.
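For concreteness, the following minimal NumPy sketch sets up an empirical loss of the form (2.1) and a minibatch gradient estimate; the quadratic per-example loss and the synthetic data are illustrative stand-ins, not the networks considered later in the paper.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # toy inputs (illustrative only)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)

def loss(theta):
    # Empirical loss of the form (2.1); the per-example loss l_i is a squared
    # error here, averaged over examples for a stable step size.
    r = X @ theta - y
    return 0.5 * np.mean(r ** 2)

def minibatch_grad(theta, batch_size=20):
    # Gradient of the loss estimated on a random minibatch of training examples.
    idx = rng.choice(len(y), size=batch_size, replace=False)
    r = X[idx] @ theta - y[idx]
    return X[idx].T @ r / batch_size

theta = np.zeros(5)
for _ in range(500):                                  # plain minibatch SGD
    theta -= 0.05 * minibatch_grad(theta)
print(round(loss(theta), 4))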

A substantial body of literature [4], [15], [3] has demonstrated that optimization procedures with similar training error may consistently lead to different testing error. For instance, large mini-batch sizes have been shown to result in poor generalization [11]. Several explanations have been set forth, including overfitting, attraction to saddle points, and explorative properties [11]. A commonly accepted theory is that flat local minima of the loss lead to better generalization than sharp minima [10], [4], [11], [5]. As noted in [6] and [15], this explanation is not fully convincing: due to the high number of symmetries in deep networks one can typically find many parameters that have different flatness but define the same network. Further, reparameterization may alter the flatness of minima. While a complete understanding is missing, the observations above have prompted the development of new algorithms that actively seek minima in wide valleys of the loss. In this paper we provide new insights on potential advantages of two such approaches, based on local-entropy and heat regularization.

2.1 Background on Local-Entropy Regularization

We will first study optimization of networks performed on a regularization of the loss known as local entropy, given by

F_τ(θ) := -log ∫_{R^d} exp(-L(u)) φ_τ(u; θ) du,   (2.2)

where here and throughout φ_τ(·; θ) denotes the Gaussian density in R^d with mean θ and variance τI. For given τ, F_τ(θ) averages values of L, focusing on a neighborhood of θ of size √τ. Thus, for F_τ(θ) to be small it is required that L is small throughout a √τ-neighborhood of θ. Note that F_τ is equivalent to L as τ → 0, and becomes constant as τ → ∞. Figure 1 shows that local entropy flattens sharp isolated minima, and deepens wider minima.
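To make the definition concrete, the following sketch estimates F_τ(θ) by plain Monte Carlo on an assumed toy one-dimensional loss; it is only meant to illustrate the limiting behavior in τ described above.

import numpy as np

rng = np.random.default_rng(1)

def L(u):
    # Toy one-dimensional loss: a sharp minimum near u = 0, a wide one near u = 3.
    return 1.0 - np.exp(-50 * u ** 2) - 0.8 * np.exp(-0.5 * (u - 3) ** 2)

def local_entropy(theta, tau, n=200000):
    # Monte Carlo estimate of F_tau(theta) = -log E_{u ~ N(theta, tau)}[exp(-L(u))].
    u = theta + np.sqrt(tau) * rng.normal(size=n)
    return -np.log(np.mean(np.exp(-L(u))))

for tau in (1e-4, 0.5, 50.0):
    print(tau, round(local_entropy(0.0, tau), 3), round(local_entropy(3.0, tau), 3))
# As tau -> 0 the two values approach L(0) and L(3); for moderate tau the wide
# minimum near u = 3 is favored, and for large tau the landscape flattens out.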

A natural statistical interpretation of minimizing the loss L is in terms of maximum likelihood estimation. Given training data, one may define the likelihood function

h(θ) ∝ exp(-L(θ)).   (2.3)

Thus, minimizing L corresponds to maximizing the likelihood h. In what follows we assume that h is normalized to integrate to 1. Minimization of local entropy can also be interpreted in statistical terms, now as computing a maximum marginal likelihood. Consider a Gaussian prior distribution φ_τ(·; θ), indexed by a hyperparameter θ, on the parameters of the neural network. Moreover, assume a likelihood as in equation (2.3). Then, minimizing local entropy corresponds to maximizing the marginal likelihood

θ ↦ ∫_{R^d} h(u) φ_τ(u; θ) du.   (2.4)

We remark that the right-hand side of equation (2.4) is the convolution of the likelihood with a Gaussian, and so we have

exp(-F_τ(θ)) ∝ (h ∗ φ_τ(·; 0))(θ).   (2.5)

Thus, local entropy can be interpreted as a regularization of the likelihood h.

Figure 1: Toy example of local entropy regularization for a two-dimensional loss function. Note how the wider minima from the left figure deepen on the right, while the sharp minima become relatively shallower.

2.2 Background on Heat Regularization

We will also consider smoothing of the loss through the heat regularization, defined by

R_τ(θ) := ∫_{R^d} L(u) φ_τ(u; θ) du.

Note that R_τ regularizes the loss L directly, rather than the likelihood h:

R_τ = L ∗ φ_τ(·; 0),   whereas   exp(-F_τ) ∝ h ∗ φ_τ(·; 0).

Local entropy and heat regularization are, clearly, rather different. Figure 2 shows that while heat regularization smooths the energy landscape, the relative macroscopic depth of local minima is only marginally modified. Our paper highlights, however, the common underlying structure of the resulting optimization problems. Further analytical insights on both regularizations in terms of partial differential equations and optimal control can be found in [5].
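A companion sketch, on the same assumed toy loss as before, estimates the heat-regularized value R_τ(θ) by Monte Carlo; it is purely illustrative of the contrast with local entropy.

import numpy as np

rng = np.random.default_rng(2)

def L(u):
    # Same toy one-dimensional loss: sharp minimum near u = 0, wide minimum near u = 3.
    return 1.0 - np.exp(-50 * u ** 2) - 0.8 * np.exp(-0.5 * (u - 3) ** 2)

def heat_reg(theta, tau, n=200000):
    # Monte Carlo estimate of R_tau(theta) = E_{u ~ N(theta, tau)}[L(u)],
    # i.e. the convolution of the loss with a Gaussian of variance tau.
    u = theta + np.sqrt(tau) * rng.normal(size=n)
    return np.mean(L(u))

# The blur removes fine-scale texture but does not re-weight the landscape by
# exp(-L), so the narrow minimum is averaged away rather than selectively flattened.
print([round(heat_reg(t, 0.5), 3) for t in (0.0, 1.5, 3.0)])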

2.3 Notation

For any θ ∈ R^d and τ > 0 we define the probability density

p_{θ,τ}(u) := (1/Z_{θ,τ}) exp( -L(u) - |u - θ|²/(2τ) ),   (2.6)

where Z_{θ,τ} is a normalization constant. These densities will play an important role throughout.

We denote the Kullback-Leibler divergence between densities p and q in R^d by

D_KL(p ‖ q) := ∫_{R^d} p(u) log( p(u) / q(u) ) du.   (2.7)

Kullback-Leibler is a divergence in that D_KL(p ‖ q) ≥ 0, with equality if and only if p = q. However, the Kullback-Leibler divergence is not a distance; in particular, it is not symmetric. This fact will be relevant in the rest of this paper.
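For one-dimensional Gaussians the divergence (2.7) has a closed form, which makes the asymmetry easy to verify numerically; the short sketch below is illustrative only.

import numpy as np

def kl_gauss(m1, s1, m2, s2):
    # D_KL(N(m1, s1^2) || N(m2, s2^2)) for one-dimensional Gaussian densities.
    return np.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

print(kl_gauss(0.0, 1.0, 0.0, 1.0))   # 0.0: the divergence vanishes iff the densities agree
print(kl_gauss(0.0, 1.0, 2.0, 3.0))   # differs from the value with the arguments swapped
print(kl_gauss(2.0, 3.0, 0.0, 1.0))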

Figure 2: Toy example of heat regularization for a two dimensional loss function. Here the smoothing via convolution with a Gaussian amounts to a blur, altering the texture of the landscape without changing the location of deep minima.

3 Local Entropy: Variational Characterization and Optimization

In this section we introduce a variational characterization of local entropy. We will employ this characterization to derive a monotonic algorithm for its minimization. The following result is well known in large deviation theory [7]. We present its proof for completeness.

Theorem 3.1. The local entropy admits the following variational characterization:

F_τ(θ) = min_p { ∫ L(u) p(u) du + D_KL(p ‖ φ_τ(·; θ)) },   (3.8)

where the minimum is taken over probability densities p on R^d. Moreover, the density p_{θ,τ} defined in equation (2.6) achieves the minimum in (3.8).

To see this, note that for any density p,

∫ L(u) p(u) du + D_KL(p ‖ φ_τ(·; θ)) = F_τ(θ) + D_KL(p ‖ p_{θ,τ}).   (3.9)

Hence, the right-hand side is smallest precisely when D_KL(p ‖ p_{θ,τ}) = 0, showing that p_{θ,τ} achieves the minimum. To conclude, note that D_KL(p ‖ p_{θ,τ}) ≥ 0, and so taking the minimum over p on both sides of equation (3.9) and rearranging gives equation (3.8).

3.1 Two-step Iterative Optimization

From the variational characterization (3.8) it follows that

min_θ F_τ(θ) = min_θ min_p { ∫ L(u) p(u) du + D_KL(p ‖ φ_τ(·; θ)) }.   (3.10)

Thus, a natural iterative approach to finding the minimizer of F_τ is to alternate between i) minimization of the term in curly brackets over densities p, and ii) finding the associated minimizer over θ. For the former we can employ the explicit formula given by equation (2.6), while for the latter we note that the integral term does not depend on the variable θ, and that the minimizer of the map

θ ↦ D_KL(p ‖ φ_τ(·; θ))

is unique, and given by the expected value of p. The statistical interpretation of these two steps is perhaps most natural through the variational formulation of the Bayesian update [8]: the first step finds a posterior distribution associated with likelihood exp(-L) and Gaussian prior φ_τ(·; θ_k); the second computes the posterior expectation, which is used to define the prior mean in the next iteration. It is worth noting the parallel between this two-step optimization procedure and the empirical Bayes interpretation of local entropy mentioned in Section 2.

In short, the expression (3.10) suggests the following simple scheme for minimizing local entropy:

  Choose θ_0 ∈ R^d and, for k = 0, 1, 2, …, do:
  1. Define p_k := p_{θ_k,τ} as in equation (2.6).

  2. Define θ_{k+1} as the minimizer of the map θ ↦ D_KL(p_k ‖ φ_τ(·; θ)), that is, θ_{k+1} = E_{p_k}[u].

Algorithm 1

In practice, the expectation in the second step needs to be approximated. We will explore the potential use of gradient-free sampling schemes in Subsection 5.1.2 and in our numerical experiments.
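To visualize the scheme before turning to sampling, the sketch below runs the two-step iteration on an assumed toy one-dimensional loss, computing the expectation in step 2 by deterministic quadrature; with this essentially exact inner step the estimated local entropy decreases at every iteration, in line with the theory of Subsection 3.2.

import numpy as np

def L(u):
    # Toy one-dimensional loss with a sharp minimum near u = 0 and a wide one near u = 3.
    return 1.0 - np.exp(-50 * u ** 2) - 0.8 * np.exp(-0.5 * (u - 3) ** 2)

grid = np.linspace(-5.0, 8.0, 20001)   # quadrature grid, fine enough for the sharp dip
dx = grid[1] - grid[0]

def F(theta, tau):
    # Local entropy F_tau(theta) computed by quadrature.
    gauss = np.exp(-0.5 * (grid - theta) ** 2 / tau) / np.sqrt(2 * np.pi * tau)
    return -np.log(np.sum(np.exp(-L(grid)) * gauss) * dx)

def step(theta, tau):
    # One iteration of the two-step scheme: form p_{theta,tau} on the grid and
    # return its mean, the exact minimizer of the Kullback-Leibler term over theta.
    p = np.exp(-L(grid) - 0.5 * (grid - theta) ** 2 / tau)
    p /= np.sum(p) * dx
    return np.sum(grid * p) * dx

theta, tau = 1.0, 0.5
for k in range(15):
    print(k, round(theta, 4), round(F(theta, tau), 4))   # F_tau decreases at every iteration
    theta = step(theta, tau)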

A seemingly unrelated approach to minimizing the local entropy is to employ gradient descent and set

θ_{k+1} = θ_k - η ∇F_τ(θ_k),   (3.11)

where η > 0 is a learning rate. We now show that the iterates given by Algorithm 1 agree with those given by gradient descent with learning rate η = τ.

By direct computation,

∇F_τ(θ) = (1/τ) ( θ - E_{p_{θ,τ}}[u] ).

Therefore,

θ_k - τ ∇F_τ(θ_k) = E_{p_{θ_k,τ}}[u] = θ_{k+1},   (3.12)

establishing that Algorithm 1 performs gradient descent with learning rate η = τ. This choice of learning rate leads to monotonic decrease of the local entropy, as we show in the next subsection.
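The gradient formula above can be checked numerically; the sketch below compares a finite-difference estimate of ∇F_τ with (θ - E_{p_{θ,τ}}[u])/τ on an assumed toy one-dimensional loss, using common random numbers for the two Monte Carlo estimates.

import numpy as np

rng = np.random.default_rng(4)

def L(u):
    return 1.0 - np.exp(-50 * u ** 2) - 0.8 * np.exp(-0.5 * (u - 3) ** 2)

theta, tau, h = 1.0, 0.5, 1e-3
z = np.sqrt(tau) * rng.normal(size=400000)           # common random numbers for both estimates

def F(t):
    # Monte Carlo estimate of F_tau(t) with the Gaussian perturbations z held fixed.
    return -np.log(np.mean(np.exp(-L(t + z))))

fd_grad = (F(theta + h) - F(theta - h)) / (2 * h)    # finite-difference gradient of F_tau

u = theta + z                                        # samples from N(theta, tau)
w = np.exp(-L(u))
w = w / np.sum(w)                                    # self-normalized weights for p_{theta,tau}
implicit_grad = (theta - np.sum(w * u)) / tau        # (theta - E_p[u]) / tau

print(fd_grad, implicit_grad)   # the two estimates agree up to Monte Carlo error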

Remark 3.1

In this paper we restrict our attention to the update scheme (3.11) with η = τ. For this choice of learning rate we can deduce theoretical monotonicity according to Theorem 3.2 below, but it may be computationally advantageous to use other learning rates, as explored in [4].

3.2 Majorization-Minimization and Monotonicity

We now show that Algorithm 1 is a majorization-minimization algorithm. Let

G(θ; θ') := ∫ L(u) p_{θ',τ}(u) du + D_KL(p_{θ',τ} ‖ φ_τ(·; θ)),

where p_{θ',τ} is as in (2.6). It follows that G(θ; θ') ≥ F_τ(θ) for all θ, θ', and that G(θ; θ) = F_τ(θ) for arbitrary θ; in other words, G is a majorizer for F_τ. In addition, it is easy to check that the updates

θ_{k+1} = argmin_θ G(θ; θ_k)

coincide with the updates in Algorithm 1. As a consequence we have the following theorem.

Theorem 3.2 (Monotonicity and stationarity of Algorithm 1). The sequence {θ_k} generated by Algorithm 1 satisfies

F_τ(θ_{k+1}) ≤ F_τ(θ_k).

Moreover, equality holds only when θ_k is a critical point of F_τ. The monotonicity follows immediately from the fact that our algorithm can be interpreted as a majorization-minimization scheme. For the stationarity, note that equation (3.12) shows that θ_{k+1} = θ_k if and only if ∇F_τ(θ_k) = 0.

4 Heat Regularization: Variational Characterization and Optimization

In this section we consider direct regularization of the loss function L, as opposed to regularization of the density function exp(-L). The following result is analogous to Theorem 3.1. Its proof is similar and hence omitted.

Theorem 4.1. The heat regularization admits the following variational characterization:

R_τ(θ) = min_p { log ∫ exp(L(u)) p(u) du + D_KL(φ_τ(·; θ) ‖ p) },   (4.13)

where the minimum is taken over probability densities p on R^d. Moreover, the density p_{θ,τ} defined in equation (2.6) achieves the minimum in (4.13).

4.1 Two-step Iterative Optimization

From equation (4.13) it follows that

min_θ R_τ(θ) = min_θ min_p { log ∫ exp(L(u)) p(u) du + D_KL(φ_τ(·; θ) ‖ p) }.   (4.14)

In complete analogy with Section 3, equation (4.14) suggests the following optimization scheme to minimize R_τ:

  Choose θ_0 ∈ R^d and, for k = 0, 1, 2, …, do:
  1. Define p_k := p_{θ_k,τ} as in equation (2.6).

  2. Define θ_{k+1} by minimizing the map θ ↦ D_KL(φ_τ(·; θ) ‖ p_k).

Algorithm 2

The key difference with Algorithm 1 is that the arguments of the Kullback-Leibler divergence are reversed. While θ ↦ D_KL(p_k ‖ φ_τ(·; θ)) has a unique minimizer, given by E_{p_k}[u], minimizers of θ ↦ D_KL(φ_τ(·; θ) ‖ p_k) need not be unique. Moreover, the latter minimization is implicitly defined via an expectation, and its computation via a Robbins-Monro [17] approach requires repeated evaluation of the gradient of L. We will outline the practical implementation of this minimization in Section 5.2.

4.2 Majorization-Minimization and Monotonicity

As in Subsection 3.2, it is easy to see that

(θ, θ') ↦ log ∫ exp(L(u)) p_{θ',τ}(u) du + D_KL(φ_τ(·; θ) ‖ p_{θ',τ})

is a majorizer for R_τ. This can be used to show the following theorem, whose proof is identical to that of Theorem 3.2 and is therefore omitted.

Theorem 4.2 (Monotonicity of Algorithm 2). The sequence {θ_k} generated by Algorithm 2 satisfies

R_τ(θ_{k+1}) ≤ R_τ(θ_k).

5 Gaussian Kullback-Leibler Minimization

In Sections 3 and 4 we considered the local entropy F_τ and the heat-regularized loss R_τ, and introduced two-step iterative optimization schemes for both loss functions. We summarize these schemes here for comparison purposes:

Optimization of F_τ:
Let θ_0 ∈ R^d and, for k = 0, 1, 2, …, do:

  1. Define p_k := p_{θ_k,τ} as in equation (2.6).

  2. Let θ_{k+1} be the minimizer of θ ↦ D_KL(p_k ‖ φ_τ(·; θ)).

Optimization of R_τ:
Let θ_0 ∈ R^d and, for k = 0, 1, 2, …, do:

  1. Define p_k := p_{θ_k,τ} as in equation (2.6).

  2. Let θ_{k+1} be a minimizer of θ ↦ D_KL(φ_τ(·; θ) ‖ p_k).

Both schemes involve finding, at each iteration, the mean vector that gives the best approximation, in Kullback-Leibler, to a probability density. For local entropy the minimization is with respect to the second argument of the Kullback-Leibler divergence, while for heat regularization the minimization is with respect to the first argument. It is useful to compare, in intuitive terms, the two different minimization problems, both leading to a "best Gaussian". In what follows we drop the subscripts and use the following nomenclature: ν denotes the Gaussian being optimized and p the fixed target density; minimization of D_KL(ν ‖ p) over ν will be called mode-seeking, and minimization of D_KL(p ‖ ν) over ν mean-seeking.

Note that in order to minimize D_KL(ν ‖ p) we need ν to be small wherever p is small, which can happen either when ν ≈ p or when ν is close to zero in those regions. This illustrates the fact that minimizing D_KL(ν ‖ p) may miss components of p. For example, in the left panel of Figure 3, p is a bimodal distribution, but minimizing D_KL(ν ‖ p) over Gaussians ν can only give a single-mode approximation, which is achieved by matching one of the modes (minimizers are not guaranteed to be unique); we may think of this as "mode-seeking". In contrast, when minimizing D_KL(p ‖ ν) over Gaussians we want the ratio p/ν to be controlled wherever p has mass, since ν appears in the denominator. This implies that wherever p has some mass we must let ν also have some mass there, in order to keep p/ν as close as possible to one. Therefore the minimization is carried out by allocating the mass of ν in a way such that, on average, the discrepancy between ν and p is minimized, as shown in the right panel of Figure 3; hence the label "mean-seeking".

Figure 3: Cartoon representation of the mode-seeking (left) and mean-seeking (right) Kullback-Leibler minimization. Mean-seeking minimization is employed within local-entropy optimization; mode-seeking minimization is employed within optimization of the heat-regularized loss.
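The following sketch reproduces this cartoon numerically: a Gaussian mean is fitted to an assumed bimodal target by minimizing each divergence over a grid of candidate means; the target density and the grid search are illustrative choices only.

import numpy as np

tau = 0.25
grid = np.linspace(-6.0, 6.0, 4001)
dx = grid[1] - grid[0]

# Bimodal target density p: an equal mixture of two narrow Gaussians at -2 and 2.
p = np.exp(-0.5 * (grid + 2) ** 2 / 0.1) + np.exp(-0.5 * (grid - 2) ** 2 / 0.1)
p /= np.sum(p) * dx

def gauss(m):
    # Gaussian density with mean m and variance tau, tabulated on the grid.
    return np.exp(-0.5 * (grid - m) ** 2 / tau) / np.sqrt(2 * np.pi * tau)

def kl(a, b):
    # Kullback-Leibler divergence between densities tabulated on the grid.
    mask = a > 1e-300
    return np.sum(np.where(mask, a * (np.log(np.where(mask, a, 1.0)) - np.log(b)), 0.0)) * dx

means = np.linspace(-4.0, 4.0, 801)
mean_seeking = means[np.argmin([kl(p, gauss(m)) for m in means])]   # Gaussian in 2nd argument
mode_seeking = means[np.argmin([kl(gauss(m), p) for m in means])]   # Gaussian in 1st argument
print(mean_seeking)   # close to E_p[u] = 0, between the two modes
print(mode_seeking)   # close to one of the modes, near -2 or 2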

In the following two sections we show that, in addition to giving rather different solutions, the argument of the Kullback-Leibler divergence that is minimized has computational consequences.

5.1 Minimization of D_KL(p_k ‖ φ_τ(·; θ))

The solution to this minimization problem is unique and given by the mean E_{p_k}[u]. For notational convenience we drop the subscript k and consider calculation of

E_{p_{θ,τ}}[u] = ∫ u p_{θ,τ}(u) du = ( ∫ u exp(-L(u) - |u - θ|²/(2τ)) du ) / ( ∫ exp(-L(u) - |u - θ|²/(2τ)) du ).   (5.15)

In our numerical experiments we will approximate these expectations using stochastic gradient Langevin dynamics and importance sampling. Both methods are reviewed in the next two subsections.

5.1.1 Stochastic Gradient Langevin Dynamics

The first method that we use to approximate the expectation (5.15), and thus the best-Gaussian approximation for local entropy optimization, is stochastic gradient Langevin dynamics (SGLD). The algorithm was introduced in [19] and its use for local entropy minimization was investigated in [4]. The SGLD algorithm is summarized below.

  Input: sample size S and temperatures {ε_s}.
  1. Define u^(0) := θ.

  2. For s = 0, …, S - 1 do:

    u^(s+1) = u^(s) - (ε_s/2) ( ∇L(u^(s)) + (u^(s) - θ)/τ ) + √ε_s ξ_s,   ξ_s ∼ N(0, I).

Output: approximation E_{p_{θ,τ}}[u] ≈ (1/S) ∑_{s=1}^S u^(s).
Algorithm 3

When the loss L is defined by a large sum over training data, minibatches can be used in the evaluation of the gradients ∇L. In our numerical experiments we initialize the Langevin chain at the last iteration of the previous parameter update. Note that SGLD can be thought of as a modification of gradient-based Metropolis-Hastings Markov chain Monte Carlo algorithms, where the accept-reject mechanism is replaced by a suitable tempering of the temperatures ε_s.
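A simplified, full-gradient version of this inner step is sketched below on an assumed toy one-dimensional loss: an unadjusted Langevin chain targets p_{θ,τ} of (2.6) and the running average of its iterates estimates the expectation in (5.15); the minibatch gradients and temperature schedule of Algorithm 3 are omitted.

import numpy as np

rng = np.random.default_rng(6)

def L(u):
    return 1.0 - np.exp(-50 * u ** 2) - 0.8 * np.exp(-0.5 * (u - 3) ** 2)

def grad_L(u):
    # Derivative of the toy loss above.
    return 100 * u * np.exp(-50 * u ** 2) + 0.8 * (u - 3) * np.exp(-0.5 * (u - 3) ** 2)

def langevin_mean(theta, tau, n_steps=2000, eps=5e-3):
    # Unadjusted Langevin chain targeting p_{theta,tau}; the running average of the
    # iterates estimates E_{p_{theta,tau}}[u].
    u = theta                                    # warm start at the current parameter
    total = 0.0
    for s in range(n_steps):
        drift = -grad_L(u) - (u - theta) / tau   # gradient of log p_{theta,tau}(u)
        u = u + 0.5 * eps * drift + np.sqrt(eps) * rng.normal()
        total += u
    return total / n_steps

theta, tau = 1.0, 0.5
for k in range(20):                              # outer loop: Algorithm 1 with a Langevin inner step
    theta = langevin_mean(theta, tau)
print(theta)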

5.1.2 Importance Sampling

We will also investigate the use of importance sampling [14] to approximate the expectations (5.15); our main motivation in doing so is to avoid gradient computations, and hence to give an example of a training scheme that does not involve back propagation.

Importance sampling is based on the observation that

E_{p_{θ,τ}}[u] = E_{u∼φ_τ(·;θ)}[ u exp(-L(u)) ] / E_{u∼φ_τ(·;θ)}[ exp(-L(u)) ],

and an approximation of the right-hand side may be obtained by standard Monte Carlo approximation of the numerator and the denominator. Crucially, these Monte Carlo simulations are performed by sampling the Gaussian φ_τ(·; θ) rather than the original density p_{θ,τ}. The importance sampling algorithm is then given by:

  Input: sample size S.
  1. Sample u^(1), …, u^(S) from the Gaussian density φ_τ(·; θ).

  2. Compute (unnormalized) weights w_s := exp(-L(u^(s))), s = 1, …, S.

  Output: approximation
  E_{p_{θ,τ}}[u] ≈ ( ∑_{s=1}^S w_s u^(s) ) / ( ∑_{s=1}^S w_s ).   (5.16)
Algorithm 4

Importance sampling is easily parallelizable. If several processors are available, then each processor can be used to produce an estimate using its own batch of Gaussian samples, and the associated estimates can be subsequently consolidated.
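A minimal sketch of this estimator on an assumed toy one-dimensional loss is given below; the splitting of the samples into chunks mimics the per-processor estimates, whose partial sums are then consolidated.

import numpy as np

rng = np.random.default_rng(7)

def L(u):
    return 1.0 - np.exp(-50 * u ** 2) - 0.8 * np.exp(-0.5 * (u - 3) ** 2)

def is_mean(theta, tau, n_samples=20000, n_chunks=4):
    # Self-normalized importance sampling estimate of E_{p_{theta,tau}}[u] with the
    # Gaussian N(theta, tau) as proposal; no gradients of L are required. The work is
    # split into chunks whose partial sums could be computed on separate workers.
    partial = []
    for _ in range(n_chunks):
        u = theta + np.sqrt(tau) * rng.normal(size=n_samples // n_chunks)
        w = np.exp(-L(u))                        # unnormalized importance weights
        partial.append((np.sum(w * u), np.sum(w)))
    return sum(s for s, _ in partial) / sum(c for _, c in partial)   # consolidate

theta, tau = 1.0, 0.5
for k in range(30):                              # outer loop: Algorithm 1 with an IS inner step
    theta = is_mean(theta, tau)
print(theta)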

While the use of importance sampling opens an avenue for gradient-free, parallelizable training of neural networks, our numerical experiments will show that a naive implementation without parallelization gives poor performance relative to SGLD or plain stochastic gradient descent (SGD) on the original loss. A potential explanation is the so-called curse of dimension for importance sampling [18], [1]. Another explanation is that the iterative structure of SGLD allows the previous parameter update to be re-utilized in approximating the following one, while importance sampling does not afford such iterative updating. Finally, SGLD with minibatches is known to asymptotically produce unbiased estimates, while the introduction of minibatches in importance sampling introduces a bias.

5.2 Minimization of D_KL(φ_τ(·; θ) ‖ p_k)

A direct calculation shows that the preconditioned Euler-Lagrange equation for minimizing θ ↦ D_KL(φ_τ(·; θ) ‖ p_k) is given by

θ = θ_k - τ E_{u∼φ_τ(·;θ)}[ ∇L(u) ].

Here θ is implicitly defined through an expected value with respect to a distribution that depends on θ itself. The Robbins-Monro algorithm [17] allows the estimation of zeroes of functions defined in such a way.

  Input: number of iterations J and step-size schedule {a_j}.
  1. Define θ^(0) := θ_k.

  2. For j = 0, …, J - 1 do:

    θ^(j+1) = θ^(j) - a_j ( θ^(j) - θ_k + τ ∇L(u^(j)) ),   u^(j) ∼ φ_τ(·; θ^(j)).   (5.17)
Output: approximation θ^(J) to the minimizer of θ ↦ D_KL(φ_τ(·; θ) ‖ p_k).
Algorithm 5

The Robbins-Monro approach to computing the best Gaussian approximation in Hilbert space was studied in [16]. A suitable choice for the step size is a_j = a (j + 1)^{-c}, for some c ∈ (1/2, 1] and a > 0. Note that Algorithm 5 gives a form of spatially-averaged gradient descent, which involves repeated evaluation of the gradient of the original loss. The use of temporal gradient averages has also been studied as a way to reduce the noise level of stochastic gradient methods [3].
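A sketch of this procedure on an assumed toy one-dimensional loss follows; the step-size schedule a_j = a (b + j)^{-c} is one choice satisfying the usual Robbins-Monro conditions and is not necessarily the one used in the experiments.

import numpy as np

rng = np.random.default_rng(8)

def grad_L(u):
    # Gradient of the same toy one-dimensional loss used in the earlier sketches.
    return 100 * u * np.exp(-50 * u ** 2) + 0.8 * (u - 3) * np.exp(-0.5 * (u - 3) ** 2)

def robbins_monro_step(theta_k, tau, n_iter=2000, a=0.5, b=10.0, c=0.75):
    # Stochastic approximation of a minimizer of theta -> D_KL(N(theta, tau) || p_k):
    # it seeks a solution of theta = theta_k - tau * E_{u ~ N(theta, tau)}[grad L(u)].
    theta = theta_k                                   # initialize at the current parameter
    for j in range(n_iter):
        u = theta + np.sqrt(tau) * rng.normal()       # one sample from N(theta, tau)
        resid = theta - theta_k + tau * grad_L(u)     # noisy residual of the fixed-point equation
        theta = theta - a / (b + j) ** c * resid      # decaying Robbins-Monro step
    return theta

theta, tau = 1.0, 0.5
for k in range(20):                                   # outer loop of Algorithm 2
    theta = robbins_monro_step(theta, tau)
print(theta)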

To conclude, we remark that an alternative approach could be to employ Robbins-Monro directly to optimize the heat-regularized loss R_τ; gradient calculations would still be needed.

6 Numerical Experiments

In the following numerical experiments we investigate the practical use of local entropy and heat regularization in the training of neural networks. We present experiments on dense multilayered networks applied to a basic image classification task, viz. MNIST [12]. We implement Algorithms 3, 4, and 5 in TensorFlow, analyzing the effectiveness of each in comparison to stochastic gradient descent (SGD). We investigate whether the theoretical monotonicity of regularized losses translates into monotonicity of the error on held-out test data. Additionally, we explore various choices for the hyper-parameter τ to illustrate the effects of variable levels of regularization. In accordance with the algorithms specified above, we employ importance sampling (IS) and stochastic gradient Langevin dynamics (SGLD) to approximate the expectation in (5.15), and the Robbins-Monro algorithm for heat regularization (HR).

6.1 Network Specification

Our experiments are carried out using the following networks:

  1. Small Dense Network: Consisting of an input layer with 784 units and a 10 unit output layer, this toy network contains 7850 total parameters and achieves a test accuracy of 91.2 % when trained with SGD for 5 epochs over the 60,000 image MNIST dataset.

  2. Single Hidden Layer Dense Network: Using the same input and output layer as the smaller network with an additional 200 unit hidden layer, this network provides an architecture with 159,010 parameters. We expect this architecture to achieve a best-case performance of 98.9 % accuracy on MNIST, trained over the same data as the previous network.

6.2 Training Neural Networks From Random Initialization

Considering the computational burden of computing a Monte Carlo estimate for each weight update, we propose that Algorithms 3, 4, and 5 are potentially most useful when employed following SGD: although their per-update progress is on par with, or exceeds, that of SGD with step size (often called learning rate) equal to the value of τ, the computational load required makes them unsuited for end-to-end training. Though in this section we present an analysis of these algorithms used for the entirety of training, this approach is likely too expensive to be practical for contemporary deep networks.

Weight Updates 100 200 300 400 500
SGD 0.75 0.80 0.85 0.87 0.87
IS 0.27 0.45 0.54 0.57 0.65
SGLD 0.72 0.81 0.84 0.86 0.88
HR 0.52 0.64 0.70 0.73 0.76
Table 1: Classification Accuracy on Held-Out Test Data
Figure 4: Held-out test accuracy during training for SGD (red), SGLD (blue), HR (green), and IS (black). SGLD, IS, and HR share a common value of τ, and the learning rate of SGD is set to this same value. The SGLD temperatures and the HR update schedule are held fixed.

Table 1 and the associated Figure 4 demonstrate the comparative training behavior of each algorithm, displaying the held-out test accuracy for identical instantiations of the hidden-layer network trained with each algorithm for 500 parameter updates. Note that a mini-batch size of 20 was used in each case to standardize the amount of training data available to the methods. Additionally, SGLD, IS, and HR each employed the same value of τ, while SGD utilized an equivalent step size, thus fixing the level of regularization in training. To establish computational equivalence between Algorithms 3, 4, and 5, we used a common number of Monte Carlo samples for Algorithms 3 and 4 and a matching number of updates of the chain in Algorithm 5. Testing accuracy was computed by classifying 1000 randomly selected images from the held-out MNIST test set. In related experiments, we observed consistent training progress across all three algorithms; IS and HR, however, trained more slowly, particularly during the parameter updates following initialization. From Figure 4 we can appreciate that while SGD attempts to minimize training error, it nonetheless behaves in a stable way when plotting held-out accuracy, especially towards the end of training. SGLD, on the other hand, is observed to be more stable throughout training.

While SGD, SGLD, and HR utilize gradient information in performing parameter updates, IS does not. This difference contributes to IS's comparatively poor start: as the other methods advance quickly due to the large gradients of the loss landscape, IS's progress is limited, leading to training that depends only on the choice of τ. When τ is held constant, as shown in Figure 4, the rate of improvement remains nearly constant throughout. This suggests the need to dynamically update τ, as is commonly done with annealed learning rates for SGD. Moreover, SGD, SGLD, and HR are all schemes that depend linearly on the gradient ∇L, making mini-batching justifiable, something that is not true for IS.

Average Update Runtime (Seconds)
SGD 0.0032
IS 6.2504
SGLD 7.0599
HR 3.3053
Table 2: Runtime Per Weight Update

It is worth noting that the time to train differed drastically between methods. Table 2 shows the average runtime of each algorithm in seconds. SGD performs roughly three orders of magnitude faster than the others, an expected result considering that the most costly operation in training, filling the network weights, is performed once per Monte Carlo sample per parameter update. Other factors contributing to the runtime discrepancy are the implementation specifications and the deep learning library; here, we use TensorFlow's implementation of SGD, a method for which the framework is optimized. More generally, the runtimes in Table 2 reflect the hyper-parameter choices for the number of Monte Carlo samples, and will vary according to the number of samples considered.

6.3 Local Entropy Regularization after SGD

Figure 5: Training after SGD, for all algorithms. The step size for SGD is set equal to the value of τ used by all three algorithms. SGLD uses fixed temperatures, and HR uses the same update schedule as in Figure 4.

Considering the longer runtime of the sampling-based algorithms in comparison to SGD, it is appealing to utilize SGD to train networks initially, then shift to more computationally intensive methods to identify local minima with favorable generalization properties. Figure 5 illustrates IS and SGLD performing better than HR when applied after SGD. HR smooths the loss landscape, a transformation which is advantageous for generating large steps early in training, but presents challenges as smaller features are lost. In Figure 5, this effect manifests as constant test accuracy after SGD, with no additional progress made. The contrast between the methods is notable since the algorithms use equivalent step sizes; this suggests that the methods, not the hyper-parameter choices, dictate the behavior observed.

Presumably, SGD trains the network into a sharp local minimum or saddle point of the non-regularized loss landscape; transitioning to an algorithm which minimizes the local entropy regularized loss then finds an extremum which performs better on the test data. However, based on our experiments, regularization in the later stages of training does not seem to provide a significant improvement in held-out accuracy over training with SGD on the original loss.

6.4 Algorithm Stability & Monotonicity

Figure 6: Training behaviors for different numbers of samples per parameter update. SGLD temperatures and the HR schedule are the same as in Figure 4, and τ is held fixed throughout. To equalize computational load across algorithms, the length of the Robbins-Monro chain for HR is set accordingly.

Prompted by the guarantees of Theorems 3.2 and 4.2, which establish the effectiveness of these methods when the relevant expectations are computed accurately, we also demonstrate the stability of these algorithms in the case of an inaccurate estimate of the expectation. To do so, we explore the empirical consequences of varying the number of samples used in the Monte Carlo and Robbins-Monro calculations.

Figure 6 shows how each algorithm responds to this change. We observe that IS performs better as we refine our estimate of the expectation (5.15), exhibiting less noise and faster training rates. This finding suggests that a highly parallel implementation of IS, which leverages modern GPU architectures to efficiently compute the relevant expectation, may be practical. SGLD also benefits from a more accurate approximation, displaying faster convergence and higher final testing accuracy when comparing 10 and 100 Monte Carlo samples. HR, however, performs more poorly when we employ longer Robbins-Monro chains, suffering from a diminished step size and exchanging quickly realized progress for less oscillatory testing accuracy. Exploration of the choices of the temperatures and step-size schedules for SGLD and HR remains a valuable avenue for future research, specifically with regard to the interplay between these hyper-parameters and the variable accuracy of estimating the expectation.

6.5 Choosing τ

Figure 7: Training the smaller neural network with different choices of τ using SGLD and IS. Values of τ vary horizontally from very small to large. The top row shows SGLD; the bottom row shows IS. All network parameters were initialized randomly.

An additional consideration of these schemes is the choice of τ, the hyper-parameter which dictates the level of regularization in Algorithms 3, 4, and 5. As noted in [4], large values of τ correspond to a nearly uniform local entropy regularized loss, whereas small values of τ yield a minimally regularized loss which is very similar to the original loss function. To explore the effects of small and large values of τ, we train our smaller network with IS and SGLD for many choices of τ, observing how regularization alters training rates.

The results, presented in Figure 7, illustrate differences between SGLD and IS, particularly in the small-τ regime. As evidenced in the leftmost plots, SGLD trains successfully, albeit slowly, for the smallest value of τ. For small values of τ, the held-out test accuracy improves almost linearly over parameter updates, appearing characteristically similar to SGD with a small learning rate. IS fails for small τ, with highly variable test accuracy improving only slightly during training. Increasing τ, we observe SGLD reach a point of saturation, as additional increases in τ do not affect the training trajectory. We note that this behavior persists as τ → ∞, recognizing that the regularization term in the SGLD algorithm approaches zero for growing τ. IS demonstrates improved training efficiency in the bottom-center panel, showing that an increased τ provides favorable algorithmic improvements. This trend dissipates for larger τ, with IS performing poorly as τ becomes very large. The observed behavior suggests there exists an optimal τ which is architecture and task specific, opening opportunities to further develop a heuristic to tune this hyper-parameter.

6.5.1 Scoping of τ

As suggested in [4], we anneal the scope of τ from large to small values in order to examine the landscape of the loss function at different scales. Early in training, we use comparatively large values to ensure broad exploration, transitioning to smaller values for a comprehensive survey of the landscape surrounding a minimum. The schedule for the parameter updates starts from a large value of τ and decreases it at a rate chosen so that the magnitude of the local entropy gradient is roughly equivalent to that of SGD.

Figure 8: Examination of the effects of scoping τ during training via the update schedule of Section 6.5.1. All four panels display SGLD with fixed temperatures, together with SGD (blue) with a fixed learning rate. Top: SGLD with constant τ, one smaller and one larger value. Bottom: τ scoped from large to small and from small to large.

As shown in Figure 8, annealing τ proves to be useful, and provides a method by which training can focus on more localized features to improve test accuracy. We observe that SGLD with the smaller constant value of τ achieves a final test accuracy close to that of SGD, whereas the larger value is unable to identify optimal minima. Additionally, the plot shows that large-τ SGLD trains faster than SGD in the initial 100 parameter updates, whereas small-τ SGLD lags behind. When scoping, we consider both annealing and reverse-annealing, illustrating that increasing τ over training produces a network which trains more slowly than SGD and is unable to achieve testing accuracy comparable to that of SGD. Scoping τ from large to small values via the schedule of Section 6.5.1 delivers advantageous results, yielding an algorithm which trains faster than SGD after initialization and achieves analogous testing accuracy.

References