The training of parametric machine learning models often involves the formal task of minimizing the expectation of a loss (risk) over a population of data, of the form
$\mathcal{L}(w) = \mathbb{E}_{x \sim p(x)}\left[\ell(w, x)\right]$, (1)
where the loss function $\ell(w, x)$ quantifies the performance of parameter vector $w$ on data point $x$. In practice, though, the data distribution $p(x)$ is usually unknown, and Eq. 1 is approximated by the empirical risk
$\mathcal{L}_\mathcal{D}(w) = \frac{1}{M} \sum_{i=1}^{M} \ell(w, x_i)$. (2)
Here $\mathcal{D} = \{x_1, \dots, x_M\}$ denotes a dataset of size $M$ with instances drawn independently from $p(x)$. Often there is easy access to the gradient $\nabla\mathcal{L}_\mathcal{D}(w)$, and gradient-based optimizers can be used to minimize the empirical risk. The gradient descent (gd) algorithm, for example, updates an estimate for the minimizer of $\mathcal{L}_\mathcal{D}$ according to $w_{t+1} = w_t - \alpha_t \nabla\mathcal{L}_\mathcal{D}(w_t)$, starting from some initialization $w_0$ and using hand-tuned or adaptive step sizes $\alpha_t$. In practice, however, evaluating $\nabla\mathcal{L}_\mathcal{D}$ can become expensive for very large $M$, making it impossible to achieve progress in reasonable time. Instead, stochastic optimization methods are used, which rely on coarser but much cheaper gradient estimates obtained by randomly choosing a mini-batch $\mathcal{B} \subset \mathcal{D}$ of size $m \ll M$ from the training set and computing $\nabla\mathcal{L}_\mathcal{B}(w)$. The gradient descent update then becomes
$w_{t+1} = w_t - \alpha_t \nabla\mathcal{L}_\mathcal{B}(w_t)$,
and the corresponding iterative algorithm is commonly known as stochastic gradient descent (sgd) (Robbins and Monro, 1951).
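The sgd update above can be sketched as follows; the function and variable names are illustrative, not from the paper, and the toy objective is an assumption for demonstration:

```python
import numpy as np

def sgd(grad_fn, w0, data, batch_size=32, lr=0.1, steps=200, seed=0):
    """Minimal stochastic gradient descent: at each step, estimate the
    gradient on a random mini-batch and take a step against it."""
    rng = np.random.default_rng(seed)
    w = float(w0)
    for _ in range(steps):
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        w = w - lr * grad_fn(w, batch)  # w_{t+1} = w_t - alpha_t * grad
    return w

# Toy use: minimize the expected squared error E[(w - x)^2]/2, whose
# mini-batch gradient is w - mean(batch); the minimizer is the data mean.
data = np.random.default_rng(1).normal(loc=3.0, scale=1.0, size=1000)
w_final = sgd(lambda w, b: w - b.mean(), 0.0, data)
```

The iterate hovers near the minimizer with a residual variance set by the step size and the mini-batch gradient noise, which is exactly the effect the criterion in Section 2 reasons about.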
1.1 Overfitting, Regularization and Early-Stopping
Since the risk $\mathcal{L}$ is virtually always unknown, a key question arising when minimizing the empirical risk $\mathcal{L}_\mathcal{D}$ is how the performance of a model trained on a finite dataset generalizes to unseen data. Performance can be measured by the loss itself or by other quantities, e.g., the mean accuracy in classification problems. Typically, to measure the generalization performance, a finite test set is entirely withheld from the training procedure and the performance of the final model is evaluated on it. This test loss, however, is also only an estimator for $\mathcal{L}$ (in the same sense as the train loss) with a finite stochastic error whose variance drops linearly with the test set size. If the model is overly expressive, minimizing the empirical risk (Eq. 2) exactly—or close to exactly—will usually result in poor test performance, since the model overfits to the training data. There is a range of measures that can be taken to mitigate this effect; textbooks like Bishop (2006) give an overview of general concepts, and chapter 7 of Goodfellow et al. (2016) gives a comprehensive summary targeted at deep learning. Some widely used concepts are briefly discussed in the following paragraphs.
Model selection techniques choose a model among a hypothesis class which, under some measure, has the level of complexity best matched to the given dataset. They alter the form of the loss function in Eq. 2 over an outer optimization loop (first choose a good model, then optimize its parameters $w$), such that the final optimization on $\mathcal{L}_\mathcal{D}$ is conducted on an adequately expressive model. This can—but does not need to—constrain the number of variables of the model. In the case of deep neural networks, the number of variables can even significantly exceed the number of training examples (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; Szegedy et al., 2015; He et al., 2016).
If the dataset is not sufficiently representative of the data distribution, an opposite (although not incompatible) approach is to artificially enrich it to match a complex model. Data augmentation artificially enlarges the training set by adding transformations/perturbations of the training data. This can range from injecting noise (Sietsma and Dow, 1991; Vincent et al., 2008) to carefully tuned contrast and colorspace augmentation (Krizhevsky et al., 2012).
Finally, a widely used provision against overfitting is to add regularization terms to the objective function that penalize the parameter vector $w$, typically measured by its $L_2$ or $L_1$ norm (Krogh and Hertz, 1991). These terms constrain the magnitude of $w$. They tend to drive individual parameters toward zero or, in the $L_1$ case, enforce sparsity (Bishop, 2006; Goodfellow et al., 2016). In linear regression, these concepts are known as ridge and lasso regularization (Tibshirani, 1996), respectively.
Despite these countermeasures, high-capacity models will often overfit in the course of the optimization process: while the loss on the training set decreases throughout the optimization procedure, the test loss saturates at some point and starts to increase again. This undesirable effect is usually countered by early stopping the optimization process, meaning that, for a given model, the optimizer is halted if a user-designed early-stopping criterion is met. This is complementary to the model and data design techniques mentioned above and does not undo possible poor design choices of the model. It merely ensures that we do not minimize the empirical risk of a given model beyond the point of best generalization. In practice, however, it is often more practical to 'early-stop' a high-capacity model for algorithmic purposes or because of restrictions to a specific model class, and this is thus preferred or even enforced by the model designer.
Arguably the gold standard of early stopping is to monitor the loss on a validation set (Morgan and Bourlard, 1989; Reed, 1993; Prechelt, 2012). For this, a (usually small) portion of the training data is split off and its loss is used as an estimate of the generalization loss (again in the same sense as Eq. 2), leaving less effective training data to define the training loss $\mathcal{L}_\mathcal{D}$. An ongoing estimate of the generalization performance is then tracked, and the optimizer is halted when this estimate starts to degrade. This procedure has many advantages, especially for very large datasets where splitting off a part has minor or no effect on the generalization performance of the learned model. Nevertheless, there are a few obvious drawbacks. Evaluating the model on the validation set at regular intervals can be computationally expensive. More importantly, the choice of the size of the validation set poses a trade-off: a small validation set has a large stochastic error, which can lead to a misguided stopping decision. Enlarging the validation set yields a more reliable estimate of generalization, but reduces the remaining amount of training data, depriving the model of potentially valuable information. This trade-off is not easily resolved, since it is influenced by properties of the data distribution (the variance introduced in Eq. 3 below) and subject to practical considerations, e.g., redundancy in the dataset.
Recently, Maclaurin et al. (2015) introduced an interpretation of (stochastic) gradient descent in the framework of variational inference. As a side effect, this motivated an early-stopping criterion based on an estimate of the marginal likelihood, obtained by tracking the change in entropy of the posterior distribution over parameters induced by each optimization step. Since the method requires estimating the diagonal of the Hessian, it comes with considerable computational overhead.
The following section motivates and derives a cheap and scalable early stopping criterion which is solely based on local statistics of the computed gradients. In particular, it does not require a held-out validation set, thus enabling the optimizer to use all available training data.
This section derives a novel criterion for early stopping in stochastic gradient descent. We first introduce notation and model assumptions (§2.1), and motivate the idea of evidence-based stopping (§2.2). Section 2.3 covers the more intuitive case of gradient descent; Section 2.4 extends to stochastic settings.
2.1 Distribution of Gradient Estimators
Let $\mathcal{D}'$ be a set of $M'$ instances sampled independently from $p(x)$. The following holds for any such $\mathcal{D}'$, but specifically for the training set $\mathcal{D}$, a subsampled mini-batch $\mathcal{B}$, and any validation or test set. Using the same notation as in Eq. 2, $\mathcal{L}_{\mathcal{D}'}(w)$ and $\nabla\mathcal{L}_{\mathcal{D}'}(w)$ are unbiased estimators of $\mathcal{L}(w)$ and $\nabla\mathcal{L}(w)$, respectively. Since the elements of $\mathcal{D}'$ are independent draws from $p(x)$, by the Central Limit Theorem $\mathcal{L}_{\mathcal{D}'}$ and $\nabla\mathcal{L}_{\mathcal{D}'}$ are approximately normally distributed according to
$\mathcal{L}_{\mathcal{D}'}(w) \sim \mathcal{N}\!\left(\mathcal{L}(w), \frac{\lambda(w)}{M'}\right)$ and $\nabla\mathcal{L}_{\mathcal{D}'}(w) \sim \mathcal{N}\!\left(\nabla\mathcal{L}(w), \frac{\Sigma(w)}{M'}\right)$, (3)
with population variance $\lambda(w) = \operatorname{var}_{x \sim p}[\ell(w, x)]$ and population covariance $\Sigma(w) = \operatorname{cov}_{x \sim p}[\nabla\ell(w, x)]$, respectively. The (co-)variances of $\mathcal{L}_{\mathcal{D}'}$ and $\nabla\mathcal{L}_{\mathcal{D}'}$ both scale inversely proportional to the dataset size $M'$. In the population limit $M' \to \infty$, Eq. 3 concentrates on $\mathcal{L}(w)$ and $\nabla\mathcal{L}(w)$. To simplify notation, the argument will occasionally be dropped, e.g. $\nabla\mathcal{L}_\mathcal{D} := \nabla\mathcal{L}_\mathcal{D}(w)$.
2.2 When to stop? An Evidence-Based Criterion
The perhaps obvious but crucial observation at the heart of the criterion proposed below is that even the full, but finite, dataset is just a finite-variance sample from a population: by Eq. 3, the estimators $\mathcal{L}_\mathcal{D}(w)$ and $\nabla\mathcal{L}_\mathcal{D}(w)$ are approximately Gaussian samples around their expectations $\mathcal{L}(w)$ and $\nabla\mathcal{L}(w)$, respectively. Figure 1 provides an illustrative, one-dimensional sketch. The left subplot shows the marginal distribution of function values (Eq. 3, left). The true, but usually unknown, optimization objective $\mathcal{L}$ (Eq. 1) is the mean of this distribution and is shown in solid orange. The objective $\mathcal{L}_\mathcal{D}$ (Eq. 2), which is optimized in practice and is fixed by the training set $\mathcal{D}$, defines one realization out of this distribution and is shown in dashed blue.
In general, the minimizers of $\mathcal{L}$ and $\mathcal{L}_\mathcal{D}$ need not be the same. Often, for a finite but large number of parameters, the loss $\mathcal{L}_\mathcal{D}$ can be optimized to be very small. When this is the case, the model tends to overfit to the training data and thus performs poorly on newly generated (test) data. A widely used technique to prevent overfitting is to stop the optimization process early. The idea is that variations of training examples which do not contain information for generalization are mostly learned at the very end of the optimization process, where the weights are fine-tuned. In practice the true minimum of $\mathcal{L}$ is unknown; however, the approximate errors of the estimators $\mathcal{L}_\mathcal{D}$ and $\nabla\mathcal{L}_\mathcal{D}$ are accessible at every position $w$. Local estimators for the diagonal of $\Sigma(w)$ have been successfully used before (Mahsereci and Hennig, 2015; Balles et al., 2016) and can be computed efficiently even for very high-dimensional optimization problems. Here the variance estimator of the gradient distribution is denoted $\hat\Sigma(w)$; it is the elementwise sample variance of the per-instance gradients $\nabla\ell(w, x)$ over $x \in \mathcal{S}$ (using the elementwise square $\cdot^{.2}$), where $\mathcal{S}$ is either the full dataset $\mathcal{D}$ or a mini-batch $\mathcal{B}$.
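As a sketch of the estimator just described, assuming access to per-instance gradients (which is not how most frameworks expose them; see the implementation remark in Section 2.4), the mini-batch gradient and the elementwise variance estimate of that gradient can be computed as:

```python
import numpy as np

def gradient_moments(per_example_grads):
    """per_example_grads: array of shape (batch_size, dim), one gradient
    per instance. Returns the mini-batch gradient (their mean) and an
    elementwise estimate of the variance of that mean gradient, i.e. the
    unbiased sample variance divided by the batch size, mirroring the
    1/M' scaling of the covariance in Eq. 3."""
    g = np.asarray(per_example_grads, dtype=float)
    b = g.shape[0]
    mean_grad = g.mean(axis=0)
    var_of_mean = g.var(axis=0, ddof=1) / b
    return mean_grad, var_of_mean
```

The division by the batch size converts the per-instance variance into the variance of the averaged gradient, which is the quantity the stopping criterion needs.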
Since the minimizers of $\mathcal{L}$ and $\mathcal{L}_\mathcal{D}$ are not generally identical, their gradients will also cross zero at different locations $w$. The middle plot of Figure 1 illustrates this behavior. Similar to the left plot, it shows a marginal distribution, but this time over gradients (right expression in Eq. 3). The true gradient $\nabla\mathcal{L}$ is the mean of this distribution and is shown in solid orange. The one realization $\nabla\mathcal{L}_\mathcal{D}$ defined by the dataset $\mathcal{D}$ is shown in dashed blue and corresponds to the dashed blue function values of the left plot. Ideally, the optimizer should stop in an area of $w$-space where possible minima would be likely to occur if different datasets of the same size were sampled from $p(x)$. In the sketch, this is encoded as the red vertical shaded area in the right plot: the area around the minimizer of $\mathcal{L}$ where one standard deviation of the gradient distribution still encloses zero.
Since $\mathcal{L}$ is unknown, however, this criterion is hard to use in practice and must be turned into a statement about $\mathcal{L}_\mathcal{D}$. Denote the minimizer of $\mathcal{L}_\mathcal{D}$ by $w^*_\mathcal{D}$ and the population variance of gradients at $w^*_\mathcal{D}$ as $\Sigma(w^*_\mathcal{D})$. A similar criterion that captures this desideratum in essence is to stop when the collected gradients become consistently very small in comparison to this error (red horizontal shaded area). Close enough to the minima of $\mathcal{L}$ and $\mathcal{L}_\mathcal{D}$, the two criteria roughly coincide (intersection of the red vertical and horizontal shaded areas). A measure for this is the probability
$p(\nabla\mathcal{L}_\mathcal{D}) = \mathcal{N}\!\left(\nabla\mathcal{L}_\mathcal{D};\, 0,\, \frac{\Sigma(w^*_\mathcal{D})}{M}\right)$ (4)
of observing $\nabla\mathcal{L}_\mathcal{D}$, were it generated by a true zero gradient $\nabla\mathcal{L} = 0$. This can be seen as the evidence of the trivial model class $\nabla\mathcal{L} \equiv 0$ (in principle, more general models can be formulated, which lead to a richer class of stopping criteria). If gradients become too small or 'too probable' (stepping into the horizontal shaded area), they are less likely to still carry information about $\nabla\mathcal{L}$ and rather represent noise due to the finiteness of the dataset; then the optimizer should stop. Using these assumptions, the next section derives a stopping criterion for the gradient descent algorithm, which is then extended to stochastic gradient descent as well.
2.3 Early Stopping Criterion for Gradient Descent
When using gradient descent, the whole dataset $\mathcal{D}$ is used to compute the gradient in each iteration. Still, this gradient estimator has an error in comparison to the true gradient $\nabla\mathcal{L}(w)$, which is encoded in the covariance matrix $\Sigma(w)/M$. In practice $\Sigma(w)$ is unknown; the variance estimator $\hat\Sigma(w)$ described in Section 2.2, however, is always accessible. In addition, Eq. 4 requires the gradient variance at the true minimum $w^*_\mathcal{D}$, which is also unknown in practice. Again, it can be approximated by $\hat\Sigma(w)$, the gradient variance at the current position $w$ of the optimizer. This is a sensible choice if the optimizer is in convergence and already close to a minimum. Thus, at every position $w$, a diagonal approximation to $\log p(\nabla\mathcal{L}_\mathcal{D})$ of Eq. 4 is
$\log p(\nabla\mathcal{L}_\mathcal{D}) \approx \sum_{k=1}^{D} \left[ -\frac{1}{2}\log\!\left(\frac{2\pi\hat\Sigma_{kk}(w)}{M}\right) - \frac{M\,[\nabla\mathcal{L}_\mathcal{D}(w)]_k^2}{2\,\hat\Sigma_{kk}(w)} \right]. (5)
Though a simplification, this allows for fast and scalable computation, since dimensions are treated independently of each other. To derive an early stopping criterion based only on $\mathcal{D}$, we borrow the idea of the previous section that the optimizer should halt when gradients become so small that they are unlikely to still carry information about $\nabla\mathcal{L}$, and combine this with well-known techniques from statistical hypothesis testing. Specifically: stop when
$\log p(\nabla\mathcal{L}_\mathcal{D}) \;>\; \mathbb{E}\big[\log p(\nabla\mathcal{L}_\mathcal{D})\big]. (6)
Here $\mathbb{E}[\cdot]$ denotes the expectation under the distribution of Eq. 4. According to Eq. 6, the optimizer stops when the logarithmic evidence of the gradients is larger than its expected value, roughly meaning that more gradient samples lie inside some expected range. In particular, combining Eq. 5 with Eq. 6 and scaling with the dimension $D$ of the objective gives the stopping condition
$\frac{2}{D}\left[\log p(\nabla\mathcal{L}_\mathcal{D}) - \mathbb{E}\big[\log p(\nabla\mathcal{L}_\mathcal{D})\big]\right] = 1 - \frac{M}{D}\sum_{k=1}^{D} \frac{[\nabla\mathcal{L}_\mathcal{D}(w)]_k^2}{\hat\Sigma_{kk}(w)} \;>\; 0. (7)
This criterion (hereafter called eb-criterion, for 'evidence-based') is very intuitive: if all gradient elements lie at exactly one standard deviation distance from zero, i.e. $[\nabla\mathcal{L}_\mathcal{D}]_k^2 = \hat\Sigma_{kk}/M$ for all $k$, the left-hand side of Eq. 7 becomes zero and the optimizer stops.
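A minimal sketch of the test in Eq. 7; the function name is illustrative, and it assumes the variance passed in is already the variance of the averaged gradient itself (so the factor $M$ of Eq. 7 is folded into the estimate):

```python
import numpy as np

def eb_stop(grad, var_of_grad, eps=1e-12):
    """Evidence-based stopping test, a sketch of Eq. 7: stop when
    1 - (1/D) * sum_k grad_k^2 / Var[grad_k] > 0. `var_of_grad` is the
    estimated elementwise variance of `grad` itself."""
    snr = np.square(grad) / (var_of_grad + eps)  # per-element signal-to-noise
    return bool(1.0 - snr.mean() > 0.0)
```

With gradient elements two standard deviations away from zero the test keeps optimizing; once they shrink to, say, half a standard deviation, it signals a stop.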
We note in passing that Eq. 7 defines a mean criterion over all elements of the parameter vector $w$. This implicitly assumes that all dimensions converge on roughly the same time scale, such that weighing the fractions equally is justified. If an optimization problem involves parameters that converge at different speeds, as for example different layers of a neural network (or biases and weights inside one layer), it may be appropriate to compute one stopping criterion per subset of parameters with roughly similar timescales. In Section 3.4 we use this slight variation of Eq. 7 for experiments on a multi-layer perceptron.
2.4 Stochastic Gradients and Mini-batching
It is straightforward to extend the stopping criterion of Eq. 7 to stochastic gradient descent (sgd): the estimator $\nabla\mathcal{L}_\mathcal{D}$ is replaced with the even more uncertain $\nabla\mathcal{L}_\mathcal{B}$ by sub-sampling the training dataset at each iteration. The local gradient distribution is
$\nabla\mathcal{L}_\mathcal{B}(w) \sim \mathcal{N}\!\left(\nabla\mathcal{L}(w), \frac{\Sigma(w)}{m}\right)$,
and the criterion of Eq. 7 applies with the dataset size $M$ replaced by the mini-batch size $m$.
Remark on implementation: Computing the stopping criterion is straightforward once the variance estimate $\hat\Sigma$ is available; it amounts to an elementwise division of the squared gradient by the variance, followed by an aggregation over all dimensions. Balles et al. (2016, §4.2) comment on this issue and present a solution for computing $\hat\Sigma$ implicitly in contemporary software frameworks, increasing the computational cost of, e.g., a backward pass of a neural network by a factor of about 1.25.
For proof-of-concept experiments, we evaluate the eb-criterion on a number of standard classification and regression problems. For illustration and analysis, Sections 3.1 and 3.2 show a least-squares toy problem and large synthetic quadratic problems; Sections 3.3 and 3.4 deal with the more realistic settings of logistic regression on the well-known Wisconsin Breast Cancer Dataset (WDBC) (Wolberg et al., 2011) and a multi-layer perceptron on the handwritten digits dataset MNIST (LeCun et al., 1998). Section 3.5 contains experiments for logistic regression, as well as for a shallow neural network, on the SECTOR dataset (Chang and Lin, 2011); SECTOR complements MNIST and WDBC in the sense that it has a much less favorable feature-to-datapoint ratio, which increases the gains in generalization performance when all available training data can be used.
3.1 Linear Least-Squares as Toy Problem
We begin with a toy regression problem on artificial data generated from a one-dimensional linear function with additive Gaussian noise. This simple setup allows us to illustrate the model fit at various stages of the optimization process and provides us with the true generalization performance, since we can generate large amounts of test data. We use a largely over-parametrized 50-dimensional linear regression model which contains the ground-truth features (bias and linear) and additional periodic features with varying frequency. These features obviously define a massively over-parametrized model of the true function, which is thus prone to overfitting. We fit the model by minimizing the squared error. We use 20 samples for training and about 10 for validation, and then train the model using gradient descent. The results are shown in Figure 3; both the validation loss and the eb-criterion find an acceptable point to stop the optimization procedure, thus preventing overfitting.
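The setup can be sketched as follows; the exact frequencies, noise level, learning rate, and step count are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a one-dimensional linear function observed with Gaussian noise.
x_train = rng.uniform(-1.0, 1.0, size=20)
y_train = 0.7 * x_train + 0.3 + rng.normal(scale=0.1, size=20)

def features(x, n_periodic=48):
    """Bias + linear feature plus many sine features of growing frequency:
    a deliberately over-parametrized 50-dimensional linear model."""
    cols = [np.ones_like(x), x]
    cols += [np.sin((k + 1) * np.pi * x) for k in range(n_periodic)]
    return np.stack(cols, axis=1)

# Plain gradient descent on the mean squared error.
Phi = features(x_train)          # (20, 50) design matrix
w = np.zeros(Phi.shape[1])
lr = 0.02
for _ in range(2000):
    w -= lr * Phi.T @ (Phi @ w - y_train) / len(y_train)
```

With 50 features and only 20 training points the model can interpolate the noise; run long enough, the training loss keeps falling while the test loss of freshly sampled data would rise, which is the regime the stopping criterion targets.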
3.2 Synthetic Large-Scale Quadratic Problem
We construct synthetic quadratic optimization problems of the form $\mathcal{L}_\mathcal{D}(w) = \frac{1}{2}(w - w^*_\mathcal{D})^\top B\,(w - w^*_\mathcal{D})$, where $B$ is a positive definite matrix and $w^*_\mathcal{D}$ is the global minimizer of $\mathcal{L}_\mathcal{D}$; the gradient is $\nabla\mathcal{L}_\mathcal{D}(w) = B\,(w - w^*_\mathcal{D})$. In this controlled environment we can test the eb-criterion on different configurations of eigen-spectra, for example uniform, exponential, or structured (a few large, many small eigenvalues). The matrix $B$ is constructed by defining a diagonal matrix $\Lambda$ which contains the eigenvalues on its diagonal, and a random rotation $Q$ drawn from the Haar measure on the $D$-dimensional unit sphere (Diaconis and Shahshahani, 1987); then $B = Q \Lambda Q^\top$. We artificially define the 'empirical' loss by moving the true minimizer $w^*$ by a Gaussian random variable $\nu \sim \mathcal{N}(0, \sigma^2 I)$, such that $w^*_\mathcal{D} = w^* + \nu$. Thus $w^*_\mathcal{D}$ is distributed according to $\mathcal{N}(w^*, \sigma^2 I)$, and we define $\mathcal{L}(w) = \frac{1}{2}(w - w^*)^\top B\,(w - w^*)$. For the experiments we chose a fixed input dimension and zero ($w^* = 0$) as the true minimizer of $\mathcal{L}$. Figure 4 shows results for three different types of eigen-spectra.
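The construction of $B$ can be sketched as follows; drawing the rotation via QR decomposition of a Gaussian matrix is one standard way to sample (approximately) from the Haar measure, and the dimension, spectrum, and noise scale below are illustrative:

```python
import numpy as np

def random_rotation(dim, rng):
    """Sample a rotation from the Haar measure on the orthogonal group via
    QR decomposition of a Gaussian matrix (sign-fixing the columns)."""
    Q, R = np.linalg.qr(rng.normal(size=(dim, dim)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
dim = 100
eigvals = np.exp(np.linspace(0.0, 3.0, dim))      # example exponential spectrum
Q = random_rotation(dim, rng)
B = Q @ np.diag(eigvals) @ Q.T                    # B = Q Lambda Q^T, pos. definite

w_true = np.zeros(dim)                            # true minimizer of L
w_emp = w_true + rng.normal(scale=0.1, size=dim)  # shifted 'empirical' minimizer

def grad_empirical(w):
    """Gradient of the empirical quadratic, B (w - w_emp)."""
    return B @ (w - w_emp)
```

The sign fix on the columns is needed because raw QR output is not uniformly distributed over the orthogonal group.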
The eb-criterion performed well across the different types of partially ill-conditioned problems and induced meaningful stopping decisions; this worked well for different noise levels $\sigma$ (note that the covariance matrix of the gradient is dense).
We noticed, however, that another assumption is crucial for the eb-criterion, which might also explain the slightly early stopping decision for the logistic regressor on WDBC (Figure 2 in the subsequent section) and full-batch gd on MNIST (Figure 7, column 1). Eq. 6 implicitly assumes that, on its path to the minimum of the empirical loss $\mathcal{L}_\mathcal{D}$, the optimizer passes by a better minimizer with higher generalization performance; this allows using variances only (in the form of $\hat\Sigma$) in the stopping criterion. There is no information about bias (the direction of the shift $\nu$), because this is fundamentally hard to know.
The assumption is usually well justified: primarily because otherwise early stopping would not be a viable concept in the first place; and second because over-fitting is usually associated with 'too large' weights (weights are initialized small, and regularizers that pull weights toward zero are often a good idea). On the way from small weights (under-fitting) to too large weights (over-fitting), optimizers usually pass a better point with weights of intermediate size. If the assumption is fundamentally violated, the eb-criterion will stop too early. We can artificially construct this setup by initializing the optimizer with weights that lead to an optimization path without any over-fitting; this is depicted in Figure 5. The setup is identical to the one in Figure 4 ($B$, $w^*$, and $w^*_\mathcal{D}$ are identical); the only difference is the initialization of the weights for the optimization process. Since, with this initialization, the lowest value of $\mathcal{L}$ that can be reached by minimizing $\mathcal{L}_\mathcal{D}$ lies at the end of the optimization path, any early stopping decision will lead to under-fitting. In Figure 5 the (exact) test loss flattens out and does not increase again for all three configurations; the assumptions of the eb-criterion are violated and it induces a sub-optimal stopping decision. Figure 6 illustrates these two scenarios in a 2D sketch.
3.3 Logistic Regression on WDBC
Next, we apply the eb-criterion to logistic regression on the Wisconsin Breast Cancer dataset. The task is to classify cell nuclei (described by features such as radius, area, symmetry, et cetera) as either malignant or benign. We conduct a second-order polynomial expansion of the original 30 features (i.e., features of the form $x_i x_j$), resulting in 496 effective features. Of the 569 instances in the dataset, we withhold 369, a relatively large share, for testing purposes in order to get a reliable estimate of the generalization performance. The remaining 200 instances are available for training the classifier. We perform two training runs: one with early stopping based on a validation set of 60 instances (reducing the training set to 140 instances) and one using the full training set and early stopping with the eb-criterion derived in Section 2.3.
If parameters converge at different speeds during the optimization, as indicated in Section 2.3, it is sensible to compute the criterion separately for different subgroups of parameters. Generally, if we split the parameters into disjoint subgroups $G_j$ of sizes $D_j = |G_j|$, the criterion for subgroup $j$ reads $1 - \frac{M}{D_j} \sum_{k \in G_j} \frac{[\nabla\mathcal{L}_\mathcal{D}]_k^2}{\hat\Sigma_{kk}} > 0$. Since bias and weight gradients usually have different magnitudes, they converge at different speeds when trained with the same learning rate. For logistic regression, we thus treat the weight vector and the bias parameter of the logistic regressor as separate subgroups. Since the criterion is noisy, we also smooth it with an exponential running average. The results are depicted in the left-most column of Figure 7. The effect of the additional training data is clearly visible, resulting in lower test losses throughout the optimization process. In this scarce-data setting, the validation loss, computed on a small set of only 60 instances, is clearly misleading (left-most column, top plot): it decreases throughout the optimization process and thus fails to find a suitable stopping point. The bottom-left plot of Fig. 7 shows the evolution of the eb-criterion. The induced stopping point is not optimal (in that it does not coincide with the point of minimal test loss) but falls into an acceptable region. Thanks to the additional training data, the test loss at the stopping point is lower than any test loss attainable when withholding a validation set.
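The per-subgroup criterion with exponential smoothing might be sketched as follows; the group names, smoothing constant, and the assumption that the variance of the averaged gradient is directly available are illustrative:

```python
import numpy as np

def subgroup_eb_values(grad, var_of_grad, groups, eps=1e-12):
    """One criterion value per parameter subgroup: 1 - (1/|G|) * sum_{k in G}
    grad_k^2 / Var[grad_k]. `groups` maps a name to an index array; a
    positive value suggests stopping that subgroup."""
    vals = {}
    for name, idx in groups.items():
        snr = np.square(grad[idx]) / (var_of_grad[idx] + eps)
        vals[name] = 1.0 - snr.mean()
    return vals

class EMA:
    """Exponential running average used to smooth the noisy criterion."""
    def __init__(self, alpha=0.1):
        self.alpha, self.value = alpha, None

    def update(self, x):
        self.value = x if self.value is None else \
            (1.0 - self.alpha) * self.value + self.alpha * x
        return self.value
```

In a network, `groups` would hold one index set per weight matrix and per bias vector, each tracked by its own `EMA` instance.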
3.4 Multi-Layer Perceptron on MNIST
For a non-convex optimization problem, we train a multi-layer perceptron (MLP) on the well-studied problem of hand-written digit classification on the MNIST dataset (28×28 gray-scale images). We use an MLP with five hidden layers of 2500, 2000, 1500, 1000 and 500 units, respectively, ReLU activations, and a standard cross-entropy loss for the 10 outputs with soft-max activation (about 12 million trainable parameters). We treat each weight matrix and each bias vector of the network as a separate subgroup, as described in Section 3.3. The MNIST dataset contains 60k training images, which we split 40k-10k-10k into train, test and validation sets. Again, the criterion is smoothed by an exponential running average.
The results for full-batch gradient descent are shown in column 1 of Figure 7, and sgd runs with mini-batch size 128 and three different learning rates in columns 2-4 of the same figure. The relatively large validation set (10k images) yields accurate estimates of the generalization performance; consequently, its stopping points more or less coincide with the points of minimal test loss, and the reduced training set size leads to only slightly higher test losses. Since the strength of the eb-criterion is to utilize the additional training data, and since validation losses are here unusually reliable estimates of the generalization error, this setup favors the early stopping criterion based on the validation loss. Still, for all three sgd runs (columns 2-4 in Figure 7) the eb-criterion performs as well as or better than the validation-set-induced method. An additional observation is that the quality of the stopping points induced by the eb-criterion varies between the different training configurations; it is thus arguably not as stable as setups where the validation loss is very reliable. For gradient descent (full training set in each iteration, column 1 of Figure 7), the eb-criterion performs reasonably well but (very similarly to the gradient descent runs for logistic regression on WDBC in Figure 2) chooses to stop a bit too early, and thus results in a slightly worse test set performance. The difference in test loss is small, but the eb-criterion clearly does not outperform the nearly exactly positioned stopping point induced by this well-calibrated validation loss.
3.5 Logistic Regression and Shallow-Net on SECTOR
Finally, we trained a logistic regressor and a shallow fully-connected neural network on the SECTOR dataset (Chang and Lin, 2011). It contains 6412 training and 3207 test datapoints with 55 197 features each, thus having a much less favorable feature-to-datapoint ratio than, for example, MNIST (784 features vs. 60 000 datapoints). The features are extracted from web pages of companies, and the classes describe 105 different industry sectors. The shallow network has one hidden layer with 200 hidden units; the logistic regressor thus contains about 5.8 million, and the shallow net about 11 million, trainable parameters. Experiments are set up in the same style as the ones in Sections 3.3 and 3.4. We use 20% of the training data for the validation set; this yields 1282 validation examples and a reduced number of 5130 training examples. Figure 8 shows results; columns 1-2 for the logistic regressor and columns 3-4 for the shallow net. Since the size of the dataset is quite small, the gap between test losses is quite large (middle row; full training set in blue, reduced training set due to the validation split in red). Neither architecture overfits strongly; the test loss rather flattens out, although we trained both architectures for very long and initialized the weights close to zero. The eb-criterion is again a bit too cautious and induces stopping when the test loss starts to flatten out; but since it allows utilization of all training data, it beats the validation set on both architectures.
3.6 Greedy Element-wise Stopping
For the eb-criterion, we compute the quantity $[\nabla\mathcal{L}]_k^2 / \hat\Sigma_{kk}$ for each gradient element $k$. This quantity can be understood as a 'signal-to-noise ratio', and the eb-criterion takes the mean over the individual ratios. As a side experiment, we employ the same idea in an element-wise fashion: we stop the training for an individual parameter $w_k$ (not to be confused with the full parameter vector $w_t$ at iteration $t$) as soon as its ratio falls below the threshold. Importantly, this is not a sparsification of the parameter vector, since $w_k$ is not set to zero when being switched off but merely fixed at its current value. We smooth successive ratios over multiple steps using an exponential moving average; these averages are initialized at high values, resulting in a warm-up phase where all weights are 'active'. Figure 9 presents results; intriguingly, immediately after the warm-up phase, the training of a considerable fraction of all weights (10 percent or more, depending on the training configuration) is stopped. This fraction increases further as training progresses. Especially towards the end, where overfitting sets in, a clear signal can be seen: the fraction of weights where learning has been stopped suddenly increases at a higher rate. Despite this reduction in effective model complexity, the network reaches test losses comparable to our training runs without greedy element-wise stopping (test losses in Figure 7). The fraction of switched-off parameters towards the end of the optimization process reaches up to 80 percent in a single layer and around 50 percent for the whole net.
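A sketch of this greedy element-wise scheme; the threshold, smoothing constant, and initialization value are illustrative assumptions:

```python
import numpy as np

class GreedyElementwiseStopper:
    """Track a smoothed signal-to-noise ratio per parameter and permanently
    freeze coordinates whose ratio falls below a threshold. Frozen parameters
    keep their current value (their gradient is masked to zero); they are
    not set to zero themselves."""
    def __init__(self, dim, threshold=1.0, alpha=0.05, init=10.0):
        self.snr = np.full(dim, init)      # high init => warm-up, all active
        self.active = np.ones(dim, dtype=bool)
        self.threshold, self.alpha = threshold, alpha

    def mask(self, grad, var_of_grad, eps=1e-12):
        ratio = np.square(grad) / (var_of_grad + eps)
        self.snr = (1.0 - self.alpha) * self.snr + self.alpha * ratio
        self.active &= self.snr >= self.threshold  # greedy: off stays off
        return grad * self.active                  # zero the frozen gradients
```

The optimizer would apply its update to `stopper.mask(grad, var)` instead of the raw gradient, so switched-off coordinates simply stop moving.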
We presented the eb-criterion, a novel approach to the problem of determining a good point for early stopping in gradient-based optimization. In contrast to existing methods, it does not rely on a held-out validation set and enables the optimizer to utilize all available training data. We exploit fast-to-compute statistics of the observed gradient to assess when it represents noise originating from the finiteness of the training set, rather than an informative gradient direction. The presented method is applicable in gradient descent as well as stochastic gradient descent settings and adds little overhead in computation time and memory consumption. In our experiments, we presented results for linear least-squares fitting, logistic regression and a multi-layer perceptron, showing the general concept to be viable. Furthermore, preliminary findings on element-wise early stopping open up the possibility of monitoring and controlling model fitting at a higher level of detail.
- Balles and Hennig  L. Balles and P. Hennig. Follow the Signs for Robust Stochastic Optimization. ArXiv e-prints, May 2017.
- Balles et al.  L. Balles, J. Romero, and P. Hennig. Coupling Adaptive Batch Sizes with Learning Rates. ArXiv e-prints, Dec. 2016.
- Bishop  C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
- Chang and Lin  C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines, 2011. URL https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html.
- Diaconis and Shahshahani  P. Diaconis and M. Shahshahani. The subgroup algorithm for generating uniform random variables. Probability in Engineering and Informational Sciences, 1(15-32):40, 1987.
- Goodfellow et al.  I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
- He et al.  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
- Krizhevsky et al.  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), volume 25, pages 1097–1105, 2012.
- Krogh and Hertz  A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems (NIPS), volume 4, pages 950–957, 1991.
- LeCun et al.  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- Maclaurin et al.  D. Maclaurin, D. Duvenaud, and R. P. Adams. Early stopping is nonparametric variational inference. Technical Report arXiv:1504.01344 [stat.ML], 2015.
- Mahsereci and Hennig  M. Mahsereci and P. Hennig. Probabilistic line searches for stochastic optimization. In Advances in Neural Information Processing Systems (NIPS), volume 28, pages 181–189, 2015.
- Martens  J. Martens. New perspectives on the natural gradient method. CoRR, abs/1412.1193, 2014. URL http://arxiv.org/abs/1412.1193.
- Morgan and Bourlard  N. Morgan and H. Bourlard. Generalization and parameter estimation in feedforward nets: Some experiments. In Proceedings of the 2nd International Conference on Neural Information Processing Systems, pages 630–637. MIT Press, 1989.
- Prechelt  L. Prechelt. Early Stopping — But When?, pages 53–67. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. ISBN 978-3-642-35289-8. doi: 10.1007/978-3-642-35289-8_5.
- Reed  R. Reed. Pruning algorithms-a survey. IEEE transactions on Neural Networks, 4(5):740–747, 1993.
- Robbins and Monro  H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, Sep. 1951.
- Sietsma and Dow  J. Sietsma and R. J. Dow. Creating artificial neural networks that generalize. Neural networks, 4(1):67–79, 1991.
- Simonyan and Zisserman  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
- Szegedy et al.  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
- Tibshirani  R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
- Tieleman and Hinton  T. Tieleman and G. Hinton. RMSprop Gradient Optimization, 2015. URL http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf.
- Vincent et al.  P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 1096–1103. ACM, 2008.
- Wolberg et al.  W. H. Wolberg, W. N. Street, and O. L. Mangasarian. UCI Machine Learning Repository: Breast Cancer Wisconsin (Diagnostic) Data Set, Jan. 2011. URL http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic).
5 Comparison to RMSprop
This section explores the differences and similarities between sgd+eb-criterion and RMSprop. The comparison is meant to build intuition rather than to pit the two methods against each other as competitors; they were derived for different purposes and could in principle be combined.
5.1 Non-Greedy Elementwise eb-Criterion
The non-greedy elementwise eb-criterion can be formulated as
for some conservative smoothing constant , usually , or , learning rate , and the fraction as defined in Section 3.6. The symbol ‘’ denotes elementwise division and is the indicator function. In contrast to the greedy implementation of Section 3.6, where switched-off learning rates stayed switched off, Eq. 10 allows learning to be switched on again.
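The non-greedy behavior can be illustrated with a short NumPy sketch. The actual test statistic of the eb-criterion is the one defined in Section 3.6; here a generic signal-to-noise statistic with hypothetical names (`eb_mask`, `kappa`) stands in for it, so this is a sketch of the mechanism, not of the exact criterion:

```python
import numpy as np

def eb_mask(grad_mean, grad_var, batch_size, kappa=1.0):
    """Hypothetical elementwise switch: keep a learning rate active only
    while the gradient's signal-to-noise ratio exceeds kappa. The mask is
    re-evaluated every iteration, so a switched-off parameter can be
    switched on again (non-greedy), unlike the greedy variant of Sec. 3.6."""
    snr = batch_size * grad_mean**2 / (grad_var + 1e-12)
    return (snr > kappa).astype(grad_mean.dtype)  # 1.0 = active, 0.0 = off

def sgd_eb_step(theta, grad_mean, grad_var, lr, batch_size, kappa=1.0):
    # Masked sgd update: converged (low-snr) coordinates take no step.
    mask = eb_mask(grad_mean, grad_var, batch_size, kappa)
    return theta - lr * mask * grad_mean
```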
5.2 Learning Rate Damping in RMSprop
RMSprop (Tieleman and Hinton, 2015) is a well-known optimization algorithm that scales learning rates elementwise by an exponential running average of gradient magnitudes; specifically:
again for some smoothing constant , usually , and learning rate . Let be the largest element of the factor ; then the second line of Eq. 11 can be rewritten as
The fraction describes the scaling of learning rates relative to the largest one: if the element of is very small, the learning of the corresponding parameter is damped heavily relative to a full step of size . This can be interpreted as ‘switching off’ the learning of these parameters, similar to the elementwise eb-criterion.
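The damping mechanism can be sketched in a few lines of NumPy. The update follows the standard RMSprop recursion; the paper's exact constants and symbols are elided above, so the names and default values here are illustrative:

```python
import numpy as np

def rmsprop_step(theta, grad, avg_sq, lr=1e-3, gamma=0.9, eps=1e-8):
    """One RMSprop iteration: update the exponential running average of
    squared gradients, then take an elementwise-rescaled gradient step."""
    avg_sq = gamma * avg_sq + (1.0 - gamma) * grad**2
    theta = theta - lr * grad / (np.sqrt(avg_sq) + eps)
    return theta, avg_sq

def relative_damping(avg_sq, eps=1e-8):
    """Scaling of each learning rate relative to the largest one
    (the quantity discussed in Section 5.2; names assumed)."""
    inv = 1.0 / (np.sqrt(avg_sq) + eps)
    return inv / inv.max()
```

A parameter whose running average of squared gradients is large receives a relative damping factor far below one, which is the ‘switching-off’ effect described above.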
5.3 Connections and Differences
The following table gives a rough overview of the possible set of learning rates for each method.
| method | step size domain | maximal step size | minimal step size |
| --- | --- | --- | --- |
| sgd+eb-crit |  |  | (only when converged) |
The table shows that sgd+eb-criterion is a very minor variation of sgd, in the sense that it can additionally set learning rates to zero, but only for converged parameters, in order to prevent overfitting. It does not improve the convergence properties of sgd while it is still training, since the sizes of the ‘active’ learning rates remain unchanged. In particular, it does not explicitly encode curvature or other geometric properties of the loss.
In contrast to this, RMSprop also adapts the absolute value of the largest possible step at every iteration by a varying factor , and scales the other steps relative to it. It is based on the steepest descent direction in -space, measured by a weighted norm, where the weight matrix is the inverse Fisher information matrix at every position . (If the loss can be interpreted as a negative log-likelihood, this is an approximation to the steepest descent direction in distribution space, where an approximation to the KL-divergence defines a measure.) If the learned conditional distribution approximates the true conditional data distribution well, this also approximates the expected Hessian of the loss (Martens, 2014). RMSprop thus encodes geometric information, which allows for faster convergence than sgd.
Another interpretation of RMSprop, much closer in spirit to the eb-criterion, has recently been formulated by Balles and Hennig. It is possible to associate the RMSprop update of Eq. 11 with local gradient and variance estimators, according to
The fraction on the right-hand side of Eq. 13 contains the term , which closely resembles the inverse of . Thus gradients with a small signal-to-noise ratio are shortened, while noise-free gradients induce steps of equal(!) size in every direction (note that they are independent of the magnitude of ). RMSprop can thus be seen as an elementwise stochastic gradient-sign estimator, mildly damped in the presence of noise.
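The sign-estimator view admits a quick numerical check, assuming the running average has reached its steady state for a constant, noise-free gradient:

```python
import numpy as np

# For a constant, noise-free gradient the running average converges to
# grad**2 exactly, so the rescaled step collapses to sign(grad): equal
# magnitude in every direction, independent of |grad|.
grad = np.array([5.0, -0.01, 100.0])
avg_sq = grad**2               # steady state of the running average
step = grad / np.sqrt(avg_sq)  # -> [1., -1., 1.], i.e. sign(grad)
```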
We have now explored algebraic as well as behavioral connections between sgd+eb-criterion and RMSprop; the following paragraphs summarize the above points and list some noteworthy distinctions:
Geometry encoding: RMSprop encodes geometric information about the objective and can be loosely associated with second-order methods that perform an approximate diagonal preconditioning at every iteration. Alternatively, it can be interpreted as a stochastic sign estimator, scaling each step with the inverse gradient magnitude and damping it in the presence of noise. In contrast to this, the eb-criterion is just a mild add-on to sgd; it does not alter learning rates due to curvature or other geometric effects.
Mild damping vs. stopping: The eb-criterion defines a strict threshold, justified by a statistical test, at which learning should be terminated. RMSprop implements a vaguer version, in the sense that the optimizer should move somewhat ‘less’ in directions of uncertain gradients. Even if the signal-to-noise ratio falls well below the threshold of the stopping decision induced by the eb-criterion (roughly ), RMSprop merely reduces the step proportionally to the inverse of the square root (e.g., for (eb-crit stops), the RMSprop step is reduced by a factor of only ).
Smoothing and bias: The derivation of Eq. 13 omits the geometric smoothing contribution of , which is present in the RMSprop update of Eq. 11. In contrast, the eb-criterion relies on local (non-smoothed) computations of ; this is essential for a stopping decision, since large gradient samples are usually associated with large variances as well. Smoothing the latter would thus bias learning towards following large gradients; in the case of RMSprop, it does bias towards larger steps for high-variance samples.
The views presented above give insight into the internal workings of both RMSprop and the eb-criterion. It is apparent that, even though RMSprop shortens steps in high-variance directions, they are not damped enough to prevent the objective from overfitting to the data.
5.4 Empirical Comparison
For an empirical comparison, we run RMSprop, sgd with the elementwise eb-criterion (as in Eq. 10), and an instance of vanilla sgd on a multi-layer perceptron on MNIST, similar to the setup in Section 3.4. For the sgd instance that uses the eb-criterion, the fraction of switched-off parameters is defined as
The percentage of ‘switched-off’ parameters for RMSprop can be roughly described as the fraction of parameters whose (defined in Section 5.2) lies below a threshold
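Under the same illustrative notation as in the sketch above, this fraction could be computed as follows; the threshold value is an assumption, since the constant used in the experiment is elided here:

```python
import numpy as np

def switched_off_fraction(avg_sq, threshold=1e-2, eps=1e-8):
    """Fraction of parameters whose relative RMSprop scaling factor
    (the quantity of Section 5.2) lies below a threshold; a rough
    analogue of the eb-criterion's fraction of switched-off rates."""
    inv = 1.0 / (np.sqrt(avg_sq) + eps)
    rho = inv / inv.max()
    return float(np.mean(rho < threshold))
```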
The same smoothing factor was used for both methods to allow a meaningful comparison. Figure 10 depicts the results; the first row shows training losses (light colors) and test losses (corresponding dark colors) of all three methods. Rows 3–7 show the evolution of for five choices of ; the second row shows . As mentioned above, in contrast to the ‘greedy’ implementation of Section 3.6 (where switched-off learning rates stayed switched off), and for a more natural comparison to RMSprop, we allowed learning rates to be switched on again. The results for and are color-coded as in Figure 9 of the main paper: green for the full net, and additionally red for weight matrices and orange for biases per layer.
The test losses of vanilla sgd and sgd+eb-criterion are almost identical, while the training loss of sgd+eb-criterion is a bit more conservative than that of vanilla sgd; this is expected, since the eb-criterion should ideally not impair generalization performance, but may lead to larger training losses at convergence due to the overfitting prevention. Already at the beginning of training, sgd+eb-criterion switches off about 10–20% of all learning rates; after that, the fraction increases to about 50% (green line, second row). Since the eb-criterion only detects convergence, the curve is quite monotonic, exhibiting no significant jumps.
RMSprop converges a bit faster, as expected. The plots for are also richer in structure. In particular, one layer seems to have significantly smaller learning rates, for both biases and weights, than the other layers. Overall, the difference between the largest learning rate and all others tends to increase roughly over the course of optimization (especially for , green line, last row). There are also significant jumps in all the curves, in contrast to the rather monotonically increasing line of sgd+eb-criterion. This indicates nontrivial scaling of the absolute as well as relative sizes of learning rates throughout the optimization process; also, no learning rate is smaller than times the largest one at any iteration (third row, green line at exactly zero).
In the future, a combination of both, learning rate scaling and overfitting prevention, i.e. combining the eb-criterion with an advanced search direction like that of RMSprop, would be desirable.