
Stochastic Normalizations as Bayesian Learning

by Alexander Shekhovtsov et al.

In this work we investigate the reasons why Batch Normalization (BN) improves the generalization performance of deep networks. We argue that one major reason, distinguishing it from data-independent normalization methods, is the randomness of the batch statistics. This randomness appears in the parameters rather than in the activations and admits an interpretation as practical Bayesian learning. We apply this idea to other (deterministic) normalization techniques that are oblivious to the batch size, and show that their generalization performance can be improved significantly by Bayesian learning of the same form. We obtain test performance comparable to BN and, at the same time, better validation losses, suitable for subsequent output uncertainty estimation through an approximate Bayesian posterior.



1 Introduction

Recent advances in hardware and deep neural networks make it possible to train large-capacity models whose training accuracy approaches 100% even on rather difficult tasks. At the same time, however, we would like to ensure a small generalization gap, a high validation accuracy and reliable confidence predictions. For this reason, regularization methods become very important.

As the base model for this study we chose the All-CNN network of [23], a network with eight convolutional layers, and train it on the CIFAR-10 dataset. Recent work [7] compares different regularization techniques on this network and reports test accuracies with their probabilistic network and with dropout, but omits BN. Fig. 1 shows how well BN generalizes on this problem when applied to exactly the same network: it easily achieves a validation accuracy significantly better than the dedicated regularization techniques proposed in [7]. BN thus appears to be a very powerful regularization method. The goal of this work is to understand and exploit the underlying mechanism. Towards this end we identify two components: one is a non-linear reparametrization of the model that preconditions gradient descent, and the other is stochasticity.

The reparametrization may also be achieved by other normalization techniques, such as weight normalization [19] and analytic normalization [22], amongst others [14, 1]. The advantage of these methods is that they are deterministic and thus do not rely on batch statistics, often incur less computational overhead, are continuously differentiable [22] and can be applied more flexibly, e.g., with small batch sizes or in recurrent neural networks. Unfortunately, while improving the training loss, these methods do not generalize as well as BN, as was observed experimentally in [8, 22]. We therefore look at further aspects of BN that could explain its regularization.

Ioffe and Szegedy [11] suggest that the regularization effect of BN is related to the randomness of the batch-normalization statistics, which stems from the random composition of batches. However, how and why this kind of randomness works has remained unclear. Recent works demonstrated that this randomness can be reproduced [25] or simulated [2] at test time to obtain useful uncertainty estimates. We investigate the effect of the randomness of BN on training. For this purpose we first design an experiment in which the training procedure is kept exactly the same (including the learning rate) but the normalization statistics in BN layers are computed over a larger random subset of the training data, the normalization batch. Note that the training batch size, for which the loss is computed, is fixed to 32 throughout the paper: changing this value significantly impacts the performance of SGD and would compromise the comparison of normalization techniques.

The results shown in Fig. 2 confirm that using larger normalization batches (1024) decreases the stochasticity of BN (the expected training loss gets closer to the training loss of the evaluation model) but, at the same time, its validation loss and accuracy get worse. The effect is not very strong, possibly because BN in convolutional networks performs spatial averaging, which significantly reduces the variance in most layers. Yet modeling and explaining it statistically allows us to understand the connection to Bayesian learning and to apply the same kind of regularization with deterministic normalization techniques. Decoupling this regularization from the batch size and the spatial sizes of the layers, and learning the appropriate amount of randomness, allows us to significantly reduce overfitting and to predict better calibrated uncertainty at test time.

Figure 1: The regularization effect of BN over the training epochs (x-axis). Left (training loss): Batch Normalization, designed for better and faster minimization of the training loss, converges to a small but non-zero value, while standard training reaches numerical accuracy. When BN is switched to the evaluation mode (BN Eval), the loss on the same training data is smaller by an order of magnitude, i.e., it shows a form of generalization when switching the mode. Right (validation accuracy): BN clearly achieves a higher validation accuracy. The learning rate is chosen by numerically optimizing the training loss in 5 epochs for each method. Please refer to § 0.A for details of the experimental setup.
Figure 2: The effect of randomness in BN (left: training loss, right: validation loss). The training is performed with the same parameters. In the experiment with normalization batch 1024, BN layers use statistics of a set consisting of the training batch of size 32 plus 992 further samples chosen randomly at each step of SGD; all dependencies contribute to the total derivative. We observe that the gap between training and evaluation modes becomes smaller (as expected) but also that the validation loss increases. The validation accuracies degrade correspondingly.

1.1 Contribution

We begin with the observation that BN and the deterministic normalization techniques rely on the same reparametrization of the weights and biases: in BN, node statistics taken over batches are used for normalization and are followed by new affine parameters (scale and bias). We measure how random BN is, depending on the batch size and the spatial dimensions of the network, and propose the view that BN can be represented as a noise-free normalization followed by a stochastic scale and bias. Next, we verify the hypothesis that such noises are useful for the two considered deterministic normalization techniques: weight normalization [19] and analytic normalization [22]. Furthermore, the view of a stochastic scale and bias allows us to connect BN to variational Bayesian inference [9] over these parameters and to variational dropout [12]. We test the complete Bayesian learning approach that learns the noise variances of the scale parameters and show that combining it with deterministic normalizations yields significant improvements in validation accuracy and loss. The results on the test set, on which we did not perform any parameter or model selection, confirm these findings.

1.2 Related Work

There are several closely related works concurrent with this submission [20, 25, 2, 15]. The work [20] argues that BN improves generalization because it leads to a smoother objective function; the authors of [15] study why BN is often found to be incompatible with dropout; and the works [25, 2] observe that the randomness in batch normalization can be linked to optimizing a lower bound on the expected data likelihood [2] and to variational Bayesian learning [25]. However, these works focus on estimating the uncertainty of outputs in models that have already been trained using BN; they make no proposals concerning the learning method. The derivation and approximations made in [25] to establish a link to Bayesian learning are different from ours and, as we argue below, in fact give poor recommendations for such learning. Overall, we remark that a better understanding of the success of BN is a topic of high interest.

The improved methods that we propose are also closely related to variational dropout [12] as discussed below. We give a new interpretation to variational dropout and apply it in combination with normalization techniques.

1.3 Background


Let $y = w^\top x + b$ be the output of a single neuron in a linear layer. Batch normalization, introduced by Ioffe and Szegedy [11], is applied after a linear layer, before the non-linearity, and has different training and test-time forms:

$$\mathrm{BN}_{\text{train}}(y) = \frac{y - \mu_B}{\sigma_B}, \qquad \mathrm{BN}_{\text{test}}(y) = \frac{y - \mu}{\sigma}, \qquad (1)$$

where $\mu_B, \sigma_B^2$ are the mean and variance statistics of a batch and $\mu, \sigma^2$ are such statistics over the whole training distribution (in practice estimated with running averages during training). The normalized output is invariant to the bias $b$ and to the scaling of the weight vector $w$, i.e., it projects out two degrees of freedom. These are then reintroduced after the normalization by an additional affine transformation $\gamma\,\mathrm{BN}(y) + \beta$ with free parameters $\gamma$ and $\beta$, so that the final class of modeled functions stays unchanged. BN has the following useful properties.

  • Initialization. When a BN layer is introduced in a network, $\gamma$ and $\beta$ are initialized as $\gamma = 1$, $\beta = 0$. This resets the initial scale and bias degrees of freedom and provides a new initialization point such that the output of the BN layer initially has zero mean and unit variance on the first training batch. The non-linearity that follows BN will then not be saturated for a significant portion of the batch data, so training can start efficiently.

  • Reparametrization. BN combined with a subsequent affine layer can be viewed as a non-linear reparametrization of the network. It has been noted that such reparametrizations change the relative scales of the coordinates, which is equivalent to a preconditioning of the gradient (applying an adaptive linear transform to the gradient before each step) [16, §8.7].

Let us also note that a common explanation of BN as reducing the internal covariate shift [11] was recently studied and found not to be supported by experiments [20].
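As a concrete illustration, the two forms in (1) together with the affine transform can be sketched as follows (a minimal numpy sketch; the function names and the small `eps` stabilizer are our own, not from the paper):

```python
import numpy as np

def bn_train(y, gamma, beta, eps=1e-5):
    """Train-time BN: normalize each unit by the statistics of the current batch."""
    mu_b = y.mean(axis=0)
    var_b = y.var(axis=0)
    return gamma * (y - mu_b) / np.sqrt(var_b + eps) + beta

def bn_eval(y, mu, var, gamma, beta, eps=1e-5):
    """Test-time BN: normalize by (running) estimates of the dataset statistics."""
    return gamma * (y - mu) / np.sqrt(var + eps) + beta
```

With $\gamma = 1$ and $\beta = 0$, the train-time output of every unit has exactly zero mean and (up to `eps`) unit variance on the batch, which is the initialization property discussed above.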

We will also consider deterministic normalization techniques that do not depend on the selection of random batches: weight normalization (WN) [19] and analytic normalization [22]. We write all the discussed normalizations in the form

$$\hat y = \frac{w^\top x - m}{s}, \qquad (2)$$

where $m$ and $s$ differ per method. For BN, they are the batch mean and standard deviation and depend on the batch as well as on the parameters of all preceding layers. For WN, $m = 0$ and $s = \|w\|$. WN does the minimum necessary to normalize the distribution: if the input $x$ has independent components with zero mean and unit variance, then $\hat y$ has zero mean and unit variance as well. However, if this assumption does not hold (due to the preceding non-linearities and scale-bias transforms), weight normalization cannot account for it.

For analytic normalization, $m$ and $s$ are the approximate statistics of $w^\top x$, computed by propagating means and variances of the training set through all the network layers. Thus they depend on the network parameters and on the statistics of the whole dataset. All three methods satisfy the 1-homogeneity properties $m(\lambda w) = \lambda\, m(w)$ and $s(\lambda w) = \lambda\, s(w)$ for $\lambda > 0$, which imply that (2) is invariant to the scale of $w$ in all three methods.
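The scale invariance implied by 1-homogeneity is easy to check numerically. A sketch for the WN case, where $m = 0$ and $s = \|w\|$ (function name is ours):

```python
import numpy as np

def weight_norm_output(w, x):
    """Normalized response (2) for weight normalization: (w.x - m)/s with m = 0, s = ||w||."""
    return x @ w / np.linalg.norm(w)
```

Scaling $w$ by any positive factor leaves the output unchanged, and for inputs with independent zero-mean unit-variance components the output is again (approximately) zero-mean and unit-variance.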

2 Importance of Reparametrization

Figure 3: Validation accuracies obtained by the normalization techniques (left to right: BN, Weight Norm, Analytic Norm) with and without projection onto the constraint for all scaling-invariant weights. The validation losses at the final point improve as well: Batch Norm: 0.31 → 0.22, Weight Norm: 0.9 → 0.75, Analytic Norm: 0.7 → 0.45.

The original work [11] recommended using additional regularization with weight decay. However, when used together with normalization, this leads to a learning problem of the form $\min_w L(w/\|w\|) + \lambda \|w\|^2$ (for clarity, we restrict the optimized parameters to the weight vector $w$ of a single neuron). This problem is ill-posed: it has no minimizer, because decreasing the norm of $w$ is always a descent direction, and at $w = 0$ the function is undefined.

Many subsequent works nevertheless follow this recommendation, e.g., [19, 8]. We instead propose to keep the constraint $\|w\| = 1$ by projecting onto it after each gradient descent step. This avoids the possible instability of the optimization. Moreover, we found it to improve the learning results with all normalizations, as shown by the following experiment. We compare learning with and without projection onto the constraint (no weight decay in both cases). The objective is invariant to $\|w\|$; the optimization, however, is not. Fig. 3 shows the experimental comparison for the three normalization methods with and without projection onto the constraint $\|w\| = 1$. Projecting onto the constraint has a significant impact on the validation performance. Notice that [19] propose quite the opposite with weight normalization: to let $\|w\|$ vary freely. Their explanation is as follows. The gradient of the reparametrized objective with respect to $w$ is given by

$$\nabla_w L = \frac{1}{\|w\|}\big(\nabla_{\hat w} L\big)_{\perp w}, \qquad \hat w = \frac{w}{\|w\|},$$
where $\nabla_{\hat w} L$ is the gradient w.r.t. the normalized weight and $(\cdot)_{\perp w}$ denotes the components orthogonal to $w$. Thus, the gradient steps are always orthogonal to the weight vector and progressively increase its norm. In turn, the magnitude of the gradient in $w$ decreases as $\|w\|$ grows, and therefore smaller steps are made for larger $\|w\|$. [19] argue (theoretically and experimentally) that this is useful for optimization, automatically tuning the learning rate for $\hat w$. We observed that in small problems the norm does not grow significantly, in which case there is no difference between projecting and not projecting. However, the experiments in Fig. 3 show that in larger problems optimized with SGD there is a difference: allowing $\|w\|$ to grow freely leads to smaller steps in $\hat w$ and to a worse accuracy in the longer run.
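The projected update we advocate is a one-line change to SGD; a minimal numpy sketch (our own illustration, not the authors' code):

```python
import numpy as np

def project(w):
    """Project the weight vector back onto the constraint ||w|| = 1."""
    return w / np.linalg.norm(w)

def sgd_step_projected(w, grad, lr=0.1):
    """Plain SGD step followed by projection onto the unit sphere."""
    return project(w - lr * grad)
```

With projection, $\|w\|$ stays fixed at 1, so the effective step size in $\hat w$ does not shrink as the norm would otherwise grow during training.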

3 Importance of Stochasticity

It has been noted in [11] that BN provides similar regularization benefits as dropout, since the activations observed for a particular training example are affected by the random selection of examples in the same mini-batch. In CNNs, the statistics $\mu_B$ and $\sigma_B$ in (1) are the sample mean and sample variance over the batch and spatial dimensions:

$$\mu_B = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} y_{ij}, \qquad \sigma_B^2 = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \big(y_{ij} - \mu_B\big)^2, \qquad (3)$$

where $M$ is the batch size, $N$ is the spatial size (we represent the spatial dimensions by a 1D index), $y_{ij}$ is the response for sample $i$ at spatial location $j$, and $n = MN$ is the total sample size. Because $\mu_B$ and $\sigma_B$ depend on a random sample, for a given input the training-time BN output can be considered a random estimator of the test-time BN output.
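In code, the per-channel statistics over batch and spatial dimensions amount to a reduction over both axes. A numpy sketch with a hypothetical `(M, N, C)` layout:

```python
import numpy as np

def bn_statistics(y):
    """Per-channel sample mean and variance of responses y with shape
    (M, N, C): batch size M, flattened spatial size N, channels C."""
    mu_b = y.mean(axis=(0, 1))   # average over all M*N responses per channel
    var_b = y.var(axis=(0, 1))
    return mu_b, var_b
```

This is why spatial averaging reduces the variance of the statistics: each channel's estimate is based on $MN$ responses rather than $M$.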

3.1 Model of BN Stochasticity

In this section we propose a simplified model of BN stochasticity, replacing the randomness of batches by independent noises with known distributions. Despite the simplifying assumptions, this model allows us to predict BN statistics and general dependencies, such as the dependence on the batch size, which we then check experimentally.

For this theoretical derivation we assume that the distribution of network activations over the full dataset is approximately normal with statistics $\mu, \sigma^2$. This assumption seems appropriate because the sample is taken over the whole dataset and also over multiple spatial locations in a CNN. We will also assume that the activations for different training inputs and different spatial coordinates are independent. This assumption is the weaker one, as we will see below. We can write the train-time BN as

$$\frac{y - \mu_B}{\sigma_B} = \frac{\hat y - u}{v}, \qquad (5)$$

expressing the output of batch normalization through the exact normalization $\hat y = (y - \mu)/\sigma$ (cf. test-time BN) and some corrections on top of it. Using the above independence assumptions, $u = (\mu_B - \mu)/\sigma$ is a random variable distributed as $\mathcal N(0, 1/n)$, where $n = MN$ is the sample size of the batch statistics. It follows that $n v^2 = n\sigma_B^2/\sigma^2$ has a chi-squared distribution with $n-1$ degrees of freedom, and $1/v$ follows a (scaled) inverse chi distribution (using well-known results for the distribution of the sample mean and variance of normally distributed variables; the inverse chi distribution is the distribution of $1/\sqrt{X}$ when $X$ has a chi-squared distribution [13]). The expression in (5) therefore has the same distribution as

$$\alpha\,(\hat y - u), \qquad (7)$$

if $u$ and $\alpha = 1/v$ are independent with known distributions (i.e., not depending on the network parameters).

Figure 4: Verification of BN statistics in a learned network. Left: observed distribution of $u$ in a unit of layer 6 versus the model $\mathcal N(0, 1/n)$. Middle: measured variance of the BN output versus the batch size in different layers; the slope in this log-log scale confirms the inverse dependence on the batch size. Right: measured variance of the BN output multiplied by the batch size versus the spatial size of a layer; here the slope only approximately matches the prediction, due to spatial correlations, especially in the input layer (largest size).

We verify this model experimentally as follows. In a given (trained) network we draw random batches as during learning, propagate them through the network with BN layers, and collect samples of $u$ and $v$ in each layer. We repeat this for several batch sizes. In Fig. 4 (left) we see that the model of $u$ holds rather well: the real statistics are close to the theoretical prediction. Furthermore, the model predicts that the variance of BN (namely, of the expression (7)) decreases as $1/(MN)$ with the batch size $M$ and the spatial size $N$ of the current layer. Fig. 4 (middle) clearly confirms the dependence on the batch size in all layers. The dependence on the spatial size (Fig. 4, right) is not as precise, as the inputs are de facto spatially correlated, which the model ignores.
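The predicted distributions of $u$ and $v$ can also be checked by direct simulation outside of any network. A sketch under the model's i.i.d. normal assumption (the constants $\mu = 2$, $\sigma = 3$ are arbitrary):

```python
import numpy as np

def simulate_noises(n, trials, mu=2.0, sigma=3.0, seed=0):
    """Sample u = (mu_B - mu)/sigma and v^2 = sigma_B^2/sigma^2 for batches
    of n i.i.d. N(mu, sigma^2) responses."""
    rng = np.random.default_rng(seed)
    y = rng.normal(mu, sigma, size=(trials, n))
    u = (y.mean(axis=1) - mu) / sigma    # model: u ~ N(0, 1/n)
    v2 = y.var(axis=1) / sigma**2        # model: n * v^2 ~ chi^2 with n-1 dof
    return u, v2
```

Multiplying the empirical variance of $u$ by $n$ should give 1, and the mean of $v^2$ (with the biased variance estimator) should be $(n-1)/n$.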

Concurrently to this work, the authors of [2] proposed a similar model of BN stochasticity and demonstrated that the noise distributions can be used at test time to improve test-data likelihoods and out-of-domain uncertainties. However, they did not explore using this model during learning.

3.2 Regularizing Like BN

We now perform the following experiment. We measure the variances of the multiplicative and additive noise variables in a network trained with BN. For example, the average standard deviation of the multiplicative noise in the consecutive layers was (0.05, 0.03, 0.026, 0.023, 0.02, 0.026, 0.041, 0.045, 0.071). We then retrain the network with the stochastic normalization (7), in which the exact normalization is replaced with a deterministic method (either weight or analytic normalization) and the noises in (7) are sampled from the measured distributions. It is important to note that the noises are spatially correlated, as are the original quantities they approximate. This may seem unnecessary, but we will see in the next section that these correlated activation noises can be reinterpreted as parameter noises and are closely related to Bayesian learning. In Fig. 5 we compare training using noise-free normalizations and noisy ones. The results indicate that injecting noises into deterministic normalization techniques does regularize the training, but the amount of noise can still be increased to make it more efficient. In the next section we consider learning the noise values instead of picking them by hand.

We argue that the combination of noise following the normalization is particularly meaningful. The base network without normalizations is equivariant to the global scale of the weights: linear layers and ReLU functions are 1-homogeneous, so scaling the input by $\lambda > 0$ scales the output by $\lambda$. Consider instead an additive noise injected in front of the non-linearities, as was proposed in [7]:

$$\tilde y = y + \varepsilon,$$

where $\varepsilon$ has a fixed distribution [7]. Then, scaling the weights and biases up by $\lambda$ and scaling the subsequent layer down by $1/\lambda$ allows the model to increase the signal-to-noise ratio and to suppress the noise completely. In contrast, when noises are injected after a normalization layer, as in (5), the average signal-to-noise ratio is kept fixed.

Figure 5: Deterministic normalization techniques with noises as in BN (left: weight normalization, right: analytic normalization). The noises improve the validation accuracy (top) and validation loss (bottom) noticeably for weight normalization and insignificantly for analytic normalization.

3.3 BN as Bayesian Learning

Let $\mathcal D$ be our training data. In the Bayesian framework, given a prior distribution of parameters $p(\theta)$, first the posterior parameter distribution $p(\theta \mid \mathcal D)$ given the data is found; then, for prediction, we marginalize over the parameters:

$$p(y \mid x, \mathcal D) = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal D)\, d\theta. \qquad (9)$$

A practical Bayesian approach is possible by using a variational approximation to the parameter posterior, as was proposed for neural networks by [9]. It consists in approximating the posterior by a simple distribution $q(\theta \mid \phi)$ from a parametric family with parameters $\phi$, for example a diagonal multivariate normal distribution with $\phi = (\mu, \sigma)$. The approximate Bayesian posterior (9) becomes

$$p(y \mid x, \mathcal D) \approx \int p(y \mid x, \theta)\, q(\theta \mid \phi)\, d\theta, \qquad (10)$$

which at least allows for a Monte Carlo (MC) approximation. The distribution $q$ is found by minimizing the KL divergence between $q(\theta \mid \phi)$ and $p(\theta \mid \mathcal D)$,

$$\min_{\phi}\; -\mathbb E_{q(\theta \mid \phi)}\big[\log p(\mathcal D \mid \theta)\big] + \mathrm{KL}\big(q(\theta \mid \phi)\,\|\,p(\theta)\big), \qquad (11)$$

in the parameters $\phi$. When $q$ is chosen to be the delta function at $\theta^*$ (a point estimate of $\theta$) and the prior as $\mathcal N(0, \sigma_0^2 I)$, formulation (11) recovers the conventional maximum likelihood [9] regularized with $\|\theta\|^2$. The first term in (11), called the data evidence, can be written as the joint expectation over parameters and data:

$$\mathbb E_{q(\theta \mid \phi)}\, \mathbb E_{(x, y) \sim \mathcal D}\big[\log p(y \mid x, \theta)\big], \qquad (12)$$

where $(x, y) \sim \mathcal D$ denotes drawing from the training dataset uniformly. The gradient of the expectation (12) for a normal distribution $q$ (assuming $\log p$ is differentiable a.e.) can be expressed as

$$\nabla_{\phi}\, \mathbb E_{\theta \sim \mathcal N(\mu, \sigma^2)}\, \mathbb E_{(x, y)}\big[\log p(y \mid x, \theta)\big]. \qquad (13)$$

Using the parametrization $\theta = \mu + \sigma \varepsilon$, $\varepsilon \sim \mathcal N(0, I)$, gradient (13) simplifies to

$$\mathbb E_{\varepsilon}\, \mathbb E_{(x, y)}\big[\nabla_{\phi} \log p(y \mid x, \mu + \sigma \varepsilon)\big]. \qquad (14)$$

For a more general treatment of differentiating expectations see [21]. A stochastic gradient optimization method may use an unbiased estimate with mini-batches of size $m$:

$$\frac{1}{m} \sum_{i=1}^{m} \nabla_{\phi} \log p(y_i \mid x_i,\, \mu + \sigma \varepsilon_i), \qquad \varepsilon_i \sim \mathcal N(0, I). \qquad (15)$$
This means that during learning we randomly perturb parameters for every input sample. It becomes apparent that any noises in the parameters during the standard maximum likelihood learning are closely related to the variational Bayesian learning.
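The parametrization $\theta = \mu + \sigma\varepsilon$ is the well-known reparametrization trick. A sketch checking the Monte Carlo gradient against the closed form for the toy objective $f(\theta) = \theta^2$, where $\nabla_\mu \mathbb E[f] = 2\mu$ and $\nabla_\sigma \mathbb E[f] = 2\sigma$ (the example and names are ours):

```python
import numpy as np

def reparam_grads(mu, sigma, n_samples=400_000, seed=0):
    """MC gradients of E_{theta ~ N(mu, sigma^2)}[theta^2] w.r.t. mu and sigma,
    using theta = mu + sigma * eps with eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n_samples)
    theta = mu + sigma * eps
    df = 2.0 * theta                       # f'(theta) for f(theta) = theta^2
    return df.mean(), (df * eps).mean()    # chain rule: dtheta/dmu = 1, dtheta/dsigma = eps
```

Averaging per-sample gradients of the perturbed objective yields unbiased estimates of both gradients, which is exactly what the mini-batch estimate (15) exploits.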

In order to connect BN to Bayesian learning, it remains to define the form of the approximating distribution that corresponds to the noisy model of BN with an affine transform, as given by

$$\gamma\,\alpha(\hat y - u) + \beta. \qquad (16)$$

We reinterpret (16) as a model with a stochastic affine transform defined by a stochastic scale $\tilde\gamma$ and a stochastic bias $\tilde\beta$:

$$\tilde\gamma\, \hat y + \tilde\beta. \qquad (17)$$

We define the approximate posterior over the parameters to be factorized as $q(w)\, q(\tilde\gamma, \tilde\beta)$, where $q(w)$ is a delta distribution, i.e., a point estimate, and $q(\tilde\gamma, \tilde\beta)$ is a coupled distribution over scale and bias, defined as the distribution of a parametric mapping of the independent random variables $\alpha$ and $u$:

$$(\tilde\gamma, \tilde\beta) = \big(\gamma\alpha,\; \beta - \gamma\alpha u\big). \qquad (18)$$

We let the prior be defined through a prior on $(\alpha, u)$ and the same mapping (18). The invariance of the KL divergence under parameter transformations allows us to write

$$\mathrm{KL}\big(q(\tilde\gamma, \tilde\beta)\,\|\,p(\tilde\gamma, \tilde\beta)\big) = \mathrm{KL}\big(q(\alpha, u)\,\|\,p(\alpha, u)\big). \qquad (19)$$

This completes the construction, which can now be summarized as the following proposition.

Proposition 1.

Assuming that BN can be well approximated by the noisy normalization model (7), BN learning is equivalent to variational Bayesian learning [9] with the stochastic scale-bias parameter distribution (18) with fixed noise distributions, and a prior on the weights which is uniform on the sphere (for all normalized units).

Note that the distribution of the scale and bias still depends on the two parameters $\gamma$ and $\beta$ and is optimized, but the KL divergence term vanishes. In other words, BN optimizes only the data evidence term and does not have the KL prior term (it is constant). Note also that by choosing a suitable prior, a regularization such as weight decay can be derived as well.

Concurrently to this work, [25] proposed an explanation of BN as Bayesian learning with a different interpretation. They associate the stochasticity with noises in the linear transform parameters and build a sequence of approximations to justify weight-norm regularization as the KL prior. While this matches the current practice of applying weight decay, it leads to the problem of regularizing degrees of freedom to which the network is invariant, as we discussed in § 2. It is therefore likely that some of the approximations made in [25] are too weak. Furthermore, the authors do not apply their model to experiments or learning methods.

Our interpretation is also only an approximation, based on the simplified stochastic model (7) of BN. However, one more argument in favor of the Bayesian learning view of BN is the following. From initialization to convergence of BN learning, the standard deviations of the noises are in fact changing, especially in the final layer, growing by a factor of up to 5. So BN appears to increase the regularization towards convergence, which is also the case when we optimize these distributions with Bayesian learning.

3.4 Connection to Variational Dropout

Kingma et al. [12] proposed a related regularization method called variational dropout. More specifically, in the case of ReLU non-linearities and fully connected linear layers, the non-negative stochastic scaling applied to the output of a linear layer in (17) can be equivalently applied to the input of the subsequent linear layer, which then computes

$$w^\top(\xi \odot x) + b,$$

matching variational dropout with correlated white noise [12, sec. 3.2]. They consider an approximate Gaussian posterior on the noise $\xi$, a log-uniform prior on it, and no priors on $w$ and $b$, and apply variational Bayesian learning to this model. The model is rather economical, in that an extra variance variable is introduced per input channel rather than per weight coefficient, and it performed better than the other variants of variational dropout studied in [12]. Extending this model to convolutional networks retains little similarity with the original dropout [24], the main difference being that the noise is applied to parameters rather than activations. See also [6], which discusses variational dropout and such correlations in the context of RNNs.
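The "correlated white noise" variant amounts to drawing a single scale per input channel and sharing it across spatial positions, so the noise effectively acts on the next layer's weights. A numpy sketch with a hypothetical `(M, C, N)` layout (function name and `alpha` parameter are ours):

```python
import numpy as np

def correlated_channel_noise(x, alpha, seed=0):
    """Scale each channel of x (shape (M, C, N)) by one noise sample
    xi ~ N(1, alpha) per (sample, channel), shared over spatial positions."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(1.0, np.sqrt(alpha), size=(x.shape[0], x.shape[1], 1))
    return x * xi
```

Because the same `xi` multiplies an entire channel, the scaling commutes with the subsequent convolution over that channel, which is what makes it equivalent to a parameter noise.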

3.5 Normalization with Bayesian Learning

We now propose how the model (17) can be applied with other normalization techniques. For simplicity, we report results with the bias being deterministic, i.e., we consider the model

$$\tilde\gamma\, \hat y + \beta. \qquad (21)$$

We expect the normalized output $\hat y$ to have approximately zero mean and unit variance over the dataset. This allows us to set a reasonable prior for the scale (we do not expect scaling by a factor of more than 10 in a layer, a rather permissive assumption) and no prior on $\beta$. We then seek point estimates of $\gamma$ and $\beta$ and a normal estimate of the stochastic scale with a learned variance. A separate variance may be learned per channel, or just one value per layer. In the former case, the learning has the freedom to choose high variances for some channels and in this way to perform an efficient selection of the model complexity.

With these choices, the KL divergence for the scale parameter is, up to constants, a sum of a few terms. We observe that the most important term, the only one that pushes the variance up and prevents overfitting, is $-\log \sigma$. The very same term occurs in the KL divergence to the scale-uniform prior [12]. The remaining terms balance how much variance is large enough; they may only decrease the regularization strength and penalize large values of the mean scale.

We identified one technical problem with such KL divergences when used in stochastic gradient optimization: when $\sigma$ approaches zero, the derivative of the KL divergence is unbounded. This may cause instability of SGD optimization with momentum. To address this issue we reparametrize $\sigma$ as a piece-wise function of a new variable, ensuring that the derivatives of both $\sigma$ and $\log \sigma$ are bounded. Note that a simple exponential parametrization has quickly growing derivatives of the linear terms in $\sigma$, and that the data evidence, as a composition of log-softmax and piecewise-linear layers, is approximately linear in each variance, as seen from the parametrization (15). Note also that using a sampling-based estimate of the KL divergence, as in [3], does not circumvent the problem, because it contains exactly the same problematic term in every sample.
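The exact piece-wise function is not spelled out above, but any parametrization that is exponential on one side and linear on the other has the required property. One possible (assumed, not the paper's) choice:

```python
import numpy as np

def sigma_param(s):
    """A possible piece-wise parametrization of the standard deviation:
    sigma = exp(s) for s < 0, and 1 + s for s >= 0. Both d(sigma)/ds and
    d(log sigma)/ds are bounded by 1, so SGD steps on s stay well behaved
    even as sigma -> 0 (s -> -inf)."""
    s = np.asarray(s, dtype=float)
    return np.where(s < 0, np.exp(np.minimum(s, 0.0)), 1.0 + s)
```

The two pieces join continuously (both equal 1 at $s = 0$) with matching slope 1, and $\log\sigma = s$ for negative $s$, so the problematic $-\log\sigma$ term becomes linear in the optimized variable.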

Figure 6: All improvements to weight normalization. Bayesian learning leads to a significantly reduced validation loss and an improved accuracy. For Bayesian learning, the training-loss plot shows as a solid line the expected data evidence estimated as a running mean during training, while triangles show the training loss with the mean parameters. The validation loss plot uses the mean parameters. The MC estimate of the validation loss of the final model with 10 samples is significantly lower than the value in the plot obtained with the mean parameters.
Figure 7: Analytic Normalization with Bayesian learning in comparison with projection only and with the baseline version [22].

Figs. 6 and 7 show the results of this Bayesian learning model with weight and analytic normalization. In the evaluation mode we substitute the mean values of the scales. In Table 1 we also show results obtained by MC estimation of the posterior (9). The validation accuracy of both normalizations is improved by Bayesian learning. Its most significant impact is on the validation loss, which is particularly important for an accurate estimation of the uncertainty of predictions made with the model. The MC estimates of the validation losses, which better approximate the Bayesian posterior (10), are yet significantly lower, as seen in Table 1. It is interesting to inspect the learned noise values. For analytic normalization we obtained, in the consecutive layers, average standard deviations of the stochastic scale of (0.39, 0.34, 0.18, 0.29, 0.27, 0.29, 0.31, 0.61, 0.024). Compared to the respective noises in BN (§ 3.2), these values are up to an order of magnitude larger, except in the last layer. The learned noises put more randomness after the input and in the penultimate layer, which has 192 channels. The final linear transform to 10 channels followed by spatial pooling can indeed be expected to tolerate more noise.

Method | Test accuracy, % | Test negative log likelihood (single-pass / MC-10 / MC-30)
No normalization | 90.7 | 1.45 / - / -
Baseline BN | 92.7 | 0.34 / - / -
BN with Projection | 94.1 | 0.29 / - / -
Weight Normalization | 93.5 | 0.48 / 0.27 / 0.24
Analytic Normalization | 94.4 | 0.38 / 0.22 / 0.20
Best previously published results with the same network:
Dropout [5] as reported in [7] | 90.88 | - / - / 0.327
ProbOut [7] | 91.9 | 0.37 / - / -
Published results with other networks:
ELU [4] | 93.5 |
ResNet-110 [10] | 93.6 |
Wide ResNet [26] (includes BN) | 96.0 |
Table 1: Summary of results on the CIFAR-10 test set. The test set does not contain augmentations and was not used in any way during training or parameter selection. Weight normalization and analytic normalization use projection and Bayesian learning. For comparison we also quote recently published results from [7] for the very same network and state-of-the-art results with more advanced networks. We did not run our method with these larger networks.
Figure 8: Left: exemplar image from the dataset (512x512). Middle: exemplar training segmentation. Right: exemplar deformation applied to a grid image.
Figure 9: Comparison of methods (validation loss and validation accuracy) on the cell segmentation dataset DIC-HeLa [17] with a deep fully convolutional network (11 conv layers).

4 Other Datasets

To further verify the applicability of our proposed improvements, we ran an experiment on a quite different problem: segmentation of the dataset "DIC-HeLa" from the ISBI cell tracking challenge [17], illustrated in Fig. 8. The difficulty of this dataset is that it contains only 20 fully annotated training images, and the cells are hard to segment. Even with significant data augmentation by non-rigid transforms, as illustrated in Fig. 8 (right), there is still a gap between the training and validation accuracy. The Bayesian learning approach in combination with analytic normalization gives a noticeable improvement, as shown in Fig. 9. Please refer to § 0.A.2 for details of this experiment. One open problem is how to balance the prior KL divergence term: in the case of fully convolutional networks with data augmentation, we do not have a clear notion of the number of training examples. The results in Fig. 9 are obtained with a KL weighting factor per classified pixel corresponding to the augmentation being worth 10 examples.

5 Conclusion

We have studied two possible causes of the good regularization properties of BN. The effect arising from the interplay of the introduced reparametrization and SGD appears to play the major role. We improved on this effect empirically by showing that performing the optimization in the normalized space improves generalization for all three investigated normalization methods. We also analyzed how the randomness of batches helps BN: the effect was quantified, and the randomness was measured and modeled as injected noise. The interpretation as Bayesian learning is a plausible explanation for why it improves generalization. We further showed that such regularization helps other normalization techniques achieve similar performance. This makes it possible to improve performance in scenarios where BN is not suitable, and to learn the model randomness instead of fixing it via the batch size and the network architecture. We found that variational Bayesian learning may occasionally diverge due to the delicate balance of the KL divergence prior term. This question and the utility of the learned uncertainty are left for future work.


Acknowledgments. A.S. has been supported by Czech Science Foundation grant 18-25383S and Toyota Motor Europe. B.F. gratefully acknowledges support by the Czech OP VVV project "Research Center for Informatics" (CZ.02.1.01/0.0/0.0/16_019/0000765).


  • Arpit et al. [2016] Arpit, D., Zhou, Y., Kota, B.U., Govindaraju, V.: Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In: ICML. pp. 1168–1176 (2016)
  • Atanov et al. [2018] Atanov, A., Ashukha, A., Molchanov, D., Neklyudov, K., Vetrov, D.: Uncertainty estimation via stochastic batch normalization. In: ICLR Workshop track (2018)
  • Blundell et al. [2015] Blundell, C., Cornebise, J., Kavukcuoglu, K., Wierstra, D.: Weight uncertainty in neural networks. In: ICML. pp. 1613–1622 (2015)
  • Clevert et al. [2016] Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (ELUs). In: ICLR (2016)
  • Gal and Ghahramani [2016a] Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In: ICML. pp. 1050–1059 (2016a)
  • Gal and Ghahramani [2016b] Gal, Y., Ghahramani, Z.: A theoretically grounded application of dropout in recurrent neural networks. In: NIPS. pp. 1027–1035 (2016b)
  • Gast and Roth [2018] Gast, J., Roth, S.: Lightweight probabilistic deep networks. In: CVPR (June 2018)
  • Gitman and Ginsburg [2017] Gitman, I., Ginsburg, B.: Comparison of batch normalization and weight normalization algorithms for the large-scale image classification. CoRR abs/1709.08145 (2017)
  • Graves [2011] Graves, A.: Practical variational inference for neural networks. In: NIPS, pp. 2348–2356 (2011)
  • He et al. [2016] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770–778 (2016)
  • Ioffe and Szegedy [2015] Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML. vol. 37, pp. 448–456 (2015)
  • Kingma et al. [2015] Kingma, D.P., Salimans, T., Welling, M.: Variational dropout and the local reparameterization trick. In: NIPS, pp. 2575–2583 (2015)
  • Lee [2012] Lee, P.: Bayesian Statistics: An Introduction (2012)
  • Lei Ba et al. [2016] Lei Ba, J., Kiros, J.R., Hinton, G.E.: Layer Normalization. ArXiv e-prints (Jul 2016)
  • Li et al. [2018] Li, X., Chen, S., Hu, X., Yang, J.: Understanding the disharmony between dropout and batch normalization by variance shift. CoRR abs/1801.05134 (2018)
  • Luenberger and Ye [2015] Luenberger, D.G., Ye, Y.: Linear and Nonlinear Programming (2015)
  • Maška et al. [2014] Maška, M., et al.: A benchmark for comparison of cell tracking algorithms. Bioinformatics 30(11), 1609–1617 (2014)
  • Ronneberger et al. [2015] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI. pp. 234–241 (2015)
  • Salimans and Kingma [2016] Salimans, T., Kingma, D.P.: Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In: NIPS (2016)
  • Santurkar et al. [2018] Santurkar, S., Tsipras, D., Ilyas, A., Madry, A.: How does batch normalization help optimization? (no, it is not about internal covariate shift). CoRR 1805.11604 (2018)
  • Schulman et al. [2015] Schulman, J., Heess, N., Weber, T., Abbeel, P.: Gradient estimation using stochastic computation graphs. In: NIPS, pp. 3528–3536 (2015)
  • Shekhovtsov and Flach [2018] Shekhovtsov, A., Flach, B.: Normalization of neural networks using analytic variance propagation. In: Computer Vision Winter Workshop. pp. 45–53 (2018)
  • Springenberg et al. [2015] Springenberg, J., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. In: ICLR (workshop track) (2015)
  • Srivastava et al. [2014] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. JMLR 15, 1929–1958 (2014)
  • Teye et al. [2018] Teye, M., Azizpour, H., Smith, K.: Bayesian uncertainty estimation for batch normalized deep networks. In: ICML (2018)
  • Zagoruyko and Komodakis [2016] Zagoruyko, S., Komodakis, N.: Wide residual networks. In: BMVC. pp. 87.1–87.12 (September 2016)

Appendix 0.A Additional Experiments

0.A.1 CIFAR-10

CIFAR-10 is a common vision benchmark for new methods. Unlike the very popular MNIST dataset, its classification performance is not yet saturated. From the training set we split off 10 percent (at random) to create a validation set, which is meant for model selection and for monitoring the validation loss and accuracy during learning. The test set was kept for the final evaluation only.

The all CNN network we test [23] has the following structure of convolutional layers:

ksize = [3,  3,  3,  3,   3,   3,   3,   1,   1 ]
stride= [1,  1,  2,  1,   1,   2,   1,   1,   1 ]
depth = [96, 96, 96, 192, 192, 192, 192, 192, 10]

Each convolutional layer except the last is followed by a leaky ReLU activation with negative slope 0.01. The final layers of the network are

Norm, AdaptiveAvgPool2d, LogSoftmax

When we train with either of the normalization variants, it is introduced after each convolutional layer.
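For concreteness, this architecture can be assembled in pytorch roughly as follows. This is a sketch only: the "same" padding (padding=k//2) and the placement of the final pooling are our assumptions, and the normalization layers are omitted.

```python
import torch
import torch.nn as nn

def make_all_cnn(in_channels=3):
    # layer specification from the text above
    ksize  = [3, 3, 3, 3, 3, 3, 3, 1, 1]
    stride = [1, 1, 2, 1, 1, 2, 1, 1, 1]
    depth  = [96, 96, 96, 192, 192, 192, 192, 192, 10]
    layers, c_in = [], in_channels
    for i, (k, s, c_out) in enumerate(zip(ksize, stride, depth)):
        layers.append(nn.Conv2d(c_in, c_out, kernel_size=k, stride=s, padding=k // 2))
        if i < len(ksize) - 1:          # all but the last conv get an activation
            layers.append(nn.LeakyReLU(0.01))
        c_in = c_out
    # final layers: normalization (omitted here), global pooling, log-softmax
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.LogSoftmax(dim=1)]
    return nn.Sequential(*layers)
```

A 32×32 CIFAR input then yields a length-10 vector of log-probabilities per image.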

For the optimization in all experiments we used batch size 32 (determined as optimal with a manual grid search), the SGD optimizer with Nesterov momentum 0.9 (the pytorch default) and the learning rate schedule λ_t = λ_0 γ^t, where t is the epoch number, λ_0 is the initial learning rate, and γ is the per-epoch decrease factor. In all reported results for CIFAR we chose γ such that γ^T = 0.1 with T = 1200 epochs. This relatively long training schedule is used in order to make the comparison in terms of accuracy fairer across somewhat faster and somewhat slower methods. The initial learning rate was selected by an automatic numerical search optimizing the training loss achieved in 5 epochs. This is performed individually per training case to account for the differences introduced by the different reparametrizations.
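A schedule of this form (per-epoch exponential decay with a fixed total decrease over training) can be computed as in the following sketch; the function and symbol names are illustrative.

```python
def lr_at_epoch(t, lr0, total_epochs, total_decay=0.1):
    """Exponential per-epoch decay: lr_t = lr0 * gamma**t, with gamma chosen
    so that the learning rate falls by `total_decay` over the whole training."""
    gamma = total_decay ** (1.0 / total_epochs)
    return lr0 * gamma ** t
```

For example, with `lr0=0.1` and `total_epochs=1200`, the rate at epoch 1200 is one tenth of the initial one.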

Parameters of linear and convolutional layers were initialized using the pytorch defaults, i.e., uniformly distributed in [−1/√c, 1/√c], where c is the number of inputs per one output. Standard training and weight normalization were additionally initialized with a data-dependent normalization, equivalent to making one pass with batch normalization with zero learning rate and batch size 128 and applying the scaling and bias parameters computed by this pass. This is similar to [8, 22] and ensures that these methods start from the same initialization in comparison plots with BN. Analytic normalization [22] uses analytic approximate statistics for initialization.
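The data-dependent normalization pass amounts to computing per-channel statistics of the pre-activations on one batch and folding them into the scale and bias parameters. A simplified numpy sketch (the actual implementation operates layer by layer on real pre-activations):

```python
import numpy as np

def data_dependent_init(pre_acts):
    """Per-channel scale/bias from one batch of pre-activations (N, C, H, W),
    equivalent to a single batch-norm pass with zero learning rate."""
    mean = pre_acts.mean(axis=(0, 2, 3))
    std = pre_acts.std(axis=(0, 2, 3)) + 1e-5   # small epsilon for stability
    # applying x * scale + bias then normalizes each channel
    return 1.0 / std, -mean / std
```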

Standard minor data augmentation was applied to the training and validation sets, consisting of small random translations (with zero padding) and horizontal flipping.

0.A.1.1 Dependence on Batch Size

Fig. 0.A.1 shows the dependence of SGD on the batch size. It is widely observed that SGD converges to the region of interest faster than full gradient descent; similarly, when increasing the batch size, we observe that learning slows down. Moreover, the stochasticity of SGD and its regularizing effect on learning change. We observe this with BN: although optimizing the training loss succeeds, the stochasticity of both BN and SGD is decreased, leading to a significant loss of accuracy.

Figure 0.A.1: Dependence of SGD on training batch size (left: training loss; right: validation loss). Top: standard training. Bottom: batch normalization. All cases use the same initialization point; the learning rate is automatically found for each run. We observe that with or without BN, the results of SGD significantly depend on the batch size.

0.A.1.2 Statistical Significance / Reproducibility

A fair question about the reported comparisons is their statistical significance. After all, the initialization, batch selection and parameter samples in Bayesian learning are all random. Despite that, the results are fairly repeatable. Fig. 0.A.2 shows evidence for BN and the proposed method: runs with different random seeds stay within the standard deviation of iterates of a single run. In this experiment the split of the training data into training and validation sets and the learning rate are kept constant. We observe similar behavior with the other studied methods and metrics. We therefore show only the standard deviation of the iterates of a single run in all our validation plots in the main paper (as shaded areas).

Figure 0.A.2: Validation accuracy vs. epochs for runs with different random seeds (left: batch normalization; right: analytic normalization with Bayesian learning). Solid lines show the (running) mean of each run and the shaded area shows the standard deviation of iterates of the run with seed 1 (corresponding to the results shown in the main paper).

0.A.1.3 Recognition with a Reject Option

Beyond the bare accuracy reported in Table 1, we conducted further tests of the quality of the learned predictive distribution. Fig. 0.A.3 shows the error coverage plot, obtained as follows. The recognition system is allowed to abstain from recognition based on a certain criterion, i.e., to give an "I don't know" answer. The error rate on the classified data (error) is plotted versus the portion of classified data relative to all data (completeness). As the rejection criterion we use, for simplicity, the entropy of the predictive distribution: samples with high entropy are the first candidates for the "I don't know" answer. The results are obtained with deterministic "single run" methods (i.e., without sampling batches or stochastic parameters) on the test set. It is seen that regularization uniformly improves the recognition accuracy at all thresholds and that the proposed method is on par with batch normalization while potentially more flexible. It is interesting to note that close-to-zero error rates are achievable at a completeness of about 60%.
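The error coverage curve itself is straightforward to compute: sort the test samples by predictive entropy and accumulate the error rate over increasing coverage. A sketch (the function name is ours):

```python
import numpy as np

def error_coverage_curve(probs, labels):
    """probs: (N, K) predictive distributions; labels: (N,) ground truth.
    Returns coverage (fraction of accepted samples) and the error rate on them,
    accepting most-confident (lowest-entropy) samples first."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    order = np.argsort(entropy)                     # most confident first
    errors = (probs.argmax(axis=1) != labels)[order]
    n = len(labels)
    coverage = np.arange(1, n + 1) / n
    error_rate = np.cumsum(errors) / np.arange(1, n + 1)
    return coverage, error_rate
```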

Figure 0.A.3: Error coverage test. By rejecting from recognition on the data with high entropy of the predictive distribution, the recognition of the accepted data can be much more accurate. This verifies that the learned posterior distribution contains useful confidence information – the recognition system can know when the prediction is likely to be erroneous and when not.

0.A.1.4 Sensitivity to Input Perturbations

Fig. 0.A.4 inspects the sensitivity of the learned models to perturbations of the input with random noise or with an adversarial gradient sign attack. The tests reveal relatively similar behavior of all solutions, experiencing a fast drop in accuracy as the perturbation strength increases. Note that deterministic test-time models are evaluated, i.e., there is no sampling of batches or parameters at test time. The test shows that stochastic regularization in the form of Bayesian learning did not improve the stability of such deterministic predictions.
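The gradient sign attack used in this test can be sketched as follows in pytorch; we assume the model outputs log-probabilities (as our networks do), and `fgsm_perturb` is an illustrative name.

```python
import torch

def fgsm_perturb(model, x, y, eps):
    """Perturb input x by one signed gradient step of size eps,
    ascending the NLL of the true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.nll_loss(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```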

Figure 0.A.4: Stability with respect to perturbation of the input. Left: the input image is perturbed with a Gaussian noise with a certain standard deviation (x-axis). Right: the input image is perturbed with an adversarial gradient sign attack of a certain norm of the perturbation (x-axis). Numbers in brackets show accuracy at zero noise.

0.A.2 Cell Segmentation

This experiment is conducted on the dataset "DIC-HeLa" [17], which contains 20 fully annotated training images of size 512×512. We randomly split the training set into 70% training and 30% validation. We restrict ourselves to the task of segmenting the cells.

The evaluated network is a fully convolutional network with the following structure:

Conv2D(channels=64, ksize=9), Activation
Conv2D(channels=64, ksize=9), Activation
Conv2D(channels=32, ksize=9), Activation
Conv2D(channels=32, ksize=9), Activation
Conv2D(channels=16, ksize=9), Activation
Conv2D(channels=16, ksize=9), Activation
Conv2D(channels=8, ksize=9), Activation
Conv2D(channels=8, ksize=9), Activation
Conv2D(channels=4, ksize=9), Activation
Conv2D(channels=4, ksize=9), Activation
Conv2D(channels=2, ksize=1), Activation

As activation we used the Softmax function. This network has far fewer parameters than U-net [18], which has 30M parameters and has been previously applied to this small dataset. Following [18], we perform the following data augmentation: random vertical and horizontal flips as well as small non-rigid deformations (the images and the corresponding segmentations are deformed synchronously). The deformations are illustrated in Fig. 8.
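The listing above can be assembled in pytorch roughly as follows. This is a sketch only: the grayscale input and the "same" padding are our assumptions, and the normalization layers under comparison are omitted.

```python
import torch
import torch.nn as nn

def make_seg_net(in_channels=1):
    # channel widths of the 9x9 convolutions from the listing above
    channels = [64, 64, 32, 32, 16, 16, 8, 8, 4, 4]
    layers, c_in = [], in_channels
    for c_out in channels:
        layers += [nn.Conv2d(c_in, c_out, kernel_size=9, padding=4),
                   nn.Softmax(dim=1)]   # the paper uses Softmax as the activation
        c_in = c_out
    # final 1x1 convolution producing the two-class (cell / background) output
    layers += [nn.Conv2d(c_in, 2, kernel_size=1), nn.Softmax(dim=1)]
    return nn.Sequential(*layers)
```

Being fully convolutional, the network maps an H×W image to an H×W two-channel segmentation of the same spatial size.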

For learning this network, the Adam optimizer was more suitable. We used the same learning rate schedule and selection of the initial learning rate as for CIFAR, but with the decrease factor set such that the total decrease over 3000 epochs is 0.1, which was appropriate for the amount of data augmentation that we applied.