1 Introduction
Batch Normalization (BN) [11] has significantly contributed to the wide application and success of deep learning. It has enabled training of larger and deeper models [7, 8, 22, 23], leading to significant advances in computer vision. However, a well-acknowledged drawback of BN is that it requires sufficiently large minibatches [10, 23, 7], and results in drastically reduced model performance for smaller minibatches. In this paper we study this peculiar behavior of BN to gain a better understanding of the source of the problem. Based on a statistical insight we identify a potential cause, and propose EvalNorm to address the issue. EvalNorm estimates corrected normalization statistics for BN to use during evaluation only. It supports online estimation of the corrected statistics while the model is being trained and does not affect the training scheme of the model. An added advantage of EvalNorm is that it can be applied to pretrained models, allowing them to benefit from our method without retraining. For pretrained models EvalNorm offers: 1) a rule of thumb that is trivial to implement, and 2) an offline estimation method that requires a pass through data (but does not update the model). The ability to estimate corrected statistics online for new models avoids an otherwise two-step process, making experimentation easier. The model is still trained exactly as it would be without our method. Note that proposing a new normalization technique is not a goal of this paper. Instead, the goal is to gain a better understanding of the behavior of BN for small minibatches. We limit ourselves to explorations that only affect the evaluation of a model trained with BN and refer to our method as 'Eval Normalization' (EvalNorm or EN).

Reliance of BN on large minibatches is prohibitive in several settings due to hardware memory limitations. For example, applications requiring high-resolution inputs (object detection, segmentation, medical image analysis, etc.) or high-capacity (deeper and wider) networks are constrained to using smaller minibatches and thus take a performance hit. As a result, several normalization techniques, such as Group Normalization [25] and Batch Renormalization [10], have been proposed to address the problems caused by smaller minibatches. However, these approaches do not shed any light on the source of the problem with BN. Instead, they propose alternatives that change the model and require retraining. While that solves the problem by providing alternatives, it does not address it for the many already trained and deployed models using BN where retraining is not possible. Further, as a research community we need a better understanding of methods as widely used as BN. Our experimental evaluation of EN shows that it addresses the shortcomings of BN for small batches and provides a feasible alternative when retraining is not an option.
Training and evaluation discrepancy in BN:
During training, BN normalizes each channel of an example using the mean and variance of that channel aggregated across the full minibatch. This aggregation introduces a dependency on the other minibatch samples. During evaluation, however, each example is evaluated by itself, so an approximation of the minibatch statistics is required. Typically, an exponential moving average (EMA) of minibatch moment statistics is maintained during training and substituted during evaluation. Normalization using EMA statistics is assumed to accurately approximate the normalization observed during training. While reasonable for larger minibatches, we demonstrate that this assumption is erroneous for small minibatches (Section 3.2). This is because the mean and variance used to normalize a sample in a minibatch during training depend on that sample itself. For small minibatches this dependency is significant, but it is ignored by the default use of EMA statistics. We qualitatively illustrate the discrepancy between train and test normalization in Figure 1, where we plot the distribution of normalized features at train ('BN') and test ('EMA') time. Each plot shows smoothed histograms of activations for an arbitrary channel of an arbitrary layer (in a ResNet-20 model) using different methods. For larger minibatches (Figure 1(a)), the distributions of normalized activations during training and testing match well. However, for small minibatches (Figures 1(b) and 1(c)), the normalized feature distributions are quite different. Lastly, note that our method ('Ours') reduces this discrepancy and brings the distribution of normalized activations during testing closer to that during training ('BN').

To summarize our contributions, we: 1) quantify the dependence of the normalization statistics used by BN on a particular minibatch sample, leading to an insight into the source of poor small-minibatch performance; 2) propose two methods for estimating a correction without retraining existing models that use BN; and 3) propose a method to estimate corrections online during training of new models without affecting the training. An extensive experimental section analyzes and validates our insight and demonstrates that it leads to improved evaluation performance of BN on a variety of common benchmarks.
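This train/test mismatch can be reproduced in a toy numpy experiment: normalize a skewed "activation" distribution per minibatch (train-time BN) versus with population statistics (EMA-style evaluation), and compare the skewness of the results. The distribution, sizes, and seed below are illustrative stand-ins, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one channel's pre-normalization activations: 10,000 scalars
# drawn from a skewed (gamma) distribution.
population = rng.gamma(shape=2.0, scale=1.5, size=10_000)
pop_mean, pop_var = population.mean(), population.var()
eps = 1e-5

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

def mismatch(B):
    """Skewness gap between train-time (per-minibatch) and eval-time
    (population/EMA-style) normalized activations at minibatch size B."""
    batches = population[: (len(population) // B) * B].reshape(-1, B)
    mu = batches.mean(axis=1, keepdims=True)
    var = batches.var(axis=1, keepdims=True)
    train_normed = ((batches - mu) / np.sqrt(var + eps)).ravel()
    eval_normed = (population - pop_mean) / np.sqrt(pop_var + eps)
    return abs(skewness(train_normed) - skewness(eval_normed))

# With B = 2, each train-normalized pair collapses to roughly +/-1, so the
# train-time distribution looks nothing like the EMA-normalized one; with
# B = 128 the two distributions nearly coincide.
m2, m128 = mismatch(2), mismatch(128)
print(m2, m128)
```

The qualitative effect matches Figure 1: the smaller the normalization minibatch, the further the train-time distribution drifts from what EMA statistics produce at test time.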
2 Related work
Normalization of data for training is regularly used in machine learning. For example, it is common practice to normalize features (scale or whiten) before learning classifiers such as SVMs. Similarly, for deep networks, many techniques have been proposed to normalize both inputs and intermediate representations, making training better behaved and faster [13, 6]. Batch Normalization or BatchNorm (BN) is one such technique, which aims to stabilize latent feature distributions in a deep network. BN normalizes features using statistics computed from a minibatch, and has been shown to ease the learning problem and enable fast convergence of very deep network architectures. However, with BN's widespread adoption, it has been observed that models using BN exhibit severe degradation in performance when trained with smaller minibatches (as discussed in Section 1).

To address the stochasticity due to small minibatches (and bias due to non-iid samples), Ioffe [10] introduced batch renormalization (Renorm), which constrains the minibatch moments to a specific range. This ensures that the variation in minibatch statistics is contained during training. Ioffe [10] also introduced hyperparameters to prevent drift in the EMA statistics due to the significant variability of small minibatches. However, Renorm still depends on minibatch statistics, and small minibatches lead to worse performance. For tasks where small minibatches are standard (e.g., object detection), another approach is to engineer systems that circumvent the issue.
Peng et al. [17] proposed to perform synchronized computation of BN statistics across GPUs (Cross-GPU BN) to obtain better statistics. In practice, since the minibatch for object detection is small (1-2 images), this approach requires synchronizing across multiple GPUs to compute reasonable BN estimates. Further, this does not address the original problem of BN with small minibatches. Moreover, the need for synchronized computation prohibits asynchronous training, a standard and practical tool for large-scale problems.

Instead of dealing with the small-minibatch problem, several normalization techniques have been proposed that do not use the construct of a 'minibatch' at all. Instance Normalization [24] performs normalization similar to BN but for a single sample only, and was shown to be effective for image style transfer. Similarly, Layer Normalization [2] uses the entire layer (all channels) to estimate the normalization statistics. These approaches [2, 24] have not shown benefits on image recognition tasks, which are the application we focus on. Instead of normalizing the activations, Weight Normalization [20] reparameterizes the weights of the network to accelerate convergence. Normalization Propagation [1] uses data-independent moment estimates in every layer, instead of computing them from minibatches during training. Group Normalization (GN) [25] divides the channels into groups and, within each group, computes the moments for normalization. GN alleviates the small-minibatch problem to some extent, but it performs worse than BN for larger minibatches. Ren et al. [18] provide a unifying view of the different normalization approaches, characterizing them as the same transformation applied along different dimensions (layers, samples, filters, etc.).

Unlike many of the approaches discussed above, the proposed EN does not modify the training scheme of batch normalization; it only estimates a different set of evaluation-time statistics. The parameters used in EN are independent of the deep network, and training them does not impact the network's training in any way. We conjecture that EN may be complementary to some of the above-mentioned approaches and could be used in conjunction with those that normalize across the batch dimension. However, we consider this beyond the focus of this study and leave it as future work.
3 Eval Normalization (EN)
We first briefly describe the relevant aspects of batch normalization and then present our method.
3.1 Batch Normalization (BN)
BN normalizes a particular channel of a sample in a minibatch using the mean and variance of that channel computed across the whole minibatch. Since the normalization of the channels is decoupled, we analyze the normalization of a single (but arbitrary) channel. Consider the set of activations x = {x_1, …, x_B} of a particular channel in an arbitrary layer for a minibatch of size B. Typically, each x_i is a two-dimensional field of scalars. During training, BN uses the minibatch mean μ_B and variance σ²_B to compute the normalized activations as

x̂_i = (x_i − μ_B) / √(σ²_B + ε).   (1)

This has two notable ramifications during training: 1) for activations x_i, the normalization statistics μ_B and σ²_B are a function of the statistics of the activations themselves; 2) the normalized activations x̂_i for a particular sample are stochastic, as they depend on the statistics of the other samples in a stochastic minibatch.

During evaluation, the randomness of x̂_i due to this stochastic dependency on other minibatch elements poses a difficulty for BN. To keep evaluation deterministic and remove the dependency on other test samples, BN substitutes x̂_i by an estimate of its expected value E[x̂_i]. This is done by treating the μ_B and σ²_B encountered during training as random variables and substituting them by estimates of their expected values, μ ≈ E[μ_B] and σ² ≈ E[σ²_B], to construct a first-order approximation of E[x̂_i] as

E[x̂_i] ≈ (x_i − μ) / √(σ² + ε).   (2)

The estimates μ and σ² are typically maintained as exponential moving averages (EMA) during training.
Note that BN also employs a learned affine transform after normalization. We omit this affine transform in the following as it is not relevant to the discussion of normalization.
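To make eqs. (1) and (2) concrete, the following is a minimal single-channel BN sketch in numpy. The class name, momentum value, and ε are illustrative choices, and the learned affine transform is omitted, matching the text:

```python
import numpy as np

class SimpleBatchNorm:
    """Minimal single-channel BatchNorm sketch (affine transform omitted)."""

    def __init__(self, momentum=0.9, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        self.ema_mean, self.ema_var = 0.0, 1.0   # EMA estimates of mu, sigma^2

    def train_step(self, x):
        # Eq. (1): normalize with this minibatch's own moments,
        # while updating the EMA estimates used later at evaluation.
        mu, var = x.mean(), x.var()
        self.ema_mean = self.momentum * self.ema_mean + (1 - self.momentum) * mu
        self.ema_var = self.momentum * self.ema_var + (1 - self.momentum) * var
        return (x - mu) / np.sqrt(var + self.eps)

    def eval_step(self, x):
        # Eq. (2): substitute the EMA moments for the minibatch moments.
        return (x - self.ema_mean) / np.sqrt(self.ema_var + self.eps)

# Illustrative driver: feed batches from N(3, 2^2) and evaluate afterwards.
rng = np.random.default_rng(3)
bn = SimpleBatchNorm()
for _ in range(500):
    bn.train_step(rng.normal(3.0, 2.0, size=256))
test_out = bn.eval_step(rng.normal(3.0, 2.0, size=4096))
```

After training on many batches, `ema_mean` and `ema_var` approach the population moments, so eq. (2) produces approximately zero-mean, unit-variance activations on held-out data.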
3.2 Source of stochasticity in x̂_i
As noted in the first ramification above, μ_B and σ²_B are a function of x_i. However, eq. 2 ignores this dependency entirely. Let us first make this dependency explicit. Let μ_i and σ²_i be the mean and variance, respectively, of x_i. Denote by x_{∖i} the set of minibatch elements excluding x_i, with combined mean μ_{∖i} and variance σ²_{∖i}. The full minibatch mean and variance can be expressed in terms of the above with the combined population mean and variance formulae. Let α = 1/B; then

μ_B = α μ_i + (1 − α) μ_{∖i},   (3)

σ²_B = α σ²_i + (1 − α) σ²_{∖i} + α(1 − α)(μ_i − μ_{∖i})².   (4)

First note that μ_B and σ²_B can be viewed as interpolating between the contribution from the sample being normalized (μ_i, σ²_i) and the contribution from the other minibatch samples (μ_{∖i}, σ²_{∖i}). For the normalized activations x̂_i, it is evident that the true sources of stochasticity are μ_{∖i} and σ²_{∖i}, due to the randomized minibatch. Therefore, we assert that E[x̂_i] in eq. 2 for the minibatch sample x_i should be approximated using eqs. 3 and 4, as opposed to approximating μ_B and σ²_B directly.

Note that for large minibatches α → 0, resulting in a nominal contribution of μ_i and σ²_i to the normalizing statistics. Therefore, μ ≈ E[μ_B] and σ² ≈ E[σ²_B] are reasonable approximations. However, for small minibatches the contributions of μ_i and σ²_i in eqs. 3 and 4 cannot be ignored, and these approximations are not accurate. We argue that this inaccuracy is the primary reason for the observed distribution mismatch between BN and EMA in Figure 1, and for the poor performance of BN with small minibatches.
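The decomposition in eqs. (3) and (4) is the standard pooled mean/variance identity and can be checked numerically; the shapes and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
B, S = 4, 16                       # minibatch size and per-sample spatial size
batch = rng.normal(size=(B, S))    # one channel's activations

i = 0                              # the sample being normalized
x_i, rest = batch[i], batch[np.arange(B) != i]

mu_i, var_i = x_i.mean(), x_i.var()    # sample's own moments
mu_r, var_r = rest.mean(), rest.var()  # moments of the remaining samples
alpha = 1.0 / B

# Eq. (3): minibatch mean as an interpolation.
mu_B = alpha * mu_i + (1 - alpha) * mu_r
# Eq. (4): minibatch variance as an interpolation plus a mean-shift term.
var_B = (alpha * var_i + (1 - alpha) * var_r
         + alpha * (1 - alpha) * (mu_i - mu_r) ** 2)

print(np.isclose(mu_B, batch.mean()), np.isclose(var_B, batch.var()))
```

Because the identity is exact, the interpolated moments match the directly computed minibatch moments up to floating-point error for any α = 1/B split.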
3.3 Approximating E[x̂_i] for small minibatches
Simple approximation: At first glance, a simple solution is to set α = 1/B directly and approximate μ_B and σ²_B using eqs. 3 and 4. More specifically, μ_B ≈ α μ_i + (1 − α) μ and σ²_B ≈ α σ²_i + (1 − α) σ² + α(1 − α)(μ_i − μ)², which can then be substituted into eq. 2 for normalization. Note that we have made the additional approximations μ_{∖i} ≈ μ and σ²_{∖i} ≈ σ². However, for small minibatches μ_{∖i} and σ²_{∖i} exhibit high variance, and these approximations may be inaccurate. Unfortunately, higher-order approximations accounting for the variance either require tracking higher-order moments, whose estimates would themselves be unreliable, or making strong assumptions about the distribution of x_i. Nevertheless, we explore this alternative in experiments, where it turns out that α = 1/B is less than ideal. Experimenting with a few heuristic choices for the value of α leads to a simple rule of thumb (Tables 1 and 4) that is found to work well empirically.

Estimated approximation: Instead of using a fixed α, we propose to turn α into an estimated variable. We instantiate two separate copies of α, namely α_μ and α_σ, and construct the following approximations to eqs. 3 and 4:

μ̂_B = α_μ μ_i + (1 − α_μ) μ,   (5)

σ̂²_B = α_σ σ²_i + (1 − α_σ) σ² + α_σ(1 − α_σ)(μ_i − μ)².   (6)

Our method estimates α_μ and α_σ (Section 3.4) such that activations normalized using the above approximations match the activations normalized using minibatch statistics. Note that decoupled α_μ and α_σ lead to slightly better empirical results than a single shared value.
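The corrected evaluation-time moments of eqs. (5) and (6) can be sketched directly; the function names and argument order here are illustrative, not from the paper:

```python
import numpy as np

def en_statistics(x_i, ema_mean, ema_var, alpha_mu, alpha_sigma):
    """EvalNorm-style corrected moments (eqs. 5-6) for one test sample's
    single-channel activations x_i, given EMA estimates of mu and sigma^2."""
    mu_i, var_i = x_i.mean(), x_i.var()
    mu_hat = alpha_mu * mu_i + (1 - alpha_mu) * ema_mean
    var_hat = (alpha_sigma * var_i + (1 - alpha_sigma) * ema_var
               + alpha_sigma * (1 - alpha_sigma) * (mu_i - ema_mean) ** 2)
    return mu_hat, var_hat

def en_normalize(x_i, ema_mean, ema_var, alpha_mu, alpha_sigma, eps=1e-5):
    mu_hat, var_hat = en_statistics(x_i, ema_mean, ema_var,
                                    alpha_mu, alpha_sigma)
    return (x_i - mu_hat) / np.sqrt(var_hat + eps)

# Illustrative sanity checks on the two extremes of alpha:
x_test = np.random.default_rng(4).normal(2.0, 3.0, size=49)
m0, v0 = en_statistics(x_test, 0.5, 2.0, 0.0, 0.0)  # alpha=0 -> plain EMA
m1, v1 = en_statistics(x_test, 0.5, 2.0, 1.0, 1.0)  # alpha=1 -> per-sample
```

Setting both α values to 0 recovers standard EMA normalization (eq. 2), while setting them to 1 recovers purely per-sample (instance-norm-like) statistics; the estimated values interpolate between these extremes.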
3.4 Estimating α_μ and α_σ
Offline estimation: Given a trained model, we propose to estimate α_μ and α_σ for a particular layer by minimizing an auxiliary loss, as follows. Let sg(·) represent a stop_gradient operation that does not allow the flow of gradients to its argument in automatic differentiation frameworks. Then the auxiliary loss ℓ is computed as

μ̂_B = α_μ sg(μ_i) + (1 − α_μ) sg(μ),   (7)

σ̂²_B = α_σ sg(σ²_i) + (1 − α_σ) sg(σ²) + α_σ(1 − α_σ)(sg(μ_i) − sg(μ))²,   (8)

ℓ = ‖ (sg(x_i) − μ̂_B) / √(σ̂²_B + ε) − sg(x̂_i) ‖².   (9)

Minimizing this objective estimates α_μ and α_σ such that evaluation-time normalization produces activations similar to those observed when using minibatch statistics for normalization. Further, the stop_gradient operation prevents the estimation from affecting the model parameters.
Online estimation: For training a new model, a naïve approach would be to first train the model as usual with BN and then estimate α_μ and α_σ in a second pass, freezing the rest of the parameters and using the objective above. However, the use of stop_gradient allows us to collapse the two steps into one without affecting the training of the model parameters.
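The estimation objective can be sketched without an autograd framework: since only α_μ and α_σ are free, a grid search over a single shared α stands in for the gradient-based procedure, and stop_gradient is mimicked by treating the BN-normalized output as a fixed target. All data, sizes, and names below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
B, S, eps = 2, 64, 1e-5

# Hypothetical single-channel activations: 500 training minibatches of
# B samples, each sample a field of S scalars.
data = rng.gamma(2.0, 1.5, size=(500, B, S))
ema_mean, ema_var = data.mean(), data.var()   # stand-in for EMA statistics

def bn_normalize(batch):
    """Train-time BN output (eq. 1); plays the role of the fixed
    (stop-gradient) target in the matching loss."""
    return (batch - batch.mean()) / np.sqrt(batch.var() + eps)

def en_normalize(x_i, a_mu, a_sig):
    """Eval-time normalization with corrected moments (eqs. 5-6)."""
    mu_i, var_i = x_i.mean(), x_i.var()
    mu_hat = a_mu * mu_i + (1 - a_mu) * ema_mean
    var_hat = (a_sig * var_i + (1 - a_sig) * ema_var
               + a_sig * (1 - a_sig) * (mu_i - ema_mean) ** 2)
    return (x_i - mu_hat) / np.sqrt(var_hat + eps)

def matching_loss(a_mu, a_sig):
    """Mean squared gap between EN-normalized and BN-normalized activations."""
    total = 0.0
    for batch in data:
        target = bn_normalize(batch)
        for i in range(B):
            total += ((en_normalize(batch[i], a_mu, a_sig) - target[i]) ** 2).sum()
    return total / data.size

# Grid search in place of the paper's gradient-based estimation.
grid = np.linspace(0.0, 1.0, 21)
alpha_star = min(grid, key=lambda a: matching_loss(a, a))
```

For this tiny B = 2 setting, the best α is strictly positive: including the sample's own moments matches the train-time normalization better than pure EMA statistics (α = 0), which is exactly the effect the estimation is designed to capture.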
4 Experiments
We evaluate our approach on two tasks and four datasets: image classification on CIFAR-10, CIFAR-100 [12], and ImageNet [3], and object detection on COCO [14]. We use CIFAR-100 as a testbed for a variety of ablation experiments that study various aspects of our method. The results on ImageNet and COCO demonstrate the utility of EvalNorm (EN), especially for models trained with small minibatches. Note that, unless explicitly stated, α_μ and α_σ are estimated online in all experiments, for ease of experimentation.
4.1 Image Classification on CIFAR-10/100
Experimental setup. CIFAR-10 and CIFAR-100 contain images from 10 and 100 classes, respectively. For both datasets, we train on the 50,000 training images and evaluate on the 10,000 test images. Unless specified otherwise, we use a 20-layer ResNet-v2 CIFAR variant [8] for both datasets. The base CIFAR variant contains three groups of residual blocks with widths 16, 32, and 64, respectively. Wider variants use multiples of these sizes. All networks are trained using SGD with momentum for 128k updates, with an initial learning rate of 0.1 and a cosine decay schedule [15], unless otherwise specified.
Small minibatches.
In this section we study the effect of the minibatch size used to compute the normalization statistics. For a fair comparison, all models need to be trained for the same number of epochs. However, this leads to smaller-minibatch models receiving more gradient updates. Moreover, gradients from small minibatches have higher variance than those from larger minibatches, requiring an adjustment of the learning rate. To isolate the impact of small minibatches on normalization from these confounding factors, we use the same minibatch size (128 samples) for gradient updates and vary the number of samples used for normalization (from 2 to 128); refer to the 'microbatch' setup in [10].

Figures 2(a) and 2(b) compare our method (EN) with the default EMA statistics used by standard BN [11] and with batch renormalization (Renorm) [10]. For Renorm, we use fixed values of the clipping hyperparameters r_max and d_max. For larger minibatch sizes (64 and 128), all methods are comparable, but for smaller minibatch sizes (2 and 4), using EN statistics leads to a lower classification error than both BN and Renorm. Even though Renorm is less sensitive to varying minibatch sizes, it also leads to worse performance.
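The 'microbatch' setup referenced above can be sketched for a single channel as follows; the function name and shapes are illustrative, and only the normalization grouping is shown (the gradient step still uses the full 128-sample batch):

```python
import numpy as np

def microbatch_normalize(batch, norm_size, eps=1e-5):
    """Split an SGD minibatch into groups of `norm_size` samples and normalize
    each group independently with its own moments. `batch` holds one channel:
    shape (B, S), with S scalar activations per sample."""
    B = batch.shape[0]
    assert B % norm_size == 0, "batch must divide evenly into microbatches"
    groups = batch.reshape(B // norm_size, norm_size, -1)
    mu = groups.mean(axis=(1, 2), keepdims=True)    # per-group moments
    var = groups.var(axis=(1, 2), keepdims=True)
    return ((groups - mu) / np.sqrt(var + eps)).reshape(batch.shape)

# Illustrative use: a 128-sample batch normalized as one group (ordinary BN)
# versus in microbatches of 2 samples.
rng = np.random.default_rng(5)
acts = rng.normal(1.0, 2.0, size=(128, 8))
full = microbatch_normalize(acts, 128)
micro = microbatch_normalize(acts, 2)
```

With `norm_size` equal to the full batch this reduces to ordinary BN, which is why the setup isolates the normalization minibatch size from the gradient minibatch size.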
Interestingly, the performance of BN and EN does not decrease monotonically with decreasing normalization minibatch size. Instead, it seems to improve before becoming worse. Similar trends were reported by [16]. An in-depth study of this is beyond the scope of this paper (refer to [16] for an empirical study). One intuition is that smaller minibatches introduce stochasticity that acts as a regularizer before it starts to hurt performance.
Visualization of estimated α_μ and α_σ. Figure 3 visualizes the estimated α_μ and α_σ across different layers for various batch sizes. The larger values of α for smaller batches confirm our insight that the statistics of the sample being normalized need to be included and weighted in accordance with the batch size. Both α_μ and α_σ take on smaller values for larger batches, indicating a decreasing influence of individual sample statistics, in agreement with eqs. 3 and 4.
Simple approximation for α. Figure 4 compares a set of hand-picked schedules for α as a function of batch size B. Although the values estimated using our method perform best, a fixed rule of thumb works well, which is counter-intuitive relative to the more natural-seeming α = 1/B. We also validate this on ImageNet in Table 1.
Deeper and wider networks. Figure 5 compares EN and EMA in deeper (20, 56, 110, and 254 layers) and wider (1×, 2×, 4×, and 8× the original width) ResNet-v2 models, with normalization minibatch sizes of 4 and 8 on CIFAR-100. EN consistently yields lower classification error than EMA statistics, indicating its general applicability.
4.2 Image Classification on ImageNet
We report results on the ImageNet classification dataset [3] with 1000 classes. We train on the ~1.28M training images and evaluate on the 50k validation images. All methods in this section use a 50-layer ResNet-v2 [8] architecture. We resize all images to 224×224 and use the training-time data augmentation from [23].
As is standard practice [7, 8], we use synchronous SGD across 8 GPUs to train all models in this section. We compute the BN and EN statistics per GPU, whereas gradients are averaged across all GPUs. Therefore, the effective minibatch size for gradient computation (the SGD 'batchsize') is 8× the per-GPU normalization minibatch size ('samples/GPU'). This is the standard synchronized multi-GPU training setup in popular libraries such as PyTorch and TensorFlow.

We study normalization minibatch sizes (samples/GPU) of 2, 4, 8, 16, and 32, leading to effective SGD batchsizes (8 × samples/GPU) of 16, 32, 64, 128, and 256, respectively. All models are trained for 60 epochs with a cosine learning-rate decay schedule. Other implementation details (such as initialization) follow [8]. We report two image classification metrics: 'Prec@1', the classification accuracy of the top-1 prediction, and 'Recall@5', the recall of the top-5 predictions.
We evaluate EN and EMA statistics for inference-time normalization and report the results in Table 1. EN statistics consistently perform better than EMA statistics across all minibatch sizes. For larger minibatches, where EMA statistics provide an accurate approximation, the difference in performance is marginal. However, for small minibatches EN statistics have a significant impact: for 2 samples/GPU, the Prec@1 metric improves by more than 6 points and the Recall@5 metric improves by more than 4 points. Also note that the performance gap between small and large minibatch sizes is much lower with EN. For example, reducing samples/GPU from 32 to 4, Prec@1 for EMA drops from 75.98 to 71.40 (4.58 points), whereas for EN it only drops by 2.7 points.
Next, in Table 1 we also compare our method with two recent normalization methods, Group Normalization [25] and Batch Renormalization [10], that circumvent the minibatch dependence. Note that these methods change the model or the training method itself. EN performs similarly to or better than the alternatives that normalize across the batch (EMA and Renorm), indicating that it properly accounts for individual sample contributions. For large batch sizes EN outperforms the other methods, while for small batch sizes Group Normalization tends to perform better. However, the gain from retraining with GroupNorm is reduced from 10% to 3.75% (for batchsize 2), and EN does not suffer GroupNorm's side effect of reduced performance for large batch sizes.
Table 1 also reports the performance of the suggested rule of thumb as well as of offline estimation. The rule of thumb significantly improves over EMA but underperforms EN; nevertheless, it is an attractive and easier alternative to estimation, at some cost in performance. Offline estimation performs similarly to online estimation, as expected.
Table 1: ImageNet classification for varying normalization minibatch sizes (samples/GPU).

samples/GPU                32     16     8      4      2
Prec@1
  Renorm                 76.04  75.78  74.21  73.33  70.87
  Groupnorm              75.87  75.73  75.75  75.61  74.78
  EMA                    75.98  75.26  73.68  71.40  64.80
  EN [ours]              76.20  75.89  74.87  73.49  70.98
    (gain over EMA)      +0.22  +0.63  +1.19  +2.09  +6.18
  Rule-of-thumb [ours]   76.28  75.77  74.52  73.17  70.12
  EN-Offline [ours]      76.18  75.85  74.92  73.48  71.03
Recall@5
  Renorm                 92.81  92.65  92.18  91.68  90.03
  Groupnorm              92.59  92.31  92.28  92.10  91.81
  EMA                    92.88  92.61  92.17  90.70  86.13
  EN [ours]              92.97  92.78  92.38  91.75  90.18
    (gain over EMA)      +0.09  +0.17  +0.21  +1.05  +4.05
  Rule-of-thumb [ours]   92.96  92.81  92.17  91.46  89.51
  EN-Offline [ours]      92.93  92.79  92.30  91.81  90.25
Table 2: Faster R-CNN on COCO with EMA vs. EN statistics under different initialization and training setups, for SGD minibatch sizes of 8, 4, and 2 images. Columns within each metric are images/batch = 8 / 4 / 2.

                                                AP (8/4/2)         AP50 (8/4/2)       AP75 (8/4/2)
(a) Weights: Random (trained); BN stats: Random (trained)
  EMA                                        25.2  25.6  22.0   40.9  41.2  36.1   26.7  27.5  22.4
  EN [ours]                                  25.2  25.8  23.5   41.0  41.3  38.1   26.8  27.6  25.0
    (gain)                                    0.0  +0.1  +1.5   +0.1  +0.1  +2.0   +0.1  +0.1  +2.6
(b) Weights: ImageNet (trained); BN stats: Random (trained)
  EMA                                        30.4  29.9  27.6   48.4  47.2  43.5   32.8  31.9  29.6
  EN [ours]                                  30.7  30.5  29.6   48.5  47.9  46.5   33.0  32.8  31.3
    (gain)                                   +0.2  +0.6  +2.0   +0.1  +0.7  +3.1   +0.2  +0.9  +1.7
(c) Weights: ImageNet (trained); BN stats: ImageNet (frozen)
  EMA                                        27.5  28.8  28.2   45.1  47.3  46.6   28.9  30.5  30.5
  EN [ours]                                  30.2  30.3  30.5   48.2  48.6  48.7   32.2  32.1  32.7
    (gain)                                   +2.7  +1.5  +2.3   +3.1  +1.3  +2.1   +3.3  +1.6  +2.2
(d) Weights: ImageNet (trained); BN stats: ImageNet (trained)
  EMA                                        31.7  30.1  24.8   50.6  47.6  40.0   33.9  32.2  26.1
  EN [ours]                                  31.9  31.6  30.2   50.7  49.6  46.9   34.1  34.0  31.0
    (gain)                                   +0.2  +1.5  +5.4   +0.2  +2.1  +7.0   +0.2  +1.8  +4.9
4.3 Object Detection on COCO
Finally, we evaluate our method on the task of object detection and demonstrate consistent improvements when using EN statistics as opposed to EMA. Object detection frameworks are typically trained with high-resolution inputs and hence use small SGD minibatch sizes. As a result, object detection is a good benchmark for evaluating the differences between EN and EMA.
Experimental setup. All experiments in this section use the COCO dataset [14] with 80 object classes. We train on the trainval35k set [4] with 75k images and evaluate on a held-out minival set [4] with 5k images. We report the standard COCO evaluation metrics: mean average precision at varying IoU thresholds (AP, AP50, AP75); see [14] for details.

We use the Faster R-CNN (FRCN) [19, 9] object detection framework, built on a 50-layer ResNet-v1 [7] (ResNet-50). The FRCN system has three components: 1) a backbone feature extractor (base network), which operates on high-resolution images (we resize images to 600×600 for all experiments); 2) a region proposal network (RPN), which proposes regions of interest (ROIs); and 3) a region classification network (RCN), which classifies the regions proposed by the RPN using features cropped from the base network. All but the last residual block of ResNet-50 are used as the base network, and the last residual block is used in the RCN network. Refer to [19, 9, 21] for architecture details.
We study SGD minibatch sizes (images/GPU, denoted N) of 2, 4, and 8. As is standard practice [9], we use minibatch sizes of 300 and 64 ROIs for the RPN and RCN, respectively. Note that this results in different normalization minibatches for different BN layers in the network (N images for the base network and 64N ROIs for the RCN). All models are trained for 64 epochs (in terms of images, not ROIs) with a cosine learning-rate decay schedule. All methods use asynchronous SGD with 11 parallel GPU workers. We use the publicly released code from [9].
We investigate two different training paradigms: training from random initialization and transfer learning (finetuning). The random-initialization setup is similar to the image classification experiments of Sections 4.1 and 4.2. The transfer-learning setup is more common in practice because it generally leads to higher performance. Next, we discuss the trade-offs and training flexibility of each paradigm and report results in Table 2 (blocks (a)-(d)).

Random initialization. We study the impact of EN by training a ResNet-50 Faster R-CNN model with all parameters initialized randomly. The results using both EMA and EN statistics are reported in Table 2(a). When using 4 and 8 images/minibatch, both methods are on par; but when training with 2 images/minibatch, we see consistent and significant gains of 1.5 points or more on all AP metrics. Note that in this setup the SGD and normalization minibatch sizes for the base network are the same, unlike the ImageNet setup, where gradients are averaged across 8 GPUs, making the SGD minibatch size 8× the normalization minibatch size.
Transfer learning. The standard paradigm of training object detection models (like Faster RCNN) is to use transfer learning or finetuning [5, 4, 19, 9]. In this setup, parameters from a model trained on ImageNet classification are transferred to the detection model, which is then trained on the task of object detection. We use the model checkpoint from [7] as is standard practice. The transfer learning paradigm allows us to study the impact of EN statistics compared to EMA in a variety of settings.
First, we investigate which method is able to better estimate BN statistics in the absence of a prior. For this, we use the ImageNet trained model to initialize the network parameters (except BN parameters), but we use initial values of BN parameters from [11]; and both sets of parameters are trained simultaneously. We report the results for EN and EMA in Table 2(b). We notice that EN is consistently better than EMA across all minibatch sizes and AP metrics. In fact, as opposed to random initialization, we see gains for both 2 and 4 images/minibatch, with the improvements for the smaller minibatch being much higher.
Next, we study the standard setup [19, 9] of initializing both the network and the BN EMA parameters from the ImageNet model. Since standard BN performs poorly with smaller minibatches [7, 10, 25], the BN parameters are not updated during training in this setup. In addition to reducing the dependence on minibatch statistics, EN also allows the statistics to be adjusted using the learned features (as opposed to 'stale' features from initialization). The results for this setup are reported in Table 2(c). Notice that EN performs significantly better than EMA across all minibatch sizes and AP metrics. Since this is the standard training paradigm used by almost all detection systems, these consistent and significant improvements are doubly important.
Finally, to demonstrate the effectiveness of EN in adjusting statistics, we initialize both the network and BN parameters from ImageNet and finetune both for object detection. In practice, this setup performs poorly because BN cannot be trained with small minibatches, and it is generally avoided (for example, [25] ignore this variant because of the significant drop in performance). We also see this in Table 2(c) and (d), where the EMA performance for 2 images/minibatch drops by 3.4 AP, 6.6 AP50, and 4.4 AP75. Because of these drastic drops, methods do not finetune the BN parameters. Compare this to using the proposed EN statistics, which improve over EMA by 5.4 AP, 7.0 AP50, and 4.9 AP75. Therefore, EN makes it possible to train BN parameters with smaller minibatches, yielding significant improvements.
In this section, we showed that for object detection, which is typically trained with small minibatches, using EN statistics performs markedly better across all training setups. In Table 2, we obtain healthy performance improvements for 2 images/minibatch across all setups (ranging from +1.5 to +7 points). In the standard paradigm (Table 2(c)), we improve for all minibatch sizes. Moreover, when training BN parameters with small minibatches (Table 2(c) and (d)), we improve for both 2 and 4 images/minibatch.
5 Conclusion
Our goal in this work was to gain a better understanding of the problem with BN for small minibatches. Our hypothesis is that, while normalization using EMA statistics accurately approximates train-time normalization for large minibatches, it is inaccurate for small batch sizes. This leads to a discrepancy between training and evaluation normalization and is the main reason for the performance degradation of BN at small batch sizes. To address this, we proposed EvalNorm, which provides a corrected normalization term for use at evaluation. EN is fully compatible with existing pretrained models that use BN and yields large gains for models trained with smaller batches.
Acknowledgement. We would like to thank Chen Sun, Sergey Ioffe, and Rahul Sukthankar for helpful discussions and comments.
References
Arpit et al. [2016] D. Arpit, Y. Zhou, B. U. Kota, and V. Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint arXiv:1603.01431, 2016.
Ba et al. [2016] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Girshick [2015] R. Girshick. Fast R-CNN. In ICCV, 2015.
Girshick et al. [2014] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580-587, 2014.
Glorot and Bengio [2010] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pages 249-256, 2010.
He et al. [2015] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
He et al. [2016] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016.
Huang et al. [2017] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy. Speed/accuracy trade-offs for modern convolutional object detectors. In CVPR, 2017.
Ioffe [2017] S. Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In Advances in Neural Information Processing Systems, pages 1942-1950, 2017.
Ioffe and Szegedy [2015] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
Krizhevsky et al. [2014] A. Krizhevsky, V. Nair, and G. Hinton. The CIFAR-10/100 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html, 2014.
LeCun et al. [1998] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9-50. Springer, 1998.
Lin et al. [2014] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
Loshchilov and Hutter [2017] I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2017.
Masters and Luschi [2018] D. Masters and C. Luschi. Revisiting small batch training for deep neural networks. arXiv preprint arXiv:1804.07612, 2018.
Peng et al. [2017] C. Peng, T. Xiao, Z. Li, Y. Jiang, X. Zhang, K. Jia, G. Yu, and J. Sun. MegDet: A large mini-batch object detector. arXiv preprint arXiv:1711.07240, 2017.
Ren et al. [2016] M. Ren, R. Liao, R. Urtasun, F. H. Sinz, and R. S. Zemel. Normalizing the normalizers: Comparing and extending network normalization schemes. arXiv preprint arXiv:1611.04520, 2016.
Ren et al. [2015] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 2015.
Salimans and Kingma [2016] T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901-909, 2016.
Shrivastava et al. [2016] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In CVPR, pages 761-769, 2016.
Szegedy et al. [2014] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
Szegedy et al. [2017] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, 2017.
Ulyanov et al. [2016] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. CoRR, abs/1607.08022, 2016. URL http://arxiv.org/abs/1607.08022.
Wu and He [2018] Y. Wu and K. He. Group normalization. In CVPR, 2018.