1 Introduction
Normalization techniques are effective components in deep learning, advancing many research fields such as natural language processing, computer vision, and machine learning. In recent years, many normalization methods such as Batch Normalization (BN)
(Ioffe & Szegedy, 2015), Instance Normalization (IN) (Ulyanov et al., 2016), and Layer Normalization (LN) (Ba et al., 2016) have been developed. Despite their great successes, existing practices often employ the same normalizer in all normalization layers of an entire network, rendering performance suboptimal. Also, different normalizers are used to solve different tasks, making model design cumbersome. To address these issues, we propose Switchable Normalization (SN)
, which combines three types of statistics estimated channelwise, layerwise, and minibatchwise by using IN, LN, and BN respectively. SN switches among them by learning their importance weights. By design,
SN is adaptable to various deep networks and tasks. For example, the ratios of IN, LN, and BN in SN are compared across multiple tasks in Fig.1 (a). We see that using one normalization method uniformly is not optimal for these tasks. For instance, image classification and object detection prefer a combination of the three normalizers. In particular, SN chooses BN more than IN and LN in image classification and in the backbone network of object detection, while LN has larger weights in the box and mask heads. For artistic image style transfer (Johnson et al., 2016), SN selects IN. For neural architecture search, SN is applied to LSTM, where LN is preferable to group normalization (GN) (Wu & He, 2018), which divides channels into groups.

The selectivity of normalizers makes SN robust to minibatch size. As shown in Fig.1 (b), when training ResNet50 (He et al., 2016) on ImageNet (Deng et al., 2009) with different batch sizes, SN is closer to the "ideal case" than BN and GN. For example^1, ResNet50 trained with SN is able to achieve 76.9% top-1 accuracy, surpassing BN and GN by 0.5% and 1.0% respectively. In general, SN obtains better or comparable results than both BN and GN in all batch settings.

^1 In this work, minibatch size refers to the number of samples per GPU, and batch size is 'GPUs' times 'samples per GPU'. A batch setting is denoted as a 2-tuple, (GPUs, samples per GPU).
Overall, this work has three key contributions. (1) We introduce Switchable Normalization (SN), which is applicable in both CNNs and RNNs/LSTMs, and improves on other normalization techniques on many challenging benchmarks and tasks, including image recognition in ImageNet (Russakovsky et al., 2015), object detection in COCO (Lin et al., 2014), scene parsing in Cityscapes (Cordts et al., 2016) and ADE20K (Zhou et al., 2017), artistic image stylization (Johnson et al., 2016), neural architecture search (Pham et al., 2018), and video recognition in Kinetics (Kay et al., 2017). (2) We present analyses of SN in which multiple normalizers can be compared and understood with a geometric interpretation. (3) By enabling each normalization layer in a deep network to have its own operation, SN eases the usage of normalizers, pushes the frontier of normalization in deep learning, and opens up new research directions. We believe that all existing models could be reexamined with this new perspective. We will make the code of SN available and recommend it as an alternative to existing handcrafted approaches.
In the following sections, we first present SN in Sec.2 and then discuss its relationships with previous work in Sec.3. SN is evaluated extensively in Sec.4.
2 Switchable Normalization (SN)
We describe a general formulation of a normalization layer and then present SN.
A General Form. We take CNN as an illustrative example. Let $h$ be the input data of an arbitrary normalization layer, represented by a 4D tensor of shape $(N, C, H, W)$, indicating the number of samples, number of channels, height, and width of a channel respectively, as shown in Fig.2. Let $h_{ncij}$ and $\hat{h}_{ncij}$ be a pixel before and after normalization, where $n \in [1,N]$, $c \in [1,C]$, $i \in [1,H]$, and $j \in [1,W]$. Let $\mu$ and $\sigma$ be a mean and a standard deviation. We have
$$\hat{h}_{ncij} = \gamma\,\frac{h_{ncij} - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta, \qquad (1)$$
where $\gamma$ and $\beta$ are a scale and a shift parameter respectively, and $\epsilon$ is a small constant that preserves numerical stability. Eqn.(1) shows that each pixel is normalized by using $\mu$ and $\sigma$, and then rescaled and reshifted by $\gamma$ and $\beta$.
IN, LN, and BN share the formulation of Eqn.(1), but they use different sets of pixels to estimate $\mu$ and $\sigma$. In other words, the numbers of their estimated statistics are different. In general, we have
$$\mu_k = \frac{1}{|I_k|}\sum_{(n,c,i,j)\in I_k} h_{ncij}, \qquad \sigma_k^2 = \frac{1}{|I_k|}\sum_{(n,c,i,j)\in I_k} \left(h_{ncij} - \mu_k\right)^2, \qquad (2)$$
where $k \in \{\mathrm{in}, \mathrm{ln}, \mathrm{bn}\}$ is used to distinguish different methods, $I_k$ is a set of pixels, and $|I_k|$ denotes the number of pixels. Specifically, $I_{\mathrm{in}}$, $I_{\mathrm{ln}}$, and $I_{\mathrm{bn}}$ are the sets of pixels used to compute statistics in the different approaches.
IN was established in the task of artistic image style transfer (Johnson et al., 2016; Huang & Belongie, 2017). In IN, we have $I_{\mathrm{in}} = \{(i,j)\,|\, i\in[1,H],\, j\in[1,W]\}$ and $|I_{\mathrm{in}}| = HW$, meaning that IN has $2NC$ elements of statistics, where each mean and variance value is computed along $(H,W)$ for each channel of each sample.
LN (Ba et al., 2016)
was proposed to ease the optimization of recurrent neural networks (RNNs). In LN, we have $I_{\mathrm{ln}} = \{(c,i,j)\,|\, c\in[1,C],\, i\in[1,H],\, j\in[1,W]\}$ and $|I_{\mathrm{ln}}| = CHW$, implying that LN has $2N$ statistical values, where a mean and a variance are computed over $(C,H,W)$ for each one of the $N$ samples.

BN (Ioffe & Szegedy, 2015) was first demonstrated in the task of image classification (He et al., 2016; Krizhevsky et al., 2012) by normalizing the hidden feature maps of CNNs. In BN, we have $I_{\mathrm{bn}} = \{(n,i,j)\,|\, n\in[1,N],\, i\in[1,H],\, j\in[1,W]\}$ and $|I_{\mathrm{bn}}| = NHW$, in the sense that BN treats each channel independently like IN, but normalizes not only across $(H,W)$ but also across the samples in a minibatch, leading to $2C$ elements of statistics.
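To make the three pixel sets concrete, the following minimal NumPy sketch (our own illustration, not the paper's released code) estimates the IN, LN, and BN statistics of Eqn.(2) on a random 4D feature tensor; the resulting shapes match the $2NC$, $2N$, and $2C$ counts above.

```python
import numpy as np

# Illustrative sketch: the statistics of Eqn.(2) on a tensor of shape (N, C, H, W).
rng = np.random.default_rng(0)
N, C, H, W = 4, 3, 5, 5
h = rng.standard_normal((N, C, H, W))

# IN: one (mu, sigma^2) per channel of each sample -> 2NC statistics.
mu_in, var_in = h.mean(axis=(2, 3)), h.var(axis=(2, 3))          # shape (N, C)

# LN: one (mu, sigma^2) per sample -> 2N statistics.
mu_ln, var_ln = h.mean(axis=(1, 2, 3)), h.var(axis=(1, 2, 3))    # shape (N,)

# BN: one (mu, sigma^2) per channel across the minibatch -> 2C statistics.
mu_bn, var_bn = h.mean(axis=(0, 2, 3)), h.var(axis=(0, 2, 3))    # shape (C,)

print(mu_in.shape, mu_ln.shape, mu_bn.shape)  # (4, 3) (4,) (3,)
```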
2.1 Formulation of SN
SN has an intuitive expression
$$\hat{h}_{ncij} = \gamma\,\frac{h_{ncij} - \sum_{k\in\Omega} w_k\,\mu_k}{\sqrt{\sum_{k\in\Omega} w'_k\,\sigma_k^2 + \epsilon}} + \beta, \qquad (3)$$
where $\Omega = \{\mathrm{in}, \mathrm{ln}, \mathrm{bn}\}$ is a set of statistics estimated in different ways. In this work, we define $\Omega$ the same as above, where $\mu_k$ and $\sigma_k^2$ can be calculated by following Eqn.(2). However, this strategy leads to large redundant computations. In fact, the three kinds of statistics of SN depend on each other; we can therefore reduce redundancy by reusing computations,
$$\mu_{\mathrm{ln}} = \frac{1}{C}\sum_{c=1}^{C}\mu_{\mathrm{in}}, \qquad \sigma^2_{\mathrm{ln}} = \frac{1}{C}\sum_{c=1}^{C}\left(\sigma^2_{\mathrm{in}} + \mu^2_{\mathrm{in}}\right) - \mu^2_{\mathrm{ln}},$$
$$\mu_{\mathrm{bn}} = \frac{1}{N}\sum_{n=1}^{N}\mu_{\mathrm{in}}, \qquad \sigma^2_{\mathrm{bn}} = \frac{1}{N}\sum_{n=1}^{N}\left(\sigma^2_{\mathrm{in}} + \mu^2_{\mathrm{in}}\right) - \mu^2_{\mathrm{bn}}, \qquad (4)$$
showing that the means and variances of LN and BN can be computed based on those of IN. Using Eqn.(4), the computational complexity of SN is $O(NCHW)$, which is comparable to previous work.
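Eqn.(4) can be checked numerically. The sketch below (our own variable names) derives the LN and BN statistics from the IN statistics and verifies that they match direct estimation over the full normalization regions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, H, W = 4, 3, 5, 5
h = rng.standard_normal((N, C, H, W))

# IN statistics, computed once.
mu_in, var_in = h.mean(axis=(2, 3)), h.var(axis=(2, 3))      # each (N, C)

# LN statistics reused from IN (average over the C channels).
mu_ln = mu_in.mean(axis=1)                                   # (N,)
var_ln = (var_in + mu_in**2).mean(axis=1) - mu_ln**2

# BN statistics reused from IN (average over the N samples).
mu_bn = mu_in.mean(axis=0)                                   # (C,)
var_bn = (var_in + mu_in**2).mean(axis=0) - mu_bn**2

# The reused statistics match direct estimation.
assert np.allclose(mu_ln, h.mean(axis=(1, 2, 3)))
assert np.allclose(var_ln, h.var(axis=(1, 2, 3)))
assert np.allclose(mu_bn, h.mean(axis=(0, 2, 3)))
assert np.allclose(var_bn, h.var(axis=(0, 2, 3)))
```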
Furthermore, $w_k$ and $w'_k$ in Eqn.(3) are importance ratios used to compute weighted averages of the means and variances respectively. Each $w_k$ or $w'_k$ is a scalar variable shared across all channels. There are $3 \times 2 = 6$ importance weights in SN. We have $\sum_{k\in\Omega} w_k = 1$, $\sum_{k\in\Omega} w'_k = 1$, and $\forall\, w_k, w'_k \in [0,1]$, and define
$$w_k = \frac{e^{\lambda_k}}{\sum_{z\in\{\mathrm{in},\mathrm{ln},\mathrm{bn}\}} e^{\lambda_z}}, \qquad k \in \{\mathrm{in},\mathrm{ln},\mathrm{bn}\}, \qquad (5)$$
Here each $w_k$ is computed by using a softmax function with $\lambda_{\mathrm{in}}$, $\lambda_{\mathrm{ln}}$, and $\lambda_{\mathrm{bn}}$ as the control parameters, which can be learned by back-propagation (BP). The $w'_k$ are defined similarly by using another three control parameters $\lambda'_{\mathrm{in}}$, $\lambda'_{\mathrm{ln}}$, and $\lambda'_{\mathrm{bn}}$.
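Putting Eqn.(3)-(5) together, a minimal sketch of the SN forward pass might look as follows; `sn_forward` and its argument names are our hypothetical choices, and the control parameters are initialized to zero so each normalizer receives weight 1/3.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax, Eqn.(5).
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sn_forward(h, lam_mean, lam_var, gamma=1.0, beta=0.0, eps=1e-5):
    """Hypothetical sketch of the SN forward pass of Eqn.(3) for h of shape (N, C, H, W)."""
    stats = {
        "in": (h.mean((2, 3), keepdims=True), h.var((2, 3), keepdims=True)),
        "ln": (h.mean((1, 2, 3), keepdims=True), h.var((1, 2, 3), keepdims=True)),
        "bn": (h.mean((0, 2, 3), keepdims=True), h.var((0, 2, 3), keepdims=True)),
    }
    w = softmax(lam_mean)   # importance weights for the means
    wp = softmax(lam_var)   # importance weights for the variances
    mu = sum(wk * stats[k][0] for wk, k in zip(w, stats))
    var = sum(wk * stats[k][1] for wk, k in zip(wp, stats))
    return gamma * (h - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(2)
h = rng.standard_normal((4, 3, 5, 5))
out = sn_forward(h, lam_mean=np.zeros(3), lam_var=np.zeros(3))
print(out.shape)  # (4, 3, 5, 5)
```

In practice the control parameters would be trainable tensors updated jointly with the network weights, as described in the Training paragraph below.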
Training. Let $\Theta$ be a set of network parameters (e.g. filters) and $\Phi$ be a set of control parameters that control the network architecture. In SN, we have $\Phi = \{\lambda_{\mathrm{in}}, \lambda_{\mathrm{ln}}, \lambda_{\mathrm{bn}}, \lambda'_{\mathrm{in}}, \lambda'_{\mathrm{ln}}, \lambda'_{\mathrm{bn}}\}$. Training a deep network with SN is to minimize a loss function $\mathcal{L}(\Theta, \Phi)$, where $\Theta$ and $\Phi$ can be optimized jointly by back-propagation (BP). This training procedure is different from previous meta-learning algorithms such as network architecture search (Colson et al., 2007; Liu et al., 2018; Pham et al., 2018). In previous work, $\Phi$ represents a set of network modules with different learning capacities, and $\Theta$ and $\Phi$ were optimized iteratively in two BP stages by using two non-overlapping training sets; for example, previous work divided an entire training set into a training set and a validation set. If $\Theta$ and $\Phi$ were instead optimized on the same training data, $\Phi$ would choose the module with the largest complexity and overfit these data. In contrast, SN essentially prevents overfitting by choosing normalizers to improve both learning and generalization ability, as discussed below.

Analyses of SN. To understand SN, we theoretically compare SN with BN, IN, and LN by representing them using weight normalization (WN) (Salimans & Kingma, 2016), which is independent of the mean and variance. WN is computed as $\hat{h} = \gamma\,\frac{\mathbf{w}^{\mathsf{T}}\mathbf{x}}{\lVert\mathbf{w}\rVert_2}$, where $\mathbf{w}$ and $\mathbf{x}$ represent a filter and an image patch. WN normalizes the norm of each filter to 1 and rescales it to $\gamma$.
Remark 1.
Remark 1 simplifies SN in Eqn.(3), enabling us to compare different normalizers geometrically by formulating them with respect to WN. In Fig.3, IN can be computed similarly to WN with an additional bias, where the norms of all filters are normalized to 1 and then rescaled to $\gamma$. As the two have the same learning dynamic, the lengths of their effective filters would be identical (see IN in Fig.3). Moreover, BN can be rewritten as WN with regularization over $\gamma$, making its effective filter shorter than that of IN. Compared to IN and LN, Luo et al. (2019) show that the regularization of BN improves generalization and increases the angle between filters, preventing them from co-adaptation (see BN in Fig.3). Furthermore, LN normalizes each filter among channels, where the filter norm is less constrained than in IN and BN; that is, LN allows a larger $\gamma$ to increase learning ability. Finally, SN inherits the benefits of all of them and strikes a balance between learning and generalization ability. For example, when the batch size is small, the random noise from the batch statistics of BN would be too strong; SN is able to maintain performance by decreasing $w_{\mathrm{bn}}$ and increasing $w_{\mathrm{ln}}$, such that the regularization from BN is reduced and the learning ability is enhanced by LN. This phenomenon is supported by our experiments. More results are provided in Appendix B.
Variants of SN. SN has many extensions. For instance, a pretrained network with SN can be finetuned by applying the argmax function to its control parameters, so that each normalization layer selects only one normalizer, leading to sparse SN. For example, SN with sparsity achieves a top-1 accuracy of 77.0% on ImageNet with ResNet50, which is comparable to the 76.9% of SN without sparsity. Moreover, when the channels are divided into groups, each group can select its own normalizer, increasing the representation power of SN. Our preliminary results suggest that group SN performs better than SN in some cases; for instance, group SN with only two groups boosts the top-1 accuracy of ResNet50 to 77.2% on ImageNet. These two variants are left as future work due to the length of the paper. This work focuses on SN where the importance weights are tied across channels.
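The selection step of the sparse variant can be sketched as follows (our own illustration; the actual procedure also involves finetuning): replacing the softmax by an argmax turns the soft weights into a one-hot choice of a single normalizer.

```python
import numpy as np

# Hypothetical illustration: hard selection of one normalizer from the
# learned control parameters (order: in, ln, bn).
lam = np.array([0.2, 1.3, -0.5])
w_sparse = np.eye(3)[np.argmax(lam)]  # one-hot: selects LN here
print(w_sparse)  # [0. 1. 0.]
```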
Inference. When applying SN at test time, the statistics of IN and LN are computed independently for each sample, while BN uses batch average after training, rather than a moving average computed at each training iteration. Batch average is performed in two steps. First, we freeze the parameters of the network and all the SN layers, and feed the network with a certain number of minibatches randomly chosen from the training set. Second, we average the means and variances produced by all these minibatches in each SN layer. The averaged statistics are used by BN in SN.
We find that batch average makes training converge faster than moving average, and it can be computed with a small number of samples. For example, the top-1 accuracies of ResNet50 on ImageNet using batch average with 50k and with all training samples are 76.90% and 76.92% respectively. Both converge much faster and are slightly better than the 76.89% obtained with moving average. Appendix A shows more results.
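The second averaging step can be sketched as follows (a schematic with made-up per-minibatch statistics, not the actual pipeline): collect the BN means and variances from a number of frozen forward passes, then average them per layer.

```python
import numpy as np

def batch_average(stats_per_minibatch):
    """Average the BN means/variances collected from frozen forward passes."""
    mus, variances = zip(*stats_per_minibatch)
    return np.mean(mus, axis=0), np.mean(variances, axis=0)

rng = np.random.default_rng(0)
# Pretend each of 10 frozen minibatches yielded per-channel BN statistics
# for a layer with 3 channels.
collected = [(rng.normal(size=3), rng.uniform(0.5, 1.5, size=3))
             for _ in range(10)]
mu_avg, var_avg = batch_average(collected)
print(mu_avg.shape, var_avg.shape)  # (3,) (3,)
```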
Implementation. SN can be easily implemented in existing software such as PyTorch and TensorFlow. The backward computation of SN can be obtained by automatic differentiation (AD) in these frameworks. Without AD, one needs to implement the back-propagation (BP) of SN, where the errors are propagated through $\mu_k$ and $\sigma_k^2$. We provide the derivations of BP in Appendix H.
3 Relationships to Previous Work
In Table 1, we compare SN to BN, IN, LN, and GN, as well as three variants of BN including Batch Renormalization (BRN), Batch Kalman Normalization (BKN), and WN. In general, we see that SN possesses comparable numbers of parameters and computations, as well as rich statistics. Details are presented below.
Table 1: Comparisons of normalization methods in terms of their parameters, hyperparameters, and statistical estimation. Rows: BN (Ioffe & Szegedy, 2015), IN (Ulyanov et al., 2016), LN (Ba et al., 2016), GN (Wu & He, 2018), BRN (Ioffe, 2017), BKN (Wang et al., 2018), WN (Salimans & Kingma, 2016), and SN. In the notation of the table, $\boldsymbol{\mu}$, $\boldsymbol{\sigma}$, and $\Sigma$ are a vector of means, a vector of standard deviations, and a covariance matrix, and a bar denotes the moving average. Moreover, $\alpha$ is the momentum of the moving average, $g$ in GN is the number of groups, $\epsilon$ is a small value for numerical stability, and $r$ and $d$ are used in BRN. In SN, $\Omega$ indicates a set of different kinds of statistics, $\Omega = \{\mathrm{in}, \mathrm{ln}, \mathrm{bn}\}$, and $w_k$ is the importance weight of each kind.

First, SN has a similar number of parameters compared to previous methods, as shown in the first portion of Table 1. Most of the approaches learn a scale parameter $\gamma$ and a bias $\beta$ for each of the $C$ channels, resulting in $2C$ parameters. SN learns 6 importance weights as additional parameters. BKN has the maximum number of parameters, as it learns a transformation matrix for the means and variances. WN has $C$ scale parameters without the biases.
Furthermore, many methods have $\epsilon$ and the momentum $\alpha$ as hyperparameters, whose values are not sensitive because they are often fixed across different tasks. In contrast, GN and BRN have to search the number of groups $g$ or the renormalization parameters $r_{\max}$ and $d_{\max}$, which may take different values in different networks. Moreover, WN has no hyperparameters or statistics, since it performs normalization in the space of network parameters rather than in feature space. Salimans & Kingma (2016) and Luo et al. (2019) showed that WN is a special case of BN.
Second, although SN has richer statistics, the computational complexity of estimating them is comparable to previous methods, as shown in the second portion of Table 1. As introduced in Sec.2, IN, LN, and BN estimate the means and variances along the axes $(H,W)$, $(C,H,W)$, and $(N,H,W)$ respectively, leading to $2NC$, $2N$, and $2C$ statistics. Therefore, SN has $2(NC+N+C)$ statistics by combining them. Although BKN has the largest number of statistics, it also has the highest computational cost, because it estimates the covariance matrix rather than just the variance vector. Approximating the covariance matrix in a minibatch is also nontrivial, as discussed in (Desjardins et al., 2015; Luo, 2017b, a). BN, BRN, and BKN additionally compute moving averages.
Third, SN is demonstrated in various networks, tasks, and datasets. Its applications are much wider than existing normalizers and it also has rich theoretical value that is worth exploring.
4 Experiments
This section presents the main results of SN in multiple challenging problems and benchmarks, such as ImageNet (Russakovsky et al., 2015), COCO (Lin et al., 2014), Cityscapes (Cordts et al., 2016), ADE20K (Zhou et al., 2017), and Kinetics (Kay et al., 2017), where the effectiveness of SN is demonstrated by comparing with existing normalization techniques.
4.1 Image Classification in ImageNet
SN is first compared with existing normalizers on the ImageNet classification dataset of 1k categories. All the methods adopt ResNet50 as backbone network. The experimental setting and more results are given in Appendix C.
Comparisons. The top-1 accuracy on the 224×224 center crop is reported for all models. SN is compared to BN and GN as shown in Table 2. In the first five columns, we see that the accuracy of BN reduces by 1.1% from (8,16) to (8,8) and declines to 65.3% at (8,2), implying that BN is unsuitable for small minibatches, where the random noise from the statistics is too heavy. GN obtains around 75.9% in all cases, while SN outperforms BN and GN in almost all cases, demonstrating its robustness to different batch sizes. In the Appendix, Fig.10 plots the training and validation curves, where SN enables faster convergence while maintaining higher or comparable accuracies than those of BN and GN.
The middle two columns of Table 2 average the gradients on a single GPU by using only 16 and 32 samples, such that their batch sizes are the same as (8,2) and (8,4). SN again performs best in these single-GPU settings, while BN outperforms GN. For example, unlike (8,32), which uses 8 GPUs, BN achieves 76.5% in (1,32), its best-performing result, even though the batch size used to compute the gradients is as small as 32. From the above results, we see that BN's performance is more sensitive to the statistics than to the gradients, while SN is robust to both. The last two columns of Table 2 have the same batch size of 8, where (1,8) has a minibatch size of 8, while (8,1) is an extreme case with a single sample in a minibatch. For (1,8), SN performs best. For (8,1), SN consists of IN and LN but no BN, because IN and BN are the same in training when the minibatch size is 1. In this case, both SN and GN still perform reasonably well, while BN fails to converge.
Ablation Study. Fig.1 (a) and Fig.4 plot histograms comparing the importance weights of SN with respect to different tasks and batch sizes. These histograms are computed by averaging the importance weights of all SN layers in a network. They show that SN adapts to various scenarios by changing its importance weights. For example, SN prefers BN when the minibatch is sufficiently large, while it selects LN instead when a small minibatch is used, as shown by the green and red bars of Fig.4. These results are in line with our analyses in Sec.2.1.
Furthermore, we repeat training of ResNet50 several times in ImageNet, to show that when the network, task, batch setting and data are fixed, the importance weights of SN are not sensitive to the change of training protocols such as solver, parameter initialization, and learning rate decay. As a result, we find that all trained models share similar importance weights.
The importance weights in each SN layer are visualized in Appendix C.2. Overall, examining the selectivity of SN layers discloses interesting characteristics and impacts of normalization methods in deep learning, and sheds light on model design in many research fields.
SN v.s. IN and LN. IN and LN are not optimal for image classification, as reported in (Ulyanov et al., 2016) and (Ba et al., 2016). With the regular setting of (8,32), ResNet50 trained with IN and LN achieves 71.6% and 74.7% respectively, which are 5.3% and 2.2% lower than the 76.9% of SN.
SN v.s. BRN and BKN. BRN has two extra hyperparameters, $r_{\max}$ and $d_{\max}$, which renormalize the means and variances. We choose their values to work best for ResNet50 in the setting of (8,4), following (Ioffe, 2017). The 73.7% of BRN surpasses the 72.7% of BN by 1%, but it is still 2.2% lower than the 75.9% of SN.
BKN (Wang et al., 2018) estimates the statistics in the current layer by combining those computed in the preceding layers, and it estimates the covariance matrix rather than the variance vector. In particular, how to connect the layers requires careful design for every specific network. For ResNet50 with (8,32), BKN achieved 76.8%, which is comparable to the 76.9% of SN. However, for small minibatches, BKN reported 76.1%, which was evaluated in a micro-batch setting where 256 samples are used to compute gradients and 4 samples to estimate the statistics. This setting is easier than (8,4), which uses only 32 samples to compute gradients. Furthermore, it is unclear how to apply BRN and BKN to other tasks such as object detection and segmentation.
Table 2: Top-1 accuracy (%) of ResNet50 on ImageNet under different batch settings (GPUs, samples per GPU). '–' indicates that BN does not converge. The lower rows show accuracy differences between methods.

        (8,32)  (8,16)  (8,8)  (8,4)  (8,2)  (1,16)  (1,32)  (8,1)  (1,8)
BN      76.4    76.3    75.2   72.7   65.3   76.2    76.5    –      75.4
GN      75.9    75.8    76.0   75.8   75.9   75.9    75.8    75.5   75.5
SN      76.9    76.7    76.7   75.9   75.6   76.3    76.6    75.0   75.9
GN-BN   -0.5    -0.5    0.8    3.1    10.6   -0.3    -0.7    –      0.1
SN-BN   0.5     0.4     1.5    3.2    10.3   0.1     0.1     –      0.5
SN-GN   1.0     0.9     0.7    0.1    -0.3   0.4     0.8     -0.5   0.4

Faster RCNN with ResNet50 and FPN on COCO (backbone normalizer / head normalizer):

backbone  head  AP    AP.5  AP.75  AP_l  AP_m  AP_s
BN        –     36.7  58.4  39.6   48.1  39.8  21.1
BN        GN    37.2  58.0  40.4   48.6  40.3  21.6
BN        SN    38.0  59.4  41.5   48.9  41.3  22.7
GN        GN    38.2  58.7  41.3   49.6  41.0  22.4
SN        SN    39.3  60.9  42.8   50.3  42.7  23.5

Mask RCNN with ResNet50 and FPN on COCO (box and mask APs):

backbone  head  AP^box  AP^box.5  AP^box.75  AP^mask  AP^mask.5  AP^mask.75
BN        –     38.6    59.5      41.9       34.2     56.2       36.1
BN        GN    39.5    60.0      43.2       34.4     56.4       36.3
BN        SN    40.0    61.0      43.3       34.8     57.3       36.3
GN        GN    40.2    60.9      43.8       35.7     57.8       38.0
GN        SN    40.4    61.4      44.2       36.0     58.4       38.1
SN        SN    41.0    62.3      45.1       36.5     58.9       38.7
4.2 Object Detection and Instance Segmentation in COCO
Next we evaluate SN on object detection and instance segmentation in COCO (Lin et al., 2014). Unlike image classification, these two tasks benefit from large input images, resulting in a large memory footprint and therefore a small minibatch size, such as 2 samples per GPU (Ren et al., 2015; Lin et al., 2016). As BN is not applicable with small minibatches, previous work (Ren et al., 2015; Lin et al., 2016; He et al., 2017) often freezes BN and turns it into a constant linear transformation layer, which actually performs no normalization. Overall, SN selects different operations in different components of a detection system (see Fig.1), showing clear advantages over both BN and GN. The experimental settings and more results are given in Appendix D.

Table 6 reports results of Faster RCNN using ResNet50 and the Feature Pyramid Network (FPN) (Lin et al., 2016). A baseline BN achieves an AP of 36.7 without using normalization in the detection head. When using SN or GN in the head and BN in the backbone, BN+SN improves on BN+GN by 0.8 AP (from 37.2 to 38.0). We also investigate using SN and GN in both the backbone and head. In this case, we find that GN improves on BN+SN by only a small margin of 0.2 AP (38.2 v.s. 38.0), although the backbone is pretrained and finetuned with GN. When finetuning the SN backbone, SN obtains a significant improvement of 1.1 AP over GN (39.3 v.s. 38.2). Furthermore, the 39.3 AP of SN and 38.2 AP of GN both outperform the 37.8 AP of (Peng et al., 2017), which synchronizes BN layers in the backbone (i.e. BN layers are not frozen).
Table 6 reports results of Mask RCNN (He et al., 2017) with FPN. In the upper part, SN is compared to a head with no normalization and a head with GN, while the backbone is pretrained with BN, which is then frozen in finetuning (i.e. the ImageNet pretrained features are the same). We see that the baseline BN achieves a box AP of 38.6 and a mask AP of 34.2. SN improves GN by 0.5 box AP and 0.4 mask AP, when finetuning the same BN backbone.
More direct comparisons with GN are shown in the lower part of Table 6. We apply SN in the head and finetune the same backbone network pretrained with GN. In this case, SN outperforms GN by 0.2 and 0.3 box and mask APs respectively. Moreover, when finetuning the SN backbone, SN surpasses GN by a large margin of both box and mask AP (41.0 v.s. 40.2 and 36.5 v.s. 35.7). Note that the performance of SN even outperforms 40.9 and 36.4 of the 101layered ResNet (Girshick et al., 2018).
4.3 Semantic Image Parsing in Cityscapes and ADE20K
We investigate SN in semantic image segmentation in ADE20K (Zhou et al., 2017) and Cityscapes (Cordts et al., 2016). The empirical setting can be found in Appendix E.
Table 8 reports mIoU on the ADE20K validation set and the Cityscapes test set, using both single-scale and multi-scale testing. In SN, BN is not synchronized across GPUs. In ADE20K, SN outperforms SyncBN by a large margin in both testing schemes (38.7 v.s. 36.4 and 39.2 v.s. 37.7), and improves on GN by 3.0 and 2.9. In Cityscapes, SN also performs best compared to SyncBN and GN; for example, SN surpasses SyncBN by 1.5 and 2.1 at the two testing scales. We see that GN performs worse than SyncBN on these two benchmarks. Fig.13 in the Appendix compares the importance weights of SN in ResNet50 trained on both ADE20K and Cityscapes, showing that different datasets choose different normalizers even when the models and tasks are the same.
           ADE20K                  Cityscapes
           mIoU (ss)  mIoU (ms)    mIoU (ss)  mIoU (ms)
SyncBN     36.4       37.7         69.7       73.0
GN         35.7       36.3         68.4       73.1
SN         38.7       39.2         71.2       75.1
        batch=8, length=32    batch=4, length=32
        top1      top5        top1      top5
BN      73.2      90.9        72.1      90.0
GN      73.0      90.6        72.8      90.6
SN      73.5      91.3        73.3      91.2
4.4 Video Recognition in Kinetics
We evaluate video recognition on the Kinetics dataset (Kay et al., 2017), which has 400 action categories. We experiment with Inflated 3D (I3D) convolutional networks (Carreira & Zisserman, 2017) and employ the ResNet50 I3D baseline as described in (Wu & He, 2018). The models are pretrained on ImageNet. For all normalizers, we extend the normalization from over $(H, W)$ to over $(T, H, W)$, where $T$ is the temporal axis. We train on the training set and evaluate on the validation set. The top-1 and top-5 classification accuracies are reported by using standard 10-clip testing that averages softmax scores from 10 clips sampled regularly.
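Extending the normalization regions to the temporal axis only changes the reduction axes; a minimal NumPy sketch (our own illustration) for 5D features of shape (N, C, T, H, W):

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, T, H, W = 2, 3, 4, 5, 5
h = rng.standard_normal((N, C, T, H, W))

# Each normalizer now reduces over (T, H, W) in addition to its usual axes.
mu_in = h.mean(axis=(2, 3, 4))      # IN: per sample, per channel -> (N, C)
mu_ln = h.mean(axis=(1, 2, 3, 4))   # LN: per sample -> (N,)
mu_bn = h.mean(axis=(0, 2, 3, 4))   # BN: per channel over the minibatch -> (C,)
print(mu_in.shape, mu_ln.shape, mu_bn.shape)  # (2, 3) (2,) (3,)
```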
Table 8 shows that SN works better than BN and GN at both batch sizes. For example, when the batch size is 4, the top-1 accuracy of SN is better than that of BN and GN by 1.2% and 0.5%. SN with batch size 4 already surpasses BN and GN with batch size 8, and SN with batch size 8 further improves the results.
4.5 On Other Tasks
5 Discussions and Future Work
This work presented Switchable Normalization (SN), which learns different operations in different normalization layers of a deep network. This novel perspective opens up new directions in many research fields that employ deep learning, such as computer vision, machine learning, NLP, robotics, and medical imaging. We have demonstrated SN in multiple computer-vision tasks such as recognition, detection, segmentation, image stylization, and neural architecture search, where SN outperforms previous normalizers without bells and whistles. The implementations of these experiments will be released. Our analyses (Luo et al., 2018) suggest that SN has an appealing ability to balance learning and generalization when training deep networks. Investigating SN also facilitates the understanding of normalization approaches (Shao et al., 2019; Pan et al., 2019; Luo, 2017a, b), such as sparse SN (Shao et al., 2019) and switchable whitening (Pan et al., 2019).
References
 Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
 Carreira & Zisserman (2017) Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. arXiv:1705.07750, 2017.
 Chen et al. (2018) LiangChieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell., 40(4):834–848, 2018.
 Colson et al. (2007) Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of operations research, 2007.

 Cordts et al. (2016) Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
 Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
 Desjardins et al. (2015) Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu. Natural neural networks. NIPS, 2015.
 Girshick et al. (2018) Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Dollár, and Kaiming He. Detectron. https://github.com/facebookresearch/detectron, 2018.
 Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv:1706.02677, 2017.
 He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
 He et al. (2017) Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask rcnn. ICCV, 2017.
 Huang & Belongie (2017) Xun Huang and Serge Belongie. Arbitrary style transfer in realtime with adaptive instance normalization. arXiv:1703.06868, 2017.
 Ioffe (2017) Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batchnormalized models. arXiv:1702.03275, 2017.
 Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
 Johnson et al. (2016) Justin Johnson, Alexandre Alahi, and Li FeiFei. Perceptual losses for realtime style transfer and superresolution. arXiv:1603.08155, 2016.
 Kay et al. (2017) Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv:1705.06950, 2017.
 Krizhevsky (2009) Alex. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
 Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
 Lin et al. (2016) Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. arXiv:1612.03144, 2016.
 Liu et al. (2018) Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv:1806.09055, 2018.
 Luo (2017a) Ping Luo. Eigennet: Towards fast and structural learning of deep neural networks. IJCAI, 2017a.
 Luo (2017b) Ping Luo. Learning deep architectures via generalized whitened neural networks. ICML, 2017b.
 Luo et al. (2018) Ping Luo, Zhanglin Peng, Jiamin Ren, and Ruimao Zhang. Do normalization layers in a deep convnet really need to be distinct? arXiv:1811.07727, 2018.
 Luo et al. (2019) Ping Luo, Xinjiang Wang, Wenqi Shao, and Zhanglin Peng. Towards understanding regularization in batch normalization. ICLR, 2019.
 Pan et al. (2019) Xingang Pan, Xiaohang Zhan, Jianping Shi, Xiaoou Tang, and Ping Luo. Switchable whitening for deep representation learning. In arXiv:1904.09739, 2019.
 Peng et al. (2017) Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. Megdet: A large minibatch object detector. arXiv:1711.07240, 2017.
 Perez et al. (2017) Ethan Perez, Harm de Vries, and Florian Strub. Learning visual reasoning without strong priors. In arXiv:1707.03017, 2017.
 Pham et al. (2018) Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv:1802.03268, 2018.
 Ren et al. (2016) Mengye Ren, Renjie Liao, Raquel Urtasun, Fabian H. Sinz, and Richard S. Zemel. Normalizing the normalizers: Comparing and extending network normalization schemes. In ICLR, 2016.
 Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster rcnn: Towards realtime object detection with region proposal networks. arXiv:1506.01497, 2015.
 Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
 Salimans & Kingma (2016) Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv:1602.07868, 2016.
 Shao et al. (2019) Wenqi Shao, Tianjian Meng, Jingyu Li, Ruimao Zhang, Yudian Li, Xiaogang Wang, and Ping Luo. Ssn: Learning sparse switchable normalization via sparsestmax. In CVPR, 2019.
 Simonyan & Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for largescale image recognition. arXiv:1409.1556, 2014.
 Ulyanov et al. (2016) Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, 2016.
 Wang et al. (2018) Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, and Liang Lin. Batch kalman normalization: Towards training deep neural networks with microbatches. NIPS, 2018.

 Williams (1992) Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
 Wu & He (2018) Yuxin Wu and Kaiming He. Group normalization. arXiv:1803.08494, 2018.
 Yang et al. (2017) Jianwei Yang, Jiasen Lu, Dhruv Batra, and Devi Parikh. A faster PyTorch implementation of Faster R-CNN. https://github.com/jwyang/faster-rcnn.pytorch, 2017.
 Zhao et al. (2017) Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In CVPR, 2017.
 Zhou et al. (2017) Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ADE20K dataset. In CVPR, 2017.
Appendices
A Inference of SN
In SN, BN employs batch average rather than moving average. We compare them in Fig.9, where SN is evaluated with both moving average and batch average to estimate the statistics used in test. They are used to train ResNet50 on ImageNet. The two settings of SN produce similar results of 76.9% when converged, which is better than the 76.4% of BN. SN with batch average converges faster and more stably than BN and than SN with moving average. We find that for all batch settings, SN with batch average provides better results than moving average. We also found that conventional BN can be improved by replacing moving average with batch average.
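The two test-time estimators can be contrasted on synthetic 1-D data; this is a minimal numpy sketch with helper names of our own choosing, not the paper's implementation. Moving average updates running statistics during training, while batch average re-estimates them after training by averaging the statistics of each minibatch with equal weight.

```python
import numpy as np

def moving_average_stats(batches, momentum=0.9):
    """Running estimates updated sequentially (BN's usual test-time scheme)."""
    mu, var = 0.0, 1.0
    for x in batches:
        mu = momentum * mu + (1 - momentum) * x.mean()
        var = momentum * var + (1 - momentum) * x.var()
    return mu, var

def batch_average_stats(batches):
    """Post-training estimate: freeze the network, then average the
    statistics of every minibatch with equal weight."""
    mus = [x.mean() for x in batches]
    vs = [x.var() for x in batches]
    return float(np.mean(mus)), float(np.mean(vs))

rng = np.random.default_rng(0)
batches = [rng.normal(2.0, 3.0, size=256) for _ in range(100)]
print(moving_average_stats(batches))  # weighted toward the latest batches
print(batch_average_stats(batches))   # equal weight for every batch
```

The batch-average estimate weights all batches equally, whereas the moving average is an exponentially weighted estimate dominated by the last batches of training.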
B Proof of Remark 1
Remark 1. BN can be expressed as weight normalization (WN) with an adaptive regularization on $\gamma$, that is, $\overline{\ell}_{BN}=\ell_{WN}+\zeta(\gamma)$.
Proof.
Eqn.(1) shows that IN, LN, and BN can be generally computed as $\hat{h}=\gamma\frac{h-\mu}{\sigma}+\beta$, where $\mu$ and $\sigma$ are the mean and standard deviation estimated by the corresponding normalizer. When $h$ is normalized to zero mean and unit variance, we have $\mu=0$ and $\sigma=1$ according to their definitions.
For BN, we follow the derivations in (Luo et al., 2019), where the batch statistics $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ are treated as random variables. BN can be reformulated as population normalization (PN) and adaptive gamma decay. Let $\overline{\ell}_{BN}$ be the expected loss function of BN, obtained by integrating over the random variables $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$. We have $\overline{\ell}_{BN}=\ell_{PN}+\zeta(\gamma)$, where $\ell_{PN}$ represents PN with $\hat{h}=\gamma\frac{h-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}+\beta$; here $\mu_{\mathcal{P}}$ and $\sigma_{\mathcal{P}}$ are the population mean and population standard deviation, and $\zeta$ is a data-dependent coefficient. Therefore, $\zeta(\gamma)$ represents adaptive gamma regularization whose strength depends on the training data. With normalized input, we have $\mu_{\mathcal{P}}=0$ and $\sigma_{\mathcal{P}}=1$. Thus PN can be rewritten in the WN form $\hat{h}=\gamma h+\beta$. Let WN be defined as $\hat{h}=\gamma h+\beta$. Then $\gamma$ in WN and $\gamma$ in PN have the same learning dynamic. However, the adaptive gamma regularization imposes the constraint $\zeta(\gamma)$ on BN, since WN does not have regularization on $\gamma$. Compared to WN, we express BN as $\overline{\ell}_{BN}=\ell_{WN}+\zeta(\gamma)$. ∎
C ImageNet
C.1 Experimental Setting
All models in ImageNet are trained on 1.2M images and evaluated on 50K validation images. They are trained by using SGD with different settings of batch sizes, which are denoted as a 2-tuple, (number of GPUs, number of samples per GPU). For each setting, the gradients are aggregated over all GPUs, and the means and variances of the normalization methods are computed in each GPU. The network parameters are initialized by following (He et al., 2016). For all normalization methods, all $\gamma$'s are initialized as 1 and all $\beta$'s as 0. The control parameters of SN (for the means and the variances) are initialized as 1. We use a weight decay of $10^{-4}$ for all parameters including $\gamma$ and $\beta$. All models are trained for 100 epochs with an initial learning rate of 0.1, which is decayed by a factor of 10 after 30, 60, and 90 epochs. For different batch sizes, the initial learning rate is linearly scaled according to (Goyal et al., 2017). During training, we employ the same data augmentation as (He et al., 2016). The top-1 classification accuracy on the 224×224 center crop is reported.
C.2 More Results
Fig.10 (a) plots the validation curves of SN. Fig.10 (b) and (c) compare the training and validation curves of SN, BN, and GN under two different batch settings. From all these curves, we see that SN enables faster convergence while maintaining higher or comparable accuracies compared to BN and GN.
Ablation Study of Importance Weights. The selected operations of each SN layer are shown in Fig.11. We make several observations. First, for the same batch size, the importance weights of the means and of the variances can differ notably, especially when comparing ‘res1,4,5’ of (a,b) and ‘res2,4,5’ of (c,d). For example, the variance weights of BN (green) in ‘res5’ in (b,d) are mostly reduced compared to the mean weights of BN in (a,c). As discussed in (Ioffe & Szegedy, 2015; Salimans & Kingma, 2016), this is because the variance estimated in a minibatch is noisier than the mean, making training unstable. SN is able to restrain the noisy statistics and stabilize training.
Second, the SN layers in different places of a network may select distinct operations. When comparing adjacent SN layers, e.g. those following the convolutional layers and the shortcut of a residual block, we see that they may choose different importance weights, as in ‘res2,3’. This selectivity of operations across the normalization layers of a deep network has not been observed in previous work.
Third, deeper layers prefer LN and IN more than BN, as illustrated in ‘res5’, which tells us that putting BN in an appropriate place is crucial in the design of network architecture. Although the stochastic uncertainty in BN (i.e. the minibatch statistics) acts as a regularizer that might benefit generalization, using BN uniformly in all normalization layers may impede performance.
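SN obtains each layer's importance weights by a softmax over learned control parameters, so the per-layer ratios discussed above can be read out directly from the logits. A minimal sketch (variable names and logit values are ours, for illustration):

```python
import numpy as np

def importance_weights(lambdas):
    """Softmax over the control parameters (lambda_in, lambda_ln, lambda_bn),
    yielding the ratios of IN, LN, and BN selected by one SN layer."""
    z = np.exp(lambdas - np.max(lambdas))  # subtract max for numerical stability
    return z / z.sum()

# hypothetical logits of one layer where BN clearly dominates
w = importance_weights(np.array([0.1, 0.3, 1.6]))
print(dict(zip(["IN", "LN", "BN"], np.round(w, 3))))
```

Because the weights are softmax-normalized, they always sum to one, which is what makes the stacked-ratio plots of Fig.11 well defined.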
D COCO Dataset
SN is easily plugged into different detection frameworks implemented in different software frameworks. We implement it on existing detection codebases in PyTorch and Caffe2-Detectron (Girshick et al., 2018) respectively. We conduct 3 settings, including setting1: Faster RCNN (Ren et al., 2015) on PyTorch; setting2: Faster RCNN+FPN (Lin et al., 2016) on Caffe2; and setting3: Mask RCNN (He et al., 2017)+FPN on Caffe2. For all these settings, we choose ResNet50 as the backbone network. In each setting, the experimental configurations of all the models are the same, while only the normalization layers are replaced. All models of SN are finetuned from the SN model pretrained on ImageNet.
Experimental Settings. For setting1, we employ a fast implementation (Yang et al., 2017) of Faster RCNN in PyTorch and follow its protocol. Specifically, we train all models on 4 GPUs with 3 images per GPU. Each image is rescaled such that its shorter side is 600 pixels. All models are trained for 80k iterations with a learning rate of 0.01 and then for another 40k iterations with 0.001. For setting2 and setting3, we employ the configurations of Caffe2-Detectron (Girshick et al., 2018). We train all models on 8 GPUs with 2 images per GPU. Each image is rescaled such that its shorter side is 800 pixels. In particular, for setting2, the learning rate (LR) is initialized as 0.02 and is decreased by a factor of 0.1 after 60k and 80k iterations, finally terminating at 90k iterations. This is referred to as the 1× schedule in Detectron. In setting3, the LR schedule is twice as long as the 1× schedule with the LR decay points scaled twofold proportionally, referred to as the 2× schedule. For all settings, we set the weight decay to 0 for both γ and β following (Wu & He, 2018).
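The 1× and 2× schedules described above can be sketched as a step decay over iterations, with the 2× schedule doubling both the decay points and the total length. This is a minimal illustration with a helper name of our own, not Detectron's actual scheduler:

```python
def detection_lr(it, base_lr=0.02, schedule=1):
    """Detectron-style step schedule: decay by 0.1 at 60k and 80k
    iterations for the 1x schedule; the 2x schedule scales the decay
    points twofold (120k, 160k) along with the total length."""
    decay_points = [60000 * schedule, 80000 * schedule]
    lr = base_lr
    for p in decay_points:
        if it >= p:
            lr *= 0.1
    return lr

print(detection_lr(0, schedule=1))      # base LR before any decay
print(detection_lr(70000, schedule=1))  # after the first decay point
print(detection_lr(70000, schedule=2))  # 2x schedule: no decay yet at 70k
```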

Table 3: Faster RCNN on COCO (setting1), comparing normalization methods in the backbone and head.
backbone  head  AP  AP_50  AP_75  AP_l  AP_m  AP_s
finetuned from ImageNet:
BN (frozen)  BN (frozen)  29.6  47.8  31.9  45.5  33.0  11.5
BN  BN  19.3  33.0  20.0  32.3  21.3  7.4
GN  GN  32.7  52.4  35.1  49.1  36.1  14.9
SN  SN  33.0  52.9  35.7  48.7  37.2  15.6
trained from scratch:
BN  BN  20.0  33.5  21.1  32.1  21.9  7.3
GN  GN  28.3  46.3  30.1  41.2  30.0  12.7
SN  SN  29.5  47.8  31.6  44.2  32.6  13.0
All the above models are trained on the COCO 2017 train set by using SGD with a momentum of 0.9 and a weight decay of $10^{-4}$ on the network parameters, and tested on the 2017 val set. We report the standard COCO metrics, including average precision averaged over IoU thresholds 0.5:0.05:0.95 (AP), and at IoU=0.5 (AP_50) and IoU=0.75 (AP_75), for both bounding boxes (AP^bb) and segmentation masks (AP^mask). We also report average precisions for small (AP_s), medium (AP_m), and large (AP_l) objects.
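The AP metrics above threshold detections by bounding-box IoU. A minimal sketch of the overlap computation (not the full COCO evaluation protocol, which also involves matching and precision-recall integration):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# COCO's main AP averages over IoU thresholds 0.5, 0.55, ..., 0.95
thresholds = [0.5 + 0.05 * i for i in range(10)]
iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))
print(iou, sum(iou >= t for t in thresholds))
```

A detection counts as a true positive at a given threshold only when its IoU with a ground-truth box meets that threshold; AP_50 is the loosest criterion and AP_75 a stricter one.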
Results of Setting1. As shown in Table 3, SN is compared with both BN and GN in Faster RCNN. In this setting, the layers up to conv4 of ResNet50 are used as the backbone to extract features, and the layers of conv5 are used as the Region-of-Interest head for classification and regression. As these layers are inherited from the pretrained model, both the backbone and head involve normalization layers. The rows of Table 3 use different normalization methods in the backbone and head. Its upper part shows results of finetuning the ResNet50 models pretrained on ImageNet. The lower part compares training on COCO from scratch without pretraining on ImageNet.
In the upper part of Table 3, the baseline is denoted as BN (frozen), where the BN layers are frozen. We see that freezing BN performs significantly better than finetuning BN (29.6 vs. 19.3). SN and GN enable finetuning the normalization layers, where SN obtains the best-performing AP of 33.0 in this setting. Fig.12 (a) compares their AP curves.
As reported in the lower part of Table 3, SN and GN allow us to train COCO from scratch without pretraining on ImageNet, and they still achieve competitive results. For instance, 29.5 of SN outperforms BN by a large margin of 9.5 AP and GN by 1.2 AP. Their learning curves are compared in Fig.12 (b).
Results of Setting2 and 3. The results of setting2 and setting3 are presented in the paper.
E Semantic Image Parsing
Setting. Similar to object detection, semantic image segmentation also benefits from a large input size, which makes the minibatch size small during training. We use 2 samples per GPU for ADE20K and 1 sample per GPU for Cityscapes. We employ an open-source PyTorch implementation (https://github.com/CSAILVision/semantic-segmentation-pytorch) and only replace the normalization layers in the CNNs, with the other settings fixed. For both datasets, we use DeepLab (Chen et al., 2018) with ResNet50 as the backbone network, where the last two blocks of the original ResNet contain atrous convolutions with different dilation rates. Following (Zhao et al., 2017), we employ the “poly” learning rate policy and use an auxiliary loss during training. Bilinear upsampling of the score maps is adopted in the validation phase.
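The "poly" policy mentioned above decays the learning rate as a polynomial of training progress. A minimal sketch, with the decay power left as a hyperparameter (0.9 is a common choice in this literature, though the exact value used is not stated above):

```python
def poly_lr(base_lr, it, max_it, power=0.9):
    """'poly' learning rate policy: lr = base_lr * (1 - iter/max_iter)^power."""
    return base_lr * (1.0 - it / max_it) ** power

print(poly_lr(0.01, 0, 100000))       # start of training: base LR
print(poly_lr(0.01, 50000, 100000))   # halfway: base_lr * 0.5**power
print(poly_lr(0.01, 100000, 100000))  # end of training: 0
```

Unlike step schedules, the poly policy decays smoothly at every iteration and reaches exactly zero at the final iteration.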
ADE20K. SyncBN and GN adopt models pretrained on ImageNet. SyncBN collects the statistics across GPUs, so its effective “batch size” during training is the number of GPUs times the samples per GPU. To evaluate SN, we use the SN model pretrained on ImageNet. All models are trained with the same image size and the same number of iterations, and we perform multi-scale testing.
Cityscapes. All models are finetuned from their pretrained ResNet50 models with the same batch size. We use random crops during training and train all models for the same number of epochs. Multi-scale testing is adopted.
Ablation Study. Fig.13 compares the importance weights of SN in ResNet50 trained on both ADE20K and Cityscapes. We see that even when the models and tasks are the same, different training data encourage SN to choose different normalizers.
F Artistic Image Stylization
We evaluate SN in the task of artistic image stylization. We adopt a recent advanced approach (Johnson et al., 2016), which jointly minimizes two loss functions. Specifically, one is a feature reconstruction loss that penalizes an output image when its content deviates from a target image, and the other is a style reconstruction loss that penalizes differences in style (e.g. color and texture). Johnson et al. (2016); Huang & Belongie (2017) show that IN works better than BN in this task. We compare SN with IN and BN using VGG16 (Simonyan & Zisserman, 2014) as the backbone network. All models are trained on the COCO dataset (Lin et al., 2014). In training, we resize each image to 256×256 and train all models with the same batch setting. We do not employ weight decay or dropout. The other training protocols are the same as (Johnson et al., 2016). In test, we evaluate the trained models on 512×512 images selected following (Johnson et al., 2016).
G Neural Architecture Search
We investigate SN in LSTM for efficient neural architecture search (ENAS) (Pham et al., 2018), which is designed to search the structures of convolutional cells. In ENAS, a convolutional neural network (CNN) is constructed by stacking multiple convolutional cells. The method consists of two steps, training controllers and training child models. A controller is an LSTM whose parameters are trained by using the REINFORCE (Williams, 1992) algorithm to sample a cell architecture, while a child model is a CNN that stacks many sampled cell architectures and whose parameters are trained by backpropagation with SGD. In (Pham et al., 2018), the LSTM controller is learned to produce an architecture with high reward, which is the classification accuracy on the validation set of CIFAR10 (Krizhevsky, 2009). Higher accuracy indicates the controller produces a better architecture.
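The REINFORCE update for such a controller can be sketched on a toy problem, with a single 2-way choice standing in for an architecture decision and hypothetical reward values standing in for validation accuracy. This is not the actual LSTM controller, only the score-function update it relies on:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(2)             # toy "controller": logits over two ops
rewards = {0: 0.2, 1: 0.9}       # hypothetical validation accuracies
baseline, lr = 0.0, 0.5
rng = np.random.default_rng(0)

for _ in range(200):
    p = softmax(logits)
    a = rng.choice(2, p=p)                # sample an "architecture"
    r = rewards[a]
    baseline = 0.9 * baseline + 0.1 * r   # moving-average baseline
    grad_logp = -p                        # d log p(a) / d logits ...
    grad_logp[a] += 1.0                   # ... for a categorical policy
    logits += lr * (r - baseline) * grad_logp  # REINFORCE ascent step

print(softmax(logits))  # probability mass concentrates on the better op
```

The baseline subtracts an estimate of the average reward, reducing the variance of the gradient estimate without biasing its direction.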
We compare SN with LN and GN by using them in the LSTM controller to improve architecture search. As BN is not applicable in LSTM and IN is equivalent to LN in a fully-connected layer (i.e. both compute the statistics across neurons), SN combines LN and GN in this experiment. Fig.14 (b) shows the validation accuracy on CIFAR10. We see that SN obtains better accuracy than both LN and GN.
H Back-Propagation of SN
For software without auto-differentiation, we provide the backward computations of SN below. Let $\hat{h}_{ncij}$ be the output of the SN layer, represented by a 4D tensor with index $(n,c,i,j)$ denoting the sample, channel, height, and width respectively. Let $\mu=\sum_{k\in\Omega}w_k\mu_k$ and $\sigma^2=\sum_{k\in\Omega}w_k\sigma_k^2$, where $\Omega=\{\mathrm{in},\mathrm{ln},\mathrm{bn}\}$ and $w_k=e^{\lambda_k}/\sum_{z\in\Omega}e^{\lambda_z}$. Note that the importance weights are shared among the means and variances for clarity of notations. Suppose that each of $\mu$ and $\sigma^2$ is reshaped into a vector of $NC$ entries, which is the same as the dimension of IN's statistics. Let $\ell$ be the loss function and $g_{ncij}=\frac{\partial\ell}{\partial\hat{h}_{ncij}}$ be the gradient with respect to the output.
We have
$\frac{\partial\ell}{\partial h_{ncij}}=\frac{\gamma\,g_{ncij}}{\sqrt{\sigma^2+\epsilon}}+\sum_{k\in\Omega}\left(\frac{\partial\ell}{\partial\mu_k}\frac{\partial\mu_k}{\partial h_{ncij}}+\frac{\partial\ell}{\partial\sigma_k^2}\frac{\partial\sigma_k^2}{\partial h_{ncij}}\right),$ (6)
$\frac{\partial\ell}{\partial\mu_k}=-w_k\gamma\sum\frac{g_{ncij}}{\sqrt{\sigma^2+\epsilon}},$ (7)
$\frac{\partial\ell}{\partial\sigma_k^2}=-\frac{w_k\gamma}{2}\sum\frac{g_{ncij}\,(h_{ncij}-\mu)}{(\sigma^2+\epsilon)^{3/2}},$ (8)
where each summation in Eqn.(7) and (8) runs over the entries that share the statistic of normalizer $k$ (over $(i,j)$ for IN, $(c,i,j)$ for LN, and $(n,i,j)$ for BN), and the inner derivatives follow from the definitions of the statistics, e.g. for IN,
$\frac{\partial\mu_{\mathrm{in}}}{\partial h_{ncij}}=\frac{1}{IJ},\qquad\frac{\partial\sigma_{\mathrm{in}}^2}{\partial h_{ncij}}=\frac{2(h_{ncij}-\mu_{\mathrm{in}})}{IJ}.$ (9)
The gradients for $\gamma$ and $\beta$ are
$\frac{\partial\ell}{\partial\gamma}=\sum_{n,c,i,j}g_{ncij}\,\frac{h_{ncij}-\mu}{\sqrt{\sigma^2+\epsilon}},$ (10)
$\frac{\partial\ell}{\partial\beta}=\sum_{n,c,i,j}g_{ncij},$ (11)
and the gradients for $\lambda_{\mathrm{in}}$, $\lambda_{\mathrm{ln}}$, and $\lambda_{\mathrm{bn}}$ follow from the chain rule through the softmax:
$\frac{\partial\ell}{\partial w_k}=-\sum_{n,c,i,j}g_{ncij}\,\gamma\left(\frac{\mu_k}{\sqrt{\sigma^2+\epsilon}}+\frac{(h_{ncij}-\mu)\,\sigma_k^2}{2(\sigma^2+\epsilon)^{3/2}}\right),$ (12)
$\frac{\partial w_z}{\partial\lambda_k}=w_z(\delta_{zk}-w_k),$ (13)
$\frac{\partial\ell}{\partial\lambda_k}=\sum_{z\in\Omega}\frac{\partial\ell}{\partial w_z}\,w_z(\delta_{zk}-w_k),$ (14)
where $\delta_{zk}=1$ if $z=k$ and $0$ otherwise.
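As a numerical sanity check (not part of the paper), the SN forward pass can be implemented in numpy and the analytic gradients for γ and β verified against finite differences. The ε and all variable names are ours:

```python
import numpy as np

def sn_forward(h, lam, gamma, beta, eps=1e-5):
    """Switchable Normalization forward: mix IN/LN/BN statistics with
    softmax importance weights, then scale and shift."""
    w = np.exp(lam - lam.max())
    w /= w.sum()
    mu_in = h.mean(axis=(2, 3), keepdims=True)     # per (n, c)
    mu_ln = h.mean(axis=(1, 2, 3), keepdims=True)  # per n
    mu_bn = h.mean(axis=(0, 2, 3), keepdims=True)  # per c
    v_in = h.var(axis=(2, 3), keepdims=True)
    v_ln = h.var(axis=(1, 2, 3), keepdims=True)
    v_bn = h.var(axis=(0, 2, 3), keepdims=True)
    mu = w[0] * mu_in + w[1] * mu_ln + w[2] * mu_bn
    var = w[0] * v_in + w[1] * v_ln + w[2] * v_bn
    return gamma * (h - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 3, 4, 4))
lam, gamma, beta = np.array([0.2, 0.5, 0.3]), 1.3, 0.4

# loss = sum of outputs; analytic gradients for gamma and beta
out = sn_forward(h, lam, gamma, beta)
g_beta = float(out.size)                 # d(sum)/d(beta): one per entry
g_gamma = ((out - beta) / gamma).sum()   # d(sum)/d(gamma): sum of normalized h

# finite-difference check on gamma
d = 1e-6
fd_gamma = (sn_forward(h, lam, gamma + d, beta).sum() - out.sum()) / d
print(abs(fd_gamma - g_gamma) < 1e-3)  # True
```

The output is linear in γ and β, so the finite-difference estimates agree with the analytic gradients up to floating-point error; the gradients with respect to h and the λ's involve the statistics paths and are best checked the same way.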