Figure 1. (a) The learned importance ratios at different normalization layers (i.e. bottom, middle and top) of ResNet50 are presented. (b) EN outperforms its counterparts on various computer vision tasks (i.e. image classification, noisy-supervised classification and semantic image segmentation) using different network architectures. Zoom in three times for the best view.
Normalization techniques are among the most essential components for improving performance and accelerating the training of convolutional neural networks (CNNs). Recently, a family of normalization methods has been proposed, including batch normalization (BN), instance normalization (IN), layer normalization (LN) and group normalization (GN). As these methods were designed for different tasks, they normalize the feature maps of CNNs along different dimensions.
To combine the advantages of the above methods, switchable normalization (SN) and its variant were proposed to learn a linear combination of normalizers for each convolutional layer in an end-to-end manner. We term this normalization setting static ‘learning-to-normalize’. Despite the successes of these methods, once a CNN is optimized with them, it employs the same combination ratios of the normalization methods for all image samples in a dataset; it is thus incapable of adapting to different instances, rendering suboptimal performance.
As shown in Fig. 1, this work studies a new learning problem, dynamic ‘learning-to-normalize’, by proposing Exemplar Normalization (EN), which is able to learn an arbitrary normalizer for different convolutional layers, image samples, categories, datasets, and tasks in an end-to-end way. Unlike previous conditional batch normalization (cBN), which used a multi-layer perceptron (MLP) to learn data-dependent parameters in a normalization layer and thus easily suffers from over-fitting, the internal architecture of EN is carefully designed to learn data-dependent normalization with merely a few parameters, stabilizing training and improving the generalization capacity of CNNs.
EN has several appealing benefits. (1) It can be treated as an explanation tool for CNNs. The exemplar-based importance ratios in each EN layer provide information for analyzing the properties of different samples, classes, and datasets in various tasks. As shown in Fig. 1(a), when training ResNet50 on ImageNet, images from different categories select different normalizers in the same EN layer, leading to superior performance compared to the ordinary network. (2) EN makes versatile design of the normalization layer possible, as EN is suitable for various benchmarks and tasks. Compared with state-of-the-art counterparts in Fig. 1(b), EN consistently outperforms them on many benchmarks such as ImageNet for image classification, WebVision for noisy label learning, and ADE20K and Cityscapes for semantic segmentation. (3) EN is a plug-and-play module. It can be inserted into various CNN architectures such as ResNet, Inception v2, and ShuffleNet v2, to replace any normalization layer therein and boost their performance.
The contributions of this work are three-fold. (1) We present a novel normalization learning setting named dynamic ‘learning-to-normalize’, by proposing Exemplar Normalization (EN), which learns to select different normalizers in different normalization layers for different image samples. EN is able to normalize image samples in both the training and testing stages. (2) EN provides a flexible way to analyze the selected normalizers in different layers, as well as the relationship among distinct samples and their deep representations. (3) As a new building block, we apply EN to various tasks and network architectures. Extensive experiments show that EN outperforms its counterparts on a wide spectrum of benchmarks and tasks. For example, when replacing BN in the ordinary ResNet50, the improvement produced by EN is larger than that of SN on both ImageNet and the noisy WebVision dataset.
2 Related Work
Many normalization techniques have been developed to normalize feature representations [14, 1, 36, 39, 23] or the weights of filters [12, 32, 27] to accelerate training and boost the generalization ability of CNNs. Among them, Batch Normalization (BN), Layer Normalization (LN) and Instance Normalization (IN) are the most popular methods, computing statistics with respect to the minibatch, layer, and channel, respectively. The follow-up Positional Normalization normalizes the activations at each spatial position independently across the channels. Besides normalizing different dimensions of the feature maps, another branch of work improved the capability of BN to deal with small batch sizes, including Group Normalization (GN), Batch Renormalization (BRN), Batch Kalman Normalization (BKN) and Streaming Normalization (StN).
In recent studies, using a hybrid of multiple normalizers in a single normalization layer has received much attention [29, 28, 24, 30, 25]. For example, Pan et al. introduced IBN-Net to improve the generalization ability of CNNs by manually designing a mixture strategy of IN and BN. Nam et al. adopted the same scheme in style transfer, where they employed a gated function to learn the importance ratios of IN and BN. Luo et al. further proposed Switchable Normalization (SN) [23, 22] and its sparse version to extend this scheme to an arbitrary number of normalizers. More recently, Dynamic Normalization (DN) was introduced to estimate the computational pattern of the statistics for each specific layer. Our work is motivated by this series of studies, but provides a more flexible way to learn normalization for each sample.
Adaptive normalization methods are also related to our work. Conditional Batch Normalization (cBN) was introduced to learn the parameters of BN (i.e. scale and offset) adaptively as a function of the input features. Attentive Normalization (AN) learns sample-based coefficients to combine feature maps. Deecke et al. proposed Mode Normalization (MN) to detect modes of data on-the-fly and normalize them. However, these methods are incapable of learning various normalizers for different convolutional layers and images as EN does.
The proposed EN also has a connection with learning data-dependent or dynamic weights in convolution and pooling. The subnet for computing the importance ratios is similar in form to SE-like [11, 2, 38] attention mechanisms, but they are technically different. First, SE-like models encourage channels to contribute equally to the feature representation, while EN learns to select different normalizers in different layers. Second, SE is plugged into different networks using different schemes, whereas EN can directly replace other normalization layers.
3 Exemplar Normalization (EN)
3.1 Notation and Background
We introduce normalization in terms of a 4D tensor, which is the input data of a normalization layer in a mini-batch. Let $x \in \mathbb{R}^{N \times C \times H \times W}$ be the input 4D tensor, where $N$, $C$, $H$ and $W$ indicate the number of images, the number of channels, the channel height and the width respectively. Here $H$ and $W$ define the spatial size of a single feature map. Let the matrix $x_i \in \mathbb{R}^{C \times HW}$ denote the feature maps of the $i$-th image, where $i \in \{1, \dots, N\}$. Different normalizers normalize $x_i$ by removing its mean and standard deviation along different dimensions, performing the formulation

$\hat{x}_i^k = \gamma \dfrac{x_i - \mu^k}{\sqrt{(\sigma^k)^2 + \epsilon}} + \beta, \quad (1)$

where $\hat{x}_i^k$ is the feature maps after normalization, and $\mu^k$ and $\sigma^k$ are the vectors of mean and standard deviation calculated by the $k$-th normalizer, $k \in \{\mathrm{BN}, \mathrm{IN}, \mathrm{LN}, \mathrm{GN}, \dots\}$. The scale parameter $\gamma$ and bias parameter $\beta$ are adopted to re-scale and re-shift the normalized feature maps. $\epsilon$ is a small constant to prevent dividing by zero, and both the multiplication by $\gamma$ and the addition of $\beta$ are channel-wise operators.
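As a concrete illustration of Eqn. (1), the normalizers in the pool differ only in the axes over which $\mu^k$ and $\sigma^k$ are estimated. The NumPy sketch below is illustrative only (not the paper's implementation) and assumes an $N \times C \times H \times W$ array with a {BN, IN, LN} pool:

```python
import numpy as np

def norm_stats(x, kind):
    """Mean/std of a 4D tensor x with shape (N, C, H, W) for one normalizer.

    Each normalizer differs solely in the axes the statistics
    are estimated over.
    """
    if kind == "bn":        # per channel, across batch and space
        axes = (0, 2, 3)
    elif kind == "in":      # per sample and channel, across space
        axes = (2, 3)
    elif kind == "ln":      # per sample, across channels and space
        axes = (1, 2, 3)
    else:
        raise ValueError(kind)
    mu = x.mean(axis=axes, keepdims=True)
    sigma = x.std(axis=axes, keepdims=True)
    return mu, sigma

def normalize(x, kind, gamma=1.0, beta=0.0, eps=1e-5):
    """Apply Eqn. (1) with the chosen normalizer's statistics."""
    mu, sigma = norm_stats(x, kind)
    return gamma * (x - mu) / np.sqrt(sigma ** 2 + eps) + beta
```

For example, `normalize(x, "in")` yields feature maps with (near) zero mean per sample and channel, while `"bn"` shares its statistics across the whole mini-batch.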
Switchable Normalization (SN). Unlike previous methods that estimate statistics over different dimensions of the input tensor, SN [23, 24] learns a linear combination of the statistics of existing normalizers,

$\hat{x}_i = \gamma \dfrac{x_i - \sum_{k \in \Omega} \lambda_k \mu^k}{\sqrt{\sum_{k \in \Omega} \lambda'_k (\sigma^k)^2 + \epsilon}} + \beta, \quad (2)$

where $\lambda_k$ is a learnable parameter corresponding to the $k$-th normalizer, $\Omega$ is the set of normalizers, and $\sum_{k \in \Omega} \lambda_k = 1$. In practice, each importance ratio is calculated by using the softmax function. The importance ratios $\lambda_k$ and $\lambda'_k$ for the mean and variance can also be different. Although SN outperforms the individual normalizers in various tasks, it solves a static ‘learning-to-normalize’ problem by switching among several normalizers in each layer. Once SN is learned, its importance ratios are fixed for the entire dataset. The flexibility of SN is thus limited, and it suffers from the bias between the training and the test set, leading to sub-optimal results.
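The static scheme of Eqn. (2) can be sketched as follows: one softmax over learnable logits produces importance ratios shared by every sample, and the combined statistics are plugged into a single normalization. This is a minimal NumPy illustration, not the released SN implementation:

```python
import numpy as np

# Axes each normalizer reduces over for an (N, C, H, W) tensor.
AXES = {"bn": (0, 2, 3), "in": (2, 3), "ln": (1, 2, 3)}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def switchable_norm(x, logits_mu, logits_var, gamma=1.0, beta=0.0, eps=1e-5):
    """Sketch of SN (Eqn. (2)): softmax weights over {BN, IN, LN}
    statistics, shared by all samples (fixed once training ends)."""
    w_mu = softmax(np.asarray(logits_mu, dtype=float))
    w_var = softmax(np.asarray(logits_var, dtype=float))
    # Linearly combine the means and variances of the pool.
    mu = sum(w * x.mean(axis=AXES[k], keepdims=True)
             for w, k in zip(w_mu, AXES))
    var = sum(w * x.var(axis=AXES[k], keepdims=True)
              for w, k in zip(w_var, AXES))
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```

With strongly peaked logits the layer degenerates to a single normalizer, which is exactly the switching behaviour the ratios learn.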
In this paper, Exemplar Normalization (EN) is proposed to investigate the dynamic ‘learning-to-normalize’ problem, which learns different data-dependent normalization for different image samples in each layer. EN greatly expands the flexibility of SN, while retaining SN’s advantages of differentiable learning, stability of model training, and applicability to multiple tasks.
3.2 Formulation of EN
Given the input feature maps $x_i$, Exemplar Normalization (EN) is defined by

$\hat{x}_i = \sum_{k \in \Omega} \lambda_i^k \left( \gamma^k \dfrac{x_i - \mu^k}{\sqrt{(\sigma^k)^2 + \epsilon}} + \beta^k \right), \quad (3)$

where $\lambda_i^k$ indicates the importance ratio of the $k$-th normalizer for the $i$-th sample. Similar to SN, we use the softmax function to satisfy the summation constraint, $\sum_{k \in \Omega} \lambda_i^k = 1$. Comparing Eqn. (2) and Eqn. (3), the differences between SN and EN are three-fold. (1) The importance ratios of the mean and standard deviation in SN can be different, but this scheme is avoided in EN to ensure stability of training, because EN already has larger learning capacity than SN by learning different normalizers for different samples. (2) We use the importance ratios to combine the normalized feature maps instead of combining the statistics of the normalizers, reducing the bias introduced in SN when combining the standard deviations. (3) Multiple $\gamma^k$ and $\beta^k$ are adopted to re-scale and re-shift the normalized feature maps in EN.
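The combination in Eqn. (3) can be sketched directly: each sample carries its own row of importance ratios, and the already-normalized feature maps are blended per sample. The interface below (ratio, scale and offset arrays) is a hypothetical NumPy illustration, assuming the {BN, IN, LN} pool:

```python
import numpy as np

AXES = {"bn": (0, 2, 3), "in": (2, 3), "ln": (1, 2, 3)}

def exemplar_norm(x, ratios, gammas, betas, eps=1e-5):
    """Sketch of Eqn. (3): EN combines normalized feature maps with
    per-sample importance ratios. `ratios` has shape (N, K) with rows
    summing to 1 (e.g. produced by a softmax); `gammas`/`betas` hold
    one scale/offset pair per normalizer."""
    out = np.zeros_like(x)
    for k, kind in enumerate(AXES):
        mu = x.mean(axis=AXES[kind], keepdims=True)
        var = x.var(axis=AXES[kind], keepdims=True)
        xhat = gammas[k] * (x - mu) / np.sqrt(var + eps) + betas[k]
        # broadcast each sample's ratio for normalizer k over C, H, W
        out += ratios[:, k].reshape(-1, 1, 1, 1) * xhat
    return out
```

Note the contrast with the SN sketch: the weighting happens after normalization, and the ratio array has a batch dimension, so two samples in the same batch can use entirely different normalizers.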
To calculate the importance ratios depending on the feature map of an individual sample, we define

$\lambda_i = F(x_i, S; \Theta), \quad \lambda_i = [\lambda_i^1, \dots, \lambda_i^K],$

where $K$ is the total number of normalizers in EN and $S$ indicates the collection of statistics of the different normalizers, $S = \{(\mu^k, \sigma^k)\}_{k=1}^{K}$. $F(\cdot)$ is a function (a small neural network) that calculates the instance-based importance ratios according to the input feature maps $x_i$ and the statistics $S$, and $\Theta$ denotes the learnable parameters of $F$. We carefully design a lightweight module to implement the function $F$ in the next subsection.
3.3 An Exemplar Normalization Layer
Fig. 2 shows a diagram of the key operations in an EN layer, including importance ratio calculation and feature map normalization. Given an input tensor $x$, a set of statistics $S$ is estimated, where $(\mu^k, \sigma^k)$ denotes the $k$-th statistics (mean and standard deviation). The EN layer then uses $x$ and $S$ to calculate the importance ratios, as shown in the right branch of Fig. 2 in blue. As shown in the left branch of Fig. 2, the multiple normalized tensors are also calculated.
In Fig. 2, there are three steps to calculate the importance ratios for each sample. (1) The input tensor is firstly down-sampled in the spatial dimension by using average pooling, and the output feature matrix is denoted as $\bar{x}_i$. Then we use every $(\mu^k, \sigma^k)$ to pre-normalize $\bar{x}_i$ by subtracting the means and dividing by the standard deviations; there are $K$ statistics and thus we obtain $K$ pre-normalized features. After that, a 1-D convolutional operator is employed to reduce the channel dimension from $C$ to $C/r$, which is shown in the first blue block in Fig. 2. Here $r$ is a hyper-parameter that indicates the reduction rate. To further reduce the parameters in the above operation, we use group convolution with group number $g$, so that the total number of convolutional parameters stays constant regardless of the value of $r$. The output of this step is denoted as $v_i$.
(2) The second step computes the pairwise correlations of the different normalizers for each sample, which is motivated by high-order feature representations [7, 4]. For the $i$-th sample, we use $v_i$ and its transposition to compute the pairwise correlations as $c_i = v_i v_i^{\top}$. Then $c_i$ is reshaped to a vector to calculate the importance ratios. Intuitively, the pairwise correlations capture the relationships between different normalizers for each sample, allowing the model to integrate more information when calculating the importance ratios. In practice, we also find that this operation effectively stabilizes model training and leads to higher performance.
(3) In the last step, the above vector is firstly fed into a fully-connected (FC) layer followed by a tanh unit, which raises its dimension from $K^2$ by a factor $\delta$, where $\delta$ is a hyper-parameter whose value is usually small. After that, we perform another FC layer to reduce the dimension to $K$. The output vector is regarded as the importance ratios of the $i$-th sample for the $K$ normalizers, where each element corresponds to an individual normalizer. Once we obtain the importance ratios $\lambda_i$, the softmax function is applied to satisfy the summation constraint that the importance ratios of the different normalizers sum to 1.
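The three steps above can be sketched end-to-end. All weight shapes here are assumptions for illustration: `W_conv` stands in for the grouped 1-D convolution (reducing $C$ to $C/r$), `W1` and `W2` are the two FC layers, and pooling and pre-normalization are fused (normalize, then spatially average). This is a NumPy sketch, not the paper's implementation:

```python
import numpy as np

AXES = {"bn": (0, 2, 3), "in": (2, 3), "ln": (1, 2, 3)}

def en_ratios(x, W_conv, W1, W2, eps=1e-5):
    """Sketch of the three-step ratio module in Fig. 2.
    x: (N, C, H, W); W_conv: (C, C//r); W1: (K*K, hidden); W2: (hidden, K).
    The grouped structure of the 1-D conv is omitted for clarity."""
    N = x.shape[0]
    vs = []
    for kind in AXES:                                   # step 1: pre-normalize
        mu = x.mean(axis=AXES[kind], keepdims=True)
        sd = x.std(axis=AXES[kind], keepdims=True)
        pooled = ((x - mu) / (sd + eps)).mean(axis=(2, 3))  # (N, C)
        vs.append(pooled @ W_conv)                      # reduce C -> C//r
    v = np.stack(vs, axis=1)                            # (N, K, C//r)
    corr = v @ v.transpose(0, 2, 1)                     # step 2: (N, K, K)
    feat = corr.reshape(N, -1)                          # flatten to K*K
    hidden = np.tanh(feat @ W1)                         # step 3: FC + tanh
    logits = hidden @ W2                                # FC down to K
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)             # softmax over K
```

Each row of the returned array is one sample's $\lambda_i$, which then weights the normalized tensors as in Eqn. (3).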
Complexity Analysis. The numbers of parameters and the computational complexities of different normalization methods are compared in Table 1. The additional parameters in EN mainly come from the convolutional and FC layers that calculate the data-dependent importance ratios, whereas SN only learns a global set of importance ratios for the mean and standard deviation. In EN, the total number of parameters applied to generate the data-dependent importance ratios is determined by $C$, the input channel size of the convolutional layer (i.e. “Conv.” in Fig. 2), plus a function of $K$ that accounts for the parameters in the two FC layers (i.e. the top blue block in Fig. 2). In practice, since the number of normalizers $K$ is small, this overhead is marginal. In this paper, EN employs the same pool of normalizers as SN, i.e. {IN, LN, BN}. Thus the computational complexities of both SN and EN for estimating the statistics are the same. We also compare FLOPs in Sec. 4, showing that the extra number of parameters of EN is marginal compared to SN, while its relative improvement over the ordinary BN is 300% larger than that of SN.
4 Experiments
4.1 Image Classification with ImageNet dataset
Experiment Setting. We first examine the performance of the proposed EN on ImageNet, a standard large-scale dataset for high-resolution image classification. The scale parameters $\gamma$ and bias parameters $\beta$ in all of the normalization methods are initialized as 1 and 0 respectively. In the training phase, the same batch size and data augmentation scheme are employed for all of the methods. In inference, single-crop validation accuracies based on the center crop are reported.
We use ShuffleNet v2 and ResNet50 as the backbone networks to evaluate the various normalization methods, given the differences in their network architectures and numbers of parameters. ShuffleNet v2 is trained by using the Adam optimizer, while ResNet50 is optimized by using stochastic gradient descent (SGD) with stepwise learning rate decay. The reduction rate $r$ is set differently for ShuffleNet v2 and ResNet50 since their smallest numbers of channels are different. For a fair comparison, we replace all of the normalization layers in the backbone network with EN.
Result Comparison. Table 2 reports the efficiency and accuracy of EN against its counterparts, including BN, GN, SN and SSN. For both backbone networks, EN offers superior performance at a competitive computational cost compared with previous methods. For example, by performing sample-based ratio selection, EN outperforms SN on top-1 accuracy using both ShuffleNet v2 x0.5 and ResNet50, with only a small increment in GFLOPs. The top-1 accuracy curves of ResNet50 using BN, SN and EN on the training and validation sets of ImageNet are presented in Fig. 3. We also compare with state-of-the-art attention-based methods, i.e. SENet and AANet; without bells and whistles, the proposed EN still outperforms these methods.
4.2 Noisy Classification with WebVision dataset
Experiment Setting. We also evaluate the performance of EN on the noisy image classification task with the WebVision dataset. We adopt Inception v2 and ResNet50 as the backbone networks. Since the smallest number of channels in Inception v2 differs from that of ResNet50, the feature reduction rate $r$ in the first “Conv.” is set accordingly for this architecture, while for ResNet50 we maintain the same reduction rate as on ImageNet. The center crop is adopted in inference. All of the models are optimized with SGD with stepwise learning rate decay, and the data augmentation and data balancing technologies follow previous work. In the training phase, we replace all of the normalization layers with EN.
Result Comparison. Table 3 reports the top-1 and top-5 classification accuracies of the various normalization methods. EN outperforms its counterparts using both network architectures. Specifically, using ResNet50 as the backbone, EN significantly boosts the top-1 accuracy compared with SN, and the relative improvement of EN over SN is several times larger when measured against the ordinary plain ResNet50. This performance gain is consistent with the results on ImageNet. The training and validation curves are shown in Fig. 3.
A cross-dataset test is also conducted to investigate the transfer ability of EN, since the categories in ImageNet and WebVision are the same. A model trained on one dataset is used for testing on the other dataset’s validation set. The results, reported in Fig. 4, show that EN still outperforms its counterparts.
4.3 Tiny Image Classification with CIFAR dataset
Experiment Setting. We also conduct experiments on the CIFAR-10 and CIFAR-100 datasets. All of the models are trained on a single GPU with stepwise learning rate decay. We also adopt the warm-up scheme [9, 10] for training all of the models, which gradually increases the learning rate during the first epochs.
Result Comparison. The experimental results on the CIFAR datasets are presented in Table 5. Compared with previous methods, EN shows better performance than the other normalization methods over various depths of ResNet. In particular, the top-1 accuracies of EN on CIFAR-100 are significantly improved compared with SN at all network depths.
4.4 Semantic Image Segmentation
Experiment Setting. We also evaluate the performance of EN on the semantic segmentation task using standard benchmarks, i.e. the ADE20K and Cityscapes datasets, to demonstrate its generalization ability. Following [23, 40], we use DeepLab with ResNet50 as the backbone network and adopt atrous convolution in the last two blocks. Bilinear interpolation is employed to upsample the predicted semantic maps to the size of the input image. All of the models are trained with the “poly” learning rate decay. Single-scale and multi-scale testing are used for evaluation. Note that the synchronization scheme is not used in SN and EN to estimate the batch mean and batch standard deviation across multiple GPUs. To finetune the models for semantic segmentation, we pre-train EN-ResNet50 on ImageNet and report the same configuration as SN (i.e. SN(8,32)) for a fair comparison.
Result Comparison. The mIoU scores on the ADE20K validation set and the Cityscapes test set are reported in Table 6. The performance improvement of EN is consistent with the results in classification; for example, EN improves the mIoU on both ADE20K and Cityscapes under the multi-scale test.
4.5 Ablation Study
Hyper-parameter $r$. We first investigate the effect of the reduction rate $r$ introduced in Sec. 3.3. The top-1 accuracies on ImageNet using ResNet50 as the backbone network are reported in Table 7. All of the EN models outperform SN. As $r$ increases, the classification performance grows steadily, and the gap between the lowest and highest accuracies is small, which demonstrates that the model is not sensitive to this hyper-parameter in most situations. To balance classification accuracy and computational efficiency, we fix the value of $r$ in our model.
|Method|top-1 / top-5|drop in top-1 / top-5|
|EN-ResNet50|78.1 / 93.6|-|
|(1) 2-layer MLP|76.7 / 92.9|1.4 / 0.7|
|(2) w/o “Conv.”|77.6 / 92.9|0.5 / 0.7|
|(3) ReLU instead of Tanh|77.7 / 93.4|0.4 / 0.2|
|(4) single $\gamma$ and $\beta$|77.6 / 93.3|0.5 / 0.3|
Hyper-parameter $g$. We also evaluate different group division strategies in the first “Conv.” of Fig. 2 by controlling the hyper-parameter $g$. Although the total number of parameters in the “Conv.” layer is the same for distinct $g$, the reduced feature dimensions are different, leading to different computational complexities, i.e. the larger $g$, the smaller the computational cost in the subsequent block. Table 8 shows the top-1 accuracy on ImageNet using EN-ResNet50 with different group divisions in the first “Conv.” shown in Fig. 2. All of the configurations achieve higher performance than SN. As the value of $g$ grows, the performance of EN-ResNet50 increases stably, except when $g$ equals the smallest number of channels in ResNet50. These results indicate that feature dimension reduction benefits performance; however, this advantage may disappear if the reduction rate equals the smallest number of channels.
Other Configurations. We replace the other components in the EN layer to verify their effectiveness. The configurations for comparison are as follows. (1) A 2-layer multi-layer perceptron (MLP) is used to replace the designed importance ratio calculation module in Fig. 2; the MLP reduces the feature dimension in the first layer followed by an activation function, and then reduces the dimension to the number of importance ratios in the second layer. (2) The “Conv.” operation in Fig. 2 is omitted and the pairwise correlations in Sec. 3.3 ‘step (2)’ are directly computed. (3) The Tanh activation function in the top blue block of Fig. 2 is replaced with ReLU. (4) Instead of multiple $\gamma^k$ and $\beta^k$ in Eqn. (3) (i.e. each corresponding to one normalizer), a single $\gamma$ and $\beta$ are adopted. Table 9 reports the comparison of the proposed EN with these different internal configurations. According to the results, the current configuration of EN achieves the best performance among the variants. It is worth noting that the output of the 2-layer MLP (i.e. the importance ratios) changes dramatically during the training phase, making the distribution of the feature maps change too much across iterations and leading to much poorer accuracy.
4.6 Analysis of EN
Learning Dynamics of Ratios on Datasets. Since the parameters adopted to learn the importance ratios in an EN layer are initialized so that the importance ratios of each sample in each layer have uniform values (i.e. $1/K$) at the beginning of model training, the values of the ratios then change between 0 and 1 during training. We first investigate the averaged sample ratios in different layers of ResNet50 on the ImageNet and WebVision validation sets. We use the optimized model to calculate the ratios of each sample in each layer, then average the ratios of each layer over the whole validation set. According to Fig. 4, the learned averaged ratios are usually distinct for different datasets.
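One common way to obtain this uniform starting point (an assumption here, not stated in the text) is to zero-initialize the final layer that produces the ratio logits, since a softmax over all-zero logits is exactly uniform:

```python
import numpy as np

# Hypothetical zero-initialized logits for K = 3 normalizers (IN, LN, BN).
logits = np.zeros(3)
ratios = np.exp(logits) / np.exp(logits).sum()  # softmax
print(ratios)  # each ratio equals 1/3
```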
To analyze how the ratios change during training, Fig. 5 plots the learning dynamics of the ratios over training epochs for the normalization layers in ResNet50. Each ratio value is averaged over all of the samples in the ImageNet validation set. From the perspective of the entire dataset, the changes of the ratios in each layer of EN are similar to those in SN: their values fluctuate smoothly during the training phase, implying that distinct layers may have their own preferences of normalizers at different epochs.
Learning Dynamics of Ratios on Classes and Images. One advantage of EN over SN is its ability to learn importance ratios that adapt to different exemplars. To illustrate this benefit, we plot the averaged importance ratios of different classes (i.e. with and without similar appearance) in different layers in Fig. 6, as well as the importance ratios of various image samples in different layers in Fig. 7. We make the following observations.
(1) Different classes learn their own importance ratios in different layers. However, once the neural network is optimized on a certain dataset (e.g. ImageNet), the trends of the ratio changes are similar across epochs. For example, in Fig. 6, since the Persian cat and Siamese cat have a similar appearance, their learned ratio curves are very close and even coincident in some layers, e.g. Layer 5 and Layer 10, while the ratio curves of the Cheeseburger class are far from those of the above two categories. In most layers, however, the ratio changes of the different normalizers are basically the same, with only numerical nuances.
(2) For images with the same class index but various appearances, the learned ratios can also be distinct in different layers, as shown in Fig. 7. All of the images are sampled from the confectionery class but have various appearances, e.g. exemplars of confectionery and shelves for selling candy. According to Fig. 7, different images from the same category also obtain different ratios in the bottom, middle and top normalization layers.
In this paper, we propose Exemplar Normalization to learn a linear combination of different normalizers in a sample-based manner within a single layer. We show the effectiveness of EN on various computer vision tasks, such as classification, noisy-label classification and segmentation, demonstrating its superior learning and generalization ability compared with static learning-to-normalize methods such as SN. In addition, the interpretable visualization of the learned importance ratios reveals the properties of classes and datasets. Future work will explore EN in more intelligent tasks; the task-oriented constraint on the importance ratios is also a potential research direction.
Acknowledgement This work was partially supported by No. 2018YFB1800800, Open Research Fund from Shenzhen Research Institute of Big Data No. 2019ORF01005, 2018B030338001, 2017ZT07X152, ZDSYS201707251409055, HKU Seed Fund for Basic Research and Start-up Fund.
-  Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
-  Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V Le. Attention augmented convolutional networks. In ICCV, 2019.
-  Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2017.
-  Yunpeng Chen, Yannis Kalantidis, Jianshu Li, Shuicheng Yan, and Jiashi Feng. A^2-Nets: Double attention networks. In NeurIPS, 2018.
-  Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
-  Zilin Gao, Jiangtao Xie, Qilong Wang, and Peihua Li. Global second-order pooling convolutional networks. In CVPR, 2019.
-  Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. Curriculumnet: Weakly supervised learning from large-scale web images. In ECCV, 2018.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016.
-  Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In CVPR, 2018.
-  Lei Huang, Xianglong Liu, Yang Liu, Bo Lang, and Dacheng Tao. Centered weight normalization in accelerating training of deep neural networks. In ICCV, 2017.
-  Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In NeurIPS, 2017.
-  Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
-  Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In NeurIPS, 2016.
-  Chen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In Artificial intelligence and statistics, 2016.
-  Boyi Li, Felix Wu, Kilian Q. Weinberger, and Serge J. Belongie. Positional normalization. arXiv:1907.04312, 2019.
-  Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. Webvision database: Visual learning and understanding from web data. arXiv:1708.02862, 2017.
-  Xilai Li, Wei Sun, and Tianfu Wu. Attentive normalization. arXiv:1908.01259, 2019.
-  Qianli Liao, Kenji Kawaguchi, and Tomaso Poggio. Streaming normalization: Towards simpler and more biologically-plausible normalizations for online and recurrent learning. arXiv:1610.06160, 2016.
-  Lucas Deecke, Iain Murray, and Hakan Bilen. Mode normalization. In ICLR, 2019.
-  Ping Luo, Zhanglin Peng, Jiamin Ren, and Ruimao Zhang. Do normalization layers in a deep convnet really need to be distinct? arXiv:1811.07727, 2018.
-  Ping Luo, Jiamin Ren, Zhanglin Peng, Ruimao Zhang, and Jingyu Li. Differentiable learning-to-normalize via switchable normalization. In ICLR, 2019.
-  Ping Luo, Ruimao Zhang, Jiamin Ren, Zhanglin Peng, and Jingyu Li. Switchable normalization for learning-to-normalize deep representation. IEEE Trans. Pattern Anal. Mach. Intell., 2019.
-  Ping Luo, Peng Zhanglin, Shao Wenqi, Zhang Ruimao, Ren Jiamin, and Wu Lingyun. Differentiable dynamic normalization for learning deep representation. In ICML, 2019.
-  Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet V2: practical guidelines for efficient CNN architecture design. In ECCV, 2018.
-  Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
-  Hyeonseob Nam and Hyo-Eun Kim. Batch-instance normalization for adaptively style-invariant neural networks. In NeurIPS, 2018.
-  Xingang Pan, Ping Luo, Jianping Shi, and Xiaoou Tang. Two at once: enhancing learning and generalization capacities via ibn-net. In ECCV, 2018.
-  Xingang Pan, Xiaohang Zhan, Jianping Shi, Xiaoou Tang, and Ping Luo. Switchable whitening for deep representation learning. In ICCV, 2019.
-  Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, and Aaron C. Courville. Learning visual reasoning without strong priors. arXiv:1707.03017, 2017.
-  Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, 2016.
-  Wenqi Shao, Tianjian Meng, Jingyu Li, Ruimao Zhang, Yudian Li, Xiaogang Wang, and Ping Luo. Ssn: Learning sparse switchable normalization via sparsestmax. In CVPR, 2019.
-  Wenqi Shao, Shitao Tang, Xingang Pan, Ping Tan, Xiaogang Wang, and Ping Luo. Channel equilibrium networks for learning deep representation. arXiv:2003.00214, 2020.
-  Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
-  Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, 2016.
-  Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, and Liang Lin. Batch kalman normalization: Towards training deep neural networks with micro-batches. 2018.
-  Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, and Qinghua Hu. Eca-net: Efficient channel attention for deep convolutional neural networks. In CVPR, 2020.
-  Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
-  Ruimao Zhang, Wei Yang, Zhanglin Peng, Pengxu Wei, Xiaogang Wang, and Liang Lin. Progressively diffused networks for semantic visual parsing. Pattern Recognition, 90:78–86, 2019.
-  Zhaoyang Zhang, Jingyu Li, Wenqi Shao, Zhanglin Peng, Ruimao Zhang, Xiaogang Wang, and Ping Luo. Differentiable learning-to-group channels via groupable convolutional neural networks. In ICCV, 2019.
-  Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ADE20K dataset. In CVPR, 2017.