1 Introduction
[Figure 1. (a) The important ratios learned in three EN layers (i.e. bottom, middle and top) of ResNet50 are presented. (b) EN outperforms its counterparts on various computer vision tasks (i.e. image classification, noisy-supervised classification and semantic image segmentation) by using different network architectures.]

Normalization techniques are among the most essential components for improving performance and accelerating training of convolutional neural networks (CNNs). Recently, a family of normalization methods has been proposed, including batch normalization (BN)
[14], instance normalization (IN) [36], layer normalization (LN) [1] and group normalization (GN) [39]. As these methods were designed for different tasks, they normalize the feature maps of CNNs along different dimensions. To combine the advantages of the above methods, switchable normalization (SN) [23] and its variant [33] were proposed to learn a linear combination of normalizers for each convolutional layer in an end-to-end manner. We term this normalization setting static 'learning-to-normalize'. Despite the successes of these methods, once a CNN is optimized with them, it employs the same combination ratios of the normalization methods for all image samples in a dataset; it is incapable of adapting to different instances and thus renders suboptimal performance.
As shown in Fig. 1, this work studies a new learning problem, dynamic 'learning-to-normalize', by proposing Exemplar Normalization (EN), which is able to learn arbitrary normalizers for different convolutional layers, image samples, categories, datasets, and tasks in an end-to-end way. Unlike previous conditional batch normalization (cBN), which used a multi-layer perceptron (MLP) to learn data-dependent parameters in a normalization layer and thus easily suffers from overfitting, the internal architecture of EN is carefully designed to learn data-dependent normalization with merely a few parameters, stabilizing training and improving the generalization capacity of CNNs.
EN has several appealing benefits. (1) It can be treated as an explanation tool for CNNs. The exemplar-based important ratios in each EN layer provide information to analyze the properties of different samples, classes, and datasets in various tasks. As shown in Fig. 1(a), by training ResNet50 [9] on ImageNet [6], images from different categories select different normalizers in the same EN layer, leading to superior performance compared to the ordinary network. (2) EN makes versatile design of the normalization layer possible, as EN is suitable for various benchmarks and tasks. Compared with state-of-the-art counterparts in Fig. 1(b), EN consistently outperforms them on many benchmarks such as ImageNet [6] for image classification, WebVision [18] for noisy label learning, and ADE20K [42] and Cityscapes [5] for semantic segmentation. (3) EN is a plug-and-play module. It can be inserted into various CNN architectures such as ResNet [9], Inception v2 [35], and ShuffleNet v2 [26], to replace any normalization layer therein and boost their performance.
The contributions of this work are threefold. (1) We present a novel normalization learning setting named dynamic 'learning-to-normalize', by proposing Exemplar Normalization (EN), which learns to select different normalizers in different normalization layers for different image samples. EN is able to normalize each image sample in both the training and testing stages. (2) EN provides a flexible way to analyze the selected normalizers in different layers, as well as the relationship among distinct samples and their deep representations. (3) As a new building block, we apply EN to various tasks and network architectures. Extensive experiments show that EN outperforms its counterparts on a wide spectrum of benchmarks and tasks. For example, by replacing BN in the ordinary ResNet50 [9], the improvement produced by EN is larger than that of SN on both ImageNet [6] and the noisy WebVision [18] dataset.
2 Related Work
Many normalization techniques have been developed to normalize feature representations [14, 1, 36, 39, 23] or the weights of filters [12, 32, 27] to accelerate training and boost the generalization ability of CNNs. Among them, Batch Normalization (BN) [14], Layer Normalization (LN) [1] and Instance Normalization (IN) [36] are the most popular methods, computing statistics with respect to the mini-batch, layer, and channel, respectively. The follow-up Positional Normalization [17] normalizes the activations at each spatial position independently across the channels. Besides normalizing different dimensions of the feature maps, another branch of work improved the capability of BN to deal with small batch sizes, including Group Normalization (GN) [39], Batch Renormalization (BRN) [13], Batch Kalman Normalization (BKN) [37] and Stream Normalization (StN) [20].
In recent studies, using a hybrid of multiple normalizers in a single normalization layer has attracted much attention [29, 28, 24, 30, 25]. For example, Pan et al. introduced IBN-Net [29] to improve the generalization ability of CNNs by manually designing the mixture strategy of IN and BN. In [28], Nam et al. adopted the same scheme in style transfer, where they employed a gated function to learn the important ratios of IN and BN. Luo et al. further proposed Switchable Normalization (SN) [23, 22] and its sparse version [33] to extend such a scheme to an arbitrary number of normalizers. More recently, Dynamic Normalization (DN) [25] was introduced to estimate the computational pattern of statistics for a specific layer. Our work is motivated by this series of studies, but provides a more flexible way to learn normalization for each sample.
Adaptive normalization methods are also related to our work. In [31], Conditional Batch Normalization (cBN) was introduced to learn the parameters of BN (i.e. scale and offset) adaptively as a function of the input features. Attentive Normalization (AN) [19] learns sample-based coefficients to combine feature maps. In [21], Deecke et al. proposed Mode Normalization (MN) to detect modes of data on-the-fly and normalize them. However, these methods are incapable of learning various normalizers for different convolutional layers and images as EN does.
The proposed EN also has a connection with learning data-dependent [15] or dynamic weights [41] in convolution and pooling [16]. The subnet for computing the important ratios is also similar in form to SE-like [11, 2, 38] attention mechanisms, but they are technically different. First, SE-like models encourage channels to contribute equally to the feature representation [34], while EN learns to select different normalizers in different layers. Second, SE is plugged into different networks using different schemes, whereas EN can directly replace other normalization layers.
3 Exemplar Normalization (EN)
3.1 Notation and Background
Overview.
We introduce normalization in terms of a 4D tensor, the input data of a normalization layer in a mini-batch. Let $h \in \mathbb{R}^{N \times C \times H \times W}$ be the input 4D tensor, where $N$, $C$, $H$ and $W$ indicate the number of images, the number of channels, the channel height and width respectively. Here $H$ and $W$ define the spatial size of a single feature map. Let the matrix $h^n \in \mathbb{R}^{C \times HW}$ denote the feature maps of the $n$-th image, where $n \in \{1, \ldots, N\}$. Different normalizers normalize $h^n$ by removing its mean and standard deviation along different dimensions, following the formulation

    $\hat{h}^n = \gamma \dfrac{h^n - \mu_k}{\sqrt{(\sigma_k)^2 + \epsilon}} + \beta$,    (1)

where $\hat{h}^n$ is the feature map after normalization, and $\mu_k$ and $\sigma_k$ are the vectors of mean and standard deviation calculated by the $k$-th normalizer. Here $k \in \Omega$ and $\Omega = \{\mathrm{BN}, \mathrm{IN}, \mathrm{LN}, \mathrm{GN}, \ldots\}$. The scale parameter $\gamma$ and bias parameter $\beta$ are adopted to re-scale and re-shift the normalized feature maps, $\epsilon$ is a small constant to prevent division by zero, and both the subtraction and division are channel-wise operators.

Switchable Normalization (SN). Unlike previous methods that estimate statistics over fixed dimensions of the input tensor, SN [23, 24] learns a linear combination of the statistics of existing normalizers,
    $\hat{h}^n = \gamma \dfrac{h^n - \sum_{k \in \Omega} \lambda_k \mu_k}{\sqrt{\sum_{k \in \Omega} \lambda'_k (\sigma_k)^2 + \epsilon}} + \beta$,    (2)

where $\lambda_k$ is a learnable parameter corresponding to the $k$-th normalizer, and $\sum_{k \in \Omega} \lambda_k = 1$. In practice, these important ratios are calculated by using the softmax function. The important ratios $\lambda_k$ and $\lambda'_k$ for mean and variance can also be different. Although SN [23] outperforms the individual normalizers in various tasks, it solves a static 'learning-to-normalize' problem by switching among several normalizers in each layer. Once SN is learned, its important ratios are fixed for the entire dataset. Thus the flexibility of SN is limited, and it suffers from the bias between the training and the test set, leading to suboptimal results.

In this paper, Exemplar Normalization (EN) is proposed to investigate a dynamic 'learning-to-normalize' problem, which learns different data-dependent normalization for different image samples in each layer. EN greatly expands the flexibility of SN, while retaining SN's advantages of differentiable learning, stable model training, and applicability to multiple tasks.
3.2 Formulation of EN
Given input feature maps $h^n$, Exemplar Normalization (EN) is defined by

    $\hat{h}^n = \sum_{k \in \Omega} \lambda^n_k \left( \gamma_k \dfrac{h^n - \mu_k}{\sqrt{(\sigma_k)^2 + \epsilon}} + \beta_k \right)$,    (3)

where $\lambda^n_k$ indicates the important ratio of the $k$-th normalizer for the $n$-th sample. Similar to SN, we use the softmax function to satisfy the summation constraint $\sum_{k \in \Omega} \lambda^n_k = 1$. Comparing Eqn. (2) and Eqn. (3), the differences between SN and EN are threefold. (1) The important ratios of mean and standard deviation in SN can be different, but such a scheme is avoided in EN to ensure stability of training, because the learning capacity of EN already exceeds that of SN by learning different normalizers for different samples. (2) We use the important ratios to combine the normalized feature maps instead of combining the statistics of the normalizers, reducing the bias introduced in SN when combining standard deviations. (3) Multiple $\gamma_k$ and $\beta_k$ are adopted to re-scale and re-shift the normalized feature maps in EN.
To calculate the important ratios depending on the feature map of an individual sample, we define

    $\lambda^n = \mathcal{F}(h^n, \mathcal{S}; \Theta)$,    (4)

where $\lambda^n = [\lambda^n_1, \ldots, \lambda^n_K]$ and $K = |\Omega|$ is the total number of normalizers in EN. $\mathcal{S} = \{(\mu_k, \sigma_k)\}_{k=1}^{K}$ indicates the collection of statistics of the different normalizers, and we have $\sum_{k=1}^{K} \lambda^n_k = 1$. $\mathcal{F}$ is a function (a small neural network) that calculates the instance-based important ratios according to the input feature maps $h^n$ and the statistics $\mathcal{S}$, and $\Theta$ denotes the learnable parameters of $\mathcal{F}$. We carefully design a lightweight module to implement $\mathcal{F}$ in the next subsection.
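Given per-sample ratios from Eqn. (4), the forward pass of Eqn. (3) can be sketched as below. This is an illustrative NumPy version (names are ours) for the {BN, IN, LN} pool; the key contrast with the SN sketch is that the *normalized maps* are combined and each sample carries its own ratio vector:

```python
import numpy as np

def exemplar_norm(h, lam, gammas, betas, eps=1e-5):
    """Eqn (3) sketch: combine the K normalized feature maps with per-sample
    ratios lam of shape (N, K), K = 3 for the {BN, IN, LN} pool. Each
    normalizer has its own scale/offset pair (gammas[k], betas[k]); lam is
    assumed to come from the ratio subnet of Eqn (4), already softmaxed.
    """
    axes = [(0, 2, 3), (2, 3), (1, 2, 3)]        # BN, IN, LN reduce axes
    out = np.zeros_like(h)
    for k, a in enumerate(axes):
        mu = h.mean(axis=a, keepdims=True)
        sigma = h.std(axis=a, keepdims=True)
        hat = gammas[k] * (h - mu) / np.sqrt(sigma ** 2 + eps) + betas[k]
        # per-sample ratio: reshape (N,) -> (N, 1, 1, 1) for broadcasting
        out += lam[:, k].reshape(-1, 1, 1, 1) * hat
    return out
```

If a sample's ratio vector is one-hot, EN degenerates to the corresponding single normalizer for that sample, which is a useful sanity check.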
3.3 An Exemplar Normalization Layer
Fig. 2 shows a diagram of the key operations in an EN layer, including the important-ratio calculation and the feature map normalization. Given an input tensor $h$, the set of statistics $\mathcal{S}$ is estimated, where $(\mu_k, \sigma_k)$ denotes the $k$-th pair of statistics (mean and standard deviation). The EN layer then uses $h$ and $\mathcal{S}$ to calculate the important ratios, as shown in the right branch of Fig. 2 in blue. As shown in the left branch of Fig. 2, the multiple normalized tensors are also calculated.
In Fig. 2, there are three steps to calculate the important ratios for each sample. (1) The input tensor is first downsampled in the spatial dimension by average pooling; the output feature matrix is denoted as $\bar{h}^n$. Then we use every $(\mu_k, \sigma_k)$ to pre-normalize $\bar{h}^n$ by subtracting the means and dividing by the standard deviations. There are $K$ statistics, and thus we obtain $K$ pre-normalized feature matrices. After that, a convolutional operator is employed to reduce the channel dimension from $C$ to $C/r$, which is shown in the first blue block in Fig. 2. Here $r$ is a hyper-parameter that indicates the reduction rate. To further reduce the parameters of this operation, we use group convolution with group number $g$, which keeps the total number of convolutional parameters constant irrespective of the value of $r$. The output of this step is denoted as $v^n$.
(2) The second step is to compute the pairwise correlations of the different normalizers for each sample, which is motivated by high-order feature representations [7, 4]. For the $n$-th sample, we use $v^n$ and its transposition to compute the pairwise correlations $M^n = v^n (v^n)^{\top}$. Then $M^n \in \mathbb{R}^{K \times K}$ is reshaped to a vector of length $K^2$ to calculate the important ratios. Intuitively, the pairwise correlations capture the relationships between the different normalizers for each sample and allow the model to integrate more information when calculating the important ratios. In practice, we also find that this operation effectively stabilizes model training and leads to higher performance.
(3) In the last step, the above vector is first fed into a fully-connected (FC) layer followed by a tanh unit, which raises its dimension from $K^2$ to $d$, where $d$ is a hyper-parameter whose value is usually small; we set $d = 50$ in our experiments. After that, another FC layer reduces the dimension to $K$. The output vector is regarded as the important ratios of the $n$-th sample for the $K$ normalizers, where each element corresponds to an individual normalizer. Once we obtain the important ratios $\lambda^n$, the softmax function is applied to satisfy the summation constraint that the important ratios of the different normalizers sum to 1.
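The three steps above can be sketched in NumPy as follows. This is a simplified illustration, not the paper's implementation: the weight shapes are assumptions chosen so the example is self-contained (`W_conv` plays the role of the grouped 1×1 projection, written as K independent linear maps; `W_fc1`/`W_fc2` are the two FC layers), and all statistics are computed on the spatially pooled features for brevity:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def important_ratios(h, W_conv, W_fc1, W_fc2, eps=1e-5):
    """Sketch of the three-step ratio computation of Sec. 3.3 for K = 3
    normalizers (BN, IN, LN). Assumed shapes: W_conv is (K, C//r, C),
    W_fc1 is (d, K*K), W_fc2 is (K, d).
    """
    N, C, H, W = h.shape
    K = W_conv.shape[0]
    axes = [(0, 2, 3), (2, 3), (1, 2, 3)]            # BN, IN, LN

    # Step 1: average-pool, pre-normalize with each normalizer's statistics,
    # then reduce channels C -> C//r with the grouped projection.
    pooled = h.mean(axis=(2, 3))                      # (N, C)
    V = []
    for k, a in enumerate(axes):
        mu = np.broadcast_to(h.mean(axis=a, keepdims=True),
                             h.shape).mean(axis=(2, 3))   # (N, C)
        sd = np.broadcast_to(h.std(axis=a, keepdims=True),
                             h.shape).mean(axis=(2, 3))
        pre = (pooled - mu) / (sd + eps)              # pre-normalized (N, C)
        V.append(pre @ W_conv[k].T)                   # (N, C//r)
    V = np.stack(V, axis=1)                           # (N, K, C//r)

    # Step 2: pairwise correlations between normalizers, per sample.
    M = np.einsum("nkc,nlc->nkl", V, V).reshape(N, K * K)

    # Step 3: FC -> tanh -> FC -> softmax over the K normalizers.
    logits = np.tanh(M @ W_fc1.T) @ W_fc2.T           # (N, K)
    return softmax(logits, axis=1)
```

The output rows are valid convex-combination weights per sample, ready to be plugged into Eqn. (3).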
Complexity Analysis. The numbers of parameters and the computational complexity of different normalization methods are compared in Table 1. The additional parameters in EN mainly come from the convolutional and FC layers that calculate the data-dependent important ratios. In SN [23], the corresponding number is small since it adopts global important ratios for both the mean and the standard deviation. In EN, the total number of parameters used to generate the data-dependent important ratios depends on the input channel size of the convolutional layer (i.e. "Conv." in Fig. 2) and on the number of parameters in the two FC layers (i.e. the top blue block in Fig. 2). In practice, since the value of $d$ is small (e.g. 50), this overhead is marginal. In this paper, EN employs the same pool of normalizers as SN, i.e. $\Omega = \{\mathrm{IN}, \mathrm{LN}, \mathrm{BN}\}$; thus the computational complexities of both SN and EN for estimating the statistics are $O(NCHW)$. We also compare FLOPs in Sec. 4, showing that the extra number of parameters of EN is marginal compared to SN, while its relative improvement over the ordinary BN is 300% larger than that of SN.
[Table 1. Comparisons of the numbers of parameters and the computational complexity of different normalization methods: BN [14], IN [36], LN [1], GN [39], BKN [37], SN [23] and EN.]
4 Experiment
4.1 Image Classification with ImageNet dataset
Experiment Setting. We first examine the performance of the proposed EN on ImageNet [6], a standard large-scale dataset for high-resolution image classification. Following [23], $\gamma$ and $\beta$ in all of the normalization methods are initialized as 1 and 0 respectively. In the training phase, the batch size is kept the same for all methods and the data augmentation scheme of [9] is employed. In inference, the single-crop validation accuracies based on the center crop are reported.
We use ShuffleNet v2 ×0.5 [26] and ResNet50 [9] as the backbone networks to evaluate the various normalization methods, given the differences in their network architectures and numbers of parameters. Same as [26], ShuffleNet v2 is trained with the Adam optimizer. For ResNet50, all of the methods are optimized with stochastic gradient descent (SGD) and stepwise learning rate decay. The reduction rates in ShuffleNet v2 ×0.5 and ResNet50 are set differently since their smallest numbers of channels differ. For fair comparison, we replace the compared normalizers with EN in all of the normalization layers of the backbone network.
Backbone | Method | GFLOPs | Params. | top-1 | top-5
ShuffleNet v2 ×0.5 | BN | 0.046 | 1.37M | 60.3 | 81.9
ShuffleNet v2 ×0.5 | SN | 0.057 | 1.37M | 61.2 | 82.9
ShuffleNet v2 ×0.5 | SSN | 0.052 | 1.37M | 61.2 | 82.7
ShuffleNet v2 ×0.5 | EN | 0.063 | 1.59M | 62.2 | 83.3
ResNet50 | SENet | 4.151 | 26.77M | 77.6 | 93.7
ResNet50 | AANet | 4.167 | 25.80M | 77.7 | 93.8
ResNet50 | BN | 4.136 | 25.56M | 76.4 | 93.0
ResNet50 | GN | 4.155 | 25.56M | 76.0 | 92.8
ResNet50 | SN | 4.225 | 25.56M | 76.9 | 93.2
ResNet50 | SSN | 4.186 | 25.56M | 77.2 | 93.1
ResNet50 | EN | 4.325 | 25.91M | 78.1 | 93.6
Result Comparison. Table 2 reports the efficiency and accuracy of EN against its counterparts, including BN [14], GN [39], SN [23] and SSN [33]. For both backbone networks, EN offers superior performance at a competitive computational cost compared with previous methods. For example, by considering the sample-based ratio selection, EN outperforms SN by 1.0% and 1.2% top-1 accuracy using ShuffleNet v2 ×0.5 and ResNet50 respectively, with only a small increment in GFLOPs. The top-1 accuracy curves of ResNet50 with BN, SN and EN on the training and validation sets of ImageNet are presented in Fig. 3. We also compare with state-of-the-art attention-based methods, i.e. SENet [11] and AANet [2]; without bells and whistles, the proposed EN still outperforms these methods on top-1 accuracy.
4.2 Noisy Classification with Webvision dataset
Experiment Setting. We also evaluate the performance of EN on the noisy image classification task with the WebVision dataset [18]. We adopt Inception v2 [35] and ResNet50 [9] as the backbone networks. Since the smallest number of channels in Inception v2 differs, the feature reduction rate in its first "Conv." is adjusted accordingly for this architecture; in ResNet50 [9], we maintain the same reduction parameter as on ImageNet. The center crop is adopted in inference. All of the models are optimized with SGD with a stepwise learning rate decay, and the data augmentation and data balance technologies of [8] are used. In the training phase, we replace the compared normalizers with EN in all of the normalization layers.

Result Comparison. Table 3 reports the top-1 and top-5 classification accuracies of the various normalization methods. EN outperforms its counterparts with both network architectures. Specifically, using ResNet50 as the backbone, EN boosts the top-1 accuracy from 72.8% to 73.5% compared with SN; relative to the plain BN baseline, the improvement of EN is about 3 times that of SN. This performance gain is consistent with the results on ImageNet. The training and validation curves are shown in Fig. 3.
A cross-dataset test is also conducted to investigate the transfer ability of EN, since the categories in ImageNet and WebVision are the same. The model trained on one dataset is used to test on the other dataset's validation set. The results reported in Table 4 show that EN still outperforms its counterparts.
Model | Norm | GFLOPs | Params. | top-1 | top-5
Inception v2 | BN | 2.056 | 11.29M | 70.7 | 88.0
Inception v2 | SN | 2.081 | 11.30M | 71.3 | 88.5
Inception v2 | EN | 2.122 | 12.36M | 71.6 | 88.6
ResNet50 | BN | 4.136 | 25.56M | 72.5 | 89.1
ResNet50 | SN | 4.225 | 25.56M | 72.8 | 89.2
ResNet50 | EN | 4.325 | 25.91M | 73.5 | 89.4
training set → val. set | Method | top-1 | top-5
ImageNet → WebVision | BN | 67.9 | 85.8
ImageNet → WebVision | SN | 68.0 | 86.3
ImageNet → WebVision | EN | 68.4 | 86.8
WebVision → ImageNet | BN | 64.4 | 84.3
WebVision → ImageNet | SN | 61.1 | 81.0
WebVision → ImageNet | EN | 64.7 | 84.6
4.3 Tiny Image Classification with CIFAR dataset
Experiment Setting. We also conduct experiments on the CIFAR-10 and CIFAR-100 datasets. All of the models are trained on a single GPU, and the learning rate is decayed twice during training. We also adopt the warm-up scheme [9, 10] for all models, which gradually increases the learning rate to its initial value at the beginning of training.
Result Comparison. The experimental results on the CIFAR datasets are presented in Table 5. Compared with previous methods, EN shows better performance than the other normalization methods over various depths of ResNet [9]. In particular, the top-1 accuracies of EN on CIFAR-100 are improved by 1.04%, 1.31% and 0.79% compared with SN for the three network depths.
Dataset | Backbone | BN | SN | EN
CIFAR-10 | ResNet20 | 91.54 | 91.81 | 92.41
CIFAR-10 | ResNet56 | 93.15 | 93.41 | 93.73
CIFAR-10 | ResNet110 | 93.88 | 94.01 | 94.22
CIFAR-100 | ResNet20 | 67.87 | 67.74 | 68.78
CIFAR-100 | ResNet56 | 70.83 | 70.70 | 72.01
CIFAR-100 | ResNet110 | 72.41 | 72.53 | 73.32
4.4 Semantic Image Segmentation
Experiment Setting. We also evaluate the performance of EN on the semantic segmentation task using standard benchmarks, i.e. the ADE20K [42] and Cityscapes [5] datasets, to demonstrate its generalization ability. Same as [23, 40], we use DeepLab [3] with ResNet50 as the backbone network and adopt atrous convolution with rates 2 and 4 in the last two blocks. The downsample rate of the backbone network is 8, and bilinear interpolation is employed to upsample the predicted semantic maps to the size of the input image. All of the models are trained with the "poly" learning rate decay. Single-scale and multi-scale testing are used for evaluation. Note that the synchronization scheme across multiple GPUs is not used in SN and EN to estimate the batch mean and batch standard deviation. To fine-tune the model on semantic segmentation, we pre-train EN-ResNet50 on ImageNet with 8 GPUs and 32 images per GPU; thus we report the same configuration of SN (i.e. SN(8,32) [24]) for fair comparison.
Result Comparison. The mIoU scores on the ADE20K validation set and the Cityscapes test set are reported in Table 6. The performance improvement of EN is consistent with the classification results. For example, with multi-scale testing, the mIoUs on ADE20K and Cityscapes are improved from 38.4 and 75.8 (SN) to 38.9 and 76.1.
Method | ADE20K mIoU (ss) | ADE20K mIoU (ms) | Cityscapes mIoU (ss) | Cityscapes mIoU (ms)
SyncBN | 36.4 | 37.7 | 69.7 | 73.0
GN | 35.7 | 36.6 | 68.4 | 73.1
SN | 37.7 | 38.4 | 72.2 | 75.8
EN | 38.2 | 38.9 | 72.6 | 76.1
4.5 Ablation Study
Hyper-parameter $d$. We first investigate the effect of the hyper-parameter $d$ introduced in Sec. 3.3. The top-1 accuracies on ImageNet using ResNet50 as the backbone network are reported in Table 7. All of the EN models outperform SN. As $d$ increases, the classification performance grows steadily. The gap between the lowest and the highest accuracy is about 0.6% excluding $d = 1$, which demonstrates that the model is not sensitive to this hyper-parameter in most situations. To balance classification accuracy and computational efficiency, we set $d$ to 50 in our model.
Method | SN | EN (d=1) | EN (d=10) | EN (d=20) | EN (d=50) | EN (d=100)
top-1 | 76.9 | 77.1 | 77.5 | 77.8 | 78.1 | 78.0
vs. SN | - | +0.2 | +0.6 | +0.9 | +1.2 | +1.1
Method | SN | EN (g=2) | EN (g=4) | EN (g=16) | EN (g=32) | EN (g=64)
top-1 | 76.9 | 77.7 | 77.9 | 77.9 | 78.1 | 77.7
vs. SN | - | +0.8 | +1.0 | +1.0 | +1.2 | +0.8
Method | top-1 / top-5 | vs. EN (top-1 / top-5)
EN-ResNet50 | 78.1 / 93.6 | -
2-layer MLP | 76.7 / 92.9 | -1.4 / -0.7
w/o Conv. | 77.6 / 92.9 | -0.5 / -0.7
ReLU | 77.7 / 93.4 | -0.4 / -0.2
single γ, β | 77.6 / 93.3 | -0.5 / -0.3
Hyper-parameter $g$. We also evaluate different group division strategies in the first "Conv." of Fig. 2 by controlling the group number $g$. Although the total number of parameters in the "Conv." layer is the same for distinct $g$, the reduced feature dimensions are different, leading to different computational complexity, i.e. the larger $g$, the smaller the computation cost in the subsequent block. Table 8 shows the top-1 accuracy on ImageNet using EN-ResNet50 with different group divisions in the first "Conv." shown in Fig. 2. All of the configurations achieve higher performance than SN. As the value of $g$ grows, the performance of EN-ResNet50 increases stably except for $g = 64$, which equals the smallest number of channels in ResNet50. These results indicate that feature dimension reduction benefits performance; however, this advantage may disappear if the reduction rate equals the smallest number of channels.
Other Configurations. We replace other components in the EN layer to verify their effectiveness. The configurations for comparison are as follows. (1) A 2-layer multi-layer perceptron (MLP) is used to replace the designed important-ratio calculation module in Fig. 2: the MLP reduces the feature dimension in the first layer, applies an activation function, and then reduces the dimension to the number of important ratios in the second layer. (2) The "Conv." operation in Fig. 2 is omitted, and the pairwise correlations of step (2) in Sec. 3.3 are computed directly. (3) The tanh activation function in the top blue block of Fig. 2 is replaced with ReLU. (4) Instead of multiple $\gamma_k$ and $\beta_k$ in Eqn. (3) (i.e. each pair corresponding to one normalizer), a single $\gamma$ and $\beta$ are adopted. Table 9 reports the comparison of the proposed EN with these different internal configurations. According to the results, the current configuration of EN achieves the best performance among the variants. It is worth noting that the output of the 2-layer MLP (i.e. the important ratios) changes dramatically during training, making the distribution of the feature maps change too much between iterations and leading to much poorer accuracy.

4.6 Analysis of EN
Learning Dynamics of Ratios on Datasets. Since the parameters used to learn the important ratios in an EN layer are initialized so that the important ratios of each sample in each layer have uniform values (i.e. 1/3 for three normalizers) at the beginning of training, the values of the ratios then evolve between 0 and 1 during the training phase. We first investigate the averaged sample ratios in different layers of ResNet50 on the ImageNet and WebVision validation sets. We use the optimized model to calculate the ratios of each sample in each layer; the average ratios of each layer are then computed over the whole validation set. According to Fig. 4, once the training dataset is determined, the learned average ratios are usually distinct for different datasets.
To analyze the changes of the ratios during training, Fig. 5 plots the learning dynamics of the ratios over training epochs for the normalization layers in ResNet50. Each ratio value is averaged over all of the samples in the ImageNet validation set. From the perspective of the entire dataset, the changes of the ratios in each layer of EN are similar to those in SN: the values fluctuate smoothly during training, implying that distinct layers may have their own preference of normalizers when optimizing the model in different epochs.
Learning Dynamics of Ratios on Classes and Images. One advantage of EN over SN is its ability to learn important ratios that adapt to different exemplars. To illustrate this benefit, we plot the averaged important ratios of different classes (i.e. with and without similar appearance) in different layers in Fig. 6, as well as the important ratios of various image samples in different layers in Fig. 7. We make the following observations.
(1) Different classes learn their own important ratios in different layers. However, once the neural network is optimized on a certain dataset (e.g. ImageNet), the trends of the ratio changes are similar across epochs. For example, in Fig. 6, since the Persian cat and the Siamese cat have a similar appearance, their learned ratio curves are very close and even coincide in some layers, e.g. Layer 5 and Layer 10, while the ratio curves of the Cheeseburger class are far from those of the above two categories. In most layers, however, the ratio changes of the different normalizers are basically the same, differing only in numerical nuances.
(2) For images with the same class label but various appearances, the learned ratios can also be distinct in different layers. Such cases are shown in Fig. 7. All of the images are sampled from the confectionery class but have various appearances, e.g. an exemplar of confectionery versus shelves for selling candy. According to Fig. 7, different images from the same category also obtain different ratios in the bottom, middle and top normalization layers.
5 Conclusion
In this paper, we propose Exemplar Normalization (EN) to learn a linear combination of different normalizers in a sample-based manner within a single layer. We show the effectiveness of EN on various computer vision tasks, such as image classification, noisy-label classification and semantic segmentation, demonstrating its superior learning and generalization ability compared with static learning-to-normalize methods such as SN. In addition, the interpretable visualization of the learned important ratios reveals properties of classes and datasets. Future work will explore EN in more intelligent tasks; a task-oriented constraint on the important ratios is also a potential research direction.
Acknowledgement This work was partially supported by No. 2018YFB1800800, Open Research Fund from Shenzhen Research Institute of Big Data No. 2019ORF01005, 2018B030338001, 2017ZT07X152, ZDSYS201707251409055, HKU Seed Fund for Basic Research and Startup Fund.
References
 [1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
 [2] Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V Le. Attention augmented convolutional networks. In ICCV, 2019.
 [3] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2017.
 [4] Yunpeng Chen, Yannis Kalantidis, Jianshu Li, Shuicheng Yan, and Jiashi Feng. A²-Nets: Double attention networks. In NeurIPS, 2018.

 [5] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
 [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
 [7] Zilin Gao, Jiangtao Xie, Qilong Wang, and Peihua Li. Global secondorder pooling convolutional networks. In CVPR, 2019.

 [8] Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. CurriculumNet: Weakly supervised learning from large-scale web images. In ECCV, 2018.
 [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
 [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016.
 [11] Jie Hu, Li Shen, and Gang Sun. Squeezeandexcitation networks. In CVPR, 2018.
 [12] Lei Huang, Xianglong Liu, Yang Liu, Bo Lang, and Dacheng Tao. Centered weight normalization in accelerating training of deep neural networks. In ICCV, 2017.
 [13] Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batchnormalized models. In NeurIPS, 2017.
 [14] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
 [15] Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In NeurIPS, 2016.
 [16] ChenYu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In Artificial intelligence and statistics, 2016.
 [17] Boyi Li, Felix Wu, Kilian Q. Weinberger, and Serge J. Belongie. Positional normalization. arXiv:1907.04312, 2019.
 [18] Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. Webvision database: Visual learning and understanding from web data. arXiv:1708.02862, 2017.
 [19] Xilai Li, Wei Sun, and Tianfu Wu. Attentive normalization. arXiv:/1908.01259, 2019.
 [20] Qianli Liao, Kenji Kawaguchi, and Tomaso Poggio. Streaming normalization: Towards simpler and more biologicallyplausible normalizations for online and recurrent learning. arXiv:1610.06160, 2016.
 [21] Lucas Deecke, Iain Murray, and Hakan Bilen. Mode normalization. In ICLR, 2019.
 [22] Ping Luo, Zhanglin Peng, Jiamin Ren, and Ruimao Zhang. Do normalization layers in a deep convnet really need to be distinct? arXiv:1811.07727, 2018.
 [23] Ping Luo, Jiamin Ren, Zhanglin Peng, Ruimao Zhang, and Jingyu Li. Differentiable learningtonormalize via switchable normalization. In ICLR, 2019.
 [24] Ping Luo, Ruimao Zhang, Jiamin Ren, Zhanglin Peng, and Jingyu Li. Switchable normalization for learningtonormalize deep representation. IEEE Trans. Pattern Anal. Mach. Intell., 2019.
 [25] Ping Luo, Peng Zhanglin, Shao Wenqi, Zhang Ruimao, Ren Jiamin, and Wu Lingyun. Differentiable dynamic normalization for learning deep representation. In ICML, 2019.
 [26] Ningning Ma, Xiangyu Zhang, HaiTao Zheng, and Jian Sun. Shufflenet V2: practical guidelines for efficient CNN architecture design. In ECCV, 2018.
 [27] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
 [28] Hyeonseob Nam and HyoEun Kim. Batchinstance normalization for adaptively styleinvariant neural networks. In NeurIPS, 2018.
 [29] Xingang Pan, Ping Luo, Jianping Shi, and Xiaoou Tang. Two at once: enhancing learning and generalization capacities via ibnnet. In ECCV, 2018.
 [30] Xingang Pan, Xiaohang Zhan, Jianping Shi, Xiaoou Tang, and Ping Luo. Switchable whitening for deep representation learning. In ICCV, 2019.
 [31] Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, and Aaron C. Courville. Learning visual reasoning without strong priors. arXiv:1707.03017, 2017.
 [32] Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, 2016.
 [33] Wenqi Shao, Tianjian Meng, Jingyu Li, Ruimao Zhang, Yudian Li, Xiaogang Wang, and Ping Luo. Ssn: Learning sparse switchable normalization via sparsestmax. In CVPR, 2019.
 [34] Wenqi Shao, Shitao Tang, Xingang Pan, Ping Tan, Xiaogang Wang, and Ping Luo. Channel equilibrium networks for learning deep representation. arXiv:2003.00214, 2020.
 [35] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
 [36] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, 2016.
 [37] Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, and Liang Lin. Batch kalman normalization: Towards training deep neural networks with microbatches. 2018.
 [38] Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, and Qinghua Hu. Ecanet: Efficient channel attention for deep convolutional neural networks. In CVPR, 2020.
 [39] Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
 [40] Ruimao Zhang, Wei Yang, Zhanglin Peng, Pengxu Wei, Xiaogang Wang, and Liang Lin. Progressively diffused networks for semantic visual parsing. Pattern Recognition, 90:78–86, 2019.
 [41] Zhaoyang Zhang, Jingyu Li, Wenqi Shao, Zhanglin Peng, Ruimao Zhang, Xiaogang Wang, and Ping Luo. Differentiable learningtogroup channels via groupable convolutional neural networks. In ICCV, 2019.
 [42] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ADE20K dataset. In CVPR, 2017.