1 Introduction
As the default recommendation, rectified linear unit (ReLU) [28, 18]
is one of the few milestones in the deep learning revolution. It is simple and powerful, greatly improving the performance of feedforward networks. Thus, it has been widely used in many successful architectures (e.g. ResNet
[12], MobileNet [13, 31, 14] and ShuffleNet [45, 25]) for different vision tasks (e.g. recognition, detection, segmentation). ReLU and its generalizations, either non-parametric (LeakyReLU [26]) or parametric (PReLU [11]), are static: they perform in exactly the same way for different inputs (e.g. images). This naturally raises a question: should rectifiers be fixed, or adaptive to the input? In this paper, we investigate dynamic rectifiers to answer this question.
We propose Dynamic ReLU (DYReLU), a piecewise linear function whose parameters are computed by a hyper function over the input x. For example, Figure 1 illustrates that the slopes of two linear functions are determined by the hyper function. The key idea is that the global context of all input elements is encoded in the hyper function to adapt the piecewise linear activation function. This significantly increases representation capability, especially for lightweight neural networks (e.g. MobileNet). Meanwhile, it is computationally efficient, as the hyper function is simple and incurs negligible extra computational cost.
Furthermore, we explore three variations of dynamic ReLU, which share activation functions across spatial locations and channels in different ways: (a) spatial and channel-shared DYReLUA, (b) spatial-shared and channel-wise DYReLUB, and (c) spatial and channel-wise DYReLUC. We have two findings. First, the channel-wise variations (DYReLUB and DYReLUC) are more suitable for image classification. Second, for keypoint detection, the channel-wise variations (DYReLUB and DYReLUC) are more suitable for the backbone network, while the spatial-wise variation (DYReLUC) is more suitable for the head network.
We demonstrate the effectiveness of DYReLU on both image classification (ImageNet) and keypoint detection (COCO). Without bells and whistles, simply replacing static ReLU with dynamic ReLU in multiple networks (i.e. ResNet, MobileNetV2 and MobileNetV3) achieves solid improvement with only a slight increase (about 5%) in computational cost. For instance, with MobileNetV2 ×1.0, our method gains 4.2% top-1 accuracy on image classification and 3.5 AP on keypoint detection, respectively.
2 Related Work
Activation Functions: As a key component of deep neural networks, the activation function introduces non-linearity. Among various activation functions, ReLU [10, 28, 18] is the most widely used. Three generalizations of ReLU are based on using a non-zero slope α for negative input. Absolute value rectification [18] fixes α = −1. LeakyReLU [26] fixes α to a small value, while PReLU [11] treats α as a learnable parameter. RReLU [41] takes a further step by making the parameter a random number sampled from a uniform distribution. Maxout [8] generalizes ReLU further by dividing the input into groups and outputting the maximum of each group. One problem of ReLU is that it is not smooth. A number of smooth activation functions have been developed to address this, such as softplus [6], ELU [4], SELU [20] and Mish [27]. PELU [34] introduced three trainable parameters into ELU. Recently, empowered by neural architecture search (NAS) techniques [46, 30, 47, 23, 40, 2, 33, 36], Ramachandran et al. [29] found several novel activation functions, such as the Swish function. Different from these static activation functions, which are input independent, our dynamic ReLU adapts the activation function to the input.
Dynamic Neural Networks: Our method is related to recent work on dynamic neural networks [21, 24, 35, 38, 43, 16, 15, 42, 3]. DNN [24], SkipNet [35] and BlockDrop [38]
learn an additional controller for skipping part of an existing model by using reinforcement learning. MSDNet
[16] allows early exit based on the prediction confidence. Slimmable Nets [43] learn a single neural network executable at different widths. Once-for-all [1] proposes a progressive shrinking algorithm to train one network that supports multiple sub-networks. Hypernetworks [9] generate network parameters using another hyper network. SENet [15] squeezes the global context and uses it to reweight channels. Dynamic convolution [42, 3] aggregates multiple convolution kernels based on input-dependent attention. Compared with these works, our method shifts the focus from kernel weights to activation functions, and shows that dynamic ReLU is very powerful.
Efficient CNNs: Recently, designing efficient CNN architectures [17, 13, 31, 14, 45, 25] has been an active research area. MobileNetV1 [13] decomposes convolution into depth-wise convolution and point-wise convolution. MobileNetV2 [31] introduces inverted residuals and linear bottlenecks. MobileNetV3 [14] applies squeeze-and-excitation [15] and employs a platform-aware neural architecture search approach [33] to find the optimal network structure. ShuffleNet [45, 25] further reduces the MAdds of convolution by channel shuffle operations. ShiftNet [37] replaces expensive spatial convolution with the shift operation and point-wise convolution. Our method provides a new and effective component for efficient networks: it can be used directly in these networks by replacing static ReLU with our dynamic ReLU, at negligible extra computational cost.
3 Dynamic ReLU
We will describe dynamic ReLU (DYReLU) in this section. It is a dynamic piecewise linear function, whose parameters are input dependent. DYReLU does NOT increase either the depth or the width of the network, but increases the model capability efficiently with negligible extra computational cost.
This section is organized as follows. We first introduce generic dynamic activation. Then we present the mathematical definition of DYReLU and its implementation. Finally, we compare it with prior work.
3.1 Dynamic Activation
For a given input vector (or tensor) x, the dynamic activation is defined as a function f_{θ(x)}(x) with learnable parameters θ(x), which adapt to the input x. As shown in Figure 1, it involves two functions:
hyper function θ(x): computes the parameters of the activation function from the input x.

activation function f_{θ(x)}(x): computes the activation of the input. Its parameters θ(x) are generated by the hyper function.
Note that the hyper function θ(x) encodes the global context of all input elements {x_c} to determine the appropriate activation function. This enables significantly more representation power than its static counterparts (e.g. sigmoid, tanh, h-swish [14], ReLU [28, 18], LeakyReLU [26], PReLU [11]), especially for lightweight models (e.g. MobileNet). Next, we discuss dynamic ReLU.
3.2 Definition and Implementation of Dynamic ReLU
Definition: Let us denote the traditional (static) ReLU as y = max(x, 0), where x is the input vector. For the c-th channel, the activation is computed as y_c = max(x_c, 0), where x_c is the input on that channel. In contrast, DYReLU is defined as the maximum of multiple (K) linear functions whose coefficients are input dependent:

y_c = f_{θ(x)}(x_c) = max_{1≤k≤K} { a_c^k(x) x_c + b_c^k(x) },    (1)

where the linear coefficients (a_c^k, b_c^k) are the output of the hyper function θ:

[a_1^1, …, a_C^1, …, a_1^K, …, a_C^K, b_1^1, …, b_C^1, …, b_1^K, …, b_C^K]^T = θ(x),    (2)

where C is the number of channels. Note that the activation parameters (a_c^k, b_c^k) of each channel are determined by all input channels, i.e. a_c^k and b_c^k are related not only to the corresponding input x_c, but also to the other input elements x_j (j ≠ c).
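As a concrete illustration of Eq. (1), the sketch below evaluates DYReLU given the coefficients a_c^k and b_c^k. Here the coefficients are passed in directly rather than produced by a hyper function, and all names are ours, not the authors' code:

```python
import numpy as np

def dy_relu(x, a, b):
    """Eq. (1): y_c = max_k { a_c^k * x_c + b_c^k }.

    x: (C,) input vector; a, b: (C, K) per-channel slopes and intercepts
    (in the full method these come from the hyper function theta(x))."""
    return np.max(a * x[:, None] + b, axis=1)

# With a^1 = 1, a^2 = 0 and b = 0 (the default initialization discussed
# below), DYReLU reduces to the static ReLU y_c = max(x_c, 0).
x = np.array([-2.0, 0.5, 3.0])
a = np.tile([1.0, 0.0], (3, 1))
b = np.zeros((3, 2))
print(dy_relu(x, a, b))  # -> [0.  0.5 3. ]
```

Changing `a` and `b` per input is what makes the function dynamic: the same `x` can map to different activations under different coefficients.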
Example (learning XOR): To make the idea of dynamic ReLU more concrete, we begin with a simple task: learning the XOR function. In this example, we want the network to produce the XOR outputs {0, 1, 1, 0} on the four points {[0,0], [0,1], [1,0], [1,1]}. Compared with the solution in [7] using two linear layers and one static ReLU, DYReLU needs only a single linear layer, as follows:

y = f_{θ(x_1)}(x_1) = a_1^1(x_1) x_1,  with x_1 = Wx,    (3)

where W = [1, −1] is the weight matrix of the linear layer, which has a single output x_1. Thus, the activation function has only one channel output; we use the subscript 1 to keep the notation consistent with Eq (1). DYReLU in this case has only one linear function (K = 1), with a single non-zero parameter a_1^1 that equals the input x_1. Essentially, this is equivalent to a quadratic function y = x_1^2, which takes the values {0, 1, 1, 0} on the four points. This example demonstrates that dynamic ReLU gains representation power from its hyper function.
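The XOR construction above can be checked numerically. The sketch below assumes W = [1, −1] and the dynamic slope a_1^1 = x_1 discussed above; it is an illustration of the argument, not the authors' implementation:

```python
import numpy as np

# One linear layer z = W x with W = [1, -1], followed by DYReLU with a
# single linear function (K = 1) whose slope equals its own input:
# y = a(z) * z with a(z) = z, i.e. y = z^2 = (x1 - x2)^2.
W = np.array([1.0, -1.0])

def xor_net(x):
    z = W @ x          # single-output linear layer
    return z * z       # dynamic slope a = z gives the quadratic

for x, target in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]:
    print(x, xor_net(np.array(x, dtype=float)), target)
```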
Implementation: Next, we show how to model the hyper function θ(x) in CNNs, where the input x is a 3D feature tensor. We use a lightweight network similar to Squeeze-and-Excitation (SE) [15] to model the hyper function. The global spatial information is first squeezed by global average pooling. It is then followed by two fully connected layers (with a ReLU between them) and a normalization layer. Different from SE, the output has 2KC elements, corresponding to the residuals of a_{1:C}^{1:K} and b_{1:C}^{1:K}, denoted as Δa_c^k and Δb_c^k. We simply use 2σ(x) − 1 to normalize each residual between −1 and 1, where σ denotes the sigmoid function. The final output of the hyper function is computed as the sum of the initialization and the residual as follows:

a_c^k(x) = α^k + λ_a Δa_c^k(x),   b_c^k(x) = β^k + λ_b Δb_c^k(x),    (4)

where α^k and β^k are the initialization values of a_c^k and b_c^k, respectively, and λ_a and λ_b are scalars that control the range of the residuals. α^k, β^k, λ_a and λ_b are hyper-parameters. For the case of K = 2, the default initialization values are α^1 = 1 and α^2 = β^1 = β^2 = 0, corresponding to static ReLU. The default λ_a and λ_b are 1.0 and 0.5, respectively.
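The pipeline above (global pooling, two FC layers with a ReLU, 2σ(·) − 1 normalization, then Eq. (4)) can be sketched as follows. The weights W1 and W2 are random stand-ins for the two learned FC layers, and all helper names are ours:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def hyper_fn(x, W1, W2, K=2, lam_a=1.0, lam_b=0.5):
    """Sketch of the channel-wise hyper function for K = 2.

    x: (C, H, W) feature map; W1, W2: the two FC layers."""
    C = x.shape[0]
    s = x.mean(axis=(1, 2))              # global average pooling -> (C,)
    h = np.maximum(W1 @ s, 0.0)          # FC1 + ReLU
    r = 2.0 * sigmoid(W2 @ h) - 1.0      # 2KC residuals, each in (-1, 1)
    da = r[:K * C].reshape(C, K)
    db = r[K * C:].reshape(C, K)
    alpha = np.array([1.0, 0.0])         # a^1 = 1, a^2 = 0 (static ReLU init)
    beta = np.zeros(K)                   # b^1 = b^2 = 0
    return alpha + lam_a * da, beta + lam_b * db   # Eq. (4)

rng = np.random.default_rng(0)
C, K, R = 8, 2, 4                        # R: reduction ratio of FC1
x = rng.standard_normal((C, 6, 6))
W1 = rng.standard_normal((C // R, C))          # C -> C/R
W2 = rng.standard_normal((2 * K * C, C // R))  # C/R -> 2KC
a, b = hyper_fn(x, W1, W2)
print(a.shape, b.shape)  # per-channel slopes and intercepts, each (8, 2)
```

By construction the slopes stay within λ_a of their initialization and the intercepts within λ_b of zero.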
3.3 Relation to Prior Work
Type  relation to DYReLU

ReLU [28, 18]  special case with a static hyper function
LeakyReLU [26]  special case with a static hyper function
PReLU [11]  special case with a static (learnable) hyper function
SE [15]  special case with a single linear function (K = 1) and zero intercept
Maxout [8]  DYReLU is a dynamic and efficient Maxout
Table 1 shows the relationship between DYReLU and prior work. Three special cases of DYReLU are equivalent to ReLU [28, 18], LeakyReLU [26] and PReLU [11], for which the hyper function becomes static. Squeeze-and-Excitation [15] is another special case of DYReLU, with a single linear function (K = 1) and zero intercept.
DYReLU is a dynamic and efficient Maxout [8], with significantly less computation but even better performance. Maxout outputs the maximum over the results of multiple convolutional kernels. In contrast, DYReLU applies dynamic linear transforms to the result of a single convolutional kernel and outputs the maximum of them. These dynamic linear transforms are powerful yet computationally efficient.
4 Variations of Dynamic ReLU
In this section, we introduce two more variations of dynamic ReLU, in addition to the one discussed in section 3.2. The three options share the activation function in different ways:

DYReLUA: the activation function is spatial and channel-shared.

DYReLUB: the activation function is spatial-shared and channel-wise.

DYReLUC: the activation function is spatial and channel-wise.
DYReLUB has been discussed in section 3.2.
4.1 Network Structure and Complexity
The network structures of three variations are shown in Figure 2. The detailed explanation is discussed as follows:
DYReLUA (Spatial and Channel-shared): the same piecewise linear activation function is shared across all spatial positions and channels. Its hyper function has a similar network structure (shown in Figure 2(a)) to DYReLUB, except that the number of outputs is reduced to 2K. Compared to DYReLUB, DYReLUA has lower computational cost, but less representation capability.
DYReLUB (Spatial-shared and Channel-wise): the implementation details are given in section 3.2 and the network structure is shown in Figure 2(b). The activation function requires 2KC parameters (2K per channel), which are computed by the hyper function.
DYReLUC (Spatial and Channel-wise): as shown in Figure 2(c), each input element x_{c,h,w} has a unique activation function, where the subscript c,h,w indicates the c-th channel at the h-th row and w-th column of a feature map with dimension C×H×W. This introduces an issue: the output dimension is too large (2KCHW) to generate with a fully connected layer. We address it by decoupling spatial locations from channels. Specifically, another branch for computing spatial attention is introduced. The final output is computed as the product of the channel-wise parameters (2KC elements, computed as in DYReLUB) and the spatial attentions (HW elements). The spatial attention branch is simple, including a 1×1 convolution with a single output channel and a normalization function, which is a softmax with an upper cutoff as follows:
π_{h,w} = min{ γ · exp(z_{h,w}/τ) / Σ_{h′,w′} exp(z_{h′,w′}/τ), 1 },    (5)

where z is the output of the 1×1 convolution, τ is the temperature of the softmax, and γ is a scalar. The softmax is scaled up by γ to prevent gradient vanishing. We empirically set γ = HW/3, making the average attention 1/3. A large temperature (τ = 10) is used to prevent sparsity during the early training stage. The upper cutoff constrains the attention between zero and one.
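The scaled, clipped softmax described above can be sketched as follows; the constants assumed here (γ = HW/3, τ = 10) match the defaults stated in the text, and the function name is ours:

```python
import numpy as np

def spatial_attention(z, tau=10.0):
    """Eq. (5): temperature softmax over all H*W positions, scaled by
    gamma = H*W/3 and clipped at 1. z: (H, W) output of the 1x1 conv."""
    H, W = z.shape
    e = np.exp(z.flatten() / tau)
    pi = (H * W / 3.0) * e / e.sum()     # scaled softmax, mean 1/3
    return np.minimum(pi, 1.0).reshape(H, W)

att = spatial_attention(np.zeros((4, 4)))
print(att[0, 0])  # uniform logits -> every position gets attention 1/3
```

Because the softmax sums to one over the HW positions, scaling by HW/3 makes the attention average 1/3, and the `min` keeps every value at most 1.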
Variation  Avg Pooling  FC1  FC2  Spatial Attention

DYReLUA  O(HWC)  O(C²/R)  O(2KC/R)  –
DYReLUB  O(HWC)  O(C²/R)  O(2KC²/R)  –
DYReLUC  O(HWC)  O(C²/R)  O(2KC²/R)  O(HWC)
Computational Complexity: DYReLU is computationally efficient. For a feature map with dimension C×H×W, the computational complexities of the three DYReLU variations are listed in Table 2. Compared to a 1×1 convolution (O(HWC²)), DYReLU reduces complexity by an order of magnitude.
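To make the comparison concrete, the rough operation counts below plug example dimensions into the per-component costs of the channel-wise hyper function (pooling, two FC layers) and compare them with a 1×1 convolution; the dimensions are illustrative, not from the paper:

```python
# Rough MAdds of the DYReLUB hyper function vs. a 1x1 convolution on the
# same feature map, following the per-component costs listed in Table 2.
H, W, C, K, R = 28, 28, 64, 2, 8

pooling = H * W * C             # global average pooling
fc1 = C * (C // R)              # FC1: C -> C/R
fc2 = (C // R) * 2 * K * C      # FC2: C/R -> 2KC (channel-wise outputs)
dyrelu_cost = pooling + fc1 + fc2

conv_cost = H * W * C * C       # 1x1 conv with C input/output channels
print(dyrelu_cost, conv_cost, conv_cost / dyrelu_cost)
```

For this example the hyper function costs about 53K MAdds versus roughly 3.2M for the 1×1 convolution, a gap of well over an order of magnitude.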
4.2 Ablations
Next, we investigate the three DYReLU variations by performing ablation studies on two tasks: image classification and keypoint detection. We focus here on selecting the proper DYReLU variation for each task. The details of the datasets, implementation and training setup are given in the next section.
Activation  Top-1  Top-5

ReLU  60.3  82.9 
DYReLUA  63.3  84.2 
DYReLUB  66.4  86.5 
DYReLUC  66.3  86.7 
The comparison among the three DYReLU variations on ImageNet [5] classification is shown in Table 3. Here we use MobileNetV2 with width multiplier ×0.35. Although all three variations improve over the baseline, the channel-wise DYReLUs (variations B and C) are clearly better than the channel-shared DYReLU (variation A). Variations B and C have similar accuracy, showing that the spatial-wise component is not critical for image classification.
Backbone  Head  AP  AP50  AP75  APM  APL  AR

ReLU  ReLU  59.2  84.3  66.4  56.2  65.0  65.6
DYReLUA  ReLU  58.8  84.6  65.3  56.2  64.4  65.4
DYReLUB  ReLU  61.5  85.8  68.9  58.5  67.5  67.9
DYReLUC  ReLU  61.9  86.0  69.2  59.2  67.5  68.3
ReLU  DYReLUA  57.0  83.7  63.5  54.1  62.6  63.6
ReLU  DYReLUB  58.4  83.8  64.9  55.5  64.2  64.6
ReLU  DYReLUC  61.0  85.2  68.6  58.0  67.1  67.3
DYReLUC  DYReLUC  63.3  86.3  71.4  60.3  69.2  69.4
Table 4 shows the ablation results on single-person pose estimation (keypoint detection). Similar to image classification, the channel-wise DYReLUs (variations B and C) are better than the channel-shared one (variation A) in the backbone. In contrast, the spatial-wise variation DYReLUC is critical in the head. This is because keypoint detection is spatially sensitive (distinguishing body joints at the pixel level), especially in the head network, which operates at higher resolutions. The spatial attention allows different magnitudes of activation at different locations, which encourages better learning of DYReLU. Using the spatial-wise DYReLUC in both the backbone and the head achieves a 4.1 AP improvement.
We also observe that the performance is even worse than the baseline if we use DYReLUA in the backbone, or DYReLUA or DYReLUB in the head. We believe DYReLU becomes sensitive when the hyper function is difficult to learn. Specifically, it is hard to learn a spatially-shared hyper function for a spatially sensitive task (i.e. distinguishing between keypoints and background at the pixel level). Thus, learning DYReLUA or DYReLUB in the higher-resolution head is sensitive. This sensitivity can be significantly reduced by introducing spatial attention, as learning a spatial-wise hyper function to distinguish keypoints is easier. We also find that when spatial attention is used in the head network, training converges much faster at the beginning.
In summary, we have two findings from these ablations: (a) channel-wise DYReLU (variation B or C) is important for image classification, and (b) for keypoint detection, the channel-wise variations (DYReLUB and DYReLUC) are more suitable for the backbone, while the spatial and channel-wise DYReLUC is more suitable for the head. Thus, we use DYReLUB for ImageNet classification and DYReLUC for keypoint detection in the next section.
5 Experimental Results
In this section, we present experimental results on image classification and single person pose estimation to demonstrate the effectiveness of DYReLU. We also report ablation studies to analyze different components of our approach.
5.1 ImageNet Classification
We use ImageNet [5] for all classification experiments. ImageNet has 1000 object classes, including 1,281,167 images for training and 50,000 images for validation. We evaluate DYReLU on three CNN architectures (MobileNetV2 [31], MobileNetV3 [14] and ResNet [12]), using DYReLU as the activation function after each convolution layer. Note that for MobileNetV3, we remove Squeeze-and-Excitation and replace both ReLU and h-swish with DYReLU. The main results are obtained using the spatial-shared and channel-wise DYReLUB with two linear functions (K = 2). The batch size is 256. We use different training setups for the three architectures as follows:
Training setup for MobileNetV2: The initial learning rate is 0.05 and is scheduled to reach zero within a single cosine cycle. All models are trained with the SGD optimizer (momentum 0.9) for 300 epochs. Label smoothing (0.1) is used. The weight decay, dropout rate and data augmentation vary across width multipliers; the hyper-parameter details are given in appendix 0.A.4.
Training setup for MobileNetV3: The initial learning rate is 0.1 and is scheduled to reach zero within a single cosine cycle. The weight decay is 3e-5 and label smoothing is 0.1. We use the SGD optimizer (momentum 0.9) for 300 epochs, with a dropout rate of 0.1 and 0.2 before the last layer for MobileNetV3-Small and MobileNetV3-Large, respectively. We use more data augmentation (color jittering and Mixup [44]) for MobileNetV3-Large.
Training setup for ResNet: The initial learning rate is 0.1 and drops by 10× at epochs 30 and 60. The weight decay is 1e-4. All models are trained with the SGD optimizer (momentum 0.9) for 90 epochs. We use a dropout rate of 0.1 before the last layer and label smoothing for ResNet-18, ResNet-34 and ResNet-50.
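The two learning-rate schedules above can be sketched as follows (a minimal illustration of the single-cosine-cycle schedule for MobileNet and the step schedule for ResNet, not the authors' training code):

```python
import math

def cosine_lr(epoch, total_epochs=300, base_lr=0.05):
    # Single cosine cycle: starts at base_lr, reaches zero at the last epoch.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

def step_lr(epoch, base_lr=0.1):
    # ResNet schedule: drop by 10x at epochs 30 and 60.
    return base_lr * (0.1 ** sum(epoch >= m for m in (30, 60)))

print(cosine_lr(0), cosine_lr(150), cosine_lr(300))  # 0.05, 0.025, 0.0
print(step_lr(0), step_lr(30), step_lr(60))          # 0.1, 0.01, 0.001
```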
Network  Activation  #Param  MAdds  Top-1  Top-5

MobileNetV2 ×1.0  ReLU  3.5M  300.0M  72.0  91.0
MobileNetV2 ×1.0  DYReLU  7.5M  315.5M  76.2  93.1
MobileNetV2 ×0.75  ReLU  2.6M  209.0M  69.8  89.6
MobileNetV2 ×0.75  DYReLU  5.0M  221.7M  74.3  91.7
MobileNetV2 ×0.5  ReLU  2.0M  97.0M  65.4  86.4
MobileNetV2 ×0.5  DYReLU  3.1M  104.5M  70.3  89.3
MobileNetV2 ×0.35  ReLU  1.7M  59.2M  60.3  82.9
MobileNetV2 ×0.35  DYReLU  2.7M  65.0M  66.4  86.5
MobileNetV3-Large  ReLU/h-swish  5.4M  219.0M  75.2  92.2
MobileNetV3-Large  DYReLU  9.8M  230.5M  75.7  92.5
MobileNetV3-Small  ReLU/h-swish  2.9M  66.0M  67.4  86.4
MobileNetV3-Small  DYReLU  4.0M  68.7M  69.7  88.3
ResNet-50  ReLU  23.5M  3.8G  76.2  92.9
ResNet-50  DYReLU  27.6M  3.92G  77.2  93.4
ResNet-34  ReLU  21.3M  3.6G  73.3  91.4
ResNet-34  DYReLU  25.2M  3.71G  74.4  92.0
ResNet-18  ReLU  11.1M  1.81G  69.8  89.1
ResNet-18  DYReLU  12.8M  1.86G  71.8  90.6
ResNet-10  ReLU  5.2M  0.89G  63.0  84.7
ResNet-10  DYReLU  6.3M  0.91G  66.3  86.7
Activation  K  #Param  MAdds  Top-1  Top-5

ReLU  2  1.7M  59.2M  60.3  82.9
RReLU [41]  2  1.7M  59.2M  60.0  81.9
LeakyReLU [26]  2  1.7M  59.2M  60.9  82.3
PReLU (channel-wise) [11]  2  1.7M  59.2M  62.0  83.4
PReLU (channel-shared) [11]  2  1.7M  59.2M  63.1  84.0
SE [15] + ReLU  2  2.1M  60.9M  62.8  84.6
Maxout [8]  2  2.1M  118.3M  64.9  85.6
Maxout [8]  3  2.4M  177.4M  65.4  86.0
DYReLUB  2  2.7M  65.0M  66.4  86.5
DYReLUB  3  3.1M  67.8M  66.6  86.8

Main Results: We compare DYReLU with its static counterpart on three CNN architectures (MobileNetV2, MobileNetV3 and ResNet) in Table 5. Without bells and whistles, DYReLU outperforms its static counterpart by a clear margin on all three architectures, with a small extra computational cost (about 5%). DYReLU gains more than 1.0% top-1 accuracy on ResNet and more than 4.2% on MobileNetV2. For the state-of-the-art MobileNetV3, our DYReLU outperforms the combination of Squeeze-and-Excitation and h-swish (the key contributions of MobileNetV3); the top-1 accuracy is improved by 2.3% and 0.5% for MobileNetV3-Small and MobileNetV3-Large, respectively. Note that DYReLU achieves larger improvements on smaller models (e.g. MobileNetV2 ×0.35, MobileNetV3-Small, ResNet-10). This is because the smaller models are underfitted due to their limited capacity, and dynamic ReLU significantly boosts their representation capability.
The comparison between DYReLU and prior work is shown in Table 6. Here we use MobileNetV2 ×0.35 and replace ReLU with the activation functions from prior work. Our method outperforms all prior work by a clear margin. Compared to Maxout, which has significantly higher computational cost, DYReLU gains more than 1% top-1 accuracy. This demonstrates that DYReLU not only has more representation capability, but is also computationally efficient. The comparison using MobileNetV2 ×1.0 is shown in appendix 0.A.2; again, our DYReLU outperforms all prior work. Note that channel-shared PReLU is better than channel-wise PReLU, which differs from the finding in [11]. This may be due to the different networks used (MobileNet vs VGG).
5.2 Ablation Studies on ImageNet
In this subsection, we run a number of ablations to analyze DYReLU. We focus on the spatial-shared and channel-wise DYReLUB, and use MobileNetV2 ×0.35 for all ablations. By default, the number of linear functions in DYReLU is set to K = 2, the initialization values of slope and intercept are set to α^1 = 1 and α^2 = β^1 = β^2 = 0, and the ranges of the slope and intercept residuals are set to λ_a = 1.0 and λ_b = 0.5, respectively. The reduction ratio R of the first FC layer in the hyper function is kept at its default value (ablated in Table 8).
Activation  K  intercept  Top-1  Top-5

ReLU  2    60.3  82.9
DYReLU (second function zero)  2    63.8  85.1
DYReLU (second function zero)  2  ✓  64.0  85.2
DYReLU  2    65.7  86.2
DYReLU  2  ✓  66.4  86.5
DYReLU  3    65.9  86.3
DYReLU  3  ✓  66.6  86.8
Piecewise Linear Functions: Table 7 shows the classification accuracy of different piecewise linear functions. Compared to the static counterpart, all dynamic activation functions gain at least 3.5% top-1 accuracy. In addition, changing the second function from zero to a parametric linear function gains a further improvement (1.9%+). The intercept is consistently helpful. The gap between K = 2 and K = 3 is small.


Dynamic ReLU at Different Layers: Table 8 (Left) shows the classification accuracy when using DYReLU at three different layers (after the first 1×1 conv, the depth-wise conv, and the second 1×1 conv) of the inverted residual block in MobileNetV2 ×0.35. The accuracy improves as DYReLU is used in more layers, and using DYReLU for all three layers yields the best accuracy. If only one layer is allowed to use DYReLU, placing it after the depth-wise convolution performs best.
Reduction Ratio R: The reduction ratio R of the first FC layer in the hyper function controls the representation capacity and computational cost of DYReLU. The comparison across reduction ratios is shown in Table 8 (Right); a moderate R achieves a good trade-off between accuracy and cost.



Initialization of Slope (α^k in Eq (4)): As shown in Table 9 (Left), the classification accuracy is not sensitive to the initialization values of the slopes, as long as the first slope α^1 is not close to zero and the second slope α^2 is non-negative.
5.3 COCO SinglePerson Keypoint Detection
We use the COCO 2017 dataset [22] to evaluate dynamic ReLU on single-person keypoint detection. All models are trained on train2017, on person instances labeled with 17 keypoints, and evaluated on val2017 (5000 images) using the mean average precision (AP) over 10 object keypoint similarity (OKS) thresholds as the metric.
Implementation Details: We evaluate DYReLU on two backbone networks (MobileNetV2 and MobileNetV3) and the head network used in [3]. The head simply uses upsampling and four MobileNetV2 inverted residual bottleneck blocks. We compare DYReLU with its static counterpart in both the backbone and the head. The spatial and channel-wise DYReLUC is used here, since spatial attention is important for keypoint detection, especially in the head network (see section 4.2). Note that when using MobileNetV3 as the backbone, we remove Squeeze-and-Excitation and replace both ReLU and h-swish with DYReLU. The number of linear functions in DYReLU is set to K = 2, the initialization values of slope and intercept are set to α^1 = 1 and α^2 = β^1 = β^2 = 0, and the ranges of the slope and intercept residuals are set to λ_a = 1.0 and λ_b = 0.5, respectively.
Training setup: We follow the training setup in [32]. All models are trained from scratch for 210 epochs using the Adam optimizer [19]. The initial learning rate is 1e-3 and is dropped to 1e-4 and 1e-5 at the 170th and 200th epoch, respectively. All human detection boxes are cropped from the image and resized to 256×192. The data augmentation includes random rotation, random scaling, flipping, and half-body augmentation.
Testing: We use the person detectors provided by [39] and follow the evaluation procedure in [39, 32]. The keypoints are predicted on the average heatmap of the original and flipped images. The highest heat value location is then adjusted by a quarter offset from the highest response to the second highest response.
Backbone  Activation  #Param  MAdds  AP  AP50  AP75  APM  APL  AR

MBNetV2 ×1.0  ReLU  3.4M  993.7M  64.6  87.0  72.4  61.3  71.0  71.0
MBNetV2 ×1.0  DYReLU  9.0M  1026.9M  68.1  88.5  76.2  64.8  74.3  73.9
MBNetV2 ×0.5  ReLU  1.9M  794.8M  59.2  84.3  66.4  56.2  65.0  65.6
MBNetV2 ×0.5  DYReLU  4.6M  820.3M  63.3  86.3  71.4  60.3  69.2  69.4
MBNetV3-Large  ReLU/h-swish  4.1M  896.4M  65.7  87.4  74.1  62.3  72.2  71.7
MBNetV3-Large  DYReLU  10.1M  926.6M  67.2  88.2  75.4  64.1  73.2  72.9
MBNetV3-Small  ReLU/h-swish  2.1M  726.9M  57.1  83.8  63.7  55.0  62.2  64.1
MBNetV3-Small  DYReLU  4.8M  747.9M  60.7  85.7  68.1  58.1  66.3  67.3
Main Results: Table 10 shows the comparison between DYReLU and its static counterpart on four different backbone networks (MobileNetV2 ×1.0 and ×0.5, MobileNetV3-Small and MobileNetV3-Large). The head network [3] is shared across these four experiments. DYReLU outperforms the baselines by a clear margin. It gains 3.5 and 4.1 AP when using MobileNetV2 with width multipliers ×1.0 and ×0.5, respectively. It also gains 1.5 and 3.6 AP when using MobileNetV3-Large and MobileNetV3-Small, respectively. These results demonstrate that our method is also effective on keypoint detection.
6 Conclusion
In this paper, we introduced Dynamic ReLU (DYReLU), which adapts a piecewise linear activation function dynamically to each input. Compared to its static counterpart (ReLU and its generalizations), DYReLU significantly improves the representation capability at negligible extra computational cost, and is thus well suited to efficient CNNs. Dynamic ReLU can be easily integrated into existing CNN architectures: by simply replacing ReLU (or h-swish) in ResNet and MobileNet (V2 and V3) with DYReLU, we achieve solid improvements on both image classification and human pose estimation. We hope DYReLU becomes a useful component for efficient network architectures.
Appendix 0.A Appendix
In this appendix, we report additional analysis and experimental results for our dynamic ReLU (DYReLU) method.
0.A.1 Is DYReLU Dynamic?
In this section, we check whether DYReLU is dynamic. To do so, we inspect the input and output of DYReLU, expecting different activation values y_c across different images for a fixed input value (e.g. x_c = 1). In contrast, for a given input (e.g. x_c = 1), the output of ReLU is fixed (y_c = 1) regardless of the channel or input image. Thus, the input-output pairs of ReLU fall on two lines: y_c = x_c if x_c > 0, and y_c = 0 otherwise.
Fig. 3 plots the input and output values of DYReLU at different blocks (from low level to high level) for the 50,000 validation images of ImageNet [5]. We confirm that the activation value y_c varies over a range (covered by the blue dots in Fig. 3) for a fixed input x_c. This demonstrates that the learnt DYReLU is dynamic with respect to the input features. Furthermore, the distribution of input-output pairs varies across blocks, indicating that different dynamic functions are learnt at different levels.
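A toy version of this check can be run with a random-weight stand-in for the hyper function: feeding the same scalar input under two different global contexts yields different activations, which a static ReLU cannot do. All weights and names here are illustrative, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
C, K = 4, 2
W = rng.standard_normal((2 * K * C, C)) * 0.5   # stand-in hyper function

def dy_relu(x):
    # Residuals in (-1, 1), then Eq. (4) with default init and Eq. (1).
    r = 2.0 / (1.0 + np.exp(-(W @ x))) - 1.0
    a = np.array([1.0, 0.0]) + 1.0 * r[:K * C].reshape(C, K)
    b = 0.5 * r[K * C:].reshape(C, K)
    return np.max(a * x[:, None] + b, axis=1)

x1 = np.array([1.0, 0.2, -0.3, 0.8])   # two inputs sharing x_0 = 1.0
x2 = np.array([1.0, -1.5, 0.6, 0.1])
print(dy_relu(x1)[0], dy_relu(x2)[0])  # differs: the activation is dynamic
```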
0.A.2 Comparison between DYReLU and Prior Work
Table 11 shows the comparison between DYReLU and prior work using MobileNetV2 ×1.0, complementing the results in Table 6, which use MobileNetV2 ×0.35. The same conclusion holds for both experiments: our method outperforms all prior work. Compared to Maxout [8], which has significantly higher computational cost, DYReLU gains 1.1% and 0.4% top-1 accuracy for K = 2 and K = 3, respectively. This demonstrates that DYReLU not only has more representation capability, but is also computationally efficient.
Activation  K  #Param  MAdds  Top-1  Top-5

ReLU  2  3.5M  300.0M  72.0  91.0
RReLU [41]  2  3.5M  300.0M  72.5  90.8
LeakyReLU [26]  2  3.5M  300.0M  72.7  90.8
PReLU (channel-wise) [11]  2  3.5M  300.0M  72.9  91.0
PReLU (channel-shared) [11]  2  3.5M  300.0M  73.3  91.2
SE [15] + ReLU  2  5.1M  304.8M  74.2  91.9
Maxout [8]  2  5.7M  579.1M  75.1  92.3
Maxout [8]  3  7.8M  866.4M  75.8  92.7
DYReLUB  2  7.5M  315.5M  76.2  93.1
DYReLUB  3  9.2M  322.8M  76.2  93.2

0.A.3 Ablations of DYReLU Variations on Pose Estimation
In this section, we report additional results comparing the three DYReLU variations on COCO keypoint detection [22] (i.e. pose estimation). The three variations are:

DYReLUA: the activation function is spatial and channel-shared.

DYReLUB: the activation function is spatial-shared and channel-wise.

DYReLUC: the activation function is spatial and channel-wise.
Table 12 shows the average precision (AP) for all 16 combinations of ReLU and the DYReLU variations in the backbone and head, complementing the results in Table 4. The original conclusions hold: (a) the spatial-wise DYReLUC is critical in the head network, as the last column of Table 12 has higher AP than the previous three columns, and (b) the optimal choice is a channel-wise variation (B or C) in the backbone and the spatial-wise DYReLUC in the head (see the last two rows of the last column of Table 12). Compared to the baseline that uses ReLU in both backbone and head, using DYReLUC in both achieves a 4.1 AP improvement.
The spatial-wise variation (DYReLUC) is a better fit for keypoint detection, which is spatially sensitive (distinguishing body joints at the pixel level), because the spatial attention allows different magnitudes of activation at different locations. This encourages better learning of DYReLU, especially in the higher-resolution head network.
Backbone \ Head  ReLU  DYReLUA  DYReLUB  DYReLUC

ReLU  59.2  57.0  58.4  61.0
DYReLUA  58.8  51.5  56.5  62.4
DYReLUB  61.5  54.3  58.6  63.2
DYReLUC  61.9  53.5  58.8  63.3
0.A.4 Implementation Details of MobileNetV2
We now give the implementation details of MobileNetV2. Basically, we use larger weight decay, a higher dropout rate and more data augmentation for higher width multipliers (e.g. ×1.0) to prevent overfitting. We use weight decay 2e-5 and dropout 0.1 for width ×0.35, and increase the weight decay (3e-5) and dropout (0.2) for widths ×0.5, ×0.75 and ×1.0. Random cropping/flipping and color jitter are used for all width multipliers. Mixup [44] is used for width ×1.0; without Mixup, the top-1 accuracy of DYReLU drops from 76.2% to 75.7%, which still outperforms the static counterpart (72.0%) by a clear margin.
References
 [1] (2019) Once for all: train one network and specialize it for efficient deployment. ArXiv abs/1908.09791. Cited by: §2.
 [2] (2019) ProxylessNAS: direct neural architecture search on target task and hardware. In International Conference on Learning Representations, External Links: Link Cited by: §2.
 [3] (2019) Dynamic convolution: attention over convolution kernels. ArXiv abs/1912.03458. Cited by: Table 12, §2, Table 4, §5.3, §5.3.
 [4] (2015) Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Cited by: §2.
 [5] (2009) ImageNet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: Figure 3, §0.A.1, Table 11, Table 3, §5.1, Table 5, Table 6, Table 7, Table 8, Table 9.
 [6] (2001) Incorporating second-order functional knowledge for better option pricing. In Advances in neural information processing systems, pp. 472–478. Cited by: §2.
 [7] (2016) Deep learning. The MIT Press. External Links: ISBN 0262035618, 9780262035613 Cited by: §3.2.
 [8] (2013) Maxout networks. arXiv preprint arXiv:1302.4389. Cited by: §0.A.2, Table 11, §2, §3.3, Table 1, Table 6.
 [9] (2017) HyperNetworks. ICLR. Cited by: §2.
 [10] (2000) Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405 (6789), pp. 947–951. Cited by: §2.
 [11] (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In ICCV, Cited by: Table 11, §1, §2, §3.1, §3.3, Table 1, §5.1, Table 6.
 [12] (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §5.1.
 [13] (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §1, §2.
 [14] (2019) Searching for mobilenetv3. CoRR abs/1905.02244. External Links: Link, 1905.02244 Cited by: §1, §2, §3.1, §5.1.
 [15] (2018) Squeeze-and-excitation networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Table 11, §2, §2, §3.2, §3.3, Table 1, Table 6.
 [16] (2018) Multi-scale dense networks for resource efficient image classification. In International Conference on Learning Representations, External Links: Link Cited by: §2.
 [17] (2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. CoRR abs/1602.07360. External Links: Link, 1602.07360 Cited by: §2.
 [18] (2009) What is the best multi-stage architecture for object recognition? In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §2, §3.1, §3.3, Table 1.
 [19] (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §5.3.
 [20] (2017) Self-normalizing neural networks. In Advances in Neural Information Processing Systems, pp. 971–980. Cited by: §2.
 [21] (2017) Runtime neural pruning. In Advances in Neural Information Processing Systems, pp. 2181–2191. External Links: Link Cited by: §2.
 [22] (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §0.A.3, Table 12, §4.2, Table 4, §5.3.
 [23] (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §2.
 [24] (2018) Dynamic deep neural networks: optimizing accuracy-efficiency trade-offs by selective execution. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §2.
 [25] (2018) ShuffleNet V2: practical guidelines for efficient CNN architecture design. In The European Conference on Computer Vision (ECCV), Cited by: §1, §2.
 [26] (2013) Rectifier nonlinearities improve neural network acoustic models. In in ICML Workshop on Deep Learning for Audio, Speech and Language Processing, Cited by: Table 11, §1, §2, §3.1, §3.3, Table 1, Table 6.
 [27] (2019) Mish: a self-regularized non-monotonic neural activation function. arXiv preprint arXiv:1908.08681. Cited by: §2.
 [28] (2010) Rectified linear units improve restricted Boltzmann machines. In ICML, Cited by: §1, §2, §3.1, §3.3, Table 1.
 [29] (2017) Searching for activation functions. arXiv preprint arXiv:1710.05941. Cited by: §2.
 [30] (2018) Regularized evolution for image classifier architecture search. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §2.
 [31] (2018) MobileNetV2: inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520. Cited by: §1, §2, §5.1.
 [32] (2019) Deep high-resolution representation learning for human pose estimation. In CVPR, Cited by: §5.3, §5.3.
 [33] (2019) MnasNet: platform-aware neural architecture search for mobile. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §2.
 [34] (2017) Parametric exponential linear unit for deep convolutional neural networks. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 207–214. Cited by: §2.
 [35] (2018) SkipNet: learning dynamic routing in convolutional networks. In The European Conference on Computer Vision (ECCV), Cited by: §2.
 [36] (2019) FBNet: hardware-aware efficient convnet design via differentiable neural architecture search. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
 [37] (2017) Shift: a zero FLOP, zero parameter alternative to spatial convolutions. Cited by: §2.
 [38] (2018) BlockDrop: dynamic inference paths in residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
 [39] (2018) Simple baselines for human pose estimation and tracking. In European Conference on Computer Vision, Cited by: §5.3.
 [40] (2019) SNAS: stochastic neural architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §2.
 [41] (2015) Empirical evaluation of rectified activations in convolutional network. CoRR. Cited by: Table 11, §2, Table 6.
 [42] (2019) CondConv: conditionally parameterized convolutions for efficient inference. In NeurIPS, Cited by: §2.
 [43] (2019) Slimmable neural networks. In International Conference on Learning Representations, External Links: Link Cited by: §2.
 [44] (2018) Mixup: beyond empirical risk minimization. In International Conference on Learning Representations, External Links: Link Cited by: §0.A.4, §5.1.
 [45] (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
 [46] (2017) Neural architecture search with reinforcement learning. CoRR abs/1611.01578. Cited by: §2.
 [47] (2018) Learning transferable architectures for scalable image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.