Rectified linear unit (ReLU) is one of the few milestones in the deep learning revolution. It is simple and powerful, and has greatly improved the performance of feed-forward networks. Thus, it has been widely used in many successful architectures (e.g. ResNet, MobileNet [13, 31, 14] and ShuffleNet [45, 25]) for different vision tasks (e.g. recognition, detection, segmentation).
ReLU and its generalizations, either non-parametric (LeakyReLU) or parametric (PReLU), are static. They perform in exactly the same way for different inputs (e.g. images). This naturally raises a question: should rectifiers be fixed or adaptive to the input? In this paper, we investigate dynamic rectifiers to answer this question.
We propose Dynamic ReLU (DY-ReLU), a piecewise linear function whose parameters are computed by a hyper function over the input. For example, Figure 1 illustrates that the slopes of the two linear functions are determined by the hyper function. The key idea is that the global context of all input elements is encoded in the hyper function for adapting the piecewise linear activation function. This enables significantly more representation capability, especially for light-weight neural networks (e.g. MobileNet). Meanwhile, it is computationally efficient, as the hyper function is simple and incurs negligible extra computational cost.
Furthermore, we explore three variations of dynamic ReLU. They have different ways of sharing activation functions across spatial locations and channels: (a) spatial and channel-shared DY-ReLU-A, (b) spatial-shared and channel-wise DY-ReLU-B, and (c) spatial and channel-wise DY-ReLU-C. We have two findings. Firstly, channel-wise variations (DY-ReLU-B and DY-ReLU-C) are more suitable for image classification. Secondly, for keypoint detection, channel-wise variations (DY-ReLU-B and DY-ReLU-C) are more suitable for the backbone network while the spatial-wise variation (DY-ReLU-C) is more suitable for the head network.
We demonstrate the effectiveness of DY-ReLU on both image classification (ImageNet) and keypoint detection (COCO). Without bells and whistles, simply replacing static ReLU with dynamic ReLU in multiple networks (i.e. ResNet, MobileNetV2 and MobileNetV3) achieves solid improvement with only a slight (about 5%) increase of computational cost. For instance, when using MobileNetV2, our method gains 4.2% top-1 accuracy on image classification and 3.5 AP on keypoint detection, respectively.
2 Related Work
Activation Functions: As a key component of deep neural networks, the activation function introduces non-linearity. Among various activation functions, ReLU [10, 28, 18] is the most widely used. Three generalizations of ReLU are based on using a nonzero slope α for the negative input. Absolute value rectification fixes α = −1. LeakyReLU fixes α to a small positive value, while PReLU treats α as a learnable parameter. RReLU takes a further step by making α a random number sampled from a uniform distribution. Maxout generalizes ReLU further by dividing the input into groups and outputting the maximum. One problem of ReLU is that it is not smooth. A number of smooth activation functions have been developed to address this, such as softplus, ELU, SELU, and Mish. PELU introduced three trainable parameters into ELU. Recently, empowered by neural architecture search (NAS) techniques [46, 30, 47, 23, 40, 2, 33, 36], Ramachandran et al. found several novel activation functions, such as the Swish function. Different from these static activation functions, which are input independent, our dynamic ReLU adapts the activation function to the input.
Dynamic Neural Networks: Several recent works learn an additional controller for skipping part of an existing model by using reinforcement learning. MSDNet allows early exiting based on the prediction confidence. Slimmable Nets learn a single neural network executable at different widths. Once-for-all proposes a progressive shrinking algorithm to train one network that supports multiple sub-networks. Hypernetworks generate network parameters using another hypernetwork. SENet squeezes the global context and uses it to reweight channels. Dynamic convolution [42, 3] dynamically aggregates convolution kernels based on input-dependent attention. Compared with these works, our method shifts the focus from kernel weights to activation functions, and shows that dynamic ReLU is very powerful.
Efficient CNNs: Recently, designing efficient CNN architectures [17, 13, 31, 14, 45, 25] has been an active research area. MobileNetV1 decomposes convolution into depthwise convolution and pointwise convolution. MobileNetV2 introduces inverted residuals and linear bottlenecks. MobileNetV3 applies squeeze-and-excitation and employs a platform-aware neural architecture search approach to find the optimal network structure. ShuffleNet further reduces the MAdds of convolution by channel shuffle operations. ShiftNet replaces expensive spatial convolution with the shift operation and pointwise convolution. Our method provides a new and effective component for efficient networks: it can be used directly in these networks by replacing static ReLU with our dynamic ReLU, at negligible extra computational cost.
3 Dynamic ReLU
We describe dynamic ReLU (DY-ReLU) in this section. It is a dynamic piecewise linear function whose parameters are input dependent. DY-ReLU increases neither the depth nor the width of the network, but increases the model capability efficiently, with negligible extra computational cost.
This section is organized as follows. We first introduce the generic dynamic activation. Then, we present the mathematical definition of DY-ReLU and its implementation. Finally, we compare it with prior work.
3.1 Dynamic Activation
We define the dynamic activation as a function f_{θ(x)}(x) whose parameters θ(x) adapt to the input x. It consists of two components:
Hyper function θ(x): computes the parameters for the activation function.
Activation function f_{θ(x)}(x): computes the activation for the input x. Its parameters are generated by the hyper function θ(x).
Note that the hyper function encodes the global context of all input elements (x) to determine the appropriate activation function. This enables significantly more representation power than its static counterparts (e.g. sigmoid, tanh, h-swish, ReLU [28, 18], LeakyReLU, PReLU), especially for light-weight models (e.g. MobileNet). Next, we discuss dynamic ReLU.
3.2 Definition and Implementation of Dynamic ReLU
Definition: Let us denote the traditional (static) ReLU as y = max(x, 0), where x is the input vector. The activation of the c-th channel is computed as y_c = max(x_c, 0), where x_c is the input on the c-th channel. In contrast, DY-ReLU is defined as the maximum of multiple (K) linear functions as follows:

y_c = max_{1≤k≤K} ( a_c^k(x) x_c + b_c^k(x) ),   (1)
where the linear coefficients (a_c^k, b_c^k) are the output of the hyper function θ(x):

[a_c^k(x), b_c^k(x)]_{1≤k≤K, 1≤c≤C} = θ(x),   (2)
where C is the number of channels. Note that the activation parameters (a_c^k, b_c^k) of each channel are determined by considering all input channels, i.e. a_c^k and b_c^k are related not only to the corresponding input x_c, but also to the other input elements x_j (j ≠ c).
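As a minimal NumPy sketch (shapes and names are our choosing), the DY-ReLU output for one input vector, given coefficients already produced by the hyper function, is a per-channel maximum over K affine maps:

```python
import numpy as np

def dy_relu(x, a, b):
    """Per-channel maximum over K linear functions (Eq (1)).

    x: input vector of shape (C,); a, b: coefficient arrays of shape (K, C)
    produced by the hyper function. Returns y with
    y[c] = max_k (a[k, c] * x[c] + b[k, c]).
    """
    return np.max(a * x + b, axis=0)

# With K = 2, slopes (1, 0) and zero intercepts, DY-ReLU reduces to static ReLU.
x = np.array([-1.5, 0.0, 2.0])
a = np.array([[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]])
b = np.zeros((2, 3))
y = dy_relu(x, a, b)  # same as np.maximum(x, 0)
```

The broadcasting over the K axis makes the static special cases (ReLU, LeakyReLU, PReLU) a matter of choosing constant coefficients.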
Example (learning XOR): To make the idea of dynamic ReLU more concrete, we begin with a simple task, i.e. learning the XOR function. In this example, we want our network to perform correctly on the four points (0, 0), (0, 1), (1, 0) and (1, 1). Compared with the classic solution using two linear layers and one static ReLU, DY-ReLU only needs a single linear layer as follows:

y_1 = f_{θ(x)}(x_1),  where x_1 = Wx,   (3)
where W is the weight matrix of the single linear layer, which has a single output x_1. Thus, the activation function only has one channel output y_1. Here, we use the subscript 1 to be consistent with Eq (1). DY-ReLU for this case only has one linear function (K = 1), with one non-zero parameter, the slope a_1^1(x), which equals the input x_1. Essentially, this is equivalent to a quadratic function y_1 = x_1^2. This example demonstrates that dynamic ReLU has more representation power due to its hyper function.
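The XOR construction above can be checked numerically; the weights below (w = [1, −1], giving x_1 = p − q) are one illustrative choice on our part, not necessarily the paper's:

```python
def xor_net(p, q):
    # Single linear layer with (hypothetical) weights w = [1, -1]: x1 = p - q.
    x1 = p - q
    # DY-ReLU with K = 1: the hyper function sets the slope equal to the
    # input (a = x1) and the intercept to zero, so y1 = a * x1 = x1 ** 2.
    return x1 * x1

# xor_net reproduces XOR on the four corner points:
# (0,0) -> 0, (0,1) -> 1, (1,0) -> 1, (1,1) -> 0.
```

A single static linear layer cannot separate these four points, which is why the input-dependent slope matters here.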
Implementation: Next, we show how to model the hyper function θ(x) in CNNs, where the input x is a 3D tensor (C × H × W). We use a light-weight network to model the hyper function, similar to Squeeze-and-Excitation (SE). The global spatial information is first squeezed by global average pooling. It is then followed by two fully connected layers (with a ReLU between them) and a normalization layer. Different from SE, the output has 2KC elements, corresponding to the residuals of a_{1:C}^{1:K} and b_{1:C}^{1:K}, denoted as Δa_c^k and Δb_c^k. We simply use 2σ(x) − 1 to normalize the residuals between −1 and 1, where σ denotes the sigmoid function. The final output of the hyper function is computed as the sum of the initialization and the residual as follows:

a_c^k(x) = α^k + λ_a Δa_c^k(x),  b_c^k(x) = β^k + λ_b Δb_c^k(x),   (4)
where α^k and β^k are the initialization values of a_c^k and b_c^k, respectively, and λ_a and λ_b are scalars that control the range of the residuals. α^k, β^k, λ_a and λ_b are hyper parameters. For the case of K = 2, the default initialization values are α^1 = 1, α^2 = β^1 = β^2 = 0, corresponding to the static ReLU. The default λ_a and λ_b are 1.0 and 0.5, respectively.
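A rough NumPy sketch of this hyper function for K = 2 (the weight shapes, variable names, and the absence of the normalization layer's learnable part are our simplifying assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hyper_function(x, W1, W2, K=2, lam_a=1.0, lam_b=0.5):
    """x: feature map of shape (C, H, W); W1: (C_mid, C); W2: (2*K*C, C_mid)."""
    C = x.shape[0]
    z = x.mean(axis=(1, 2))                  # global average pooling -> (C,)
    h = np.maximum(W1 @ z, 0.0)              # FC-1 followed by ReLU
    r = 2.0 * sigmoid(W2 @ h) - 1.0          # FC-2; residuals normalized to [-1, 1]
    da = r[:K * C].reshape(K, C)             # delta a, per function and channel
    db = r[K * C:].reshape(K, C)             # delta b
    alpha = np.array([[1.0], [0.0]])         # slope init (ReLU-like), K = 2
    beta = np.zeros((2, 1))                  # intercept init
    a = alpha + lam_a * da                   # Eq (4): slopes
    b = beta + lam_b * db                    # Eq (4): intercepts
    return a, b

# With zero weights the residuals vanish, so DY-ReLU starts as static ReLU.
a, b = hyper_function(np.ones((4, 8, 8)), np.zeros((2, 4)), np.zeros((16, 2)))
```

This mirrors the SE-style squeeze (pooling, two FCs) with the output size changed from C to 2KC.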
3.3 Relation to Prior Work
Table 1: relation between DY-ReLU and prior activation functions (Type vs. relation to DY-ReLU).
Table 1 shows the relationship between DY-ReLU and prior work. Three special cases of DY-ReLU are equivalent to ReLU [28, 18], LeakyReLU and PReLU, where the hyper function becomes static. Squeeze-and-Excitation is another special case of DY-ReLU, with a single linear function (K = 1) and zero intercept (b_c^1 = 0).
DY-ReLU is a dynamic and efficient Maxout, with significantly less computation but even better performance. Maxout outputs the maximum over the results of K convolutional kernels. In contrast, DY-ReLU applies K dynamic linear transforms to the result of a single convolutional kernel and outputs the maximum of them. These dynamic linear transforms are powerful yet computationally efficient.
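The cost gap can be made concrete with a back-of-the-envelope multiply-add count (our simplification; it ignores the hyper function itself, whose cost is small):

```python
def madds_per_position(c_in, c_out, ksize, K):
    """Rough multiply-adds at one spatial position.

    Maxout evaluates K full convolutions; DY-ReLU evaluates one convolution
    and then K dynamic linear transforms (one multiply-add per output channel).
    """
    conv = c_in * c_out * ksize * ksize
    return {"maxout": K * conv, "dy_relu": conv + K * c_out}

costs = madds_per_position(c_in=32, c_out=64, ksize=3, K=2)
# The K extra linear transforms are negligible next to a second convolution.
```

For any realistic kernel size, the K − 1 extra convolutions of Maxout dominate the K·c_out multiply-adds that DY-ReLU adds.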
4 Variations of Dynamic ReLU
In this section, we introduce another two variations of dynamic ReLU in addition to the option discussed in section 3.2. These three options have different ways of sharing activation functions as follows:
DY-ReLU-A: the activation function is spatial and channel-shared.
DY-ReLU-B: the activation function is spatial-shared and channel-wise.
DY-ReLU-C: the activation function is spatial and channel-wise.
DY-ReLU-B has been discussed in section 3.2.
4.1 Network Structure and Complexity
The network structures of the three variations are shown in Figure 2 and are explained in detail as follows:
DY-ReLU-A (Spatial and Channel-shared): the same piecewise linear activation function is shared across all spatial positions and channels. Its hyper function has a network structure (shown in Figure 2-(a)) similar to DY-ReLU-B, except that the number of outputs is reduced to 2K (instead of 2KC). Compared to DY-ReLU-B, DY-ReLU-A has lower computational cost but less representation capability.
DY-ReLU-B (Spatial-shared and Channel-wise): the implementation details are introduced in section 3.2 and its network structure is shown in Figure 2-(b). The activation function requires 2KC parameters (2K per channel), which are computed by the hyper function.
DY-ReLU-C (Spatial and Channel-wise): as shown in Figure 2-(c), each input element x_{c,h,w} has a unique activation function, where the subscript indicates the c-th channel at the h-th row and w-th column of a feature map with dimension C × H × W. This introduces an issue: the output dimension (2KCHW) is too large to be generated by a fully connected layer. We address this by decoupling spatial locations from channels. Specifically, another branch is introduced to compute spatial attentions. The final coefficients are computed as the product of the channel-wise parameters (a_c^k, b_c^k) and the spatial attentions (π_{h,w}). The spatial attention branch is simple, including a 1×1 convolution with a single output channel and a normalization function, which is a softmax with an upper cutoff, as follows:

π_{h,w} = min( γ · exp(z_{h,w}/τ) / Σ_{h',w'} exp(z_{h',w'}/τ), 1 ),   (5)
where z_{h,w} is the output of the 1×1 convolution, τ is the temperature, and γ is a scalar. The softmax is scaled up by γ to prevent gradient vanishing; we empirically set γ = HW, making the average attention close to 1. A large temperature (τ = 10) is used to prevent sparsity during the early training stage. The upper bound constrains the attention between zero and one.
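The normalization above can be sketched in NumPy (we assume the scale γ = H·W and the default temperature; both are our reading of the text):

```python
import numpy as np

def spatial_attention(z, tau=10.0):
    """Softmax over all H*W positions, scaled up, then clipped at 1.

    z: (H, W) output of the 1x1 convolution; tau: temperature. Scaling by
    gamma = H * W makes the (pre-clip) average attention equal to 1.
    """
    H, W = z.shape
    s = z / tau
    e = np.exp(s - s.max())                  # numerically stable softmax
    pi = e / e.sum()
    return np.minimum(H * W * pi, 1.0)

pi = spatial_attention(np.zeros((3, 4)))     # uniform input -> all attentions are 1
```

Note that for a uniform input map, every position gets attention exactly 1, so the spatially shared case is recovered as a special case.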
Table 2: computational complexity of the three DY-ReLU variations, broken down into average pooling, FC-1, FC-2, and spatial attention.
Computational Complexity: DY-ReLU is computationally efficient. For a feature map of dimension C × H × W, the computational complexities of the three DY-ReLU variations are listed in Table 2. Compared to a 1×1 convolution, the complexity of DY-ReLU is lower by an order of magnitude.
4.2 Ablations of DY-ReLU Variations
Next, we investigate the three DY-ReLU variations by performing ablation studies on two tasks: image classification and keypoint detection. Here, we focus on the studies that enable us to select the proper DY-ReLU variation for each task. The details of the datasets, implementation and training setup are given in the next section.
The comparison among the three DY-ReLU variations on ImageNet classification is shown in Table 3. Here we use MobileNetV2 with a small width multiplier. Although all three variations improve over the baseline, the channel-wise DY-ReLUs (variations B and C) are clearly better than the channel-shared DY-ReLU (variation A). Variations B and C have similar accuracy, showing that spatial-wise adaptation is not critical for image classification.
Table 4 shows the ablation results on single-person pose estimation (keypoint detection). Similar to image classification, the channel-wise DY-ReLUs (variations B and C) are better than the channel-shared one (variation A) in the backbone. In contrast, the spatial-wise variation DY-ReLU-C is critical in the head. This is because keypoint detection is spatially sensitive (distinguishing body joints at the pixel level), especially in the head network, which has higher resolution. The spatial attention allows different magnitudes of activation at different locations, which encourages better learning of DY-ReLU. Using the spatial-wise DY-ReLU-C in both the backbone and the head achieves a 4.1 AP improvement.
We also observe that the performance is even worse than the baseline if we use DY-ReLU-A in the backbone, or use DY-ReLU-A or DY-ReLU-B in the head. We believe that DY-ReLU becomes sensitive when the hyper function is difficult to learn. Specifically, it is hard to learn a spatially-shared hyper function for a spatially sensitive task (i.e. distinguishing between keypoints and background at the pixel level). Thus, learning DY-ReLU-A or DY-ReLU-B in the higher-resolution head becomes sensitive. This sensitivity is significantly reduced by introducing spatial attention, as learning a spatial-wise hyper function to distinguish keypoints is easier. We also find that when spatial attention is involved in the head network, training converges much faster at the beginning.
In summary, we have two findings from these ablations: (a) using channel-wise DY-ReLU (variation B or C) is important for image classification, and (b) for keypoint detection, channel-wise variations (DY-ReLU-B and DY-ReLU-C) are more suitable for the backbone, and spatial and channel-wise DY-ReLU-C is more suitable for the head. Thus, we use DY-ReLU-B for ImageNet classification and use DY-ReLU-C for keypoint detection in the next section.
5 Experimental Results
In this section, we present experimental results on image classification and single person pose estimation to demonstrate the effectiveness of DY-ReLU. We also report ablation studies to analyze different components of our approach.
5.1 ImageNet Classification
We use ImageNet for all classification experiments. ImageNet has 1000 object classes, including 1,281,167 images for training and 50,000 images for validation. We evaluate DY-ReLU on three CNN architectures (MobileNetV2, MobileNetV3 and ResNet), using DY-ReLU as the activation function after each convolution layer. Note that for MobileNetV3, we remove Squeeze-and-Excitation and replace ReLU and h-swish with DY-ReLU. The main results are obtained using the spatial-shared and channel-wise DY-ReLU-B with two piecewise linear functions (K = 2). The batch size is 256. We use different training setups for the three architectures as follows:
Training setup for MobileNetV2:
The initial learning rate is 0.05 and is scheduled to reach zero within a single cosine cycle. All models are trained using the SGD optimizer with 0.9 momentum for 300 epochs. Label smoothing (0.1) is used. The weight decay, dropout rate and data augmentation vary for different width multipliers; the details of these hyper parameters are given in appendix 0.A.4.
Training setup for MobileNetV3: The initial learning rate is 0.1 and is scheduled to reach zero within a single cosine cycle. The weight decay is 3e-5 and the label smoothing is 0.1. We use the SGD optimizer with 0.9 momentum for 300 epochs. We use dropout rates of 0.1 and 0.2 before the last layer for MobileNetV3-Small and MobileNetV3-Large, respectively. We use more data augmentation (color jittering and Mixup) for MobileNetV3-Large.
Training setup for ResNet: The initial learning rate is 0.1 and drops by 10 at epochs 30 and 60. The weight decay is 1e-4. All models are trained using the SGD optimizer with 0.9 momentum for 90 epochs. We use a dropout rate of 0.1 before the last layer and label smoothing for ResNet-18, ResNet-34 and ResNet-50.
|PReLU (channel-wise) ||2||1.7M||59.2M||62.0||83.4|
|PReLU (channel-shared) ||2||1.7M||59.2M||63.1||84.0|
Main Results: We compare DY-ReLU with its static counterpart on three CNN architectures (MobileNetV2, MobileNetV3 and ResNet) in Table 5. Without bells and whistles, DY-ReLU outperforms its static counterpart by a clear margin for all three architectures, with small extra computational cost. DY-ReLU gains more than 1.0% top-1 accuracy on ResNet and more than 4.2% on MobileNetV2. For the state-of-the-art MobileNetV3, our DY-ReLU outperforms the combination of Squeeze-and-Excitation and h-swish (the key contributions of MobileNetV3): the top-1 accuracy is improved by 2.3% and 0.5% for MobileNetV3-Small and MobileNetV3-Large, respectively. Note that DY-ReLU achieves more improvement on smaller models (e.g. MobileNetV2 with small width multipliers, MobileNetV3-Small, ResNet-10). This is because the smaller models are underfitted due to their limited model size, and dynamic ReLU significantly boosts their representation capability.
The comparison between DY-ReLU and prior work is shown in Table 6. Here we use MobileNetV2 and replace ReLU with the different activation functions proposed in prior work. Our method outperforms all prior work by a clear margin. Compared to Maxout, which has significantly more computational cost, DY-ReLU gains more than 1% top-1 accuracy. This demonstrates that DY-ReLU not only has more representation capability, but is also computationally efficient. A comparison using a different width multiplier of MobileNetV2 is shown in appendix 0.A.2; our DY-ReLU similarly outperforms all prior work. Note that channel-shared PReLU is better than channel-wise PReLU here, which differs from the original PReLU finding. This may be due to the different networks used (MobileNet vs. VGG).
5.2 Ablation Studies on ImageNet
In this subsection, we run a number of ablations to analyze DY-ReLU. We focus on the spatial-shared and channel-wise DY-ReLU-B, and use MobileNetV2 for all ablations. By default, the number of linear functions in DY-ReLU is set as K = 2. The initialization values of the slopes and intercepts are set as α^1 = 1, α^2 = β^1 = β^2 = 0. The ranges of the slope and intercept are set as λ_a = 1.0 and λ_b = 0.5, respectively. The reduction ratio of the first FC layer in the hyper function is set as R = 8.
Piecewise Linear Functions: Table 7 shows the classification accuracy using different piecewise linear functions. Compared to the static counterpart, all dynamic activation functions gain at least 3.5% top-1 accuracy. In addition, changing the second function from zero to a parametric linear function gains further improvement (1.9%+). The intercept is consistently helpful. The gap between K = 2 and K = 3 is small.
Dynamic ReLU at Different Layers: Table 8-(Left) shows the classification accuracy when using DY-ReLU at three different layers (after the 1×1 expansion conv, the depthwise conv, and the 1×1 projection conv) of the inverted residual block in MobileNetV2. The accuracy improves as DY-ReLU is used in more layers; using it for all three layers yields the best accuracy. If only one layer is allowed to use DY-ReLU, placing it after the depthwise convolution performs best.
Reduction Ratio R: The reduction ratio R of the first FC layer in the hyper function controls the representation capacity and computational cost of DY-ReLU. The comparison across different reduction ratios is shown in Table 8-(Right). Setting R = 8 achieves a good trade-off.
Initialization of Slopes (α^k in Eq (4)): As shown in Table 9-(Left), the classification accuracy is not sensitive to the initialization values of the slopes, as long as the first slope α^1 is not close to zero and the second slope α^2 is non-negative.
Initialization of Intercepts (β^k in Eq (4)): the performance is stable (shown in Table 9-(Middle)) when both intercepts are close to zero. The second intercept is more sensitive than the first one, as it moves the intersection of the two lines further away from the origin diagonally.
5.3 COCO Single-Person Keypoint Detection
We use the COCO 2017 dataset to evaluate dynamic ReLU on single-person keypoint detection. All models are trained on train2017, using person instances labeled with 17 keypoints. The models are evaluated on val2017, which contains 5000 images, using the mean average precision (AP) over 10 object keypoint similarity (OKS) thresholds as the metric.
Implementation Details: We evaluate DY-ReLU on two backbone networks (MobileNetV2 and MobileNetV3) and one head network used in prior work. The head simply uses upsampling and four of MobileNetV2's inverted residual bottleneck blocks. We compare DY-ReLU with its static counterpart in both the backbone and the head. The spatial and channel-wise DY-ReLU-C is used here, as we have shown that spatial attention is important for keypoint detection, especially in the head network (see section 4.2). Note that when using MobileNetV3 as the backbone, we remove Squeeze-and-Excitation and replace both ReLU and h-swish with DY-ReLU. The number of linear functions in DY-ReLU is set as K = 2. The initialization values of the slopes and intercepts are set as α^1 = 1, α^2 = β^1 = β^2 = 0. The ranges of the slope and intercept are set as λ_a = 1.0 and λ_b = 0.5, respectively.
Training setup: We follow the training setup in prior work. All models are trained from scratch for 210 epochs using the Adam optimizer. The initial learning rate is set as 1e-3 and is dropped to 1e-4 and 1e-5 at the 170th and 200th epoch, respectively. All human detection boxes are cropped from the image and resized to a fixed resolution. The data augmentation includes random rotation, random scale, flipping, and half-body data augmentation.
Testing: We use publicly provided person detection results and follow the evaluation procedure in [39, 32]. The keypoints are predicted on the averaged heatmap of the original and flipped images. The highest heat value location is then adjusted by a quarter offset in the direction from the highest response to the second highest response.
Main Results: Table 10 shows the comparison between DY-ReLU and its static counterpart on four different backbones (MobileNetV2 at two width multipliers, MobileNetV3-Small and MobileNetV3-Large). The head network is shared across these four experiments. DY-ReLU outperforms the baselines by a clear margin: it gains 3.5 and 4.1 AP for the two MobileNetV2 width multipliers, and 1.5 and 3.6 AP for MobileNetV3-Large and MobileNetV3-Small, respectively. These results demonstrate that our method is also effective for keypoint detection.
6 Conclusion
In this paper, we introduce Dynamic ReLU (DY-ReLU), which adapts a piecewise linear activation function dynamically for each input. Compared to its static counterparts (ReLU and its generalizations), DY-ReLU significantly improves the representation capability with negligible extra computational cost, and is thus well suited to efficient CNNs. Our dynamic ReLU can be easily integrated into existing CNN architectures: by simply replacing ReLU (or h-swish) in ResNet and MobileNet (V2 and V3) with DY-ReLU, we achieve solid improvement for both image classification and human pose estimation. We hope DY-ReLU becomes a useful component for efficient network architectures.
Appendix 0.A Appendix
In this appendix, we report additional analysis and experimental results for our dynamic ReLU (DY-ReLU) method.
0.a.1 Is DY-ReLU Dynamic?
In this section, we check whether DY-ReLU is dynamic. To do so, we inspect the input and output of DY-ReLU, expecting different activation values y across different images for a fixed input value x. In contrast, for a given input x, the output of ReLU is fixed regardless of channel or input image; thus, the input-output pairs of ReLU fall on two lines (y = x if x > 0, and y = 0 otherwise).
Fig. 3 plots the input and output values of DY-ReLU at different blocks (from low level to high level) for the 50,000 validation images in ImageNet. We confirm that the activation values y vary in a range (covered by the blue dots in Fig. 3) for a fixed input x. This demonstrates that the learnt DY-ReLU is dynamic with respect to the features. Furthermore, the distribution of input-output pairs varies across different blocks, indicating that different dynamic functions are learnt at different levels.
0.a.2 Comparison between DY-ReLU and Prior Work
Table 11 shows the comparison between DY-ReLU and prior work, using a different width multiplier of MobileNetV2 than the results in Table 6. The same conclusion holds for both experiments: our method outperforms all prior work. Compared to Maxout, which has significantly more computational cost, DY-ReLU gains 1.1% and 0.4% top-1 accuracy for the two width multipliers, respectively. This demonstrates that DY-ReLU not only has more representation capability, but is also computationally efficient.
|PReLU (channel-wise) ||2||3.5M||300.0M||72.9||91.0|
|PReLU (channel-shared) ||2||3.5M||300.0M||73.3||91.2|
0.a.3 Ablations of DY-ReLU Variations on Pose Estimation
In this section, we report additional results of comparing three DY-ReLU variations on COCO keypoint detection  (or pose estimation). The three variations are listed as follows:
DY-ReLU-A: the activation function is spatial and channel-shared.
DY-ReLU-B: the activation function is spatial-shared and channel-wise.
DY-ReLU-C: the activation function is spatial and channel-wise.
Table 12 shows average precisions (AP) for all 16 combinations of using ReLU or DY-ReLU variations in both backbone and head. This is additional to the results in Table 4. The original conclusions hold: (a) spatial-wise (DY-ReLU-C) is critical in the head network as the last column in Table 12 has higher AP than the previous three columns, and (b) the optimal solution is to use channel-wise variation (variation B or C) in the backbone and use spatial-wise DY-ReLU-C in the head (see the last two rows in the last column of Table 12). Compared to the baseline that uses ReLU in both backbone and head, using DY-ReLU-C achieves 4.1 AP improvement.
The spatial-wise variation (DY-ReLU-C) is a better fit for keypoint detection, which is spatially sensitive (distinguishing body joints at the pixel level). This is because the spatial attention allows different magnitudes of activation at different locations, which encourages better learning of DY-ReLU, especially in the head network with its higher resolution.
0.a.4 Implementation Details of MobileNetV2
We now give the implementation details for MobileNetV2. Basically, we use larger weight decay, a higher dropout rate and more data augmentation for larger width multipliers to prevent overfitting. We use weight decay 2e-5 and dropout 0.1 for the smallest width multiplier, and increase the weight decay (3e-5) and dropout (0.2) for the larger ones. Random cropping/flipping and color jittering are used for all width multipliers. Mixup is used for width ×1.0; without it, the top-1 accuracy of DY-ReLU drops from 76.2% to 75.7%, which still outperforms the static counterpart (72.0%) by a clear margin.
-  (2019) Once for all: train one network and specialize it for efficient deployment. ArXiv abs/1908.09791. Cited by: §2.
-  (2019) ProxylessNAS: direct neural architecture search on target task and hardware. In International Conference on Learning Representations, External Links: Cited by: §2.
-  (2019) Dynamic convolution: attention over convolution kernels. ArXiv abs/1912.03458. Cited by: Table 12, §2, Table 4, §5.3, §5.3.
-  (2015) Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Cited by: §2.
-  (2009) ImageNet: a large-scale hierarchical image database. In CVPR, pp. 248–255. Cited by: Figure 3, §0.A.1, Table 11, Table 3, §5.1, Table 5, Table 6, Table 7, Table 8, Table 9.
-  (2001) Incorporating second-order functional knowledge for better option pricing. In Advances in neural information processing systems, pp. 472–478. Cited by: §2.
-  (2016) Deep learning. The MIT Press. External Links: Cited by: §3.2.
-  (2013) Maxout networks. arXiv preprint arXiv:1302.4389. Cited by: §0.A.2, Table 11, §2, §3.3, Table 1, Table 6.
-  (2017) HyperNetworks. ICLR. Cited by: §2.
-  (2000) Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405 (6789), pp. 947–951. Cited by: §2.
-  (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification.. In ICCV, Cited by: Table 11, §1, §2, §3.1, §3.3, Table 1, §5.1, Table 6.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §5.1.
-  (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §1, §2.
-  (2019) Searching for mobilenetv3. CoRR abs/1905.02244. External Links: Cited by: §1, §2, §3.1, §5.1.
-  (2018-06) Squeeze-and-excitation networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Table 11, §2, §2, §3.2, §3.3, Table 1, Table 6.
-  (2018) Multi-scale dense networks for resource efficient image classification. In International Conference on Learning Representations, External Links: Cited by: §2.
-  (2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. CoRR abs/1602.07360. External Links: Cited by: §2.
-  (2009) What is the best multi-stage architecture for object recognition? In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1, §2, §3.1, §3.3, Table 1.
-  (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §5.3.
-  (2017) Self-normalizing neural networks. In Advances in neural information processing systems, pp. 971–980. Cited by: §2.
-  (2017) Runtime neural pruning. In Advances in Neural Information Processing Systems, pp. 2181–2191. External Links: Cited by: §2.
-  (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §0.A.3, Table 12, §4.2, Table 4, §5.3.
-  (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations, External Links: Cited by: §2.
-  (2018) Dynamic deep neural networks: optimizing accuracy-efficiency trade-offs by selective execution. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §2.
-  (2018-09) ShuffleNet v2: practical guidelines for efficient cnn architecture design. In The European Conference on Computer Vision (ECCV), Cited by: §1, §2.
-  (2013) Rectifier nonlinearities improve neural network acoustic models. In in ICML Workshop on Deep Learning for Audio, Speech and Language Processing, Cited by: Table 11, §1, §2, §3.1, §3.3, Table 1, Table 6.
-  (2019) Mish: a self regularized non-monotonic neural activation function. arXiv preprint arXiv:1908.08681. Cited by: §2.
-  (2010) Rectified linear units improve restricted Boltzmann machines. In ICML, Cited by: §1, §2, §3.1, §3.3, Table 1.
-  (2017) Searching for activation functions. arXiv preprint arXiv:1710.05941. Cited by: §2.
-  (2019) Regularized evolution for image classifier architecture search. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §2.
-  (2018) Mobilenetv2: inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520. Cited by: §1, §2, §5.1.
-  (2019) Deep high-resolution representation learning for human pose estimation. In CVPR, Cited by: §5.3, §5.3.
-  (2019-06) MnasNet: platform-aware neural architecture search for mobile. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §2.
-  (2017) Parametric exponential linear unit for deep convolutional neural networks. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 207–214. Cited by: §2.
-  (2018-09) SkipNet: learning dynamic routing in convolutional networks. In The European Conference on Computer Vision (ECCV), Cited by: §2.
-  (2019-06) FBNet: hardware-aware efficient convnet design via differentiable neural architecture search. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
-  (2017) Shift: a zero flop, zero parameter alternative to spatial convolutions. Cited by: §2.
-  (2018-06) BlockDrop: dynamic inference paths in residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
-  (2018) Simple baselines for human pose estimation and tracking. In European Conference on Computer Vision, Cited by: §5.3.
-  (2019) SNAS: stochastic neural architecture search. In International Conference on Learning Representations, External Links: Cited by: §2.
-  (2015) Empirical evaluation of rectified activations in convolutional network. CoRR. Cited by: Table 11, §2, Table 6.
-  (2019) CondConv: conditionally parameterized convolutions for efficient inference. In NeurIPS, Cited by: §2.
-  (2019) Slimmable neural networks. In International Conference on Learning Representations, External Links: Cited by: §2.
-  (2018) Mixup: beyond empirical risk minimization. In International Conference on Learning Representations, External Links: Cited by: §0.A.4, §5.1.
-  (2018-06) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
-  (2017) Neural architecture search with reinforcement learning. CoRR abs/1611.01578. Cited by: §2.
-  (2018-06) Learning transferable architectures for scalable image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.