Deep neural networks (DNNs) are state-of-the-art models that have transformed research in vision, language, and speech [14, 7, 4]. Various works [30, 31, 26, 29, 27, 28, 25, 20, 32] have been proposed for efficient deep learning. At their core, these networks perform a linear transformation followed by a non-linear operation using an activation function. The activation function is responsible for the non-linear behaviour and the learning capability of the network. These activations are non-linear continuous functions that may also possess points of non-differentiability [22, 18, 8]. Researchers have proposed many activation functions, which can be classified into saturated [2, 21, 10] and non-saturated activation functions [2, 18, 35, 22].
Saturated activation functions belong to a category in which learning slows down due to the very small gradient near the saturated output. These activation functions have been shown experimentally to be less effective for training deep networks. The key reason for this failure is the vanishing/exploding gradient problem, which mostly occurs due to saturated outputs of the activation function. This problem is efficiently tackled by using a non-saturated activation function such as ReLU [22, 18]. In particular, the derivative of ReLU is one for positive inputs; hence, the gradient cannot vanish. In contrast, all negative values are mapped to zero, which restricts the flow of information in DNNs for those values. ReLU saturates exactly at zero, which makes it fragile during training: a neuron can die forever. For example, a large gradient flowing through a ReLU unit may update the weight parameters in a way that deactivates the neuron for all data points. This problem is known as dying ReLU: the gradient flowing through that neuron will be zero from that point onward, so a gradient-based optimization algorithm can no longer update the weights of that unit. Moreover, training the network with a high learning rate can push the number of "dead" neurons (i.e., neurons that never activate across the entire training dataset) to as much as 40% of the network. The learning rate therefore needs to be set carefully to mitigate this issue.
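As a concrete illustration of the dying-ReLU effect, the following NumPy sketch (the weight values are ours, purely illustrative) shows a single neuron whose weights have been pushed negative by a large update: every pre-activation is negative, so the ReLU gradient is zero for all inputs, and gradient descent can never revive the unit.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return (x > 0).astype(float)

# A single neuron y = relu(w*x + b), with illustrative weights that a
# large gradient step has pushed negative.
w, b = -2.0, -1.0
xs = np.array([0.5, 1.0, 2.0, 3.0])   # all-positive inputs

pre = w * xs + b        # every pre-activation is negative
out = relu(pre)         # the neuron outputs zero for all inputs
grad = relu_grad(pre)   # and its gradient is zero everywhere: the neuron is "dead"
```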
To resolve these potential problems originating from the hard zero mapping in ReLU units, various generalizations of ReLU, such as Leaky ReLU and PReLU, have been proposed. Both Leaky ReLU and PReLU are the same as ReLU except for negative inputs, where Leaky ReLU uses a small constant slope and PReLU uses a learnable slope. Similarly, the Exponential Linear Unit (ELU) has been proposed, which reduces the bias shift in succeeding layers. ELU outputs an exponential value for negative inputs, which pushes the mean output of the activation function toward zero. Although ELU is not backed by a concrete theory, it shows competitive results. Besides these, Softplus is approximately similar to ReLU except near zero, where Softplus is smooth; it is differentiable everywhere and saturates less, which gives it an edge over ReLU. In practice, no non-linear activation function outperforms all others across all models, datasets, and problems.
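These ReLU generalizations can be written in a few lines. The NumPy sketch below (the constants are the commonly used defaults, not values from this paper) contrasts their behavior on negative inputs.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    # Small fixed slope a for negative inputs (PReLU learns a instead).
    return np.where(x > 0, x, a * x)

def elu(x, a=1.0):
    # Saturating exponential for negative inputs; pushes the mean output toward zero.
    return np.where(x > 0, x, a * (np.exp(x) - 1.0))

def softplus(x):
    # Smooth, everywhere-differentiable approximation of ReLU.
    return np.log1p(np.exp(x))

x = np.array([-2.0, 0.0, 2.0])
# relu zeroes the negative input; leaky_relu and elu keep a small negative
# response; softplus is strictly positive everywhere.
```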
In this paper, we propose an approach in which multiple non-linear activation functions are exploited in a cooperative strategy to overcome their individual drawbacks. We use multiple activation functions in the initial epochs of training the deep network. Aggregating the gradients from all the activation functions gives a regularization effect on the gradient flow over the whole range of inputs (including negative values). This regularizes the update of the weight parameters, which is crucial in the initial stage. In the next stage, we train the network with only one activation function per layer (the standard network). The proposed approach is evaluated extensively on different architectures such as ResNet and VGG-16 over the CIFAR-10, CIFAR-100, and ImageNet datasets. We also experiment with the object detection task using SSD-300 on the PASCAL VOC dataset.
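The regularizing effect of aggregated gradients can be seen in a small sketch (NumPy; the equal-weight mixture of ReLU, ELU, and Softplus is our own illustrative choice): the ReLU gradient is zero for every negative input, while the averaged gradient stays non-zero over the whole input range.

```python
import numpy as np

def relu_grad(z):
    return (z > 0).astype(float)

def elu_grad(z):
    # d/dz of (exp(z) - 1) for z <= 0, identity slope for z > 0.
    return np.where(z > 0, 1.0, np.exp(z))

def softplus_grad(z):
    # Derivative of log(1 + exp(z)) is the sigmoid.
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-3.0, -1.0, 0.5, 2.0])
g_relu = relu_grad(z)                  # zero wherever z is negative
g_mix = (relu_grad(z) + elu_grad(z) + softplus_grad(z)) / 3.0
# g_mix is strictly positive everywhere, so every weight keeps receiving
# a gradient signal during the cooperative phase.
```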
Major contributions of this paper are as follows:
We have shown experimentally that using multiple activation functions in the initial few epochs of the training process benefits the update of the full set of weight parameters, which results in substantial performance improvement later.
We have shown empirically that a mixture of non-linear activation functions results in a significant performance improvement compared to individual non-linear activation functions.
We have shown that Cooperative Initialization based training also helps in reducing overfitting.
Our proposed approach does not increase the number of parameters and inference (test) time in the final model while improving the performance.
2 Previous Work
The first activation function was the step function, originally used in the perceptron model. Researchers subsequently proposed many other saturated activation functions such as sigmoid, softmax, and tanh. These were later largely replaced by the ReLU activation, owing to its outstanding performance on deep neural networks [22, 33, 12]. ReLU accelerated convergence and resolved the vanishing gradient problem that commonly occurs with saturated activation functions. These activation functions have accelerated the efforts of the research community in solving various vision problems. Several attempts have been made to develop more efficient networks through better activation functions that resolve the problems arising in the above activation functions. Several variants of ReLU have been proposed, such as RReLU, PReLU, and Leaky ReLU [35, 8, 18].
To resolve the issue of mapping all negative inputs to zero (dying ReLU) in the ReLU activation function, Leaky ReLU was proposed. The zero mapping causes information loss (dead neurons), which Leaky ReLU resolves by defining a linear function with a small predefined constant slope for negative inputs, letting some information leak through. However, Leaky ReLU does not yield notable performance improvements experimentally. The parametric rectified linear unit (PReLU) was proposed next; it uses a learnable slope parameter for negative inputs instead of the constant slope of Leaky ReLU, and gives better performance than ReLU in many cases. Alternatively, the slope parameter can be randomly sampled from a uniform distribution, as in the Randomized Leaky Rectified Linear Unit (RReLU), which reduces the risk of overfitting during training. ELU is likewise the same as ReLU for positive inputs, while behaving like a saturating exponential function for negative inputs. Further, the Parametric ELU (PELU) is a scaled version of ELU with a learnable scaling parameter. Most of the above ReLU variants focus on the treatment of negative inputs.
There are a few other works proposing new activation functions, such as Maxout (the maximum over K affine functions), Softplus, and the Adaptive Piecewise Linear unit (APL) [6, 36, 1]. APL has many non-differentiable points that scale linearly with the number of hinge functions, which increases model complexity and affects the parameter updates during back-propagation. Similarly, Maxout takes the maximum over multiple feature maps. Softplus is the smooth approximation of ReLU and is differentiable everywhere.
In this work, we focus on leveraging the benefits of multiple non-linear activation functions simultaneously. To the best of our knowledge, this is the first work that considers a mixture of non-linear activation functions in the initial few epochs. We also present an ablation study and feature visualizations to support the proposed approach.
3 Cooperative Network Design (Phase-1)
Modern deep networks are very deep, consisting of many convolutional layers. Each convolutional layer is followed by an activation function, which operates on the feature maps (outputs) of that layer. The training process updates the weight parameters using back-propagation. This update depends strongly on the behavior of the activation function; e.g., ReLU never updates the weight parameters corresponding to negative inputs. Restricting these weights from being updated in the initial phase of training may hurt the performance of the deep network later.
To overcome these issues, researchers have proposed various activation functions, each with its own advantages and disadvantages. In this work, we propose a cooperative design with multiple activation functions that help each other in the initial update of parameters, such that all sets of weight parameters get an opportunity to contribute to the performance. These activation functions cooperate to overcome their drawbacks and improve the update process for all sets of weight parameters in the initial epochs of training.
In Figure 1, we show a block diagram of a layer in which the convolution operation is the same as in a standard CNN, but k activation functions are used instead of one. The input feature maps for the given layer, represented in Figure 1, are convolved with the convolutional filters (shown in green). The output feature maps generated by the convolution operation are passed to each activation function, which operates element-wise on the feature maps. This produces k different sets of feature maps, one per activation function, as shown in Figure 1. The final output feature maps O are the weighted average of these k feature maps:

O = Σ_{i=1}^{k} w_i · f_i(Z),

where f_1, …, f_k are the activation functions applied to the convolution output Z (shown in red), and w_1, …, w_k are the averaging weights. We assume that each activation contributes equally to the update (improvement) of the weight parameters by assigning equal weights w_i = 1/k.
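A minimal NumPy sketch of this Phase-1 mixture (function and variable names are ours): each activation is applied to the same convolution output, and the results are averaged with equal weights 1/k.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def elu(z):
    return np.where(z > 0, z, np.exp(z) - 1.0)

def softplus(z):
    return np.log1p(np.exp(z))

def cooperative_activation(z, fns, weights=None):
    """Weighted average of k activation outputs applied to the same input.

    When weights are omitted, each activation gets the equal weight 1/k
    assumed in the paper."""
    if weights is None:
        weights = [1.0 / len(fns)] * len(fns)
    return sum(w * f(z) for w, f in zip(weights, fns))

# Illustrative "feature map" of pre-activations.
z = np.array([[-1.0, 0.0], [0.5, 2.0]])
out = cooperative_activation(z, [relu, elu, softplus])
```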
We present an ablation study to show empirically that even if we train the model entirely with a mixture of activation functions, it results in a substantial performance improvement compared to any individual activation function.
4 Standard Network Design (Phase-2)
In Phase-2, the network uses only a single activation function, namely ReLU, at each layer of the model instead of the mixture of activation functions. The goal of Phase-1 training is to move the model into a stable state in which all sets of model parameters are in a better state than a randomly initialized one. The Phase-1 network design is trained for a few epochs so that the mixture of activation functions overcomes the drawbacks of each member through cooperation in the weight update process. The k activation functions receive different gradients, and averaging them gives a regularization effect that updates all sets of parameters uniformly without undermining any of them.
Since Phase-2 uses only one activation function, we opt for ReLU. ReLU is an identity mapping for all positive values and zero for all negative values, and therefore exhibits sparse behavior, as all negative inputs are mapped to zero. After reaching a better state via Phase-1, we often desire sparse feature maps, which ReLU provides naturally through its many deactivated neurons, giving a regularizing effect to the model. Sparsity results in concise models that often have better predictive power and fewer chances of overfitting to noise. In a sparse network, not all neurons are activated simultaneously; only the set of neurons responsible for a particular aspect of the task becomes active. For example, in a human detection task, one set of neurons may activate for a face-like structure while remaining inactive for other body parts. This is why standard ReLU tends to be less prone to overfitting than Leaky ReLU with modern architectures. We also present an ablation study showing empirically that using ReLU in Phase-2 results in substantially better performance than the other possible options; hence ReLU is competitive with all other non-linear activation functions in Phase-2.
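The sparsity argument can be made concrete with a quick measurement (NumPy; the Gaussian pre-activations are simulated, not taken from a trained model): roughly half of zero-mean pre-activations are negative, so ReLU deactivates about half the units.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 256))   # simulated zero-mean pre-activations
a = np.maximum(0.0, z)              # ReLU output

sparsity = np.mean(a == 0.0)        # fraction of deactivated units
# For zero-mean inputs this fraction is close to 0.5: about half the
# units output exactly zero and contribute nothing downstream.
```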
5 Training Method
The proposed training framework is divided into two phases corresponding to the two network design phases. For Phase-1 (Cooperative) training, we use 20% of epochs used in standard (Phase-2) training with a mixture of non-linear activation functions (having equal weight). We have presented an ablation on the number of epochs used in Phase-1 training to validate the choice mentioned above.
In the Phase-1 training process, the gradient computed in the back-propagation algorithm is the aggregate of the gradients computed for each activation function. This has a regularizing effect on the gradient and provides an equal opportunity for all weight parameters to be optimized. Phase-2 training is the same as standard training of the model with only one non-linear activation function (ReLU). Note that the mixture of activation functions is used only in Phase-1 training, while Phase-2 uses only ReLU at every layer. Further details are provided in the experimental section.
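The two-phase schedule can be sketched as follows (the helper name and structure are ours). With the CIFAR setting of 300 Phase-2 epochs, Phase-1 runs for 60 epochs.

```python
def two_phase_schedule(phase2_epochs, phase1_frac=0.2):
    """List of (phase, epoch) steps for Cooperative Initialization training.

    Phase-1 (mixture of activations) runs for phase1_frac of the Phase-2
    epoch count; Phase-2 (single activation, ReLU) then runs in full."""
    phase1_epochs = int(phase1_frac * phase2_epochs)
    schedule = [("phase1", e) for e in range(phase1_epochs)]
    schedule += [("phase2", e) for e in range(phase2_epochs)]
    return schedule

sched = two_phase_schedule(300)   # CIFAR setting used in the paper
# 60 cooperative epochs followed by 300 standard epochs, 360 in total.
```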
6 Experiments and Results
In this section, we evaluate the performance of the proposed approach on classification and detection tasks. Our experiments use state-of-the-art CNN architectures such as ResNet and VGG-16 with various activation functions. These models are trained on three standard benchmark datasets: CIFAR-10, CIFAR-100, and ImageNet. We also perform object detection experiments using SSD on PASCAL VOC.
In these experiments, we use four of the most widely used non-linear activation functions: ReLU, PReLU, ELU, and SoftPlus. We prefer PReLU to address the dying-ReLU issue; Leaky ReLU is an alternative. ELU has an exponential function for negative inputs, contrary to ReLU. This behavior pushes the mean activation toward zero, similar to batch normalization, and this shift of the mean toward zero accelerates training (faster convergence). ELU is also more robust to noise. These are the reasons we include ELU in the mixture of activation functions. The behavior of ReLU and Softplus is almost identical except near zero, where Softplus is differentiable and smooth. Softplus has an advantage over ReLU because it is differentiable over the entire domain and saturates less.
The choice of activation functions depends on the problem. We selected ReLU, PReLU, ELU, and SoftPlus, which are the most widely used non-linear activation functions in image classification and object detection.
In our experiments, the baselines (using ReLU/PReLU/ELU/SoftPlus activation functions) were reproduced using the standard training procedure in the PyTorch framework. We trained these models for 300 and 90 epochs on the CIFAR and ImageNet datasets, respectively.
6.1 CIFAR 10 and CIFAR 100
CIFAR-10 and CIFAR-100 are datasets of tiny natural images. CIFAR-10 has 10 image classes, while CIFAR-100 has 100 classes. Each dataset has 50,000 training images and 10,000 test images; all images are 32×32 RGB.
In the experiments on the CIFAR datasets, we perform the standard data augmentation methods of random cropping and random horizontal flipping. Optimization is performed using the Stochastic Gradient Descent (SGD) algorithm with momentum. In Phase-2 training, the initial learning rate is decreased by a fixed factor on a fixed epoch schedule, and the models are trained from scratch. For Phase-1 (Cooperative) training, we use only 20% of the epochs used in Phase-2 training, with PReLU, ReLU, ELU, and SoftPlus activation functions (with equal weights), and a similarly decayed learning rate. The validation images are used for evaluation. The results on the CIFAR-10/100 datasets for all architectures were reproduced in the PyTorch framework.
The results are shown in Table 1, 2. We observe a consistent improvement in accuracy for VGG-16 and ResNet-56 over CIFAR. The model trained with our proposed two-phase training procedure not only outperforms ReLU significantly but also other non-linear activation functions such as PReLU, ELU, and SoftPlus, as shown in Table 1, 2.
Table 1 (CIFAR-10, excerpt):
|Model||Activation Function||Accuracy (%)|
|VGG-16 (Ours)||Mix (ReLU)||94.2|
|ResNet-56 (Ours)||Mix (ReLU)||94.4|

Table 2 (CIFAR-100, excerpt):
|Model||Activation Function||Accuracy (%)|
|VGG-16 (Ours)||Mix (ReLU)||74.0|
|ResNet-56 (Ours)||Mix (ReLU)||73.1|

Table 3 (ImageNet, excerpt):
|Model||Activation Function||Accuracy (%)|
|AlexNet (Ours)||Mix (ReLU)||57.2|
|ResNet-18 (Ours)||Mix (ReLU)||70.8|

Table 4 (PASCAL VOC, header):
|Class||SSD (Baseline) AP||SSD Mix (ReLU) AP|
6.2 ImageNet

The ImageNet dataset contains 1000 classes, with roughly 1000 images per category. It contains about 1.2 million training images, 50,000 validation images, and 100,000 unlabeled test images. Training is performed on the training data, whereas all evaluations are performed on the validation set.
For the ImageNet experiments, we perform the standard data augmentation methods of random cropping and random horizontal flipping. For optimization, Stochastic Gradient Descent (SGD) with momentum is used. For Phase-2 training, the initial learning rate is decreased by a fixed factor on a fixed epoch schedule. Evaluation is done on the validation images with center cropping. For Phase-1 (Cooperative) training, we use 20% of the epochs used in Phase-2 training, with PReLU, ReLU, ELU, and SoftPlus activation functions (with equal weights), and a similarly decayed learning rate.
The results are shown in Table 3. We observe consistent improvements in accuracy for AlexNet and ResNet-18 on the ImageNet dataset. The model trained with our proposed two-phase training procedure not only significantly outperforms ReLU but also other non-linear activation functions such as PReLU, ELU, and SoftPlus (Table 3).
6.3 PASCAL VOC
We performed experiments with the SSD model on the PASCAL VOC dataset to validate our proposed approach on the object detection task. In this experiment, we follow the same experimental setting and training schedule as described in  for Phase-2 training. For Phase-1 (Cooperative) training, we use 20% of the iterations of Phase-2 training, with PReLU, ReLU, ELU, and SoftPlus activation functions (with equal weights). The SSD detection model is a feed-forward convolutional network that produces a collection of fixed-size bounding boxes and predicts classification scores for the presence of object class instances in those boxes, followed by non-maximum suppression to produce the final detections.
As shown in Table 4, our proposed approach is not limited to classification but also works well on the object detection task. We obtain a significant improvement (approx. 1%) in mAP compared to the baseline using our training procedure.
Table 5 (excerpt):
|ResNet-56 (Ours)||Mix (ReLU)||94.4|

Table 6 (header):
|#Epochs in Phase-1||Activation Function||Accuracy (%)|
7 Ablation Studies
As shown in Table 5, if we train a model without any non-linear activation function (WNLA), its performance degrades massively, since ResNet-56 is a deep CNN that is very hard to optimize without non-linearities. There is a performance boost for Mix (SoftPlus) and Mix (ELU) over their respective baselines, but their overall scores are significantly lower than Mix (ReLU). The key reason for this significant difference is likely the sparse behavior of the ReLU activation function in Phase-2 training. This sparsity is often desirable in deep networks, yielding better predictive power and less overfitting to noise.
Although Mixture and Mix (ReLU) have similar performance, Mix (ReLU) should be preferred because of the following reasons:
Mixture occupies more feature-map memory at run time than Mix (ReLU), because separate feature maps are generated for every non-linear activation function; Mix (ReLU) does not increase feature-map (GPU) memory at run time.
Mixture adds some delay at inference time compared to Mix (ReLU), due to the extra computation performed by the multiple activation functions.
Hence, Mix (ReLU) is the most suitable combination among all the possibilities considered. Our proposed approach uses Two-Phase Training (TPT), where Phase-1 training uses 20% of the Phase-2 epochs. One might therefore suspect that the performance improvement is simply due to the extra training epochs (20% more). Hence we also train the baselines with the same Two-Phase Training schedule (Baseline-TPT), except that only a single activation function is used in Phase-1. As shown in Table 5, Baseline-TPT performs similarly to the baseline. From Table 5, we conclude that the performance improvement is not due to the extra training epochs in TPT but to the cooperative initialization in Phase-1.
We also conduct an ablation to decide the number of epochs in Phase-1 training, using 10-40% of the epochs of standard (Phase-2) training. As shown in Table 6, 20% of the Phase-2 epochs is the most suitable choice, as it gives a significant performance improvement with only a 20% increase in overall training time.
8 Visualizing Last Layer Features on MNIST
In Figure 2, we plot t-SNE embeddings for a LeNet-like network on the MNIST dataset to visualize the features learned with various non-linear activation functions. The LeNet-like network contains two convolutional layers and one fully-connected layer. The convolutional layers have 5×5 kernels; the first and second convolutional layers consist of 20 and 30 filters, respectively.
t-Distributed Stochastic Neighbor Embedding (t-SNE) is a dimensionality reduction technique commonly used to visualize the high-level features learned by a CNN, embedding high-dimensional feature representations into two or three dimensions. Figure 2 shows the two-dimensional embeddings of the last-layer features. The features corresponding to Mix (ReLU) are well separated and more discriminative than the other embeddings, as shown by the corresponding representative points in Figure 2.
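A minimal sketch of how such a t-SNE embedding is produced (scikit-learn; the synthetic features below stand in for last-layer activations and are our own construction, not the paper's data):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two synthetic clusters playing the role of last-layer features for
# two digit classes (64-dimensional, 25 samples each).
feats = np.vstack([
    rng.normal(0.0, 1.0, size=(25, 64)),
    rng.normal(8.0, 1.0, size=(25, 64)),
])

# Embed into 2-D for plotting; perplexity must be below the sample count.
emb = TSNE(n_components=2, perplexity=10, init="random",
           random_state=0).fit_transform(feats)
# emb has shape (50, 2): one 2-D point per feature vector, ready to scatter-plot.
```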
9 Analysis

This section analyzes the performance gain from two different perspectives. The first investigates the convergence of Mix (ReLU) versus ReLU on the ResNet-56 architecture: the convergence of Mix (ReLU) is faster than that of ReLU, as can be inferred from Figure 3. The second investigates the overfitting behavior of the models, where Mix (ReLU) is more robust than ReLU according to the empirical results.
The investigation is performed on the CIFAR-100 dataset using the ResNet-56 model. We use standard data augmentation techniques, namely random horizontal flipping and random cropping. Optimization is performed using Stochastic Gradient Descent (SGD) with momentum and a weight decay of 0.0001. The initial learning rate is decreased by a fixed factor on a fixed epoch schedule, and the models are trained from scratch.
9.1 Effect of using Mix (ReLU) on convergence
Table 7:
|Method||Activation Function||Train Accuracy (%)||Test Accuracy (%)|
|ResNet-56 Baseline (100% of training set)||ReLU||99.4||71.6|
|ResNet-56 (100% of training set)||Mix (ReLU)||99.6||73.1|
|ResNet-56 Baseline (25% of training set)||ReLU||99.9||51.3|
|ResNet-56 (25% of training set)||Mix (ReLU)||99.9||60.8|
We analyzed the convergence of the Mix (ReLU) based model and found it better than that of ReLU, as can be inferred from the respective curves in Figure 3. The two plots of cross-entropy loss versus the number of epochs for the training and test sets in Figure 3 show that the loss for Mix (ReLU) drops faster than the loss for ReLU.
9.2 Effect of Mix (ReLU) on overfitting
Our method utilizes multiple activation functions in the initial few training epochs of the deep network. The gradients from these multiple activation functions are accumulated, giving a regularization effect on the gradient flow in the model over the complete range of inputs (negative and positive). This aggregation of gradients also helps regularize the weight parameter updates, which in turn reduces the chances of overfitting. In support of this hypothesis, we performed experiments in two scenarios: in the first, experiments are performed on the complete dataset (100% of the training data), while in the second, only 25% of the training data is used.
The first scenario is evaluated on the CIFAR-100 dataset using the ResNet-56 architecture. The experiment with only ReLU achieves an accuracy of 71.6% (Table 7), while the experiment with Mix (ReLU) achieves 73.1%. From Table 7, we can infer that Mix (ReLU) is more robust to overfitting, as its gap between training and test accuracy is 26.5, smaller than the corresponding gap for ReLU (27.8). Since this gap is not very pronounced for either ReLU or Mix (ReLU), in the second scenario we use only 25% of the training images of the CIFAR-100 dataset.
In the second scenario, training the ResNet-56 ReLU model with only 25% of the training samples achieves accuracies of 99.9% and 51.3% on the training and test data, respectively. ResNet-56 with Mix (ReLU), on the other hand, achieves 99.9% training and 60.8% test accuracy.
From Figure 4, we can conclude that Mix (ReLU) is less prone to overfitting, as its gap between training and test accuracy is 39.1, considerably lower than that of ReLU (48.6).
10 Conclusion

We propose Cooperative Initialization for training deep networks to improve their performance. We have shown experimentally that a mixture of non-linear activation functions is beneficial for a CNN in the initial phase of training, starting from random initialization. Initially, the multiple activation functions regularize the gradient flow for both positive and negative inputs, thereby improving the update of the weight parameters, which is crucial at the initial stage. Our experimental results show that the proposed approach improves the performance of state-of-the-art networks. It also helps reduce overfitting and does not increase the number of parameters or the inference (test) time of the final model. Cooperative initialization is therefore a promising approach for improving the feature representation and performance of deep networks.
-  (2014) Learning activation functions to improve deep neural networks. arXiv preprint arXiv:1412.6830. Cited by: §2.
Neural networks for pattern recognition. Oxford university press. Cited by: §1, §2.
-  (2015) Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Cited by: §1, §2.
A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pp. 160–167. Cited by: §1.
The PASCAL visual object classes challenge: a retrospective. International Journal of Computer Vision 111(1), pp. 98–136. Cited by: §6.3, §6.
-  (2013) Maxout networks. arXiv preprint arXiv:1302.4389. Cited by: §2.
Speech recognition with deep recurrent neural networks. In ICASSP, Cited by: §1.
-  (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034. Cited by: §1, §1, §2, §2.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §6.2, §6.
-  (2009) Replicated softmax: an undirected topic model. In Advances in neural information processing systems, pp. 1607–1614. Cited by: §1.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §1, §6.
Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §2, §6.2, §6.2, §6.
-  (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §6.1, §6.
-  (2015) Deep learning. nature. Cited by: §1, §2.
-  (1989) Backpropagation applied to handwritten zip code recognition. Neural computation 1 (4), pp. 541–551. Cited by: §5.
-  (2010) MNIST handwritten digit database. Cited by: §8.
-  (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §6.3, §6.
-  Rectifier nonlinearities improve neural network acoustic models. Cited by: §1, §1, §1, §2, §2.
-  (2008) Visualizing data using t-sne. Journal of machine learning research. Cited by: §8.
-  (2019) CPWC: contextual point wise convolution for object recognition. arXiv preprint arXiv:1910.09643. Cited by: §1.
-  (1943) A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics. Cited by: §1, §2.
Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814. Cited by: §1, §1, §2.
-  (2017) Automatic differentiation in pytorch. Cited by: §6.
-  (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §6.
-  (2019) FALF convnets: fatuous auxiliary loss based filter-pruning for efficient deep cnns. Image and Vision Computing, pp. 103857. Cited by: §1.
-  (2019) Stability based filter pruning for accelerating deep cnns. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1166–1174. Cited by: §1.
-  (2019) Multi-layer pruning framework for compressing single shot multibox detector. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1318–1327. Cited by: §1.
-  (2019) Accuracy booster: performance boosting using feature map re-calibration. arXiv preprint arXiv:1903.04407. Cited by: §1.
-  (2018) Leveraging filter correlations for deep model compression. arXiv preprint arXiv:1811.10559. Cited by: §1.
-  (2019) HetConv: beyond homogeneous convolution kernels for deep cnns. International Journal of Computer Vision, pp. 1–21. Cited by: §1.
-  (2019) Hetconv: heterogeneous kernel-based convolutions for deep cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4835–4844. Cited by: §1.
Play and prune: adaptive filter pruning for deep model compression. International Joint Conference on Artificial Intelligence (IJCAI). Cited by: §1.
-  (2015) Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9. Cited by: §2.
-  (2017) Parametric exponential linear unit for deep convolutional neural networks. In ICMLA, Cited by: §2.
-  (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853. Cited by: §1, §2, §2.
-  Improving deep neural networks using softplus units. In IJCNN, Cited by: §1, §2, §6.