1 Introduction
Deep neural networks (DNNs) are state-of-the-art models that have transformed research in vision, language, and speech [14, 7, 4]. Various works [30, 31, 26, 29, 27, 28, 25, 20, 32] have been proposed for efficient deep learning. At their core, these deep networks perform a linear transformation followed by a nonlinear operation through an activation function. The activation function is responsible for the nonlinear behaviour and the learning capability of the network. These activations are nonlinear continuous functions that may also be non-differentiable
[22, 18, 8]. Researchers have proposed many activation functions, which can be classified into saturated [2, 21, 10] and non-saturated [2, 18, 35, 22] functions. With saturated activations, learning slows down because the gradient becomes very small near the saturated output. Such activation functions have been shown experimentally to be less effective for training deep networks; the key reason is the vanishing/exploding gradient problem, which mostly arises from saturated outputs. This problem is efficiently tackled by non-saturated activation functions such as ReLU
[22, 18]. In particular, the derivative of ReLU is one for positive inputs, so the gradient cannot vanish there. In contrast, all negative values are mapped to zero, which blocks the flow of information through the network for those inputs. ReLU saturates exactly at zero, which makes it fragile during training: a neuron can die permanently. For example, a large gradient flowing through a ReLU unit may update the weight parameters in a way that deactivates the neuron for all data points. This is known as the dying ReLU problem: from that point on, the gradient flowing through the neuron is always zero, so a gradient-based optimizer can no longer update that unit's weights. Moreover, training with a high learning rate can raise the number of "dead" neurons, i.e., neurons that never activate across the entire training dataset, to as much as 40% of the network [18]. The learning rate must therefore be set carefully. To resolve the problems caused by the hard zero mapping in ReLU units, various generalizations such as Leaky ReLU [18] and PReLU [8] have been proposed. Both are identical to ReLU except for negative inputs, where Leaky ReLU uses a small constant slope and PReLU a learnable one. Similarly, the Exponential Linear Unit (ELU) [3] was proposed to reduce the bias shift [11] in succeeding layers: it returns an exponential value for negative inputs, which pushes the mean output of the activation toward zero. Although ELU is not backed by concrete theory, it shows competitive results. Besides these, Softplus [36] closely approximates ReLU but, unlike ReLU, is smooth and differentiable everywhere, and it saturates less. In practice, no single nonlinear activation function outperforms all others across models, datasets, and problems.
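To make the differences above concrete, here is a minimal reference sketch of the activation functions discussed; the slope and alpha defaults are common choices, not values from this paper.

```python
import math

# Reference definitions of the activations discussed above; the slope and
# alpha defaults are common choices, not values tuned in this paper.

def relu(x):
    return max(0.0, x)

def leaky_relu(x, slope=0.01):
    return x if x > 0 else slope * x

def prelu(x, a):
    # 'a' plays the role of PReLU's learnable per-channel slope [8]
    return x if x > 0 else a * x

def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def softplus(x):
    return math.log(1.0 + math.exp(x))

# For a negative input, ReLU blocks both the signal and its gradient,
# while the variants keep a small non-zero response.
x = -2.0
print(relu(x), leaky_relu(x), elu(x), softplus(x))
```

For `x = -2.0`, ReLU outputs exactly zero (and contributes zero gradient), while the variants leak a small negative or positive value through.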
In this paper, we propose an approach in which multiple nonlinear activation functions are exploited through a cooperative strategy to overcome their individual drawbacks. We use multiple activation functions during the initial epochs of training the deep network. Aggregating the gradients from all the activation functions has a regularizing effect on the gradient flow over the whole range of inputs (including negative values). This in turn regularizes the weight-parameter updates, which is crucial in the initial stage. In the next stage, we train the network with only one activation function per layer (the standard network). We evaluate the proposed approach extensively on architectures such as ResNet and VGG16 over the CIFAR-10, CIFAR-100, and ImageNet datasets. We also experiment with object detection using SSD300 on the PASCAL VOC dataset.
Major contributions of this paper are as follows:

We show experimentally that using multiple activation functions in the initial few epochs of training benefits the update of the full set of weight parameters, resulting in substantial performance improvements later.

We show empirically that a mixture of nonlinear activation functions yields significant performance improvements compared to any individual nonlinear activation function.

We show that Cooperative Initialization based training also helps reduce overfitting.

Our proposed approach improves performance without increasing the number of parameters or the inference (test) time of the final model.
2 Previous Work
The first activation function was the step function, originally used in the perceptron model [21]. Researchers subsequently proposed many other saturated activation functions, such as sigmoid, softmax, and tanh [2]. These were later replaced by the ReLU activation, owing to its outstanding performance in deep neural networks [22, 33, 12]: ReLU accelerated convergence and resolved the vanishing gradient problem that commonly occurs with saturated activations. These activation functions have accelerated the research community's efforts on various vision problems. Several attempts have been made to develop more efficient networks through better activation functions that resolve the problems above, including variants of ReLU such as RReLU, PReLU, Leaky ReLU, and others
[35, 8, 18]. To resolve the issue of mapping all negative inputs to zero (dying ReLU), Leaky ReLU [18] was proposed: the zero mapping causes information loss (dead neurons), which Leaky ReLU addresses by defining a linear function with a small predefined constant slope for negative inputs, leaking some information [18]. However, Leaky ReLU does not show notable experimental gains. Subsequently, the parametric rectified linear unit (PReLU) [8] was proposed, which uses a learnable slope parameter for negative inputs instead of the constant slope of Leaky ReLU [18]; PReLU outperforms ReLU in many cases. Alternatively, the slope parameter can be randomly sampled from a uniform distribution, as in the Randomized Leaky Rectified Linear Unit (RReLU) [35], which reduces the risk of overfitting during training. ELU [3] matches ReLU for positive inputs but behaves like a saturating exponential function for negative inputs; Parametric ELU (PELU) [34] is a scaled version of ELU with a learnable scaling parameter. Most of the ReLU variants discussed above modify the treatment of negative inputs. A few other works propose entirely new activation functions, such as Maxout (the maximum over K affine functions), Softplus, and the Adaptive Piecewise Linear (APL) unit [6, 36, 1]. APL contains many non-differentiable points whose number scales linearly with the number of hinge functions, which increases model complexity and affects parameter updates during backpropagation [14]. Maxout, in a similar manner, takes the maximum over multiple feature maps. Softplus is the smooth approximation of ReLU and is differentiable everywhere.
In this work, we focus on leveraging the benefits of multiple nonlinear activation functions simultaneously. To the best of our knowledge, this is the first work that uses a mixture of nonlinear activation functions in the initial few epochs of training. We also present an ablation study and feature visualizations to support the proposed approach.
3 Cooperative Network Design (Phase-1)
Modern deep networks are very deep, consisting of many convolutional layers. Each convolutional layer is followed by an activation function that operates on the layer's output feature maps. Training updates the weight parameters using backpropagation, and this update depends strongly on the behaviour of the activation function; e.g., ReLU never updates the weight parameters corresponding to negative inputs. Restricting these weights from updating in the initial phase of training may hurt the performance of the deep network later.
To overcome these issues, researchers have proposed various activation functions, each with its own advantages and disadvantages. In this work, we propose a cooperative design with multiple activation functions that help each other during the initial parameter updates, so that every set of weight parameters gets an opportunity to contribute to performance. The activation functions cooperate to overcome their individual drawbacks and improve the update process for all weight parameters in the initial epochs of training.
In Figure 1, we show the block diagram of a layer in which the convolution operation is the same as in a standard CNN, but k activation functions are used instead of one. The input feature maps of the layer (Figure 1) are convolved with the convolutional filters (shown in green). The output feature maps F produced by the convolution are passed to each of the k activation functions A_1, ..., A_k, which operate elementwise, generating k different sets of feature maps, as shown in Figure 1. The final output feature maps O are the weighted average of these k feature maps:

O = sum_{i=1}^{k} w_i * A_i(F),   with   sum_{i=1}^{k} w_i = 1,

where A_i is the i-th activation function applied to the feature map F, O is the resulting set of final output feature maps, and w_i are the corresponding averaging weights (shown in red in Figure 1). We assume that each activation contributes equally to the update (improvement) of the weight parameters by assigning each the same weight, w_i = 1/k.
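The mixed-activation layer described above can be sketched as follows; this is a minimal illustration assuming equal weights w_i = 1/k and representing a feature map as a flat list (the function names are illustrative, not from the paper's code).

```python
import math

def mixed_activation(feature_map, activations):
    """Equally weighted average of k activation outputs, applied
    elementwise to a feature map (represented here as a flat list)."""
    k = len(activations)
    return [sum(act(x) for act in activations) / k for x in feature_map]

# Three of the activations used in the mixture (simplified forms).
relu = lambda x: max(0.0, x)
elu = lambda x: x if x > 0 else math.exp(x) - 1.0
softplus = lambda x: math.log(1.0 + math.exp(x))

fmap = [-1.0, 0.0, 2.0]
out = mixed_activation(fmap, [relu, elu, softplus])
print(out)
```

Note that for the negative input, the averaged output (and hence its gradient) is non-zero, unlike plain ReLU.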
We present an ablation study showing empirically that even if the model is trained entirely with a mixture of activation functions, it still yields a substantial performance improvement over any individual activation function.
4 Standard Network Design (Phase-2)
In Phase-2, the network uses only a single activation function, namely ReLU, at each layer instead of the mixture of activation functions. The goal of Phase-1 training is to move the model into a stable state in which all sets of parameters are better than their randomly initialized values. The Phase-1 design is trained for a few epochs so that the mixture of activation functions can cooperatively compensate for each other's drawbacks while updating the weight parameters. The k activation functions receive different gradients, and averaging them has a regularizing effect, updating all sets of parameters uniformly without undermining any of them.
Since Phase-2 uses only one activation function, we opt for ReLU. ReLU is the identity mapping for positive values and zero for negative values, so it exhibits sparse behaviour: all negative inputs are mapped to zero. After reaching a better state in Phase-1, we often want the feature maps to be sufficiently sparse, which ReLU achieves easily since many of its neurons are deactivated, giving the model a regularizing effect. Sparsity yields concise models that often have better predictive power and a lower risk of overfitting to noise. In a sparse network, not all neurons are activated simultaneously; only the neurons responsible for a particular aspect of the task fire. For example, in a human-detection task, one set of neurons may activate for face-like structures while remaining inactive for other parts of the body. This is why standard ReLU tends to be less prone to overfitting than Leaky ReLU in modern architectures. We also present an ablation study showing empirically that ReLU in Phase-2 yields substantially better performance than the other possible options; hence ReLU is competitive with all other nonlinear activation functions in Phase-2.
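The sparsity argument above can be checked with a quick simulation: for pre-activations roughly centred at zero, as produced by a randomly initialized layer, ReLU deactivates about half of the units. The numbers below are illustrative, not from the paper.

```python
import random

random.seed(0)
# Pre-activations roughly centred at zero, as produced by a
# randomly initialized layer (illustrative, not from the paper).
pre_acts = [random.gauss(0.0, 1.0) for _ in range(10000)]

post = [max(0.0, x) for x in pre_acts]  # ReLU elementwise
sparsity = sum(1 for v in post if v == 0.0) / len(post)
print(f"fraction of deactivated units: {sparsity:.2f}")
```

Roughly half of the outputs are exact zeros, which is the sparse, regularizing behaviour the text refers to.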
5 Training Method
The proposed training framework is divided into two phases corresponding to the two network designs. For Phase-1 (Cooperative) training, we use 20% of the epochs used in standard (Phase-2) training, with a mixture of nonlinear activation functions of equal weight. We present an ablation on the number of Phase-1 epochs to validate this choice.
In Phase-1 training, the gradient computed in the backpropagation algorithm [15] is the aggregate of the gradients from each activation function. This regularizes the gradient and gives all weight parameters an equal opportunity to be optimized. Phase-2 training is the same as standard training of the model with a single nonlinear activation function (ReLU). Note that the mixture of activation functions is used only in Phase-1 training; Phase-2 uses only ReLU at every layer. Further details are provided in the experimental section.
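The two-phase schedule can be sketched as follows; the epoch counts come from the text (Phase-1 uses 20% of the Phase-2 epochs), while the function name is illustrative, not from the paper's code.

```python
# Hypothetical sketch of the two-phase schedule; epoch counts come from
# the text (Phase-1 = 20% of Phase-2), the function name is illustrative.

PHASE2_EPOCHS = 300                       # standard CIFAR training length
PHASE1_EPOCHS = int(0.2 * PHASE2_EPOCHS)  # cooperative phase: 20% extra

def activation_design(epoch):
    """Which activation design each layer uses at a given (0-based) epoch."""
    if epoch < PHASE1_EPOCHS:
        # Phase-1: equally weighted mixture of nonlinear activations
        return ["ReLU", "PReLU", "ELU", "SoftPlus"]
    # Phase-2: standard network with ReLU only
    return ["ReLU"]

print(activation_design(0))    # Phase-1: the mixture
print(activation_design(60))   # first Phase-2 epoch: ReLU only
```

The switch at epoch 60 is where the mixture is replaced by plain ReLU for the remainder of training.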
6 Experiments and Results
In this section, we evaluate the proposed approach on classification and detection tasks. Our experiments use state-of-the-art CNN architectures such as ResNet [9] and VGG16 [24] with various activation functions. The models are trained on three standard benchmark datasets: CIFAR-10, CIFAR-100 [13], and ImageNet [12]. We also perform object detection experiments using SSD [17] on PASCAL VOC [5].
In these experiments, we use the four most prominent nonlinear activation functions: ReLU, PReLU, ELU, and SoftPlus. We prefer PReLU to resolve the dying ReLU issue, although Leaky ReLU could be used instead. ELU has an exponential function for negative inputs, in contrast to ReLU; this pushes the mean activation toward the vicinity of zero, similar to batch normalization [11], which accelerates training (faster convergence). ELU is also more robust to noise. These are the main reasons we include ELU in the mixture. The behaviour of ReLU and Softplus [36] is almost identical except near zero, where Softplus is smooth and differentiable; Softplus has the advantage over ReLU of being differentiable over the entire domain and saturating less. The appropriate set of activation functions depends on the problem; we selected ReLU, PReLU, ELU, and SoftPlus as the most widely used nonlinear activations in image classification and object detection.
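The mean-shifting behaviour of ELU mentioned above can be verified with a quick simulation on zero-mean inputs; the sample size and seed are illustrative.

```python
import math
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(10000)]  # zero-mean inputs

relu_mean = sum(max(0.0, x) for x in xs) / len(xs)
elu_mean = sum(x if x > 0 else math.exp(x) - 1.0 for x in xs) / len(xs)

# ELU's negative branch pulls the mean activation toward zero,
# whereas ReLU's mean output stays clearly positive.
print(f"ReLU mean: {relu_mean:.3f}, ELU mean: {elu_mean:.3f}")
```

The ELU mean lands much closer to zero than the ReLU mean, which is the bias-shift reduction the text attributes to ELU.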
In our experiments, the baselines (using the ReLU/PReLU/ELU/SoftPlus activation functions) were reproduced using the standard training procedure in the PyTorch [23] framework. We trained these models for 300 epochs on CIFAR and 90 epochs on ImageNet.
6.1 CIFAR-10 and CIFAR-100
CIFAR-10 and CIFAR-100 [13] are datasets of tiny natural images. CIFAR-10 has 10 image classes, while CIFAR-100 has 100. Each provides 50,000 training images and 10,000 test images; all are 32x32 RGB images.
For the CIFAR experiments, we apply the standard data augmentation of random cropping and random horizontal flipping. Optimization is performed using the stochastic gradient descent (SGD) algorithm with momentum and a fixed mini-batch size. In Phase-2 training, the learning rate starts from its initial value and is decreased by a fixed factor on a step schedule; the models are trained from scratch for 300 epochs. For Phase-1 (Cooperative) training, we use only 20% of the Phase-2 epochs, with the PReLU, ReLU, ELU, and SoftPlus activation functions (equally weighted) and an analogous step learning-rate schedule. Evaluation uses the validation images. The results on CIFAR-10/100 for all architectures were reproduced in the PyTorch framework. The results are shown in Tables 1 and 2: we observe a consistent accuracy improvement for VGG16 and ResNet56 on CIFAR. The model trained with our proposed two-phase training procedure significantly outperforms not only ReLU but also the other nonlinear activation functions, such as PReLU, ELU, and SoftPlus (Tables 1 and 2).
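The CIFAR training configuration can be sketched with torchvision; the crop/padding sizes and the SGD hyperparameters below are common CIFAR defaults, marked as assumptions since the exact values are not given above.

```python
import torch
import torchvision.transforms as T

# Assumed values: the crop/padding sizes and optimizer hyperparameters
# below are common CIFAR defaults, NOT values confirmed by the text.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # assumption: standard CIFAR crop
    T.RandomHorizontalFlip(),      # stated in the text
    T.ToTensor(),
])

def make_optimizer(model):
    # assumption: typical SGD-with-momentum settings for CIFAR training
    return torch.optim.SGD(model.parameters(), lr=0.1,
                           momentum=0.9, weight_decay=1e-4)
```

This is a configuration sketch only; the paper's actual learning-rate schedule and batch size should be taken from the original source.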
Table 1: Classification accuracy on CIFAR-10.
Model  Activation Function  Accuracy(%)

VGG16 (Baseline)  ReLU  93.6 
VGG16 (Baseline)  PReLU  93.7 
VGG16 (Baseline)  SoftPlus  90.5 
VGG16 (Baseline)  ELU  92.3 
VGG16 (Ours)  Mix (ReLU)  94.2 
ResNet56 (Baseline)  ReLU  93.5 
ResNet56 (Baseline)  PReLU  94.0 
ResNet56 (Baseline)  SoftPlus  92.0 
ResNet56 (Baseline)  ELU  92.0 
ResNet56 (Ours)  Mix (ReLU)  94.4 
Table 2: Classification accuracy on CIFAR-100.
Model  Activation Function  Accuracy(%)

VGG16 (Baseline)  ReLU  72.0 
VGG16 (Baseline)  PReLU  72.5 
VGG16 (Baseline)  SoftPlus  64.2 
VGG16 (Baseline)  ELU  66.9 
VGG16 (Ours)  Mix (ReLU)  74.0 
ResNet56 (Baseline)  ReLU  71.6 
ResNet56 (Baseline)  PReLU  71.9 
ResNet56 (Baseline)  SoftPlus  69.3 
ResNet56 (Baseline)  ELU  69.5 
ResNet56 (Ours)  Mix (ReLU)  73.1 
Table 3: Classification accuracy on ImageNet.
Model  Activation Function  Accuracy(%)

AlexNet (Baseline)  ReLU  56.6 
AlexNet (Baseline)  PReLU  56.9 
AlexNet (Baseline)  SoftPlus  55.2 
AlexNet (Baseline)  ELU  56.6 
AlexNet (Ours)  Mix (ReLU)  57.2 
ResNet18 (Baseline)  ReLU  69.8 
ResNet18 (Baseline)  PReLU  69.1 
ResNet18 (Baseline)  SoftPlus  68.8 
ResNet18 (Baseline)  ELU  68.2 
ResNet18 (Ours)  Mix (ReLU)  70.8 

Table 4: Per-class average precision (AP) and mAP of SSD on PASCAL VOC.
Class  SSD (Baseline) AP  SSD Mix (ReLU) AP

aero  80.40  82.53 
bike  82.95  82.54 
bird  74.62  77.02 
boat  71.61  72.45 
bottle  50.49  51.36 
bus  86.04  85.57 
car  86.55  86.28 
cat  88.02  86.91 
chair  60.88  63.23 
cow  83.10  81.58 
table  77.87  78.35 
dog  85.55  84.06 
horse  86.68  87.79 
mbike  84.14  85.85 
person  78.26  79.29 
plant  50.44  52.82 
sheep  74.28  78.13 
sofa  80.03  80.72 
train  85.88  87.28 
tv  75.49  77.15 
mAP  77.16  78.05 
6.2 ImageNet
The ImageNet dataset [12] contains 1000 classes, each with roughly 1000 images: about 1.2 million training images, 50,000 validation images, and 100,000 unlabeled test images. Training is performed on the training set, whereas all evaluations are performed on the validation set.
For the ImageNet experiments, we apply the standard data augmentation of random cropping and random horizontal flipping. Optimization uses stochastic gradient descent (SGD) with momentum and a fixed mini-batch size. In Phase-2 training, the learning rate starts from its initial value and is decreased by a fixed factor on a step schedule; the models are trained for 90 epochs. Evaluation is performed on center-cropped validation images. For Phase-1 (Cooperative) training, we use 20% of the Phase-2 epochs, with the PReLU, ReLU, ELU, and SoftPlus activation functions (equally weighted) and an analogous step learning-rate schedule.
The results are shown in Table 3. We observe consistent accuracy improvements for AlexNet [12] and ResNet18 [9] on ImageNet. The model trained with our proposed two-phase training procedure significantly outperforms not only ReLU but also the other nonlinear activation functions, such as PReLU, ELU, and SoftPlus (Table 3).
6.3 PASCAL VOC
We perform experiments with the SSD model on the PASCAL VOC [5] dataset to validate the proposed approach on the object detection task. We follow the experimental setting and training schedule described in [17] for Phase-2 training. For Phase-1 (Cooperative) training, we use 20% of the Phase-2 iterations with the PReLU, ReLU, ELU, and SoftPlus activation functions (equally weighted). SSD [17] is a feed-forward convolutional network that produces a collection of fixed-size bounding boxes and classification scores indicating the presence of object class instances in those boxes, followed by non-maximum suppression, which produces the final detections.
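The non-maximum suppression step mentioned above can be illustrated with a minimal greedy implementation; the IoU threshold and box format are illustrative, and SSD's actual post-processing additionally handles per-class scores and top-k limits.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # keep a box only if it does not overlap too much with a kept one
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the two overlapping boxes collapse to one
```

Here the second box overlaps the first with IoU above the threshold, so only the higher-scoring one survives along with the distant third box.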
As shown in Table 4, the proposed approach is not limited to classification but also works well on the object detection task: our training procedure improves mAP by approximately 1% over the baseline.
Table 5: Ablation study on activation-function choices with ResNet56.
Model  Activation Function  Accuracy(%)

ResNet56 (Baseline)  ReLU  93.5 
ResNet56 (Baseline)  PReLU  94.0 
ResNet56 (Baseline)  SoftPlus  92.0 
ResNet56 (Baseline)  ELU  92.0 
ResNet56 (Baseline-TPT)  ReLU  93.6 
ResNet56 (Baseline-TPT)  PReLU  94.0 
ResNet56 (Baseline-TPT)  SoftPlus  92.0 
ResNet56 (Baseline-TPT)  ELU  92.1 
ResNet56  WNLA  40.0 
ResNet56  Mixture  94.3 
ResNet56  Mix (PReLU)  94.1 
ResNet56  Mix (SoftPlus)  93.1 
ResNet56  Mix (ELU)  93.2 
ResNet56 (Ours)  Mix (ReLU)  94.4 
Table 6: Ablation on the number of Phase-1 epochs (as a percentage of Phase-2 epochs).
#Epochs in Phase-1  Activation Function  Accuracy(%)

10%  Mix (ReLU)  94.03 
20%  Mix (ReLU)  94.40 
30%  Mix (ReLU)  94.40 
40%  Mix (ReLU)  94.42 
7 Ablation Studies
As shown in Table 5, training a model without any nonlinear activation function (WNLA) degrades performance massively, since ResNet56 is a deep CNN that is very hard to optimize without nonlinearity. Mix (SoftPlus) and Mix (ELU) improve over their respective baselines, but their overall scores remain significantly below Mix (ReLU). The key reason for this difference is likely the sparse behaviour of the ReLU activation in Phase-2 training; such sparsity is often desirable in deep networks for better predictive power and less overfitting to noise.
Although Mixture and Mix (ReLU) perform similarly, Mix (ReLU) should be preferred for the following reasons:

Mixture occupies more feature-map memory at run time than Mix (ReLU), because separate feature maps are generated for every nonlinear activation function; Mix (ReLU) does not increase feature-map (GPU) memory at run time.

Mixture adds delay at inference time compared to Mix (ReLU), due to the extra computations performed by the multiple activation functions.
Hence, Mix (ReLU) is the most suitable combination among all the possibilities considered. Our approach uses Two-Phase Training (TPT), in which Phase-1 uses 20% of the Phase-2 epochs; one might therefore suspect that the improvement comes simply from the extra 20% of training epochs. To test this, we also train the baselines with the same two-phase schedule (Baseline-TPT), except that only a single activation function is used in Phase-1. As shown in Table 5, Baseline-TPT performs similarly to the baseline, so we conclude that the improvement is due to the cooperative initialization in Phase-1, not the additional training epochs.
We also conduct an ablation on the number of Phase-1 epochs, using 10-40% of the standard (Phase-2) epochs for Phase-1 (Cooperative) training. As shown in Table 6, 20% of the Phase-2 epochs is the most suitable choice: it gives a significant performance improvement with only a 20% increase in overall training time.
8 Visualizing Last Layer Features on MNIST
In Figure 2, we plot t-SNE [19] embeddings for a LeNet-like network on the MNIST [16] dataset to visualize the features learned with various nonlinear activation functions. The LeNet-like network contains two convolutional layers and one fully-connected layer; both convolutional layers use 5x5 kernels, with twenty filters in the first layer and thirty in the second.
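The feature dimensions of this LeNet-like network can be checked with a small helper; the pooling and stride details are not stated above, so we assume plain valid convolutions with stride 1.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a standard 2D convolution."""
    return (size + 2 * pad - kernel) // stride + 1

# LeNet-like network from the text: two 5x5 conv layers with 20 and
# 30 filters; stride/pooling are assumptions, not given in the text.
s = 28                  # MNIST input resolution
s = conv_out(s, 5)      # conv1 (20 filters) -> 24x24
s = conv_out(s, 5)      # conv2 (30 filters) -> 20x20
flat = 30 * s * s       # features entering the fully-connected layer
print(s, flat)
```

Under these assumptions the fully-connected layer would receive a 30 x 20 x 20 feature volume; with pooling layers (common in LeNet variants) the flattened size would be smaller.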
t-Distributed Stochastic Neighbor Embedding (t-SNE) is a dimensionality reduction technique widely used to visualize the high-level features learned by a CNN, mapping high-dimensional feature representations into a space of two or three dimensions. Figure 2 shows the two-dimensional embeddings of the last-layer features. The features corresponding to Mix (ReLU) are more separable than the other embeddings: they are well separated and sufficiently discriminative, as shown by the corresponding representative points in Figure 2.
9 Analysis
This section analyses the performance gain from two perspectives. The first investigates the convergence of Mix (ReLU) versus ReLU on the ResNet56 architecture; as can be inferred from Figure 3, Mix (ReLU) converges faster than ReLU. The second investigates overfitting: empirically, Mix (ReLU) is more robust to overfitting than ReLU.
The investigation is performed on the CIFAR-100 dataset using the ResNet56 model. We use standard data augmentation (random horizontal flipping and random cropping). Optimization is performed using stochastic gradient descent (SGD) with momentum, a fixed mini-batch size, and a weight decay of 0.0001. The learning rate starts from its initial value and is decreased by a fixed factor on a step schedule, and the models are trained from scratch for 300 epochs.
9.1 Effect of using Mix (ReLU) on convergence
Table 7: Effect of Mix (ReLU) on overfitting (ResNet56 on CIFAR-100, with 100% and 25% of the training set).
Method  Activation Function  Train Accuracy(%)  Test Accuracy(%)

ResNet56 Baseline (100% of training set)  ReLU  99.4  71.6 
ResNet56 (100% of training set)  Mix (ReLU)  99.6  73.1 
ResNet56 Baseline (25% of training set)  ReLU  99.9  51.3 
ResNet56 (25% of training set)  Mix (ReLU)  99.9  60.8 
We analysed the convergence rate of the Mix (ReLU) model and found it slightly better than that of ReLU, as can be inferred from the respective curves in Figure 3. The plots of cross-entropy loss versus the number of epochs for the training and test sets show that the loss for Mix (ReLU) drops noticeably faster than the loss for ReLU on the training set.
9.2 Effect of Mix (ReLU) on overfitting
Our method uses multiple activation functions in the initial few training epochs of the deep network. The gradients from these activation functions are accumulated, which regularizes the gradient flow in the model over the complete input range (negative and positive values). This aggregation of gradients also regularizes the weight-parameter updates, reducing the chances of overfitting. To support this hypothesis, we run experiments in two scenarios: the first uses the complete training set (100% of the data), while the second uses only 25% of the training data.
The first scenario is run on the CIFAR-100 dataset using the ResNet56 architecture. With only ReLU, the model achieves 71.6% test accuracy (Table 7), while with Mix (ReLU) it achieves 73.1%. From Table 7 we can infer that Mix (ReLU) is more robust to overfitting: the difference between its training and test accuracy is 26.5, smaller than ReLU's difference of 27.8. Since this gap between the two methods is not very pronounced, in the second scenario we use only 25% of the CIFAR-100 training images.
In the second scenario, the ResNet56 ReLU model trained on only 25% of the training samples achieves 99.9% training and 51.3% test accuracy. ResNet56 with Mix (ReLU), on the other hand, achieves 99.9% training and 60.8% test accuracy.
From Figure 4, we can conclude that Mix (ReLU) is less prone to overfitting: the difference between its training and test accuracy is 39.1, considerably smaller than ReLU's difference of 48.6.
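The generalization gaps quoted above follow directly from the train/test accuracies in Table 7:

```python
# Generalization gap (train accuracy - test accuracy) from Table 7.
runs = {
    "ReLU, 100% data":       (99.4, 71.6),
    "Mix (ReLU), 100% data": (99.6, 73.1),
    "ReLU, 25% data":        (99.9, 51.3),
    "Mix (ReLU), 25% data":  (99.9, 60.8),
}
gaps = {name: round(train - test, 1) for name, (train, test) in runs.items()}
print(gaps)
```

The gap shrinks for Mix (ReLU) in both scenarios, and the reduction is much larger in the low-data (25%) regime, consistent with the regularization argument above.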
10 Conclusion
We propose Cooperative Initialization for training deep networks to improve performance. We show experimentally that a mixture of nonlinear activation functions benefits a CNN in the initial phase of training, starting from random initialization: the multiple activation functions regularize the gradient flow for both positive and negative inputs, thereby improving the update of the weight parameters, which is very crucial at the initial stage. Our experimental results show that the proposed approach improves the performance of state-of-the-art networks. It also helps reduce overfitting and does not increase the number of parameters or the inference (test) time of the final model. Cooperative Initialization is therefore a promising approach for improving the feature representations and performance of deep networks.
References
[1] (2014) Learning activation functions to improve deep neural networks. arXiv preprint arXiv:1412.6830.
[2] (1995) Neural networks for pattern recognition. Oxford University Press.
[3] (2015) Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289.
[4] (2008) A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pp. 160-167.
[5] (2015) The PASCAL visual object classes challenge: a retrospective. International Journal of Computer Vision 111(1), pp. 98-136.
[6] (2013) Maxout networks. arXiv preprint arXiv:1302.4389.
[7] (2013) Speech recognition with deep recurrent neural networks. In ICASSP.
[8] (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034.
[9] (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778.
[10] (2009) Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems, pp. 1607-1614.
[11] (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
[12] (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105.
[13] (2009) Learning multiple layers of features from tiny images. Technical report.
[14] (2015) Deep learning. Nature.
[15] (1989) Backpropagation applied to handwritten zip code recognition. Neural Computation 1(4), pp. 541-551.
[16] (2010) MNIST handwritten digit database.
[17] (2016) SSD: single shot multibox detector. In European Conference on Computer Vision, pp. 21-37.
[18] Rectifier nonlinearities improve neural network acoustic models.
[19] (2008) Visualizing data using t-SNE. Journal of Machine Learning Research.
[20] (2019) CPWC: contextual point wise convolution for object recognition. arXiv preprint arXiv:1910.09643.
[21] (1943) A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics.
[22] (2010) Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807-814.
[23] (2017) Automatic differentiation in PyTorch.
[24] (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[25] (2019) FALF convnets: fatuous auxiliary loss based filter-pruning for efficient deep CNNs. Image and Vision Computing, pp. 103857.
[26] (2019) Stability based filter pruning for accelerating deep CNNs. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1166-1174.
[27] (2019) Multi-layer pruning framework for compressing single shot multibox detector. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1318-1327.
[28] (2019) Accuracy booster: performance boosting using feature map recalibration. arXiv preprint arXiv:1903.04407.
[29] (2018) Leveraging filter correlations for deep model compression. arXiv preprint arXiv:1811.10559.
[30] (2019) HetConv: beyond homogeneous convolution kernels for deep CNNs. International Journal of Computer Vision, pp. 1-21.
[31] (2019) HetConv: heterogeneous kernel-based convolutions for deep CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4835-4844.
[32] (2019) Play and prune: adaptive filter pruning for deep model compression. In International Joint Conference on Artificial Intelligence (IJCAI).
[33] (2015) Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9.
[34] (2017) Parametric exponential linear unit for deep convolutional neural networks. In ICMLA.
[35] (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853.
[36] Improving deep neural networks using softplus units. In IJCNN.