Cooperative Initialization based Deep Neural Network Training

01/05/2020
by Pravendra Singh, et al.

Researchers have proposed various activation functions. These activation functions help deep networks learn non-linear behavior and have a significant effect on training dynamics and task performance. The performance of these activations also depends on the initial state of the weight parameters: different initial states lead to differences in network performance. In this paper, we propose a cooperative initialization for training deep networks with the ReLU activation function to improve network performance. Our approach uses multiple activation functions in the initial few epochs to update all sets of weight parameters while training the network. These activation functions cooperate to overcome each other's drawbacks in the weight updates, which in effect yields better feature representations and boosts network performance later. Cooperative initialization based training also helps reduce overfitting and does not increase the number of parameters or the inference (test) time of the final model, while improving performance. Experiments show that our approach outperforms various baselines and, at the same time, performs well across tasks such as classification and detection. The Top-1 classification accuracy of a model trained using our approach improves by 2.8% on the CIFAR-100 dataset.
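As summarized above, the approach updates the same weight parameters under several activation functions during the first few epochs and then continues with ReLU alone. Below is a minimal, hypothetical PyTorch sketch of that idea; the choice of cooperating activations (ReLU, Tanh, LeakyReLU), the gradient-averaging rule, the five-epoch warm-up, and the names SwitchableMLP and loader are assumptions made for illustration, not the paper's exact procedure.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchableMLP(nn.Module):
    """Toy network whose hidden activation can be swapped at call time."""
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x, act=F.relu):
        return self.fc2(act(self.fc1(x)))

def cooperative_step(model, optimizer, x, y, activations):
    # One update of the shared weights: average the gradients produced by
    # each activation function, then apply a single optimizer step.
    optimizer.zero_grad()
    for act in activations:
        loss = F.cross_entropy(model(x, act=act), y) / len(activations)
        loss.backward()          # gradients accumulate across activations
    optimizer.step()

model = SwitchableMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

coop_epochs = 5                                   # assumed warm-up length
coop_acts = [F.relu, torch.tanh, F.leaky_relu]    # assumed cooperating set

# Dummy data standing in for a real (input, label) DataLoader.
loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(10)]

for epoch in range(90):
    # Cooperative phase for the first few epochs, plain ReLU afterwards.
    acts = coop_acts if epoch < coop_epochs else [F.relu]
    for x, y in loader:
        cooperative_step(model, optimizer, x, y, acts)

Accumulating the per-activation losses, each scaled by 1/len(activations), before a single optimizer step is one simple way to let the activations "cooperate" on the same parameters; the paper may combine the per-activation updates differently.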

