Gradient Amplification: An efficient way to train deep neural networks

06/16/2020 · Sunitha Basodi, et al. · Georgia State University

Improving the performance of deep learning models and reducing their training times are ongoing challenges in deep neural networks. Several approaches have been proposed to address these challenges, one of which is to increase the depth of the neural network. Such deeper networks not only increase training times, but also suffer from the vanishing gradient problem during training. In this work, we propose a gradient amplification approach for training deep learning models to prevent vanishing gradients, and we develop a training strategy that enables or disables gradient amplification across epochs with different learning rates. We perform experiments on VGG-19 and Resnet models (Resnet-18 and Resnet-34), and study the impact of the amplification parameters on these models in detail. Our proposed approach improves the performance of these deep learning models even at higher learning rates, thereby allowing them to achieve higher performance with reduced training time.


1 Introduction

Deep learning models have achieved state-of-the-art performances in several areas including computer vision

[9]

, automatic speech recognition

[10]

, natural language processing

[12] and beyond [15, 18, 28, 27, 25]. These models are designed, trained, and tuned to achieve better performance for a given dataset, and their performance increases further as the depth of the network increases [13]. The major challenges associated with increasing the depth of the network are the large amount of time required to train the model, even on parallel computational resources, and vanishing gradients [13]. Training deep neural networks is time-consuming and can take days or sometimes weeks depending on the type of model architecture and the size of the dataset. One way to speed up the training process is to increase the learning rate. This accelerates training by converging to an optimum quickly, but carries the risk of missing the global optimum, resulting in sub-optimal solutions or sometimes non-convergence [1]. Lower learning rates do not carry such a risk and can converge to an optimum, but they increase training time. In general, a training process with a learning rate scheduler begins with a higher learning rate for a number of epochs, followed by reduced learning rates for subsequent sets of epochs; this is repeated until the desired optimum or model performance is achieved. One way to improve the training speed of deep learning models is therefore to find ways of reaching optimal model parameters at larger learning rates.

The other important area of research in deep learning models is preventing the vanishing gradient problem [11, 6, 8], which occurs during the training of artificial neural networks, specifically during backpropagation. There are several approaches to avoid this problem. One early method was to perform a two-step training process involving network weight initialization followed by fine-tuning using backpropagation [22]. Other, simpler methods that mitigate this problem are the Rectified Linear Unit (ReLU) activation function [19, 5] and batch normalization (BN) [14]. Since ReLU activation saturates inputs in only one direction, it is less affected by vanishing gradients. Batch normalization, a more recent approach, not only improves the performance of the model but also reduces vanishing gradient problems. Resnet architectures have residual connections which also overcome the vanishing gradient problem to some extent [9]. Lately, due to improvements in hardware along with the computational abilities of Graphical Processing Units (GPUs), neural networks can often be trained without severe vanishing gradient issues.

In this work, we propose a novel gradient amplification approach along with a training strategy that addresses the challenges discussed above. In this method, gradients are dynamically increased for some layers during backpropagation so that significant gradient values are propagated to the initial layers. Amplification is applied for a number of epochs, while normal training with no gradient amplification is used for the remaining epochs. When neural networks are trained using this method, we observe that their training/testing accuracies improve and reach higher values faster, even at higher learning rates, thereby reducing the training time of these deep learning models.

Our contributions include the following:

  • We propose a novel way of amplifying gradients during backpropagation for effective training of deep neural networks

  • We suggest a unique training strategy which includes amplification during certain epochs along with normal training with no amplification

  • We perform comprehensive experiments to understand the impact of different parameters used in amplification

  • We perform a step-wise analysis of the training strategy, demonstrating the best strategy under different learning rates

The remainder of this paper is organized as follows. Related works are briefly described in Section 2. Our proposed approach is presented in Section 3. Experimental setup, results and their comparisons are covered in Section 4, followed by conclusions in Section 5.

2 Background

In this section, we briefly discuss existing approaches to address the vanishing gradient problem and to reduce the training time of deep learning models, as well as the impact of learning rates.

2.1 Vanishing gradients

The vanishing gradient problem [11, 6, 8] occurs while training artificial neural networks during backpropagation and can become significant as the depth of the network increases. In gradient-based learning methods, during backpropagation, network weights are updated proportionally to the gradient value (the partial derivative of the cost function with respect to the current weights) after each training iteration (epoch). Depending on the type of activation functions and the network architecture, the gradient value is sometimes too small and gets gradually diminished during backpropagation towards the initial layers. This prevents the network from updating its weights effectively, and when the value is too small the network may stop training (updating weights) altogether. Though there is no fundamental solution to this problem, some approaches help to avoid it [23]. One such approach consists of performing a two-step training process: in the first step, network weights are trained using unsupervised learning methods (such as auto-encoding) and then the weights are fine-tuned using backpropagation [22]. Other simpler methods that mitigate this problem are the ReLU activation function [19, 5], batch normalization (BN) [14] and Resnet networks [9]. ReLU activation zeros out negative values and passes through only positive values; as it saturates inputs in only one direction, it is less affected by vanishing gradients. Batch normalization, besides boosting the performance of the model, also reduces vanishing gradient problems: during every training iteration, the input data is normalized to reduce its variance, so that the data does not have large bounds; since inputs are normalized, gradients are also regulated [14]. Resnet network architectures have residual connections which help to alleviate this problem. In addition to these approaches, recent advancements in hardware have also played a crucial role in mitigating this issue, as increased computational abilities and the availability of GPUs help to reduce this problem.
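To make the effect concrete, the short sketch below (our illustration, not code from the paper) compares the gradient norms reaching the earliest layers of a 20-layer sigmoid network with those of an equally deep ReLU network; the depth, width and dummy loss are arbitrary choices made only for this example.

import torch
import torch.nn as nn

def layer_gradient_norms(activation):
    torch.manual_seed(0)
    layers = []
    for _ in range(20):                      # 20 hidden layers make the effect visible
        layers += [nn.Linear(64, 64), activation()]
    layers.append(nn.Linear(64, 1))
    model = nn.Sequential(*layers)

    x = torch.randn(32, 64)
    loss = model(x).pow(2).mean()            # dummy loss, only used to produce gradients
    loss.backward()

    # Gradient norm of each Linear layer's weights, ordered from first to last layer
    return [m.weight.grad.norm().item() for m in model if isinstance(m, nn.Linear)]

print("sigmoid:", ["%.1e" % g for g in layer_gradient_norms(nn.Sigmoid)[:5]])
print("relu:   ", ["%.1e" % g for g in layer_gradient_norms(nn.ReLU)[:5]])

With sigmoid activations, the gradient norms of the first few layers are typically orders of magnitude smaller than with ReLU, which is the vanishing behavior described above.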

2.2 Learning rates

Learning rate is one of the most important hyperparameters controlling the performance of deep neural networks. Higher learning rates cause the model to train faster but may lead to sub-optimal solutions, while lower learning rates take a longer time to train the model but can achieve better optimal solutions [1]. There are several approaches designed to take advantage of both. One such method is a learning rate scheduler, where we start with a higher learning rate and gradually lower the rate over the training epochs [2]. Such a scheduler can be designed in several ways: directly assigning learning rates to epochs; gradually decaying the learning rate based on the current learning rate, current epoch and total number of epochs (time-based decay); reducing the learning rate in a step-wise manner after a certain number of epochs (step decay); or exponentially decaying the learning rate based on the initial learning rate and the current epoch (exponential decay). Another approach is to adapt the learning rate dynamically based on the performance of the optimization algorithm without the need for any scheduling; such methods include Adagrad [3], Adadelta [26], RMSprop [7] and Adam [16]. Article [17] summarizes all of the above methods in detail. Paper [21] proposes a method to automatically tune the learning rate based on the local gradient variations of the data samples, which performs similarly to other adaptive learning rate methods. Another paper [24] shows that models can achieve similar test performance without decaying the learning rate by increasing the batch size instead; this method not only requires fewer parameter updates but also increases parallelism, thereby reducing training times.
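As a concrete illustration of the step-decay variant discussed above, a minimal PyTorch sketch might look as follows; the placeholder model and the single milestone at epoch 100 are assumptions made for this example, not a prescription from the paper.

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Drop the learning rate by 10x after epoch 100 (0.1 -> 0.01), a step-decay schedule.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100], gamma=0.1)

for epoch in range(150):
    # ... one pass over the training data would go here ...
    optimizer.step()       # normally called once per batch inside the inner loop
    scheduler.step()       # advance the schedule once per epoch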

In the next section, we discuss our proposed method along with a training strategy that uses a fixed learning rate schedule across epochs and achieves better accuracies even at higher learning rates.

3 Proposed Method

Our proposed approach is to dynamically amplify (increase) the value of the gradients for a selection of layers during backpropagation. This ensures that the gradient values are not diminished while updating the weights of the initial network layers and that significant gradient values are available during backpropagation even for deep neural networks with a large number of layers. Architectures of neural networks have evolved over the years and there are many different layers at which such an amplification can be performed. The layers on which gradient amplification can be performed during backpropagation are arranged into a group. To determine this group, the types of layers to be included for gradient amplification are identified first. Any of the layers, such as convolution layers, batch normalization layers, pooling layers, activation function layers and so on, can be chosen to be included in the group. The type of layer considered plays a crucial role in the performance of the model. Gradient amplification is then performed on a subset of the layers from this group, which we refer to as the amplification layers in the rest of the paper. Selection of the amplification layers from the group can be done in various ways; in this work, we determine them by random selection. To identify which subset size gives better performance, we introduce a ratio parameter representing the fraction of layers to be selected from the group. Gradients are amplified when they pass through these randomly selected layers during backpropagation. During amplification, the value of the gradients is increased at run time by multiplying the actual gradient values by an amplification factor. The value of this factor is important, as it should not be too small or too large: if it is too small, the increase might not be effective, and if it is too large it might overfit the data or cause incorrect weight updates. During training, we perform gradient amplification for some epochs and no gradient amplification for the other epochs. Algorithm 1 describes the training process with gradient amplification and Algorithm 2 describes the steps for the selection of the amplification layers.
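Before giving the full training procedure (Algorithms 1 and 2 below), the sketch that follows illustrates how such run-time amplification could be realized in PyTorch by scaling gradients with backward hooks on the selected layers; the helper name amplify_gradients and the hook-based mechanism are our own illustration, not necessarily the authors' implementation.

def amplify_gradients(layers, amp_factor):
    """Register backward hooks that scale the gradient flowing out of each
    selected layer toward the earlier layers by `amp_factor`."""
    handles = []
    for layer in layers:
        def hook(module, grad_input, grad_output, factor=amp_factor):
            # Return a scaled copy of the gradient w.r.t. the layer's input,
            # i.e. the gradient that continues to the preceding layers.
            return tuple(g * factor if g is not None else None for g in grad_input)
        handles.append(layer.register_full_backward_hook(hook))
    return handles

# To disable amplification for the non-amplified epochs:
#     for h in handles: h.remove()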

Input: model, params = [(end_epoch, lr, ratio, amp_factor), ...]
Variables: amp_factor is the gradient amplification factor; ratio is the ratio of layers to be selected for amplification; lr is the learning rate; amp_layers is the set of layers selected to perform amplification; model is the neural network model; params is an array of elements, each in the format (end_epoch, lr, ratio, amp_factor)
  epoch = 0
  for (end_epoch, lr, ratio, amp_factor) in params do
     update learning rate to lr
     optimizer = sgd_optimizer(lr)
     if (ratio > 0) then
         amp_layers = GetGradientAmpLayers(model, ratio)
     end if
     for epoch to end_epoch do
        train the model
        if (ratio > 0) then
           multiply gradients with amp_factor for layers in amp_layers during backpropagation
        else
           perform regular backpropagation without gradient amplification
        end if
     end for
     epoch = end_epoch
     reset amp_layers
     evaluate model with a testing set
  end for
  return model
Algorithm 1 Training process with gradient amplification
Input: model, ratio, layer_types. ratio is the ratio of layers to be selected for amplification; G is the set consisting of the group of all layers that can be used for gradient amplification; layer_types is the set indicating the types of layers to be used for amplification
  Function GetGradientAmpLayers(model, ratio)
  for all layer in model whose type is in layer_types do
     include layer in G
  end for
  n = ratio * sizeof(G)
  amp_layers = RandomSelect(G, n)
  return amp_layers
  EndFunction
Algorithm 2 Determination of amplification layers
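A possible Python realization of the layer-selection step in Algorithm 2 is sketched below, assuming PyTorch; the function name get_gradient_amp_layers and the default layer types are our own naming choices that mirror the pseudocode.

import random
import torch.nn as nn

def get_gradient_amp_layers(model, ratio, layer_types=(nn.BatchNorm2d, nn.ReLU)):
    """Collect every layer of the chosen types into the candidate group, then
    randomly select `ratio` * (group size) of them for gradient amplification."""
    group = [m for m in model.modules() if isinstance(m, layer_types)]
    num_selected = int(ratio * len(group))
    return random.sample(group, num_selected)

# Hypothetical usage, combining it with the earlier hook-based helper:
#     amp_layers = get_gradient_amp_layers(model, ratio=0.5, layer_types=(nn.BatchNorm2d,))
#     handles = amplify_gradients(amp_layers, amp_factor=2.0)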

4 Experiments & Results

4.1 Setup

Our experiments are performed on the CIFAR-10 dataset, which consists of 60,000 color images in 10 classes, with 6,000 images per class, each of 32x32 resolution. We implement our algorithms using Python and PyTorch [20]. In our experiments, we employ several standard deep learning models and train them for 150 epochs. The number of epochs and the combination of epochs and learning rates can be chosen as one sees fit. In this work, the first 100 epochs have a learning rate of 0.1 and the next 50 epochs have a learning rate of 0.01 (as shown in Fig. 1). The first 50 epochs are trained with a learning rate of 0.1 without gradient amplification. This is because for the first few epochs the model is considered to be in a transient phase and the network parameters undergo significant changes. This initial transient can span any number of epochs; in this work, we set it to 50 epochs. The next 50 epochs have the same learning rate of 0.1 but with gradient amplification applied during backpropagation while training the model (as shown in Fig. 2(a)). After identifying the best amplification parameters for epochs 51-100, we keep those parameters for those epochs and extend amplification to epochs 101-130 to identify the best parameters there, with no amplification for epochs 131-150, as shown in Fig. 2(b). There are three important parameters in the gradient amplification method, namely, the type of layers to be employed for amplification, the ratio of layers to be chosen from the candidate layers to perform amplification, and the gradient amplification factor. The effects of varying each of these parameters are explained in detail in the subsections below. We run our experiments on Resnet and VGG models with different architectures.
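For illustration, this schedule could be encoded in the element format of Algorithm 1 as follows; the ratio and amplification-factor values shown (0.5 and 2) are placeholders, since these are precisely the parameters swept in the experiments below.

# A hypothetical encoding of the schedule in Fig. 1 and Fig. 2(b), using the
# (end_epoch, learning_rate, ratio, amp_factor) element format of Algorithm 1.
params = [
    (50,  0.1,  0.0, 1),  # epochs 1-50:    lr 0.1,  no amplification (initial transient)
    (100, 0.1,  0.5, 2),  # epochs 51-100:  lr 0.1,  amplification enabled (step-1)
    (130, 0.01, 0.5, 2),  # epochs 101-130: lr 0.01, amplification enabled (step-2)
    (150, 0.01, 0.0, 1),  # epochs 131-150: lr 0.01, no amplification
]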

Here we perform a three-phase analysis while evaluating our models.

Phase1

In this phase, we choose the type of layers to be considered for amplification. There are several types of layers at which amplification can be applied, such as activation function layers, pooling layers, batch normalization layers and convolution layers. Convolution layers apply kernel functions and extract important features from the data, and pooling layers accumulate features over a grid using strategies such as taking maximum values, minimum values, averaging, fractional pooling and so on. Since the tuning of network parameters during training can be sensitive to these values, in this work we do not perform amplification on these layers. Batch normalization layers normalize data over a batch of inputs, and activation function layers transform data non-linearly before forwarding it to the succeeding layers. In our work, we perform gradient amplification on batch normalization and activation function layers; ReLU is the activation function used in the Resnet and VGG models. From these two types of layers, either one or both can be considered for amplification. Once the type of layers is selected, we tag all the layers of the selected type as belonging to the candidate group, and move to the next phase to determine the final amplification layers.

Fig. 1: Experiment setting showing the number of epochs and learning rates corresponding to epochs for training all the models.
Phase2

Once the candidate group of layers is determined, the next task is to find the subset of layers which gives better performance. This requires identifying the subset size and selecting that many layers from the group. Since the size is unknown, experiments are performed by selecting the size as a ratio of the size of the group. This ratio is chosen from a fixed set of values between 0 and 1. The actual number of amplification layers is determined by this ratio. When the value is 0, no layers are chosen and gradient amplification is not performed; when the value is 1, all the layers in the group are considered for amplification. The value 0 is included to verify whether the model performs better without gradient amplification or vice versa. Random selection is employed to select the subset of layers from the group. We perform experiments with all these sizes and select the model with the best performance.

Phase3

In this phase, the layers on which gradient amplification is applied are known. The only parameter left to explore is the factor by which the gradients are amplified. To reduce the computational complexity of testing all combinations of layer type, ratio and amplification factor, experiments are first performed on all combinations of layer type and ratio, i.e., up to phase-2; the best models from phase-2 are then chosen and analyzed by varying the amplification factor. The amplification factor is first varied coarsely from 1 to 10 to analyze the impact of amplification and then fine-tuned by varying it from 1 to 3 in steps of 0.1 to determine the value that works best during training.
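For reference, the two sweeps of the amplification factor correspond to the following grids (written out here as a small sketch):

coarse_factors = list(range(1, 11))                         # 1, 2, ..., 10
fine_factors = [round(1 + 0.1 * i, 1) for i in range(21)]   # 1.0, 1.1, ..., 3.0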

4.2 Results

In our experiments, we employ Resnet-18, Resnet-34 and VGG-19 models and perform a thorough analysis. As the complexity of the model and the depth of the network increase, training takes longer and requires more GPU/CPU resources. Since we perform a large number of experiments (on the order of hundreds), models with relatively simpler architectures and fewer layers keep the computation time manageable. Most of our experiments are performed on the High Performance Computing (HPC) cluster at Georgia State University (GSU) [4].

While performing experiments, we choose either batch normalization layers, ReLU layers, or both, and then verify their performance over multiple epochs. We first explain the training setup, which is important for understanding the performance tables. We train our models for 150 epochs; the learning rate for the first 100 epochs is 0.1 and for the next 50 it is 0.01. We train the models with no gradient amplification for the first 50 epochs as the initial transient, and for the remaining epochs we aim to identify the pattern of amplification epochs that improves the overall performance of the model. We follow the training steps described in Algorithm 1, where the schedule is an array of elements whose values represent the end epoch, learning rate, ratio of amplified layers and gradient amplification factor, respectively; setting the ratio to 0 corresponds to no gradient amplification. For instance, an element with end epoch 100, learning rate 0.1, ratio 0.5 and amplification factor 2 means that the model is trained with a learning rate of 0.1 until epoch 100, during which 50% of the candidate layers are selected for gradient amplification with an amplification factor of 2.

The performance of the original models with no gradient amplification is recorded first. Next, models with gradient amplification are evaluated in two steps. In step-1, no gradient amplification is applied during the first and the last 50 epochs, as shown in Fig. 2(a), and for epochs 51-100 the ratio of selected layers is scanned over the candidate values to identify the best model for a given amplification factor. For simplicity, we refer to the amplification ratio used during epochs 51-100 in step-1 as the step-1 ratio, and to the ratios used during epochs 51-100 and 101-130 in step-2 as the step-2 ratios. Once we identify the best ratio for epochs 51-100, we keep it fixed and run experiments with different ratio values for the next 30 epochs (101-130), as shown in Fig. 2(b). Note that the learning rate is decreased to 0.01 after 100 epochs. After these experiments, the best models are chosen for the phase-3 experiments to analyze the impact of the gradient amplification factor on performance. All the phases and the various experiments performed are shown in Fig. 3.

From our initial experiments, we observe that certain ratio values on average provide better results in step-1, as explained in detail under Phase-2 below. Instead of running step-2 only on the best models from step-1, different models are built with each of these better-performing ratio values for epochs 51-100, where the learning rate is 0.1, and the phase-1 and phase-2 analyses are performed on the resulting four step-2 amplification settings (see Fig. 2(b)).
(a) Step-1
(b) Step-2
Fig. 2: Two-step training process carried out during the performance analysis of the deep learning models. Experiments are first executed on the models with the training steps shown in step-1 (a). For step-2 (b), the ratio parameters for gradient amplification that give better model performance in step-1 are kept as the parameters for epochs 51-100, and experiments are performed by varying the ratio parameters for epochs 101-130, with no amplification from epochs 131-150. These settings show the number of epochs and the learning rates corresponding to these epochs while training the models.
Fig. 3: Overview of all the experiments performed by varying different parameters of gradient amplification.
(a) VGG-19
(b) Resnet-18
(c) Resnet-34
Fig. 4: Performance of the models after training with the step-2 strategy with gradient amplification (red) applied from epochs 51-100, compared to the mean accuracies of the original models (blue) with no gradient amplification. In each plot, the blue horizontal line shows the average testing accuracy of the original models without gradient amplification, and "amp testing" refers to the testing accuracies of models with gradient amplification. The type of layer is shown in each subplot; the horizontal and vertical axes correspond to the ratio of amplified layers and the accuracies, respectively. These plots correspond to the setting where a ratio of 0.5 of the layers is amplified for epochs 51-100; the other settings show similar performance patterns.

4.2.1 Phase-1: (Effect of type of layers)

In this work, ReLU, BN or both (ReLU+BN) are the layer types used for gradient amplification. We run the original models without gradient amplification 5 times, record their training and testing accuracies, and compare the corresponding gradient-amplified models with the mean of these accuracies across the 5 runs. For each type of layer chosen, experiments are run for each of the four step-2 parameter settings; that is, for each selected ratio value used during epochs 51-100, we build models by varying the ratio values for epochs 101-130, with no amplification from epochs 131-150 (see Fig. 2(b)). The best training and testing accuracies of these models are compared with the average training and testing accuracies of the corresponding original model. Since the training accuracies of the original models are close to 100%, we emphasize the comparison of testing accuracies.

In the VGG-19 model, we perform the analysis considering ReLU, BN or both layers for gradient amplification and report the accuracy improvements for the respective parameter settings. When only ReLU layers are chosen, testing accuracies improve for the settings considered. When amplification is applied only to BN layers, an improvement in testing accuracies is also observed. When both ReLU and BN are chosen, the improvements across the different models are smaller than when only ReLU or only BN is used. The best improvements are seen when amplification is applied only to BN layers.

Resnet models are made of residual blocks, each of which consists of two convolutional units, so each block has two ReLU and two BN layers. In these models, in addition to experimenting with all ReLU and BN layers, we also perform experiments considering only one of the BN layers from each residual block. When all the BN layers are considered for amplification in Resnet-18 models, improvements are observed across the parameter settings. When only ReLU layers are used, the accuracies also differ from the baseline across the settings. When both BN and ReLU are used, there is an initial improvement for some settings, but the performance drops for the remaining ones. When only one of the BN layers from each residual block is considered, there is an improvement for all the respective settings. In summary, when both BN and ReLU are used for amplification, there is an improvement only for some parameter settings and the performance either declines or changes only slightly for the remaining ones; when either ReLU or BN layers alone are considered, larger improvements are observed for some settings and smaller ones for the others; and when one of the BN layers in each residual block is considered, the models show accuracy improvements for all parameter settings and also achieve the best testing accuracy, with a clear improvement over the original model.

Similarly for Resnet-34, when only BN layers are used, there is an accuracy gain for the first settings which then slightly decreases for the remaining ones. When only ReLU layers are considered, there is an improvement for some settings and a slight decrease (less than 0.8%) for the others. When both BN and ReLU are used, there is an initial improvement for the first setting and then the performance declines for the others. When only one of the BN layers from each residual block is used, an improvement can be seen for all the respective settings. When all the BN layers are used for amplification, a clear improvement appears only for some parameter settings and the performance either declines or changes only slightly for the remaining ones; a similar pattern is observed when only ReLU or both ReLU+BN are used for amplification. When one of the BN layers in each residual block is considered, the models show accuracy improvements for all parameter settings and also achieve the best testing accuracy, with a clear improvement over the original model.

Our experiments show that for VGG-19 models, selecting ReLU layers improves the performance of the models, but the best performance is achieved when BN layers are chosen for amplification. In Resnet-18 and Resnet-34, the performance of the models improves when BN layers are chosen for amplification, and the best performance is achieved when only one of the BN layers from each residual block is selected for amplification.

4.2.2 Phase-2: (Effect of ratio of selected layers )

Here, we discuss the impact of the ratio of selected layers for each of the above layer types. In our training strategy, gradient amplification is first applied in step-1 (as shown in Fig. 2(a)) to determine the best-performing ratio values for epochs 51-100. The best training and testing accuracies after gradient amplification across all the ratio values are compared with those of the original baseline models to analyze the overall effectiveness. In VGG-19, for all layer types, as the ratio of amplified layers increases, the performance of the model diminishes compared to the original models, with training accuracies decreasing at a higher rate than testing accuracies. When gradient amplification is performed, the best training and testing accuracies decrease slightly for BN only, increase slightly for ReLU+BN, and show an improvement for ReLU only.

In Resnet-18 and Resnet-34 models, as the ratio of amplified layers increases, the training and testing accuracies remain close to those of the baseline models when only one of the BN layers in each residual block is considered. When either BN or ReLU alone is considered, the performance of the models decreases slightly compared to the original models as the ratio increases. When both BN and ReLU are considered for amplification, the performance of the models decreases significantly compared to the respective baseline models as the ratio increases. In both Resnet-18 and Resnet-34, the best training and testing accuracies after gradient amplification across all the ratio values show improvements for each configuration (BN only, ReLU only, ReLU+BN, and one BN layer per residual block).

In step-1, amplification is applied only for epochs 51-100. We also perform experiments applying amplification during epochs 101-150, considering all or some of those epochs. We observe that the models perform better when amplification is applied for epochs 51-100 followed by epochs 101-130, as shown in step-2 (Fig. 2(b)). To narrow the parameter space, we only consider the ratio values for epochs 51-100 where the models perform better. From the analysis of model performance in step-1, certain ratio values on average provide better results; these ratios are therefore used for epochs 51-100, as mentioned earlier, and the ratio values for epochs 101-130 are varied. Fig. 4 shows the performance of the VGG-19, Resnet-18 and Resnet-34 models for these parameter settings when different layers are amplified. Performance improvements of these models are discussed in detail under Phase-1; here we emphasize the effect on the models as the ratio of amplified layers increases. For VGG-19 models, as the ratio of layers increases, there is an initial increase in performance which then decreases for larger ratios; when both ReLU+BN are amplified, the models show a significant decrease as the ratio values increase. In the case of Resnet-18, the models have improved or similar performance even as the ratio increases, except when both ReLU+BN are amplified, in which case performance decreases. For Resnet-34, the models have improved or similar performance as the ratio increases up to 0.8, after which it decreases; but when both ReLU+BN are amplified, the performance declines even for smaller ratios.

When amplification is applied using the step-1 approach (Fig. 2(a)), the models perform better when only ReLU layers are amplified in the case of VGG-19, while for Resnet-18 and Resnet-34 the models perform best when only one of the BN layers from each residual block is used for amplification. When amplification is done as in step-2, all models achieve higher accuracies than the baseline models for most of the ratio values, except when ReLU+BN are amplified, in which case only some of the smaller ratio values yield better models. This shows that a small ratio of amplified layers is sufficient to improve the performance of the original models.

4.2.3 Phase-3: (Effect of gradient amplification factor)

Fig. 5 shows the performance of the models as the amplification factor is varied. The best models from the phase-1 and phase-2 analyses are taken and their gradients are amplified by varying the amplification factor from 1 to 10. For VGG-19, the best model is achieved while amplifying only BN layers. For Resnet-18 and Resnet-34, the best models are achieved while amplifying only one of the BN layers in each residual unit. As the factor of amplification increases, the performance of the models declines. To generalize, we can say that when the amplification factor is more than 5, the models do not perform better and sometimes perform worse than the corresponding baseline models. The effect of the amplification factor also depends on the ratio of layers being amplified: if the ratio is close to 1, then factors less than 5 can also decrease the performance of the models.

We also perform experiments by fine-tuning the amplification factor from 1 to 3 in steps of 0.1. Fig. 6 shows the performance of these models as the factor is varied in small steps from 1 to 3. In the case of VGG-19 and Resnet-18, the models always perform better than the baseline models both during training and testing, while for Resnet-34, the model performs better until a factor of 2.7 and declines after that. In all these models, it can be observed that the best accuracy is achieved around a factor of 2, which is consistent with our analysis in the earlier phases.

Fig. 5: Performance comparison of amplified models (red) as the amplification factor is varied from 1 to 10 (horizontal axis) vs. original models (blue).
Fig. 6: Performance comparison of amplified models (red) as the amplification factor is varied in small steps from 1 to 3 (horizontal axis) vs. original models (blue).
(a) VGG-19
(b) Resnet-18
(c) Resnet-34
Fig. 7: Performance of the best models with gradient amplification over 150 epochs compared to the original models with no gradient amplification. Original training (gray) and testing (blue) accuracies, including their mean accuracies, are plotted along with amplified training (green) and testing (red) accuracies. These plots demonstrate that the models do not overfit while training with amplification.

4.3 Best models

The best performance of all the models is shown in Table I. Performance improvements can be observed in both training and testing accuracies. The rows labeled "original" show the performance of the original model with no gradient amplification, and the following row shows the performance of the corresponding model with gradient amplification. The parameter settings that achieve these best models are also used in Fig. 7. We can observe that gradient amplification increases both training and testing accuracies. Though training accuracies are very close to 100% in the original models, gradient amplification improves them further. It should be noted that Resnet models comprise residual blocks (with an extra connection to the preceding layer) which already mitigate vanishing gradients by passing the current gradients directly to the previous layers without modification through the residual connection; therefore, the observed improvement can be considered significant.

Model                            Mean/Best accuracy (%)      Improved accuracy (%)
                                 Training      Testing       Training      Testing
VGG-19 (original)                97.87         91.08         -             -
VGG-19 (with amplification)      99.764        93.35         1.9           2.27
Resnet-18 (original)             98.371        92.488        -             -
Resnet-18 (with amplification)   99.878        94.57         1.51          2.08
Resnet-34 (original)             98.444        92.716        -             -
Resnet-34 (with amplification)   99.774        94.39         1.25          1.67
TABLE I: Accuracy comparison of models with gradient amplification vs. mean accuracies of the corresponding original models across 5 runs

Fig. 7 shows the performance of each of the models for the parameter settings listed in Table I. These plots demonstrate that the models trained with gradient amplification do not suffer from overfitting. In the case of VGG-19, the best model is achieved when amplification is applied only on BN layers, and for the Resnet models, the best models are achieved when only one of the BN layers from each residual block is considered for amplification. The gradient-amplified models surpass the performance of all the original models: the accuracies they achieve not only exceed the mean accuracies across 5 runs of the original models, but also outperform the best accuracy among these 5 runs.

To analyze the impact on training time, we perform additional experiments on the original models without amplification. Even the best models out of the 5 runs do not achieve performance close to that of the corresponding amplified models. We also ran all the original models for 50 more epochs with a learning rate of 0.01 (i.e., epochs 151-200) and noticed that these models still do not reach the performance of the amplified models. The learning rate would need to be reduced further for subsequent epochs to improve the performance of the original models, so there is no direct way of comparing them. Clearly, however, our proposed method of training with amplified gradients can train deep learning models at higher learning rates to achieve better performance; the original models take more time to achieve accuracy similar to the amplified models at the same settings.

5 Conclusions & Future Work

In this work, we propose a novel gradient amplification method to dynamically increase gradients during backpropagation. We also provide a training strategy consisting of a schedule of epochs that switches between training with and without gradient amplification. Detailed experiments are performed on VGG-19, Resnet-18 and Resnet-34 models to analyze the impact of gradient amplification with different amplification parameters. We find that amplifying only a proportion of the layers is sufficient to attain the observed performance gains. It can also be observed that BN layers give the best improvement when performing amplification, followed by ReLU layers, whereas performance quickly diminishes when both ReLU+BN are used. All these experiments show that our proposed amplification method and training strategy increase the performance of the original models and achieve better accuracies even at higher learning rates. In future work, we would like to perform experiments on larger datasets with more output classes, such as CIFAR-100 and ImageNet. In our current experimental models, there were no exploding gradients; however, we would also like to explore models having such an issue and experiment with dynamically diminishing gradients, analogous to gradient amplification, to address the exploding gradient problem.

Acknowledgments

The authors would like to thank Georgia State University (GSU) for providing access to the High Performance Computing (HPC) cluster on which most of the experiments were performed. This research is supported in part by an NVIDIA Academic Hardware Grant.

References

  • [1] J. Brownlee (2019) Understand the Impact of Learning Rate on Neural Network Performance. Machine Learning Mastery. [accessed 1 June 2019].
  • [2] C. Darken, J. Chang, and J. Moody (1992) Learning rate schedules for faster stochastic gradient search. In Neural Networks for Signal Processing II, Proceedings of the 1992 IEEE Workshop, pp. 3–12.
  • [3] J. Duchi, E. Hazan, and Y. Singer (2011) Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (Jul), pp. 2121–2159.
  • [4] Georgia State University (GSU): High Performance Computing (2019). [accessed 1 June 2019].
  • [5] X. Glorot, A. Bordes, and Y. Bengio (2011) Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315–323.
  • [6] G. B. Goh, N. O. Hodas, and A. Vishnu (2017) Deep learning for computational chemistry. Journal of Computational Chemistry 38 (16), pp. 1291–1307.
  • [7] A. Graves (2013) Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
  • [8] B. Hanin (2018) Which neural net architectures give rise to exploding and vanishing gradients?. In Advances in Neural Information Processing Systems, pp. 582–591.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [10] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine 29 (6), pp. 82–97.
  • [11] S. Hochreiter, Y. Bengio, P. Frasconi, J. Schmidhuber, et al. (2001) Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
  • [12] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
  • [13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger (2016) Deep networks with stochastic depth. In European Conference on Computer Vision, pp. 646–661.
  • [14] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  • [15] A. Kamilaris and F. X. Prenafeta-Boldú (2018) Deep learning in agriculture: a survey. Computers and Electronics in Agriculture 147, pp. 70–90.
  • [16] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [17] S. Lau (2019) Learning Rate Schedules and Adaptive Learning Rate Methods for Deep Learning. Towards Data Science. [accessed 1 June 2019].
  • [18] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez (2017) A survey on deep learning in medical image analysis. Medical Image Analysis 42, pp. 60–88.
  • [19] V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814.
  • [20] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.
  • [21] T. Schaul, S. Zhang, and Y. LeCun (2013) No more pesky learning rates. In International Conference on Machine Learning, pp. 343–351.
  • [22] J. Schmidhuber (1992) Learning complex, extended sequences using the principle of history compression. Neural Computation 4 (2), pp. 234–242.
  • [23] J. Schmidhuber (2015) Deep learning in neural networks: an overview. Neural Networks 61, pp. 85–117.
  • [24] S. L. Smith, P. Kindermans, C. Ying, and Q. V. Le (2017) Don't decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489.
  • [25] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu (2019) Deep learning for sensor-based activity recognition: a survey. Pattern Recognition Letters 119, pp. 3–11.
  • [26] M. D. Zeiler (2012) ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
  • [27] L. Zhang, S. Wang, and B. Liu (2018) Deep learning for sentiment analysis: a survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (4), pp. e1253.
  • [28] Q. Zhang, L. T. Yang, Z. Chen, and P. Li (2018) A survey on deep learning for big data. Information Fusion 42, pp. 146–157.