Comparison between layer-to-layer network training and conventional network training using Convolutional Neural Networks

Abstract: Convolutional neural networks (CNNs) are widely used across applications because of their effectiveness at extracting features from data. However, the performance of a CNN depends heavily on its architecture and training procedure. In this study, we propose a layer-to-layer training method and compare its performance with conventional end-to-end training. In the layer-to-layer approach, we treat a portion of the early layers as a student network and the later layers as a teacher network. At each training step, the student network is incrementally trained to learn from the output of the teacher network, and vice versa. We evaluate this approach on a VGG16 network without pre-trained ImageNet weights and on a plain CNN model. Our experiments show that layer-to-layer training outperforms conventional training for both models, yielding higher test-set accuracy for both the VGG16 network and the CNN. Overall, our study highlights the importance of layer-wise training in CNNs and suggests that layer-to-layer training is a promising approach for improving CNN accuracy.
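The abstract does not give implementation details for the alternating student/teacher updates. As a minimal sketch, one plausible reading is block-wise coordinate descent: split the network into an early (student) block and a later (teacher) block, and on alternating steps update only one block's weights while the other is frozen. The toy model below (a two-block linear network on synthetic regression data, with hand-derived gradients) is purely illustrative; the block names `W1`/`W2`, the data, and the alternating schedule are assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: the target is the sum of the inputs.
X = rng.normal(size=(64, 4))
y = X.sum(axis=1, keepdims=True)

# Two linear blocks: W1 plays the "student" (early layers),
# W2 the "teacher" (later layers).
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def loss_and_grads(W1, W2):
    h = X @ W1             # student (early-layer) output
    pred = h @ W2          # teacher (later-layer) output
    err = pred - y
    loss = float((err ** 2).mean())
    gW2 = h.T @ err * (2.0 / len(X))           # gradient w.r.t. teacher block
    gW1 = X.T @ (err @ W2.T) * (2.0 / len(X))  # gradient w.r.t. student block
    return loss, gW1, gW2

loss0, _, _ = loss_and_grads(W1, W2)

lr = 0.05
for step in range(200):
    _, gW1, gW2 = loss_and_grads(W1, W2)
    if step % 2 == 0:
        W1 -= lr * gW1   # update the student while the teacher is frozen
    else:
        W2 -= lr * gW2   # update the teacher while the student is frozen

loss_final, _, _ = loss_and_grads(W1, W2)
print(loss0, loss_final)  # training loss before vs. after alternating updates
```

Under this reading, the contrast with conventional training is simply whether both blocks receive gradient updates at every step or the blocks take turns; the full method in the paper additionally uses the teacher's output as a learning target for the student.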


Related research

- A Greedy Algorithm for Quantizing Neural Networks (10/29/2020): We propose a new computationally efficient method for quantizing the wei...
- Investigating Learning in Deep Neural Networks using Layer-Wise Weight Change (11/13/2020): Understanding the per-layer learning dynamics of deep neural networks is...
- Knowledge Projection for Deep Neural Networks (10/26/2017): While deeper and wider neural networks are actively pushing the performa...
- Sequential training algorithm for neural networks (05/17/2019): A sequential training method for large-scale feedforward neural networks...
- Decoupled Greedy Learning of CNNs (01/23/2019): A commonly cited inefficiency of neural network training by back-propaga...
- Inter-layer Collision Networks (11/19/2019): Deeper neural networks are hard to train. Inspired by the elastic collis...
