Training Deeper Convolutional Networks with Deep Supervision

05/11/2015
by Liwei Wang, et al.

One of the most promising ways to improve the performance of deep convolutional neural networks is to increase the number of convolutional layers. However, adding layers makes training more difficult and computationally expensive. To train deeper networks, we propose adding auxiliary supervision branches after certain intermediate layers during training. We formulate a simple rule of thumb to determine where these branches should be added. The resulting deeply supervised structure makes training much easier and also produces better classification results on ImageNet and the recently released, larger MIT Places dataset.
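The abstract describes combining the final classifier's loss with losses from auxiliary branches attached to intermediate layers. A minimal sketch of such a combined objective is shown below; the function name, the per-branch weight `alpha`, and the optional decay factor are illustrative assumptions, not the authors' exact formulation or placement rule.

```python
# Hypothetical sketch of a deeply supervised training objective.
# Auxiliary classifiers after intermediate layers each contribute a
# weighted loss term on top of the main (final-layer) loss.

def deeply_supervised_loss(main_loss, aux_losses, alpha=0.3, decay=1.0):
    """Combine the final-layer loss with weighted auxiliary losses.

    main_loss  : loss from the network's final classifier
    aux_losses : losses from auxiliary branches (earliest branch first)
    alpha      : weight given to each auxiliary term (assumed value)
    decay      : optional annealing factor applied to auxiliary terms
    """
    return main_loss + sum(alpha * decay * l for l in aux_losses)

# Example: final loss 0.9, two auxiliary branches with losses 1.4 and 1.1
total = deeply_supervised_loss(0.9, [1.4, 1.1], alpha=0.3)
print(round(total, 2))  # 0.9 + 0.3 * (1.4 + 1.1) = 1.65
```

During backpropagation, the auxiliary terms inject gradient signal directly into the intermediate layers they follow, which is what eases the training of deeper stacks.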


