Residual CNDS

08/07/2016
by Hussein A. Al-Barazanchi, et al.

Convolutional neural networks (CNNs) are now central to virtually every image classification system. One of the most investigated ways to increase the accuracy of a CNN is to increase its depth, but stacking more layers makes training more difficult as well as computationally expensive. Prior work found that adding auxiliary classifier forks after intermediate layers improves accuracy; which intermediate layers should receive a fork was addressed only recently, using a simple rule to select the positions that need an auxiliary supervision branch. This technique, known as convolutional neural networks with deep supervision (CNDS), improved classification accuracy over a plain CNN on the MIT Places dataset and on ImageNet. Separately, residual learning has recently emerged as a technique that eases the training of very deep CNNs: instead of learning unreferenced functions, layers learn residual functions with respect to their inputs. Residual learning achieved state-of-the-art results in the ImageNet 2015 and COCO competitions. In this paper, we study the effect of adding residual connections to a CNDS network. Our experimental results show an increase in accuracy over using CNDS alone.
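
To make the combination concrete, the following is a minimal sketch of how a residual block and a CNDS-style auxiliary supervision fork can be trained together, written in PyTorch. The channel counts, the placement of the auxiliary branch, and the 0.3 auxiliary loss weight are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch (not the authors' exact architecture): a residual block
# combined with a CNDS-style auxiliary classifier. Channel counts, branch
# placement, and the auxiliary loss weight are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Learns a residual F(x); the block outputs F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity shortcut: learn F(x), output F(x) + x


class AuxiliaryHead(nn.Module):
    """Deep-supervision fork attached to an intermediate layer."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        return self.fc(self.pool(x).flatten(1))


class ResidualCNDS(nn.Module):
    """Residual blocks plus an auxiliary supervision branch (CNDS-style)."""
    def __init__(self, num_classes=10, channels=64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.stage1 = ResidualBlock(channels)
        self.stage2 = ResidualBlock(channels)
        self.aux_head = AuxiliaryHead(channels, num_classes)   # fork after stage1 (assumed position)
        self.main_head = AuxiliaryHead(channels, num_classes)  # final classifier

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = self.stage1(x)
        aux_logits = self.aux_head(x)   # auxiliary supervision branch
        x = self.stage2(x)
        return self.main_head(x), aux_logits


# One training step: main loss plus a down-weighted auxiliary loss.
model = ResidualCNDS()
images = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
main_logits, aux_logits = model(images)
loss = F.cross_entropy(main_logits, labels) + 0.3 * F.cross_entropy(aux_logits, labels)
loss.backward()
```

In this kind of setup the auxiliary branch only contributes a down-weighted loss term during training, which strengthens the gradient signal reaching the intermediate layers; at inference time the auxiliary head is typically discarded and only the main classifier is used.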
