Deep Residual Networks with Exponential Linear Unit

04/14/2016
by Anish Shah et al.

Very deep convolutional neural networks introduced new problems such as vanishing gradients and degradation. The recent successful contributions towards solving these problems are Residual and Highway Networks. These networks introduce skip connections that allow information from the input, or features learned in earlier layers, to flow more easily into the deeper layers. These very deep models have led to a considerable decrease in test errors on benchmarks like ImageNet and COCO. In this paper, we propose the use of the exponential linear unit instead of the combination of ReLU and Batch Normalization in Residual Networks. We show that this not only speeds up learning in Residual Networks but also improves accuracy as the depth increases. It reduces the test error on almost all datasets, such as CIFAR-10 and CIFAR-100.
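As a rough illustration of the proposed change (a sketch under assumptions, not the authors' released code), the following PyTorch snippet shows a residual block in which ELU is the only nonlinearity and the BatchNorm layers are dropped; the class name, channel count, and layer layout are assumptions made for this example.

# Minimal sketch (assumed, not the paper's code): a residual block using ELU
# in place of the usual ReLU + BatchNorm pairing.
import torch
import torch.nn as nn

class ELUResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Two 3x3 convolutions; no BatchNorm, ELU is the only nonlinearity.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.elu = nn.ELU(inplace=True)

    def forward(self, x):
        out = self.elu(self.conv1(x))
        out = self.conv2(out)
        # Skip connection: information from earlier layers flows directly
        # into deeper layers before the final activation.
        return self.elu(out + x)

if __name__ == "__main__":
    block = ELUResidualBlock(channels=16)
    y = block(torch.randn(2, 16, 32, 32))
    print(y.shape)  # torch.Size([2, 16, 32, 32])

The skip connection adds the input back before the final ELU, matching the idea from the abstract that information from the input or earlier layers should flow unimpeded into deeper layers.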

Related research

11/23/2015 · Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learnin...

07/16/2019 · Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems
Batch-normalization (BN) layers are thought to be an integrally importan...

12/10/2015 · Deep Residual Learning for Image Recognition
Deeper neural networks are more difficult to train. We present a residua...

05/21/2017 · Shake-Shake regularization
The method introduced in this paper aims at helping deep learning practi...

06/01/2018 · Tandem Blocks in Deep Convolutional Neural Networks
Due to the success of residual networks (resnets) and related architectu...

10/21/2019 · Universal flow approximation with deep residual networks
Residual networks (ResNets) are a deep learning architecture with the re...
