Residual Connections Encourage Iterative Inference

10/13/2017
by Stanisław Jastrzębski et al.

Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of the loss during the feedforward phase. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers, while higher layers perform iterative refinement of features. Finally, we observe that naively sharing residual layers leads to representation explosion and hurts generalization performance, and we show that simple existing strategies can help alleviate this problem.
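To first order, a residual update h_{i+1} = h_i + F_i(h_i) changes the loss by roughly F_i(h_i)^T dL/dh_i, so a block reduces the loss whenever its residual output points along the negative gradient of the loss with respect to its input; this is the sense in which the feedforward pass performs iterative refinement. The sketch below is not the authors' code: the ResBlock, the linear classifier head, and all sizes are illustrative assumptions. It measures the alignment in question as the cosine similarity between a block's residual update F(h) and -dL/dh.

# Minimal sketch (illustrative, not the paper's implementation): measure how well a
# residual block's update F(h) aligns with the negative gradient of the loss w.r.t. h.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    # Hypothetical two-layer residual block: output = h + body(h).
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h):
        return h + self.body(h)

dim, batch, n_classes = 64, 32, 10
block = ResBlock(dim)
classifier = nn.Linear(dim, n_classes)   # hypothetical task head
h = torch.randn(batch, dim, requires_grad=True)
y = torch.randint(0, n_classes, (batch,))

# Loss evaluated at the block's input representation h.
loss = F.cross_entropy(classifier(h), y)
grad_h, = torch.autograd.grad(loss, h)

# Residual update produced by the block, and its cosine with the negative gradient.
update = block.body(h).detach()
cos = F.cosine_similarity(update.flatten(1), -grad_h.flatten(1), dim=1).mean()
print(f"mean cosine(F(h), -dL/dh) = {cos.item():.3f}")

A positive average cosine similarity for a trained block indicates refinement-like behavior; tracking such a per-block statistic across depth is the kind of empirical analysis the abstract refers to.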
