i-RevNet: Deep Invertible Networks

02/20/2018
by   Jörn-Henrik Jacobsen, et al.

It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations in most commonly used network architectures. In this paper we show, via a one-to-one mapping, that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for one because the local inversion is ill-conditioned; we overcome this by providing an explicit inverse. An analysis of the i-RevNet's learned representations suggests an alternative explanation for the success of deep networks: a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet, we reconstruct linear interpolations between natural image representations.
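The explicit inverse mentioned in the abstract can be illustrated with an additive coupling layer of the kind used in RevNet-style architectures: the input is split into two halves, and each half is updated using only the other half, so the layer can be undone exactly. The sketch below is a minimal toy illustration of this idea, not the paper's exact architecture (the actual i-RevNet also uses invertible downsampling operators, and its residual function F is a stack of convolutions; the `bottleneck` function here is a hypothetical stand-in).

```python
import numpy as np

def bottleneck(x, w):
    # Toy residual function F: a fixed linear map followed by ReLU.
    # In the real model this would be a learned convolutional block.
    return np.maximum(w @ x, 0.0)

def forward(x1, x2, w):
    # Additive coupling: y2 depends on x2 only through F, so the
    # layer is exactly invertible regardless of what F computes.
    y1 = x2
    y2 = x1 + bottleneck(x2, w)
    return y1, y2

def inverse(y1, y2, w):
    # Explicit inverse: recover the inputs by subtracting F again.
    x2 = y1
    x1 = y2 - bottleneck(y1, w)
    return x1, x2

# Round-trip check: inverting the forward pass recovers the input exactly.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
x1, x2 = rng.normal(size=4), rng.normal(size=4)
y1, y2 = forward(x1, x2, w)
r1, r2 = inverse(y1, y2, w)
print(np.allclose(r1, x1) and np.allclose(r2, x2))
```

Because invertibility comes from the coupling structure rather than from inverting F itself, F never needs to be well-conditioned or even invertible, which is the point the abstract makes about avoiding the ill-conditioned local inversion.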

Related research

10/16/2018
Downsampling leads to Image Memorization in Convolutional Autoencoders
Memorization of data in deep neural networks has become a subject of sig...

03/06/2017
Building a Regular Decision Boundary with Deep Networks
In this work, we build a generic architecture of Convolutional Neural Ne...

08/05/2023
Neural Collapse in the Intermediate Hidden Layers of Classification Neural Networks
Neural Collapse (NC) gives a precise description of the representations ...

06/09/2015
Inverting Visual Representations with Convolutional Networks
Feature representations, both hand-designed and learned ones, are often ...

11/19/2015
Adjustable Bounded Rectifiers: Towards Deep Binary Representations
Binary representation is desirable for its memory efficiency, computatio...

06/01/2018
Inverting Supervised Representations with Autoregressive Neural Density Models
Understanding the nature of representations learned by supervised machin...

04/10/2019
Deep Learning Inversion of Electrical Resistivity Data
The inverse problem of electrical resistivity surveys (ERS) is difficult...
