Understanding and Mitigating Exploding Inverses in Invertible Neural Networks

06/16/2020
by Jens Behrmann, et al.

Invertible neural networks (INNs) have been used to design generative models, implement memory-saving gradient computation, and solve inverse problems. In this work, we show that commonly used INN architectures suffer from exploding inverses and are thus prone to becoming numerically non-invertible. Across a wide range of INN use cases, we reveal failures that include the non-applicability of the change-of-variables formula on in- and out-of-distribution (OOD) data, incorrect gradients for memory-saving backprop, and the inability to sample from normalizing flow models. We further derive bi-Lipschitz properties of the atomic building blocks of common architectures. These insights into the stability of INNs then suggest remedies for these failures. For tasks where local invertibility is sufficient, such as memory-saving backprop, we propose a flexible and efficient regularizer. For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks.

Related Research

10/08/2018 · Algorithmic Aspects of Inverse Problems Using Generative Models
The traditional approach of hand-crafting priors (such as sparsity) for ...

05/13/2021 · Provably Convergent Algorithms for Solving Inverse Problems Using Generative Models
The traditional approach of hand-crafting priors (such as sparsity) for ...

01/06/2020 · Using CEF Digital Service Infrastructures in the Smart4Health Project for the Exchange of Electronic Health Records
The Smart4Health (S4H) software application will empower EU citizens to ...

07/18/2019 · MintNet: Building Invertible Neural Networks with Masked Convolutions
We propose a new way of constructing invertible neural networks by combi...

11/19/2019 · KISS: Keeping It Simple for Scene Text Recognition
Over the past few years, several new methods for scene text recognition ...

11/30/2020 · General Invertible Transformations for Flow-based Generative Modeling
In this paper, we present a new class of invertible transformations. We ...

10/05/2020 · Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Neural networks (NNs) whose subnetworks implement reusable functions are...

Code Repositories

INN-exploding-inverses

Code for "Understanding and Mitigating Exploding Inverses in Invertible Neural Networks" (http://arxiv.org/abs/2006.09347).

