Globally Injective ReLU Networks

06/15/2020
by Michael Puthawala, et al.

We study injective ReLU neural networks. Injectivity plays an important role in generative models, where it facilitates inference; in inverse problems with generative priors, it is a precursor to well-posedness. We establish sharp conditions for the injectivity of ReLU layers and networks, both fully connected and convolutional. We make no architectural assumptions beyond the ReLU activations, so our results apply to a very general class of neural networks. Through a layer-wise analysis we show that an expansivity factor of two is necessary for injectivity; we also show sufficiency by constructing weight matrices that guarantee injectivity. Further, we show that global injectivity with i.i.d. Gaussian matrices, a commonly used tractable model, requires considerably larger expansivity, which may seem counterintuitive. We then derive the inverse Lipschitz constants and study the approximation-theoretic properties of injective neural networks. Using arguments from differential topology, we prove that, under mild technical conditions, any Lipschitz map can be approximated by an injective neural network. This justifies the use of injective neural networks in problems that a priori do not require injectivity. Our results establish a theoretical basis for the study of nonlinear inverse and inference problems using neural networks.
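The sufficiency direction at expansivity two admits a simple constructive illustration (a minimal sketch under our own assumptions, not the paper's general condition): stacking an invertible matrix B on top of -B gives a weight matrix W of shape 2n × n, and the layer x ↦ ReLU(Wx) is injective because ReLU(Bx) − ReLU(−Bx) = Bx recovers the pre-activation exactly. The names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                                # input dimension
B = rng.standard_normal((n, n))      # a Gaussian matrix is invertible almost surely
W = np.vstack([B, -B])               # expansivity factor two: W is (2n) x n

def relu_layer(x):
    """An injective ReLU layer x -> ReLU(Wx) with W = [B; -B]."""
    return np.maximum(W @ x, 0.0)

def invert_layer(y):
    """Recover x from y = ReLU(Wx), using ReLU(Bx) - ReLU(-Bx) = Bx
    (for any scalar t, max(t, 0) - max(-t, 0) = t)."""
    top, bottom = y[:n], y[n:]
    Bx = top - bottom
    return np.linalg.solve(B, Bx)

x = rng.standard_normal(n)
y = relu_layer(x)
print(np.allclose(x, invert_layer(y)))  # True: x is uniquely recoverable
```

Running the script prints True. By contrast, the Gaussian result quoted above says that a purely random W (with no paired ±rows) needs considerably more than 2n rows before the layer is injective, which is why the tractable i.i.d. model demands larger expansivity than the constructive bound.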
