Analysis of Invariance and Robustness via Invertibility of ReLU-Networks

06/25/2018
by Jens Behrmann, et al.

Studying the invertibility of deep neural networks (DNNs) provides a principled approach to better understanding the behavior of these powerful models. Although invertibility is a promising diagnostic tool, a consistent theory of the invertibility of DNNs is still lacking. We derive a theoretically motivated approach to explore the preimages of ReLU layers and the mechanisms affecting the stability of the inverse. Using the developed theory, we show numerically how this approach uncovers characteristic properties of the network.
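As a rough illustration of the preimage idea (a minimal sketch, not the authors' construction), consider a single ReLU layer y = relu(Wx + b). Units with y_i > 0 pin x to affine equalities W_i x + b_i = y_i, while units with y_i = 0 only impose inequalities W_i x + b_i <= 0, so the preimage of y is the intersection of an affine subspace with a polyhedron. The NumPy sketch below, with hypothetical weights chosen purely for illustration, recovers one point of such a preimage and verifies membership:

```python
import numpy as np

# Hypothetical 4-unit ReLU layer acting on R^2 (weights chosen for illustration).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, 0.0, -5.0])

x_true = np.array([1.0, 2.0])
y = np.maximum(W @ x_true + b, 0.0)   # forward pass: y = relu(Wx + b)

active = y > 0                        # fired units impose equalities W_i x + b_i = y_i
# One point of the preimage via least squares over the active rows;
# recovery is exact here because the active rows have full column rank.
x_hat, *_ = np.linalg.lstsq(W[active], y[active] - b[active], rcond=None)

# Membership check: equalities on active units, inequalities (<= 0) on the rest.
pre = W @ x_hat + b
in_preimage = np.allclose(pre[active], y[active]) and np.all(pre[~active] <= 1e-9)
print("x_hat:", x_hat, "lies in preimage:", in_preimage)
```

When fewer active rows are available than input dimensions, the equalities no longer determine x uniquely and the preimage becomes a higher-dimensional set, which is exactly the kind of degeneracy a stability analysis of the inverse has to account for.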


Related research

08/10/2023 · On the Optimal Expressive Power of ReLU DNNs and Its Application in Approximation with Kolmogorov Superposition Theorem
This paper is devoted to studying the optimal expressive power of ReLU d...

12/07/2020 · Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Renormalization Group
The success of deep learning in many real-world tasks has triggered an e...

05/10/2021 · ReLU Deep Neural Networks from the Hierarchical Basis Perspective
We study ReLU deep neural networks (DNNs) by investigating their connect...

04/09/2019 · Towards Analyzing Semantic Robustness of Deep Neural Networks
Despite the impressive performance of Deep Neural Networks (DNNs) on var...

09/09/2018 · Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
We explore the concept of co-design in the context of neural network ver...

02/10/2020 · Deep Gated Networks: A framework to understand training and generalisation in deep learning
Understanding the role of (stochastic) gradient descent in training and ...

05/04/2022 · Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs
This paper aims to theoretically analyze the complexity of feature trans...
