Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations

01/19/2023
by   Maximilian Schlüter, et al.

In this paper we present an algebraic approach to the precise and global verification and explanation of Rectifier Neural Networks, a subclass of Piece-wise Linear Neural Networks (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks, which allows the construction of semantically equivalent Typed Affine Decision Structures (TADS). Due to their deterministic and sequential nature, TADS can, like decision trees, be considered white-box models and therefore precise solutions to the model and outcome explanation problem. TADS form a linear algebra, which makes it possible to elegantly compare Rectifier Networks for equivalence or similarity, in both cases with precise diagnostic information in case of failure, and to characterize their classification potential by precisely describing the set of inputs that receive a specific classification or on which two network-based classifiers differ. All phenomena are illustrated along a detailed discussion of a minimal, illustrative example: the continuous XOR function.
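To make the construction concrete, below is a minimal Python/NumPy sketch of how symbolic execution turns a ReLU network into an affine decision structure: each ReLU induces a case split on the sign of its pre-activation, and every path ends in a leaf that is an affine function of the input. The 2-2-1 network, its weights, and all function names here are illustrative assumptions, not the paper's actual TADS construction or the continuous-XOR network discussed by the authors.

```python
import numpy as np

# Hypothetical 2-2-1 ReLU network; weights and names are illustrative
# assumptions, not taken from the paper.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
c1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, -2.0]])
c2 = np.array([0.0])

def net(x):
    # Concrete execution: hidden ReLU layer followed by an affine output.
    return W2 @ np.maximum(W1 @ x + c1, 0.0) + c2

def symbolic(A, b, neurons):
    # Symbolic execution of the hidden layer: the input x stays symbolic,
    # and the hidden pre-activations are the affine map x -> A @ x + b.
    if not neurons:
        # All ReLUs resolved: the network is affine on this input region.
        return {"leaf": (W2 @ A, W2 @ b + c2)}
    i, rest = neurons[0], neurons[1:]
    cond = (A[i].copy(), b[i])       # sign of neuron i's pre-activation
    pos = symbolic(A, b, rest)       # pre-activation >= 0: ReLU is identity
    A0, b0 = A.copy(), b.copy()
    A0[i], b0[i] = 0.0, 0.0          # pre-activation < 0: ReLU clamps to zero
    neg = symbolic(A0, b0, rest)
    return {"cond": cond, "pos": pos, "neg": neg}

def evaluate(tree, x):
    # Follow the decision structure, then apply the affine leaf.
    while "cond" in tree:
        a, b = tree["cond"]
        tree = tree["pos"] if a @ x + b >= 0 else tree["neg"]
    A, b = tree["leaf"]
    return A @ x + b

tree = symbolic(W1, c1, [0, 1])
for x in np.random.default_rng(0).uniform(0, 1, size=(5, 2)):
    assert np.allclose(net(x), evaluate(tree, x))
```

On each of the four regions carved out by the two hidden neurons, the leaf records the exact affine map the network computes there; this is what lets questions such as equivalence or difference of two networks be reduced, region by region, to comparisons of affine functions.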
