Validation of ReLU nets with tropical polyhedra

07/30/2021
by Eric Goubault, et al.

This paper studies the problem of range analysis for feedforward neural networks, a basic primitive for applications such as robustness analysis of neural networks, compliance with specifications, and reachability analysis of neural-network feedback systems. Our approach focuses on ReLU (rectified linear unit) feedforward neural nets, which present specific difficulties: approaches that exploit derivatives do not apply in general, the number of patterns of neuron activations can be quite large even for small networks, and convex approximations are generally too coarse. In this paper, we employ set-based methods and abstract interpretation, which have been very successful in coping with similar difficulties in classical program verification. We present an approach that abstracts ReLU feedforward neural networks using tropical polyhedra. We show that tropical polyhedra can efficiently abstract the ReLU activation function, while being able to control the loss of precision due to linear computations. We show how the connection between ReLU networks and tropical rational functions can provide approaches for range analysis of ReLU neural networks.
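To make the range-analysis problem concrete: ReLU itself is a tropical (max-plus) operation, since relu(x) = max(x, 0) is a tropical sum of x and the tropical unit 0. The sketch below is not the paper's tropical-polyhedra abstraction; it is a much coarser baseline, plain interval (box) bound propagation through a small hypothetical ReLU network, shown only to illustrate the task of bounding a network's output range. The weights and layer sizes are made up for the example.

```python
import numpy as np

# ReLU is exactly a max-plus operation: relu(x) = max(x, 0).
def relu(x):
    return np.maximum(x, 0.0)

# Sound interval bounds for an affine layer W @ x + b, given
# componentwise input bounds lo <= x <= hi.
def affine_bounds(W, b, lo, hi):
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# ReLU maps an interval to an interval exactly (it is monotone),
# so this abstraction step loses no precision on its own.
def relu_bounds(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-2-1 ReLU network, inputs in the box [-1, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
lo, hi = affine_bounds(W2, b2, lo, hi)
print(lo, hi)  # sound (but coarse) bounds on the network output
```

Box abstractions like this ignore correlations between neurons, which is exactly the coarseness the paper's tropical polyhedra are designed to reduce while still representing ReLU exactly.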

Related research

- 07/12/2020: Abstract Universal Approximation for Neural Networks
- 12/03/2022: Probabilistic Verification of ReLU Neural Networks via Characteristic Functions
- 09/26/2018: Deep Neural Networks for Estimation and Inference: Application to Causal Effects and Other Semiparametric Estimands
- 05/22/2023: DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation
- 03/05/2021: Precise Multi-Neuron Abstractions for Neural Network Certification
- 07/21/2021: Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations
- 10/03/2017: Training Feedforward Neural Networks with Standard Logistic Activations is Feasible
