Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

05/03/2017
by Ruediger Ehlers

We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers. The starting point of our approach is the addition of a global linear approximation of the overall network behavior to the verification problem that helps with SMT-like reasoning over the network behavior. We present a specialized verification algorithm that employs this approximation in a search process in which it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving. We also show how to infer additional conflict clauses and safe node fixtures from the results of the analysis steps performed during the search. The resulting approach is evaluated on collision avoidance and handwritten digit recognition case studies.
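To make the phase-inference idea concrete, the following is a minimal sketch, not the paper's algorithm or tool: it pushes interval bounds through a small ReLU network, and whenever a node's pre-activation bounds do not straddle zero, its piece-wise linear activation collapses to a single linear phase, much as unit propagation fixes a literal in classical SAT solving. The function infer_relu_phases and the example network are hypothetical, and plain interval arithmetic stands in for the paper's tighter global linear approximation.

```python
import numpy as np

def infer_relu_phases(weights, biases, in_lo, in_hi):
    """Propagate interval bounds layer by layer and label each ReLU node:
    'active' (pre-activation always >= 0, node behaves as the identity),
    'inactive' (always <= 0, node behaves as the constant 0), or
    'undecided' (bounds straddle zero, so both phases remain possible)."""
    lo, hi = np.asarray(in_lo, float), np.asarray(in_hi, float)
    phases = []
    for W, b in zip(weights, biases):
        pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Sound pre-activation bounds of every node over the input box.
        pre_lo = pos @ lo + neg @ hi + b
        pre_hi = pos @ hi + neg @ lo + b
        phases.append(['active' if l >= 0 else
                       'inactive' if h <= 0 else 'undecided'
                       for l, h in zip(pre_lo, pre_hi)])
        # Apply ReLU to the bounds before moving to the next layer.
        lo, hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)
    return phases

# Hypothetical 2-2-1 ReLU network, analyzed over the input box [0,1]^2.
weights = [np.array([[1.0, -1.0], [2.0, 1.0]]), np.array([[1.0, 1.0]])]
biases = [np.array([-2.0, 0.5]), np.array([0.0])]
print(infer_relu_phases(weights, biases, [0.0, 0.0], [1.0, 1.0]))
```

On this toy instance the sketch prints [['inactive', 'active'], ['active']]: every node's phase is already fixed by the bounds, so the non-linear network degenerates to a purely linear system that an SMT or ILP solver can dispatch without case splitting.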


Related research

On the space of coefficients of a Feed Forward Neural Network (09/07/2021)
We define and establish the conditions for `equivalent neural networks' ...

A DPLL(T) Framework for Verifying Deep Neural Networks (07/17/2023)
Deep Neural Networks (DNNs) have emerged as an effective approach to tac...

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks (02/03/2017)
Deep neural networks have emerged as a widely used and effective means f...

An SMT-Based Approach for Verifying Binarized Neural Networks (11/05/2020)
Deep learning has emerged as an effective approach for creating modern s...

A CDCL-style calculus for solving non-linear constraints (05/22/2019)
In this paper we propose a novel approach for checking satisfiability of...

Robustness of Neural Networks to Parameter Quantization (03/26/2019)
Quantization, a commonly used technique to reduce the memory footprint o...

Breaking the Activation Function Bottleneck through Adaptive Parameterization (05/22/2018)
Standard neural network architectures are non-linear only by virtue of a...
