On Optimizing Back-Substitution Methods for Neural Network Verification

08/16/2022
by Tom Zelazny, et al.

With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behavior of neural networks. Many approaches for verifying neural networks have been proposed recently, but they generally suffer from limited scalability or insufficient accuracy. A key component of many state-of-the-art verification schemes is computing lower and upper bounds on the values that the network's neurons can take for a specific input domain – the tighter these bounds, the more likely verification is to succeed. Many common algorithms for computing these bounds are variations of the symbolic-bound propagation method; and among these, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present an approach for making back-substitution produce tighter bounds. To achieve this, we formulate, and then minimize, the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques with only minor modifications. We implement our approach as a proof-of-concept tool, and present favorable results compared to state-of-the-art verifiers that perform back-substitution.
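To make the abstract's terminology concrete, here is a minimal sketch of symbolic-bound propagation with a single back-substitution step, for a toy two-layer ReLU network. This is an illustration only, not the paper's method: the function names are invented, and the particular ReLU relaxation shown (a DeepPoly-style relaxation with a zero-intercept lower bound) is one common choice among several. Back-substitution appears where the output layer's bound expression is rewritten in terms of the previous layer's pre-activations, and then in terms of the input, before being concretized over the input box.

```python
import numpy as np

def concretize(coeffs, const, l, u):
    # Tightest interval of the affine map coeffs @ x + const over the box l <= x <= u:
    # positive coefficients pick up the matching box endpoint, negative ones the opposite.
    pos, neg = np.maximum(coeffs, 0.0), np.minimum(coeffs, 0.0)
    return pos @ l + neg @ u + const, pos @ u + neg @ l + const

def relu_relaxation(l, u):
    # Per-neuron linear bounds  lw*z <= ReLU(z) <= uw*z + ub  on [l, u].
    # Stable neurons are exact; crossing neurons (l < 0 < u) are over-approximated,
    # which is the imprecision that tighter back-substitution tries to reduce.
    crossing = (l < 0) & (u > 0)
    denom = np.where(crossing, u - l, 1.0)  # avoid divide-by-zero off the crossing case
    lw = np.where(l >= 0, 1.0, np.where(crossing, (u > -l).astype(float), 0.0))
    uw = np.where(l >= 0, 1.0, np.where(crossing, u / denom, 0.0))
    ub = np.where(crossing, -l * u / denom, 0.0)
    return lw, uw, ub

def output_bounds(W1, b1, W2, b2, l, u):
    # Network: y = W2 @ ReLU(W1 @ x + b1) + b2, with input box l <= x <= u.
    zl, zu = concretize(W1, b1, l, u)          # pre-activation bounds for layer 1
    lw, uw, ub = relu_relaxation(zl, zu)
    # Back-substitute through the ReLU: where W2's coefficient is positive use the
    # upper relaxation (for an upper bound) and the lower one where it is negative,
    # giving an affine expression in the pre-activations z.
    W2p, W2n = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    Au, cu = W2p * uw + W2n * lw, W2p @ ub + b2   # upper-bound expression in z
    Al, cl = W2p * lw + W2n * uw, W2n @ ub + b2   # lower-bound expression in z
    # Substitute z = W1 @ x + b1 and concretize over the input box.
    yl, _ = concretize(Al @ W1, Al @ b1 + cl, l, u)
    _, yu = concretize(Au @ W1, Au @ b1 + cu, l, u)
    return yl, yu

# Usage: y = ReLU(x1 + x2) - ReLU(x1 - x2) over [-1, 1]^2.
W1, b1 = np.array([[1., 1.], [1., -1.]]), np.zeros(2)
W2, b2 = np.array([[1., -1.]]), np.zeros(1)
yl, yu = output_bounds(W1, b1, W2, b2, np.array([-1., -1.]), np.array([1., 1.]))
print(yl, yu)  # sound bounds on y; here [-2, 2]
```

State-of-the-art tools back-substitute through many layers rather than one, keeping bounds symbolic as long as possible; the abstract's point is that the relaxation applied at each crossing neuron during this process introduces quantifiable error that can be minimized.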

