Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

02/26/2019
by Pengfei Yang, et al.

Deep neural networks (DNNs) have been shown to lack robustness: their classifications are vulnerable to small perturbations of the inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds are loose. This paper improves on a recent proposal for analyzing DNNs with the classic abstract interpretation technique by introducing a novel symbolic propagation technique. More specifically, the values of neurons are represented symbolically and propagated forward from the input layer to the output layer, on top of abstract domains. We show that our approach achieves significantly higher precision and can therefore prove more properties than the use of abstract domains alone. Moreover, we show that the bounds our approach derives for the hidden neurons, when fed to a state-of-the-art SMT-based verification tool, can improve its performance. We implement our approach in a software tool and validate it on DNNs trained on benchmark datasets such as MNIST.
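To make the idea concrete, below is a minimal sketch (not the paper's actual tool) of symbolic propagation through a fully connected ReLU network. Each neuron carries an exact linear expression over the input variables plus an interval slack; affine layers are propagated exactly on the linear part, and precision is given up only at ReLU neurons whose phase cannot be resolved from the current bounds. All names and the NumPy representation are illustrative assumptions.

```python
import numpy as np

def linear_bounds(C, c0, x_lo, x_hi):
    """Tight bounds of the linear forms C @ x + c0 over the box [x_lo, x_hi]."""
    lo = c0 + np.minimum(C * x_lo, C * x_hi).sum(axis=1)
    hi = c0 + np.maximum(C * x_lo, C * x_hi).sum(axis=1)
    return lo, hi

def propagate(weights, biases, x_lo, x_hi):
    """Output bounds for a fully connected ReLU network.

    Neuron i is tracked as C[i] @ x + c0[i] + [e_lo[i], e_hi[i]]: an exact
    linear form over the inputs plus an interval slack that absorbs the
    precision lost at unresolved ReLUs.
    """
    n = len(x_lo)
    C, c0 = np.eye(n), np.zeros(n)          # input neuron i is just x_i
    e_lo, e_hi = np.zeros(n), np.zeros(n)   # no slack at the input layer
    for k, (W, b) in enumerate(zip(weights, biases)):
        # Affine layer: exact on the linear part, interval-sound on the slack.
        C, c0 = W @ C, W @ c0 + b
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        e_lo, e_hi = Wp @ e_lo + Wn @ e_hi, Wp @ e_hi + Wn @ e_lo
        lo, hi = linear_bounds(C, c0, x_lo, x_hi)
        lo, hi = lo + e_lo, hi + e_hi
        if k == len(weights) - 1:            # no ReLU after the output layer
            return lo, hi
        for i in range(len(c0)):
            if hi[i] <= 0:                   # provably inactive: output is 0
                C[i], c0[i] = 0.0, 0.0
                e_lo[i] = e_hi[i] = 0.0
            elif lo[i] >= 0:                 # provably active: keep symbolic form
                pass
            else:                            # phase unknown: fall back to [0, hi]
                C[i], c0[i] = 0.0, 0.0
                e_lo[i], e_hi[i] = 0.0, hi[i]
```

Compared with plain interval propagation, keeping the linear forms lets cancellations between neurons (e.g., an x1 term that later meets a -x1 term) be recognized exactly, which is where the precision gain of propagating values symbolically on top of an abstract domain comes from.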


Related research

01/27/2023 · OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks
Occlusion is a prevalent and easily realizable semantic perturbation to ...

08/16/2022 · On Optimizing Back-Substitution Methods for Neural Network Verification
With the increasing application of deep learning in mission-critical sys...

04/28/2018 · Formal Security Analysis of Neural Networks using Symbolic Intervals
Due to the increasing deployment of Deep Neural Networks (DNNs) in real-...

11/30/2022 · Efficient Adversarial Input Generation via Neural Net Patching
The adversarial input generation problem has become central in establish...

09/21/2020 · NeuroDiff: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation
As neural networks make their way into safety-critical systems, where mi...

09/18/2022 · NeuCEPT: Locally Discover Neural Networks' Mechanism via Critical Neurons Identification with Precision Guarantee
Despite recent studies on understanding deep neural networks (DNNs), the...

07/12/2023 · Towards a Certified Proof Checker for Deep Neural Network Verification
Recent developments in deep neural networks (DNNs) have led to their ado...
