Efficient Formal Safety Analysis of Neural Networks

09/19/2018
by Shiqi Wang et al.

Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal, as shown by the recent Tesla autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network violates the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and the ones that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust neural networks.
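The central idea of bounding a network's outputs over a whole input range can be illustrated, in a much simplified form, by naive interval bound propagation through a ReLU network. The sketch below is only an assumption-laden illustration of that simpler baseline, not the authors' algorithm (which computes substantially tighter bounds); the function name and toy weights are hypothetical.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x_lo, x_hi):
    """Propagate an input box [x_lo, x_hi] through a fully connected ReLU
    network and return element-wise lower/upper bounds on the output.
    The bounds are sound but generally loose (the naive baseline)."""
    lo, hi = np.asarray(x_lo, dtype=float), np.asarray(x_hi, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Positive weights pull the lower bound from lo, negative from hi.
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:  # ReLU on all hidden layers
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Toy 2-2-1 network and an L_inf ball of radius 0.1 around x0 (hypothetical values).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -0.5])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
x0, eps = np.array([0.3, 0.7]), 0.1
lo, hi = interval_bound_propagation([W1, W2], [b1, b2], x0 - eps, x0 + eps)
print("output bounds:", lo, hi)  # a property is proved if, e.g., hi stays below a threshold
```

If the computed output interval already satisfies the safety property, the property is proved for every input in the box; if not, the interval may simply be too loose, which is why tighter symbolic bounds and refinement, as pursued in the paper, matter in practice.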


Related research:
- Quantitative Verification of Neural Networks And its Security Applications (06/25/2019)
- Finding Input Characterizations for Output Properties in ReLU Neural Networks (03/09/2020)
- Efficient Adversarial Input Generation via Neural Net Patching (11/30/2022)
- FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks (12/03/2019)
- Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (03/26/2022)
- Formal Security Analysis of Neural Networks using Symbolic Intervals (04/28/2018)
- ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks (07/17/2019)
