Model-Agnostic Reachability Analysis on Deep Neural Networks

04/03/2023
by   Chi Zhang, et al.

Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods impose specific requirements when working on Deep Neural Networks (DNNs): they either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or networks with specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network's internal structure, such as its layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input and generating ground-truth adversarial examples. We also empirically demonstrate DeepAgn's superior capability and efficiency, compared with other state-of-the-art verification approaches, in handling a broader class of deep neural networks, including both FNNs and RNNs with very deep layers and millions of neurons.
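The abstract does not spell out the optimisation scheme, but the core idea of reachability analysis for a black-box Lipschitz-continuous function can be sketched as a Lipschitz branch-and-bound: each input subregion of width w, evaluated at its midpoint m, admits the certified lower bound f(m) − L·w/2, and repeatedly refining the subregion with the smallest bound converges to the true minimum output. The sketch below (function name `lipschitz_min` and all parameters are illustrative, not from the paper) shows this for a 1-D input region; the output range [min, max] over the region is the reachable set, and the same routine applied to −f bounds the maximum.

```python
import heapq

def lipschitz_min(f, a, b, L, eps=1e-3, max_iter=100000):
    """Certified global minimum of a black-box L-Lipschitz f on [a, b].

    Illustrative sketch only: each interval [lo, hi] with midpoint m has
    the guaranteed lower bound f(m) - L*(hi - lo)/2, since f cannot drop
    faster than slope L away from m. Refining the interval with the
    smallest bound shrinks the gap between the best value seen (an upper
    bound on the minimum) and the smallest certified lower bound.
    Returns (upper_bound, lower_bound) with upper - lower <= eps.
    """
    m = 0.5 * (a + b)
    best = f(m)  # best (smallest) value observed so far: an upper bound
    heap = [(best - L * (b - a) / 2, a, b)]
    for _ in range(max_iter):
        lb, lo, hi = heapq.heappop(heap)
        if best - lb <= eps:  # certified: true min lies in [lb, best]
            return best, lb
        mid = 0.5 * (lo + hi)
        for sub_lo, sub_hi in ((lo, mid), (mid, hi)):
            c = 0.5 * (sub_lo + sub_hi)
            fc = f(c)
            best = min(best, fc)
            heapq.heappush(heap, (fc - L * (sub_hi - sub_lo) / 2, sub_lo, sub_hi))
    return best, heap[0][0]  # budget exhausted: return current bounds
```

Because only function evaluations are used, this style of analysis needs no access to layers or parameters, which is what makes the approach model-agnostic; the practical cost is that convergence depends on a known (or estimated) Lipschitz constant and on input dimension.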


