Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the L_0 Norm

04/16/2018
by Wenjie Ruan, et al.

Deployment of deep neural networks (DNNs) in safety- or security-critical systems demands provable guarantees on their correct behaviour. One example is the robustness of image classification decisions, defined as the invariance of the classification for a given input over a small neighbourhood of images around that input. Here we focus on the L_0 norm, and study the problem of quantifying the global robustness of a trained DNN, where global robustness is defined as the expectation of the maximum safe radius over a testing dataset. We first show that the problem is NP-hard, and then propose an approach that iteratively generates lower and upper bounds on the network's robustness. The approach is anytime, i.e., it returns intermediate bounds and robustness estimates that are gradually, but strictly, improved as the computation proceeds; tensor-based, i.e., the computation is conducted over a set of inputs simultaneously, instead of one by one, to enable efficient GPU computation; and has provable guarantees, i.e., both the bounds and the robustness estimates can converge to their optimal values. Finally, we demonstrate that the proposed approach computes tight bounds in practice by applying and adapting the anytime algorithm to a set of challenging problems, including global robustness evaluation, guidance for the design of robust DNNs, competitive L_0 attacks, generation of saliency maps for model interpretability, and test generation for DNNs. We release the code of all case studies on GitHub.
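To make the two central definitions concrete, here is a minimal sketch, assuming a NumPy setup of our own (the function names and toy numbers are hypothetical, not the paper's released code). The L_0 distance counts how many input dimensions differ between an input and its perturbation, and global robustness is the expectation of the maximum safe radius (MSR) over the test set; since computing the exact MSR is NP-hard, the sketch brackets the expectation by averaging per-input lower and upper bounds such as those an anytime algorithm would emit.

    import numpy as np

    def l0_distance(x, x_adv):
        # L_0 "norm" of a perturbation: the number of coordinates
        # (e.g. pixels) in which the two inputs differ.
        return int(np.sum(x.flatten() != x_adv.flatten()))

    def global_robustness_bounds(msr_lower, msr_upper):
        # Global robustness = E[MSR(x)] over the test set. Given per-input
        # lower/upper bounds on each MSR, the means of those bounds bracket
        # the expectation from below and above.
        lo = np.asarray(msr_lower, dtype=float)
        hi = np.asarray(msr_upper, dtype=float)
        assert lo.shape == hi.shape and np.all(lo <= hi)
        return float(lo.mean()), float(hi.mean())

    # Toy usage: hypothetical MSR bounds (in pixels) for a 5-image test set.
    lo, hi = global_robustness_bounds([1, 2, 1, 3, 2], [4, 5, 3, 6, 4])
    print(f"global robustness within [{lo:.2f}, {hi:.2f}] changed pixels")

As the anytime algorithm tightens each per-input interval, the two means converge towards the true expectation, which is exactly the convergence guarantee claimed in the abstract.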

Related research

07/10/2018
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Despite the improved accuracy of deep neural networks, the discovery of ...

01/05/2023
gRoMA: a Tool for Measuring Deep Neural Networks Global Robustness
Deep neural networks (DNNs) are a state-of-the-art technology, capable o...

06/28/2019
Robustness Guarantees for Deep Neural Networks on Videos
The widespread adoption of deep learning models places demands on their ...

08/18/2023
Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees
Identifying safe areas is a key point to guarantee trust for systems tha...

09/13/2020
Towards the Quantification of Safety Risks in Deep Neural Networks
Safety concerns on the deep neural networks (DNNs) have been raised when...

04/07/2021
Adversarial Robustness Guarantees for Gaussian Processes
Gaussian processes (GPs) enable principled computation of model uncertai...

01/04/2022
On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error
Although Deep Neural Networks (DNNs) have shown incredible performance i...
