Towards the Quantification of Safety Risks in Deep Neural Networks

09/13/2020
by Peipei Xu, et al.

Safety concerns about deep neural networks (DNNs) have been raised as they are applied to critical sectors. In this paper, we define safety risks by requiring that the network's decisions align with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. To quantify a risk, we take the maximum radius of a safe norm ball, within which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of the corresponding Lipschitz metric. In addition to the known classes of adversarial, reachability, and invariant examples, we identify a new class of risk, the uncertainty example: an input that humans can classify easily but about which the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support efficient computation of the metrics. We evaluate our method on several benchmark neural networks, including ACAS Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method achieves competitive performance on safety quantification in terms of both tightness and computational efficiency. Importantly, as a generic approach, our method works with a broad class of safety risks and places no restrictions on the structure of the neural network.
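To make the reduction concrete: if the classification margin g(x) (the true-class logit minus the largest other logit) is L-Lipschitz on a norm ball around an input x0 and g(x0) > 0, then no perturbation of size up to g(x0)/L can flip the decision, so the maximum safe radius is at least g(x0)/L. The PyTorch sketch below illustrates this bound, with the local Lipschitz constant estimated by batched random sampling on the GPU; the function names, the Monte Carlo sampling scheme, and the choice of norm are illustrative assumptions, not the paper's actual derivative-free optimization algorithm.

```python
import torch

def margin(model, x, label):
    # g(x) = logit of the true class minus the largest other logit;
    # the decision at x agrees with `label` exactly when g(x) > 0.
    logits = model(x)
    true = logits[:, label]
    others = logits.clone()
    others[:, label] = float("-inf")
    return true - others.max(dim=1).values

def estimate_local_lipschitz(model, x0, label, radius, n_pairs=4096):
    # Derivative-free (sampling-based) estimate of the Lipschitz constant of
    # the margin on a ball around x0, evaluated as one tensorized GPU batch.
    # NOTE: sampling under-approximates the true constant, so the radius
    # derived from it is a heuristic estimate, not a sound certificate.
    shape = (n_pairs,) + tuple(x0.shape[1:])
    u = x0 + (torch.rand(shape, device=x0.device) * 2 - 1) * radius
    v = x0 + (torch.rand(shape, device=x0.device) * 2 - 1) * radius
    with torch.no_grad():
        gu = margin(model, u, label)
        gv = margin(model, v, label)
    dist = (u - v).flatten(1).norm(dim=1)  # L2 distances; swap norms as needed
    return ((gu - gv).abs() / dist.clamp_min(1e-12)).max().item()

def safe_radius_lower_bound(model, x0, label, radius=0.1):
    # If g is L-Lipschitz on the ball and g(x0) > 0, then no input within
    # g(x0) / L of x0 can change the network's decision.
    with torch.no_grad():
        g0 = margin(model, x0, label).item()
    L = estimate_local_lipschitz(model, x0, label, radius)
    return max(g0, 0.0) / max(L, 1e-12)
```

Because the sampling over point pairs is embarrassingly parallel, all network evaluations collapse into two batched forward passes, which is the same reason a tensor-based GPU implementation pays off for the paper's metrics.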
