gRoMA: a Tool for Measuring Deep Neural Networks Global Robustness

01/05/2023
by Natan Levy, et al.

Deep neural networks (DNNs) are a state-of-the-art technology, capable of outstanding performance in many key tasks. However, it is challenging to integrate DNNs into safety-critical systems, such as those in the aerospace or automotive domains, due to the risk of adversarial inputs: slightly perturbed inputs that can cause the DNN to make grievous mistakes. Adversarial inputs have been shown to plague even modern DNNs, and so the risks they pose must be measured and mitigated before DNNs can be safely deployed in safety-critical systems. Here, we present a novel and scalable tool called gRoMA, which uses a statistical approach to formally measure the global categorial robustness of a DNN, i.e., the probability of randomly encountering an adversarial input for a specific output category. Our tool operates on pre-trained, black-box classification DNNs. It randomly generates input samples that belong to an output category of interest, measures the DNN's susceptibility to adversarial inputs around these samples, and then aggregates the results to infer the DNN's overall global robustness, up to some small bounded error. For evaluation purposes, we used gRoMA to measure the global robustness of the widespread DenseNet model over the CIFAR-10 dataset; our results exposed significant gaps in the robustness of the different output categories. This experiment demonstrates the scalability of the new approach and showcases its potential for allowing DNNs to be deployed within safety-critical systems.
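To make the sample-perturb-aggregate pipeline described in the abstract concrete, here is a minimal Python sketch of a statistical, black-box estimate of per-category robustness. Everything here is an illustrative assumption rather than gRoMA's actual implementation: the Keras-style `model.predict` interface, the uniform L-infinity noise of radius `epsilon`, the parameters `n_perturbations` and `delta`, and the function names themselves are all hypothetical; the error term uses a standard Hoeffding bound as one plausible way to realize the "small bounded error" the paper mentions.

```python
# Sketch only: estimate the global categorial robustness of a black-box
# classifier by sampling inputs of one category, perturbing each with
# small random noise, and aggregating the per-sample results.
# All names and parameters are illustrative assumptions, not gRoMA's API.
import math
import numpy as np

def is_locally_robust(model, x, category, epsilon=0.01, n_perturbations=100):
    """Return True if random perturbations of x (within an L-infinity ball
    of radius epsilon) all keep the classifier's prediction in `category`."""
    for _ in range(n_perturbations):
        noise = np.random.uniform(-epsilon, epsilon, size=x.shape)
        x_adv = np.clip(x + noise, 0.0, 1.0)  # keep pixels in a valid range
        # Assumes a Keras-style predict() returning class scores per batch item.
        if np.argmax(model.predict(x_adv[np.newaxis, ...])) != category:
            return False  # found an adversarial perturbation around x
    return True

def estimate_categorial_robustness(model, samples, category, delta=0.05):
    """Aggregate per-sample robustness checks into an estimate of the
    category's global robustness, with a Hoeffding-style error bound
    that holds with probability at least 1 - delta."""
    n = len(samples)
    robust = sum(is_locally_robust(model, x, category) for x in samples)
    p_hat = robust / n
    # Hoeffding: P(|p_hat - p| >= t) <= 2 * exp(-2 * n * t^2),
    # so t = sqrt(ln(2 / delta) / (2 * n)) gives a 1 - delta guarantee.
    error = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return p_hat, error
```

Running `estimate_categorial_robustness` once per output category would surface the kind of per-category robustness gaps the experiment on DenseNet/CIFAR-10 reports; the actual tool's statistical machinery is presumably more refined, and this sketch only conveys the overall structure of the approach.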


