Scalable Quantitative Verification For Deep Neural Networks

02/17/2020
by   Teodora Baluta, et al.

Verifying security properties of deep neural networks (DNNs) is becoming increasingly important. This paper introduces a new quantitative verification framework for DNNs that can decide, with user-specified confidence, whether a given logical property ψ defined over the space of inputs of a given DNN holds for less than a user-specified threshold θ. We present new algorithms that are provably sound and scale to large real-world models. Our approach requires only black-box access to the models, and it certifies properties of both deterministic and non-deterministic DNNs. We implement our approach in a tool called PROVERO and apply it to the problem of certifying adversarial robustness. In this context, PROVERO provides an attack-agnostic measure of robustness for a given DNN and a test input. First, we find that this metric has a strong statistical correlation with the perturbation bounds reported by two of the most prominent white-box attack strategies today. Second, we show that PROVERO can quantitatively certify robustness with high confidence in cases where the state-of-the-art qualitative verification tool (ERAN) fails to produce conclusive results. Thus, quantitative verification scales easily to large DNNs.
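The core idea behind this kind of statistical certification can be illustrated with a minimal sketch: sample random inputs in black-box fashion, count how often the property ψ holds, and use a concentration bound to decide, with confidence 1 − δ, whether the true probability is below the threshold θ. The sketch below uses a plain Monte Carlo estimate with a Hoeffding-style sample-size bound; PROVERO's actual algorithms are more refined (sequential hypothesis tests), and the function names here are illustrative assumptions, not the tool's API.

```python
import math
import random


def quantitative_verify(has_property, theta, eps, delta, seed=0):
    """Decide, with confidence >= 1 - delta, whether the probability p that a
    randomly drawn input satisfies `has_property` lies below or above theta,
    up to an eps-wide indifference region.

    `has_property(rng)` is a black-box oracle: it draws one input using `rng`
    and returns True iff the property psi holds for that input.
    """
    rng = random.Random(seed)
    # Hoeffding's inequality: n >= ln(2/delta) / (2 * eps^2) samples suffice
    # for the empirical mean to lie within eps of p with prob. >= 1 - delta.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(1 for _ in range(n) if has_property(rng))
    p_hat = hits / n
    if p_hat <= theta - eps:
        return "YES"        # certified: p < theta with confidence 1 - delta
    if p_hat >= theta + eps:
        return "NO"         # certified: p > theta with confidence 1 - delta
    return "UNDECIDED"      # p is within eps of theta; tighten eps to resolve
```

For adversarial robustness, `has_property` would sample a random perturbation of the test input and check whether the DNN's prediction flips; the "YES" outcome then certifies that fewer than a θ fraction of perturbations are adversarial, with the stated confidence.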

Related research:

01/05/2023  gRoMA: a Tool for Measuring Deep Neural Networks Global Robustness
10/14/2021  DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks
04/04/2023  CGDTest: A Constrained Gradient Descent Algorithm for Testing Neural Networks
04/04/2023  Incremental Verification of Neural Networks
06/25/2019  Quantitative Verification of Neural Networks And its Security Applications
01/29/2023  Towards Verifying the Geometric Robustness of Large-scale Neural Networks
10/29/2021  ε-weakened Robustness of Deep Neural Networks
