Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees

08/18/2023
by Luca Marzari, et al.

Identifying safe areas is key to guaranteeing trust in systems based on Deep Neural Networks (DNNs). To this end, we introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all regions of the property's input domain that are safe, i.e., where the property does hold. Because the problem is #P-hard, we propose an efficient approximation method called ε-ProVe. Our approach exploits a controllable underestimation of the output reachable sets, obtained via statistical prediction of tolerance limits, and provides a tight lower estimate of the safe areas with provable probabilistic guarantees. Our empirical evaluation on standard benchmarks shows the scalability and effectiveness of our method, offering valuable insights for this new type of DNN verification.
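To give a flavor of the statistical tolerance-limit idea the abstract mentions, here is a minimal sketch (not the paper's actual algorithm): sample the network's outputs over an input region and use a distribution-free, one-sided tolerance limit — the sample maximum of n i.i.d. draws upper-bounds a fraction R of the true output distribution with confidence 1 - R^n. The `dnn` function and the input region below are hypothetical placeholders.

```python
import random

def dnn(x):
    # toy stand-in for a trained network (hypothetical)
    return 0.5 * x[0] - 0.3 * x[1]

def tolerance_upper_limit(region, n=1000, seed=0):
    """Distribution-free one-sided tolerance limit: the maximum of n
    i.i.d. samples upper-bounds a fraction R of the output distribution
    with confidence 1 - R**n (order-statistics result)."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        x = [rng.uniform(lo, hi) for (lo, hi) in region]
        outs.append(dnn(x))
    return max(outs)

def confidence(R, n):
    # probability that the sample max covers at least a fraction R
    return 1 - R ** n

region = [(0.0, 1.0), (0.0, 1.0)]  # hypothetical input box
ub = tolerance_upper_limit(region)
print(ub, confidence(0.99, 1000))
```

A safety property of the form "output stays below a threshold" can then be checked against `ub`: if the tolerance limit satisfies the property, the region is declared safe for at least a fraction R of inputs, with the stated confidence.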


Related research:

- PAC Confidence Predictions for Deep Neural Network Classifiers (11/02/2020)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks (01/17/2023)
- Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for L0 Norm (04/16/2018)
- Reachability Analysis of Deep Neural Networks with Provable Guarantees (05/06/2018)
- Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks (10/25/2022)
- XAI Model for Accurate and Interpretable Landslide Susceptibility (01/18/2022)
- A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation (05/26/2023)
