A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation

05/26/2023
by Zhiyi Xue, et al.

The robustness of deep neural networks (DNNs) is crucial to the reliability and security of their hosting systems. Formal verification has proved effective at providing provable robustness guarantees. To improve its scalability, over-approximating the non-linear activation functions in DNNs by linear constraints has been widely adopted, transforming the verification problem into an efficiently solvable linear programming problem. Much effort has been devoted to defining the so-called tightest approximations in order to reduce the overestimation that over-approximation introduces. In this paper, we study existing approaches and identify a dominant factor in defining tight approximations: the approximation domain of the activation function. We find that tight approximations defined on approximation domains may not be as tight as those defined on the functions' actual domains, yet existing approaches all rely solely on approximation domains. Based on this observation, we propose a novel dual-approximation approach to tighten over-approximations, leveraging an activation function's underestimated domain to define tight approximation bounds. We implement our approach in a tool called DualApp, with two complementary algorithms based on Monte Carlo simulation and gradient descent, respectively. We evaluate it on a comprehensive benchmark of DNNs with different architectures. Our experimental results show that DualApp significantly outperforms the state-of-the-art approaches, with 100% improvement on the verified robustness ratio and 10.64% on the certified lower bound.
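To make the core idea concrete, here is a minimal Python sketch, not the authors' DualApp implementation; the tiny 2-2-1 network, its weights, and all helper names are hypothetical. It propagates an input box through a ReLU layer with sound interval arithmetic to obtain an over-approximated pre-activation domain, under-approximates the same domain by Monte Carlo sampling, and then defines chord/tangent linear bounds for a sigmoid neuron on each domain:

```python
# Minimal illustration only -- NOT the authors' DualApp implementation.
# The 2-2-1 network, its weights, and all helper names are hypothetical.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interval_affine(lo, hi, W, b):
    """Sound interval propagation through y = x @ W + b."""
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    return lo @ Wp + hi @ Wn + b, hi @ Wp + lo @ Wn + b

def chord_tangent_bounds(l, u):
    """Linear over-approximation of sigmoid on [l, u] with l >= 0
    (a concave region): the chord is a lower bound, the midpoint
    tangent an upper bound. Real verifiers case-split on curvature."""
    assert l >= 0.0, "sketch handles the concave region only"
    k = (sigmoid(u) - sigmoid(l)) / (u - l)       # chord slope
    m = 0.5 * (l + u)
    km = sigmoid(m) * (1.0 - sigmoid(m))          # tangent slope at midpoint
    lower = (k, sigmoid(l) - k * l)               # (slope, intercept) pairs
    upper = (km, sigmoid(m) - km * m)
    return lower, upper

# Tiny 2-2-1 ReLU network and an L_inf input box [0, 1]^2.
W1 = np.array([[1.0, -1.0], [1.0, 1.0]]); b1 = np.zeros(2)
W2 = np.array([1.0, 1.0]);                b2 = 1.0
x_lo, x_hi = np.zeros(2), np.ones(2)

# (1) Over-approximated domain via layer-wise interval arithmetic.
h_lo, h_hi = interval_affine(x_lo, x_hi, W1, b1)
h_lo, h_hi = np.maximum(h_lo, 0.0), np.maximum(h_hi, 0.0)   # ReLU
z_lo = h_lo @ W2 + b2   # W2 is non-negative, so this is sound here
z_hi = h_hi @ W2 + b2

# (2) Under-approximated domain via Monte Carlo sampling.
rng = np.random.default_rng(0)
xs = rng.uniform(x_lo, x_hi, size=(100_000, 2))
zs = np.maximum(xs @ W1 + b1, 0.0) @ W2 + b2
s_lo, s_hi = zs.min(), zs.max()

print(f"over-approximated  domain: [{z_lo:.3f}, {z_hi:.3f}]")   # [1, 4]
print(f"under-approximated domain: [{s_lo:.3f}, {s_hi:.3f}]")   # ~[1, 3]
print("bounds on over-approx  domain:", chord_tangent_bounds(z_lo, z_hi))
print("bounds on under-approx domain:", chord_tangent_bounds(s_lo, s_hi))
```

On this toy network, interval arithmetic loses the dependency between the two hidden neurons and yields the domain [1, 4], while sampling recovers roughly [1, 3]: linear bounds anchored to the smaller, more realistic domain are visibly tighter. Note that bounds defined on a sampled under-approximation alone are not sound in general; the paper's contribution is using the underestimated domain (computed by Monte Carlo simulation or gradient descent) to guide tight approximations while preserving the soundness of the overall verification.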


Related research

11/21/2022
DualApp: Tight Over-Approximation for Neural Network Robustness Verification via Under-Approximation
The robustness of neural networks is fundamental to the hosting system's...

11/13/2022
Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation
The robustness of neural network classifiers is becoming important in th...

01/31/2022
LinSyn: Synthesizing Tight Linear Bounds for Arbitrary Neural Network Activation Functions
The most scalable approaches to certifying neural network robustness dep...

08/21/2022
Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks
The robustness of deep neural networks is crucial to modern AI-enabled s...

05/10/2023
DNN Verification, Reachability, and the Exponential Function Problem
Deep neural networks (DNNs) are increasingly being deployed to perform s...

08/18/2023
Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees
Identifying safe areas is a key point to guarantee trust for systems tha...

11/21/2022
BBReach: Tight and Scalable Black-Box Reachability Analysis of Deep Reinforcement Learning Systems
Reachability analysis is a promising technique to automatically prove or...
