DualApp: Tight Over-Approximation for Neural Network Robustness Verification via Under-Approximation

11/21/2022
by Yiting Wu, et al.

The robustness of neural networks is fundamental to the reliability and security of the systems that host them. Formal verification has proven effective in providing provable robustness guarantees. To improve verification scalability, a widely adopted strategy is to over-approximate the non-linear activation functions in neural networks by linear constraints, which transforms the verification problem into an efficiently solvable linear programming problem. Because over-approximation inevitably introduces overestimation, many efforts have been devoted to defining the tightest possible approximations. Recent studies have shown, however, that the existing so-called tightest approximations can each be superior to the others on different networks, i.e., none is universally the tightest. In this paper we identify and report a crucial factor in defining tight approximations, namely the approximation domains of the activation functions. We observe that existing approaches rely only on overestimated domains, and an approximation that is tight on an overestimated domain is not necessarily tight on the actual domain. We propose a novel under-approximation-guided approach, called dual-approximation, for defining tight over-approximations, together with two complementary under-approximation algorithms based on sampling and gradient descent. The overestimated domain guarantees soundness, while the underestimated one guides tightness. We implement our approach in a tool called DualApp and extensively evaluate it on a comprehensive benchmark of 84 collected and trained neural networks with different architectures. The experimental results show that DualApp outperforms state-of-the-art approximation-based approaches, improving the verification results by up to 71.22%.
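To make the dual-approximation idea concrete, the sketch below illustrates it on a toy sigmoid network. It is a minimal Python/NumPy illustration under stated assumptions, not the authors' DualApp implementation: the network shapes, sample counts, and names such as interval_affine and sound_upper_line are purely illustrative, and the sampling-based under-approximation stands in for the paper's sampling and gradient-descent algorithms. It shows an overestimated activation domain computed by interval arithmetic (used for soundness) and an underestimated domain computed by input sampling (used to choose where the linear bound should be tight).

```python
# Minimal sketch of under-approximation-guided over-approximation (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def interval_affine(W, b, low, high):
    """Sound interval propagation through an affine layer z = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ low + W_neg @ high + b, W_pos @ high + W_neg @ low + b

def sound_upper_line(l_over, u_over, d, n_grid=4001):
    """Tangent to sigmoid at d (a point from the underestimated domain), shifted
    upward by its maximum violation on [l_over, u_over] so it stays a sound upper
    bound on the whole overestimated domain (checked numerically on a dense grid;
    a real verifier would use an analytic soundness argument instead)."""
    k = sigmoid_grad(d)
    c = sigmoid(d) - k * d
    zs = np.linspace(l_over, u_over, n_grid)
    c += max(0.0, np.max(sigmoid(zs) - (k * zs + c)))
    return k, c

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # toy 4-8-3 sigmoid network
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
x0, eps = rng.normal(size=4), 0.5                      # input box [x0 - eps, x0 + eps]
x_low, x_high = x0 - eps, x0 + eps

# Overestimated pre-activation domain of layer 2: sound but loose, because
# interval arithmetic ignores correlations between hidden neurons.
l1, u1 = interval_affine(W1, b1, x_low, x_high)
a_low, a_high = sigmoid(l1), sigmoid(u1)               # sigmoid is monotone
l2_over, u2_over = interval_affine(W2, b2, a_low, a_high)

# Underestimated pre-activation domain from input sampling: unsound on its own,
# but close to the values the neuron actually takes on the input box.
xs = rng.uniform(x_low, x_high, size=(5000, 4))
z2 = sigmoid(xs @ W1.T + b1) @ W2.T + b2
l2_under, u2_under = z2.min(axis=0), z2.max(axis=0)

# Upper linear bounds for neuron 0 of layer 2: tangent point taken from the
# overestimated vs. the underestimated domain. Both bounds are sound over the
# overestimated domain; the under-approximation-guided one is typically tighter
# where the pre-activation values actually fall.
for name, (lo, hi) in [("over-guided", (l2_over[0], u2_over[0])),
                       ("under-guided", (l2_under[0], u2_under[0]))]:
    d = 0.5 * (lo + hi)
    k, c = sound_upper_line(l2_over[0], u2_over[0], d)
    gap = np.mean(k * z2[:, 0] + c - sigmoid(z2[:, 0]))
    print(f"{name:12s} mean gap = {gap:.4f}")
```

In this sketch the underestimated domain only chooses the tangent point; soundness always comes from checking the bound over the overestimated domain, which mirrors the paper's division of roles between the two approximations.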
