Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels

07/16/2020
by Wenjie Wan, et al.

Deep Neural Networks (DNNs) have become key components of many safety-critical applications such as autonomous driving and medical diagnosis. However, DNNs have been shown to suffer from poor robustness because of their susceptibility to adversarial examples: small perturbations to an input can result in misprediction. To address this concern, various approaches have been proposed to formally verify the robustness of DNNs. Most of these approaches reduce the verification problem to an optimization problem of searching for an adversarial example for a given input, i.e., a perturbed input that is no longer classified with the original label. However, they are limited in accuracy and scalability. In this paper, we propose a novel approach that accelerates robustness verification techniques by guiding the verification with target labels. The key insight of our approach is that the robustness verification problem of a DNN can be solved by verifying sub-problems, one per target label. Fixing the target label during verification drastically reduces the search space and thus improves efficiency. We also propose an approach that leverages symbolic interval propagation and linear relaxation techniques to sort the target labels by the likelihood that adversarial examples exist. This often allows us to quickly falsify the robustness of a DNN and avoid verifying the remaining target labels. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with three recent promising DNN verification tools, i.e., MipVerify, DeepZ, and Neurify. Experimental results show that our approach can significantly improve these tools, achieving a 36X speedup when the perturbation distance is set within a reasonable range.
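To make the per-target-label decomposition and bound-based ordering concrete, here is a minimal Python sketch. The helpers `interval_bounds` and `verify_sub_problem` are hypothetical stand-ins (not the paper's actual implementation) for, respectively, a symbolic-interval/linear-relaxation bound analysis and a single-target-label query to an off-the-shelf verifier such as MipVerify, DeepZ, or Neurify.

```python
def verify_robustness(model, x, epsilon, true_label,
                      interval_bounds, verify_sub_problem):
    """Check local robustness of `model` at input `x` within an L-inf ball
    of radius `epsilon` by splitting the problem into one sub-problem per
    target label.

    Hypothetical callbacks:
      interval_bounds(model, x, epsilon) -> (lo, hi)
          per-logit lower/upper output bounds over the perturbation ball,
          e.g. from symbolic interval propagation with linear relaxation.
      verify_sub_problem(model, x, epsilon, true_label, target) -> bool
          True iff no input in the ball is classified as `target`
          (one call into an existing verifier).
    """
    lo, hi = interval_bounds(model, x, epsilon)
    targets = [t for t in range(len(lo)) if t != true_label]

    # Rank target labels by how likely an adversarial example targeting
    # them is: the larger hi[t] - lo[true_label], the more label t's
    # output interval can overtake the original label's.
    targets.sort(key=lambda t: hi[t] - lo[true_label], reverse=True)

    for t in targets:
        # If label t's upper bound stays strictly below the true label's
        # lower bound, t can never win; this sub-problem is trivially safe.
        if hi[t] < lo[true_label]:
            continue
        if not verify_sub_problem(model, x, epsilon, true_label, t):
            # Falsified: some perturbed input is classified as t, so the
            # remaining target labels need not be verified at all.
            return False, t
    return True, None  # every sub-problem verified; the DNN is robust at x
```

Because the most promising target labels are tried first, a non-robust input is typically falsified after the first one or two sub-problems, which is where the reported speedups come from.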
