Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels

07/16/2020
by Wenjie Wan, et al.

Deep Neural Networks (DNNs) have become key components of many safety-critical applications such as autonomous driving and medical diagnosis. However, DNNs have been shown to suffer from poor robustness because of their susceptibility to adversarial examples: small perturbations to an input can cause a misprediction. To address this concern, various approaches have been proposed to formally verify the robustness of DNNs. Most of them reduce the verification problem to an optimization problem of searching for an adversarial example for a given input, i.e., a perturbed input that is no longer classified as the original label. However, these approaches are limited in accuracy and scalability. In this paper, we propose a novel approach that accelerates robustness verification techniques by guiding the verification with target labels. The key insight of our approach is that the robustness verification problem of a DNN can be solved by verifying a set of sub-problems, one per target label. Fixing the target label during verification drastically reduces the search space and thus improves efficiency. We also propose an approach that leverages symbolic interval propagation and linear relaxation techniques to sort the target labels by the likelihood that adversarial examples exist. This often allows us to falsify the robustness of a DNN quickly, so that verification of the remaining target labels can be avoided. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation, we integrate it with three recent promising DNN verification tools, i.e., MipVerify, DeepZ, and Neurify. Experimental results show that our approach can significantly improve these tools, with a 36X speedup when the perturbation distance is set in a reasonable range.
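To make the per-target-label decomposition concrete, here is a minimal Python sketch, not the authors' implementation. It assumes a fully connected ReLU network, uses naive interval bound propagation as a simple stand-in for the paper's symbolic interval propagation with linear relaxation, and the hypothetical `solve_sub_problem` callback stands in for whichever complete backend (e.g., a MILP query as in MipVerify) answers each single-target query. The toy network in the usage example is purely illustrative.

```python
import numpy as np

def interval_propagate(weights, biases, lo, hi):
    """Push the input box [lo, hi] through affine + ReLU layers
    (naive interval bound propagation; sound but loose)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

def verify_by_target_label(weights, biases, x, eps, true_label, solve_sub_problem):
    """Decompose robustness verification into one sub-problem per target
    label, trying first the label most likely to admit an adversarial
    example. Returns True iff the input is verified robust within eps."""
    lo, hi = interval_propagate(weights, biases, x - eps, x + eps)
    # Over-approximate logit_t - logit_true; a positive margin means
    # target t is not yet decided by the cheap bounds alone.
    margin = hi - lo[true_label]
    targets = sorted((t for t in range(len(margin)) if t != true_label),
                     key=lambda t: margin[t], reverse=True)
    for t in targets:
        if margin[t] <= 0:
            break  # sorted descending: all remaining labels proved robust
        if solve_sub_problem(t):
            return False  # counterexample found; robustness falsified early
    return True  # every target-label sub-problem is unsatisfiable

# Toy usage with a random 2-layer network and a stub backend:
rng = np.random.default_rng(0)
W = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
b = [np.zeros(8), np.zeros(3)]
x = rng.standard_normal(4)
robust = verify_by_target_label(W, b, x, eps=0.01, true_label=0,
                                solve_sub_problem=lambda t: False)
```

The ranking is what enables early falsification: if any target label admits an adversarial example, it is likely one with a large margin, so the expensive exact queries for low-margin labels are never issued.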

Related research

- Input Validation for Neural Networks via Runtime Local Robustness Verification (02/09/2020)
- Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification (02/26/2019)
- Global Robustness Verification Networks (06/08/2020)
- Probabilistic Robustness Analysis for DNNs based on PAC Learning (01/25/2021)
- DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks (10/02/2017)
- Is It Time to Redefine the Classification Task for Deep Neural Networks? (10/11/2020)
- DeepSaucer: Unified Environment for Verifying Deep Neural Networks (11/09/2018)