ε-weakened Robustness of Deep Neural Networks

10/29/2021
by Pei Huang, et al.

This paper introduces a notion of ε-weakened robustness for analyzing the reliability and stability of deep neural networks (DNNs). Unlike conventional robustness, which focuses on a "perfect" safe region containing no adversarial examples, ε-weakened robustness considers regions in which the proportion of adversarial examples is bounded by a user-specified ε. A smaller ε means a smaller chance of failure. Under this definition, we can give conclusive results for regions that conventional robustness ignores. We prove that the ε-weakened robustness decision problem is PP-complete and give a statistical decision algorithm with a user-controllable error bound. Furthermore, we derive an algorithm to find the maximum ε-weakened robustness radius. The time complexity of our algorithms is polynomial in the dimension and size of the network, so they scale to large real-world networks. We also show their potential application in analyzing quality issues.
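The statistical decision idea described in the abstract can be illustrated with a simple Monte Carlo sketch: sample points uniformly from the perturbation ball around an input, count how often the classifier's prediction changes, and use a Hoeffding bound to pick a sample size that controls the estimation error. This is only an illustrative sketch under assumed names (`estimate_adversarial_fraction`, the toy classifier), not the paper's exact algorithm.

```python
import math
import random

def estimate_adversarial_fraction(classifier, x, label, radius,
                                  delta=0.01, err=0.05, seed=0):
    """Monte Carlo estimate of the fraction of adversarial points in the
    L-infinity ball of the given radius around x.

    By Hoeffding's inequality, n >= ln(2/delta) / (2 * err**2) samples
    suffice for the estimate to lie within `err` of the true fraction
    with probability at least 1 - delta.
    """
    rng = random.Random(seed)
    n = math.ceil(math.log(2 / delta) / (2 * err ** 2))
    bad = 0
    for _ in range(n):
        # Sample a point uniformly from the L-infinity ball around x.
        x_prime = [xi + rng.uniform(-radius, radius) for xi in x]
        if classifier(x_prime) != label:
            bad += 1
    return bad / n

# Toy classifier: predicts 1 iff the coordinate sum is positive.
toy = lambda x: int(sum(x) > 0)

x0 = [0.5, 0.5]  # coordinate sum 1.0, classified as 1
eps_hat = estimate_adversarial_fraction(toy, x0, toy(x0), radius=0.3)
# With radius 0.3 the coordinate sum stays in [0.4, 1.6], so no
# adversarial points exist and the estimated fraction is 0.
```

The input is then declared ε-weakened robust at this radius whenever the estimated fraction (plus the error margin) stays below the user-specified ε; the number of samples depends only on the error parameters, not on the input dimension, which is what makes this style of check scalable.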


Related research

- 12/11/2014: Towards Deep Neural Network Architectures Robust to Adversarial Examples
  Recent work has shown deep neural networks (DNNs) to be highly susceptib...
- 04/13/2020: Adversarial robustness guarantees for random deep neural networks
  The reliability of most deep learning algorithms is fundamentally challe...
- 01/22/2021: Adaptive Neighbourhoods for the Discovery of Adversarial Examples
  Deep Neural Networks (DNNs) have often supplied state-of-the-art results...
- 10/11/2019: Verification of Neural Networks: Specifying Global Robustness using Generative Models
  The success of neural networks across most machine learning tasks and th...
- 02/17/2020: Scalable Quantitative Verification For Deep Neural Networks
  Verifying security properties of deep neural networks (DNNs) is becoming...
- 01/11/2022: Quantifying Robustness to Adversarial Word Substitutions
  Deep-learning-based NLP models are found to be vulnerable to word substi...
- 06/08/2020: Global Robustness Verification Networks
  The wide deployment of deep neural networks, though achieving great succ...
