Residual Error: a New Performance Measure for Adversarial Robustness

06/18/2021
by   Hossein Aboutalebi, et al.

Despite the significant advances in deep learning over the past decade, a major challenge limiting the widespread adoption of deep neural networks has been their fragility to adversarial attacks. This sensitivity to making erroneous predictions in the presence of adversarially perturbed data makes deep neural networks difficult to adopt for certain real-world, mission-critical applications. While much of the research focus has revolved around adversarial example creation and adversarial hardening, performance measures for assessing adversarial robustness remain under-explored. Motivated by this, this study presents the concept of residual error, a new performance measure that not only assesses the adversarial robustness of a deep neural network at the individual sample level, but can also differentiate between adversarial and non-adversarial examples to facilitate adversarial example detection. Furthermore, we introduce a hybrid model for approximating the residual error in a tractable manner. Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric for assessing several well-known deep neural network architectures. These results illustrate that the proposed measure could be a useful tool not only for assessing the robustness of deep neural networks used in mission-critical scenarios, but also for designing adversarially robust models.
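The abstract does not give the paper's exact formulation of residual error, but the core idea of a per-sample robustness measure can be illustrated with a minimal sketch. The code below is an assumption-laden toy: it uses a hand-set logistic-regression classifier and an FGSM-style perturbation, and the function `per_sample_residual` (a hypothetical name, not from the paper) simply flags whether a bounded adversarial perturbation flips the prediction on an individual sample.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    # FGSM step: gradient of binary cross-entropy wrt the input is (p - y) * w
    p = sigmoid(x @ w + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

def per_sample_residual(x, y, w, b, eps):
    """Toy per-sample robustness indicator (illustrative only):
    1.0 if an eps-bounded FGSM perturbation makes the prediction
    wrong for this individual sample, else 0.0."""
    pred_clean = float(sigmoid(x @ w + b) > 0.5)
    if pred_clean != y:
        return 1.0  # already misclassified on clean input
    x_adv = fgsm_perturb(x, y, w, b, eps)
    pred_adv = float(sigmoid(x_adv @ w + b) > 0.5)
    return float(pred_adv != y)

# Hand-set linear classifier: logit = x[0] - x[1]
w, b = np.array([1.0, -1.0]), 0.0
fragile = np.array([0.5, 0.0])  # near the decision boundary
robust = np.array([2.0, 0.0])   # far from the boundary
print(per_sample_residual(fragile, 1.0, w, b, eps=0.3))  # 1.0
print(per_sample_residual(robust, 1.0, w, b, eps=0.3))   # 0.0
```

The sample-level output is the point: averaging such indicators over a dataset gives an aggregate robustness score, while the per-sample values distinguish fragile inputs from robust ones, mirroring the individual-sample assessment the abstract describes.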


