Global Adversarial Attacks for Assessing Deep Learning Robustness

06/19/2019
by   Hanbin Hu, et al.

It has been shown that deep neural networks (DNNs) can be vulnerable to adversarial attacks, raising concerns about their robustness, particularly in safety-critical applications. Recognizing the local nature and limitations of existing adversarial attacks, we present a new type of global adversarial attack for assessing global DNN robustness. Specifically, we propose the novel concept of global adversarial example pairs, in which the two examples in each pair are close to each other yet are assigned different class labels by the DNN. We further propose two families of global attack methods and show that they generate diverse and intriguing adversarial example pairs at locations far from the training and testing data. Moreover, we demonstrate that DNNs hardened with strong projected gradient descent (PGD) based (local) adversarial training remain vulnerable to the proposed global adversarial example pairs, suggesting that global robustness must be considered when training robust deep learning networks.
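To make the core concept concrete, the following is a minimal sketch of what a "global adversarial example pair" search might look like. This is not the paper's method: the classifier here is a hypothetical linear stand-in for a DNN, and the search (random sampling anywhere in the input space, followed by bisection toward the decision boundary) is a deliberately simple illustration of finding two nearby inputs with different predicted labels, independent of any training data.

```python
import numpy as np

def predict(x):
    """Toy stand-in classifier (hypothetical, not the paper's DNN).
    A fixed linear decision rule returning class 0 or 1."""
    w = np.array([1.0, -2.0])
    return int(x @ w + 0.5 > 0)

def find_global_pair(rng, eps=1e-3, tries=1000):
    """Search for a global adversarial example pair: two inputs within
    eps of each other that receive different predicted labels.
    Sketch: sample two differently-labelled points anywhere in the input
    space (not near training data), then bisect between them, always
    keeping the endpoint labels different, until they are eps-close."""
    for _ in range(tries):
        a, b = rng.uniform(-10, 10, size=(2, 2))
        if predict(a) != predict(b):
            while np.linalg.norm(a - b) > eps:
                m = (a + b) / 2
                # Replace the endpoint that shares the midpoint's label,
                # so the pair keeps straddling the decision boundary.
                if predict(m) == predict(a):
                    a = m
                else:
                    b = m
            return a, b
    return None

rng = np.random.default_rng(0)
pair = find_global_pair(rng)
```

Because the bisection halves the distance between the endpoints at every step while preserving the label disagreement, the loop terminates with a pair that is arbitrarily close yet classified differently — the defining property of a global adversarial example pair.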

