A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks

06/03/2021
by Jacob M. Springer, et al.

Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures. However, targeted adversarial examples – optimized to be classified as a chosen target class – tend to be less transferable between architectures. While prior research on constructing transferable targeted attacks has focused on improving the optimization procedure, in this work we examine the role of the source classifier. Here, we show that training the source classifier to be "slightly robust" – that is, robust to small-magnitude adversarial examples – substantially improves the transferability of targeted attacks, even between architectures as different as convolutional neural networks and transformers. We argue that this result supports a non-intuitive hypothesis: on the spectrum from non-robust (standard) to highly robust classifiers, those that are only slightly robust exhibit the most universal features – ones that tend to overlap with the features learned by other classifiers trained on the same dataset. The results we present provide insight into the nature of adversarial examples as well as the mechanisms underlying so-called "robust" classifiers.
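To make the setup concrete, below is a minimal sketch of the transfer experiment the abstract describes: a targeted L-infinity PGD attack is crafted against a slightly robust source model and then scored on an independently trained target model. The model handles (`slightly_robust_source`, `target_model`), the epsilon, the step size, and the step count are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a targeted transfer attack, assuming PyTorch models
# that map [0, 1] images to logits. Hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def targeted_pgd(source_model, x, target_class, eps=16/255, alpha=2/255, steps=50):
    """Craft targeted L-inf PGD adversarial examples on the source model."""
    source_model.eval()
    x_adv = x.clone().detach()
    target = torch.full((x.size(0),), target_class,
                        dtype=torch.long, device=x.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Targeted attack: step *down* the loss toward the chosen target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv

# Usage: craft on the slightly robust source, measure transfer on the target.
# x_adv = targeted_pgd(slightly_robust_source, images, target_class=3)
# transfer_rate = (target_model(x_adv).argmax(1) == 3).float().mean()
```

Under the paper's hypothesis, the transfer rate measured this way should be substantially higher when the source classifier is slightly robust than when it is either a standard or a highly robust classifier.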


Related research

02/09/2021 · Adversarial Perturbations Are Not So Weird: Entanglement of Robust and Non-Robust Features in Neural Network Classifiers
Neural networks trained on visual data are well-known to be vulnerable t...

08/29/2019 · Universal, transferable and targeted adversarial attacks
Deep Neural Network has been found vulnerable in many previous works. A ...

12/29/2021 · Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently
Deep neural networks are vulnerable to adversarial examples (AEs), which...

07/01/2020 · Adversarial Example Games
The existence of adversarial examples capable of fooling trained neural ...

02/20/2020 · A Bayes-Optimal View on Adversarial Examples
The ability to fool modern CNN classifiers with tiny perturbations of th...

12/27/2017 · Adversarial Patch
We present a method to create universal, robust, targeted adversarial im...

02/16/2019 · Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training
Adversarial examples in machine learning for images are widely publicize...
