Defense-guided Transferable Adversarial Attacks

10/22/2020
by Zifei Zhang, et al.

Though deep neural networks perform excellently on challenging tasks, they are susceptible to adversarial examples, which mislead classifiers by applying human-imperceptible perturbations to clean inputs. Under the query-free black-box scenario, adversarial examples are hard to transfer to unknown models, and the methods proposed so far achieve only low transferability. To address this issue, we design a max-min framework inspired by input transformations, which are beneficial to both adversarial attack and defense. Specifically, we decrease loss values with affine transformations as a defense in the minimization procedure, and then increase loss values with the momentum iterative algorithm as an attack in the maximization procedure. To further promote transferability, we determine the transformation parameters with max-min theory. Extensive experiments on ImageNet demonstrate that our defense-guided transferable attacks achieve an impressive increase in transferability. Experimentally, our best black-box attack fools normally trained models with an average success rate of 85.3%. Additionally, we provide elucidative insights into the improvement of transferability, and our method is expected to serve as a benchmark for assessing the robustness of deep models.
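The max-min loop described above (an inner minimization that "defends" the current input with the affine transformation lowering the loss most, followed by an outer momentum-iterative step that raises the loss again) can be illustrated in code. Below is a minimal PyTorch sketch, assuming a classifier over images in [0, 1]; the `rotate` helper, the candidate angles, and all hyperparameters are illustrative assumptions, not the authors' exact settings.

```python
import math

import torch
import torch.nn.functional as F


def rotate(x, angle_deg):
    # One illustrative affine transformation: rotate a batch of images
    # via an affine grid (the paper considers affine transforms generally).
    a = math.radians(angle_deg)
    mat = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                        [math.sin(a),  math.cos(a), 0.0]],
                       device=x.device, dtype=x.dtype)
    grid = F.affine_grid(mat.expand(x.size(0), -1, -1), list(x.size()),
                         align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)


def defense_guided_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0,
                          angles=(-15.0, -5.0, 0.0, 5.0, 15.0)):
    # Hypothetical sketch of the max-min framework from the abstract.
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / steps          # per-step size for the outer attack
    g = torch.zeros_like(x)      # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Minimization (defense): pick the affine transform that most
        # lowers the loss on the current adversarial example.
        with torch.no_grad():
            losses = torch.stack([loss_fn(model(rotate(x_adv, a)), y)
                                  for a in angles])
        best = angles[int(losses.argmin())]
        # Maximization (attack): a momentum-iterative (MI-FGSM) step
        # taken through the defending transformation.
        x_in = x_adv.clone().requires_grad_(True)
        loss = loss_fn(model(rotate(x_in, best)), y)
        grad = torch.autograd.grad(loss, x_in)[0]
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3),
                                            keepdim=True).clamp_min(1e-12)
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # L_inf ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```

Attacking through the loss-minimizing transform forces the perturbation to survive exactly the kind of input transformation a defender might apply, which is the intuition behind the transferability gains the abstract reports.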


