
Defense-guided Transferable Adversarial Attacks

10/22/2020
by Zifei Zhang, et al.

Though deep neural networks perform challenging tasks excellently, they are susceptible to adversarial examples, which mislead classifiers by applying human-imperceptible perturbations to clean inputs. Under the query-free black-box scenario, adversarial examples are hard to transfer to unknown models, and existing transfer-based methods achieve only limited transferability. To address this issue, we design a max-min framework inspired by input transformations, which benefit both adversarial attacks and defenses. Specifically, we decrease loss values with affine transformations as a defense in the minimization step, and then increase loss values with the momentum iterative algorithm as an attack in the maximization step. To further promote transferability, we determine the transformation parameters with max-min theory. Extensive experiments on ImageNet demonstrate that our defense-guided transferable attacks achieve an impressive increase in transferability. Experimentally, our best black-box attack fools normally trained models at an average success rate of 85.3%. Additionally, we provide elucidative insights into the improvement of transferability, and our method is expected to serve as a benchmark for assessing the robustness of deep models.
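The abstract describes a max-min loop: an inner "defense" step that picks an affine input transformation lowering the classifier's loss, followed by an outer "attack" step that raises the loss with momentum-iterative gradients. The following is a minimal PyTorch sketch of that idea under stated assumptions; the candidate transforms, step sizes, and helper names (`affine`, `defense_guided_attack`) are illustrative placeholders, not the authors' released implementation.

```python
import math
import torch
import torch.nn.functional as F

def affine(x, angle_deg, scale):
    """Rotate/scale a batch of images with an affine grid (the assumed defense transform)."""
    theta = math.radians(angle_deg)
    cos, sin = math.cos(theta), math.sin(theta)
    mat = torch.tensor([[cos / scale, -sin / scale, 0.0],
                        [sin / scale,  cos / scale, 0.0]],
                       device=x.device, dtype=x.dtype)
    grid = F.affine_grid(mat.unsqueeze(0).expand(x.size(0), -1, -1),
                         list(x.shape), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def defense_guided_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0,
                          candidates=((0, 1.0), (5, 0.9), (-5, 0.9), (10, 0.8))):
    """Hypothetical max-min attack: minimize loss over affine candidates, then take an MI-FGSM step."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        # Minimum step (defense): pick the candidate transform that lowers the loss the most.
        with torch.no_grad():
            losses = [loss_fn(model(affine(x_adv, a, s)), y) for a, s in candidates]
        a, s = candidates[int(torch.stack(losses).argmin())]
        # Maximum step (attack): momentum-iterative gradient ascent at the transformed input.
        x_t = affine(x_adv, a, s).requires_grad_(True)
        loss = loss_fn(model(x_t), y)
        grad, = torch.autograd.grad(loss, x_t)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()
        # Project back into the eps-ball around the clean input and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

In this reading, the inner minimization mimics an input-transformation defense so that the outer gradient step targets perturbations that survive such transformations, which is one plausible mechanism for the improved transferability the abstract reports.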
