
Defense-guided Transferable Adversarial Attacks

by Zifei Zhang et al.

Though deep neural networks perform excellently on challenging tasks, they are susceptible to adversarial examples, which mislead classifiers by applying human-imperceptible perturbations to clean inputs. In the query-free black-box scenario, adversarial examples are hard to transfer to unknown models, and the methods proposed so far achieve only low transferability. To address this issue, we design a max-min framework inspired by input transformations, which benefit both the adversarial attack and the defense. Specifically, we decrease loss values with affine transformations as a defense in the minimum procedure, and then increase loss values with the momentum iterative algorithm as an attack in the maximum procedure. To further promote transferability, we determine the transformed values with max-min theory. Extensive experiments on ImageNet demonstrate that our defense-guided transferable attacks achieve an impressive increase in transferability. Our best black-box attack fools normally trained models at an average success rate of 85.3%. Additionally, we provide elucidative insights into the improvement of transferability, and our method is expected to serve as a benchmark for assessing the robustness of deep models.
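The max-min idea from the abstract can be sketched in a few lines: an inner minimum picks the affine transformation that most decreases the loss (the defense), and an outer maximum takes a momentum-iterative signed-gradient step to increase the loss (the attack). The sketch below is a hypothetical toy illustration, not the paper's implementation: the linear "classifier", the candidate transform list, and all hyperparameter values (`eps`, `alpha`, `mu`) are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))      # toy linear "classifier" (assumption)
x_clean = rng.normal(size=4)
y_true = 2                       # ground-truth class index

def loss(x):
    """Softmax cross-entropy of W @ x against y_true."""
    logits = W @ x
    logits = logits - logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[y_true])

def grad_loss(x):
    """Analytic gradient of the loss w.r.t. the input x."""
    logits = W @ x
    logits = logits - logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    return W.T @ (p - np.eye(4)[y_true])

# Candidate affine transforms x -> a*x + b (illustrative stand-ins).
transforms = [(1.0, 0.0), (0.9, 0.05), (1.1, -0.05)]

eps, alpha, mu = 0.3, 0.03, 1.0  # L_inf budget, step size, momentum decay
x_adv, g = x_clean.copy(), np.zeros_like(x_clean)

for _ in range(10):
    # Minimum procedure: the transform that most decreases the loss (defense).
    a, b = min(transforms, key=lambda t: loss(t[0] * x_adv + t[1]))
    # Maximum procedure: momentum-iterative step that increases the loss (attack).
    grad = grad_loss(a * x_adv + b) * a          # chain rule through x -> a*x + b
    g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
    x_adv = x_adv + alpha * np.sign(g)
    x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)  # stay in the budget
```

After the loop, `loss(x_adv)` should exceed `loss(x_clean)` while the perturbation stays within the L-infinity budget; in the paper's setting the toy classifier would be replaced by a substitute deep network and the step computed by backpropagation.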


