Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training

07/15/2023
by Yechao Zhang, et al.

Adversarial examples (AEs) for DNNs have been shown to be transferable: AEs that successfully fool a white-box surrogate model can also deceive black-box models with different architectures. Although a number of empirical studies have provided guidance on generating highly transferable AEs, many of these findings lack explanation and even lead to inconsistent advice. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples serve as better surrogates, we attribute it to a trade-off between two dominant factors: model smoothness and gradient similarity. Our investigations focus on their joint effects rather than their separate correlations with transferability. Through a series of theoretical and empirical analyses, we conjecture that the data distribution shift induced by adversarial training explains the degradation of gradient similarity. Building on these insights, we explore the impacts of data augmentation and gradient regularization on transferability and show that this trade-off generally holds across various training mechanisms, thus building a comprehensive blueprint for the mechanism regulating transferability. Finally, we provide a general route for constructing better surrogates that optimizes model smoothness and gradient similarity simultaneously, e.g., by combining input gradient regularization with sharpness-aware minimization (SAM), and validate it with extensive experiments. In summary, we call for attention to the joint impact of these two factors when launching effective transfer attacks, rather than optimizing one while ignoring the other, and we emphasize the crucial role of manipulating surrogate models.
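To make the setup concrete, here is a minimal PyTorch sketch of the surrogate-based transfer attack the abstract studies, together with the gradient cosine similarity between surrogate and target that it identifies as one of the two governing factors. The surrogate/target model pair, placeholder batch, and attack hyperparameters below are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L_inf-bounded adversarial examples on the white-box surrogate."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project back into eps-ball
        x_adv = torch.clamp(x_adv, 0, 1).detach()
    return x_adv

def grad_cosine_similarity(model_a, model_b, x, y):
    """Mean cosine similarity between the two models' input loss gradients."""
    grads = []
    for model in (model_a, model_b):
        x_in = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        grads.append(torch.autograd.grad(loss, x_in)[0].flatten(1))
    return F.cosine_similarity(grads[0], grads[1], dim=1).mean().item()

# Hypothetical surrogate/target pair; any two pretrained classifiers would do.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.densenet121(weights="IMAGENET1K_V1").eval()  # black-box stand-in

x = torch.rand(4, 3, 224, 224)         # placeholder batch, not real data
y = torch.randint(0, 1000, (4,))
x_adv = pgd_attack(surrogate, x, y)
with torch.no_grad():
    fooled = (target(x_adv).argmax(1) != y).float().mean().item()
print(f"grad cos-sim: {grad_cosine_similarity(surrogate, target, x, y):.3f}, "
      f"transfer fooling rate: {fooled:.2%}")
```

And a minimal sketch of one step of the suggested surrogate-training recipe, combining input gradient regularization with sharpness-aware minimization. The SAM logic is a simplified generic re-implementation, and rho and lam are hypothetical hyperparameters rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def sam_igr_step(model, optimizer, x, y, rho=0.05, lam=0.1):
    """One training step: input gradient regularization plus SAM."""
    def regularized_loss():
        x_in = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        # Input gradient regularization: penalize the input-gradient norm.
        # create_graph=True keeps the penalty differentiable w.r.t. weights.
        (g_x,) = torch.autograd.grad(loss, x_in, create_graph=True)
        return loss + lam * g_x.pow(2).flatten(1).sum(1).mean()

    # SAM ascent: move weights to the worst case within an L2 ball of radius rho.
    loss = regularized_loss()
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbs = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                perturbs.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbs.append(e)
    optimizer.zero_grad()

    # SAM descent: gradient at the perturbed weights, then restore and update.
    regularized_loss().backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbs):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.detach()
```

On this reading of the abstract, the descent gradient taken at SAM's perturbed weights encourages a flat loss surface (model smoothness), while the input-gradient penalty is the ingredient tied to gradient similarity, pairing the two factors the paper argues should be optimized together.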


Related research

06/16/2022
Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge
Deep neural networks (DNNs) for image classification are known to be vul...

08/24/2023
Exploring Transferability of Multimodal Adversarial Samples for Vision-Language Pre-training Models with Contrastive Learning
Vision-language pre-training models (VLP) are vulnerable, especially to ...

02/03/2022
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization
Machine learning (ML) robustness and domain generalization are fundament...

04/01/2021
TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness
Adversarial Transferability is an intriguing property of adversarial exa...

04/16/2019
Reducing Adversarial Example Transferability Using Gradient Regularization
Deep learning algorithms have increasingly been shown to lack robustness...

07/26/2022
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity
We propose transferability from Large Geometric Vicinity (LGV), a new te...

05/14/2021
High-Robustness, Low-Transferability Fingerprinting of Neural Networks
This paper proposes Characteristic Examples for effectively fingerprinti...
