Generating Adversarial Examples with Better Transferability via Masking Unimportant Parameters of Surrogate Model

04/14/2023
by Dingcheng Yang, et al.

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Moreover, the transferability of adversarial examples has received broad attention in recent years: adversarial examples crafted on a surrogate model can also fool unknown models. This phenomenon has given rise to transfer-based adversarial attacks, which aim to improve the transferability of the generated adversarial examples. In this paper, we propose to improve the transferability of adversarial examples in transfer-based attacks by masking unimportant parameters (MUP). The key idea of MUP is to refine pretrained surrogate models to boost the transfer-based attack. Concretely, a Taylor expansion-based metric is used to evaluate a parameter importance score, and the unimportant parameters are masked during the generation of adversarial examples. This process is simple, yet it can be naturally combined with various existing gradient-based optimizers for generating adversarial examples, further improving the transferability of the generated adversarial examples. Extensive experiments validate the effectiveness of the proposed MUP-based methods.
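To make the idea concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: score each surrogate parameter with a first-order Taylor expansion-based metric, mask the lowest-scoring fraction, then run a standard gradient-based attack (plain iterative FGSM here) on the masked surrogate. The function names (`taylor_importance`, `mask_unimportant_parameters`, `ifgsm_attack`), the per-tensor masking, and the masking ratio are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def taylor_importance(model, x, y):
    """First-order Taylor score |w * dL/dw| for each parameter.
    (Assumed form of the abstract's Taylor expansion-based metric.)"""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return {
        name: (p * p.grad).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

@torch.no_grad()
def mask_unimportant_parameters(model, scores, mask_ratio=0.1):
    """Zero out the lowest-scoring fraction of each weight tensor.
    mask_ratio is a hypothetical choice for illustration."""
    for name, p in model.named_parameters():
        if name not in scores:
            continue
        flat = scores[name].flatten()
        k = int(mask_ratio * flat.numel())
        if k == 0:
            continue
        threshold = flat.kthvalue(k).values  # k-th smallest score
        p.mul_((scores[name] > threshold).to(p.dtype))

def ifgsm_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Plain iterative FGSM on the (masked) surrogate; any
    gradient-based optimizer could be substituted here."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

# Usage sketch: score, mask, then attack the masked surrogate.
# surrogate, images, labels are assumed to be defined by the caller.
# scores = taylor_importance(surrogate, images, labels)
# mask_unimportant_parameters(surrogate, scores, mask_ratio=0.1)
# adv_images = ifgsm_attack(surrogate, images, labels)
```

Because the masking only touches the surrogate's weights, it composes with any existing gradient-based attack, which is the plug-and-play property the abstract claims; whether scores are computed once or per batch, and the exact ratio, would follow the full paper.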
