Fuzziness-tuned: Improving the Transferability of Adversarial Examples

03/17/2023
by   Xiangyuan Yang, et al.
With the development of adversarial attacks, adversarial examples have been widely used to enhance the robustness of deep neural network models. Although considerable effort has been devoted to improving the transferability of adversarial examples, the attack success rate of transfer-based attacks on the surrogate model remains much higher than that on the victim model under low attack strength (e.g., attack strength ϵ=8/255). In this paper, we first systematically investigate this issue and find that the enormous difference in attack success rates between the surrogate model and the victim model is caused by the existence of a special region (called the fuzzy domain in our paper), in which adversarial examples are misclassified by the surrogate model but classified correctly by the victim model. Then, to eliminate this difference and thereby improve the transferability of the generated adversarial examples, we propose a fuzziness-tuned method consisting of a confidence scaling mechanism and a temperature scaling mechanism, which ensures that the generated adversarial examples can effectively escape the fuzzy domain. The two mechanisms collaboratively tune the fuzziness of the generated adversarial examples by adjusting the gradient descent weight of the fuzziness and by stabilizing the update direction, respectively. Moreover, the proposed fuzziness-tuned method can be integrated with existing adversarial attacks to further improve the transferability of adversarial examples without increasing their time complexity. Extensive experiments demonstrate that the fuzziness-tuned method effectively enhances the transferability of adversarial examples in the latest transfer-based attacks.
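The abstract does not give the paper's exact formulation, but the role of temperature scaling in this setting can be illustrated with a minimal sketch: dividing the logits by a temperature T flattens the softmax, which smooths the cross-entropy gradient used to craft the perturbation. All function names, the FGSM-style update, and the toy numbers below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: larger T flattens the distribution,
    # which in turn yields a smoother, more stable gradient direction.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fgsm_step(x, grad_sign, eps=8 / 255):
    # One FGSM-style ascent step under an L_inf budget eps (e.g. 8/255),
    # keeping the image in the valid pixel range [0, 1].
    return np.clip(x + eps * grad_sign, 0.0, 1.0)

# Toy example: for cross-entropy loss, the gradient w.r.t. the logits
# is (p - y), where p is the softmax output and y the one-hot label.
logits = np.array([2.0, 1.0, 0.1])
y = np.array([0.0, 1.0, 0.0])        # true label = class 1
p_sharp = softmax(logits, T=1.0)     # standard softmax
p_flat = softmax(logits, T=10.0)     # temperature-scaled, flatter
grad = p_flat - y                    # gradient used for the attack direction
x_adv = fgsm_step(np.array([0.999, 0.5, 0.1]), np.sign([1.0, -1.0, 1.0]))
```

In a full attack the gradient would be backpropagated to the input pixels; here the sketch stops at the logit gradient to show how the temperature controls its sharpness.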
