Improving the Transferability of Adversarial Samples by Path-Augmented Method

03/28/2023
by Jianping Zhang, et al.

Deep neural networks have achieved unprecedented success on diverse vision tasks. However, they are vulnerable to adversarial noise that is imperceptible to humans. This phenomenon undermines their deployment in real-world scenarios, especially security-related ones. To evaluate the robustness of a target model in practice, transfer-based attacks craft adversarial samples with a local surrogate model, and they have attracted increasing attention from researchers due to their high efficiency. The state-of-the-art transfer-based attacks are generally based on data augmentation, which typically augments multiple training images along a linear path when learning adversarial samples. However, such methods select the image augmentation path heuristically and may augment images that are semantics-inconsistent with the target image, which harms the transferability of the generated adversarial samples. To overcome this pitfall, we propose the Path-Augmented Method (PAM). Specifically, PAM first constructs a candidate augmentation path pool. It then selects the augmentation paths employed during adversarial sample generation with greedy search. Furthermore, to avoid augmenting semantics-inconsistent images, we train a Semantics Predictor (SP) to constrain the length of each augmentation path. Extensive experiments confirm that PAM achieves an improvement of over 4.8% compared with the state-of-the-art baselines in terms of attack success rate.
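To make the three ingredients of the abstract concrete, here is a minimal PyTorch sketch of how they could fit together: a path-length constraint from a semantics predictor, a greedy selection over a pool of candidate paths, and an iterative sign-gradient attack that averages gradients along the chosen paths. The function names, the sp(x, x_aug) interface with its 0.5 threshold, and the gradient-magnitude ranking used for the greedy step are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def path_gradient(model, x_adv, y, baseline, max_t, n_samples=5):
    """Average the loss gradient over images sampled along the linear
    augmentation path from x_adv toward `baseline`, truncated at
    interpolation weight max_t."""
    grad = torch.zeros_like(x_adv)
    for i in range(1, n_samples + 1):
        t = max_t * i / n_samples
        x_aug = ((1 - t) * x_adv + t * baseline).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_aug), y)
        grad += torch.autograd.grad(loss, x_aug)[0]
    return grad / n_samples

def pam_attack(model, sp, x, y, baselines,
               eps=16 / 255, alpha=2 / 255, steps=10, n_paths=4):
    """Hedged sketch of PAM. `sp(x, x_aug)` is a hypothetical Semantics
    Predictor returning a consistency score in [0, 1]; `baselines` is
    the candidate pool of path endpoints."""
    # 1) Constrain each candidate path with the Semantics Predictor:
    #    shrink t until the farthest augmented image is still judged
    #    semantics-consistent with x (the 0.5 threshold is an assumption).
    max_ts = []
    for b in baselines:
        t = 1.0
        while t > 0 and sp(x, (1 - t) * x + t * b) < 0.5:
            t = round(t - 0.1, 2)
        max_ts.append(max(t, 0.0))

    # 2) Greedy selection from the pool: rank paths by average gradient
    #    magnitude on the surrogate model (a simplified stand-in for the
    #    paper's greedy search criterion).
    scores = [path_gradient(model, x, y, b, t).abs().mean().item()
              for b, t in zip(baselines, max_ts)]
    chosen = sorted(range(len(baselines)), key=lambda i: -scores[i])[:n_paths]

    # 3) I-FGSM-style update using gradients averaged over the
    #    selected augmentation paths.
    x_adv = x.clone()
    for _ in range(steps):
        g = torch.zeros_like(x)
        for i in chosen:
            g = g + path_gradient(model, x_adv, y, baselines[i], max_ts[i])
        x_adv = x_adv + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()        # keep a valid image
    return x_adv
```

With a single black baseline and max_t fixed to 1, step 3 reduces to scale-style linear-path augmentation; PAM's contribution is precisely that the pool, the selection, and the path length are learned rather than set heuristically.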


Related research

08/21/2023
Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer
Deep neural networks are vulnerable to adversarial examples crafted by a...

03/31/2022
Improving Adversarial Transferability via Neuron Attribution-Based Attacks
Deep neural networks (DNNs) are known to be vulnerable to adversarial ex...

03/28/2023
Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization
Vision transformers (ViTs) have been successfully deployed in a variety ...

04/25/2022
VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization
Invariance to diverse types of image corruption, such as noise, blurring...

08/15/2023
Backpropagation Path Search On Adversarial Transferability
Deep neural networks are vulnerable to adversarial examples, dictating t...

11/09/2021
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
Deep neural networks are susceptible to adversarially crafted, small and...

06/28/2023
Boosting Adversarial Transferability with Learnable Patch-wise Masks
Adversarial examples have raised widespread attention in security-critic...
