StyLess: Boosting the Transferability of Adversarial Examples

04/23/2023
by Kaisheng Liang, et al.

Adversarial attacks can mislead deep neural networks (DNNs) by adding imperceptible perturbations to benign examples. Attack transferability enables adversarial examples crafted on one model to attack black-box DNNs with unknown architectures or parameters, which poses threats to many real-world applications. We find that existing transferable attacks do not distinguish between style and content features during optimization, which limits their transferability. To improve attack transferability, we propose a novel attack method called style-less perturbation (StyLess). Specifically, instead of using a vanilla network as the surrogate model, we advocate using stylized networks, which encode different style features by perturbing an adaptive instance normalization layer. Our method prevents adversarial examples from exploiting non-robust style features and helps generate transferable perturbations. Comprehensive experiments show that StyLess significantly improves the transferability of adversarial examples. Furthermore, our approach is generic and outperforms state-of-the-art transferable attacks when combined with other attack techniques.

Related research

10/12/2022 · Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
Deep neural networks (DNNs) have been shown to be vulnerable to adversar...

12/16/2019 · CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator
Deep neural networks (DNNs) are vulnerable to adversarial attack despite...

12/07/2020 · Backpropagating Linearly Improves Transferability of Adversarial Examples
The vulnerability of deep neural networks (DNNs) to adversarial examples...

10/28/2022 · Improving Transferability of Adversarial Examples on Face Recognition with Beneficial Perturbation Feature Augmentation
Face recognition (FR) models can be easily fooled by adversarial example...

05/28/2019 · Cross-Domain Transferability of Adversarial Perturbations
Adversarial examples reveal the blind spots of deep neural networks (DNN...

04/20/2023 · Diversifying the High-level Features for better Adversarial Transferability
Given the great threat of adversarial attacks against Deep Neural Networ...

02/20/2021 · Going Far Boosts Attack Transferability, but Do Not Do It
Deep Neural Networks (DNNs) could be easily fooled by Adversarial Exampl...
