Boosting Adversarial Transferability of MLP-Mixer

04/26/2022
by   Haoran Lyu, et al.

The security of models based on new architectures such as MLP-Mixer and ViTs urgently needs to be studied. However, most current research targets adversarial attacks against ViTs, and adversarial work on MLP-Mixer remains scarce. We propose an adversarial attack method against MLP-Mixer called Maxwell's demon Attack (MA). MA breaks the channel-mixing and token-mixing mechanisms of MLP-Mixer by controlling part of the input to each Mixer layer, preventing MLP-Mixer from capturing the main information of the images. By masking part of each Mixer layer's input, our method avoids overfitting the adversarial examples to the source model and improves cross-architecture transferability. Extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed MA. Our method can easily be combined with existing methods and improves transferability by up to 38.0%. The adversarial examples produced by our method on MLP-Mixer can even exceed the transferability of adversarial examples produced using DenseNet against CNNs. To the best of our knowledge, ours is the first work to study the adversarial transferability of MLP-Mixer.
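The core idea the abstract describes, masking part of a Mixer layer's input so the attack does not overfit to the source model's token- and channel-mixing, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`mask_mixer_input`, `mixer_layer`), the `drop_rate` parameter, and the toy layer below are illustrative assumptions built on the standard MLP-Mixer layer structure.

```python
import numpy as np

def mask_mixer_input(x, drop_rate=0.3, rng=None):
    """Zero out a random fraction of a Mixer layer's input.

    x: (tokens, channels) array, the input to one Mixer layer.
    drop_rate: fraction of entries to mask (illustrative hyperparameter,
    not a value from the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= drop_rate  # keep ~ (1 - drop_rate) of entries
    return x * mask

def mixer_layer(x, w_token, w_channel):
    """Toy Mixer layer: token-mixing then channel-mixing (norms omitted)."""
    x = x + (w_token @ x)      # token mixing: mixes information across tokens
    x = x + (x @ w_channel)    # channel mixing: mixes information across channels
    return x

# During attack iterations, the masked input would replace the clean one,
# disturbing what the mixing MLPs can aggregate from the image:
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))             # 16 tokens, 8 channels
w_token = rng.standard_normal((16, 16)) * 0.01
w_channel = rng.standard_normal((8, 8)) * 0.01
out = mixer_layer(mask_mixer_input(x, 0.3, rng), w_token, w_channel)
```

Because the mask is resampled at each attack iteration, the perturbation cannot latch onto any fixed mixing pathway of the source model, which is the intuition behind the improved cross-architecture transferability the abstract reports.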

Related research

02/28/2022 | Enhance transferability of adversarial examples with model architecture
Transferability of adversarial examples is of critical importance to lau...

06/16/2022 | Boosting the Adversarial Transferability of Surrogate Model with Dark Knowledge
Deep neural networks (DNNs) for image classification are known to be vul...

11/22/2021 | Adversarial Examples on Segmentation Models Can be Easy to Transfer
Deep neural network-based image classification can be misled by adversar...

05/02/2023 | Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature
Recent research has shown that Deep Neural Networks (DNNs) are highly vu...

12/09/2018 | Learning Transferable Adversarial Examples via Ghost Networks
The recent development of adversarial attack has proven that ensemble-ba...

07/29/2021 | Feature Importance-aware Transferable Adversarial Attacks
Transferability of adversarial examples is of central importance for att...

02/10/2020 | ABBA: Saliency-Regularized Motion-Based Adversarial Blur Attack
Deep neural networks are vulnerable to noise-based adversarial examples,...
