DBIA: Data-free Backdoor Injection Attack against Transformer Networks

11/22/2021
by Peizhuo Lv, et al.

Recently, the transformer architecture has demonstrated its significance in both Natural Language Processing (NLP) and Computer Vision (CV) tasks. Although other network models are known to be vulnerable to backdoor attacks, which embed triggers in a model and control its behavior when the triggers are present, little is known about whether such attacks remain valid against transformer models and, if so, whether they can be mounted in a more cost-efficient manner. In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks that leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset. We conducted extensive experiments on three benchmark transformers, i.e., ViT, DeiT and Swin Transformer, and two mainstream image classification datasets, i.e., CIFAR10 and ImageNet. The evaluation results demonstrate that, while consuming fewer resources, our approach can embed backdoors with a high success rate and a low impact on the performance of the victim transformers. Our code is available at https://anonymous.4open.science/r/DBIA-825D.
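For concreteness, the sketch below outlines how the two stages the abstract describes could be assembled in PyTorch: first craft a trigger that the transformer's representation concentrates on, then inject the backdoor by fine-tuning on a poisoned surrogate dataset while preserving clean behavior. It is not the authors' released implementation (see the repository link above): the torchvision ViT stand-in, the random-noise surrogate batches, the token-feature-norm proxy for attention, the bottom-right trigger placement, the choice to update only the last block and the head, and all hyperparameters are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F
import torchvision.models as tvm

device = "cuda" if torch.cuda.is_available() else "cpu"

# A torchvision ViT stands in for the victim transformer (ViT/DeiT/Swin).
victim = tvm.vit_b_16(weights=tvm.ViT_B_16_Weights.DEFAULT).to(device).eval()
victim.requires_grad_(False)            # gradients flow to the trigger only
frozen = copy.deepcopy(victim)          # reference copy for clean behavior

TARGET, SIZE, PATCH, TRIG = 0, 224, 16, 32  # target class; 32x32 corner trigger

def stamp(x, trigger):
    """Paste the trigger onto the bottom-right corner of a batch."""
    x = x.clone()
    x[:, :, -TRIG:, -TRIG:] = trigger
    return x

# Step 1: craft a trigger the transformer concentrates on. The L2 norm of
# the trigger-region tokens in a middle encoder block is used here as a
# simple stand-in for attention weight (an assumption, not the paper's
# exact objective).
feats = {}
victim.encoder.layers[6].register_forward_hook(
    lambda mod, inp, out: feats.__setitem__("tokens", out))

g = SIZE // PATCH                       # 14x14 patch grid; class token at 0
rows = range(g - TRIG // PATCH, g)
idx = torch.tensor([1 + r * g + c for r in rows for c in rows], device=device)

trigger = torch.rand(3, TRIG, TRIG, device=device, requires_grad=True)
opt_t = torch.optim.Adam([trigger], lr=0.05)
for _ in range(100):
    x = stamp(torch.rand(8, 3, SIZE, SIZE, device=device), trigger)
    victim(x)
    loss = -feats["tokens"][:, idx].norm(dim=-1).mean()
    opt_t.zero_grad(); loss.backward(); opt_t.step()
    trigger.data.clamp_(0, 1)

# Step 2: inject the backdoor on surrogate data (random noise here) while
# a distillation term against the frozen copy preserves clean accuracy.
# Only the last block and the head are updated, to keep the cost low.
trainable = (list(victim.encoder.layers[-1].parameters())
             + list(victim.heads.parameters()))
for p in trainable:
    p.requires_grad_(True)
opt = torch.optim.Adam(trainable, lr=1e-4)
for _ in range(200):
    x = torch.rand(8, 3, SIZE, SIZE, device=device)        # surrogate batch
    with torch.no_grad():
        clean_ref = frozen(x).softmax(-1)
    loss_clean = F.kl_div(victim(x).log_softmax(-1), clean_ref,
                          reduction="batchmean")
    y = torch.full((8,), TARGET, device=device)
    loss_bd = F.cross_entropy(victim(stamp(x, trigger.detach())), y)
    opt.zero_grad(); (loss_clean + loss_bd).backward(); opt.step()
```

The sketch only fixes the overall data-free shape of the pipeline: no access to the original training set is required, only the pretrained weights; the paper's actual surrogate data and attention objective may differ.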

Related research:

03/21/2023 · Machine Learning for Brain Disorders: Transformers and Visual Transformers
Transformers were initially introduced for natural language processing (...

05/28/2023 · Key-Value Transformer
Transformers have emerged as the prevailing standard solution for variou...

01/29/2023 · Graph Mixer Networks
In recent years, the attention mechanism has demonstrated superior perfo...

08/22/2023 · ConcatPlexer: Additional Dim1 Batching for Faster ViTs
Transformers have demonstrated tremendous success not only in the natura...

10/02/2022 · Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks
Transformer networks have seen great success in natural language process...

01/28/2021 · A Neural Few-Shot Text Classification Reality Check
Modern classification models tend to struggle when the amount of annotat...

04/25/2023 · iMixer: hierarchical Hopfield network implies an invertible, implicit and iterative MLP-Mixer
In the last few years, the success of Transformers in computer vision ha...
