TrojViT: Trojan Insertion in Vision Transformers

08/27/2022
by   Mengxin Zheng, et al.

Vision Transformers (ViTs) have demonstrated state-of-the-art performance in various vision tasks. The success of ViTs motivates adversaries to perform backdoor attacks on them. Although the vulnerability of traditional CNNs to backdoor attacks is well known, backdoor attacks on ViTs are seldom studied. Compared to CNNs, which capture pixel-wise local features through convolutions, ViTs extract global context information through patches and attention. Naïvely transplanting CNN-specific backdoor attacks to ViTs yields only low clean data accuracy and a low attack success rate. In this paper, we propose TrojViT, a stealthy and practical ViT-specific backdoor attack. Rather than the area-wise trigger used by CNN-specific backdoor attacks, TrojViT generates a patch-wise trigger through patch salience ranking and an attention-target loss; the trigger is designed to work with a Trojan composed of a small number of vulnerable bits in the parameters of a ViT stored in DRAM. TrojViT further uses a minimum-tuned parameter update to reduce the number of bits in the Trojan. Once the attacker inserts the Trojan into the ViT model by flipping the vulnerable bits, the model still produces normal inference accuracy on benign inputs. But when the attacker embeds the trigger into an input, the model is forced to classify that input to a predefined target class. We show that flipping only a few vulnerable bits identified by TrojViT, e.g., with the well-known RowHammer attack, transforms a ViT model into a backdoored one. We perform extensive experiments on multiple datasets and various ViT models. TrojViT classifies 99.64% of test images to a target class by flipping 345 bits of a ViT trained on ImageNet.
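To illustrate the patch-wise trigger idea, the following is a minimal sketch (not the paper's implementation) of embedding a trigger into the most salient patches of an image. It assumes per-patch saliency scores have already been computed (e.g., from gradient magnitudes); the function name and parameters are illustrative only.

```python
import numpy as np

def embed_patchwise_trigger(image, saliency, trigger_patch, num_patches=4, patch=16):
    """Overwrite the top-k most salient patches of `image` with `trigger_patch`.

    image         : (H, W, C) array, H and W divisible by `patch`
    saliency      : (H // patch, W // patch) array of per-patch saliency scores
    trigger_patch : (patch, patch, C) array holding the trigger content
    """
    h, w, _ = image.shape
    grid_w = w // patch
    # Rank patch indices by saliency, highest first, and keep the top k.
    top = np.argsort(saliency.ravel())[::-1][:num_patches]
    out = image.copy()
    for idx in top:
        row, col = divmod(idx, grid_w)
        out[row * patch:(row + 1) * patch,
            col * patch:(col + 1) * patch, :] = trigger_patch
    return out
```

In the attack described above, the trigger content itself would be optimized jointly with an attention-target loss so that the backdoored model's attention is drawn to the trigger patches; this sketch only shows the patch-selection and embedding step.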


