Backdoor Attacks on Vision Transformers

06/16/2022
by Akshayvarun Subramanya, et al.

Vision Transformers (ViT) have recently demonstrated exemplary performance on a variety of vision tasks and are being used as an alternative to CNNs. Their design is based on a self-attention mechanism that processes images as a sequence of patches, which is quite different from the design of CNNs. It is therefore interesting to study whether ViTs are vulnerable to backdoor attacks, in which an attacker poisons a small part of the training data for malicious purposes: the model performs well on clean test images, but the attacker can manipulate its decision by presenting the trigger at test time. To the best of our knowledge, we are the first to show that ViTs are vulnerable to backdoor attacks. We also find an intriguing difference between ViTs and CNNs: interpretation algorithms effectively highlight the trigger on test images for ViTs but not for CNNs. Based on this observation, we propose a test-time image-blocking defense for ViTs which reduces the attack success rate by a large margin. Code is available here: https://github.com/UCDvision/backdoor_transformer.git
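The blocking defense lends itself to a short sketch. The following is a minimal illustration of the idea, assuming a PyTorch ViT classifier and plain input-gradient saliency as the interpretation method; the function and parameter names (block_and_reclassify, patch, topk) are illustrative assumptions, not the authors' implementation, which lives in the repository linked above.

```python
# Sketch of a test-time blocking defense: locate the most salient
# patch-aligned region of the input (where a backdoor trigger tends to be
# highlighted for ViTs), zero it out, and re-classify. Assumes a PyTorch
# image classifier and input-gradient saliency; this is not the authors'
# exact method.
import torch
import torch.nn.functional as F

def block_and_reclassify(model, image, patch=16, topk=1):
    """Zero out the `topk` most salient patch-sized cells of `image`
    (a (C, H, W) tensor) and return the prediction on the blocked image."""
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)   # (1, C, H, W)

    # 1. Interpretation step: gradient of the top logit w.r.t. the input.
    logits = model(x)
    logits[0, logits[0].argmax()].backward()
    sal = x.grad.abs().sum(dim=1, keepdim=True)           # (1, 1, H, W)

    # 2. Pool saliency into patch-sized cells, matching the ViT patch grid.
    cell = F.avg_pool2d(sal, kernel_size=patch, stride=patch)
    n_cols = cell.shape[-1]
    idx = cell.flatten().topk(topk).indices               # most salient cells

    # 3. Blocking step: zero out the top cells.
    blocked = image.clone()
    for i in idx.tolist():
        r, c = divmod(i, n_cols)
        blocked[:, r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0

    # 4. Re-classify with the suspected trigger region removed.
    with torch.no_grad():
        return model(blocked.unsqueeze(0)).argmax(dim=1).item()
```

As a hypothetical usage, one could pair this with a timm model, e.g. `model = timm.create_model('vit_base_patch16_224', pretrained=True)`, then call `block_and_reclassify(model, image)` on a normalized 3x224x224 tensor; a blocked prediction that differs sharply from the original one suggests the blocked region was acting as a trigger.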


Related research

05/21/2021 · Backdoor Attacks on Self-Supervised Learning
Large-scale unlabeled data has allowed recent progress in self-supervise...

08/27/2022 · TrojViT: Trojan Insertion in Vision Transformers
Vision Transformers (ViTs) have demonstrated the state-of-the-art perfor...

10/18/2020 · Poisoned classifiers are not only backdoored, they are fundamentally broken
Under a commonly-studied "backdoor" poisoning attack against classificat...

05/31/2022 · Surface Analysis with Vision Transformers
The extension of convolutional neural networks (CNNs) to non-Euclidean g...

03/19/2022 · CNNs and Transformers Perceive Hybrid Images Similar to Humans
Hybrid images is a technique to generate images with two interpretations...

03/27/2023 · Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
Deep neural networks are vulnerable to backdoor attacks, where an advers...

03/22/2022 · GradViT: Gradient Inversion of Vision Transformers
In this work we demonstrate the vulnerability of vision transformers (Vi...
