Towards Practical Certifiable Patch Defense with Vision Transformer

03/16/2022
by Zhaoyu Chen et al.

Patch attacks, one of the most threatening forms of physical adversarial attack, can cause networks to misclassify by arbitrarily modifying pixels within a continuous region. Certifiable patch defense can guarantee that the classifier is not affected by patch attacks. Existing certifiable patch defenses sacrifice the clean accuracy of classifiers and only obtain low certified accuracy on toy datasets. Furthermore, the clean and certified accuracy of these methods is still significantly lower than the accuracy of normal classification networks, which limits their application in practice. To move towards a practical certifiable patch defense, we introduce the Vision Transformer (ViT) into the framework of Derandomized Smoothing (DS). Specifically, we propose a progressive smoothed image modeling task to train the Vision Transformer, which can capture the more discriminable local context of an image while preserving the global semantic information. For efficient inference and deployment in the real world, we innovatively reconstruct the global self-attention structure of the original ViT into isolated band unit self-attention. On ImageNet, under 2% area patch attacks, our method achieves 41.70% certified accuracy, a nearly 1x improvement over the previous best method (26.00%). Simultaneously, our method achieves 78.58% clean accuracy, which is quite close to the normal ResNet-101 accuracy. Extensive experiments show that our method obtains state-of-the-art clean and certified accuracy while inferring efficiently on CIFAR-10 and ImageNet.
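For context, the Derandomized Smoothing framework the paper builds on certifies a prediction by classifying many masked copies of the image, each retaining only one column band, and taking a majority vote: a patch of width m can overlap at most m + b - 1 bands of width b, so the worst-case vote shift is bounded by the margin. Below is a simplified Python sketch of that certification step, assuming a hypothetical base_classifier that stands in for the paper's band-unit ViT; band_width and patch_width values are illustrative, not the paper's settings.

    # Simplified sketch of column-band derandomized smoothing certification
    # (after Levine & Feizi's (De)Randomized Smoothing). Hypothetical names:
    # base_classifier(img) -> int class index for a single masked image.
    import numpy as np

    def certify_column_smoothing(base_classifier, image, num_classes,
                                 band_width=19, patch_width=32):
        """Majority vote over all column-band positions, plus a certificate
        that no patch of width `patch_width` can flip the prediction."""
        h, w, _ = image.shape  # assumes HWC layout
        votes = np.zeros(num_classes, dtype=int)

        # Classify every masked copy that keeps only one (wrap-around) band.
        for start in range(w):
            masked = np.zeros_like(image)
            cols = [(start + i) % w for i in range(band_width)]
            masked[:, cols, :] = image[:, cols, :]
            votes[base_classifier(masked)] += 1

        top = int(np.argmax(votes))
        runner_up = int(np.max(np.delete(votes, top)))

        # A patch of width m intersects at most m + b - 1 bands, so it can
        # move at most that many votes from the top class to a rival.
        delta = patch_width + band_width - 1
        certified = votes[top] > runner_up + 2 * delta
        return top, bool(certified)

On its own this vote requires one forward pass per band position; the isolated band unit self-attention described in the abstract is what the paper proposes to make evaluating all bands efficient enough for practical inference.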


Related research:

02/25/2020 - (De)Randomized Smoothing for Certifiable Defense against Patch Attacks
  Patch adversarial attacks on images, in which the attacker can distort p...

06/24/2022 - Defending Backdoor Attacks on Vision Transformer via Patch Processing
  Vision Transformers (ViTs) have a radically different architecture with ...

11/19/2021 - Zero-Shot Certified Defense against Adversarial Patches with Vision Transformers
  Adversarial patch attack aims to fool a machine learning model by arbitr...

05/17/2020 - PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields
  Localized adversarial patches aim to induce misclassification in machine...

09/04/2023 - Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings
  In this paper, we propose a new key-based defense focusing on both effic...

08/21/2023 - Patch Is Not All You Need
  Vision Transformers have achieved great success in computer visions, del...

08/20/2021 - PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
  The adversarial patch attack against image classification models aims to...
