Discrete Representations Strengthen Vision Transformer Robustness

11/20/2021
by Chengzhi Mao, et al.

Vision Transformer (ViT) is emerging as the state-of-the-art architecture for image recognition. While recent studies suggest that ViTs are more robust than their convolutional counterparts, our experiments find that ViTs are overly reliant on local features (e.g., nuisances and texture) and fail to make adequate use of global context (e.g., shape and structure). As a result, ViTs fail to generalize to out-of-distribution, real-world data. To address this deficiency, we present a simple and effective architecture modification to ViT's input layer: adding discrete tokens produced by a vector-quantized encoder. Unlike standard continuous pixel tokens, discrete tokens are invariant under small perturbations and contain less information individually, which encourages ViTs to learn global information that is invariant. Experimental results demonstrate that adding discrete representations to four architecture variants strengthens ViT robustness by up to 12% across seven ImageNet robustness benchmarks while maintaining the performance on ImageNet.
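For intuition, here is a minimal PyTorch sketch of the kind of input-layer change the abstract describes: embedding discrete codebook indices from a vector-quantized encoder and feeding them to the transformer alongside the standard continuous pixel-patch tokens. All names and sizes here (DiscreteTokenViTInput, vq_indices, the 1024-entry codebook, the 14x14 code grid) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class DiscreteTokenViTInput(nn.Module):
    """Hypothetical ViT input layer that mixes continuous pixel-patch
    tokens with discrete tokens from a frozen VQ encoder."""

    def __init__(self, image_size=224, patch_size=16, dim=768,
                 codebook_size=1024, vq_grid=14):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Standard continuous pixel-patch embedding (as in a vanilla ViT).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        # Embedding table for discrete codebook indices.
        self.code_embed = nn.Embedding(codebook_size, dim)
        # One positional embedding covering both token streams.
        self.pos_embed = nn.Parameter(
            torch.zeros(1, num_patches + vq_grid * vq_grid, dim))

    def forward(self, images, vq_indices):
        # images:     (B, 3, H, W) continuous pixels
        # vq_indices: (B, vq_grid, vq_grid) integer codebook ids produced
        #             upstream by a frozen vector-quantized encoder
        pixel_tokens = self.patch_embed(images).flatten(2).transpose(1, 2)
        discrete_tokens = self.code_embed(vq_indices.flatten(1))
        tokens = torch.cat([pixel_tokens, discrete_tokens], dim=1)
        return tokens + self.pos_embed

In this sketch the VQ encoder runs upstream and is not shown; because each codebook id is stable under small pixel perturbations, the discrete branch gives the transformer a perturbation-invariant view of the image, matching the motivation stated in the abstract.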



Related research

05/17/2023 · CageViT: Convolutional Activation Guided Efficient Vision Transformer
Recently, Transformers have emerged as the go-to architecture for both v...

03/03/2022 · Multi-Tailed Vision Transformer for Efficient Inference
Recently, Vision Transformer (ViT) has achieved promising performance in...

08/30/2021 · Hire-MLP: Vision MLP via Hierarchical Rearrangement
This paper presents Hire-MLP, a simple yet competitive vision MLP archit...

06/20/2019 · Improving the robustness of ImageNet classifiers using elements of human visual cognition
We investigate the robustness properties of image recognition models equ...

03/11/2023 · Xformer: Hybrid X-Shaped Transformer for Image Denoising
In this paper, we present a hybrid X-shaped vision Transformer, named Xf...

08/12/2021 · Mobile-Former: Bridging MobileNet and Transformer
We present Mobile-Former, a parallel design of MobileNet and Transformer...

02/11/2023 · Evaluating the Robustness of Discrete Prompts
Discrete prompts have been used for fine-tuning Pre-trained Language Mod...
