Improved Robustness of Vision Transformer via PreLayerNorm in Patch Embedding

11/16/2021
by Bum Jun Kim, et al.

Vision transformers (ViTs) have recently demonstrated state-of-the-art performance in a variety of vision tasks, replacing convolutional neural networks (CNNs). However, because ViT has a different architecture than a CNN, it may behave differently. To investigate the reliability of ViT, this paper studies its behavior and robustness. We compared the robustness of CNN and ViT under various image corruptions that may appear in practical vision tasks. We confirmed that for most image transformations, ViT showed robustness comparable to or better than CNN. However, for contrast enhancement, severe performance degradation was consistently observed in ViT. From a detailed analysis, we identified a potential problem: positional embedding in ViT's patch embedding can work improperly when the color scale changes. We therefore propose PreLayerNorm, a modified patch embedding structure that ensures scale-invariant behavior of ViT. ViT with PreLayerNorm showed improved robustness under various corruptions, including contrast-varying environments.
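The idea behind PreLayerNorm, as described in the abstract, can be sketched in a few lines: normalize each flattened patch vector *before* the linear projection, so that a global change in color scale cancels out and the positional embedding retains a consistent relative magnitude. The sketch below is an illustrative reconstruction, not the authors' implementation; the function names and shapes are assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each patch vector to zero mean and unit variance
    # along its feature dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def patch_embed_pre_ln(image, patch, W, pos):
    # image: (H, W_img, C). Split into non-overlapping patches and flatten.
    H, W_img, C = image.shape
    patches = (image.reshape(H // patch, patch, W_img // patch, patch, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, patch * patch * C))
    # PreLayerNorm: normalizing before the projection makes the tokens
    # (approximately) invariant to a contrast change x -> a*x + b (a > 0),
    # so the added positional embedding `pos` is not drowned out or
    # over-weighted when the color scale shifts.
    return layer_norm(patches) @ W + pos

# Illustration of the scale-invariance property:
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
W = rng.standard_normal((4 * 4 * 3, 16)) * 0.02   # projection weights (hypothetical sizes)
pos = rng.standard_normal((4, 16)) * 0.02          # positional embedding, one row per patch
t_plain = patch_embed_pre_ln(img, 4, W, pos)
t_contrast = patch_embed_pre_ln(2.0 * img + 0.1, 4, W, pos)  # contrast-enhanced input
print(np.allclose(t_plain, t_contrast, atol=1e-4))
```

Without the pre-normalization, scaling the image scales the patch projection but not the positional embedding, which changes their relative contribution; normalizing first removes that dependence.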

Related research

12/21/2021 | MPViT: Multi-Path Vision Transformer for Dense Prediction
Dense computer vision tasks such as object detection and segmentation re...

04/26/2022 | Deeper Insights into ViTs Robustness towards Common Corruptions
Recent literature has shown design strategies from Convolutional Neural ...

03/26/2023 | Sector Patch Embedding: An Embedding Module Conforming to the Distortion Pattern of Fisheye Image
Fisheye cameras suffer from image distortion while having a large field ...

06/13/2023 | Reviving Shift Equivariance in Vision Transformers
Shift equivariance is a fundamental principle that governs how we percei...

05/08/2023 | Understanding Gaussian Attention Bias of Vision Transformers Using Effective Receptive Fields
Vision transformers (ViTs) that model an image as a sequence of partitio...

07/02/2023 | X-MLP: A Patch Embedding-Free MLP Architecture for Vision
Convolutional neural networks (CNNs) and vision transformers (ViTs) have ...

10/15/2021 | Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation
We investigate the robustness of vision transformers (ViTs) through the ...
