Vision Transformers: State of the Art and Research Challenges

07/07/2022
by Bo-Kai Ruan, et al.

Transformers have achieved great success in natural language processing. Owing to the powerful capability of the self-attention mechanism in transformers, researchers have developed vision transformers for a variety of computer vision tasks, such as image recognition, object detection, image segmentation, pose estimation, and 3D reconstruction. This paper presents a comprehensive overview of the literature on different architecture designs and training tricks (including self-supervised learning) for vision transformers. Our goal is to provide a systematic review along with open research opportunities.
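At the core of every vision transformer is the self-attention mechanism the abstract refers to: an image is split into patches, each patch is embedded as a vector, and attention mixes information across patches via learned query/key/value projections. As a minimal illustrative sketch (not from the paper; the weight matrices here are random stand-ins for learned parameters), single-head scaled dot-product self-attention over patch embeddings can be written as:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    x: (num_patches, dim) patch embeddings.
    Wq, Wk, Wv: (dim, dim) query/key/value projection matrices.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # pairwise patch similarities, scaled
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # each output patch is a weighted mix of all patches

# Toy example: an image split into 4 patches, each embedded in 8 dimensions
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated embedding per patch
```

Because every patch attends to every other patch, attention cost grows quadratically with the number of patches, which is one motivation for the efficient-attention designs that surveys in this area cover.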

