VOLO: Vision Outlooker for Visual Recognition

06/24/2021
by   Li Yuan, et al.

Visual recognition has been dominated by convolutional neural networks (CNNs) for years. Though the recently prevailing vision transformers (ViTs) have shown the great potential of self-attention based models in ImageNet classification, their performance is still inferior to that of the latest SOTA CNNs if no extra data are provided. In this work, we try to close the performance gap and demonstrate that attention-based models are indeed able to outperform CNNs. We find that a major factor limiting the performance of ViTs for ImageNet classification is their low efficacy in encoding fine-level features into the token representations. To resolve this, we introduce a novel outlook attention and present a simple and general architecture, termed Vision Outlooker (VOLO). Unlike self-attention, which focuses on global dependency modeling at a coarse level, the outlook attention efficiently encodes finer-level features and contexts into tokens, which is shown to be critically beneficial to recognition performance but largely ignored by self-attention. Experiments show that our VOLO achieves 87.1% top-1 accuracy on ImageNet-1K classification, the first model exceeding 87% accuracy on this competitive benchmark without using any extra training data. In addition, the pre-trained VOLO transfers well to downstream tasks, such as semantic segmentation. We achieve 84.3% mIoU on the Cityscapes validation set and 54.3% mIoU on the ADE20K validation set. Code is available at <https://github.com/sail-sg/volo>.
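The core idea of outlook attention is that each token generates its local attention weights directly with a linear projection (no query-key dot products), then applies them to the value vectors in its K x K neighborhood, with overlapping results folded back and averaged. The sketch below is a minimal single-head NumPy illustration under that reading of the abstract; the names `outlook_attention`, `Wv`, and `Wa` are illustrative, and the official VOLO implementation is multi-head and uses efficient unfold/fold operations instead of explicit loops.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def outlook_attention(x, Wv, Wa, K=3):
    """Single-head outlook attention sketch (stride 1, zero padding).

    x  : (H, W, C) token map
    Wv : (C, C) value projection
    Wa : (C, K**4) projects each token to a K^2 x K^2 attention matrix
    """
    H, W, C = x.shape
    pad = K // 2
    v = x @ Wv                                         # value vectors
    vp = np.pad(v, ((pad, pad), (pad, pad), (0, 0)))   # zero-pad borders
    out = np.zeros_like(v)
    cnt = np.zeros((H, W, 1))                          # fold normalization
    for i in range(H):
        for j in range(W):
            # attention weights come straight from the center token
            a = softmax((x[i, j] @ Wa).reshape(K * K, K * K), axis=-1)
            win = vp[i:i + K, j:j + K].reshape(K * K, C)  # local K x K values
            res = a @ win                                 # weighted values, (K^2, C)
            # fold: scatter the K^2 results back onto the neighborhood
            for idx in range(K * K):
                di, dj = divmod(idx, K)
                oi, oj = i + di - pad, j + dj - pad
                if 0 <= oi < H and 0 <= oj < W:
                    out[oi, oj] += res[idx]
                    cnt[oi, oj] += 1
    return out / np.maximum(cnt, 1)
```

Note the contrast with self-attention: the weight-generation cost is linear in the number of tokens, and the attention stays strictly local, which is why it can operate on fine-grained (high-resolution) token maps where global self-attention would be too expensive.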


Related research

- UniFormer: Unifying Convolution and Self-attention for Visual Recognition (01/24/2022)
  It is a challenging task to learn discriminative representation from ima...
- Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition (06/23/2021)
  In this paper, we present Vision Permutator, a conceptually simple and d...
- A Close Look at Spatial Modeling: From Attention to Convolution (12/23/2022)
  Vision Transformers have shown great promise recently for many vision ta...
- Lite Vision Transformer with Enhanced Self-Attention (12/20/2021)
  Despite the impressive representation capacity of vision transformer mod...
- The Birds Need Attention Too: Analysing usage of Self Attention in identifying bird calls in soundscapes (11/14/2022)
  Birds are vital parts of ecosystems across the world and are an excellen...
- Causal Attention for Vision-Language Tasks (03/05/2021)
  We present a novel attention mechanism: Causal Attention (CATT), to remo...
- Global Filter Networks for Image Classification (07/01/2021)
  Recent advances in self-attention and pure multi-layer perceptrons (MLP)...
