Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition

06/23/2021
by Qibin Hou, et al.

In this paper, we present Vision Permutator, a conceptually simple and data-efficient MLP-like architecture for visual recognition. Recognizing the importance of the positional information carried by 2D feature representations, and unlike recent MLP-like models that encode spatial information along the flattened spatial dimensions, Vision Permutator encodes the feature representations separately along the height and width dimensions with linear projections. This allows Vision Permutator to capture long-range dependencies along one spatial direction while preserving precise positional information along the other. The resulting position-sensitive outputs are then aggregated in a mutually complementary manner to form expressive representations of the objects of interest. We show that our Vision Permutators are formidable competitors to convolutional neural networks (CNNs) and vision transformers. Without relying on spatial convolutions or attention mechanisms, Vision Permutator achieves 81.5% top-1 accuracy on ImageNet without extra large-scale training data (e.g., ImageNet-22k), using only 25M learnable parameters, which is much better than most CNNs and vision transformers under the same model-size constraint. When scaled up to 88M parameters, it attains 83.2% top-1 accuracy. We hope this work encourages research on rethinking how spatial information is encoded and facilitates the development of MLP-like models. Code is available at https://github.com/Andrew-Qibin/VisionPermutator.
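To make the separate height/width encoding concrete, below is a minimal PyTorch sketch of the core idea: tokens are mixed along the height, width, and channel dimensions independently with plain linear layers, and the three branches are fused. The class name SimplePermuteMLP and the additive fusion are illustrative assumptions; the released model additionally splits channels into segments before permuting and fuses the branches with a learned weighted aggregation.

import torch
import torch.nn as nn

class SimplePermuteMLP(nn.Module):
    # Minimal sketch of the Permute-MLP idea: mix tokens separately
    # along the height, width, and channel axes with linear layers.
    # Simplifications vs. the released model: no channel-segment
    # splitting before permuting, and plain additive branch fusion.
    def __init__(self, dim, height, width):
        super().__init__()
        self.mlp_h = nn.Linear(height, height)  # mixes along H
        self.mlp_w = nn.Linear(width, width)    # mixes along W
        self.mlp_c = nn.Linear(dim, dim)        # mixes along C
        self.proj = nn.Linear(dim, dim)         # fuses the branches

    def forward(self, x):
        # x: (B, H, W, C); nn.Linear acts on the last dimension,
        # so each branch permutes its target axis into last place.
        h = self.mlp_h(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)  # H-mixing
        w = self.mlp_w(x.permute(0, 1, 3, 2)).permute(0, 1, 3, 2)  # W-mixing
        c = self.mlp_c(x)                                          # C-mixing
        return self.proj(h + w + c)

x = torch.randn(2, 14, 14, 384)                 # (B, H, W, C)
block = SimplePermuteMLP(dim=384, height=14, width=14)
print(block(x).shape)                           # torch.Size([2, 14, 14, 384])

Because each branch mixes along a single axis only, the height branch attends to every row position while leaving column positions untouched (and vice versa), which is what lets the model capture long-range dependencies in one direction while retaining precise positional information in the other.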

Related Research

06/24/2021 · VOLO: Vision Outlooker for Visual Recognition
Visual recognition has been dominated by convolutional neural networks (...

03/04/2021 · Coordinate Attention for Efficient Mobile Network Design
Recent studies on mobile network design have demonstrated the remarkable...

07/21/2022 · SplitMixer: Fat Trimmed From MLP-like Models
We present SplitMixer, a simple and lightweight isotropic MLP-like archi...

11/10/2022 · InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions
Compared to the great progress of large-scale vision transformers (ViTs)...

02/14/2022 · How Do Vision Transformers Work?
The success of multi-head self-attentions (MSAs) for computer vision is ...

12/13/2022 · What do Vision Transformers Learn? A Visual Exploration
Vision transformers (ViTs) are quickly becoming the de-facto architectur...

07/20/2022 · On the Versatile Uses of Partial Distance Correlation in Deep Learning
Comparing the functional behavior of neural network models, whether it i...
