Hire-MLP: Vision MLP via Hierarchical Rearrangement

08/30/2021
by Jianyuan Guo, et al.

This paper presents Hire-MLP, a simple yet competitive vision MLP architecture built on hierarchical rearrangement. Previous vision MLPs such as MLP-Mixer are inflexible with respect to input image size and inefficient at capturing spatial information because they flatten the tokens. Hire-MLP improves on existing MLP-based models with the idea of hierarchical rearrangement, which aggregates local and global spatial information while remaining versatile for downstream tasks. Specifically, an inner-region rearrangement is designed to capture local information inside a spatial region. Moreover, to enable communication between different regions and capture global context, a cross-region rearrangement circularly shifts all tokens along the spatial directions. The proposed Hire-MLP architecture is built from simple channel-mixing MLPs and rearrangement operations, and thus enjoys high flexibility and fast inference. Experiments show that Hire-MLP achieves state-of-the-art performance on the ImageNet-1K benchmark. In particular, Hire-MLP reaches 83.4% top-1 accuracy on ImageNet, surpassing previous Transformer-based and MLP-based models with a better trade-off between accuracy and throughput.
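The two rearrangements described above can be illustrated with a few array operations. The sketch below (a simplified NumPy illustration, not the authors' implementation; function names and the height-only treatment are assumptions) folds groups of rows into the channel dimension so a channel-mixing MLP would blend tokens inside each region, and uses a circular shift to let a subsequent inner-region mix see tokens from neighbouring regions:

```python
import numpy as np

def inner_region_rearrange(x, region):
    # x: (H, W, C) feature map. Fold every `region` consecutive rows into
    # the channel dimension, so a plain channel-mixing MLP applied to the
    # result mixes tokens within each spatial region (height direction
    # shown; the width direction is symmetric).
    H, W, C = x.shape
    assert H % region == 0
    return (x.reshape(H // region, region, W, C)
             .transpose(0, 2, 1, 3)            # (groups, W, region, C)
             .reshape(H // region, W, region * C))

def inner_region_restore(y, region):
    # Inverse operation: unfold the channels back into spatial rows.
    G, W, RC = y.shape
    C = RC // region
    return (y.reshape(G, W, region, C)
             .transpose(0, 2, 1, 3)            # (groups, region, W, C)
             .reshape(G * region, W, C))

def cross_region_shift(x, step):
    # Cross-region rearrangement: circularly shift all tokens along the
    # height axis, so the next inner-region mix spans region boundaries.
    return np.roll(x, step, axis=0)

x = np.arange(4 * 2 * 3).reshape(4, 2, 3).astype(float)
y = inner_region_rearrange(x, region=2)        # shape (2, 2, 6)
restored = inner_region_restore(y, region=2)   # recovers x exactly
shifted = cross_region_shift(x, step=1)        # last row wraps to the front
```

Because both rearrangements are pure permutations of the tokens, they add no parameters and are exactly invertible, which is what keeps the architecture's inference fast.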
