Visual Parser: Representing Part-whole Hierarchies with Transformers

07/13/2021
by Shuyang Sun et al.

Human vision captures part-whole hierarchical information from an entire scene. This paper presents the Visual Parser (ViP), which explicitly constructs such a hierarchy with transformers. ViP divides visual representations into two levels, the part level and the whole level, where each part representation combines several independent vectors from the whole. To model the two levels, we first encode information from the whole into part vectors through an attention mechanism, then decode the global information within the part vectors back into the whole representation. By iteratively parsing the two levels with the proposed encoder-decoder interaction, the model gradually refines the features at both levels. Experimental results demonstrate that ViP achieves very competitive performance on three major tasks: classification, detection, and instance segmentation. In particular, it surpasses the previous state-of-the-art CNN backbones by a large margin on object detection. The tiny model of the ViP family, with 7.2× fewer parameters and 10.9× fewer FLOPs, performs comparably with ResNeXt-101-64×4d, the largest model of the ResNe(X)t family. Visualization results also show that the learnt parts are highly informative of the predicted class, making ViP more explainable than previous fundamental architectures. Code is available at https://github.com/kevin-ssy/ViP.
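The encoder-decoder interaction described above can be sketched in a few lines of NumPy. This is a minimal illustration of the iterative part-whole attention loop, not the paper's implementation: the dimensions, the iteration count, and the omission of learned projections, multi-head attention, and normalization are all simplifying assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    # Scaled dot-product attention: each query row gathers a
    # convex combination of the value rows.
    d = queries.shape[-1]
    scores = softmax(queries @ keys.T / np.sqrt(d))
    return scores @ values

rng = np.random.default_rng(0)
d = 16          # feature dimension (illustrative)
n_whole = 64    # "whole"-level vectors, e.g. spatial positions
n_parts = 8     # "part"-level vectors

whole = rng.standard_normal((n_whole, d))
parts = rng.standard_normal((n_parts, d))  # part prototypes (assumed learnable)

for _ in range(3):  # iterative refinement over both levels
    # Encoder: each part aggregates information from the whole.
    parts = parts + attend(parts, whole, whole)
    # Decoder: each whole vector retrieves global context from the parts.
    whole = whole + attend(whole, parts, parts)

print(parts.shape, whole.shape)
```

The residual additions mirror the idea that each pass refines, rather than replaces, the representations at both levels; in the actual ViP each attention step would also involve learned query/key/value projections.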


Related research

02/08/2023 - Cross-Layer Retrospective Retrieving via Layer Attention
More and more evidence has shown that strengthening layer interactions c...

12/23/2022 - A Close Look at Spatial Modeling: From Attention to Convolution
Vision Transformers have shown great promise recently for many vision ta...

10/21/2022 - Boosting vision transformers for image retrieval
Vision transformers have achieved remarkable progress in vision tasks su...

08/15/2023 - ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection
Effective feature fusion of multispectral images plays a crucial role in...

07/09/2021 - Levi Graph AMR Parser using Heterogeneous Attention
Coupled with biaffine decoders, transformers have been effectively adapt...

09/20/2023 - Attentive VQ-VAE
We present a novel approach to enhance the capabilities of VQVAE models ...

06/21/2023 - ViTEraser: Harnessing the Power of Vision Transformers for Scene Text Removal with SegMIM Pretraining
Scene text removal (STR) aims at replacing text strokes in natural scene...
