Position Labels for Self-Supervised Vision Transformer

06/10/2022
by   Zhemin Zhang, et al.

Position encoding is important for the vision transformer (ViT) to capture the spatial structure of the input image, and its general efficacy has been demonstrated in ViT. In this work, we propose training ViT to recognize the 2D position encoding of the patches of the input image; this apparently simple task actually yields a meaningful self-supervisory signal. Building on previous work on ViT position encoding, we propose two position labels dedicated to 2D images: absolute position and relative position. Our position labels can be easily plugged into a transformer and combined with the various current ViT variants. They can work in two ways: (1) as an auxiliary training target for vanilla ViT (e.g., ViT-B and Swin-B) to improve model performance, or (2) combined with self-supervised ViT (e.g., MAE) to provide a more powerful self-supervised signal for semantic feature learning. Experiments demonstrate that, solely due to the proposed self-supervised methods, Swin-B and ViT-B obtained accuracy improvements of 1.9% and 5.6%, respectively.
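The core idea, supervising each patch token with its own grid index, is simple enough to sketch. Below is a minimal PyTorch illustration of an absolute-position auxiliary head, assuming patch tokens from any ViT backbone. The class and variable names are hypothetical, not the authors' released code, and details such as preventing the learned position embeddings from making the task trivial (e.g., the paper's exact patch handling) are left to the full text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AbsolutePositionHead(nn.Module):
    """Auxiliary head that predicts each patch's absolute position on the 2D grid.

    A hedged sketch of the idea described in the abstract; names are our own.
    """
    def __init__(self, embed_dim: int, grid_size: int):
        super().__init__()
        # One class per cell of the patch grid, e.g. 14 * 14 = 196 positions
        # for a 224-pixel image split into 16-pixel patches.
        self.num_positions = grid_size * grid_size
        self.classifier = nn.Linear(embed_dim, self.num_positions)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, embed_dim), CLS token excluded.
        logits = self.classifier(patch_tokens)  # (B, N, num_positions)
        # The label of patch i is simply its raster-order index i, so the
        # position labels come for free, with no manual annotation.
        targets = torch.arange(logits.size(1), device=logits.device)
        targets = targets.unsqueeze(0).expand(logits.size(0), -1)  # (B, N)
        return F.cross_entropy(
            logits.reshape(-1, self.num_positions), targets.reshape(-1)
        )

# Usage sketch for the auxiliary-target setting (weighting is illustrative):
# tokens = vit_backbone(images)               # (B, 196, 768), hypothetical call
# pos_loss = AbsolutePositionHead(768, 14)(tokens)
# loss = cls_loss + 0.1 * pos_loss
```

In the first mode, this position loss would simply be added to the usual classification objective with a small weighting coefficient; in the second mode, it would serve alongside a self-supervised objective such as MAE's reconstruction loss.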


