What to Hide from Your Students: Attention-Guided Masked Image Modeling

03/23/2022
by Ioannis Kakogeorgiou et al.

Transformers and masked language modeling are quickly being adopted and explored in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking is fundamentally different from token masking in text, due to the amount and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student encoder. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness over random masking for dense distillation-based MIM as well as plain distillation-based self-supervised learning on classification tokens. We confirm that AttMask accelerates the learning process and improves the performance on a variety of downstream tasks.
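To make the core idea concrete, here is a minimal sketch of attention-guided masking in PyTorch. It follows the abstract's description: the teacher encoder's attention map decides which patch tokens to hide from the student. The function name `attmask`, the use of the last layer's [CLS]-to-patch attention averaged over heads, and the top-k selection are illustrative assumptions, not the paper's reference implementation.

```python
# Hypothetical sketch of attention-guided masking (AttMask).
# Assumes a ViT-style teacher whose last self-attention layer exposes
# per-head attention weights of shape (B, heads, 1 + N, 1 + N),
# where token 0 is [CLS] and N is the number of patch tokens.
import torch

def attmask(attn: torch.Tensor, mask_ratio: float = 0.5) -> torch.Tensor:
    """Select the patch tokens the teacher attends to most.

    Returns a boolean mask of shape (B, N); True = token is masked.
    """
    # Attention of the [CLS] query to every patch token, averaged over heads.
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)            # (B, N)
    num_mask = int(mask_ratio * cls_attn.shape[1])
    # Indices of the most highly attended patches.
    top = cls_attn.topk(num_mask, dim=1).indices        # (B, num_mask)
    mask = torch.zeros_like(cls_attn, dtype=torch.bool)
    mask.scatter_(1, top, True)
    return mask

# Usage: the resulting mask marks the patches to hide from the student,
# e.g. by replacing their embeddings with a learnable [MASK] token
# before the student forward pass, as in masked image modeling.
B, heads, N = 4, 6, 196
attn = torch.softmax(torch.randn(B, heads, 1 + N, 1 + N), dim=-1)
mask = attmask(attn, mask_ratio=0.5)                    # (4, 196) boolean
```

Masking the most-attended tokens (rather than a random subset) removes the most informative evidence from the student's view, which is what makes the pretext task harder than random masking.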



