Exploring Target Representations for Masked Autoencoders

09/08/2022
by Xingbin Liu, et al.

Masked autoencoders have become a popular paradigm for self-supervised visual representation learning: these models randomly mask a portion of the input and reconstruct the masked portion according to target representations. In this paper, we first show that a careful choice of the target representation is unnecessary for learning good representations, since different targets tend to yield similarly behaved models. Driven by this observation, we propose a multi-stage masked distillation pipeline that uses a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any effort to carefully design the target representation. Interestingly, we further explore teachers of larger capacity, obtaining distilled students with remarkable transfer ability. On classification, transfer learning, object detection, and semantic segmentation, the proposed method for masked knowledge distillation with bootstrapped teachers (dBOT) outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, motivate people to rethink the role of target representations in pre-training masked autoencoders.
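
To make the pipeline concrete, below is a minimal PyTorch-style sketch of the bootstrapped masked-distillation loop described in the abstract. It is an illustration under simplifying assumptions, not the paper's implementation: `make_model` is assumed to build a ViT-style encoder returning per-patch features of shape (B, N, D), the student's forward pass is assumed to accept a boolean patch mask, and the Smooth-L1 loss, mask ratio, learning rate, and stage count are placeholder choices.

```python
import copy
import torch
import torch.nn.functional as F

def masked_distillation_stage(student, teacher, loader, optimizer,
                              mask_ratio=0.75, epochs=1):
    # One bootstrapping stage: the frozen teacher produces per-patch target
    # features on the full image; the student sees the image plus a random
    # patch mask and regresses the teacher's features at the masked positions.
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    for _ in range(epochs):
        for images in loader:
            with torch.no_grad():
                targets = teacher(images)            # (B, N, D) patch features
            B, N, _ = targets.shape
            mask = torch.rand(B, N, device=images.device) < mask_ratio
            preds = student(images, mask)            # (B, N, D) predictions
            loss = F.smooth_l1_loss(preds[mask], targets[mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student

def dbot(make_model, loader, num_stages=3):
    # Multi-stage pipeline: start from a randomly initialized teacher and,
    # after each stage, promote the trained student to teacher while
    # re-initializing the student from scratch.
    teacher = make_model()                           # stage-0 teacher: random weights
    for _ in range(num_stages):
        student = make_model()                       # fresh student every stage
        optimizer = torch.optim.AdamW(student.parameters(), lr=1.5e-4)
        student = masked_distillation_stage(student, teacher, loader, optimizer)
        teacher = copy.deepcopy(student)             # bootstrap the teacher
    return teacher
```

The two claims from the abstract show up directly in `dbot`: the first teacher carries random weights rather than a carefully designed target representation, and each stage's distilled student becomes the next stage's teacher.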
