An Empirical Study of Training Self-Supervised Vision Transformers

04/05/2021
by Xinlei Chen, et al.

This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). While the training recipes for standard convolutional networks are by now highly mature and robust, the recipes for ViT have yet to be built, especially in the self-supervised scenario, where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components of training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and that it can be hidden by apparently good results. We reveal that these results are in fact partial failures, and that they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations on various aspects. We discuss the currently positive evidence as well as the remaining challenges and open questions. We hope that this work will provide useful data points and experience for future research.
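
As a concrete reference for the framework benchmarked above, the sketch below shows a MoCo v3-style training step in PyTorch: a symmetrized contrastive (InfoNCE) loss between a query encoder and a momentum-updated key encoder. The encoder definitions, batch shapes, and hyperparameter values (tau = 0.2, m = 0.99) are illustrative assumptions, not the authors' exact recipe; in particular, the paper's query encoder carries an extra prediction head, omitted here for brevity.

    # Hedged sketch of a MoCo v3-style step; the names (f_q, f_k, tau, m)
    # follow the paper's pseudocode, but the modules below are placeholders.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def contrastive_loss(q, k, tau=0.2):
        # InfoNCE over the batch: the positive for each query is the
        # same-index key; all other keys in the batch act as negatives.
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        logits = q @ k.t() / tau                       # [N, N] similarities
        labels = torch.arange(q.size(0), device=q.device)
        return F.cross_entropy(logits, labels) * (2 * tau)

    def moco_v3_step(f_q, f_k, x1, x2, m=0.99, tau=0.2):
        # Queries come from the gradient-carrying encoder, keys from the
        # momentum encoder (no gradients); the loss is symmetrized.
        q1, q2 = f_q(x1), f_q(x2)
        with torch.no_grad():
            k1, k2 = f_k(x1), f_k(x2)
        loss = contrastive_loss(q1, k2, tau) + contrastive_loss(q2, k1, tau)
        loss.backward()
        # Momentum (EMA) update of the key encoder parameters.
        with torch.no_grad():
            for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
                p_k.mul_(m).add_(p_q, alpha=1 - m)
        return loss

    if __name__ == "__main__":
        # Toy encoders standing in for the ViT backbone + projection MLP.
        f_q = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
        f_k = copy.deepcopy(f_q)
        for p in f_k.parameters():
            p.requires_grad_(False)
        x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
        print(moco_v3_step(f_q, f_k, x1, x2).item())

One stability-related detail reported in the full paper (not in the abstract above): simply freezing the randomly initialized patch projection layer of the ViT noticeably reduces the training instability discussed here.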


Related research:

04/29/2021 - Emerging Properties in Self-Supervised Vision Transformers
  In this paper, we question if self-supervised learning provides new prop...

04/08/2016 - Free-Space Detection with Self-Supervised and Online Trained Fully Convolutional Networks
  Recently, vision-based Advanced Driver Assist Systems have gained broad ...

01/31/2023 - Real Estate Property Valuation using Self-Supervised Vision Transformers
  The use of Artificial Intelligence (AI) in the real estate market has be...

04/24/2023 - A Cookbook of Self-Supervised Learning
  Self-supervised learning, dubbed the dark matter of intelligence, is a p...

10/13/2022 - Demystifying Self-supervised Trojan Attacks
  As an emerging machine learning paradigm, self-supervised learning (SSL)...

06/14/2021 - Partial success in closing the gap between human and machine vision
  A few years ago, the first CNN surpassed human performance on ImageNet. ...

09/04/2023 - Leveraging Self-Supervised Vision Transformers for Neural Transfer Function Design
  In volume rendering, transfer functions are used to classify structures ...
