Self-Supervised Learning with Swin Transformers

05/10/2021
by Zhenda Xie, et al.

We are witnessing a modeling shift from CNNs to Transformers in computer vision. In this work, we present a self-supervised learning approach called MoBY, with Vision Transformers as its backbone architecture. The approach introduces essentially no new inventions: it combines MoCo v2 and BYOL, tuned to achieve reasonably high accuracy on ImageNet-1K linear evaluation (72.8% top-1 accuracy with 300-epoch training). The performance is slightly better than recent works such as MoCo v3 and DINO, which adopt DeiT as the backbone, but with much lighter tricks. More importantly, the general-purpose Swin Transformer backbone enables us to also evaluate the learned representations on downstream tasks such as object detection and semantic segmentation, in contrast to a few recent approaches built on ViT/DeiT that report only linear evaluation results on ImageNet-1K, because ViT/DeiT have not been tamed for these dense prediction tasks. We hope our results can facilitate more comprehensive evaluation of self-supervised learning methods designed for Transformer architectures. Our code and models are available at https://github.com/SwinTransformer/Transformer-SSL and will be continually enriched.
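
The abstract frames MoBY as a straightforward combination of two existing methods: BYOL contributes the momentum-updated target branch and the asymmetric online predictor head, while MoCo v2 contributes the contrastive (InfoNCE) loss computed against a queue of past keys. The sketch below shows how such a combination fits together; the class name, MLP sizes, queue handling, and hyperparameter values are illustrative assumptions, not the authors' implementation, which lives in the linked repository.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoBYSketch(nn.Module):
    """Minimal sketch of a MoBY-style SSL wrapper (hypothetical names/sizes).

    Combines MoCo v2's key queue and contrastive loss with BYOL's
    asymmetric predictor head and momentum-updated target branch.
    """

    def __init__(self, backbone, feat_dim=768, proj_dim=256,
                 queue_len=4096, momentum=0.99, temperature=0.2):
        super().__init__()
        self.momentum = momentum
        self.temperature = temperature

        # Online branch: backbone -> projector; the extra predictor head
        # makes the two branches asymmetric, as in BYOL.
        # `backbone` is assumed to map images to (N, feat_dim) features.
        self.online_encoder = nn.Sequential(backbone, self._mlp(feat_dim, proj_dim))
        self.predictor = self._mlp(proj_dim, proj_dim)

        # Target branch: momentum copy of the online encoder, no gradients.
        self.target_encoder = copy.deepcopy(self.online_encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False

        # MoCo-style queue of past target keys, stored L2-normalized.
        self.register_buffer(
            "queue", F.normalize(torch.randn(queue_len, proj_dim), dim=1))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @staticmethod
    def _mlp(in_dim, out_dim, hidden=2048):
        return nn.Sequential(nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden),
                             nn.ReLU(inplace=True), nn.Linear(hidden, out_dim))

    @torch.no_grad()
    def _update_target(self):
        # EMA update of the target branch (the momentum encoder).
        for po, pt in zip(self.online_encoder.parameters(),
                          self.target_encoder.parameters()):
            pt.data = pt.data * self.momentum + po.data * (1.0 - self.momentum)

    @torch.no_grad()
    def _enqueue(self, keys):
        # Assumes queue_len is divisible by the number of enqueued keys.
        ptr = int(self.queue_ptr)
        n = keys.shape[0]
        self.queue[ptr:ptr + n] = keys
        self.queue_ptr[0] = (ptr + n) % self.queue.shape[0]

    def contrastive_loss(self, q, k):
        # InfoNCE: the matching target key is the positive (index 0),
        # everything in the queue is a negative.
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
        l_neg = torch.einsum("nc,kc->nk", q, self.queue.clone())
        logits = torch.cat([l_pos, l_neg], dim=1) / self.temperature
        labels = torch.zeros(logits.shape[0], dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)

    def forward(self, view1, view2):
        # Online projections pass through the predictor (asymmetric head).
        q1 = self.predictor(self.online_encoder(view1))
        q2 = self.predictor(self.online_encoder(view2))
        with torch.no_grad():
            self._update_target()
            k1 = F.normalize(self.target_encoder(view1), dim=1)
            k2 = F.normalize(self.target_encoder(view2), dim=1)
        # Symmetrized loss across the two views, then enqueue the new keys.
        loss = self.contrastive_loss(q1, k2) + self.contrastive_loss(q2, k1)
        self._enqueue(torch.cat([k1, k2], dim=0))
        return loss
```

In a training loop, one would feed two augmentations of the same batch, e.g. `loss = model(view1, view2); loss.backward()`, passing a Swin Transformer (or any encoder producing `feat_dim`-dimensional features) as `backbone`. The real implementation differs in many details this sketch omits, such as learning-rate and momentum schedules and distributed queue updates.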

Related research

04/08/2021  SiT: Self-supervised vIsion Transformer
Self-supervised learning methods are gaining increasing traction in comp...

06/09/2023  FLSL: Feature-level Self-supervised Learning
Current self-supervised learning (SSL) methods (e.g., SimCLR, DINO, VICR...

06/10/2022  Exploring Feature Self-relation for Self-supervised Transformer
Learning representations with self-supervision for convolutional network...

09/13/2023  Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?
Convolutional networks and vision transformers have different forms of p...

08/30/2023  Emergence of Segmentation with Minimalistic White-Box Transformers
Transformer-like models for vision tasks have recently proven effective ...

06/02/2021  Container: Context Aggregation Network
Convolutional neural networks (CNNs) are ubiquitous in computer vision, ...

02/01/2023  A Survey of Deep Learning: From Activations to Transformers
Deep learning has made tremendous progress in the last decade. A key suc...
