Semi-Supervised Vision Transformers

11/22/2021
by Zejia Weng, et al.

We study the training of Vision Transformers for semi-supervised image classification. Transformers have recently demonstrated impressive performance on a multitude of supervised learning tasks. Surprisingly, we find Vision Transformers perform poorly in a semi-supervised ImageNet setting, whereas Convolutional Neural Networks (CNNs) achieve superior results in the small labeled-data regime. Further investigation reveals that the reason is that CNNs have a strong spatial inductive bias. Inspired by this observation, we introduce a joint semi-supervised learning framework, Semiformer, which contains a Transformer branch, a Convolutional branch, and a carefully designed fusion module for knowledge sharing between the branches. The Convolutional branch is trained on the limited supervised data and generates pseudo labels to supervise the training of the Transformer branch on unlabeled data. Extensive experiments on ImageNet demonstrate that Semiformer achieves 75.5% top-1 accuracy, outperforming the state of the art. In addition, we show that Semiformer is a general framework compatible with most modern Transformer and Convolutional neural architectures.
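The pseudo-labeling scheme described in the abstract can be sketched schematically: the Convolutional branch is supervised on labeled data and produces confident pseudo labels that supervise the Transformer branch on unlabeled data. The sketch below is a minimal, framework-free illustration of that training step; the branch functions, the confidence threshold, and the loss bookkeeping are hypothetical stand-ins, not the paper's actual implementation (which also includes a fusion module between the branches).

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label):
    # Negative log-likelihood of the target class.
    return -math.log(max(probs[label], 1e-12))

def semiformer_step(conv_branch, vit_branch, labeled, unlabeled, threshold=0.7):
    """One schematic Semiformer-style step (illustrative only):
    the conv branch gets a supervised loss on labeled samples and
    pseudo-labels confident unlabeled samples for the ViT branch."""
    # Supervised loss for the convolutional branch.
    loss_sup = sum(
        cross_entropy(softmax(conv_branch(x)), y) for x, y in labeled
    )

    # Pseudo-label loss for the transformer branch.
    loss_unsup, kept = 0.0, 0
    for x in unlabeled:
        probs = softmax(conv_branch(x))
        conf = max(probs)
        if conf >= threshold:  # keep only confident pseudo labels
            pseudo = probs.index(conf)
            loss_unsup += cross_entropy(softmax(vit_branch(x)), pseudo)
            kept += 1
    return loss_sup, loss_unsup, kept
```

In a real implementation both branches would be deep networks trained jointly by backpropagating these two losses; here they can be any callables mapping an input to class logits, which keeps the control flow of the pseudo-labeling loop visible.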

Related research

09/15/2022
On the Surprising Effectiveness of Transformers in Low-Labeled Video Recognition
Recently vision transformers have been shown to be competitive with conv...

01/04/2023
Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers
Vision Transformer (ViT) suffers from data scarcity in semi-supervised l...

06/01/2022
A comparative study between vision transformers and CNNs in digital pathology
Recently, vision transformers were shown to be capable of outperforming ...

11/12/2021
Convolutional Nets Versus Vision Transformers for Diabetic Foot Ulcer Classification
This paper compares well-established Convolutional Neural Networks (CNNs...

08/11/2022
Semi-supervised Vision Transformers at Scale
We study semi-supervised learning (SSL) for vision transformers (ViT), a...

12/13/2022
The Hateful Memes Challenge Next Move
State-of-the-art image and text classification models, such as Convoluti...

12/12/2021
Improving Vision Transformers for Incremental Learning
This paper studies using Vision Transformers (ViT) in class incremental ...
