Visual Representation Learning with Self-Supervised Attention for Low-Label High-data Regime

01/22/2022
by Prarthana Bhattacharyya, et al.

Self-supervision has shown outstanding results for natural language processing, and more recently, for image recognition. Simultaneously, vision transformers and their variants have emerged as a promising and scalable alternative to convolutions for various computer vision tasks. In this paper, we are the first to question whether self-supervised vision transformers (SSL-ViTs) can be adapted to two important computer vision tasks in the low-label, high-data regime: few-shot image classification and zero-shot image retrieval. The motivation is to reduce the number of manual annotations required to train a visual embedder, and to produce generalizable, semantically meaningful and robust embeddings. For few-shot image classification, we train SSL-ViTs without any supervision on external data, and use this trained embedder to adapt quickly to novel classes with a limited number of labels. For zero-shot image retrieval, we use SSL-ViTs pre-trained on a large dataset without any labels and fine-tune them with several metric learning objectives. Our self-supervised attention representations outperform the state-of-the-art on several public benchmarks for both tasks: on miniImageNet and CUB200 for few-shot image classification by up to 6%, and on CUB200 for zero-shot image retrieval by up to 4%. Code is available at <https://github.com/AutoVision-cloud/SSL-ViT-lowlabel-highdata>.
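To make the few-shot setup concrete, the sketch below shows nearest-prototype classification over frozen embeddings: support examples from novel classes are embedded, averaged into per-class prototypes, and queries are assigned to the closest prototype by cosine similarity. This is a minimal illustration of adapting a frozen embedder with few labels, not the paper's implementation; the `embed` function here is a stand-in (a fixed linear projection), whereas the paper uses a self-supervised vision transformer.

```python
import numpy as np


def embed(images):
    """Stand-in for a frozen embedder: a fixed linear projection.

    In the paper's setting this would be an SSL-ViT pre-trained without
    labels on external data; here we only need a deterministic map to
    L2-normalized embedding vectors.
    """
    d_in = images.shape[1]
    W = np.linspace(-1.0, 1.0, d_in * 16).reshape(d_in, 16)
    z = images @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)


def few_shot_classify(support_x, support_y, query_x):
    """Nearest-prototype few-shot classification.

    Average the support embeddings of each novel class into a prototype,
    then assign every query to the class whose prototype has the highest
    cosine similarity. No gradient updates are needed, so adaptation to
    novel classes is fast even with very few labels.
    """
    z_s, z_q = embed(support_x), embed(query_x)
    classes = np.unique(support_y)
    protos = np.stack([z_s[support_y == c].mean(axis=0) for c in classes])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    return classes[np.argmax(z_q @ protos.T, axis=1)]
```

With a stronger embedder, the same procedure applies unchanged: only `embed` is swapped for the pre-trained network's forward pass.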

Related research

- 06/07/2022: Masked Unsupervised Self-training for Zero-shot Image Classification
  State-of-the-art computer vision models are mostly trained with supervis...

- 02/10/2021: Training Vision Transformers for Image Retrieval
  Transformers have shown outstanding results for natural language underst...

- 08/22/2023: Masked Momentum Contrastive Learning for Zero-shot Semantic Understanding
  Self-supervised pretraining (SSP) has emerged as a popular technique in ...

- 01/31/2021: Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers
  Recently multimodal transformer models have gained popularity because th...

- 11/06/2018: Semantic bottleneck for computer vision tasks
  This paper introduces a novel method for the representation of images th...

- 06/13/2023: GeneCIS: A Benchmark for General Conditional Image Similarity
  We argue that there are many notions of 'similarity' and that models, li...

- 12/17/2021: SiamTrans: Zero-Shot Multi-Frame Image Restoration with Pre-Trained Siamese Transformers
  We propose a novel zero-shot multi-frame image restoration method for re...
