Learning Representations by Predicting Bags of Visual Words

02/27/2020
by Spyros Gidaris, et al.

Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data. Inspired by the success of NLP methods in this area, we propose a self-supervised approach based on spatially dense image descriptions that encode discrete visual concepts, here called visual words. To build such discrete representations, we quantize the feature maps of a first, pre-trained self-supervised convnet over a k-means-based vocabulary. Then, as a self-supervised task, we train a second convnet to predict the histogram of visual words of an image (i.e., its Bag-of-Words representation) given a perturbed version of that image as input. This task forces the convnet to learn perturbation-invariant and context-aware image features that are useful for downstream image-understanding tasks. We evaluate our method extensively and demonstrate very strong empirical results: compared to supervised pre-training, our self-supervised representations transfer better on detection tasks and comparably on classification over classes "unseen" during pre-training. These results show that discretizing images into visual words can provide the basis for very powerful self-supervised approaches in the image domain, allowing further connections to related methods from the NLP domain that have been extremely successful so far.
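The construction of the Bag-of-Words training target can be sketched as follows. This is a minimal illustration, not the authors' code: `bow_targets`, the toy feature-map shapes, and the random vocabulary are all hypothetical, and it assumes the k-means vocabulary has already been learned from the first convnet's feature maps.

```python
import numpy as np

def bow_targets(feature_map, vocabulary):
    """Quantize a dense feature map (H, W, C) over a visual-word
    vocabulary (K, C) and return its normalized BoW histogram."""
    H, W, C = feature_map.shape
    feats = feature_map.reshape(-1, C)                     # (H*W, C)
    # Assign each spatial feature to its nearest vocabulary centroid
    # (hard quantization, as with a k-means codebook).
    dists = ((feats[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = dists.argmin(axis=1)                           # (H*W,) word indices
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float64)
    return hist / hist.sum()                               # normalized histogram

# Toy example: a 4x4x8 feature map quantized over a 3-word vocabulary.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(4, 4, 8))
vocab = rng.normal(size=(3, 8))
target = bow_targets(fmap, vocab)   # distribution over the 3 visual words
```

In the self-supervised task, this histogram (computed from the unperturbed image) serves as the prediction target for the second convnet, which sees only a perturbed version of the image.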

Related research

01/10/2022
Reproducing BowNet: Learning Representations by Predicting Bags of Visual Words
This work aims to reproduce results from the CVPR 2020 paper by Gidaris ...

11/24/2021
ViCE: Self-Supervised Visual Concept Embeddings as Contextual and Pixel Appearance Invariant Semantic Representations
This work presents a self-supervised method to learn dense semantically ...

06/03/2022
Learning an Adaptation Function to Assess Image Visual Similarities
Human perception is routinely assessing the similarity between images, b...

04/18/2022
The Devil is in the Frequency: Geminated Gestalt Autoencoder for Self-Supervised Visual Pre-Training
The self-supervised Masked Image Modeling (MIM) schema, following "mask-...

12/21/2020
Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning
Learning image representations without human supervision is an important...

10/12/2019
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
We propose vq-wav2vec to learn discrete representations of audio segment...

11/25/2021
Semantic-Aware Generation for Self-Supervised Visual Representation Learning
In this paper, we propose a self-supervised visual representation learni...
