
Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning

12/21/2020
by Spyros Gidaris, et al.

Learning image representations without human supervision is an important and active research field. Several recent approaches have successfully leveraged the idea of making such a representation invariant under different types of perturbations, especially via contrastive-based instance discrimination training. Although effective visual representations should indeed exhibit such invariances, there are other important characteristics, such as encoding contextual reasoning skills, for which alternative reconstruction-based approaches might be better suited. With this in mind, we propose a teacher-student scheme to learn representations by training a convnet to reconstruct a bag-of-visual-words (BoW) representation of an image, given as input a perturbed version of that same image. Our strategy performs an online training of both the teacher network (whose role is to generate the BoW targets) and the student network (whose role is to learn representations), along with an online update of the visual-words vocabulary (used for the BoW targets). This idea effectively enables fully online BoW-guided unsupervised learning. Extensive experiments demonstrate the effectiveness of our BoW-based strategy, which surpasses previous state-of-the-art methods (including contrastive-based ones) in several applications. For instance, in downstream tasks such as Pascal object detection, Pascal classification and Places205 classification, our method improves over all prior unsupervised approaches, thus establishing new state-of-the-art results that are significantly better even than those of supervised pre-training. We provide the implementation code at https://github.com/valeoai/obow.
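To make the training scheme concrete, the sketch below mirrors the abstract with a toy PyTorch loop: a momentum teacher produces features that are quantized against a visual-word vocabulary to form BoW targets, a student sees a perturbed image and predicts that BoW, and both the teacher and the vocabulary are updated online. This is only an illustrative sketch, not the authors' implementation (available at https://github.com/valeoai/obow); the tiny backbone, the hard nearest-word assignment, the EMA vocabulary update, and every hyper-parameter (K, D, momenta, learning rate) are simplifying assumptions, and names such as make_encoder, bow_target and predictor are hypothetical.

```python
# Minimal, illustrative sketch of BoW-guided teacher-student training.
# All architectural choices and hyper-parameters are assumptions made
# only to keep the example short and runnable.

import torch
import torch.nn as nn
import torch.nn.functional as F

K, D = 128, 64          # assumed vocabulary size and feature dimension

def make_encoder():
    # Tiny stand-in convnet; the paper uses standard backbones instead.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, D, 3, stride=2, padding=1), nn.ReLU(),
    )

student = make_encoder()
teacher = make_encoder()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)                    # teacher is updated by momentum only

vocab = F.normalize(torch.randn(K, D), dim=1)  # visual-word vocabulary, updated online
predictor = nn.Linear(D, K)                    # student head predicting the BoW distribution
opt = torch.optim.SGD(list(student.parameters()) + list(predictor.parameters()), lr=0.05)

def bow_target(feats):
    # Assign each teacher feature vector to its nearest visual word (hard
    # assignment for simplicity), then pool the assignments into a normalized BoW.
    b, d, h, w = feats.shape
    f = F.normalize(feats.permute(0, 2, 3, 1).reshape(b, h * w, d), dim=-1)
    assign = (f @ vocab.t()).argmax(dim=-1)           # (b, h*w) word indices
    bow = F.one_hot(assign, K).float().mean(dim=1)    # (b, K) normalized histogram
    return bow, f.reshape(-1, d), assign.reshape(-1)

for step in range(10):                                # toy loop on random images
    img = torch.rand(8, 3, 64, 64)
    perturbed = img + 0.1 * torch.randn_like(img)     # stand-in for real augmentations

    with torch.no_grad():
        target, t_feats, assign = bow_target(teacher(img))

    # The student sees the perturbed image and must reconstruct the teacher's BoW.
    logits = predictor(student(perturbed).mean(dim=(2, 3)))
    loss = -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

    with torch.no_grad():
        # Momentum (EMA) update of the teacher from the student.
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(0.99).add_(ps, alpha=0.01)
        # Online vocabulary update: move each visited word toward the mean of
        # the teacher features assigned to it (a simple EMA stand-in).
        for k in assign.unique():
            vocab[k] = F.normalize(0.99 * vocab[k] + 0.01 * t_feats[assign == k].mean(0), dim=0)

    print(f"step {step}: loss {loss.item():.3f}")
```

The point the sketch tries to convey is that the BoW targets are not fixed in advance: because both the teacher weights and the vocabulary drift along with the student, the reconstruction objective remains fully online, as described in the abstract.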

02/09/2021
DetCo: Unsupervised Contrastive Learning for Object Detection
Unsupervised contrastive learning achieves great success in learning ima...

02/27/2020
Learning Representations by Predicting Bags of Visual Words
Self-supervised representation learning targets to learn convnet-based i...

11/13/2019
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representati...

07/23/2022
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
The teacher-free online Knowledge Distillation (KD) aims to train an ens...

07/04/2021
Bag of Instances Aggregation Boosts Self-supervised Learning
Recent advances in self-supervised learning have experienced remarkable ...

03/08/2021
Unsupervised Pretraining for Object Detection by Patch Reidentification
Unsupervised representation learning achieves promising performances in ...

10/05/2015
Relaxed Multiple-Instance SVM with Application to Object Discovery
Multiple-instance learning (MIL) has served as an important tool for a w...
