Hard Negative Mixing for Contrastive Learning

10/02/2020
by Yannis Kalantidis, et al.

Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e., the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes or keep very large memory banks; increasing the memory size, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level that can be computed on-the-fly with minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation, and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method.
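To make the feature-level mixing concrete, below is a minimal PyTorch-style sketch of one way such synthetic hard negatives could be generated on-the-fly: rank the negatives in the memory bank by similarity to the query, convexly mix random pairs of the hardest ones, and re-normalize. The function name mix_hard_negatives and the hyper-parameters n_hard and n_synthetic are illustrative assumptions, not the paper's exact procedure.

# Minimal sketch of feature-level hard negative mixing.
# Names and hyper-parameters below are illustrative assumptions,
# not the authors' exact implementation.
import torch
import torch.nn.functional as F

def mix_hard_negatives(query, negatives, n_hard=1024, n_synthetic=256):
    """Synthesize extra negatives by convexly mixing the hardest ones.

    query:      (D,) L2-normalized query embedding.
    negatives:  (K, D) L2-normalized negative embeddings (e.g. a memory queue).
    Returns an (n_synthetic, D) tensor of synthetic hard negatives.
    """
    # Rank negatives by similarity to the query; the most similar are the hardest.
    sims = negatives @ query                                  # (K,)
    hard_idx = sims.topk(min(n_hard, negatives.size(0))).indices
    hard = negatives[hard_idx]                                # (<=n_hard, D)

    # Mix random pairs of hard negatives with random convex coefficients.
    i = torch.randint(0, hard.size(0), (n_synthetic,))
    j = torch.randint(0, hard.size(0), (n_synthetic,))
    alpha = torch.rand(n_synthetic, 1)
    mixed = alpha * hard[i] + (1.0 - alpha) * hard[j]

    # Re-normalize so the synthetic points lie on the unit hypersphere,
    # matching the original embeddings.
    return F.normalize(mixed, dim=1)

In use, the synthetic negatives would simply be appended to the real ones before computing the contrastive (e.g. InfoNCE) logits for the query, which is why the overhead stays small: no extra images are encoded.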


