Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning

03/23/2021
by Chaitanya K. Ryali, et al.

Unsupervised representation learning is an important challenge in computer vision, with self-supervised learning methods recently closing the gap to supervised representation learning. An important ingredient in high-performing self-supervised methods is the use of data augmentation: models are trained to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, disregarding the semantic relevance of parts of an image (e.g., a subject vs. a background), which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective "background augmentations", which encourage models to focus on semantically relevant content by discouraging them from focusing on image backgrounds. Background augmentations lead to substantial improvements (+1-2% on ImageNet-1k) in performance across a spectrum of state-of-the-art self-supervised methods (MoCov2, BYOL, SwAV) on a variety of tasks, allowing us to reach within 0.3% of supervised performance. We also demonstrate that background augmentations improve robustness in a number of out-of-distribution settings, including natural adversarial examples, the backgrounds challenge, adversarial attacks, and ReaL ImageNet.
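The mechanics are straightforward to sketch. Below is a minimal, hypothetical illustration of two such augmentations, background removal and background swapping, assuming a precomputed binary foreground mask is available (the paper extracts such masks automatically; the mask source, the function names `background_remove` / `background_swap`, and all parameters here are illustrative, not the authors' code):

```python
import numpy as np

def background_remove(image: np.ndarray, fg_mask: np.ndarray,
                      fill: float = 0.5) -> np.ndarray:
    """Keep the subject; replace the background with a constant fill value.

    image:   float array of shape (H, W, 3), values in [0, 1]
    fg_mask: binary array of shape (H, W); 1 on the subject, 0 elsewhere
    """
    mask = fg_mask[..., None].astype(image.dtype)  # broadcast over RGB channels
    return mask * image + (1.0 - mask) * fill

def background_swap(image: np.ndarray, fg_mask: np.ndarray,
                    donor: np.ndarray) -> np.ndarray:
    """Paste the subject onto the background of another (donor) image."""
    mask = fg_mask[..., None].astype(image.dtype)
    return mask * image + (1.0 - mask) * donor

# Usage: apply to the augmented views of an image in a contrastive pipeline,
# so that agreement between views cannot rely on background pixels.
rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))        # stand-in for a real photo
donor = rng.random((224, 224, 3))      # source of the replacement background
mask = rng.random((224, 224)) > 0.5    # stand-in for a saliency-derived mask
view = background_swap(img, mask, donor)
```

Applied to the views fed to a contrastive learner, transforms of this kind make background pixels uninformative for matching views of the same image, which is the intuition behind the semantic-focus and robustness gains described above.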


