Improving Contrastive Learning on Visually Homogeneous Mars Rover Images

10/17/2022
by Isaac Ronald Ward, et al.

Contrastive learning has recently demonstrated performance superior to supervised learning, despite requiring no training labels. We explore how contrastive learning can be applied to hundreds of thousands of unlabeled Mars terrain images, collected from the Mars rovers Curiosity and Perseverance, and from the Mars Reconnaissance Orbiter. Such methods are appealing since the vast majority of Mars images are unlabeled, as manual annotation is labor intensive and requires extensive domain knowledge. Contrastive learning, however, assumes that any given pair of distinct images contains distinct semantic content. This assumption breaks down for Mars image datasets: given the lack of visual diversity on the planet's surface, any two Mars images are far more likely to be semantically similar. Treating such pairs as if they were in visual contrast, when they are in fact not, produces pairs falsely considered as negatives, which degrades training. In this study, we propose two approaches to resolve this: 1) an unsupervised deep clustering step on the Mars datasets, which identifies clusters of images containing similar semantic content and corrects false negative errors during training, and 2) a simple approach which mixes data from different domains to increase the visual diversity of the total training dataset. Both approaches reduce the rate of false negative pairs, thus minimizing the rate at which the model is incorrectly penalized during contrastive training. These modified approaches remain fully unsupervised end-to-end. To evaluate their performance, we add a single linear layer trained to generate class predictions based on these contrastively-learned features and demonstrate increased performance compared to supervised models, observing an improvement in classification accuracy of 3.06%.
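The false-negative correction the abstract describes can be made concrete with a short sketch: during contrastive training, candidate negatives that fall in the same unsupervised cluster as the anchor are masked out of the loss rather than repelled. The PyTorch snippet below is a minimal illustration, not the authors' exact implementation; the function name `cluster_masked_info_nce`, the temperature value, and the use of k-means cluster IDs are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def cluster_masked_info_nce(z1, z2, cluster_ids, temperature=0.1):
    """InfoNCE-style contrastive loss that excludes likely false negatives.

    z1, z2:      (N, D) embeddings of two augmented views of N images.
    cluster_ids: (N,) per-image cluster assignments from an unsupervised
                 clustering step (e.g. k-means on frozen features).
    Images sharing a cluster are assumed to share semantic content, so
    they are removed from the negative set rather than pushed apart.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    n = z1.size(0)

    # Cosine similarity between every view-1 and every view-2 embedding.
    logits = z1 @ z2.t() / temperature  # (N, N)

    # True where a candidate negative shares the anchor's cluster (a likely
    # false negative); the diagonal (the true positive pair) stays unmasked.
    same_cluster = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    false_neg = same_cluster & ~torch.eye(n, dtype=torch.bool, device=z1.device)
    logits = logits.masked_fill(false_neg, float('-inf'))

    # Each anchor's positive is its own augmented view (the diagonal).
    targets = torch.arange(n, device=z1.device)
    return F.cross_entropy(logits, targets)
```

Masking is the conservative choice here: same-cluster images simply vanish from the softmax denominator instead of being relabeled as extra positives, so a noisy clustering step cannot actively pull unrelated images together.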
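The evaluation protocol mentioned at the end of the abstract is a standard linear probe: the contrastively trained encoder is frozen and a single linear layer is fit on its features. A brief sketch follows, assuming a generic PyTorch `encoder` backbone and a `loader` of labeled images; the helper name `train_linear_probe` and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_linear_probe(encoder, loader, feat_dim, num_classes, epochs=10):
    """Fit one linear layer on frozen, contrastively learned features."""
    encoder.eval()  # freeze the backbone
    for p in encoder.parameters():
        p.requires_grad_(False)

    probe = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)  # fixed features
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```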


Related research

11/23/2020 | Boosting Contrastive Self-Supervised Learning with False Negative Cancellation
Self-supervised representation learning has witnessed significant leaps ...

08/22/2022 | Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound
Self-supervised contrastive representation learning offers the advantage...

12/16/2020 | ISD: Self-Supervised Learning by Iterative Similarity Distillation
Recently, contrastive learning has achieved great results in self-superv...

11/24/2022 | Contrastive pretraining for semantic segmentation is robust to noisy positive pairs
Domain-specific variants of contrastive learning can construct positive ...

09/05/2023 | Doppelgangers: Learning to Disambiguate Images of Similar Structures
We consider the visual disambiguation task of determining whether a pair...

02/14/2022 | Learn by Challenging Yourself: Contrastive Visual Representation Learning with Hard Sample Generation
Contrastive learning (CL), a self-supervised learning approach, can effe...

05/04/2023 | Multi-Domain Learning From Insufficient Annotations
Multi-domain learning (MDL) refers to simultaneously constructing a mode...
