Using Navigational Information to Learn Visual Representations

02/10/2022
by Lizhen Zhu, et al.

Children learn to build a visual representation of the world from unsupervised exploration, and we hypothesize that a key part of this ability is the use of self-generated navigational information as a similarity label to drive a self-supervised learning objective. The goal of this work is to exploit navigational information in a visual environment to train representations that outperform state-of-the-art self-supervised methods. We show that using spatial and temporal information during contrastive pretraining improves downstream classification performance relative to conventional contrastive approaches, which rely on instance discrimination to distinguish two augmentations of the same image from different images. We designed a pipeline to generate egocentric-vision images from a photorealistic ray-tracing environment (ThreeDWorld) and to record the relevant navigational information for each image. Modifying the Momentum Contrast (MoCo) model, we used spatial and temporal information, rather than instance discrimination, to evaluate the similarity of two views during pretraining. This work demonstrates the effectiveness and efficiency of contextual information for improving representation learning and informs our understanding of how children might learn to see the world without external supervision.
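The abstract describes replacing MoCo's instance-discrimination objective with similarity labels derived from the agent's recorded position and capture time. The paper's code is not shown here, so the PyTorch sketch below is only an illustration of that idea under stated assumptions: the function names (navigational_positive_mask, navigational_contrastive_loss), tensor shapes, and distance/time thresholds are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def navigational_positive_mask(positions, timestamps,
                               dist_thresh=1.0, time_thresh=2.0):
    """Treat two views as a positive pair when the agent recorded them
    close together in space and time (thresholds are illustrative)."""
    # positions: (N, 3) agent coordinates; timestamps: (N,) capture times
    spatial = torch.cdist(positions, positions)              # (N, N) pairwise distances
    temporal = (timestamps[:, None] - timestamps[None, :]).abs()
    mask = (spatial < dist_thresh) & (temporal < time_thresh)
    mask.fill_diagonal_(False)                               # exclude self-pairs
    return mask

def navigational_contrastive_loss(queries, keys, mask, temperature=0.07):
    """InfoNCE-style loss where positives come from navigational context
    rather than two augmentations of the same image."""
    q = F.normalize(queries, dim=1)                          # query-encoder embeddings
    k = F.normalize(keys, dim=1)                             # momentum-encoder embeddings
    logits = q @ k.t() / temperature                         # (N, N) similarity matrix
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_per_row = mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * mask).sum(dim=1) / pos_per_row       # average over positive pairs
    return loss.mean()
```

In this sketch, the mask plays the role that identity of the source image plays in standard MoCo: spatially and temporally nearby views attract in embedding space, while all other views in the batch serve as negatives.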

Related research

11/19/2020
Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations
Contrastive learning has achieved great success in self-supervised visua...

11/25/2020
Can Temporal Information Help with Contrastive Self-Supervised Learning?
Leveraging temporal information has been regarded as essential for devel...

11/23/2020
Hierarchically Decoupled Spatial-Temporal Contrast for Self-supervised Video Representation Learning
We present a novel way for self-supervised video representation learning...

08/14/2021
Focus on the Positives: Self-Supervised Learning for Biodiversity Monitoring
We address the problem of learning self-supervised representations from ...

03/16/2023
All4One: Symbiotic Neighbour Contrastive Learning via Self-Attention and Redundancy Reduction
Nearest neighbour based methods have proved to be one of the most succes...

05/30/2022
GMML is All you Need
Vision transformers have generated significant interest in the computer ...

06/09/2022
Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis
Recent self-supervised advances in medical computer vision exploit globa...
