State Representation Learning Using an Unbalanced Atlas

05/17/2023
by Li Meng, et al.

The manifold hypothesis posits that high-dimensional data often lie on a lower-dimensional manifold and that using this manifold as the target space yields more efficient representations. While many traditional manifold-based techniques exist for dimensionality reduction, their application in self-supervised learning has progressed slowly. The recent MSIMCLR method combines manifold encoding with SimCLR but requires extremely low target encoding dimensions to outperform SimCLR, limiting its applicability. This paper introduces a novel learning paradigm using an unbalanced atlas (UA) that is capable of surpassing state-of-the-art self-supervised learning approaches. We investigated and engineered the DeepInfomax with an unbalanced atlas (DIM-UA) method by systematically adapting the Spatiotemporal DeepInfomax (ST-DIM) framework to the proposed UA paradigm. The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface (AtariARI) benchmark, a modified version of the Atari 2600 framework that produces annotated image samples for representation learning. The UA paradigm improves on the existing algorithm significantly as the number of target encoding dimensions grows. For instance, the mean F1 score averaged over categories for DIM-UA is 75%.
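To make the atlas idea concrete, the following is a minimal toy sketch of an atlas-style encoder head: the encoder output is mapped onto several chart embeddings together with a soft chart-membership distribution, and the charts are combined by their membership weights. The chart count, dimensions, linear maps, and combination rule here are illustrative assumptions, not the actual DIM-UA architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable row-wise softmax
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class AtlasHead:
    """Toy multi-chart output head (illustrative, not the paper's model)."""
    def __init__(self, in_dim, n_charts=4, chart_dim=8):
        # One linear projection per chart, plus a membership projection
        self.W_charts = rng.standard_normal((n_charts, in_dim, chart_dim)) * 0.1
        self.W_member = rng.standard_normal((in_dim, n_charts)) * 0.1

    def __call__(self, x):
        # Per-chart embeddings: (batch, n_charts, chart_dim)
        charts = np.einsum('bi,cij->bcj', x, self.W_charts)
        # Soft assignment of each sample to charts: (batch, n_charts)
        member = softmax(x @ self.W_member)
        # Membership-weighted combination into a single code: (batch, chart_dim)
        code = np.einsum('bc,bcj->bj', member, charts)
        return charts, member, code

head = AtlasHead(in_dim=16)
x = rng.standard_normal((5, 16))
charts, member, code = head(x)
print(charts.shape, member.shape, code.shape)  # (5, 4, 8) (5, 4) (5, 8)
```

In this sketch an "unbalanced" atlas would correspond to the membership distribution concentrating most samples in a few charts rather than spreading them uniformly; how DIM-UA actually parameterizes and trains the charts is detailed in the paper itself.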

