TLDR: Twin Learning for Dimensionality Reduction

10/18/2021
by Yannis Kalantidis et al.

Dimensionality reduction methods are unsupervised approaches that learn low-dimensional spaces where some properties of the initial space, typically the notion of "neighborhood", are preserved. They are a crucial component of diverse tasks like visualization, compression, indexing, and retrieval. Aiming for a totally different goal, self-supervised visual representation learning has been shown to produce transferable representation functions by learning models that encode invariance to artificially created distortions, e.g. a set of hand-crafted image transformations. Unlike manifold learning methods, which usually require propagation on large k-NN graphs or complicated optimization solvers, self-supervised learning approaches rely on simpler and more scalable frameworks. In this paper, we unify these two families of approaches from the angle of manifold learning and propose TLDR, a dimensionality reduction method for generic input spaces that ports the simple self-supervised learning framework of Barlow Twins to a setting where it is hard or impossible to define an appropriate set of distortions by hand. We propose to use nearest neighbors to build pairs from a training set, together with a redundancy reduction loss borrowed from the self-supervised literature, to learn an encoder that produces representations invariant across such pairs. TLDR is simple, easy to implement and train, and broadly applicable; it consists of an offline nearest neighbor computation step that can be highly approximated, and a straightforward learning process that does not require mining negative samples to contrast, eigendecompositions, or cumbersome optimization solvers. By simply replacing PCA with TLDR, we are able to increase the performance of GeM-AP by 4% mAP for 128 dimensions, and to retain its performance with 16x fewer dimensions.
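To make the two-step pipeline concrete, here is a minimal PyTorch sketch: an offline nearest-neighbor computation (exact here, though the paper notes it can be heavily approximated), followed by a training loop that feeds neighbor pairs through an encoder and projector and minimizes a Barlow Twins-style redundancy reduction loss. The specifics below (k=3 neighborhoods, a linear encoder, the projector widths, the optimizer, and the synthetic data) are illustrative assumptions, not the paper's exact configuration; see the authors' official implementation for the real thing.

```python
import torch
import torch.nn as nn

def knn_pairs(x, k=3):
    """Offline step: for each point, find its k nearest neighbors."""
    d = torch.cdist(x, x)                     # pairwise Euclidean distances
    d.fill_diagonal_(float("inf"))            # exclude self-matches
    return d.topk(k, largest=False).indices   # (N, k) neighbor indices

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Redundancy reduction: push the cross-correlation matrix of the
    two views' (normalized) embeddings towards the identity."""
    n, dim = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / n                         # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag

# Toy setup: reduce 2048-d inputs (e.g. image descriptors) to 128 dimensions.
N, d_in, d_out = 1024, 2048, 128
x = torch.randn(N, d_in)                      # placeholder training set
neighbors = knn_pairs(x, k=3)

encoder = nn.Linear(d_in, d_out)              # the mapping kept at test time
projector = nn.Sequential(                    # discarded after training
    nn.Linear(d_out, 512), nn.BatchNorm1d(512), nn.ReLU(),
    nn.Linear(512, 512))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

for step in range(100):
    i = torch.randint(N, (256,))              # sample a batch of anchors
    j = neighbors[i, torch.randint(neighbors.size(1), (256,))]  # one neighbor each
    loss = barlow_twins_loss(projector(encoder(x[i])),
                             projector(encoder(x[j])))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the projector is discarded and the encoder alone maps new inputs to the reduced space, playing the role that PCA would otherwise play in a retrieval pipeline.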


