A Generic Self-Supervised Framework of Learning Invariant Discriminative Features

02/14/2022
by Foivos Ntelemis, et al.

Self-supervised learning (SSL) has become a popular method for generating invariant representations without the need for human annotations. Nonetheless, the desired invariant representation is achieved by applying prior online transformation functions to the input data, so each SSL framework is customised for a particular data type, e.g., visual data, and further modifications are required to apply it to other dataset types. On the other hand, the autoencoder (AE), a generic and widely applicable framework, mainly focuses on dimension reduction and is not suited to learning invariant representations. This paper proposes a generic SSL framework based on a constrained self-labelling assignment process that prevents degenerate solutions. Specifically, the prior transformation functions are replaced with a self-transformation mechanism, derived through an unsupervised adversarial training process, for imposing invariant representations. Via the self-transformation mechanism, pairs of augmented instances can be generated from the same input data. Finally, a training objective based on contrastive learning is designed by leveraging both the self-labelling assignment and the self-transformation mechanism. Despite the self-transformation process being very generic, the proposed training strategy outperforms a majority of state-of-the-art representation learning methods based on AE structures. To validate the performance of our method, we conduct experiments on four types of data, namely visual, audio, text, and mass spectrometry data, and compare the results in terms of four quantitative metrics. Our comparison results indicate that the proposed method demonstrates robustness and successfully identifies patterns within the datasets.
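The abstract does not give the exact form of the training objective, but a standard contrastive loss over pairs of augmented views, as the abstract describes, can be sketched as follows. This is a minimal NumPy illustration of an InfoNCE/NT-Xent-style loss (an assumption, not the paper's exact objective); `z1` and `z2` are hypothetical embedding batches where row `i` of each is a different augmented view of the same input.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss over paired embeddings.

    z1, z2: (N, D) arrays where row i of z1 and row i of z2 are
    embeddings of two augmented views of the same input instance.
    Note: illustrative sketch only; the paper's actual objective also
    involves a constrained self-labelling assignment not shown here.
    """
    # L2-normalise so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)      # (2N, D) stacked views
    sim = z @ z.T / temperature               # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    n = z1.shape[0]
    # The positive for row i is its counterpart view in the other half.
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # Cross-entropy of the positive entry against all other rows.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Minimising this loss pulls the two views of each instance together while pushing apart views of different instances, which is what yields representations invariant to the (here, self-learned) transformations.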

