AAG: Self-Supervised Representation Learning by Auxiliary Augmentation with GNT-Xent Loss

09/17/2020
by   Yanlun Tu, et al.

Self-supervised representation learning is an emerging research topic owing to its powerful capacity for learning from unlabeled data. As a mainstream self-supervised learning approach, augmentation-based contrastive learning has achieved great success in computer vision tasks that lack manual annotations. Despite this progress, existing methods often incur extra memory or storage costs, and their performance still leaves substantial room for improvement. Here we present a self-supervised representation learning method, AAG, which features an auxiliary augmentation strategy and a GNT-Xent loss. The auxiliary augmentation promotes the performance of contrastive learning by increasing the diversity of images. The proposed GNT-Xent loss enables a steady and fast training process and yields competitive accuracy. Experimental results demonstrate the superiority of AAG over previous state-of-the-art methods on CIFAR10, CIFAR100, and SVHN. Notably, AAG achieves 94.5% accuracy, higher than the best result of SimCLR with a batch size of 1024.
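For context, GNT-Xent is a modification of the NT-Xent (normalized temperature-scaled cross-entropy) loss used by SimCLR, which the abstract compares against. The sketch below is a minimal NumPy implementation of the standard NT-Xent loss, not of the GNT-Xent variant itself; the batch layout (each image's two augmented views at rows `i` and `i + N`) and the temperature value are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Standard NT-Xent contrastive loss (as in SimCLR).

    z: array of shape (2N, d). Rows 0..N-1 and N..2N-1 hold the two
    augmented views of the same N images, so row i pairs with (i + N) % 2N.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n2 = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # Row-wise log-softmax over all other samples in the batch.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (np.arange(n2) + n2 // 2) % n2               # index of each positive view
    return -log_prob[np.arange(n2), pos].mean()
```

With a larger similarity between positive pairs than between negatives, the loss approaches zero; a higher temperature flattens the softmax and weakens the penalty on hard negatives.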

