Multi-Augmentation for Efficient Visual Representation Learning for Self-supervised Pre-training

05/24/2022
by   Van-Nhiem Tran, et al.

In recent years, self-supervised learning has been studied as a way to cope with the limited availability of labeled datasets. Among the major components of self-supervised learning, the data augmentation pipeline is a key factor in the resulting performance. However, most augmentation pipelines are designed manually, and the limited collection of transformations can reduce the robustness of the learned feature representation. In this work, we propose Multi-Augmentations for Self-Supervised Representation Learning (MA-SSRL), which fully searches over a variety of augmentation policies to build the entire pipeline and improve the robustness of the learned feature representation. MA-SSRL successfully learns invariant feature representations and provides an efficient, effective, and adaptable data augmentation pipeline for self-supervised pre-training on datasets from different distributions and domains. MA-SSRL outperforms previous state-of-the-art methods on transfer and semi-supervised benchmarks while requiring fewer training epochs.
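To make the idea of a multi-augmentation pipeline concrete, below is a minimal Python sketch of producing several differently augmented views of one image, as is typical in self-supervised pre-training. This is not the authors' released code: the policy pool, the uniform sampling over policies, and the number of views are illustrative assumptions; in MA-SSRL the policies are obtained by a search procedure rather than fixed by hand.

```python
# Minimal sketch of a multi-augmentation view generator for SSL pre-training.
# Assumptions (not from the paper): a hand-picked policy pool, uniform
# sampling over policies, and 4 views per image.

import random
from PIL import Image
import torchvision.transforms as T

# Pool of candidate augmentation policies; a search procedure could score
# and reweight these, but here they are simply sampled uniformly.
POLICY_POOL = [
    T.Compose([
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),
        T.RandomHorizontalFlip(),
        T.ColorJitter(0.4, 0.4, 0.4, 0.1),
        T.ToTensor(),
    ]),
    T.Compose([
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),
        T.RandomGrayscale(p=0.2),
        T.GaussianBlur(kernel_size=23),
        T.ToTensor(),
    ]),
    T.Compose([
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomSolarize(threshold=128, p=0.2),
        T.ToTensor(),
    ]),
]

def multi_augment(image: Image.Image, num_views: int = 4):
    """Return several differently augmented views of one image."""
    policies = random.choices(POLICY_POOL, k=num_views)
    return [policy(image) for policy in policies]

if __name__ == "__main__":
    img = Image.new("RGB", (256, 256), color=(128, 64, 32))  # dummy image
    views = multi_augment(img, num_views=4)
    print([tuple(v.shape) for v in views])  # each view is a 3x224x224 tensor
```

An SSL objective (contrastive, BYOL-style, etc.) would then encourage the encoder to map these views of the same image to similar representations, which is where the choice of augmentation pipeline directly affects the invariances the model learns.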
