Mix-up Self-Supervised Learning for Contrast-agnostic Applications

04/02/2022
by   Yichen Zhang, et al.

Contrastive self-supervised learning has attracted significant research attention recently. It learns effective visual representations from unlabeled data by embedding augmented views of the same image close to each other while pushing apart embeddings of different images. Despite its great success on ImageNet classification, COCO object detection, etc., its performance degrades on contrast-agnostic applications, e.g., medical image classification, where all images are visually similar to each other. This makes the embedding space difficult to optimize, as the distances between images are very small. To solve this issue, we present the first mix-up self-supervised learning framework for contrast-agnostic applications. We address the low variance across images with cross-domain mix-up and build the pretext task on two synergistic objectives: image reconstruction and transparency prediction. Experimental results on two benchmark datasets validate the effectiveness of our method, which yields an improvement of 2.5 over existing self-supervised learning methods.
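The core mix-up operation described above blends two images with a sampled transparency coefficient, which the pretext task then asks the model to predict alongside reconstructing the inputs. A minimal sketch of that blending step is shown below; the function name, the Beta-distribution sampling, and all parameter names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mixup(img_a, img_b, alpha=1.0, rng=None):
    """Blend two images with a sampled transparency coefficient.

    Hypothetical sketch of the cross-domain mix-up step: the mixed
    image is lam * img_a + (1 - lam) * img_b, where lam is the
    'transparency' the pretext task would try to predict. Sampling
    lam from a Beta(alpha, alpha) distribution follows common mix-up
    practice and is an assumption here.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    mixed = lam * img_a + (1.0 - lam) * img_b
    return mixed, lam

# Toy usage: two 2x2 "images", one all-ones and one all-zeros.
a = np.ones((2, 2))
b = np.zeros((2, 2))
mixed, lam = mixup(a, b)
# Every pixel of the mix equals lam, since a is all-ones and b all-zeros.
assert np.allclose(mixed, lam)
```

In a full framework, one network head would regress `lam` (transparency prediction) while another would reconstruct the original images from the mix (image reconstruction), giving the two synergistic objectives the abstract mentions.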
