Additional Positive Enables Better Representation Learning for Medical Images

05/31/2023
by Dewen Zeng, et al.

This paper presents a new way to identify additional positive pairs for BYOL, a state-of-the-art (SOTA) self-supervised learning framework, to improve its representation learning ability. Unlike conventional BYOL, which relies on only one positive pair generated by two augmented views of the same image, we argue that information from different images with the same label can bring more diversity and variation to the target features, thus benefiting representation learning. To identify such pairs without any labels, we investigate TracIn, an instance-based and computationally efficient influence function, for BYOL training. Specifically, TracIn is a gradient-based method that reveals the impact of a training sample on a test sample in supervised learning. We extend it to the self-supervised learning setting and propose an efficient batch-wise per-sample gradient computation method to estimate pairwise TracIn, which represents the similarity of samples within a mini-batch during training. For each image, we select the most similar sample among the other images as the additional positive and pull their features together with the BYOL loss. Experimental results on two public medical datasets (i.e., ISIC 2019 and ChestX-ray) demonstrate that the proposed method improves classification performance over other competitive baselines in both semi-supervised and transfer learning settings.
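The core selection step described above, choosing each sample's additional positive as its nearest neighbor under pairwise gradient similarity, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes per-sample gradients have already been flattened into one vector per sample (the paper's batch-wise computation method produces these efficiently), and approximates pairwise TracIn by gradient dot products.

```python
import numpy as np

def tracin_positives(per_sample_grads):
    """Pick an additional positive for each sample via pairwise gradient similarity.

    per_sample_grads: (B, D) array, one flattened loss gradient per sample
                      in the mini-batch.
    Returns: (B,) array of indices, the most similar *other* sample,
             to be paired with each sample under the BYOL loss.
    """
    # Pairwise TracIn approximated as gradient dot products.
    sim = per_sample_grads @ per_sample_grads.T
    # A sample may not be its own additional positive.
    np.fill_diagonal(sim, -np.inf)
    return sim.argmax(axis=1)

# Toy batch of 4 samples: gradients of samples 0 and 2 point the same way,
# so they select each other as additional positives.
g = np.array([[ 1.0, 0.0],
              [ 0.0, 1.0],
              [ 2.0, 0.1],
              [-1.0, 1.0]])
print(tracin_positives(g))  # -> [2 3 0 1]
```

In practice the similarity matrix is recomputed per mini-batch during training, so the pairing adapts as the network's gradients evolve.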

Related research

- Whitening for Self-Supervised Representation Learning (07/13/2020)
- TriBYOL: Triplet BYOL for Self-Supervised Representation Learning (06/07/2022)
- SimTriplet: Simple Triplet Representation Learning with a Single GPU (03/09/2021)
- HoughCL: Finding Better Positive Pairs in Dense Self-supervised Learning (11/21/2021)
- A Knowledge-based Learning Framework for Self-supervised Pre-training Towards Enhanced Recognition of Medical Images (11/27/2022)
- MSR: Making Self-supervised learning Robust to Aggressive Augmentations (06/04/2022)
- Self-Supervised Ranking for Representation Learning (10/14/2020)
