How Transferable Are Self-supervised Features in Medical Image Classification Tasks?

08/23/2021
by   Tuan Truong, et al.

Transfer learning has become a standard practice to mitigate the lack of labeled data in medical classification tasks. Whereas finetuning a downstream task with supervised ImageNet pretrained features is straightforward and extensively investigated in many works, there is little study on the usefulness of self-supervised pretraining. In this paper, we assess the transferability of ImageNet self-supervised pretraining by evaluating the performance of models initialized with pretrained features from three self-supervised techniques (SimCLR, SwAV, and DINO) on selected medical classification tasks. The chosen tasks cover tumor detection in sentinel axillary lymph node images, diabetic retinopathy classification in fundus images, and multiple pathological condition classification in chest X-ray images. We demonstrate that self-supervised pretrained models yield richer embeddings than their supervised counterparts, which benefits downstream tasks in both linear evaluation and finetuning. For example, in linear evaluation on a critically small subset of the data, we see an improvement of up to 14.79% in the diabetic retinopathy classification task, 5.4% in the tumor classification task, and 7.03% in the detection of pathological conditions in chest X-ray images. In addition, we introduce Dynamic Visual Meta-Embedding (DVME) as an end-to-end transfer learning approach that fuses pretrained embeddings from multiple models. We show that the collective representation obtained by DVME leads to a significant performance improvement on the selected tasks compared to using a single pretrained model, and that it can be generalized to any combination of pretrained models.
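To make the embedding-fusion idea concrete, the sketch below shows one plausible way to combine features from several frozen pretrained backbones (e.g., SimCLR, SwAV, and DINO encoders) with learned attention weights and a small trainable head. This is a simplified illustration under stated assumptions, not the paper's exact DVME implementation; the class and parameter names here are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's exact DVME code): fuse embeddings
# from several frozen ImageNet-pretrained backbones via attention weights and
# train only the projections, attention, and classification head.
import torch
import torch.nn as nn

class MultiEmbeddingFusion(nn.Module):
    """Attention-weighted fusion of embeddings from multiple frozen encoders.

    Each encoder is assumed to map an image batch to a flat feature vector,
    e.g., a torchvision ResNet-50 with its classification head replaced by
    nn.Identity() and self-supervised weights loaded beforehand.
    """

    def __init__(self, encoders, embed_dims, fused_dim=512, num_classes=5):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        # Keep the pretrained backbones frozen; only the fusion layers and the
        # head receive gradients (linear-evaluation-style training).
        for enc in self.encoders:
            for p in enc.parameters():
                p.requires_grad = False
        # Project each backbone's embedding into a shared space.
        self.projections = nn.ModuleList(
            [nn.Linear(d, fused_dim) for d in embed_dims]
        )
        # Learned scalar attention decides how much each backbone contributes.
        self.attention = nn.Linear(fused_dim, 1)
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, x):
        # projected: (num_encoders, batch, fused_dim)
        projected = torch.stack(
            [proj(enc(x)) for enc, proj in zip(self.encoders, self.projections)]
        )
        weights = torch.softmax(self.attention(projected), dim=0)  # over encoders
        fused = (weights * projected).sum(dim=0)                   # (batch, fused_dim)
        return self.head(fused)
```

In practice, the encoders could be ResNet-50 backbones with SimCLR, SwAV, and DINO checkpoints loaded and their classification heads removed, in which case `embed_dims` would be `[2048, 2048, 2048]`; concatenation instead of attention-weighted summation is another common fusion choice.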

