MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

10/11/2020
by Hari Sowrirajan, et al.

Self-supervised approaches such as Momentum Contrast (MoCo) can leverage unlabeled data to produce pretrained models for subsequent fine-tuning on labeled data. While MoCo has demonstrated promising results on natural image classification tasks, its application to medical imaging tasks like chest X-ray interpretation has been limited. Chest X-ray interpretation is fundamentally different from natural image classification in ways that may limit the applicability of self-supervised approaches. In this work, we investigate whether MoCo-pretraining leads to better representations or initializations for chest X-ray interpretation. We conduct MoCo-pretraining on CheXpert, a large labeled dataset of X-rays, followed by supervised fine-tuning experiments on the pleural effusion task. Using 0.1% of the labeled training data, a linear model trained on MoCo-pretrained representations outperforms one trained on representations without MoCo-pretraining by an AUC of 0.096 (95% CI 0.061, 0.130), indicating that MoCo-pretrained representations are of higher quality. Furthermore, a model fine-tuned end-to-end with MoCo-pretraining outperforms its non-MoCo-pretrained counterpart by an AUC of 0.037 (95% CI 0.015, 0.062) at the 0.1% label fraction. These AUC improvements hold across label fractions for both the linear model and the end-to-end fine-tuned model, with greater improvements at smaller label fractions. Finally, we observe similar results on a small target chest X-ray dataset (the Shenzhen dataset for tuberculosis) with MoCo-pretraining done on the source dataset (CheXpert), which suggests that pretraining on unlabeled X-rays can provide transfer learning benefits for a target task. Our study demonstrates that MoCo-pretraining provides high-quality representations and transferable initializations for chest X-ray interpretation.
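
To make the pretraining-then-probing protocol concrete, the sketch below shows a minimal MoCo-style setup in PyTorch: a momentum-updated key encoder, a queue of negative keys, an InfoNCE loss over two augmented views of the same X-ray, and a linear probe on the frozen representation. This is not the authors' released code; the ResNet-18 backbone, the hyperparameter values, and names such as build_encoder, moco_loss, and probe are illustrative assumptions, and the queue enqueue/dequeue step is omitted for brevity.

    # Minimal MoCo-style sketch (illustrative assumptions, not the authors' code).
    import copy
    import torch
    import torch.nn.functional as F
    from torchvision import models

    dim, K, m, T = 128, 4096, 0.999, 0.07   # embedding dim, queue size, momentum, temperature

    def build_encoder():
        # Backbone is an assumption; any CNN image encoder works here.
        net = models.resnet18(weights=None)
        net.fc = torch.nn.Linear(net.fc.in_features, dim)
        return net

    encoder_q = build_encoder()              # query encoder, updated by gradients
    encoder_k = copy.deepcopy(encoder_q)     # key encoder, updated by momentum
    for p in encoder_k.parameters():
        p.requires_grad = False

    # Queue of negative keys; in full MoCo it is refreshed with new keys each batch
    # (enqueue/dequeue omitted here for brevity).
    queue = F.normalize(torch.randn(K, dim), dim=1)

    def moco_loss(x_q, x_k):
        """x_q, x_k: two augmented views of the same batch of unlabeled X-rays."""
        q = F.normalize(encoder_q(x_q), dim=1)
        with torch.no_grad():
            # Momentum update of the key encoder from the query encoder.
            for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
                pk.mul_(m).add_(pq.detach(), alpha=1 - m)
            k = F.normalize(encoder_k(x_k), dim=1)
        l_pos = (q * k).sum(dim=1, keepdim=True)   # positive logits: N x 1
        l_neg = q @ queue.t()                      # negative logits: N x K
        logits = torch.cat([l_pos, l_neg], dim=1) / T
        labels = torch.zeros(logits.size(0), dtype=torch.long)  # positives at index 0
        return F.cross_entropy(logits, labels)

    # After pretraining, a linear probe on the frozen encoder output approximates the
    # "linear model on MoCo-pretrained representations" evaluation in the abstract,
    # e.g. a single binary head for pleural effusion.
    probe = torch.nn.Linear(dim, 1)

Fine-tuning the whole network end-to-end instead of freezing the encoder corresponds to the second evaluation setting described in the abstract.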

Related research

01/25/2023  Self-Supervised Curricular Deep Learning for Chest X-Ray Image Classification
08/23/2021  How Transferable Are Self-supervised Features in Medical Image Classification Tasks?
03/28/2023  Large-scale pretraining on pathological images for fine-tuning of small pathological benchmarks
01/18/2021  CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation
09/19/2021  A Study of the Generalizability of Self-Supervised Representations
08/24/2022  Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models
07/23/2022  Self-Supervised Learning of Echocardiogram Videos Enables Data-Efficient Clinical Diagnosis
