CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation

01/18/2021
by Alexander Ke, et al.

Deep learning methods for chest X-ray interpretation typically rely on pretrained models developed for ImageNet. This paradigm assumes that better ImageNet architectures perform better on chest X-ray tasks and that ImageNet-pretrained weights provide a performance boost over random initialization. In this work, we compare the transfer performance and parameter efficiency of 16 popular convolutional architectures on a large chest X-ray dataset (CheXpert) to investigate these assumptions. First, we find no relationship between ImageNet performance and CheXpert performance, both for models without pretraining and for models with pretraining. Second, we find that, for models without pretraining, the choice of model family influences performance more than model size within a family for medical imaging tasks. Third, we observe that ImageNet pretraining yields a statistically significant boost in performance across architectures, with a higher boost for smaller architectures. Fourth, we examine whether ImageNet architectures are unnecessarily large for CheXpert by truncating final blocks from pretrained models, and find that we can make models 3.25x more parameter-efficient on average without a statistically significant drop in performance. Our work contributes new experimental evidence about the relationship between ImageNet performance and chest X-ray interpretation performance.


