A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000

02/13/2023
by Sontje Ihler, et al.

Computer-aided diagnosis (CAD) has gained increasing attention in the general research community in recent years as an example of a typical limited-data application, with experiments on labeled datasets of 100k-200k samples. Although these datasets are still small compared to natural image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k-10k labeled samples are much more common. There is currently no baseline establishing which methods to build on in the low-data regime. In this work we bridge this gap by providing an extensive study of medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low-data regime of 5000 images from the CheXpert dataset. We find that models pretrained on ImageNet21k achieve a higher AUC and that larger models require fewer training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All 'modern' architectures achieve a higher AUC than ResNet50. Regularizing Big Transfer models with MixUp or Mean Teacher improves calibration, and MixUp also improves accuracy. Vision Transformers achieve results comparable or on par to Big Transfer models.
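For readers unfamiliar with the MixUp regularization mentioned above, the following is a minimal PyTorch-style sketch of the general technique, not the authors' training code; the alpha value and the multi-label BCE loss are assumptions chosen to match a CheXpert-style multi-label setup.

import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    # Sample a mixing coefficient from a Beta(alpha, alpha) distribution.
    # alpha=0.2 is a common default, not necessarily the paper's value.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Pair each sample with a randomly chosen partner from the same batch.
    perm = torch.randperm(x.size(0))
    # Convexly combine both the images and their (multi-hot) label vectors.
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

# Hypothetical usage inside a training step:
# x_mix, y_mix = mixup_batch(images, labels)
# loss = F.binary_cross_entropy_with_logits(model(x_mix), y_mix)

Because the mixed labels are soft, the model is discouraged from producing overconfident predictions, which is consistent with the calibration improvements reported in the abstract.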
