Enhancing Network Initialization for Medical AI Models Using Large-Scale, Unlabeled Natural Images

08/15/2023
by Soroosh Tayebi Arasteh, et al.

Pre-training on large datasets such as ImageNet has become the gold standard in medical image analysis. However, the emergence of self-supervised learning (SSL), which leverages unlabeled data to learn robust features, presents an opportunity to bypass the labor-intensive labeling process. In this study, we explored whether SSL pre-training on non-medical images transfers to chest radiographs and how it compares to supervised learning (SL) pre-training on non-medical images and on medical images. We utilized a vision transformer and initialized its weights based on (i) SSL pre-training on natural images (DINOv2), (ii) SL pre-training on natural images (ImageNet dataset), and (iii) SL pre-training on chest radiographs from the MIMIC-CXR database. We tested our approach on over 800,000 chest radiographs from six large global datasets, covering the diagnosis of more than 20 different imaging findings. Our SSL pre-training on curated images not only outperformed ImageNet-based pre-training (P<0.001 for all datasets) but, in certain cases, also exceeded SL pre-training on the MIMIC-CXR dataset. Our findings suggest that selecting the right pre-training strategy, especially with SSL, can be pivotal for improving the diagnostic accuracy of artificial intelligence (AI) in medical imaging. By demonstrating the promise of SSL in chest radiograph analysis, we underline a transformative shift towards more efficient and accurate AI models in medical imaging.
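
As a rough illustration of the comparison described above, the sketch below shows how a vision transformer might be instantiated with two of the three weight initializations in PyTorch. The specific checkpoints (dinov2_vitb14 via torch.hub, vit_base_patch16_224 via timm), the head size, and the omission of the MIMIC-CXR-pretrained variant (which would require a privately trained checkpoint) are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (not the authors' code): two of the three weight
# initializations compared in the abstract for a ViT backbone.
# Checkpoint names and head size are assumptions for illustration.

import torch
import torch.nn as nn
import timm


def build_ssl_dinov2_model(num_findings: int) -> nn.Module:
    """ViT backbone with SSL (DINOv2) weights plus a linear multi-label head."""
    backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
    head = nn.Linear(backbone.embed_dim, num_findings)

    class Classifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = backbone
            self.head = head

        def forward(self, x):
            # DINOv2 returns the CLS-token embedding by default.
            return self.head(self.backbone(x))

    return Classifier()


def build_sl_imagenet_model(num_findings: int) -> nn.Module:
    """ViT backbone with supervised ImageNet weights and a freshly initialized head."""
    return timm.create_model(
        "vit_base_patch16_224", pretrained=True, num_classes=num_findings
    )


if __name__ == "__main__":
    num_findings = 20  # e.g., multi-label chest radiograph findings
    ssl_model = build_ssl_dinov2_model(num_findings)
    sl_model = build_sl_imagenet_model(num_findings)
    x = torch.randn(1, 3, 224, 224)  # 224 is divisible by both patch sizes (14 and 16)
    print(ssl_model(x).shape, sl_model(x).shape)  # both: (1, num_findings)
```

In both variants the pre-trained backbone is reused as-is and only the task head is newly initialized; fine-tuning on labeled chest radiographs would then typically proceed with a standard multi-label objective such as BCEWithLogitsLoss.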

Related research:

A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis (08/12/2021)
Transfer learning from supervised ImageNet models has been frequently us...

Advancing 3D Medical Image Analysis with Variable Dimension Transform based Supervised 3D Pre-training (01/05/2022)
The difficulties in both data acquisition and annotation substantially r...

Self-supervised Image-text Pre-training With Mixed Data In Chest X-rays (03/30/2021)
Pre-trained models, e.g., from ImageNet, have proven to be effective in ...

Exploring Self-Supervised Representation Learning For Low-Resource Medical Image Analysis (03/03/2023)
The success of self-supervised learning (SSL) has mostly been attributed...

Quantitative Imaging Principles Improves Medical Image Learning (06/14/2022)
Fundamental differences between natural and medical images have recently...

Effect of large-scale pre-training on full and few-shot transfer learning for natural and medical images (05/31/2021)
Transfer learning aims to exploit pre-trained models for more efficient ...
