Delving into Masked Autoencoders for Multi-Label Thorax Disease Classification

10/23/2022
by Junfei Xiao, et al.

Vision Transformer (ViT) has become one of the most popular neural architectures due to its great scalability, computational efficiency, and compelling performance in many vision tasks. However, ViT has shown inferior performance to Convolutional Neural Networks (CNNs) on medical tasks due to its data-hungry nature and the lack of annotated medical data. In this paper, we pre-train ViTs on 266,340 chest X-rays using Masked Autoencoders (MAE), which reconstruct missing pixels from a small visible part of each image. For comparison, CNNs are also pre-trained on the same 266,340 X-rays using advanced self-supervised methods (e.g., MoCo v2). The results show that our pre-trained ViT performs comparably with (sometimes better than) the state-of-the-art CNN (DenseNet-121) for multi-label thorax disease classification. This performance is attributed to the strong recipes extracted from our empirical studies for pre-training and fine-tuning ViT. The pre-training recipe signifies that medical reconstruction requires a much smaller proportion of an image (10% vs. 25%) compared with natural imaging. Furthermore, we remark that in-domain transfer learning is preferred whenever possible. The fine-tuning recipe discloses that layer-wise LR decay, RandAug magnitude, and DropPath rate are significant factors to consider. We hope that this study can direct future research on the application of Transformers to a larger variety of medical imaging tasks.
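The masking step the abstract refers to can be illustrated with a minimal, self-contained sketch of MAE-style random patch masking. This is not the authors' code: the function name, shapes, and the ratio shown are illustrative assumptions, following the per-sample random-shuffle scheme popularized by the original MAE.

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """Randomly mask a fraction of patches, MAE-style (illustrative sketch).

    patches: (N, L, D) array of N images, each split into L patch embeddings.
    Returns the kept (visible) patches and a binary mask (1 = removed).
    """
    rng = np.random.default_rng(rng)
    n, length, _ = patches.shape
    len_keep = int(length * (1 - mask_ratio))

    noise = rng.random((n, length))           # one random score per patch
    ids_shuffle = np.argsort(noise, axis=1)   # patches with low scores are kept
    ids_keep = ids_shuffle[:, :len_keep]

    # Gather only the visible patches; the encoder sees just these.
    visible = np.take_along_axis(patches, ids_keep[:, :, None], axis=1)

    # Binary mask over all L positions: 0 = visible, 1 = masked.
    mask = np.ones((n, length))
    np.put_along_axis(mask, ids_keep, 0.0, axis=1)
    return visible, mask

# 2 images, 14x14 = 196 patches, 768-dim embeddings (ViT-Base-like shapes).
# The paper's recipe suggests chest X-rays tolerate masking differently than
# natural images; the 75% ratio here is only the common MAE default.
x = np.random.randn(2, 196, 768)
visible, mask = random_masking(x, mask_ratio=0.75)
```

With `mask_ratio=0.75`, only 49 of the 196 patches reach the encoder, which is what makes MAE pre-training computationally cheap; the ratio itself is the recipe knob the paper studies for medical images.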


Related research

- 02/10/2016: Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning
- 06/02/2017: Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?
- 08/17/2022: Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs
- 05/10/2023: Medical supervised masked autoencoders: Crafting a better masking strategy and efficient fine-tuning schedule for medical image classification
- 01/29/2023: Towards Vision Transformer Unrolling Fixed-Point Algorithm: a Case Study on Image Restoration
- 10/28/2021: RadBERT-CL: Factually-Aware Contrastive Learning For Radiology Report Classification
- 02/03/2018: AFT*: Integrating Active Learning and Transfer Learning to Reduce Annotation Efforts
