Adapting Pre-trained Vision Transformers from 2D to 3D through Weight Inflation Improves Medical Image Segmentation

02/08/2023
by Yuhui Zhang, et al.

Given the prevalence of 3D medical imaging technologies such as MRI and CT, which are widely used in diagnosing and treating diverse diseases, 3D segmentation is one of the fundamental tasks of medical image analysis. Recently, Transformer-based models have begun to achieve state-of-the-art performance across many vision tasks through pre-training on large-scale natural-image benchmarks. While work on medical image analysis has also begun to explore Transformer-based models, there is currently no optimal strategy to effectively leverage pre-trained Transformers, primarily due to the difference in dimensionality between 2D natural images and 3D medical images. Existing solutions either split 3D images into 2D slices and predict each slice independently, thereby losing crucial depth-wise information, or modify the Transformer architecture to support 3D inputs without leveraging pre-trained weights. In this work, we use a simple yet effective weight inflation strategy to adapt pre-trained Transformers from 2D to 3D, retaining the benefits of both transfer learning and depth information. We further investigate the effectiveness of transfer from different pre-training sources and objectives. Our approach achieves state-of-the-art performance across a broad range of 3D medical image datasets, and can serve as a standard strategy that future work on Transformer-based models for 3D medical images can easily adopt to maximize performance.
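To make the weight inflation idea concrete, here is a minimal PyTorch sketch of the averaging-style inflation popularized by I3D: a pre-trained 2D convolution kernel (e.g., a ViT patch embedding) is replicated along a new depth axis and divided by the depth, so that a volume whose slices are identical initially yields the same activations as the 2D model. The function name, the choice of depth, and the ViT-Base dimensions below are illustrative assumptions, not the paper's actual API.

import torch
import torch.nn as nn

def inflate_conv2d_to_conv3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """Inflate a pre-trained 2D conv into a 3D conv (averaging inflation).

    The 2D kernel of shape (out, in, kH, kW) is repeated `depth` times
    along a new depth axis and divided by `depth`, preserving activations
    when the input is constant along depth.
    """
    conv3d = nn.Conv3d(
        conv2d.in_channels,
        conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(depth, *conv2d.stride),
        padding=(0, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight                                   # (out, in, kH, kW)
        w3d = w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w3d)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Example: inflate a ViT-Base-style 16x16 patch embedding to take 3D volumes.
patch_embed_2d = nn.Conv2d(3, 768, kernel_size=16, stride=16)
patch_embed_3d = inflate_conv2d_to_conv3d(patch_embed_2d, depth=4)
volume = torch.randn(1, 3, 4, 224, 224)   # (batch, channels, D, H, W)
tokens = patch_embed_3d(volume)           # -> (1, 768, 1, 14, 14)

Dividing by the depth is what keeps the inflated layer a drop-in replacement at initialization; the rest of the pre-trained Transformer blocks operate on tokens and can be reused unchanged.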

Related research

11/29/2021 · Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis
Vision Transformers (ViTs) have shown great performance in self-supervis...

08/26/2023 · Transfer Learning for Microstructure Segmentation with CS-UNet: A Hybrid Algorithm with Transformer and CNN Encoders
Transfer learning improves the performance of deep learning models by in...

02/17/2023 · GPT4MIA: Utilizing Generative Pre-trained Transformer (GPT-3) as A Plug-and-Play Transductive Model for Medical Image Analysis
In this paper, we propose a novel approach (called GPT4MIA) that utilize...

10/14/2022 · Optimizing Vision Transformers for Medical Image Segmentation and Few-Shot Domain Adaptation
The adaptation of transformers to computer vision is not straightforward...

05/20/2022 · Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT)
Vision transformers, with their ability to more efficiently model long-r...

05/05/2022 · Understanding Transfer Learning for Chest Radiograph Clinical Report Generation with Modified Transformer Architectures
The image captioning task is increasingly prevalent in artificial intell...

03/04/2021 · Contrastive Learning Meets Transfer Learning: A Case Study In Medical Image Analysis
Annotated medical images are typically rarer than labeled natural images...
