Unified 2D and 3D Pre-training for Medical Image Classification and Segmentation

12/17/2021
by   Yutong Xie, et al.

Self-supervised learning (SSL) opens up huge opportunities for better utilizing unlabeled data. It is essential for medical image analysis, a field generally known for its lack of annotations. However, when we attempt to use as many unlabeled medical images as possible in SSL, breaking the dimension barrier (i.e., making it possible to jointly use both 2D and 3D images) becomes a must. In this paper, we propose a Universal Self-Supervised Transformer (USST) framework based on the student-teacher paradigm, aiming to leverage a huge amount of unlabeled medical data of multiple dimensions to learn rich representations. To achieve this, we design a Pyramid Transformer U-Net (PTU) as the backbone, which is composed of switchable patch embedding (SPE) layers and Transformer layers. The SPE layer switches to either 2D or 3D patch embedding depending on the input dimension. After that, the images are converted to a sequence regardless of their original dimensions. The Transformer layer then models long-range dependencies in a sequence-to-sequence manner, thus enabling USST to learn representations from both 2D and 3D images. USST has two obvious merits compared to current dimension-specific SSL: (1) more effective - it can learn representations from more, and more diverse, data; and (2) more versatile - it can be transferred to various downstream tasks. The results show that USST provides promising results on six 2D/3D medical image classification and segmentation tasks, outperforming supervised ImageNet pre-training and advanced SSL counterparts substantially.
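The key idea behind the SPE layer is that tokenization, not the Transformer, is the only dimension-specific step: a 2D image or a 3D volume is cut into non-overlapping patches, and each patch is flattened and linearly projected into a token, after which both inputs look identical to the downstream Transformer layers. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name, the single-channel input, and the randomly initialized projection weights are illustrative assumptions (a real SPE layer would hold separate learned 2D and 3D projection weights).

```python
import numpy as np

def switchable_patch_embed(image, patch_size, embed_dim, rng=None):
    """Sketch of a switchable patch embedding (SPE).

    Splits a 2D image (H, W) or 3D volume (D, H, W) into non-overlapping
    cubes of side `patch_size`, flattens each patch, and projects it to a
    token of size `embed_dim`. The same call handles both dimensions, so
    the Transformer that follows always sees a plain token sequence.
    """
    rng = rng or np.random.default_rng(0)
    ndim = image.ndim  # 2 for a 2D image, 3 for a 3D volume
    # Split every spatial axis into (num_patches, patch_size) chunks.
    shape = []
    for size in image.shape:
        assert size % patch_size == 0, "patch_size must divide each spatial size"
        shape += [size // patch_size, patch_size]
    patches = image.reshape(shape)
    # Move per-axis patch indices to the front, patch contents to the back,
    # then flatten: (num_tokens, patch_size ** ndim).
    order = list(range(0, 2 * ndim, 2)) + list(range(1, 2 * ndim, 2))
    patches = patches.transpose(order).reshape(-1, patch_size ** ndim)
    # Dimension-specific linear projection (2D and 3D would use separate
    # learned weights; random weights here stand in for them).
    weight = rng.standard_normal((patch_size ** ndim, embed_dim))
    return patches @ weight  # (num_tokens, embed_dim)
```

For example, an 8x8 image with `patch_size=4` yields 4 tokens, while an 8x8x8 volume yields 8 tokens; in both cases the output is a `(num_tokens, embed_dim)` sequence, which is what lets one shared Transformer serve both dimensions.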


