Universal Model for 3D Medical Image Analysis

10/13/2020
by Xiaoman Zhang, et al.

Deep learning-based methods have recently achieved remarkable progress in medical image analysis, but they rely heavily on massive amounts of labeled training data. Transfer learning from pre-trained models has become a standard pipeline in medical image analysis to address this bottleneck. Despite their success, most existing pre-trained models are not tuned for multi-modal, multi-task generalization in medical domains. Specifically, their training data come either from non-medical domains or from a single modality, so they fail to address the performance degradation caused by cross-modal transfer. Furthermore, no effort is made to explicitly extract the multi-level features required by a variety of downstream tasks. To overcome these limitations, we propose Universal Model, a transferable and generalizable pre-trained model for 3D medical image analysis. A unified self-supervised learning scheme learns representations from multiple unlabeled source datasets with different modalities and distinctive scan regions. A modality-invariant adversarial learning module is further introduced to improve cross-modal generalization. To fit a wide range of tasks, a simple yet effective scale classifier captures multi-level visual representations. To validate the effectiveness of the Universal Model, we perform extensive experimental analysis on five target tasks covering multiple imaging modalities, distinctive scan regions, and different analysis tasks. Compared with both public 3D pre-trained models and newly investigated 3D self-supervised learning methods, Universal Model demonstrates superior generalizability, manifested by higher performance, stronger robustness, and faster convergence. The pre-trained Universal Model is available at: https://github.com/xm-cmic/Universal-Model
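The abstract describes the modality-invariant adversarial module only at a high level. One common way to realize such a module is a gradient-reversal layer placed between the feature encoder and a modality discriminator, as in standard domain-adversarial training. The sketch below illustrates the idea in plain Python; the gradient-reversal formulation, the scaling factor `lam`, and all names are assumptions for illustration, not the authors' implementation:

```python
def grad_reverse_forward(x):
    # Identity mapping in the forward pass: the modality discriminator
    # sees the encoder features unchanged.
    return x

def grad_reverse_backward(grad, lam=1.0):
    # In the backward pass, flip (and optionally scale) the gradient so
    # that minimizing the discriminator's modality-classification loss
    # pushes the encoder toward modality-confusing, i.e. modality-
    # invariant, features.
    return -lam * grad

# Toy 1-D example with a linear modality discriminator: logit = w * feat.
w = 0.5                                   # hypothetical discriminator weight
feat = 2.0                                # hypothetical encoder feature
logit = w * grad_reverse_forward(feat)    # forward pass is the identity
grad_to_feat = w                          # d(logit)/d(feat) at the discriminator
grad_to_encoder = grad_reverse_backward(grad_to_feat, lam=1.0)
# The encoder receives the sign-flipped gradient (-0.5 here), so it is
# updated to *increase* the discriminator's loss.
```

In practice this would be wrapped in an autograd framework's custom-backward mechanism, but the sign flip above is the entire trick: the discriminator learns to identify the scan modality while the shared encoder is simultaneously trained to make that identification impossible.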


