Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis

08/19/2019
by Zongwei Zhou et al.

Transfer learning from natural images to medical images has become one of the most practical paradigms in deep learning for medical image analysis. However, to fit this paradigm, 3D imaging tasks in the most prominent modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that Models Genesis significantly outperform learning from scratch in all five target 3D applications, covering both segmentation and classification. More importantly, while simply training a model from scratch in 3D may not necessarily yield better performance than transfer learning from ImageNet in 2D, our Models Genesis consistently outperform all 2D approaches, including fine-tuning models pre-trained on ImageNet as well as fine-tuning the 2D versions of Models Genesis, confirming the importance of 3D anatomical information and the significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated yet recurrent anatomy in medical images can serve as a strong supervision signal for deep models to learn common anatomical representations automatically via self-supervision. As open science, all pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
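The self-supervision described above is restoration-style: a 3D sub-volume is deliberately corrupted, and a model is trained to recover the original, so the only "labels" are the images themselves. As a minimal, stdlib-only sketch (not the authors' implementation), the snippet below constructs one such pretext corruption, local shuffling of voxels inside small random blocks; the function name, block sizes, and nested-list volume layout are illustrative assumptions:

```python
import random

def local_pixel_shuffle(volume, block=2, n_blocks=4, seed=0):
    """Corrupt a 3D volume by shuffling voxels inside small random sub-blocks.

    A restoration model trained to map the corrupted copy back to `volume`
    must learn local anatomical structure without any manual annotation.
    `volume` is a nested list indexed [z][y][x]; a corrupted deep copy is
    returned, leaving the input untouched. (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    # Deep-copy so the ground-truth target stays intact for the loss.
    out = [[[v for v in row] for row in plane] for plane in volume]
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    for _ in range(n_blocks):
        # Pick a random block origin that keeps the block inside the volume.
        z0 = rng.randrange(Z - block + 1)
        y0 = rng.randrange(Y - block + 1)
        x0 = rng.randrange(X - block + 1)
        # Gather the block's voxels, shuffle them, and write them back.
        voxels = [out[z][y][x]
                  for z in range(z0, z0 + block)
                  for y in range(y0, y0 + block)
                  for x in range(x0, x0 + block)]
        rng.shuffle(voxels)
        it = iter(voxels)
        for z in range(z0, z0 + block):
            for y in range(y0, y0 + block):
                for x in range(x0, x0 + block):
                    out[z][y][x] = next(it)
    return out
```

In a full pipeline, pairs `(local_pixel_shuffle(v), v)` would feed an encoder-decoder trained with a reconstruction loss; the pre-trained encoder is then fine-tuned on the target segmentation or classification task.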


