DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets

11/20/2020
by   Jianpeng Zhang, et al.

Due to the intensive cost of labor and expertise required to annotate 3D medical images at the voxel level, most benchmark datasets provide annotations for only one type of organ and/or tumor, resulting in the so-called partial labeling issue. To address this, we propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labeled datasets. DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller that generates dynamic convolution filters, and a single but dynamic segmentation head. The current segmentation task is encoded as a task-aware prior that tells the model which task it is expected to solve. Unlike existing approaches, which fix the kernels after training, the kernels in the dynamic head are generated adaptively by the controller, conditioned on both the input image and the assigned task. Thus, DoDNet is able to segment multiple organs and tumors, as done by multiple networks or a multi-head network, in a much more efficient and flexible manner. We have created a large-scale partially labeled dataset, termed MOTS, and demonstrated the superior performance of DoDNet over its competitors on seven organ and tumor segmentation tasks. We also transferred the weights pre-trained on MOTS to a downstream multi-organ segmentation task and achieved state-of-the-art performance. This study provides a general 3D medical image segmentation model that has been pre-trained on a large-scale partially labeled dataset and can be extended (after fine-tuning) to downstream volumetric medical data segmentation tasks. The dataset and code are available at: https://git.io/DoDNet
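The core idea — a controller that emits task-conditioned kernels for a lightweight segmentation head — can be sketched in a few lines. The sizes and names below are hypothetical toy values, not the authors' exact configuration: a single linear controller maps the concatenation of a pooled image feature and a task one-hot code to the flattened weights of a small head of 1x1x1 convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- a toy sketch of the dynamic head, not the paper's
# exact configuration.
num_tasks = 7                             # MOTS covers seven organ/tumor tasks
feat_ch = 8                               # channels of the shared decoder's feature map
layers = [(feat_ch, 8), (8, 8), (8, 2)]   # three 1x1x1 convs, 2-class output

# Total number of parameters (weights + biases) the controller must emit.
n_params = sum(ci * co + co for ci, co in layers)

# Controller: one linear map from [pooled image feature ; task one-hot]
# to the flattened kernels of the dynamic head.
W_ctrl = rng.standard_normal((n_params, feat_ch + num_tasks)) * 0.1

def dynamic_head(feature, task_id):
    """Segment `feature` (C, D, H, W) with kernels generated for `task_id`."""
    task_code = np.eye(num_tasks)[task_id]         # task-aware prior
    pooled = feature.mean(axis=(1, 2, 3))          # global average pooling
    params = W_ctrl @ np.concatenate([pooled, task_code])

    x, offset = feature, 0
    for i, (ci, co) in enumerate(layers):
        w = params[offset:offset + ci * co].reshape(co, ci)
        b = params[offset + ci * co:offset + ci * co + co]
        offset += ci * co + co
        # A 1x1x1 convolution is a per-voxel linear map over channels.
        x = np.einsum('oc,cdhw->odhw', w, x) + b[:, None, None, None]
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)                 # ReLU between layers
    return x

feature = rng.standard_normal((feat_ch, 4, 4, 4))
out_a = dynamic_head(feature, task_id=0)
out_b = dynamic_head(feature, task_id=1)
print(out_a.shape)                  # (2, 4, 4, 4)
print(np.allclose(out_a, out_b))    # False: kernels differ per task
```

Because only the tiny head's kernels change per task, the expensive encoder-decoder is shared across all tasks, which is what makes the single-head design cheaper than training one network (or one head) per task.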

