AnatomyNet: Deep 3D Squeeze-and-excitation U-Nets for fast and fully automated whole-volume anatomical segmentation

by Wentao Zhu et al., University of California, Irvine

Radiation therapy (RT) is a common treatment for head and neck (HaN) cancer, in which therapists must often manually delineate the boundaries of organs-at-risk (OARs). Automated HaN anatomical segmentation provides a way to speed up and improve the reproducibility of radiation therapy planning. In this work, we propose AnatomyNet, an end-to-end, atlas-free three-dimensional squeeze-and-excitation U-Net (3D SE U-Net), for fast and fully automated whole-volume HaN anatomical segmentation. Fully automated HaN OAR segmentation faces two main challenges: 1) segmenting small anatomies (e.g., the optic chiasm and optic nerves) that occupy only a few slices, and 2) training with inconsistent data annotations, where ground truth is missing for some anatomical structures because of differing RT plans. To alleviate these challenges, AnatomyNet uses a single down-sampling layer, trading off GPU memory against feature-representation capacity, and 3D SE residual blocks for effective feature learning. Moreover, we design a hybrid loss function combining the Dice loss and the focal loss. The Dice loss is a class-level distribution loss that depends less on the number of voxels in an anatomy, while the focal loss is designed to handle highly unbalanced segmentation. For missing annotations, we propose a masked loss and a weighted loss for accurate and balanced weight updates when training AnatomyNet. We collected 261 HaN CT images to train AnatomyNet to segment nine anatomies. Compared with the previous state-of-the-art method for each anatomy from the MICCAI 2015 competition, AnatomyNet increases the Dice similarity coefficient (DSC) by 3.3% on average, and it takes only about 0.12 seconds to segment a whole-volume HaN CT image of average dimension 178x302x225.
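To illustrate the hybrid loss idea described above, here is a minimal NumPy sketch, not the paper's implementation: the Dice term measures class-level overlap regardless of anatomy size, the focal term down-weights easy (mostly background) voxels, and the weighting factor `lam` and focusing parameter `gamma` are illustrative choices, not values from the paper.

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    # Class-level Dice loss: one overlap ratio per class, so the
    # contribution depends little on how many voxels the anatomy has.
    inter = np.sum(probs * target)
    union = np.sum(probs) + np.sum(target)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def focal_loss(probs, target, gamma=2.0, eps=1e-6):
    # Focal loss: (1 - p_t)^gamma down-weights confidently correct
    # voxels, countering the extreme foreground/background imbalance.
    p_t = np.where(target == 1, probs, 1.0 - probs)
    p_t = np.clip(p_t, eps, 1.0)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def hybrid_loss(probs, target, lam=0.5):
    # Weighted sum of the two terms; lam is a hypothetical trade-off
    # weight, not the value used in AnatomyNet.
    return dice_loss(probs, target) + lam * focal_loss(probs, target)
```

With a near-perfect prediction (e.g. probabilities 0.99 on foreground voxels and 0.01 on background), both terms are close to zero; swapping the predictions drives both terms up sharply, which is the behavior the abstract relies on for small, imbalanced anatomies.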





Code Repositories


Reimplement of AnatomyNet

