Head and neck cancer is one of the most common cancers around the world torre2015global . Radiation therapy is the primary method for treating patients with head and neck cancers. The planning of radiation therapy relies on accurate segmentation of organs-at-risk (OARs) han2008atlas , which is usually undertaken by radiation therapists through laborious manual delineation. Computational tools that automatically segment the anatomical regions can greatly alleviate doctors' manual efforts, provided these tools can delineate anatomical regions accurately within a reasonable amount of time sharp2014vision .
There is a vast body of literature on automatically segmenting anatomical structures from CT or MRI images. Here we focus on reviewing literature related to head and neck (HaN) CT anatomy segmentation. Traditional anatomical segmentation methods are primarily atlas-based, producing segmentations by aligning new images to a fixed set of manually labelled exemplars raudaschl2017evaluation . Atlas-based segmentation typically involves several steps, including preprocessing, atlas creation, image registration, and label fusion. As a consequence, its performance can be affected by the various factors involved in each of these steps, such as the methods for creating atlases han2008atlas ; voet2011does ; isambert2008evaluation ; fritscher2014automatic ; commowick2008atlas ; sims2009pre ; fortunati2013tissue ; verhaart2014relevance ; wachinger2015contour , the methods for label fusion duc2015validation ; fortunati2015automatic , and the methods for registration zhang2007automatic ; chen2010combining ; han2008atlas ; duc2015validation ; fritscher2014automatic ; fortunati2013tissue ; wachinger2015contour ; qazi2011auto ; leavens2008validation . Although atlas-based methods are still very popular and by far the most widely used in anatomy segmentation, their main limitation is the difficulty of handling anatomical variations among patients, because they rely on a fixed set of atlases. In addition, atlas registration is computationally intensive and can take many minutes to complete a single registration task even with the most efficient implementations xu2018use .
Instead of aligning images to a fixed set of exemplars, learning-based methods trained to directly segment OARs without resorting to reference exemplars have also been tried tam2018automated ; wu2018auto ; tong2018hierarchical ; pednekar2018image ; wang2018hierarchical . However, most learning-based methods require laborious preprocessing steps and/or hand-crafted image features. As a result, their performance tends to be less robust than that of registration-based methods.
Recently, deep convolutional models have shown great success in biomedical image segmentation ronneberger2015u , and have been introduced to the field of HaN anatomy segmentation fritscher2016deep ; ibragimov2017segmentation ; ren2018interleaved ; hansch2018comparison . However, the existing HaN-related deep-learning-based methods either use sliding windows operating on patches, which cannot capture global features, or rely on atlas registration to obtain highly accurate small regions of interest during preprocessing. More appealing are models that receive the whole-volume image as input without heavy-duty preprocessing and then directly output the segmentations of all anatomies of interest.
In this work, we study the feasibility and performance of constructing and training a deep neural net model that jointly segments all OARs in a fully end-to-end fashion, receiving raw whole-volume HaN CT images as input and generating the masks of all OARs in one shot. Such a system can improve upon current automated anatomy segmentation by simplifying the entire computational pipeline, cutting computational cost, and improving segmentation accuracy.
There are, however, a number of obstacles that need to be overcome to make such a deep convolutional neural net based system successful. First, in designing network architectures, we ought to keep the limited capacity of GPU memory in mind. Since whole-volume images are used as input, each image feature map is 3D, limiting the size and number of feature maps at each layer of the neural net due to memory constraints. Second, OARs contain organs/regions of variable sizes, including some OARs with very small volumes. Accurately segmenting these small-volumed structures is always a challenge. Third, existing datasets of HaN CT images contain data collected from various sources with non-standardized annotations. In particular, many images in the training data contain annotations of only a subset of OARs. How to effectively handle missing annotations needs to be addressed in the design of the training algorithms.
Here we propose a deep learning based framework, called AnatomyNet, to segment OARs using a single network, trained end-to-end. The network receives whole-volume CT images as input, and outputs the segmented masks of all OARs. Our method requires minimal pre- and post-processing, and utilizes features from all slices to segment anatomical regions. We overcome the three major obstacles outlined above through designing a novel network architecture and utilizing novel loss functions for training the network.
More specifically, our major contributions include the following. First, we extend the standard U-Net model for 3D HaN image segmentation by incorporating a new feature extraction component based on squeeze-and-excitation (SE) residual blocks hu2017squeeze . Second, we propose a new loss function for better segmenting small-volumed structures. Small volume segmentation suffers from an imbalanced data problem, where the number of voxels inside the small region is much smaller than the number outside, making training difficult. New classes of loss functions have been proposed to address this issue, including Tversky loss salehi2017tversky , generalized Dice coefficients crum2006generalized ; sudre2017generalised , focal loss lin2017focal , sparsity label assignment deep multi-instance learning zhu2017deep , and exponential logarithmic loss. However, we found that none of these solutions alone was adequate for the extremely imbalanced problem (about 1/100,000) we face in segmenting small OARs, such as the optic nerves and chiasm, from HaN images. We propose a new loss based on the combination of Dice scores and focal losses, and empirically show that it leads to better results than other losses. Finally, to tackle the missing annotation problem, we train AnatomyNet with a masked and weighted loss function to account for missing data and to balance the contributions of the losses originating from different OARs.
To train and evaluate the performance of AnatomyNet, we curated a dataset of 261 head and neck CT images from a number of publicly available sources. We carried out systematic experimental analyses on various components of the network, and demonstrated their effectiveness by comparing with other published methods. When benchmarked on the test dataset from the MICCAI 2015 competition on HaN segmentation, the AnatomyNet outperformed the state-of-the-art method by 3.3% in terms of Dice coefficient (DSC), averaged over nine anatomical structures.
The rest of the paper is organized as follows. Section II.2 describes the network structure and SE residual block of AnatomyNet. The design of the loss function for AnatomyNet is presented in Section II.3. How to handle missing annotations is addressed in Section II.4. Section III validates the effectiveness of the proposed networks and components. Discussions and limitations are in Section IV. We conclude the work in Section V.
II Materials and Methods
Next we describe our deep learning model to delineate OARs from head and neck CT images. Our model receives whole-volume HaN CT images of a patient as input and outputs the 3D binary masks of all OARs at once. The dimensions of HaN CT images vary across patients because of image cropping and different settings. In this work, we focus on segmenting the nine OARs most relevant to head and neck cancer radiation therapy - brain stem, chiasm, mandible, optic nerve left, optic nerve right, parotid gland left, parotid gland right, submandibular gland left, and submandibular gland right. Therefore, our model produces nine 3D binary masks for each whole-volume CT.
Before we introduce our model, we first describe the curation of training and testing data. Our data consists of whole-volume CT images together with manually generated binary masks of the nine anatomies described above. They were collected from four publicly available sources: 1) DATASET 1 (38 samples) consists of the training set from the MICCAI Head and Neck Auto Segmentation Challenge 2015 raudaschl2017evaluation . 2) DATASET 2 (46 samples) consists of CT images from the Head-Neck Cetuximab collection, downloaded from The Cancer Imaging Archive (TCIA) https://wiki.cancerimagingarchive.net/ clark2013cancer . 3) DATASET 3 (177 samples) consists of CT images from four different institutions in Québec, Canada vallieres2017radiomics , also downloaded from TCIA clark2013cancer . 4) DATASET 4 (10 samples) consists of the test set from the MICCAI HaN Segmentation Challenge 2015. We combined the first three datasets and used the aggregated data as our training data, altogether yielding 261 training samples. DATASET 4 was used as our final evaluation/test dataset so that we can benchmark our performance against published results evaluated on the same dataset. Each of the training and test samples contains both head and neck images and the corresponding manually delineated OARs.
In generating these datasets, we carried out several data cleaning steps, including 1) mapping annotation names assigned by different doctors in different hospitals into unified annotation names, 2) finding correspondences between the annotations and the CT images, 3) converting annotations in the radiation therapy format into usable ground truth label masks, and 4) removing the chest region from CT images to focus on head and neck anatomies. We have taken care to make sure that the four datasets described above are non-overlapping to avoid any potential pitfall of inflating testing or validation performance.
II.2 Network architecture
We take advantage of the robust feature learning mechanisms of squeeze-and-excitation (SE) residual blocks hu2017squeeze , and incorporate them into a modified U-Net architecture for medical image segmentation. We propose a novel three-dimensional U-Net with SE residual blocks and a hybrid focal and dice loss for anatomical segmentation, as illustrated in Fig. 1.
AnatomyNet builds on the U-Net ronneberger2015u , one of the most commonly used neural net architectures in biomedical image segmentation. The standard U-Net contains multiple down-sampling layers via max-pooling or convolutions with stride two. Although they are beneficial for learning high-level features to segment complex, large anatomies, these down-sampling layers can hurt the segmentation of small anatomies such as the optic chiasm, which occupies only a few slices in HaN CT images. We design AnatomyNet with only one down-sampling layer to account for the trade-off between GPU memory usage and network learning capacity. The down-sampling layer is placed in the first encoding block so that the feature maps and gradients in the following layers occupy less GPU memory than in other network structures. Inspired by the effectiveness of squeeze-and-excitation residual features on image object classification, we design 3D squeeze-and-excitation (SE) residual blocks in AnatomyNet for OARs segmentation. The SE residual block adaptively calibrates residual feature maps within each feature channel. The 3D SE residual learning extracts 3D features from the CT image directly by extending the two-dimensional squeeze, excitation, scale and convolutional functions to three dimensions. It can be formulated as
$$z_c = F_{sq}(u_c) = \frac{1}{S \times H \times W} \sum_{s=1}^{S} \sum_{h=1}^{H} \sum_{w=1}^{W} u_c(s, h, w),$$
$$\boldsymbol{s} = F_{ex}(\boldsymbol{z}, \boldsymbol{W}) = \sigma(\boldsymbol{W}_2 \, \delta(\boldsymbol{W}_1 \boldsymbol{z})), \qquad \tilde{u}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c,$$
where $u_c$ denotes the feature map of one channel from the residual feature $\boldsymbol{U} = [u_1, u_2, \dots, u_C]$. $F_{sq}$ is the squeeze function, which is global average pooling here; $S$, $H$, and $W$ are the number of slices, height, and width of $\boldsymbol{U}$, respectively. $F_{ex}$ is the excitation function with weights $\boldsymbol{W}_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $\boldsymbol{W}_2 \in \mathbb{R}^{C \times \frac{C}{r}}$, where $r$ is the reduction ratio. $\sigma$ is the sigmoid function, and $\delta$ is typically a ReLU function, but we use LeakyReLU in AnatomyNet maas2013rectifier . We use the learned scale value $s_c$ to calibrate the residual feature channel $u_c$, and obtain the calibrated residual feature $\tilde{u}_c$. The SE block is illustrated in the upper right corner of Fig. 1.
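As a minimal numpy sketch of these three steps (squeeze, excite, scale), the per-channel computation can be written as below; the channel count, reduction ratio, and random weights are illustrative assumptions, and in the network $\boldsymbol{W}_1$ and $\boldsymbol{W}_2$ are learned fully connected layers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def se_block_3d(U, W1, W2):
    """3D squeeze-and-excitation on a residual feature U of shape (C, S, H, W).

    Squeeze: global average pooling over the spatial (S, H, W) dimensions.
    Excitation: two fully connected layers with LeakyReLU and sigmoid.
    Scale: per-channel multiplication of U by the learned scale s.
    """
    C = U.shape[0]
    z = U.reshape(C, -1).mean(axis=1)        # squeeze: z_c, shape (C,)
    s = sigmoid(W2 @ leaky_relu(W1 @ z))     # excitation: s in (0, 1)^C
    return s[:, None, None, None] * U        # scale: u~_c = s_c * u_c

# Toy example: 8 channels, reduction ratio r = 4 (both values assumed).
rng = np.random.default_rng(0)
C, r = 8, 4
U = rng.standard_normal((C, 5, 6, 6))
W1 = rng.standard_normal((C // r, C)) * 0.1  # shape (C/r, C)
W2 = rng.standard_normal((C, C // r)) * 0.1  # shape (C, C/r)
out = se_block_3d(U, W1, W2)
assert out.shape == U.shape
```

Because the scale values lie in (0, 1), each channel of the output is an attenuated copy of the corresponding input channel.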
The AnatomyNet is a variant of U-Net with only one down-sampling layer and squeeze-and-excitation (SE) residual building blocks. The number before the symbol @ denotes the number of output channels, while the number after the symbol denotes the size of the feature map relative to the input. In the decoder, we use concatenated features. A hybrid loss combining dice loss and focal loss is employed to force the model to learn poorly classified voxels. A masked and weighted loss function is used to handle ground truth with missing annotations and to balance gradient descent, respectively. The decoder layers are symmetric with the encoder layers. The SE residual block is illustrated in the upper right corner.
The AnatomyNet replaces the standard convolutional layers in the U-Net with SE residual blocks to learn effective features. The input of AnatomyNet is a cropped whole-volume head and neck CT image. We remove the down-sampling layers in the second, third, and fourth encoder blocks to improve the performance of segmenting small anatomies. In the output block, we concatenate the input with the transposed convolution feature maps obtained from the second last block. After that, a convolutional layer with 16 kernels and a LeakyReLU activation function is employed. In the last layer, we use a convolutional layer with 10 kernels and a soft-max activation function to generate the segmentation probability maps for the nine OARs plus background.
II.3 Loss function
Small object segmentation is always a challenge in semantic segmentation. From the learning perspective, the challenge is caused by imbalanced data distribution, because image semantic segmentation requires pixel-wise labeling and small-volumed organs contribute less to the loss. In our case, the small-volumed organs, such as the optic chiasm, only take up about 1/100,000 of the whole-volume CT image (Fig. 2). The dice loss, defined as one minus the dice coefficient (DSC), can partly address the problem by turning the pixel-wise labeling problem into minimizing a class-level distribution distance salehi2017tversky .
Several methods have been proposed to alleviate the small-volumed organ segmentation problem. The generalized dice loss uses squared volume weights. However, it makes the optimization unstable in extremely unbalanced segmentation sudre2017generalised . The exponential logarithmic loss is inspired by the focal loss and applies at the class level as $L_{exp} = \mathbf{E}\left[(-\ln(D_c))^{\gamma}\right]$, where $D_c$ is the dice coefficient (DSC) for the class of interest, $\gamma$ can be set as 0.3, and $\mathbf{E}$ is the expectation over classes and whole-volume CT images. The gradient of the exponential logarithmic loss w.r.t. the DSC $D_c$ is $-\frac{\gamma (-\ln D_c)^{\gamma - 1}}{D_c}$. The absolute value of this gradient grows for well-segmented classes ($D_c$ close to 1). Therefore, the exponential logarithmic loss still places more weight on well-segmented classes, and is not effective in learning to improve poorly segmented classes.
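The claim about the gradient can be checked numerically; a small sketch, assuming $\gamma = 0.3$ as above:

```python
import math

def exp_log_grad(d, gamma=0.3):
    """Gradient of the exponential logarithmic loss (-ln d)**gamma w.r.t. the dice score d."""
    return -gamma * (-math.log(d)) ** (gamma - 1) / d

# The gradient magnitude grows as the class becomes better segmented (d -> 1),
# so the loss keeps pushing hardest on already well-segmented classes.
mags = [abs(exp_log_grad(d)) for d in (0.5, 0.9, 0.99)]
assert mags[0] < mags[1] < mags[2]
```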
In the AnatomyNet, we employ a hybrid loss consisting of contributions from both the dice loss and the focal loss lin2017focal . The dice loss learns the class distribution, alleviating the imbalanced voxel problem, whereas the focal loss forces the model to better learn poorly classified voxels. The total loss can be formulated as
$$L = L_{Dice} + \lambda L_{focal} = C - \sum_{c=0}^{C-1} \frac{TP_p(c)}{TP_p(c) + \alpha FN_p(c) + \beta FP_p(c)} - \lambda \frac{1}{N} \sum_{c=0}^{C-1} \sum_{n=1}^{N} g_n^c (1 - p_n^c)^2 \ln(p_n^c),$$
where $TP_p(c) = \sum_{n=1}^{N} p_n^c g_n^c$, $FN_p(c) = \sum_{n=1}^{N} (1 - p_n^c) g_n^c$ and $FP_p(c) = \sum_{n=1}^{N} p_n^c (1 - g_n^c)$ are the true positives, false negatives and false positives for class $c$ calculated from prediction probabilities, respectively, $p_n^c$ is the predicted probability of voxel $n$ being class $c$, $g_n^c$ is the ground truth of voxel $n$ being class $c$, $C$ is the total number of anatomies plus one (background), $\lambda$ is the trade-off between the dice loss $L_{Dice}$ and the focal loss $L_{focal}$, $\alpha$ and $\beta$ are the trade-offs of penalties for false negatives and false positives, which are set as 0.5 here, and $N$ is the total number of voxels in the CT image. $\lambda$ is set to 0.1, 0.5 or 1 based on the performance on the validation set. Because of size differences among HaN whole-volume CT images, we set the batch size to 1.
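A minimal numpy sketch of this hybrid loss, using the stated settings $\alpha = \beta = 0.5$ and an assumed $\lambda = 0.5$; the toy class count and voxel grid are illustrative, and a small epsilon is added for numerical stability:

```python
import numpy as np

def hybrid_loss(p, g, lam=0.5, alpha=0.5, beta=0.5, eps=1e-8):
    """Hybrid of a Tversky-style dice loss and a focal loss.

    p, g: arrays of shape (C, N) holding predicted probabilities and one-hot
    ground truth for C classes (anatomies + background) over N voxels.
    """
    tp = (p * g).sum(axis=1)        # soft true positives per class
    fn = ((1 - p) * g).sum(axis=1)  # soft false negatives per class
    fp = (p * (1 - g)).sum(axis=1)  # soft false positives per class
    C, N = p.shape
    dice_loss = C - (tp / (tp + alpha * fn + beta * fp + eps)).sum()
    focal = -(g * (1 - p) ** 2 * np.log(p + eps)).sum() / N
    return dice_loss + lam * focal

# Toy check: confident correct predictions give a smaller loss than
# uninformative ones.
g = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
good = np.abs(g - 0.05)     # probabilities close to the ground truth
bad = np.full_like(g, 0.5)  # uninformative predictions
assert hybrid_loss(good, g) < hybrid_loss(bad, g)
```

The focal term's $(1 - p_n^c)^2$ factor is what down-weights already well-classified voxels, so the gradient concentrates on the hard ones.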
II.4 Handling missing annotations
Another challenge in anatomical segmentation is the missing annotations common in training datasets, because annotators often include different anatomies in their annotations. For example, we collected 261 head and neck CT images with anatomical segmentation ground truths from five hospitals, and the numbers of annotations for the nine anatomies differ considerably, as shown in Table 1. To handle this challenge, we mask out the background (denoted as class 0) and the missing anatomies. Let
$c \in \{0, 1, \dots, 9\}$ denote the index of anatomies, with the background denoted as label 0. We employ a mask vector $\boldsymbol{m}^{(i)}$ for the $i$th CT image: $m_c^{(i)} = 1$ if anatomy $c$ is annotated, and $m_c^{(i)} = 0$ otherwise. For the background, the mask $m_0^{(i)}$ is 1 if all anatomies are annotated, and 0 otherwise.
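This masking rule can be sketched in plain Python; the anatomy indices in the usage example are illustrative assumptions:

```python
def mask_vector(annotated, num_anatomies=9):
    """Build the per-image mask m: index 0 is background, 1..9 the anatomies.

    m[c] = 1 if anatomy c is annotated. The background mask m[0] is 1 only
    when every anatomy is annotated; otherwise the unlabeled voxels of a
    missing anatomy would wrongly be treated as background.
    """
    m = [0] * (num_anatomies + 1)
    for c in annotated:
        m[c] = 1
    m[0] = 1 if len(annotated) == num_anatomies else 0
    return m

# Fully annotated image vs. an image missing one anatomy (index 2, assumed).
assert mask_vector(set(range(1, 10))) == [1] * 10
assert mask_vector({1, 3, 4, 5, 6, 7, 8, 9}) == [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]
```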
|Anatomy||# Annotations|
|Opt Ner L||133|
|Opt Ner R||133|
The missing annotations for some anatomical structures cause imbalanced class-level annotations. To address this problem, we employ a weighted loss function so that the weights of different anatomies are updated in a balanced way. The weight for class $c$ is set as the inverse of its number of annotations, $w_c = 1 / \sum_i m_c^{(i)}$, so that the network weights are updated equally across anatomies. The masked and weighted dice loss for the $i$th CT image in equation 2 can be written as
$$L_{Dice}^{(i)} = C - \sum_{c=0}^{C-1} m_c^{(i)} w_c \frac{TP_p(c)}{TP_p(c) + \alpha FN_p(c) + \beta FP_p(c)}.$$
The focal loss with missing annotations for the $i$th CT image can be written as
$$L_{focal}^{(i)} = -\frac{1}{N} \sum_{c=0}^{C-1} m_c^{(i)} w_c \sum_{n=1}^{N} g_n^c (1 - p_n^c)^2 \ln(p_n^c).$$
We use the combined loss $L^{(i)} = L_{Dice}^{(i)} + \lambda L_{focal}^{(i)}$ in the AnatomyNet.
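The masking and inverse-count weighting can be sketched as follows; the per-class loss values and the third count are illustrative, while 261 and 133 echo the dataset sizes mentioned in the text:

```python
import numpy as np

def weighted_masked_loss(per_class_loss, mask, counts):
    """Combine per-class losses using annotation masks and inverse-count weights.

    per_class_loss: loss contribution of each class for one image, shape (C,).
    mask: binary mask vector for this image (0 for missing annotations).
    counts: number of training images annotated for each class; the weight
    w_c = 1 / counts[c] balances gradient updates across anatomies.
    """
    w = 1.0 / np.asarray(counts, dtype=float)
    return float((np.asarray(mask) * w * np.asarray(per_class_loss)).sum())

# A class missing from this image (mask 0) contributes nothing; a rarely
# annotated class (count 133 vs. 261) gets a proportionally larger weight.
loss = weighted_masked_loss([1.0, 1.0, 1.0], [1, 1, 0], [261, 133, 215])
assert abs(loss - (1 / 261 + 1 / 133)) < 1e-12
```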
II.5 Implementation details and performance evaluation
The model was first trained for 150 epochs, and then fine-tuned with stochastic gradient descent with momentum 0.9 for another 50 epochs. During training, we used affine transformations and elastic deformations for data augmentation, implemented on the fly.
We use the Dice coefficient (DSC) as the final evaluation metric, defined as $DSC = \frac{2TP}{2TP + FN + FP}$, where $TP$, $FN$, and $FP$ are the numbers of true positives, false negatives, and false positives, respectively.
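As a quick sanity check of this metric:

```python
def dsc(tp, fn, fp):
    """Dice coefficient: DSC = 2*TP / (2*TP + FN + FP)."""
    return 2 * tp / (2 * tp + fn + fp)

# Perfect overlap gives 1, no overlap gives 0, and the score
# penalizes false negatives and false positives symmetrically.
assert dsc(50, 0, 0) == 1.0
assert dsc(0, 10, 10) == 0.0
assert abs(dsc(80, 20, 20) - 0.8) < 1e-12
```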
We trained our deep learning model, AnatomyNet, on 261 training samples, evaluated its performance on the MICCAI head and neck segmentation challenge 2015 test data (10 samples, DATASET 4), and compared it to the performance of previous methods benchmarked on the same test dataset. Before presenting the final results, we first describe the rationale behind several design choices underlying AnatomyNet, including architectural design and model training.
III.1 Determining down-sampling scheme
The standard U-Net model has multiple down-sampling layers, which help the model learn high-level image features. However, down-sampling also reduces image resolution and makes it harder to segment small OARs such as optic nerves and chiasm. To evaluate the effect of the number of down-sampling layers on the segmentation performance, we experimented with four different down-sampling schemes shown in Table 2. Pool 1 uses only one down-sampling step, while Pool 2, 3, and 4 use 2, 3 and 4 down-sampling steps, respectively, distributed over consecutive blocks. With each down-sampling, the feature map size is reduced by half. We incorporated each of the four down-sampling schemes into the standard U-Net model, which was then trained on the training set and evaluated on the test set. For fair comparisons, we used the same number of filters in each layer. The decoder layers of each model are set to be symmetric with the encoder layers.
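The effect of repeated down-sampling on a small organ's extent can be illustrated with a short sketch; the 9-slice extent is an assumed example, and floor division is a simplification of pooling arithmetic:

```python
def slices_after_pooling(extent, num_pools):
    """Feature-map slices covering an organ after repeated 2x down-sampling."""
    for _ in range(num_pools):
        extent = max(1, extent // 2)
    return extent

# A small organ spanning ~9 slices keeps 4 feature-map slices after one
# down-sampling step (Pool 1) but collapses to a single slice after four
# steps (Pool 4), leaving deep layers little signal to segment it.
assert slices_after_pooling(9, 1) == 4
assert slices_after_pooling(9, 4) == 1
```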
|Nets||1st block||2nd block||3rd block||4th block|
The DSC scores of the four down-sampling schemes are shown in Table 3. On average, one down-sampling block (Pool 1) yields the best performance, beating the other down-sampling schemes on 6 out of 9 anatomies. The performance gaps are most prominent on three small-volumed OARs - optic nerve left, optic nerve right and optic chiasm - which demonstrates that a U-Net with one down-sampling layer works better on small organ segmentation than the standard U-Net. The probable reason is that small organs reside in only a few slices, and additional down-sampling layers are more likely to lose the features of these organs in the deeper layers. Based on these results, we decided to use only one down-sampling layer in AnatomyNet (Fig. 1).
|Anatomy||Pool 1||Pool 2||Pool 3||Pool 4|
|Optic Ner L||69.1||65.7||67.2||67.9|
|Optic Ner R||66.9||65.0||66.2||63.7|
III.2 Choosing network structures
In addition to down-sampling schemes, we also tested several other architectural design choices. The first concerns how to combine features from horizontal layers within the U-Net. The traditional U-Net uses concatenation to combine features from horizontal layers in the decoder, as illustrated with dashed lines in Fig. 1. However, the recent feature pyramid network (FPN) recommends summation to combine horizontal features lin2017feature . Another design choice concerns the local feature learning block within each layer. The traditional U-Net uses simple 2D convolutions, extended to 3D convolutions in our case. To learn more effective features, we tried two other feature learning blocks: a) residual learning, and b) squeeze-and-excitation residual learning. Altogether, we investigated the performance of the following six architectural design choices:
3D SE Res UNet, the architecture implemented in AnatomyNet (Fig. 1) with both squeeze-excitation residual learning and concatenated horizontal features.
3D Res UNet, replacing the SE Residual blocks in 3D SE Res UNet with residual blocks.
Vanilla U-Net, replacing the SE Residual blocks in 3D SE Res UNet with 3D convolutional layers.
3D SE Res UNet (sum), replacing concatenations in 3D SE Res UNet with summations. When the numbers of channels are different, one additional 3D convolutional layer is used to map the encoder to the same size as the decoder.
3D Res UNet (sum), replacing the SE Residual blocks in 3D SE Res UNet (sum) with residual blocks.
Vanilla U-Net (sum), replacing the SE Residual blocks in 3D SE Res UNet (sum) with 3D convolutional layers.
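The difference between concatenation and summation for combining horizontal features can be sketched in numpy; the channel counts are illustrative, and the channel-mapping matrix stands in for the additional 3D convolutional layer mentioned above:

```python
import numpy as np

def combine_concat(enc, dec):
    """Concatenate encoder and decoder features along the channel axis."""
    return np.concatenate([enc, dec], axis=0)

def combine_sum(enc, dec, proj=None):
    """Sum features; a channel-mapping matrix aligns mismatched channel counts."""
    if proj is not None:
        # Project encoder channels: (C_dec, C_enc) @ (C_enc, S*H*W).
        flat = proj @ enc.reshape(enc.shape[0], -1)
        enc = flat.reshape((proj.shape[0],) + enc.shape[1:])
    return enc + dec

rng = np.random.default_rng(1)
enc = rng.standard_normal((16, 4, 8, 8))   # encoder features
dec = rng.standard_normal((32, 4, 8, 8))   # decoder features, more channels
proj = rng.standard_normal((32, 16)) * 0.1

# Concatenation keeps both feature sets intact (channels add up), whereas
# summation fixes the combination rule and needs the projection to match shapes.
assert combine_concat(enc, rng.standard_normal((16, 4, 8, 8))).shape == (32, 4, 8, 8)
assert combine_sum(enc, dec, proj).shape == (32, 4, 8, 8)
```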
The six models were trained on the same training dataset with identical training procedures. Their performances, measured by DSC on the test dataset, are summarized in Table 4. We make a few observations from this study. First, feature concatenation shows consistently better performance than feature summation; it seems concatenation provides more flexibility in feature learning than the fixed summation operation. Second, the 3D SE residual U-Net with concatenation yields the best performance, demonstrating the power of SE features for 3D semantic segmentation: the SE scheme learns channel-wise calibration and helps model the interdependencies among channel-wise features, as discussed in Section II.2.
The SE residual block learning incorporated in AnatomyNet results in 2-3% improvements in DSC over the traditional U-Net model, outperforming U-Net in 6 out of 9 anatomies.
III.3 Choosing loss functions
We also validated the effects of different loss functions on training and model performance. To differentiate the effects of loss functions from network design choices, we used only the vanilla U-Net and trained it with different loss functions. This way, we can focus on studying the impact of loss functions on model performance. We tried four loss functions: Dice loss, exponential logarithmic loss, the hybrid loss between Dice loss and focal loss, and the hybrid loss between Dice loss and cross entropy. The trade-off parameter in the hybrid losses ($\lambda$ in Eq. 2) was chosen from 0.1, 0.5 or 1 based on the performance on a validation set. For the hybrid loss between Dice loss and focal loss, the best $\lambda$ was found to be 0.5. For the hybrid loss between Dice loss and cross entropy, the best $\lambda$ was 0.1.
|Optic Ner L||69.1||67.9||68.4||69.6|
|Optic Ner R||66.9||65.9||69.1||67.4|
The performances of models trained with the four loss functions described above are shown in Table 5, measured as the average DSC on the test dataset. We make a few observations from this experiment. First, the two hybrid loss functions consistently outperform the simple Dice and exponential logarithmic losses, beating them on 8 out of 9 anatomies. This suggests that taking the voxel-level loss into account can improve performance. Second, between the two hybrid losses, Dice combined with focal loss performs better. In particular, it leads to significant improvements (2-3%) on segmenting two small anatomies - optic nerve R and optic chiasm - consistent with our motivation discussed in Section II.3.
Based on the above observations, the hybrid loss with Dice combined with focal loss was used to train AnatomyNet, and benchmark its performance against previous methods.
III.4 Comparing to state-of-the-art methods
After having determined the structure of AnatomyNet and the loss function for training it, we set out to compare its performance with previous state-of-the-art methods. For consistency purposes, all models were evaluated on the MICCAI head and neck challenge 2015 test set. The average DSC scores of different methods are summarized in Table 6. The best result for each anatomy from the MICCAI 2015 challenge is denoted as MICCAI 2015 raudaschl2017evaluation ; these results may come from different teams using different methods.
The MICCAI 2015 competition merged left and right paired organs into one target, while we treat them as separate anatomies. As a result, the MICCAI 2015 competition posed a seven-class segmentation problem (6 organs + background), while ours is a ten-class segmentation, which makes the task more challenging. Nonetheless, AnatomyNet achieves an average Dice coefficient of 79.25, which is 3.3% better than the best result from the MICCAI 2015 challenge (Table 6). In particular, the improvements on the optic nerves are about 9-10%, suggesting that deep learning models are better equipped to handle small anatomies with large variations among patients. AnatomyNet also outperforms the atlas-based ConvNets of fritscher2016deep on all classes, likely because its end-to-end structure operating on the whole-volume HaN CT image captures global information about the relative spatial locations among anatomies. Compared to the interleaved ConvNets of ren2018interleaved on small-volumed organs (chiasm, optic nerve left and optic nerve right), AnatomyNet is better in 2 out of 3 cases. The interleaved ConvNets achieved higher performance on the chiasm, likely because their prediction operated on a small region of interest (ROI) obtained first through atlas registration, while AnatomyNet operates directly on whole-volume slices.
Aside from the improvement in segmentation accuracy, another advantage of AnatomyNet is that it is orders of magnitude faster than the traditional atlas-based methods used in the MICCAI 2015 challenge. AnatomyNet takes about 0.12 seconds to fully segment a whole-volume head and neck CT image. By contrast, atlas-based methods can take a dozen minutes to complete one segmentation, depending on implementation details and the number of atlases used.
III.5 Visualizations on MICCAI 2015 test
In Fig. 3 and Fig. 4, we visualize the segmentation results of AnatomyNet on four cases from the test dataset. Each row represents one (left and right) anatomy or a 3D reconstructed anatomy. Each column denotes one sample. The last two columns show cases where AnatomyNet did not perform well; these cases are discussed in Section IV.2. Green denotes the ground truth. Red represents predicted segmentation results. Yellow denotes the overlap between ground truth and prediction. We visualize the slices containing the largest area of each related organ. For small OARs such as the optic nerves and chiasm (shown in Fig. 4), only cross-sectional slices are shown.
III.6 Visualizations on independent samples
To check the generalization ability of the trained model, we also visualize its segmentation results on a small internal dataset in Fig. 5 and Fig. 6. Visual inspection suggests that the trained model performed well on this independent test set. In general, the performance on larger anatomies is better than on small ones (such as the optic chiasm), which can be attributed to both manual annotation inconsistencies and algorithmic challenges in segmenting these small regions.
IV.1 Impacts of training datasets
The training datasets we collected come from various sources, with annotations done by different groups of physicians following different guiding criteria. It is unclear how the different datasets might contribute to model performance. For this purpose, we carried out an experiment comparing model performance under two different training sets: a) only the training data provided in the MICCAI head and neck segmentation challenge 2015 (DATASET 1, 38 samples), and b) the combined training data with 216 samples (DATASETS 1-3 combined). In terms of annotations, the first dataset is more consistent with the test dataset, and therefore less likely to suffer from annotation inconsistencies. On the other hand, its size is much smaller, posing challenges to training deep learning models.
Table 7 shows the test performance of a 3D Res U-Net model trained with each of the two datasets after applying the same training procedure of minimizing the Dice loss. We make a few observations. First, overall the model trained with the larger dataset (DATASETS 1-3) achieves better performance, with a 2.5% improvement over the smaller dataset, suggesting that the larger sample size does lead to better performance. Second, although the larger dataset improves performance on average, there are some OARs on which the smaller dataset actually does better, most noticeably the mandible and optic nerves. This suggests that there are indeed significant data annotation inconsistencies between different datasets, whose impact on model performance cannot be neglected. Third, to further check the generalization ability of the model trained with DATASET 1 only, we checked its performance on DATASETS 2-3 and found it was generally poor. Altogether, this suggests that both annotation quality and data size are important for training deep learning models. How to address inconsistencies in existing datasets is an interesting open question to be addressed in the future.
|Datasets||DATASET 1||DATASET 1,2,3|
|Opt Ner L||74.62||69.80|
|Opt Ner R||73.77||67.50|
There are a couple of limitations in the current implementation of AnatomyNet. First, AnatomyNet treats voxels equally in the loss function and network structure. As a result, it cannot model shape priors and connectivity patterns effectively. The translation invariance of convolutions is great for learning appearance features, but comes at the cost of losing spatial information. For example, AnatomyNet sometimes misclassifies a small background region as an OAR (Figs. 3, 4). Such mis-classifications result in partial anatomical structures, which could easily be excluded if overall shape information were also learned. A network with multi-resolution outputs from different levels of decoders, or deeper layers with bigger local receptive fields, should help alleviate this issue.
|Anatomy||MICCAI 2015||Ren et al. 2018 ren2018interleaved||AnatomyNet|
|Optic Ner L||3-8||2.33±0.84||4.85±2.32|
|Optic Ner R||3-8||2.13±0.96||4.77±4.27|
Second, our evaluation of segmentation performance is primarily based on the Dice coefficient. Although it is a common metric in image segmentation, it may not be the most relevant one for clinical applications. Identifying a better metric in consultation with practicing physicians would be an important next step toward real clinical application of the method. Along this direction, we quantitatively evaluated geometric surface distance by computing the average 95th-percentile Hausdorff distance (unit: mm; see raudaschl2017evaluation for the detailed formulation) (Table 8). We note that this metric poses more of a challenge to AnatomyNet than to methods operating on local patches (such as the method by Ren et al. ren2018interleaved ), because AnatomyNet operates on whole volumes, and a single outlier prediction outside the normal range of an OAR can lead to a drastically larger Hausdorff distance. Nonetheless, AnatomyNet is roughly within the range of the best MICCAI 2015 challenge results on six out of nine anatomies raudaschl2017evaluation . Its performance on this metric could be improved by incorporating surface and shape priors into the model, as discussed above nikolov2018deep ; zhu2018adversarial .
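For reference, the 95th-percentile Hausdorff distance between two segmentation surfaces can be computed roughly as follows. This is a brute-force sketch assuming each surface is represented as a list of (x, y, z) points in mm; the name `hd95` and the nearest-rank percentile rule are illustrative assumptions, not the exact formulation used in raudaschl2017evaluation.

```python
import math

def hd95(surface_a, surface_b):
    """95th-percentile symmetric Hausdorff distance between two
    surfaces, each given as a list of (x, y, z) points in mm."""
    def directed(src, dst):
        # For every point in src, distance to its nearest point in dst
        return [min(math.dist(p, q) for q in dst) for p in src]

    dists = sorted(directed(surface_a, surface_b) +
                   directed(surface_b, surface_a))
    # Nearest-rank index of the 95th percentile
    k = max(0, math.ceil(0.95 * len(dists)) - 1)
    return dists[k]
```

A single stray false-positive voxel far from the true structure inflates the upper tail of `dists`, which illustrates why whole-volume predictions with occasional outliers are penalized heavily by this metric.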
In summary, we have proposed an end-to-end, atlas-free, and fully automated deep learning model for anatomy segmentation from head and neck CT images. We propose a number of techniques to improve model performance and facilitate model training. To alleviate the severe class imbalance in small-volume organ segmentation, the network is trained with a hybrid loss combining a class-level Dice loss with a focal loss (which forces the model to learn poorly predicted voxels better), and only a single down-sampling layer is used in the encoder. To handle missing annotations, a masked and weighted loss is implemented for accurate and balanced weight updates. A 3D squeeze-and-excitation (SE) block is incorporated into the U-Net to learn effective features. Our experiments demonstrate that our model provides new state-of-the-art results on head and neck OAR segmentation, outperforming previous models by 3.3%. It is also significantly faster, requiring only a fraction of a second to segment nine anatomies from a head and neck CT. In addition, the model processes a whole-volume CT and delineates all OARs in one pass. Altogether, our work suggests that deep learning offers a flexible and efficient framework for delineating OARs from CT images. With additional training data and improved annotations, it should be possible to further improve the quality of auto-segmentation, bringing it closer to real clinical practice.
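The hybrid loss described above can be sketched as follows. This is a minimal illustration in plain Python; the function names, the mixing weight `lam`, and the default `gamma=2.0` are assumptions for illustration, not the paper's exact hyperparameters.

```python
import math

def soft_dice_loss(probs, targets, eps=1e-5):
    # Class-level term: 1 - soft Dice coefficient
    inter = sum(p * t for p, t in zip(probs, targets))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(targets) + eps)

def focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    # Voxel-level term: down-weights easy voxels so the model
    # focuses on poorly predicted ones
    total = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)       # clip for numerical safety
        pt = p if t == 1 else 1.0 - p          # probability of the true class
        total += -((1.0 - pt) ** gamma) * math.log(pt)
    return total / len(probs)

def hybrid_loss(probs, targets, lam=0.5):
    # Weighted sum of the class-level and voxel-level terms;
    # for a structure with a missing annotation, its contribution
    # would simply be masked out of the sum.
    return soft_dice_loss(probs, targets) + lam * focal_loss(probs, targets)
```

A near-perfect prediction drives both terms toward zero, while the focal term keeps contributing gradient for the hard voxels of small organs that a plain Dice loss tends to neglect.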
Acknowledgements. We would like to acknowledge the support received from NVIDIA on GPU computing, and helpful discussions with Tang H and Yang L.
- (1) L. A. Torre, F. Bray, R. L. Siegel, J. Ferlay, J. Lortet-Tieulent, and A. Jemal, “Global cancer statistics, 2012,” CA: a cancer journal for clinicians, 2015.
- (2) X. Han, M. S. Hoogeman, P. C. Levendag, L. S. Hibbard, D. N. Teguh, P. Voet, A. C. Cowen, and T. K. Wolf, “Atlas-based auto-segmentation of head and neck ct images,” in MICCAI, 2008.
- (3) G. Sharp, K. D. Fritscher, V. Pekar, M. Peroni, N. Shusharina, H. Veeraraghavan, and J. Yang, “Vision 20/20: perspectives on automated image segmentation for radiotherapy,” Medical physics, vol. 41, no. 5, 2014.
- (4) P. F. Raudaschl, P. Zaffino, G. C. Sharp, M. F. Spadea, A. Chen, B. M. Dawant, T. Albrecht, T. Gass, C. Langguth, M. Lüthi et al., “Evaluation of segmentation methods on head and neck ct: Auto-segmentation challenge 2015,” Medical physics, 2017.
- (5) P. W. Voet, M. L. Dirkx, D. N. Teguh, M. S. Hoogeman, P. C. Levendag, and B. J. Heijmen, “Does atlas-based autosegmentation of neck levels require subsequent manual contour editing to avoid risk of severe target underdosage? a dosimetric analysis,” Radiotherapy and Oncology, vol. 98, no. 3, pp. 373–377, 2011.
- (6) A. Isambert, F. Dhermain, F. Bidault, O. Commowick, P.-Y. Bondiau, G. Malandain, and D. Lefkopoulos, “Evaluation of an atlas-based automatic segmentation software for the delineation of brain organs at risk in a radiation therapy clinical context,” Radiotherapy and oncology, vol. 87, no. 1, pp. 93–99, 2008.
- (7) K. D. Fritscher, M. Peroni, P. Zaffino, M. F. Spadea, R. Schubert, and G. Sharp, “Automatic segmentation of head and neck ct images for radiotherapy treatment planning using multiple atlases, statistical appearance models, and geodesic active contours,” Medical physics, vol. 41, no. 5, 2014.
- (8) O. Commowick, V. Grégoire, and G. Malandain, “Atlas-based delineation of lymph node levels in head and neck computed tomography images,” Radiotherapy and Oncology, vol. 87, no. 2, pp. 281–289, 2008.
- (9) R. Sims, A. Isambert, V. Grégoire, F. Bidault, L. Fresco, J. Sage, J. Mills, J. Bourhis, D. Lefkopoulos, O. Commowick et al., “A pre-clinical assessment of an atlas-based automatic segmentation tool for the head and neck,” Radiotherapy and Oncology, 2009.
- (10) V. Fortunati, R. F. Verhaart, F. van der Lijn, W. J. Niessen, J. F. Veenland, M. M. Paulides, and T. van Walsum, “Tissue segmentation of head and neck ct images for treatment planning: a multiatlas approach combined with intensity modeling,” Medical physics, 2013.
- (11) R. F. Verhaart, V. Fortunati, G. M. Verduijn, A. Lugt, T. Walsum, J. F. Veenland, and M. M. Paulides, “The relevance of mri for patient modeling in head and neck hyperthermia treatment planning: A comparison of ct and ct-mri based tissue segmentation on simulated temperature,” Medical physics, vol. 41, no. 12, 2014.
- (12) C. Wachinger, K. Fritscher, G. Sharp, and P. Golland, “Contour-driven atlas-based segmentation,” IEEE transactions on medical imaging, vol. 34, no. 12, pp. 2492–2505, 2015.
- (13) H. Duc, K. Albert, G. Eminowicz, R. Mendes, S.-L. Wong, J. McClelland, M. Modat, M. J. Cardoso, A. F. Mendelson, C. Veiga et al., “Validation of clinical acceptability of an atlas-based segmentation algorithm for the delineation of organs at risk in head and neck cancer,” Medical physics, vol. 42, no. 9, pp. 5027–5034, 2015.
- (14) V. Fortunati, R. F. Verhaart, W. J. Niessen, J. F. Veenland, M. M. Paulides, and T. van Walsum, “Automatic tissue segmentation of head and neck mr images for hyperthermia treatment planning,” Physics in Medicine & Biology, vol. 60, no. 16, p. 6547, 2015.
- (15) T. Zhang, Y. Chi, E. Meldolesi, and D. Yan, “Automatic delineation of on-line head-and-neck computed tomography images: toward on-line adaptive radiotherapy,” International Journal of Radiation Oncology* Biology* Physics, vol. 68, no. 2, pp. 522–530, 2007.
- (16) A. Chen, M. A. Deeley, K. J. Niermann, L. Moretti, and B. M. Dawant, “Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck ct images,” Medical physics, vol. 37, no. 12, pp. 6338–6346, 2010.
- (17) A. A. Qazi, V. Pekar, J. Kim, J. Xie, S. L. Breen, and D. A. Jaffray, “Auto-segmentation of normal and target structures in head and neck ct images: A feature-driven model-based approach,” Medical physics, 2011.
- (18) C. Leavens, T. Vik, H. Schulz, S. Allaire, J. Kim, L. Dawson, B. O’Sullivan, S. Breen, D. Jaffray, and V. Pekar, “Validation of automatic landmark identification for atlas-based segmentation for radiation treatment planning of the head-and-neck region,” in SPIE, 2008.
- (19) H. Xu, A. Arsene Henry, M. Robillard, M. Amessis, and Y. M. Kirova, “The use of new delineation tool “mirada” at the level of regional lymph nodes, step-by-step development and first results for early-stage breast cancer patients,” The British journal of radiology, 2018.
- (20) C. Tam, X. Yang, S. Tian, X. Jiang, J. Beitler, and S. Li, “Automated delineation of organs-at-risk in head and neck ct images using multi-output support vector regression,” in SPIE, 2018.
- (21) X. Wu, J. K. Udupa, Y. Tong, D. Odhner, G. V. Pednekar, C. B. Simone, D. McLaughlin, C. Apinorasethkul, J. Lukens, D. Mihailidis et al., “Auto-contouring via automatic anatomy recognition of organs at risk in head and neck cancer on ct images,” in SPIE, 2018.
- (22) Y. Tong, J. K. Udupa, X. Wu, D. Odhner, G. Pednekar, C. B. Simone, D. McLaughlin, C. Apinorasethkul, G. Shammo, P. James et al., “Hierarchical model-based object localization for auto-contouring in head and neck radiation therapy planning,” in SPIE, 2018.
- (23) G. V. Pednekar, J. K. Udupa, D. J. McLaughlin, X. Wu, Y. Tong, C. B. Simone, J. Camaratta, and D. A. Torigian, “Image quality and segmentation,” in SPIE, 2018.
- (24) Z. Wang, L. Wei, L. Wang, Y. Gao, W. Chen, and D. Shen, “Hierarchical vertex regression-based segmentation of head and neck ct images for radiotherapy planning,” IEEE TIP, 2018.
- (25) O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in MICCAI, 2015.
- (26) K. Fritscher, P. Raudaschl, P. Zaffino, M. F. Spadea, G. C. Sharp, and R. Schubert, “Deep neural networks for fast segmentation of 3d medical images,” in MICCAI, 2016.
- (27) B. Ibragimov and L. Xing, “Segmentation of organs-at-risks in head and neck ct images using convolutional neural networks,” Medical physics, 2017.
- (28) X. Ren, L. Xiang, D. Nie, Y. Shao, H. Zhang, D. Shen, and Q. Wang, “Interleaved 3d-cnns for joint segmentation of small-volume structures in head and neck ct images,” Medical physics, 2018.
- (29) A. Hänsch, M. Schwier, T. Gass, T. Morgas, B. Haas, J. Klein, and H. K. Hahn, “Comparison of different deep learning approaches for parotid gland segmentation from ct images,” in SPIE, 2018.
- (30) J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in IEEE CVPR, 2018.
- (31) S. S. M. Salehi, D. Erdogmus, and A. Gholipour, “Tversky loss function for image segmentation using 3d fully convolutional deep networks,” in International Workshop on MLMI, 2017.
- (32) W. R. Crum, O. Camara, and D. L. Hill, “Generalized overlap measures for evaluation and validation in medical image analysis,” IEEE TMI, 2006.
- (33) C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, 2017.
- (34) T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal loss for dense object detection,” in CVPR, 2017.
- (35) W. Zhu, Q. Lou, Y. S. Vang, and X. Xie, “Deep multi-instance networks with sparse label assignment for whole mammogram classification,” in MICCAI, 2017.
- (36) K. Clark, B. Vendt, K. Smith, J. Freymann, J. Kirby, P. Koppel, S. Moore, S. Phillips, D. Maffitt, M. Pringle et al., “The cancer imaging archive (tcia): maintaining and operating a public information repository,” Journal of digital imaging, vol. 26, no. 6, pp. 1045–1057, 2013.
- (37) M. Vallières, E. Kay-Rivest, L. J. Perrin, X. Liem, C. Furstoss, H. J. Aerts, N. Khaouam, P. F. Nguyen-Tan, C.-S. Wang, K. Sultanem et al., “Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer,” Scientific reports, vol. 7, no. 1, p. 10117, 2017.
- (38) W. Zhu, C. Liu, W. Fan, and X. Xie, “Deeplung: Deep 3d dual path nets for automated pulmonary nodule detection and classification,” IEEE WACV, 2018.
- (39) W. Zhu, Y. S. Vang, Y. Huang, and X. Xie, “Deepem: Deep 3d convnets with em for weakly supervised pulmonary nodule detection,” MICCAI, 2018.
- (40) A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. icml, vol. 30, no. 1, 2013, p. 3.
- (41) T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude,” COURSERA: Neural networks for machine learning, vol. 4, no. 2, pp. 26–31, 2012.
- (42) T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in CVPR, vol. 1, no. 2, 2017, p. 4.
- (43) S. Nikolov, S. Blackwell, R. Mendes, J. De Fauw, C. Meyer, C. Hughes, H. Askham, B. Romera-Paredes, A. Karthikesalingam, C. Chu et al., “Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy,” arXiv preprint arXiv:1809.04430, 2018.
- (44) W. Zhu, X. Xiang, T. D. Tran, G. D. Hager, and X. Xie, “Adversarial deep structured nets for mass segmentation from mammograms,” in IEEE ISBI, 2018.