Deep learning and its application to medical image segmentation

by Holger R. Roth, et al.

One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Because deep learning frameworks learn hierarchical features directly from the data, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. In particular, fully convolutional architectures have proven efficient for the segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows state-of-the-art performance in multi-organ segmentation.




I Introduction

Automated segmentation of medical images is challenging because of the large shape and size variations of anatomy between patients. Furthermore, low contrast to surrounding tissues can make automated segmentation difficult [1]. Recent advances in this field have mainly been due to the application of deep learning based methods that allow the efficient learning of features directly from the imaging data. In particular, the development of fully convolutional neural networks (FCN) [2] has further improved the state-of-the-art in semantic segmentation of medical images.

In this article, we describe how FCNs have been derived from CNNs and how to utilize 3D FCNs that can segment volumetric medical images with high accuracy and robustness.

I-A Convolutional neural networks (CNN)

Many of the recent advances in computer vision are due to the efficient application of convolutional neural networks (CNN) on graphics processing units (GPUs). GPU acceleration has significantly sped up computations, allowing the training of very deep and complex models on large datasets. For example, Krizhevsky et al. [3] nearly halved the error rate on the ImageNet challenge dataset from one year to the next by training a deep CNN on two GPU cards on over one million images. CNNs are effective because they can learn hierarchical feature representations of the image in a purely data-driven manner. This means that features which are good for classification are learned from the images given only a supervisory signal that defines the desired classification output. This so-called “supervised learning” has recently been applied to many fields of science, including biomedical and radiological imaging [4, 5, 6, 7], and has significantly advanced the state-of-the-art [8].

Typically, a CNN consists of several layers of convolutional, pooling, and fully-connected (or densely connected) neural network layers [9]. The convolutional layers make use of spatial correlation in the input images by sharing the filter kernel weights for the computation of each feature map. Pooling layers reduce the dimensions of each input feature map while preserving the most relevant feature responses; commonly used variants include max- and average-pooling. Max-pooling can also add some invariance to local shifts of objects in the input image. The outputs of each CNN layer are typically fed to non-linear activation functions (often rectified linear units (ReLUs) [3]). The use of non-linear activation functions allows us to model very complex mappings between the input image and the desired outputs.
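The convolution, pooling, and ReLU operations described here can be sketched in a few lines of NumPy. This is a toy illustration of the mechanics (weight sharing, pooling, non-linearity), not the authors' GPU implementation, and the edge-filter kernel is made up for the example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: the same kernel weights are shared
    across all spatial positions (weight sharing)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: keeps the strongest response in
    each size x size window, halving each spatial dimension."""
    h, w = x.shape
    x = x[:h // size * size, :w // size * size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    """Rectified linear unit: the non-linear activation."""
    return np.maximum(x, 0)

# toy 6x6 "image" passed through one conv -> ReLU -> pool stage
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0.], [0., -1.]])   # simple diagonal difference filter
feat = relu(conv2d(img, kernel))           # 5x5 feature map
pooled = max_pool2d(feat)                  # 2x2 after pooling
```

Stacking several such conv/ReLU/pool stages, followed by fully-connected layers and a softmax, yields the per-image classifier of Fig. 1.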

Figure 1 shows a schematic overview of a typical CNN architecture that produces a per-image prediction by using a softmax output for multi-class classification. This type of architecture has been successfully applied to many medical imaging tasks. A few examples in radiology are anatomy classification [10, 11], false-positive reduction for computer-aided detection of lymph nodes or colonic polyps [12], the detection of pulmonary nodules [13], and the detection of pulmonary embolisms [14]. CNNs have also been successfully applied to the classification of endoscopic video sequences, e.g. in colonoscopy [15] or laparoscopic surgery [16].


Fig. 1: Convolutional neural network (CNN). A straightforward application of CNNs for anatomy classification in whole body CT scans can be found in [10] (illustration after [2]).

I-B Fully convolutional networks (FCN)

One downside of CNNs is that the spatial information of the image is lost when the convolutional features are fed into the final fully connected layers of the network. However, spatial information is especially important for semantic segmentation tasks. Hence, the fully convolutional network (FCN) was proposed by Long et al. [2] to overcome this limitation. In FCNs, the final densely connected layers of the CNN are replaced by transposed convolutional layers in order to apply a learned up-sampling to the low-resolution feature maps within the network. This operation can recover the original spatial dimensions of the input image while performing semantic segmentation at the same time. Similar network structures have been successfully applied to semantic segmentation tasks in medical imaging [17, 18, 19] and to the segmentation of biomedical images such as histology slides [20]. Extensions to 3D biomedical imaging data from modalities such as confocal microscopy [21] or magnetic resonance imaging (MRI) have been proposed [22]. In a typical FCN architecture, skip connections can be utilized to connect different levels of the network in order to preserve image features that are “closer” to the original image. This helps the network to achieve a more detailed segmentation result and can simplify or speed up training [23, 24]. The typical setup of an FCN is illustrated in Fig. 2 for the semantic segmentation of CT image slices.
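The learned up-sampling via transposed convolution can be illustrated in NumPy. In the sketch below, each low-resolution value "paints" a scaled copy of the kernel onto a larger output grid with stride 2; the fixed bilinear-style kernel is a hypothetical stand-in, since in an FCN these weights are learned:

```python
import numpy as np

def conv_transpose2d(x, kernel, stride=2):
    """Stride-2 transposed convolution: each input value paints a
    scaled copy of the kernel onto the (larger) output grid."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

# a 2x2 low-resolution feature map up-sampled to a 5x5 output
low_res = np.array([[1., 2.], [3., 4.]])
bilinear = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])   # fixed bilinear-style kernel
up = conv_transpose2d(low_res, bilinear)   # 5x5 output
```

Repeated over several stages, this operation recovers the spatial resolution lost by pooling, producing dense per-pixel predictions.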


Fig. 2: Fully convolutional network (FCN). Examples of FCNs applied to semantic segmentation tasks in medical imaging can be found in [17, 18, 19, 25] (illustration after [2]).

I-C Related work

Semantic segmentation of CT images has been an active area of research over the years. Classical approaches for multi-organ segmentation range from statistical shape models [26, 27] to techniques that employ image registration. Many methods include some form of multi-atlas label fusion [28, 29, 30], which has been widely applied in clinical research and practice. Furthermore, approaches that combine techniques from multi-atlas registration and machine learning have been proposed [31, 32]. However, the difficulty of modeling the complex shape variations between patients, especially for abdominal organs, has made it hard for registration-based methods to perform adequately on very non-rigid organs [33].

Today, many successful deep learning methods from computer vision are being adapted to segmentation tasks in medical imaging. Recent examples include [34, 1, 35, 36, 18, 22, 7] and many others. Most of these methods are based on 2D and 3D variants of FCNs [2] that allow the extraction of features that are useful for image segmentation directly from the imaging data. This is crucial for the success of deep learning [9] and avoids the need for “hand-crafting” features suitable for detection of individual organs.

II Methods


Fig. 3: The architecture of our 3D U-Net-like fully convolutional network. It applies an end-to-end architecture using same-size convolutions (via zero padding) with kernel sizes of 3×3×3 voxels.

FCNs have made it feasible to train models for pixel-wise semantic segmentation in an end-to-end fashion [2]. Efficient implementations of 3D convolution and growing GPU memory have further made it possible to extend these methods to 3D medical imaging and to train networks on large amounts of annotated image volumes. One such example is the recently proposed 3D U-Net architecture [21]. In the following, we describe how to apply the 3D U-Net architecture to the problem of multi-organ segmentation in CT.

II-A 3D fully convolutional networks (3D U-Net)

As described above, FCNs have the ability to solve challenging classification tasks in a data-driven manner, given a training set of images and labels $S = \{(I_n, L_n),\ n = 1, \dots, N\}$, where $I_n$ denotes one of $N$ CT images and $L_n$ denotes the corresponding ground truth label image. This setup allows the network to find a direct mapping from the image to the segmentation by learning a very complex non-linear function between the two. The 3D U-Net architecture [21] consists of symmetric analysis and synthesis paths with four resolution levels each. Each resolution level in the analysis path contains two convolutional layers with $3 \times 3 \times 3$ kernels, each followed by ReLU activations, and a $2 \times 2 \times 2$ max pooling with strides of two in each dimension. In the synthesis path, transposed convolutions are utilized to remap the lower resolution feature maps within the network to the higher resolution space of the input images. These are again followed by two $3 \times 3 \times 3$ convolutions, each of which employs ReLU activations. Furthermore, 3D U-Net utilizes shortcut (or skip) connections from layers of equal resolution in the analysis path to provide higher-resolution features to the synthesis path [21]. The final convolutional layer employs a voxel-wise softmax activation function to compute a 3D probability map for each of the target organs as the output of our network.
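The symmetric analysis/synthesis structure can be made concrete with a small shape walk-through. The channel widths used below (32 at the first level, doubling at each pooling step) are assumptions for illustration and need not match the authors' exact network:

```python
def unet3d_shapes(in_size=64, levels=4, base_ch=32):
    """Trace feature-map sizes through a 3D U-Net-style encoder/decoder
    (hypothetical channel counts, cubic inputs assumed)."""
    enc = []
    size, ch = in_size, base_ch
    for lvl in range(levels):
        enc.append((size, ch))        # two 3x3x3 same-size convs per level
        if lvl < levels - 1:
            size //= 2                # 2x2x2 max pool, stride 2
            ch *= 2                   # double the channels when pooling
    shapes = [f"enc L{l}: {s}^3 x {c}" for l, (s, c) in enumerate(enc)]
    for lvl in range(levels - 2, -1, -1):
        size, ch = enc[lvl]
        # transposed conv doubles the size; the skip connection from the
        # analysis path concatenates an equal number of channels
        shapes.append(f"dec L{lvl}: {size}^3 x {ch} (skip concat -> {2 * ch})")
    return shapes

for line in unet3d_shapes():
    print(line)
```

The trace shows why the skip connections matter: at every decoder level, half of the concatenated channels come straight from the higher-resolution analysis path.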

In this example, we investigate a custom-built 3D FCN similar to 3D U-Net, as illustrated in Fig. 3. The network has the same input and output volume sizes and uses same-size convolutions with zero padding throughout. For training, we use randomly cropped subvolumes extracted from several training CT volumes. Here, we chose a subvolume size small enough to allow the extraction of three subvolumes for mini-batch training on a single GPU. Batch sizes > 1 can lead to better convergence during training by using batch normalization [37] when sampled from different patient volumes [38]. Our 3D FCN uses concatenation skip connections to the encoder part of the network as in the original 3D U-Net [20, 23], resulting in 19M trainable parameters.

During inference (prediction), the network can be reshaped in order to process the test images more efficiently [2]. Hence, the network is resized to an input size that covers the whole in-plane dimensions of a given CT volume, and the full output is built up by applying an overlapping-tiles approach in the z-direction. Again, the network reshaping size is dependent on the amount of available GPU memory.
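The overlapping-tiles inference can be sketched as follows. Tile size, overlap, and the identity stand-in "model" are hypothetical; predictions are simply averaged where tiles overlap:

```python
import numpy as np

def tile_predict(volume, model, tile_z=32, overlap=8):
    """Overlapping-tiles inference along the z-axis: run the model on
    z-chunks and average predictions in the overlapping regions."""
    z = volume.shape[0]
    out = np.zeros_like(volume, dtype=float)
    count = np.zeros(z)
    step = tile_z - overlap
    starts = list(range(0, max(z - tile_z, 0) + 1, step))
    if starts[-1] + tile_z < z:          # make sure the last slices are covered
        starts.append(z - tile_z)
    for s in starts:
        out[s:s + tile_z] += model(volume[s:s + tile_z])
        count[s:s + tile_z] += 1
    return out / count[:, None, None]    # average overlapping contributions

# identity "model" as a stand-in: stitching must reproduce the input exactly
vol = np.random.rand(100, 8, 8)
stitched = tile_predict(vol, lambda t: t)
```

With a real network in place of the lambda, the same loop turns fixed-size subvolume predictions into a segmentation of the whole CT volume.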

II-B Data augmentation

In training, we employ smooth B-spline deformations of both the image and label data, similar to [21]. The deformation fields are randomly sampled from a uniform distribution with a maximum displacement of 4 voxels and a grid spacing of 24 voxels. Furthermore, we apply random rotations and translations of -20 to +20 voxels in each dimension at each iteration in order to generate plausible deformations during training. This type of data augmentation artificially increases the training data and encourages convergence to more robust solutions while reducing overfitting to the training data (see Fig. 4).
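The key constraint of this kind of augmentation is that the same random transform must be applied to the image and its label map so they stay aligned. A minimal NumPy sketch of the translation component only (zero-filled borders; the B-spline deformation and rotation are omitted):

```python
import numpy as np

def random_shift_pair(image, label, max_shift=20, rng=None):
    """Apply one random integer translation (zero-filled at the borders)
    identically to an image and its label map."""
    rng = rng or np.random.default_rng(0)
    shifts = rng.integers(-max_shift, max_shift + 1, size=image.ndim)

    def shift(vol):
        out = np.zeros_like(vol)
        src = tuple(slice(max(-s, 0), vol.shape[d] - max(s, 0))
                    for d, s in enumerate(shifts))
        dst = tuple(slice(max(s, 0), vol.shape[d] - max(-s, 0))
                    for d, s in enumerate(shifts))
        out[dst] = vol[src]
        return out

    return shift(image), shift(label), shifts

# a single foreground voxel stays aligned between image and label
img = np.zeros((5, 5, 5)); img[2, 2, 2] = 1.0
lab = img.copy()
aug_img, aug_lab, shifts = random_shift_pair(img, lab, max_shift=2)
```

Drawing fresh shifts (and deformation fields) at every iteration means the network effectively never sees the same training sample twice.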

II-C Loss function

The Dice similarity coefficient (DSC) is often used to measure the amount of agreement between two binary regions. Hence, it is widely used as a metric for evaluating the performance of image segmentation algorithms. A differentiable version has been proposed by Milletari et al. [22], which we use for training our 3D U-Net model. In order to optimize the DSC on the training data, we minimize the following loss function for each class $c$:

$$L_c = 1 - \frac{2 \sum_i p_{c,i}\, g_{c,i}}{\sum_i p_{c,i} + \sum_i g_{c,i}}$$

Here, $p_{c,i}$ represents the value of the softmax probability map and $g_{c,i}$ the corresponding ground truth at voxel $i$ of class $c$ in the current image volume. In order to predict multiple classes for segmentation, we calculate the total loss as

$$L_{\mathrm{total}} = \sum_{c=1}^{C} w_c L_c,$$

where $C$ is the number of classes (number of foreground classes, plus background) and $w_c$ is a weight factor that can influence the contribution of each label class $c$. In this example, we keep $w_c = 1$ for all labels. Alternative weighting schemes have been explored in [39, 40].
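The per-class Dice loss and its weighted multi-class sum can be written compactly. The sketch below is a NumPy illustration; actual training requires the differentiable framework version [22]:

```python
import numpy as np

def dice_loss(prob, truth, eps=1e-7):
    """Per-class Dice loss: 1 - 2*intersection / (sum(prob) + sum(truth))."""
    inter = np.sum(prob * truth)
    return 1.0 - 2.0 * inter / (np.sum(prob) + np.sum(truth) + eps)

def multi_class_dice_loss(probs, truths, weights=None):
    """Weighted sum of per-class Dice losses; weights default to 1 per class.
    probs/truths are stacked along the first (class) axis."""
    n_classes = probs.shape[0]
    weights = weights if weights is not None else np.ones(n_classes)
    return sum(w * dice_loss(p, t)
               for w, p, t in zip(weights, probs, truths))

# a perfect prediction drives the loss to (nearly) zero
gt = np.zeros((2, 4, 4, 4))
gt[1, :2] = 1
gt[0] = 1 - gt[1]
loss = multi_class_dice_loss(gt, gt)
```

The epsilon term guards against division by zero when a class is absent from both prediction and ground truth; frameworks typically add it the same way.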


Fig. 4: Training/testing curves of the 3D FCN network for multi-organ segmentation using the Dice loss for optimization.

II-D Implementation

All models are implemented in Keras with the TensorFlow backend, which employs automatic differentiation in order to compute the gradients for optimizing the model [41]. We use Adam optimization [42]. We train our model for 50,000 iterations, which takes about one week on an NVIDIA Quadro P6000 GPU with 24 GB of memory. Inference, however, is achieved in less than 1 minute per case.

III Experiments & Results


(a) Ground truth (axial)


(b) Ground truth (3D)


(c) Segmentation (axial)


(d) Segmentation (3D)
Fig. 9: Visualization of the resulting multi-organ segmentations in CT showing both axial (a, c) and 3D renderings (b, d) of the results and corresponding ground truth after up-sampling the prediction to half the original CT resolution.

III-A Data

Our dataset originates from a research study with gastric cancer patients. 377 contrast-enhanced abdominal CT scans were acquired in the portal venous phase. Each CT volume consists of 460–1177 slices of 512×512 pixels. Voxel dimensions are [0.59–0.98, 0.59–0.98, 0.5–1.0] mm. In each image, the arteries, portal vein, liver, spleen, stomach, gallbladder, and pancreas were delineated by several trained researchers and confirmed by a clinician. Because of GPU memory constraints and for computational efficiency, we downsample all images by a factor of 4, resulting in axial image sizes of 128×128 pixels with (number of slices)/4 slices.

Dice (%) Avg. Std. Min. Max.
artery 84.1% 5.0% 66.9% 91.7%
vein 77.5% 8.9% 29.2% 89.2%
liver 96.6% 1.1% 91.4% 98.5%
spleen 96.3% 2.0% 79.8% 98.9%
stomach 95.6% 7.7% 0.0% 99.7%
gallbladder 90.1% 10.9% 0.0% 97.8%
pancreas 85.5% 8.9% 28.0% 95.5%
Total Avg. 89.4% 6.4% 42.2% 95.9%
TABLE 1: Quantitative results of the 3D FCN network in training (n=340).
Dice (%) Avg. Std. Min. Max.
artery 83.5% 4.1% 73.7% 91.1%
vein 80.5% 6.8% 49.0% 89.4%
liver 97.1% 1.0% 93.5% 98.3%
spleen 97.7% 0.8% 95.2% 98.9%
stomach 96.1% 7.9% 49.4% 98.9%
gallbladder 85.1% 15.7% 28.6% 97.4%
pancreas 84.9% 9.1% 52.5% 95.1%
Total Avg. 89.3% 6.5% 63.1% 95.6%
TABLE 2: Quantitative results of the 3D FCN network in testing (n=37).

III-B Evaluation

We evaluate this model using a random split of 340 training and 37 testing patients, and achieve an average Dice score of 89.4 ± 6.4% (range [42.2, 95.9]%) in training (see Table 1) and 89.3 ± 6.5% (range [63.1, 95.6]%) in testing (see Table 2). This result indicates that the dataset size and the use of data augmentation are sufficient to avoid overfitting to the training data. Furthermore, this result is comparable to or better than other state-of-the-art deep learning architectures for single- and multi-organ segmentation in CT [38, 43, 25, 44]. However, direct comparison to other methods is difficult due to the different datasets and validation schemes employed. An example prediction result of the model is shown in Fig. 9.
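Per-organ Dice scores and the Avg./Std./Min./Max. summaries reported in Tables 1 and 2 can be computed as follows (a sketch with toy masks and made-up per-case values):

```python
import numpy as np

def dice_score(seg, gt, label):
    """Dice overlap (in %) between predicted and ground-truth masks of one label."""
    a, b = seg == label, gt == label
    denom = a.sum() + b.sum()
    return 100.0 if denom == 0 else 200.0 * np.logical_and(a, b).sum() / denom

def summarize(per_case_scores):
    """Avg./Std./Min./Max. summary, as in the quantitative result tables."""
    s = np.asarray(per_case_scores, dtype=float)
    return {"avg": s.mean(), "std": s.std(), "min": s.min(), "max": s.max()}

# toy example: a perfectly matching binary mask scores 100%
seg = np.zeros((8, 8, 8), int); seg[:4] = 1
gt = np.zeros((8, 8, 8), int); gt[:4] = 1
stats = summarize([84.1, 77.5, 96.6])   # made-up per-case scores
```

In practice, `dice_score` is evaluated once per organ label and per test case, and `summarize` is applied to each organ's list of per-case scores.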

IV Conclusions

This example model achieves state-of-the-art performance in automated multi-organ segmentation of abdominal CT, with an average Dice score of 89.3% in testing across all targeted organs. We showed that deep 3D FCNs can be efficiently trained on modern GPUs. In the future, the availability of larger amounts of GPU memory will allow the processing of whole CT volumes at higher resolution. Increasing dataset sizes will likely further improve the performance of automated multi-organ segmentation in medical imaging. The current model does not contain any constraints on the shape of the segmented anatomy and can produce disconnected or isolated regions. In the future, anatomical constraints could be included in order to guarantee topologically correct segmentation results [45].


This research was supported by MEXT KAKENHI (26108006, 26560255, 25242047, 17H00867, 15H01116) and the JSPS International Bilateral Collaboration Grant.


  • [1] H. R. Roth, L. Lu, N. Lay, A. P. Harrison, A. Farag, A. Sohn, and R. M. Summers, “Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation,” arXiv preprint arXiv:1702.00045, 2017.
  • [2] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
  • [3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012.
  • [4] A. A. Cruz-Roa, J. E. A. Ovalle, A. Madabhushi, and F. A. G. Osorio, “A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 403–410, Springer, 2013.
  • [5] H. R. Roth, L. Lu, A. Seff, K. M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey, and R. M. Summers, “A new 2.5 d representation for lymph node detection using random sets of deep convolutional neural network observations,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 520–527, Springer, 2014.
  • [6] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, “Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1285–1298, 2016.
  • [7] K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker, “Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation,” Medical image analysis, vol. 36, pp. 61–78, 2017.
  • [8] H. Greenspan, B. van Ginneken, and R. M. Summers, “Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1153–1159, 2016.
  • [9] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [10] H. R. Roth, C. T. Lee, H.-C. Shin, A. Seff, L. Kim, J. Yao, L. Lu, and R. M. Summers, “Anatomy-specific classification of medical images using deep convolutional nets,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, pp. 101–104, IEEE, 2015.
  • [11] Z. Yan, Y. Zhan, Z. Peng, S. Liao, Y. Shinagawa, S. Zhang, D. N. Metaxas, and X. S. Zhou, “Multi-instance deep learning: Discover discriminative local anatomies for bodypart recognition,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1332–1343, 2016.
  • [12] H. R. Roth, L. Lu, J. Liu, J. Yao, A. Seff, K. Cherry, L. Kim, and R. M. Summers, “Improving computer-aided detection using convolutional neural networks and random view aggregation,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1170–1181, 2016.
  • [13] A. A. A. Setio, F. Ciompi, G. Litjens, P. Gerke, C. Jacobs, S. J. van Riel, M. M. W. Wille, M. Naqibullah, C. I. Sánchez, and B. van Ginneken, “Pulmonary nodule detection in ct images: false positive reduction using multi-view convolutional networks,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1160–1169, 2016.
  • [14] N. Tajbakhsh, M. B. Gotway, and J. Liang, “Computer-aided pulmonary embolism detection using a novel vessel-aligned multi-planar image representation and convolutional neural networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 62–69, Springer, 2015.
  • [15] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, “Convolutional neural networks for medical image analysis: Full training or fine tuning?,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1299–1312, 2016.
  • [16] A. P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. de Mathelin, and N. Padoy, “Endonet: A deep architecture for recognition tasks on laparoscopic videos,” IEEE transactions on medical imaging, vol. 36, no. 1, pp. 86–97, 2017.
  • [17] H. R. Roth, L. Lu, A. Farag, A. Sohn, and R. M. Summers, “Spatial aggregation of holistically-nested networks for automated pancreas segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 451–459, Springer, 2016.
  • [18] X. Zhou, T. Ito, R. Takayama, S. Wang, T. Hara, and H. Fujita, “Three-dimensional ct image segmentation by combining 2d fully convolutional network with 3d majority voting,” in International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, pp. 111–120, Springer, 2016.
  • [19] Y. Zhou, L. Xie, W. Shen, Y. Wang, E. K. Fishman, and A. L. Yuille, “A fixed-point model for pancreas segmentation in abdominal ct scans,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 693–701, Springer, 2017.
  • [20] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, Springer, 2015.
  • [21] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D u-net: learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 424–432, Springer, 2016.
  • [22] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in 3D Vision (3DV), 2016 Fourth International Conference on, pp. 565–571, IEEE, 2016.
  • [23] M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, pp. 179–187, Springer, 2016.
  • [24] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in AAAI, pp. 4278–4284, 2017.
  • [25] X. Zhou, R. Takayama, S. Wang, T. Hara, and H. Fujita, “Deep learning of the sectional appearances of 3d ct images for anatomical structure segmentation based on an fcn voting method,” Medical physics, 2017.
  • [26] J. J. Cerrolaza, M. Reyes, R. M. Summers, M. Á. González-Ballester, and M. G. Linguraru, “Automatic multi-resolution shape modeling of multi-organ structures,” Medical image analysis, vol. 25, no. 1, pp. 11–21, 2015.
  • [27] T. Okada, M. G. Linguraru, M. Hori, R. M. Summers, N. Tomiyama, and Y. Sato, “Abdominal multi-organ segmentation from ct images using conditional shape–location and unsupervised intensity priors,” Medical image analysis, vol. 26, no. 1, pp. 1–18, 2015.
  • [28] T. Rohlfing, R. Brandt, R. Menzel, and C. R. Maurer, “Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains,” NeuroImage, vol. 21, no. 4, pp. 1428–1442, 2004.
  • [29] H. Wang, A. Pouch, M. Takabe, B. Jackson, J. Gorman, R. Gorman, and P. A. Yushkevich, “Multi-atlas segmentation with robust label transfer and label fusion,” in Information processing in medical imaging: proceedings of the… conference, vol. 23, p. 548, NIH Public Access, 2013.
  • [30] J. E. Iglesias and M. R. Sabuncu, “Multi-atlas segmentation of biomedical images: a survey,” Medical image analysis, vol. 24, no. 1, pp. 205–219, 2015.
  • [31] T. Tong, R. Wolz, Z. Wang, Q. Gao, K. Misawa, M. Fujiwara, K. Mori, J. V. Hajnal, and D. Rueckert, “Discriminative dictionary learning for abdominal multi-organ segmentation,” Medical Image Analysis, vol. 23, no. 1, pp. 92–104, 2015.
  • [32] M. Oda, N. Shimizu, K. Karasawa, Y. Nimura, T. Kitasaka, K. Misawa, M. Fujiwara, D. Rueckert, and K. Mori, “Regression forest-based atlas localization and direction specific atlas generation for pancreas segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 556–563, Springer, 2016.
  • [33] C. P. Lee, Z. Xu, R. P. Burke, R. B. Baucom, B. K. Poulose, R. G. Abramson, and B. A. Landman, “Evaluation of five image registration tools for abdominal ct: pitfalls and opportunities with soft anatomy,” in Proceedings of SPIE–the International Society for Optical Engineering, vol. 9413, NIH Public Access, 2015.
  • [34] H. R. Roth, L. Lu, A. Farag, H.-C. Shin, J. Liu, E. B. Turkbey, and R. M. Summers, “Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 556–564, Springer, 2015.
  • [35] Y. Zhou, L. Xie, W. Shen, E. Fishman, and A. Yuille, “Pancreas segmentation in abdominal ct scan: A coarse-to-fine approach,” arXiv preprint arXiv:1612.08230, 2016.
  • [36] P. F. Christ, M. E. A. Elshaer, F. Ettlinger, S. Tatavarty, M. Bickel, P. Bilic, M. Rempfler, M. Armbruster, F. Hofmann, M. D’Anastasi, et al., “Automatic liver and lesion segmentation in ct using cascaded fully convolutional neural networks and 3D conditional random fields,” in MICCAI, pp. 415–423, Springer, 2016.
  • [37] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, pp. 448–456, 2015.
  • [38] H. Roth, M. Oda, N. Shimizu, H. Oda, Y. Hayashi, T. Kitasaka, M. Fujiwara, K. Misawa, and K. Mori, “Towards dense volumetric pancreas segmentation in ct using 3d fully convolutional networks,” SPIE Medical Imaging, arXiv preprint arXiv:1711.06439, 2017.
  • [39] C. Shen, H. R. Roth, H. Oda, M. Oda, Y. Hayashi, K. Misawa, and K. Mori, “On the influence of dice loss function in multi-class organ segmentation of abdominal ct using 3d fully convolutional networks,” Technical Report MI2017-51, vol. 117, pp. 15–20, 2017.
  • [40] C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 240–248, Springer, 2017.
  • [41] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
  • [42] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [43] E. Gibson, F. Giganti, Y. Hu, E. Bonmati, S. Bandula, K. Gurusamy, B. R. Davidson, S. P. Pereira, M. J. Clarkson, and D. C. Barratt, “Towards image-guided pancreas and biliary endoscopy: Automatic multi-organ segmentation on abdominal ct with dense dilated networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 728–736, Springer, 2017.
  • [44] H. R. Roth, H. Oda, Y. Hayashi, M. Oda, N. Shimizu, M. Fujiwara, K. Misawa, and K. Mori, “Hierarchical 3D fully convolutional networks for multi-organ segmentation,” arXiv preprint arXiv:1704.06382, 2017.
  • [45] O. Oktay, E. Ferrante, K. Kamnitsas, M. Heinrich, W. Bai, J. Caballero, R. Guerrero, S. Cook, A. de Marvao, D. O’Regan, et al., “Anatomically constrained neural networks (acnn): Application to cardiac image enhancement and segmentation,” arXiv preprint arXiv:1705.08302, 2017.