Automated sub-cortical brain structure segmentation combining spatial and deep convolutional features

09/26/2017 ∙ by Kaisar Kushibar, et al. ∙ University of Girona

Sub-cortical brain structure segmentation in Magnetic Resonance Images (MRI) has attracted the interest of the research community for a long time because morphological changes in these structures are related to different neurodegenerative disorders. However, manual segmentation of these structures can be tedious and prone to variability, highlighting the need for robust automated segmentation methods. In this paper, we present a novel convolutional neural network based approach for accurate segmentation of the sub-cortical brain structures that combines both convolutional and prior spatial features for improving the segmentation accuracy. In order to increase the accuracy of the automated segmentation, we propose to train the network using a restricted sample selection that forces the network to learn the most difficult parts of the structures. We evaluate the accuracy of the proposed method on the public MICCAI 2012 challenge and IBSR 18 datasets, comparing it with different available state-of-the-art methods and other recently proposed deep learning approaches. On the MICCAI 2012 dataset, our method shows excellent performance, comparable to the best challenge participant strategy, while performing significantly better than state-of-the-art techniques such as FreeSurfer and FIRST. On the IBSR 18 dataset, our method also exhibits a significant increase in performance with respect to FreeSurfer and FIRST, while achieving comparable or better results than other recent deep learning approaches. Moreover, our experiments show that both the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method. In order to encourage reproducibility and the use of the proposed method, a public version of our approach is available to download for the neuroimaging community.




1 Introduction

Brain structure segmentation in Magnetic Resonance Images (MRI) is one of the major interests in medical practice due to its various applications, including pre-operative evaluation and surgical planning, radiotherapy treatment planning, and longitudinal monitoring of disease progression or remission (Kikinis et al., 1996; Phillips et al., 2015; Pitiot et al., 2004). The sub-cortical structures (i.e. thalamus, caudate, putamen, pallidum, hippocampus, amygdala, and accumbens) have attracted the interest of the research community for a long time, since their morphological changes are frequently associated with psychiatric and neurodegenerative disorders and could be used as biomarkers of some diseases (Debernard et al., 2015; Mak et al., 2014). Therefore, segmentation of sub-cortical brain structures in MRI for quantitative analysis has a major clinical application. However, manual segmentation of MRI is extremely time consuming and hardly reproducible due to inter- and intra-operator variability, highlighting the need for accurate automated segmentation methods.

Recently, González-Villà et al. (2016) reviewed different approaches for brain structure segmentation in MRI. One of the commonly used automatic brain structure segmentation tools in medical practice is FreeSurfer, which uses non-linear registration and an atlas-based segmentation approach (Fischl et al., 2002). Another classical approach, also popular in the medical community, is the method proposed by Patenaude et al. (2011) – FIRST, which is included in the publicly available FSL software. This method uses the principles of Active Shape (Cootes et al., 1995) and Active Appearance Models (Cootes et al., 2001) within a Bayesian framework, allowing the probabilistic relationship between shape and intensity to be exploited to its full extent.

In recent years, deep learning methods, in particular Convolutional Neural Networks (CNN), have demonstrated state-of-the-art performance in many computer vision tasks such as visual object detection, classification, and segmentation (Krizhevsky et al., 2012; He et al., 2016; Szegedy et al., 2015; Girshick et al., 2014). Unlike handcrafted features, CNN features are learned from observed data (LeCun et al., 1998), making them more relevant to the task. Therefore, CNNs are also becoming a popular technique in medical image analysis. There have been many advances in the application of deep learning to medical imaging, such as expert-level performance in skin cancer classification (Esteva et al., 2017), high-accuracy detection of cancer metastases (Liu et al., 2017), Alzheimer's disease classification (Sarraf et al., 2016), and spotting early signs of autism (Hazlett et al., 2017).

Some CNN methods have also been proposed for brain structure segmentation. One of the common techniques in the literature is patch-based segmentation, where patches of a certain size are extracted around each voxel and classified using a CNN. Applications of 2D, 3D, and 2.5D patches (three patches from the orthogonal views of an MRI volume) and their combinations, including multi-scale patches, can be found in the literature for brain structure segmentation (Brébisson & Montana, 2015; Bao & Chung, 2016; Milletari et al., 2017; Mehta et al., 2017). Combining patches of different dimensions is done in a multi-path manner, where the CNN consists of different branches corresponding to each patch type. In contrast to patch-based CNNs, fully convolutional neural networks (FCNN) produce a segmentation for a whole neighborhood of an input patch (Long et al., 2015). Shakeri et al. (2016) adapted the work of Chen et al. (2016) on FCNN-based semantic segmentation of natural images. Moreover, 3D FCNNs, which segment a 3D neighborhood of an input patch at once, have been investigated by Dolz et al. (2017) and Wachinger et al. (2017). Although FCNNs improve segmentation speed by segmenting several voxels in parallel, they suffer from a higher number of network parameters in comparison with patch-based CNNs. It is also common to apply post-processing methods to refine the final segmentation output. Inference combining CNN priors with statistical models such as Markov Random Fields and Conditional Random Fields (Lafferty et al., 2001) was used in the experiments of Brébisson & Montana (2015), Shakeri et al. (2016), and Wachinger et al. (2017), and a modified Random Walker based segmentation refinement was proposed by Bao & Chung (2016). Apart from the implicit information provided by the patches extracted from MRI volumes, explicit features encoding spatial consistency have also been studied. Brébisson & Montana (2015) included distances to centroids in their networks, while Wachinger et al. (2017) used Euclidean and spectral coordinates, computed from the eigenfunctions of a Laplace-Beltrami operator on a solid 3D brain mask, to provide a distinctive perception of spatial location for every voxel. These kinds of features provide additional spatial information; however, extracting such explicit features from an unannotated MRI volume requires preliminary operations (e.g. repeated training of the network to compute an initial segmentation mask).

From the reviewed literature, we have observed that most of the current deep learning approaches for sub-cortical brain structure segmentation focus on segmenting only the large sub-cortical structures (thalamus, caudate, putamen, pallidum). However, other important small structures (i.e. hippocampus, amygdala, accumbens), which are used for examining neurological disorders such as schizophrenia (Altshuler et al., 1998; Lawrie et al., 2003), anxiety disorder (Milham et al., 2005), bipolar disorder (Altshuler et al., 1998), and Alzheimer's disease (Fox et al., 1996), are not considered. In this work, we present our CNN approach for segmenting all the sub-cortical structures. The recent approach of Ghafoorian et al. (2017) has been taken as a seminal work in our research. In their work, spatial features, provided by tissue atlas probabilities, were combined with 2D CNN features for segmenting White Matter Hyperintensities in MRI. In this paper, we present a different, 2.5D CNN architecture, i.e. one using the three orthogonal views of the 3D volume, for segmenting the sub-cortical brain structures, which combines spatial features in a similar way to Ghafoorian et al. (2017). To the best of our knowledge, this is the first deep learning method incorporating atlas probabilities for sub-cortical brain structure segmentation. Moreover, we propose a new sample selection technique that allows the neural network to learn to segment the most difficult areas of the structures in the images. We test the proposed strategy on two well-known datasets, MICCAI 2012 (Landman & Warfield, 2012) and IBSR 18, and compare our results with classical and recent CNN strategies for brain structure segmentation. Moreover, we make our method publicly available online for the community.

2 Method

2.1 Input features

In our method, we employ 2.5D patches to incorporate information from the three orthogonal views of a 3D volume; all patches share the same fixed size in pixels. 3D patches provide more information about the surroundings of the voxel being classified, but they are computationally and memory expensive. Thus, by using 2.5D patches, we approximate the information provided by a 3D patch in a time- and memory-efficient manner.
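As a minimal sketch of this idea (the 32×32 patch size here is our own illustrative assumption, not a value stated above), the three orthogonal patches around a voxel could be extracted as follows:

```python
import numpy as np

def extract_25d_patch(volume, center, size=32):
    """Extract three orthogonal 2D patches (axial, coronal, sagittal)
    centered on a voxel of a 3D volume. The volume is zero-padded so
    that voxels near the border can also be sampled."""
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    x, y, z = (c + half for c in center)  # shift indices into padded space
    axial    = padded[x - half:x + half, y - half:y + half, z]
    coronal  = padded[x - half:x + half, y, z - half:z + half]
    sagittal = padded[x, y - half:y + half, z - half:z + half]
    return np.stack([axial, coronal, sagittal])

patches = extract_25d_patch(np.random.rand(182, 218, 182), center=(90, 109, 91))
print(patches.shape)  # (3, 32, 32)
```

Note that a single 32×32×32 3D patch holds 32,768 voxels, whereas the three 2D views hold only 3,072, which is the memory saving the paragraph refers to.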

Along with the appearance-based features provided by the T1-w MRI, we employ spatial features extracted from a structural probabilistic atlas. In our experiments, we used the well-known Harvard-Oxford atlas template (Caviness Jr et al., 1996) in MNI152 space, distributed with the FSL package, which has been built using 47 young adult healthy brains. In our method, the T1-w image of the MNI152 template is first affine-registered to the T1-w image of the considered dataset using a block matching approach (Ourselin et al., 2000). Then, a non-linear registration of the atlas template to the subject volume is applied using a fast free-form deformation method (Modat et al., 2010). The deformation field obtained after the registration is used to move the probabilistic atlas into the subject space. The registration processes have been carried out using the well-known and publicly available tool NiftyReg. Afterwards, for every voxel, a vector of size 15, corresponding to the seven anatomical structures with left and right parts separately plus the background, is extracted from the probabilistic atlas and used as an input feature to train the network.
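Once the atlas is in subject space, assembling the 15-dimensional prior vector for a voxel might look like the following sketch (the stacked-map layout of `prob_atlas` and the derivation of the background probability as the remaining mass are our own assumptions for illustration):

```python
import numpy as np

def atlas_vector(prob_atlas, voxel):
    """Return the 15-dimensional spatial-prior vector for one voxel:
    probabilities for the 14 lateralized structures plus background.
    `prob_atlas` is assumed to hold the 14 registered structure
    probability maps stacked along the last axis."""
    p = prob_atlas[voxel]                 # 14 structure probabilities
    background = max(0.0, 1.0 - p.sum())  # leftover mass as background
    return np.append(p, background)

# Toy atlas: all-zero structure probabilities -> pure background prior.
atlas = np.zeros((182, 218, 182, 14))
v = atlas_vector(atlas, (90, 109, 91))
print(v.shape, v[-1])  # (15,) 1.0
```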

2.2 CNN architecture

Figure 1 illustrates our proposed CNN architecture. It consists of three branches corresponding to the patches extracted from the axial, coronal, and sagittal views of a 3D volume, and one branch corresponding to the spatial priors. The branch for the spatial priors accepts a vector of size 15 with the atlas probabilities for each structure and the background. The first three branches have the same organization of convolutional and max-pooling layers, as shown in Figure 1 (B). All the feature maps of the convolutional layers are passed through the Rectified Linear Unit (ReLU) activation function (Glorot et al., 2011). For all the convolutional layers, small kernels are used to make the CNN deep without losing performance or exploding the number of parameters, as studied in Simonyan & Zisserman (2014). The outputs of the convolutional layers are then flattened and followed by fully connected (FC) layers with 180 units each. Next, the FC layers of each branch, including the atlas probabilities branch, are fully connected to two consecutive FC layers with 540 and 270 units. The final classification layer has 15 units with the softmax activation function.

Figure 1: The proposed 2.5D CNN architecture has three convolutional branches and a branch for the spatial priors. 2D patches are extracted from the three orthogonal views of a 3D volume. The spatial prior branch accepts a vector of size 15 with the atlas probabilities for each of the 14 structures and the background.
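The multi-branch design above can be sketched as follows (in Keras rather than the original Lasagne/Theano implementation; the number of convolutional layers, the 3×3 kernels, and the 32×32 patch size are our own illustrative assumptions, while the 180/540/270/15 unit counts come from the text):

```python
from tensorflow.keras import layers, models

def conv_branch(size=32):
    """One convolutional branch (axial, coronal, or sagittal patches)."""
    inp = layers.Input(shape=(size, size, 1))
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(180, activation="relu")(x)  # per-branch FC, 180 units
    return inp, x

branches = [conv_branch() for _ in range(3)]      # axial, coronal, sagittal
prior_in = layers.Input(shape=(15,))              # atlas probability branch
merged = layers.concatenate([b[1] for b in branches] + [prior_in])
x = layers.Dense(540, activation="relu")(merged)  # joint FC layers
x = layers.Dense(270, activation="relu")(x)
out = layers.Dense(15, activation="softmax")(x)   # 14 structures + background

model = models.Model([b[0] for b in branches] + [prior_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```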

2.3 CNN training

For training our network, we extract 2.5D patches from the training set and, using the provided ground truth labels, optimize the kernel and fully connected layer weights based on the loss function. In the proposed network we employ the categorical cross-entropy loss function, which is minimized using the Adam optimization method (Kingma & Ba, 2014). This technique automatically adapts the learning rate using moving averages of the gradient moments, which allows the step size to be effectively large and to converge towards an optimal step size without manual tuning.

When training the CNN, it is important to take into account how the training samples are extracted from an image. Random selection of a certain number of samples per image is one of the common techniques in the literature. However, for the segmentation of the sub-cortical structures, randomly selected background (negative) samples are dispersed over the subject volume. This leads to imperfect segmentation results on the borders of the structures, which are the most delicate areas to process due to the low contrast between structure and background. Therefore, we propose to extract the negative samples only from the structure boundaries, as shown in Figure 2. In doing so, we force the network to learn only from the structure boundaries and dismiss other parts of the background.

Figure 2: Negative sample selection from the boundaries of the target structures. (a) T1-w image with a rectangle representing the ROI; (b) T1-w ROI; (c) structure boundaries; (d) groundtruth labels with boundaries.

The training sample selection is performed as follows: from all the available training images, we first select the positive samples from all the voxels of the 14 sub-cortical structures. Then, the same number of negative samples is randomly selected from the structure boundaries within a five-voxel distance, forming a balanced dataset of sub-cortical and boundary voxels. More details about the batch size and the number of training epochs for each dataset are given in Section 3.
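A minimal sketch of this restricted negative sampling, using binary dilation to build the five-voxel boundary band (function and variable names are our own):

```python
import numpy as np
from scipy import ndimage

def boundary_negatives(labels, n_samples, margin=5, rng=None):
    """Sample negative (background) voxel coordinates only from a band
    within `margin` voxels of any structure boundary."""
    rng = rng or np.random.default_rng(0)
    structures = labels > 0
    # Dilate the structure mask, then keep only the background shell around it.
    band = ndimage.binary_dilation(structures, iterations=margin) & ~structures
    coords = np.argwhere(band)
    idx = rng.choice(len(coords), size=min(n_samples, len(coords)), replace=False)
    return coords[idx]

# Toy example: one cubic "structure"; negatives come only from its rim.
labels = np.zeros((40, 40, 40), dtype=int)
labels[15:25, 15:25, 15:25] = 1
neg = boundary_negatives(labels, 100)
print(neg.shape)  # (100, 3)
```

Positive samples would simply be `np.argwhere(labels > 0)`; pairing equal numbers of both yields the balanced dataset described above.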


2.4 CNN testing

To segment a new image volume, we extract all the patches from the image and predict class label probabilities using the trained CNN. Then, we assign to every input patch the label corresponding to the maximum a posteriori probability. Notice that knowing the order of patch extraction is important to be able to reconstruct the final segmentation output. We also take advantage of the fact that the sub-cortical structures are located in the central part of the brain: using the knowledge provided by the atlases, regions of interest (ROI) are automatically defined for all the subject volumes to achieve faster training and testing speeds.

Since the network has been trained with negative samples extracted only from the structure boundaries, it will produce spurious outputs in unseen areas of the background when segmenting a testing volume. To overcome this issue, we apply a post-processing step in which, for each class, only the region with the biggest volume within the ROI is preserved.
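This post-processing step can be sketched with SciPy's connected-component labeling (the toy volume and class count are illustrative):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(segmentation, n_classes=15):
    """For each structure label, keep only the largest connected
    component and relabel everything else as background (0)."""
    cleaned = np.zeros_like(segmentation)
    for c in range(1, n_classes):
        mask = segmentation == c
        components, n = ndimage.label(mask)
        if n == 0:
            continue
        # Component sizes, indexed 1..n; keep the biggest one.
        sizes = ndimage.sum(mask, components, index=range(1, n + 1))
        largest = 1 + int(np.argmax(sizes))
        cleaned[components == largest] = c
    return cleaned

seg = np.zeros((30, 30, 30), dtype=int)
seg[2:10, 2:10, 2:10] = 1      # main blob (8x8x8 = 512 voxels)
seg[20:22, 20:22, 20:22] = 1   # spurious distant output (removed)
print(np.count_nonzero(keep_largest_component(seg)))  # 512
```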

2.5 Implementation and technical details

The proposed method has been implemented in Python, using the Lasagne and Theano (Bergstra et al., 2011) libraries. All experiments have been run on a GNU/Linux machine running Ubuntu 16.04 with 32 GB of RAM. CNN training has been carried out on a single TITAN-X GPU (NVIDIA Corp., United States) with 12 GB of memory. The proposed method is currently available for download at our research website.

3 Results

This section presents the results obtained by the proposed method on two datasets. The first is the one provided in the MICCAI Multi-Atlas Labeling challenge (Landman & Warfield, 2012) and the second is the publicly available Internet Brain Segmentation Repository (IBSR) dataset. Details of these datasets and the corresponding results are given in Sections 3.2 and 3.3.

3.1 Evaluation measures

For evaluating the proposed method, we selected two metrics that are commonly used in the literature: an overlap-based and a spatial distance-based metric, which show the similarity and discrepancy of automatic and manual segmentations. The first measurement is the Dice Similarity Coefficient (DSC) (Dice, 1945), defined for an automatic segmentation A and a manual segmentation B as:

DSC(A, B) = 2|A ∩ B| / (|A| + |B|)

DSC measures the overlap of the segmentation with the ground truth on a scale between 0 and 1, where the former shows no overlap and the latter represents 100% overlap with the ground truth.

For the spatial distance-based metric, the Hausdorff Distance (HD) is used in our experiments. This metric is defined as a function of the Euclidean distances d(a, b) between the voxels of A and B as:

HD(A, B) = max( max_{a ∈ A} min_{b ∈ B} d(a, b), max_{b ∈ B} min_{a ∈ A} d(a, b) )

In other words, HD is the maximum distance over all the minimum distances between the boundaries of the segmentation and the boundaries of the ground truth.
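Both metrics can be sketched for binary masks as follows (this voxel-based HD over whole masks is a common simplification of the boundary-based definition used above):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel coordinates of
    two binary masks, in voxel units."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

a = np.zeros((20, 20, 20)); a[5:10, 5:10, 5:10] = 1
b = np.zeros((20, 20, 20)); b[5:10, 5:10, 5:11] = 1  # one extra slice
print(round(dice(a, b), 3), hausdorff(a, b))  # 0.909 1.0
```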

Similarly to Wachinger et al. (2017), we used the Wilcoxon signed-rank test to assess the statistical significance of: 1) the differences in DSC and HD between our method and the state-of-the-art methods; and 2) the effect of using spatial features and the proposed sample selection technique.
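Such a paired comparison can be sketched with SciPy (the per-subject DSC values below are hypothetical, not results from the paper):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-subject DSC scores of two methods on the same test set.
dsc_ours  = np.array([0.87, 0.88, 0.86, 0.89, 0.87, 0.88, 0.90, 0.85])
dsc_first = np.array([0.80, 0.81, 0.79, 0.82, 0.80, 0.78, 0.83, 0.79])

# Paired, non-parametric test: no normality assumption on the differences.
stat, p = wilcoxon(dsc_ours, dsc_first)
print(p < 0.05)  # True
```

The test is appropriate here because the same subjects are segmented by both methods, so the per-subject score differences form natural pairs.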

3.2 MICCAI 2012 Dataset

This dataset consists of 35 T1-w MRI volumes split into 15 cases for training and 20 cases for testing. Manually segmented ground truth is available for each image, containing 134 structures overall. In our experiments, we extracted 14 classes corresponding to the seven sub-cortical structures with left and right parts separated. All the subject volumes have an isotropic voxel spacing of 1 mm.

3.2.1 Experimental details

Skull-stripping was applied to extract the brain and remove other parts appearing in the MRI, such as the eyes, skull, skin, and fat, using the BET algorithm (Smith, 2002). Then, the spatial intensity variations in the MRI volumes were corrected using the N4ITK bias field correction algorithm (Tustison et al., 2010), which is included in the publicly available ITK toolkit. Both preprocessing methods were run with default parameters.

In our experiments, we trained a single model using the available training set of 15 images and tested on the other 20 images, as provided in the original MICCAI 2012 Challenge. From the training set, we extracted sample patches from the three orthogonal views, evenly balanced between sub-cortical and boundary voxels, and split them into training and validation subsets. The extracted patches were passed to the network for training in mini-batches. The network was trained for up to 200 epochs; in order to prevent the network from over-fitting, we applied early stopping, automatically terminating the training process when the validation accuracy did not increase for 20 consecutive epochs.
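The early-stopping rule described above can be sketched framework-independently (the callables and the toy accuracy curve are illustrative):

```python
def train_with_early_stopping(train_epoch, validate, max_epochs=200, patience=20):
    """Generic early-stopping loop: stop when validation accuracy has
    not improved for `patience` consecutive epochs."""
    best_acc, best_epoch = -1.0, 0
    for epoch in range(max_epochs):
        train_epoch()
        acc = validate()
        if acc > best_acc:
            best_acc, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            break  # patience exhausted: keep the best model seen so far
    return best_acc, best_epoch

# Toy run: validation accuracy plateaus after epoch 5, so training stops early.
history = iter([0.5, 0.6, 0.7, 0.8, 0.85, 0.86] + [0.86] * 300)
acc, epoch = train_with_early_stopping(lambda: None, lambda: next(history))
print(acc, epoch)  # 0.86 5
```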

3.2.2 Comparison with other available methods

The performance of the proposed approach is compared with two tools widely used in medical practice, FreeSurfer and FIRST. We also compared the performance of our method with that of PICSL (Wang & Yushkevich, 2013), a multi-atlas based segmentation strategy that uses a joint fusion technique with corrective learning. PICSL was the winner of the MICCAI 2012 Challenge for brain structure segmentation and still shows the best results on this dataset. For FreeSurfer and FIRST, we used their default parameters to produce segmentation masks for the testing volumes; the training and testing split matches the configuration we used for evaluating the proposed method. We have to note that, for this dataset, no per-structure numerical results have been reported by other CNN-based approaches.

3.2.3 Results

Structure | FreeSurfer (Fischl, 2012) | FIRST (Patenaude et al., 2011) | PICSL (Wang & Yushkevich, 2013) | Our method
Tha.L | 0.830±0.018 / 4.94±1.01 | 0.889±0.018 / 4.65±0.90 | 0.920±0.013 / 3.22±0.99 | 0.921±0.018 / 3.39±1.13
Tha.R | 0.849±0.021 / 4.76±0.75 | 0.890±0.017 / 4.39±0.92 | 0.924±0.008 / 3.11±0.79 | 0.920±0.016 / 3.31±1.01
Cau.L | 0.808±0.079 / 9.89±3.09 | 0.797±0.046 / 3.56±1.30 | 0.885±0.074 / 3.44±1.89 | 0.894±0.071 / 3.32±2.00
Cau.R | 0.801±0.042 / 10.39±3.09 | 0.837±0.117 / 4.16±1.37 | 0.887±0.065 / 3.60±1.67 | 0.892±0.057 / 3.51±1.67
Put.L | 0.771±0.039 / 6.31±1.09 | 0.860±0.060 / 3.79±1.76 | 0.909±0.042 / 3.07±1.40 | 0.916±0.023 / 2.63±1.09
Put.R | 0.799±0.026 / 5.85±0.84 | 0.876±0.080 / 3.26±1.23 | 0.908±0.046 / 2.91±1.41 | 0.914±0.031 / 2.75±0.99
Pal.L | 0.693±0.189 / 3.89±1.07 | 0.815±0.088 / 2.89±0.71 | 0.873±0.032 / 2.52±0.54 | 0.843±0.101 / 2.38±0.76
Pal.R | 0.792±0.085 / 3.45±0.98 | 0.799±0.060 / 3.18±0.93 | 0.874±0.047 / 2.49±0.59 | 0.861±0.049 / 2.59±0.61
Hip.L | 0.784±0.054 / 6.35±1.87 | 0.809±0.022 / 5.49±1.66 | 0.871±0.024 / 4.34±1.66 | 0.876±0.020 / 4.48±2.02
Hip.R | 0.794±0.025 / 6.19±1.59 | 0.810±0.140 / 4.80±1.66 | 0.869±0.022 / 4.01±1.45 | 0.879±0.020 / 3.76±1.23
Amy.L | 0.585±0.064 / 5.05±0.97 | 0.721±0.053 / 3.54±0.72 | 0.832±0.026 / 2.44±0.29 | 0.833±0.032 / 2.39±0.39
Amy.R | 0.576±0.076 / 5.43±0.90 | 0.707±0.054 / 4.11±0.75 | 0.812±0.033 / 2.72±0.50 | 0.821±0.027 / 2.72±0.69
Acc.L | 0.630±0.055 / 4.28±1.11 | 0.699±0.089 / 6.81±8.76 | 0.790±0.050 / 2.57±0.67 | 0.799±0.052 / 2.39±0.64
Acc.R | 0.443±0.065 / 5.47±1.02 | 0.678±0.081 / 3.93±1.75 | 0.783±0.058 / 2.65±0.76 | 0.791±0.067 / 2.54±0.65
Avg. | 0.725±0.137 / 5.87±2.48 | 0.799±0.094 / 4.18±2.76 | 0.867±0.061 / 3.08±1.27 | 0.869±0.064 / 3.01±1.30
Table 1: MICCAI 2012 dataset results. Each cell reports mean DSC ± standard deviation / mean HD (mm) ± standard deviation for the corresponding structure, obtained using FreeSurfer, FIRST, PICSL, and our method. Structure acronyms are: left thalamus (Tha.L), right thalamus (Tha.R), left caudate (Cau.L), right caudate (Cau.R), left putamen (Put.L), right putamen (Put.R), left pallidum (Pal.L), right pallidum (Pal.R), left hippocampus (Hip.L), right hippocampus (Hip.R), left amygdala (Amy.L), right amygdala (Amy.R), left accumbens (Acc.L), right accumbens (Acc.R), and average value (Avg.).

Table 1 shows overall and per-structure mean DSC and HD values on the MICCAI 2012 dataset. According to the results, our method achieved a significantly higher mean DSC (0.869) than FIRST and FreeSurfer, which yielded overall mean DSCs of 0.799 and 0.725, respectively. Moreover, the HD values showed a similar behavior, with the proposed approach significantly outperforming both methods, with average reductions of 1.17 mm and 2.86 mm with respect to FIRST and FreeSurfer, respectively. Our method did not show a significant difference with PICSL in terms of DSC, with similar means of 0.867 and 0.869 for PICSL and our method, respectively. The HD values of our approach and PICSL also confirmed the previously observed DSC numbers.

Figure 3 shows a qualitative comparison of the segmentation outputs of FreeSurfer, FIRST, PICSL, and our method. As can be observed, FreeSurfer provided the worst segmentation output, with coarse structure boundaries. FIRST produced smooth segmentations on the borders; however, the overlap with the ground truth was poor. Our method's segmentation output was similar to that of PICSL, and both methods produced consistent structure boundaries that were close to the ground truth.

Apart from obtaining results similar to the best performing method on this dataset, our strategy achieved a considerable improvement in training and segmentation times. According to Landman & Warfield (2012), PICSL took 330 CPU hours to train the 138 classifiers used for correcting systematic errors, and its reported segmentation time with optimal parameters was several minutes per subject volume (Wang & Yushkevich, 2013). In comparison, the execution time of our CNN strategy was around 8 hours for training and less than 5 minutes for testing, including the atlas registration.

Figure 3: Qualitative comparison of segmentation outputs obtained by FreeSurfer, FIRST, PICSL, and our method on MICCAI 2012 dataset. A) T1-w image; B) Groundtruth; C) FreeSurfer; D) FIRST; E) PICSL; F) Our method. Visible structures on coronal view: thalamus, caudate, pallidum, putamen, hippocampus, and amygdala.

3.3 IBSR 18 Dataset

This dataset consists of 18 T1-w subject volumes with manually segmented ground truth containing 32 classes. Similarly to the MICCAI 2012 dataset, we extracted 14 classes corresponding to the seven sub-cortical brain structures with left and right parts separated. The subject volumes of this dataset have different voxel spacings across subjects. Images in this dataset have lower contrast and resolution than those of the MICCAI 2012 dataset, which makes the segmentation task even more challenging.

3.3.1 Experimental details

For the experiments with this dataset, we followed the same preprocessing steps as for the MICCAI 2012 dataset, i.e. skull-stripping and bias field correction. Since there was no predefined training and testing split for this dataset, we performed our experiments using a leave-one-subject-out cross-validation scheme. For each 17-1 fold, we extracted patches from each of the three orthogonal views, divided into training and validation sets. Each model was trained for up to 200 epochs, again applying early stopping after 20 epochs without improvement.

3.3.2 Comparison with other available methods

For this dataset, we compare our results: 1) with the state-of-the-art FreeSurfer and FIRST methods, including the statistical significance test, since we computed the evaluation values for each subject volume using the corresponding tools; and 2) with the recent CNN approaches of Shakeri et al. (2016), Mehta et al. (2017) (BrainSegNet), Bao & Chung (2016) (MS-CNN), and Dolz et al. (2017). The results for these recent methods were taken from their corresponding papers exactly as reported. We have to mention that most of the CNN-based methods report results only for a specific group of sub-cortical structures, and do not show or consider the results for the other, yet important, structures. Note also that the HD comparison is presented only for FreeSurfer, FIRST, and our method, because most of the other approaches do not report HD values.

3.3.3 Results

Table 2 shows the mean DSC and HD values for each of the evaluated methods. Our method showed a better performance than both FreeSurfer and FIRST for all the sub-cortical structures. The overall mean DSC of our method was significantly higher than that of both methods, with mean DSCs of 0.740, 0.808, and 0.843 for FreeSurfer, FIRST, and the proposed strategy, respectively. In terms of HD, our method showed an overall mean of 4.49 mm, whereas FreeSurfer and FIRST yielded 5.21 mm and 4.50 mm, respectively. The proposed strategy significantly outperformed FreeSurfer, while the difference with FIRST was not significant. As shown in Table 2, FreeSurfer performed worst for almost all the structures, while FIRST and our method showed similar performance. On both thalamus structures, our method yielded the worst HD in comparison with the other methods; however, it obtained better HD values for small structures such as the amygdala, accumbens, and hippocampus. In general, the HD metric is very sensitive to outliers; hence, a few misclassified voxels can cause a considerable reduction in performance, as seen in the thalamus results of our method.

Structure | FreeSurfer | FIRST | Shakeri et al. (2016) | BrainSegNet | MS-CNN | Dolz et al. (2017) | Our method
Tha.L | 0.815±0.056 / 5.367±1.168 | 0.893±0.017 / 3.819±0.850 | 0.866±0.023 | 0.88±0.050 | 0.889 | 0.92 | 0.910±0.014 / 7.159±0.402
Tha.R | 0.864±0.022 / 4.471±1.245 | 0.885±0.012 / 4.273±1.137 | 0.874±0.021 | 0.90±0.029 | - | - | 0.914±0.016 / 7.256±0.571
Cau.L | 0.796±0.050 / 6.435±1.939 | 0.783±0.044 / 4.128±1.575 | 0.778±0.053 | 0.86±0.047 | 0.849 | 0.91 | 0.896±0.018 / 4.054±1.412
Cau.R | 0.809±0.048 / 8.201±2.443 | 0.870±0.027 / 3.687±0.791 | 0.783±0.068 | 0.88±0.048 | - | - | 0.896±0.020 / 4.153±1.061
Put.L | 0.789±0.038 / 5.310±0.923 | 0.869±0.020 / 4.421±1.185 | 0.838±0.026 | 0.91±0.022 | 0.875 | 0.90 | 0.900±0.014 / 5.216±1.788
Put.R | 0.829±0.031 / 4.716±1.189 | 0.880±0.010 / 4.725±1.814 | 0.824±0.039 | 0.91±0.023 | - | - | 0.904±0.012 / 4.577±0.410
Pal.L | 0.632±0.171 / 4.652±1.294 | 0.810±0.033 / 3.477±0.572 | 0.763±0.031 | 0.81±0.089 | 0.787 | 0.86 | 0.825±0.050 / 3.849±0.574
Pal.R | 0.774±0.032 / 3.966±0.793 | 0.809±0.037 / 3.990±1.075 | 0.736±0.055 | 0.83±0.086 | - | - | 0.829±0.046 / 3.700±0.576
Hip.L | 0.760±0.036 / 5.787±1.264 | 0.806±0.023 / 5.571±1.592 | - | 0.81±0.065 | 0.788 | - | 0.851±0.024 / 4.177±1.087
Hip.R | 0.767±0.060 / 5.615±1.600 | 0.817±0.023 / 4.349±0.984 | - | 0.83±0.071 | - | - | 0.851±0.024 / 4.124±0.824
Amy.L | 0.661±0.069 / 5.521±1.517 | 0.742±0.064 / 4.648±1.950 | - | 0.76±0.087 | 0.654 | - | 0.763±0.052 / 4.326±0.822
Amy.R | 0.690±0.067 / 4.720±1.553 | 0.757±0.062 / 4.402±1.493 | - | 0.71±0.087 | - | - | 0.768±0.058 / 4.292±1.064
Acc.L | 0.604±0.071 / 3.634±0.783 | 0.684±0.098 / 7.770±8.803 | - | - | - | - | 0.744±0.053 / 3.026±0.676
Acc.R | 0.574±0.074 / 4.507±1.077 | 0.703±0.076 / 3.733±1.482 | - | - | - | - | 0.752±0.047 / 2.995±0.609
Avg. | 0.740±0.110 / 5.207±1.761 | 0.808±0.080 / 4.499±2.810 | 0.808±0.063 | 0.841±0.064 | 0.807 | 0.898 | 0.843±0.071 / 4.493±1.533
Table 2: Comparison of our method with the state-of-the-art methods as well as previous CNN approaches on the IBSR dataset. For FreeSurfer, FIRST, and our method, each cell reports mean DSC ± standard deviation / mean HD (mm) ± standard deviation; for the other methods, only DSC is reported. MS-CNN and Dolz et al. report a single value for the left and right parts together, shown in the left (.L) row. "-" indicates that no results were reported for the corresponding structure. The average (Avg.) values show the mean over the reported structure scores. Structure acronyms are: left thalamus (Tha.L), right thalamus (Tha.R), left caudate (Cau.L), right caudate (Cau.R), left putamen (Put.L), right putamen (Put.R), left pallidum (Pal.L), right pallidum (Pal.R), left hippocampus (Hip.L), right hippocampus (Hip.R), left amygdala (Amy.L), right amygdala (Amy.R), left accumbens (Acc.L), right accumbens (Acc.R).

Compared to the other CNNs, our approach outperformed the method proposed by Shakeri et al. (mean DSC = 0.808) on the eight evaluated structures. Similarly, the performance of the proposed approach was also superior on the six structures evaluated in the work of Mehta et al. (mean DSC = 0.841). Further, we compared our method with MS-CNN, which reported average DSC values for six structures with left and right parts together (overall DSC = 0.807); our method yielded higher DSC scores for all of these structures. Finally, when compared with the work of Dolz et al., our method showed a comparable performance, although that work reported slightly higher average DSC values for the four biggest structures.

3.4 Effect of the spatial priors

We ran experiments using the proposed method with and without the spatial priors to determine the effect of these features on segmentation performance. For this experiment, we analyzed the results in terms of DSC on the MICCAI 2012 dataset; for simplicity, we do not present the corresponding results for the IBSR 18 dataset, since they led to a similar outcome. In order to test the network without the spatial features, we modified the architecture (Figure 1) by removing the branch of atlas probabilities and keeping only the three convolutional branches.

Table 3 shows the DSC results of our method with random sampling, without spatial features, and with the final configuration.

Structure | Random sampling | No atlas | Final method
Tha.L | 0.860±0.013 | 0.911±0.024 | 0.921±0.017
Tha.R | 0.862±0.014 | 0.917±0.017 | 0.920±0.016
Cau.L | 0.831±0.067 | 0.880±0.103 | 0.894±0.071
Cau.R | 0.834±0.048 | 0.864±0.131 | 0.892±0.057
Put.L | 0.871±0.024 | 0.900±0.073 | 0.916±0.023
Put.R | 0.872±0.027 | 0.913±0.029 | 0.914±0.031
Pal.L | 0.784±0.040 | 0.852±0.086 | 0.843±0.101
Pal.R | 0.775±0.057 | 0.833±0.099 | 0.861±0.049
Hip.L | 0.778±0.034 | 0.871±0.019 | 0.876±0.020
Hip.R | 0.770±0.026 | 0.876±0.018 | 0.879±0.020
Amy.L | 0.709±0.025 | 0.824±0.037 | 0.833±0.032
Amy.R | 0.716±0.054 | 0.819±0.035 | 0.821±0.027
Acc.L | 0.744±0.060 | 0.796±0.052 | 0.799±0.052
Acc.R | 0.689±0.091 | 0.753±0.106 | 0.791±0.067
Avg. | 0.792±0.076 | 0.858±0.083 | 0.869±0.064

Table 3: Effect of the spatial features and the proposed sample selection technique on the MICCAI 2012 dataset (mean DSC ± standard deviation). Random sampling – method without the boundary sample selection (but including the spatial priors). No atlas – method without the atlas priors (but using the sampling technique). Final method – proposed method including both the spatial features and the sampling technique. Structure acronyms are: left thalamus (Tha.L), right thalamus (Tha.R), left caudate (Cau.L), right caudate (Cau.R), left putamen (Put.L), right putamen (Put.R), left pallidum (Pal.L), right pallidum (Pal.R), left hippocampus (Hip.L), right hippocampus (Hip.R), left amygdala (Amy.L), right amygdala (Amy.R), left accumbens (Acc.L), right accumbens (Acc.R).

Inclusion of the spatial features significantly improved the overall DSC, as well as the results for almost all the structures. The segmentation difference can be seen in Figure 4,

Figure 4: Comparison of segmentation output for the difficult areas of the (A) caudate, (B) pallidum, and (C) accumbens structures in some of the images from MICCAI 2012 dataset using the proposed method with and without using the spatial priors. The caudate and pallidum areas are shown in red and green circles respectively from axial view, and accumbens is shown in blue circle from coronal view.

where difficult areas of the caudate, pallidum, and accumbens structures were segmented better by the method that comprised the spatial features. Hence, the spatial priors helped to overcome difficult areas, producing more accurate segmentation for some images that had intensity and shape irregularities that could not be observed in any of the training images.

3.5 Effect of sample selection

In this section, we show the effect of sample selection from the structure boundaries using the MICCAI 2012 dataset. For this experiment, random sample selection from all the brain tissues was used for training the network. For every epoch, we extracted the same number of voxels () for both the sub-cortical structures () and the background (). Here, background voxels were randomly selected from the whole brain volume, instead of only from the structure boundaries (see Figure 1(d)). The network was again trained for 200 epochs using the same configuration, with the spatial features included in training.
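The boundary-restricted alternative contrasted in this experiment can be implemented by building a narrow band around the labelled structures with morphological dilation and drawing the negative samples from that band. The sketch below is our own simplification using SciPy, with hypothetical function names:

```python
import numpy as np
from scipy import ndimage

def boundary_background_voxels(labels, band_width=2):
    """Coordinates of background voxels lying within `band_width`
    voxels of any labelled structure (the 'difficult' negatives)."""
    foreground = labels > 0
    dilated = ndimage.binary_dilation(foreground, iterations=band_width)
    band = dilated & ~foreground  # background shell around the structures
    return np.argwhere(band)

def sample_balanced(labels, n_per_class, rng=np.random.default_rng(0)):
    """Pick equal numbers of structure voxels and boundary-background voxels."""
    pos = np.argwhere(labels > 0)
    neg = boundary_background_voxels(labels)
    pos_idx = rng.choice(len(pos), size=min(n_per_class, len(pos)), replace=False)
    neg_idx = rng.choice(len(neg), size=min(n_per_class, len(neg)), replace=False)
    return pos[pos_idx], neg[neg_idx]
```

Replacing `boundary_background_voxels` with a draw over the whole brain mask reproduces the random sampling baseline used in this experiment.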

Table 3 shows the results corresponding to this experiment. The mean DSC obtained with our network without the sample selection technique was compared to that of the final approach. Accordingly, the proposed sample selection technique significantly improved the network’s performance on average as well as for each of the structures. Figure 5 illustrates the segmentation results produced by our final approach and by the variant without sampling from the borders. As can be seen from the difference between the ground truth and segmentation masks, the final strategy produced better segmentations on the boundaries than the random sample selection method.

Figure 5: Illustration of misclassification occurrence on borders. MICCAI 2012 dataset. (A, B) T1-w image and manual segmentation; (C, D) segmentation using random sample selection and difference from groundtruth; (E, F) segmentation using the sample selection from borders and difference from groundtruth.

In fact, the difference between our segmentation and the ground truth mask was not substantial, amounting to only a few voxels. We can also observe that the intensities of the border voxels of the structures are mostly ambiguous. Therefore, assigning these voxels to a structure or to the background is highly dependent on the ground truth annotations.

4 Discussion

In this paper, we have proposed a fully automated 2.5D patch-based CNN approach that combines both convolutional and a priori spatial features for accurate segmentation of the sub-cortical brain structures. In our approach, a structural sub-cortical atlas is registered to the image space to extract the spatial probability of each voxel, which is then fused with the extracted convolutional features in the fully connected layers. The inclusion of the spatial information increases the execution time by adding an atlas registration step. However, it allows us to filter out misclassified regions larger than the actual structures in the segmentation output, which may appear in unobserved areas of the brain (i.e., areas not included in the training phase) as a consequence of applying restricted sampling. As seen in all the experiments, the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method, which outperforms or shows a comparable performance to both classic and recent learning approaches for segmenting the sub-cortical structures.

Compared to other state-of-the-art techniques such as FreeSurfer and FIRST, the spatial agreement of the proposed method with the manual segmentation is clearly higher on all evaluated datasets. As seen in other radiological tasks, this reinforces the effectiveness of CNN techniques when manual expert annotations are available. On the MICCAI 2012 dataset, our method shows an excellent performance, slightly outperforming the best challenge participant strategy, PICSL. Although not directly evaluated, our method also clearly reduces the training and inference time; it has to be noted, however, that most of the execution time of PICSL is due to computationally demanding registration processes carried out on CPU, while our method relies on GPU processors to speed up training. Other CNN methods have also been evaluated on the MICCAI 2012 database (Wachinger et al., 2017; Mehta et al., 2017). However, these works do not report exact evaluation values for the sub-cortical structures, which prevents a quantitative comparison.

In contrast, the different CNN methods that have been evaluated on the IBSR 18 dataset have reported exact numerical values. When compared to other CNN approaches, our method also showed a significant increase in performance with respect to most of them, and a comparable performance to the method proposed by Dolz et al. However, as seen in Section 3.3, previous studies do not always deal with all the sub-cortical structures, restricting a more detailed comparison with respect to other methods. Additionally, the training methodology also differed among the strategies. In this respect, although all our experiments were carried out using the leave-one-out approach, we also repeated our IBSR 18 experiments using a 6-fold (15 training and 3 testing) validation strategy to allow a fair comparison with some of the considered methods. The complete results of the 6-fold validation strategy are not shown in the paper for simplicity, but our network achieved similar results, with only a marginal difference in DSC with respect to the leave-one-out strategy, showing the robustness of the proposed approach to changes in the number of training images.
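The 6-fold protocol mentioned above (15 training / 3 testing subjects per fold over the 18 IBSR images) can be reproduced with a standard cross-validation split. A minimal sketch, with made-up subject identifiers:

```python
from sklearn.model_selection import KFold

subjects = [f"IBSR_{i:02d}" for i in range(1, 19)]  # 18 IBSR subjects
kfold = KFold(n_splits=6, shuffle=True, random_state=42)

for fold, (train_idx, test_idx) in enumerate(kfold.split(subjects)):
    train = [subjects[i] for i in train_idx]  # 15 training subjects
    test = [subjects[i] for i in test_idx]    # 3 held-out test subjects
```

With 18 subjects and 6 folds, every subject appears in exactly one test set, so the folds jointly cover the whole dataset.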

According to the experimental results, adding the spatial features to the CNN significantly improved the performance of the network. The atlas priors proved useful in guiding the network when segmenting difficult areas. As we have seen in Section 3.4, the CNN that leveraged the spatial priors coped with these intensity-based difficulties. Accordingly, by providing the atlas probabilities, we ensure that the anatomical shape and structure are taken into account before assigning a label to a voxel. Since the sub-cortical structures follow a similar anatomical arrangement in all patients, the inclusion of the spatial features makes the segmentation approach more robust to irregularities in the intensity-based features obtained from T1-w images by providing additional location-based information. Despite being prone to the inherent errors of image registration, the addition of these a priori spatial class probabilities, or of other explicitly fused problem-specific information, may have other direct benefits, such as reducing the effect of low contrast, poor resolution, noise, and artifacts close to the structure boundaries.

Our results also show the importance of sampling and class balancing in the training process. By feeding the network with only the most difficult negative samples, we ensured that useful samples were used during training. When compared to the rest of the CNN approaches, our method without restricted sampling yields a performance similar to methods such as the one of Shakeri et al. (2016) and MS-CNN (Bao & Chung, 2016) when trained under the same conditions, which highlights the effectiveness of the proposed sampling strategy. As a counterpart, this kind of approach tends to generate false positive regions outside the sub-cortical space, due to the lack of contextual spatial information about the whole brain. Within our approach, we take advantage of the already computed spatial priors to restrict the segmentation to a region of interest containing the sub-cortical structures, which remarkably reduces the inference time. Remaining false positive voxels are then post-processed by keeping only the largest connected region for each class.
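The false-positive post-processing step described here (keeping only the largest connected region per class) can be sketched with SciPy's connected-component labelling. This is our own minimal version, not the released implementation:

```python
import numpy as np
from scipy import ndimage

def keep_largest_region_per_class(segmentation):
    """Zero out all but the largest connected component of each class."""
    cleaned = np.zeros_like(segmentation)
    for cls in np.unique(segmentation):
        if cls == 0:  # skip background
            continue
        components, n = ndimage.label(segmentation == cls)
        if n == 0:
            continue
        # size of each connected component of this class
        sizes = ndimage.sum(segmentation == cls, components, range(1, n + 1))
        largest = np.argmax(sizes) + 1  # component labels start at 1
        cleaned[components == largest] = cls
    return cleaned
```

Because each sub-cortical structure is a single compact region, any smaller disconnected blob of the same label can safely be treated as a false positive.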

Our study has some limitations. As with other supervised training strategies, the accuracy of CNN methods tends to decrease significantly on image domains (i.e., different MRI scanners, imaging protocols, etc.) other than the ones used for training. Nevertheless, there is still little evidence of the capability of CNN methods in radiological tasks with small or no training datasets, which highlights the need for further study of this issue to increase the accuracy of such approaches. With no more evidence in this field, FIRST may be more appropriate in scenarios where few or no training data are available. Another constraint involves the applicability of the proposed method to datasets of images with neurological diseases comprising, for instance, white matter lesions; it has recently been shown in González-Villà et al. (2017) that such conditions affect the brain structure segmentation task.

5 Conclusion

In this paper, we have presented a novel CNN-based deep learning approach for the accurate and robust segmentation of the sub-cortical brain structures that combines both convolutional and prior spatial features to improve the segmentation accuracy. In order to increase the accuracy of the classifier, we have proposed training the network using a restricted sample selection that forces the network to learn the most difficult parts of the structures. As seen in all the experiments carried out on the public MICCAI 2012 and IBSR 18 datasets, the addition of the spatial priors and the restricted sampling strategy have a significant impact on the effectiveness of the proposed method, which outperforms or shows a comparable performance to state-of-the-art methods such as FreeSurfer and FIRST, as well as to different recently proposed CNN approaches. In order to encourage reproducibility and the use of the proposed method, a public version is available for download by the neuroimaging community at our research website.

6 Acknowledgements

Kaisar Kushibar and Jose Bernal hold FI-DGR2017 grants from the Catalan Government with reference numbers 2017FI_B00372 and 2017FI_B00476, respectively. This work has been partially supported by La Fundació la Marató de TV3, by the Retos de Investigación grants TIN2014-55710-R and TIN2015-73563-JIN from the Ministerio de Ciencia y Tecnologia, and by the MPC UdG 2016/022 grant. The authors gratefully acknowledge the support of the NVIDIA Corporation with the donation of the TITAN-X PASCAL GPU used in this research.



  • Altshuler et al. (1998) Altshuler, L. L., Bartzokis, G., Grieder, T., Curran, J., & Mintz, J. (1998). Amygdala enlargement in bipolar disorder and hippocampal reduction in schizophrenia: an MRI study demonstrating neuroanatomic specificity. Archives of General Psychiatry, 55, 663–664.
  • Bao & Chung (2016) Bao, S., & Chung, A. C. (2016). Multi-scale structured CNN with label consistency for brain MR image segmentation. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, (pp. 1–5).
  • Bergstra et al. (2011) Bergstra, J., Breuleux, O., Lamblin, P., Pascanu, R., Delalleau, O., Desjardins, G., Goodfellow, I., Bergeron, A., Bengio, Y., & Kaelbling, P. (2011). Theano: Deep learning on GPUs with python. Journal of Machine Learning Research, 1, 1–48.
  • Brébisson & Montana (2015) Brébisson, A., & Montana, G. (2015). Deep neural networks for anatomical brain segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 20–28).
  • Caviness Jr et al. (1996) Caviness Jr, V. S., Meyer, J., Makris, N., & Kennedy, D. N. (1996). MRI-based topographic parcellation of human neocortex: an anatomically specified method with estimate of reliability. Journal of Cognitive Neuroscience, 8, 566–587.
  • Chen et al. (2016) Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2016). DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. ArXiv e-prints, arXiv:1606.00915.
  • Cootes et al. (2001) Cootes, T. F., Edwards, G. J., & Taylor, C. J. (2001). Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 681--685.
  • Cootes et al. (1995) Cootes, T. F., Taylor, C. J., Cooper, D. H., & Graham, J. (1995). Active shape models-their training and application. Computer Vision and Image Understanding, 61, 38--59.
  • Debernard et al. (2015) Debernard, L., Melzer, T. R., Alla, S., Eagle, J., Van Stockum, S., Graham, C., Osborne, J. R., Dalrymple-Alford, J. C., Miller, D. H., & Mason, D. F. (2015). Deep grey matter MRI abnormalities and cognitive function in relapsing-remitting multiple sclerosis. Psychiatry Research: Neuroimaging, 234, 352--361.
  • Dice (1945) Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology, 26, 297--302.
  • Dolz et al. (2017) Dolz, J., Desrosiers, C., & Ayed, I. B. (2017). 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. Neuroimage, in press. doi:10.1016/j.neuroimage.2017.04.039.
  • Esteva et al. (2017) Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542, 115--118.
  • Fischl (2012) Fischl, B. (2012). FreeSurfer. Neuroimage, 62, 774--781.
  • Fischl et al. (2002) Fischl, B., Salat, D. H., Busa, E., Albert, M., Dieterich, M., Haselgrove, C., Van Der Kouwe, A., Killiany, R., Kennedy, D., Klaveness, S. et al. (2002). Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron, 33, 341--355.
  • Fox et al. (1996) Fox, N., Warrington, E., Freeborough, P., Hartikainen, P., Kennedy, A., Stevens, J., & Rossor, M. N. (1996). Presymptomatic hippocampal atrophy in Alzheimer’s disease: A longitudinal MRI study. Brain, 119, 2001--2007.
  • Ghafoorian et al. (2017) Ghafoorian, M., Karssemeijer, N., Heskes, T., van Uden, I. W., Sanchez, C. I., Litjens, G., de Leeuw, F.-E., van Ginneken, B., Marchiori, E., & Platel, B. (2017). Location sensitive deep convolutional neural networks for segmentation of white matter hyperintensities. Scientific Reports, 7, 5110.
  • Girshick et al. (2014) Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580--587).
  • Glorot et al. (2011) Glorot, X., Bordes, A., & Bengio, Y. (2011). Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (pp. 315--323).
  • González-Villà et al. (2016) González-Villà, S., Oliver, A., Valverde, S., Wang, L., Zwiggelaar, R., & Lladó, X. (2016). A review on brain structures segmentation in magnetic resonance imaging. Artificial Intelligence in Medicine, 73, 45--69.
  • González-Villà et al. (2017) González-Villà, S., Valverde, S., Cabezas, M., Pareto, D., Vilanova, J. C., Ramió-Torrentà, L., Rovira, À., Oliver, A., & Lladó, X. (2017). Evaluating the effect of multiple sclerosis lesions on automatic brain structure segmentation. Neuroimage: Clinical, 15, 228--238.
  • Hazlett et al. (2017) Hazlett, H. C., Gu, H., Munsell, B. C., Kim, S. H., Styner, M., Wolff, J. J., Elison, J. T., Swanson, M. R., Zhu, H., Botteron, K. N. et al. (2017). Early brain development in infants at high risk for autism spectrum disorder. Nature, 542, 348--351.
  • He et al. (2016) He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770--778).
  • Kikinis et al. (1996) Kikinis, R., Shenton, M. E., Iosifescu, D. V., McCarley, R. W., Saiviroonporn, P., Hokama, H. H., Robatino, A., Metcalf, D., Wible, C. G., Portas, C. M. et al. (1996). A digital brain atlas for surgical planning, model-driven segmentation, and teaching. IEEE Transactions on Visualization and Computer Graphics, 2, 232--241.
  • Kingma & Ba (2014) Kingma, D. P., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. ArXiv e-prints, arXiv:1412.6980.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097--1105).
  • Lafferty et al. (2001) Lafferty, J., McCallum, A., Pereira, F. et al. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (pp. 282--289). volume 1.
  • Landman & Warfield (2012) Landman, B., & Warfield, S. (2012). MICCAI 2012 workshop on multi-atlas labeling. In Medical Image Computing and Computer Assisted Intervention Conference.
  • Lawrie et al. (2003) Lawrie, S. M., Whalley, H. C., Job, D. E., & Johnstone, E. C. (2003). Structural and functional abnormalities of the amygdala in schizophrenia. Annals of the New York Academy of Sciences, 985, 445--460.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278--2324.
  • Liu et al. (2017) Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G. E., Kohlberger, T., Boyko, A., Venugopalan, S., Timofeev, A., Nelson, P. Q., Corrado, G. S., Hipp, J. D., Peng, L., & Stumpe, M. C. (2017). Detecting Cancer Metastases on Gigapixel Pathology Images. ArXiv e-prints, arXiv:1703.02442.
  • Long et al. (2015) Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431--3440).
  • Mak et al. (2014) Mak, E., Bergsland, N., Dwyer, M., Zivadinov, R., & Kandiah, N. (2014). Subcortical atrophy is associated with cognitive impairment in mild parkinson disease: a combined investigation of volumetric changes, cortical thickness, and vertex-based shape analysis. American Journal of Neuroradiology, 35, 2257--2264.
  • Mehta et al. (2017) Mehta, R., Majumdar, A., & Sivaswamy, J. (2017). BrainSegNet: a convolutional neural network architecture for automated segmentation of human brain structures. Journal of Medical Imaging, 4, 024003--024003.
  • Milham et al. (2005) Milham, M. P., Nugent, A. C., Drevets, W. C., Dickstein, D. S., Leibenluft, E., Ernst, M., Charney, D., & Pine, D. S. (2005). Selective reduction in amygdala volume in pediatric anxiety disorders: a voxel-based morphometry investigation. Biological Psychiatry, 57, 961--966.
  • Milletari et al. (2017) Milletari, F. et al. (2017). Hough-CNN: deep learning for segmentation of deep brain regions in MRI and ultrasound. Computer Vision and Image Understanding.
  • Modat et al. (2010) Modat, M., Ridgway, G. R., Taylor, Z. A., Lehmann, M., Barnes, J., Hawkes, D. J., Fox, N. C., & Ourselin, S. (2010). Fast free-form deformation using graphics processing units. Computer Methods and Programs in Biomedicine, 98, 278--284.
  • Ourselin et al. (2000) Ourselin, S., Roche, A., Prima, S., & Ayache, N. (2000). Block matching: A general framework to improve robustness of rigid registration of medical images. In MICCAI (pp. 557--566). Springer volume 1935.
  • Patenaude et al. (2011) Patenaude, B., Smith, S. M., Kennedy, D. N., & Jenkinson, M. (2011). A Bayesian model of shape and appearance for subcortical brain segmentation. Neuroimage, 56, 907--922.
  • Phillips et al. (2015) Phillips, J. L., Batten, L. A., Tremblay, P., Aldosary, F., & Blier, P. (2015). A prospective, longitudinal study of the effect of remission on cortical thickness and hippocampal volume in patients with treatment-resistant depression. International Journal of Neuropsychopharmacology, 18, pyv037.
  • Pitiot et al. (2004) Pitiot, A., Delingette, H., Thompson, P. M., & Ayache, N. (2004). Expert knowledge-guided segmentation system for brain MRI. Neuroimage, 23, S85--S96.
  • Sarraf et al. (2016) Sarraf, S., Tofighi, G. et al. (2016). DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI. bioRxiv, 070441.
  • Shakeri et al. (2016) Shakeri, M., Tsogkas, S., Ferrante, E., Lippe, S., Kadoury, S., Paragios, N., & Kokkinos, I. (2016). Sub-cortical brain structure segmentation using F-CNN’s. In Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on (pp. 269--272). IEEE.
  • Simonyan & Zisserman (2014) Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. ArXiv e-prints, arXiv:1409.1556.
  • Smith (2002) Smith, S. M. (2002). Fast robust automated brain extraction. Human Brain Mapping, 17, 143--155.
  • Szegedy et al. (2015) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1--9).
  • Tustison et al. (2010) Tustison, N. J., Avants, B. B., Cook, P. A., Zheng, Y., Egan, A., Yushkevich, P. A., & Gee, J. C. (2010). N4ITK: improved N3 bias correction. IEEE Transactions on Medical Imaging, 29, 1310--1320.
  • Wachinger et al. (2017) Wachinger, C., Reuter, M., & Klein, T. (2017). DeepNAT: Deep convolutional neural network for segmenting neuroanatomy. Neuroimage, in press. doi:10.1016/j.neuroimage.2017.02.035.
  • Wang & Yushkevich (2013) Wang, H., & Yushkevich, P. A. (2013). Groupwise segmentation with multi-atlas joint label fusion. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 711--718). Springer.