1 Introduction
Image segmentation is a fundamental process in several medical applications. Diagnosis, treatment planning and monitoring, as well as pathology characterization, all benefit from accurate segmentation. In this paper we are interested in brain subcortical structures located in the frontostriatal system. Previous studies have shown the involvement of frontostriatal structures in different neurodegenerative and neuropsychiatric disorders, including schizophrenia, Alzheimer’s disease, attention deficit, and subtypes of epilepsy [1]. Segmenting these parts of the brain enables a physician to extract various volumetric and morphological indicators, facilitating the quantitative analysis and characterization of several neurological diseases and their evolution.
In the past few years, deep learning techniques, and particularly Convolutional Neural Networks (CNNs), have rapidly become the tool of choice for tackling challenging computer vision tasks. CNNs were popularized by LeCun after delivering state-of-the-art results on handwritten digit recognition [2]. However, they fell out of favor in the following years, mostly due to hardware and training-data limitations. Nowadays, the availability of large-scale datasets (e.g. ImageNet), powerful GPUs and appropriate software libraries has rekindled the interest in deep learning and has made it possible to harness the power of deep architectures. Krizhevsky
et al. [3] published results demonstrating the clear superiority of deep architectures over handcrafted features and shallow networks for the task of image classification. Since then, CNNs have helped set new performance records for many other tasks: object detection, texture recognition and semantic object segmentation, to name a few.

Our work is similar in spirit to [4], but with some notable differences. In [4] the authors train one CNN for each of the three orthogonal views of MRI scans for knee cartilage segmentation, with the loss computed on the concatenated outputs of the three networks. The inputs to each CNN are
image patches and the output is a softmax probability of the central pixel belonging to the tibial articular cartilage. In contrast, our method operates on full 2D image slices, exploiting context information to accurately segment regions of interest in the brain. In addition, we use
fully convolutional CNNs [5] to construct dense segmentation maps for the whole image, instead of classifying individual patches. Furthermore, our method handles multiple class labels instead of delivering a foreground-background segmentation, and it does so efficiently, in a single forward pass.

CNNs are characterized by large receptive fields that allow us to exploit context information across the spatial plane. Processing 2D slices individually, however, means that we remain agnostic to 3D context, which is important since we are dealing with volumetric data. The obvious approach of operating directly on the 3D volume instead of 2D slices would drastically reduce the amount of data available for training, making our system prone to overfitting, while increasing its computational requirements. Alternatively, we construct a Markov Random Field (MRF) on top of the CNN output in order to impose volumetric homogeneity on the final results. The CNN scores are considered as unary potentials of a multi-label energy minimization problem, where spatial homogeneity is propagated through the pairwise relations of a 6-neighborhood grid. For inference we choose the popular alpha-expansion technique, which leads to guaranteed optimality bounds for the type of energies we define [6].
2 Using CNNs for Semantic Segmentation
Our network is inspired by the Deeplab architecture that was recently proposed for semantic segmentation of objects [7]. Due to limited space, we refer the reader to [7] for details. One obvious and straightforward way to adapt the Deeplab network to our task would be to simply fine-tune the last three convolutional layers, which replace their fully connected counterparts in the VGG-16 network, while initializing the rest of the weights to the VGG-16 values. This is a common approach when adapting an existing architecture to a new task, but given the very different nature of natural RGB images and MR image data (RGB vs. grayscale, varying vs. black background), we decided to train a fully convolutional network from scratch.
Training a deep network from scratch presents us with some challenges. Medical image datasets tend to be smaller than natural image datasets, and segmentation annotations are generally hard to obtain. In our case, we only have a few 3D scans at our disposal, which increases the risk of overfitting. In addition, the repeated pooling and subsampling steps that are applied to the input image as it flows through a CNN decrease the output resolution, making it difficult to detect and segment finer structures in the human brain. To address these challenges, we make a series of design choices for our network: first, we opt for a shallower network, composed of five pairs of convolutional/max-pooling layers. We subsample the input only in the first two max-pooling layers, and keep a stride of 1 for the remaining layers, introducing holes, as in [7]. This allows us to keep increasing the effective receptive field of the filters without further reducing the resolution of the output response maps. Since only the first two pooling layers subsample (each with stride 2), the total subsampling factor of the network is 4, and the output is a downsampled array of response maps with one channel per class label. A one-pixel stride is used for all convolutional layers, and dropout is applied in the layers indicated below. The complete list of layers and important parameters is given in Table 1. At test time, a 2D image is fed to the network and the output is a three-dimensional array of probability maps (one for each class), obtained via a softmax operation. To obtain a brain segmentation at this stage, we simply resize the output to the input image dimensions using bilinear interpolation and assign to each pixel the label with the highest probability. However, we still need to impose volumetric homogeneity on the solution; we propose to do so using Markov Random Fields.
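The effect of introducing holes can be illustrated with a minimal 1D sketch (illustrative NumPy code, not our actual MatConvNet implementation): spacing the kernel taps `dilation` samples apart widens the receptive field without subsampling the output.

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """Dilated ("a trous") convolution: kernel taps are spaced `dilation`
    samples apart, so the receptive field grows without reducing the
    output resolution. Zero padding keeps output length == input length."""
    k = len(kernel)
    span = dilation * (k - 1)  # receptive field minus one
    padded = np.pad(signal, (span // 2, span - span // 2))
    out = np.zeros_like(signal, dtype=float)
    for i in range(len(signal)):
        taps = padded[i : i + span + 1 : dilation]
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(16.0)
k = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, k, dilation=1)  # receptive field of 3 inputs
y2 = dilated_conv1d(x, k, dilation=2)  # receptive field of 5, same resolution
```

Both outputs have the same length as the input; only the spatial extent each output value "sees" changes.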
Table 1: Layers and parameters of the proposed network.

Block | conv kernel | # filters | hole stride | pool kernel | pool stride | dropout
1 | 7×7 | 64 | 1 | 3×3 | 2 | no
2 | 5×5 | 128 | 1 | 3×3 | 2 | no
3 | 3×3 | 256 | 2 | 3×3 | 1 | yes
4 | 3×3 | 512 | 2 | 3×3 | 1 | yes
5 | 3×3 | 512 | 2 | 3×3 | 1 | yes
6 | 4×4 | 1024 | 4 | no pooling | – | yes
7 | 1×1 | 39 | 1 | no pooling | – | no
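The test-time conversion from softmax maps to a label image described above can be sketched as follows (illustrative NumPy code; `prob_maps` stands for the per-class probability maps output by the network):

```python
import numpy as np

def bilinear_resize(m, out_h, out_w):
    """Resize a 2D map with bilinear interpolation (align-corners style)."""
    in_h, in_w = m.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = m[np.ix_(y0, x0)] * (1 - wx) + m[np.ix_(y0, x1)] * wx
    bot = m[np.ix_(y1, x0)] * (1 - wx) + m[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def maps_to_labels(prob_maps, out_h, out_w):
    """Upsample each per-class probability map to the input resolution
    and assign each pixel the label with the highest probability."""
    up = np.stack([bilinear_resize(p, out_h, out_w) for p in prob_maps])
    return up.argmax(axis=0)
```

This produces the per-slice segmentation that the MRF of Section 2.1 then refines across the volume.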
2.1 Multi-label segmentation using CNN-based priors
For every slice of a 3D image, the output of the proposed CNN is a softmax map that indicates the probability of each pixel belonging to a given brain structure (label). We consider the volume formed by the stacked CNN output slices as a prior on the 3D brain structures: for every voxel $p$ of the original image, it provides a probability $S_p(l)$ for each label $l$.
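The shape bookkeeping for this prior is straightforward; a minimal NumPy sketch (illustrative only, with hypothetical names; our actual implementation uses MatConvNet):

```python
import numpy as np

def stack_prior(slice_probs):
    """Stack per-slice CNN softmax outputs, each of shape (L, H, W),
    into an (L, D, H, W) probability volume over the D slices."""
    return np.stack(slice_probs, axis=1)

# two dummy slices with L = 3 labels over a 4x4 grid
prior = stack_prior([np.full((3, 4, 4), 1 / 3.0) for _ in range(2)])
```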
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ be a graph representing a Markov Random Field, where the nodes in $\mathcal{V}$ are variables (voxels) and $\mathcal{E}$ is a standard 6-neighborhood system defining a 3D grid. Variables can take labels from a label space $\mathcal{L}$. A labeling $\mathbf{l}$ assigns one label $l_p \in \mathcal{L}$ to every variable $p$. We define the energy $E(\mathbf{l})$, which consists of unary potentials $\theta_p$ and pairwise potentials $\theta_{pq}$, such that it is minimal when $\mathbf{l}$ corresponds to the best possible labeling.
Unary terms are defined as $\theta_p(l_p) = -\log S_p(l_p)$, so they assign low energy to high probability values. Pairwise terms encode the spatial homogeneity constraint by simply encouraging neighboring variables to take the same semantic label. In order to align the segmentation boundaries with intensity edges, we make this term inversely proportional to the difference of the intensities $I_p$ and $I_q$ associated with the given voxels: $\theta_{pq}(l_p, l_q) = w_{pq}\,\delta(l_p \neq l_q)$, where $w_{pq} = 1 / (1 + |I_p - I_q|)$. Finally, the energy minimization problem is defined as:
$$\mathbf{l}^{*} = \operatorname*{argmin}_{\mathbf{l}} \; \sum_{p \in \mathcal{V}} \theta_p(l_p) \; + \sum_{(p,q) \in \mathcal{E}} \theta_{pq}(l_p, l_q) \qquad (1)$$
$\mathbf{l}^{*}$ represents the optimal label assignment. Note that this energy is a metric in the space of labels $\mathcal{L}$; thus, it is guaranteed that using the alpha-expansion technique we can find a solution whose energy lies within a factor of 2 of the optimal energy. Alpha-expansion is a well-known move-making technique for approximate inference using graph cuts, which has been shown to be accurate in a broad range of vision problems. We refer the reader to [6] for a complete discussion of energy minimization using alpha-expansion.
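Alpha-expansion itself requires graph-cut machinery [6]; as an illustration of the objective it optimizes, the following NumPy sketch (hypothetical helper names; the pairwise weight is one simple choice inversely proportional to the intensity difference, as described above) builds the unary potentials and evaluates the energy of Eq. (1) for a given labeling:

```python
import numpy as np

def unary_potentials(probs, eps=1e-8):
    """probs: (L, D, H, W) stacked per-slice softmax maps.
    Negative log-probability: low energy for high-probability labels."""
    return -np.log(probs + eps)

def labeling_energy(labels, unary, intensity):
    """Energy of Eq. (1) for a labeling.
    labels, intensity: (D, H, W); unary: (L, D, H, W).
    Pairwise terms sum over the 6-neighborhood grid (forward edges
    along each of the three axes)."""
    d, h, w = labels.shape
    # unary part: pick the energy of the assigned label at each voxel
    e = unary[labels,
              np.arange(d)[:, None, None],
              np.arange(h)[None, :, None],
              np.arange(w)[None, None, :]].sum()
    # pairwise part: Potts penalty weighted inversely by intensity difference
    for ax in range(3):
        a = np.take(labels, range(labels.shape[ax] - 1), axis=ax)
        b = np.take(labels, range(1, labels.shape[ax]), axis=ax)
        ia = np.take(intensity, range(intensity.shape[ax] - 1), axis=ax)
        ib = np.take(intensity, range(1, intensity.shape[ax]), axis=ax)
        wgt = 1.0 / (1.0 + np.abs(ia - ib))
        e += (wgt * (a != b)).sum()
    return e
```

With uniform unaries, a homogeneous labeling has zero pairwise cost, while every label change between neighbors adds a penalty modulated by the local intensity edge.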
3 Experiments and Discussion
We used the proposed method to segment a group of subcortical structures located in the frontostriatal network, including the thalamus, caudate, putamen and pallidum. We evaluated our approach on two brain MRI datasets.
The first one is a publicly available dataset provided by the Internet Brain Segmentation Repository (IBSR) [8]. It contains labeled 3D T1-weighted MR scans. In this work we use the subset of primarily subcortical labels, including the left and right thalamus, caudate, putamen, and pallidum. The second dataset is obtained from a Rolandic Epilepsy (RE) study, including children with epilepsy and matched healthy individuals. For each participant, T1-weighted magnetic resonance images (MRI) were acquired with a Philips Achieva scanner. The left and right putamen structures were manually annotated by an experienced user. For both datasets, we process volumes slice by slice, after resizing all slices to a common size. We treat these 2D slices as individual grayscale images to train our CNN.
In the first experiment, we compare the performance of our segmentation method using CNN priors with an approach based on Random Forest (RF) priors, where the same MRF refinement is applied. The RF-based per-voxel likelihoods are computed in the same way as in [9]. Then, the RF probability maps are considered as the unary potentials of a Markov Random Field and alpha-expansion is used to compute the most likely label for each voxel, as explained in Section 2.1. Figure 1 and Figure 2 show the average Dice coefficient, Hausdorff distance, and contour mean distance between the output segmentations and the ground truth for the different structures. These results show that the CNN-based approach achieves a higher Dice coefficient than the RF-based method, while producing lower Hausdorff and contour mean distances.
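For reference, the Dice coefficient we report can be computed from binary masks as follows (minimal NumPy sketch):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice overlap between a binary segmentation and its ground truth:
    2|A ∩ B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, gt).sum() / denom
```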
In the second experiment, we compare the accuracy of our proposed method with two publicly available state-of-the-art automatic segmentation toolboxes, Freesurfer [10] and FSL-FIRST [11]. In Table 2 we report the average Dice coefficient for the left and right structures; these results show that our method provides better segmentations than the state of the art for three subcortical structures in both the IBSR and RE datasets. However, Freesurfer produces better segmentations for the caudate in the IBSR dataset, which could be attributed to the limitation of the CNN in capturing the thin tail areas of the caudate structures. In Figure 3 we show qualitative results.
3.1 CNN Training and Evaluation Details
The input to our network is a single 2D slice from a 3D MRI scan, along with the corresponding label map. We apply data augmentation to avoid overfitting: we use horizontally flipped versions of the input images, as well as versions translated by 5, 10, 15, and 20 pixels along each axis. Other transformations, such as rotation, could be considered as well. The MR image data are centered and the background always takes zero values, so we do not perform mean image subtraction, as is usually the case.
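This augmentation can be sketched as follows (illustrative NumPy code; `shift2d` is a hypothetical helper, and the zero fill it uses is consistent with the black MR background):

```python
import numpy as np

def shift2d(a, dy, dx):
    """Translate a 2D array by (dy, dx) pixels, zero-filling the exposed
    border (consistent with the black MR background)."""
    out = np.zeros_like(a)
    h, w = a.shape
    rows, cols = max(0, h - abs(dy)), max(0, w - abs(dx))
    y_dst, x_dst = max(0, dy), max(0, dx)
    y_src, x_src = max(0, -dy), max(0, -dx)
    out[y_dst:y_dst + rows, x_dst:x_dst + cols] = \
        a[y_src:y_src + rows, x_src:x_src + cols]
    return out

def augment(image, label, shifts=(5, 10, 15, 20)):
    """Original, horizontal flip, and translated copies of a 2D slice,
    each paired with the identically transformed label map."""
    pairs = [(image, label), (np.fliplr(image), np.fliplr(label))]
    for s in shifts:
        for dy, dx in ((s, 0), (-s, 0), (0, s), (0, -s)):
            pairs.append((shift2d(image, dy, dx), shift2d(label, dy, dx)))
    return pairs
```

The key point is that image and label map are always transformed together, so the augmented pairs remain consistent.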
In the case of IBSR, we split the available data into three sets. Each time, we use two of the sets as training data and the third set as test data. One of the training volumes is left out and used as validation data. Similarly, we split RE into two subsets of equal size, using one for training and one for testing each time. We train on both datasets with standard SGD with momentum and a softmax loss, starting from a high learning rate and dropping it at a logarithmic rate. For all our experiments we used MATLAB and the deep learning library MatConvNet [12]. Code, computed probability maps, and more results can be found at https://github.com/tsogkas/brainseg.
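A learning-rate schedule of this shape can be generated as follows (illustrative NumPy sketch; the start/end rates shown are hypothetical examples, not the values used in our runs):

```python
import numpy as np

def log_lr_schedule(lr_start, lr_end, n_epochs):
    """One learning rate per epoch, dropping at a logarithmic rate
    (i.e. linearly in log-space) from lr_start to lr_end."""
    return np.logspace(np.log10(lr_start), np.log10(lr_end), n_epochs)

rates = log_lr_schedule(1e-2, 1e-4, 3)  # hypothetical values
```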
We also experimented with CNNs trained on 2D slices from the other two views (sagittal and coronal), but the resulting models performed poorly. The problem is rooted in the inherent symmetry of some brain structures and the fact that the CNN is evaluated on individual slices, ignoring 3D structure. For instance, when processing slices across the sagittal view, the right and left putamen appear at roughly the same positions in the image. They are also very similar in terms of shape and appearance, which fools the system into assigning the same label to both regions. This simple example demonstrates the need for richer priors that take the full volume structure into account when assigning class labels.
4 Conclusion
In this paper, we proposed a deep learning framework for segmenting frontostriatal subcortical structures in MR images of the human brain. We trained a fully convolutional neural network for the segmentation of 2D slices and treated the output probability maps as a proxy for the respective voxel likelihoods. We further improved segmentation results by using the CNN outputs as potentials of a Markov Random Field (MRF) to impose spatial volumetric homogeneity. Our experiments show that the proposed method outperforms approaches based on other learned priors, as well as state-of-the-art segmentation methods. However, we also note some limitations. First, the current model is not able to accurately capture the thin tail areas of the caudate structures. Second, symmetric structures confound the CNN training process when considering views that are parallel to the plane of symmetry. Third, graph-based methods have to be used to impose volumetric consistency, since training is done on 2D slices. Different network layouts that take the volumetric structure into account could help overcome these limitations.
Table 2: Average Dice coefficients for the proposed method, Freesurfer and FSL-FIRST.

Structure | Proposed | Freesurfer | FSL
IBSR-Thalamus | 0.87 | 0.86 | 0.85
IBSR-Caudate | 0.78 | 0.82 | 0.68
IBSR-Putamen | 0.83 | 0.81 | 0.81
IBSR-Pallidum | 0.75 | 0.71 | 0.73
RE-Putamen | 0.89 | 0.74 | 0.88
References
 [1] Y. Chudasama and T.W. Robbins, “Functions of frontostriatal systems in cognition: Comparative neuropsychopharmacological studies in rats, monkeys and humans,” Biological Psychology, 2006.
 [2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, 1998.
 [3] A. Krizhevsky, I. Sutskever, and G.E. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012.

 [4] A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam, and M. Nielsen, “Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network,” in MICCAI, 2013.
 [5] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015.
 [6] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” PAMI, 2001.
 [7] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Semantic image segmentation with deep convolutional nets and fully connected CRFs,” arXiv preprint arXiv:1412.7062, 2014.

 [8] T. Rohlfing, “Image similarity and tissue overlaps as surrogates for image registration accuracy: Widely used but unreliable,” IEEE Transactions on Medical Imaging, 2012.
 [9] S. Alchatzidis, A. Sotiras, and N. Paragios, “Discrete multi atlas segmentation using agreement constraints,” in BMVC, 2014.
 [10] B. Fischl, D. H. Salat, E. Busa, M. Albert, M. Dieterich, C. Haselgrove, A. Van Der Kouwe, R. Killiany, D. Kennedy, S. Klaveness, et al., “Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain,” Neuron, 2002.
 [11] B. Patenaude, S. M. Smith, D. N. Kennedy, and M. Jenkinson, “A Bayesian model of shape and appearance for subcortical brain segmentation,” NeuroImage, 2011.
 [12] A. Vedaldi and K. Lenc, “MatConvNet: convolutional neural networks for MATLAB,” arXiv preprint arXiv:1412.4564, 2014.