1 Introduction
Segmentation of anatomical structures is an essential task for a range of medical image processing applications such as image-based diagnosis, anatomical structure modeling, surgical planning and guidance. Although automatic segmentation methods [1] have been investigated for many years, they can rarely achieve sufficiently accurate and robust results to be useful for many medical imaging applications. This is mainly due to poor image quality (with noise, artifacts and low contrast), large variations among patients, inhomogeneous appearances brought by pathology, and variability of protocols among clinicians leading to different definitions of a given structure boundary. Interactive segmentation methods, which take advantage of users’ knowledge of anatomy and the clinical question to overcome the challenges faced by automatic methods, are widely used for higher accuracy and robustness [2].
Although leveraging user interactions helps to obtain more precise segmentation [3, 4, 5, 6], the resulting requirement for many user interactions increases the burden on the user. A good interactive segmentation method should require as few user interactions as possible, leading to interaction efficiency. Machine learning methods are commonly used to reduce user interactions. For example, GrabCut [7] uses Gaussian Mixture Models to represent color distributions. It requires the user to provide a bounding box around the object of interest for segmentation and allows additional scribbles for refinement. SlicSeg [8] employs Online Random Forests to segment a Magnetic Resonance Imaging (MRI) volume by learning from user-provided scribbles in only one slice. Active learning is used in [9] to actively select candidate regions for querying the user.

Recently, deep learning techniques with convolutional neural networks (CNNs) have achieved increasing success in image segmentation [10, 11, 12]. They can find the most suitable features through automatic learning instead of manual design. By learning from large amounts of training data, CNNs have achieved state-of-the-art performance for automatic segmentation [12, 13, 14]. One of the most widely used CNNs is the Fully Convolutional Network (FCN) [11]. It outputs the segmentation directly by computing forward propagation only once at testing time.
Recent advances of CNNs for image segmentation mainly focus on two aspects. The first is to overcome the problem of reduced resolution caused by the repeated combination of max-pooling and downsampling. Though upsampling layers can be used to recover the resolution, this easily leads to blob-like segmentation results and low accuracy for tiny structures [11]. In [15, 12], dilated convolution was proposed to replace some downsampling layers; it allows exponential expansion of the receptive field without loss of resolution. However, the CNNs in [15, 12] keep three layers of pooling and downsampling, so their output resolution is still reduced eight times compared with the input. The second aspect is to enforce inter-pixel dependency to get a spatially regularized result. This helps to recover edge details and reduce noise in pixel classification. DeepLab [16] and DeepMedic [14] used fully connected Conditional Random Fields (CRFs) as a post-processing step. However, the parameters of these CRFs rely on manual tuning, which is time-consuming and may not yield optimal values. It was shown in [17] that the CRF can be formulated as a Recurrent Neural Network (RNN) so that it can be trained end-to-end using the back-propagation algorithm. However, this CRF constrains the pairwise potentials to be Gaussian functions, which may be too restrictive for some complex cases, and the method does not apply automatic learning to all its parameters. Thus, using more free-form learnable pairwise potential functions and allowing automatic learning of all the parameters can potentially achieve better results.
This paper aims to integrate user interactions into CNN frameworks to obtain accurate and robust segmentation of 2D and 3D medical images, and at the same time, we aim to make the interactive framework more efficient with a minimal number of user interactions by using CNNs. With the good performance of CNNs shown for automatic image segmentation tasks [10, 11, 16, 13, 14], we hypothesize that they can reduce the number of user interactions for interactive image segmentation. However, only a few works have been reported to apply CNNs to interactive segmentation tasks [18, 19, 20, 21].
The contributions of this work are fourfold. 1) We propose a deep CNN-based interactive framework for 2D and 3D medical image segmentation. We use one CNN to get an initial automatic segmentation, which is refined by another CNN that takes as input the initial segmentation and user interactions; 2) We present a new way to combine user interactions with CNNs based on geodesic distance maps that are used as extra channels of the input for CNNs. We show that using the geodesic distance can lead to improved segmentation accuracy compared with using the Euclidean distance; 3) We propose a resolution-preserving CNN structure which leads to a more detailed segmentation result compared with traditional CNNs with resolution loss; and 4) We extend the current RNN-based CRFs [17] for segmentation so that the back-propagatable CRFs can use user interactions as hard constraints and all the parameters of potential functions can be trained in an end-to-end way. We apply the proposed method to 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from fluid attenuation inversion recovery (FLAIR) images.
2 Related Works
2.1 Image Segmentation based on CNNs
Typical CNNs such as AlexNet [22], GoogLeNet [23], VGG [24] and ResNet [25] were originally designed for image classification tasks. Some early works adapted such networks for pixel labeling with patch- or region-based methods [13, 10]. Such methods achieved higher accuracy than traditional methods that relied on hand-crafted features, but they suffered from inefficiency at testing time. FCNs [11] take an entire image as input and give a dense segmentation. In order to overcome the loss of spatial resolution due to multi-stage max-pooling and downsampling, they use a stack of deconvolution (a.k.a. upsampling) layers and activation functions to upsample the feature maps. Inspired by the convolution and deconvolution framework of FCNs, a U-shape network (U-Net) [26] and its 3D version [18] were proposed for biomedical image segmentation. A similar network (V-Net) [27] was proposed to segment the prostate from 3D MRI volumes.

To overcome the drawbacks of successive max-pooling and downsampling that lead to a loss of feature map resolution, dilated convolution [12, 15] was proposed to preserve the resolution of feature maps and enlarge the receptive field to incorporate larger contextual information. In [28], a stack of dilated convolutions was used for object tracking and semantic segmentation. Dilated convolution has also been used for instance-sensitive segmentation [29] and action detection from video frames [30].
Multi-scale features extracted from CNNs have been shown to be effective for improving segmentation accuracy [11, 12, 15]. One way of obtaining multi-scale features is to pass several scaled versions of the input image through the same network; the features from all the scales can then be fused for pixel classification [31]. In [13, 14], the features of each pixel were extracted from two concentric patches with different sizes. In [32], multi-scale images at different stages were fed into a recurrent convolutional neural network. Another widely used way to obtain multi-scale features is to exploit the feature maps from different levels of a CNN. For example, in [33], features from intermediate layers are concatenated for segmentation and localization. In [11, 12], predictions from the final layer are combined with those from previous layers.

2.2 Interactive Image Segmentation
Interactive image segmentation has been widely used in various applications [34, 35, 20]. There are many kinds of user interactions, such as click-based [36], contour-based [4] and bounding box-based methods [7]. Drawing scribbles is user-friendly and particularly popular, e.g., in Graph Cuts [3], GeoS [37, 6], and Random Walks [5]. However, most of these methods rely on low-level features and require a relatively large amount of user interactions to deal with images with low contrast and ambiguous boundaries. Machine learning methods [38, 8, 39] have been proposed to learn from user interactions. They can achieve higher segmentation accuracy with fewer user interactions. However, they are limited by hand-crafted features that depend on the user’s experience.
Recently, using deep CNNs to improve interactive segmentation has attracted increasing attention due to CNNs’ automatic feature learning and high performance. For instance, 3D U-Net [18] learns from sparsely annotated images and can be used for semi-automatic segmentation. ScribbleSup [19] also trains CNNs for semantic segmentation supervised by scribbles. DeepCut [20] employs user-provided bounding boxes as annotations to train CNNs for the segmentation of fetal MRI. However, these methods are not fully interactive for testing since they do not accept further interactions for refinement. In [21], a deep interactive object selection method was proposed where user-provided clicks are transformed into Euclidean distance maps and then concatenated with the input of FCNs. However, the Euclidean distance does not take advantage of image context information. In contrast, the geodesic distance transform [37, 6, 40] encodes spatial regularization and contrast-sensitivity, but it has not been used for CNNs.
2.3 CRFs for Spatial Regularization
Graphical models such as CRFs [41, 42, 12] have been widely used to enhance segmentation accuracy by introducing spatial consistency. In [41], spatial regularization was obtained by minimizing the Potts energy with a min-cut/max-flow algorithm. In [42], the discrete max-flow problem was mapped to its continuous optimization formulation. Such methods encourage segmentation consistency between adjacent pixel pairs with high similarity. In order to better model long-range connections within the image, a fully connected CRF was used in [43] to establish pairwise potentials on all pairs of pixels in the image. To make the inference of this CRF efficient, the pairwise edge potentials were defined by a linear combination of Gaussian kernels in [44]. The parameters of CRFs in these works were manually tuned or inefficiently learned by grid search. In [45], a maximum margin learning method was proposed to learn CRFs using Graph Cuts. Other methods, including structured output Support Vector Machines [46], approximate marginal inference [47] and gradient-based optimization [48], were also proposed to learn parameters in CRFs. They treat the learning of CRFs as an independent step after the training of classifiers.
The CRF-RNN network [17] formulated dense CRFs as RNNs so that the CNNs and CRFs can be jointly trained in an end-to-end system for segmentation. However, the pairwise potentials in [17] are limited to weighted Gaussians and not all the parameters are trainable due to the permutohedral lattice implementation [49]. In [50], a Gaussian Mean Field (GMF) network was proposed and combined with CNNs where all the parameters are trainable. More free-form pairwise potentials for a pair of superpixels or image patches were proposed in [51, 31], but such CRFs have a low resolution. In [52], a generic CNN-CRF model was proposed to handle arbitrary potentials for labeling body parts in depth images. However, it has not yet been validated with other segmentation applications.
3 Method
The proposed deep interactive segmentation method based on CNNs and geodesic distance transforms (DeepIGeoS) is depicted in Fig. 1. To minimize the number of user interactions, we propose to use two CNNs: an initial segmentation proposal network (P-Net) and a refinement network (R-Net). P-Net takes as input a raw image with C channels and gives an initial automatic segmentation. Then the user checks the segmentation and provides some interactions (clicks or scribbles) to indicate mis-segmented regions. R-Net takes as input the original image, the initial segmentation and the user interactions to provide a refined segmentation. P-Net and R-Net use a resolution-preserving structure that captures high-level features from a large receptive field without loss of resolution. They share the same structure except for the difference in the input dimensions. Based on the initial automatic segmentation obtained by P-Net, the user might give clicks/scribbles to refine the result more than once through R-Net. Unlike previous works [53] that retrain the learning model each time new user interactions are given, the proposed R-Net is only trained with user interactions once, since it takes considerable time to retrain a CNN model with a large training set.
To make the segmentation result more spatially consistent and to use scribbles as hard constraints, both P-Net and R-Net are connected with a CRF, which is modeled as an RNN (CRF-Net) so that it can be trained jointly with P-Net/R-Net by back-propagation. We use free-form pairwise potentials in the CRF-Net. The way user interactions are used is presented in 3.1. The structures of 2D/3D P-Net and R-Net are detailed in 3.2. In 3.3, we describe the implementation of our CRF-Net. Training details are described in 3.4.
3.1 User Interaction-based Geodesic Distance Maps
In our method, scribbles are provided by the user to refine an initial automatic segmentation obtained by P-Net. A scribble labels a set of pixels as the foreground or background. Interactions with the same label are converted into a distance map. In [21], the Euclidean distance was used due to its simplicity. However, the Euclidean distance treats each direction equally and does not take the image context into account. In contrast, the geodesic distance helps to better differentiate neighboring pixels with different appearances and improves label consistency in homogeneous regions [6]. GeoF [40] uses the geodesic distance to encode variable dependencies in the feature space and is combined with Random Forests for semantic segmentation; however, it is not designed to deal with user interactions. We propose to encode user interactions via geodesic distance transforms for CNN-based segmentation.
Suppose S_f and S_b represent the set of pixels belonging to foreground scribbles and background scribbles, respectively. Let i be a pixel in an image I. The unsigned geodesic distance from i to the scribble set S (S ∈ {S_f, S_b}) is:

G(i, S, I) = \min_{j \in S} D_{geo}(i, j, I)    (1)

D_{geo}(i, j, I) = \min_{p \in \mathcal{P}_{i,j}} \int_{0}^{1} \lVert \nabla I(p(s)) \cdot \mathbf{u}(s) \rVert \, ds    (2)

where \mathcal{P}_{i,j} is the set of all paths between pixel i and pixel j. p is one feasible path and it is parameterized by s ∈ [0, 1]. \mathbf{u}(s) = p'(s) / \lVert p'(s) \rVert is a unit vector that is tangent to the direction of the path. If no scribbles are drawn for either the foreground or background, the corresponding geodesic distance map is filled with random numbers.
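On a discrete pixel grid, Eq. (1)-(2) amount to a shortest-path problem where each step is penalized by the local intensity change. A minimal Dijkstra-based numpy sketch (the 4-connectivity, edge cost and the small spatial weight `eps` are assumptions for illustration; the paper itself uses a faster raster-scan approximation):

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, eps=1e-3):
    """Discrete version of Eq. (1)-(2): shortest-path distance from a
    seed set, where each 4-connected step costs the intensity change
    plus a small spatial term eps (an assumed regularizer)."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for i, j in seeds:
        dist[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + eps + abs(float(image[ni, nj]) - float(image[i, j]))
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return dist

# Toy image: two flat regions separated by an intensity edge at column 3.
img = np.zeros((4, 6))
img[:, 3:] = 1.0
D = geodesic_distance(img, seeds=[(0, 0)])
```

With a seed in the left region, the distance stays small everywhere in that region but jumps across the intensity edge, which is exactly the contrast-sensitivity that the Euclidean distance lacks.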
Fig. 2 shows an example of geodesic distance transforms of user interactions. The geodesic distance maps of user interactions and the initial automatic segmentation have the same size as I. They are concatenated with the raw channels of I so that a concatenated image with C + 3 channels is obtained, which is used as the input of the refinement network R-Net.
3.2 Resolution-Preserving CNNs using Dilated Convolution
CNNs in our method are designed to capture high-level features from a large receptive field without loss of resolution of the feature maps. They are adapted from VGG-16 [24] and made resolution-preserving. Fig. 3 shows the structure of 2D and 3D P-Net. In 2D P-Net, the first 13 convolution layers are grouped into five blocks. The first and second blocks have two convolution layers each, and each of the remaining blocks has three convolution layers. The size of the convolution kernel is fixed as 3×3 in all these convolution layers. 2D R-Net uses the same structure as 2D P-Net except that its number of input channels is C + 3 and it employs user interactions in the CRF-Net. To obtain an exponential increase of the receptive field, VGG-16 uses a max-pooling and downsampling layer after each block. However, this implementation would decrease the resolution of feature maps exponentially. Therefore, to preserve resolution through the network, we remove the max-pooling and downsampling layers and use dilated convolution in each block.
Let I be a 2D image of size W×H, and let K be a square dilated convolution kernel with a size of (2r + 1)×(2r + 1) and a dilation parameter d, where r and d are positive integers. The dilated convolution of I with K is defined as:
(I \circledast_{d} K)(x, y) = \sum_{i=-r}^{r} \sum_{j=-r}^{r} I(x - d \cdot i,\; y - d \cdot j)\, K(i + r,\; j + r)    (3)
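Eq. (3) can be sketched directly as a (deliberately naive) numpy loop, with positions falling outside the image treated as zero. This is for illustration only; real frameworks implement the same operation far more efficiently:

```python
import numpy as np

def dilated_conv2d(I, K, d):
    """Dilated convolution of Eq. (3): the (2r+1)x(2r+1) kernel K
    samples the image on a grid spaced by the dilation d; positions
    outside the image contribute zero."""
    r = K.shape[0] // 2
    H, W = I.shape
    out = np.zeros((H, W))
    for x in range(H):
        for y in range(W):
            acc = 0.0
            for i in range(-r, r + 1):
                for j in range(-r, r + 1):
                    xi, yj = x - d * i, y - d * j
                    if 0 <= xi < H and 0 <= yj < W:
                        acc += I[xi, yj] * K[i + r, j + r]
            out[x, y] = acc
    return out

# A unit impulse convolved with a 3x3 kernel of ones and dilation d=2
# spreads onto a grid with spacing 2, i.e. a 5x5 receptive field.
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0
resp = dilated_conv2d(impulse, np.ones((3, 3)), d=2)
```

The impulse response lands on nine positions spaced two pixels apart, illustrating how dilation enlarges the receptive field without adding parameters or reducing resolution.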
For 2D P-Net/R-Net, we set r to 1 for block 1 to block 5, so the size of a convolution kernel becomes 3×3. The dilation parameter in block i is set to:
d_i = D \times 2^{\,i-1}    (4)
where D is a system parameter controlling the base dilation of the network. We set D = 1 in experiments.
The receptive field of a dilated convolution kernel is (2rd + 1)×(2rd + 1). Let R_i denote the receptive field of block i. R_i can be computed as:
R_i = 1 + 2r \sum_{j=1}^{i} d_j\, l_j    (5)
where l_j is the number of convolution layers in block j, with a value of 2, 2, 3, 3, 3 for the five blocks respectively. When r = 1, the receptive field size of each block is R_1 = 4D + 1, R_2 = 12D + 1, R_3 = 36D + 1, R_4 = 84D + 1 and R_5 = 180D + 1, respectively. Thus, these blocks capture features at different scales.
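As a quick check on Eq. (4)-(5), the per-block receptive fields can be computed in a few lines (a sketch; `layers` holds the layer counts l_i given in the text, and 3×3 kernels, i.e. r = 1, are assumed):

```python
def receptive_fields(D=1, layers=(2, 2, 3, 3, 3)):
    """Receptive field of each block for 3x3 dilated kernels (Eq. 4-5):
    block i uses dilation d_i = D * 2**(i - 1), and each of its l_i
    layers grows the receptive field by 2 * d_i."""
    R, fields = 1, []
    for i, l_i in enumerate(layers, start=1):
        d_i = D * 2 ** (i - 1)
        R += 2 * d_i * l_i
        fields.append(R)
    return fields
```

With D = 1 this yields 5, 13, 37, 85 and 181 for the five blocks, matching 4D+1, 12D+1, 36D+1, 84D+1 and 180D+1.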
The stride of each convolution layer is set to 1. The number of output channels of convolution in each block is set to a fixed number C_b. In order to use multi-scale features, we concatenate the features from different blocks to get a composed feature of length 5C_b. This feature is fed into a classifier implemented by two additional layers, as shown in block 6 in Fig. 3(a). These two layers use convolution kernels with a size of 1×1 and a dilation parameter of 0. Block 6 gives each pixel an initial score of belonging to the foreground or background class. In order to get a more spatially consistent segmentation and to add hard constraints when scribbles are given, we apply a CRF on the basis of the output from block 6. The CRF is implemented by a recurrent neural network (CRF-Net, detailed in 3.3), which can be jointly trained with P-Net or R-Net. The CRF-Net gives a regularized prediction for each pixel, which is fed into a cross-entropy loss function layer.
Similar network structures are used by 3D P-Net/R-Net for 3D segmentation, as shown in Fig. 3(b). To reduce the memory consumption for 3D images, we use one downsampling layer before the resolution-preserving layers and compress the output features of blocks 1 to 5 by a factor of four via 1×1×1 convolutions before the concatenation layer.
3.3 Back-propagatable CRF-Net with Free-form Pairwise Potentials and User Constraints
In [17], a CRF based on an RNN was proposed; it can be trained by back-propagation. Rather than using Gaussian functions, we extend this CRF so that the pairwise potentials can be free-form functions, and we refer to it as CRF-Net(f). In addition, we integrate user interactions into our CRF-Net(f) in the interactive refinement context, which is referred to as CRF-Net(fu). The CRF-Net(f) is connected to P-Net and the CRF-Net(fu) is connected to R-Net.
Let Y be the label map assigned to an image X with a label set L = {0, 1, …, L − 1}. The Gibbs distribution P(Y \mid X) = \frac{1}{Z(X)} \exp(-E(Y)) models the probability of Y given X in a CRF, where Z(X) is the normalization factor known as the partition function, and E(Y) is the Gibbs energy:

E(Y) = \sum_{i} \psi_u(y_i) + \sum_{(i, j) \in \mathcal{N}} \psi_p(y_i, y_j)    (6)
where the unary potential \psi_u(y_i) measures the cost of assigning label y_i to pixel i, and the pairwise potential \psi_p(y_i, y_j) is the cost of assigning labels y_i, y_j to a pixel pair (i, j). \mathcal{N} is the set of all pixel pairs. In our method, the unary potential is obtained from P-Net or R-Net, which gives classification scores at each pixel. The pairwise potential is:
\psi_p(y_i, y_j) = \mu(y_i, y_j)\, f(\mathbf{f}_{ij}, d_{ij})    (7)
where d_{ij} is the Euclidean distance between pixels i and j. \mu(y_i, y_j) is the compatibility between the label of i and that of j, represented by a matrix of size L×L. \mathbf{f}_{ij} = \mathbf{f}_i - \mathbf{f}_j, where \mathbf{f}_i and \mathbf{f}_j represent the feature vectors of i and j, respectively. The feature vectors can either be learned by a network or be derived from image features such as spatial location with intensity values. For our experiments we used the latter, as in [17, 44, 3], for simplicity and efficiency. f(\mathbf{f}_{ij}, d_{ij}) is a function in terms of \mathbf{f}_{ij} and d_{ij}. Instead of defining f(·) as a single Gaussian function [3] or a combination of several Gaussian functions [17, 44], we set it as a free-form function represented by a fully connected neural network (Pairwise-Net) which can be learned during training. The structure of Pairwise-Net is shown in Fig. 4. The input is a vector composed of \mathbf{f}_{ij} and d_{ij}. There are two hidden layers and one output layer.
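The free-form potential f(f_ij, d_ij) of Eq. (7) can be sketched as a tiny numpy MLP with the hidden sizes stated in Section 3.4 (32 and 16 neurons). The ReLU activations and the random initialization here are assumptions for illustration; in the paper the weights are learned and initialized by pre-training:

```python
import numpy as np

def init_pairwise_net(F=3, hidden=(32, 16), seed=0):
    """Random placeholder weights for Pairwise-Net: the input is f_ij
    concatenated with d_ij (F + 1 values), followed by two hidden
    layers of 32 and 16 neurons and a scalar output."""
    rng = np.random.default_rng(seed)
    sizes = (F + 1, *hidden, 1)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def pairwise_net(params, f_ij, d_ij):
    """Free-form pairwise potential f(f_ij, d_ij) of Eq. (7).
    ReLU on the hidden layers is an assumption, not fixed by the text."""
    h = np.concatenate([f_ij, [d_ij]])
    for k, (W, b) in enumerate(params):
        h = h @ W + b
        if k < len(params) - 1:
            h = np.maximum(h, 0.0)  # hidden-layer activation
    return float(h[0])

params = init_pairwise_net(F=3)
value = pairwise_net(params, f_ij=np.array([0.2, -0.1, 0.4]), d_ij=1.5)
```

Because the potential is an arbitrary network output, it is not constrained to be Gaussian, or even positive, which is exactly why graph-cut minimization no longer applies (next paragraph).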
Graph Cuts [3, 45] can be used to minimize Eq. (6) when \psi_p is submodular [54], such as when the segmentation is binary with \mu(·) being the delta function and f(·) being positive. However, this is not the case for our method, since we learn \mu(·) and f(·), where \mu(·) may not be the delta function and f(·) could be negative. Continuous max-flow [42] can also be used for the minimization, but its parameters are manually designed. Alternatively, mean-field approximation [17, 44, 50] is often used to efficiently minimize the energy while allowing the parameters to be learned by back-propagation. Instead of computing P(Y \mid X) directly, an approximate distribution Q(Y \mid X) = \prod_i Q_i(y_i \mid X) is computed so that the KL-divergence D(Q \| P) is minimized. This yields an iterative update of Q [17, 44, 50]:
Q_i(y_i = l \mid X) = \frac{1}{Z_i} \exp\Big( -\psi_u(l) - \sum_{l' \in L} \sum_{j:(i,j) \in \mathcal{N}} \psi_p(l, l')\, Q_j(l' \mid X) \Big)    (8)

Z_i = \sum_{l \in L} \exp\Big( -\psi_u(l) - \sum_{l' \in L} \sum_{j:(i,j) \in \mathcal{N}} \psi_p(l, l')\, Q_j(l' \mid X) \Big)    (9)
where L is the label set and (i, j) is a pixel pair. For the proposed CRF-Net(fu), with the set of user-provided scribbles S, we force the probability of pixels in the scribble set to be 1 or 0. The following equation is used as the update rule for each iteration:
Q_i(y_i = l \mid X) = \begin{cases} 1 & \text{if } i \in S \text{ and } l = s_i \\ 0 & \text{if } i \in S \text{ and } l \neq s_i \\ \text{Eq. (8)} & \text{otherwise} \end{cases}    (10)
where s_i denotes the user-provided label of a pixel i that is in the scribble set S. We follow the implementation in [17] to update Q through a multi-stage mean-field method in an RNN. Each mean-field layer splits Eq. (8) into four steps, including message passing, compatibility transform, adding unary potentials and normalizing [17].
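The four steps of one mean-field layer, plus the hard user constraints, can be sketched on a toy binary grid. This is a simplification under stated assumptions: 4-connectivity with wrap-around borders via np.roll, and a single scalar weight `w` standing in for the learned f(f_ij, d_ij):

```python
import numpy as np

def mean_field_step(Q, unary, compat, w=1.0):
    """One mean-field update in the spirit of Eq. (8)-(9):
    message passing, compatibility transform, adding unary
    potentials, and normalization. np.roll wraps at the borders,
    a simplification of the real local-patch connectivity."""
    msg = np.zeros_like(Q)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        msg += np.roll(Q, shift, axis=axis)           # message passing
    energy = unary + w * (msg @ compat)               # compatibility + unary
    Q_new = np.exp(-energy)
    return Q_new / Q_new.sum(axis=-1, keepdims=True)  # normalization

def apply_scribbles(Q, scribbles):
    """Hard constraints of Eq. (10): force one-hot marginals at
    user-scribbled pixels; scribbles maps (i, j) -> label."""
    for (i, j), label in scribbles.items():
        Q[i, j, :] = 0.0
        Q[i, j, label] = 1.0
    return Q

H = W = 4
unary = np.zeros((H, W, 2)); unary[..., 1] = 1.0      # label 0 is cheaper
compat = 1.0 - np.eye(2)                              # Potts-style compatibility
Q = np.full((H, W, 2), 0.5)                           # uniform initialization
Q = apply_scribbles(mean_field_step(Q, unary, compat), {(0, 0): 1})
```

After one step the cheaper label dominates everywhere except at the scribbled pixel, where the marginal is clamped to the user's label regardless of the energies.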
3.4 Implementation Details
The raster-scan algorithm [6] was used to compute geodesic distance transforms by applying a forward pass scanning and a backward pass scanning with a 3×3 kernel for 2D and a 3×3×3 kernel for 3D. It is fast due to accessing the image memory in contiguous blocks. For the proposed CRF-Net with free-form pairwise potentials, two observations motivate us to use pixel connections based on local patches instead of full connections within the entire image. First, the permutohedral lattice implementation [44, 17] allows efficient computation of fully connected CRFs only when the pairwise potentials are Gaussian functions. A method that relaxes the pairwise potentials to free-form functions represented by a network (Fig. 4) cannot use that implementation and would therefore be inefficient for fully connected CRFs. Suppose an image has size M×N; a fully connected CRF has MN(MN − 1) pixel pairs. For a small image with M = N = 100, the number of pixel pairs would be almost 10^8, which requires not only a huge amount of memory but also a long computational time. Second, though long-distance dependency helps to improve segmentation in most RGB images [44, 17, 12], this would be very challenging for medical images, since the contrast between the target and background is often low [55]. In such cases, long-distance dependency may lead the label of a target pixel to be corrupted by the large number of background pixels with similar appearances. Therefore, to maintain good efficiency and avoid long-distance corruptions, we define the pairwise connections for one pixel within a local patch centered on it. In our experiments, the patch size is set to 7×7 for 2D images and 5×5×3 for 3D images.
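The two-pass raster-scan transform described above can be sketched as follows. It is a simplification under stated assumptions: a forward sweep pulls distances from the upper/left half of the 3×3 window and a backward sweep from the lower/right half, with an assumed intensity-difference edge cost and a small spatial weight `eps` not specified in the text:

```python
import numpy as np

def raster_geodesic(image, seed_mask, eps=1e-3, passes=2):
    """Raster-scan geodesic distance approximation [6]: forward and
    backward sweeps over the image, each relaxing distances from half
    of the 3x3 neighbourhood."""
    h, w = image.shape
    D = np.where(seed_mask, 0.0, np.inf)
    fwd = ((-1, -1), (-1, 0), (-1, 1), (0, -1))   # upper/left neighbours
    bwd = tuple((-di, -dj) for di, dj in fwd)     # lower/right neighbours
    for _ in range(passes):
        for offsets, rows, cols in (
            (fwd, range(h), range(w)),                              # forward sweep
            (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),      # backward sweep
        ):
            for i in rows:
                for j in cols:
                    for di, dj in offsets:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            step = (eps * np.hypot(di, dj)
                                    + abs(float(image[i, j]) - float(image[ni, nj])))
                            if D[ni, nj] + step < D[i, j]:
                                D[i, j] = D[ni, nj] + step
    return D

img = np.zeros((4, 6)); img[:, 3:] = 1.0              # intensity edge at column 3
seeds = np.zeros((4, 6), dtype=bool); seeds[0, 0] = True
D = raster_geodesic(img, seeds)
```

Unlike a priority-queue shortest path, each sweep touches pixels in memory order, which is why the raster-scan variant is cache-friendly and fast; a couple of passes suffice for typical images.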
We initialize \mu(·) as \mu(l, l') = [l \neq l'], where [·] is the Iverson bracket [17]. A fully connected neural network (Pairwise-Net) with two hidden layers is used to learn the free-form pairwise potential function (Fig. 4). The first and second hidden layers have 32 and 16 neurons, respectively. In practice, this network is implemented by an equivalent fully convolutional neural network with 1×1×1 kernels. We use a pre-training step to initialize the Pairwise-Net with an approximation of a contrast-sensitive function [3]:

f_0(\mathbf{f}_{ij}, d_{ij}) = a \cdot \exp\Big( -\frac{\lVert \mathbf{f}_{ij} \rVert^2}{2\sigma^2 \cdot F} \Big) \cdot \frac{1}{d_{ij}}    (11)
where F is the dimension of the feature vectors \mathbf{f}_i and \mathbf{f}_j, and a and \sigma are two parameters controlling the magnitude and shape of the initial pairwise function, respectively. In this initialization step, we set a to 0.08 and \sigma to 0.5 based on experience. Similar to [44, 16, 17], we set \mathbf{f}_i and \mathbf{f}_j as the values in the input channels (i.e., image intensity in our case) of P-Net for simplicity of implementation and for obtaining contrast-sensitive pairwise potentials. To pre-train the Pairwise-Net we generate a training set T = {X', Y'} with 100k samples, where X' is the set of features simulating the concatenated \mathbf{f}_{ij} and d_{ij}, and Y' is the set of prediction values simulating f_0. For each sample x in X', the feature vector has a dimension of F + 1, where the first F dimensions represent the value of \mathbf{f}_{ij} and the last dimension denotes d_{ij}. The kth channel of x is filled with a random number \tau_k, where \tau_k \sim U(0, 2) for k \leq F and \tau_k \sim U(0, 8) for k = F + 1. The ground truth prediction value y for x is obtained by Eq. (11). After generating X' and Y', we use a Stochastic Gradient Descent (SGD) algorithm with a quadratic loss function to pre-train the Pairwise-Net.
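The generation of this synthetic pre-training set can be sketched as follows. Note that the exact form of the contrast-sensitive target used here is a reconstruction (the equation is garbled in this copy) and should be checked against the original paper; the sampling ranges U(0, 2) and U(0, 8) are taken from the text:

```python
import numpy as np

def make_pretraining_set(n=1000, F=1, a=0.08, sigma=0.5, seed=0):
    """Synthetic pre-training pairs for Pairwise-Net: each feature
    vector holds F channels simulating f_ij, drawn from U(0, 2), plus
    one channel simulating d_ij, drawn from U(0, 8). The target
    a * exp(-|f_ij|^2 / (2 * sigma^2 * F)) / d_ij is an assumed
    reconstruction of the contrast-sensitive initialization."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, F + 1))
    X[:, :F] = rng.uniform(0.0, 2.0, size=(n, F))   # simulated f_ij channels
    X[:, F] = rng.uniform(0.0, 8.0, size=n)         # simulated d_ij
    f_sq = (X[:, :F] ** 2).sum(axis=1)
    Y = a * np.exp(-f_sq / (2.0 * sigma ** 2 * F)) / X[:, F]
    return X, Y

X, Y = make_pretraining_set(n=1000, F=1)
```

Fitting the network to these pairs with a quadratic loss gives it a sensible contrast-sensitive starting point before the joint end-to-end training.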
For pre-processing, all the images are normalized by the mean value and standard deviation of the training set. We apply data augmentation by vertical or horizontal flipping, random rotation with angle range [-\pi/8, \pi/8] and random zoom with scaling factor range [0.8, 1.25]. We use the cross-entropy loss function and the SGD algorithm for optimization with mini-batch size 1, momentum 0.99 and weight decay 5×10^{-4}. The learning rate is halved every 5k iterations. Since a proper initialization of P-Net and CRF-Net(f) is helpful for faster convergence of the joint training, we train P-Net with CRF-Net(f) in three steps. First, P-Net is pre-trained with initial learning rate 10^{-3} and a maximal number of iterations of 100k. Second, the Pairwise-Net in the CRF-Net(f) is pre-trained as described above. Third, P-Net and CRF-Net(f) are jointly trained with initial learning rate 10^{-6} and a maximal number of iterations of 50k.
After the training of P-Net with CRF-Net(f), we automatically simulate user interactions to train R-Net with CRF-Net(fu). First, P-Net with CRF-Net(f) is used to obtain an automatic segmentation for each training image. It is compared with the ground truth to find mis-segmented regions. Then the user interactions on each mis-segmented region are simulated by randomly sampling pixels in that region. Suppose the size of one connected under-segmented or over-segmented region is N_m; we set the number of sampled pixels N_u for that region to 0 if N_m < 30 and to N_m/100 otherwise, based on experience. Examples of simulated user interactions on training images are shown in Fig. 5. With these simulated user interactions on the initial segmentation of training data, the training of R-Net with CRF-Net(fu) is implemented through SGD, which is similar to the training of P-Net with CRF-Net(f).
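The per-region sampling rule above can be sketched in a few lines (a sketch, assuming each connected mis-segmented region is already extracted as a boolean mask; rounding N_m/100 to the nearest integer, with a minimum of one click, is an assumption the text does not pin down):

```python
import numpy as np

def simulate_interactions(region_mask, seed=0):
    """Simulated user clicks for one connected mis-segmented region:
    sample no pixels if the region has fewer than 30 pixels, otherwise
    roughly size/100 pixels drawn uniformly at random without
    replacement. Returns an (n, 2) array of pixel coordinates."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(region_mask)
    size = len(coords)
    if size < 30:
        return np.empty((0, 2), dtype=int)
    n = max(1, int(round(size / 100.0)))  # assumed rounding of N_m / 100
    picks = rng.choice(size, size=n, replace=False)
    return coords[picks]

small = np.zeros((5, 5), dtype=bool); small[:2, :2] = True        # 4-pixel region
large = np.zeros((30, 30), dtype=bool); large[:20, :20] = True    # 400-pixel region
```

Tiny regions thus receive no simulated clicks, while larger mis-segmented regions receive a number of clicks proportional to their area.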
We implemented our 2D networks with Caffe (http://caffe.berkeleyvision.org) [56] and our 3D networks with TensorFlow (https://www.tensorflow.org) [57] using NiftyNet (http://niftynet.io) [58]. Training was done with two 8-core E5-2623v3 Intel Haswell CPUs, two NVIDIA K80 GPUs and 128 GB of memory. The testing process with user interactions was performed on a MacBook Pro (OS X 10.9.5) with 16 GB RAM, an Intel Core i7 CPU running at 2.5 GHz and an NVIDIA GeForce GT 750M GPU. A Matlab GUI and a PyQt GUI were developed for the 2D and 3D interactive segmentation tasks, respectively (see supplementary videos).

4 Experiments
4.1 Comparison Methods and Evaluation Metrics
We compared our P-Net with FCN [11] and DeepLab [16] for 2D segmentation, and with DeepMedic [14] and HighRes3DNet [58] for 3D segmentation. Pre-trained models of FCN (https://github.com/shelhamer/fcn.berkeleyvision.org) and DeepLab (https://bitbucket.org/deeplab/deeplab-public) based on ImageNet were fine-tuned for 2D placenta segmentation. Since the input of FCN and DeepLab should have three channels, we duplicated each of the gray-level images twice and concatenated them into a three-channel image as the input. DeepMedic and HighRes3DNet were originally designed for multi-modality or multi-class 3D segmentation; we adapted them for single-modality binary segmentation. We also compared 2D/3D P-Net with 2D/3D P-Net(b5), which only uses the features from block 5 (Fig. 3) instead of the concatenated multi-scale features.

The proposed CRF-Net(f) with free-form pairwise potentials was compared with: 1) Dense CRF as an independent post-processing step for the output of P-Net, following the implementation in [44, 16, 14], with the CRF parameters manually tuned based on a coarse-to-fine search scheme as suggested by [16]; and 2) CRF-Net(g), which refers to the CRF that can be trained jointly with CNNs using Gaussian pairwise potentials [17].
We compared three methods to deal with user interactions: 1) min-cut user-editing [7], where the initial probability map (the output of P-Net in our case) is combined with user interactions to solve an energy minimization problem with min-cut [3]; 2) using the Euclidean distance of user interactions in R-Net, referred to as R-Net(Euc); and 3) the proposed R-Net with the geodesic distance of user interactions.
We also compared DeepIGeoS with several other interactive segmentation methods. For 2D slices, DeepIGeoS was compared with: 1) Geodesic Framework [37], which computes a probability based on the geodesic distance from user-provided scribbles for pixel classification; 2) Graph Cuts [3], which models segmentation as a min-cut problem based on user interactions; 3) Random Walks [5], which assigns a pixel a label based on the probability that a random walker reaches a foreground or background seed first; and 4) SlicSeg [8], which uses Online Random Forests to learn from the scribbles and predict the remaining pixels. For 3D images, DeepIGeoS was compared with GeoS [6] and ITK-SNAP [59]. Two users (an obstetrician and a radiologist) used these interactive methods to segment every test image until the result was visually acceptable.
For quantitative evaluation, we measured the Dice score and the average symmetric surface distance (ASSD):

Dice = \frac{2 |\mathcal{R}_a \cap \mathcal{R}_g|}{|\mathcal{R}_a| + |\mathcal{R}_g|}    (12)

where \mathcal{R}_a and \mathcal{R}_g represent the region segmented by the algorithm and the ground truth, respectively.

ASSD = \frac{1}{|\mathcal{S}_a| + |\mathcal{S}_g|} \Big( \sum_{i \in \mathcal{S}_a} d(i, \mathcal{S}_g) + \sum_{i \in \mathcal{S}_g} d(i, \mathcal{S}_a) \Big)    (13)

where \mathcal{S}_a and \mathcal{S}_g represent the set of surface points of the target segmented by the algorithm and the ground truth, respectively, and d(i, \mathcal{S}_g) is the shortest Euclidean distance between i and \mathcal{S}_g. We used the Student's t-test to compute the p-value in order to see whether the results of two algorithms significantly differ from each other.
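Both metrics of Eq. (12)-(13) are straightforward to compute; a minimal numpy sketch (brute-force surface distances, fine for small point sets):

```python
import numpy as np

def dice_score(seg, gt):
    """Dice score of Eq. (12): 2|Ra ∩ Rg| / (|Ra| + |Rg|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def assd(surf_a, surf_g):
    """ASSD of Eq. (13) between two surface point sets, each given as
    an (N, dim) array of coordinates."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_g[None, :, :], axis=-1)
    return (d.min(axis=1).sum() + d.min(axis=0).sum()) / (len(surf_a) + len(surf_g))

# Small worked example: half-overlapping masks and two 2-point surfaces.
seg = np.array([[1, 1, 0, 0]])
gt = np.array([[0, 1, 1, 0]])
pts_a = np.array([[0.0, 0.0], [0.0, 1.0]])
pts_g = np.array([[0.0, 0.0], [0.0, 3.0]])
```

For the example masks, the intersection has one pixel and each mask has two, giving a Dice of 0.5; the two point sets give an ASSD of (0 + 1 + 0 + 2) / 4 = 0.75.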
4.2 2D Placenta Segmentation from Fetal MRI
4.2.1 Clinical Background and Experiment Settings
Fetal MRI is an emerging diagnostic tool complementary to ultrasound due to its large field of view and good soft tissue contrast. Segmenting the placenta from fetal MRI is important for fetal surgical planning, such as in the case of twin-to-twin transfusion syndrome [60]. Clinical fetal MRI data are often acquired with a large slice thickness for a good contrast-to-noise ratio. Movement of the fetus can lead to inhomogeneous appearances between slices. In addition, the location and orientation of the placenta vary largely between individuals. These factors make automatic and 3D segmentation of the placenta a challenging task [61]. Interactive 2D slice-based segmentation is expected to achieve more robust results [8, 53]. The 2D segmentation results can also be used for motion correction and high-resolution volume reconstruction [62].
We collected clinical MRI scans for 25 pregnancies in the second trimester. The data were acquired in the axial view with pixel size between 0.7422 mm × 0.7422 mm and 1.582 mm × 1.582 mm and slice thickness 3–4 mm. Each slice was resampled with a uniform pixel size of 1 mm × 1 mm and cropped by a box of size 172×128 containing the placenta. We used 17 volumes with 624 slices for training, three volumes with 122 slices for validation and five volumes with 179 slices for testing. The ground truth was manually delineated by an experienced radiologist.
4.2.2 Automatic Segmentation by 2D P-Net with CRF-Net(f)
Table I. Quantitative comparison of different networks and CRFs for 2D placenta segmentation.

Method  Dice(%)  ASSD(pixels)
FCN [11]  81.47±11.40  2.66±1.39
DeepLab [16]  83.38±9.53  2.20±0.84
2D P-Net(b5)  83.16±13.01  2.36±1.66
2D P-Net  84.78±11.74  2.09±1.53
2D P-Net + Dense CRF  84.90±12.05  2.05±1.59
2D P-Net + CRF-Net(g)  85.44±12.50  1.98±1.46
2D P-Net + CRF-Net(f)  85.86±11.67  1.85±1.30
Fig. 6 shows the automatic segmentation results obtained by different networks. FCN is able to capture the main region of the placenta, but its segmentation results are blob-like with smooth boundaries. DeepLab is better than FCN, but its blob-like results are similar to those of FCN. This is mainly due to the downsampling and upsampling procedures employed by these methods. In contrast, 2D P-Net(b5) and 2D P-Net obtain more detailed results. It can be observed that 2D P-Net achieves better results than the other three networks. However, there are still some obvious mis-segmented regions for 2D P-Net. Table I presents a quantitative comparison of these networks based on all the testing data. 2D P-Net achieves a Dice score of 84.78±11.74% and an ASSD of 2.09±1.53 pixels, performing better than the other three networks.
Based on 2D P-Net, we investigated the performance of different CRFs. A visual comparison between Dense CRF, CRF-Net(g) with Gaussian pairwise potentials and CRF-Net(f) with free-form pairwise potentials is shown in Fig. 7. In the first column, the placenta is under-segmented by 2D P-Net. Dense CRF leads to only a very small improvement on the result. CRF-Net(g) and CRF-Net(f) improve the result by preserving more placenta regions, and the latter shows a better segmentation. In the second column, 2D P-Net obtains an over-segmentation of adjacent fetal brain and maternal tissues. Dense CRF does not improve the segmentation noticeably, but CRF-Net(g) and CRF-Net(f) remove more over-segmented areas, with CRF-Net(f) showing a better performance than the other two CRFs. The quantitative evaluation of these three CRFs is presented in Table I, which shows that Dense CRF leads to a result very close to that of 2D P-Net (p-value > 0.05), while the last two CRFs significantly improve the segmentation (p-value < 0.05). In addition, CRF-Net(f) is better than CRF-Net(g). Fig. 7 and Table I indicate that large mis-segmentations remain in some images; we therefore use 2D R-Net with CRF-Net(fu) to refine the segmentation interactively in the following.
4.2.3 Interactive Refinement by 2D R-Net with CRF-Net(fu)
Table II. Quantitative comparison of different refinement methods for 2D placenta segmentation.

Method  Dice(%)  ASSD(pixels)
Before refinement  85.86±11.67  1.85±1.30
Min-cut user-editing  87.04±9.79  1.63±1.15
2D R-Net(Euc)  88.26±10.61  1.54±1.18
2D R-Net  88.76±5.56  1.31±0.60
2D R-Net(Euc) + CRF-Net(fu)  88.71±8.42  1.26±0.59
2D R-Net + CRF-Net(fu)  89.31±5.33  1.22±0.55
Fig. 8 shows examples of interactive refinement based on 2D R-Net with CRF-Net(fu), which uses free-form pairwise potentials and employs user interactions as hard constraints. The first column in Fig. 8 shows initial segmentation results obtained by 2D P-Net + CRF-Net(f). The user provides clicks/scribbles to indicate the foreground (red) or the background (cyan). The second to last columns in Fig. 8 show the results for five variations of refinement. These refinement methods correct most of the mis-segmented areas but deal with local details at different levels of quality, as indicated by white arrows. Fig. 8 shows that 2D R-Net with geodesic distance performs better than min-cut user-editing and 2D R-Net(Euc), which uses Euclidean distance, and that CRF-Net(fu) can further improve the segmentation. For quantitative comparison, we measured the segmentation accuracy after the first iteration of user refinement (giving user interactions to mark all the main mis-segmented regions and applying refinement once), in which the same initial segmentation and the same set of user interactions were used by the five refinement methods. The results are presented in Table II, which shows that the combination of the proposed 2D R-Net using geodesic distance and CRF-Net(fu) leads to more accurate segmentations than the other refinement methods with the same set of user interactions. The Dice score and ASSD of 2D R-Net + CRF-Net(fu) are 89.31±5.33% and 1.22±0.55 pixels, respectively.
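To illustrate the role of the distance transform that encodes user interactions, a minimal geodesic distance map from scribble pixels can be computed with Dijkstra's algorithm. This is a didactic sketch, not the implementation used in the paper (fast raster-scan approximations are used in practice); with lam = 0 the cost degenerates to a 4-connected city-block distance:

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """Geodesic distance from seed (scribble) pixels on a 2D intensity image.

    The cost of stepping between 4-neighbours combines the spatial step
    and the intensity difference, so the distance grows quickly across
    strong edges but slowly within homogeneous regions.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, y, x) for y, x in seeds]
    for _, y, x in heap:
        dist[y, x] = 0.0
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = np.sqrt(1.0 + lam * (image[ny, nx] - image[y, x]) ** 2)
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist
```

On a uniform image every step costs 1, so the map reduces to a city-block distance; a bright barrier between a seed and a pixel inflates their geodesic distance, which is what lets the refinement network distinguish user hints on opposite sides of a boundary.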
4.2.4 Comparison with Other Interactive Methods
Fig. 9 shows a visual comparison between DeepIGeoS and Geodesic Framework [37], Graph Cuts [3], Random Walks [5] and SlicSeg [8] for 2D placenta segmentation. The first row shows the initial scribbles and the resulting segmentation. Notice no initial scribbles are needed for DeepIGeoS. The second row shows refined results, where DeepIGeoS only needs two short strokes to get an accurate segmentation, while the other methods require far more scribbles to get similar results. Quantitative comparison of these methods based on the final segmentation given by the two users is presented in Fig. 10. It shows these methods achieve similar accuracy, but DeepIGeoS requires far fewer user interactions and less time. (See supplementary video 1)
4.3 3D Brain Tumor Segmentation from FLAIR Images
4.3.1 Clinical Background and Experiments Setting
Gliomas are the most common brain tumors in adults, with little improvement in treatment effectiveness despite considerable research effort [63]. With the development of medical imaging, brain tumors can be imaged by different MR protocols with different contrasts. For example, T1-weighted images highlight the enhancing part of the tumor, and FLAIR acquisitions highlight the peritumoral edema. Segmentation of brain tumors can provide better volumetric measurements and therefore has enormous potential value for improved diagnosis, treatment planning, and follow-up of individual patients. However, automatic brain tumor segmentation remains technically challenging because 1) the size, shape, and localization of brain tumors vary considerably among patients, and 2) the boundaries between adjacent structures are often ambiguous.
In this experiment, we investigate interactive segmentation of the whole tumor from FLAIR images. We used the 2015 Brain Tumor Segmentation Challenge (BraTS) [63] training set with images of 274 cases. The ground truth was manually delineated by several experts. Unlike previous works using this dataset for multi-label and multi-modality segmentation [14, 64], as a first demonstration of deep interactive segmentation in 3D, we only use FLAIR images in the dataset and only segment the whole tumor. We randomly selected 234 cases for training and used the remaining 40 cases for testing. All these images had been skull-stripped and resampled to a size of 240 × 240 × 155 with isotropic resolution 1 mm. We cropped each image based on the bounding box of its nonzero region. The feature channel number of 3D P-Net and R-Net was .
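Cropping to the bounding box of the nonzero region is straightforward in numpy; this small sketch (the function name is ours) reflects that preprocessing step for the skull-stripped volumes:

```python
import numpy as np

def crop_to_nonzero(volume):
    """Crop a 3D volume to the bounding box of its nonzero voxels.

    Also returns the slices so that a segmentation computed on the
    cropped volume can be mapped back into the original image space.
    """
    nonzero = np.nonzero(volume)
    slices = tuple(slice(idx.min(), idx.max() + 1) for idx in nonzero)
    return volume[slices], slices
```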
4.3.2 Automatic Segmentation by 3D P-Net with CRF-Net(f)
Table III. Quantitative comparison of different networks and CRFs for 3D brain tumor segmentation.

Method  Dice(%)  ASSD(pixels)
DeepMedic [14]  83.87±8.72  2.38±1.52
HighRes3DNet [58]  85.47±8.66  2.20±2.24
3D P-Net(b5)  85.36±7.34  2.21±2.13
3D P-Net  86.68±7.67  2.14±2.17
3D P-Net + Dense CRF  87.06±7.23  2.10±2.02
3D P-Net + CRF-Net(f)  87.55±6.72  2.04±1.70
Fig. 11 shows examples of automatic brain tumor segmentation by 3D P-Net, compared with DeepMedic [14], HighRes3DNet [58] and 3D P-Net(b5). In the first column, DeepMedic segments the tumor roughly, with some missed regions near the boundary. HighRes3DNet reduces the missed regions but leads to some over-segmentation. 3D P-Net(b5) obtains a result similar to that of HighRes3DNet. In contrast, 3D P-Net achieves a more accurate segmentation, which is closer to the ground truth. More examples in the second and third columns in Fig. 11 also show that 3D P-Net outperforms the other networks. Quantitative evaluation of these four networks is presented in Table III. DeepMedic achieves an average Dice score of 83.87%. HighRes3DNet and 3D P-Net(b5) achieve similar performance, and they are better than DeepMedic. 3D P-Net outperforms these three counterparts with 86.68±7.67% in terms of Dice and 2.14±2.17 pixels in terms of ASSD. Note that the proposed 3D P-Net has far fewer parameters than HighRes3DNet. It is more memory efficient and can therefore perform inference on a 3D volume in interactive time.
Since CRF-RNN [17] was only implemented for 2D, in the context of 3D segmentation we only compared 3D CRF-Net(f) with 3D Dense CRF [14], which uses manually tuned parameters. A visual comparison between these two types of CRFs working with 3D P-Net is shown in Fig. 12. It can be observed that CRF-Net(f) achieves a more noticeable improvement than Dense CRF, which is used as post-processing without end-to-end learning. Quantitative measurement of Dense CRF and CRF-Net(f) is listed in Table III. It shows that only CRF-Net(f) obtains significantly better segmentation than 3D P-Net, with p-value < 0.05.
4.3.3 Interactive Refinement by 3D R-Net with CRF-Net(fu)
Table IV. Quantitative comparison of different refinement methods for 3D brain tumor segmentation.

Method  Dice(%)  ASSD(pixels)
Before refinement  87.55±6.72  2.04±1.70
Min-cut user-editing  88.41±7.05  1.74±1.53
3D R-Net(Euc)  88.82±7.68  1.60±1.56
3D R-Net  89.30±6.82  1.52±1.37
3D R-Net(Euc) + CRF-Net(fu)  89.27±7.32  1.48±1.22
3D R-Net + CRF-Net(fu)  89.93±6.49  1.43±1.16
Fig. 13 shows examples of interactive refinement of brain tumor segmentation using 3D R-Net with CRF-Net(fu). The initial segmentation is obtained by 3D P-Net + CRF-Net(f). With the same set of user interactions, we compared the refined results of min-cut user-editing and four variations of 3D R-Net: using geodesic or Euclidean distance transforms, with or without CRF-Net(fu). Fig. 13 shows that min-cut user-editing achieves only a small improvement. More accurate results are obtained by using the geodesic distance than the Euclidean distance, and CRF-Net(fu) further helps to improve the segmentation. For quantitative comparison, we measured the segmentation accuracy after the first iteration of refinement, in which the same set of scribbles was used for the different refinement methods. The quantitative evaluation is listed in Table IV, showing that the proposed 3D R-Net with geodesic distance and CRF-Net(fu) achieves higher accuracy than the other variations, with a Dice score of 89.93±6.49% and an ASSD of 1.43±1.16 pixels.
4.3.4 Comparison with Other Interactive Methods
Fig. 14 shows a visual comparison between GeoS [6], ITK-SNAP [59] and DeepIGeoS. In the first row, the tumor has good contrast with the background, and all the compared methods achieve very accurate segmentations. In the second row, a lower contrast makes it difficult for the user to identify the tumor boundary. Benefiting from the initial tumor boundary that is automatically identified by 3D P-Net, DeepIGeoS outperforms GeoS and ITK-SNAP. A quantitative comparison is presented in Fig. 15. It shows that DeepIGeoS achieves higher accuracy than GeoS and ITK-SNAP. In addition, the user time for DeepIGeoS is about one third of that for the other two methods. Supplementary video 2 shows more examples of DeepIGeoS for 3D brain tumor segmentation.
5 Conclusion
In this work, we presented a deep learning-based interactive framework for 2D and 3D medical image segmentation. We proposed a P-Net to obtain an initial automatic segmentation and an R-Net to refine the result based on user interactions that are transformed into geodesic distance maps and then integrated into the input of R-Net. We also proposed a resolution-preserving network structure with dilated convolution for dense prediction, and extended the existing RNN-based CRF so that it can learn free-form pairwise potentials and take advantage of user interactions as hard constraints. Segmentation results of the placenta from 2D fetal MRI and brain tumors from 3D FLAIR images show that our proposed method achieves better results than automatic CNNs. It requires far less user time compared with traditional interactive methods and achieves higher accuracy for 3D brain tumor segmentation. The framework can be extended to deal with multiple organs in the future.
Acknowledgments
This work was supported through an Innovative Engineering for Health award by the Wellcome Trust (WT101957); the Engineering and Physical Sciences Research Council (EPSRC) (NS/A000027/1, EP/H046410/1, EP/J020990/1, EP/K005278); Wellcome/EPSRC [203145Z/16/Z]; the National Institute for Health Research University College London Hospitals Biomedical Research Centre (NIHR BRC UCLH/UCL); the Royal Society [RG160569]; a UCL Overseas Research Scholarship and a UCL Graduate Research Scholarship; hardware donated by NVIDIA; and the use of Emerald, a GPU-accelerated High Performance Computer, made available by the Science & Engineering South Consortium operated in partnership with the STFC Rutherford-Appleton Laboratory.
References
 [1] N. Sharma and L. M. Aggarwal, “Automated medical image segmentation techniques.” Journal of medical physics, vol. 35, no. 1, pp. 3–14, 2010.
 [2] F. Zhao and X. Xie, “An Overview of Interactive Medical Image Segmentation,” Annals of the BMVA, vol. 2013, no. 7, pp. 1–22, 2013.
 [3] Y. Boykov and M.-P. Jolly, “Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images,” in ICCV, 2001, pp. 105–112.
 [4] C. Xu and J. L. Prince, “Snakes, Shapes, and Gradient Vector Flow,” TIP, vol. 7, no. 3, pp. 359–369, 1998.
 [5] L. Grady, “Random walks for image segmentation,” PAMI, vol. 28, no. 11, pp. 1768–1783, 2006.
 [6] A. Criminisi, T. Sharp, and A. Blake, “GeoS: Geodesic Image Segmentation,” in ECCV, 2008.
 [7] C. Rother, V. Kolmogorov, and A. Blake, ““GrabCut”: Interactive Foreground Extraction Using Iterated Graph Cuts,” ACM Trans. on Graphics, vol. 23, no. 3, pp. 309–314, 2004.
 [8] G. Wang, M. A. Zuluaga, R. Pratt, M. Aertsen, T. Doel, M. Klusmann, A. L. David, J. Deprest, T. Vercauteren, and S. Ourselin, “Slic-Seg: A minimally interactive segmentation of the placenta from sparse and motion-corrupted fetal MRI in multiple views,” Medical Image Analysis, vol. 34, pp. 137–147, 2016.
 [9] B. Wang, W. Liu, M. Prastawa, A. Irimia, P. M. Vespa, J. D. V. Horn, P. T. Fletcher, and G. Gerig, “4D active cut: An interactive tool for pathological anatomy modeling,” in ISBI, 2014.
 [10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” in CVPR, 2014.
 [11] J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in CVPR, 2015, pp. 3431–3440.
 [12] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs,” in ICLR, 2015.
 [13] M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P.-M. Jodoin, and H. Larochelle, “Brain Tumor Segmentation with Deep Neural Networks,” Medical Image Analysis, vol. 35, pp. 18–31, 2016.
 [14] K. Kamnitsas, C. Ledig, V. F. J. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker, “Efficient MultiScale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation,” Medical Image Analysis, vol. 36, pp. 61–78, 2017.
 [15] F. Yu and V. Koltun, “MultiScale Context Aggregation By Dilated Convolutions,” in ICLR, 2016.
 [16] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs,” PAMI, vol. PP, no. 99, pp. 1–1, 2017.
 [17] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr, “Conditional Random Fields as Recurrent Neural Networks,” in ICCV, 2015.
 [18] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation,” in MICCAI, 2016.
 [19] D. Lin, J. Dai, J. Jia, K. He, and J. Sun, “ScribbleSup: ScribbleSupervised Convolutional Networks for Semantic Segmentation,” in CVPR, 2016.
 [20] M. Rajchl, M. Lee, O. Oktay, K. Kamnitsas, J. Passerat-Palmbach, W. Bai, M. Rutherford, J. Hajnal, B. Kainz, and D. Rueckert, “DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks,” TMI, vol. PP, no. 99, pp. 1–1, 2016.
 [21] N. Xu, B. Price, S. Cohen, J. Yang, and T. Huang, “Deep Interactive Object Selection,” in CVPR, 2016, pp. 373–381.
 [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012.
 [23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in CVPR, 2015.
 [24] K. Simonyan and A. Zisserman, “Very deep convolutional networks for largescale image recognition,” in ICLR, 2015.
 [25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in CVPR, 2016.
 [26] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in MICCAI, 2015, pp. 234–241.
 [27] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation,” in 3DV, 2016, pp. 565–571.
 [28] P. Ondruska, J. Dequaire, D. Z. Wang, and I. Posner, “EndtoEnd Tracking and Semantic Segmentation Using Recurrent Neural Networks,” in Robotics: Science and Systems, Workshop on Limits and Potentials of Deep Learning in Robotics, 2016.
 [29] J. Dai, K. He, Y. Li, S. Ren, and J. Sun, “Instancesensitive Fully Convolutional Networks,” in ECCV, 2016.
 [30] C. Lea, R. Vidal, A. Reiter, and G. D. Hager, “Temporal Convolutional Networks: A Unified Approach to Action Segmentation,” in ECCV, 2016.
 [31] G. Lin, C. Shen, I. Reid, and A. van den Hengel, “Efficient piecewise training of deep structured models for semantic segmentation,” in CVPR, 2016.
 [32] P. Pinheiro and R. Collobert, “Recurrent convolutional neural networks for scene labeling,” in ICML, 2014.
 [33] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik, “Hypercolumns for object segmentation and finegrained localization,” in CVPR, 2015.
 [34] C. J. Armstrong, B. L. Price, and W. A. Barrett, “Interactive segmentation of image volumes with Live Surface,” Computers and Graphics (Pergamon), vol. 31, no. 2, pp. 212–229, 2007.
 [35] J. E. Cates, A. E. Lefohn, and R. T. Whitaker, “GIST: An interactive, GPUbased level set segmentation tool for 3D medical images,” Medical Image Analysis, vol. 8, no. 3, pp. 217–231, 2004.
 [36] S. A. Haider, M. J. Shafiee, A. Chung, F. Khalvati, A. Oikonomou, A. Wong, and M. A. Haider, “Single-click, semi-automatic lung nodule contouring using hierarchical conditional random fields,” in ISBI, 2015.
 [37] X. Bai and G. Sapiro, “A Geodesic Framework for Fast Interactive Image and Video Segmentation and Matting,” in ICCV, 2007.
 [38] O. Barinova, R. Shapovalov, S. Sudakov, and A. Velizhev, “Online Random Forest for Interactive Image Segmentation,” in EEML, 2012.
 [39] I. Luengo, M. C. Darrow, M. C. Spink, Y. Sun, W. Dai, C. Y. He, W. Chiu, T. Pridmore, A. W. Ashton, E. M. Duke, M. Basham, and A. P. French, “SuRVoS: Super-Region Volume Segmentation workbench,” Journal of Structural Biology, vol. 198, no. 1, pp. 43–53, 2017.
 [40] P. Kohli, J. Shotton, and A. Criminisi, “GeoF: Geodesic Forests for Learning Coupled Predictors,” in CVPR, 2013.
 [41] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” PAMI, vol. 26, no. 9, pp. 1124–1137, 2004.
 [42] J. Yuan, E. Bae, and X. C. Tai, “A study on continuous maxflow and mincut approaches,” in CVPR, 2010.
 [43] N. Payet and S. Todorovic, “(RF)^2 – Random Forest Random Field,” NIPS, vol. 1, pp. 1885–1893, 2010.
 [44] P. Krähenbühl and V. Koltun, “Efficient inference in fully connected CRFs with gaussian edge potentials,” in NIPS, 2011.
 [45] M. Szummer, P. Kohli, and D. Hoiem, “Learning CRFs using graph cuts,” in ECCV, 2008.
 [46] J. I. Orlando and M. Blaschko, “Learning Fully-Connected CRFs for Blood Vessel Segmentation in Retinal Images,” in MICCAI, 2014.
 [47] J. Domke, “Learning graphical model parameters with approximate marginal inference,” PAMI, vol. 35, no. 10, pp. 2454–2467, 2013.
 [48] P. Krähenbühl and V. Koltun, “Parameter Learning and Convergent Inference for Dense Random Fields,” in ICML, 2013, pp. 1–9.
 [49] A. Adams, J. Baek, and M. A. Davis, “Fast highdimensional filtering using the permutohedral lattice,” Computer Graphics Forum, vol. 29, no. 2, pp. 753–762, 2010.
 [50] R. Vemulapalli, O. Tuzel, M.y. Liu, and R. Chellappa, “Gaussian Conditional Random Field Network for Semantic Segmentation,” in CVPR, 2016.

 [51] F. Liu, C. Shen, and G. Lin, “Deep Convolutional Neural Fields for Depth Estimation from a Single Image,” in CVPR, 2014.
 [52] A. Kirillov, S. Zheng, D. Schlesinger, W. Forkel, A. Zelenin, P. Torr, and C. Rother, “Efficient Likelihood Learning of a Generic CNN-CRF Model for Semantic Segmentation,” in ACCV, 2016.
 [53] G. Wang, M. A. Zuluaga, R. Pratt, M. Aertsen, T. Doel, M. Klusmann, A. L. David, J. Deprest, T. Vercauteren, and S. Ourselin, “Dynamically Balanced Online Random Forests for Interactive ScribbleBased Segmentation,” in MICCAI, 2016.
 [54] V. Kolmogorov and R. Zabih, “What Energy Functions Can Be Minimized via Graph Cuts?” PAMI, vol. 26, no. 2, pp. 147–159, 2004.
 [55] D. Han, J. Bayouth, Q. Song, A. Taurani, M. Sonka, J. Buatti, and X. Wu, “Globally optimal tumor segmentation in PETCT images: A graphbased cosegmentation method,” in IPMI, 2011.
 [56] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional Architecture for Fast Feature Embedding,” in ACMICM, 2014.
 [57] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: A System for Large-Scale Machine Learning,” in OSDI, 2016, pp. 265–284.
 [58] W. Li, G. Wang, L. Fidon, S. Ourselin, M. J. Cardoso, and T. Vercauteren, “On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task,” in IPMI, 2017.
 [59] P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig, “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability,” NeuroImage, vol. 31, no. 3, pp. 1116–1128, 2006.
 [60] J. A. Deprest, A. W. Flake, E. Gratacos, Y. Ville, K. Hecher, K. Nicolaides, M. P. Johnson, F. I. Luks, N. S. Adzick, and M. R. Harrison, “The Making of Fetal Surgery,” Prenatal Diagnosis, vol. 30, no. 7, pp. 653–667, 2010.
 [61] A. Alansary, K. Kamnitsas, A. Davidson, M. Rajchl, C. Malamateniou, M. Rutherford, J. V. Hajnal, B. Glocker, D. Rueckert, and B. Kainz, “Fast Fully Automatic Segmentation of the Human Placenta from Motion Corrupted MRI,” in MICCAI, 2016.
 [62] K. Keraudren, M. Kuklisova-Murgasova, V. Kyriakopoulou, C. Malamateniou, M. A. Rutherford, B. Kainz, J. V. Hajnal, and D. Rueckert, “Automated Fetal Brain Segmentation from 2D MRI Slices for Motion Correction,” NeuroImage, vol. 101, pp. 633–643, 2014.
 [63] B. H. Menze, A. Jakab, S. Bauer, J. KalpathyCramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, L. Lanczi, E. Gerstner, M. A. Weber, T. Arbel, B. B. Avants, N. Ayache, P. Buendia, D. L. Collins, N. Cordier, J. J. Corso, A. Criminisi, T. Das, H. Delingette, Ç. Demiralp, C. R. Durst, M. Dojat, S. Doyle, J. Festa, F. Forbes, E. Geremia, B. Glocker, P. Golland, X. Guo, A. Hamamci, K. M. Iftekharuddin, R. Jena, N. M. John, E. Konukoglu, D. Lashkari, J. A. Mariz, R. Meier, S. Pereira, D. Precup, S. J. Price, T. R. Raviv, S. M. Reza, M. Ryan, D. Sarikaya, L. Schwartz, H. C. Shin, J. Shotton, C. A. Silva, N. Sousa, N. K. Subbanna, G. Szekely, T. J. Taylor, O. M. Thomas, N. J. Tustison, G. Unal, F. Vasseur, M. Wintermark, D. H. Ye, L. Zhao, B. Zhao, D. Zikic, M. Prastawa, M. Reyes, and K. Van Leemput, “The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS),” TMI, vol. 34, no. 10, pp. 1993–2024, 2015.
 [64] L. Fidon, W. Li, L. C. Garcia-Peraza-Herrera, J. Ekanayake, N. Kitchen, S. Ourselin, and T. Vercauteren, “Scalable multimodal convolutional networks for brain tumour segmentation,” in MICCAI, 2017.