HyperDense-Net: A densely connected CNN for multi-modal image segmentation

10/16/2017 ∙ by Jose Dolz, et al.

Neonatal brain segmentation in magnetic resonance (MR) images is a challenging problem due to poor image quality and similar intensity levels between white and gray matter in MR-T1 and T2 images. To tackle this problem, most existing approaches are based on multi-atlas label fusion strategies, which are time-consuming and sensitive to registration errors. As an alternative to these methods, we propose a hyper-densely connected 3D convolutional neural network that employs MR-T1 and T2 as input, processed independently in two separate paths. A main difference with respect to previous densely connected networks is the use of direct connections between layers from the same and different paths. Adopting such dense connectivity benefits learning by i) providing deep supervision and ii) improving gradient flow. This approach has been evaluated in the MICCAI iSEG Grand Challenge and obtains very competitive results among 21 teams, ranking first or second on many metrics, which translates into a promising performance.


1 Introduction

The precise segmentation of infant brain images into white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) during the first year of life is of great importance in the study of early brain development. Recognizing particular brain abnormalities shortly after birth may make it possible to predict neuro-developmental disorders. To that end, magnetic resonance imaging (MRI) is the preferred modality for imaging the neonatal brain because it is safe, non-invasive and provides cross-sectional views of the brain in multiple contrasts. Nevertheless, neonatal brain segmentation in MRI is a challenging problem due to several factors, such as reduced tissue contrast, increased noise, motion artifacts and ongoing white matter myelination in infants.

To address this problem, a wide variety of methods have been proposed [1]. A popular approach uses multiple atlases to model the anatomical variability of brain tissues [2, 3]. However, the performance of techniques based solely on atlas fusion is somewhat limited. Label propagation or adaptive methods, such as parametric or deformable models, can be applied to refine prior estimates of tissue probability [4]. Nevertheless, an important drawback of using such approaches for infant brain segmentation is the high risk of error due to the high spatial variability of the neonatal population. Moreover, to obtain accurate segmentations, these methods typically require a large number of annotated images, whose production is time-consuming and requires extensive expertise.

In recent years, deep learning methods have been proposed as an efficient alternative to the aforementioned approaches. In particular, convolutional neural networks (CNNs) have been employed successfully to address various medical image segmentation problems, achieving state-of-the-art performance in a broad range of applications [5, 6, 7, 8, 9], including infant brain tissue segmentation [10, 11, 12]. For instance, a multi-scale 2D CNN architecture is proposed in [10] to obtain accurate and spatially-consistent segmentations from a single image modality.

To overcome the problem of extremely low tissue contrast between WM and GM, various works have considered multiple modalities as input to a CNN. In [11], MR-T1, T2 and fractional anisotropy (FA) images are merged at the input of the network. Similarly, Nie et al. [12] proposed a fully convolutional neural network (FCNN), where these image modalities are processed in three independent paths and their corresponding features are later fused for the final segmentation. Yet, these approaches present some significant limitations. First, some architectures [10, 11] adopt a sliding-window strategy, where the regions defined by the window are processed one by one. This leads to low efficiency and non-structured predictions, which reduce segmentation accuracy. Second, these methods often employ 2D patches as input to the network, completely discarding anatomic context in directions orthogonal to the 2D plane. As shown in [7], using 3D convolutions instead of 2D ones results in better segmentations.

In light of the above-mentioned challenges and limitations, we propose a hyper-densely connected 3D fully convolutional network, called HyperDenseNet, for the voxel-level segmentation of the infant brain in MR-T1 and T2 images. Unlike the methods presented in [10, 11, 12], our network can incorporate 3D context and volumetric cues for effective volume prediction. The proposed HyperDenseNet also extends our recent work in [7] by exploiting dense connections in a multi-modal image scenario. This dense connectivity facilitates the learning process by including deep supervision and improving gradient flow. To the best of our knowledge, this is the first attempt to densely connect layers across multiple independent paths, each of them specifically designed for a different image modality. We validate the proposed network on data from the iSEG-2017 MICCAI Grand Challenge on 6-month infant brain MRI segmentation, showing the state-of-the-art performance of our network.

2 Methodology

2.1 Single-path baseline

The architectures presented in this work, which are built on top of DeepMedic [6], are inspired by our recent work in [7], where we proposed a 3D fully convolutional neural network to segment subcortical brain structures. An important feature of that network was its ability to model both local and global context by embedding intermediate-layer outputs in the final prediction. This helped enforce consistency between features extracted at different scales, and embed fine-grained information directly in the segmentation process. Hence, outputs from intermediate convolutional layers (i.e., layers 3 and 6) were directly connected to the first fully connected layer (fully_conv_1)*.

* Fully connected layers are replaced by a set of convolutional filters.

As baseline, we extend this semi-dense architecture to a fully-dense one by connecting the output of all convolutional layers to fully_conv_1. In this network, the MR-T1 and T2 images are concatenated at the input of the CNN and processed together via a single path. Table 1 shows the architecture of this baseline network, where each convolutional block is composed of batch normalization, a non-linearity (PReLU), and a convolution. Due to space limitations, we refer the reader to [6] and [7] for additional details.
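The baseline's input fusion can be sketched as follows; the shapes are illustrative toy values (channel-first volumes), not the actual sub-volume sizes used in the paper:

```python
import numpy as np

# toy MR-T1 and T2 sub-volumes, one channel each (channel-first layout)
t1 = np.random.default_rng(0).standard_normal((1, 32, 32, 32))
t2 = np.random.default_rng(1).standard_normal((1, 32, 32, 32))

# baseline "early fusion": modalities are concatenated along the channel
# axis and processed together via a single path
x = np.concatenate([t1, t2], axis=0)
print(x.shape)  # (2, 32, 32, 32)
```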

                Conv. kernel   # kernels   Dropout
conv_1          3×3×3          25          No
conv_2          3×3×3          25          No
conv_3          3×3×3          25          No
conv_4          3×3×3          50          No
conv_5          3×3×3          50          No
conv_6          3×3×3          50          No
conv_7          3×3×3          75          No
conv_8          3×3×3          75          No
conv_9          3×3×3          75          No
fully_conv_1    1×1×1          400         Yes
fully_conv_2    1×1×1          200         Yes
fully_conv_3    1×1×1          150         Yes
Classification  1×1×1          4           No

Table 1: Layers used in the proposed architecture. In the case of multi-modal images, the convolutional layers (conv_x) are present in both paths of the network. All convolutional layers have a stride of one voxel.

2.2 The proposed hyper-dense network

The concept of “the deeper the better” is considered a key principle in deep learning architectures [13]. Nevertheless, one obstacle when dealing with deep architectures is the problem of vanishing/exploding gradients, which hampers convergence during training. To address these limitations in very deep architectures, densely connected networks were proposed in [14]. DenseNets are built on the idea that adding direct connections from any layer to all subsequent layers in a feed-forward manner can make training easier and more accurate. This is motivated by three observations. First, there is implicit deep supervision thanks to short paths to all feature maps in the architecture. Second, direct connections between all layers help improve the flow of information and gradients throughout the entire network. And third, dense connections have a regularizing effect, which reduces the risk of over-fitting on tasks with smaller training sets.

Inspired by the recent success of such densely-connected networks in medical image segmentation [15, 16], we propose a hyper-dense architecture, called HyperDenseNet, for the segmentation of multi-modal images. Unlike the baseline model, where dense connections are employed through all the layers in a single stream, we exploit the concept of dense connectivity in a multi-modal image setting. In this scenario, each modality is processed in an independent path, and dense connections occur not only between layers within the same path, but also between layers in different paths.

Figure 1: A section of the proposed HyperDenseNet. Each gray region represents a convolutional block. Red arrows correspond to convolutions and black arrows indicate dense connections between feature maps. Hyper-dense connections are propagated through all the layers of the network.

The blocks composing our HyperDenseNet are similar to those in the baseline architecture. Let x_l be the output of the l-th layer. In CNNs, this vector is typically obtained from the output of the previous layer, x_{l-1}, by a mapping H_l composed of a convolution followed by a non-linear activation function:

    x_l = H_l(x_{l-1}).   (1)

In a densely-connected network, connectivity follows a pattern that iteratively concatenates all feature outputs in a feed-forward manner, i.e.,

    x_l = H_l([x_{l-1}, x_{l-2}, ..., x_0]),   (2)

where [ · ] represents a concatenation operation.

Pushing this idea further, HyperDenseNet considers a more sophisticated connectivity pattern that also links the outputs from layers in different streams, each one associated with a different image modality. Denote as x_l^1 and x_l^2 the outputs of the l-th layer in streams 1 and 2, respectively. The output of the l-th layer in a stream s can then be defined as

    x_l^s = H_l^s([x_{l-1}^1, x_{l-1}^2, x_{l-2}^1, x_{l-2}^2, ..., x_0^1, x_0^2]).   (3)

A section of the proposed architecture is depicted in Figure 1, where each gray region represents a convolutional block. For simplicity, red arrows indicate convolution operations only, while black arrows represent direct connections between feature maps from different layers. Thus, the input of each convolutional block (the maps before a red arrow) consists of the concatenation of the outputs (the maps after red arrows) of all preceding layers from both paths.
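The hyper-dense connectivity pattern of Eq. (3) can be sketched with plain NumPy. The block below tracks feature-map shapes only: a random channel-mixing map stands in for each 3×3×3 conv + PReLU block (so it is not a real convolution), and the toy widths follow the first three rows of Table 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_block(x, n_out):
    """Stand-in for a 3x3x3 conv + PReLU block: a random linear mixing
    of the channel axis followed by a ReLU (shapes only, not a real
    convolution)."""
    w = rng.standard_normal((n_out, x.shape[0]))
    return np.maximum(np.tensordot(w, x, axes=(1, 0)), 0.0)

def hyperdense_forward(t1, t2, widths):
    """Two modality streams; the input to block l in EACH stream is the
    concatenation of all earlier outputs from BOTH streams (Eq. 3)."""
    outs = {1: [t1], 2: [t2]}
    for n_out in widths:
        joint = np.concatenate(outs[1] + outs[2], axis=0)  # hyper-dense input
        outs[1].append(conv_block(joint, n_out))
        outs[2].append(conv_block(joint, n_out))
    return outs

# toy volumes: one channel, 9x9x9 voxels per modality
t1 = rng.standard_normal((1, 9, 9, 9))
t2 = rng.standard_normal((1, 9, 9, 9))
outs = hyperdense_forward(t1, t2, widths=[25, 25, 25])

# channels entering the last block: 2 * (1 + 25 + 25) = 102
n_in_last = sum(o.shape[0] for o in outs[1][:-1] + outs[2][:-1])
print(n_in_last)  # 102
```

The channel count entering each block grows with the outputs of all preceding layers from both paths, which is exactly the concatenation described by Eq. (3).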

2.2.1 Training parameters and implementation details

To have a large receptive field, FCNNs typically expect full images as input, and the number of parameters is then limited via pooling/unpooling layers. A problem with this approach is the loss of resolution caused by repeated down-sampling operations. In the proposed method, we follow the technique described in [6, 7], where sub-volumes are used as input and pooling layers are avoided. While sub-volumes of one size are used during training, sub-volumes of a different size are used during inference, as in [6, 7].
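A minimal sketch of this sub-volume sampling strategy; the scan dimensions and the 27-voxel crop size below are illustrative assumptions (the actual training and inference sizes are those of [6, 7]):

```python
import numpy as np

def sample_subvolumes(volume, n, size, rng):
    """Randomly crop n cubic sub-volumes of the given side length from a
    3D scan. The side length (27 here) is illustrative only."""
    d, h, w = volume.shape
    crops = []
    for _ in range(n):
        z = rng.integers(0, d - size + 1)
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        crops.append(volume[z:z+size, y:y+size, x:x+size])
    return np.stack(crops)

rng = np.random.default_rng(0)
scan = rng.standard_normal((144, 192, 256))   # a toy MR volume
batch = sample_subvolumes(scan, n=5, size=27, rng=rng)
print(batch.shape)  # (5, 27, 27, 27)
```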

To initialize the weights of the network, we adopted the strategy proposed in [17], which allows very deep architectures to converge rapidly. In this strategy, the weights in layer l are initialized from a zero-mean Gaussian distribution with standard deviation sqrt(2 / n_l), where n_l denotes the number of connections to the units in that layer. Momentum was set to 0.6 and the initial learning rate to 0.001, reduced by a factor of 2 after every 5 epochs (starting from epoch 10). Network parameters were optimized via the RMSprop optimizer, with cross-entropy as the cost function. The network was trained for 30 epochs, each composed of 20 subepochs. In each subepoch, a total of 1000 samples were randomly selected from the training images and processed in batches of size 5.

We extended our 3D FCNN architecture proposed in [7], which is based on Theano and whose source code can be found at https://www.github.com/josedolz/LiviaNET. Training and testing were performed on a server equipped with an NVIDIA Tesla P100 GPU with 16 GB of memory. Training HyperDenseNet took around 70 minutes per epoch, and around 35 hours in total. Segmenting a whole 3D MR scan takes 70-80 seconds on average.

3 Experiments and results

3.1 Dataset

The dataset employed in this study is publicly available from the iSEG MICCAI Grand Challenge (http://iseg2017.web.unc.edu/). The scans selected for training and testing were acquired at UNC-Chapel Hill and were randomly chosen from the pilot study of the Baby Connectome Project (BCP, http://babyconnectomeproject.org). All scans were acquired on a Siemens head-only 3T scanner with a circularly polarized head coil. During the scan, infants were asleep, unsedated, fitted with ear protection, and their heads were secured in a vacuum-fixation device.

T2 images were linearly aligned to their corresponding T1 images. All images were resampled to an isotropic 1×1×1 mm³ resolution. Using in-house tools, standard image pre-processing steps were then applied before manual segmentation, including skull stripping, intensity inhomogeneity correction, and removal of the cerebellum and brain stem. We used 9 subjects for training the network, one for validation, and 13 subjects for testing.

3.2 Results

To demonstrate the benefits of the proposed HyperDenseNet, Table 2 compares the segmentation accuracy of our architecture with that of the baseline for the CSF, GM and WM brain tissues. Three metrics are employed for evaluation: the Dice coefficient (DC), the modified Hausdorff distance (MHD) and the average symmetric distance (ASD). Higher DC values indicate greater overlap between automatic and manual contours, while lower MHD and ASD values indicate higher boundary similarity.
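As an illustration of the overlap metric, here is a minimal Dice coefficient on binary masks (the MHD and ASD are boundary-distance metrics and are omitted for brevity):

```python
import numpy as np

def dice(pred, ref):
    """Dice coefficient between two binary masks:
    DC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

# toy example: two overlapping 6x6x6 cubes in a small volume
a = np.zeros((10, 10, 10)); a[2:8, 2:8, 2:8] = 1   # 216 voxels
b = np.zeros((10, 10, 10)); b[3:9, 3:9, 3:9] = 1   # 216 voxels
print(round(dice(a, b), 3))  # 2*125 / 432 -> 0.579
```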

Results in Table 2 show that HyperDenseNet outperforms the baseline. Our network yields better DC and ASD values than the baseline in all cases. Likewise, it achieves a lower MHD for GM and WM tissues. Considering standard deviations, the accuracy of HyperDenseNet shows less variance than the baseline, again in GM and WM regions. A paired-sample t-test between both configurations revealed that the differences were statistically significant (p < 0.05) across all the results, except for the MHD in CSF tissues (p = 0.658).

Tissue         Method          DC              MHD             ASD
CSF            Baseline        0.953 (0.007)   9.296 (0.942)   0.128 (0.016)
               HyperDenseNet   0.957 (0.007)   9.421 (1.392)   0.119 (0.017)
Gray matter    Baseline        0.916 (0.009)   7.131 (1.729)   0.346 (0.041)
               HyperDenseNet   0.920 (0.008)   5.752 (1.078)   0.329 (0.041)
White matter   Baseline        0.895 (0.015)   6.903 (1.140)   0.406 (0.051)
               HyperDenseNet   0.901 (0.014)   6.659 (0.932)   0.382 (0.047)

Table 2: Mean segmentation values (standard deviations in parentheses) provided by the iSEG Challenge organizers for the two analyzed methods.

A comparison of the training and validation accuracy of the baseline and HyperDenseNet is shown in Figure 2. In these plots, the mean DC over the three brain tissues is evaluated on training samples after each sub-epoch, and on the whole validation volume after each epoch. It can be observed that in both cases HyperDenseNet outperforms the baseline, achieving better results faster. This can be attributed to the higher number of direct connections between layers, which facilitates back-propagation of the gradient to shallow layers without diminishing its magnitude, thus easing the optimization.

Figure 2: Training (top) and validation (bottom) accuracy plots.

Figure 3 depicts visual results for the subject used in validation. It can be observed that HyperDenseNet (middle) recovers thin regions better than the baseline (left), which can explain its improvements in distance-based metrics. As confirmed in Table 2, this effect is most prominent at the boundaries between gray and white matter. Further, HyperDenseNet produces fewer false positives for WM than the baseline, which tends to over-segment this region.

Figure 3: Comparison of the segmentation results of the baseline (left) and HyperDenseNet (middle) with the manual reference contour (right) on the subject employed for validation.

Comparing these results with the performance of the methods submitted in the first round of the iSEG Challenge, HyperDenseNet ranked among the top 3 in 6 out of 9 metrics, and was the best method in 4 of them. We can therefore say that it achieves state-of-the-art performance for the task at hand. A noteworthy point is the lower performance observed for all tested methods on GM and WM segmentation. This suggests that segmenting these tissues is relatively more challenging, due to the unclear boundaries between them.

An extension of this study would be to investigate deeper networks with fewer filters per layer, as in recently-proposed dense networks. This may reduce the number of trainable parameters while maintaining, or even improving, performance. Further, as in [14], the individual weights of dense connections could also be investigated to determine their relative importance. This would allow us to remove unnecessary connections, making the model lighter without degrading its performance.

4 Conclusion

In this paper, we proposed a hyper-densely connected 3D fully convolutional network to segment infant brain tissue in MRI. This network, called HyperDenseNet, pushes the concept of connectivity beyond recent works by exploiting dense connections in a multi-modal image scenario. Instead of considering dense connections in a single stream, HyperDenseNet processes each modality in independent paths that are inter-connected in a dense manner.

We validated the proposed network in the iSEG-2017 MICCAI Grand Challenge on 6-month infant brain MRI Segmentation, reporting state-of-the-art results. In the future, we plan to investigate the effectiveness of HyperDenseNet in other segmentation problems that can benefit from multi-modal data.

References

  • [1] Antonios Makropoulos et al., “A review on automatic fetal and neonatal brain MRI segmentation,” NeuroImage, 2017.
  • [2] M Jorge Cardoso et al., “AdaPT: an adaptive preterm segmentation algorithm for neonatal brain MRI,” NeuroImage, vol. 65, pp. 97–108, 2013.
  • [3] Li Wang et al., “Segmentation of neonatal brain MR images using patch-driven level sets,” NeuroImage, vol. 84, pp. 141–158, 2014.
  • [4] Li Wang et al., “Automatic segmentation of neonatal images using convex optimization and coupled level sets,” NeuroImage, vol. 58, no. 3, pp. 805–817, 2011.
  • [5] Mohammad Havaei, Nicolas Guizard, Nicolas Chapados, and Yoshua Bengio, “Hemis: Hetero-modal image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 469–477.
  • [6] Konstantinos Kamnitsas et al., “Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation,” Medical image analysis, vol. 36, pp. 61–78, 2017.
  • [7] Jose Dolz et al., “3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study,” NeuroImage, 2017.
  • [8] Tobias Fechter et al., “Esophagus segmentation in CT via 3D fully convolutional neural network and random walk,” Medical Physics, 2017.
  • [9] Christian Wachinger, Martin Reuter, and Tassilo Klein, “Deepnat: Deep convolutional neural network for segmenting neuroanatomy,” NeuroImage, 2017.
  • [10] Pim Moeskops et al., “Automatic segmentation of MR brain images with a convolutional neural network,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1252–1261, 2016.
  • [11] W. Zhang et al., “Deep convolutional neural networks for multi-modality isointense infant brain image segmentation,” NeuroImage, vol. 108, pp. 214–224, 2015.
  • [12] Dong Nie et al., “Fully convolutional networks for multi-modality isointense infant brain image segmentation,” in 13th International Symposium on Biomedical Imaging (ISBI), 2016. IEEE, 2016, pp. 1342–1345.
  • [13] Kaiming He et al., “Deep residual learning for image recognition,” in Proceedings of the IEEE CVPR, 2016, pp. 770–778.
  • [14] Gao Huang et al., “Densely connected convolutional networks,” in Proceedings of the IEEE CVPR, 2017.
  • [15] Xiaomeng Li et al., “H-DenseUNet: Hybrid densely connected UNet for liver and liver tumor segmentation from CT volumes,” arXiv:1709.07330, 2017.
  • [16] Lequan Yu et al., “Automatic 3D cardiovascular MR segmentation with densely-connected volumetric convnets,” in International Conference on MICCAI. Springer, 2017, pp. 287–295.
  • [17] Kaiming He et al., “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE ICCV, 2015, pp. 1026–1034.