A Two-Stream Meticulous Processing Network for Retinal Vessel Segmentation

01/15/2020 ∙ by Shaoming Zheng, et al.

Vessel segmentation in fundus images is a key diagnostic capability in ophthalmology, yet several challenges remain in this essential task. Early approaches show that it is often difficult to obtain desirable segmentation performance on thin vessels and boundary areas due to the imbalance among vessel pixels of different thickness levels. In this paper, we propose a novel two-stream Meticulous-Processing Network (MP-Net) to tackle this problem. To pay more attention to thin vessels and boundary areas, we first propose an efficient hierarchical model that automatically stratifies the ground-truth masks into different thickness levels. A novel two-stream adversarial network then exploits the stratification results, together with a balanced loss function and an integration operation, to achieve better performance, especially in detecting thin vessels and boundary areas. Our model is shown to outperform state-of-the-art methods on the DRIVE, STARE, and CHASE_DB1 datasets.


1 Introduction

Fundus image analysis serves as a key, non-invasive tool in the diagnosis and treatment of many ophthalmological and cardiovascular diseases. With the development of deep learning methods, many network architectures based on U-Net or adversarial procedures have been proposed to learn end-to-end relations between an original image and a ground-truth binary mask manually labeled by experts. Maninis et al. [5] proposed Deep Retinal Image Understanding (DRIU), which fine-tunes VGGNet. As deep learning approaches have progressed, segmentation performance on thin vessels has become a major challenge and focus. Zhang et al. [12] propose a U-Net architecture (ML-UNet) [7] for multi-label segmentation of thin and stem (thick) vessels. Yan et al. [10] propose a novel segment-level loss in addition to the pixel-level loss to train a U-Net architecture (JL-UNet), and report increased segmentation accuracy for thin vessels. Yet the works of Zhang et al. [12] and Yan et al. [10], which propose essentially multi-label miscellaneous networks, lack an end-to-end network dedicated to specific binary classification tasks focusing on different types of features. Additionally, Gu et al. [3] propose a context encoder network (CE-Net) to better extract high-level image information, but CE-Net fails to focus on thin vessels and boundary areas.

In this paper, we examine the rationale behind this problem from a data-balancing perspective. The reason ordinary neural networks fail to obtain desirable segmentation performance on thin vessels and boundary areas is that vessel data suffer from imbalance internal to an assumedly identical class (vascular or non-vascular). Vessels of different thickness levels may have different features for identification and localization, making them essentially different classes in a segmentation task. Therefore, balancing across these classes becomes important to avoid bias in learning. However, such balancing remains challenging because in most available segmentation datasets the ground-truth mask is binary, providing no immediate information regarding thickness levels. In view of this challenge, we propose a novel morphological model that automatically segments and classifies (stratifies) ground-truth masks into strata of vessel thickness levels using hierarchical opening operations. To further increase segmentation performance, we also propose a two-stream model that learns both general retinal vascular features and those specific to thin vessels and boundary areas by processing both all strata and only the thin-vessel stratum (hereinafter, "thin vessels" refers to both thin vessels and boundary areas). The results of the two streams are united (pixel-wise ORed) to produce the final result.

Our contributions mainly lie in three aspects. (1) We propose a novel two-stream architecture to synthesize features of different thickness levels. (2) We propose an efficient hierarchical model of opening operations that automatically stratifies the ground-truth masks to inject thickness-level sensitivity into our model; it is jointly utilized with a proposed CE-GAN model whose generator is based on the CE-Net [3] architecture. (3) We propose a balanced loss function and an integration operation to unify, and enable weighting of, vessel classes of various thickness levels.

Figure 1: The proposed method includes a meticulous-processing architecture, an automatic stratification method, and a CE-GAN network, where the CE-GAN and the automatic stratification are embedded in the meticulous-processing architecture.

2 Proposed Method

2.1 Automatic Stratification

For each original sample $x$, the mask $y$ is stratified into componential masks (strata) $\{y_1, \dots, y_n\}$, each with only the vessel labels of the corresponding thickness level. The stratification is achieved via opening (erosion then dilation) with thresholding kernels of sizes $k_1 < k_2 < \dots < k_n$. We define the diameter of a vessel as the discrete Fréchet distance between its two border curves $C_1$ and $C_2$:

$$d_F(C_1, C_2) = \min_{\alpha, \beta} \max_{t \in [0, 1]} \delta\big(C_1(\alpha(t)), C_2(\beta(t))\big) \qquad (1)$$

where $\alpha$ and $\beta : [0, 1] \to [0, 1]$ are two non-decreasing surjections and $\delta$ is the Chebyshev distance between two pixels. All vessels with $d_F < k_i$ are guaranteed to be completely erased by a kernel of size $k_i$, while all vessels with $d_F \ge k_i$ (attenuated during erosion) restore their original outlines after dilation and emerge intact from the whole opening process. This process results in an intermediary semi-limited mask $\tilde{y}_i = (y \ominus K_{k_i}) \oplus K_{k_i}$, wherefrom we can derive the final precisely selective strata:

$$y_i = \tilde{y}_{i-1} \setminus \tilde{y}_i \qquad (2)$$

where $\tilde{y}_0 = y$.
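As an illustration, the opening-based stratification with a single kernel size can be sketched in pure NumPy. The helper filters, the kernel size k = 3, and the toy mask below are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def binary_erode(mask, k):
    """Erosion with a k x k square structuring element (pure-NumPy sketch)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def binary_dilate(mask, k):
    """Dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def stratify(mask, k):
    """Split a binary vessel mask into stem and thin strata via opening.
    Vessels thinner than k are erased by the erosion and never restored,
    so they end up in the thin stratum together with eroded boundaries."""
    stem = binary_dilate(binary_erode(mask, k), k)  # semi-limited mask
    thin = mask & ~stem                             # thin vessels + boundaries
    return stem, thin

# toy mask: one thick bar (diameter 5) and one thin bar (diameter 1)
mask = np.zeros((20, 20), dtype=bool)
mask[2:7, 2:18] = True   # thick vessel
mask[12, 2:18] = True    # thin vessel
stem, thin = stratify(mask, 3)
```

With k = 3 the thin bar is erased entirely by erosion and lands in the thin stratum, while the thick bar survives and restores its outline after dilation.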

2.2 Two-stream Model

To learn vessel features of different specificities, we propose a novel two-stream model for both general features and those especially related to thin vessels. In one stream, a general network learns features by training against all three strata. To effectively learn the features of different thickness levels, we concatenate the two stratified masks (stem and thin) and the original mask (raw) along a third, strata dimension to form a target of shape $H \times W \times 3$ for training. Samples with stratified masks are fed to a general end-to-end U-Net-like segmentation network that outputs a prediction map against each stratum. In the other stream, an additional end-to-end network dedicated to segmenting thin vessels outputs only one prediction map against only the stratum of thin-vessel labels.
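For concreteness, the strata concatenation might look like the following sketch; the DRIVE image size and the variable names are illustrative assumptions:

```python
import numpy as np

H, W = 584, 565  # DRIVE fundus image size (height x width)
raw  = np.zeros((H, W), dtype=bool)  # original binary ground-truth mask
stem = np.zeros((H, W), dtype=bool)  # thick-vessel stratum
thin = np.zeros((H, W), dtype=bool)  # thin-vessel (and boundary) stratum

# concatenate along a third, strata dimension: one target per prediction map
Y = np.stack([raw, stem, thin], axis=-1)
print(Y.shape)  # (584, 565, 3)
```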

We use weighted MSE as the loss of each network and apply the corresponding backward updates. In this way, vessels of different thickness levels have configurable weights in the final losses, and the thickness-insensitive segmentation dataset can be internally balanced:

$$\mathcal{L}_{\text{seg}} = \sum_{i} w_i \, \| \hat{y}_i - y_i \|_F^2 \qquad (3)$$

where $\| \cdot \|_F$ stands for the Frobenius norm of the residual tensor, $\hat{y}_i$ is the prediction map for stratum $i$, and $w_i$ is its configurable weight.
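A minimal sketch of this balanced loss, with hypothetical per-stratum weights:

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Balanced loss: sum of per-stratum squared Frobenius norms of the
    residual tensor, each scaled by a configurable stratum weight."""
    residual = pred - target                          # shape (H, W, S)
    per_stratum = np.sum(residual ** 2, axis=(0, 1))  # ||.||_F^2 per stratum
    return float(np.dot(weights, per_stratum))

# toy example: two 2x2 strata, the second weighted twice as heavily
pred   = np.ones((2, 2, 2))
target = np.zeros((2, 2, 2))
loss = weighted_mse(pred, target, weights=[1.0, 2.0])  # 4*1 + 4*2 = 12
```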

The segmentation problem can also be formulated as an image-to-image translation task from the original image to the ground-truth mask. Specifically, we materialize the two-stream network as adversarial CE-GAN models. Under this setting, we train the generative networks with the following adversarial loss:

$$\mathcal{L}_{\text{adv}}(G) = -\mathbb{E}_{x}\big[\log D\big(x, G(x)\big)\big] \qquad (4)$$

In addition, the generators are also trained directly against the ground-truth strata to refine the segmentation results with an L1 norm $\mathcal{L}_{L1}(G) = \mathbb{E}\big[\|G(x) - y\|_1\big]$. Moreover, the adversarial segmentation networks are updated with a min-max algorithm, where the losses of the above two training ends are balanced by a hyper-parameter $\lambda$:

$$\min_G \max_D \; \mathcal{L}_{\text{GAN}}(G, D) + \lambda \mathcal{L}_{L1}(G) \qquad (5)$$
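Under this pix2pix-style reading of the objective, the generator's combined loss could be sketched as follows; the discriminator scores, the epsilon, and lambda = 10 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def generator_loss(d_fake, pred, target, lam=10.0, eps=1e-8):
    """Generator objective sketch: an adversarial term (push the
    discriminator to score generated masks as real) plus a lambda-weighted
    L1 refinement term against the ground-truth strata."""
    adv = -np.mean(np.log(d_fake + eps))   # adversarial term, Eq. (4)
    l1  = np.mean(np.abs(pred - target))   # L1 refinement term
    return adv + lam * l1                  # generator side of Eq. (5)

# a perfect generator (D fooled completely, zero residual) has ~zero loss
loss = generator_loss(np.array([1.0]), np.zeros((4, 4)), np.zeros((4, 4)))
```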
Methods              Year |           DRIVE             |           STARE             |         CHASE_DB1
                          |  Se     Sp     Acc    AUC   |  Se     Sp     Acc    AUC   |  Se     Sp     Acc    AUC
Unsupervised              |                             |                             |
Zhang [11]           2016 | 0.7743 0.9725 0.9476 0.9636 | 0.7791 0.9758 0.9554 0.9748 | 0.7626 0.9661 0.9452 0.9606
Fan [1]              2019 | 0.736  0.981  0.960  -      | 0.791  0.970  0.957  -      | 0.657  0.973  0.951  -
Classical Supervised      |                             |                             |
Fraz [2]             2012 | 0.7406 0.9807 0.9480 0.9747 | 0.7548 0.9763 0.9534 0.9768 | 0.7224 0.9711 0.9469 0.9712
Wang [9]             2019 | 0.7648 0.9817 0.9541 -      | 0.7523 0.9885 0.9603 -      | 0.7730 0.9792 0.9603 -
Deep Learning             |                             |                             |
Maninis [5]          2016 | 0.8280 0.9728 0.9541 0.9801 | 0.7919 0.9827 0.9706 0.9814 | 0.7651 0.9822 0.9657 0.9746
ML-UNet [12]         2018 | 0.8723 0.9618 0.9504 0.9799 | 0.7673 0.9901 0.9712 0.9882 | 0.7667 0.9825 0.9649 0.9839
JL-UNet [10]         2018 | 0.7653 0.9818 0.9542 0.9752 | 0.7581 0.9846 0.9612 0.9801 | 0.7633 0.9809 0.9610 0.9781
Gu [3]               2019 | 0.8309 -      0.9545 0.9779 | -      -      -      -      | -      -      -      -
Proposed             2019 | 0.7862 0.9858 0.9681 0.9844 | 0.7934 0.9884 0.9733 0.9883 | 0.7492 0.9890 0.9722 0.9858
Table 1: Performance comparisons with previous work (Se: sensitivity, Sp: specificity, Acc: accuracy, AUC: area under the ROC curve)
                           DRIVE   STARE   CHASE_DB1
CE-Net [3]                 0.9779  0.9810  0.9806
CE-GAN                     0.9820  0.9817  0.9812
CE-GAN + stratify          0.9839  0.9850  0.9840
CE-GAN + stratify + thin   0.9844  0.9883  0.9858
Table 2: AUCs of the ablation study of the MP-Net

Since both networks produce smooth predictions, we first binarize the preliminary outputs with a threshold of 127. Then, as the final output of our system, the positive binarized predictions from all prediction maps are united (pixel-wise ORed).
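The binarize-then-OR integration step can be sketched as follows (the map names and toy values are illustrative):

```python
import numpy as np

def integrate(prediction_maps, threshold=127):
    """Binarize each smooth 8-bit prediction map at the threshold, then
    unite the positive pixels with a pixel-wise OR."""
    out = np.zeros(prediction_maps[0].shape, dtype=bool)
    for p in prediction_maps:
        out |= (p > threshold)
    return out

# a pixel is positive if any stream/stratum map predicts it above 127
a = np.array([[0, 200], [90, 0]], dtype=np.uint8)
b = np.array([[130, 0], [90, 0]], dtype=np.uint8)
final = integrate([a, b])
```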

3 Experiments

3.1 Datasets and Experimental Setup

We evaluate our model on three standard datasets widely used for the retinal vessel segmentation task. None of these three datasets contains annotations of vessel thickness levels, making them appropriate for our stratification model to process. DRIVE [8] (https://www.isi.uu.nl/Research/Databases/DRIVE/) contains 40 color fundus (CF) images with manually labeled ground-truth masks, of which we use 20 images for training and the remaining 20 for testing. To reduce selection bias, we repeat the experiment 5 times and report the averaged result. STARE [4] (http://cecas.clemson.edu/~ahoover/stare/) contains 20 manually labeled CF images; we report average results of 4-fold cross-validation with 15 training samples and 5 testing samples. CHASE_DB1 [6] (https://blogs.kingston.ac.uk/retinal/chasedb1/) contains 28 labeled samples, on which we report average performance over 4-fold cross-validation.

3.2 Evaluation Metrics

Standard metrics for binary classification tasks, including the Area Under Curve (AUC) of the Receiver Operating Characteristic (ROC), Accuracy (Acc), Specificity (Sp), and Sensitivity (Se, i.e., Recall), are used for evaluating our model. The selected metrics are defined as $Acc = (TP + TN)/(TP + TN + FP + FN)$, $Sp = TN/(TN + FP)$, and $Se = TP/(TP + FN)$, where $TP$, $TN$, $FP$, and $FN$ respectively stand for true positives, true negatives, false positives, and false negatives.
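These thresholding metrics reduce to a few ratios over the confusion counts; a short sketch with toy counts:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy, specificity, and sensitivity from confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sp  = tn / (tn + fp)
    se  = tp / (tp + fn)
    return acc, sp, se

# toy counts: 100 pixels, 18 of them vessel in the ground truth
acc, sp, se = confusion_metrics(tp=8, tn=80, fp=2, fn=10)
```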

Figure 2: An example from the DRIVE dataset. Stratification (first row, left to right): (1) input image, (2) raw mask, (3) stem mask, (4) thin mask. Segmentation results (second row): (5) overall prediction (red marks false-positive areas, green marks false-negative areas), (6) raw prediction, (7) stem prediction, (8) thin prediction (from the thin-vessel stream).

3.3 Experimental Results

To assess the performance of our model, we compare the four metrics with those of eight representative previous works on all three open-access datasets. The comparison results in Table 1 show that our MP-Net outperforms the state-of-the-art methods in accuracy and AUC on all three datasets; these two metrics measure the practical prediction quality and the overall prediction quality independent of thresholding specifications, respectively. The improvement is greatest on the DRIVE dataset, likely because DRIVE contains more thin vessels, which are the main target of our model. Specificity is also the highest on DRIVE and CHASE_DB1, while sensitivity is the highest on STARE. In particular, our method outperforms ML-UNet [12] and JL-UNet [10], which adopt a different, multi-class approach that also especially tackles the thin-vessel challenge. Figure 2 shows an example of our segmentation maps on DRIVE. As can be seen, most thin vessels and boundary areas have been meticulously picked up.

3.4 Ablation Study

Our proposed MP-Net can be roughly decomposed into four major progressive phases: (1) the backbone Context Encoder Network (CE-Net) as a standalone generator segmenting non-stratified images; (2) the non-stratified CE-Net of (1) together with a discriminator, forming a CE-GAN; (3) the CE-GAN with a stratified CE-Net (i.e., with raw, stem, and thin strata), forming one stream of the MP-Net; and (4) the one-stream MP-Net of (3) with another stream of a thin-stratum-specific GAN as in (2), forming the complete two-stream MP-Net. We perform a full series of ablation studies on all datasets to verify the effect of each component in isolation. The results in Table 2 validate that the stratification-and-mingled-training mechanism and the thin-specific designs are both effective improvements over the baseline system.

4 Conclusion

In this paper, we propose the Meticulous-Processing Network (MP-Net), which refines segmentation performance on thin vessels by stratifying and training on different thickness levels. The performance comparison and ablation study validate our design. This composite method can also be extended to other vessel-like segmentation tasks.

References

  • [1] Z. Fan, J. Lu, C. Wei, H. Huang, X. Cai, and X. Chen (2018) A hierarchical image matting model for blood vessel segmentation in fundus images. IEEE Transactions on Image Processing 28 (5), pp. 2367–2377. Cited by: Table 1.
  • [2] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman (2012) An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Transactions on Biomedical Engineering 59 (9), pp. 2538–2548. Cited by: Table 1.
  • [3] Z. Gu, J. Cheng, H. Fu, K. Zhou, H. Hao, Y. Zhao, T. Zhang, S. Gao, and J. Liu (2019) CE-net: context encoder network for 2d medical image segmentation. IEEE transactions on medical imaging. Cited by: §1, §1, Table 1, Table 2.
  • [4] A. Hoover, V. Kouznetsova, and M. Goldbaum (1998) Locating blood vessels in retinal images by piece-wise threshold probing of a matched filter response. In Proceedings of the AMIA Symposium, pp. 931. Cited by: §3.1.
  • [5] K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool (2016) Deep retinal image understanding. In International conference on medical image computing and computer-assisted intervention, pp. 140–148. Cited by: §1, Table 1.
  • [6] C. G. Owen, A. R. Rudnicka, R. Mullen, S. A. Barman, D. Monekosso, P. H. Whincup, J. Ng, and C. Paterson (2009) Measuring retinal vessel tortuosity in 10-year-old children: validation of the computer-assisted image analysis of the retina (caiar) program. Investigative ophthalmology & visual science 50 (5), pp. 2004–2010. Cited by: §3.1.
  • [7] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §1.
  • [8] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken (2004) Ridge-based vessel segmentation in color images of the retina. IEEE transactions on medical imaging 23 (4), pp. 501–509. Cited by: §3.1.
  • [9] X. Wang, X. Jiang, and J. Ren (2019) Blood vessel segmentation from fundus image by a cascade classification framework. Pattern Recognition 88, pp. 331–341. Cited by: Table 1.
  • [10] Z. Yan, X. Yang, and K. Cheng (2018) Joint segment-level and pixel-wise losses for deep learning based retinal vessel segmentation. IEEE Transactions on Biomedical Engineering 65 (9), pp. 1912–1923. Cited by: §1, Table 1, §3.3.
  • [11] J. Zhang, B. Dashtbozorg, E. Bekkers, J. P. Pluim, R. Duits, and B. M. ter Haar Romeny (2016) Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores. IEEE transactions on medical imaging 35 (12), pp. 2631–2644. Cited by: Table 1.
  • [12] Y. Zhang and A. C. Chung (2018) Deep supervision with additional labels for retinal vessel segmentation task. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 83–91. Cited by: §1, Table 1, §3.3.