
Automated and Network Structure Preserving Segmentation of Optical Coherence Tomography Angiograms

12/20/2019
by   Ylenia Giarratano, et al.

Optical coherence tomography angiography (OCTA) is a novel non-invasive imaging modality for the visualisation of microvasculature in vivo. OCTA has seen broad adoption in retinal research. Its potential in the assessment of pathological conditions and the reproducibility of studies rely on the quality of the image analysis; however, automated segmentation of parafoveal OCTA images is still an open problem in the field. In this study, we generate the first open dataset of retinal parafoveal OCTA images with associated ground truth manual segmentations. Furthermore, we establish a standard for OCTA image segmentation by surveying a broad range of state-of-the-art vessel enhancement and binarisation procedures. We provide the most comprehensive comparison of these methods under a unified framework to date. Our results show that, for the set of images considered, the U-Net machine learning (ML) architecture achieves the best performance with a Dice similarity coefficient of 0.89. For applications where manually segmented data is not available to retrain this ML approach, our findings suggest that optimal oriented flux is the best handcrafted filter enhancement method for OCTA images from those considered. Furthermore, we report on the importance of preserving network connectivity in the segmentation to enable vascular network phenotyping. We introduce a new metric for network connectivity evaluations in segmented angiograms and report an accuracy of up to 0.94 in preserving the morphological structure of the network in our segmentations. Finally, we release our data and source code to support standardisation efforts in OCTA image segmentation.


I. Introduction

A number of studies over the past years have demonstrated that phenotypes of the retinal vasculature represent important biomarkers for early identification of pathological conditions such as diabetic retinopathy [1], cardiovascular disease [2], and neurodegenerative disease [3]. Therefore, information regarding structural and functional changes in the retinal blood vessels can play a crucial role in the diagnosis and monitoring of these diseases.
Optical coherence tomography angiography (OCTA) is a novel non-invasive imaging modality that allows visualisation of the microvasculature in vivo across retinal layers. It is based on the principle of repeating multiple OCT B-scans in rapid succession at each location on the retina. Static tissues will remain the same, while tissues containing flowing red blood cells will show intensity variations over time. OCTA can provide angiograms at different retinal depths and, unlike fluorescein angiography, does not require any dye injection, which may carry the risk of adverse reactions [4].
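The decorrelation principle described above can be sketched numerically. The amplitude-decorrelation formula below is one common formulation from the OCTA literature (similar in spirit to split-spectrum amplitude decorrelation), not the proprietary pipeline of any particular device; function names are illustrative.

```python
import numpy as np

def decorrelation(bscans):
    """Mean pairwise amplitude decorrelation across N repeated B-scans.

    bscans: array of shape (N, H, W) with OCT amplitude values.
    Static tissue yields values near 0; flowing blood, whose amplitude
    varies between repeats, yields higher values.
    """
    b = np.asarray(bscans, dtype=float)
    pairs = zip(b[:-1], b[1:])
    # 1 - (A1*A2) / (0.5*(A1^2 + A2^2)) is 0 for identical amplitudes
    d = [1.0 - (a1 * a2) / (0.5 * (a1**2 + a2**2) + 1e-12) for a1, a2 in pairs]
    return np.mean(d, axis=0)
```

An en face angiogram is then obtained by projecting (e.g. taking the maximum of) such decorrelation values over depth within a retinal layer.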
The diagnostic potential of OCTA has already been established in the context of neurovascular disease, diabetic retinopathy and, more recently, chronic kidney disease. In [5], microvascular characteristics calculated from OCTA images are compared between Alzheimer’s disease patients, mild cognitive impairment (MCI) patients, and cognitively intact controls. Results showed a decrease in vessel density (VD) and perfusion density (PD) in Alzheimer’s participants compared with the MCI patients and controls, raising the possibility that changes in the retinal microvasculature may mirror small vessel disease in the brain, which is currently not possible to image clinically. Multiple studies on diabetic retinopathy have demonstrated that measurements from the foveal avascular zone (FAZ) in OCTA images are discriminant features in diabetic eyes compared to healthy individuals, even before retinopathy develops [6, 7]. Finally, a recent study on renal impairment [8] demonstrated the potential of OCTA to find associations between changes in the retina and chronic kidney disease (CKD). OCTA scans revealed a close association between CKD and lower paracentral retinal vascular density in hypertensive patients.
Measurements used in these studies are based on quantifying phenotypes such as vessel density (VD), fractal dimension (FD), and percentage area of nonperfusion (PAN), extracted from binary masks of OCTA images [9, 10]. However, the accuracy of these measurements and their reproducibility rely on the quality of the image segmentation. Since manual segmentation of blood vessels is a time-consuming procedure subject to intra- and inter-rater variability, there is a need for a fast automated method not affected by individual subjectivity. The development of automated segmentation algorithms for OCTA images is a novel research field and no consensus exists in the literature about the best approaches. For example, in [11, 12] OCTA phenotypes are calculated on manually traced vessels. Simple thresholding procedures are used in [13, 9, 14]. Hessian filters followed by thresholding are applied to the original image to enhance vessel structure in [15, 16]. A convolutional deep neural network approach was proposed in [17] and, more recently, the U-Net architecture was adapted to OCTA in [18]. However, how these different approaches compare to each other is not known. Furthermore, it is currently unknown how these methods perform when it comes to preserving network connectivity in the segmentation. This is a key aspect that can enable advanced vascular network phenotyping based on network science approaches (e.g. [19, 20]).
In this work, we take advantage of OCTA images from the PREVENT cohort (https://preventdementia.co.uk/), an ongoing prospective study aimed at predicting the early onset of dementia [21]. We derive and validate the first open dataset of retinal parafoveal OCTA images with associated ground truth manual segmentations. Furthermore, we establish a standard for OCTA image segmentation by surveying a broad range of state-of-the-art vessel enhancement and binarisation procedures. We provide the most comprehensive comparison of these methods under a unified framework to date. Our results show that, for the set of images considered, the U-Net machine learning (ML) architecture achieves the best performance. For applications where manually segmented data is not available to retrain this ML approach, our findings suggest that optimal oriented flux is the best handcrafted filter enhancement method from those considered. Furthermore, we report on the importance of preserving network connectivity in the segmentation to enable vascular network phenotyping. We introduce a new metric for network connectivity evaluations in segmented microvascular angiograms and report excellent accuracy in preserving the morphological structure of the network in our segmentations. Finally, we release our data and source code to support standardisation efforts in OCTA image segmentation.

II. Methods

II-A Data acquisition

Imaging was performed using the commercial RTVue-XR Avanti OCT system (OptoVue, Fremont, CA). Consecutive B-scans, each consisting of 304×304 A-scans, were generated in 3×3 mm and 6×6 mm fields of view centered at the fovea. The maximum decorrelation value is used to generate en face angiograms of the superficial, deep, and choriocapillaris layers. In this work, we selected only images of the superficial layer (containing the vasculature enclosed between the internal limiting membrane (ILM) and the inner plexiform layer (IPL)) with a 3×3 mm field of view, from the left and right eyes of 17 participants with and without a family history of dementia, as part of a prospective study aimed at finding early biomarkers of neurodegenerative diseases (PREVENT). For each of those images we extracted five subimages, one from each clinical region of interest (ROI): superior, nasal, inferior, temporal, and foveal (Figure 1A), for a total of 170 ROIs. Poor-quality ROIs were discarded; from the remainder, 55 ROIs from 11 participants were selected and split into training (30 ROIs) and test (25 ROIs) subgroups.
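The extraction of one subimage per clinical region can be sketched as below. The ROI size and the centre offsets are illustrative values, not the study's exact protocol, and the nasal/temporal sides are assumed for a right eye.

```python
import numpy as np

def extract_rois(angiogram, roi_size=76):
    """Crop five square ROIs (superior, nasal, inferior, temporal, foveal)
    from a square en face angiogram centred on the fovea."""
    h, w = angiogram.shape
    cy, cx = h // 2, w // 2
    off = roi_size  # distance of peripheral ROI centres from the fovea (illustrative)
    centres = {
        "superior": (cy - off, cx),
        "nasal":    (cy, cx + off),   # assumed right eye; mirrored for a left eye
        "inferior": (cy + off, cx),
        "temporal": (cy, cx - off),
        "foveal":   (cy, cx),
    }
    half = roi_size // 2
    return {name: angiogram[y - half:y + half, x - half:x + half]
            for name, (y, x) in centres.items()}
```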

II-B Manual segmentation

A number of challenges need to be overcome in OCTA manual segmentation: images suffer from poor contrast and a low signal-to-noise ratio, and can contain motion artifacts generated during scan acquisition. The most common visible artifacts are vertical and horizontal line distortions, as shown in Figure 1B. Furthermore, the fact that images are constructed from the average of a volume means that, in our segmentation, we cannot distinguish vessels passing each other at different depths. In general, bigger vessels appear brighter and are easier to trace; however, the smallest capillaries are challenging to segment and are therefore subject to the subjective interpretation of any given rater.
Previous OCTA studies have performed manual continuous blood vessel delineation with or without consideration of vessel width ([17] and [18], respectively). Given the sources of uncertainty previously described, this approach may overinterpret vessel connectivity and suffer from reproducibility issues that remain currently unexplored in the literature. Instead, we adopted a more conservative approach and performed pixelwise manual segmentation (using the ITK-SNAP software [22]). A previous study performing pixelwise segmentation [23] did not assess reproducibility of the segmentations and could not resolve the finest capillaries in the scans.

Fig. 1: (A) Extraction of images from each clinical region of interest: superior, nasal, foveal, inferior, and temporal. (B) Examples (arrows) of horizontal artifacts in OCTA images.

II-C Automated image segmentation methods

Vessel enhancement approaches consist of filters that improve the contrast between vessels and background. We chose four well-known handcrafted filters for blood vessel segmentation, based on implementation availability and previous applications to the enhancement of tubular-like structures in retinal images: Frangi [24], Gabor [25], SCIRD-TS [26], and OOF [27]. All these filters require parameter tuning. In our case, from a range of possible configurations we selected the optimal set of parameters that gave the best performance when compared to the manual segmentation (see Supplementary Table S1).
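To illustrate how a handcrafted enhancement filter of this kind operates, the sketch below builds a small 2-D Gabor filter bank with numpy and takes the pixelwise maximum response over scales and orientations. This is a simplified sketch, not the tuned Gabor wavelet implementation of [25]; all parameter values are illustrative, not the optimised settings of Supplementary Table S1.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(sigma, theta, wavelength, gamma=0.5):
    """Zero-mean 2-D Gabor kernel oriented at angle theta."""
    half = int(3 * sigma * max(1.0, 1.0 / gamma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return k - k.mean()  # zero mean: a flat background gives zero response

def gabor_enhance(image, sigmas=(1, 2), n_orient=8, wavelength=4.0):
    """Pixelwise maximum filter response over scales and orientations."""
    image = np.asarray(image, dtype=float)
    responses = [fftconvolve(image, gabor_kernel(s, t, wavelength), mode="same")
                 for s in sigmas
                 for t in np.arange(n_orient) * np.pi / n_orient]
    return np.max(responses, axis=0)
```

Taking the maximum over orientations makes the response roughly rotation-invariant, which is why such filters highlight curvilinear structures regardless of their local direction.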
Although handcrafted filters work in many cases, real images often do not satisfy their assumptions (e.g. locally tubular structure and Gaussian intensity profile). To overcome this issue, probabilistic and machine learning frameworks have been proposed [23], [17]. In this study, we considered the latter by implementing two deep learning architectures: a pixelwise convolutional neural network (CNN) and the more recently proposed U-Net architecture [28]. The design of the CNN for pixelwise classification is based on the one proposed in [17] for OCTA segmentation. It consists of three convolutional layers with rectified linear unit (ReLU) activation, each followed by maxpooling. To reduce the risk of overfitting, dropout is used before the last fully connected layer. For each training image we extracted the same number of vessel and background pixels to balance the classes. A patch containing the pixel to classify and its neighbourhood is used as input to the network. More than 200,000 patches were used during training. Finally, the probability of belonging to vessel or background is used to generate the enhanced grayscale image (see Supplementary Table S2).
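The class-balanced patch extraction described above can be sketched as follows; the patch radius and sample counts here are illustrative, since the exact patch size was lost in extraction.

```python
import numpy as np

def sample_balanced_patches(image, mask, n_per_class, half=5, seed=0):
    """Sample equal numbers of vessel (mask==1) and background (mask==0)
    patches, each centred on the pixel to classify.

    Returns (patches, labels) with patches of shape
    (2*n_per_class, 2*half+1, 2*half+1).
    """
    rng = np.random.default_rng(seed)
    pad_img = np.pad(image, half, mode="reflect")  # so border pixels get full patches
    patches, labels = [], []
    for label in (1, 0):  # vessel, then background
        ys, xs = np.nonzero(mask == label)
        idx = rng.choice(len(ys), size=n_per_class, replace=False)
        for y, x in zip(ys[idx], xs[idx]):
            # padded coordinates: pixel (y, x) sits at (y+half, x+half)
            patches.append(pad_img[y:y + 2 * half + 1, x:x + 2 * half + 1])
            labels.append(label)
    return np.stack(patches), np.array(labels)
```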

Developed for biomedical image segmentation, U-Net is a fully convolutional neural network characterised by a contracting path and an expansive path, which give the network its U shape. It has proved to be fast and accurate even with few training images. The architecture consists of modules of two repeated convolutional layers with ReLU activation followed by maxpooling in the encoder path, and upsampling followed by two repeated convolutional layers in the decoder path (see Supplementary Table S3). Binary cross-entropy is used as the loss function. From each ROI, 1,000 patches are extracted to train the network, for a total of 30,000 training inputs. Given our initial sample size, data augmentation (flipping horizontally or vertically) was used for both the CNN and the U-Net.
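The flip-based augmentation used for both networks amounts to mirroring each patch and its mask together, a minimal sketch of which is:

```python
import numpy as np

def augment_flips(patches, masks):
    """Triple a batch (N, H, W) by adding horizontal and vertical flips,
    applying the same transform to image patches and their masks."""
    imgs = [patches, patches[:, :, ::-1], patches[:, ::-1, :]]
    tgts = [masks, masks[:, :, ::-1], masks[:, ::-1, :]]
    return np.concatenate(imgs), np.concatenate(tgts)
```

Flips are a safe choice here because retinal capillary networks have no preferred left/right or up/down orientation at the patch scale.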

Vessel enhancement is often followed by a thresholding step to obtain the vessel binary mask. However, modern methods employ the enhanced vasculature as a preliminary step for more advanced binarisation algorithms, such as machine learning (ML) classifiers. Therefore, we decided to compare adaptive thresholding, a form of thresholding that takes into account spatial variations in illumination, with support vector machines (SVMs), random forests (RF), and k-nearest neighbours (k-NN) as binarisation procedures for Frangi, Gabor, and SCIRD-TS. A two-step binarisation procedure, suggested in [29], was used in the case of OOF. Finally, global thresholding, based on the shape of the pixel intensity histogram, was used to binarise the probability maps obtained from the CNN architecture, and adaptive thresholding was applied to those obtained from the U-Net. In each of the ML binary classifiers we used seven features to characterise pixels: intensity-based features extracted from a pixel neighbourhood (intensity value, range, average, standard deviation, and entropy) and geometric features (the local curvature information provided by the Hessian eigenvalues) [30].
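A sketch of this seven-feature pixel descriptor (five local intensity statistics plus the two Hessian eigenvalues) is given below; the neighbourhood size and histogram binning for the entropy are illustrative choices, since the exact values were lost in extraction.

```python
import numpy as np

def pixel_features(image, y, x, half=2):
    """Seven features for one pixel: intensity, neighbourhood range, mean,
    std, entropy, and the two Hessian eigenvalues (local curvature)."""
    img = np.asarray(image, dtype=float)
    nb = img[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1]
    # Shannon entropy of the neighbourhood intensity histogram
    hist, _ = np.histogram(nb, bins=8, range=(img.min(), img.max() + 1e-12))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # Hessian via finite differences of the image gradient
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    _, gxx = np.gradient(gx)
    H = np.array([[gyy[y, x], gyx[y, x]], [gyx[y, x], gxx[y, x]]])
    l1, l2 = np.linalg.eigvalsh(H)
    return np.array([img[y, x], nb.max() - nb.min(), nb.mean(), nb.std(),
                     entropy, l1, l2])
```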

II-D Segmentation evaluation

Cohen’s kappa coefficient is a robust statistic for testing intra- and inter-rater variability. Considering p_o and p_e as the observed agreement and the chance agreement, respectively, it can be computed as:

    κ = (p_o − p_e) / (1 − p_e)    (1)

In our study, p_o is the accuracy in pixel classification (vessel vs. background) and p_e is the sum of the probability of both raters randomly selecting vessel pixels and the probability of both of them randomly selecting background pixels for a given ROI.
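Equation (1) can be computed directly from two binary masks; a minimal sketch:

```python
import numpy as np

def cohens_kappa(mask_a, mask_b):
    """Cohen's kappa for two binary segmentations of the same ROI."""
    a = np.asarray(mask_a).ravel().astype(bool)
    b = np.asarray(mask_b).ravel().astype(bool)
    p_o = np.mean(a == b)                     # observed agreement
    p_vessel = a.mean() * b.mean()            # both pick vessel by chance
    p_back = (1 - a.mean()) * (1 - b.mean())  # both pick background by chance
    p_e = p_vessel + p_back
    return (p_o - p_e) / (1 - p_e)            # undefined if p_e == 1
```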
For the ROIs in the test set, pixelwise comparison between manual and automated segmentation was performed using the Dice similarity coefficient, defined as

    Dice = 2·TP / (2·TP + FP + FN)    (2)

where TP, FP, and FN represent true positives, false positives, and false negatives, respectively.
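Equation (2) in code form, as a minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    p = np.asarray(pred).astype(bool)
    t = np.asarray(truth).astype(bool)
    tp = np.logical_and(p, t).sum()   # true positives
    fp = np.logical_and(p, ~t).sum()  # false positives
    fn = np.logical_and(~p, t).sum()  # false negatives
    return 2 * tp / (2 * tp + fp + fn)
```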
Furthermore, for the evaluation of the agreement in network morphology between manual and automated segmentations, we used the CAL metric proposed in [31]. Denoting by S the segmented image and by G the ground truth, it is based on three descriptive features:

  • connectivity (C), to assess the degree of fragmentation between segmentations, described mathematically by the formula

    C = 1 − min(1, |#C(S) − #C(G)| / card(G))    (3)

    where #C(S) and #C(G) are the number of connected components in the segmented and ground truth images, while card(G) is the number of vessel pixels in the ground truth mask;

  • area (A), to evaluate the degree of overlap, defined as

    A = card((δ_α(S) ∩ G) ∪ (S ∩ δ_α(G))) / card(S ∪ G)    (4)

    where δ_α is a morphological dilation using a disc of radius α;

  • length (L), to capture the degree of coincidence, described by

    L = card((φ(S) ∩ δ_β(φ(G))) ∪ (δ_β(φ(S)) ∩ φ(G))) / card(φ(S) ∪ φ(G))    (5)

    where φ indicates a skeletonisation procedure and δ_β is a morphological dilation using a disc of radius β.

In this study we set α and β both equal to 1. The product of C, A, and L (CAL) is sensitive to the vascular features of interest and takes values in the range [0, 1], with 0 denoting the worst segmentation and 1 a perfect segmentation.
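Under these definitions, the three CAL components can be sketched with scipy morphology. Skeletons are passed in precomputed (e.g. from skimage.morphology.skeletonize), the disc of radius 1 is a 3×3 cross, and the formulas follow the reconstruction given here, which should be checked against [31].

```python
import numpy as np
from scipy import ndimage

DISC = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], bool)  # disc of radius 1

def cal(seg, gt, seg_skel, gt_skel):
    """Connectivity, Area, Length components of the CAL metric."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    seg_skel, gt_skel = seg_skel.astype(bool), gt_skel.astype(bool)
    # C: penalise differing numbers of connected components (Eq. 3)
    n_seg = ndimage.label(seg)[1]
    n_gt = ndimage.label(gt)[1]
    C = 1 - min(1, abs(n_seg - n_gt) / gt.sum())
    # A: overlap after tolerant dilation of each mask (Eq. 4)
    d_seg = ndimage.binary_dilation(seg, DISC)
    d_gt = ndimage.binary_dilation(gt, DISC)
    A = ((d_seg & gt) | (seg & d_gt)).sum() / (seg | gt).sum()
    # L: coincidence of skeletons after tolerant dilation (Eq. 5)
    d_sk = ndimage.binary_dilation(seg_skel, DISC)
    d_gsk = ndimage.binary_dilation(gt_skel, DISC)
    L = ((seg_skel & d_gsk) | (d_sk & gt_skel)).sum() / (seg_skel | gt_skel).sum()
    return C, A, L
```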
Finally, since we are interested in penalising methods that introduce disconnections in the segmentation of the largest connected component of the network, we introduce a new metric, namely the largest connected component (LCC) ratio, defined as:

    LCC = |LCC_S| / |LCC_G|    (6)

where |LCC_S| and |LCC_G| are the lengths, in terms of number of pixels, of the largest connected component in the skeletons of the segmented and ground truth images, respectively. The closer the LCC ratio is to 1, the more similar in structure the largest connected component of the segmented image is to that of the ground truth.
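A minimal sketch of Equation (6), again taking precomputed skeletons and using 8-connectivity for the components:

```python
import numpy as np
from scipy import ndimage

def lcc_ratio(seg_skel, gt_skel):
    """Largest-connected-component ratio between two binary skeletons."""
    def largest(skel):
        lbl, n = ndimage.label(skel, structure=np.ones((3, 3)))  # 8-connectivity
        if n == 0:
            return 0
        return np.bincount(lbl.ravel())[1:].max()  # size of largest component
    return largest(seg_skel) / largest(gt_skel)
```

A single spurious break in an otherwise perfect skeleton can halve this ratio, which is exactly the sensitivity to disconnections the metric is designed for.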

III. Results

III-A Inter- and intra-rater agreement

The ground truth dataset contains 55 ROIs segmented by one rater (rater A, Y.G.). Rater A segmented 20 images twice, and rater B (E.B.) performed the same task once on another 20 images. We calculated Cohen’s kappa coefficient for each pairwise comparison. Results show good agreement for each pair, both within and between raters, which demonstrates that the proposed approach to segmentation is reproducible.

III-B Automated approaches for pixelwise classification

Segmentation performance is shown in Table I. The U-Net outperforms all other methods, reaching a Dice score of 0.89. Among the handcrafted filters, the OOF and Frangi filters achieve good performance, with average Dice scores of 0.86 and 0.85, respectively. Using machine learning methods as the binarisation procedure slightly improves performance compared to thresholding for Frangi, Gabor, and SCIRD-TS. Investigating the enhanced images (Figure 2), we noticed that each method suffers from different deficiencies. The Frangi filter clusters nearby vessels, losing important information contained in the microvasculature. The Gabor filter enhances centrelines but performs poorly on the detection of vessel edges. SCIRD-TS remodels the vasculature, making it more regular and equally spaced. OOF retrieves the smallest capillaries but over-enhances noise in the foveal region. Figure 3 shows segmentation results after applying each vessel enhancement method with its best binarisation procedure. Figure 4 shows whole-image segmentations with the best handcrafted and learnt filters.

Fig. 2: Example of vessel enhancement. Original image and ground truth, followed by images after vessel enhancement using Frangi, Gabor, SCIRD-TS, OOF, CNN, and U-Net.
Fig. 3: Vessel segmentation in a superior parafoveal OCTA image. Original and ground truth images, followed by binary images after vessel enhancement using Frangi (+RF), Gabor (+RF), SCIRD-TS (+SVM), OOF, CNN, and U-Net.
Fig. 4: Whole-image segmentation using the best two methods, OOF and U-Net. The Optovue RTVue XR Avanti scan logo in the bottom left corner was removed from the original image.
Method                  Dice  CAL   LCC ratio
Frangi+Thresholding     0.83  0.83  0.88
Frangi+k-NN             0.84  0.86  0.91
Frangi+SVM              0.85  0.87  0.94
Frangi+RF               0.85  0.87  0.94
Gabor+Thresholding      0.77  0.75  0.76
Gabor+k-NN              0.82  0.84  0.83
Gabor+SVM               0.83  0.85  0.84
Gabor+RF                0.83  0.85  0.87
SCIRD-TS+Thresholding   0.71  0.66  0.68
SCIRD-TS+k-NN           0.72  0.74  0.90
SCIRD-TS+SVM            0.75  0.75  0.75
SCIRD-TS+RF             0.74  0.75  0.80
OOF                     0.86  0.85  0.94
CNN                     0.83  0.85  0.94
U-Net                   0.89  0.89  0.93

TABLE I: Performance in terms of Dice similarity coefficient, CAL, and LCC ratio.

III-C Foveal avascular zone

Foveal images are characterised by the presence of a predominant dark area, the foveal avascular zone (FAZ), which is free of blood vessels. We noticed that handcrafted filters have difficulties with these images, over-enhancing noise in the central region (Figure 5). This led to spurious vessel detection in the FAZ when thresholding methods were applied. Machine learning methods were less affected by this issue, since they learn from the ground truth data. These results demonstrate the need for further investigation of preprocessing procedures that can reduce noise in the FAZ without affecting the properties of the vascular network when handcrafted filters are used for vessel enhancement.

Fig. 5: Foveal images: examples of enhancement in the FAZ.

IV. Discussion and Conclusions

Retinal image analysis has demonstrated great potential for the discovery of biomarkers of eye and systemic disease. Recently, OCTA imaging has enabled the visualisation of the smallest capillaries in the retina without the need for a contrast agent. However, its potential for the assessment of pathological conditions and the reproducibility of studies based on it rely on the quality of the image analysis. Automated OCTA image segmentation is an open problem in the field. In this study, we generate the first open dataset of retinal parafoveal OCTA images with associated ground truth manual segmentations. We pay special attention to segmenting the images in a reproducible way and demonstrate good inter- and intra-rater agreement. Furthermore, we establish a standard for automated OCTA image segmentation by surveying a broad range of openly available state-of-the-art vessel enhancement and binarisation procedures. We present a comparison of these methods under a unified computational framework and make the source code available. We evaluate segmentation quality measures to guide the identification of the algorithm that not only provides the best agreement with the manually segmented images, but also achieves the best preservation of their network morphology. We believe that this will open the door to the development of novel vascular morphology metrics based on the application of network science principles capable of deeply phenotyping retinal vasculature.
Our results indicate that, for the set of images considered, the U-Net architecture is the best automated segmentation method for parafoveal OCTA images, since it achieves the best Dice similarity coefficient. Our U-Net implementation achieves better results than the one presented in [18] and results comparable to the best reported to date [23]. However, like-for-like comparison of methods across the literature is hampered by a lack of consistency in the approach to manual segmentation (e.g. the ability to resolve the finest capillaries in the OCTA images [23] and the consideration of vessel width in the segmentations [18]). Our study provides the necessary tools to standardise method comparison in the future. Interestingly, OOF achieves segmentation performance in line with the neural network architectures without requiring extensive manually segmented images for training purposes. We found that, in our analysis, ML techniques improved segmentation performance when used as binarisation methods following handcrafted filter enhancement. Furthermore, we report on the importance of preserving network connectivity in the segmentation to enable vascular network phenotyping. We introduce a new metric for network connectivity evaluations in segmented angiograms, to be used in conjunction with pixelwise similarity coefficients such as Dice, and report excellent accuracy in preserving the morphological structure of the network in our segmentations.
Our results highlight challenges in the segmentation of the FAZ: handcrafted filters suffer from noise enhancement in this region, indicating the necessity of masking that area or using more sophisticated denoising preprocessing procedures when those filters are applied. Future work will involve the characterisation of the FAZ signal as either noise or signal from deeper layers, and research towards improved network-structure-preserving segmentation algorithms, e.g. improving connectivity in the U-Net segmentation by including topological information in the loss function of the learning process [32].

Source code and data availability

OCTA images and rater A (Y.G.) segmentations are available at https://doi.org/10.7488/ds/2729. Handcrafted filter code was implemented in MATLAB R2018b (Version 9.5). Python 3.6.9 was used to build the ML methods; the Keras library with a TensorFlow backend was used to implement the CNN and U-Net. Source code is available at https://github.com/giaylenia/OCTA_segm_study.

Acknowledgements

This research was supported by the Medical Research Council (MRC). We thank the PREVENT research team and study participants. Image acquisition was carried out at the Edinburgh Imaging facility QMRI, University of Edinburgh. The research team acknowledges the financial support of NHS Research Scotland (NRS), through Edinburgh Clinical Research Facility. The authors would like to thank the members of the VAMPIRE team (https://vampire.computing.dundee.ac.uk) for fruitful discussions.

References

  • [1] A. J. Jenkins, M. V. Joglekar, A. A. Hardikar, A. C. Keech, D. N. O’Neal, and A. S. Januszewski, “Biomarkers in diabetic retinopathy,” Review of Diabetic Studies, vol. 12, no. 1-2, pp. 159–195, 2015.
  • [2] R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nature Biomedical Engineering, vol. 2, no. 3, pp. 158–164, 2018.
  • [3] D. C. DeBuc, G. M. Somfai, and A. Koller, “Retinal microvascular network alterations: Potential biomarkers of cerebrovascular and neural diseases,” American Journal of Physiology - Heart and Circulatory Physiology, vol. 312, no. 2, pp. H201–H212, 2017.
  • [4] F. Musa, W. J. Muen, R. Hancock, and D. Clark, “Adverse effects of fluorescein angiography in hypertensive and elderly patients,” Acta Ophthalmologica Scandinavica, vol. 84, no. 6, pp. 740–742, 2006.
  • [5] S. P. Yoon, D. S. Grewal, A. C. Thompson, B. W. Polascik, C. Dunn, J. R. Burke, and S. Fekrat, “Retinal Microvascular and Neurodegenerative Changes in Alzheimer’s Disease and Mild Cognitive Impairment Compared with Control Participants,” Ophthalmology Retina, vol. 3, no. 6, pp. 489–499, 2019. [Online]. Available: https://doi.org/10.1016/j.oret.2019.02.002
  • [6] J. Khadamy, K. Aghdam, and K. Falavarjani, “An update on optical coherence tomography angiography in diabetic retinopathy,” Journal of Ophthalmic & Vision Research, vol. 13, p. 487, 10 2018.
  • [7] N. Takase, M. Nozaki, A. Kato, H. Ozeki, M. Yoshida, and Y. Ogura, “Enlargement of foveal avascular zone in diabetic eyes evaluated by en face optical coherence tomography angiography,” Retina, vol. 35, no. 11, pp. 2377–2383, 2015.
  • [8] M. Vadalà, M. Castellucci, G. Guarrasi, M. Terrasi, T. La Blasca, and G. Mulè, “Retinal and choroidal vasculature changes associated with chronic kidney disease,” Graefe’s Archive for Clinical and Experimental Ophthalmology, vol. 257, no. 8, pp. 1687–1698, 2019.
  • [9] P. L. Nesper, P. K. Roberts, A. C. Onishi, H. Chai, L. Liu, L. M. Jampol, and A. A. Fawzi, “Quantifying Microvascular Abnormalities With Increasing Severity of Diabetic Retinopathy Using Optical Coherence Tomography Angiography,” Investigative ophthalmology & visual science, vol. 58, no. 6, pp. BIO307–BIO315, 2017.
  • [10] R. Reif, J. Qin, L. An, Z. Zhi, S. Dziennis, and R. Wang, “Quantifying optical microangiography images obtained from a spectral domain optical coherence tomography system,” International Journal of Biomedical Imaging, vol. 2012, 2012.
  • [11] A. Y. Alibhai, E. M. Moult, R. Shahzad, C. B. Rebhun, C. Moreira-neto, M. Mcgowan, D. Lee, B. Lee, C. R. Baumal, A. J. Witkin, E. Reichel, J. S. Duker, J. G. Fujimoto, and N. K. Waheed, “HHS Public Access,” vol. 2, no. 5, pp. 418–427, 2019.
  • [12] B. D. Krawitz, S. Mo, L. S. Geyman, S. A. Agemy, N. K. Scripsema, P. M. Garcia, T. Y. Chui, and R. B. Rosen, “Acircularity index and axis ratio of the foveal avascular zone in diabetic eyes and healthy controls measured by optical coherence tomography angiography,” Vision Research, vol. 139, pp. 177–186, 2017.
  • [13] A. C. Onishi, P. L. Nesper, P. K. Roberts, G. A. Moharram, H. Chai, L. Liu, L. M. Jampol, and A. A. Fawzi, “Importance of considering the middle capillary plexus on OCT angiography in diabetic retinopathy,” Investigative Ophthalmology and Visual Science, vol. 59, no. 5, pp. 2167–2176, 2018.
  • [14] T. S. Hwang, S. S. Gao, L. Liu, A. K. Lauer, C. J. Flaxel, D. J. Wilson, D. Huang, Y. Jia, and O. Health, “HHS Public Access,” vol. 134, no. 4, pp. 367–373, 2016.
  • [15] A. Y. Kim, Z. Chu, A. Shahidzadeh, R. K. Wang, C. A. Puliafito, and A. H. Kashani, “Quantifying microvascular density and morphology in diabetic retinopathy using spectral-domain optical coherence tomography angiography,” Investigative Ophthalmology and Visual Science, vol. 57, no. 9, pp. OCT362–OCT370, 2016.
  • [16] M. Zhang, T. S. Hwang, C. Dongye, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of nonperfusion in three retinal plexuses using projection-resolved optical coherence tomography angiography in diabetic retinopathy,” Investigative Ophthalmology and Visual Science, vol. 57, no. 13, pp. 5101–5106, 2016.
  • [17] P. Prentašic, M. Heisler, Z. Mammo, S. Lee, A. Merkur, E. Navajas, M. F. Beg, M. Šarunic, and S. Loncaric, “Segmentation of the foveal microvasculature using deep learning networks,” Journal of Biomedical Optics, vol. 21, no. 7, p. 075008, 2016.
  • [18] L. Mou, Y. Zhao, L. Chen, J. Cheng, Z. Gu, H. Hao, H. Qi, Y. Zheng, A. Frangi, and J. Liu, CS-Net: Channel and Spatial Attention Network for Curvilinear Structure Segmentation, 10 2019, pp. 721–730.
  • [19] I. Amat-Roldan, A. Berzigotti, R. Gilabert, and J. Bosch, “Assessment of hepatic vascular network connectivity with automated graph analysis of dynamic contrast-enhanced us to evaluate portal hypertension in patients with cirrhosis: A pilot study1,” Radiology, vol. 277, no. 1, pp. 268–276, 2015.
  • [20] A. P. Alves, O. N. Mesquita, J. Gómez-Gardeñes, and U. Agero, “Graph analysis of cell clusters forming vascular networks,” Royal Society Open Science, vol. 5, no. 3, 2018.
  • [21] C. W. Ritchie, K. Wells, and K. Ritchie, “The PREVENT research programme-A novel research programme to identify and manage midlife risk for dementia: The conceptual framework,” International Review of Psychiatry, vol. 25, no. 6, pp. 748–754, 2013.
  • [22] P. A. Yushkevich, J. Piven, H. Cody Hazlett, R. Gimpel Smith, S. Ho, J. C. Gee, and G. Gerig, “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability,” Neuroimage, vol. 31, no. 3, pp. 1116–1128, 2006.
  • [23] N. Eladawi, M. Elmogy, O. Helmy, A. Aboelfetouh, A. Riad, H. Sandhu, S. Schaal, and A. El-Baz, “Automatic blood vessels segmentation based on different retinal maps from OCTA scans,” Computers in Biology and Medicine, vol. 89, no. August, pp. 150–161, 2017.
  • [24] A. F. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever, “Multiscale vessel enhancement filtering,” in Medical Image Computing and Computer-Assisted Intervention - MICCAI’98, ser. Lecture Notes in Computer Science, A. C. W.M. Wells and S. Delp, Eds., vol. 1496.   Berlin, Germany: Springer Verlag, 1998, pp. 130–137.
  • [25] J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214–1222, 2006.
  • [26] R. Annunziata and E. Trucco, “Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks,” IEEE Transactions on Medical Imaging, vol. 35, no. 11, pp. 2381–2392, 2016.
  • [27] M. W. Law and A. C. Chung, “Three dimensional curvilinear structure detection using optimally oriented flux,” in Computer Vision - ECCV 2008, vol. 5305.   Springer, 2008, pp. 368–382.
  • [28] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, Eds.   Cham: Springer International Publishing, 2015, pp. 234–241.
  • [29] A. Li, J. You, C. Du, and Y. Pan, “Automated segmentation and quantification of OCT angiography for tracking angiogenesis progression,” Biomedical Optics Express, vol. 8, no. 12, p. 5604, 2017.
  • [30] P. Rodrigues, P. Guimarães, T. Santos, S. Simão, T. Miranda, P. Serranho, and R. Bernardes, “Two-dimensional segmentation of the retinal vascular network from optical coherence tomography,” Journal of Biomedical Optics, vol. 18, no. 12, p. 126011, 2013.
  • [31] M. E. Gegundez-Arias, A. Aquino, J. M. Bravo, and D. Marin, “A function for quality evaluation of retinal vessel segmentations,” IEEE Transactions on Medical Imaging, vol. 31, no. 2, pp. 231–239, 2012.
  • [32] J. R. Clough, I. Oksuz, N. Byrne, J. A. Schnabel, and A. P. King, “Explicit Topological Priors for Deep-Learning Based Image Segmentation Using Persistent Homology,” Lecture Notes in Computer Science, vol. 11492 LNCS, no. 40119, pp. 16–28, 2019.

Supplementary material

Filter     Parameter         Value
Frangi     FrangiScaleRange  [0.5, 2]
           FrangiScaleRatio  0.5
           FrangiBetaOne     1
           FrangiBetaTwo     15
Gabor      scales            [1, 2, 3, 4]
           epsilon           4
           k0                [0 3]
SCIRD-TS                     [1, 5]
                             0.5
                             [1, 2]
                             0.5
                             [-0.1, 0.1]
                             0.025
                             10
                             9
                             0.05
OOF        range             [0.5, 2]
           sigma             0.5
           upthreshold       70
TABLE S1: Table of parameters for the handcrafted filters
Layer Type Maps and size Kernel size
0 Input 1 map of neurons
1 Convolution2D 32 maps of neurons
2 Maxpooling2D 32 maps of neurons
3 Convolution2D 32 maps of neurons
4 Maxpooling2D 32 maps of neurons
5 Convolution2D 32 maps of neurons
6 Maxpooling2D 32 maps of neurons
7 Dense neurons
8 Dropout
9 Dense 1 neuron
TABLE S2: CNN layers architecture
Layer Type Maps and size Kernel size
0 Input 1 map of neurons
1 Convolution2D 32 maps of neurons
2 Convolution2D 32 maps of neurons
3 Maxpooling2D 32 maps of neurons
4 Convolution2D 64 maps of neurons
5 Convolution2D 64 maps of neurons
6 Maxpooling2D 64 maps of neurons
7 Convolution2D 128 maps of neurons
8 Convolution2D 128 maps of neurons
9 Upsampling2D 128 maps of neurons
10 Concatenate 192 maps of neurons
11 Convolution2D 64 maps of neurons
12 Convolution2D 64 maps of neurons
13 Upsampling2D 64 maps of neurons
14 Concatenate 96 maps of neurons
15 Convolution2D 32 maps of neurons
16 Convolution2D 32 maps of neurons
17 Convolution2D 1 map of neurons
TABLE S3: U-Net layers architecture