Accurate Retinal Vessel Segmentation via Octave Convolution Neural Network

06/28/2019
by   Zhun Fan, et al.

Retinal vessel segmentation is a crucial step in diagnosing and screening various diseases, including diabetes, ophthalmologic diseases, and cardiovascular diseases. In this paper, we propose an effective and efficient method for accurate vessel segmentation in color fundus images using an encoder-decoder based octave convolution network. Compared to other convolutional-network-based methods that utilize vanilla convolution for feature extraction, the proposed method adopts octave convolution to learn multiple-spatial-frequency features, and thus can better capture retinal vasculature of varying size and shape. We empirically demonstrate that the feature maps of low-frequency kernels focus on the major vascular tree, whereas the high-frequency feature maps better capture the minor details of low-contrast thin vessels. To give the network the capability of learning how to decode multifrequency features, we extend octave convolution and propose a novel operation named octave transposed convolution that follows the same multifrequency approach. We also propose a novel encoder-decoder based fully convolutional network named Octave UNet that generates high-resolution vessel segmentations in a single forward pass. The proposed method is evaluated on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. Extensive experimental results demonstrate that the proposed method achieves performance better than or comparable to state-of-the-art methods with fast processing speed.


I Introduction

Retinal vessel segmentation is a crucial prerequisite step of retinal fundus image analysis, because the retinal vasculature can aid in the accurate localization of other anatomical structures of the retina. Retinal vasculature is also extensively used for diagnosis assistance, screening, and treatment planning of ocular diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy[1]. The morphological characteristics of retinal vessels, such as shape, tortuosity, branching pattern, and width, are important indicators for hypertension, cardiovascular and other systemic diseases[2, 3]. For example, increases in vessel tortuosity have shown statistical correlations with the progression of retinopathy of prematurity[4] and hypertensive retinopathy[5]. Quantitative information obtained from the retinal vasculature can also be used for early detection of diabetes[6] and progress monitoring of proliferative diabetic retinopathy[7]. Because fundus images can be conveniently acquired by a fundus camera, the retinal vasculature can be visualized directly in a non-invasive manner[3]. Therefore, many large-scale population-based studies[8, 9, 10] routinely adopt retinal vessel segmentation and are conducted to find statistical correlations between changes in the retinal vasculature and a disease. Furthermore, retinal vessel segmentation can be utilized for biometric identification, because the vessel structure is found to be unique for each individual[11, 12].

In clinical practice, the retinal vasculature is often manually annotated by ophthalmologists from fundus images. This manual segmentation procedure is a tedious, laborious and time-consuming task that requires skill training and expert knowledge. Moreover, manual vessel segmentation is subjective and error-prone, and lacks repeatability and reproducibility. Especially in large-scale population-based clinical studies, manual vessel segmentation becomes a bottleneck as the amount of data increases.

To reduce the workload of manual segmentation and to increase the accuracy, processing speed, and reproducibility of retinal vessel segmentation, a tremendous amount of research effort has been dedicated to developing fully automated or semi-automated methods for retinal vessel segmentation. However, retinal vessel segmentation is a non-trivial task due to the various complexities of fundus images and retinal structures. First of all, the quality of fundus images can differ due to various imaging artifacts such as blur, noise, uneven illumination, drift of image intensity, and lack of vessel-background contrast[1, 2]. Secondly, retinal fundus images are acquired by projecting different 3D retinal structures of varying depth onto a 2D image[1], which leads to overlaps between non-vascular and vascular structures. Various anatomical structures such as the optic disc, macula, and fovea are present in fundus images and complicate the segmentation of the vasculature. Additionally, the possible presence of abnormalities such as exudates, hemorrhages, cotton wool spots and micro-aneurysms also poses challenges to retinal vessel segmentation. Finally, one could argue that the complex nature of the retinal vasculature presents the most significant challenge to accurate segmentation. In most situations, the orientation and image intensity of a vessel do not change abruptly: vessels are expected to be locally linear and connected, to change gradually in intensity along their elongated lengths, and to form a binary-tree-like structure[2]. Nevertheless, the shape, width, local intensity, and branching pattern of retinal vessels can vary greatly. For example, vessel widths range from 1 pixel to 20 pixels, depending on both the width of the vessel and the resolution of the fundus image[2]. Therefore, if both major and thin vessels are tackled with the same technique, it may tend to over-segment one or the other[1].

Over the past decades, numerous retinal vessel segmentation methods have been proposed in the literature[1, 2, 13, 3, 14]. Existing methods can be categorized into unsupervised and supervised methods, according to whether prior information such as vessel groundtruth is utilized as supervision to guide the learning of a vessel prediction model.

I-A Unsupervised Methods

Without the need for groundtruths and supervised training, most unsupervised methods are rule-based, mainly including image processing techniques, morphological approaches, matched filtering methods, multiscale methods, and vessel tracking methods.

Methods derived from traditional digital image processing techniques are widely adopted to address the problem of low image quality, and are also adopted by other methods as preprocessing procedures. For example, histogram-equalization-based methods[15] are commonly employed to correct the intensity inhomogeneity of unevenly illuminated fundus images, whereas the morphological opening operation is often utilized to remove vessel central reflection. Sigurosson et al.[16] proposed a morphological approach based on path opening filters for vasculature extraction. Fan et al.[17] proposed an unsupervised morphological method based on hierarchical image matting for retinal vessel segmentation, where the trimaps are generated by sophisticated hand-crafted features.

The matched filtering methods often extract vessel features by calculating the convolutional responses of an input image to 2D filters, e.g., Gaussian kernels[18], multiscale second-order Gaussian derivative kernels[19] and multi-wavelet kernels[20], which profile the cross-sectional intensity of vessels. Azzopardi et al.[21] proposed the bar-combination of shifted filter responses (B-COSFIRE) approach that selectively responds to vessel-like linear structures. Zhang et al.[19] proposed a set of novel filters based on 3D rotating frames in orientation scores that are raised from 2D images by a wavelet-type transformation.

The multiscale approaches exploit the fact that vascular structures appear at multiple scales and orientations. Many multiscale schemes[22, 23, 24] are based on the Eigen analysis of the Hessian matrix. A multiscale line detection approach was proposed by Nguyen et al.[25], extending the basic line detection method introduced by Ricci et al.[26]. By taking into account the average grey-level responses of a group of lines of various scales, orientations and lengths passing through a target pixel, multiscale line detection can recognize close parallel vessels and central vessel reflection.

Vessel tracking methods[27, 28, 29, 30] use local intensity and tortuosity information to guide the tracking of vessels. These methods provide precise vessel width and connectivity information, but cannot detect disconnected vessels with no seed point.

I-B Supervised Methods

Supervised methods for retinal vessel segmentation are based on binary pixel classification, i.e., predicting whether a pixel in a fundus image belongs to the vessel or non-vessel class. In supervised methods, pairs of data samples, each containing a fundus image (or a local patch of a fundus image) and its corresponding vessel groundtruth, are commonly used to train feature detectors and binary classifiers. Traditional machine learning approaches contain two steps, i.e., feature engineering and classification. The feature engineering step involves hand crafting and extracting features that capture the intrinsic characteristics of a target pixel by utilizing the local information conveyed within local image patches. Staal et al.[31] proposed a ridge-based feature extraction method that exploits the elongated structure of vessels, where a K-nearest-neighbor classifier is employed to achieve pixel-wise vessel segmentation. Soares et al.[32] utilized the multiscale 2D Gabor wavelet transformation for feature extraction. The extracted feature vectors are composed of multiscale wavelet responses and pixel intensities. A Bayesian classifier with a Gaussian mixture model as the class-conditional probability is then used to perform pixel-wise binary classification based on the extracted features. Lupascu et al.[33] introduced a feature-based AdaBoost classifier for vessel segmentation. A rich collection of responses of different filters, as well as various structural and geometric features, are extracted at multiple spatial scales and utilized to train an AdaBoost classifier. Other classifiers, such as neural networks[34, 35, 36], support vector machines[37, 38, 39], random forests[40, 41] and ensemble models[42], are also employed in conjunction with various hand-crafted features extracted from local patches for classifying the central pixel of each patch. The above supervised methods rely on application-dependent feature representations designed by domain experts, which involves a heuristic and laborious manual feature design procedure. Most of these methods consider extracting features at multiple spatial scales to better capture the varying size, shape and scale of the vasculature. However, hand-crafted features have limited generalization ability, especially in cases of pathological retinas and complex vasculature.

Differing from traditional machine learning approaches, modern deep learning techniques automatically learn hierarchical feature representations through multiple levels of abstraction from fundus images and vessel groundtruths. Li et al.[43] proposed a cross-modality learning framework that employs a de-noising auto-encoder to learn initial features for the first layer of a neural network. This cross-modality approach was extended by Fan et al.[44], where a stacked de-noising auto-encoder is used for greedy layer-wise pre-training of each layer of the neural network. However, these neural networks are fully connected between adjacent layers, which leads to problems such as over-parameterization and overfitting for the task of vessel segmentation. Furthermore, the number of trainable parameters within a fully connected neural network is related to the size of the input images.

To address these problems, Convolutional Neural Networks (CNNs) have been employed for vessel segmentation in recent research. Oliveira et al.[45] combined the multiscale stationary wavelet transformation and a Fully Convolutional Network (FCN)[46] to segment vessel structures within local patches of fundus images. Another popular encoder-decoder based architecture, UNet[47], was first introduced by Antiga et al.[48] for vessel segmentation in fundus image patches, and similar methods and results were also reported by Wang et al.[49]. Alom et al.[50] incorporated residual blocks[51] and recurrent convolutional neural networks[52] into the macro-architecture of UNet and proposed the R2-UNet model to improve the performance of retinal vessel segmentation.

However, these patch-to-patch processing schemes involve cropping patches of fundus images, processing these patches, and then merging the results, which is an inefficient approach that includes redundant computation when the cropped patches overlap. Moreover, the patch-to-patch approach does not account for non-local correlations when classifying the center pixel (or all pixels) within a patch, which may lead to failures caused by noise and abnormalities. Fu et al.[53, 54] proposed an end-to-end approach named DeepVessel, which applies deep supervision[55] to multiscale and multilevel FCN features and adopts a conditional random field formulated as a recurrent neural network[56]. A similar deep supervision strategy was adopted by Mo et al.[57] on a deeper FCN model and achieved better vessel segmentation performance.

Although these existing methods have succeeded in achieving segmentation performance close to that of trained human observers, accurately segmenting the vasculature in color fundus images remains a challenging problem due to the various complexities mentioned above.

In this paper, we propose an effective and efficient method for accurately segmenting both major and thin vessels in color fundus images through automatically learning and decoding hierarchical features with multiple spatial frequencies. The main contributions of this work are threefold:

  1. Motivated by the observation that the vasculature can be decomposed into low-spatial-frequency components that describe the smoothly changing structure (e.g., the major vessels) and high-spatial-frequency components that describe the rapidly changing details (e.g., the minor details of thin vessels), we adopt octave convolution (OctConv)[58] to build feature encoder blocks, and use them to automatically learn hierarchical multifrequency features at multiple levels of a neural network. Moreover, we empirically demonstrate that these low- and high-frequency features focus on the major vascular tree and thin vessels respectively, by visualizing the feature maps of responses of the octave convolution kernels.

  2. To provide the network with the capability of learning how to decode multifrequency features, we propose a novel basic operation called octave transposed convolution (OctTrConv). OctTrConv takes in feature maps with multiple spatial frequencies and restores their spatial resolution by learning a set of convolution kernels. Decoder blocks are built with OctTrConv and OctConv, and then utilized for decoding multifrequency feature maps.

  3. We also propose a novel encoder-decoder based neural network architecture named Octave UNet, which contains two main components. The encoder utilizes multiple of the aforementioned multifrequency encoder blocks for hierarchical multifrequency feature learning, whereas the decoder contains multiple multifrequency decoder blocks for gradual feature decoding. Skip connections similar to those of the vanilla UNet[46] are also adopted to feed additional location-information-rich feature maps with multiple spatial frequencies to the decoder blocks, facilitating the recovery of spatial resolution and the generation of a high-resolution vessel probability map. Octave UNet can be trained in an end-to-end manner and deployed to produce full-sized vessel maps, which is much more efficient than the aforementioned patch-to-patch approaches and achieves processing times comparable to other deep learning approaches, if not faster. Furthermore, Octave UNet achieves performance better than or comparable to state-of-the-art methods on four publicly available datasets, without any preprocessing, complex data augmentation, or post-processing procedures.

The remaining sections are organized as follows: Section II presents the proposed method. Section III introduces the datasets and data augmentation strategies used in this work. Section IV presents the training methodology and implementation details, along with extensive experimental results of the proposed method. Finally, we conclude this work in Section V.

II Method

II-A Multifrequency feature extraction with OctConv

Retinal vessels form a complex tree-like structure with varying size, shape and vessel width. Unlike the major vascular tree, thin vessels often have small widths and low background contrast, which makes them difficult for human observers to distinguish.

As illustrated in Fig.1, the low- and high-frequency components of the retinal vasculature focus on capturing major vessels and thin vessels respectively. Motivated by this observation, we adopt octave convolution (OctConv)[58] as a multifrequency feature extractor.

(a) Fundus image.
(b) Image of vasculature.
(c) Low frequency component.
(d) High frequency component.
Fig. 1: The image of the vasculature can be decomposed into a low-spatial-frequency component that describes the major vascular tree and a high-spatial-frequency component that describes the edges and minor details of thin vessels.
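For illustration only, the decomposition in Fig.1 can be mimicked by a toy sketch (this is not the code used in the paper): the low-frequency component is taken as local averages, and the high-frequency component as the residual detail. The function name `decompose_frequencies` and the pooling factor are our own illustrative choices.

```python
import numpy as np

def decompose_frequencies(img, factor=2):
    """Split a 2D image into a low-frequency component (local averages,
    capturing smooth structure such as major vessels) and a high-frequency
    residual (rapid changes such as thin-vessel details and edges)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    img = img[:h2, :w2]
    # Low frequency: average pooling by `factor`.
    low = img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
    # High frequency: residual after removing the upsampled low-frequency part.
    up = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)
    high = img - up
    return low, high
```

Adding the nearest-neighbour-upsampled low-frequency map back to the high-frequency residual reconstructs the original image exactly.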

The computational graph of OctConv is illustrated in Fig.2. Let X^H and X^L denote the high- and low-frequency input feature maps. The high- and low-frequency outputs of OctConv are given by Y^H = Y^{H→H} + Y^{L→H} and Y^L = Y^{L→L} + Y^{H→L}, where Y^{H→H} and Y^{L→L} denote the intra-frequency information update, whereas Y^{L→H} and Y^{H→L} denote the inter-frequency information exchange.

Specifically, let W = {W^{H→H}, W^{H→L}, W^{L→H}, W^{L→L}} denote the octave kernel composed of a set of convolution kernels with different numbers of channels, b^H and b^L denote the biases, k denote the size of a square kernel, and σ(·) denote the non-linear activation function. Writing f(X; W) for the vanilla convolution of X with a k × k kernel W, pool(·) for average pooling with a stride of two (which realizes the floor operation ⌊·/2⌋ on the spatial coordinates), and up(·) for upsampling by a factor of two, the high- and low-frequency responses are given by (1) and (2) respectively.

Y^H = σ( f(X^H; W^{H→H}) + up( f(X^L; W^{L→H}) ) + b^H )   (1)
Y^L = σ( f(X^L; W^{L→L}) + f( pool(X^H); W^{H→L} ) + b^L )   (2)

It is worth mentioning that the W^{H→H} and W^{L→L} paths are exactly the vanilla convolution operation, whereas the W^{H→L} path is equivalent to first downsampling the input by a scale of two and then applying vanilla convolution, and the W^{L→H} path is equivalent to upsampling the output of a vanilla convolution by a scale of two.

Fig. 2: The OctConv operation is denoted by a blue arrow within an octagon. The zoomed-in figure shows an abstraction of the computational graph that contains the inter-frequency information exchange (Y^{H→L} and Y^{L→H}) and the intra-frequency information update (Y^{H→H} and Y^{L→L}).

Whereas the original goal of OctConv is to reduce the spatial redundancy of convolutional feature maps, based on the assumption that kernel responses can be decomposed into low- and high-spatial-frequency components like natural images, we empirically show in Fig.6 that by controlling the scale of the receptive field of the convolution kernels, the low- and high-frequency feature maps can learn to focus on the major vascular tree and the minor details of thin vessels respectively.
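A minimal numpy sketch of the OctConv computation may clarify the four paths. This is not the authors' implementation: it uses 1×1 kernels for brevity, and the helper names (`octconv`, `conv1x1`, `avg_pool2`, `upsample2`) are our own. It follows the spirit of (1) and (2): two intra-frequency convolutions plus pooled/upsampled inter-frequency exchange.

```python
import numpy as np

def avg_pool2(x):
    """Average-pool (C, H, W) feature maps by a factor of two."""
    c, h, w = x.shape
    return x[:, :h - h % 2, :w - w % 2].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour upsample (C, H, W) feature maps by a factor of two."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def conv1x1(x, ker):
    """Vanilla 1x1 convolution: a (C_out, C_in) kernel mixes channels."""
    return np.einsum('oc,chw->ohw', ker, x)

def octconv(x_h, x_l, w_hh, w_hl, w_lh, w_ll, relu=True):
    """One OctConv layer (1x1 kernels for brevity): intra-frequency updates
    plus inter-frequency exchange via pooling and upsampling."""
    y_h = conv1x1(x_h, w_hh) + upsample2(conv1x1(x_l, w_lh))   # H->H + L->H
    y_l = conv1x1(x_l, w_ll) + conv1x1(avg_pool2(x_h), w_hl)   # L->L + H->L
    if relu:
        y_h, y_l = np.maximum(y_h, 0), np.maximum(y_l, 0)
    return y_h, y_l
```

Note that the high-frequency output keeps the full spatial resolution while the low-frequency output stays at half resolution, exactly as in Fig.2.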

II-B Multifrequency feature decoding with OctTrConv

For retinal vessel segmentation, the means of multifrequency feature extraction alone is not enough to perform dense pixel classification. During the feature encoding process shown in the columns of Fig.6, the spatial dimensions of the feature maps reduce gradually and lose spatial details; this compression effect forces the kernels to learn more discriminative features with higher levels of abstraction. Therefore, a process of decoding the feature maps to recover spatial details and generate a high-resolution vessel map is needed. A naive way of achieving this is to use bilinear interpolation, which, unlike vanilla transposed convolution, lacks the capability of learning the decoding mapping. However, vanilla transposed convolution is not compatible with multifrequency feature maps that have different spatial dimensions. Moreover, naively using multiple separate stems of transposed convolutions lacks information exchange between frequencies and implies that the multifrequency features should be utilized independently for reconstructing multiple segmentation results, which may not be a correct assumption.

To address these issues, we extend OctConv and propose a novel operation named octave transposed convolution (OctTrConv), which provides the capability of learning suitable mappings for decoding multifrequency features. As illustrated in Fig.3, OctTrConv takes in feature maps with multiple spatial frequencies and restores their spatial resolution by learning a set of mappings, including intra-frequency information updates and inter-frequency information exchanges.

The computational graph of OctTrConv is illustrated in Fig.3. Let X^H and X^L denote the high- and low-frequency input feature maps. The high- and low-frequency outputs of OctTrConv are given by Y^H = Y^{H→H} + Y^{L→H} and Y^L = Y^{L→L} + Y^{H→L}, where Y^{H→H} and Y^{L→L} denote the intra-frequency information update, whereas Y^{L→H} and Y^{H→L} denote the inter-frequency information exchange.

Similarly, let W = {W^{H→H}, W^{H→L}, W^{L→H}, W^{L→L}} denote the octave kernel composed of a set of trainable kernels, b^H and b^L denote the biases, k denote the size of a square kernel, and σ(·) denote the non-linear activation function. Writing g(X; W) for the vanilla transposed convolution of X with a k × k kernel W, pool(·) for average pooling with a stride of two, and up(·) for upsampling by a factor of two, the high- and low-frequency responses are given by (3) and (4) respectively.

Y^H = σ( g(X^H; W^{H→H}) + up( g(X^L; W^{L→H}) ) + b^H )   (3)
Y^L = σ( g(X^L; W^{L→L}) + g( pool(X^H); W^{H→L} ) + b^L )   (4)
Fig. 3: The OctTrConv operation is denoted by a green arrow within an octagon. The zoomed-in figure shows an abstraction of the computational graph that contains the inter-frequency information exchange (Y^{H→L} and Y^{L→H}) and the intra-frequency information update (Y^{H→H} and Y^{L→L}).

It is worth mentioning that the W^{H→H} and W^{L→L} paths are exactly the vanilla transposed convolution operation, whereas the W^{H→L} path is equivalent to first downsampling the input by a scale of two and then applying vanilla transposed convolution, and the W^{L→H} path is equivalent to upsampling the output of a vanilla transposed convolution by a scale of two.
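The OctTrConv paths can be sketched in the same style. Again this is our illustrative numpy code, not the paper's implementation; it uses a 2×2 kernel with stride 2 so that each transposed convolution doubles the spatial resolution, and all helper names are our own.

```python
import numpy as np

def avg_pool2(x):
    """Average-pool (C, H, W) feature maps by a factor of two."""
    c, h, w = x.shape
    return x[:, :h - h % 2, :w - w % 2].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour upsample (C, H, W) feature maps by a factor of two."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def tconv2x2_s2(x, ker):
    """Vanilla transposed convolution with a 2x2 kernel and stride 2: each
    input pixel is scattered into a 2x2 output patch, doubling resolution.
    Kernel shape: (C_out, C_in, 2, 2)."""
    c_in, h, w = x.shape
    y = np.zeros((ker.shape[0], 2 * h, 2 * w))
    for i in range(2):
        for j in range(2):
            y[:, i::2, j::2] = np.einsum('oc,chw->ohw', ker[:, :, i, j], x)
    return y

def oct_tconv(x_h, x_l, w_hh, w_hl, w_lh, w_ll):
    """One OctTrConv layer in the spirit of (3)-(4): transposed convolutions
    for the intra-frequency updates, with pooling/upsampling realizing the
    inter-frequency exchange."""
    y_h = tconv2x2_s2(x_h, w_hh) + upsample2(tconv2x2_s2(x_l, w_lh))  # H->H + L->H
    y_l = tconv2x2_s2(x_l, w_ll) + tconv2x2_s2(avg_pool2(x_h), w_hl)  # L->L + H->L
    return y_h, y_l
```

Both outputs are twice the spatial size of their respective inputs, which is how the decoder gradually restores resolution.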

II-C Octave UNet

In this section, a novel encoder-decoder based neural network architecture named Octave UNet is proposed. After end-to-end training, Octave UNet is capable of extracting and decoding hierarchical multifrequency features to segment the retinal vasculature in full-sized fundus images.

Fig. 4: The detailed architecture of Octave UNet. Feature maps are denoted as cuboids with the spatial dimensions on the side and the number of channels on top. The hyper-parameters adopted are the same as in the vanilla UNet[46].

Octave UNet consists of two main processes, i.e., feature encoding and feature decoding. Building upon the OctConv and OctTrConv operations, we design multifrequency encoder blocks and decoder blocks for hierarchical multifrequency feature learning and decoding.

(a) Encoder block.
(b) Decoder block.
Fig. 5: Multifrequency feature encoder block and decoder block. The red arrow denotes max pooling that downsamples the input features by a scale of 2. The gray arrow denotes a skip connection that copies and concatenates feature maps. The OctConv and OctTrConv operations are denoted by a blue and a green arrow within an octagon, respectively. Note that the spatial dimensions of the feature maps can remain the same within an encoder or decoder block by controlling the kernel sizes, padding pattern and strides of OctConv and OctTrConv.

By stacking multiple encoder blocks sequentially as in Fig.4, hierarchical multifrequency features can be learned to capture both the low-spatial-frequency components that describe smoothly changing structures, such as the major vessels, and the high-spatial-frequency components that describe rapidly changing details, such as the minor details of low-contrast vessels, as shown in the columns of Fig.6.

(a) High frequency from 1st OctConv.
(b) Low frequency from 1st OctConv.
(c) High frequency from encoder 1.
(d) Low frequency from encoder 1.
(e) High frequency from encoder 2.
(f) Low frequency from encoder 2.
Fig. 6: Examples of octave kernel responses, i.e., multifrequency feature maps from different encoder blocks. The spatial dimensions of the feature maps are shown on the lower and left axes.

As the feature encoding proceeds, as shown in the columns of Fig.6, the spatial dimensions of the feature maps reduce gradually and lose spatial details. Using only the high-abstraction-level features that lack location information may be insufficient for generating precise segmentation results. Inspired by the vanilla UNet[46], skip connections are adopted to concatenate low-level location-information-rich features to the inputs of the decoder blocks, as shown in Fig.5 and Fig.4. As the feature decoding proceeds, the stack of decoder blocks gradually restores location information and spatial details, as shown in the columns of Fig.7.

(a) High frequency from decoder 2.
(b) Low frequency from decoder 2.
(c) High frequency from decoder 3.
(d) Low frequency from decoder 3.
(e) High frequency from decoder 4.
(f) Low frequency from decoder 4.
Fig. 7: Examples of octave transposed kernel responses, i.e., multifrequency reconstructions from various decoder blocks. The spatial dimensions of the feature maps are shown on the lower and left axes.

It is worth mentioning that the initial OctConv layer of Octave UNet in Fig.4 contains only the computation of the Y^{H→H} and Y^{H→L} paths, where the high-frequency input X^H is the input fundus image itself. Similarly, the final OctConv layer in Fig.4 contains only the computation of the high-frequency output Y^H, which is the vessel probability map output by Octave UNet. Except for the final OctConv layer, which is activated by the sigmoid function (σ(x) = 1/(1 + e^{-x})) for performing binary classification, the ReLU activation[59] (σ(x) = max(0, x)) is adopted for all other layers. Batch Normalization[60] is also added after every convolution layer.

Octave UNet can be trained in an end-to-end manner on sample pairs of full-sized fundus images and vessel groundtruths. Compared with patch-to-patch approaches that require cropping, processing, and then merging local patches, Octave UNet generates full-sized high-resolution vessel maps with processing times comparable to other deep learning approaches, if not faster.

III Material

III-A Datasets

The proposed method is evaluated on four publicly available retinal fundus image datasets: the DRIVE, STARE, CHASE_DB1, and HRF datasets. The DRIVE dataset[31] consists of 40 color fundus photographs obtained from a diabetic retinopathy screening program. Each fundus image is composed of 565 × 584 pixels of 8 bits per channel and is provided with its corresponding vessel groundtruth annotated by human observers. The set of 40 images is divided into a test and a training set, each containing 20 images. For each image in the test set, an additional vessel groundtruth is also provided. The STARE dataset[61] consists of 20 color fundus images, of which half contain pathology. Each fundus image is digitalized to 700 × 605 pixels of 24 bits per channel and is provided with 2 sets of vessel groundtruths annotated by different human observers. The CHASE_DB1 dataset[42] consists of 28 color fundus images. Each fundus image is digitalized to 999 × 960 pixels and is provided with 2 sets of vessel groundtruths annotated by different clinical experts. The HRF dataset[62] consists of 45 color fundus images. Among them, 15 images are from healthy subjects, another 15 are from subjects with diabetic retinopathy, and the rest are from subjects with glaucoma. Each fundus image is digitalized to 3504 × 2336 pixels and is provided with its corresponding vessel groundtruth annotated by human observers. An overview of these 4 publicly available datasets is provided in Table I.

Dataset    | Year | Description                                              | Resolution
DRIVE      | 2004 | 40 in total, 20 for training, 20 for testing.            | 565 × 584
STARE      | 2000 | 20 in total, 10 are abnormal.                            | 700 × 605
CHASE_DB1  | 2011 | 28 in total.                                             | 999 × 960
HRF        | 2011 | 45 in total, 15 each for healthy, diabetic and glaucoma. | 3504 × 2336
TABLE I: Overview of the datasets adopted in this paper.

III-B Data preprocessing and augmentation

No preprocessing or post-processing steps are used in this implementation. Only random horizontal and vertical flips are adopted for data augmentation.
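The augmentation above can be sketched as a small helper (our own illustrative code, assuming numpy arrays; the paper gives no implementation details). The same flips must be applied to the image and its groundtruth so the pair stays aligned.

```python
import random
import numpy as np

def augment(image, groundtruth):
    """Random horizontal/vertical flips, applied identically to a fundus
    image of shape (H, W, 3) and its vessel groundtruth of shape (H, W) --
    the only data augmentation adopted in this work."""
    if random.random() < 0.5:  # horizontal flip
        image, groundtruth = image[:, ::-1], groundtruth[:, ::-1]
    if random.random() < 0.5:  # vertical flip
        image, groundtruth = image[::-1], groundtruth[::-1]
    return np.ascontiguousarray(image), np.ascontiguousarray(groundtruth)
```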

IV Experiments

IV-A Evaluation metrics

Retinal vessel segmentation is often formulated as a binary dense classification task, i.e., predicting whether each pixel within an input image belongs to the positive (vessel) or negative (non-vessel) class. As shown in Table II, a pixel prediction can fall into one of four categories: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). By plotting these pixels in different colors, e.g., TP in green, FP in red, TN in black, and FN in blue, an analytical vessel map of a method can be generated, as shown in (b) of Fig.10.

Predicted class | Groundtruth class: Vessel | Groundtruth class: Non-vessel
Vessel          | True Positive (TP)        | False Positive (FP)
Non-vessel      | False Negative (FN)       | True Negative (TN)
TABLE II: A binary confusion matrix for vessel segmentation.

In this paper, we adopt 5 commonly used metrics for evaluating and comparing with other state-of-the-art methods: accuracy (ACC), sensitivity (SE), specificity (SP), F1 score (F1), and the Area Under the Receiver Operating Characteristic curve (AUROC).

Equation (5) measures the overall accuracy of a method, i.e., how often the method is correct.

ACC = (TP + TN) / (TP + TN + FP + FN)   (5)

Sensitivity measures how often the method predicts positive when a pixel actually belongs to the positive class, as shown in (6).

SE = TP / (TP + FN)   (6)

Specificity measures how often the method predicts negative when a pixel actually belongs to the negative class, as shown in (7).

SP = TN / (TN + FP)   (7)

The F1 score in (8) is the harmonic mean of sensitivity and precision (PR = TP / (TP + FP)).

F1 = 2 · TP / (2 · TP + FP + FN)   (8)

It is worth noting that these confusion-matrix-based metrics are threshold-sensitive for methods that output a vessel probability map, e.g., the proposed Octave UNet. For methods that use binary thresholding to obtain the final segmentation result, i.e., the binarized vessel map, ACC, SE, SP and F1 depend on the binarization method. In this paper, unless mentioned otherwise, all threshold-sensitive metrics are calculated with the simplest thresholding method possible, i.e., global thresholding with a fixed threshold.

Additionally, we adopt AUROC, which is insensitive to the global threshold. Calculating the area under the ROC curve requires first creating the ROC curve by plotting the sensitivity (or True Positive Rate, TPR) against the False Positive Rate (FPR = FP / (FP + TN)) at various global threshold values. An example of a ROC curve and its corresponding AUROC is shown in Fig.8. For an oracle that can perfectly segment retinal vessels, ACC, SE, SP, F1 and AUROC would all hit the best score of 1.

Fig. 8: An example of ROC curve and the corresponding AUROC.

IV-B Experiment setup

IV-B1 Loss function

To alleviate the effect of the class-imbalance problem (vessel pixels are heavily outnumbered by non-vessel pixels), class-weighted binary cross-entropy is adopted as the loss function for training, where the positive class weight is calculated on the training set before training.
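The class-weighted binary cross-entropy with a positive weight computed on the training set can be sketched as follows (a NumPy illustration with our own function names; framework implementations such as PyTorch's BCE loss with `pos_weight` behave analogously):

```python
import numpy as np

def positive_class_weight(train_labels):
    """Negative-to-positive pixel ratio, computed on the training set."""
    pos = np.sum(train_labels)
    return (train_labels.size - pos) / pos

def class_weighted_bce(prob, target, pos_weight):
    """Binary cross-entropy with the positive (vessel) class up-weighted
    to counter the scarcity of vessel pixels."""
    eps = 1e-7
    prob = np.clip(prob, eps, 1 - eps)  # guard against log(0)
    loss = -(pos_weight * target * np.log(prob)
             + (1 - target) * np.log(1 - prob))
    return loss.mean()
```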

IV-B2 Training details

The proposed Octave UNet is trained with the Adam optimizer [63] using its default hyper-parameters. A learning-rate-reduction schedule is adopted that lowers the learning rate after the model performance has saturated for 20 epochs. The training process runs for a total of 500 epochs. All trainable kernels are initialized with He initialization [64], and no pre-trained parameters are used.
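The learning-rate schedule described above behaves like a reduce-on-plateau scheduler. A minimal sketch follows, with `factor` and `min_lr` as illustrative values that are not stated in the paper:

```python
class ReduceOnPlateau:
    """Multiply the learning rate by `factor` (down to `min_lr`) once the
    monitored loss has failed to improve for `patience` epochs."""

    def __init__(self, lr, factor=0.1, patience=20, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best, self.wait = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0  # improvement: reset counter
        else:
            self.wait += 1
            if self.wait >= self.patience:      # plateau: reduce the rate
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```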

IV-B3 Training and testing set splitting

Except for the DRIVE dataset, which has a conventional split into training and testing sets, the leave-one-out validation strategy is adopted for the STARE, CHASE_DB1, and HRF datasets. Specifically, each image is tested using a model trained on the other images within the same dataset. This strategy for generating training and testing sets is also adopted by recent works [43, 65, 31, 57]. Only the performance and results on test samples are reported in this paper.
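The leave-one-out split generation can be sketched as:

```python
def leave_one_out_splits(image_ids):
    """Yield (train_ids, test_ids) pairs: each image is held out once
    and tested with a model trained on all other images."""
    for i in range(len(image_ids)):
        train_ids = image_ids[:i] + image_ids[i + 1:]
        yield train_ids, [image_ids[i]]
```

For a dataset of N images this yields N models, each evaluated on its single held-out image.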

IV-C Experimental results

Average performance measures with standard deviations of the proposed method are shown in Table III, which demonstrates that the proposed method surpasses manual segmentation.

Dataset Method ACC SE SP F1 AUROC
DRIVE the proposed method
2nd human observer N/A
STARE the proposed method
2nd human observer N/A
CHASE_DB1 the proposed method
2nd human observer N/A
HRF the proposed method
2nd human observer N/A N/A N/A N/A N/A
TABLE III: Average performance measures with standard deviation for DRIVE, STARE, CHASE_DB1, and HRF datasets. Better performances are highlighted.

The best and worst cases are also illustrated in Fig. 9. The best cases on all datasets contain very few missed thin vessels, i.e., false negative pixels shown in blue. The worst cases occur mostly because the proposed method cannot detect parts of the vascular structures that are unevenly illuminated. In both the best and worst cases, false positives (i.e., pixels plotted in red) are rare, which demonstrates that the proposed method is capable of discriminating vasculature from non-vascular structures.

Fig. 9: The best (first 2 columns) and worst (last 2 columns) cases on the DRIVE (first row), STARE (second row), CHASE_DB1 (third row), and HRF (last row) datasets.

As shown in Fig. 10, the proposed method not only detects the major vascular tree, but is also sensitive to minor vessels that are easily overlooked by human observers.

(a) Fundus image.
(b) Analytical plot.
(c) Predicted probability vessel map.
(d) Manual annotation.
Fig. 10: A zoomed-in view of the segmentation performance on minor vasculature of thin vessels. The proposed method is sensitive to low-contrast thin vessels that are easily overlooked by human observers.

Fig. 11 illustrates the vessel segmentation performance of the proposed method on complex cases with abnormalities, demonstrating the robustness of the proposed method against various abnormalities such as exudates, cotton wool spots, and hemorrhages.

Fig. 11: Original fundus images (first column) and analytical vessel maps (last column) of cases with exudates (first row), cotton wool spots (second row), and hemorrhages (last row).

IV-D Sensitivity analysis of global threshold

The threshold-sensitive metrics, i.e., ACC, SE, SP, and F1, are measured at various sampled global threshold values, and the resulting sensitivity curves are shown in Fig. 12. As illustrated in Fig. 12, the sensitivity curves of the proposed method exhibit the same trend on all datasets, which demonstrates the robustness of the proposed Octave UNet across different datasets.

Furthermore, near the adopted global threshold, the sensitivity curves of ACC, SP, and F1 change smoothly, which further demonstrates the robustness of the proposed method. On the other hand, by lowering the standard for assigning a pixel to the vessel class, i.e., lowering the global threshold, the proposed method can achieve a large gain in SE while the other metrics drop very little.

(a) DRIVE.
(b) STARE.
(c) CHASE.
(d) HRF.
Fig. 12: Sensitivity curves of the proposed method on different datasets.

IV-E Comparison with other state-of-the-art methods

The performance comparisons of the proposed method with other state-of-the-art methods are reported in Table IV for the DRIVE dataset, Table V for the STARE dataset, Table VI for the CHASE_DB1 dataset, and Table VII for the HRF dataset.

The proposed method achieves the best performance among state-of-the-art methods in terms of ACC and SP on all datasets. Specifically, the proposed method achieves the best scores on all five metrics on the CHASE_DB1 and HRF datasets. On the DRIVE dataset, the proposed Octave UNet achieves the best ACC, SE, SP, and AUROC, while its F1 score is slightly lower than that of the R2-UNet proposed by Alom et al. [66]. On the STARE dataset, the proposed Octave UNet achieves SE, F1, and AUROC comparable to those of the patch-based method of Liskowski et al. [65] and R2-UNet [66].

In general, the proposed method achieves better or comparable performance against other state-of-the-art methods.

Methods Year ACC SE SP F1 AUROC
Unsupervised Methods
Zana et al.[67] 2001 0.9377 0.6971 0.9769 N/A 0.8984
Mendonca et al.[68] 2006 0.9452 0.7344 0.9764 N/A N/A
Al-Diri et al.[69] 2009 0.9258 0.7282 0.9551 N/A N/A
Miri et al.[70] 2010 0.9458 0.7352 0.9795 N/A N/A
You et al.[37] 2011 0.9434 0.7410 0.9751 N/A N/A
Fraz et al.[71] 2012 0.9430 0.7152 0.9768 N/A N/A
Fathi et al.[36] 2014 N/A 0.7768 0.9759 0.7669 N/A
Sreejini et al.[72] 2015 0.9633 0.7132 0.9866 N/A N/A
Roychowdhury et al.[73] 2015 0.9494 0.7395 0.9782 N/A N/A
Fan et al.[17] 2019 0.9600 0.7360 0.9810 N/A N/A
Supervised Methods
Staal et al.[31] 2004 0.9441 0.7194 0.9773 N/A 0.9520
Ricci et al.[26] 2007 0.9563 N/A N/A N/A 0.9558
Marin et al.[11] 2011 0.9452 0.7067 0.9801 N/A 0.9588
Fraz et al.[42] 2012 0.9480 0.7460 0.9807 N/A 0.9747
Cheng et al.[74] 2014 0.9472 0.7252 0.9778 N/A 0.9648
Orlando et al.[75] 2014 N/A 0.7850 0.9670 0.7810 N/A
Vega et al.[76] 2015 0.9412 0.7444 0.9612 0.6884 N/A
Fan et al.[40] 2016 0.9614 0.7191 0.9849 N/A N/A
Fan et al.[44] 2016 0.9612 0.7814 0.9788 N/A N/A
Liskowski et al.[65] 2016 0.9535 0.7811 0.9807 N/A 0.9790
Li et al.[43] 2016 0.9527 0.7569 0.9816 N/A 0.9738
Orlando et al.[77] 2016 N/A 0.7897 0.9684 0.7857 N/A
Mo et al.[57] 2017 0.9521 0.7779 0.9780 N/A 0.9782
Xiao et al.[78] 2018 0.9655 0.7715 N/A N/A N/A
Alom et al.[66] 2019 0.9556 0.7792 0.9813 0.8171 0.9784
Proposed Method 2019 0.9661 0.7957 0.9827 0.8033 0.9818
TABLE IV: Comparison with other state-of-the-art methods on DRIVE dataset
Methods Year ACC SE SP F1 AUROC
Unsupervised Methods
Hoover et al.[61] 1998 0.9264 0.6747 0.9565 N/A N/A
Mendonca et al.[68] 2006 0.9440 0.6996 0.9730 N/A N/A
Al-Diri et al.[69] 2009 N/A 0.7521 0.9681 N/A N/A
You et al.[37] 2011 0.9497 0.7260 0.9756 N/A N/A
Fraz et al.[71] 2012 0.9442 0.7311 0.9680 N/A N/A
Fathi et al.[36] 2013 N/A 0.8061 0.9717 0.7509 N/A
Roychowdhury et al.[73] 2015 0.9560 0.7317 0.9842 N/A N/A
Fan et al.[17] 2019 0.9570 0.7910 0.9700 N/A N/A
Supervised Methods
Staal et al.[31] 2004 0.9516 N/A N/A N/A 0.9614
Ricci et al.[26] 2007 0.9584 N/A N/A N/A 0.9602
Marin et al.[11] 2011 0.9526 0.6944 0.9819 N/A 0.9769
Fraz et al.[42] 2012 0.9534 0.7548 0.9763 N/A 0.9768
Vega et al.[76] 2015 0.9483 0.7019 0.9671 0.6614 N/A
Fan et al.[40] 2016 0.9588 0.6996 0.9787 N/A N/A
Fan et al.[44] 2016 0.9654 0.7834 0.9799 N/A N/A
Liskowski et al.[65] 2016 0.9729 0.8554 0.9862 N/A 0.9928
Li et al.[43] 2016 0.9628 0.7726 0.9844 N/A 0.9879
Orlando et al.[77] 2016 N/A 0.7680 0.9738 0.7644 N/A
Mo et al.[57] 2017 0.9674 0.8147 0.9844 N/A 0.9885
Xiao et al.[78] 2018 0.9693 0.7469 N/A N/A N/A
Alom et al.[66] 2019 0.9712 0.8292 0.9862 0.8475 0.9914
Proposed Method 2019 0.9741 0.8164 0.9870 0.8250 0.9892
TABLE V: Comparison with other state-of-the-art methods on STARE dataset
Methods Year ACC SE SP F1 AUROC
Unsupervised Methods
Fraz et al.[79] 2014 N/A 0.7259 0.9770 0.7488 N/A
Roychowdhury et al.[73] 2015 0.9467 0.7615 0.9575 N/A N/A
Fan et al.[17] 2019 0.9510 0.6570 0.9730 N/A N/A
Supervised Methods
Fraz et al.[42] 2012 0.9469 0.7224 0.9711 N/A 0.9712
Fan et al.[44] 2016 0.9573 0.7656 0.9704 N/A N/A
Liskowski et al.[65] 2016 0.9628 0.7816 0.9836 N/A 0.9823
Li et al.[43] 2016 0.9527 0.7569 0.9816 N/A 0.9738
Orlando et al.[77] 2016 N/A 0.7277 0.9712 0.7332 N/A
Mo et al.[57] 2017 0.9581 0.7661 0.9793 N/A 0.9812
Alom et al.[66] 2019 0.9634 0.7756 0.9820 0.7928 0.9815
Proposed Method 2019 0.9714 0.8020 0.9853 0.8079 0.9851
TABLE VI: Comparison with other state-of-the-art methods on CHASE_DB1 dataset
Methods Year ACC SE SP F1 AUROC
Unsupervised Methods
Roychowdhury et al.[73] 2015 0.9467 0.7615 0.9575 N/A N/A
Supervised Methods
Kolar et al.[62] 2013 N/A 0.7794 0.9584 0.7158 N/A
Orlando et al.[77] 2016 N/A 0.7874 0.9584 0.7158 N/A
Proposed Method 2019 0.9763 0.8244 0.9874 0.8079 0.9891
TABLE VII: Comparison with other state-of-the-art methods on HRF dataset

V Conclusion

An effective and efficient method for accurately segmenting both major and thin retinal vessels based on a multifrequency convolutional network is proposed in this paper. Building upon octave convolution and the proposed octave transposed convolution, Octave UNet can extract hierarchical features at multiple spatial frequencies and reconstruct accurate high-resolution vessel maps. Benefiting from the design of hierarchical multifrequency features, Octave UNet can be trained in an end-to-end manner without any pre-processing or post-processing steps, and achieves better or comparable performance to state-of-the-art methods with fast processing speed.

References

  • [1] C. L Srinidhi, P. Aparna, and J. Rajan, “Recent Advancements in Retinal Vessel Segmentation,” Journal of Medical Systems, vol. 41, no. 4, p. 70, 2017. [Online]. Available: https://link.springer.com/article/10.1007/s10916-017-0719-2
  • [2] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman, “Blood vessel segmentation methodologies in retinal images - A survey,” Computer Methods and Programs in Biomedicine, vol. 108, no. 1, pp. 407–433, 2012. [Online]. Available: http://dx.doi.org/10.1016/j.cmpb.2012.03.009
  • [3] P. Vostatek, E. Claridge, H. Uusitalo, M. Hauta-Kasari, P. Fält, and L. Lensu, “Performance comparison of publicly available retinal blood vessel segmentation methods,” Computerized Medical Imaging and Graphics, vol. 55, pp. 2–12, jan 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0895611116300702
  • [4] C. S. Y. Cheung, Z. Butty, N. N. Tehrani, and W. C. Lam, “Computer-assisted image analysis of temporal retinal vessel width and tortuosity in retinopathy of prematurity for the assessment of disease severity and treatment outcome,” Journal of AAPOS, vol. 15, no. 4, pp. 374–380, aug 2011. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1091853111003909
  • [5] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, and R. L. Kennedy, “Measurement of retinal vessel widths from fundus images based on 2-D modeling,” IEEE Transactions on Medical Imaging, vol. 23, no. 10, pp. 1196–1204, oct 2004.
  • [6] C. Heneghan, J. Flynn, M. O’Keefe, and M. Cahill, “Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis,” Medical Image Analysis, vol. 6, no. 4, pp. 407–429, dec 2002. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1361841502000580
  • [7] R. A. Welikala, J. Dehmeshki, A. Hoppe, V. Tah, S. Mann, T. H. Williamson, and S. A. Barman, “Automated detection of proliferative diabetic retinopathy using a modified line operator and dual classification,” Computer Methods and Programs in Biomedicine, vol. 114, no. 3, pp. 247–261, 2014. [Online]. Available: http://dx.doi.org/10.1016/j.cmpb.2014.02.010
  • [8] T. Y. Wong, T. H. Mosley, R. Klein, B. E. Klein, A. R. Sharrett, D. J. Couper, and L. D. Hubbard, “Retinal microvascular changes and MRI signs of cerebral atrophy in healthy, middle-aged people,” Neurology, vol. 61, no. 6, pp. 806–811, apr 2003. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/10214731
  • [9] T. Y. Wong, R. Klein, B. E. Klein, J. M. Tielsch, L. Hubbard, and F. J. Nieto, “Retinal microvascular abnormalities and their relationship with hypertension, cardiovascular disease, and mortality,” Survey of Ophthalmology, vol. 46, no. 1, pp. 59–80, jul 2001. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S003962570100234X
  • [10] T. Y. Wong, R. Klein, D. J. Couper, L. S. Cooper, E. Shahar, L. D. Hubbard, M. R. Wofford, and A. R. Sharrett, “Retinal microvascular abnormalities and incident stroke: The Atherosclerosis Risk in Communities Study,” Lancet, vol. 358, no. 9288, pp. 1134–1140, oct 2001. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0140673601062535
  • [11] C. Mariño, M. G. Penedo, M. Penas, M. J. Carreira, and F. Gonzalez, “Personal authentication using digital retinal images,” Pattern Analysis and Applications, vol. 9, no. 1, pp. 21–33, 2006. [Online]. Available: https://link.springer.com/article/10.1007/s10044-005-0022-6
  • [12] C. Köse and C. Ikibaşs, “A personal identification system using retinal vasculature in retinal fundus images,” Expert Systems with Applications, vol. 38, no. 11, pp. 13 670–13 681, oct 2011. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0957417411006683
  • [13] T. A. Soomro, A. J. Afifi, L. Zheng, S. Soomro, J. Gao, O. Hellwich, and M. Paul, “Deep Learning Models for Retinal Blood Vessels Segmentation: A Review,” IEEE Access, vol. 7, pp. 71 696–71 717, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8727963/
  • [14] S. Moccia, E. De Momi, S. El Hadji, and L. S. Mattos, “Blood vessel segmentation algorithms — Review of methods, datasets and evaluation metrics,” pp. 71–91, 2018. [Online]. Available: https://doi.org/10.1016/j.cmpb.2018.02.001
  • [15] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355–368, sep 1987. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0734189X8780186X
  • [16] E. M. Sigursson, S. Valero, J. A. Benediktsson, J. Chanussot, H. Talbot, and E. Stefánsson, “Automatic retinal vessel extraction based on directional mathematical morphology and fuzzy classification,” Pattern Recognition Letters, vol. 47, pp. 164–171, oct 2014. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167865514000865
  • [17] Z. Fan, J. Lu, C. Wei, H. Huang, X. Cai, and X. Chen, “A Hierarchical Image Matting Model for Blood Vessel Segmentation in Fundus Images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2367–2377, 2019.
  • [18] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989. [Online]. Available: http://ieeexplore.ieee.org/document/34715/
  • [19] J. Zhang, B. Dashtbozorg, E. Bekkers, J. P. W. Pluim, R. Duits, and B. M. ter Haar Romeny, “Robust Retinal Vessel Segmentation via Locally Adaptive Derivative Frames in Orientation Scores,” IEEE Transactions on Medical Imaging, vol. 35, no. 12, pp. 2631–2644, dec 2016.
  • [20] Y. Wang, G. Ji, P. Lin, and E. Trucco, “Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition,” Pattern Recognition, vol. 46, no. 8, pp. 2117–2133, 2013. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0031320313000241
  • [21] G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Medical Image Analysis, vol. 19, no. 1, pp. 46–57, jan 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1361841514001364
  • [22] H. Yu, S. Barriga, C. Agurto, G. Zamora, W. Bauman, and P. Soliz, “Fast vessel segmentation in retinal images using multiscale enhancement and second-order local entropy,” in Medical Imaging 2012: Computer-Aided Diagnosis, vol. 8315.   International Society for Optics and Photonics, 2012, p. 83151B.
  • [23] E. Moghimirad, S. Hamid Rezatofighi, and H. Soltanian-Zadeh, “Retinal vessel segmentation using a multi-scale medialness function,” Computers in Biology and Medicine, vol. 42, no. 1, pp. 50–60, jan 2012. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0010482511002046
  • [24] J. Zheng, P.-R. Lu, D. Xiang, Y.-K. Dai, Z.-B. Liu, D.-J. Kuai, H. Xue, and Y.-T. Yang, “Retinal image graph-cut segmentation algorithm using multiscale hessian-enhancement-based nonlocal mean filter,” Computational and mathematical methods in medicine, vol. 2013, 2013.
  • [25] U. T. Nguyen, A. Bhuiyan, L. A. Park, and K. Ramamohanarao, “An effective retinal blood vessel segmentation method using multi-scale line detection,” Pattern Recognition, vol. 46, no. 3, pp. 703–715, mar 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S003132031200355X
  • [26] E. Ricci and R. Perfetti, “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Transactions on Medical Imaging, vol. 26, no. 10, pp. 1357–1365, oct 2007. [Online]. Available: http://ieeexplore.ieee.org/document/4336179/
  • [27] I. Liu and Y. Sun, “Recursive tracking of vascular networks in angiograms based on the detection-deletion scheme,” IEEE Transactions on Medical Imaging, vol. 12, no. 2, pp. 334–341, jun 1993.
  • [28] Y. Yin, M. Adel, and S. Bourennane, “Retinal vessel segmentation using a probabilistic tracking method,” Pattern Recognition, vol. 45, no. 4, pp. 1235–1244, apr 2012. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0031320311003980
  • [29] E. Bekkers, R. Duits, T. Berendschot, and B. ter Haar Romeny, “A Multi-Orientation Analysis Approach to Retinal Vessel Tracking,” Journal of Mathematical Imaging and Vision, vol. 49, no. 3, pp. 583–610, jul 2014. [Online]. Available: https://doi.org/10.1007/s10851-013-0488-6
  • [30] J. Zhang, H. Li, Q. Nie, and L. Cheng, “A retinal vessel boundary tracking method based on Bayesian theory and multi-scale line detection,” Computerized Medical Imaging and Graphics, vol. 38, no. 6, pp. 517–525, sep 2014. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0895611114000901
  • [31] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, apr 2004. [Online]. Available: http://ieeexplore.ieee.org/document/1282003/
  • [32] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214–1222, 2006.
  • [33] C. A. Lupascu, D. Tegolo, and E. Trucco, “FABC: Retinal Vessel Segmentation Using AdaBoost,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 5, pp. 1267–1274, 2010.
  • [34] D. Marín, A. Aquino, M. E. Gegúndez-Arias, and J. M. Bravo, “A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features,” IEEE Transactions on Medical Imaging, vol. 30, no. 1, pp. 146–158, jan 2011. [Online]. Available: http://ieeexplore.ieee.org/document/5545439/
  • [35] J. Rahebi and F. Hardalaç, “Retinal blood vessel segmentation with neural network by using gray-level co-occurrence matrix-based features,” Journal of medical systems, vol. 38, no. 8, p. 85, 2014.
  • [36] A. Fathi and A. R. Naghsh-Nilchi, “General rotation-invariant local binary patterns operator with application to blood vessel detection in retinal images,” Pattern Analysis and Applications, vol. 17, no. 1, pp. 69–81, 2014.
  • [37] X. You, Q. Peng, Y. Yuan, Y.-m. Cheung, and J. Lei, “Segmentation of retinal blood vessels using the radial projection and semi-supervised approach,” Pattern Recognition, vol. 44, no. 10-11, pp. 2314–2324, 2011.
  • [38] A. Osareh and B. Shadgar, “Automatic blood vessel segmentation in color images of retina,” Iranian Journal of Science and Technology, vol. 33, no. B2, p. 191, 2009.
  • [39] L. Xu and S. Luo, “A novel method for blood vessel detection from retinal images,” Biomedical engineering online, vol. 9, no. 1, p. 14, 2010.
  • [40] Z. Fan, Y. Rong, J. Lu, J. Mo, F. Li, X. Cai, and T. Yang, “Automated blood vessel segmentation in fundus image based on integral channel features and random forests,” in Proceedings of the World Congress on Intelligent Control and Automation.   Institute of Electrical and Electronics Engineers Inc., sep 2016, pp. 2063–2068.
  • [41] J. Zhang, Y. Chen, E. Bekkers, M. Wang, B. Dashtbozorg, and B. M. ter Haar Romeny, “Retinal vessel delineation using a brain-inspired wavelet transform and random forest,” Pattern Recognition, vol. 69, pp. 107–123, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0031320317301498
  • [42] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman, “An ensemble classification-based approach applied to retinal blood vessel segmentation,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 9, pp. 2538–2548, sep 2012. [Online]. Available: http://ieeexplore.ieee.org/document/6224174/
  • [43] Q. Li, B. Feng, L. Xie, P. Liang, H. Zhang, and T. Wang, “A Cross-Modality Learning Approach for Vessel Segmentation in Retinal Images,” IEEE Transactions on Medical Imaging, vol. 35, no. 1, pp. 109–118, jan 2016.
  • [44] Z. Fan and J. Mo, “Automated blood vessel segmentation based on de-noising auto-encoder and neural network,” in Proceedings of the International Conference on Machine Learning and Cybernetics, vol. 2.   IEEE, 2016, pp. 849–856.
  • [45] A. Oliveira, S. Pereira, and C. A. Silva, “Retinal vessel segmentation based on Fully Convolutional Neural Networks,” Expert Systems with Applications, vol. 112, pp. 229–242, dec 2018. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0957417418303816
  • [46] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
  • [47] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, vol. 9351.   Springer, Cham, 2015, pp. 234–241. [Online]. Available: http://link.springer.com/10.1007/978-3-319-24574-4{_}28
  • [48] L. Antiga and S. Orobix, “Retina blood vessel segmentation with a convolutional neural network,” 2016. [Online]. Available: https://github.com/orobix/retina-unet
  • [49] X. Wang, W. Li, B. Miao, H. Jing, J. Zhangwei, X. Wen, J. Zhenyan, H. Gu, and Z. Shen, “Retina blood vessel segmentation using a U-net based Convolutional neural network,” in Procedia Computer Science: International Conference on Data Science (ICDS 2018), Beijing, China, 8-9 June 2018, 2018.
  • [50] M. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, and V. K. Asari, “Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation,” arXiv preprint arXiv:1802.06955, 2018. [Online]. Available: http://arxiv.org/abs/1802.06955
  • [51] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [52] M. Liang and X. Hu, “Recurrent convolutional neural network for object recognition,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), jun 2015, pp. 3367–3375.
  • [53] H. Fu, Y. Xu, S. Lin, D. W. Kee Wong, and J. Liu, “DeepVessel: Retinal Vessel Segmentation via Deep Learning and Conditional Random Field,” in International conference on medical image computing and computer-assisted intervention, S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, Eds.   Cham: Springer International Publishing, 2016, pp. 132–139.
  • [54] H. Fu, Y. Xu, D. W. K. Wong, and J. Liu, “Retinal vessel segmentation via deep learning network and fully-connected conditional random fields,” in Proceedings - International Symposium on Biomedical Imaging, vol. 2016-June, apr 2016, pp. 698–701.
  • [55] L. Wang, C.-Y. Lee, Z. Tu, and S. Lazebnik, “Training deeper convolutional networks with deep supervision,” arXiv preprint arXiv:1505.02496, 2015.
  • [56] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr, “Conditional random fields as recurrent neural networks,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1529–1537.
  • [57] J. Mo and L. Zhang, “Multi-level deep supervised networks for retinal vessel segmentation,” International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 12, pp. 2181–2193, 2017. [Online]. Available: https://doi.org/10.1007/s11548-017-1619-0
  • [58] Y. Chen, H. Fang, B. Xu, Z. Yan, Y. Kalantidis, M. Rohrbach, S. Yan, and J. Feng, “Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution,” arXiv preprint arXiv:1904.05049, vol. 1, 2019. [Online]. Available: http://arxiv.org/abs/1904.05049
  • [59] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th international conference on machine learning (ICML-10), 2010, pp. 807–814.
  • [60] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
  • [61] A. Hoover, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, mar 2000. [Online]. Available: http://ieeexplore.ieee.org/document/845178/
  • [62] R. Kolar, T. Kubena, P. Cernosek, A. Budai, J. Hornegger, J. Gazarek, O. Svoboda, J. Jan, E. Angelopoulou, and J. Odstrcilik, “Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database,” IET Image Processing, vol. 7, no. 4, pp. 373–383, jun 2013. [Online]. Available: https://digital-library.theiet.org/content/journals/10.1049/iet-ipr.2012.0455
  • [63] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [64] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision, dec 2015, pp. 1026–1034.
  • [65] P. Liskowski and K. Krawiec, “Segmenting Retinal Blood Vessels With Deep Neural Networks.” IEEE transactions on medical imaging, vol. 35, no. 11, pp. 2369–2380, nov 2016. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/27046869
  • [66] M. Z. Alom, C. Yakopcic, M. Hasan, T. M. Taha, and V. K. Asari, “Recurrent residual U-Net for medical image segmentation,” Journal of Medical Imaging, vol. 6, no. 01, p. 1, 2019. [Online]. Available: https://www.spiedigitallibrary.org/journals/journal-of-medical-imaging/volume-6/issue-01/014006/Recurrent-residual-U-Net-for-medical-image-segmentation/10.1117/1.JMI.6.1.014006.full
  • [67] F. Zana and J.-C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE transactions on image processing, vol. 10, no. 7, pp. 1010–1019, 2001.
  • [68] A. M. Mendonca and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE transactions on medical imaging, vol. 25, no. 9, pp. 1200–1213, 2006.
  • [69] B. Al-Diri, A. Hunter, and D. Steel, “An active contour model for segmenting and measuring retinal vessels,” IEEE Transactions on Medical imaging, vol. 28, no. 9, pp. 1488–1497, 2009.
  • [70] M. S. Miri and A. Mahloojifar, “Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1183–1192, 2010.
  • [71] M. M. Fraz, S. A. Barman, P. Remagnino, A. Hoppe, A. Basit, B. Uyyanonvara, A. R. Rudnicka, and C. G. Owen, “An approach to localize the retinal blood vessels using bit planes and centerline detection,” Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 600–616, nov 2012. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0169260711002276
  • [72] K. S. Sreejini and V. K. Govindan, “Improved multiscale matched filter for retina vessel segmentation using PSO algorithm,” Egyptian Informatics Journal, vol. 16, no. 3, pp. 253–260, nov 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S111086651500033X
  • [73] S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi, “Iterative Vessel Segmentation of Fundus Images,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 7, pp. 1738–1749, 2015.
  • [74] E. Cheng, L. Du, Y. Wu, Y. J. Zhu, V. Megalooikonomou, and H. Ling, “Discriminative vessel segmentation in retinal images by fusing context-aware hybrid features,” Machine vision and applications, vol. 25, no. 7, pp. 1779–1792, 2014.
  • [75] J. I. Orlando and M. Blaschko, “Learning fully-connected CRFs for blood vessel segmentation in retinal images,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2014, pp. 634–641.
  • [76] R. Vega, G. Sanchez-Ante, L. E. Falcon-Morales, H. Sossa, and E. Guevara, “Retinal vessel extraction using Lattice Neural Networks with dendritic processing,” Computers in Biology and Medicine, vol. 58, pp. 20–30, mar 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S001048251400362X
  • [77] J. I. J. I. J. I. Orlando, E. Prokofyeva, and M. B. Blaschko, “A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images,” IEEE transactions on Biomedical Engineering, vol. 64, no. 1, pp. 16–27, jan 2016. [Online]. Available: http://ieeexplore.ieee.org/document/7420682/
  • [78] X. Xiao, S. Lian, Z. Luo, and S. Li, “Weighted Res-UNet for High-Quality Retina Vessel Segmentation,” Proceedings - 9th International Conference on Information Technology in Medicine and Education, ITME 2018, pp. 327–331, 2018.
  • [79] M. M. Fraz, A. R. Rudnicka, C. G. Owen, and S. A. Barman, “Delineation of blood vessels in pediatric retinal images using decision trees-based ensemble classification,” International Journal of Computer Assisted Radiology and Surgery, vol. 9, no. 5, pp. 795–811, sep 2014. [Online]. Available: https://doi.org/10.1007/s11548-013-0965-9