Retinal vessel segmentation by probing adaptive to lighting variations

04/29/2020 · by Guillaume Noyel, et al.

We introduce a novel method to extract the vessels in eye fundus images which is adaptive to lighting variations. In the Logarithmic Image Processing framework, a 3-segment probe detects the vessels by probing the topographic surface of an image from below. A map of contrasts between the probe and the image then makes it possible to detect the vessels by a threshold. On a low-contrast image, results show that our method extracts the vessels better than another state-of-the-art method. On a high-contrast image database (DRIVE) with a reference segmentation, our method has an accuracy of 0.9454, which is similar to or better than three state-of-the-art methods and below three others. The three best methods have a higher accuracy than a manual segmentation by another expert. Importantly, our method automatically adapts to the lighting conditions of the image acquisition.






1 Introduction

Extracting vessels in eye fundus images has been explored in numerous papers, e.g. [1, 2, 3, 4, 5, 6]. However, these methods may present limitations when there are strong lighting variations in the images. The existence of screening programmes for diabetic retinopathy has led to the creation of large databases of eye fundus images which contain contrast variations. These can be due to the inhomogeneous absorption of light in the eye or to different lighting conditions [7]. The aim of this paper is to introduce a vessel segmentation method which is adaptive to these lighting variations in colour eye fundus images. After the luminance of these images has been complemented, the vessels appear as a positive relief (i.e. a "chain of mountains") in the image topographic surfaces. They are detected by a probe composed of three parallel segments, where the central segment has a higher intensity than the two others. When the probe is inside a vessel (i.e. a "mountain"), the intensity difference between its external segments and the bottom of the mountain is minimal, whereas when the probe is outside a vessel, this intensity difference becomes greater. This principle will be used to detect the vessels. The adaptivity to lighting variations comes from the Logarithmic Image Processing model [8]. Let us present our method, before showing some experiments and results.

2 Method

2.1 Background: Logarithmic Image Processing (LIP)

Let f be a grey level image defined on a domain D ⊂ ℝ² with values in [0, M[, where M is equal to 256 for 8-bit digitised images. The LIP model is based on the transmittance law, which gives it strong optical properties not only for images acquired by transmission but also by reflection [8]. The LIP-addition ⊕ and its corollary the LIP-subtraction ⊖ are defined between two images f and g by:

f ⊕ g = f + g − (f·g)/M,        f ⊖ g = M·(f − g)/(M − g).

f ⊖ g is an image if and only if f ≥ g. Otherwise, f ⊖ g may take negative values lying in the interval ]−∞, 0[. In the LIP model, the grey scale is inverted: 0 corresponds to the white extremity, when no obstacle is placed between the source and the camera, and M corresponds to the black extremity, when no light is passing. Importantly, the LIP-subtraction or the LIP-addition of a constant brightens or darkens an image as if the light intensity (or the camera exposure time) were increased or decreased, respectively [8]. Such a property makes it possible to adapt to the lighting variations which exist in numerous images. Let us first present our method in 1D before extending it to 2D.
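For illustration, the two LIP operations can be written directly in NumPy. The formulas below are the standard LIP-addition and LIP-subtraction of the LIP model [8], with M = 256 for 8-bit images; the function names are ours.

```python
import numpy as np

M = 256.0  # grey-scale upper bound for 8-bit digitised images

def lip_add(f, g):
    """LIP-addition: f (+) g = f + g - f*g/M (standard LIP formula [8])."""
    return f + g - f * g / M

def lip_sub(f, g):
    """LIP-subtraction: f (-) g = M*(f - g)/(M - g); an image iff f >= g."""
    return M * (f - g) / (M - g)
```

LIP-subtracting a positive constant from an image brightens it as a longer exposure time would, which is exactly the property exploited by our probing.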

Figure 1: (a) A low-contrast colour retinal image [9]. (b) Its luminance after complementation, in which a vessel profile is extracted (green segment). (c) Profile and probe.

2.2 Detection of a vessel profile in 1D

The luminance is computed from the RGB colour components of the image (Fig. 1 a). To place it in the LIP scale, the luminance is complemented (Fig. 1 b), and a line profile f of a vessel is extracted (Fig. 1 c). As the vessel appears as a "bump" or a "mountain", a probe p made of 3 points is designed to probe this profile from below. The central point of the probe has a higher intensity than the left and right points at the bottom. The distance between the bottom points (i.e. the width of the probe) is larger than the vessel width (Fig. 1 c).

Let us consider the profiles of Fig. 2, where we want to detect a bump (Fig. 2 a) but not a transition (Fig. 2 b). In order to put the 3-point probe p in contact with the profile f from below, a constant c is LIP-added to the probe; it is defined by c = ⋀ (f ⊖ p) [8], where ⋀ is the infimum. The left and right detectors, d_l and d_r, are defined as the infimum of the LIP-difference (or contrast [8]) between the profile f and the left and right probe points, p_l and p_r, after the LIP-addition of the constant c:

d_l = ⋀ [ f ⊖ ( p_l ⊕ c ) ],        d_r = ⋀ [ f ⊖ ( p_r ⊕ c ) ].
The bump detector d_b is defined as the supremum of the left and right detectors:

d_b = d_l ∨ d_r.
Maps of bump detectors can be computed by using the restriction of the profile f to the domain of the probe centred at each point x. In case of a bump, the map presents a deep minimum (Fig. 2 c), whereas in case of a transition, this minimum disappears (Fig. 2 d).
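As a concrete illustration of this 1-D principle, the sketch below probes a profile with a 3-point probe (outer points at intensity 0, centre raised). It is a simplified reading of the method, with hypothetical names and parameters, not the authors' code.

```python
import numpy as np

M = 256.0

def lip_add(f, g):
    return f + g - f * g / M

def lip_sub(f, g):
    return M * (f - g) / (M - g)

def bump_detector_1d(profile, half_width, height):
    """Probe a 1-D profile from below with a 3-point probe.
    Outer points at intensity 0, centre point at `height`;
    returns the map of the bump detector (deep minimum = bump)."""
    n = len(profile)
    out = np.full(n, np.inf)
    for x in range(half_width, n - half_width):
        window = profile[x - half_width : x + half_width + 1]
        probe = np.zeros(2 * half_width + 1)
        probe[half_width] = height  # central point raised
        # contact constant: LIP-add c so the probe touches the profile from below
        c = np.min(lip_sub(window, probe))
        # left/right detectors: contrasts at the two outer probe points
        d_left = lip_sub(window[0], lip_add(probe[0], c))
        d_right = lip_sub(window[-1], lip_add(probe[-1], c))
        out[x] = max(d_left, d_right)  # bump detector = supremum
    return out
```

On a synthetic flat profile with a bump, the map reaches a deep minimum at the bump centre and stays high elsewhere, while a step transition yields no such minimum.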

Figure 2: (a) Probing of a bump. (b) Probing of a transition. The left and right detectors are shown by vertical black arrows. Maps of the bump detector for (c) the bump and (d) the transition.
Figure 3: (a) Domain of the 2D probe, with its orientation and width. (b) Probe intensity: the central segment has a higher intensity than the two others.

2.3 Detection of the vessels in 2D

As retinal images are 2D functions, the probe B, defined on a domain D_B, is made of 3 parallel segments with the same length and orientation θ (Fig. 3 a). The origin of the probe corresponds to one of the extremities of the central segment B_0. Its intensity is greater than the intensity of the left and right segments, B_l and B_r (Fig. 3 b). These two segments are equidistant from the central one, and the distance between them is the width w of the probe. In order to define noise-robust operators, we will use a rank-order minimum ⋀ⁿ, defined as an element of fixed rank in the set sorted in descending order, so that the smallest (possibly noisy) values are discarded.

Given the complemented luminance f of a retinal image, the maps of the left and right detectors, D_l and D_r, are for all x equal to:

D_l(x) = ⋀ⁿ_{h ∈ D_{B_l}} [ f(x+h) ⊖ ( B_l(h) ⊕ c(x) ) ],        D_r(x) = ⋀ⁿ_{h ∈ D_{B_r}} [ f(x+h) ⊖ ( B_r(h) ⊕ c(x) ) ].
The constant map c is the point-wise infimum of the constant maps of the three segments B_0, B_l and B_r: c = c_0 ∧ c_l ∧ c_r. As the central segment must fully enter the vessel relief, its infimum must be exact, so the map c_0 is computed with the exact minimum. However, for the left and right segments B_l and B_r, the noise-robust maps c_l and c_r, computed with the rank-order minimum ⋀ⁿ, will be used.
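The rank-order minimum can be implemented with a partial sort. A one-liner sketch (function name ours), stated equivalently as the minimum after discarding the n smallest values:

```python
import numpy as np

def robust_min(values, n=0):
    """Noise-robust minimum: discard the n smallest values and
    return the next one, i.e. the minimum after dropping outliers."""
    v = np.asarray(values).ravel()
    return np.partition(v, n)[n]
```

With n = 0 this is the ordinary minimum; increasing n makes the operator insensitive to the n lowest, possibly noisy, samples.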

The bump detector map in the orientation θ, D_b^θ, is defined as the point-wise supremum of the left and right detector maps:

D_b^θ = D_l^θ ∨ D_r^θ.
The bump detector map D_b is expressed as the point-wise infimum of the maps D_b^θ over all the orientations θ:

D_b = ⋀_θ D_b^θ.
As vessel detection is a multi-scale problem, different probes B_i, of widths w_i and lengths l_i, will be used. The bump detector maps D_b^{B_i} for the probes B_i are then combined by point-wise infimum into the map of vesselness V:

V = ⋀_i D_b^{B_i}.
In the map of vesselness (Fig. 4 a), the vessels appear as valleys and can be segmented by a threshold.
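The combination over orientations and scales reduces to point-wise maxima and minima over stacks of maps; a minimal sketch with hypothetical names:

```python
import numpy as np

def orientation_detector(d_left, d_right):
    """Bump detector map in one orientation: point-wise supremum
    of the left and right detector maps."""
    return np.maximum(d_left, d_right)

def vesselness(detector_maps):
    """Map of vesselness: point-wise infimum of the bump detector
    maps over all orientations and probe widths."""
    return np.minimum.reduce(list(detector_maps))
```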

For a better visualisation, the map of vesselness V can be normalised as follows. As the vessel values are less than the median med(V) of the map (Fig. 4 b), we define a new map ρ by ρ(x) = V(x) if V(x) ≤ med(V), and ρ(x) = med(V) otherwise. The values of the map ρ are then set in the interval [0, 1] in order to define the normalised map V_N (Fig. 4 d), for all x, by:

V_N(x) = ( ρ(x) − inf ρ ) / ( med(V) − inf ρ ).
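The normalisation step can be sketched as follows: clipping at the median, then rescaling to [0, 1] (function name ours; a guard for a constant map is omitted for brevity):

```python
import numpy as np

def normalise_vesselness(V):
    """Clip the vesselness map at its median (vessel values lie below it),
    then rescale the clipped map to the interval [0, 1]."""
    med = np.median(V)
    rho = np.where(V <= med, V, med)   # values above the median are saturated
    return (rho - rho.min()) / (med - rho.min())
```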
3 Experiments and results

3.1 Experiments for parameter estimation

Experiments were performed on low-contrast images from the DIARETDB1 database [9] (Fig. 1) and on high-contrast images from the DRIVE database [1]. DIARETDB1 images were captured with a Field Of View (FOV) of 50 degrees [9], whereas in DRIVE the FOV angle was 45 degrees. Parameters are normalised so as to be the same for all the images, and each parameter is carefully chosen so as to obtain the best segmentation results. A DIARETDB1 image is used for a qualitative evaluation, whereas the DRIVE images are used for a quantitative evaluation; indeed, DRIVE contains 20 images with a reference given by an expert.

The parameters are as follows. The rank-order minimum ⋀ⁿ is chosen such that a small fraction of the minimal points of a set is discarded. A limited number of orientations between 0 and 180 degrees was found sufficient. A maximum of 3 probes will be used. Their widths are related to the FOV diameter of the image and to the ratio between the FOV angle of a reference camera and the FOV angle of the image camera. The width of the first probe must be greater than the diameter of the largest vessels; the widths of the second and third probes are fractions of the first one. As the smallest vessels may be more tortuous than the largest ones, the length of a probe must be smaller than its width. The intensity of the probes depends on the image mean value: the central and lateral segment intensities are initially set relative to a reference mean value and then rescaled by the mean value of each image. The map of vesselness is segmented with a threshold chosen so that a fixed fraction of the FOV area is labelled as vessels (Fig. 4 a). In order to avoid segmenting zones of noise, fewer probes may be used: the number of probes is selected by verifying that the number of pixels whose class changes between the segmentation obtained with several probes and the one obtained with the first probe alone does not exceed a fixed fraction of the vessel area of the latter.
The selected segmentation is then filtered: the regions whose area is below a minimal size are removed and the small holes in the vessels are filled (i.e. the complemented segmentation is eroded by a unit square and reconstructed by dilation under the complemented segmentation). Moreover, for the vesselness map and the normalised map, only the values which are inside the FOV mask are considered. In the DIARETDB1 database, the FOV masks are segmented by a threshold, whereas they are provided with the DRIVE database.
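The area-fraction threshold described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: `area_fraction` stands for the (elided) percentage of the FOV labelled as vessels, and the function name is ours.

```python
import numpy as np

def segment_vessels(V, fov_mask, area_fraction):
    """Threshold the vesselness map V so that `area_fraction` of the
    FOV area is labelled as vessels.  Vessels appear as valleys in V,
    so pixels BELOW the quantile threshold are kept."""
    values = V[fov_mask]                     # restrict to the field of view
    t = np.quantile(values, area_fraction)   # data-driven threshold
    return (V <= t) & fov_mask
```

Small connected components and holes would then be removed with standard morphological filtering, as described above.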

3.2 Qualitative results on a low-contrast image

The map of vesselness (Fig. 4 a) is computed for the image of Fig. 1 (a); 2 probes are automatically selected. One can notice that the threshold (a fixed fraction of the FOV area) is below the median (Fig. 4 b). The segmentation (Fig. 4 c) is visually good and detects vessels which are barely visible in the original image (Fig. 1 a). The normalised map (Fig. 4 d) is compared to the vessel detector B-COSFIRE [3] (Fig. 4 e), whose code is publicly available. In the brightest parts of the image, the B-COSFIRE filter is very efficient at finding the vessels and gives more details than our method. However, in the darkest parts, compared to our method, the B-COSFIRE filter is more sensitive to noise and enhances the vessels less. Using the same area threshold, its segmentation detects a lot of noise in addition to the vessels (Fig. 4 f).

Figure 4: (a) Map of vesselness. (b) Histogram of the map, with the threshold value in red. (c) Vessel segmentation. (d) Normalised vesselness. (e) B-COSFIRE filtered image. (f) Segmentation of (e).

3.3 Quantitative results on a high-contrast image database

In the DRIVE database, as a reference is available, we compare the results of our method to those of the expert segmentation (given with the database) and to those of six state-of-the-art methods [1, 2, 3, 4, 5, 6] (Tab. 1). We use the following criteria, averaged over the database: the sensitivity (Se), specificity (Sp) and accuracy (Acc) [1]. Using the accuracy criterion, ours is fifth out of seven automatic methods. However, when taking into account the standard deviation, ours, the expert and the methods [2, 3, 1] are in the same confidence interval. Three methods [5, 4, 6] are above the others and the expert. On two images, Fig. 5 shows that our method is good at finding the main vessels (Fig. 5 c, f). However, it is still limited for segmenting the smallest ones. In the lower part of Fig. 5 (f), one can notice that retinopathy lesions such as exudates create false positives (in cyan); indeed, a thin zone between two exudates can be confounded with a vessel. This will be improved in future work. Nevertheless, these preliminary results are encouraging because our method is standalone, without any pre-processing such as contrast enhancement [7].
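The three criteria of Tab. 1 are straightforward to compute from binary masks. Here is a minimal sketch (function name ours), restricted to the FOV as is customary for DRIVE:

```python
import numpy as np

def se_sp_acc(seg, ref, fov_mask):
    """Sensitivity, specificity and accuracy of a binary segmentation
    against a reference, restricted to the field of view."""
    s, r = seg[fov_mask], ref[fov_mask]
    tp = np.sum(s & r)        # vessel pixels found
    tn = np.sum(~s & ~r)      # background pixels found
    fp = np.sum(s & ~r)       # background labelled as vessel
    fn = np.sum(~s & r)       # vessel pixels missed
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, acc
```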

Method Se Sp Acc (std)
Zhu [5] 0.7140 0.9868 0.9607 (0.0040)
Zhao [4] 0.742 0.982 0.954   (-)
Hu [6] 0.7772 0.9793 0.9533 (-)
expert 0.7760 0.9725 0.9473 (0.0048)
Mendonça [2] 0.7344 0.9764 0.9463 (0.0065)
Ours 0.7358 0.9765 0.9454 (0.0060)
Azzopardi [3] 0.7655 0.9704 0.9442 (-)
Staal [1] - - 0.9441 (0.0065)
Table 1: Mean sensitivity (Se), specificity (Sp), accuracy (Acc) and its standard deviation (std) for different methods in DRIVE database.
(a) Image 1 (b) (c) Segmentation (d) Image 3 (e) (f) Segmentation
Figure 5: (a,d) Retinal images. (b,e) Normalised maps. (c,f) Segmentation comparison with the reference. Black pixels are true positives, white pixels are true negatives, cyan pixels are false positives and red pixels are false negatives.

4 Conclusion and perspectives

We have introduced a fully automatic method to extract vessels in colour retinal images which is adaptive to lighting variations. It is based on probing the image from below with a 3-segment probe. A LIP-difference is then locally measured between the image and the probe. This gives a map of vesselness in which the vessels can be extracted by a threshold. On a low-contrast image, results have shown that our method detects the vessels better than a state-of-the-art one [3]. On a high-contrast image database (DRIVE), ours gives similar or better results than 3 state-of-the-art methods [1, 2, 3] and the manual segmentation of a second expert. Three methods [5, 4, 6] are above the others and the second expert. In future work, we will make our method more robust to lesions and we will relate it to Mathematical Morphology.


  • [1] J. Staal et al., “Ridge-based vessel segmentation in color images of the retina,” IEEE TMI, vol. 23, no. 4, pp. 501–509, 2004.
  • [2] A. M. Mendonca and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE TMI, vol. 25, no. 9, pp. 1200–1213, 2006.
  • [3] G. Azzopardi et al., “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Med Image Anal, vol. 19, no. 1, pp. 46–57, 2015.
  • [4] Y. Zhao et al., “Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images,” IEEE TMI, vol. 34, no. 9, pp. 1797–1807, 2015.
  • [5] C. Zhu et al., “Retinal vessel segmentation in colour fundus images using extreme learning machine,” Comput Med Imag Grap, vol. 55, pp. 68 – 77, 2017.
  • [6] K. Hu et al., “Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function,” Neurocomputing, vol. 309, pp. 179–191, 2018.
  • [7] G. Noyel et al., “Superimposition of eye fundus images for longitudinal analysis from large public health databases,” Biomed Phys Eng Express, vol. 3, pp. 045015, 2017.
  • [8] M. Jourlin, Logarithmic Image Processing: Theory and Applications, vol. 195 of Adv Imag Electron Phys, Elsevier Science, 2016.
  • [9] T. Kauppi et al., “The DIARETDB1 diabetic retinopathy database and evaluation protocol,” in Proc BMVC, 2007, pp. 15.1–15.10.