Extracting vessels in eye fundus images has been explored in numerous papers, e.g. [1, 2, 3, 4, 5, 6]. However, these methods may present limitations when there are strong lighting variations in the images. Screening programmes for diabetic retinopathy have led to the creation of large databases of eye fundus images which contain contrast variations, due to the inhomogeneous absorption of the eye or to different lighting conditions. The aim of this paper is to introduce a vessel segmentation method for colour eye fundus images which is adaptive to these lighting variations. After the luminance of these images has been complemented, the vessels appear as a positive relief (i.e. a “chain of mountains”) in the image topographic surface. They are detected by a probe composed of three parallel segments, where the central segment has a higher intensity than the two others. When the probe is inside a vessel (i.e. a “mountain”), the intensity difference between its external segments and the bottom of the mountain is minimal, whereas when the probe is outside a vessel, the intensity difference becomes greater. This principle will be used to detect the vessels. The adaptivity to lighting variations comes from the Logarithmic Image Processing model. Let us present our method, before showing some experiments and results.
2.1 Background: Logarithmic Image Processing (LIP)
Let $f$ be a grey level image defined on a domain $D \subset \mathbb{R}^2$ with values in $[0, M[$, where $M$ is equal to 256 for 8-bit digitised images. The LIP model is based on the transmittance law, which gives it strong optical properties not only for images acquired by transmission but also by reflection. The LIP-addition $\oplus$ and its corollary the LIP-subtraction $\ominus$ are defined between two images $f$ and $g$ by:
$$f \oplus g = f + g - \frac{fg}{M}, \qquad f \ominus g = M \, \frac{f - g}{M - g}.$$
$f \ominus g$ is an image if and only if $f \geq g$. Otherwise, $f \ominus g$ may take negative values lying in the interval $]-\infty, 0[$. In the LIP model, the grey scale is inverted: $0$ corresponds to the white extremity, when no obstacle is placed between the source and the camera, and $M$ corresponds to the black extremity, when no light is passing. Importantly, the LIP-subtraction or the LIP-addition of a constant brightens or darkens an image as if the light intensity (or the camera exposure time) were increased or decreased, respectively. Such a property makes it possible to adapt to the lighting variations which exist in numerous images. Let us first present our method in 1D before extending it to 2D.
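The two LIP operations translate directly into code (a minimal sketch with $M = 256$):

```python
import numpy as np

M = 256.0  # top of the grey scale for 8-bit images

def lip_add(f, g):
    """LIP-addition: f (+) g = f + g - f*g/M."""
    return f + g - f * g / M

def lip_sub(f, g):
    """LIP-subtraction: f (-) g = M*(f - g)/(M - g)."""
    return M * (f - g) / (M - g)
```

Note that `lip_sub(lip_add(f, c), c)` recovers `f`; this inverse relation is the property used later to put the probe in contact with the image.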
2.2 Detection of a vessel profile in 1D
The luminance is equal to $L = (R + G + B)/3$, where $R$, $G$ and $B$ are the colour components of the image (Fig. 1 a). To be in the LIP-scale, the luminance is complemented, $f = M - 1 - L$ (Fig. 1 b), and a line-profile of a vessel is extracted (Fig. 1 c). As the vessel appears as a “bump” or a “mountain”, a probe made of 3 points is designed to probe this profile from below. The central point of the probe has a higher intensity than the left and right points at the bottom. The distance between the bottom points (i.e. the width of the probe) is larger than the vessel width (Fig. 1 c).
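A minimal sketch of this preprocessing step (the mean-of-channels luminance is standard; the complement $M - 1 - L$ for 8-bit images is our reading of “complemented”):

```python
import numpy as np

M = 256  # top of the 8-bit LIP grey scale

def complemented_luminance(rgb):
    """Luminance (R+G+B)/3, then complement so that vessels
    become bright bumps in the (inverted) LIP scale."""
    L = np.asarray(rgb, dtype=np.float64).mean(axis=-1)
    return (M - 1) - L
```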
Let us consider the profiles $f$ of Fig. 2, where we want to detect a bump (Fig. 2 a) but not a transition (Fig. 2 b). In order to put the 3-point probe $b$ in contact with the profile $f$ from below, a constant $c$ is LIP-added to the probe. It is defined by $c = \inf_{x} ( f(x) \ominus b(x) )$, where $\inf$ denotes the infimum. The left and right detectors, $d_l$ and $d_r$, are defined as the infimum of the LIP-difference (or contrast) between the profile $f$ and the left and right probe points, $b_l$ and $b_r$, after the LIP-addition of the constant $c$:
$$d_l = \inf_{x} \big( f(x) \ominus ( b_l(x) \oplus c ) \big), \qquad d_r = \inf_{x} \big( f(x) \ominus ( b_r(x) \oplus c ) \big).$$
The bump detector $d$ is defined as the supremum of the left and right detectors: $d = \sup(d_l, d_r)$.
Maps of bump detectors can be computed by using the restriction of the profile $f$ to the domain of the probe centred at each point $x$. In the case of a bump, the map presents a deep minimum (Fig. 2 c), whereas in the case of a transition, this minimum disappears (Fig. 2 d).
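The 1D principle can be sketched as follows (an illustrative sketch rather than the exact implementation; the probe offsets and intensities below are arbitrary choices):

```python
import numpy as np

M = 256.0  # top of the grey scale

def lip_add(f, g):
    return f + g - f * g / M

def lip_sub(f, g):
    return M * (f - g) / (M - g)

def bump_detector(profile, probe):
    """Probe a 1D profile from below with a 3-point probe given as
    [(offset, intensity), ...] for the left, central and right points."""
    offsets = np.array([p[0] for p in probe])
    values = np.array([p[1] for p in probe])
    n = len(profile)
    d = np.full(n, np.inf)
    for x in range(n):
        pos = x + offsets
        if pos.min() < 0 or pos.max() >= n:
            continue  # probe partly outside the profile
        samples = profile[pos]
        # LIP-constant c putting the probe in contact with the profile
        c = np.min(lip_sub(samples, values))
        # contrasts between the profile and the shifted probe (>= 0)
        contrasts = lip_sub(samples, lip_add(values, c))
        # bump detector: supremum of the left and right contrasts
        d[x] = max(contrasts[0], contrasts[2])
    return d
```

Inside a bump wider than the vessel, both lateral contrasts vanish (deep minimum of the map); on a flat region or a transition, at least one stays large.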
2.3 Detection of the vessels in 2D
As retinal images are 2D functions, the probe $b$, defined on a domain $D_b \subset \mathbb{R}^2$, is made of 3 parallel segments with the same length $l$ and orientation $\theta$ (Fig. 3 a). The origin of the probe corresponds to one of the extremities of the central segment $b_c$. Its intensity is greater than the intensity of the left and right segments, $b_l$ and $b_r$ (Fig. 3 b). These two segments are equidistant from the central one and the width of the probe is $w$. In order to define operators robust to noise, we will use the minimum $\inf^{n}$, defined as the $n$-th element of a set sorted in descending order.
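The robust minimum can be sketched as an order statistic (assuming the intended reading is that the $n-1$ smallest, possibly noisy, points of the set are discarded; $n = 1$ gives the exact minimum):

```python
def robust_min(values, n=1):
    """Robust minimum inf^n: the n-th smallest value of the set,
    i.e. the n-1 smallest (possibly noisy) points are discarded."""
    return sorted(values)[n - 1]
```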
Given the complemented luminance $f$ of a retinal image, the maps of the left and right detectors, $D_l$ and $D_r$, are for all $x$ equal to:
$$D_l(x) = \inf^{n}_{y}\big( f(x+y) \ominus ( b_l(y) \oplus C(x) ) \big), \qquad D_r(x) = \inf^{n}_{y}\big( f(x+y) \ominus ( b_r(y) \oplus C(x) ) \big),$$
where $y$ spans the domain of the corresponding segment and $C$ is the constant map defined below.
The constant map $C$ is the point-wise infimum of the constant maps $C_c$, $C_l$ and $C_r$ associated with the segments $b_c$, $b_l$ and $b_r$: $C = \inf(C_c, C_l, C_r)$. As the central segment must fully enter the vessel relief, its infimum must be exact and the map $C_c$ will be computed with the exact minimum $\inf$. However, for the left and right segments, $b_l$ and $b_r$, the robust-to-noise maps $C_l$ and $C_r$ will be computed with $\inf^{n}$.
The bump detector map in the orientation $\theta$, $D_\theta$, is defined as the point-wise supremum of the left and right detector maps: $D_\theta = \sup(D_l, D_r)$.
The bump detector map $D$ is expressed as the point-wise infimum of the maps $D_\theta$ over all the orientations $\theta$: $D = \inf_{\theta} D_{\theta}$.
As the vessel detection is a multi-scale problem, different probes $b_i$, of widths $w_i$ and lengths $l_i$, will be used. The bump detector maps $D^{b_i}$ for these probes are then combined by point-wise infimum: $\inf_{i} D^{b_i}$.
In the map of vesselness (Fig. 4 a), the vessels appear as valleys and can be segmented by a threshold.
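The two combination rules above translate into point-wise NumPy operations (an illustrative sketch; the map names are ours):

```python
import numpy as np

def bump_map(D_left, D_right):
    """Bump detector map in one orientation: point-wise supremum
    of the left and right detector maps."""
    return np.maximum(D_left, D_right)

def pointwise_inf(maps):
    """Point-wise infimum, used both over the orientations theta
    and over the multi-scale probes b_i."""
    return np.minimum.reduce(list(maps))
```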
For a better visualisation, the map of vesselness can be normalised as follows. As the vessel values are less than the median $med$ of the map (Fig. 4 b), we define a new map whose value at $x$ is that of the vesselness map if it is less than or equal to $med$, and $med$ otherwise. The values of this clipped map are then linearly rescaled in order to define the normalised map (Fig. 4 d).
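A minimal sketch of this normalisation, assuming a clip at the median followed by a linear rescale (the target interval $[0, 1]$ is our assumption):

```python
import numpy as np

def normalise_vesselness(fv):
    """Clip the vesselness map at its median (vessel values lie
    below it), then rescale linearly to [0, 1]."""
    med = np.median(fv)
    clipped = np.minimum(fv, med)
    lo, hi = clipped.min(), clipped.max()
    if hi == lo:
        return np.zeros_like(clipped)
    return (clipped - lo) / (hi - lo)
```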
3 Experiments and results
3.1 Experiments for parameter estimation
Experiments were performed on low-contrast images from the DIARETDB1 database (Fig. 1) and on high-contrast images from the DRIVE database. DIARETDB1 images were captured with a Field Of View (FOV) of 50 degrees, whereas in DRIVE the FOV angle was 45 degrees. Parameters are normalised to be the same for all the images. Each parameter is carefully chosen so as to obtain the best segmentation results. A DIARETDB1 image is used for a qualitative evaluation, whereas DRIVE images are used for a quantitative evaluation: indeed, DRIVE contains 20 test images with a reference segmentation given by an expert. The parameters are as follows. The minimum $\inf^{n}$ is chosen such that a small percentage of the minimal points of a set are discarded. Orientations between 0 and 180 degrees were found sufficient. A maximum number of 3 probes, $b_1$, $b_2$ and $b_3$, will be used. Their widths $w_i$ are related to the FOV diameter of the image and to the ratio between the FOV angle of a reference camera and the FOV angle of the image camera. The width $w_1$ must be greater than the diameter of the largest vessels, and the widths $w_2$ and $w_3$ are smaller. As the smallest vessels may be more tortuous than the largest ones, the length $l_i$ of a probe must be smaller than its width $w_i$. The intensity of the probes depends on the image mean value: initially, the central segment intensity and the left and right segment intensities are set relative to a reference mean, and for each image they are then rescaled according to its own mean. The map of vesselness is segmented with a threshold such that a fixed percentage of the FOV area is considered as vessels (Fig. 4 a). In order to avoid the segmentation of zones of noise, fewer than 3 probes may be used: the probe number $i$ is chosen by verifying that the number of pixels whose class changes between the segmentations of the vesselness maps with $i$ probes and with the first probe alone does not exceed a fixed fraction of the vessel area of the first segmentation.
The selected segmentation is then filtered: the regions whose area is less than a given threshold are removed and the small holes of the vessels are filled (i.e. the complemented segmentation is eroded by a unit square and then reconstructed by dilation under the complemented segmentation). Moreover, for the vesselness map and the normalised map, only the values which are inside the FOV mask are considered. In the DIARETDB1 database, the FOV masks are segmented by a threshold, whereas they are provided with the DRIVE database.
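This filtering step can be sketched with `scipy.ndimage` (an illustrative sketch; the `min_area` value and the use of `binary_propagation` for the reconstruction by dilation are our choices):

```python
import numpy as np
from scipy import ndimage as ndi

def filter_segmentation(seg, min_area=30):
    """Post-filter a binary vessel segmentation:
    1) remove connected regions smaller than min_area pixels;
    2) fill small holes: erode the complement by a unit (3x3) square,
       then reconstruct it by dilation under the complement."""
    # 1) area filtering via connected-component labelling
    labels, n = ndi.label(seg)
    areas = ndi.sum(seg, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
    # 2) hole filling by reconstruction of the eroded complement
    comp = ~keep
    seed = ndi.binary_erosion(comp, structure=np.ones((3, 3)),
                              border_value=1)
    recon = ndi.binary_propagation(seed, mask=comp)
    return ~recon
```

Holes too small to survive the erosion of the complement are not reconstructed, so they end up filled in the final segmentation.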
3.2 Qualitative results on a low-contrast image
The map of vesselness (Fig. 4 a) is computed for the image of Fig. 1 (a). Two probes are automatically selected. One can notice that the area threshold is below the median (Fig. 4 b). The segmentation (Fig. 4 c) is visually good and detects vessels which are barely visible in the original image (Fig. 1 a). The normalised map (Fig. 4 d) is compared to the vessel detector B-COSFIRE (Fig. 4 e), whose code is publicly available. In the brightest parts of the image, the B-COSFIRE filter is very efficient at finding the vessels and gives more details than our method. However, in the darkest parts, compared to our method, the B-COSFIRE filter is more sensitive to noise and enhances the vessels less. Using the same area threshold, its segmentation detects a lot of noise in addition to the vessels (Fig. 4 f).
3.3 Quantitative results on a high-contrast image database
In the DRIVE database, as a reference is available, we compare the results of our method to those of the expert segmentation (given with the database) and to those of six state-of-the-art methods [1, 2, 3, 4, 5, 6] (Tab. 1). We use the following criteria, averaged over the database: the sensitivity (Se), specificity (Sp) and accuracy (Acc). Using the accuracy criterion, ours ranks fifth of seven automatic methods. However, when taking into account the standard deviation, ours, the expert and the methods [2, 3, 1] are in the same confidence interval. Three methods [5, 4, 6] are above the others and the expert. In two images, Fig. 5 shows that our method finds the main vessels well (Fig. 5 c, f). However, it is still limited in segmenting the smallest ones. In the lower part of Fig. 5 (f), one can notice that retinopathy lesions such as exudates create false positives (in cyan). Indeed, a thin zone between two exudates can be confounded with a vessel. This will be improved in future work. However, these preliminary results are encouraging because our method is standalone, without any pre-processing such as contrast enhancement.
|Method||Se||Sp||Acc (std)|
|Zhu ||0.7140||0.9868||0.9607 (0.0040)|
|Zhao ||0.742||0.982||0.954 (-)|
|Hu ||0.7772||0.9793||0.9533 (-)|
|Mendonça ||0.7344||0.9764||0.9463 (0.0065)|
|Azzopardi ||0.7655||0.9704||0.9442 (-)|
|Staal ||-||-||0.9441 (0.0065)|
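The criteria of Tab. 1 are the standard confusion-matrix rates; a minimal sketch for computing them on boolean arrays (restricted to the FOV beforehand):

```python
import numpy as np

def se_sp_acc(seg, ref):
    """Sensitivity, specificity and accuracy of a binary
    segmentation against a reference segmentation."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tp = np.sum(seg & ref)    # vessel pixels correctly detected
    tn = np.sum(~seg & ~ref)  # background correctly rejected
    fp = np.sum(seg & ~ref)
    fn = np.sum(~seg & ref)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / seg.size
    return se, sp, acc
```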
4 Conclusion and perspectives
We have introduced a fully automatic method to extract vessels in colour retinal images which is adaptive to lighting variations. It is based on probing an image from below with a 3-segment probe. A LIP-difference is then locally measured between the image and the probe. This gives a map of vesselness in which vessels can be extracted by a threshold. In a low-contrast image, results have shown that our method detects the vessels better than a state-of-the-art one (B-COSFIRE). In a high-contrast image database (DRIVE), ours gives similar or better results than 3 state-of-the-art methods [1, 2, 3] and the manual segmentation of a second expert. Three methods [5, 4, 6] are above the others and the second expert. In future work, we will make our method more robust to lesions and relate it to Mathematical Morphology.
-  J. Staal et al., “Ridge-based vessel segmentation in color images of the retina,” IEEE TMI, vol. 23, no. 4, pp. 501–509, 2004.
-  A. M. Mendonça and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE TMI, vol. 25, no. 9, pp. 1200–1213, 2006.
-  G. Azzopardi et al., “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Med Image Anal, vol. 19, no. 1, pp. 46–57, 2015.
-  Y. Zhao et al., “Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images,” IEEE TMI, vol. 34, no. 9, pp. 1797–1807, 2015.
-  C. Zhu et al., “Retinal vessel segmentation in colour fundus images using extreme learning machine,” Comput Med Imag Grap, vol. 55, pp. 68–77, 2017.
-  K. Hu et al., Neurocomputing, vol. 309, pp. 179–191, 2018.
-  G. Noyel et al., “Superimposition of eye fundus images for longitudinal analysis from large public health databases,” Biomed Phys Eng Express, vol. 3, pp. 045015, 2017.
-  M. Jourlin, Logarithmic Image Processing: Theory and Applications, vol. 195 of Adv Imag Electron Phys, Elsevier Science, 2016.
-  T. Kauppi et al., “The DIARETDB1 diabetic retinopathy database and evaluation protocol,” in Proc BMVC, 2007, pp. 15.1–15.10.