1 Introduction
The retinal vasculature is the only part of the body's circulatory system that can be observed non-invasively by optical means. A large variety of diseases affect the vasculature in a way that may cause geometrical and functional changes. Retinal images are therefore suitable not only for the investigation of ocular diseases such as glaucoma and age-related macular degeneration (AMD), but also of systemic diseases such as diabetes, hypertension and arteriosclerosis. This makes the retinal vasculature a rewarding and well-researched subject, and a growing number of image processing techniques have been developed to segment and analyze the retinal vasculature Abramoff2010 ; Patton2006 .
Retinal vessel tracking
Typically there are two types of methods for vessel extraction: pixel classification methods Krause2013 ; Philipsen2012 ; Budai2009 ; Odstrcilik2009 and vessel tracking methods Al-Diri2009 ; Can1999 ; Yin2012 ; Grisan2004 ; Espona2007 ; Poletti2011 ; Chutatape1998 . The first type of method classifies pixels as either being part of a vessel or background, resulting in a pixel map in which white pixels represent blood vessels. Of the pixel classification methods, the approach by Krause et al.
Krause2013 is most similar to ours, as both methods rely on a transformation to a higher dimensional domain. In their work, vessel detection is based on the local Radon transform, which we will show later in this article to be a special case of an orientation score transform based on cake wavelets. The other type of method, vessel tracking, is based on recursively expanding a model of the vasculature from a set of seed points. One advantage of vessel tracking over pixel classification is that it guarantees connectedness of vessel segments, whereas in pixel classification methods this is not necessarily the case. For further quantitative analysis of the vasculature, tracking algorithms are preferred because they intrinsically provide geometrical and topological information. For example, vessel widths, curvatures, segment lengths, bifurcation density and other features can relatively easily be extracted from the generated vessel models. Several different approaches to vessel tracking can be found in the literature. There are methods based on active contours Al-Diri2009 ; Espona2007 , matched filters Can1999 ; Grisan2004 ; Chutatape1998 , and probabilistic models Yin2012 , among others Poletti2011 . The majority of papers on vessel tracking report limitations regarding tracking blood vessels through crossings, bifurcations and/or more complex situations. In this paper we aim to solve these problems by means of orientation analysis via so-called orientation scores, which are objects in which image information is neatly organized based on position and orientation Duits2005 . We propose two new tracking algorithms that act directly on the domain of an orientation score, and we show that these methods are highly capable of dealing with the aforementioned problems. Afterwards, we extend one of the orientation score based algorithms to a vasculature tracking algorithm, which is capable of constructing models of the complete retinal vasculature.
Orientation scores
Inspired by the cortical orientation columns in the primary visual cortex Hubel2009 , Duits et al. developed a mathematical framework for image processing and analysis based on 2D orientation scores Duits2007 . Similar to the perceptual organization of orientation in the visual cortex, a 2D orientation score is an object that maps 2D positions and orientation angles to complex scalars. Instead of assigning a single orientation to each position and thereby extending the codomain, we extend the domain of the image, so that our modeling can deal with multiple orientations per position. When constructing an orientation score it is crucial that one does not tamper with the data evidence before tracking takes place. Therefore we consider invertible orientation scores, which provide a full, comprehensive overview of how the image is decomposed into local (possibly multiple) orientations. In invertible orientation scores, all oriented structures are disentangled, see Fig. 1.
Paper structure
The article is structured as follows: First, in Section 2, theory about orientation scores is provided. In Section 3, two vessel tracking approaches based on orientation scores are described:
-
the ETOS-algorithm: an all-scale approach based on a new class of wavelets, the so-called cake wavelets
-
the CTOS-algorithm: a multi-scale approach based on the classical Gabor wavelets
Both tracking methods rely on a novel generic geometrical principle for curve optimization within the Euclidean motion group, which is explained and mathematically underpinned in Appendix A. We will show that ETOS generally works with different types of orientation scores, though with the best performance on invertible orientation scores based on cake wavelets Duits2005 ; Duits2007a (in comparison to non-invertible orientation scores based on Gabor wavelets). The second approach requires a multi-scale and orientation decomposition. The two approaches are described in Section 3.1 and evaluated in Section 3.2. It will turn out that ETOS based on cake wavelets has several advantages over CTOS based on Gabor wavelets. We have validated ETOS more extensively by comparing it to the state of the art in retinal vessel tracking Al-Diri2009 ; Bankhead2012 ; Xu2011 using the publicly available REVIEW database Al-Diri2009 . In Section 4.1 we describe our vasculature tracking algorithm, composed of proper initialization, junction detection and junction resolving algorithms. In Section 4.2 the correctness of the topology of the models is evaluated using images of the HRFI database Budai2011 . General conclusions can be found in Section 5.
2 Orientation scores
Orientation detection and encoding is a common subject in image processing. E.g., Frangi et al. Frangi1998 detect retinal blood vessels by calculating a vesselness value, obtained by eigenvalue analysis of the Hessian matrix. In Soares2006 this is done by taking the maximum modulus over a set of oriented Gabor wavelet responses. Besides the presence of local orientations, the orientation value(s) may be relevant as well, for instance in vessel tracking and in certain image enhancement techniques like coherence enhancing diffusion Weickert1999 and orientation channel smoothing Felsberg2006 . The most commonly used methods to detect orientations are capable of detecting only one orientation per position. However, by using oriented wavelets and steerable filters Freeman1991 ; Simoncelli1994 ; Perona1991 ; Granlund1995 , orientation confidence measures can be extracted for any given orientation, thus allowing for the detection of multiple orientations per position. Oriented wavelets allow for a transformation from an image to an orientation score, where each locally present combination of position and orientation is mapped to a single value Kalitzin1999 ; Duits2007 ; Duits2007a , see Fig. 1.
In his pioneering paper, Kalitzin Kalitzin1999 proposed a specific wavelet, given by
(1) $\psi^n(x, y) = (x + i y)^n\, e^{-\frac{x^2 + y^2}{2}}, \qquad n \in \mathbb{N},$
that, by approximation, guaranteed invertibility from the orientation scores back to the original image, without loss of information. These wavelets belong to a specific class of so-called proper wavelets (wavelets that allow well-posed reconstruction Duits2007 ; Duits2007a ), and are found by expansion in eigenfunctions of the harmonic oscillator. The advantage of this expansion is that this steerable basis is Fourier invariant, allowing control of the wavelet shape in both the spatial and Fourier domain. The disadvantages of such kernels are however that 1) their series do not converge in $\mathbb{L}_2(\mathbb{R}^2)$, and truncation of the pointwise converging series heavily affects the shape and induces undesirable oscillations (Duits2005, p.140-142); 2) the wavelets explode in the radial direction along their orientation (Duits2005, App.7.3); 3) they do not allow approximate reconstruction by integration over $\theta$ only. In Kalitzin's paper the well-posedness of the reconstruction was not quantified. Analysis of the well-posedness was done by Duits using the function $M_\psi$, which will be explained in more detail in Section 2.1. The function $M_\psi$ indicates how well spatial frequencies in an image are preserved after a transformation, and it can be seen as a measure for stability of the inverse transformation Duits2007a ; Fuehr2005 ; Duits2005 ; Duits2007 . Within these papers a new class of proper wavelets, called cake wavelets, is presented. These are oriented wavelets, able to capture all image scales without any bias to a specific scale. This property is crucial in our vessel edge tracking approach since it reduces the need for a multi-scale approach, as will become clear in the next sections.
In contrast, the Gabor wavelet is another oriented wavelet, which is widely used in the field of image processing because of its capability to detect oriented features at a certain scale. An example of segmenting crossing lines using Gabor wavelet based orientation scores is given in Chen2000 . The property to tune the Gabor wavelet to capture features at a specific scale can be very useful, but the scale selection also implies exclusion of other scales. The single-scale Gabor wavelet transformation causes information from the original image to be lost and the transformation is therefore non-invertible. In order to introduce invertibility, one has to use a multi-scale approach Fischer2007 , which is also computationally more expensive.
Since no information should be lost during the orientation score transformation (see Fig. 1(e)), the notion of invertibility of an orientation score transformation is essential. The invertibility allows us to relate operators on images to operators on orientation scores, and vice versa. Using invertible orientation scores, one can employ the automatic disentanglement of the local orientations involved in a crossing (cf. Fig. 1(f)). For line/contour enhancement, this has led to a generic crossing-preserving diffusion method Franken2009 ; Duits2010a ; Franken2008 which outperformed related diffusions acting directly on the image domain. For tracking of crossing blood vessels a similar advantage can be exploited, as we will show in this article.
2.1 Construction of orientation scores
Consider a 2D image as a function $f : \mathbb{R}^2 \to \mathbb{R}$, with compact support on the image domain $\Omega = [0, X] \times [0, Y]$, with image dimensions $X, Y > 0$, and which we assume to be square integrable, i.e. $f \in \mathbb{L}_2(\mathbb{R}^2)$. An orientation score $U_f$, constructed from image $f$, is defined as a function $U_f : \mathbb{R}^2 \times S^1 \to \mathbb{C}$ and depends on two variables $(\mathbf{x}, \theta)$, where $\mathbf{x} = (x, y) \in \mathbb{R}^2$ denotes position and $\theta \in [0, 2\pi)$ denotes the orientation variable.
An orientation score $U_f$ of a function $f$ can be constructed by means of convolution with some anisotropic wavelet $\psi$ via
(2) $U_f(\mathbf{x}, \theta) = \left( \overline{\psi_\theta} \star f \right)(\mathbf{x}) = \int_{\mathbb{R}^2} \overline{\psi\!\left( R_\theta^{-1}(\mathbf{y} - \mathbf{x}) \right)}\, f(\mathbf{y})\, \mathrm{d}\mathbf{y},$
where $\psi_\theta(\mathbf{x}) = \psi(R_\theta^{-1}\mathbf{x})$ is the convolution kernel with orientation $\theta$, i.e. $\psi_0 = \psi$ is aligned with the vertical axis in our convention, and where $\mathcal{W}_\psi : f \mapsto U_f$ denotes the transformation between image $f$ and orientation score $U_f$. The overline denotes the complex conjugate, and the rotation matrix $R_\theta$ is given by
(3) $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$
see Fig. 1(e). Exact reconstruction (the reconstruction formula can easily be verified using the convolution theorem and the fact that $\mathcal{F}\left[ \psi \circ R_\theta^{-1} \right] = (\mathcal{F}\psi) \circ R_\theta^{-1}$) from the orientation scores constructed by (2) is given by
(4) $f = \mathcal{F}^{-1}\!\left[ M_\psi^{-1}\; \mathcal{F}\!\left[ \int_0^{2\pi} \left( \psi_\theta \star U_f(\cdot, \theta) \right) \mathrm{d}\theta \right] \right],$
where $\mathcal{F}$ is the unitary Fourier transform on $\mathbb{L}_2(\mathbb{R}^2)$, $\mathcal{W}_\psi^*$ denotes the adjoint wavelet transformation (see Duits2005 for details), and $M_\psi$ is calculated by
(5) $M_\psi(\boldsymbol{\omega}) = \int_0^{2\pi} \left| \left( \mathcal{F}\psi \right)\!\left( R_\theta^{-1} \boldsymbol{\omega} \right) \right|^2 \mathrm{d}\theta.$
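To make Eqs. (2)-(5) concrete, the following minimal Python sketch builds a discretely sampled orientation score by FFT-based correlation with rotated copies of a wavelet and accumulates the stability function $M_\psi$. The function names and the interface (a user-supplied Fourier-domain wavelet `psi_hat_fn`) are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def orientation_score(f, psi_hat_fn, n_orient=32):
    """Sample U_f(x, theta_i) = (conj(psi_theta) * f)(x) for n_orient orientations
    (Eq. 2) and accumulate the stability function M_psi (Eq. 5).
    psi_hat_fn(wx, wy) must return the Fourier transform of the wavelet psi."""
    h, w = f.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi          # angular frequencies (rad/pixel)
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)                # shape (h, w)
    F = np.fft.fft2(f)
    thetas = np.arange(n_orient) * 2 * np.pi / n_orient
    U = np.empty((n_orient, h, w), dtype=complex)
    M_psi = np.zeros((h, w))
    for i, th in enumerate(thetas):
        # psi_theta(omega) = psi_hat(R_theta^{-1} omega): rotate frequency coordinates
        wxr = np.cos(th) * WX + np.sin(th) * WY
        wyr = -np.sin(th) * WX + np.cos(th) * WY
        psi_hat = psi_hat_fn(wxr, wyr)
        U[i] = np.fft.ifft2(np.conj(psi_hat) * F)      # correlation with psi_theta
        M_psi += np.abs(psi_hat) ** 2
    M_psi *= 2 * np.pi / n_orient                      # Riemann sum over theta
    return U, M_psi, thetas
```

For a proper wavelet, $M_\psi \approx 1$ on the relevant frequency ball, so the division by $M_\psi$ in Eq. (4) barely changes the result, and simply integrating the score over $\theta$ already approximates $f$ (cf. Eq. (16)).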
The function $M_\psi$ provides a stability measure of the inverse transformation. Theoretically, reconstruction is well-posed as long as
(6) $\exists_{\delta > 0}\; \exists_{M < \infty}\; \forall_{\boldsymbol{\omega} \in \mathbb{R}^2}: \quad \delta \leq M_\psi(\boldsymbol{\omega}) \leq M,$
where $\delta$ is arbitrarily small, since then the condition number of $\mathcal{W}_\psi$ is bounded, see (Duits2005, Thm. 20). If we do not restrict ourselves to band-limited/disk-limited functions, this requirement (Eq. (6)) bites the assumption $\psi \in \mathbb{L}_1(\mathbb{R}^2) \cap \mathbb{L}_2(\mathbb{R}^2)$, since it implies that $\mathcal{F}\psi$ is a continuous function vanishing at infinity (see e.g. Rudin1973 ) and so is $M_\psi$. In that case we have to resort to distributional wavelet transforms (comparable to the construction of the unitary Fourier transform, whose kernel is also not square integrable), whose closure is again a unitary map from $\mathbb{L}_2(\mathbb{R}^2)$ into a reproducing kernel subspace of $\mathbb{L}_2(\mathbb{R}^2 \times S^1)$; for details see Appendix B.
In practice, to prevent numerical problems, it is best to aim at $M_\psi(\boldsymbol{\omega}) \approx 1$ for $\|\boldsymbol{\omega}\| \leq \varrho$, where $\varrho$ is the Nyquist frequency of the discretely sampled image, meaning that all relevant frequency components within a ball of radius $\varrho$ are preserved in the same way. Because of the discontinuity at $\|\boldsymbol{\omega}\| = \varrho$, which causes practical problems with the discrete inverse Fourier transform, we will use wavelets $\psi$ for which $M_\psi(\boldsymbol{\omega}) = M_N(\rho)$, with $\rho = \|\boldsymbol{\omega}\|$ and $N \in \mathbb{N}$, and where
(7) $M_N(\rho) = e^{-\frac{\rho^2}{t}} \sum_{k=0}^{N} \frac{(\rho^2/t)^k}{k!},$
where $t > 0$ denotes a scale parameter. To fix the inflection point close to the Nyquist frequency, say at $\rho = \gamma \varrho$ with $0 < \gamma < 1$, we set $t = \frac{2(\gamma\varrho)^2}{1+2N}$ (which fixes the bending point: $M_N''(\gamma\varrho) = 0$, see Fig. 2). The function $M_N$ basically is a Gaussian function at scale $t$, multiplied with the Taylor series of its inverse up to a finite order $2N$ to ensure a slower decay. The function $M_N$ smoothly approximates 1 on the domain $[0, \gamma\varrho)$, see Fig. 2. A wavelet $\psi$ with such an $M_\psi$ will be called a proper wavelet.
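A small numerical sketch of the radial profile (7); fixing $t = \frac{2(\gamma\varrho)^2}{1+2N}$ places the inflection point at $\gamma\varrho$ as described above. The function name and the default $\gamma = 0.8$ are illustrative assumptions, not values taken from the original work.

```python
import numpy as np
from math import factorial

def radial_profile_MN(rho, N=8, gamma=0.8, rho_nyquist=np.pi):
    """M_N(rho): Gaussian at scale t times the Taylor series of its inverse up to
    order 2N, so that M_N stays close to 1 for rho < gamma * rho_nyquist."""
    t = 2.0 * (gamma * rho_nyquist) ** 2 / (1 + 2 * N)   # fixes M_N''(gamma*rho_nyquist) = 0
    u = rho ** 2 / t
    return np.exp(-u) * sum(u ** k / factorial(k) for k in range(N + 1))

# example: the profile is ~1 on [0, gamma*rho_nyquist) and decays smoothly afterwards
rho = np.linspace(0.0, np.pi, 512)
m_values = radial_profile_MN(rho)
```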
The methods presented in this paper are best described by a moving frame of reference that lives in the tangent bundle $T(\mathbb{R}^2 \times S^1)$ above the domain $\mathbb{R}^2 \times S^1$ of an orientation score. This moving frame of reference is given by the map
(8) $(\mathbf{x}, \theta) \mapsto \left\{ \left.\mathbf{e}_\xi\right|_{(\mathbf{x},\theta)},\; \left.\mathbf{e}_\eta\right|_{(\mathbf{x},\theta)},\; \left.\mathbf{e}_\theta\right|_{(\mathbf{x},\theta)} \right\},$
with $(\mathbf{x}, \theta) \in \mathbb{R}^2 \times S^1$ and $\mathbf{e}_\xi, \mathbf{e}_\eta, \mathbf{e}_\theta$ tangent vectors (formally attached at the point $(\mathbf{x}, \theta)$). To simplify the notation we introduce coordinates $\xi = x \cos\theta + y \sin\theta$, $\eta = -x \sin\theta + y \cos\theta$, and tangent vectors
(9) $\mathbf{e}_\xi = \cos\theta\, \mathbf{e}_x + \sin\theta\, \mathbf{e}_y, \qquad \mathbf{e}_\eta = -\sin\theta\, \mathbf{e}_x + \cos\theta\, \mathbf{e}_y, \qquad \mathbf{e}_\theta.$
As a result we have that at a given point $(\mathbf{x}, \theta)$ in the orientation score $U_f$, the tangent vector $\mathbf{e}_\xi$ points in the spatial direction aligned with the orientation score kernel used at layer $\theta$, see Fig. 1(b) and 1(d). We will often rely on the notation in Eq. (9) in the remainder of this article.
2.2 The domain of orientation scores: SE(2)
The domain of an orientation score is the set $\mathbb{R}^2 \times S^1$. However, from Fig. 1(d) one can recognize a curved geometry on the domain of orientation scores. This is reflected in the fact that $\mathbf{e}_\xi$ and $\mathbf{e}_\eta$ vary with $\theta$. This is modeled by imposing a group structure on the set $\mathbb{R}^2 \times S^1$. This group structure comes from rigid body motions $(\mathbf{x}, \theta)$ acting on $\mathbb{R}^2$ via
(10) $(\mathbf{x}, \theta) \cdot \mathbf{x}' = R_\theta\, \mathbf{x}' + \mathbf{x}, \qquad \mathbf{x}' \in \mathbb{R}^2.$
Note that $(\mathbf{x}, \theta) \cdot \mathbf{0} = \mathbf{x}$, which allows us to uniquely identify a rigid body motion with a point $(\mathbf{x}, \theta) \in \mathbb{R}^2 \times S^1$, i.e. to identify the space of positions and orientations with the rigid body motion group $SE(2) = \mathbb{R}^2 \rtimes SO(2)$. As the combination of two rigid body motions is again a rigid body motion, $SE(2)$ is equipped with the group product:
(11) $(\mathbf{x}, \theta) \cdot (\mathbf{x}', \theta') = \left( R_\theta\, \mathbf{x}' + \mathbf{x},\; \theta + \theta' \!\!\mod 2\pi \right),$
which is consistent with Eq. (10). The moving frame of reference (9) corresponds to the so-called left-invariant vector fields in SE(2). For details see (Franken2008, Fig. 2.5a).
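The group product (11) and the rotating frame (9) translate directly into code; the helper names below are our own (a small sketch, useful e.g. for composing tracking steps in Section 3).

```python
import numpy as np

def se2_product(g1, g2):
    """Group product (x, theta) . (x', theta') = (R_theta x' + x, theta + theta' mod 2pi)."""
    (x1, y1, th1), (x2, y2, th2) = g1, g2
    c, s = np.cos(th1), np.sin(th1)
    return (x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, (th1 + th2) % (2 * np.pi))

def left_invariant_frame(theta):
    """Spatial part of the moving frame of Eq. (9): e_xi is aligned with the
    orientation, e_eta is perpendicular to it."""
    e_xi = np.array([np.cos(theta), np.sin(theta)])
    e_eta = np.array([-np.sin(theta), np.cos(theta)])
    return e_xi, e_eta
```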
2.3 Cake wavelets
Cake wavelets are constructed from the Fourier domain. By using polar coordinates, the Fourier domain can be uniformly divided into equally wide samples ("pieces of cake") in the angular direction, see Fig. 3. The spatial wavelet is given by
(12) $\psi(\mathbf{x}) = \left( \mathcal{F}^{-1} \tilde{\psi} \right)(\mathbf{x})\; G_s(\mathbf{x}),$
where $G_s$ is a Gaussian window at scale $s > 0$ that is used to avoid long tails in the spatial domain. Note that multiplication with a large window in the spatial domain corresponds to a convolution with a small window in the Fourier domain, such that $\tilde{\psi}$ is hardly affected when $s$ is sufficiently large. The function $\tilde{\psi}$ is given by
(13) $\tilde{\psi}(\boldsymbol{\omega}) = B^k\!\left( \frac{(\varphi \bmod 2\pi) - \pi/2}{s_\theta} \right) M_N(\rho),$
with $\boldsymbol{\omega} = (\rho \cos\varphi, \rho \sin\varphi)$ and where $s_\theta = \frac{2\pi}{N_o}$ is the angular resolution in radians, with $N_o$ the number of sampled orientations. The function $M_N$ specifies the radial function in the Fourier domain given by (7). $B^k$ denotes the $k$-th order B-spline given by
(14) $B^k(x) = \left( B^{k-1} * B^{0} \right)(x), \qquad B^0(x) = \begin{cases} 1 & \text{if } -\tfrac{1}{2} < x \leq \tfrac{1}{2}, \\ 0 & \text{otherwise}. \end{cases}$
Orientation scores constructed from an image $f$ using cake wavelets are denoted by $U_f$. Fig. 4c demonstrates a typical cake wavelet based orientation score for a certain orientation.
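As an illustration of Eqs. (12)-(14), the sketch below assembles a single cake wavelet in the Fourier domain from a (first-order) B-spline wedge in the angular direction and the radial profile $M_N$, and then applies the spatial Gaussian window. Grid conventions, the default parameters and the restriction to $k = 1$ are our own simplifying assumptions.

```python
import numpy as np
from math import factorial

def radial_profile_MN(rho, N=8, gamma=0.8, rho_nyquist=np.pi):
    t = 2.0 * (gamma * rho_nyquist) ** 2 / (1 + 2 * N)
    u = rho ** 2 / t
    return np.exp(-u) * sum(u ** k / factorial(k) for k in range(N + 1))

def bspline1(x):
    # first-order B-spline (hat function); higher orders follow from Eq. (14)
    return np.maximum(1.0 - np.abs(x), 0.0)

def cake_wavelet(size=64, theta=0.0, n_orient=32, s_window=None):
    """One cake wavelet: angular B-spline wedge times M_N in the Fourier domain
    (Eq. 13), inverse FFT, then a spatial Gaussian window (Eq. 12)."""
    s_theta = 2 * np.pi / n_orient                     # angular resolution
    w = np.fft.fftshift(np.fft.fftfreq(size)) * 2 * np.pi
    WX, WY = np.meshgrid(w, w)
    rho = np.hypot(WX, WY)
    phi = np.arctan2(WY, WX)
    # centre the wedge on the desired orientation (vertical-axis convention)
    ang = np.angle(np.exp(1j * (phi - np.pi / 2 - theta)))
    psi_hat = bspline1(ang / s_theta) * radial_profile_MN(rho)
    psi = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(psi_hat)))
    s_window = s_window if s_window is not None else size / 6.0
    x = np.arange(size) - size // 2
    X, Y = np.meshgrid(x, x)
    return psi * np.exp(-(X ** 2 + Y ** 2) / (2 * s_window ** 2))
```

Because the hat functions centred at the $N_o$ sampled orientations form a partition of unity, the Fourier-domain wedges constructed this way sum to approximately $M_N(\rho)$ over all orientations, which is the proper-wavelet property discussed next.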
The approach of constructing wavelets directly from the Fourier domain allows indirect control over the spatial shape of the filter, and it can easily be adapted. For example, the number of orientations $N_o$ specifies the angular resolution $s_\theta = \frac{2\pi}{N_o}$: if $N_o$ is large, the resolution in the orientation dimension is large and the filters become very narrow. This is illustrated in Fig. 5. The cut-off frequency (at the inflection point) of the function $M_N$, which is usually set at the Nyquist frequency, could be lowered to filter out high-frequency noise components. Moreover, because B-splines and the function $M_N$ are used to sample the Fourier domain, the sum of all cake wavelets is approximately 1 over the entire Fourier domain (within a ball of radius $\varrho$), see Fig. 3 and 5. Thus the cake wavelets indeed are proper wavelets, allowing stable reconstruction via Eq. (4). In Eq. (4) one can omit the division by $M_\psi$, in which case stable reconstruction is obtained both by integration over the full continuum $\theta \in [0, 2\pi)$ and/or over its partially discrete subgroup of $N_o$ sampled orientations $\theta_i = \frac{2\pi i}{N_o}$, with $i = 0, \ldots, N_o - 1$.
Remark
In a distributional limit of the cake wavelets, the wavelet transform converges to the localized Radon transform, which has been proposed for effective retinal vessel detection in Krause2013 . The advantage, however, of not taking this limit is that we obtain well-posed, non-singular kernels in the spatial domain, while still allowing a stable reconstruction for any a-priori set number of orientations.
Cake wavelets are quadrature filters, meaning that the real part contains information about the locally even (symmetric) structures, e.g. ridges, and the imaginary part contains information about the locally odd (antisymmetric) structures, e.g. edges. That is, the real and imaginary part of the filter are related to each other by the Hilbert transform in the direction perpendicular to the wavelet's orientation, which is defined by
(15) $\left( \mathcal{H}_{\mathbf{e}_\eta} \psi \right)(\mathbf{x}) = \frac{1}{\pi}\; \mathrm{p.v.}\!\!\int_{-\infty}^{\infty} \frac{\psi(\mathbf{x} - \tau\, \mathbf{e}_\eta)}{\tau}\, \mathrm{d}\tau,$
where $\mathbf{e}_\eta$ specifies the direction in which the Hilbert transform is performed, recall Eq. (9).
The quadrature property is useful in our vessel tracking approach, since it allows us to directly detect vessel edges from the imaginary part of the orientation scores, without having to calculate first-order derivatives perpendicular to the vessel orientation. In our applications we removed the DC-component of the real part to avoid responses on locally constant images. Fig. 5 shows the real and imaginary part of the cake wavelet in the spatial domain, as well as the coverage of the wavelet in the Fourier domain.
A final remark on cake wavelets: Since the cake wavelets uniformly cover the Fourier domain (within the ball of radius $\varrho$), they allow us to use a fast approximate reconstruction scheme, which is given by integration of the orientation scores over the angles only:
(16) $f(\mathbf{x}) \approx \int_0^{2\pi} U_f(\mathbf{x}, \theta)\, \mathrm{d}\theta,$
for details see Duits2005 .
2.4 Gabor wavelets
Gabor wavelets are directional wavelets, and can be tuned to specific spatial frequencies (and inherently scales). In the field of retinal image processing they are used for vessel detection in various studies Soares2006 ; Li2006 . We can exploit the tuning of the wavelet to specific spatial frequencies to match differently sized blood vessels. The 2D Gabor wavelet is a Gaussian, modulated by a complex exponential, and is defined as:
(17) $\psi^{G}(\mathbf{x}) = c\, e^{i\, \mathbf{k}_0 \cdot \mathbf{x}}\, e^{-\frac{1}{2}\, \mathbf{x}^T A\, \mathbf{x}},$
where $A = \mathrm{diag}(\epsilon, 1)$ with $\epsilon > 0$ is a diagonal matrix that defines the anisotropy of the wavelet. The vector $\mathbf{k}_0$ defines the spatial frequency of the complex exponential and $c$ normalizes the wavelet to unity. In our method we use $\epsilon < 1$, which makes the filter elongated in the $x$-direction, and we choose $\mathbf{k}_0 = (0, k_0)^T$, which causes oscillations perpendicular to the orientation of the wavelet. We can dilate the filter by a scaling parameter $a > 0$:
(18) $\psi^{G}_a(\mathbf{x}) = \frac{1}{a}\, \psi^{G}\!\left( \frac{\mathbf{x}}{a} \right).$
Orientation scores constructed from an image $f$ using Gabor wavelets at scale $a$ are denoted by $U_f^{a}$. Fig. 4d-e demonstrate typical Gabor wavelet based orientation scores at a small and large scale respectively, for a certain orientation.
The real and imaginary part of the wavelet in the spatial domain, as well as the coverage of the wavelet in the Fourier domain are shown in Fig. 5. Similar to the cake wavelets, Gabor wavelets also have the quadrature property orthogonal to their orientation.
In the Fourier domain, the Gabor filters are represented as Gaussian functions shifted from the origin by the wave vector $\mathbf{k}_0$. The set of all rotated Gabor functions at a certain scale therefore covers a certain annulus in the Fourier domain, and a single Gabor wavelet can thus be regarded as an oriented band-pass filter. This is also clearly depicted by the outlines of the frequency responses shown in Fig. 5.
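A corresponding sketch of the (dilated) Gabor wavelet of Eqs. (17)-(18). The anisotropy and wave-vector settings below merely mimic the description above (elongation along $x$, oscillation perpendicular to it); the default values are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def gabor_wavelet(size=64, scale=1.0, epsilon=0.25, k0=1.0):
    """2D Gabor wavelet: anisotropic Gaussian envelope exp(-(eps*x^2 + y^2)/2),
    elongated along x, modulated by a complex exponential oscillating in y,
    and dilated by `scale` as in Eq. (18)."""
    x = (np.arange(size) - size // 2) / scale
    X, Y = np.meshgrid(x, x)
    envelope = np.exp(-0.5 * (epsilon * X ** 2 + Y ** 2))
    carrier = np.exp(1j * k0 * Y)            # wave vector (0, k0): oscillation in y
    psi = envelope * carrier
    return psi / np.linalg.norm(psi)         # unit (discrete) L2 norm
```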
2.5 Double-sided vs single-sided wavelets, orientation vs direction
The cake and Gabor wavelets are double-sided wavelets which do not distinguish between a forward and a backward direction (they are symmetric with respect to the $y$-axis). In order to distinguish between orientations ($\pi$-periodic symmetries) and directions ($2\pi$-periodic symmetries), see Fig. 6, and to be able to handle bifurcations, we decompose the orientation scores into a forward and a backward direction (i.e. we extend the domain of our orientation scores to the group $\mathbb{R}^2 \rtimes O(2)$, where $O(2)$ also includes, besides rotations (with determinant $1$), reflections (with determinant $-1$)), denoted by a $+$ and $-$ symbol respectively:
(19) $U_f^{\pm}(\mathbf{x}, \theta) = \left( \overline{\psi_\theta^{\pm}} \star f \right)(\mathbf{x}),$
where
(20) $\psi_\theta^{\pm}(\mathbf{x}) = \psi^{\pm}\!\left( R_\theta^{-1} \mathbf{x} \right),$
and where
(21) $\psi^{\pm}(\mathbf{x}) = \psi(\mathbf{x})\, m^{\pm}(\mathbf{x}),$
with
(22) $m^{\pm}(x, y) = \tfrac{1}{2}\left( 1 \pm \operatorname{erf}\!\left( \tfrac{y}{\sigma} \right) \right).$
Note that by using the error function, we have $m^{+} + m^{-} = 1$ and $\psi^{+} + \psi^{-} = \psi$, so that $U_f^{+} + U_f^{-} = U_f$. It is thus possible to choose one of the single-sided wavelets to construct a directional orientation score, while still being able to access the original (double-sided) orientation score. Fig. 6 demonstrates the advantage of using single-sided wavelets over double-sided wavelets in the case of direction estimation based on the orientation column of a score.
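A minimal sketch of this splitting, assuming the error-function window of Eqs. (21)-(22) is applied in the spatial domain along the wavelet's orientation axis; the window scale $\sigma$ and the array layout (first axis = $y$) are our own assumptions.

```python
import numpy as np
from scipy.special import erf

def split_forward_backward(psi, sigma=2.0):
    """Split a spatial wavelet psi (aligned with the vertical axis) into forward
    and backward single-sided parts with a smooth erf step along its orientation
    axis; by construction psi_fwd + psi_bwd = psi."""
    size = psi.shape[0]
    y = np.arange(size) - size // 2             # coordinate along the orientation axis
    window = 0.5 * (1.0 + erf(y / sigma))       # ~0 behind the centre, ~1 ahead of it
    psi_fwd = psi * window[:, None]
    psi_bwd = psi * (1.0 - window)[:, None]
    return psi_fwd, psi_bwd
```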
3 Vessel tracking in orientation scores via optimization in transversal tangent planes
In orientation scores, information from the original image is both maintained and neatly organized in different orientations, leading to the disentanglement of crossing structures. Moreover, because of the quadrature property of the wavelets used in the construction of orientation scores, important edge information is well represented in the imaginary part of the score. We therefore propose tracking algorithms that directly act on the orientation score:
-
The ETOS algorithm: Edge Tracking based on Orientation Scores (Section 3.1.1).
-
The CTOS algorithm: Centerline Tracking based on multi-scale Orientation Scores (Section 3.1.2).
In both tracking algorithms we rely on a fundamental geometric principle to extract the most probable paths in orientation scores. Consider to this end Fig. 7a, where a track is considered locally optimal if it is locally optimized in each transversal 2D-tangent plane (tangent vectors can be considered as differential operators acting on smooth locally defined functions Aubin ; in our case this boils down to replacing $\mathbf{e}_\xi$ by $\partial_\xi$, $\mathbf{e}_\eta$ by $\partial_\eta$ and $\mathbf{e}_\theta$ by $\partial_\theta$)
(23) $\left.\operatorname{span}\left\{ \mathbf{e}_\eta, \mathbf{e}_\theta \right\}\right|_{(\mathbf{x}, \theta)},$
spanned by $\mathbf{e}_\eta$ and $\mathbf{e}_\theta$ in the tangent bundle $T(\mathbb{R}^2 \times S^1)$, where $T_{(\mathbf{x},\theta)}(\mathbb{R}^2 \times S^1)$ denotes the tangent space at $(\mathbf{x}, \theta)$. For more details on this principle we refer to Appendix A.
3.1 Methods
3.1.1 ETOS: Edge tracking in orientation scores
The ETOS algorithm tracks both vessel edges simultaneously through an orientation score. The method iteratively expands a blood vessel model by detecting, at each forward step $i$, the optimal edge locations $\mathbf{x}_i^l$, $\mathbf{x}_i^r$ from the orientation score. Here $\mathbf{x}_i^l$ and $\mathbf{x}_i^r$ denote the 2D left and right edge position respectively, and $\theta_i$ denotes the orientation of the blood vessel. At each iteration the vessel center point $\mathbf{x}_i^c$ and the vessel width $w_i$ are defined as follows:
(24) $\mathbf{x}_i^c = \tfrac{1}{2}\left( \mathbf{x}_i^l + \mathbf{x}_i^r \right),$
(25) $w_i = \left\| \mathbf{x}_i^r - \mathbf{x}_i^l \right\|.$
To describe our method we will rely on a moving frame of reference with basis vectors $\mathbf{e}_\xi$, $\mathbf{e}_\eta$ and $\mathbf{e}_\theta$, which are described by the orientation parameter $\theta$ and Eq. (9).
In our method the edge positions $\mathbf{x}_i^l$ and $\mathbf{x}_i^r$ are detected in the orientation score from the tangent plane (yellow plane in Fig. 7(a)). An edge can be detected as a local optimum of the imaginary part of this plane; a local minimum and maximum for the left and right edge respectively (as indicated by the two red dots in Fig. 7(c)). A schematic overview of the tracking process, including the symbols used in this section, is presented in Fig. 8.
For the sake of speed and simplicity, we follow a 2-step approach in which the detection of the optimal edge positions is separated into two 1D optimization tasks that simply involve the detection of local minima (left edges) and maxima (right edges): in step 1 the edge locations $\mathbf{x}_i^l$ and $\mathbf{x}_i^r$ are optimized in the $\mathbf{e}_\eta$ direction ($\eta$-optimization), in step 2 the corresponding orientation $\theta_i$ is optimized in the $\mathbf{e}_\theta$ direction ($\theta$-optimization), see Fig. 7b-d. By exploiting the continuity properties of blood vessels (e.g. gradually varying vessel widths), we use a paired edge tracking approach in which the left and right edges are detected simultaneously. This approach has the advantage that even if one of the edges is less apparent in the image (e.g. at crossing points, parallel vessels and bifurcations), both edges and their orientation can still be tracked. A possible disadvantage of this approach is, however, that abrupt changes in vessel width (e.g. at stenoses and aneurysms) may go unnoticed or be detected with less detail.
Step 1: Based on a-priori knowledge about the previous vessel orientation, edges and center point, stored in $\left\{ \theta_{i-1}, \mathbf{x}_{i-1}^l, \mathbf{x}_{i-1}^r, \mathbf{x}_{i-1}^c \right\}$, a new vessel center point is calculated as
(26) $\hat{\mathbf{x}}_i^c = \mathbf{x}_{i-1}^c + \lambda\, \mathbf{e}_\xi(\theta_{i-1}),$
where $\lambda$ (typically in the order of 2 pixels) is the tracking step-size and $\mathbf{e}_\xi(\theta) = (\cos\theta, \sin\theta)^T$. New edge points are selected from a set of points on a line going through the estimated center point $\hat{\mathbf{x}}_i^c$ and perpendicular to the vessel orientation $\theta_{i-1}$:
(27) $\left\{ \mathbf{x}_i(d) \;\middle|\; d \in [-d_{\max}, d_{\max}] \right\},$
with
(28) $\mathbf{x}_i(d) = \hat{\mathbf{x}}_i^c + d\; \mathbf{e}_\eta(\theta_{i-1}),$
where $d$ is a parameter describing the (signed) distance to the estimated vessel center point along $\mathbf{e}_\eta(\theta_{i-1}) = (-\sin\theta_{i-1}, \cos\theta_{i-1})^T$, and $d_{\max}$ denotes the maximum distance to the estimated vessel center point. Note that the orientation $\theta_{i-1}$ of the previous iteration is used, as the new orientation is yet to be detected.
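A minimal sketch of Eqs. (26)-(28): predict the next centre point one step along the current orientation and lay out a scan line perpendicular to it (function and parameter names are ours).

```python
import numpy as np

def predict_scan_line(x_prev_center, theta_prev, step=2.0, d_max=10.0, n_samples=41):
    """Return the predicted centre x_hat = x_prev + step * e_xi(theta) (Eq. 26) and
    scan-line points x(d) = x_hat + d * e_eta(theta), d in [-d_max, d_max] (Eqs. 27-28)."""
    e_xi = np.array([np.cos(theta_prev), np.sin(theta_prev)])
    e_eta = np.array([-np.sin(theta_prev), np.cos(theta_prev)])
    x_hat = np.asarray(x_prev_center, dtype=float) + step * e_xi
    d = np.linspace(-d_max, d_max, n_samples)
    points = x_hat[None, :] + d[:, None] * e_eta[None, :]
    return x_hat, d, points
```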
An intensity profile $p_i$ can then be obtained from the orientation scores according to
(29) $p_i(d) = U_f\!\left( \mathbf{x}_i(d),\, \theta_{i-1} \right),$
see Fig. 7(d). New edge points can now easily be found by detecting the local optima of the imaginary part of this profile. However, the detection of vessel edges is made more robust by taking into account that the vessel wall is a continuous structure, and that the width of a blood vessel changes gradually rather than abruptly. Therefore, we introduce the edge probability envelope, which is used to indicate the most likely position of the vessel edges and consists of two Gaussian distributions, one around the expected left vessel edge position and one around the right vessel edge position. The envelope function is given by:
(30) $E_\mu(d) = \exp\!\left( -\frac{\left( d - \left( \mu - \bar{w}_i/2 \right) \right)^2}{2\sigma_e^2} \right) + \exp\!\left( -\frac{\left( d - \left( \mu + \bar{w}_i/2 \right) \right)^2}{2\sigma_e^2} \right),$
where $\mu$ is the estimated location of the vessel center on $p_i$, and $\bar{w}_i$ is the mean vessel width calculated over the last $n$ iterations:
(31) $\bar{w}_i = \frac{1}{n} \sum_{j=i-n}^{i-1} w_j.$
The standard deviation of the Gaussian distributions is denoted by $\sigma_e$, and $\mu$ is used to align the envelope with the actual vessel profile. A robust value for $\mu$ is found by optimizing the cross-correlation of the envelope with the imaginary part of the actual profile:
(32) $\mu_i = \underset{\mu}{\operatorname{argmax}} \sum_{d} E_\mu(d)\, \left| \operatorname{Im}\!\left( p_i(d) \right) \right|,$
see Fig. 9c. The left and right edges are finally detected as the arguments corresponding to the minimum and maximum points, $d_i^l$ and $d_i^r$ respectively, of the product of the envelope and the intensity profile:
(33) $d_i^l = \underset{d}{\operatorname{argmin}}\; E_{\mu_i}(d)\, \operatorname{Im}\!\left( p_i(d) \right), \qquad d_i^r = \underset{d}{\operatorname{argmax}}\; E_{\mu_i}(d)\, \operatorname{Im}\!\left( p_i(d) \right),$
see Fig. 9d for an example. The new edge points can then be assigned by
(34) $\mathbf{x}_i^l = \mathbf{x}_i\!\left( d_i^l \right), \qquad \mathbf{x}_i^r = \mathbf{x}_i\!\left( d_i^r \right).$
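The envelope-based edge detection of Eqs. (30)-(34) can be summarized in a few lines; the sketch below is a simplified stand-in with our own parameter names (in particular, the alignment score of Eq. (32) is implemented here as a correlation with the modulus of the imaginary part).

```python
import numpy as np

def detect_edges(profile_imag, d, mean_width, sigma_e=2.0, shifts=None):
    """profile_imag: Im(p_i(d)) sampled on the scan-line offsets d (Eq. 29)."""
    if shifts is None:
        shifts = d                                   # candidate envelope centres mu

    def envelope(mu):
        # two Gaussians around the expected left and right edge positions (Eq. 30)
        left = np.exp(-((d - (mu - mean_width / 2)) ** 2) / (2 * sigma_e ** 2))
        right = np.exp(-((d - (mu + mean_width / 2)) ** 2) / (2 * sigma_e ** 2))
        return left + right

    # align the envelope with the profile (Eq. 32)
    scores = [np.sum(envelope(mu) * np.abs(profile_imag)) for mu in shifts]
    mu_best = shifts[int(np.argmax(scores))]

    weighted = envelope(mu_best) * profile_imag      # Eq. (33)
    d_left = d[int(np.argmin(weighted))]             # left edge: minimum
    d_right = d[int(np.argmax(weighted))]            # right edge: maximum
    return d_left, d_right                           # map back to 2D via Eq. (34)
```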
Step 2: Orientation $\theta_i$ can be estimated by selecting the orientation that provides the highest orientation score response at both vessel edges. The orientation score response is a combination of the orientation columns at the left and right edges ($U_f(\mathbf{x}_i^l, \cdot)$ and $U_f(\mathbf{x}_i^r, \cdot)$ resp.), and the optimal orientation is calculated as:
(35) $\theta_i = \underset{\theta}{\operatorname{argmax}} \left( \left| U_f\!\left( \mathbf{x}_i^l, \theta \right) \right| + \left| U_f\!\left( \mathbf{x}_i^r, \theta \right) \right| \right).$
Finally the new center point $\mathbf{x}_i^c$, which may not be equal to the estimate $\hat{\mathbf{x}}_i^c$, is calculated as the point between the two edges, according to Eq. (24).
3.1.2 CTOS: Multi-scale vessel center-line tracking in orientation scores
In this section the scale-selective property of the Gabor wavelets is exploited in the design of a fast orientation score based method called CTOS. The potential presence of a central light reflex in a blood vessel makes the design of a simple and fast centerline tracking algorithm based on local minima tracking in the image nearly impossible. However, by using Gabor wavelets and appropriate scale selection, central light reflexes can be filtered out such that vessel center points can easily be detected as local minima on detection profiles (see Fig. 10).
The CTOS algorithm follows the same geometric principle as the ETOS algorithm (for a mathematical underpinning see Appendix A); however, CTOS acts on the real part of orientation scores in order to find the vessel center line, rather than the vessel edges. In this algorithm each iteration consists of 3 steps: in step 1 the center point is detected ($\eta$-optimization), in step 2 the orientation is detected ($\theta$-optimization) and in step 3 the vessel scale is detected ($a$-optimization).
Step 1: Using the vessel center point $\mathbf{x}_{i-1}^c$, orientation $\theta_{i-1}$ and scale $a_{i-1}$, which were detected during iteration $i-1$, phase 1 at iteration $i$ is started by estimating the new center point $\hat{\mathbf{x}}_i^c$ as given by Eq. (26). The new center point is selected from a set of candidate points $\mathbf{x}_i(d)$ as given by Eq. (28). From the candidate points a center point detection profile is obtained by:
(36) $q_i(d) = \operatorname{Re}\!\left( U_f^{a_{i-1}}\!\left( \mathbf{x}_i(d),\, \theta_{i-1} \right) \right),$
with $U_f^{a_{i-1}}$ the orientation score generated by the Gabor wavelets at scale $a_{i-1}$. The new center point is detected as the coordinate belonging to the local minimum on the intensity profile nearest to the estimated center point ($d = 0$):
(37) $\mathbf{x}_i^c = \mathbf{x}_i\!\left( d_i^c \right), \qquad d_i^c = \underset{d\, \in\, \operatorname{localmin}(q_i)}{\operatorname{argmin}} \left| d \right|.$
Step 2: The new vessel orientation $\theta_i$ is detected as the local maximum of the negative orientation response, nearest to the previous vessel orientation $\theta_{i-1}$:
(38) $\theta_i = \underset{\theta\, \in\, \operatorname{localmax}\left( -\operatorname{Re}\, U_f^{a_{i-1}}\left( \mathbf{x}_i^c,\, \cdot \right) \right)}{\operatorname{argmin}} \left| \theta - \theta_{i-1} \right|.$
Step 3: At the new center point $\mathbf{x}_i^c$ and orientation $\theta_i$, scale $a_i$ is detected as the scale that gives the largest negative response:
(39) $a_i = \underset{a}{\operatorname{argmax}} \left( -\operatorname{Re}\!\left( U_f^{a}\!\left( \mathbf{x}_i^c,\, \theta_i \right) \right) \right).$
Compared to the ETOS algorithm, this algorithm is very fast since it only requires three basic (deterministic) detection steps. However the combination of scale and orientation detection makes the algorithm slightly less stable: orientation detection depends on the correct detection of scale and vice versa.
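A compact sketch of one CTOS iteration (Eqs. (36)-(39)) under our own interface assumptions: `scores` maps each scale to a complex array of shape (orientations, height, width), scan-line points are rounded to the nearest pixel, and the orientation update is simplified to a global maximum.

```python
import numpy as np

def ctos_step(scores, thetas, scan_points, d, theta_prev, a_prev):
    """scores: dict scale -> complex array (n_orient, H, W); scan_points: (n, 2) in (x, y)."""
    th_prev = int(np.argmin(np.abs(thetas - theta_prev)))

    # Step 1: centre-point profile (Eq. 36), local minimum nearest to d = 0 (Eq. 37)
    ij = np.round(scan_points[:, ::-1]).astype(int)       # (row, col); assumed inside image
    q = scores[a_prev][th_prev, ij[:, 0], ij[:, 1]].real
    interior = np.arange(1, len(q) - 1)
    minima = interior[(q[interior] < q[interior - 1]) & (q[interior] < q[interior + 1])]
    k = minima[np.argmin(np.abs(d[minima]))] if len(minima) else int(np.argmin(q))
    x_new, (r, c) = scan_points[k], ij[k]

    # Step 2: orientation with the strongest negative (real-part) response (Eq. 38, simplified)
    theta_new = thetas[int(np.argmax(-scores[a_prev][:, r, c].real))]

    # Step 3: scale with the largest negative response (Eq. 39)
    th_new = int(np.argmin(np.abs(thetas - theta_new)))
    a_new = max(scores, key=lambda a: -scores[a][th_new, r, c].real)
    return x_new, theta_new, a_new
```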
Table 1: Width measurement results on the four REVIEW image sets (%: success rate; Mean: mean measured vessel width; σ: standard deviation of the width error, Eq. (40)).

| Method | KPIS % | KPIS Mean | KPIS σ | CLRIS % | CLRIS Mean | CLRIS σ | VDIS % | VDIS Mean | VDIS σ | HRIS % | HRIS Mean | HRIS σ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Standard | 100.0 | 7.51 | 0.00 | 100.0 | 13.79 | 0.00 | 100.0 | 8.83 | 0.00 | 100.0 | 4.35 | 0.00 |
| O1 | 100.0 | 7.00 | 0.23 | 100.0 | 13.19 | 0.57 | 100.0 | 8.50 | 0.54 | 100.0 | 4.12 | 0.27 |
| O2 | 100.0 | 7.60 | 0.21 | 100.0 | 13.69 | 0.70 | 100.0 | 8.91 | 0.62 | 100.0 | 4.35 | 0.28 |
| O3 | 100.0 | 7.97 | 0.23 | 100.0 | 14.52 | 0.57 | 100.0 | 9.15 | 0.67 | 100.0 | 4.58 | 0.30 |
| Gregson | 100.0 | 7.29 | 0.60 | 100.0 | 12.80 | 2.84 | 100.0 | 10.07 | 1.49 | 100.0 | 7.64 | 1.48 |
| HHFW | 96.3 | 6.47 | 0.39 | 0.0 | – | – | 78.4 | 7.94 | 0.88 | 88.3 | 4.97 | 0.93 |
| 1DG | 100.0 | 4.95 | 0.40 | 98.6 | 6.30 | 4.14 | 99.9 | 5.78 | 2.11 | 99.6 | 3.81 | 0.90 |
| 2DG | 100.0 | 5.87 | 0.34 | 26.7 | 7.00 | 6.02 | 77.2 | 6.59 | 1.33 | 98.9 | 4.18 | 0.70 |
| ESP | 100.0 | 6.56 | 0.33 | 93.0 | 15.70 | 1.47 | 99.6 | 8.80 | 0.77 | 99.7 | 4.63 | 0.42 |
| Graph | 99.4 | 6.38 | 0.67 | 94.1 | 14.05 | 1.78 | 96.0 | 8.35 | 1.43 | 100.0 | 4.56 | 0.57 |
| ARIA | 100.0 | 6.30 | 0.29 | 100.0 | 14.27 | 0.95 | 99.0 | 8.07 | 0.95 | 99.5 | 4.66 | 0.32 |
| ETOS | 100.0 | 6.14 | 0.36 | 100.0 | 14.03 | 0.53 | 99.87 | 8.36 | 0.80 | 99.83 | 4.95 | 0.45 |
3.2 Validation
The algorithms were tested on the green channel of color fundus images. For each image the luminosity is normalized by removing low-frequency luminosity drifts. The low-frequency drifts are detected by large-scale Gaussian blurring of the image and are subtracted from the original image.
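A sketch of this normalization; the blur scale below is an assumed value, not the one used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_luminosity(green_channel, sigma_large=60.0):
    """Subtract a heavily blurred version of the green channel to remove
    low-frequency luminosity drifts."""
    f = green_channel.astype(float)
    return f - gaussian_filter(f, sigma=sigma_large)
```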
3.2.1 Algorithm behavior at complex vessel junctions
A qualitative validation is done using a challenging set of 4 sub-images (see Fig. 11), which were taken from the high-resolution fundus images of the HRFI database Budai2011 . This set of sub-images contains crossings, overlapping bifurcations with crossings, small vessels crossing large vessels, small vessels, curved vessels, parallel vessels, etc. In each sub-image we manually placed seed points at the start of each blood vessel and at each bifurcation. In total 27 seed points were marked. Each seed point contains the initial vessel center position, left edge position, right edge position and orientation, denoted by $\mathbf{x}_0^c$, $\mathbf{x}_0^l$, $\mathbf{x}_0^r$ and $\theta_0$ respectively. The initial scale for the CTOS algorithm is detected as the scale that provides the largest scale response at $\mathbf{x}_0^c$ and $\theta_0$ (see Eq. (39)).
The tracking experiments are conducted using a fixed set of tracking parameters: the step size $\lambda$ (in pixels), the half-width $d_{\max}$ of the scan line (in pixels), the number of orientations $N_o$ used to construct the orientation scores, and the standard deviation $\sigma_e$ of the Gaussian distributions used in the edge probability envelope.
The ETOS algorithm was tested on both invertible orientation scores, which were constructed with cake wavelets, and non-invertible orientation scores, which were constructed with Gabor wavelets. The scale of the Gabor wavelets was chosen such that the relevant vessel features were represented as well as possible in the orientation scores (e.g. a scale too large would only represent the very large blood vessels correctly, and a scale too low only the small vessels), and we selected the scale that gave the best results. For the CTOS algorithm we used a set of orientation scores constructed by Gabor wavelets at scales $a$ = 5, 10, 15, 20, 25 and 30.
Results of the tracking experiments are shown in Fig. 11. From this figure we see that, at complex situations, the ETOS method (column 2 and 3) outperforms the CTOS method (column 4). Best results are obtained when ETOS is used with invertible orientation scores (column 2). The ETOS algorithm acting on the invertible orientation scores generated by the cake kernels only fails to correctly track blood vessel nr 5 from image 3. The algorithm gives excellent results for all other vessels and manages to track the blood vessels through all complex situations, even when the contrast of the vessel edges is very low.
The performance of the ETOS algorithm is slightly decreased when applied to non-invertible orientation scores based on Gabor filters. It now fails to track 3 vessels correctly. The scale selective property of the Gabor wavelets, resulting in non-invertible orientation scores, causes the ETOS algorithm to perform less accurately compared to the application to invertible orientation scores.
The CTOS algorithm, which relies on a multi-scale orientation score approach, has the lowest performance. It fails to correctly track 5 blood vessels. In some cases, incorrect scale selection causes small parallel blood vessels to be detected as one large blood vessel (vessel 2 with 4, and 3 with 6 in the first image). Other tracks failed as a result of incorrect orientation detection.
In conclusion we can state that ETOS outperforms CTOS and that it gives best results when applied on invertible orientation scores.
3.2.2 Validation of width measurements
In the previous section we showed that the ETOS algorithm is very well capable of tracking blood vessels through complex situations. In this section we quantitatively validate the reliability of the vessel widths measured by the ETOS algorithm. This is done by comparing the measured widths to ground truth width measurements provided by the REVIEW database Al-Diri2009 . The REVIEW database consists of 16 color fundus images, which can be divided into 4 subsets: 1) the kick point image set (KPIS), 2) the central light reflex image set (CLRIS), 3) the vascular disease image set (VDIS) and 4) the high resolution image set (HRIS). Each image set represents images of different quality and resolution, and all are provided with manual width measurements performed by three individual observers. A ground truth of the vessel widths is constructed by averaging the measurements of the observers. The HRIS set contains high resolution images (3584x2438 pixels), which were down-sampled by a factor of four before submission to the ETOS algorithm. For more information on the dataset we refer to Al-Diri2009 .
When testing our algorithm, we tracked each segment by initializing the algorithm using the first pair of manually marked edge points. The same parameters that are described in Section 3.2.1 were used. Fig. 13 shows a selection of the tracking results in comparison to ground truth vessel edge labeling.
In total 5066 vessel width measurements are available. The error between automated measurements and the ground truth measurements is defined as
(40) $\epsilon_i = w_i - w_i^{GT},$
where $w_i$ is the estimated width as measured by the ETOS algorithm (recall Eq. (25)), and $w_i^{GT}$ is the ground truth width of the $i$-th profile. To be able to compare our method with others we follow the same validation procedure as described in Al-Diri2009 ; Bankhead2012 ; Xu2011 , where the main focus is on the standard deviation of the errors. This is motivated by the idea that different implicit definitions of vessel widths may lead to consistent errors. If this bias is consistent enough, the error could easily be accounted for by subtraction of a bias constant. A low standard deviation of the errors indicates that the error is consistent.
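The figures of merit used below reduce to the bias and standard deviation of the per-profile errors of Eq. (40); a small sketch (array names are ours):

```python
import numpy as np

def width_error_stats(w_measured, w_ground_truth):
    """Return (mean error, standard deviation of the error) for Eq. (40)."""
    eps = np.asarray(w_measured, dtype=float) - np.asarray(w_ground_truth, dtype=float)
    return eps.mean(), eps.std()
```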
Fig. 12: A scatter plot of 5059 ground truth widths against the widths measured by our ETOS algorithm. The linear regression model $w^{\mathrm{meas}} = 0.88\, w^{GT} + 0.85$ indicates an offset of less than a pixel, suggesting that ETOS slightly over-estimates the vessel widths. The slope of 0.88 indicates a strong positive relation between the ground truth and measured widths.
Table 1 shows the validation results of our ETOS algorithm, in comparison with methods by other authors that published their results using the same database Al-Diri2009 ; Bankhead2012 ; Xu2011 . The first four rows of the table show the results of the manual annotations (observer 1, 2 and 3) and the golden standard. The next four rows show results of four classic approaches to vessel width measurements:
-
Gregson: a rectangle is fitted to the vessel intensity profile, and the width is set such that the area under the rectangle equals the area under the profile Gregson1995 .
-
Half Height Full Width (HHFW): the standard half-height method, which uses thresholds set half-way between the maximum and minimum intensities at either side of an estimated center point Brinchmann-Hansen1986 .
-
1D Gaussian (1DG): a 1D Gaussian model is fit to the vessel intensity profile Zhou1994 .
-
2D Gaussian: a 2D Gaussian model is fit to the vessel intensity profile Lowell2004 .
The next three rows give results of the most recent, state of the art methods that published their results:
-
The Extraction of Segment Profiles (ESP) is an active contour algorithm by Al-Diri et al. Al-Diri2009 .
-
The Graph method is a graph based edge segmentation technique developed by Xu et al. Xu2011 .
-
The Automated Retinal Image Analyzer (ARIA) is an algorithm developed by Bankhead et al. Bankhead2012 ; they used a wavelet approach to vessel segmentation, after which the edge locations are refined.
The last row shows the results we achieved using our ETOS algorithm.
The column labeled with % shows the success rate; it indicates how many width measurements could successfully be validated (for more detail see Al-Diri2009 ). The success percentage is smaller than 100% whenever measurements failed to converge, e.g. when the distance between the ground truth and measured edge pair was too large. The column labeled with Mean indicates the mean vessel width of all the measured vessel profiles. The column labeled with σ indicates the standard deviation of the error (Eq. (40)); a lower σ is favorable since it indicates that the error is consistent.
From Table 1 it can be observed that ESP, Graph, ARIA and our ETOS algorithm all outperform the classic width measurement techniques. Also compared to the state of the art methods, our algorithm scores very well. The ETOS algorithm performs remarkably well on the CLRIS dataset, which contains a large number of vessels with a central light reflex. For these images, the standard deviation of the errors is even lower than that of the observers. For the other datasets, our method's performance is comparable to the state of the art.
Fig. 12 shows a scatter plot of the ground truth widths against the widths measured by our ETOS algorithm, together with a linear regression model that was fit through these points. The points are very much centered around the identity line, indicating a strong positive correlation. This is confirmed by the slope of the linear regression model, which is close to 1. The low number of outliers in the scatter plot confirms the low standard deviation of the errors, as demonstrated by Table 1. The offset of 0.85, together with the slope of 0.88, indicates that the ETOS algorithm has the tendency to slightly overestimate the width for vessels of up to 7 pixels wide and to underestimate it for larger vessel sizes. We conclude that our ETOS algorithm, which is highly capable of tracking blood vessels through all sorts of complex situations, also provides reliable width measurements.
4 Vasculature tracking
In this section we describe additions to the ETOS algorithm, so as to be able to construct models of the complete retinal vasculature. Our vasculature tracking algorithm consists of:
-
Optic disk detection.
-
Seed point detection in the optic disk region.
-
Correct initialization of the ETOS algorithm by robust initial edge detection.
-
Automatic termination based on a set of stopping criteria.
-
Junction detection, classification and numbering.
-
Junction resolving.
Each of these items is described in Sections 4.1.1 to 4.1.6 and the complete algorithm is validated in Section 4.2.
4.1 Methods
4.1.1 Optic disk detection
Since the blood vessels of interest all enter the eye through the optic disk, vasculature tracking is initiated in this region. The detection of the optic disk occurs in two phases. In the first phase a rough estimate of the optic disk position is made using variance filtering, as proposed by Sinthanayothin Sinthanayothin1999 . This method exploits the large variance in pixel intensities that occurs in the optic disk region. Next, the estimated optic disk position is refined by edge focusing Bergholm1987 on the optic disk boundary, followed by a weighted Hough transform Hough1962 for circles. Here we assume that the optic disk is approximately circular. To avoid disturbance by blood vessels during the detection of the optic disk boundary, the vessels are first removed by applying a closing operator on the red channel of the image (Fig. 14b). Edge focusing is performed on a star-shaped set of profiles (Fig. 14c) and the detected edge positions are used as input for the weighted Hough transform for circles. The weight of each edge position is determined by the scale up to which the edge can be traced in Gaussian scale space (before it reaches a so-called toppoint Florack2000 ; Johansen1994 ).
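A rough sketch of the two-phase detection under stated assumptions: a (slow but simple) local-variance filter for the coarse estimate, morphological closing of the red channel to suppress vessels, and a circular Hough transform on a Canny edge map as a simplified stand-in for the edge-focusing step, which is not reproduced here. The radius range and structuring-element size are assumptions that depend on image resolution.

```python
import numpy as np
from scipy.ndimage import generic_filter
from skimage.morphology import closing, disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_optic_disk(red_channel, variance_window=31, radii=np.arange(40, 90)):
    """Return a coarse centre estimate and a refined (centre, radius) in pixels."""
    img = red_channel.astype(float)
    # Phase 1: coarse localisation at the point of maximal local intensity variance
    variance = generic_filter(img, np.var, size=variance_window)
    coarse_rc = np.unravel_index(np.argmax(variance), variance.shape)
    # Phase 2: suppress vessels, detect boundary edges, fit a circle with the Hough transform
    no_vessels = closing(img, disk(7))
    edges = canny(no_vessels)
    accumulator = hough_circle(edges, radii)
    _, cx, cy, rad = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
    return coarse_rc, (int(cx[0]), int(cy[0])), int(rad[0])
```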
The optic disk radius $r_{od}$, found by the Hough transform for circles, describes the size of the optic disk. For adult human eyes, the average physical radius of the optic disk, as well as the typical caliber of the vessels in the optic disk region, is known from the literature Tasman2009 . While the resolution of retinal images may vary from camera to camera, the physical dimensions of the human eye are rather constant between individuals. In order to make our algorithm generally applicable to retinal images of different resolution, we will normalize distances used in our routines with
(41) $\mu = r_{od} \cdot \frac{\text{typical physical vessel caliber}}{\text{average physical optic disk radius}},$
where, based on these real physical values, $\mu$ describes typical retinal vessel calibers in pixels.
4.1.2 Vessel likelihood map and seed point detection
For the detection of the initial seed points, a vessel likelihood map of the optic disk region is constructed using invertible orientation scores. Before images are subjected to our algorithms, their DC-component is removed by means of high-pass filtering. As a result, vessel pixels have negative values and background pixel values are assumed to be zero. In effect, the real part of the orientation score at a certain vessel position and orientation is negative, while the orientation score is approximately zero in background areas. This is also demonstrated in Fig. 7(b), where the vessel in the score can be seen as a dark (negative value) blob on the plane. Based on these observations, we define a vessel likelihood map as:
(42) $V(\mathbf{x}) = -\sum_{i=1}^{N_o} \min\left\{ \operatorname{Re}\!\left( U_f(\mathbf{x}, \theta_i) \right),\, 0 \right\}.$
Seed points are detected as local maxima on circular intensity profiles centered around the optic disk (Fig. 15(a)), with radii proportional to the detected optic disk radius $r_{od}$. A seed point is discarded whenever its value in the vessel likelihood map is smaller than the average value of all points on the circle. For each remaining seed point $\mathbf{x}_0$, the initial orientation is detected as the orientation that provides the highest modulus of the orientation score: $\theta_0 = \operatorname{argmax}_\theta \left| U_f(\mathbf{x}_0, \theta) \right|$. An additional filtering step, in which the seed points are classified as either true or false positive, is described in Section 4.1.3.
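A sketch of this seed point search, with hypothetical radius factors and nearest-pixel sampling (the paper's exact radii are not reproduced here):

```python
import numpy as np

def detect_seed_points(vessel_likelihood, od_center, od_radius,
                       radius_factors=(1.5, 2.0), n_angles=720):
    """Sample the vessel likelihood map on circles around the optic disk and keep
    local maxima whose value exceeds the average value on their circle."""
    cx, cy = od_center
    h, w = vessel_likelihood.shape
    seeds = []
    for factor in radius_factors:
        r = factor * od_radius
        alphas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
        xs = np.clip(np.round(cx + r * np.cos(alphas)).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy + r * np.sin(alphas)).astype(int), 0, h - 1)
        profile = vessel_likelihood[ys, xs]
        mean_val = profile.mean()
        for i in range(n_angles):
            if (profile[i] > profile[i - 1] and profile[i] > profile[(i + 1) % n_angles]
                    and profile[i] > mean_val):
                seeds.append((int(xs[i]), int(ys[i])))
    return seeds
```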
4.1.3 Initial edge detection
The ETOS algorithm is initialized with a starting vessel center point $\mathbf{x}_0^c$, orientation $\theta_0$ and edges $\mathbf{x}_0^l$ and $\mathbf{x}_0^r$. Starting with an already detected initial center point $\mathbf{x}_0^c$ and orientation $\theta_0$, an intensity profile $p_0$ can be obtained from the orientation scores using Eq. (29). Candidate edges are detected as local optima on the imaginary part of $p_0$. Besides the main vessel edges that we are interested in, it is very likely that multiple other candidate edges are detected as well (as a result of noise or a central light reflex). Therefore, we use an edge focusing approach to detect the dominant edges. Each combination of neighboring left and right edges forms a potential vessel patch (see Fig. 16(a)). Note that a blood vessel with a central light reflex consists of two neighboring vessel patches. We start initial edge detection by detecting the main vessel patch of interest by scoring each edge pair $(\mathbf{x}^l, \mathbf{x}^r)$, based on the initial center point estimate $\mathbf{x}_0^c$ and initial orientation $\theta_0$, as follows: