Automated Segmentation of Retinal Layers from Optical Coherence Tomography Images Using Geodesic Distance

09/07/2016, by Jinming Duan, et al.

Optical coherence tomography (OCT) is a non-invasive imaging technique that can produce images of the eye at the microscopic level. OCT image segmentation to localise retinal layer boundaries is a fundamental procedure for diagnosing and monitoring the progression of retinal and optic nerve disorders. In this paper, we introduce a novel and accurate geodesic distance method (GDM) for OCT segmentation of both healthy and pathological images in either two- or three-dimensional spaces. The method uses a geodesic distance weighted by an exponential function, taking into account both horizontal and vertical intensity variations. The weighted geodesic distance is efficiently calculated from an Eikonal equation via the fast sweeping method. The segmentation is then realised by solving an ordinary differential equation with the geodesic distance. The results of the GDM are compared with manually segmented retinal layer boundaries/surfaces. Extensive experiments demonstrate that the proposed GDM is robust to complex retinal structures with large curvatures and irregularities, and that it outperforms the parametric active contour algorithm as well as the graph theoretic based approaches for delineating the retinal layers in both healthy and pathological images.


1 Introduction

Optical coherence tomography (OCT) is a powerful imaging modality used to image biological tissues and obtain structural and molecular information [1]. By using low-coherence interferometry, OCT can provide high-resolution cross-sectional images from the backscattering profiles of biological samples. Over the past two decades, OCT has become a well-established imaging modality that is widely used by ophthalmologists for the diagnosis of retinal and optic nerve diseases. One of the OCT imaging biomarkers for retinal and optic nerve disease diagnosis is the thickness of the retinal layers. Automated OCT image segmentation is therefore necessary to delineate the retinal boundaries.

Since the intensity patterns in OCT images are the result of light absorption and scattering in retinal tissues, OCT images usually contain a significant amount of speckle noise and inhomogeneity, which reduces image quality and makes it challenging for automated segmentation to identify retinal layer boundaries and other specific retinal features. Retinal layer discontinuities due to shadows cast by the retinal blood vessels, irregular retinal structures caused by pathologies, motion artefacts and sub-optimal imaging conditions also complicate OCT images and cause inaccuracy or failure of automated segmentation algorithms.

Over the past two decades, a number of automatic and semi-automatic OCT segmentation approaches have been proposed. These approaches can be roughly categorised into three families: A-scan based methods, B-scan based methods and volume based methods, as illustrated in Figure 1. A-scan based methods [2, 3, 4, 5, 6, 7, 8] detect intensity peak or valley points on the boundaries in each A-scan profile and then form a smooth and continuous boundary by connecting the detected points using model fitting techniques. These methods can be inefficient and lack accuracy. Common approaches for segmenting two-dimensional (2D) B-scans include active contour methods [9, 10, 11, 12, 13, 14], shortest-path based graph search [15, 16] and statistical shape models [17, 18, 19] (i.e. active shape and appearance models [20, 21]). B-scan methods generally outperform A-scan methods. However, they are prone to the intrinsic speckle noise in OCT images and more likely to fail on pathological retinal structures. Three-dimensional (3D) scanning of the retina is now widely available in commercial OCT devices. Existing volume based segmentation methods mainly use 3D graph based methods [22, 23, 24, 25, 26, 27, 28, 29, 30] and pattern recognition [31, 32, 33, 34, 35]. Benefiting from the contextual information represented in the analysis graph, graph based methods provide optimal solutions and are ideal for volumetric data processing. However, the computation can be very complex and slow. Pattern recognition methods normally require training data manually segmented by experts in order to learn a feasible model for classification. These approaches also suffer from limited accuracy and efficiency. Segmentation of retinal layers in OCT images thereby remains a challenging problem.

Figure 1: An en-face fundus image (left) with lines overlaid representing the locations of each B-scan within volumetric OCT data. The red line corresponds to the B-scan in the image (right top). One vertical A-scan of the B-scan is shown in the plot (right bottom). The fovea region is characterised by a depression in the centre of the retina surface.

In this paper, we propose an algorithm for retinal layer segmentation based on a novel geodesic distance weighted by an exponential function. As opposed to using a single horizontal gradient as in other works [15, 29, 30], the exponential function employed in our method integrates both horizontal and vertical gradient information and can thus account for variations in both directions. The function enhances the foveal depression regions and highlights weak, low-contrast boundaries. As a result, the proposed geodesic distance method (GDM) is able to segment complex retinal structures with large curvatures and other irregularities caused by pathologies. We compute the weighted geodesic distance via an Eikonal equation using the fast sweeping method [36, 37, 38]. A retinal layer boundary can then be detected from the calculated geodesic distance by solving an ordinary differential equation via a time-dependent gradient descent equation. A local search region is identified based on each detected boundary, which allows all nine retinal layer boundaries to be delineated and overcomes the local minima problem of the GDM. We evaluate the proposed GDM through extensive numerical experiments and compare it with state-of-the-art OCT segmentation approaches on both healthy and pathological images.

In the following sections, we first review the state-of-the-art methods that are compared with the proposed GDM, namely parallel double snakes [14], Chiu's graph search [15], Dufour's method [27] and OCTRIMA3D [29, 30]. This is followed by the details of the proposed GDM, ground-truth validation, numerical experimental results, and a comparison of the GDM with the state-of-the-art methods.

2 Literature Review

In this section, we provide an overview of the state-of-the-art methods (i.e. parallel double snakes [14], Chiu's method [15], OCTRIMA3D [29, 30] and Dufour's method [27]) that will be compared with the GDM we propose in Section 3. For a complete review of the subject, we refer the reader to [39]. Among the four methods reviewed, the first two can only segment B-scans, while the latter two are able to extract retinal surfaces from volumetric OCT data. We note that the term 'surface' refers to a set of voxels that fall on the interface between two adjacent retinal layer structures. The retinal layer boundaries to be delineated are shown in Figure 2.

Parallel double snakes (PDS): Rossant et al. [14] detected the pathological (retinitis pigmentosa) cellular boundaries in B-scan images by minimising an energy functional that includes two parallel active parametric contours. Their proposed PDS model consists of a centreline $\mathcal{C}_0(s)$ parametrised by $s\in[0,1]$ and two parallel curves $\mathcal{C}_1$ and $\mathcal{C}_2$, with $b(s)$ being a spatially varying half-thickness and $\vec{n}(s)$ the normal vector to the centreline, i.e. $\mathcal{C}_{1,2}(s)=\mathcal{C}_0(s)\pm b(s)\vec{n}(s)$. Specifically, their PDS model is defined as

$$E(\mathcal{C}_0,b)=E_{img}(\mathcal{C}_1)+E_{img}(\mathcal{C}_2)+E_{int}(\mathcal{C}_0)+\lambda E_{par}(\mathcal{C}_0,b),\qquad(2.1)$$

where the image energy $E_{img}(\mathcal{C}_1)=-\int_0^1|\nabla I(\mathcal{C}_1(s))|^2 ds$ ($\nabla$ is the image gradient operator) attracts the parametric curve $\mathcal{C}_1$ towards one of the retinal borders of the input B-scan $I$, whilst $E_{img}(\mathcal{C}_2)$ handles curve $\mathcal{C}_2$, which is parallel to $\mathcal{C}_1$. The internal energy $E_{int}(\mathcal{C}_0)=\int_0^1\big(\alpha|\mathcal{C}_0'(s)|^2+\beta|\mathcal{C}_0''(s)|^2\big)ds$ imposes both first and second order smoothness regularities on the central curve $\mathcal{C}_0$, with $\alpha$ and $\beta$ respectively controlling the tension and rigidity of this curve. $E_{par}$ is a parallelism constraint imposed on $\mathcal{C}_1$ and $\mathcal{C}_2$, weighted by $\lambda$. Nine retinal borders are delineated by the method, i.e., ILM, RNFL, IPL-INL, INL-OPL, OPL-ONL, ONL-IS, IS-OS, OS-RPE and RPE-CH.

Figure 2: An example cross-sectional B-scan OCT image centred at the macula, showing the nine target intra-retinal layer boundaries detected by the proposed method. The names of these boundaries, labelled with the notations $B_1$ to $B_9$, are summarised in Table 1. Knowledge of these layer boundaries allows us to calculate the retinal layer thickness, which is imperative for detecting and monitoring ocular diseases.
Notation | Name of retinal boundary/surface | Abbreviation
$B_1$ | internal limiting membrane | ILM
$B_2$ | outer boundary of the retinal nerve fibre layer | RNFL
$B_3$ | inner plexiform layer-inner nuclear layer | IPL-INL
$B_4$ | inner nuclear layer-outer plexiform layer | INL-OPL
$B_5$ | outer plexiform layer-outer nuclear layer | OPL-ONL
$B_6$ | outer nuclear layer-inner segments of photoreceptors | ONL-IS
$B_7$ | inner segments of photoreceptors-outer segments of photoreceptors | IS-OS
$B_8$ | outer segments of photoreceptors-retinal pigment epithelium | OS-RPE
$B_9$ | retinal pigment epithelium-choroid | RPE-CH
Table 1: Notations for the nine retinal boundaries/surfaces, their corresponding names and abbreviations

Chiu's method: Chiu et al. [15] modelled the boundary detection problem in an OCT retinal B-scan as determining the shortest path that connects two endpoints in a graph $G=(V,E)$, where $V$ is a set of nodes and $E$ is a set of undirected weights assigned to each pair of nodes in the graph. Each node corresponds to a pixel in the B-scan image, whilst each weight is calculated from the intensity gradient of the image in the vertical direction. Each node is connected with its eight nearest neighbours and all other node pairs are disconnected, resulting in a sparse adjacency matrix of graph weights of vertical intensity variation. For example, an $M\times N$ sized image has an $MN\times MN$ sized adjacency matrix with roughly $8MN$ non-zero entries. Mathematically, the weights between two nodes used in their method are calculated from the pure vertical gradient value, defined as

$$w_{ab}=2-(g_a+g_b)+w_{min},\qquad(2.2)$$

where $g_a$ and $g_b$ are the vertical gradients of the B-scan image at two separate nodes $a$ and $b$ in $V$, normalised to $[0,1]$, and $w_{min}$ is a small positive value added to stabilise the system. The most prominent boundary is then detected as the minimum weighted path from the first to the last vertex in $G$ using Dijkstra's algorithm. A region refinement technique similar to Section 3.4 was used to detect seven retinal boundaries, i.e., ILM, RNFL, IPL-INL, INL-OPL, OPL-ONL, IS-OS and RPE-CH.
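As a concrete illustration of this construction, the following is a minimal Python sketch, assuming an 8-connected pixel grid and SciPy's sparse-graph Dijkstra solver; the function name chiu_adjacency and the default w_min are our own illustrative choices rather than values from [15].

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def chiu_adjacency(image, w_min=1e-5):
    # Sparse 8-connected adjacency matrix with the weights of Eq. (2.2):
    # w_ab = 2 - (g_a + g_b) + w_min, g being the vertical gradient in [0, 1].
    g = np.gradient(image.astype(float), axis=0)
    g = (g - g.min()) / (g.max() - g.min() + 1e-12)    # normalise to [0, 1]
    rows, cols = image.shape
    idx = np.arange(image.size).reshape(rows, cols)
    gf = g.ravel()
    src, dst, w = [], [], []
    for di in (-1, 0, 1):                              # 8-neighbourhood offsets
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            a = idx[max(0, -di):rows - max(0, di), max(0, -dj):cols - max(0, dj)].ravel()
            b = idx[max(0, di):rows - max(0, -di), max(0, dj):cols - max(0, -dj)].ravel()
            src.append(a); dst.append(b)
            w.append(2.0 - (gf[a] + gf[b]) + w_min)    # Eq. (2.2)
    src, dst, w = map(np.concatenate, (src, dst, w))
    return coo_matrix((w, (src, dst)), shape=(image.size,) * 2).tocsr()

The most prominent boundary then corresponds to the minimum-weight path between the two endpoint node indices, recovered with scipy.sparse.csgraph.dijkstra(graph, indices=start, return_predecessors=True) followed by a walk back through the predecessor array.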

Dufour's method: Dufour et al. [27] proposed a modification of the optimal graph search approach [40] to segment retinal surfaces in OCT volume data. By using soft constraints and adding prior knowledge learned from a model, they improve the accuracy and robustness of the original framework. Specifically, their Markov random field based model is given by

$$E(S)=\sum_{i=1}^{n}E_{bnd}(S_i)+\sum_{i=1}^{n}E_{smooth}(S_i)+\sum_{i\neq j}E_{int}(S_i,S_j),$$

where $S=\{S_1,\dots,S_n\}$ is a set of surfaces $S_1$ to $S_n$. The external boundary energy $E_{bnd}$ is computed from the input 3D image data. The surface smoothness energy $E_{smooth}$ guarantees the connectivity of a surface in 3D and regularises the surface. The interaction energy $E_{int}$ integrates soft constraints that can regularise the distances between two simultaneously segmented surfaces. The model is built from training datasets consisting of fovea-centred OCT slice stacks. Their algorithm is capable of segmenting six retinal surfaces ($n=6$ in the above formulation) in both healthy and macular edema subjects, i.e., ILM, RNFL, IPL-INL, OPL-ONL, IS-OS and RPE-CH.

OCTRIMA3D: Tian et al. [29, 30] proposed a real-time automatic segmentation of OCT volume data. The segmentation is done frame-by-frame in each 2D B-scan while considering the spatial dependency between adjacent frames. Their work is based on Chiu's graph search framework [15] for B-scan OCT images. In addition to Chiu's work, however, they introduce inter-frame flattening to reduce the curvature in the fovea region, which improves the accuracy of their algorithm. Moreover, they apply inter-frame or intra-frame information to limit the search region in the current or adjacent frame, which increases the computational speed of their algorithm. Furthermore, biasing and masking techniques are developed to better attain retinal boundaries within the same search region. A total of eight retinal surfaces, i.e., ILM, RNFL, IPL-INL, INL-OPL, OPL-ONL, IS-OS, OS-RPE and RPE-CH, can be delineated by the method. To sum up, Table 2 reports the retinal boundaries/surfaces segmented by the four methods as well as our GDM proposed in the next section.

Method | ILM ($B_1$) | RNFL ($B_2$) | IPL-INL ($B_3$) | INL-OPL ($B_4$) | OPL-ONL ($B_5$) | ONL-IS ($B_6$) | IS-OS ($B_7$) | OS-RPE ($B_8$) | RPE-CH ($B_9$)
PDS [14] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Chiu's method [15] | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✓
Dufour's method [27] | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✓
OCTRIMA3D [29, 30] | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓
GDM | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Table 2: Target boundaries/surfaces of the five methods compared in this paper (a check mark means the boundary/surface can be segmented, while a cross mark means it cannot).

3 The Proposed Geodesic Distance Method (GDM)

In this section, we propose a novel framework that uses the geodesic distance to detect from OCT images the nine retinal layer boundaries defined in Figure 2 and Table 1. As the proposed methodology applies equally to 2D and 3D segmentation, we illustrate the approach in 2D here; the steps are the same in 3D. The numerical implementation of the approach is given in the Appendix.

3.1 Geodesic distance

We use the geodesic distance to identify the pixels on the boundaries of retinal layers in OCT images. The geodesic distance is the smallest integral of a weight function over all possible paths between two endpoints $p_s$ and $p_e$. The weight function determines how the path travels from $p_s$ to $p_e$: a small weight at one point indicates that the path has a high probability of passing through that point. Specifically, the weighted geodesic distance between two pixels/endpoints $p_s$ and $p_e$ is given as

$$d(p_s,p_e)=\min_{\gamma\in\mathcal{P}(p_s,p_e)}\int_0^1 F(\gamma(t))\,|\gamma'(t)|\,dt,\qquad(3.1)$$

where $\mathcal{P}(p_s,p_e)$ is the set of all the paths that link $p_s$ to $p_e$, and each path $\gamma(t)$ is normalised to $t\in[0,1]$ with start and end locations $\gamma(0)=p_s$ and $\gamma(1)=p_e$, respectively. The infinitesimal contour length $|\gamma'(t)|\,dt$ is weighted by a non-negative function $F$. This minimisation problem can be interpreted as finding a geodesic curve (i.e. a path with the smallest weighted length) in a Riemannian space. In geometrical optics, it has been proven that the solution of (3.1) satisfies the Eikonal equation (3.3).

The retinal layers in OCT images are normally close to horizontal. The gradient in the vertical direction can thus be considered a good candidate for computing the weight in (3.1). For instance, each of the two prominent boundaries, e.g. the ILM ($B_1$) and IS-OS ($B_7$) in Figure 3 (a) and (e), is at the border of a dark layer above a bright layer. As a result, pixels in the region around the two boundaries have high gradient values, as shown in Figure 3 (b) and (f). As the retinal layers on each side of a boundary transition either from dark to bright or from bright to dark, the non-negative weight function in this paper is defined based on intensity variation as follows

$$W=\mathcal{E}\Big(\kappa\big(\mathcal{N}(\partial_x I)+\mathcal{N}(|\partial_y I|)\big)\Big),\qquad(3.2)$$

where $I$ is an input OCT image; $\mathcal{N}$ is a linear stretch operator used to normalise values to between 0 and 1; $\mathcal{E}$ is the exponential function and $\kappa$ is a user-defined parameter, which together enhance the foveal depression regions and highlight the weak retinal boundaries [41]; and $\partial_x$ and $\partial_y$ are the first-order gradient operators along the x (vertical) and y (horizontal) directions, respectively ($\partial_x I$ gives the dark-to-bright weight, and its negation the bright-to-dark weight). The two gradient operators are discretised using a central finite difference scheme under the Neumann boundary condition. (3.2) also includes the positive horizontal gradient information $\mathcal{N}(|\partial_y I|)$, without which only the vertical direction is accounted for and the weight is thus only applicable to flat retinal boundaries. Consequently, our proposed method is robust to curved features (e.g. the central region of the fovea) as well as other irregularities (e.g. bumps or large variations in boundary locations) caused by pathologies. In other words, the proposed method with the weight defined in (3.2) can deal with both normal and pathological images, as illustrated in Figure 3 as well as in the experimental section.
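To make the construction concrete, here is a short Python sketch of our reading of the weight (3.2); the default kappa value is an arbitrary placeholder, not the paper's setting.

import numpy as np

def normalise(a):
    # Linear stretch N(.) of the values to [0, 1].
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def gdm_weight(image, kappa=20.0, dark_to_bright=True):
    # Sketch of Eq. (3.2): exponential of the signed vertical gradient
    # combined with the magnitude of the horizontal gradient.
    gx = np.gradient(image.astype(float), axis=0)      # vertical (x) gradient
    gy = np.gradient(image.astype(float), axis=1)      # horizontal (y) gradient
    signed = gx if dark_to_bright else -gx             # choose the transition type
    return np.exp(kappa * (normalise(signed) + normalise(np.abs(gy))))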


Figure 3: Illustrating the effectiveness of the weight defined in (3.2). (a) and (e): a normal B-scan and a pathological B-scan from an eye with dry age-related macular degeneration (dry-AMD); (b) and (f): vertical dark-to-bright gradient maps of (a) and (e), respectively; (c) and (g): dark-to-bright gradient maps calculated using equation (3.2). Note that the gradient values have been enhanced in the regions with strong curvature and large bumps; (d) and (h): boundary detection results via the method described in Section 3.3 using the different gradient maps. Yellow lines are computed using (b) and (f), whilst red lines use (c) and (g).

3.2 Selection of endpoints $p_s$ and $p_e$

For fully automated segmentation, it is essential to initialise the two endpoints $p_s$ and $p_e$ automatically. Since the retinal boundaries in the OCT images used in this paper run across the entire width of the image, we add an additional column on each side of the gradient map computed from (3.2). As the minimal weighted path is sought, a weight larger than any of the non-negative weights calculated from (3.2) is assigned to each of the newly added vertical columns (note that we use $1/W$ in the geodesic distance (3.1), so the minimal weighted path prefers large weights $W$). This forces the path to traverse along the newly added vertical columns with maximal weights, and also allows the start and end points to be arbitrarily assigned within the two columns. Once the retinal layer boundary is detected, the two additional columns can be removed. Figure 4 shows two examples of endpoint initialisation.
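A small sketch of this padding step follows; the factor of 10 on the column weight and the use of corner pixels as endpoints are arbitrary illustrative choices, since any point in the padded columns works.

import numpy as np

def pad_weight_columns(W):
    # Append one column on each side whose weight exceeds every entry of W,
    # so the geodesic may start and end anywhere within them (Section 3.2).
    big = 10.0 * W.max()
    Wp = np.pad(W, ((0, 0), (1, 1)), mode='constant', constant_values=big)
    p_s = (0, 0)                                   # top-left corner, left column
    p_e = (Wp.shape[0] - 1, Wp.shape[1] - 1)       # bottom-right corner, right column
    return Wp, p_s, p_e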


Figure 4: Two segmentation examples using different automatic endpoint initialisations on a dark-to-bright gradient map. $p_s$ and $p_e$ are the start and end points, respectively.

3.3 Eikonal equation and minimal weighted path

The solution of (3.1) can be obtained by solving the Eikonal equation after the endpoints are determined. Specifically, over a continuous domain, the distance map $\mathcal{D}$ to the seed start point $p_s$ is the unique solution of the following Eikonal equation in the viscosity sense

$$\|\nabla\mathcal{D}(p)\|=F(p),\qquad(3.3)$$

with $\mathcal{D}(p_s)=0$. The equation is a first order partial differential equation and its solution can be found via the classical fast marching algorithm [42, 43] using an upwind finite difference approximation, with computational complexity $O(MN\log(MN))$ ($MN$ is the total number of grid points). Recently, the fast sweeping algorithm [36, 37] has been proposed. This technique is based on a pre-defined sweeping strategy that replaces the heap priority queue used to find the next point to process, and it thereby has linear complexity $O(MN)$. In this paper, we apply fast sweeping to (3.3); its detailed 3D implementation is given in the Appendix. Figure 5 shows two distance maps calculated using the dark-to-bright weight defined in (3.2) and the two different start points shown in the examples in Figure 4.
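For illustration, below is a minimal (pure-Python, unoptimised) 2D fast sweeping sketch for (3.3), assuming the right-hand side F = 1/W discussed in Section 3.2 and the standard two-term Godunov update; a practical implementation would vectorise or compile the loops.

import numpy as np

def fast_sweep_2d(F, seed, h=1.0, n_sweeps=4):
    # Solve |grad D| = F with D(seed) = 0 by Gauss-Seidel sweeps with
    # alternating orderings (Zhao's fast sweeping method).
    M, N = F.shape
    D = np.full((M, N), 1e10)
    D[seed] = 0.0
    orders = [(range(M), range(N)),
              (range(M - 1, -1, -1), range(N)),
              (range(M), range(N - 1, -1, -1)),
              (range(M - 1, -1, -1), range(N - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(D[max(i - 1, 0), j], D[min(i + 1, M - 1), j])
                    b = min(D[i, max(j - 1, 0)], D[i, min(j + 1, N - 1)])
                    f = F[i, j] * h
                    if abs(a - b) >= f:        # update driven by one direction
                        d_new = min(a, b) + f
                    else:                      # update driven by both directions
                        d_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    if d_new < D[i, j]:
                        D[i, j] = d_new
    return D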

Once the geodesic distance map $\mathcal{D}$ to the start point $p_s$ has been computed, the minimal weighted path (geodesic curve) between $p_s$ and $p_e$ can be extracted from the following ordinary differential equation through time-dependent gradient descent

$$\frac{d\gamma(t)}{dt}=-v(t)\,\frac{\nabla\mathcal{D}(\gamma(t))}{\|\nabla\mathcal{D}(\gamma(t))\|+\epsilon},\qquad\gamma(0)=p_e,\qquad(3.4)$$

where $v(t)$ controls the parametrisation speed of the resulting curve. To obtain unit speed parametrisation, we use $v(t)=1$. Since the distance map $\mathcal{D}$ is nonsmooth at the point $p_s$, a small positive constant $\epsilon$ is added to avoid dividing by zero. Note that the point $p_s$ is guaranteed to be found by this ordinary differential equation because the distance field is monotonically increasing from $p_s$ to $p_e$, which can be observed in Figure 5. This technique can achieve sub-pixel accuracy for the geodesic path even though the grid is discrete.


Figure 5: Two distance maps calculated using the dark-to-bright weight and the two different start points in the two examples in Figure 4, respectively. The distance values are rescaled for better visualisation.

The geodesic curve is then numerically computed using a discretised gradient descent, which defines a discrete curve $(\gamma_k)_{k\geq 0}$ via

$$\gamma_{k+1}=\gamma_k-\tau\,\frac{\nabla\mathcal{D}(\gamma_k)}{\|\nabla\mathcal{D}(\gamma_k)\|},\qquad(3.5)$$

where $\gamma_k$ is a discrete approximation of $\gamma(t)$ at time $t=k\tau$, and the time step size $\tau$ should be small enough. $\nabla\mathcal{D}/\|\nabla\mathcal{D}\|$ is the normalised gradient, which parametrises the curve by arc length. Once $\gamma_k$ reaches $p_s$, one of the retinal boundaries has been found. Algorithm 1 below summarises the proposed geodesic distance algorithm for extracting one retinal boundary from OCT images.
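A sketch of this discrete descent is given below; for brevity it samples the gradient of D at the nearest grid point rather than with the interpolation needed for true sub-pixel accuracy, and the stopping radius of one pixel is our own choice.

import numpy as np

def backtrack_geodesic(D, p_e, p_s, tau=0.5, eps=1e-8, max_steps=100000):
    # Discrete gradient descent of Eq. (3.5): march from the end point p_e
    # down the distance map D until the seed p_s is reached.
    gx, gy = np.gradient(D)
    p = np.asarray(p_e, dtype=float)
    path = [p.copy()]
    for _ in range(max_steps):
        i = int(np.clip(round(p[0]), 0, D.shape[0] - 1))
        j = int(np.clip(round(p[1]), 0, D.shape[1] - 1))
        g = np.array([gx[i, j], gy[i, j]])
        p = p - tau * g / (np.linalg.norm(g) + eps)    # Eq. (3.5)
        path.append(p.copy())
        if np.linalg.norm(p - np.asarray(p_s, dtype=float)) < 1.0:
            break                                      # close enough to the seed
    return np.array(path)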

Algorithm 1: the proposed GDM for one retinal boundary detection
1: input OCT data (i.e. B-scan or volume)
2: calculate the dark-to-bright or bright-to-dark weight using (3.2)
3: pad two new columns onto the weight map and assign large values to them
4: select two endpoints $p_s$ and $p_e$ in the two newly padded columns
5: calculate the distance map in (3.3) using the fast sweeping algorithm
6: find one retinal layer boundary using the gradient descent flow (3.5)
7: remove the additional columns from the edge detection result
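Chaining the sketches from the previous subsections gives an illustrative end-to-end version of Algorithm 1; the helper names are the hypothetical ones introduced above, not the authors' implementation.

def detect_boundary(image, kappa=20.0, dark_to_bright=True):
    # Steps of Algorithm 1, wired together from the earlier sketches.
    W = gdm_weight(image, kappa, dark_to_bright)     # step 2: weight map (3.2)
    Wp, p_s, p_e = pad_weight_columns(W)             # steps 3-4: padding, endpoints
    D = fast_sweep_2d(1.0 / Wp, p_s)                 # step 5: Eikonal with F = 1/W
    path = backtrack_geodesic(D, p_e, p_s)           # step 6: descent (3.5)
    keep = (path[:, 1] >= 1) & (path[:, 1] <= W.shape[1])
    path = path[keep]                                # step 7: drop padded columns
    path[:, 1] -= 1.0                                # shift back to original columns
    return path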

3.4 Detection of nine retinal layer boundaries

We have introduced how the proposed geodesic distance algorithm (3.1) finds the minimal weighted path across the whole width of the OCT image for one retinal layer boundary. In this section, we describe the implementation details of the proposed approach for delineating the nine retinal layer boundaries shown in Figure 2 and Table 1. Since the proposed model (3.1) is not convex, its solution can easily get stuck in local optima. For example, Figure 3 (c) and (g) have high gradient values in the regions around both the ILM and IS-OS boundaries; in Figure 3 (d) the algorithm detected the ILM boundary, while in Figure 3 (h) it detected the IS-OS. In order to eliminate such uncertainty, we dynamically define the search region based on the boundaries already detected. The following subsections describe the proposed method in detail.

3.4.1 Detection of the IS-OS boundary

The intensity variation between the two layers divided by the IS-OS ($B_7$) border is normally the most prominent in OCT B-scans. However, because OCT images are corrupted by speckle noise arising from light absorption and scattering in the retinal tissue, this is not always the case. For example, the intensity variation around the ILM ($B_1$) border can sometimes be more obvious than that around the IS-OS, as shown in the gradient image in Figure 3 (c). To ensure correct segmentation of the IS-OS boundary, we first enhance the IS-OS via a simple local adaptive thresholding approach (see http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm), which is given as follows

$$T=\begin{cases}1 & \text{if } I-I*f_w>C,\\ 0 & \text{otherwise,}\end{cases}\qquad(3.6)$$

where $I$ is the input OCT image, and $I*f_w$ means that $I$ is convolved with a suitable operator $f_w$, i.e. the mean or median filter; $w$ is the window size of the filter and $C$ a user-defined threshold value. In this paper, we use the mean filter. The enhanced image can then be obtained by multiplying the original image with $T$. The first two images in Figure 6 illustrate that the contrast of the IS-OS border has been enhanced and the most obvious intensity variation now takes place around the IS-OS boundary. The IS-OS boundary is then detected on the dark-to-bright gradient image. Consequently, the delineated line is guaranteed to pass through the IS-OS in both cases, as shown in the last two images in Figure 6.
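A sketch of (3.6) using a mean filter is shown below; the window size and threshold are illustrative defaults, not the values used in the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def enhance_isos(image, window=15, C=0.0):
    # Local adaptive thresholding of Eq. (3.6): keep a pixel when it exceeds
    # its local mean by more than C, then multiply the image by the mask.
    I = image.astype(float)
    local_mean = uniform_filter(I, size=window)      # I convolved with a mean filter
    T = (I - local_mean > C).astype(float)
    return I * T                                     # enhanced image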

Figure 6: Detecting the IS-OS borders in the normal and pathological subjects after image enhancement via the local adaptive thresholding method (3.6).

3.4.2 Detection of the RPE-CH, OS-RPE and ONL-IS boundaries

Once the IS-OS ($B_7$) is segmented, it can be used as a reference to limit the search regions for segmenting the RPE-CH ($B_9$), OS-RPE ($B_8$) and ONL-IS ($B_6$) boundaries. The RPE-CH and OS-RPE lie below the IS-OS and are delineated in the following way: the RPE-CH is extracted by applying the geodesic distance algorithm with the bright-to-dark gradient weights (3.2) obtained from the pixels below the detected IS-OS (i.e. the bright-to-dark gradient weights above the IS-OS are set to zero); the OS-RPE is then delineated on the bright-to-dark gradient map in the region between the detected IS-OS and RPE-CH (i.e. the bright-to-dark gradient weights outside the region between the IS-OS and RPE-CH are set to zero). The dark-to-bright ONL-IS lies above the IS-OS. Its search region is constructed between the IS-OS boundary and a parallel line 15 pixels above it, and the dark-to-bright gradient weights outside this region are set to zero. Hence, the only boundary left in the search region of the dark-to-bright gradient image is the ONL-IS, which can then be extracted. The masking is illustrated in the sketch below.
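Both kinds of search-region restriction used here, and again in Sections 3.4.3 and 3.4.4, amount to zeroing the weight map outside a region defined by already-detected boundaries. The helpers below are hypothetical names for this masking, assuming each boundary is stored as one row index per image column.

import numpy as np

def mask_below(W, boundary_rows):
    # Keep weights only below a detected boundary (e.g. searching for the
    # RPE-CH under the IS-OS); weights above it are set to zero.
    Wm = np.zeros_like(W)
    for col, row in enumerate(boundary_rows):
        r = int(round(row))
        Wm[r:, col] = W[r:, col]
    return Wm

def mask_band_above(W, boundary_rows, height):
    # Keep weights in a band of the given height above a boundary
    # (e.g. the 15-pixel band used for the ONL-IS).
    Wm = np.zeros_like(W)
    for col, row in enumerate(boundary_rows):
        r = int(round(row))
        Wm[max(r - height, 0):r, col] = W[max(r - height, 0):r, col]
    return Wm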

3.4.3 Detection of the ILM and INL-OPL boundaries

Both the ILM ($B_1$) and INL-OPL ($B_4$) lie at the border of a darker layer above a bright layer. The intensity variation around the ILM boundary is much more prominent, so the ILM is segmented first. The detected ONL-IS ($B_6$) edge is taken as a reference and the dark-to-bright gradient weights below the ONL-IS are set to zero. The ILM can then be obtained via the proposed method. The INL-OPL is subsequently detected on the dark-to-bright gradient map by simply limiting the search region to between the ILM and ONL-IS (i.e. the dark-to-bright gradient values outside the region between the ILM and ONL-IS are set to zero).

3.4.4 Detection of the OPL-ONL, IPL-INL and RNFL boundaries

The OPL-ONL ($B_5$), IPL-INL ($B_3$) and RNFL ($B_2$) each present a bright layer above a darker layer and can thus be detected on the bright-to-dark gradient map defined in (3.2). The segmented INL-OPL ($B_4$) and ONL-IS ($B_6$) are taken as two reference boundaries, and the OPL-ONL edge can be found by limiting the search region to between the INL-OPL and ONL-IS. The search region for the IPL-INL is then constructed between the INL-OPL boundary and a parallel line 20 pixels above it; the IPL-INL is located on a bright-to-dark gradient map that is set to zero outside this search region. Finally, the RNFL ($B_2$) can be found in the search region between the two reference boundaries IPL-INL and ILM ($B_1$). However, because the IPL-INL and ILM boundaries are very close to each other in the central region of the fovea, the search region for the RNFL is sometimes missing around the fovea, which leads to segmentation errors of the RNFL, as shown in Figure 7 (a). These errors can be avoided by simply removing the spurious points detected on the RNFL in the region above the ILM, as shown in Figure 7 (b) and in the snippet below. The proposed procedure for segmenting the nine retinal layer boundaries is summarised in the flow chart in Figure 8.
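This correction can be realised in several ways; one simple sketch, which clamps the offending RNFL points to the detected ILM rather than deleting them (an implementation choice of ours), is:

import numpy as np

def remove_points_above(boundary_rows, reference_rows):
    # Where the candidate RNFL rises above the reference ILM (a smaller row
    # index means higher up in the image), fall back to the reference.
    b = np.asarray(boundary_rows, dtype=float)
    r = np.asarray(reference_rows, dtype=float)
    return np.where(b < r, r, b)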


Figure 7: The segmentation results of the nine retinal layer boundaries on both normal and dry-AMD pathological B-scans, as shown in (a) and (c). The detection of the RNFL boundary in (a) shows errors due to the absence of a search region for this boundary. (b) shows that these errors have been corrected.

Figure 8: The overview of the proposed framework for dynamically delineating nine retinal layer boundaries defined in Figure 2 and Table 1. Section 3.4 describes this flow chart in detail.

4 Experiment Setup

To evaluate the performance of the proposed GDM qualitatively and quantitatively, numerical experiments are conducted to compare it with the state-of-the-art approaches reviewed in Section 2 on both healthy and pathological OCT retinal images. As the GDM is able to segment both 2D and 3D OCT images, we perform numerical experiments on both B-scan and volumetric OCT data. An anisotropic total variation method [44] is used to reduce noise prior to determining the layer boundaries/surfaces for all segmentation methods. In the following, we introduce the detailed procedure of OCT data acquisition, the evaluation metrics used to quantify the segmentation results, the final numerical results, and the computational complexity of the different methods.

4.1 Clinical Data

30 SD-OCT B-scans (ENVISU C class 2300, Bioptigen; axial resolution = 3.3µm, scan depth = 3.4mm, 32,000 A-scans per second) from 15 healthy adults (mean age = 39.8 years, SD = 8.6 years; 7 male, 8 female) were used for this research. All the data was collected after informed consent was obtained; the study adhered to the tenets of the Declaration of Helsinki and Ethics Committee approval was granted.

2D B-scan data: The normal in vivo B-scan OCT data was imaged from the left and right eyes of the 15 healthy adults using a spectral domain OCT device with a chin rest to stabilise the head. The B-scan located at the foveal centre was identified from the lowest point in the foveal pit, where the cone outer segments were elongated (indicating cone specialisation). To reduce the speckle noise and enhance the image contrast, every B-scan was the average of multiple aligned images scanned at the same position. In addition to the 30 OCT images from the healthy subjects, another 20 B-scans from subjects with pathologies are used to compare the proposed GDM with the other approaches in pathological cases. These B-scans are from an eye with dry age-related macular degeneration (dry-AMD) and are available from Dufour's software package website (http://pascaldufour.net/Research/softwaredata.html). The accuracy of the segmentation results obtained by the three automated 2D methods (i.e. PDS, Chiu's method and GDM) over these healthy and pathological B-scans is evaluated using ground truth datasets, which were manually delineated with great care by one observer.

3D volume data: 10 Spectralis SD-OCT (Heidelberg Engineering GmbH, Heidelberg, Germany) volume data sets from 10 healthy adult subjects are used in this study. Each volume contains 10 B-scans, and the OCT A-scans outside a 6mm × 6mm (lateral × azimuth) area centred at the fovea were cropped to remove low signal regions. All the volumetric data can be downloaded from [29], which also contains the results of OCTRIMA3D and the manual labellings from two graders. In this study we choose the manual labelling of grader 1 as the 3D ground truth.

4.2 Evaluation Metrics

Performance metrics are defined to demonstrate the effectiveness of the proposed method and to compare it with the existing methods. Three commonly used measures of success for OCT boundary detection are the signed error (SE), absolute error (AE) and Hausdorff distance (HD). Among them, the SE indicates the bias and variability of the detection results. The AE is the absolute difference between the automatic detection results and the ground truth, while the HD measures the distance between the farthest point of one set and the nearest point of the other, and vice versa. Specifically, these metrics are defined as

$$\mathrm{SE}=\frac{1}{K}\sum_{i=1}^{K}\big(A(i)-G(i)\big),\qquad \mathrm{AE}=\frac{1}{K}\sum_{i=1}^{K}\big|A(i)-G(i)\big|,$$

$$\mathrm{HD}=\max\Big\{\max_{a\in A}\min_{g\in G}\|a-g\|,\ \max_{g\in G}\min_{a\in A}\|a-g\|\Big\},$$

where $A$ and $G$ are respectively the detected boundaries and the ground truth boundaries (i.e. manual labellings), and $K$ is the number of pixels/voxels that fall on the retinal boundary/surface. Statistically, when the SE value is close to zero, the difference between $A$ and $G$ is small and the detection result is less biased. The AE and HD (which theoretically range from 0 to $\infty$) signify the difference between two boundaries: 0 indicates that both retinal structures share exactly the same boundary, and larger AE and HD values mean larger distances between the measured boundaries. We also monitor the overall SE (OSE), AE (OAE) and HD (OHD) in all the experiments. They are defined as

$$\mathrm{OSE}=\frac{1}{n_b}\sum_{j=1}^{n_b}\mathrm{SE}_j,\qquad \mathrm{OAE}=\frac{1}{n_b}\sum_{j=1}^{n_b}\mathrm{AE}_j,\qquad \mathrm{OHD}=\frac{1}{n_b}\sum_{j=1}^{n_b}\mathrm{HD}_j,$$

where $n_b$ is the total number of retinal boundaries a method can delineate.
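For reference, a direct Python transcription of the three per-boundary metrics, assuming each 2D boundary is stored as one row index per image column:

import numpy as np

def signed_error(A, G):
    # SE: mean signed difference between detection A and ground truth G.
    return float(np.mean(np.asarray(A, float) - np.asarray(G, float)))

def absolute_error(A, G):
    # AE: mean absolute difference.
    return float(np.mean(np.abs(np.asarray(A, float) - np.asarray(G, float))))

def hausdorff(A, G):
    # HD: symmetric Hausdorff distance between the two boundary point sets.
    Ap = np.column_stack([np.arange(len(A)), np.asarray(A, float)])
    Gp = np.column_stack([np.arange(len(G)), np.asarray(G, float)])
    d = np.linalg.norm(Ap[:, None, :] - Gp[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))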

4.3 Parameter Selection

There are five parameters in the PDS model: three smoothness parameters $\alpha$, $\beta$ and $\lambda$, and two time step sizes used within the gradient descent equations to minimise the functional (2.1) with respect to $\mathcal{C}_0$ and $b$. In this study we use the values suggested in [14]. In addition, as the PDS is a nonconvex model, its segmentation results depend on the initialisation; we initialise the parallel curves very close to the true retinal boundaries for a fair comparison with the other methods. A maximum of 500 iterations is used to ensure convergence of the PDS model. The graph theoretic based methods, i.e., Chiu's method, OCTRIMA3D and Dufour's method, require no parameter input. Finally, our GDM has two built-in parameters: $\kappa$ in (3.2) and the step size $\tau$ in (3.5), which are fixed across all the OCT images when detecting the retinal layers.

4.4 Numerical Results

We first visually compare the segmentation results of the proposed GDM, the PDS (2.1) and Chiu's graph search method on both healthy and pathological B-scans, shown in Figure 9 (a)-(d). The PDS results in (e)-(h) contain errors on some of the detected boundaries. For instance, some boundaries cannot converge to the true retinal borders around the central fovea region, as shown in (f) and (h). This is because the PDS is a classical snake-driven model, which has difficulty handling boundary concavities. Moreover, because some boundaries have a much stronger image gradient than their neighbours, parts of the weaker boundaries are mistakenly attracted to the stronger ones. As Chiu's graph search method only considers the intensity changes in the pure vertical direction (2.2), it also fails to segment the fovea region layers with strong curvature, as shown in (i)-(l). Moreover, the algorithm cannot handle the irregular bumps caused by pathologies very well, as can be observed from the bottom line delineated in (k) and (l). In general, Chiu's method works very well when the retinal structures are flat or smooth without big changes in boundary location. The results of the proposed GDM, shown in (m)-(p), are better than those of the PDS and Chiu's methods when compared with the ground truth in the last row. As analysed in Section 3, the gradient weights defined in (3.2) account for both vertical and horizontal variations, making the method suitable for both flat and nonflat retinal structures. Hence, the GDM is a better clinical tool for detecting retinal boundaries in both normal and pathological subjects.

Figure 9: Comparison of different segmentation methods on healthy and pathological 2D OCT B-scans. 1st row: healthy (i.e. first two) and pathological (i.e. last two) B-scans; 2nd row: results by the PDS model (2.1); 3rd row: results by Chiu’s method; 4th row: results by the proposed GDM; 5th row: ground truth.

The accuracy of the segmentation results of the different methods against ground truth over the 30 healthy and 20 pathological B-scans is reported in Table 3 and Table 4, respectively. To make the comparison clearer, we plot the data in the two tables in Figure 10 and Figure 11, respectively.

In Table 3 and Figure 10, the SE shows that the PDS leads to very large segmentation bias, with the largest error being 6.01, whilst the bias of the GDM is less than 1.22 for all the retinal layer boundaries. Moreover, the mean SE plot of the GDM is close to zero, which means the GDM is less biased than the other two methods. Large errors of the PDS normally take place at the ILM ($B_1$), RNFL ($B_2$), ONL-IS ($B_6$) and OS-RPE ($B_8$), which is consistent with visual inspection of the healthy scans in Figure 9. Furthermore, the mean AE quantities and plots show that the GDM performs the best for all the boundaries. Particularly at the ILM ($B_1$) and RNFL ($B_2$), where the curved fovea region is located, the HD values of the GDM (3.702±1.62, 7.340±2.16) are significantly lower than those of the PDS (36.56±15.9, 29.00±11.6) and Chiu's method (22.12±9.23, 21.25±5.98). However, the accuracy of the different methods is comparable at flat or smooth retinal boundaries such as the INL-OPL ($B_4$), IS-OS ($B_7$) and RPE-CH ($B_9$). Finally, as the manual segmentation traces the small bumps of the true boundaries whereas the segmentation results of the PDS are very smooth, the overall accuracy of the PDS is the lowest among all the approaches compared.

Boundary | SE: PDS | SE: Chiu et al. | SE: GDM | AE: PDS | AE: Chiu et al. | AE: GDM | HD: PDS | HD: Chiu et al. | HD: GDM
ILM ($B_1$) | -3.92±1.90 | -1.22±0.68 | 0.273±0.33 | 4.615±2.03 | 2.605±1.12 | 0.924±0.26 | 36.56±15.9 | 22.12±9.23 | 3.702±1.62
RNFL ($B_2$) | -2.57±1.38 | -1.67±1.34 | -0.53±0.37 | 3.864±1.49 | 2.676±0.82 | 1.262±0.34 | 29.00±11.6 | 21.25±5.98 | 7.340±2.16
IPL-INL ($B_3$) | -0.55±0.83 | -1.04±1.21 | -0.38±0.61 | 1.876±0.60 | 2.020±0.79 | 1.314±0.32 | 8.619±3.77 | 10.53±5.25 | 7.258±1.92
INL-OPL ($B_4$) | 0.012±0.58 | -0.90±0.61 | -0.71±0.71 | 1.708±0.39 | 1.699±0.40 | 1.807±0.51 | 6.772±2.53 | 7.036±2.84 | 7.505±2.96
OPL-ONL ($B_5$) | -0.23±1.29 | -1.51±1.30 | -1.12±1.17 | 2.127±1.00 | 2.133±1.05 | 1.949±0.94 | 10.22±3.70 | 9.044±3.48 | 7.463±3.24
ONL-IS ($B_6$) | 6.010±0.83 | n/a | -0.73±0.49 | 6.055±0.86 | n/a | 1.376±0.36 | 9.969±1.58 | n/a | 4.630±1.05
IS-OS ($B_7$) | -0.09±0.61 | 0.194±0.49 | 0.291±0.63 | 0.823±0.29 | 0.720±0.25 | 0.771±0.36 | 3.676±1.63 | 3.240±1.60 | 2.611±0.74
OS-RPE ($B_8$) | 5.202±2.25 | n/a | -0.78±0.47 | 5.570±1.76 | n/a | 1.125±0.36 | 8.913±2.28 | n/a | 3.601±0.96
RPE-CH ($B_9$) | -0.31±0.79 | -0.84±0.58 | -0.74±0.69 | 1.291±0.25 | 1.228±0.47 | 1.213±0.45 | 4.237±1.47 | 4.027±1.31 | 3.831±1.08
Overall | 0.394±0.39 | -1.00±0.54 | -0.49±0.23 | 3.103±0.74 | 1.869±0.59 | 1.305±0.32 | 13.11±4.25 | 11.04±3.75 | 5.327±1.11
Table 3: Mean and standard deviation of the SE, AE and HD calculated using the results of the different methods (the PDS, Chiu's method and GDM) against the ground truth manual segmentation, over 30 healthy OCT B-scans (n/a marks boundaries a method cannot segment).
Figure 10: Plots of the mean and standard deviation obtained by the different methods in Table 3 for healthy B-scans. The 1st and 2nd rows respectively show the mean and standard deviation of the SE, AE and HD for segmentation of boundaries $B_1$ to $B_9$ using the PDS, Chiu's method and GDM. The overall value is the average result over all boundaries.

In Table 4 and Figure 11, the mean and standard deviation plots show that the GDM is more accurate and robust than the other two methods on pathological data. However, larger errors are found at the last four boundaries, ONL-IS ($B_6$), IS-OS ($B_7$), OS-RPE ($B_8$) and RPE-CH ($B_9$), for all the segmentation methods. This is because the dry age-related macular degeneration has introduced irregularities into these retinal boundaries, making all the methods less accurate and robust there. The overall accuracy measured by the three quantities has decreased compared with the corresponding measurements in Table 3. Chiu's graph search method using Dijkstra's algorithm can be deemed a discrete approximation of the proposed GDM, which makes its final results comparable to those of the GDM at some flat retinal boundaries and better than those of the PDS. However, the fast sweeping algorithm used to solve the Eikonal equation guarantees sub-pixel resolution for the geodesic distance, which significantly reduces the grid bias of the GDM. In addition to the novel weight function proposed in (3.2), the GDM also resolves the metrication problem caused by the discrete graph and thus obtains more accurate results than Chiu's method for delineating cellular layers in both normal and pathological subjects.

Boundary | SE: PDS | SE: Chiu et al. | SE: GDM | AE: PDS | AE: Chiu et al. | AE: GDM | HD: PDS | HD: Chiu et al. | HD: GDM
ILM ($B_1$) | -0.41±0.59 | -0.34±0.25 | -0.36±0.29 | 0.932±0.44 | 0.796±0.17 | 0.683±0.09 | 6.461±4.86 | 4.087±1.01 | 3.337±1.10
RNFL ($B_2$) | -0.93±0.93 | -0.38±0.33 | -0.49±0.50 | 1.792±0.63 | 1.717±0.53 | 1.257±0.32 | 6.145±1.84 | 8.464±4.55 | 6.109±2.49
IPL-INL ($B_3$) | -0.23±0.62 | -0.22±0.27 | -0.32±0.32 | 1.228±0.21 | 1.149±0.20 | 0.926±0.16 | 7.640±1.31 | 5.857±0.98 | 5.151±1.82
INL-OPL ($B_4$) | 0.578±0.64 | 0.555±0.39 | 0.392±0.26 | 1.546±0.28 | 1.563±0.30 | 1.419±0.16 | 7.165±1.07 | 8.194±1.36 | 5.942±1.32
OPL-ONL ($B_5$) | -0.04±1.08 | 0.286±0.55 | -0.07±0.64 | 2.371±0.76 | 2.255±0.60 | 2.019±0.65 | 11.28±1.95 | 9.858±2.76 | 9.281±2.25
ONL-IS ($B_6$) | 3.339±1.22 | n/a | -0.57±0.72 | 4.484±0.50 | n/a | 1.442±0.34 | 15.23±4.03 | n/a | 6.205±1.01
IS-OS ($B_7$) | -0.23±0.86 | 1.030±1.06 | 0.350±0.50 | 2.415±1.25 | 2.399±1.05 | 1.055±0.22 | 15.95±10.2 | 17.66±11.3 | 6.795±4.65
OS-RPE ($B_8$) | 2.371±4.17 | n/a | 0.028±0.41 | 5.927±2.34 | n/a | 1.821±0.47 | 22.63±12.9 | n/a | 9.673±1.30
RPE-CH ($B_9$) | 3.315±2.59 | 3.011±2.98 | 0.027±0.35 | 4.797±2.59 | 5.146±2.70 | 2.252±0.46 | 31.23±12.9 | 32.63±13.2 | 13.19±3.50
Overall | 0.863±0.59 | 0.563±0.44 | -0.11±0.22 | 2.832±0.83 | 2.146±0.70 | 1.430±0.20 | 13.75±4.72 | 12.39±4.06 | 7.300±0.67
Table 4: Mean and standard deviation of the SE, AE and HD calculated using the results of the different methods (the PDS, Chiu's method and GDM) against the ground truth manual segmentation, over 20 pathological OCT B-scans (n/a marks boundaries a method cannot segment).
Figure 11: Plots of the mean and standard deviation obtained by the different methods in Table 4 for pathological B-scans. The 1st and 2nd rows respectively show the mean and standard deviation of the SE, AE and HD for segmentation of boundaries $B_1$ to $B_9$ using the PDS, Chiu's method and GDM. The overall value is the average result over all boundaries.

Next, the proposed GDM is used to segment the OCT volume dataset that includes samples from ten healthy adult subjects, named Volume 1 to Volume 10. Dufour's method and OCTRIMA3D are also used to segment the same dataset for comparison. In Figure 12, we show four representative segmentation results of the GDM on Volumes 1, 2, 7 and 9.

Figure 12: 3D rendered images of human in vivo intra-retinal layer surfaces obtained by segmenting Spectralis SD-OCT volumes with the proposed GDM. Samples are named Volume 1, Volume 2, Volume 7 and Volume 9. The colour used for each individual retinal surface is the same as in Figure 2.

Figure 13: Two B-scans extracted from the Volume 4 sample. The left image shows the en-face representation of the OCT scan, with two lines (green and red) overlaid representing the locations within the volume of the two B-scans shown on the right.
Figure 14: The comparison between Dufour's method (left), OCTRIMA3D (middle) and the GDM (right) on the two B-scans in Figure 13. The segmentation results of these methods are marked with red lines, while the manually labelled ground truth is marked with green lines.

The segmentation results of the three approaches on an exemplary sample (Volume 4) are shown for two distinctive B-scans in Figures 13 and 14: in one B-scan the retinal structures are quite flat, while the other contains the nonflat fovea region. Dufour's method has lower accuracy than OCTRIMA3D and the GDM in both cases. OCTRIMA3D extends Chiu's method to 3D space and improves it by reducing the curvature in the fovea region using the inter-frame flattening technique, so the method performs very well for both flat and nonflat retinal structures. However, there still exist some obvious errors on the 5th boundary, the OPL-ONL ($B_5$). While OCTRIMA3D flattens the fovea region, it may also increase the curvature of adjacent boundaries, which might be the reason for these errors. Compared with the other two methods, the GDM's results leave fewer green ground truth lines visible, verifying that its results lie closer to the ground truth; it is thus the most accurate of the three. In addition to the 2D visualisation, 3D renderings of the results segmented by the three approaches are given in Figure 15. The experiment further shows that Dufour's results deviate considerably from the ground truth, while OCTRIMA3D is better than Dufour's method and comparable to the GDM. The GDM results leave the least grey ground truth visible and are thereby the best.

Figure 15: The 3D comparison between Dufour's method, OCTRIMA3D and the GDM for segmenting the intra-retinal layer surfaces from the Volume 4 sample. Columns (a)-(d) are respectively Dufour's results, OCTRIMA3D's results, the GDM's results and the ground truth. Columns (e)-(g) are respectively the segmentation results of the three compared methods overlaid with the ground truth. Rows 1-6 show the results for the individual surfaces and for the total retina surfaces, respectively.
SE | AE | HD
Volume # Dufour et al. OCTRIMA3D GDM Dufour et al. OCTRIMA3D GDM Dufour et al. OCTRIMA3D GDM
1 -1.194 0.4559 0.3782 2.3816 1.3490 1.0720 25.688 15.273 10.449
2 -2.170 -0.036 -0.128 4.5250 0.9089 0.7814 56.667 11.570 7.0938
3 -2.576 0.4182 0.5983 3.6129 1.3237 1.0989 25.203 16.719 9.5326
4 -2.296 1.0987 0.6774 3.8185 1.5175 1.0753 51.522 18.364 9.6151
5 -1.680 1.3288 0.5909 4.3327 1.5012 0.9005 56.223 11.889 8.8419
6 -2.623 1.0732 0.2974 4.0682 1.4838 0.9493 43.070 19.201 9.5281
7 -2.326 0.5294 0.4529 3.1506 0.9378 0.7433 31.782 8.6701 6.4803
8 -0.636 1.1355 0.6833 2.3955 1.4455 1.0069 25.481 17.930 11.685
9 -4.206 0.3077 0.0859 4.5813 1.0780 0.7678 43.223 8.9694 5.7191
10 -2.648 0.6701 0.2606 4.4903 1.0627 0.7877 41.017 11.666 10.961
Table 5: The SE, AE and HD calculated using the results of the different methods (Dufour's method, OCTRIMA3D and GDM) against the ground truth manual segmentation, for the OPL-ONL ($B_5$) surface in each of the 10 OCT volumes.
SE | AE | HD
Volume # Dufour et al. OCTRIMA3D GDM Dufour et al. OCTRIMA3D GDM Dufour et al. OCTRIMA3D GDM
1 -0.432 -0.148 -0.019 1.1013 0.5391 0.4437 16.559 4.7616 4.5805
2 0.7476 -0.276 -0.079 2.0329 0.5539 0.3971 20.309 5.2093 3.7743
3 -0.311 -0.291 -0.106 1.4347 0.5406 0.4629 18.432 2.9790 4.0176
4 0.3652 -0.116 0.3363 1.6954 0.5271 0.4601 27.853 5.3672 2.7882
5 0.6057 -0.098 0.0994 1.7567 0.4756 0.3500 26.556 3.7573 3.4150
6 0.9825 -0.592 -0.139 2.4970 0.7247 0.4066 23.487 5.9301 3.9297
7 -1.247 -0.536 0.0237 1.3895 0.7501 0.3716 10.016 3.1398 3.6980
8 -0.311 -0.069 0.1740 1.0438 0.4053 0.3466 15.044 4.2301 4.3940
9 -0.755 -0.111 0.1407 0.8068 0.5422 0.3939 3.5210 3.4263 3.3868
10 -0.099 -0.220 0.1028 1.2941 0.5609 0.4246 13.313 3.1210 3.5361
Table 6: The SE, AE and HD calculated using the results of the different methods (Dufour's method, OCTRIMA3D and GDM) against the ground truth manual segmentation, for the IS-OS ($B_7$) surface in each of the 10 OCT volumes.
OSE | OAE | OHD
Volume # Dufour et al. OCTRIMA3D GDM Dufour et al. OCTRIMA3D GDM Dufour et al. OCTRIMA3D GDM
1 -1.271 0.3607 0.4338 1.8358 1.1204 0.9538 17.486 9.3358 7.9163
2 -1.161 0.0246 0.0640 2.5380 0.9652 0.7238 29.682 7.7987 6.1267
3 -1.513 -0.052 0.3456 2.1470 0.9343 0.7838 19.985 8.3491 6.9920
4 -1.431 0.4272 0.3560 2.5278 1.0374 0.8667 31.346 9.4042 7.3130
5 -1.020 0.6369 0.5021 2.4119 1.0794 0.8289 32.607 8.6822 7.1379
6 -1.434 0.4216 0.3969 2.6754 1.1371 0.8606 28.629 9.5267 7.2548
7 -2.010 0.0059 0.3283 2.2458 0.9682 0.7407 21.788 7.0644 6.8279
8 -1.031 0.5815 0.5785 1.7462 1.1063 0.9067 17.610 10.100 8.5112
9 -1.951 0.0542 0.2014 2.1368 0.8771 0.6922 21.344 5.7482 5.4794
10 -1.513 0.1022 0.2109 2.3315 0.8397 0.6596 24.841 6.3250 6.7132
Table 7: The OSE, OAE and OHD calculated using the results of the different methods (Dufour's method, OCTRIMA3D and GDM) against the ground truth manual segmentation, over all the retina surfaces in each of the 10 OCT volumes.
Figure 16: Boxplots of the SE, AE, HD, OSE, OAE and OHD obtained by the different methods in Tables 5-7 for the 10 OCT volumes. 1st row: boxplots of Table 5; 2nd row: boxplots of Table 6; 3rd row: boxplots of Table 7.

Tables 5-7 contain quantitative information for comparing the accuracy of the three methods on the 10 OCT volumes. Table 5 lists the quantities for the OPL-ONL ($B_5$) surface around the fovea region, and Table 6 presents the numerical results for the IS-OS ($B_7$) surface, which is flatter and smoother. In Table 5, the SE indicates that Dufour's method produces a larger segmentation bias than OCTRIMA3D and the GDM. The SE values of the GDM lie in the range [-0.128, 0.6833], showing less variability than those of the other two methods. Moreover, the GDM yields the smallest AE and HD in all 10 cases, indicating that it is the best among all the methods. Compared with Table 5, the quantities in Table 6 show a significant improvement for all the methods. For example, the range of the HD for Dufour's method has dropped from [25.688, 56.667] to [3.521, 27.853]. In addition, the accuracy gap between OCTRIMA3D and the GDM is reduced: the HD values of Volumes 3, 7 and 10 for OCTRIMA3D have even become smaller than the corresponding values for the GDM. These improvements stem from the fact that this retinal surface is flat and the voxel values remain fairly constant. From the OAE and OHD in Table 7 we can observe that the accuracy of the GDM over the total retina surfaces is the highest among the compared approaches.

The corresponding boxplots of Tables 5-7 are shown in Figure 16. It is clear that the proposed GDM performs consistently better, with higher accuracy and lower error rates for both flat and nonflat retinal layers. The boxplots show that there is little variation in performance across the modelled structures, and that even in the worst case the proposed method yields a lower error rate than the average performance of the other methods. Furthermore, in Figure 17 we present 3D plots of the SE, AE and HD quantities computed by the three methods on the 10 OCT volumes. The SE values of the GDM are closer to zero, and its AE and HD values remain smaller. The overall distribution of these discrete data points also indicates that the GDM results fluctuate less. We can thus conclude from Figure 17 that the GDM is the best among all the compared methods for extracting intra-retinal layer surfaces from 3D OCT volume data in terms of both accuracy and robustness.

Figure 17: 3D plots of the SE, AE and HD obtained on the 10 OCT volumes using Dufour's method, OCTRIMA3D and the GDM.

4.5 Computational Complexity Analysis

The experimental results in Section 4.4 have shown that the performance of our algorithm is superior to the others in terms of accuracy. In this section, the performance of the different approaches in terms of computational time is examined. We implemented the PDS, Chiu's method and the GDM using Matlab 2014b on a Windows 7 platform with an Intel Xeon CPU E5-1620 at 3.70GHz and 32GB memory. For a single B-scan, with initialisation close to the true retinal boundaries, it takes 3.625s (500 iterations) for the PDS to delineate two parallel boundaries. Chiu's method needs 1.962s to detect one boundary, while the GDM takes only 0.415s. Note that the time complexity of Chiu's graph search method is $O((|V|+|E|)\log|V|)$, where $|V|$ and $|E|$ are the numbers of nodes and edges. In the context of boundary detection on an $M\times N$ image, $|V|=MN$ and $|E|\approx 8MN$, hence the time complexity of the method is $O(MN\log(MN))$. In contrast, our GDM solved using fast sweeping has only linear complexity $O(MN)$, which is more efficient than Chiu's method. Instead of directly performing the segmentation in 3D, OCTRIMA3D exploits the spatial dependency between two adjacent B-scans and applies Chiu's method to each 2D frame independently; it is thus able to track retinal boundaries in a 3D volume efficiently. It has been reported in [29] that the processing time of OCTRIMA3D for a whole OCT volume was 26.15s, which is faster than our GDM (40.25s to segment a volume). However, this procedure complicates the whole 3D segmentation process and might make the algorithm less general. Finally, Dufour's graph method needs 14.68s to detect the six intra-retinal layer surfaces in a volume. Dufour's method was implemented in a different programming language (C) and delineates a different number of retinal surfaces from the GDM, so a direct comparison cannot be made between the two methods.

5 Conclusion

We have presented a new automated segmentation framework based on the geodesic distance for delineating retinal layer boundaries in 2D/3D OCT images. The framework integrates horizontal and vertical gradient information and can thus account for changes in both directions. Further, the exponential weight function employed within the framework enhances the foveal depression regions and highlights weak, low-contrast boundaries. As a result, the proposed method is able to segment complex retinal structures with large curvatures and other irregularities caused by pathologies. Extensive numerical results, validated against ground truth, demonstrate the effectiveness of the proposed framework for segmenting both normal and pathological OCT images. The proposed method achieves higher segmentation accuracy than existing methods, such as the parametric active contour model and the graph theoretic based approaches. Ongoing research includes integrating the segmentation framework into a system for the detection and quantification of retinal fractures and other diseases of the retina.

6 Appendix

We present the 3D fast sweeping algorithm used to solve the Eikonal equation (3.3). Given a seed point $p_s$, its distance function $\mathcal{D}$ satisfies the following Eikonal equation

$$\|\nabla\mathcal{D}(x,y,z)\|=F(x,y,z),\qquad(6.1)$$

with $\mathcal{D}(p_s)=0$ and where $F$ is derived from the weight defined in (3.2). (6.1) is a typical partial differential equation and it can be solved efficiently using the fast sweeping algorithm proposed by Zhao [36]. To do so, the Godunov upwind difference scheme is used to discretise (6.1) as follows

$$\big[(\mathcal{D}_{i,j,k}-a)^+\big]^2+\big[(\mathcal{D}_{i,j,k}-b)^+\big]^2+\big[(\mathcal{D}_{i,j,k}-c)^+\big]^2=F_{i,j,k}^2\,h^2.\qquad(6.2)$$

In equation (6.2), $(x)^+=\max(x,0)$, $a=\min(\mathcal{D}_{i-1,j,k},\mathcal{D}_{i+1,j,k})$, $b=\min(\mathcal{D}_{i,j-1,k},\mathcal{D}_{i,j+1,k})$, $c=\min(\mathcal{D}_{i,j,k-1},\mathcal{D}_{i,j,k+1})$, and $h$ is the grid spacing. Boundary conditions need to be handled at the borders of the computational grid: a one-sided upwind difference is used on each of the 6 boundary faces of the grid. For example, at the left boundary face, a one-sided difference along that direction is computed as $a=\mathcal{D}_{i+1,j,k}$.

$a$, $b$ and $c$ are then sorted in increasing order and the sorted version is recorded as $a_1\leq a_2\leq a_3$. The unique solution to (6.2) is given as follows:

$$\mathcal{D}_{i,j,k}=\bar{x},\qquad(6.3)$$

where $\bar{x}$ is a piecewise function containing three parts

$$\bar{x}=\begin{cases}a_1+F_{i,j,k}h,\\[4pt]\dfrac{a_1+a_2+\sqrt{2F_{i,j,k}^2h^2-(a_1-a_2)^2}}{2},\\[4pt]\dfrac{a_1+a_2+a_3+\sqrt{(a_1+a_2+a_3)^2-3\big(a_1^2+a_2^2+a_3^2-F_{i,j,k}^2h^2\big)}}{3}.\end{cases}$$

The three parts correspond to the following intervals, respectively:

$$\bar{x}\leq a_2,\qquad a_2<\bar{x}\leq a_3,\qquad \bar{x}>a_3.$$

To solve (6.3) over the whole grid, the fast Gauss-Seidel iteration with alternating sweeping orderings is used. For initialisation, the value of the seed point is set to zero and is fixed in later calculations; the remaining points are set to large values, which are updated later. The whole 3D grid is traversed in the following eight orderings for the Gauss-Seidel iteration: (1) $i=1{:}M$, $j=1{:}N$, $k=1{:}K$; (2) $i=M{:}1$, $j=1{:}N$, $k=1{:}K$; (3) $i=1{:}M$, $j=N{:}1$, $k=1{:}K$; (4) $i=1{:}M$, $j=1{:}N$, $k=K{:}1$; (5) $i=M{:}1$, $j=N{:}1$, $k=1{:}K$; (6) $i=M{:}1$, $j=1{:}N$, $k=K{:}1$; (7) $i=1{:}M$, $j=N{:}1$, $k=K{:}1$; (8) $i=M{:}1$, $j=N{:}1$, $k=K{:}1$.
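The per-node update (6.2)-(6.3) can be written compactly; the Python sketch below returns the viscosity update for a single grid point from the neighbour minima a, b, c and the local right-hand side f, mirroring the three intervals above.

import numpy as np

def godunov_update_3d(a, b, c, f, h=1.0):
    # Unique solution x of [(x-a)+]^2 + [(x-b)+]^2 + [(x-c)+]^2 = (f h)^2,
    # tried with one, then two, then three active directions.
    a1, a2, a3 = sorted((a, b, c))
    x = a1 + f * h                                   # one active direction
    if x <= a2:
        return x
    x = 0.5 * (a1 + a2 + np.sqrt(2.0 * (f * h) ** 2 - (a1 - a2) ** 2))
    if x <= a3:                                      # two active directions
        return x
    s = a1 + a2 + a3                                 # all three directions active
    return (s + np.sqrt(s * s - 3.0 * (a1 * a1 + a2 * a2 + a3 * a3 - (f * h) ** 2))) / 3.0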

References

  • [1] David Huang, Eric A Swanson, Charles P Lin, Joel S Schuman, William G Stinson, Warren Chang, Michael R Hee, Thomas Flotte, Kenton Gregory, Carmen A Puliafito, and James G Fujimoto. Optical coherence tomography. Science, 254(5035):1178–1181, 1991.
  • [2] Michael R Hee, Joseph A Izatt, Eric A Swanson, David Huang, Joel S Schuman, Charles P Lin, Carmen A Puliafito, and James G Fujimoto. Optical coherence tomography of the human retina. Archives of ophthalmology, 113(3):325–332, 1995.
  • [3] Dara Koozekanani, Kim Boyer, and Cynthia Roberts. Retinal thickness measurements from optical coherence tomography using a markov boundary model. Medical Imaging, IEEE Transactions on, 20(9):900–916, 2001.
  • [4] Hiroshi Ishikawa, Scott Piette, Jeffrey M Liebmann, and Robert Ritch. Detecting the inner and outer borders of the retinal nerve fiber layer using optical coherence tomography. Graefe’s archive for clinical and experimental ophthalmology, 240(5):362–371, 2002.
  • [5] Hiroshi Ishikawa, Daniel M Stein, Gadi Wollstein, Siobahn Beaton, James G Fujimoto, and Joel S Schuman. Macular segmentation with optical coherence tomography. Investigative ophthalmology & visual science, 46(6):2012–2017, 2005.
  • [6] Mahnaz Shahidi, Zhangwei Wang, and Ruth Zelkha. Quantitative thickness measurement of retinal layers imaged by optical coherence tomography. American journal of ophthalmology, 139(6):1056–1061, 2005.
  • [7] Delia Cabrera Fernández, Harry M Salinas, and Carmen A Puliafito. Automated detection of retinal layer structures on optical coherence tomography images. Optics Express, 13(25):10200–10216, 2005.
  • [8] MA Mayer, RP Tornow, R Bock, J Hornegger, and FE Kruse. Automatic nerve fiber layer segmentation and geometry correction on spectral domain oct images using fuzzy c-means clustering. Investigative Ophthalmology & Visual Science, 49(13):1880–1880, 2008.
  • [9] Delia Cabrera Fernandez. Delineating fluid-filled region boundaries in optical coherence tomography images of the retina. IEEE Trans. Med. Imaging, 24(8):929–945, 2005.
  • [10] Mircea Mujat, Raymond C Chan, Barry Cense, B Hyle Park, Chulmin Joo, Taner Akkin, Teresa C Chen, and Johannes F de Boer. Retinal nerve fiber layer thickness map determined from optical coherence tomography images. Optics Express, 13(23):9480–9491, 2005.
  • [11] Akshaya Mishra, Alexander Wong, Kostadinka Bizheva, and David A Clausi. Intra-retinal layer segmentation in optical coherence tomography images. Optics express, 17(26):23719–23728, 2009.
  • [12] Azadeh Yazdanpanah, Ghassan Hamarneh, Benjamin Smith, and Marinko Sarunic. Intra-retinal layer segmentation in optical coherence tomography using an active contour approach. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2009, pages 649–656. Springer, 2009.
  • [13] Itebeddine Ghorbel, Florence Rossant, Isabelle Bloch, Sarah Tick, and Michel Paques. Automated segmentation of macular layers in OCT images and quantitative evaluation of performances. Pattern Recognition, 44(8):1590–1603, 2011.
  • [14] Florence Rossant, Isabelle Bloch, Itebeddine Ghorbel, and Michel Paques. Parallel double snakes. Application to the segmentation of retinal layers in 2D-OCT for pathological subjects. Pattern Recognition, 48(12):3857–3870, 2015.
  • [15] Stephanie J Chiu, Xiao T Li, Peter Nicholas, Cynthia A Toth, Joseph A Izatt, and Sina Farsiu. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Optics express, 18(18):19413–19428, 2010.
  • [16] Qi Yang, Charles A Reisman, Zhenguo Wang, Yasufumi Fukuma, Masanori Hangai, Nagahisa Yoshimura, Atsuo Tomidokoro, Makoto Araie, Ali S Raza, Donald C Hood, et al. Automated layer segmentation of macular OCT images using dual-scale gradient information. Optics express, 18(20):21293–21307, 2010.
  • [17] Vedran Kajić, Boris Považay, Boris Hermann, Bernd Hofer, David Marshall, Paul L Rosin, and Wolfgang Drexler. Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis. Optics express, 18(14):14730–14744, 2010.
  • [18] Vedran Kajić, Marieh Esmaeelpour, Boris Považay, David Marshall, Paul L Rosin, and Wolfgang Drexler. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model. Biomedical optics express, 3(1):86–103, 2012.
  • [19] Matthäus Pilch, Yaroslava Wenner, Elisabeth Strohmayr, Markus Preising, Christoph Friedburg, Erdmuthe Meyer zu Bexten, Birgit Lorenz, and Knut Stieger. Automated segmentation of retinal blood vessels in spectral domain optical coherence tomography scans. Biomedical optics express, 3(7):1478–1491, 2012.
  • [20] Timothy F Cootes, Christopher J Taylor, David H Cooper, and Jim Graham. Active shape models-their training and application. Computer vision and image understanding, 61(1):38–59, 1995.
  • [21] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6):681–685, 2001.
  • [22] Mona Haeker, Michael Abràmoff, Randy Kardon, and Milan Sonka. Segmentation of the surfaces of the retinal layer from OCT images. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2006, pages 800–807. Springer, 2006.
  • [23] Mona K Garvin, Michael D Abràmoff, Randy Kardon, Stephen R Russell, Xiaodong Wu, and Milan Sonka. Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search. Medical Imaging, IEEE Transactions on, 27(10):1495–1505, 2008.
  • [24] Mona Kathryn Garvin, Michael David Abràmoff, Xiaodong Wu, Stephen R Russell, Trudy L Burns, and Milan Sonka. Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. Medical Imaging, IEEE Transactions on, 28(9):1436–1447, 2009.
  • [25] Gwénolé Quellec, Kyungmoo Lee, Martin Dolejsi, Mona K Garvin, Michael D Abràmoff, and Milan Sonka. Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula. Medical Imaging, IEEE Transactions on, 29(6):1321–1330, 2010.
  • [26] Bhavna J Antony, Michael D Abràmoff, Milan Sonka, Young H Kwon, and Mona K Garvin. Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes. In SPIE Medical Imaging, pages 83141G–83141G. International Society for Optics and Photonics, 2012.
  • [27] Pascal A Dufour, Lala Ceklic, Hannan Abdillahi, Stefan Schröder, Sandro De Zanet, Ute Wolf-Schnurrbusch, and Jens Kowal. Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints. Medical Imaging, IEEE Transactions on, 32(3):531–543, 2013.
  • [28] Raheleh Kafieh, Hossein Rabbani, Michael D Abramoff, and Milan Sonka. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map. Medical image analysis, 17(8):907–928, 2013.
  • [29] Jing Tian, Boglárka Varga, Gábor Márk Somfai, Wen-Hsiang Lee, William E Smiddy, and Delia Cabrera DeBuc. Real-time automatic segmentation of optical coherence tomography volume data of the macular region. PLoS ONE, 10(8):e0133908, 2015.
  • [30] Jing Tian, Boglarka Varga, Erika Tatrai, Palya Fanni, Gabor Mark Somfai, William E Smiddy, and Delia Cabrera Debuc. Performance evaluation of automated segmentation software on optical coherence tomography volume data. Journal of biophotonics, 9(5):478–489, 2016.
  • [31] KA Vermeer, J van der Schoot, JF De Boer, and HG Lemij. Automated retinal and NFL segmentation in OCT volume scans by pixel classification. Investigative Ophthalmology & Visual Science, 51(13):219–219, 2010.
  • [32] KA Vermeer, J Van der Schoot, HG Lemij, and JF De Boer. Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images. Biomedical optics express, 2(6):1743–1756, 2011.
  • [33] Alfred R Fuller, Robert J Zawadzki, Stacey Choi, David F Wiley, John S Werner, and Bernd Hamann. Segmentation of three-dimensional retinal image data. Visualization and Computer Graphics, IEEE Transactions on, 13(6):1719–1726, 2007.
  • [34] Maciej Szkulmowski, Maciej Wojtkowski, Bartosz Sikorski, Tomasz Bajraszewski, Vivek J Srinivasan, Anna Szkulmowska, Jakub J Kałużny, James G Fujimoto, and Andrzej Kowalczyk. Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies. Journal of biomedical optics, 12(4):041207–041207, 2007.
  • [35] Andrew Lang, Aaron Carass, Matthew Hauser, Elias S Sotirchos, Peter A Calabresi, Howard S Ying, and Jerry L Prince. Retinal layer segmentation of macular OCT images using boundary classification. Biomedical optics express, 4(7):1133–1152, 2013.
  • [36] Hongkai Zhao. A fast sweeping method for Eikonal equations. Mathematics of computation, 74(250):603–627, 2005.
  • [37] Yen-Hsi Richard Tsai, Li-Tien Cheng, Stanley Osher, and Hong-Kai Zhao. Fast sweeping algorithms for a class of Hamilton–Jacobi equations. SIAM journal on numerical analysis, 41(2):673–694, 2003.
  • [38] Jinming Duan, Ben Haines, Wil OC Ward, and Li Bai. Surface reconstruction from point clouds using a novel variational model. In Research and Development in Intelligent Systems XXXII, pages 135–146. Springer, 2015.
  • [39] Delia Cabrera DeBuc. A review of algorithms for segmentation of retinal image data using optical coherence tomography.
  • [40] Qi Song, Xiaodong Wu, Yunlong Liu, Milan Sonka, and Mona Garvin. Simultaneous searching of globally optimal interacting surfaces with shape priors. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2879–2886. IEEE, 2010.
  • [41] Jinming Duan, Zhaowen Qiu, Wenqi Lu, Guodong Wang, Zhenkuan Pan, and Li Bai. An edge-weighted second order variational model for image decomposition. Digital Signal Processing, 49:162–181, 2016.
  • [42] James A Sethian. A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences, 93(4):1591–1595, 1996.
  • [43] James Albert Sethian. Level set methods and fast marching methods: evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science, volume 3. Cambridge university press, 1999.
  • [44] Tom Goldstein and Stanley Osher. The split Bregman method for L1-regularized problems. SIAM journal on imaging sciences, 2(2):323–343, 2009.