1 Introduction
Optical coherence tomography (OCT) image segmentation to detect retinal layer boundaries is a fundamental procedure for diagnosing and monitoring the progression of retinal and optic nerve diseases. There is a rich literature on automatic and semi-automatic OCT image segmentation. Common approaches include deformable models [1, 2], graph-based and geodesic distance methods [3, 4], statistical shape and appearance models [5, 6], etc. Very recently, deep neural networks have become increasingly popular for OCT segmentation, demonstrating excellent performance [7, 8]. However, these deep learning methods usually require the networks to be sufficiently deep to learn all appearance and shape variations of the retinal layers from annotated training sets. The training set therefore has to be very large and rich to prevent overfitting, yet large annotated training sets are difficult to obtain. Existing OCT segmentation algorithms also tend to segment individual retinal layers separately. This form of analysis often fails when there is uncertainty in the image, especially as some retinal layers are difficult to see or missing in OCT images.
We believe that OCT segmentation is more effective when it incorporates the anatomical shape of the retinal layers and their spatial relations. As such, in this paper we propose a shape-based variational Mumford-Shah (MS) functional for the segmentation of up to 9 retinal layer boundaries in OCT images, using only a small training set. We make three distinct contributions to OCT segmentation:

We introduce a new piecewise constant variational MS functional to evolve a predefined retinal layer boundary model for OCT image segmentation. It has a region-based data fidelity term and a hybridised first- and second-order regularisation term. The predefined retinal layer boundary model consists of 9 retinal layer boundaries, each of which is an open explicit parametric contour represented by a set of control points. We then construct two narrowbands around each open contour, within which region-based information is derived to aid contour evolution. We show that by incorporating a retinal layer boundary model our method can segment the 9 retinal layer boundaries simultaneously, and that by utilising regional information the proposed method has a large convergence range and is robust to initialisation.

We introduce to the MS functional a shape constraint learnt from a set of training OCT shapes. We apply principal component analysis (PCA) to derive the statistical distribution of the training shapes, onto which the irregular contours evolved directly from the MS functional are projected. In this way, the irregular contours are restricted to a manifold of familiar shapes and thereby can be pulled back to appropriate positions, allowing faster convergence.

We apply the proposed method to a real OCT dataset acquired from healthy subjects, and demonstrate that it outperforms state-of-the-art methods.
2 Methodology
2.1 Intensity-based variational MS functional
We start with a new intensity-based MS segmentation functional, and then apply a learnt shape constraint to it. We define a retinal layer boundary model as having 9 retinal layer boundaries, each of which is an explicit parametric contour C. Based on [9], we propose a new piecewise constant MS functional for each of the 9 retinal layer boundaries
E(C, c_1, c_2) = \int_{\Omega_1} (f - c_1)^2 \,dx + \int_{\Omega_2} (f - c_2)^2 \,dx + \lambda_1 \int_0^1 |C'(s)|^2 \,ds + \lambda_2 \int_0^1 |C''(s)|^2 \,ds    (1)
where f is the input image and c_1, c_2 are the mean grey values of the regions \Omega_1 and \Omega_2. The first regional energy term follows the Chan-Vese model [2, 10]. In the second term, C'(s) is the first-order derivative of the curve with respect to the arc length s, normalised into the interval [0, 1], and C''(s) is the second-order derivative. \lambda_1 and \lambda_2 are two regularisation coefficients. To segment a retinal layer in OCT, the piecewise constant approximation of f has two values, namely c_1 and c_2. The contour regularisation (last two terms) combines the first- and second-order derivatives, preventing the contour from bending by introducing elasticity and stiffness to the contour. Each control point of the contour thus can be more equidistant from, or centred between, its neighbouring points. This makes the functional stable for numerical calculation. C is defined as a parametric contour over a set of control points
C(s) = (x(s), y(s)), \quad s \in [0, 1], \quad \text{interpolated through } \{(x_i, y_i)\}_{i=1}^{N}    (2)
Here x_i, y_i, i = 1, ..., N, are the coordinates of the control points representing one retinal layer boundary. The start and end points are C(0) and C(1), respectively.
Variational methods that take regional information into account are generally more robust against noise and initialisation. In OCT segmentation, however, we require the parametric contour C to be an open curve [5]. In this case, \Omega_1 \cup \Omega_2 is no longer equal to the whole image domain, because an open contour does not partition the image into two well-defined regions. This brings difficulties in estimating the parameters c_1 and c_2 in (1) due to the vanished image regions \Omega_1 and \Omega_2. To circumvent this limitation, we propose to construct a narrowband around the open contour C using the method illustrated in Fig 1:
For each control point on C, such as the red dot in Fig 1, we compute its normal using the neighbouring control points;

We define a narrowband radius d, and interpolate new points, represented by the black dots in Fig 1, along the two normal directions of the original control points.
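A minimal numpy sketch of this narrowband construction (the radius d and the piecewise-linear example contour are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def narrowband(points, d):
    """Offset an open contour, given as an (n, 2) array of control
    points, by +/-d along its normals to form the two bands.
    Tangents use central differences at interior points and one-sided
    differences at the two endpoints, as described in the text."""
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Rotate each unit tangent by 90 degrees to get the unit normal.
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    upper = points + d * normals  # band on one side of the contour
    lower = points - d * normals  # band on the other side
    return upper, lower

# A horizontal contour: the two bands lie d pixels above and below it.
pts = np.stack([np.arange(5.0), np.zeros(5)], axis=1)
up, lo = narrowband(pts, d=3.0)
```

The mean grey values c_1 and c_2 used by the data term can then be estimated as the average image intensity over the pixels covered by each band.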
2.2 Minimisation of MS by gradient descent
We minimise the proposed functional (1) with respect to both the parameters c_1, c_2 and the contour C, where c_1 and c_2 denote the mean grey values on the two sides of the contour and have a simple closed form
c_i = \frac{\int_{\Omega_i} f \,dx}{\int_{\Omega_i} dx}, \quad i = 1, 2,    (3)
where \Omega_1 and \Omega_2 are the two regions partitioned by the curve C, which change as the curve evolves. Since we are able to construct narrowband regions from an open contour C, it is easy to calculate c_1 and c_2 in (3). After c_1 and c_2 are estimated, we fix them and use the gradient descent method to minimise the functional (1) with respect to the open contour C. This results in the following contour evolution equation
\frac{\partial C}{\partial t} = \lambda_1 C_{ss} - \lambda_2 C_{ssss} + (e_2 - e_1)\,\mathcal{N},    (4)
where e_1 = (f - c_1)^2, e_2 = (f - c_2)^2, and \mathcal{N} denotes the outer normal vector of the contour C. The first two terms on the right-hand side of (4) minimise the contour length and thereby enforce an equidistant spacing between the control points. The third term maximises the homogeneity of the adjoining regions in the narrowband, measured by the energy densities e_1 and e_2. This forces C to move towards a retinal layer boundary in the OCT image. We now need to discretise (4) for numerical implementation. Specifically, for each control point (x_i, y_i) of the contour C, we have the following two semi-implicit iterative schemes
\frac{x_i^{t+1} - x_i^t}{\Delta t} = \lambda_1 (x_{ss})_i^{t+1} - \lambda_2 (x_{ssss})_i^{t+1} + (e_2 - e_1)\, n_{x,i}^t    (5)
\frac{y_i^{t+1} - y_i^t}{\Delta t} = \lambda_1 (y_{ss})_i^{t+1} - \lambda_2 (y_{ssss})_i^{t+1} + (e_2 - e_1)\, n_{y,i}^t    (6)
where \Delta t is an artificial time step. The superscript t denotes the t-th iteration, and the subscript i the i-th control point. Compared with explicit iterative methods, semi-implicit relaxation methods allow the use of a relatively larger time step \Delta t, speeding up the convergence of the contour evolution. We apply the finite difference method with the Neumann boundary condition [11] to discretise the 2nd- and 4th-order derivatives x_{ss}, x_{ssss}, y_{ss} and y_{ssss}.
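As an illustration, one semi-implicit update of the x-coordinates can be sketched as follows. The pentadiagonal matrix here is a plausible discretisation of -(\lambda_1 \partial_{ss} - \lambda_2 \partial_{ssss}) with simply truncated boundary rows, an assumption rather than the paper's exact Neumann boundary treatment:

```python
import numpy as np

def regularisation_matrix(n, lam1, lam2):
    """Pentadiagonal matrix discretising -(lam1*d2/ds2 - lam2*d4/ds4)
    on n control points (boundary rows simply truncated here)."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, idx] = 2 * lam1 + 6 * lam2
    A[idx[:-1], idx[:-1] + 1] = -(lam1 + 4 * lam2)
    A[idx[1:], idx[1:] - 1] = -(lam1 + 4 * lam2)
    A[idx[:-2], idx[:-2] + 2] = lam2
    A[idx[2:], idx[2:] - 2] = lam2
    return A

def semi_implicit_step(x, force, A, dt):
    """One iteration of scheme (5): regularisation treated implicitly,
    data force explicitly, i.e. solve (E + dt*A) x_next = x + dt*force."""
    E = np.eye(len(x))
    return np.linalg.solve(E + dt * A, x + dt * force)

A = regularisation_matrix(10, lam1=1.0, lam2=0.1)
x_next = semi_implicit_step(np.linspace(0.0, 1.0, 10), np.zeros(10), A, dt=0.5)
```

Because the implicit system is a small banded linear solve per iteration, larger time steps remain stable than with a purely explicit update.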
To iterate (5) and (6), the open contour normals are calculated as illustrated in Fig 2. Apart from the two endpoints, for each control point on the contour we first compute its tangent from its neighbours via the central finite difference scheme. For the start and end points, the tangents are computed with the forward and backward finite differences, respectively. The contour normals can then be easily derived from the computed tangents, as the two are perpendicular to each other.
Note that given N control points we have to calculate a set of 2N equations as in (5) and (6). Since the 2nd- and 4th-order derivatives are linear differential operators, we can rewrite (5) and (6) in matrix form as
(E + \Delta t A)\, X^{t+1} = X^t + \Delta t\, (e_2 - e_1) \odot \mathcal{N}_x    (7)
(E + \Delta t A)\, Y^{t+1} = Y^t + \Delta t\, (e_2 - e_1) \odot \mathcal{N}_y    (8)
where E is the N \times N identity matrix, \odot denotes pointwise multiplication, and A is an N \times N pentadiagonal matrix assembling the \lambda_1-weighted second-difference and \lambda_2-weighted fourth-difference stencils, each interior row having the entries (\lambda_2, -\lambda_1 - 4\lambda_2, 2\lambda_1 + 6\lambda_2, -\lambda_1 - 4\lambda_2, \lambda_2).
2.3 Integrating shape constraint to intensity-based MS
The contour evolution driven by (7) and (8) can become irregular, as the MS functional uses only local pixel intensities. To further improve the MS functional we propose to impose a shape constraint that controls the contour evolution. With some manually segmented retinal layer boundaries as training shapes, we can derive the statistical distribution of the retinal layer boundaries and use it as a shape constraint on the MS functional. The main idea is to project the irregular shapes into a latent space spanned by the few eigenvectors of the training shapes with the largest eigenvalues. In this way, the resulting irregular shapes are restricted to a manifold of familiar shapes and thereby can be corrected. Note that other dimensionality reduction methods could also be used to derive the shape constraint; in this work we found PCA performs reasonably well. Next we detail the application of the shape constraint to (1) using PCA. Note that an OCT shape model represents the locations of all 9 boundaries.
If one OCT shape model consists of n control points, it can be modelled as a 2n-dimensional vector x = (x_1, ..., x_n, y_1, ..., y_n)^T, where the x_i, y_i appear in (2). Assume that we have N_s training shapes manually annotated from OCT images. We first align all shapes at the centre of the coordinate origin by applying Procrustes transformations. The mean OCT shape is then computed as \bar{x} = \frac{1}{N_s} \sum_{i=1}^{N_s} x_i. For each shape x_i in the training set, its deviation from the mean shape is dx_i = x_i - \bar{x}. The covariance matrix Cov can then be calculated as Cov = \frac{1}{N_s} \sum_{i=1}^{N_s} dx_i\, dx_i^T.
We calculate the eigenvectors p_k and corresponding eigenvalues \lambda_k of Cov (sorted so that \lambda_k \ge \lambda_{k+1}). Let P = (p_1, p_2, ..., p_t) be the matrix of the first t eigenvectors of Cov. Then we can approximate an OCT shape x in the training set by
x \approx \bar{x} + P b    (9)
where b = (b_1, ..., b_t)^T defines the parameters for the different deformation patterns. Since P has orthonormal columns, b is given by

b = P^T (x - \bar{x})    (10)
By varying the parameters b_k, we can generate new examples of the shape. We can also limit each b_k to constrain the deformation patterns of the shape. Typical limits [6] are
-3\sqrt{\lambda_k} \le b_k \le 3\sqrt{\lambda_k},    (11)
where k = 1, ..., t.
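The projection-and-clamping steps of (9)-(11) can be sketched as below; the random training shapes and the choice t = 10 are stand-ins for illustration, and the Procrustes alignment is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: Ns aligned shapes, each a 2n-vector (x..., y...).
Ns, n = 20, 40
shapes = rng.normal(size=(Ns, 2 * n))

mean = shapes.mean(axis=0)        # mean OCT shape
dev = shapes - mean               # deviations from the mean
cov = dev.T @ dev / Ns            # covariance matrix Cov

# Eigen-decomposition of Cov, sorted by decreasing eigenvalue.
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

t = 10                            # number of retained modes
P = eigvec[:, :t]                 # matrix of the first t eigenvectors

def constrain(shape):
    """Project a shape with (10), clamp each b_k with (11), and
    reconstruct the corrected shape with (9)."""
    b = P.T @ (shape - mean)
    limit = 3.0 * np.sqrt(np.maximum(eigval[:t], 0.0))
    b = np.clip(b, -limit, limit)
    return mean + P @ b

corrected = constrain(shapes[0] + rng.normal(scale=5.0, size=2 * n))
```

Any irregular shape passed to `constrain` is thereby pulled back onto the manifold spanned by the leading deformation modes.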
The use of the shape constraint with (1) for segmenting the 9 retinal layer boundaries is summarised as follows. P, \bar{x} and \lambda_k are first computed from the set of training OCT shapes. After the original OCT shape model (the locations of the 9 boundaries) is updated with (7) and (8) (each boundary in the shape model is updated separately), it is transformed to approximate the mean shape with the Procrustes transformation, which gives x. This is followed by updating b using (10) and then projecting it with (11), i.e. the coefficients obtained with (10) are constrained by (11). Next, we correct the original OCT shape using (9) with the projected b and warp the corrected version back to the original location. In this way, the original irregular OCT shape is pulled back to a regular one used for the next iteration of (7) and (8). The whole process is repeated until convergence. The overall numerical optimisation of the proposed method is fast as it only involves a few hundred control points.
3 Experimental results
Data: 9 retinal layer boundaries were segmented for each image from in vivo OCT B-scans. 30 Spectralis SD-OCT B-scans (ENVISU C class 2300, Bioptigen, axial resolution = 3.3, scan depth = 3.4, 32,000 A-scans per second) from 15 healthy adults were used for this work. The B-scans were imaged from the left and right eyes of the healthy adults using a spectral domain OCT device with a chin rest to stabilise the head. The B-scan located at the foveal centre was identified from the lowest point in the foveal pit where the cone outer segments were elongated (indicating cone specialisation). The ground truth boundaries were manually generated by an experienced ophthalmologist. The dataset was randomly split into 20 training and 10 validation datasets (5 of the validation images were corrupted with high-level speckle noise of 0.8 variance generated in Matlab). For image preprocessing, all images were cropped to extract only the region of interest and were flattened using the ground truth labels before training. The 9 retinal layer boundaries that can be segmented by the proposed method are shown in Fig 3.

| Name of retinal boundary/surface | Abbreviation |
| --- | --- |
| internal limiting membrane | ILM |
| outer boundary of the retinal nerve fibre layer | RNFL |
| inner plexiform layer-inner nuclear layer | IPL-INL |
| inner nuclear layer-outer plexiform layer | INL-OPL |
| outer plexiform layer-outer nuclear layer | OPL-ONL |
| outer nuclear layer-inner segments of photoreceptors | ONL-IS |
| inner segments of photoreceptors-outer segments of photoreceptors | IS-OS |
| outer segments of photoreceptors-retinal pigment epithelium | OS-RPE |
| retinal pigment epithelium-choroid | RPE-CH |
Parameters: The parameters in our method are the first- and second-order regularisation coefficients \lambda_1 and \lambda_2, the artificial time step \Delta t, the narrowband radius d, the number of eigenvectors/eigenvalues t, the number of control points for the 9 boundaries, and the iteration number (1000).
These parameters are selected as follows: 1) Large \lambda_1 and \lambda_2 lead to increasingly shorter and finally vanishing segmentation boundaries, while small values may cause control points to intersect with each other, leading to numerical instabilities; in this work we fixed \lambda_1 and \lambda_2 at constant values for all experiments. 2) \Delta t is bounded by the CFL stability condition, and numerical stability is attained for time steps within this bound. 3) d was selected according to the initial OCT boundaries: if the initialisation is close to the true retinal layer boundaries, a small d suffices (Fig 4, 1st row); otherwise a larger d should be used (Fig 4, last row). Note that we set the same d value for each of the 9 OCT boundaries. 4) t was confirmed by choosing the largest eigenvalues first, such that their cumulative sum accounts for a fixed proportion of the total variance V_T of all the eigenvalues; we used the number of training samples as the value of t. Eigenvectors corresponding to small eigenvalues do not contribute much to shape variation. 5) 360 control points in total are used for a whole OCT shape (each retinal layer boundary thus has 40). Overall, we only adjusted d for different initialisations (see Fig 4).
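The eigenvalue selection of step 4) can be illustrated as follows; the variance fraction 0.9 is an illustrative value, as the proportion used in the paper is not stated here:

```python
import numpy as np

def choose_modes(eigvals, frac=0.9):
    """Smallest t such that the t largest eigenvalues account for at
    least `frac` of the total variance V_T of all eigenvalues."""
    ev = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    cum = np.cumsum(ev)
    return int(np.searchsorted(cum, frac * cum[-1]) + 1)

# 5 + 3 + 1 = 9 covers 90% of the total variance 10, so t = 3.
t = choose_modes([5.0, 3.0, 1.0, 0.5, 0.3, 0.2], frac=0.9)
```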
Comparison: The performance of the proposed segmentation method was evaluated by computing the Hausdorff distance (HD) between the automated and ground truth segmentations for the different retinal layer boundaries. We compared our method with the geodesic distance method proposed in [4]. In Fig 5, visual comparison suggests that the proposed method provides significant improvements over the intensity-based geodesic distance method for OCT segmentation, especially when OCT images are of low contrast and contain high-level speckle noise. In Table 2, we report the HD of each boundary as well as of all 9 boundaries over the validation dataset, and show that our method outperforms the geodesic distance method in terms of the HD for all the retinal layer boundaries. The improvements are most evident at the ILM, RNFL, IPL-INL, INL-OPL and OPL-ONL boundary locations. Note that in order to compare our results with the ground truth labels we fitted a spline curve to each segmentation contour such that the resulting contours run across the entire width of the OCT image.
| | ILM | RNFL | IPL-INL | INL-OPL | OPL-ONL |
| --- | --- | --- | --- | --- | --- |
| GDM [4] | 9.80±2.51 | 10.2±3.32 | 25.26±10.5 | 22.55±8.96 | 20.46±7.24 |
| Proposed | 1.72±0.52 | 1.68±0.66 | 1.058±0.12 | 0.925±0.10 | 0.863±0.23 |

| | ONL-IS | IS-OS | OS-RPE | RPE-CH | Overall |
| --- | --- | --- | --- | --- | --- |
| GDM [4] | 2.53±1.05 | 1.91±1.34 | 1.21±1.05 | 1.031±0.98 | 10.55±4.10 |
| Proposed | 0.61±0.05 | 0.84±0.11 | 1.61±0.96 | 0.952±0.78 | 1.140±0.39 |
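For reference, the symmetric HD between two sampled boundaries can be computed with scipy's directed Hausdorff distance (a sketch; the example boundaries are synthetic):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two boundaries, each an
    (n, 2) array of sampled (x, y) points."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Two horizontal boundaries 2 pixels apart -> HD = 2.
a = np.stack([np.arange(10.0), np.zeros(10)], axis=1)
b = np.stack([np.arange(10.0), np.full(10, 2.0)], axis=1)
hd = hausdorff(a, b)  # 2.0
```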
4 Conclusion
We presented a new segmentation method for optical coherence tomography (OCT) images which allows the integration of statistical shape models learned from a small OCT dataset. To this end, we developed the Mumford-Shah functional in a way which facilitates a parametric representation of open contours. We then constructed narrowbands around the open contours such that regional information can be derived to assist segmentation. We have shown that integrating such information gives the proposed method a large convergence range, making it robust against different initialisations. We have also validated that the proposed method is very accurate even when OCT images are of low contrast and contain high-level speckle noise, and that it outperforms the state-of-the-art geodesic distance segmentation method. The proposed method can readily be extended to other segmentation problems involving open contours.
References
[1] Rossant, F., Bloch, I., Ghorbel, I., Paques, M.: Parallel double snakes. Application to the segmentation of retinal layers in 2D-OCT for pathological subjects. Pattern Recognition 48(12) (2015) 3857–3870
[2] Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Transactions on Image Processing 10(2) (2001) 266–277
[3] Chiu, S.J., Li, X.T., Nicholas, P., Toth, C.A., Izatt, J.A., Farsiu, S.: Automatic segmentation of seven retinal layers in SD-OCT images congruent with expert manual segmentation. Optics Express 18(18) (2010) 19413–19428
 [4] Duan, J., Tench, C., Gottlob, I., Proudlock, F., Bai, L.: Automated segmentation of retinal layers from optical coherence tomography images using geodesic distance. Pattern Recognition 72 (2017) 158–175

[5] Xie, W., Duan, J., Shen, L., Li, Y., Yang, M., Lin, G.: Open snake model based on global guidance field for embryo vessel location. IET Computer Vision 12(2) (2018) 129–137
[6] Cootes, T., Baldock, E., Graham, J.: An introduction to active shape models. Image Processing and Analysis (2000) 223–248
[7] Roy, A.G., Conjeti, S., Karri, S.P.K., Sheet, D., Katouzian, A., Wachinger, C., Navab, N.: ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomedical Optics Express 8(8) (2017) 3627–3642
[8] Fang, L., Cunefare, D., Wang, C., Guymer, R.H., Li, S., Farsiu, S.: Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomedical Optics Express 8(5) (2017) 2732–2744
[9] Cremers, D., Tischhäuser, F., Weickert, J., Schnörr, C.: Diffusion snakes: Introducing statistical shape knowledge into the Mumford-Shah functional. International Journal of Computer Vision 50(3) (2002) 295–313
[10] Duan, J., Pan, Z., Yin, X., Wei, W., Wang, G.: Some fast projection methods based on Chan-Vese model for image segmentation. EURASIP Journal on Image and Video Processing 2014(1) (2014) 7
[11] Noye, B.J., Arnold, R.J.: Accurate finite difference approximations for the Neumann condition on a curved boundary. Applied Mathematical Modelling 14(1) (1990) 2–13