Reconstruction and Quantification of 3D Iris Surface for Angle-Closure Glaucoma Detection in Anterior Segment OCT

06/09/2020 · Jinkui Hao, et al.

Precise characterization and analysis of iris shape from Anterior Segment OCT (AS-OCT) are of great importance in facilitating the diagnosis of angle-closure-related diseases. Existing methods focus solely on analyzing structural properties identified from 2D slices, while accurate characterization of the morphological changes of iris shape in 3D AS-OCT may additionally reveal the risk of disease progression. In this paper, we propose a novel framework for the reconstruction and quantification of the 3D iris surface from AS-OCT imagery. To the best of our knowledge, this is the first work to detect angle-closure glaucoma by means of a 3D representation. An iris segmentation network with a wavelet refinement block (WRB) is first proposed to generate the initial shape of the iris from a single AS-OCT slice. The 3D iris surface is then reconstructed using a guided optimization method with Poisson-disk sampling. Finally, a set of surface-based features is extracted and used to detect angle-closure glaucoma. Experimental results demonstrate that our method is highly effective in iris segmentation and surface reconstruction. Moreover, we show that the 3D-based representation achieves better performance in angle-closure glaucoma detection than 2D-based features.

1 Introduction

Anterior Segment OCT (AS-OCT) imaging is a non-contact and non-invasive method for cross-sectional viewing of the anterior segment structure, as shown in Fig. 1 (A). Anatomical structures observed in AS-OCT, such as the iris shape and the anterior chamber angle (ACA), play key roles in facilitating the examination and diagnosis of angle-closure glaucoma [1, 2, 3]. Fig. 1 (B, C) shows two AS-OCT images revealing open angle and angle-closure glaucoma, respectively. However, manual identification of angle-closure glaucoma is time consuming and prone to human error. To this end, automated extraction of morphological features, e.g., the ACA, iris and other anterior segment structures, would benefit both clinical diagnosis and any automated screening system [4, 5, 6, 7].

Figure 1: For one AS-OCT volume (A), the open angle (B) and angle-closure (C) cases may appear in different sectors. A 3D volume is more suitable than a 2D image for supporting global analysis.

Epidemiological studies [8, 9] have established that quantitative iris parameters are independently related to a narrow ACA, and that an anteriorly-bowed iris may be related to the degree of angle-closure progression. Huang et al. [9] also suggest that morphological changes in the iris surface are an important sign, revealing and enabling the understanding of the pathogenesis of angle-closure glaucoma. As a result, automated extraction of the iris from AS-OCT has become an active research area of significance for future diagnosis and prognosis. Ni et al. [10] assessed angle-closure glaucoma by computing the mean iris curvature, the trapezoidal area of the iridocorneal angle, and related parameters. Fu et al. [4] proposed a data-driven method for segmenting the cornea and iris, as well as measuring the clinical parameters essential for glaucoma screening. Shang et al. [11] presented a curvilinear structure filter based on the local phase tensor to extract the iris region, so as to further assist the diagnosis of angle-closure glaucoma.

However, all of the aforementioned methods rely on 2D slices of AS-OCT, which are less useful in distinguishing the stages of angle-closure glaucoma. This may stem from the fact that each analysis considers only a single cross-sectional slice of the anterior segment and, in consequence, all other slices are left unused when determining the angle status [12]. In contrast, a comprehensive study of the global information provided by a 3D representation of the iris may improve measurement accuracy and robustness more significantly than conventional approaches that make use of only an individual 2D slice. Moreover, an occludable or fully closed ACA leads to the presence of trabecular iris contact (TIC), which exacerbates the iris reconstruction problem.

To this end, in this paper we propose an automated reconstruction and quantification framework for the 3D iris surface. Inspired by the discrete wavelet transform, we introduce a novel wavelet refinement block (WRB) into a U-shaped architecture [13], with a view to reducing redundancy while passing local details to the decoder, in order to extract an initial iris segmentation. The detected iris boundaries are then utilized to reconstruct a 3D iris surface using an adapted Poisson-disk sampling. Finally, we extract features from this 3D iris surface (e.g., principal curvatures, Gaussian curvature, mean curvature and shape index) to further assist the examination and diagnosis of angle-closure glaucoma. The experiments demonstrate that our proposed method is highly effective for iris segmentation and reconstruction.

Figure 2: Flowchart of the proposed approach. Given an AS-OCT sector within the radial scan area (A), the iris boundaries (B) are first identified by our segmentation network. We then convert the segmentation into a 3D point cloud and mesh (C). A constrained Poisson-disk sampling method is used to optimize the point cloud and mesh (D), so as to obtain a more accurate iris surface. Finally, different surface measurements (E) are computed for the diagnosis of angle-closure related diseases (F).

2 Proposed Method

In this section, we introduce the proposed 3D iris reconstruction and quantification framework for angle-closure glaucoma detection in AS-OCT. Fig. 2 illustrates the pipeline of our proposed method.

2.1 Iris Segmentation Network with Wavelet Refinement Block

In [14, 13], high-resolution features from the encoder are combined with decoder features using skip connections, which bring detailed information directly to the decoder to remedy the information loss caused by pooling and convolutional operations. However, this operation also imports a large amount of irrelevant information into the decoder, which disturbs and weakens the learning ability of the network. To address this issue, we introduce a new component into the segmentation network, which we call the wavelet refinement block (WRB). It reduces the amount of redundant information while preserving local details for the decoder. Fig. 3 illustrates the architecture of our segmentation network.

2.1.1 Discrete Wavelet Transform:

Given the input feature $\mathbf{x}$, a 2D Discrete Wavelet Transform (DWT) with four convolutional filters (a low-pass filter $f_{LL}$ and three high-pass filters $f_{LH}$, $f_{HL}$, and $f_{HH}$) is performed to decompose $\mathbf{x}$ into four subband features $\mathbf{x}_{LL}$, $\mathbf{x}_{LH}$, $\mathbf{x}_{HL}$, and $\mathbf{x}_{HH}$. Taking the Haar wavelet as an example, the four filters are defined as:

$$
f_{LL} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix},\quad
f_{LH} = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix},\quad
f_{HL} = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix},\quad
f_{HH} = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}.
\tag{1}
$$

Note that all the convolutions above are performed with stride 2, yielding a subsampling factor of 2 along each spatial dimension. The DWT operation is defined as $\mathbf{x}_{LL} = (f_{LL} \otimes \mathbf{x})\downarrow_{2}$, $\mathbf{x}_{LH} = (f_{LH} \otimes \mathbf{x})\downarrow_{2}$, $\mathbf{x}_{HL} = (f_{HL} \otimes \mathbf{x})\downarrow_{2}$, and $\mathbf{x}_{HH} = (f_{HH} \otimes \mathbf{x})\downarrow_{2}$, where $\otimes$ denotes the convolution operator and $\downarrow_{2}$ denotes the standard down-sampling operator with factor 2.
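For illustration, the Haar decomposition above can be realized with four fixed 2x2 filters applied as stride-2 convolutions. The sketch below (in PyTorch) is a minimal re-implementation of Eq. (1) and the DWT operation, not the authors' released code; the 1/2 normalization factor and the grouped-convolution layout are our own assumptions.

```python
import torch
import torch.nn.functional as F

def haar_dwt(x: torch.Tensor):
    """Decompose a feature map x of shape (B, C, H, W) (H, W assumed even)
    into the four subbands x_LL, x_LH, x_HL, x_HH of shape (B, C, H/2, W/2)."""
    # The four fixed Haar filters of Eq. (1), scaled by 1/2 (an assumed normalization).
    f_ll = torch.tensor([[1., 1.], [1., 1.]]) * 0.5
    f_lh = torch.tensor([[-1., -1.], [1., 1.]]) * 0.5
    f_hl = torch.tensor([[-1., 1.], [-1., 1.]]) * 0.5
    f_hh = torch.tensor([[1., -1.], [-1., 1.]]) * 0.5
    filters = torch.stack([f_ll, f_lh, f_hl, f_hh]).unsqueeze(1)       # (4, 1, 2, 2)

    b, c, h, w = x.shape
    # Apply each 2x2 filter to every channel separately via a grouped convolution
    # with stride 2, i.e. (f * x) followed by down-sampling by a factor of 2.
    weight = filters.repeat(c, 1, 1, 1).to(dtype=x.dtype, device=x.device)  # (4C, 1, 2, 2)
    out = F.conv2d(x, weight, stride=2, groups=c)                       # (B, 4C, H/2, W/2)
    out = out.view(b, c, 4, h // 2, w // 2)
    return out[:, :, 0], out[:, :, 1], out[:, :, 2], out[:, :, 3]       # LL, LH, HL, HH
```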

Figure 3: Architecture of our segmentation model.

2.1.2 Network Architecture:

We utilize U-Net [13] as the backbone; its encoder consists of four blocks, corresponding to the first four blocks shown in Fig. 3. In order to restore boundary details, we insert three WRBs after the first three encoder blocks to carry local detail information to the decoder. Specifically, we use the Haar wavelet in Eq. (1) to decompose the corresponding feature maps channel-wise into four frequency subbands, each at half the resolution of its input. It is worth noting that the low-frequency band $\mathbf{x}_{LL}$ stores local averages of the input data, while the high-frequency bands $\mathbf{x}_{LH}$, $\mathbf{x}_{HL}$, and $\mathbf{x}_{HH}$ encode details that are significant for recovering boundaries. We then apply a convolution to each subband separately and cascade the results with the decoder feature maps (see the sketch below). Compared with the skip connection of the standard U-Net [13], which brings features from the encoder directly into the decoder, our WRB reduces the amount of redundant information introduced while preserving details, which makes our network more accurate and robust in predicting details.
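To make the data flow of the WRB concrete, the following sketch shows one plausible realization consistent with the description above: the encoder feature is decomposed with the haar_dwt helper sketched earlier, each subband is convolved separately, and the results are concatenated with the decoder feature map. The channel counts, the 3x3 kernel size, and fusion by concatenation are assumptions on our part rather than details confirmed by the paper.

```python
import torch
import torch.nn as nn

class WaveletRefinementBlock(nn.Module):
    """Hedged sketch of a WRB: DWT of the encoder feature, one convolution
    per subband, then concatenation with the decoder feature."""
    def __init__(self, enc_channels: int, out_channels: int):
        super().__init__()
        # One convolution per subband (LL, LH, HL, HH); kernel size is assumed.
        self.subband_convs = nn.ModuleList(
            [nn.Conv2d(enc_channels, out_channels, kernel_size=3, padding=1)
             for _ in range(4)]
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # Each subband has half the spatial resolution of enc_feat,
        # which is assumed to match the resolution of dec_feat.
        subbands = haar_dwt(enc_feat)
        refined = [conv(s) for conv, s in zip(self.subband_convs, subbands)]
        # Cascade the refined subbands with the decoder feature map.
        return torch.cat(refined + [dec_feat], dim=1)
```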

2.2 3D Iris Surface Reconstruction and Quantification

At present, the gold standard for diagnostic angle assessment is observation of the ACA by gonioscopy. In practice, ophthalmologists rotate the gonioscope counterclockwise, making an annotation every 15°. In a similar manner, an automated AS-OCT scan obtains multiple consecutive radial slices covering the full 360° circumference, which can then be used to reconstruct a mesh of the iris surface in 3D.

2.2.1 Surface Reconstruction:

Using the previously obtained segmentation results, the upper boundaries of the iris are used to produce a 3D point cloud of the iris surface. This point cloud is nonuniform and sparse, and consequently the generated mesh is coarse and lacks local details. As demonstrated by the representative patch shown in Fig. 4 (A), distortions of the mesh lead to a misrepresentation of the iris surface. In addition, the geometrical changes in some regions are more dramatic than in others due to the iris frill, which results in a higher point density there than in smooth areas and, in turn, a low-quality mesh, as shown in Fig. 2 (C).
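As a hedged illustration of this step, the snippet below lifts per-slice upper iris boundaries into a single 3D point cloud. It assumes that each radial slice is acquired at a known rotation angle about the pupil axis and that the boundary of each slice is given as (lateral, depth) pixel coordinates; the pixel spacings dx and dz and the centring convention are placeholders, since the exact scanner geometry is not specified here.

```python
import numpy as np

def boundaries_to_point_cloud(boundaries, dx=1.0, dz=1.0):
    """boundaries: list of (K_i, 2) arrays, one per radial slice, holding
    (u, v) pixel coordinates of the upper iris boundary.
    Returns an (N, 3) point cloud in Cartesian coordinates."""
    n_slices = len(boundaries)
    points = []
    for k, uv in enumerate(boundaries):
        theta = np.pi * k / n_slices              # radial slices assumed to span
                                                  # 180° of rotation, each covering
                                                  # both sides of the eye
        u = (uv[:, 0] - uv[:, 0].mean()) * dx     # lateral offset from the scan centre
        z = uv[:, 1] * dz                         # depth along the A-scan direction
        x = u * np.cos(theta)
        y = u * np.sin(theta)
        points.append(np.stack([x, y, z], axis=1))
    return np.concatenate(points, axis=0)
```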

To this end, we adapted constrained Poisson-disk sampling [15] of the coarse mesh to refine the surface. This method produces a more uniform and dense point cloud, while guaranteeing that samples of a certain radius are distributed according to the sampling scheme without overlapping. In practice, an adaptive radius $r$ was utilized, to obtain a representation of the point cloud that is as precise and as uniform as possible. Specifically, if the maximum curvature at a given point was larger than the global average, $r$ was set to $r_{1}$; otherwise, it was set to $r_{2}$. In our work, following empirical testing, we set $r_{1}$ and $r_{2}$ to 6 and 10, respectively. Fig. 2 (D) and Fig. 4 (B) show the optimized point cloud and mesh, which reveal the geometrical details more effectively.
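The constrained sampler of Corsini et al. [15] is considerably more involved than what can be shown here, but the curvature-adaptive radius rule can be conveyed with a simple dart-throwing sketch over candidate points: points in high-curvature regions use the smaller radius r1 and are therefore sampled more densely. This is only an illustrative simplification, not the sampler actually used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_poisson_disk(points, max_curv, r1=6.0, r2=10.0, seed=0):
    """points: (N, 3) candidate samples; max_curv: (N,) maximum curvature
    at each point. Returns indices of the accepted samples."""
    rng = np.random.default_rng(seed)
    # Smaller disk radius where curvature exceeds the global mean (denser sampling).
    radius = np.where(max_curv > max_curv.mean(), r1, r2)
    order = rng.permutation(len(points))          # visit candidates in random order
    tree = cKDTree(points)
    accepted, blocked = [], np.zeros(len(points), dtype=bool)
    for i in order:
        if blocked[i]:
            continue
        accepted.append(i)
        # Reject every remaining candidate inside this sample's disk.
        for j in tree.query_ball_point(points[i], radius[i]):
            blocked[j] = True
    return np.array(accepted)
```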

Figure 4: Illustration of the 3D iris reconstruction and feature measurements. (A) Original point cloud and mesh generated from the proposed 2D iris segmentation. (B) Optimized point cloud and mesh. (C) Visualization of different measurements.

2.2.2 Feature Extraction:

As suggested by the study in [9], quantitative iris parameters, such as iris curvature, are independently associated with the degree of angle-closure progression. Consequently, after reconstructing the iris surface, we calculated the following curvature-related measures for the subsequent diagnosis of angle-closure related diseases: principal curvatures, Gaussian curvature, mean curvature and shape index [16].

It is worth noting that the shape index is introduced in order to capture the intuitive notion of ‘shape’ both locally and globally. The shape index of each point may be defined as

$$
S = \frac{2}{\pi} \arctan\!\left(\frac{\kappa_{1} + \kappa_{2}}{\kappa_{1} - \kappa_{2}}\right),
\tag{2}
$$

where $\kappa_{1}$ and $\kappa_{2}$ are the maximum and minimum principal curvatures of the point, respectively, and $\kappa_{1} \neq \kappa_{2}$. Unlike the curvature itself, the shape index is invariant to scaling of the shape, and it gives a simple measure of the local shape that clearly distinguishes flat, concave and convex regions [16].
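Given per-vertex mean curvature H and Gaussian curvature K from any discrete curvature estimator, the principal curvatures and the shape index of Eq. (2) follow directly. The short sketch below is one straightforward realization; the curvature estimator itself is assumed to be available and is not shown, and the handling of planar and umbilical points (where Eq. (2) is undefined) is a simplification.

```python
import numpy as np

def surface_measures(H, K):
    """H, K: (N,) arrays of mean and Gaussian curvature per vertex.
    Returns principal curvatures (k1 >= k2) and the shape index of Eq. (2)."""
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))   # clamp small negative values
    k1, k2 = H + disc, H - disc                  # maximum / minimum principal curvature
    # arctan2 handles the k1 == k2 case, mapping umbilical points to +/-1
    # and planar points to 0.
    shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    return k1, k2, shape_index
```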

Figure 5: Visualization of 2D iris segmentation results. From left to right: the original image, followed by the results of U-Net [13], CE-Net [14], our method, and the ground truth.

3 Experimental Results

In order to validate the effectiveness and superiority of the proposed method, we evaluated its individual components separately: first the 2D iris segmentation, and then angle-closure glaucoma detection based on the 3D iris quantification.

3.1 Evaluation of Iris Segmentation

3.1.1 Data and metrics:

A total of 100 AS-OCT images captured by CASIA-2 (Tomey Inc., Japan) from different subjects were collected. The iris regions were annotated by an image analysis expert and an ophthalmologist, and a consensus of their results was used as the final reference standard. The dataset was divided equally into training and testing sets. For quantitative evaluation, we employed the following metrics: accuracy (Acc), sensitivity (Sen) and Dice coefficient (Dice) as region-level criteria; and root normalized mean square error (RNMSE) and Hausdorff distance (HD) as edge-level criteria. In addition, we also report the trabecular iris contact (TIC) error [4].
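For reference, the region- and edge-level metrics named above can be computed as in the following sketch for binary masks and boundary point sets. The exact implementations used in the paper are not specified, so this is only one reasonable realization (the RNMSE and TIC error are omitted).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, gt):
    """pred, gt: binary masks of the iris region."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-12)

def hausdorff_distance(pred_boundary_pts, gt_boundary_pts):
    """Symmetric Hausdorff distance between two (N, 2) point sets sampled
    from the predicted and reference iris boundaries."""
    d1 = directed_hausdorff(pred_boundary_pts, gt_boundary_pts)[0]
    d2 = directed_hausdorff(gt_boundary_pts, pred_boundary_pts)[0]
    return max(d1, d2)
```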

The segmentation performance of the proposed model was compared with the following state-of-the-art segmentation methods: FCN [17], U-Net [13], Segnet [18], and CE-Net [14]. As shown in Table 1, our model achieved the best performance in terms of all metrics, with a single exception: its HD score is 0.01 higher than that of Segnet. Fig. 5 illustrates the iris segmentation results of the different methods on two sample images. Overall, the proposed method correctly segments the iris region when compared with the corresponding manual annotations. From careful observation of the TIC regions, it is clear from visual inspection that our method identifies the iris root better than the competing methods. This is because the wavelet refinement block reduces redundancy in the image, allowing our network to pay greater attention to the salient context than the other segmentation models.

Method        Sen(%)   Dice(%)   Acc(%)   RNMSE    HD     TIC Err.(pixel)
FCN [17]      84.63    87.01     97.24    0.0601   6.60   21.77
U-Net [13]    88.19    91.84     99.26    0.0543   4.64   19.63
Segnet [18]   91.09    92.70     99.32    0.0619   4.31   17.65
CE-Net [14]   95.74    94.89     99.45    0.0512   4.41   14.43
Proposed      96.46    95.21     99.48    0.0504   4.32   13.24
Table 1: Performance of different methods in 2D iris segmentation (Sen, Dice and Acc are region-level metrics; RNMSE and HD are edge-level metrics; TIC Err. is the trabecular iris contact error in pixels).

3.2 Evaluation of Angle-Closure Glaucoma Classification

In this experiment, we first reconstructed 3D iris surfaces using different segmentation methods: FCN [17], U-Net [13], Segnet [18] and CE-Net [14]. We then used the measurements obtained in Section 2.2 as the input of a 3D classification model, PointNet [19], to classify subjects into open-angle and angle-closure cases.

3.2.1 Data and metrics:

A total of 42 AS-OCT volumes were captured from 42 eyes, with each volume containing 128 AS-OCT images. A senior ophthalmologist annotated every ACA sector of each eye (as open angle or angle-closure) based on gonioscopic examination, yielding 24 annotations per eye and a total of 1008 annotations for the dataset (504 open-angle and 504 angle-closure). Accordingly, we partitioned the 42 automatically-generated 3D iris shapes into 1008 sub-surfaces by dividing each shape into 24 angular segments (see the sketch below). We trained on 80% of these sub-surfaces and reserved 20% as the testing set. We employed Acc, Sen, Spe, and the area under the ROC curve (AUC) to measure the final angle-closure glaucoma classification performance.
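The angular partition described above can be written down in a few lines. The sketch below splits one reconstructed iris point cloud, together with its per-point measures, into 24 sectors of 15° each; it assumes the point cloud has been centred on the pupil axis with x and y lying in the iris plane.

```python
import numpy as np

def split_into_sectors(points, features, n_sectors=24):
    """points: (N, 3) iris surface points; features: (N, F) per-point
    measures (curvatures, shape index, ...). Returns a list of
    (points_s, features_s) pairs, one per angular sector."""
    angles = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2 * np.pi)
    sector_id = np.floor(angles / (2 * np.pi / n_sectors)).astype(int)
    sector_id = np.clip(sector_id, 0, n_sectors - 1)   # guard against angle == 2*pi
    return [(points[sector_id == s], features[sector_id == s])
            for s in range(n_sectors)]
```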

3.2.2 Results:

Table 2 reports the classification performance using features extracted from the 3D iris reconstructions produced by the different segmentation methods. Clear margins of improvement are achieved when the proposed method is compared with the other state-of-the-art segmentation models. For example, our method exhibits a large advantage over FCN, with increases in Acc and Sen of about 8.33% and 10.5%, respectively. This is because our method reduces noise and other redundancies in the AS-OCT images, thereby allowing the decoder to concentrate on high-level context, such as the TIC region, which is more beneficial for disease detection.

3.2.3 Effectiveness of optimization step:

In addition, we also demonstrate how the classification result benefits from the point cloud optimization process. As seen in Fig. 4 (A), the generated 3D iris surface suffers from distortion without the subsequent optimization step (Fig. 4 (B)), which leads to incorrect curvature estimation and further compromises classification accuracy. This is evidenced in Table 2: without the point cloud optimization step, the classification results show decrements of 3.1%, 2.1%, and 4.2% in Acc, Sen and Spe, respectively.

3.2.4 Comparison of 2D and 3D features:

All existing angle-closure glaucoma classification methods rely on 2D features obtained from a single AS-OCT slice. To further verify whether the features extracted from the 3D iris surface obtained by our method can improve classification performance, we compared the proposed method to conventional approaches that use 2D feature representations: histograms of oriented gradients (HOG) features [20] with a linear Support Vector Machine (SVM), AlexNet [21], VGG [22], and ResNet [23]. The classification performance of these methods is reported in Table 2. By using features extracted from a reconstructed 3D iris surface, the 3D classification network, PointNet, achieved the best performance in terms of Acc, Spe, and AUC. This confirms that 3D features are more helpful in improving the accuracy of angle-closure glaucoma classification than 2D features alone.

Method                                   Acc(%)   Sen(%)   Spe(%)   AUC(%)
2D: HOG + SVM                            85.01    74.33    95.90    91.92
2D: ResNet34                             96.87    97.91    94.79    99.52
2D: AlexNet                              91.99    99.60    84.37    93.81
2D: VGG16                                94.18    96.48    90.13    98.95
3D: FCN + PointNet                       90.10    88.54    91.66    97.55
3D: U-Net + PointNet                     92.70    89.58    95.83    98.82
3D: Segnet + PointNet                    94.79    96.87    92.70    99.01
3D: Our WRB + PointNet                   95.31    96.87    93.75    99.17
3D: Our WRB + Optimization + PointNet    98.43    98.95    97.91    99.83
Table 2: Angle-closure glaucoma classification results obtained by different methods.

4 Conclusion

Existing methods to identify gonioscopic angle-closure have focused solely on the extraction of features from 2D slices, which is less satisfactory for identifying angle-closure glaucoma subtypes. In this work, we have developed a novel framework for the reconstruction and quantification of the 3D iris surface from AS-OCT imagery. To the best of our knowledge, this is the first time a comprehensive surface-based framework has been applied to model and analyze the 3D iris from AS-OCT. The high performance in the segmentation experiments shows the ability and robustness of our model in extracting the iris from single AS-OCT slices. Feature analysis and glaucoma screening were then performed based on the 3D iris reconstruction, demonstrating the high effectiveness of our approach. The proposed framework opens the possibility of investigating AS-OCT from a new perspective.

References

  • [1] Chansangpetch, S., Rojanapongpun, P., Lin, S.C.: Anterior segment imaging for angle closure. American journal of ophthalmology 188 (2018) xvi–xxix
  • [2] Ang, M., Baskaran, M., et al.: Anterior segment optical coherence tomography. Progress in retinal and eye research 66 (2018) 132–156
  • [3] Xu, B.Y., Chiang, M., et al.: Deep learning classifiers for automated detection of gonioscopic angle closure based on anterior segment oct images. American journal of ophthalmology 208 (2019) 273–280
  • [4] Fu, H., Xu, Y., et al.: Segmentation and quantification for angle-closure glaucoma assessment in anterior segment oct. IEEE transactions on medical imaging 36(9) (2017) 1930–1938
  • [5] Fu, H., Xu, Y., et al.: Multi-Context Deep Network for Angle-Closure Glaucoma Screening in Anterior Segment OCT. In: International Conference on Medical image computing and computer-assisted intervention. (2018) 356–363
  • [6] Fu, H., Baskaran, M., et al.: A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. American journal of ophthalmology 203 (2019) 37–45
  • [7] Fu, H., Xu, Y., et al.: Angle-Closure Detection in Anterior Segment OCT Based on Multilevel Deep Network. IEEE Transactions on Cybernetics (2020)
  • [8] Wang, B., Sakata, L.M., et al.: Quantitative iris parameters and association with narrow angles. Ophthalmology 117(1) (2010) 11–17
  • [9] Huang, J., Wang, Z., Wu, Z., Li, Z., Lai, K., Ge, J.: Comparison of ocular biometry between eyes with chronic primary angle-closure glaucoma and their fellow eyes with primary angle-closure or primary angle-closure suspect. Journal of glaucoma 24(4) (2015) 323–327
  • [10] Ni Ni, S., Tian, J., Marziliano, P., Wong, H.T.: Anterior chamber angle shape analysis and classification of glaucoma in ss-oct images. Journal of ophthalmology 2014 (2014)
  • [11] Shang, Q., Zhao, Y., et al.: Automated iris segmentation from anterior segment oct images with occludable angles via local phase tensor. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE (2019) 4745–4749
  • [12] Cho, H.k., Ahn, D., Kee, C.: Evaluation of circumferential angle closure using iridotrabecular contact index after laser iridotomy by swept-source optical coherence tomography. Acta ophthalmologica 95(3) (2017) e190–e196
  • [13] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention, Springer (2015) 234–241
  • [14] Gu, Z., Cheng, J., et al.: Ce-net: context encoder network for 2d medical image segmentation. IEEE transactions on medical imaging 38(10) (2019) 2281–2292
  • [15] Corsini, M., Cignoni, P., Scopigno, R.: Efficient and flexible sampling with blue noise properties of triangular meshes. IEEE transactions on visualization and computer graphics 18(6) (2012) 914–924
  • [16] Zhao, Y., Liu, Y., et al.: Region-based saliency estimation for 3d shape analysis and understanding. Neurocomputing 197 (2016) 1–13
  • [17] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR. (2015) 3431–3440
  • [18] Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12) (2017) 2481–2495
  • [19] Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. In: CVPR. (2017) 652–660
  • [20] Xu, Y., Liu, J., et al.: Anterior chamber angle classification using multiscale histograms of oriented gradients for glaucoma subtype identification. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE (2012) 3167–3170
  • [21] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105
  • [22] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [23] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. (2016) 770–778