1 Introduction
Automated segmentation of the optic disc (OD) and optic cup (OC) in retinal fundus images is a fundamental task in medical image analysis. It supports the quantification of clinical measures of retina-related diseases, such as the rim thickness, the ISNT rule [20], and the vertical cup-to-disc ratio (CDR) [13]. These measures further assist in disease diagnosis and progression assessment, and facilitate doctor-patient communication.
In the fundus, the OD consists of two parts: the OC, which appears as a pit in the centre, and the neuroretinal rim, which packs the nerve fibres. Thus, depth is a reliable feature for segmenting the OC and the rim. However, in 2D fundus images the depth information is completely absent, which makes the OC segmentation problem highly ill-defined.
The current consensus on the segmentation problem is to learn good representations for the OC pixels, rim pixels, and background pixels, and deep features are now the standard choice. For example, U-shaped networks are designed in [24] and [1] to learn deep features in the Cartesian domain, and M-Net [11] is designed to learn deep features in the polar domain. However, representations learned from a single domain are still insufficient to distinguish the OC, rim, and background pixels.
In this paper, we argue that integrating representations learned by CNNs from both the Cartesian and polar domains contributes to the accurate segmentation of the OD and OC. Intuitively, the shapes and spatial layouts of the OC, rim, and vessels are completely different in the two domains, as shown in Fig. 1(a). This implies that different contextual information will be learned from images in different domains. Naturally, we are motivated to achieve richer representations for segmentation by exploiting complementary contextual information from both domains and integrating them.
Theoretically, CNNs in the Cartesian domain are equivariant to translation [9]. More specifically, for any pixel in an image, the feature vector learned by CNNs in the Cartesian domain is translation invariant, as illustrated in Fig. 1(b). On the other hand, CNNs in the polar domain are equivariant to rotation [10]: for any pixel of an image, the feature vector learned by CNNs in the polar domain is rotation invariant, as illustrated in Fig. 1(c). Fusing the translation equivariant representations from the Cartesian domain with the rotation equivariant representations from the polar domain yields a richer representation with higher expressive capability and better predictive performance than either alone.
In this paper, we propose a Cartesian-polar dual-domain network (DDNet) for the joint OD and OC segmentation. It first learns feature representations from both the Cartesian domain and the polar domain by a two-branch domain encoder. Then it fuses the representations from the two domains by the proposed dual-domain fusion module. The fusion module builds the correspondence between the rectilinear grid and the polar grid by a differentiable polar transform layer, and learns complementary contextual information from the domain feature maps. Finally, the fused features are used for dense classification by a decoder.
In summary, there are three contributions in our paper:

We propose a novel OD and OC segmentation approach, which for the first time considers both the Cartesian domain and the polar domain and explores their complementarity.

We design a Cartesian-polar dual-domain network (DDNet) with two encoding branches to learn rich contextual information from the two domains for the joint OD and OC segmentation.

We propose a dual-domain fusion module. It allows element-wise fusion of the feature maps on different grids from the two domains, and enhances the expressive capability by learning the element-wise feature importance across the two domains.
2 Related Works
Owing to its wide clinical applicability, the joint OD and OC segmentation has attracted much attention in the past decades. The OD and OC segmentation approaches have evolved from handcrafted features to deep learning, while the domain in which segmentation is performed has extended from the Cartesian domain to the polar domain.
Handcrafted features in the Cartesian domain. Most approaches are developed in the Cartesian domain with handcrafted features, mainly along two lines of work. One is based on shape priors and tries to delineate the boundaries, e.g. [18, 2, 30, 33, 16, 4]. The other is based on appearance priors and aims to distinguish the OD and OC pixels/regions from the background, e.g. [8, 32, 28, 23]. Due to the limited expressive capability of handcrafted features, these methods struggle to segment the subtle OC. They are also fragile when the OD is surrounded by bright exudates, peripapillary atrophy, etc.
Deep features in the Cartesian domain. More effective OD and/or OC segmentation approaches are rooted in powerful representation learning within the Cartesian domain. Deep Retinal Image Understanding (DRIU) [19] takes the five-stage VGG-16 [25] as the base network and learns feature maps for OD segmentation. In [24] and [1], variants of U-Net [22] were proposed to segment the OD and OC. Nevertheless, deep learning based segmentation approaches in the Cartesian domain encounter difficulties in learning due to the imbalanced class distributions, and small classes such as the OC pixels are prone to be misclassified.
Deep features in the polar domain. Most recently, M-Net [11], which performs segmentation in the polar domain, was proposed. It learns representations directly from polar images by a simplified U-Net with multi-scale inputs and produces polar segmentation maps. An inverse polar transform then maps the polar segmentation results back to the Cartesian domain. Although M-Net [11] improves the segmentation performance significantly, it ignores the contextual information in the Cartesian domain.
Different from the previous approaches, our DDNet learns representations from both domains for the joint segmentation of the OD and OC. To the best of our knowledge, this is the first segmentation network designed to explore the complementarity of the Cartesian and polar domains.
3 Cartesian-polar Dual-domain Network
The proposed Cartesian-polar dual-domain segmentation network (DDNet) is rooted in a two-branch domain feature encoder and a well-designed dual-domain fusion module. With the rich representations learned from the two domains, the DDNet makes dense predictions by a decoder.
3.1 Network Architecture
The Cartesian domain is the original domain in which both natural images and fundus images are captured. In this domain, the rectilinear grid is used and geometric structures are well visualised. Naturally, end-to-end segmentation networks [17, 6, 21, 3, 7] were developed for natural images in the Cartesian domain. By directly transferring the successes achieved in natural image segmentation to fundus images, U-shaped networks [19, 24, 1] were proposed. To alleviate the imbalanced class distributions among the OC, rim, and background pixels, M-Net [11] was proposed to segment the OC and OD in the polar domain. However, the contextual information learned from a single domain is still insufficient, so it is highly desirable to learn richer representations for the segmentation task.
We observe that the shapes and spatial layouts of the OC, rim, OD, and vessels are completely different in the Cartesian and polar domains. For example, the OC is ellipse-like and the rim is ring-like in the Cartesian domain, while both are band-like in the polar domain. The vessels in the Cartesian domain extend radially from superior and inferior towards the OD, while they are laid out almost vertically in the polar domain. Moreover, when transforming an image from the Cartesian domain to the polar domain, structures close to the transformation origin are amplified while structures far from it are squeezed. Such evident differences imply that complementary contextual information is embedded in the two domains, and learning representations from both domains and fusing them results in richer representations. To this end, we propose the DDNet.
Fig. 2 illustrates the architecture of our DDNet. It involves the following four components:

Cartesian domain encoding branch. This branch maps the Cartesian image into feature maps on the rectilinear grid. We directly use the modified Xception model [7] to learn the feature representations, since it has shown promising performance in natural scene semantic segmentation. It can be divided into five stages according to the spatial sizes of the feature maps. For convenience, we denote the feature maps output by the last convolutional layer of the $i$-th stage as $F_i^{C}$.

Polar domain encoding branch. This branch first maps the Cartesian image on the rectilinear grid to a polar image on the polar grid by a polar transform layer (PTL), then forwards the polar image through the modified Xception model [7] and generates feature maps on the polar grid. For convenience, we denote the feature maps output by the last convolutional layer of the $i$-th stage as $F_i^{P}$.

Dual-domain fusion module. This module is designed to incorporate $F_i^{C}$ on the rectilinear grid and $F_i^{P}$ on the polar grid. It first transforms the feature maps on the rectilinear grid to feature maps on the polar grid by a PTL. Then it fuses the feature maps $F_i^{C \to P}$ and $F_i^{P}$ on the polar grid element-wise by the importance-based fusion block (IFB). A detailed description is given in the next subsection.

Decoder. The decoder first rescales the fused feature maps to a unified spatial size, then concatenates them and makes dense predictions to obtain the segmentation maps on polar grid.
The segmentation maps produced by our DDNet are on the polar grid. The joint segmentation problem is formulated as a three-class (i.e., OC, rim, background) dense classification, and the cross-entropy loss is used. In the training phase, our DDNet is supervised by the segmentation ground truth in the polar domain, which is transformed from the Cartesian ground truth by the PTL. During testing, an extra inverse polar transform is applied to the outputs of the DDNet to obtain the final segmentation results on the rectilinear grid.
3.2 Dual-domain Fusion Module
Given the feature maps on two different grids, dual-domain feature fusion for dense prediction pursues an element-wise weighting strategy that best fits the segmentation ground truth. This requires building the correspondence between the two domains and exploiting the complementary information for element-wise classification. Fig. 3 illustrates the dual-domain fusion module.
The Cartesian domain branch takes the image on the rectilinear grid as input and outputs feature maps on the rectilinear grid. Differently, the polar domain branch first transforms the image on the rectilinear grid to one on the polar grid, then outputs feature maps on the polar grid. To build the correspondence between the feature maps on different grids, we adopt the PTL [10] and transform the feature maps on the rectilinear grid to the polar grid. Formally, denoting the feature maps with $C$ channels and spatial size $H \times W$ in the Cartesian domain from the $i$-th stage as $F_i^{C}$, and the point coordinates on the rectilinear grid as $(x, y)$, the PTL [10] adopts the differentiable image sampling technique [12] and outputs the sampled polar feature maps $F_i^{C \to P}$ with the same spatial size and channels, whose point coordinates on the polar grid are denoted as $(\rho, \phi)$. In terms of $(x, y)$ and $(\rho, \phi)$, the PTL is expressed as:

$$x = x_o + \rho \cos\phi, \qquad y = y_o + \rho \sin\phi \qquad (1)$$

where $(x_o, y_o)$ is the centre point of $F_i^{C}$, $\rho \in [0, \rho_{\max}]$ and $\phi \in [0, 2\pi)$.
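As a hedged illustration of the sampling in Eq. 1, the following NumPy sketch maps a single-channel feature map onto a polar grid with bilinear interpolation (the differentiable analogue of the sampling in [12]). The grid resolution and radius range are our own choices for this sketch, not values stated in the paper.

```python
import numpy as np

def polar_transform(feat, n_rho=None, n_phi=None):
    """Sample a (H, W) feature map onto a polar grid via Eq. 1:
    x = x_o + rho*cos(phi), y = y_o + rho*sin(phi), with bilinear lookup."""
    h, w = feat.shape
    n_rho = n_rho or h
    n_phi = n_phi or w
    yo, xo = (h - 1) / 2.0, (w - 1) / 2.0        # centre point (x_o, y_o)
    rho = np.linspace(0.0, min(xo, yo), n_rho)    # radial coordinates
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    x = xo + rho[:, None] * np.cos(phi[None, :])  # Eq. 1
    y = yo + rho[:, None] * np.sin(phi[None, :])
    # bilinear sampling (differentiable in a real framework [12])
    fx, fy = np.floor(x), np.floor(y)
    wx, wy = x - fx, y - fy
    x0 = np.clip(fx.astype(int), 0, w - 1)
    x1 = np.clip(fx.astype(int) + 1, 0, w - 1)
    y0 = np.clip(fy.astype(int), 0, h - 1)
    y1 = np.clip(fy.astype(int) + 1, 0, h - 1)
    return ((1 - wx) * (1 - wy) * feat[y0, x0] + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0] + wx * wy * feat[y1, x1])
```

The output has shape (n_rho, n_phi); its first row samples the transform origin, so it is constant across angles.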
To select the informative features from the two domains for each element and enhance the expressive capability of the representations, the importance of the features across the domains is learned. This is implemented by learning two importance maps: the channel importance map $M_c$, encoding the importance of each feature channel from the two domains, and the location importance map $M_s$, encoding the importance of each spatial location.
Formally, taking the feature maps $F_i^{C \to P}$ and $F_i^{P}$ from the $i$-th stage of the Cartesian domain encoding branch and the polar domain encoding branch respectively as input $F_i$, the importance-weighted feature maps are obtained by:

$$\tilde{F}_i = M_c(F_i) \otimes F_i, \qquad \hat{F}_i = M_s(\tilde{F}_i) \otimes \tilde{F}_i \qquad (2)$$

where $\otimes$ is element-wise multiplication, $\tilde{F}_i$ is the feature map weighted by $M_c$, and $\hat{F}_i$ is the final output. Partially inspired by CBAM [27], which learns a channel attention map and a spatial attention map at each CNN stage, we adopt the same implementation to learn $M_c$ and $M_s$. Finally, we add the importance-weighted feature maps $\hat{F}_i^{C \to P}$ and $\hat{F}_i^{P}$, and use a convolutional layer to fuse the feature maps across the channel dimension:

$$F_i^{fused} = W \ast (\hat{F}_i^{C \to P} + \hat{F}_i^{P}) \qquad (3)$$

where $W$ denotes the weights of the convolutional layer.
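The importance-based fusion of Eqs. 2-3 can be sketched as below. This is a simplified stand-in, not the paper's implementation: the channel and location importance maps use plain average pooling with random weights in place of CBAM's learned MLP and convolution, and the final mixing is a 1x1 convolution expressed as a matrix product over channels.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def importance_fusion(f_c2p, f_p, rng):
    """Sketch of the IFB (Eqs. 2-3) on (C, H, W) feature maps.
    Random weights stand in for the learned CBAM-style [27] parameters."""
    def weight(f):
        c, h, w = f.shape
        # channel importance M_c from global average pooling (simplified)
        w1 = rng.standard_normal((c, c)) / np.sqrt(c)
        m_c = sigmoid(w1 @ f.mean(axis=(1, 2)))       # (c,)
        f = f * m_c[:, None, None]                    # Eq. 2, channel step
        # location importance M_s from channel-wise average pooling
        m_s = sigmoid(f.mean(axis=0))                 # (h, w)
        return f * m_s[None, :, :]                    # Eq. 2, location step
    fused = weight(f_c2p) + weight(f_p)               # Eq. 3, sum of branches
    c = fused.shape[0]
    w_1x1 = rng.standard_normal((c, c)) / np.sqrt(c)  # 1x1 conv weights W
    return np.einsum('oc,chw->ohw', w_1x1, fused)     # fuse across channels
```

A 1x1 convolution over channels is exactly a per-pixel matrix product, which is why the einsum suffices here.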
3.3 Analysis
From the view of representation theory, our proposed DDNet not only learns representations with strong discriminative power, but also benefits from the translation equivariance and rotation equivariance that the two encoding branches achieve respectively.
Essentially, the Cartesian domain branch learns feature representations by performing translational convolutions on a translation symmetry group [9, 15]. Consequently, the feature representation $\Phi^{C}(x)$ for an input image $x$ achieves translation equivariance and satisfies:

$$\Phi^{C}(T_t\,x) = T_t\,\Phi^{C}(x) \qquad (4)$$

where $T_t$ is a translation action. This means that translating the input image (forming $T_t\,x$) and then passing it through the Cartesian domain encoding branch gives the same result as first forwarding the input image through the branch (forming $\Phi^{C}(x)$) and then performing the same translation on the learned representation. More specifically, as exemplified in Fig. 1(b), for any pixel $p$, denoting the corresponding pixel in the translated image as $p'$, the Cartesian domain encoding branch maps them to the same representation, i.e., $\Phi^{C}(x)(p) = \Phi^{C}(T_t\,x)(p')$. In other words, the per-pixel representations from the Cartesian domain encoding branch are invariant to translation, which benefits pixel-wise classification.
Different from the Cartesian domain encoding branch, the polar domain encoding branch learns feature representations by performing rotation convolutions on a rotation symmetry group [10]. Formally, we model the polar domain encoding branch as the composition of two mappings. The first is the PTL $\mathcal{P}$, which maps the Cartesian image to the polar domain and reduces a rotation action on the Cartesian image to a translation action on the corresponding polar image. That is, for a rotation action $R_\theta$ on $x$, there exists a translation $T_{t(\theta)}$ satisfying:

$$\mathcal{P}(R_\theta\,x) = T_{t(\theta)}\,\mathcal{P}(x) \qquad (5)$$

The second is the convolutional neural network $\Phi$, which maps the polar image $\mathcal{P}(x)$ to the feature representation $\Phi(\mathcal{P}(x))$. Combining Eq. 5 with the translation equivariance of $\Phi$ (Eq. 4), the feature representations from the polar domain encoding branch $\Phi^{P} = \Phi \circ \mathcal{P}$ satisfy:

$$\Phi^{P}(R_\theta\,x) = \Phi(T_{t(\theta)}\,\mathcal{P}(x)) = T_{t(\theta)}\,\Phi^{P}(x) \qquad (6)$$
This means that rotating the input image (forming $R_\theta\,x$) and then forwarding it to the polar domain encoding branch gives the same result as first forwarding the input image (forming $\Phi^{P}(x)$) and then performing a translation on the learned feature representation. More specifically, as exemplified in Fig. 1(c), for any pixel $p$ in $x$ and the corresponding pixel $p'$ in the rotated image $R_\theta\,x$, the polar domain encoding branch maps them to the same feature vector, i.e., $\Phi^{P}(x)(p) = \Phi^{P}(R_\theta\,x)(p')$. In other words, the per-pixel representations from the polar domain branch are invariant to rotation, which benefits pixel-wise classification.
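The reduction of rotations to angular translations (Eq. 5) can be checked numerically. The sketch below samples an analytic image on a polar grid and verifies that rotating the image by a grid-aligned angle circularly shifts the polar samples along the angle axis; the image f and the grid sizes are arbitrary choices for illustration.

```python
import numpy as np

# Analytic image so the check is exact up to float error.
f = lambda x, y: np.sin(3 * x) + np.cos(2 * y) + x * y

n_rho, n_phi = 16, 36
rho = np.linspace(0.1, 1.0, n_rho)[:, None]
phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)[None, :]

def polar_sample(img):
    """Evaluate an image function on the polar grid (Eq. 1 coordinates)."""
    return img(rho * np.cos(phi), rho * np.sin(phi))

k = 9                                    # rotate by 2*pi*k/n_phi = 90 degrees
theta = 2 * np.pi * k / n_phi
# Rotating the image by theta means composing f with the inverse rotation.
g = lambda x, y: f(np.cos(theta) * x + np.sin(theta) * y,
                   -np.sin(theta) * x + np.cos(theta) * y)

shift = np.roll(polar_sample(f), k, axis=1)  # translation along angle axis
print(np.allclose(polar_sample(g), shift))   # True: Eq. 5 holds
```

Because the rotation angle is a multiple of the angular grid spacing, the translation t(theta) is an exact circular shift of k columns.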
By fusing the translation invariant representations and rotation invariant representations, our DDNet obtains more powerful representations. Next, we will demonstrate its effectiveness by experiments.
4 Experimental Results
The segmentation performance of our DDNet is first evaluated and compared on the public ORIGA dataset [31] for OD and OC segmentation; we then apply it to CDR estimation. ORIGA [31] contains 650 fundus images. 325 images, including 73 glaucoma cases, are used for training, and 325 images, including 95 glaucoma cases, are used for testing. For each image, the segmentation masks of the OD and OC and the CDR value annotated by experts are provided.
4.1 Implementation Details
Data augmentation. During training, the images are randomly flipped horizontally and scaled by a random factor ranging from 0.9 to 1.1. The OD occupies only a small region of the retinal fundus image. To crop a small OD window, we simply train an OD segmentor by fine-tuning the pretrained DeepLabv3+ [7]. According to the OD mask from the OD segmentor, we compute the OD centre and crop a window around it as the input to the proposed DDNet.
Training. The proposed DDNet is built on DeepLabv3+ [7]. A two-stage training procedure is adopted. First, we pretrain the parameters of the Cartesian domain encoding branch and the polar domain encoding branch separately by fine-tuning DeepLabv3+ [7] pretrained on Pascal VOC, obtaining the single-domain segmentation models denoted as DeepLabv3+ (Cartesian) and DeepLabv3+ (polar) respectively. Then the whole DDNet is trained. The hyper-parameters are: mini-batch size 4, learning rate 0.007 in the first stage and 0.001 in the second, maximum number of training iterations 10,000, momentum 0.9, and weight decay 0.00004.
Testing. At test time, the whole input image is first forwarded to the OD segmentor and the OD window is cropped. We then forward the OD window to the trained DDNet and obtain the segmentation masks of the OD and OC on the polar grid. Finally, an inverse polar transform yields the final segmentation masks on the rectilinear grid.
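A minimal sketch of the inverse polar transform used at test time might look as follows. It uses nearest-neighbour lookup (appropriate for discrete label maps), the radius convention mirrors the forward PTL of Eq. 1, and the function name is our own.

```python
import numpy as np

def inverse_polar(polar_map, out_h, out_w):
    """Map a (n_rho, n_phi) polar segmentation map back onto a rectilinear
    grid by looking up each Cartesian pixel's (rho, phi) coordinates."""
    n_rho, n_phi = polar_map.shape
    yo, xo = (out_h - 1) / 2.0, (out_w - 1) / 2.0   # transform origin
    rho_max = min(xo, yo)
    yy, xx = np.mgrid[0:out_h, 0:out_w]
    rho = np.hypot(xx - xo, yy - yo)
    phi = np.mod(np.arctan2(yy - yo, xx - xo), 2 * np.pi)
    # nearest polar grid indices for each Cartesian pixel
    ri = np.clip(np.round(rho / rho_max * (n_rho - 1)), 0, n_rho - 1).astype(int)
    pi = np.round(phi / (2 * np.pi) * n_phi).astype(int) % n_phi
    out = polar_map[ri, pi]
    out[rho > rho_max] = 0          # outside the sampled disc: background
    return out
```

Pixels beyond the sampled radius have no polar counterpart and are assigned the background label.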
Table 1. Segmentation performance comparisons on ORIGA. E denotes the overlapping error and BLE (mean/std) denotes the boundary location error and its standard deviation. The best and second best results are marked in red and blue respectively. (Best viewed in colour)

Method                            E_OD    BLE_OD      E_OC    BLE_OC       E_rim
Handcrafted:
  R-bend [14]                     0.129   -           0.395   -            -
  ASM [29]                        0.148   -           0.313   -            -
  Superpixel [8]                  0.102   -           0.264   -            0.299
  LRR [28]                        -       -           0.244   -            -
Deep learning:
  lightweight U-Net [24]          0.115   -           0.287   -            0.303
  FC-DenseNet [1]                 0.067   -           0.231   -            -
  M-Net [11]                      0.071   6.70/6.93   0.230   14.38/9.96   0.233
  DeepLabv3+ [7] (Cartesian)      0.059   5.51/3.79   0.209   12.93/8.39   0.212
  DeepLabv3+ [7] (polar)          0.057   5.26/3.38   0.214   13.23/9.09   0.210
  DDNet (ours)                    0.054   5.01/3.35   0.204   12.48/8.39   0.201
4.2 Segmentation Performances
We adopt the overlapping error and the average boundary location error, introduced in [11] and [5] respectively, to evaluate the performance. The former measures the ratio of the number of wrongly classified pixels to the number of pixels in the union of the segmentation mask and the ground truth. The latter measures the average absolute distance between the boundary of the segmentation mask and that of the ground truth:

$$BLE = \frac{1}{n}\sum_{\theta \in \Theta} \left| d_\theta^{gt} - d_\theta^{s} \right| \qquad (7)$$

where $d_\theta$ is the Euclidean distance from the boundary point in direction $\theta$ to the centroid of the target, and $\Theta$ is the set of $n$ uniformly sampled directions. We follow the setting of $n$ used in [5].
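Eq. 7 can be sketched in NumPy as below. The ray-casting boundary localisation and the default n = 24 are our own illustrative choices (the text only states that n follows the setting of [5]).

```python
import numpy as np

def boundary_distances(mask, n=24):
    """Distance from the mask centroid to the boundary in n uniformly
    sampled directions, found by casting a ray outwards in each direction.
    n = 24 is an assumed value for illustration."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    h, w = mask.shape
    dists = np.zeros(n)
    for i, theta in enumerate(np.linspace(0, 2 * np.pi, n, endpoint=False)):
        r = 0.0
        while True:
            y = int(round(cy + (r + 1) * np.sin(theta)))
            x = int(round(cx + (r + 1) * np.cos(theta)))
            if not (0 <= y < h and 0 <= x < w and mask[y, x]):
                break
            r += 1.0
        dists[i] = r
    return dists

def ble(mask_pred, mask_gt, n=24):
    """Average boundary location error of Eq. 7."""
    return np.abs(boundary_distances(mask_pred, n)
                  - boundary_distances(mask_gt, n)).mean()
```

For two concentric discs of radii 10 and 14, this yields a BLE close to 4 pixels, up to discretisation error.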
We compare the segmentation performance of the proposed DDNet with R-bend [14], ASM [29], Superpixel [8], LRR [28], lightweight U-Net [24], FC-DenseNet [1], DeepLabv3+ [7] (Cartesian) and DeepLabv3+ [7] (polar). The first four are based on handcrafted features; the others are based on deep features.
We report the performance comparisons in Table 1, in which the overlapping errors of the OD, OC, and rim are denoted as $E_{OD}$, $E_{OC}$ and $E_{rim}$, and the average boundary location errors of the OD and OC as $BLE_{OD}$ and $BLE_{OC}$. It is observed that: (1) the proposed DDNet achieves the lowest overlapping errors as well as the lowest boundary location errors; (2) compared to M-Net [11], which is specifically designed for the joint OD and OC segmentation, our DDNet lowers the overlapping errors of the OD, OC and rim by 0.017, 0.026 and 0.032 respectively, and the boundary location errors of the OD and OC by 1.69 and 1.90 pixels respectively; (3) compared to DeepLabv3+ [7] in the Cartesian domain, our DDNet reduces the overlapping errors of the OD, OC and rim by 0.005, 0.005 and 0.011 respectively; (4) compared to DeepLabv3+ [7] in the polar domain, our DDNet reduces the overlapping errors of the OD, OC and rim by 0.003, 0.010 and 0.009 respectively.
The segmentation results by M-Net [11], DeepLabv3+ [7] (Cartesian), DeepLabv3+ [7] (polar) and the proposed DDNet are illustrated in Fig. 4. DeepLabv3+ [7] (polar) achieves results superior to DeepLabv3+ [7] (Cartesian) in Fig. 4(a) and Fig. 4(c) but inferior results in Fig. 4(b) and Fig. 4(d). By fusing the features extracted from the two domains, our DDNet is able to achieve superior results. Compared to M-Net [11], our DDNet achieves more accurate segmentation results. The last column shows a challenging example on which all methods fail to segment the OC.
4.3 Application on the CDR Estimation
Glaucoma is the leading cause of irreversible vision impairment and blindness [26]. In clinical practice, the diagnosis of glaucoma commonly relies on multiple measures such as the CDR, the visual field, and the intraocular pressure. Generally, the larger the CDR, the higher the patient's risk. In what follows, we estimate the CDR from the segmentation masks of the OD and OC.
The CDR is defined as the ratio of the vertical diameter of the OC to that of the OD. To evaluate the segmentation approaches on CDR estimation, we follow [11] and adopt the absolute error. Fig. 5 shows the results of M-Net [11], DeepLabv3+ [7] (Cartesian), DeepLabv3+ [7] (polar), and our DDNet. Our DDNet attains the lowest absolute CDR estimation error. DeepLabv3+ [7] achieves similar absolute CDR errors in the Cartesian and polar domains, and both are superior to M-Net [11].
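The vertical CDR computation from the predicted masks can be sketched as below; the function name is ours, and the masks are assumed to be binary arrays on the rectilinear grid.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: the vertical diameter of the OC mask
    divided by the vertical diameter of the OD mask."""
    def vdiam(mask):
        rows = np.nonzero(mask.any(axis=1))[0]  # rows containing the mask
        return rows.max() - rows.min() + 1
    return vdiam(cup_mask) / vdiam(disc_mask)
```

For example, a cup spanning 8 rows inside a disc spanning 16 rows gives a CDR of 0.5.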
5 Conclusion
This paper focuses on the joint segmentation of the OD and OC in retinal fundus images. Due to the absence of depth, representations learned from a single domain are insufficient to partition the OC and rim. To improve performance, we propose to learn representations from both the Cartesian domain and the polar domain, and present the Cartesian-polar dual-domain segmentation network (DDNet). On one hand, our DDNet benefits from the complementary contextual information exploited from images in the Cartesian and polar domains. On the other hand, it benefits from the translation equivariance achieved by CNNs in the Cartesian domain and the rotation equivariance achieved by CNNs in the polar domain. By fusing the representations from both domains, the representations learned by our DDNet are more powerful. We validate the state-of-the-art segmentation performance of the DDNet on ORIGA [31]. When applied to CDR estimation, the DDNet achieves the lowest absolute error, which demonstrates its potential for glaucoma screening. Our DDNet benefits from the complementarity of the two domains, but how to fuse the feature maps such that the fused features are equivariant to both translation and rotation remains an open question. This will be our future work.
References
 [1] B. Al-Bander, B. M. Williams, W. Al-Nuaimy, M. A. Al-Taee, H. Pratt, and Y. Zheng. Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis. Symmetry, 10, 2018.
 [2] A. Aquino, M. E. Gegundez-Arias, and D. Marin. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Transactions on Medical Imaging, 29(11):1860–1869, Nov 2010.
 [3] V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoderdecoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495, 2017.
 [4] E. J. Bekkers, M. Loog, B. M. t. H. Romeny, and R. Duits. Template matching via densities on the roto-translation group. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2):452–466, Feb 2018.
 [5] A. Chakravarty and J. Sivaswamy. Joint optic disc and cup boundary extraction from monocular fundus images. Computer Methods and Programs in Biomedicine, 147:51 – 61, 2017.
 [6] L.C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2014.
 [7] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
 [8] J. Cheng, J. Liu, Y. Xu, F. Yin, D. W. K. Wong, N. M. Tan, D. Tao, C. Y. Cheng, T. Aung, and T. Y. Wong. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Transactions on Medical Imaging, 32(6):1019–1032, 2013.
 [9] T. Cohen and M. Welling. Group equivariant convolutional networks. In ICML, pages 2990–2999, 2016.
 [10] C. Esteves, C. Allen-Blanchette, X. Zhou, and K. Daniilidis. Polar transformer networks. In ICLR, 2018.
 [11] H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, and X. Cao. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Transactions on Medical Imaging, PP(99):1–1, 2018.
 [12] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, pages 2017–2025, 2015.
 [13] J. B. Jonas, A. Bergua, P. Schmitz–Valckenberg, K. I. Papastathopoulos, and W. M. Budde. Ranking of optic disc variables for detection of glaucomatous optic nerve damage. Investigative Ophthalmology and Visual Science, 41(7):1764, 2000.
 [14] G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas. Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Transactions on Medical Imaging, 30(6):1192–1205, 2011.
 [15] R. Kondor and S. Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In ICML, 2018.
 [16] A. Li, Z. Niu, J. Cheng, F. Yin, D. W. K. Wong, S. Yan, and J. Liu. Learning supervised descent directions for optic disc segmentation. Neurocomputing, 275:350 – 357, 2018.
 [17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
 [18] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy. Optic nerve head segmentation. IEEE Transactions on Medical Imaging, 23(2):256–264, 2004.
 [19] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool. Deep retinal image understanding. In S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, pages 140–148, 2016.
 [20] N. Harizman, C. Oliveira, A. Chan, et al. The ISNT rule and differentiation of normal from glaucomatous eyes. Archives of Ophthalmology, 124(11):1579–1583, 2006.
 [21] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, pages 1520–1528, 2015.
 [22] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, 2015.
 [23] A. SalazarGonzalez, D. Kaba, Y. Li, and X. Liu. Segmentation of the blood vessels and optic disk in retinal images. IEEE Journal of Biomedical and Health Informatics, 18(6):1874–1886, 2014.
 [24] A. Sevastopolsky. Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network. Pattern Recognition and Image Analysis, 27(3):618–624, Jul 2017.
 [25] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. In ICLR, 2015.
 [26] Y.C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, and C.Y. Cheng. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and metaanalysis. Ophthalmology, 121(11):2081–2090, 2014.
 [27] S. Woo, J. Park, J.Y. Lee, and I. S. Kweon. Cbam: Convolutional block attention module. In ECCV, 2018.
 [28] Y. Xu, L. Duan, S. Lin, X. Chen, D. W. K. Wong, T. Y. Wong, and J. Liu. Optic cup segmentation for glaucoma detection using low-rank superpixel representation. In P. Golland, N. Hata, C. Barillot, J. Hornegger, and R. Howe, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014, pages 788–795, 2014.
 [29] F. Yin, J. Liu, S. H. Ong, Y. Sun, D. W. K. Wong, N. M. Tan, C. Cheung, M. Baskaran, T. Aung, and T. Y. Wong. Modelbased optic nerve head segmentation on retinal fundus images. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 2626–2629, Aug 2011.
 [30] H. Yu, E. S. Barriga, C. Agurto, S. Echegaray, M. S. Pattichis, W. Bauman, and P. Soliz. Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets. IEEE Transactions on Information Technology in Biomedicine, 16(4):644–657, July 2012.
 [31] Z. Zhang, F. S. Yin, J. Liu, W. K. Wong, N. M. Tan, B. H. Lee, J. Cheng, and T. Y. Wong. ORIGA-light: An online retinal fundus image database for glaucoma analysis and research. In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pages 3065–3068, 2010.
 [32] Y. Zheng, D. Stambolian, J. O’Brien, and J. C. Gee. Optic disc and cup segmentation from color fundus photograph using graph cut with priors. In International Conference on Medical Image Computing and ComputerAssisted Intervention, pages 75–82, 2013.
 [33] J. Zilly, J. M. Buhmann, and D. Mahapatra. Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation. Computerized Medical Imaging and Graphics, 55:28 – 41, 2017. Special Issue on Ophthalmic Medical Image Analysis.