DDNet: Cartesian-polar Dual-domain Network for the Joint Optic Disc and Cup Segmentation

Existing joint optic disc and cup segmentation approaches are developed in either the Cartesian or the polar coordinate system. However, because the optic cup is subtle, the contextual information exploited from a single domain, even by prevailing CNNs, is still insufficient. In this paper, we propose a novel segmentation approach, named the Cartesian-polar dual-domain network (DDNet), which for the first time considers the complementarity of the Cartesian domain and the polar domain. We propose a two-branch domain feature encoder that learns, in parallel, translation equivariant representations on the rectilinear grid from the Cartesian domain and rotation equivariant representations on the polar grid from the polar domain. To fuse the features on the two different grids, we propose a dual-domain fusion module. This module builds the correspondence between the two grids via a differentiable polar transform layer and learns the feature importance across the two domains element-wise to enhance the expressive capability. Finally, the decoder aggregates the fused features from low level to high level and makes dense predictions. We validate the state-of-the-art segmentation performance of our DDNet on the public ORIGA dataset. From the segmentation masks, we estimate the commonly used clinical measure for glaucoma, i.e., the vertical cup-to-disc ratio (CDR). The low CDR estimation error demonstrates the potential application in glaucoma screening.


1 Introduction

Automated segmentation of the optic disc (OD) and optic cup (OC) in retinal fundus images is a fundamental task in medical image analysis. It supports the quantification of clinical measures of retina-related diseases, such as the rim thickness, the ISNT rule [20], and the vertical cup-to-disc ratio (CDR) [13]. These measures further assist in disease diagnosis and progression assessment, and facilitate doctor-patient communication.
In the fundus, the OD consists of two parts: the OC, which appears as a pit in the centre, and the neuroretinal rim, which packs the nerve fibres. Thus a reliable feature for segmenting the OC and the rim is depth. However, in 2D fundus images the depth information is completely absent, which makes the OC segmentation problem highly ill-posed.

The current consensus on the segmentation problem is to learn good representations for the OC pixels, rim pixels, and background pixels, and deep features are now standard. For example, U-shaped networks are designed in [24] and [1] to learn deep features in the Cartesian domain, while MNet [11] is designed to learn deep features in the polar domain. However, the representations learned from a single domain are still insufficient to distinguish the OC, rim, and background pixels.

Figure 1: Illustration of the complementarity of the Cartesian domain and the polar domain. (a) OD window in the Cartesian domain (top) and the polar domain (bottom). The differences in the shapes and spatial layout of the structures imply that the two domains embed different contextual information; $(x, y)$ and $(r, \theta)$ denote the Cartesian and polar coordinates respectively. (b) and (c): Convolution in the Cartesian domain and the polar domain respectively. In the Cartesian domain, convolution is translation equivariant but not rotation equivariant: as illustrated in (b), for any pixel in the original image and its corresponding pixel in the translated image, convolution with a bank of arbitrary filters produces the same feature vectors. Convolution in the polar domain is rotation equivariant but not translation equivariant: as illustrated in (c), a rotation in the Cartesian domain reduces to a translation in the polar domain, so for any pixel in the original image and its corresponding pixel in the rotated image, transforming both into the polar domain and then convolving with a bank of arbitrary filters produces the same feature vectors.

In this paper, we argue that integrating CNN representations from both the Cartesian and polar domains contributes to accurate segmentation of the OD and OC. Intuitively, the shapes and spatial layouts of the OC, rim, and vessels are completely different in the two domains, as shown in Fig. 1(a). This implies that different contextual information will be learned from images in different domains. Naturally, we are motivated to obtain richer representations for the segmentation by exploiting complementary contextual information from both domains and integrating it.
Theoretically, CNNs in the Cartesian domain are equivariant to translation [9]. More specifically, for any pixel in an image, the feature vector learned by a CNN in the Cartesian domain is invariant to translation of the image, as illustrated in Fig. 1(b). On the other hand, in the polar domain CNNs are equivariant to rotation [10]: for any pixel of an image, the feature vector learned by a CNN in the polar domain is invariant to rotation, as illustrated in Fig. 1(c). Fusing the translation equivariant representations from the Cartesian domain with the rotation equivariant representations from the polar domain yields a richer representation with higher expressive capability and better predictive performance than either alone.
In this paper, we propose a Cartesian-polar dual-domain network (DDNet) for joint OD and OC segmentation. It first learns feature representations from both the Cartesian domain and the polar domain by a two-branch domain encoder. It then fuses the representations from the two domains by the proposed dual-domain fusion module, which builds the correspondence between the rectilinear grid and the polar grid via a differentiable polar transform layer and learns complementary contextual information from the domain feature maps. Finally, the fused features are used for dense classification by a decoder.
In summary, there are three contributions in our paper:

  • We propose a novel OD and OC segmentation approach, which for the first time considers both the Cartesian domain and the polar domain and explores their complementarity.

  • We design a Cartesian-polar dual-domain network (DDNet) with two encoding branches to learn rich contextual information from two domains for the joint OD and OC segmentation.

  • We propose a dual-domain fusion module. It enables element-wise fusion of feature maps on the different grids of the two domains and enhances the expressive capability by learning the feature importance across the two domains element-wise.

Figure 2: The architecture of the proposed DDNet. It involves four components. (1) The Cartesian domain encoding branch maps the image to feature maps on the rectilinear grid. (2) The polar domain encoding branch maps the image to feature maps on the polar grid. (3) The dual-domain fusion module builds the correspondence between the two domains by the polar transform layer (PTL) and fuses the importance-refined features across the domains element-wise by the proposed importance-based fusion block (IFB). (4) The decoder aggregates the fused features from low level to high level and makes dense predictions. Atrous spatial pyramid pooling [7], performed on the fused feature maps at the last stage, is used to produce feature maps at additional scales.

2 Related Works

Owing to its wide clinical applicability, joint OD and OC segmentation has attracted much attention over the past decades. Segmentation approaches have evolved from hand-crafted to deep learning based, while the domain in which segmentation is performed has extended from the Cartesian domain to the polar domain.


Hand-crafted features based in Cartesian domain. Most such approaches are developed in the Cartesian domain with hand-crafted features, along two main lines. One is based on shape priors and tries to delineate the boundaries, e.g. [18, 2, 30, 33, 16, 4]. The other is based on appearance priors and aims to distinguish the OD and OC pixels/regions from the background, e.g. [8, 32, 28, 23]. Owing to the limited expressive capability of hand-crafted features, these methods struggle to segment the subtle OC. They are also fragile when the OD is surrounded by bright exudates, peripapillary atrophy, etc.
Deep features based in Cartesian domain. More effective OD and/or OC segmentation approaches are rooted in powerful representation learning methods within the Cartesian domain. Deep Retinal Image Understanding (DRIU) [19] takes the five-stage VGG16 [25] as the base network and learns feature maps for OD segmentation. In [24] and [1], variants of UNet [22] were proposed to segment the OD and OC. Nevertheless, deep learning based segmentation approaches in the Cartesian domain encounter difficulties in learning due to imbalanced class distributions, and small classes such as OC pixels are prone to be misclassified.
Deep features based in polar domain. Most recently, MNet [11], which performs segmentation in the polar domain, was proposed. It learns representations directly from polar images by a simplified UNet [22] with multi-scale inputs and produces polar segmentation maps; an inverse polar transform then maps the polar segmentation results back to the Cartesian domain. Although MNet [11] improves segmentation performance significantly, it ignores the contextual information from the Cartesian domain.
Different from the previous approaches, our DDNet learns representations from both domains for joint segmentation of the OD and OC. To the best of our knowledge, this is the first segmentation network designed to explore the complementarity of the Cartesian domain and the polar domain.

3 Cartesian-polar Dual-domain Network

The proposed Cartesian-polar dual-domain segmentation network (DDNet) is built on a two-branch domain feature encoder and a well-designed dual-domain fusion module. With the rich representations learned from the two domains, the DDNet makes dense predictions by a decoder.

3.1 Network Architecture

The Cartesian domain is the original domain in which both natural images and fundus images are captured. In this domain, the rectilinear grid is used and geometric structures are well visualised. Naturally, end-to-end segmentation networks [17, 6, 21, 3, 7] were developed for natural images in the Cartesian domain. By directly transferring the successes achieved in natural image segmentation to fundus image segmentation, U-shaped networks [19, 24, 1] were proposed. To alleviate the imbalanced class distribution among the OC, rim, and background pixels, MNet [11] was proposed to segment the OC and OD in the polar domain. However, the contextual information learned from a single domain is still insufficient, so it is highly desirable to learn richer representations for the segmentation task.
We observe that the shapes and spatial layouts of the OC, rim, OD, and vessels are completely different in the Cartesian domain and the polar domain. For example, the OC is ellipse-like and the rim is ring-like in the Cartesian domain, while both are band-like in the polar domain. The vessels extend radially from superior and inferior towards the OD in the Cartesian domain, while they are laid out almost vertically in the polar domain. Also, when transforming an image from the Cartesian domain to the polar domain, structures close to the transformation origin are amplified while structures far from it are squeezed. Such evident differences imply that complementary contextual information is embedded in the two domains, and learning representations from both domains and fusing them yields richer representations. To this end, we propose the DDNet.
Fig. 2 illustrates the architecture of our DDNet. It involves the following four components:

  • Cartesian domain encoding branch. This branch maps the Cartesian image into feature maps on the rectilinear grid. We directly use the modified Xception model [7] to learn the feature representations, since it has shown promising performance in natural-scene semantic segmentation. It can be divided into five stages according to the spatial sizes of the feature maps. For convenience, we denote the feature maps from the last convolutional layer at the $s$-th stage as $F_c^s$.

  • Polar domain encoding branch. This branch first maps the Cartesian image on the rectilinear grid to a polar image on the polar grid by a polar transform layer (PTL), then forwards the polar image through the modified Xception model [7] and generates feature maps on the polar grid. For convenience, we denote the feature maps from the last convolutional layer at the $s$-th stage as $F_p^s$.

  • Dual-domain fusion module. This module is designed to incorporate $F_c^s$ on the rectilinear grid and $F_p^s$ on the polar grid. It first transforms the feature maps $F_c^s$ on the rectilinear grid to feature maps $\tilde{F}_c^s$ on the polar grid by a PTL. Then it fuses $\tilde{F}_c^s$ and $F_p^s$ on the polar grid element-wise by the importance-based fusion block (IFB). A detailed description is given in the next subsection.

  • Decoder. The decoder first re-scales the fused feature maps to a unified spatial size, then concatenates them and makes dense predictions to obtain the segmentation maps on polar grid.
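As a shape-level illustration of how the four components connect, the sketch below wires stand-in callables together. All five callables (`cart_encoder`, `polar_encoder`, `to_polar`, `fuse`, `decode`) are hypothetical placeholders for the paper's modules (the modified Xception branches, the PTL, the IFB, and the decoder); only the data flow is taken from the architecture description.

```python
import numpy as np

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ddnet_forward(img, cart_encoder, polar_encoder, to_polar, fuse, decode):
    """Shape-level sketch of the DDNet forward pass (placeholder modules)."""
    f_cart = cart_encoder(img)               # features on the rectilinear grid
    f_polar = polar_encoder(to_polar(img))   # features on the polar grid
    fused = fuse(to_polar(f_cart), f_polar)  # element-wise dual-domain fusion
    logits = decode(fused)                   # (3, H, W): OC / rim / background
    return softmax(logits, axis=0)           # per-pixel class probabilities
```

A toy invocation (identity stand-ins) only checks that three per-pixel class probabilities come out and sum to one; none of the learned behaviour is modelled here.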

The segmentation maps produced by our DDNet are on the polar grid. The joint segmentation problem is formulated as a three-class (i.e., OC, rim, background) dense classification and the cross-entropy loss is used. In the training phase, our DDNet is supervised by the segmentation ground-truth in the polar domain, which is transformed from the Cartesian ground-truth by the PTL. During testing, an extra inverse polar transform is applied to the outputs of the DDNet to obtain the final segmentation results on the rectilinear grid.

Figure 3: The dual-domain fusion module. As illustrated, the PTL builds the correspondence between the rectilinear grid and the polar grid: it transforms the feature maps $F_c^s$ on the rectilinear grid from the Cartesian domain encoding branch to $\tilde{F}_c^s$ on the polar grid. The channel importance map $M_c$ and the location importance map $M_l$ together encode the importance of the features element-wise. Finally, a convolutional layer fuses the features across the channel dimension and generates the fused feature maps $F_f^s$.

3.2 Dual-domain Fusion Module

Given feature maps on two different grids, dual-domain feature fusion for dense prediction pursues an element-wise weighting strategy that best fits the segmentation ground-truth. This requires building the correspondence between the two domains and exploiting the complementary information for element-wise classification. Fig. 3 illustrates the dual-domain fusion module.
The Cartesian domain branch takes the image on the rectilinear grid as input and outputs feature maps on the rectilinear grid. In contrast, the polar domain branch first transforms the image on the rectilinear grid to one on the polar grid, and then outputs feature maps on the polar grid. To build the correspondence between the feature maps on the different grids, we adopt the PTL [10] and transform the feature maps on the rectilinear grid to the polar grid. Formally, denote the feature maps with $C$ channels and spatial size $H \times W$ in the Cartesian domain from the $s$-th stage as $F_c^s$, and the point coordinates on the rectilinear grid as $(x, y)$. The PTL [10] adopts the differentiable image sampling technique [12] and outputs the sampled polar feature maps $\tilde{F}_c^s$ with the same spatial size and channels, whose point coordinates on the polar grid are denoted as $(r, \theta)$. In terms of $(x, y)$ and $(r, \theta)$, the PTL is expressed as:

$$x = x_o + r\cos\theta, \qquad y = y_o + r\sin\theta \quad (1)$$

where $(x_o, y_o)$ is the centre point of the feature map.
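Eq. (1) can be sketched directly in NumPy: for each polar coordinate $(r, \theta)$ we compute the Cartesian source location and sample bilinearly, in the spirit of differentiable image sampling [12]. The function name, the grid sizes, and the choice of $r_{\max}$ below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def polar_transform(feat, n_r=None, n_theta=None):
    """Sample feature maps (C, H, W) from a rectilinear grid onto a polar grid.

    For each polar coordinate (r, theta), the Cartesian source location is
      x = x_o + r*cos(theta),  y = y_o + r*sin(theta)
    around the feature-map centre (x_o, y_o), sampled bilinearly.
    """
    C, H, W = feat.shape
    n_r = n_r or H
    n_theta = n_theta or W
    yo, xo = (H - 1) / 2.0, (W - 1) / 2.0
    r_max = min(xo, yo)                      # assumed radius of the polar window
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")   # (n_r, n_theta)
    xs = xo + rr * np.cos(tt)
    ys = yo + rr * np.sin(tt)
    # bilinear interpolation over the four neighbouring grid points
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    x0, y0 = np.clip(x0, 0, W - 1), np.clip(y0, 0, H - 1)
    wx, wy = xs - np.floor(xs), ys - np.floor(ys)
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])            # (C, n_r, n_theta)
```

Because the sampling weights are smooth in the source coordinates, the same construction is differentiable when written in an autodiff framework, which is what allows the PTL to sit inside the network.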
To select the informative features from the two domains for each element and enhance the expressive capability of the representations, the importance of the features across the domains is learned. This is implemented by learning two importance maps, $M_c$ and $M_l$: $M_c$ is the channel importance map, encoding the importance of each feature channel from the two domains; $M_l$ is the location importance map, encoding the importance of each spatial location.
Formally, taking as input $F$ the feature maps $\tilde{F}_c^s$ and $F_p^s$ from the $s$-th stage of the Cartesian domain encoding branch and the polar domain encoding branch respectively, the importance-weighted feature maps are obtained by:

$$F' = M_c \odot F, \qquad F'' = M_l \odot F' \quad (2)$$

where $\odot$ is element-wise multiplication (with broadcasting), $F'$ is the feature map weighted by $M_c$, and $F''$ is the final output. Partially inspired by CBAM [27], which learns a channel attention map and a spatial attention map at each CNN stage, we adopt the same implementation to learn $M_c$ and $M_l$. Finally, we add the importance-weighted feature maps $\tilde{F}_c^{s\prime\prime}$ and $F_p^{s\prime\prime}$ and use a convolutional layer to fuse them across the channel dimension:

$$F_f^s = W * \left(\tilde{F}_c^{s\prime\prime} + F_p^{s\prime\prime}\right) \quad (3)$$

where $W$ denotes the weights of the convolutional layer and $*$ denotes convolution.
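A minimal NumPy sketch of Eqs. (2) and (3) follows. The parameterisation of $M_c$ and $M_l$ here (a single linear map on pooled statistics, with a scalar gain for the location map) is a deliberately simplified stand-in for the paper's CBAM-style sub-networks, and the channel fusion is written as a 1x1 convolution via `einsum`; all names and weight shapes are our assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def importance_fusion(Fc_polar, Fp, Wc_c, wl_c, Wc_p, wl_p, W_fuse):
    """Importance-based fusion block (IFB), simplified sketch.

    Fc_polar, Fp : (C, H, W) feature maps on the same polar grid.
    Wc_*         : (C, C) weights producing the channel importance map M_c.
    wl_*         : scalar gain producing the location importance map M_l.
    W_fuse       : (C_out, C) weights of the 1x1 fusion convolution.
    """
    def reweight(F, Wc, wl):
        # channel importance from globally average-pooled features: (C, 1, 1)
        Mc = sigmoid(Wc @ F.mean(axis=(1, 2)))[:, None, None]
        Fw = Mc * F                                   # F' = M_c ⊙ F
        # location importance from channel-averaged features: (1, H, W)
        Ml = sigmoid(wl * Fw.mean(axis=0))[None]
        return Ml * Fw                                # F'' = M_l ⊙ F'
    S = reweight(Fc_polar, Wc_c, wl_c) + reweight(Fp, Wc_p, wl_p)
    return np.einsum("oc,chw->ohw", W_fuse, S)        # 1x1 conv across channels
```

The element-wise products broadcast exactly as in Eq. (2): the channel map scales whole channels, the location map scales whole spatial positions, and their composition weights every element of every feature map individually.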

3.3 Analysis

From the viewpoint of representation theory, our proposed DDNet not only learns representations with strong discriminative power, but also benefits from the translation equivariance and rotation equivariance achieved by the two branches of the domain encoder respectively.
Essentially, the Cartesian domain branch learns feature representations by performing translational convolutions on a translation symmetry group [9, 15]. Consequently, the feature representation $\Phi(I)$ for an input image $I$ is translation equivariant and satisfies:

$$\Phi(T_t I) = T_t \Phi(I) \quad (4)$$

where $T_t$ is a translation action. This means that translating the input image (forming $T_t I$) and then passing it through the Cartesian domain encoding branch gives the same result as first forwarding the input image through the branch (forming $\Phi(I)$) and then applying the same translation to the learned representation. More specifically, as illustrated in Fig. 1(b), for any pixel $p$ and its corresponding pixel $p'$ in the translated image, the Cartesian domain encoding branch maps them to the same representation. In other words, the per-pixel representations from the Cartesian domain encoding branch are invariant to translation, which benefits pixel-wise classification.
Different from the Cartesian domain encoding branch, the polar domain encoding branch learns feature representations by performing convolutions on a rotation symmetry group [10]. Formally, we model the polar domain encoding branch as the composition of two mappings. The first is the PTL $P$, which maps the Cartesian image to a polar image and reduces a rotation action on the Cartesian image to a translation action on the corresponding polar image. That is, for a rotation action $R_\alpha$ on $I$, there exists a translation $T_t$ satisfying:

$$P(R_\alpha I) = T_t P(I) \quad (5)$$

The second is the convolutional neural network $\Phi$, which maps the polar image $P(I)$ to the feature representation $\Phi(P(I))$. Combining Eq. (4) and Eq. (5), the feature representations from the polar domain encoding branch satisfy:

$$\Phi(P(R_\alpha I)) = T_t \Phi(P(I)) \quad (6)$$

This means that rotating the input image (forming $R_\alpha I$) and then forwarding it through the polar domain encoding branch gives the same result as first forwarding the input image through the branch (forming $\Phi(P(I))$) and then applying a translation to the learned feature representation. More specifically, as illustrated in Fig. 1(c), for any pixel $p$ in $I$ and its corresponding pixel $p'$ in the rotated image $R_\alpha I$, the polar domain encoding branch maps them to the same feature vector. In other words, the per-pixel representations from the polar domain branch are invariant to rotation, which benefits pixel-wise classification.
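The key step, Eq. (5), is a coordinate-level fact that can be verified numerically: rotating a point by $\alpha$ about the origin leaves its radius unchanged and shifts its polar angle by exactly $\alpha$, i.e. a rotation becomes a translation along the angular axis. The helper names below are ours.

```python
import numpy as np

def to_polar(x, y):
    """Cartesian (x, y) -> polar (r, theta)."""
    return np.hypot(x, y), np.arctan2(y, x)

def rotate(x, y, alpha):
    """Rotate (x, y) by alpha radians about the origin."""
    return (x * np.cos(alpha) - y * np.sin(alpha),
            x * np.sin(alpha) + y * np.cos(alpha))

alpha = 0.7
for x, y in [(1.0, 0.5), (-0.3, 2.0), (0.8, -1.1)]:
    r0, t0 = to_polar(x, y)
    r1, t1 = to_polar(*rotate(x, y, alpha))
    assert np.isclose(r0, r1)                          # radius unchanged
    assert np.isclose((t1 - t0) % (2 * np.pi), alpha)  # pure angular shift
```

On a sampled polar grid, this angular shift corresponds to a circular shift of the polar image columns, which ordinary translational convolution then treats equivariantly, yielding Eq. (6).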
By fusing the translation invariant representations and rotation invariant representations, our DDNet obtains more powerful representations. Next, we will demonstrate its effectiveness by experiments.

4 Experimental Results

The segmentation performance of our DDNet is first evaluated and compared on the public ORIGA dataset [31] for OD and OC segmentation. Then we apply it to CDR estimation. ORIGA [31] contains 650 fundus images; 325 images (including 73 glaucoma cases) are used for training and 325 images (including 95 glaucoma cases) for testing. For each image in ORIGA [31], the segmentation masks of the OD and OC and the CDR value determined by experts are provided.

4.1 Implementation Details

Data Augmentation. During training, images from the training set are flipped horizontally at random and scaled by a random factor ranging from 0.9 to 1.1. The OD occupies only a small region of the retinal fundus image. To crop a small OD window, we simply train an OD segmentor by fine-tuning the pre-trained DeepLabv3+ [7]. From the OD mask produced by the OD segmentor, we compute the OD centre and crop a window of fixed size around it as the input of the proposed DDNet.
Training. The proposed DDNet is built on DeepLabv3+ [7]. A two-stage training procedure is adopted. First, we pre-train the parameters of the Cartesian domain encoding branch and the polar domain encoding branch separately by fine-tuning DeepLabv3+ [7] pre-trained on Pascal VOC, obtaining the single-domain segmentation models denoted DeepLabv3+ (Cartesian) and DeepLabv3+ (polar) respectively. Then the parameters of the whole DDNet are trained. The hyper-parameters are: mini-batch size 4, learning rate 0.007 in the first stage and 0.001 in the second, maximum number of training iterations 10,000, momentum 0.9, and weight decay 0.00004.
Testing. At test time, the whole input image is first forwarded to the OD segmentor and the OD window is cropped. We then forward the OD window to the trained DDNet to obtain the segmentation masks of the OD and OC on the polar grid. Finally, an inverse polar transform yields the final segmentation masks on the rectilinear grid.

Methods                         | E_disc | BLE_disc (μ/σ) | E_cup | BLE_cup (μ/σ) | E_rim
hand-crafted:
  R-bend [14]                   | 0.129  | -              | 0.395 | -             | -
  ASM [29]                      | 0.148  | -              | 0.313 | -             | -
  Superpixel [8]                | 0.102  | -              | 0.264 | -             | 0.299
  LRR [28]                      | -      | -              | 0.244 | -             | -
deep learning:
  lightweight U-Net [24]        | 0.115  | -              | 0.287 | -             | 0.303
  FC-DenseNet [1]               | 0.067  | -              | 0.231 | -             | -
  MNet [11]                     | 0.071  | 6.70/6.93      | 0.230 | 14.38/9.96    | 0.233
  DeepLabv3+ [7] (Cartesian)    | 0.059  | 5.51/3.79      | 0.209 | 12.93/8.39    | 0.212
  DeepLabv3+ [7] (polar)        | 0.057  | 5.26/3.38      | 0.214 | 13.23/9.09    | 0.210
  DDNet (ours)                  | 0.054  | 5.01/3.35      | 0.204 | 12.48/8.39    | 0.201
Table 1: Performance comparisons of the different methods on ORIGA [31]. E denotes the overlapping error; BLE (μ/σ) denotes the boundary location error and its standard deviation. The best and second best results are marked in red and blue respectively. (Best viewed in colour)

Figure 4: OD and OC segmentation results on the ORIGA [31] dataset: (a) and (b) are challenging images for OD segmentation and (c) to (e) are challenging images for OC segmentation. From top to bottom: results by MNet [11], DeepLabv3+ [7] trained in the Cartesian domain, DeepLabv3+ [7] trained in the polar domain, and our DDNet. The solid red contours and the dashed green contours are delineated by experts and by the segmentation approaches respectively. (Best viewed in colour)
Figure 5: Performance comparisons on the CDR estimation. From left to right: the average absolute CDR errors by MNet [11], Deeplabv3+ [7] (Cartesian), Deeplabv3+ [7] (polar) and our proposed DDNet.

4.2 Segmentation Performances

We adopt the overlapping error and the average boundary location error to evaluate performance, as introduced in [11] and [5] respectively. The former measures the ratio of the number of wrongly classified pixels to the number of pixels in the union of the segmentation mask and the ground-truth. The latter measures the average absolute distance between the boundary of the segmentation mask and the boundary of the ground-truth:

$$BLE = \frac{1}{|\Theta|} \sum_{\theta \in \Theta} \left| d_s^{\theta} - d_g^{\theta} \right| \quad (7)$$

where $d_s^{\theta}$ and $d_g^{\theta}$ are the Euclidean distances, in direction $\theta$, of the segmented and ground-truth boundary points to the centroid of the target, and $\Theta$ is the set of uniformly sampled directions. The number of sampled directions follows the setting used in [5].
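The boundary location error of Eq. (7) can be sketched on binary masks as follows. The ray-marching discretisation and the default of 24 sampled directions are our own assumptions for illustration; the exact boundary-extraction protocol of [5] may differ.

```python
import numpy as np

def boundary_location_error(mask_pred, mask_gt, n_dirs=24):
    """Average boundary location error (BLE) between two binary masks.

    For each of n_dirs uniformly sampled directions theta, a ray is cast
    from the ground-truth centroid and the distance to the outermost mask
    pixel along that ray is recorded; BLE is the mean absolute difference
    between the predicted and ground-truth distances.
    """
    H, W = mask_gt.shape
    ys, xs = np.nonzero(mask_gt)
    cy, cx = ys.mean(), xs.mean()            # centroid of the target

    def radial_dist(mask, theta):
        r, last = 0.0, 0.0
        while True:                          # march outward along the ray
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if not (0 <= y < H and 0 <= x < W):
                break
            if mask[y, x]:
                last = r                     # outermost hit so far
            r += 0.5
        return last

    thetas = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    return float(np.mean([abs(radial_dist(mask_pred, t) - radial_dist(mask_gt, t))
                          for t in thetas]))
```

By construction, comparing a mask against itself gives a BLE of zero, and the error grows with the radial mismatch of the two boundaries.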
We compare the segmentation performance of the proposed DDNet with R-bend [14], ASM [29], Superpixel [8], LRR [28], lightweight U-Net [24], FC-DenseNet [1], DeepLabv3+ [7] (Cartesian) and DeepLabv3+ [7] (polar). The first four are based on hand-crafted features; the rest are based on deep features.
We report the performance comparisons in Table 1, in which the overlapping errors of the OD, OC, and rim are denoted as $E_{disc}$, $E_{cup}$ and $E_{rim}$, and the average boundary location errors of the OD and OC as $BLE_{disc}$ and $BLE_{cup}$, respectively. We observe that: (1) the proposed DDNet achieves the lowest overlapping errors as well as the lowest boundary location errors; (2) compared to MNet [11], which is specifically designed for joint OD and OC segmentation, our DDNet achieves lower overlapping errors for the OD, OC and rim and lower boundary location errors for the OD and OC; (3) compared to DeepLabv3+ [7] in the Cartesian domain, our DDNet reduces the overlapping errors of the OD, OC and rim; (4) compared to DeepLabv3+ [7] in the polar domain, our DDNet likewise reduces the overlapping errors of the OD, OC and rim.
The segmentation results of MNet [11], DeepLabv3+ [7] (Cartesian), DeepLabv3+ [7] (polar) and the proposed DDNet are illustrated in Fig. 4. DeepLabv3+ [7] (polar) achieves results superior to DeepLabv3+ [7] (Cartesian) in Fig. 4(a) and Fig. 4(c) but inferior ones in Fig. 4(b) and Fig. 4(d). By fusing the features extracted from the two domains, our DDNet achieves superior results in all these cases. Compared to MNet [11], our DDNet produces more accurate segmentations. The last column shows a challenging example on which all methods fail to segment the OC.

4.3 Application on the CDR Estimation

Glaucoma is the leading cause of irreversible vision impairment and blindness [26]. In clinical practice, the diagnosis of glaucoma commonly relies on multiple measures such as the CDR, the visual field, and the intraocular pressure. Generally, the larger the CDR, the higher the risk to the patient. In what follows, we estimate the CDR from the segmentation masks of the OD and OC.
The CDR value is defined as the ratio of the vertical diameter of the OC to the vertical diameter of the OD. To evaluate the CDR estimation performance of the segmentation approaches, we follow [11] and adopt the absolute error. Fig. 5 shows the results of MNet [11], DeepLabv3+ [7] (Cartesian), DeepLabv3+ [7] (polar), and our DDNet. Our DDNet attains the lowest absolute CDR estimation error. DeepLabv3+ [7] achieves similar absolute CDR errors in the Cartesian domain and the polar domain, and both are superior to MNet [11].
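Given the definition above, the vertical CDR can be computed from the binary masks in a few lines; measuring the vertical diameter as the pixel-row extent of each mask is our simplification of the clinical measurement.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary masks.

    The vertical diameter of each structure is taken as the number of
    image rows its mask spans; the CDR is the ratio cup / disc.
    """
    def vertical_diameter(mask):
        rows = np.nonzero(mask.any(axis=1))[0]   # rows containing the structure
        return 0 if rows.size == 0 else rows.max() - rows.min() + 1
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
```

For example, a disc spanning 10 rows with a cup spanning 4 of them yields a CDR of 0.4; larger values indicate higher glaucoma risk.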

5 Conclusion

This paper focuses on the joint segmentation of the OD and OC in retinal fundus images. Owing to the absence of depth, representations learned from a single domain are insufficient to separate the OC and rim. To improve performance, we propose to learn representations from both the Cartesian domain and the polar domain, and present the Cartesian-polar dual-domain segmentation network (DDNet). On one hand, our DDNet benefits from the complementary contextual information exploited from images in the Cartesian and polar domains. On the other hand, it benefits from the translation equivariance achieved by CNNs in the Cartesian domain and the rotation equivariance achieved by CNNs in the polar domain. By fusing the representations from both domains, the representations learned by our DDNet are more powerful. We validate the state-of-the-art segmentation performance of the DDNet on ORIGA [31]. When applied to CDR estimation, it achieves the lowest absolute error, demonstrating its potential application in glaucoma screening. Our DDNet benefits from the complementarity of the two domains, but how to fuse the feature maps such that the fused features are equivariant to both translation and rotation remains an open question; this will be our future work.

References

  • [1] B. Al-Bander, B. M. Williams, W. Al-Nuaimy, M. A. Al-Taee, H. Pratt, and Y. Zheng. Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis. Symmetry, 10, 2018.
  • [2] A. Aquino, M. E. Gegundez-Arias, and D. Marin. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Transactions on Medical Imaging, 29(11):1860–1869, Nov 2010.
  • [3] V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495, 2017.
  • [4] E. J. Bekkers, M. Loog, B. M. t. H. Romeny, and R. Duits. Template matching via densities on the roto-translation group. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2):452–466, Feb 2018.
  • [5] A. Chakravarty and J. Sivaswamy. Joint optic disc and cup boundary extraction from monocular fundus images. Computer Methods and Programs in Biomedicine, 147:51 – 61, 2017.
  • [6] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2014.
  • [7] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
  • [8] J. Cheng, J. Liu, Y. Xu, F. Yin, D. W. K. Wong, N. M. Tan, D. Tao, C. Y. Cheng, T. Aung, and T. Y. Wong. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Transactions on Medical Imaging, 32(6):1019–1032, 2013.
  • [9] T. Cohen and M. Welling. Group equivariant convolutional networks. In ICML, pages 2990–2999, 2016.
  • [10] C. Esteves, C. Allen-Blanchette, X. Zhou, and K. Daniilidis. Polar transformer networks. In ICLR, 2018.
  • [11] H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, and X. Cao. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Transactions on Medical Imaging, PP(99):1–1, 2018.
  • [12] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, pages 2017–2025, 2015.
  • [13] J. B. Jonas, A. Bergua, P. Schmitz–Valckenberg, K. I. Papastathopoulos, and W. M. Budde. Ranking of optic disc variables for detection of glaucomatous optic nerve damage. Investigative Ophthalmology and Visual Science, 41(7):1764, 2000.
  • [14] G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas. Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Transactions on Medical Imaging, 30(6):1192–1205, 2011.
  • [15] R. Kondor and S. Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In ICML, 2018.
  • [16] A. Li, Z. Niu, J. Cheng, F. Yin, D. W. K. Wong, S. Yan, and J. Liu. Learning supervised descent directions for optic disc segmentation. Neurocomputing, 275:350 – 357, 2018.
  • [17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
  • [18] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy. Optic nerve head segmentation. IEEE Transactions on Medical Imaging, 23(2):256–264, 2004.
  • [19] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool. Deep retinal image understanding. In S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, pages 140–148, 2016.
  • [20] N. Harizman, C. Oliveira, A. Chiang, et al. The ISNT rule and differentiation of normal from glaucomatous eyes. Archives of Ophthalmology, 124(11):1579–1583, 2006.
  • [21] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, pages 1520–1528, 2015.
  • [22] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, 2015.
  • [23] A. Salazar-Gonzalez, D. Kaba, Y. Li, and X. Liu. Segmentation of the blood vessels and optic disk in retinal images. IEEE Journal of Biomedical and Health Informatics, 18(6):1874–1886, 2014.
  • [24] A. Sevastopolsky. Optic disc and cup segmentation methods for glaucoma detection with modification of u-net convolutional neural network. Pattern Recognition and Image Analysis, 27(3):618–624, Jul 2017.
  • [25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [26] Y.-C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, and C.-Y. Cheng. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology, 121(11):2081–2090, 2014.
  • [27] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon. Cbam: Convolutional block attention module. In ECCV, 2018.
  • [28] Y. Xu, L. Duan, S. Lin, X. Chen, D. W. K. Wong, T. Y. Wong, and J. Liu. Optic cup segmentation for glaucoma detection using low-rank superpixel representation. In P. Golland, N. Hata, C. Barillot, J. Hornegger, and R. Howe, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014, pages 788–795, 2014.
  • [29] F. Yin, J. Liu, S. H. Ong, Y. Sun, D. W. K. Wong, N. M. Tan, C. Cheung, M. Baskaran, T. Aung, and T. Y. Wong. Model-based optic nerve head segmentation on retinal fundus images. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 2626–2629, Aug 2011.
  • [30] H. Yu, E. S. Barriga, C. Agurto, S. Echegaray, M. S. Pattichis, W. Bauman, and P. Soliz. Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets. IEEE Transactions on Information Technology in Biomedicine, 16(4):644–657, July 2012.
  • [31] Z. Zhang, F. S. Yin, J. Liu, W. K. Wong, N. M. Tan, B. H. Lee, J. Cheng, and T. Y. Wong. Origa-light: An online retinal fundus image database for glaucoma analysis and research. In 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pages 3065–3068, 2010.
  • [32] Y. Zheng, D. Stambolian, J. O’Brien, and J. C. Gee. Optic disc and cup segmentation from color fundus photograph using graph cut with priors. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 75–82, 2013.
  • [33] J. Zilly, J. M. Buhmann, and D. Mahapatra. Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation. Computerized Medical Imaging and Graphics, 55:28 – 41, 2017. Special Issue on Ophthalmic Medical Image Analysis.