Patch-based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation

02/20/2019 ∙ by Shujun Wang, et al. ∙ The Chinese University of Hong Kong

Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based Output Space Adversarial Learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as a backbone. Considering the specific morphology of the OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentations. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain shift challenge by encouraging the segmentations in the target domain to be similar to the source ones. Since a whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design the pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving the segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved first place in the OD and OC segmentation tasks of the MICCAI 2018 Retinal Fundus Glaucoma Challenge.




I Introduction

Fig. 1: Segmentation degradation due to domain shift. Domain D1 stands for the ORIGA dataset, while domain D2 stands for the Drishti-GS dataset. In the Prediction column, the black and gray colors represent the optic cup (OC) and optic disc (OD) segmentation, respectively. The two numbers are the dice coefficients for the OC and OD segmentation results, showing that the dice coefficients degrade from 0.87 to 0.62 for OC and from 0.95 to 0.73 for OD when we use the M-Net [1] trained on D1 to test on D2. Our method overcomes this problem: although trained on D1, it still achieves high dice coefficients of 0.86 and 0.95 for OC and OD, respectively, when applied to D2.

Glaucoma is a chronic disease that damages the optic nerve and leads to irreversible vision loss [2]. Screening and detecting glaucoma in its early stage is beneficial for preserving the vision of patients. Currently, analyzing the optic nerve head and the retinal nerve fiber layer is a practical method for glaucoma detection. However, such analysis is predominantly subjective and often suffers from high intra- and inter-observer variations [3]. With the recent advancement of optical fundus imaging, objective and quantitative glaucoma assessments based on the morphology of the optic disc (OD) and optic cup (OC), and the cup-to-disc ratio (CDR), become available [2]. The CDR is the ratio of the vertical cup diameter to the vertical disc diameter; a large CDR value often indicates a high risk of glaucoma. Manually acquiring these measurements is time-consuming. Accurately segmenting the OD and OC from fundus images via automatic solutions would promote large-scale glaucoma screening [1].

Remarkable performance on OD and OC segmentation has recently been reported with the development of deep learning [4, 5, 1]. Assuming the training and testing samples have the same appearance distribution, a training dataset consisting of a large amount of pixel-level annotations helps deep networks learn the segmentation on the testing dataset. However, it is difficult for a network to obtain good segmentation performance on new datasets. For example, a state-of-the-art network like M-Net [1] performs well on its specific testing dataset, i.e., ORIGA [6], but generalizes poorly on some other datasets; see Fig. 1. Domain shift, which refers to the difference in appearance distribution between different datasets, is the main cause of the poor generalization ability of deep networks [7, 8, 9]. Indeed, domain shift among various retinal fundus image datasets is very common. Many public retinal image datasets, e.g., Drishti-GS [10], RIM-ONE-r3 [11], and REFUGE, are acquired with obvious appearance discrepancy resulting from different scanners, image resolution ratios, light source intensities, and parameter settings (Fig. 1). Overcoming the domain shift is highly desired to enhance the robustness of deep networks.

To reduce the performance degradation caused by domain shift, domain adaptation methods [8, 12] are developed to generalize deep networks trained in a source domain to work more effectively in other target domains with varying appearance. A vanilla solution is to fine-tune the segmentation network with full supervision provided by a large quantity of annotated samples from the target domain. However, preparing extra annotations in the target domain is highly time-consuming and expensive, and often suffers from inter-observer variations; moreover, such a solution is impractical for large-scale glaucoma screening. Therefore, an unsupervised domain adaptation approach without requiring extra annotations is highly desirable in real clinical scenarios. Furthermore, leveraging the knowledge shared across different domains can help deep networks maintain their performance under various imaging conditions. For this joint OD and OC segmentation task, spatial and morphological structures in the output space (i.e., the segmentation mask) are shared by different datasets and thus are beneficial to the mask prediction. For example, the OC is always contained inside the OD region, while both the OC and OD have ellipse-like shapes. Such spatial correlation information is crucial for domain adaptation but is typically ignored by existing deep-network-based segmentation methods.

In this work, we aim at jointly segmenting the OD and OC in retinal fundus images from different domains by introducing a novel patch-based Output Space Adversarial Learning framework (pOSAL). As the core workhorses in the framework, the lightweight network architecture for efficiency and the unsupervised domain adaptation for domain-invariance contribute to our promising performance. Our framework explores the annotated source domain images and the unannotated target domain images to reduce the performance degradation on the target domain. We first develop a representative segmentation network equipped with a morphology-aware segmentation loss to produce compelling segmentations. Effectively combining the designs of DeepLabv3+ [13] and the depth-wise separable convolutional network MobileNetV2 [14], our segmentation network achieves a good balance between extracting multi-scale discriminative context features and limiting the computational burden. The proposed morphology-aware segmentation loss further guides the network to capture mask smoothness priors and therefore improves the segmentation. To overcome the domain shift challenge, inspired by [15], we adopt output space adversarial learning, utilizing the spatial and morphological structures of the segmentation mask. Specifically, we attach a discriminator network to learn the abstract spatial and shape information from the label distributions of the source domain, and then employ the adversarial learning procedure to encourage the segmentation network to generate consistent predictions in a shared output space (e.g., similar spatial layout and structural context) for the images in both the source and target domains. Since the whole-segmentation-based adversarial scheme is weak in capturing segmentation details, we devise a patch-wise discriminator to capture the local statistics of the output space and guide the segmentation network to focus on the local structure similarity in image patches.
We extensively evaluate our pOSAL framework for the joint OD and OC segmentation on three public fundus image datasets (Drishti-GS, RIM-ONE-r3, and REFUGE). The pOSAL framework achieves state-of-the-art results, bringing significant improvements with the proposed patch-based output space adversarial learning.

Our main contributions are summarized as follows:

  1. We exploit unsupervised domain adaptation for joint OD and OC segmentation over different retinal fundus image datasets. The presented novel pOSAL framework enables patch-based output space domain adaptation to reduce the segmentation performance degradation on target datasets with domain shift.

  2. We design an efficient segmentation network equipped with a new morphology-aware segmentation loss to produce plausible OD and OC segmentation results. The morphological segmentation loss is able to guide the network to capture the mask smoothness priors for accurate segmentation.

  3. We conduct extensive experiments on three public retinal fundus image datasets to demonstrate the effectiveness of the pOSAL framework. Furthermore, we achieved the first place in the OD and OC segmentation task of the MICCAI 2018 Retinal Fundus Glaucoma Challenge.

The remainder of this paper is organized as follows. We review the related techniques in Section II and elaborate the pOSAL framework in Section III. The experiments and results are presented in Section IV. We further discuss our method in Section V and draw conclusions in Section VI.

Fig. 2: Overview of the pOSAL framework. ROI regions are first extracted from the source and target domain images and then fed into the segmentation network. The discriminator in a patch-based adversarial learning scheme enforces the similarity between the target image predictions and the source ones. The segmentation network is supervised by the segmentation loss computed on the predictions of source domain images and by the adversarial loss calculated on the predictions of unlabeled target domain images.

II Related Works

The OD and OC segmentation from retinal fundus images is non-trivial and has been independently studied for years. For the OD segmentation, early works employed hand-crafted visual features, including image gradient information [16], features from stereo image pairs [17], local texture features [18], and superpixel-based classifiers [19]. The OC segmentation is more challenging than the OD segmentation considering the lower-contrast boundary [1]. Hand-crafted features are also investigated for this task [18, 19, 20, 21, 22, 23]. Recently, some works were developed for joint OD and OC segmentation. Zheng et al. [24] designed a graph-cut framework. In [25], structure constraints were utilized for joint OD and OC segmentation.

Convolutional neural networks (CNNs) have shown remarkable performance on retinal fundus image segmentation [1, 4, 5, 26, 27, 28, 29], and have outperformed traditional methods based on hand-crafted features [30]. Effective network architecture design is the focus of these deep learning based methods. For example, Maninis et al. [4] presented the DRIU network, which combines multi-level features to segment vessels and the optic disc. A disc-aware network [28] was designed for glaucoma screening by ensembling different feature streams in the network. ResU-net was presented in [5] with an adversarial module between the ground truth and the segmentation mask to improve the final segmentation. Based on the U-Net, Fu et al. [1] developed the M-Net for joint OD and OC segmentation. Although promising, CNN-based methods often degrade when the training and testing datasets are from different domains. Our output space adversarial learning framework helps address this domain shift challenge and enhances the segmentation performance on different testing domains.

Very recently, domain adaptation techniques were explored in the field of medical image analysis [8, 9, 31, 32, 33]. Previous methods [8, 9] performed latent feature alignment to explore a shared feature space between the source and target domains through adversarial learning. Another entry point for domain adaptation is to transfer the images from the target domain to the source domain, and then to apply the trained network to the transferred images [32, 34, 35]. Among these methods, Cycle-GAN [36] is a popular technique to transfer images across different domains. The key characteristic of these approaches is to generate style-realistic images in another domain without using paired data. Extra constraints are needed to guide this unsupervised style transfer process. For example, Zhang et al. [34] employed two segmentation networks stacked behind the Cycle-GAN to act as extra supervision on the generators to enhance shape-consistency. In [35], semantic-aware adversarial learning was introduced to prevent semantic distortion during image transformation. In [32], a task-driven generative adversarial network was developed to enforce segmentation consistency. However, these methods ignore the property that, for segmentation tasks, the label space (output space) of different domains is usually highly correlated in terms of spatial structure and geometry. Therefore, instead of exploring a shared feature space or transferring the input images, we use patch-based output space adversarial learning to conduct the domain adaptation for joint OD and OC segmentation.

III Methodology

Fig. 2 overviews the pOSAL framework for joint OD and OC segmentation from retinal fundus images. Our framework has three modules: an ROI extraction network, a segmentation network, and a patch-level discriminator. Due to the small area ratio of the OD over the whole image, ROI regions are first extracted from the source domain images and the target domain images, respectively (Section III-A). Then, the cropped source and target images are fed into the segmentation network to produce the OD and OC predictions (Section III-B). A patch-level discriminator is utilized to encourage the segmentation network to produce similar outputs for the source and target domain images (Section III-C). The whole framework is finally optimized by adversarial learning.

III-A ROI Extraction

To perform accurate segmentation, we first locate the position of the OD and then crop the disc region from the original image for further segmentation. To achieve this, we build an extraction network to segment the OD and crop the ROI image according to the segmentation result. The extraction network is configured to segment the optic disc to provide rough guidance. Although only trained with the source domain images and labels, as our experiments will demonstrate, the trained extraction network generalizes well to the target domain images due to the strong and visible structural characteristics of the optic disc in both source and target domain images. Therefore, the disc regions of both domains can be obtained by the same extraction network. Specifically, our extraction network follows a U-Net [37] architecture and is trained with resized source images and the corresponding OD labels. The trained U-Net can be used for coarse OD prediction in both domains. We then map the predicted OD mask back to the original image and crop a sub-image centered on the predicted OD mask. The last layer of the extraction network is a convolutional layer with one output feature channel, followed by an activation function to generate the probability map of the OD.
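As a concrete illustration of this cropping step, the sketch below thresholds a coarse OD probability map, locates its center of mass, and cuts a fixed-size square around it. This is a minimal sketch rather than the authors' code: the ROI size of 512, the 0.5 threshold, the center-of-mass localization, and the fallback to the image center are all assumptions, since the exact values are not restated above.

```python
import numpy as np

def crop_roi(image, od_prob, roi_size=512, threshold=0.5):
    """Crop a square ROI centered on the predicted optic disc.

    image:   H x W x 3 fundus image (assumed larger than roi_size)
    od_prob: H x W coarse OD probability map from the extraction network
    """
    mask = od_prob >= threshold
    if not mask.any():
        # no disc detected: fall back to the image center
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
    else:
        ys, xs = np.nonzero(mask)
        cy, cx = int(ys.mean()), int(xs.mean())
    half = roi_size // 2
    # clamp the window so the crop stays inside the image
    cy = min(max(cy, half), image.shape[0] - half)
    cx = min(max(cx, half), image.shape[1] - half)
    return image[cy - half:cy + half, cx - half:cx + half]
```

In the full pipeline, the crop would be fed to the segmentation network and the predicted OD/OC masks mapped back to the original coordinates using the same center.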

Fig. 3: Architecture of the segmentation network. It is based on DeepLabv3+ but with MobileNetV2 as the network backbone.

III-B Segmentation Network with Morphology-aware Loss

We conduct the OD and OC segmentation based on the above-cropped ROI images. To better capture the geometric structure of the output space, we customize a network with a novel morphology-aware segmentation loss for high-quality segmentation of the OD and OC.

III-B1 Segmentation Network Architecture

Our segmentation network follows the spirit of the DeepLabv3+ architecture [13]. To further reduce the number of parameters and the computational cost, we replace the Xception backbone [13] with the lightweight and handy MobileNetV2 [14], as shown in Fig. 3. The initial convolutional layer and the following seven inverted residual blocks of MobileNetV2 are utilized to extract features. We keep the stride of the first convolutional layer and the following three blocks at the initial setting and set the stride to one in the remaining blocks, so the total downsampling rate of the network is eight. The ASPP component [13] with different dilation rates is utilized to generate multi-scale features. The resulting feature maps are concatenated and followed by a convolutional layer. To integrate semantic clues from different levels, we upsample the combined feature and concatenate it with the low-level feature for fine-grained semantic segmentation, as DeepLabv3+ does. Finally, we use another convolutional layer with two output channels followed by an activation function to generate the probability maps of the OD and OC simultaneously, according to the multi-label setting in [1]. The input size of the segmentation network matches the cropped ROI images, so it can take a whole cropped image as input.

III-B2 Morphology-aware Segmentation Loss

To improve the segmentation, we develop a novel morphology-aware segmentation loss to guide the network to segment and capture the smoothness priors of the OD and OC. This joint morphological loss includes a dice coefficient loss and a smoothness loss.

The dice coefficient loss [38] measures the overlap between the prediction and ground truth, and is written as

$$\mathcal{L}_{DICE} = 1 - \frac{2\sum_{i=1}^{N} p_i\, g_i}{\sum_{i=1}^{N} p_i^2 + \sum_{i=1}^{N} g_i^2},$$

where $N$ is the total number of pixels in the image; $p_i$ and $g_i$ are the predicted probability and the binary ground truth at pixel $i$, respectively.

The smoothness loss encourages the network to produce homogeneous predictions within neighboring regions. It is calculated by a binary pairwise label interaction:

$$\mathcal{L}_{SM} = \sum_{i}\sum_{j \in \mathcal{N}_i} \left[\, g_i = g_j \,\right] \left| p_i - p_j \right|,$$

where $\mathcal{N}_i$ denotes the four-connected neighbors of pixel $i$; $p$ and $g$ denote the prediction and ground truth, respectively, and $[\cdot]$ is the indicator function. The smoothness loss encourages the neighboring pixels $j$ of a central pixel $i$ to have similar predicted probabilities when their ground truths belong to the same class ($g_i = g_j$). The smoothness loss is applied to the OD and OC probability maps, respectively.

The joint morphology-aware segmentation loss is defined as

$$\mathcal{L}_{seg} = \alpha\, \mathcal{L}_{DICE}(p^d, g^d) + \beta\, \mathcal{L}_{DICE}(p^c, g^c) + \gamma \left( \mathcal{L}_{SM}(p^d, g^d) + \mathcal{L}_{SM}(p^c, g^c) \right),$$

where $p^d$, $g^d$, $p^c$, $g^c$ are the predicted probability maps and binary ground truth masks of the OD and OC, respectively; $\alpha$, $\beta$, and $\gamma$ are the weights, empirically set as 0.4, 0.6, and 1.0, respectively. Observing that it is more difficult to segment the OC than the OD due to the unclear boundaries of the OC, we empirically set a slightly larger value for $\beta$ than for $\alpha$.
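The loss terms above can be sketched in plain NumPy as follows. This is an illustrative reading rather than the exact published formulation: the absolute-difference pairwise penalty in the smoothness term and the exact placement of the weights 0.4, 0.6, and 1.0 over the dice and smoothness terms are assumptions based on the text.

```python
import numpy as np

def dice_loss(p, g, eps=1e-7):
    """Dice coefficient loss over a probability map p and binary mask g."""
    inter = np.sum(p * g)
    return 1.0 - 2.0 * inter / (np.sum(p * p) + np.sum(g * g) + eps)

def smoothness_loss(p, g):
    """Penalize differences between four-connected neighbors whose
    ground-truth labels agree, encouraging homogeneous predictions."""
    loss = 0.0
    # horizontal neighbor pairs
    same = (g[:, 1:] == g[:, :-1])
    loss += np.sum(np.abs(p[:, 1:] - p[:, :-1]) * same)
    # vertical neighbor pairs
    same = (g[1:, :] == g[:-1, :])
    loss += np.sum(np.abs(p[1:, :] - p[:-1, :]) * same)
    return loss / p.size

def morphology_aware_loss(p_disc, g_disc, p_cup, g_cup,
                          alpha=0.4, beta=0.6, gamma=1.0):
    """Joint loss: weighted dice terms plus smoothness on OD and OC maps."""
    return (alpha * dice_loss(p_disc, g_disc)
            + beta * dice_loss(p_cup, g_cup)
            + gamma * (smoothness_loss(p_disc, g_disc)
                       + smoothness_loss(p_cup, g_cup)))
```

A perfect binary prediction drives both terms to (near) zero, while a prediction that fluctuates inside a homogeneous ground-truth region is penalized by the smoothness term even if its overlap is good.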

III-C Patch-based Output Space Adversarial Learning

Different from high-level feature-based image classification, the feature for segmentation needs to encode both low-level descriptors and high-level abstractions, such as appearance, shape, context, and object semantic information. However, domain adaptation based on the feature space may not be the best choice for our segmentation task due to the complexity of handling the high-dimensional features [15]. Although the image appearance shifts across domains, the segmentations of the source and target domain images have similar geometric structures in the output space (i.e., the segmentation mask). Therefore, bridging the two domains by forcing them to share the same distribution in the output space becomes an effective way for domain adaptation. In this work, we propose to perform domain adaptation for the segmentation task through output space adversarial learning. Specifically, the segmentation masks of target domain images should be similar to those of the source domain. To achieve this, we attach a patch-level discriminator after the outputs of the segmentation network, and then employ the adversarial learning technique to train the whole framework. In this adversarial setting, the segmentation network aims to fool the discriminator by generating a similar output space distribution for both the source and target domains, while the discriminator aims to identify the segmentations from the target domain as outliers. The geometric structure constraints on the segmentation masks are guaranteed through this adversarial process.

Fig. 4: Network architecture of the patch-based discriminator.

III-C1 Patch Discriminator

We employ a patch discriminator (PatchGAN) [39, 40] to conduct the adversarial learning. PatchGAN tries to classify whether each overlapped patch from the predicted mask is in line with the distribution of the patches from the source predictions. Compared with image-level (ImageGAN) or pixel-level (PixelGAN) adversarial learning, PatchGAN has the ability to capture the local statistics [41] of the output space and guides the segmentation network to focus on the local structure similarity in the image patches.

We realize the patch-based discriminator through a fully convolutional network, as shown in Fig. 4. The network contains five convolutional layers, each followed by a LeakyReLU activation, except for the last one, which produces the output map. In the output of the patch-based discriminator, one pixel corresponds to a patch in the input probability maps, and each patch is classified as real (1) or fake (0) through the discriminator. We employ this adversarial learning strategy to force each generated patch in the prediction of the target domain to be similar to the patches of the source domain.
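The patch size that one discriminator output pixel "sees" is the receptive field of the convolution stack. Below is a small helper computing it with the standard recursion; since the exact kernel sizes and strides are not restated above, the two configurations exercised are common PatchGAN choices (five 4x4 stride-2 convolutions, or three stride-2 layers followed by two stride-1 layers, the classic 70x70 PatchGAN) and are assumptions, not the authors' exact setting.

```python
def receptive_field(layers):
    """Receptive field of one output unit of a stack of conv layers.

    layers: list of (kernel, stride) pairs, ordered input -> output.
    Standard recursion: rf grows by (k - 1) * jump; jump multiplies by s.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Five 4x4 convolutions with stride 2 (an assumed configuration):
print(receptive_field([(4, 2)] * 5))  # -> 94
```

So under this assumed configuration, each output pixel classifies a 94x94 patch of the input probability maps; the well-known 70x70 PatchGAN corresponds to `[(4, 2)] * 3 + [(4, 1)] * 2`.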

III-C2 Objective Function

With the adversarial learning, we model the optimization as a two-player min-max game, alternately updating the weights of the segmentation network and the discriminator.

The discriminator evaluates whether the input is from the source domain prediction. We formulate the training objective for the discriminator as

$$\mathcal{L}_D = -\sum_{h,w} \left[\, y \log D(S(x))_{h,w} + (1 - y) \log \left(1 - D(S(x))_{h,w}\right) \right],$$

where $S$ and $D$ denote the segmentation network and the discriminator, $y = 1$ if the patch prediction is from the source domain, and $y = 0$ if the patch prediction is from the target domain.

As for the segmentation network, the objective function consists of the proposed morphology-aware segmentation loss for the source domain images and the adversarial loss for the target domain images. In general, the training objective of the segmentation network is

$$\mathcal{L}_S = \mathcal{L}_{seg}(X_S) + \lambda_{adv}\, \mathcal{L}_{adv}(X_T),$$

where $X_S$ and $X_T$ denote the source and target domain images, and $\lambda_{adv}$ balances the two terms. Since we have the annotations for the images from the source domain, we can use the joint morphology-aware segmentation loss to optimize the segmentation network. The adversarial loss is designed for the images in the target domain without any annotations; the segmentation network is responsible for 'fooling' the discriminator into classifying the predictions of target domain images as source predictions.
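In NumPy terms, the two adversarial objectives the text describes can be sketched as below. This is a generic rendering of the standard cross-entropy GAN losses (the function names, the mean reduction, and the epsilon for numerical stability are my own choices), not the authors' implementation.

```python
import numpy as np

def discriminator_loss(d_src, d_tgt, eps=1e-7):
    """BCE objective for the patch discriminator: it should output 1 for
    patches of source-domain predictions and 0 for target-domain ones.

    d_src, d_tgt: discriminator output maps with values in (0, 1).
    """
    return (-np.mean(np.log(d_src + eps))
            - np.mean(np.log(1.0 - d_tgt + eps)))

def adversarial_loss(d_tgt, eps=1e-7):
    """Objective for the segmentation network on target images: push the
    discriminator to label target patches as source (1), i.e. fool it."""
    return -np.mean(np.log(d_tgt + eps))
```

When the discriminator separates the domains well (outputs near 1 on source, near 0 on target), its own loss is small while the adversarial loss seen by the segmentation network is large, which is what drives the target predictions toward the source output distribution.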

III-C3 Training Strategy

We optimize the segmentation network and the discriminator following the standard approach in [42]. In each training iteration, we feed the images from the source domain and the target domain to the network alternately, and then optimize the whole framework by minimizing the objective functions of the discriminator and the segmentation network. This procedure is repeated for every training iteration.

Domain Dataset Number of samples Image size Cameras Release year
Source REFUGE Train 400 Zeiss Visucam 500 2018
Target Drishti-GS Train/Test 50 + 51 unknown
Target RIM-ONE-r3 Train/Test 99 + 60 unknown
Target REFUGE Validation/Test 400 + 400 Canon CR-2
TABLE I: Statistics of the datasets used in evaluating the proposed method.
Drishti-GS RIM-ONE-r3
Method DI_cup DI_disc δ Method DI_cup DI_disc δ
pOSAL 0.858 0.965 0.082 pOSAL 0.787 0.865 0.081
pOSALseg-S 0.836 0.944 0.118 pOSALseg-S 0.744 0.779 0.103
Edupuganti et al.[29] 0.897 0.967 - DRIU [4] - 0.955 -
Sevastopolsky [27] 0.850 - - Sevastopolsky [27] - 0.950 -
Son et al.[43] - 0.967 - Son et al.[43] - 0.955 -
Zilly et al.[26] 0.871 0.973 - Zilly et al.[26] 0.824 0.942 -
pOSALseg-T 0.901 0.974 0.048 pOSALseg-T 0.856 0.968 0.049
TABLE II: Results of joint OD and OC segmentation on the Drishti-GS and RIM-ONE-r3 testing datasets.

IV Experiments

IV-A Datasets

We conducted experiments on three public OD and OC segmentation datasets: the Drishti-GS dataset [10], the RIM-ONE-r3 dataset [11], and the REFUGE challenge dataset. The statistics of these three datasets are listed in Table I. We take the training part of the REFUGE dataset as the source domain, and the Drishti-GS dataset, the RIM-ONE-r3 dataset, and the validation/test parts of the REFUGE dataset as the target domains. The source and target domain images were captured by different cameras, so the color and texture of the images differ, as shown in Fig. 5. We first extensively evaluated and analyzed our pOSAL framework on the Drishti-GS and RIM-ONE-r3 datasets, and then compared it with other state-of-the-art segmentation methods on the REFUGE test dataset.

Fig. 5: Comparison of images from different datasets. There exists a large variation in color and texture among the different dataset images.

IV-B Implementation Details

The framework was implemented in Python based on Keras with the TensorFlow backend. We first trained the segmentation network with the source domain images and annotations, and then utilized adversarial learning to train the whole pOSAL framework in an end-to-end manner. When training the segmentation network, we used the Adam [45] optimizer and initialized the backbone with MobileNetV2 [14] weights pre-trained on the ImageNet dataset. We set an initial learning rate and divided it every 100 epochs. We trained for 200 epochs in total with a mini-batch size of 16 on a server with four Nvidia Titan Xp GPUs. Data augmentation was adopted to expand the training dataset, including random scaling, rotation, flipping, elastic transformation, contrast adjustment, noise addition, and random erasing [46]. When training the whole pOSAL framework end-to-end, we fed the source and target images to the network alternately. The segmentation network was optimized as above, while the discriminator was optimized with the stochastic gradient descent (SGD) algorithm. The initial learning rates of the segmentation network and the discriminator were decreased using polynomial decay with a power of 0.9, as mentioned in [47], over a total of 100 epochs. We conducted a morphological operation, i.e., hole filling, to post-process the predicted masks. The implementation and segmentation results are publicly available.

IV-C Evaluation Metrics

We adopt the REFUGE challenge evaluation metrics, the dice coefficient ($DI$) and the vertical cup-to-disc ratio ($CDR$), to evaluate the segmentation performance of the presented method. The criteria are defined as

$$DI = \frac{2 \times N_{TP}}{2 \times N_{TP} + N_{FP} + N_{FN}}, \qquad CDR = \frac{VD_{cup}}{VD_{disc}}, \qquad \delta = \left| CDR_p - CDR_g \right|,$$

where $N_{TP}$, $N_{FP}$, and $N_{FN}$ represent the number of true positive, false positive, and false negative pixels, respectively. $CDR_p$ and $CDR_g$ denote the cup-to-disc ratio values for the prediction and the ground truth, while $VD_{cup}$ and $VD_{disc}$ are the vertical diameters of the OC and OD, respectively. The dice coefficient is a standard evaluation metric for segmentation tasks, while the CDR value is one of the critical indicators for glaucoma screening in clinical convention. We use the absolute error $\delta$ to evaluate the difference between the predicted CDR value $CDR_p$ and that of the ground truth $CDR_g$; a lower value represents a better prediction.
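These metrics can be computed from binary masks as in this sketch. The helper names and mask conventions are mine, and the vertical diameter is taken as the row extent of the mask, which assumes the fundus image is upright.

```python
import numpy as np

def dice_index(pred, gt):
    """DI = 2*TP / (2*TP + FP + FN) for binary masks."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    return 2.0 * tp / (2.0 * tp + fp + fn)

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: ratio of the masks' vertical extents."""
    cup_rows = np.nonzero(cup_mask.any(axis=1))[0]
    disc_rows = np.nonzero(disc_mask.any(axis=1))[0]
    vd_cup = cup_rows[-1] - cup_rows[0] + 1
    vd_disc = disc_rows[-1] - disc_rows[0] + 1
    return vd_cup / vd_disc

def cdr_error(pred_cup, pred_disc, gt_cup, gt_disc):
    """delta = |CDR_prediction - CDR_ground_truth|."""
    return abs(vertical_cdr(pred_cup, pred_disc)
               - vertical_cdr(gt_cup, gt_disc))
```

For example, a cup spanning 5 rows inside a disc spanning 10 rows gives a vertical CDR of 0.5, and identical prediction and ground truth give a dice index of 1.0 and a delta of 0.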

Fig. 6: Qualitative results on the Drishti-GS testing dataset. Each column presents one example. From top to bottom: original image, ROI region with ground truth contours of OD and OC, results of the pOSALseg-S, and results of our pOSAL framework. The green and blue contours indicate the boundary of OD and OC, respectively.
Method Drishti-GS (DI_cup DI_disc δ) RIM-ONE-r3 (DI_cup DI_disc δ)
TD-GAN[32] 0.747 0.924 0.117 0.728 0.853 0.118
Hoffman et al. [48] 0.851 0.959 0.093 0.755 0.852 0.082
Javanmardi et al. [31] 0.849 0.961 0.091 0.779 0.853 0.085
OSAL-pixel 0.851 0.962 0.089 0.778 0.854 0.084
pOSAL (ours) 0.858 0.965 0.082 0.787 0.865 0.081
TABLE III: Comparison with different domain adaptation methods on the Drishti-GS and RIM-ONE-r3 datasets.

IV-D Experiments on Drishti-GS and RIM-ONE-r3 Datasets

Under the domain adaptation setting, we need to utilize the unlabeled target domain images to train the whole framework. For a fair comparison, the unlabeled target domain images used in the training phase were different from the target domain images used in the testing phase. We followed this setting in all our experiments.

IV-D1 Effectiveness of Patch-based Output Space Adversarial Learning

The Drishti-GS and RIM-ONE-r3 datasets both provide training and testing image splits. Therefore, for the Drishti-GS dataset, we used the REFUGE training dataset as the source domain and the training part of the Drishti-GS dataset as the target domain to train our pOSAL framework. We then report the segmentation performance of our method on the testing part of the Drishti-GS dataset. We conducted experiments with the same dataset setting for the RIM-ONE-r3 dataset.

Table II presents the segmentation results on the Drishti-GS and RIM-ONE-r3 testing datasets. For each dataset, we show the segmentation performance of our pOSAL framework (pOSAL) and the segmentation network only (pOSALseg-S) to demonstrate the effect of the proposed output space adversarial learning. We used the REFUGE training dataset to train the pOSALseg-S model and directly evaluated it on the Drishti-GS and RIM-ONE-r3 testing images. It is observed that pOSAL consistently improves the DI of the optic cup and disc and reduces δ on the Drishti-GS and RIM-ONE-r3 datasets compared with pOSALseg-S. On the RIM-ONE-r3 dataset, we achieve 4.3% and 8.6% DI improvements for the cup and disc segmentation with the patch-based output space adversarial learning, while we also achieve 2.2% and 2.1% DI improvements for OC and OD on the Drishti-GS dataset, respectively. Since the domain discrepancy between the REFUGE training data and the RIM-ONE-r3 data is larger than that between the REFUGE training data and the Drishti-GS data (see Fig. 5), the absolute DI values of the optic cup and disc on RIM-ONE-r3 are lower than those on Drishti-GS. These comparisons demonstrate that the patch-based output space adversarial learning can alleviate the performance degradation among datasets with domain shift.

IV-D2 Qualitative Results

We show some qualitative results of the OD and OC segmentation on the Drishti-GS dataset in Fig. 6. The pOSALseg-S method without domain adaptation can locate the approximate position but fails to generate accurate boundaries of the OD and OC due to the low image contrast at the boundary between OD and OC, as well as between OD and background (especially columns A, B, and E in Fig. 6). In contrast, our proposed method successfully localizes the OD and OC, preserves the shape prior, and generates more accurate boundaries.

Team DI_cup Rank DI_disc Rank δ Rank Overall score Overall rank
CUHKMED (ours) 0.8826 2 0.9602 1 0.0450 2 1.75 1
Masker 0.8837 1 0.9464 7 0.0414 1 2.50 2
BUCT 0.8728 3 0.9525 3 0.0456 3 3.00 3
NKSG 0.8643 5 0.9488 5 0.0465 4 4.60 4
VRT 0.8600 6 0.9532 2 0.0525 7 5.40 5
AIML 0.8519 7 0.9505 4 0.0469 5 5.45 6
Mammoth 0.8667 4 0.9361 10 0.0526 8 7.10 7
SMILEDeepDR 0.8367 8 0.9386 9 0.0488 6 7.45 8
NIGHTOwl 0.8257 10 0.9487 6 0.0563 9 8.60 9
SDSAIRC 0.8315 9 0.9436 8 0.0674 10 9.15 10
Cvblab 0.7728 11 0.9077 11 0.0798 11 11.00 11
Winter_Fell 0.6861 12 0.8772 12 0.1536 12 12.00 12
TABLE IV: Results of OD and OC segmentation on the REFUGE testing dataset. Top three items are bold for each metric.

Iv-D3 Comparison with other Segmentation Methods

We also report the segmentation performance of several supervised learning methods from the literature on the above two datasets. In these methods, the networks were trained on the training split of each dataset in a supervised way and evaluated on the corresponding testing split. Besides the methods in the literature, we also trained our segmentation network on the training data and report the segmentation performance on the testing images (denoted as pOSALseg-T) to show the effectiveness of our designed segmentation network with the morphology-aware segmentation loss. We show these results in Table II. Our segmentation network (pOSALseg-T) produces a better DI for the optic cup and disc segmentation than the other supervised methods on both the Drishti-GS and RIM-ONE-r3 datasets, demonstrating the effectiveness of the segmentation network design. Moreover, the optic cup and disc segmentation performance of our pOSAL framework on the Drishti-GS dataset is very close to that of these supervised methods, which further indicates the effectiveness of the proposed patch-based output space adversarial learning.

IV-D4 Comparison with Different Domain Adaptation Approaches

To the best of our knowledge, no previous work explores domain adaptation for optic disc and cup segmentation. Therefore, we compared our pOSAL framework with several unsupervised domain adaptation approaches from other medical image analysis and natural image processing tasks. Specifically, we compared our pOSAL framework with a CycleGAN-based unsupervised domain adaptation method, TD-GAN [32], a latent feature alignment method [48], and a recent domain adaptation method for eye vasculature segmentation [31]. To show the effectiveness of our patch-based discriminator, we also implemented a pixel-based discriminator for adversarial learning (denoted as OSAL-pixel). Table III presents the performance of the different domain adaptation methods on the Drishti-GS and RIM-ONE-r3 datasets. All the methods adopted the same segmentation network architecture for a fair comparison. Our pOSAL framework achieves the best optic cup and disc segmentation performance among these unsupervised domain adaptation methods on both datasets. Moreover, the patch-based adversarial learning outperforms both the pixel-level discriminator (OSAL-pixel) and the image-level discriminator (Javanmardi et al. [31]), as it considers local and global context information simultaneously.
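The patch-based adversarial objective can be sketched as follows. The discriminator produces an N × N grid of scores, one per local patch of the segmentation output, so the adversarial gradients carry fine-grained local feedback instead of a single image-level or per-pixel score. We use a least-squares GAN formulation purely for illustration; the paper's exact adversarial loss may differ:

```python
import numpy as np

def patch_adv_losses(d_src, d_tgt):
    """Least-squares adversarial losses over N x N patch score maps.
    d_src / d_tgt: discriminator score maps for source / target predictions;
    each spatial location scores one local patch of the segmentation output."""
    # discriminator objective: source patches -> 1 (real), target patches -> 0 (fake)
    loss_d = np.mean((d_src - 1.0) ** 2) + np.mean(d_tgt ** 2)
    # segmentation-network objective: make target patches look like source ones
    loss_adv = np.mean((d_tgt - 1.0) ** 2)
    return loss_d, loss_adv
```

Averaging over the score map is what distinguishes the patch-based setup from a single global real/fake score: each patch contributes its own term to the loss.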

IV-D5 Performance of Glaucoma Screening

The vertical cup-to-disc ratio (CDR) is one of the important indicators for glaucoma screening. Therefore, we report the glaucoma diagnosis performance based on our segmentation method. Specifically, we use the segmented OD and OC masks to calculate the vertical CDR value CDR_i for the i-th image. The normalized CDR value of the i-th image is then calculated as

CDR̂_i = (CDR_i − CDR_min) / (CDR_max − CDR_min),

where CDR_max and CDR_min are the maximum and minimum vertical CDR values, respectively, over all the testing images. We report the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve (AUC) for glaucoma screening evaluation in Fig. 7.
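The computation above can be sketched as follows. The min-max normalization matches the description; computing the vertical CDR as the ratio of the vertical extents of the two masks is our illustrative assumption:

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary masks,
    taken as the ratio of the masks' vertical extents (our assumption)."""
    cup_h = np.ptp(np.where(cup_mask)[0]) + 1    # vertical extent of the cup
    disc_h = np.ptp(np.where(disc_mask)[0]) + 1  # vertical extent of the disc
    return cup_h / disc_h

def normalized_cdr(cdr_values):
    """Min-max normalize vertical CDR values over all testing images."""
    cdr = np.asarray(cdr_values, dtype=float)
    lo, hi = cdr.min(), cdr.max()
    return (cdr - lo) / (hi - lo)
```

For instance, a cup spanning 5 rows inside a disc spanning 10 rows gives a vertical CDR of 0.5.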

Fig. 7: The ROC curves of our method in glaucoma screening on the Drishti-GS and RIM-ONE-r3 datasets.

IV-E Results of the REFUGE Challenge

We report the results of the optic disc and cup segmentation task of the REFUGE challenge, held in conjunction with MICCAI 2018. The challenge datasets consist of three parts: a training dataset, a validation dataset, and a testing dataset. The validation and testing datasets were acquired with the same cameras, and detailed information is shown in Table I. The testing images were evaluated in an on-site session, where the participants had four hours to receive the testing images and submit their predictions, which prevents manual tuning of the hyper-parameters. We treated the training images as the source domain and the validation images as the unlabeled target domain to train our pOSAL framework. The testing predictions were then obtained by an ensemble of five models to further improve the segmentation performance. Other participating teams also utilized ensemble schemes to generate their final testing predictions (e.g., team Masker).
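The text does not specify how the five models' outputs are fused; assuming the common scheme of averaging probability maps and thresholding, a sketch:

```python
import numpy as np

def ensemble_predict(prob_maps, threshold=0.5):
    """Fuse probability maps from several models by simple averaging,
    then threshold to a binary mask (fusion scheme is our assumption).
    prob_maps: list of arrays of identical shape with values in [0, 1]."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)
```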

There were 12 teams selected to participate in the on-site REFUGE challenge for the OD and OC segmentation task, and the challenge results are listed in Table IV (the leaderboard is available on the challenge website). Each team was allowed only one submission, and the teams were ranked according to the following weighted sum of the ranks of three metrics:

S = 0.35 × R_cup + 0.25 × R_disc + 0.40 × R_δ,

where R_cup, R_disc, and R_δ denote the rank of the DI_cup, DI_disc, and δ criteria, respectively. A lower S indicates a better final rank. All these methods utilized deep neural networks for OD and OC segmentation. Some methods made use of other datasets (e.g., ORIGA [6] and IDRiD) as extra training data to improve model generalization, while we only used the training and validation datasets provided by the organizers. As observed in Table IV, our pOSAL framework outperforms the second-ranking team, Masker, by around 1.4% on the optic disc DI, while achieving compelling performance on both the optic cup DI and the CDR error δ. Overall, our pOSAL framework achieves the best overall ranking score and first place in this challenging task, demonstrating the effectiveness of pOSAL. We also visually compare the results with and without the output space adversarial learning using a single model. As shown in Fig. 8, our pOSAL framework retains the elliptical shape and keeps the optic cup within the optic disc, producing better visual results.
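The ranking score can be reproduced from the per-metric ranks; the weights 0.35/0.25/0.40 are reconstructed to match the Score column of Table IV (they reproduce every team's score exactly):

```python
def refuge_rank_score(r_cup, r_disc, r_cdr):
    """Weighted rank score S = 0.35*R_cup + 0.25*R_disc + 0.40*R_δ.
    Weights reconstructed from Table IV; a lower S means a better final rank."""
    return 0.35 * r_cup + 0.25 * r_disc + 0.40 * r_cdr
```

For example, CUHKMED's ranks (2, 1, 2) yield S = 1.75, and Masker's (1, 7, 1) yield S = 2.50, matching the table.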

Fig. 8: Qualitative results of the REFUGE testing image. The green and blue contours indicate the boundary of OD and OC, respectively.
Method | DI_cup | DI_disc | δ
pOSALseg-S | 0.869 | 0.932 | 0.059
pOSAL | 0.875 | 0.946 | 0.051
TABLE V: Results of OD and OC segmentation on the REFUGE validation dataset.

We further validate the effectiveness of the patch-based output space adversarial learning on the REFUGE validation dataset. Specifically, we randomly divided the 400 validation images into two equal-sized parts, using one as the unlabeled target domain training data to train the network and the other as the target domain testing data to evaluate it. We report the performance of our pOSAL framework and the same network without domain adaptation (pOSALseg-S) in Table V. The presented pOSAL framework also improves the DI of the optic cup and disc on the REFUGE validation dataset.

Loss function | DI_cup | DI_disc
Cross Entropy Loss | 0.860 | 0.953
Dice Loss | 0.878 | 0.950
Morphology-aware Loss | 0.885 | 0.956
TABLE VI: Comparison of different loss functions.

We compared the effect of different loss functions on the segmentation network. Specifically, we divided the 400 REFUGE training images into 320 training and 80 evaluation images and trained the network with each loss function. The results are shown in Table VI. The Dice Loss achieved a better DI for the OC and a comparable DI for the OD compared with the Cross Entropy Loss. When combined with the smoothness loss, the proposed morphology-aware segmentation loss achieves the best DI for both the OD and OC predictions, suggesting that it produces high-quality predictions.
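The morphology-aware loss combines a Dice term with a smoothness term. The NumPy sketch below uses a total-variation-style smoothness penalty as an illustrative choice; the paper's exact smoothness formulation and weighting may differ:

```python
import numpy as np

def dice_loss(prob, gt, eps=1e-7):
    """Soft Dice loss on a probability map."""
    inter = np.sum(prob * gt)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(gt) + eps)

def smoothness_loss(prob):
    """Total-variation-style penalty encouraging smooth, regular boundaries
    (illustrative stand-in for the paper's smoothness term)."""
    dy = np.abs(np.diff(prob, axis=0)).mean()
    dx = np.abs(np.diff(prob, axis=1)).mean()
    return dx + dy

def morphology_aware_loss(prob, gt, lam=1.0):
    # lam weights the smoothness term; its value here is illustrative
    return dice_loss(prob, gt) + lam * smoothness_loss(prob)
```

A perfect, constant prediction incurs near-zero loss; noisy or jagged probability maps are penalized by the smoothness term even when the Dice term is small.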

We also provide the glaucoma screening evaluation results here for readers' reference. We directly utilized the segmentation results of our pOSAL framework to calculate the vertical CDR values for glaucoma diagnosis, following the method used for the previous two datasets. Since we cannot access the glaucoma ground truth, we only report the AUC of glaucoma screening on the challenge testing dataset, where our result ranks third (team CUHKMED) in the on-site challenge.

Block type | Params | DI_cup | DI_disc | Time
Xception | 41.3M | 0.885 | 0.953 | 0.124s
MobileNetV2 | 5.8M | 0.885 | 0.956 | 0.056s
TABLE VII: Comparison of different network backbones.

V Discussion

The cup-to-disc ratio has been recognized as an essential attribute for glaucoma screening, so a high-quality automatic segmentation method is in high demand in clinical practice. Although plenty of works have addressed this problem, there is still a gap between research and clinical practice, due to the lack of annotations, the noisy or sparse annotations in clinical applications, and the domain shift between training images and real testing images. In this work, we focus on developing unsupervised domain adaptation methods to make optic disc and cup segmentation applicable in clinical settings. The key insight of our method is to encourage the target domain predictions to be closer to the source ones, since the OD and OC geometric structure should be preserved across source and target domain images. Extensive experiments on three public fundus image datasets sufficiently demonstrate the potential of our method in generalizing the segmentation network to unlabeled target domain images.

Method | Drishti-GS DI_cup | DI_disc | δ | RIM-ONE-r3 DI_cup | DI_disc | δ
E | 0.798 | 0.930 | 0.098 | 0.638 | 0.709 | 0.150
pOSAL | 0.858 | 0.965 | 0.082 | 0.787 | 0.865 | 0.081
TABLE VIII: Performance of the extraction network E.

In our method, we first used an extraction network E to crop an ROI image before performing the segmentation. To show the necessity of the ROI extraction, we conducted another experiment to evaluate the overall performance of the extraction network E. We trained a new extraction network with two outputs (OD and OC) instead of only the optic disc. The OC and OD performance on the Drishti-GS and RIM-ONE-r3 datasets is shown in Table VIII. The segmentation performance of using only the extraction network is much lower than that of the full pOSAL method, verifying the effectiveness of the two-stage pipeline. A good ROI is the basis of good segmentation results in the two-stage pipeline. In some cases, the boundary between the OD and the background is unclear, so the OD may not be in the center of the ROI. To avoid this situation, the ROI size needs to be designed properly. In our experiments, the width and height of the ROI are about twice those of the OD, which helps tolerate the localization deviation. We found that all OD and OC regions are covered by the cropped ROI under this setting.
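The ROI cropping described above (a square roughly twice the OD size, centered on the detected disc) can be sketched as follows; the helper name and the boundary-clipping policy are our assumptions:

```python
import numpy as np

def crop_roi(image, disc_mask, scale=2.0):
    """Crop an ROI centered on the detected OD, with side length about
    `scale` times the OD diameter, clipped to the image bounds."""
    rows, cols = np.where(disc_mask)
    cy, cx = int(rows.mean()), int(cols.mean())    # disc center
    diameter = max(np.ptp(rows), np.ptp(cols)) + 1  # larger OD extent
    half = int(scale * diameter / 2)
    y0, y1 = max(0, cy - half), min(image.shape[0], cy + half)
    x0, x1 = max(0, cx - half), min(image.shape[1], cx + half)
    return image[y0:y1, x0:x1]
```

With scale=2.0, an OD detected slightly off-center still falls well inside the cropped window, which is the tolerance argument made above.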

Currently, numerous works focus on computation-efficient network design [14, 49] to promote deep learning applications on mobile devices with limited computing power. In our work, we used MobileNetV2 [14] as the network backbone to reduce the computation cost. We compared the segmentation performance, parameter numbers, and testing times of the original backbone Xception [13] and MobileNetV2 [14] in Table VII. The MobileNetV2 backbone has far fewer parameters and reduces the testing time by half while delivering similar performance to the Xception backbone. This comparison indicates that we could develop even more lightweight network architectures to promote mobile applications for glaucoma screening.

Although our network can be generalized to unlabeled target domain images, extra unlabeled images from the target domain are needed to train the network. Moreover, a new network has to be re-trained whenever the images come from a new target domain. In practice, unlabeled target domain images may not be available during the training stage. Therefore, in the future, we will explore domain generalization techniques [50, 51, 52] to tackle this problem without requiring many target images.

VI Conclusion

We presented a novel patch-based Output Space Adversarial Learning (pOSAL) framework to segment the optic disc and cup from different fundus image datasets. We first employed a lightweight and efficient network with the morphology-aware segmentation loss to generate accurate and smooth predictions. To tackle the domain shift between the source and target domains, we exploited an unsupervised domain adaptation model to improve the generalization of the segmentation network. In particular, the patch-based output space adversarial learning was designed to capture the local statistics of the output space and to guide the segmentation network to generate similar outputs for images from the target and source domains. Extensive experiments on three public retinal fundus image datasets demonstrate the significant improvements and the effectiveness of the presented pOSAL framework. In the near future, we will extend this framework to other medical image analysis problems.


  • [1] H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, and X. Cao, “Joint optic disc and cup segmentation based on multi-label deep network and polar transformation,” IEEE Transactions Medical Imaging, 2018.
  • [2] M. C. V. S. Mary, E. B. Rajsingh, and G. R. Naik, “Retinal fundus image analysis for diagnosis of glaucoma: a comprehensive survey,” IEEE Access, vol. 4, pp. 4327–4354, 2016.
  • [3] P. Naithani, R. Sihota, P. Sony, T. Dada, V. Gupta, D. Kondal, and R. M. Pandey, “Evaluation of optical coherence tomography and heidelberg retinal tomography parameters in detecting early and moderate glaucoma,” Investigative ophthalmology & visual science, vol. 48, no. 7, pp. 3138–3145, 2007.
  • [4] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool, “Deep retinal image understanding,” in MICCAI.   Springer, 2016, pp. 140–148.
  • [5] S. M. Shankaranarayana, K. Ram, K. Mitra, and M. Sivaprakasam, “Joint optic disc and cup segmentation using fully convolutional and adversarial networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis.   Springer, 2017, pp. 168–176.
  • [6] Z. Zhang, F. S. Yin, J. Liu, W. K. Wong, N. M. Tan, B. H. Lee, J. Cheng, and T. Y. Wong, “Origa-light: An online retinal fundus image database for glaucoma analysis and research,” in Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE.   IEEE, 2010, pp. 3065–3068.
  • [7] M. Ghafoorian, A. Mehrtash, T. Kapur, N. Karssemeijer, E. Marchiori, M. Pesteie, C. R. Guttmann, F.-E. de Leeuw, C. M. Tempany, B. van Ginneken et al., “Transfer learning for domain adaptation in mri: Application in brain lesion segmentation,” in MICCAI.   Springer, 2017, pp. 516–524.
  • [8] K. Kamnitsas, C. Baumgartner, C. Ledig, V. Newcombe, J. Simpson, A. Kane, D. Menon, A. Nori, A. Criminisi, D. Rueckert et al., “Unsupervised domain adaptation in brain lesion segmentation with adversarial networks,” in International Conference on Information Processing in Medical Imaging.   Springer, 2017, pp. 597–609.
  • [9] Q. Dou, C. Ouyang, C. Chen, H. Chen, and P.-A. Heng, “Unsupervised cross-modality domain adaptation of convnets for biomedical image segmentations with adversarial loss,” IJCAI, 2018.
  • [10] J. Sivaswamy, S. Krishnadas, A. Chakravarty, G. Joshi, A. S. Tabish et al., “A comprehensive retinal image dataset for the assessment of glaucoma from the optic nerve head analysis,” JSM Biomedical Imaging Data Papers, vol. 2, no. 1, p. 1004, 2015.
  • [11] F. Fumero, S. Alayón, J. Sanchez, J. Sigut, and M. Gonzalez-Hernandez, “Rim-one: An open retinal image database for optic nerve evaluation,” in Computer-Based Medical Systems (CBMS), 2011 24th International Symposium on.   IEEE, 2011, pp. 1–6.
  • [12] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa, “Visual domain adaptation: A survey of recent advances,” IEEE signal processing magazine, vol. 32, no. 3, pp. 53–69, 2015.
  • [13] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” ECCV, 2018.
  • [14] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in CVPR, 2018, pp. 4510–4520.
  • [15] Y.-H. Tsai, W.-C. Hung, S. Schulter, K. Sohn, M.-H. Yang, and M. Chandraker, “Learning to adapt structured output space for semantic segmentation,” CVPR, 2018.
  • [16] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, L. Kennedy et al., “Optic nerve head segmentation,” IEEE Transactions Medical Imaging, vol. 23, no. 2, pp. 256–264, 2004.
  • [17] M. D. Abramoff, W. L. Alward, E. C. Greenlee, L. Shuba, C. Y. Kim, J. H. Fingert, and Y. H. Kwon, “Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features,” Investigative ophthalmology & visual science, vol. 48, no. 4, pp. 1665–1673, 2007.
  • [18] G. D. Joshi, J. Sivaswamy, and S. Krishnadas, “Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment,” IEEE Transactions Medical Imaging, vol. 30, no. 6, pp. 1192–1205, 2011.
  • [19] J. Cheng, J. Liu, Y. Xu, F. Yin, D. W. K. Wong, N.-M. Tan, D. Tao, C.-Y. Cheng, T. Aung, and T. Y. Wong, “Superpixel classification based optic disc and optic cup segmentation for glaucoma screening,” IEEE Transactions Medical Imaging, vol. 32, no. 6, pp. 1019–1032, 2013.
  • [20] D. Wong, J. Liu, J. Lim, X. Jia, F. Yin, H. Li, and T. Wong, “Level-set based automatic cup-to-disc ratio determination using retinal fundus images in argali,” in Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE.   IEEE, 2008, pp. 2266–2269.
  • [21] D. Wong, J. Liu, J. Lim, H. Li, and T. Wong, “Automated detection of kinks from blood vessels for optic cup segmentation in retinal images,” in Medical Imaging 2009: Computer-Aided Diagnosis, vol. 7260.   International Society for Optics and Photonics, 2009, p. 72601J.
  • [22] Y. Xu, D. Xu, S. Lin, J. Liu, J. Cheng, C. Y. Cheung, T. Aung, and T. Y. Wong, “Sliding window and regression based cup detection in digital fundus images for glaucoma diagnosis,” in MICCAI.   Springer, 2011, pp. 1–8.
  • [23] Y. Xu, L. Duan, S. Lin, X. Chen, D. W. K. Wong, T. Y. Wong, and J. Liu, “Optic cup segmentation for glaucoma detection using low-rank superpixel representation,” in MICCAI.   Springer, 2014, pp. 788–795.
  • [24] Y. Zheng, D. Stambolian, J. O’Brien, and J. C. Gee, “Optic disc and cup segmentation from color fundus photograph using graph cut with priors,” in MICCAI.   Springer, 2013, pp. 75–82.
  • [25] Y. Xu, J. Liu, S. Lin, D. Xu, C. Y. Cheung, T. Aung, and T. Y. Wong, “Efficient optic cup detection from intra-image learning with retinal structure priors,” in MICCAI.   Springer, 2012, pp. 58–65.
  • [26] J. Zilly, J. M. Buhmann, and D. Mahapatra, “Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation,” Computerized Medical Imaging and Graphics, vol. 55, pp. 28–41, 2017.
  • [27] A. Sevastopolsky, “Optic disc and cup segmentation methods for glaucoma detection with modification of u-net convolutional neural network,” Pattern Recognition and Image Analysis, vol. 27, no. 3, pp. 618–624, 2017.
  • [28] H. Fu, J. Cheng, Y. Xu, C. Zhang, D. W. K. Wong, J. Liu, and X. Cao, “Disc-aware ensemble network for glaucoma screening from fundus image,” IEEE Transactions Medical Imaging, 2018.
  • [29] V. G. Edupuganti, A. Chawla, and A. Kale, “Automatic optic disk and cup segmentation of fundus images using deep learning,” in 2018 25th IEEE International Conference on Image Processing (ICIP).   IEEE, 2018, pp. 2227–2231.
  • [30] F. Yin, J. Liu, D. W. K. Wong, N. M. Tan, C. Cheung, M. Baskaran, T. Aung, and T. Y. Wong, “Automated segmentation of optic disc and optic cup in fundus images for glaucoma diagnosis,” in Computer-based medical systems (CBMS), 2012 25th international symposium on.   IEEE, 2012, pp. 1–6.
  • [31] M. Javanmardi and T. Tasdizen, “Domain adaptation for biomedical image segmentation using adversarial training,” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on.   IEEE, 2018, pp. 554–558.
  • [32] Y. Zhang, S. Miao, T. Mansi, and R. Liao, “Task driven generative modeling for unsupervised domain adaptation: Application to x-ray image segmentation,” MICCAI, 2018.
  • [33] X. Yang, H. Dou, R. Li, X. Wang, C. Bian, S. Li, D. Ni, and P.-A. Heng, “Generalizing deep models for ultrasound image segmentation,” in MICCAI.   Springer, 2018, pp. 497–505.
  • [34] Z. Zhang, L. Yang, and Y. Zheng, “Translating and segmenting multimodal medical volumes with cycle-and shapeconsistency generative adversarial network,” in CVPR, 2018, pp. 9242–9251.
  • [35] C. Chen, Q. Dou, H. Chen, and P.-A. Heng, “Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest x-ray segmentation,” arXiv preprint arXiv:1806.00600, 2018.
  • [36] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in ICCV, 2017.
  • [37] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in MICCAI.   Springer, 2015, pp. 234–241.
  • [38] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in 3D Vision (3DV), 2016 Fourth International Conference on.   IEEE, 2016, pp. 565–571.
  • [39] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” CVPR, 2017.
  • [40] C. Li and M. Wand, “Precomputed real-time texture synthesis with markovian generative adversarial networks,” in ECCV.   Springer, 2016, pp. 702–716.
  • [41] Z. Yi, H. R. Zhang, P. Tan, and M. Gong, “Dualgan: Unsupervised dual learning for image-to-image translation.” in ICCV, 2017, pp. 2868–2876.
  • [42] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [43] J. Son, S. J. Park, and K.-H. Jung, “Towards accurate segmentation of retinal vessels and the optic disc in fundoscopic images with generative adversarial networks,” Journal of digital imaging, pp. 1–14, 2018.
  • [44] F. Chollet et al., “Keras,” 2015.
  • [45] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [46] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, “Random erasing data augmentation,” arXiv preprint arXiv:1708.04896, 2017.
  • [47] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
  • [48] J. Hoffman, D. Wang, F. Yu, and T. Darrell, “Fcns in the wild: Pixel-level adversarial and constraint-based adaptation,” arXiv preprint arXiv:1612.02649, 2016.
  • [49] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “Shufflenet v2: Practical guidelines for efficient cnn architecture design,” in ECCV.   Springer, 2018, pp. 122–138.
  • [50] K. Muandet, D. Balduzzi, and B. Schölkopf, “Domain generalization via invariant feature representation,” in International Conference on Machine Learning, 2013, pp. 10–18.
  • [51] D. Li, Y. Yang, Y.-Z. Song, and T. M. Hospedales, “Deeper, broader and artier domain generalization,” in ICCV.   IEEE, 2017, pp. 5543–5551.
  • [52] H. Li, S. J. Pan, S. Wang, and A. C. Kot, “Domain generalization with adversarial feature learning,” in CVPR, 2018.