Glaucoma is a degenerative optic neuropathy that causes functional damage and visual field defects by altering several structures in the optic nerve head (ONH) of the retina. Currently, the diagnostic procedure for detecting glaucoma requires several time-consuming tests, in addition to a visual examination of medical images whose interpretation is often subjective, even for expert ophthalmologists, at grading time. Consequently, many state-of-the-art studies propose image-processing techniques to assist experts via machine-learning solutions.
Optical coherence tomography (OCT) is the quintessential imaging modality for evaluating glaucomatous damage, since it reveals the deterioration of the cell layers of the optic nerve, which is intimately linked to glaucoma severity (see Fig. 1). Specifically, the retinal nerve fibre layer (RNFL) has been reported in the clinical literature as the most important structure for monitoring glaucoma progression.
Some previous studies combined hand-driven learning algorithms with conventional machine-learning classifiers to discern between healthy and glaucomatous OCT samples [4, 9]. These studies demand a prior segmentation of the retinal layers of interest in order to conduct the hand-crafted feature extraction from specific regions of the B-scans, e.g. the RNFL thickness. However, approaches based on prior segmentation knowledge can propagate remaining errors to the downstream classification task, as reported in a recent study.
. To avoid this shortcoming, deep learning arises as an appealing alternative to derive high-performing computer-aided diagnosis systems in the ophthalmology field.
Nevertheless, despite the rise of these models in many computer vision and medical problems, their application to OCT images for glaucoma assessment still presents several limitations. First, most of the literature focuses on discerning between healthy and glaucoma classes via OCT B-scans [5, 6], SD-OCT volumes [7, 10, 13] or RNFL probability maps obtained by combining fundus images and OCT samples [16, 14]. Indeed, to the best of our knowledge, only one work proposes a glaucoma-based scenario beyond the healthy-glaucoma classification, by including the suspect label. Furthermore, the limited size of the available data sets may hamper the generalization capabilities of the learned models, as they could easily lead to overfitting. This is particularly important in the presence of a domain shift between the training (labeled) and testing (unlabeled) data distributions. A naive solution would be to collect and label data that follow a distribution similar to that of the testing images, which can then be employed to fine-tune the model on the new test data. However, this requires labeling large amounts of testing images, a time-consuming human process that is unrealistic in clinical practice. To alleviate the problem of domain shift, unsupervised domain adaptation has recently appeared as an interesting learning strategy. These methods typically rely on an adversarial learning framework [21, 18], which can lead to unstable training and high sensitivity to hyperparameters.
To fill these gaps in the literature, we propose an alternative learning strategy for glaucoma grading that differs from the current literature in several ways. First, unlike existing methods, our approach addresses the problem of glaucoma grading according to the clinical annotation criteria. Second, it demonstrably improves the testing performance of a model trained in the presence of domain shift, approaching the results obtained under full supervision. Third, the simplicity of our model facilitates training convergence, in contrast to complex adversarial learning-based methods. Last, we propose architectural changes that yield more useful representations from the OCT B-scans, leading to better performance and more meaningful prediction interpretations compared to conventional architectures.
II-A Self-training strategy
Self-training, or self-supervised learning, aims at automatically generating a supervisory signal for a given task, which can then be used either to enhance the representation learning of features or to label an independent dataset. The former typically involves integrating an unsupervised pretext task, relational reasoning or contrastive learning. Nevertheless, in our glaucoma grading scenario, we advocate for a sequential strategy, where the model is first trained on a labeled source dataset. Then, this model performs inference on the unlabeled dataset to generate the target pseudo-labels, which are later used to train the model, mimicking full supervision on the test data (see Fig. 2). This learning strategy has demonstrated a high classification performance on different imaging modalities, such as histopathology or natural images from ImageNet. Formally, we denote $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ the independent training set, where $x_i$ refers to the $i$-th sample of the database with its corresponding ground truth $y_i$, being $N$ the number of training image pairs. Furthermore, we use $\mathcal{T}$ to represent a given task, where the mapping $f: x \rightarrow y$ is usually learnt by a neural network. In the current work, we leverage the scenario where the source and target domains are related ($\mathcal{D}_s \neq \mathcal{D}_t$), since both correspond to OCT samples, but acquired under different hardware settings. Furthermore, the tasks across models are the same ($\mathcal{T}_s = \mathcal{T}_t$), as we focus on multi-class B-scan classification to discern between healthy, early and advanced glaucoma classes.
Thus, let $x_s \in \mathbb{R}^{M \times N}$ be a raw B-scan of dimensions $M \times N$; a first model is defined by training a base encoder network on the labeled source dataset $\mathcal{D}_s$ composed of $N_s$ samples, where each training instance is denoted by $(x_s, y_s)$, as observed in Fig. 2. Then, the embedding representations are fed into a classification layer to extract the logit scores, which are transformed via a softmax function to obtain a class probability (see Algorithm 1). The weights and biases of the first model are updated during the back-propagation step at every epoch. Once the training of this model is finished, it is used to predict the class of each sample from the unlabeled target dataset $\mathcal{D}_t$, with $N_t$ samples, leading to the corresponding pseudo-labels $\hat{y}_t$ (Algorithm 2). Last, the pseudo-labels are used to augment the training dataset, which results in $\mathcal{D}_{s+t} = \mathcal{D}_s \cup \{(x_t, \hat{y}_t)\}_{t=1}^{N_t}$, where $N_{s+t} = N_s + N_t$. In this way, the model in the last step is trained on the augmented pseudo-labeled dataset, following Algorithm 1, under the hypothesis that substantial improvements could be reported at test time for two reasons: i) the increase of labeled training samples, and ii) the knowledge distilled from the unlabeled target dataset.
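The two-stage procedure above can be sketched in a few lines. The snippet below is a minimal illustration rather than the actual CNN pipeline: a toy nearest-centroid classifier stands in for the network, and the names (`CentroidClassifier`, `self_training`) are hypothetical.

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in for the CNN: predicts the class of the nearest centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance of each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def self_training(X_src, y_src, X_tgt, model):
    # Stage 1: train on the labeled source set, then infer target pseudo-labels.
    model.fit(X_src, y_src)
    pseudo = model.predict(X_tgt)
    # Stage 2: augment the training set with the pseudo-labeled target samples
    # and retrain, mimicking full supervision on the target domain.
    X_aug = np.concatenate([X_src, X_tgt])
    y_aug = np.concatenate([y_src, pseudo])
    model.fit(X_aug, y_aug)
    return model, pseudo
```

The key point is that `fit` is called twice: once on the labeled source set alone, and once on the source set augmented with the pseudo-labeled target samples.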
II-B Proposed architecture
In addition to the presented learning strategy, we propose several architectural changes that improve over existing architectures both quantitatively and from an interpretability perspective. In our recent works focused on glaucoma detection from raw OCT samples [7, 5, 6], we conducted an in-depth experimental analysis of different deep-learning architectures, both trained from scratch and fine-tuned from the most popular models in the literature. Specifically, the VGG family of networks reported the best results, so we employed them as a strong backbone to develop a new feature extractor able to learn useful representations from the slices of a spectral-domain (SD)-OCT volume. Following these findings, in this paper we adopt our previous work as a benchmark from which to derive the encoder architecture, introducing slight nuances that reinforce the learning of the intrinsic knowledge of OCT samples for glaucoma grading.
As observed in Fig. 3, we freeze the first three convolutional blocks of the VGG architecture, applying a deep fine-tuning strategy in order to leverage the knowledge acquired by the network when it was pre-trained on the ImageNet dataset. Following our previous work, we include a residual block via convolutional skip connections and an attention module by means of an identity shortcut. As a novelty, we refine the filters of the residual convolutions to optimize the glaucoma learning process by leveraging the domain-specific knowledge of the OCT samples. In particular, we introduce a tailored vertical kernel (yellow box) to force the network to focus on critical glaucoma-specific regions, which underlie contrast changes along the vertical axis of the B-scans. A concatenation aggregation function is used to combine the outputs from the residual block and the VGG backbone. Then, a $1 \times 1$ convolution is applied to reduce the filters' dimension without affecting the properties of the feature maps. This structure is introduced via skip connections to refine the embedded space throughout a convolutional autoencoder with a sigmoid function aimed at recalibrating the feature learning. Again, a concatenation operation is defined to combine the information from the attention block with the feature map of the main branch. An additional convolutional layer is included to provide an embedding volume of features of dimensions $H \times W \times K$, where $K$ was empirically set as a multiple of the number of classes to encourage a better learning convergence. Finally, a global average-pooling (GAP) layer is applied to compute a spatial squeeze of the feature volume, such that $z \in \mathbb{R}^{K}$. In this way, given an input OCT image, an embedding representation map is achieved by the backbone network, and the classification stage predicts the probability that the sample belongs to each class (see Fig. 3).
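The final spatial squeeze and classification stage can be expressed compactly. The sketch below assumes hypothetical shapes (a 7×7 embedding volume with K = 6 channels, i.e. a multiple of the three classes) and a randomly initialized classification layer; it only illustrates the GAP-plus-softmax mechanics, not the trained model.

```python
import numpy as np

def gap(feature_volume):
    """Global average pooling: squeeze an (H, W, K) volume to a K-dim embedding."""
    return feature_volume.mean(axis=(0, 1))

def softmax(z):
    """Numerically stable softmax over the logit scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical shapes: K = 6 embedding channels for the 3 severity classes.
F = np.random.rand(7, 7, 6)               # embedding volume from the backbone
z = gap(F)                                # K-dim spatial squeeze
W, b = np.random.rand(6, 3), np.zeros(3)  # classification layer (random init here)
p = softmax(z @ W + b)                    # probabilities: healthy / early / advanced
```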
III Ablation experiments
III-A Data sets
To evaluate the proposed learning methodology, we resort to two independent databases containing circumpapillary B-scans centred on the optic nerve head (ONH) of the retina. Note that the OCT samples from the source and target data sets were acquired at different hospitals using the Heidelberg Spectralis OCT system under distinct setting conditions, e.g. illumination, noise, contrast, etc. A different senior ophthalmologist (with more than 25 years of clinical experience) annotated each B-scan according to the European Guideline for Glaucoma Diagnosis. The target set was considered as an unlabeled data set during the entire learning process; we only used the target labels at test time to evaluate the models' performance. Information about the data set distribution per patient and per sample is detailed in Table I. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of each Hospital involved. Informed consent was obtained from all subjects.
| (patients / samples) | Healthy | Early | Advanced | TOTAL |
| Source | 32 / 41 | 28 / 35 | 25 / 31 | 85 / 107 |
| Target | 26 / 49 | 24 / 37 | 21 / 26 | 71 / 112 |
| TOTAL | 58 / 90 | 52 / 72 | 46 / 57 | 156 / 219 |
Data partitioning. In the first stage, we performed a patient-level data partitioning to divide the source data set into five different subsets, and a 5-fold cross-validation strategy was adopted to provide robust models and reliable results. In each of the five iterations, four of the folds were used to train the first model, whereas the remaining fold was employed as a validation subset to prevent overfitting. In addition, we randomly selected a subset of the target data set to generate the pseudo-labels from which to train the model in the second stage; the rest of the target data was used as a test set.
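A patient-level split, in which all B-scans of a patient are assigned to the same fold, can be sketched as follows; the function name and the round-robin assignment are illustrative choices, not the exact protocol used here.

```python
import random
from collections import defaultdict

def patient_level_folds(sample_ids, patient_of, k=5, seed=0):
    """Split samples into k folds so that no patient appears in two folds."""
    by_patient = defaultdict(list)
    for s in sample_ids:
        by_patient[patient_of[s]].append(s)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    folds = [[] for _ in range(k)]
    # Round-robin assignment: each patient's scans go entirely to one fold.
    for i, p in enumerate(patients):
        folds[i % k].extend(by_patient[p])
    return folds
```

Splitting at the patient level (rather than per B-scan) prevents scans of the same eye from leaking between the training and validation subsets.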
III-B Validation of the backbone architecture
In this stage, we compare the proposed model with the state-of-the-art architectures focused on OCT-based glaucoma identification. Following the experimental setup of our previous work, we contrast here the canonical VGG family of networks and the proposed architecture using both VGG16 and VGG19 as backbones. In Table II, we show the performance of the aforementioned networks during the training of the first-stage model in a multi-class scenario for glaucoma grading. To this end, different figures of merit are considered: sensitivity (SN), specificity (SP), F-score (FS), accuracy (ACC) and area under the ROC curve (AUC). Note that the results correspond to the average and standard deviation from the cross-validation stage, in terms of the micro-average per class.
|SN| 0.67 ± 0.06 | 0.75 ± 0.11 | 0.76 ± 0.11 | 0.77 ± 0.06 |
|SP| 0.84 ± 0.03 | 0.87 ± 0.05 | 0.88 ± 0.06 | 0.89 ± 0.03 |
|FS| 0.67 ± 0.06 | 0.75 ± 0.11 | 0.76 ± 0.11 | 0.77 ± 0.06 |
|ACC| 0.78 ± 0.04 | 0.83 ± 0.07 | 0.84 ± 0.08 | 0.85 ± 0.04 |
|AUC| 0.76 ± 0.06 | 0.82 ± 0.08 | 0.82 ± 0.09 | 0.83 ± 0.05 |
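The micro-averaged sensitivity and specificity reported in Table II can be obtained by pooling one-vs-rest counts across the three classes; a minimal sketch, with a hypothetical `micro_metrics` helper:

```python
import numpy as np

def micro_metrics(y_true, y_pred, classes):
    """Micro-averaged sensitivity and specificity: pool one-vs-rest counts
    (TP, FN, FP, TN) over all classes, then compute the ratios once."""
    TP = FN = FP = TN = 0
    for c in classes:
        t, p = (y_true == c), (y_pred == c)
        TP += np.sum(t & p)
        FN += np.sum(t & ~p)
        FP += np.sum(~t & p)
        TN += np.sum(~t & ~p)
    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)
    return sensitivity, specificity
```

Note that, in the multi-class one-vs-rest setting, the micro-averaged sensitivity coincides with the overall accuracy, which is consistent with the close SN and ACC trends in Table II.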
Based on the results in Table II, we selected the proposed network with the VGG19 backbone as the baseline to address the pseudo-labeling stage, since it outperformed the conventional architectures in both the VGG16 and VGG19 settings. In a non-realistic setting, we also measured the performance of the selected backbone at pseudo-labeling time to determine the usefulness of the proposed approach: the baseline trained on the source set was tested on the target set to quantify the pseudo-labeling accuracy. Besides, the qualitative class activation maps (CAMs) shown in Fig. 4 further strengthen our confidence in the proposed backbone encoder, since the heat maps provided by the attention module (Fig. 4 (b)) evidently focus on more localized and glaucoma-specific regions than conventional VGG networks (Fig. 4 (a)). Also, the findings from the CAMs are directly in line with the clinicians' opinion, since the generated heat maps keep an evident relationship between the RNFL thickness and the predicted class, according to clinical statements.
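The CAMs of Fig. 4 follow the standard GAP-based formulation: the heat map for a class is the weighted sum of the K feature maps, using the classification-layer weights of that class. A hedged sketch (shapes and min-max normalization are illustrative choices):

```python
import numpy as np

def class_activation_map(feature_volume, class_weights):
    """CAM for one class: weighted sum of the K feature maps, using the
    classification-layer weights that connect the GAP embedding to that class."""
    cam = np.tensordot(feature_volume, class_weights, axes=([2], [0]))  # (H, W)
    # Min-max normalization to [0, 1] for heat-map display.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice the resulting map is upsampled to the B-scan resolution and overlaid on the image, which is how the heat maps in Fig. 4 are visualized.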
IV Prediction results
Once the pseudo-labels were generated during the first stage, we trained the second-stage model making use of the same proposed architecture (with the VGG19 backbone). All experiments were conducted under the same conditions in order to provide a reliable comparison between the different approaches. In particular, all models were trained for 100 epochs using 16 B-scans per batch and the Adadelta optimizer to minimize the categorical cross-entropy (CCE) loss function. At this point, it is important to note that there are no public databases with which to compare against the literature. In addition, no state-of-the-art studies have addressed the grading of glaucoma severity, so replicating previous glaucoma-based methods would lead to an unreliable and non-objective comparison.
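For completeness, the categorical cross-entropy minimized during training can be written down explicitly; the sketch below is a plain NumPy formulation, not the framework implementation used in the experiments.

```python
import numpy as np

def categorical_cross_entropy(y_true_onehot, y_prob, eps=1e-12):
    """Mean categorical cross-entropy over a batch: -mean(sum(y * log(p))).
    Probabilities are clipped to avoid log(0)."""
    y_prob = np.clip(y_prob, eps, 1.0)
    return -np.mean(np.sum(y_true_onehot * np.log(y_prob), axis=1))
```

A perfect prediction yields a loss of zero, while a uniform prediction over the three severity classes yields log 3.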
In Table III, we report the results achieved by the trained model in both the first stage (baseline) and the second stage (proposed). Also, as a reference, we show the performance of the upper bound (a model trained with the target labels) and the lower bound (a model trained only with the target pseudo-labels). We can observe that the proposed learning strategy, which does not require additional labeled target data, consistently outperforms the baseline across all metrics, with improvements of 1-3%. Note that the upper-bound scenario is included to evidence how large the performance gap is between the fully- and semi-supervised approaches. In this case, the reported values reveal compelling results, as we observe small differences (2-3%) between the upper bound and the proposed strategy. Furthermore, the model trained only on the target dataset with the pseudo-labels (lower bound) results in poor performance with respect to the rest of the approaches, as expected, with differences ranging from 3% to 6%. This evidences that increasing the training set via the proposed pseudo-labeling strategy improves the prediction performance for glaucoma grading, as a result of the knowledge transfer between the source and target domains.
The proposed self-training strategy has been successfully applied to grading glaucoma severity from OCT B-scans in the presence of domain shift. Results have demonstrated that including pseudo-labels in the training loop can enhance performance over a model trained only on labeled source data, without incurring extra annotation steps. In addition, the results achieved by the proposed model surpass those of conventional architectures for glaucoma grading, leading to better predictions from both quantitative and interpretability perspectives. These findings are evident in the provided heat maps, which highlight more localized, clinically relevant glaucoma-specific areas. As future work, we intend to evaluate our learning strategy on more datasets that may contain larger domain shifts.
We gratefully acknowledge the support of the Generalitat Valenciana (GVA) for the donation of the DGX A100 used for this work, action co-financed by the European Union through the Programa Operativo del Fondo Europeo de Desarrollo Regional (FEDER) de la Comunitat Valenciana 2014-2020 (IDIFEDER/2020/030).
-  (2020) A simple framework for contrastive learning of visual representations. In ICML, pp. 1597–1607. Cited by: §II-A.
-  (2018) Correlation of retinal nerve fiber layer thickness and perimetric changes in primary open-angle glaucoma. Journal of the Egyptian Ophthalmological Society 111 (1). Cited by: §I, §III-B.
-  (1986) The concept of visual field indices. Graefe’s archive for clinical and experimental ophthalmology 224 (5), pp. 389–392. Cited by: §I.
-  (2015) Comparison of retinal thickness measurements between the topcon algorithm and a graph-based algorithm in normal and glaucoma eyes. PloS one 10 (6), pp. 1–13. Cited by: §I.
-  Glaucoma detection from raw circumpapillary OCT images using fully convolutional neural networks. In IEEE ICIP, pp. 2526–2530. Cited by: §I, §II-B.
-  (2020) Analysis of hand-crafted and automatic-learned features for glaucoma detection through raw circumpapillary OCT images. In International Conference on Intelligent Data Engineering and Automated Learning, pp. 156–164. Cited by: §I, §II-B.
-  (2020) Glaucoma detection from raw SD-OCT volumes: a novel approach focused on spatial dependencies. Computer Methods and Programs in Biomedicine, pp. 105855. Cited by: §I, §II-B, §II-B, §III-B.
-  (2018) Unsupervised representation learning by predicting image rotations. In ICLR, Cited by: §II-A.
-  (2017) Development of machine learning models for diagnosis of glaucoma. PLoS One 12 (5), pp. 1–16. Cited by: §I.
-  (2019) A feature agnostic approach for glaucoma detection in OCT volumes. PLoS One 14 (7). Cited by: §I.
-  (2020) Self-supervised relational reasoning for representation learning. arXiv preprint arXiv:2006.05849. Cited by: §II-A.
-  (2020) Clinically verified hybrid deep learning system for retinal ganglion cells aware grading of glaucomatous progression. IEEE TBME. Cited by: §I, §I.
-  (2019) Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. The Lancet Digital Health 1 (4), pp. e172–e182. Cited by: §I.
-  (2020) Improved automated detection of glaucoma by correlating fundus and sd-oct image analysis. International Journal of Imaging Systems and Technology. Cited by: §I.
-  (2021) Self-learning for weakly supervised Gleason grading of local patterns. IEEE Journal of Biomedical and Health Informatics. Cited by: §II-A.
-  (2019) Enhancing the accuracy of glaucoma detection from OCT probability maps using convolutional neural networks. In IEEE EMBC, pp. 2036–2040. Cited by: §I.
-  (2020) A review of deep learning for screening, diagnosis, and detection of glaucoma progression. Translational Vision Science & Technology 9 (2), pp. 42–42. Cited by: §I.
-  (2020) Domain adaptation model for retinopathy detection from cross-domain OCT images. In Medical Imaging with Deep Learning, pp. 795–810. Cited by: §I.
-  (2004) Primary open-angle glaucoma. The Lancet 363 (9422), pp. 1711–1720. Cited by: §I.
-  (2020) Self-training with noisy student improves ImageNet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687–10698. Cited by: §II-A.
-  (2020) Unsupervised domain adaptation for cross-device oct lesion detection via learning adaptive features. In IEEE ISBI, pp. 1570–1573. Cited by: §I.