A self-training framework for glaucoma grading in OCT B-scans

11/23/2021
by Gabriel García, et al.

In this paper, we present a self-training-based framework for glaucoma grading using OCT B-scans under the presence of domain shift. In particular, the proposed two-step learning methodology resorts to pseudo-labels generated during the first step to augment the training dataset on the target domain, which is then used to train the final target model. This allows transferring knowledge from the unlabeled target domain. Additionally, we propose a novel glaucoma-specific backbone which introduces residual and attention modules via skip-connections to refine the embedding features of the latent space. By doing this, our model improves on the state of the art from both a quantitative and an interpretability perspective. The reported results demonstrate that the proposed learning strategy can boost the performance of the model on the target dataset without incurring additional annotation steps, by using only labels from the source examples. Our model consistently outperforms the baseline by 1-3%, approaching the performance of a model trained directly on labeled target data.



I Introduction

Glaucoma is a degenerative optic neuropathy that causes functional damage and visual field defects by altering several structures in the optic nerve head (ONH) of the retina [19]. Currently, the diagnostic procedure for detecting glaucoma requires several time-consuming tests, besides a visual examination of medical images, whose interpretation at grading time is often subjective even for expert ophthalmologists [12]. Consequently, many state-of-the-art studies propose image-processing techniques to assist experts via machine-learning solutions.

Optical coherence tomography (OCT) is the quintessential imaging modality for evaluating glaucomatous damage, since it reveals the deterioration of the cell layers of the optic nerve, which is intimately linked to glaucoma severity (see Fig. 1). Specifically, the retinal nerve fibre layer (RNFL) has been reported in the clinical literature as the most important structure for glaucoma progression [2].

Fig. 1: (a) Eyeball showing the regions of interest. (b) Fundus image highlighting the optic nerve head (ONH). (c) Arrangement of the cell fibre layers of the retina. (d) Typical OCT B-scan evidencing the retinal fibre layers by different grey-intensity levels. (e) Cropping of the RNFL structure.

Some previous studies combined hand-driven learning algorithms with conventional machine-learning classifiers to discern between healthy and glaucomatous OCT samples [4, 9]. These studies demand a prior segmentation of the retinal layers of interest to conduct hand-crafted feature extraction from specific regions of the B-scans, e.g. the RNFL thickness. However, approaches based on prior segmentation can propagate residual errors to downstream classification tasks, according to a recent study [17]. To avoid this shortcoming, deep learning arises as an appealing alternative for deriving high-performing computer-aided diagnosis systems in the ophthalmology field.

Nevertheless, despite the rise of these models in many computer-vision and medical problems, their application to OCT images for glaucoma assessment still presents several limitations. First, most of the literature focuses on discerning between healthy and glaucoma classes via OCT B-scans [5, 6], SD-OCT volumes [7, 10, 13] or probability RNFL maps combining fundus images and OCT samples [16, 14]. Indeed, to the best of our knowledge, only [12] proposes a glaucoma-based scenario beyond the healthy-glaucoma classification by including the suspect label. Furthermore, the limited size of available data sets may hamper the generalization capabilities of the learned models, as they can easily lead to overfitting. This is particularly important in the presence of domain shift between the training (labeled) and testing (unlabeled) data distributions. A naive solution would be to collect and label data that follows a distribution similar to the testing images, which could then be employed to fine-tune the model on the new test data. However, this requires labeling large amounts of testing images, a time-consuming human process that is unrealistic in clinical practice. To alleviate the problem of domain shift, unsupervised domain adaptation has recently appeared as an interesting learning strategy. These methods typically include an adversarial learning framework [21, 18], which can lead to unstable training and high sensitivity to hyperparameters.

To fill these gaps in the literature, we propose an alternative learning strategy for glaucoma grading that differs from the current literature in several ways. First, unlike existing methods, our approach addresses the problem of glaucoma grading according to the clinical annotation criteria [3]. Second, it demonstrably improves the testing performance of a model trained in the presence of domain shift, approaching the results obtained under full supervision. Third, the simplicity of our model facilitates training convergence, contrary to complex adversarial learning-based methods. And last, we propose architectural changes that yield more useful representations of the OCT B-scans, leading to better performance and more meaningful prediction interpretations compared to conventional architectures.

II Methods

II-A Self-training strategy

Self-training, or self-supervised learning, aims at automatically generating a supervisory signal for a given task, which can then be used to enhance the representation learning of features or to label an independent dataset. The former typically involves integrating an unsupervised pretext task [8], relational reasoning [11] or contrastive learning [1]. Nevertheless, in our glaucoma grading scenario, we advocate for a sequential strategy, where the model is first trained on a labeled source dataset. Then, this model performs inference on the unlabeled dataset to generate the target pseudo-labels, which are later used to train the model, mimicking full supervision on the test data (see Fig. 2). This learning strategy has demonstrated a high classification performance on different imaging modalities, such as histopathology [15] or natural images from ImageNet [20]. Formally, we denote $\mathcal{D}_s = \{(x_n, y_n)\}_{n=1}^{N_s}$ the independent training set, where $x_n$ refers to the $n$-th sample of the database with its corresponding ground truth $y_n$, being $N_s$ the number of training image pairs. Furthermore, we use $\mathcal{T}$ to represent a given task, where $\mathcal{T}$ is usually learnt by a neural network. In the current work, we leverage the scenario where the source and target domains are related ($\mathcal{D}_s \approx \mathcal{D}_t$) since both correspond to OCT samples, but acquired with different hardware settings. Furthermore, tasks across models are the same ($\mathcal{T}_s = \mathcal{T}_t$), as we focus on multi-class B-scan classification to discern between healthy, early and advanced glaucoma classes.

Thus, let $x \in \mathbb{R}^{H \times W}$ be a raw B-scan of dimensions $H \times W$. A first model is defined by training a base encoder network $f_{\theta}(\cdot)$ on the labeled source dataset $\mathcal{D}_s$ composed of $N_s$ samples, where each training instance is denoted by $(x_n^s, y_n^s)$, as observed in Fig. 2. Then, the embedding representations $z = f_{\theta}(x)$ are fed into a classification layer $g_{\phi}(\cdot)$ to extract the logit scores, which are transformed via a softmax function to obtain class probabilities $\hat{y}$ (see Algorithm 1). The coefficients $\theta$ and $\phi$ of the first model are updated during the back-propagation step at every epoch. Once the training of this model is finished, it is used to predict the class of each sample from the unlabeled target dataset $\mathcal{D}_t = \{x_m^t\}_{m=1}^{N_t}$, with $N_t$ samples, leading to the corresponding pseudo-labels $\tilde{y}_m^t$ (Algorithm 2). Last, the pseudo-labels are used to augment the training dataset, which results in $\mathcal{D}_{s \cup t} = \mathcal{D}_s \cup \{(x_m^t, \tilde{y}_m^t)\}_{m=1}^{N_t}$, where $N = N_s + N_t$. In this way, the model in the last step is trained on the augmented pseudo-labeled dataset, following Algorithm 1, under the hypothesis that substantial improvements could be reported at test time for two reasons: i) the increase of labeled training samples and ii) the knowledge distilled from the unlabeled target dataset.

Fig. 2: Illustration of the proposed learning strategy broken down by stages.
Data: Training set $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$.
Results: Trained coefficients $\theta$, $\phi$;
Algorithm:
$\theta, \phi \leftarrow$ random;
for $e \leftarrow 1$ to $E$ do
        for $n \leftarrow 1$ to $N$ do
               $z_n \leftarrow f_{\theta}(x_n)$;
               $\hat{y}_n \leftarrow \mathrm{softmax}(g_{\phi}(z_n))$;
               $\mathcal{L}_n \leftarrow \mathrm{CCE}(y_n, \hat{y}_n)$;
        $\mathcal{L} \leftarrow \frac{1}{N} \sum_{n=1}^{N} \mathcal{L}_n$;
        Update $\theta$, $\phi$ using $\nabla \mathcal{L}$;
Algorithm 1 Model Training
Data: Target set $\mathcal{D}_t = \{x_m^t\}_{m=1}^{N_t}$.
Results: Target pseudo-labels $\tilde{y}^t$;
Algorithm:
$\theta, \phi \leftarrow$ frozen;
for $m \leftarrow 1$ to $N_t$ do
        $z_m \leftarrow f_{\theta}(x_m^t)$;
        $\hat{y}_m \leftarrow \mathrm{softmax}(g_{\phi}(z_m))$;
        $\tilde{y}_m^t \leftarrow \operatorname{argmax}(\hat{y}_m)$;
Algorithm 2 Pseudo-labeling
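
To make the two stages concrete, the following Python sketch mirrors Algorithms 1 and 2 in PyTorch. It is a minimal illustration under stated assumptions, not the authors' released code: the Adadelta optimizer and categorical cross-entropy follow Sec. IV, while the function names and loader formats are our own.

import torch
import torch.nn.functional as F

def train(model, loader, epochs=100):
    # Algorithm 1: supervised training on (pseudo-)labeled pairs.
    # Adadelta and cross-entropy follow Sec. IV; the rest is illustrative.
    opt = torch.optim.Adadelta(model.parameters())
    model.train()
    for _ in range(epochs):
        for x, y in loader:                          # mini-batches of B-scans
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()  # softmax + CCE on logits
            opt.step()
    return model

@torch.no_grad()
def pseudo_label(model, loader):
    # Algorithm 2: frozen-weight inference on the unlabeled target set,
    # keeping the arg-max class as a hard pseudo-label.
    model.eval()
    return torch.cat([model(x).argmax(dim=1) for (x,) in loader])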

II-B Proposed architecture

In addition to the presented learning strategy, we propose several architectural changes that improve over existing architectures both quantitatively and from an interpretability perspective. In our recent works focused on glaucoma detection from raw OCT samples [7, 5, 6], we conducted an in-depth experimental analysis of different deep-learning architectures, both trained from scratch and fine-tuned from the most popular models in the literature. Specifically, the VGG family of networks reported the best results in [5]. Hence, in [7], we employed them as a strong backbone to develop a new feature extractor able to learn useful representations from the slices of a spectral-domain (SD)-OCT volume. Following these findings, in this paper we adopt our previous work [7] as a benchmark from which to derive the encoder architecture, introducing slight nuances that reinforce the learning of the intrinsic knowledge of OCT samples for glaucoma grading.

As observed in Fig. 3, we freeze the first three convolutional blocks of the VGG architecture and apply a deep fine-tuning strategy to the remaining ones, in order to leverage the knowledge acquired by the network when it was pre-trained on the ImageNet dataset. Following [7], we include a residual block via convolutional skip-connections and an attention module by means of an identity shortcut to give rise to the architecture. As a novelty, we refine the filters of the residual convolutions to optimize the glaucoma learning process by leveraging the domain-specific knowledge of the OCT samples. In particular, we introduce a tailored kernel size (yellow box in Fig. 3) to enforce the network to focus on critical glaucoma-specific regions, which underlie contrast changes along the vertical axis of the B-scans. A concatenation aggregation function is used to combine the outputs from the residual block and the VGG architecture. Then, a $1 \times 1$ convolution is applied to reduce the filters' dimension without affecting the properties of the feature maps. The attention structure is introduced via skip-connections to refine the embedded space through a convolutional autoencoder with a sigmoid function aimed at recalibrating the feature learning. Again, a concatenation operation is defined to combine the information from the attention block with the feature map of the main branch. An additional convolutional layer is included to provide an embedding volume of $K$ feature maps, where $K$ was empirically set as a multiple of the number of classes to encourage a better learning convergence. Finally, a global average-pooling (GAP) layer is applied to compute a spatial squeeze of the feature volume, yielding an embedding vector $z \in \mathbb{R}^{K}$. In this way, given an input OCT image $x$, an embedding representation $z = f_{\theta}(x)$ is achieved by the backbone network. Regarding the classification stage $g_{\phi}$, a single output layer with three neurons, corresponding to the number of classes, is implemented with a softmax activation function to determine the probability that $x$ belongs to each class (see Fig. 3).
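
A rough PyTorch sketch of this backbone is given below for orientation. The skip-connection layout follows the description above, but the filter counts, the tall (7, 1) residual kernel, the autoencoder width and $K = 12$ are assumptions rather than the paper's exact values.

import torch
import torch.nn as nn
from torchvision.models import vgg19

class RAGNetV2Sketch(nn.Module):
    # Sketch only: VGG19 trunk with residual and attention skip-connections.
    def __init__(self, n_classes=3, k=12):  # k: a multiple of n_classes
        super().__init__()
        feats = vgg19(weights="IMAGENET1K_V1").features
        self.trunk = feats[:28]                 # VGG19 conv blocks 1-4
        for p in feats[:19].parameters():       # freeze the first three blocks
            p.requires_grad = False
        # residual branch with a tall kernel for vertical contrast changes
        self.residual = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=(7, 1), padding=(3, 0)),
            nn.ReLU(inplace=True),
        )
        self.reduce = nn.Conv2d(1024, 512, kernel_size=1)  # 1x1 conv after concat
        # attention branch: small conv autoencoder with sigmoid recalibration
        self.attention = nn.Sequential(
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, 3, padding=1), nn.Sigmoid(),
        )
        self.embed = nn.Conv2d(1024, k, kernel_size=1)     # K feature maps
        self.gap = nn.AdaptiveAvgPool2d(1)                 # spatial squeeze
        self.classifier = nn.Linear(k, n_classes)          # softmax in the loss

    def forward(self, x):
        h = self.trunk(x)
        h = self.reduce(torch.cat([h, self.residual(h)], dim=1))
        h = torch.cat([h, h * self.attention(h)], dim=1)   # recalibrated concat
        z = self.gap(self.embed(h)).flatten(1)             # z in R^K
        return self.classifier(z)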

Fig. 3: Backbone architecture used to train the model during the first and second stages. The blue region corresponds to the encoder network structure, whereas the yellow frame denotes the classification layer to discern between healthy, early and advanced glaucomatous cases.

III Ablation experiments

III-A Data sets

To evaluate the proposed learning methodology, we resort to two independent databases containing circumpapillary B-scans centred on the optic nerve head (ONH) of the retina. Note that the OCT samples from the source ($\mathcal{D}_s$) and target ($\mathcal{D}_t$) data sets were acquired at different hospitals using the Heidelberg Spectralis OCT system under distinct acquisition conditions, e.g. illumination, noise, contrast, etc. A different senior ophthalmologist (with more than 25 years of clinical experience) annotated each B-scan according to the European Guideline for Glaucoma Diagnosis. We considered $\mathcal{D}_t$ as an unlabeled data set during the entire learning process to conduct the proposed methodology, and we only used the target labels at test time to evaluate the models' performance. Information about the data sets' distribution per patient and per sample is detailed in Table I. Note that the study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of each participating hospital. Informed consent was obtained from all subjects involved in the study.

          Healthy         Early           Advanced        TOTAL
          (pat./samp.)    (pat./samp.)    (pat./samp.)    (pat./samp.)
Source    32 / 41         28 / 35         25 / 31         85 / 107
Target    26 / 49         24 / 37         21 / 26         71 / 112
TOTAL     58 / 90         52 / 72         46 / 57         156 / 219
TABLE I: Patients (pat.) and samples (samp.) per data set grouped by categories, according to the experts’ annotation

Data partitioning. In the first stage, we performed a patient-level data partitioning to divide the source data set into five different subsets. A 5-fold cross-validation strategy was adopted to provide robust models and reliable results. In each of the five iterations, 80% of the data were used to train the first model, whereas the remaining 20% of the samples were employed as a validation subset to prevent overfitting. In addition, we randomly selected a subset of the target data set $\mathcal{D}_t$ to generate the pseudo-labels from which the model is trained at the second stage. The rest of the target data was used as a test set.
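
The patient-level split can be reproduced with scikit-learn's GroupKFold, which guarantees that no patient contributes B-scans to both partitions of a fold. A minimal sketch with dummy arrays (the shapes and label encoding are assumptions, not the actual data):

import numpy as np
from sklearn.model_selection import GroupKFold

# Every B-scan of a given patient falls in the same fold, so no subject
# leaks between training and validation. Arrays stand in for the 107
# source scans of the 85 source patients.
X = np.zeros((107, 1))                     # placeholder features
y = np.random.randint(0, 3, 107)           # 0 healthy, 1 early, 2 advanced
patients = np.random.randint(0, 85, 107)   # per-scan patient identifiers

for train_idx, val_idx in GroupKFold(n_splits=5).split(X, y, groups=patients):
    # ~80% of patients train the first model, ~20% validate it
    assert set(patients[train_idx]).isdisjoint(patients[val_idx])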

III-B Validation of the backbone architecture

In this stage, we compare the proposed model with state-of-the-art architectures for OCT-based glaucoma identification. Following the experimental setup carried out in [7], we contrast the canonical VGG family of networks with the proposed architecture using both VGG16 and VGG19 as backbones. In Table II, we show the performance of the aforementioned networks during the training of the model at the first stage in a multi-class scenario for glaucoma grading. To this end, different figures of merit are considered, i.e. sensitivity (SN), specificity (SP), F-score (FS), accuracy (ACC) and area under the ROC curve (AUC). Note that results correspond to the average and standard deviation across the cross-validation folds, in terms of the micro-average per class.

        VGG16          VGG19          RAGNet_v2      RAGNet_v2
                                      (with VGG16)   (with VGG19)
SN      0.67 ± 0.06    0.75 ± 0.11    0.76 ± 0.11    0.77 ± 0.06
SP      0.84 ± 0.03    0.87 ± 0.05    0.88 ± 0.06    0.89 ± 0.03
FS      0.67 ± 0.06    0.75 ± 0.11    0.76 ± 0.11    0.77 ± 0.06
ACC     0.78 ± 0.04    0.83 ± 0.07    0.84 ± 0.08    0.85 ± 0.04
AUC     0.76 ± 0.06    0.82 ± 0.08    0.82 ± 0.09    0.83 ± 0.05
TABLE II: Micro-average cross-validation results achieved during the training of the first model using the source database
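
As a reference for how the micro-average figures of merit in Tables II and III can be computed, the sketch below pools one-vs-rest confusion counts over the three classes; under this convention micro-averaged SN and FS coincide, as the tables reflect. The function name and argument layout are illustrative, not the authors' code.

import numpy as np
from sklearn.metrics import roc_auc_score

def micro_metrics(y_true, y_prob, n_classes=3):
    # y_true: integer labels (N,); y_prob: class probabilities (N, n_classes).
    y_pred = y_prob.argmax(axis=1)
    tp = fp = fn = tn = 0
    for c in range(n_classes):              # pool one-vs-rest counts
        tp += np.sum((y_pred == c) & (y_true == c))
        fp += np.sum((y_pred == c) & (y_true != c))
        fn += np.sum((y_pred != c) & (y_true == c))
        tn += np.sum((y_pred != c) & (y_true != c))
    sn = tp / (tp + fn)                     # micro sensitivity (recall)
    sp = tn / (tn + fp)                     # micro specificity
    fs = 2 * tp / (2 * tp + fp + fn)        # micro F-score (equals SN here)
    acc = (tp + tn) / (tp + fp + fn + tn)   # pooled accuracy
    auc = roc_auc_score(np.eye(n_classes)[y_true], y_prob, average="micro")
    return sn, sp, fs, acc, auc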

Based on the results in Table II, we selected the RAGNet_v2 (with VGG19) network as the baseline to address the pseudo-labeling stage, since it outperformed the conventional architectures for both the VGG16 and VGG19 approaches. In a non-realistic setting, we evaluated the performance of the selected backbone at pseudo-labeling time to determine the usefulness of the proposed approach; to this end, the baseline trained on $\mathcal{D}_s$ was tested on $\mathcal{D}_t$ to quantify the accuracy of the generated pseudo-labels. Besides, the qualitative class activation maps (CAMs) shown in Fig. 4 further strengthen our confidence in the proposed backbone encoder, since the heat maps provided by the attention module (Fig. 4 (b)) evidently focus on more localized and glaucoma-specific regions than those of conventional VGG networks (Fig. 4 (a)). Also, the findings from the CAMs are directly in line with the clinicians' opinion, since the generated heat maps keep an evident relationship between the RNFL thickness and the predicted class, according to the clinical statements [2].

Fig. 4: Class activation maps (CAMs). (a) Heat maps extracted from the VGG19 architecture. (b) Heat maps achieved from the RAGNet_v2 (with VGG19 as a backbone) at the output from the attention module.
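
For reference, a CAM in the sense of Zhou et al. (2016) can be recovered from any GAP-based classifier, such as the backbone sketched above, by projecting the output-layer weights of one class onto the pre-GAP feature maps. A minimal sketch, with tensor shapes as assumptions:

import torch
import torch.nn.functional as F

@torch.no_grad()
def class_activation_map(feature_maps, fc_weight, class_idx, out_size):
    # feature_maps: (1, K, h, w) activations before global average pooling
    # fc_weight:    (n_classes, K) weight matrix of the output layer
    w = fc_weight[class_idx].view(1, -1, 1, 1)           # (1, K, 1, 1)
    cam = (feature_maps * w).sum(dim=1, keepdim=True)    # weighted sum over K maps
    cam = F.interpolate(cam, size=out_size,              # upsample to B-scan size
                        mode="bilinear", align_corners=False)
    cam -= cam.min()
    return (cam / cam.max().clamp(min=1e-8)).squeeze()   # normalize to [0, 1]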

IV Prediction results

Once the pseudo-labels were generated during the first stage, we trained the model of the second stage using the same RAGNet_v2 (with VGG19) architecture. All the experiments were conducted under the same conditions in order to provide a reliable comparison between the different approaches. In particular, all models were trained for 100 epochs using 16 B-scans per batch and the Adadelta optimizer to minimize the categorical cross-entropy (CCE) loss function. At this point, it is important to note that there are no public databases on which to compare against the literature. In addition, no state-of-the-art studies have addressed the grading of glaucoma severity, so replicating previous glaucoma-based methods would lead to an unreliable and non-objective comparison.
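
Putting the pieces together, the following sketch wires the two stages with the reported hyperparameters, assuming the train() and pseudo_label() helpers from Sec. II-A and the RAGNetV2Sketch backbone from Sec. II-B are in scope; random tensors stand in for the actual B-scans.

import torch
from torch.utils.data import DataLoader, TensorDataset

xs, ys = torch.randn(32, 3, 224, 224), torch.randint(0, 3, (32,))  # "source"
xt = torch.randn(16, 3, 224, 224)                                  # "target"

src_loader = DataLoader(TensorDataset(xs, ys), batch_size=16, shuffle=True)
tgt_loader = DataLoader(TensorDataset(xt), batch_size=16)

baseline = train(RAGNetV2Sketch(), src_loader, epochs=100)   # first stage
y_pseudo = pseudo_label(baseline, tgt_loader)                # Algorithm 2
aug_loader = DataLoader(                                     # D_s plus pseudo-labeled D_t
    TensorDataset(torch.cat([xs, xt]), torch.cat([ys, y_pseudo])),
    batch_size=16, shuffle=True)
proposed = train(RAGNetV2Sketch(), aug_loader, epochs=100)   # second stage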

             SN       SP       FS       ACC      AUC
Baseline     0.7059   0.8529   0.7059   0.8039   0.7613
Proposed     0.7353   0.8676   0.7353   0.8235   0.7853
Lower bound  0.6765   0.8382   0.6765   0.7843   0.7463
Upper bound  0.7647   0.8824   0.7647   0.8431   0.8050
TABLE III: Test results achieved during the prediction of the target set

In Table III, we report the results achieved by the trained model in both the first stage (baseline) and the second stage (proposed). Also, as reference points, we show the performance of the upper bound (a model trained with the true target labels) and the lower bound (a model trained only with the target pseudo-labels). We can observe that the proposed learning strategy, which does not require additional labeled target data, consistently outperforms the baseline across all the metrics, with improvements of 1-3%. Note that the upper-bound scenario is considered to evidence how large the performance gap is between the fully supervised and semi-supervised approaches. In this case, the reported values reveal compelling results, as we observe small differences (2-3%) between the upper bound and the proposed strategy. Furthermore, the model trained only on the target dataset with pseudo-labels (lower bound) results in poor performance with respect to the rest of the approaches, as expected, with differences ranging from 3% to 6%. This evidences that augmenting the training set via the proposed pseudo-labeling strategy improves the prediction performance for glaucoma grading, as a result of the knowledge transferred between the source and target domains.

V Conclusion

The proposed self-training learning strategy has been successfully applied to grade glaucoma severity from OCT B-scans in the presence of domain shift. Results have demonstrated that including pseudo-labels in the training loop can enhance the performance over a model trained only on labeled source data, without incurring extra annotation steps. In addition, the results achieved by the proposed model surpass those reached by conventional architectures for glaucoma grading, leading to better predictions from both quantitative and interpretability perspectives. These findings are evident in the provided heat maps, which highlight more localized, clinically relevant glaucoma-specific areas. As future work, we intend to evaluate our learning strategy across more datasets that might contain larger domain shifts.

Acknowledgment

We gratefully acknowledge the support of the Generalitat Valenciana (GVA) for the donation of the DGX A100 used for this work, action co-financed by the European Union through the Programa Operativo del Fondo Europeo de Desarrollo Regional (FEDER) de la Comunitat Valenciana 2014-2020 (IDIFEDER/2020/030).

References

  • [1] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. In ICML, pp. 1597–1607. Cited by: §II-A.
  • [2] A. El-Naby et al. (2018) Correlation of retinal nerve fiber layer thickness and perimetric changes in primary open-angle glaucoma. Journal of the Egyptian Ophthalmological Society 111 (1). Cited by: §I, §III-B.
  • [3] J. Flammer (1986) The concept of visual field indices. Graefe’s archive for clinical and experimental ophthalmology 224 (5), pp. 389–392. Cited by: §I.
  • [4] E. Gao, B. Chen, J. Yang, F. Shi, W. Zhu, D. Xiang, et al. (2015) Comparison of retinal thickness measurements between the Topcon algorithm and a graph-based algorithm in normal and glaucoma eyes. PLoS One 10 (6), pp. 1–13. Cited by: §I.
  • [5] G. García, R. d. Amor, A. Colomer, and V. Naranjo (2020) Glaucoma detection from raw circumpapillary OCT images using fully convolutional neural networks. In IEEE ICIP, pp. 2526–2530. Cited by: §I, §II-B.
  • [6] G. García, A. Colomer, and V. Naranjo (2020) Analysis of hand-crafted and automatic-learned features for glaucoma detection through raw circumpapillary OCT images. In International Conference on Intelligent Data Engineering and Automated Learning, pp. 156–164. Cited by: §I, §II-B.
  • [7] G. García, A. Colomer, and V. Naranjo (2020) Glaucoma detection from raw SD-OCT volumes: a novel approach focused on spatial dependencies. Computer Methods and Programs in Biomedicine, pp. 105855. Cited by: §I, §II-B, §III-B.
  • [8] S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In ICLR. Cited by: §II-A.
  • [9] S. J. Kim, K. J. Cho, et al. (2017) Development of machine learning models for diagnosis of glaucoma. PLoS One 12 (5), pp. 1–16. Cited by: §I.
  • [10] S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, and R. Garnavi (2019) A feature agnostic approach for glaucoma detection in OCT volumes. PLoS One 14 (7). Cited by: §I.
  • [11] M. Patacchiola and A. Storkey (2020) Self-supervised relational reasoning for representation learning. arXiv preprint arXiv:2006.05849. Cited by: §II-A.
  • [12] H. Raja, T. Hassan, M. U. Akram, and N. Werghi (2020) Clinically verified hybrid deep learning system for retinal ganglion cells aware grading of glaucomatous progression. IEEE TBME. Cited by: §I, §I.
  • [13] A. R. Ran et al. (2019) Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. The Lancet Digital Health 1 (4), pp. e172–e182. Cited by: §I.
  • [14] T. Shehryar, M. U. Akram, S. Khalid, S. Nasreen, A. Tariq, A. Perwaiz, and A. Shaukat (2020) Improved automated detection of glaucoma by correlating fundus and sd-oct image analysis. International Journal of Imaging Systems and Technology. Cited by: §I.
  • [15] J. Silva-Rodriguez, A. Colomer, J. Dolz, and V. Naranjo (2021) Self-learning for weakly supervised gleason grading of local patterns. IEEE journal of biomedical and health informatics. Cited by: §II-A.
  • [16] K. A. Thakoor, X. Li, E. Tsamis, P. Sajda, and D. C. Hood (2019) Enhancing the accuracy of glaucoma detection from OCT probability maps using convolutional neural networks. In IEEE EMBC, pp. 2036–2040. Cited by: §I.
  • [17] A. C. Thompson, A. A. Jammal, and F. A. Medeiros (2020) A review of deep learning for screening, diagnosis, and detection of glaucoma progression. Translational Vision Science & Technology 9 (2), pp. 42–42. Cited by: §I.
  • [18] J. Wang et al. (2020) Domain adaptation model for retinopathy detection from cross-domain oct images. In Medical Imaging with Deep Learning, pp. 795–810. Cited by: §I.
  • [19] R. N. Weinreb and P. T. Khaw (2004) Primary open-angle glaucoma. The Lancet 363 (9422), pp. 1711–1720. Cited by: §I.
  • [20] Q. Xie, M. Luong, E. Hovy, and Q. V. Le (2020) Self-training with noisy student improves ImageNet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687–10698. Cited by: §II-A.
  • [21] S. Yang, X. Zhou, J. Wang, G. Xie, C. Lv, P. Gao, and B. Lv (2020) Unsupervised domain adaptation for cross-device OCT lesion detection via learning adaptive features. In IEEE ISBI, pp. 1570–1573. Cited by: §I.