Direct Classification of Type 2 Diabetes From Retinal Fundus Images in a Population-based Sample From The Maastricht Study

11/22/2019 ∙ by Friso G. Heslinga et al. ∙ TU Eindhoven

Type 2 Diabetes (T2D) is a chronic metabolic disorder that can lead to blindness and cardiovascular disease. Information about early-stage T2D might be present in retinal fundus images, but to what extent these images can be used in a screening setting is still unknown. In this study, deep neural networks were employed to differentiate between fundus images from individuals with and without T2D. We investigated three methods to achieve high classification performance, measured by the area under the receiver operating curve (ROC-AUC). A multi-target learning approach that simultaneously outputs retinal biomarkers as well as T2D status works best (AUC = 0.746 [±0.001]). Furthermore, the classification performance can be improved when images with high prediction uncertainty are referred to a specialist. We also show that combining the images of the left and right eye per individual, using a simple averaging approach, can further improve the classification performance (AUC = 0.758 [±0.003]). The results are promising, suggesting the feasibility of screening for T2D from retinal fundus images.







1 Introduction

Type 2 Diabetes mellitus (T2D) is a chronic metabolic disorder characterized by hyperglycemia, insulin resistance, and relative insulin deficiency. Late detection of T2D can lead to long-term damage, including blindness [4] and cardiovascular disease [6]. Although T2D diagnosis based on blood-glucose measurements works well, half of the people living with diabetes worldwide were undiagnosed in 2017 [3]. This is unfortunate, especially since major health benefits are expected from early detection and treatment [9]. Non-invasive, easily accessible screening methods could improve early detection.

Retinal fundus imaging is widely used for the detection of diabetic retinopathy (DR), one of the complications of T2D. Over the last few years, deep learning (DL) has been proposed for automated analysis of retinal fundus images [17]. Detection of DR is a relatively unambiguous task and DL models have shown excellent performance. For example, Gulshan et al. [7] obtained an area under the receiver operating curve (ROC-AUC) of 0.99 for detection of referable DR.

Despite the promising results for DR detection, retinal fundus images are not used for early T2D detection, even though the vascular geometrical structures of the retina have been related to early T2D [18]. The aim of this study is to investigate to what extent a deep learning model is able to distinguish between T2D and non-T2D cases in retinal fundus images, and to evaluate which techniques can be used to improve the classification.

1.1 Related work

To the best of our knowledge, only one study has explored the value of deep learning for direct classification of T2D [1]. In addition, Poplin et al. [13] used deep learning to extract cardiovascular risk factors from retinal fundus images, including a key diagnostic measure for T2D, haemoglobin A1c (HbA1c). While high predictive performance was reported for age and sex, and some predictive information was found for smoking history and systolic blood pressure, model predictions for HbA1c levels correlated poorly with the HbA1c labels (R2 = 0.09).

Others have focused on the extraction of handcrafted features from fundus images [16, 5]. Features such as vessel tortuosity, mean arteriolar width and venular width are considered biomarkers for T2D [18]. In previous work we showed that these biomarkers can be approximated with a deep learning approach [10]. In this study we investigated the added value of these biomarkers for the training process of a deep learning model that directly classifies fundus images.

2 Methods

The color fundus images used for this research originate from The Maastricht Study, an observational prospective population-based cohort study. The rationale and methodology have been described elsewhere [14]. Eligible for participation in The Maastricht Study were all individuals aged between 40 and 75 years and living in the southern part of the Netherlands. The study population was enriched with T2D participants for reasons of statistical power. For our study, only images from individuals with T2D or normal glucose metabolism were included (8924 images from 2336 individuals in total). Individuals with other diabetes types or prediabetes were excluded.

The data was divided into training, validation and test sets according to a 60%/20%/20% split. All images of a single individual were assigned to the same set. An overview of the sets is shown in Table 1. The sets comprise images of left and right eyes that are centered either on the fovea or on the optic disc. The images were resized to 1024 x 1024 pixels and channel-wise global contrast normalization was applied before further processing.

Training Validation Test Total
Total number of individuals 1376 464 496 2336
age (years) [std] 60.0 [8.5] 59.6 [8.1] 60.4 [8.2] 59.9 [8.2]
sex (% men) 47.2 50.2 54.1 51.2
T2D individuals 466 (33.9%) 159 (34.3%) 182 (36.7%) 807 (34.5%)
Number of images 5222 1802 1900 8924
Table 1: Data set split details.
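
The text does not detail the exact normalization used; a minimal numpy sketch of one common variant of channel-wise global contrast normalization (per-channel zero mean and unit variance; both the variant and the `eps` guard are assumptions) could look like:

```python
import numpy as np

def channelwise_gcn(image, eps=1e-8):
    """Channel-wise global contrast normalization: subtract the per-channel
    mean and divide by the per-channel standard deviation."""
    image = image.astype(np.float64)
    mean = image.mean(axis=(0, 1), keepdims=True)  # one mean per color channel
    std = image.std(axis=(0, 1), keepdims=True)    # one std per color channel
    return (image - mean) / (std + eps)

# Example: a random 1024 x 1024 RGB image standing in for a resized fundus image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(1024, 1024, 3))
out = channelwise_gcn(img)
```

After this step, each color channel of `out` has approximately zero mean and unit variance, so the network does not have to compensate for global brightness and contrast differences between images.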

All experiments were performed using DL models based on a VGG-19 architecture for which the output layer was replaced. Data augmentation was used to expand the number of training images, encompassing translation (0 - 20 pixels), rotation (0 - 360°), horizontal and vertical reflection, intensity shift (0 - 20/256), color shift (0 - 30/256) and contrast shift (0 - 0.1). Inputs for the DL models are 800 x 800 pixel centered crops of the 1024 x 1024 augmented images. Models were implemented in Keras [11] using a TensorFlow [15] backend and training was done with balanced batches of 18 on 3 GPUs. Optimization of the model weights was done using Adam. Target labels are either 0 = normal glucose metabolism or 1 = T2D. The best performing model was selected based on the validation set. Final performance of the models was evaluated by the ROC-AUC on the test set.
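
Balanced batches can be constructed in several ways; a minimal sketch of one possible index sampler (the 50/50 class ratio per batch and sampling with replacement are assumptions, not the authors' stated implementation) might look like:

```python
import numpy as np

def balanced_batches(labels, batch_size=18, rng=None):
    """Yield batches of sample indices with an equal number of T2D and
    non-T2D images, drawing with replacement so the minority class
    never runs out."""
    if rng is None:
        rng = np.random.default_rng(0)
    labels = np.asarray(labels)
    pos = np.flatnonzero(labels == 1)  # T2D images
    neg = np.flatnonzero(labels == 0)  # normal glucose metabolism
    half = batch_size // 2
    while True:
        batch = np.concatenate([rng.choice(pos, half), rng.choice(neg, half)])
        rng.shuffle(batch)
        yield batch

labels = [0] * 40 + [1] * 20            # toy image-level labels
batch = next(balanced_batches(labels))  # 9 T2D + 9 non-T2D indices
```

Feeding such index batches to a data loader keeps every gradient step class-balanced even though only about a third of the images come from T2D individuals.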

2.1 Model setup and initialization

First, we evaluated the effect of the initialization of the model's weights for the classification of T2D versus non-T2D images. We compared five different strategies: (1) random initialization; (2) ImageNet weights; (3) a model pretrained on global retinal microvascular measurements (T2D biomarkers), including vessel caliber and vessel tortuosity [10]; (4) a multi-target learning (MTL) approach with random initialization; and (5) multi-target learning with ImageNet weights. For the T2D biomarker approach (3), we first trained a model to predict four microvascular measures as described elsewhere [10] and then replaced the output layer for the classification task. For the multi-target approaches (4 and 5), we simultaneously predicted four T2D biomarkers and T2D status. The learning rate schedule and L2-regularization were optimized on the validation set, after which all experiments were repeated three times using different random seeds to obtain a measure for the standard deviation.
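
The text does not specify how the T2D and biomarker objectives are combined during multi-target training; one plausible formulation is a weighted sum of binary cross-entropy for the T2D output and mean squared error over the four biomarker outputs (the weight `w_bio` and the choice of MSE are assumptions):

```python
import numpy as np

def multi_target_loss(y_t2d, p_t2d, y_bio, p_bio, w_bio=1.0, eps=1e-7):
    """Joint MTL loss: binary cross-entropy for the T2D head plus mean
    squared error over the four biomarker heads, weighted by w_bio."""
    p = np.clip(p_t2d, eps, 1.0 - eps)  # guard against log(0)
    bce = -np.mean(y_t2d * np.log(p) + (1 - y_t2d) * np.log(1 - p))
    mse = np.mean((y_bio - p_bio) ** 2)
    return bce + w_bio * mse

# Two samples: T2D labels plus four biomarker targets each
y_t2d = np.array([1.0, 0.0])
p_t2d = np.array([0.9, 0.1])
y_bio = np.zeros((2, 4))
loss = multi_target_loss(y_t2d, p_t2d, y_bio, y_bio)  # biomarkers predicted perfectly
```

The biomarker term acts as an auxiliary supervision signal: it pushes the shared convolutional layers toward features known to be related to T2D, which may explain the gain over the single-target setups.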

2.2 Aleatoric uncertainty estimation

In a clinical setting one can decide to refer an image for further inspection if the assessor is too uncertain about the decision. Ayhan et al. [2] showed that test-time augmentation (TTA) can be used to define a measure for the aleatoric uncertainty. We applied 30-fold TTA to the model that performed best on the validation set, using the same augmentation settings as applied during training, to find the posterior distribution of the T2D predictions. We used the variance of the prediction distribution, var(Pred), as a measure for aleatoric uncertainty. Additionally, the proximity of the mean of the prediction distribution to 0.5, abs(mean(Pred)-0.5), was evaluated as a measure of uncertainty, since 0.5 is exactly halfway between the labels for healthy and T2D. We show the effect of the referral of images that the model is uncertain about by excluding these from the results and recalculating the ROC-AUC for different referral fractions.
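
The referral procedure can be sketched as follows, using the variance measure var(Pred). The hand-rolled rank-based ROC-AUC and the argsort-based referral cut-off are illustrative choices, not the authors' implementation:

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC via the rank-sum (Mann-Whitney U) formulation (no ties assumed)."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_after_referral(tta_preds, labels, fraction):
    """tta_preds: (n_augmentations, n_images) array of TTA predictions.
    Refer the `fraction` of images with the highest variance var(Pred),
    then recompute the ROC-AUC on the remaining images."""
    mean_pred = tta_preds.mean(axis=0)
    uncertainty = tta_preds.var(axis=0)
    n_keep = int(len(labels) * (1 - fraction))
    keep = np.argsort(uncertainty)[:n_keep]  # keep the lowest-uncertainty images
    return roc_auc(np.asarray(labels)[keep], mean_pred[keep])
```

Sweeping `fraction` over, say, 0.0 to 0.5 reproduces a referral curve of the kind shown in Figure 1: the more uncertain images are handed to a specialist, the higher the AUC on the remainder.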

2.3 Individual-level estimation

Multiple fundus images (1 to 12) were available per individual, providing a similar number of T2D predictions. Different strategies for the aggregation of image-level predictions to individual-level predictions were evaluated for the model that performed best on the validation set: (1) mean of the soft predictions for the left and right eye; (2) maximum of the predictions for the left and right eye; (3) logistic regression; and (4) Gaussian Naive Bayes. For the machine learning techniques (3 and 4), the following features were selected: mean, variance and number of images for each combination of left/right eye and optic-disc- or fovea-centered images, resulting in 12 features per individual. Average padding was used for missing values: if for one eye no optic-disc-centered or fovea-centered image was available, the prediction for the opposite-centered image was used. If no image was available for one eye, the prediction for the other eye was used.
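
Aggregation strategy (1), including the fallback to the other eye when one eye has no images, can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def individual_prediction(left_preds, right_preds):
    """Strategy (1): mean of the soft predictions per eye, then the mean of
    the two eyes; if one eye has no images, the other eye's value is used."""
    left = np.mean(left_preds) if len(left_preds) else None
    right = np.mean(right_preds) if len(right_preds) else None
    if left is None:
        return right
    if right is None:
        return left
    return (left + right) / 2

# An individual with two left-eye images and one right-eye image
p = individual_prediction([0.2, 0.4], [0.6])
```

Averaging per eye first, rather than over all images at once, prevents an individual with more images of one eye from being dominated by that eye.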

3 Results

An overview of the results for the different model setups and weight initializations is shown in Table 2. If a single (non-augmented) image was used for evaluation, the ROC-AUC was found to be in the range of 0.726-0.739. When 30-fold TTA was applied, the ROC-AUC slightly increased for all strategies, with the best performance found for the MTL approach with randomly initialized weights (AUC = 0.746 [0.001]).

Initialization ROC-AUC [std] 30-fold ROC-AUC [std]
random initialization 0.726 [0.006] 0.729 [0.009]
ImageNet weights 0.733 [0.003] 0.737 [0.008]
T2D biomarker weights 0.734 [0.004] 0.738 [0.006]
MTL w. random initialization 0.733 [0.010] 0.746 [0.001]
MTL w. ImageNet weights 0.739 [0.002] 0.741 [0.001]
Table 2: Model setup and initialization results.

The model that performed best on the validation set was one of the MTL models with ImageNet weights. Its performance on the test set was found to be 0.740 with 30-fold TTA. When a fraction of the images was left out for referral, based on high uncertainty of the prediction for those images, the ROC-AUC substantially increased (Figure 1). For example, when 20% of the images were excluded, the ROC-AUC increased to 0.765. Interestingly, the effect on the ROC-AUC seemed similar for both uncertainty measures.

Figure 1: ROC-AUC after rejection of images with high prediction uncertainty

The combination of multiple images to obtain an individual-level prediction resulted in a higher ROC-AUC (e.g. 0.758 [0.003] for the mean of both eyes) than for single images (0.733 [0.010]). The use of more complex classifiers did not lead to significantly better classification performance than a simple mean over the images of the left and right eye, as is shown in Table 3.

Aggregation      image-level     mean of left and right eye  max of left and right eye  logistic regression  Gaussian Naive Bayes
ROC-AUC [std]    0.733 [0.010]   0.758 [0.003]               0.755 [0.005]              0.761 [0.004]        0.757 [0.002]
Table 3: Individual-level evaluation.

4 Conclusion and Discussion

Individuals with type 2 diabetes can be distinguished quite well from individuals with normal glucose metabolism in The Maastricht Study population using retinal fundus images and deep learning techniques. Minor benefits can be expected from optimization of the model setup and weight initialization: we found that an MTL approach with randomly initialized weights works marginally better than the other models. Classification performance can be further improved by referring the most uncertain cases and by using multiple images per individual. This result is in line with the finding of Leibig et al. [12], who leveraged prediction uncertainty to successfully refer fundus images with signs of diabetic retinopathy that were difficult to grade. This step will, however, lead to the referral of more false positives, which could hamper the cost-effectiveness in a screening setting.

One possibility to use retinal fundus imaging as a screening technique is the use of smartphone fundus photography.[8] Future research is needed to evaluate the value of the addition of basic patient characteristics, such as sex, age and body mass index. The inclusion criteria should be extended to comprise early T2D cases (prediabetes), which were excluded for this research. Moreover, clinical validation on an external data set is needed to assess the value of the automated classification of fundus images in a general screening setting.

This research is financially supported by the TTW Perspectief program and Philips Research. The authors have no conflicts of interests to report. This work has not been submitted for publication anywhere else. The clinical data used in the research originates from the Maastricht Study. The Maastricht Study was supported by the European Regional Development Fund via OP-Zuid, the Province of Limburg, the Dutch Ministry of Economic Affairs (grant 31O.041), Stichting De Weijerhorst (Maastricht, The Netherlands), the Pearl String Initiative Diabetes (Amsterdam, The Netherlands), CARIM School for Cardiovascular Diseases (Maastricht, The Netherlands), Stichting Annadal (Maastricht, The Netherlands), Health Foundation Limburg (Maastricht, The Netherlands) and by unrestricted grants from Janssen-Cilag B.V. (Tilburg, The Netherlands), Novo Nordisk Farma B.V. (Alphen aan den Rijn, The Netherlands), and Sanofi-Aventis Netherlands B.V. (Gouda, The Netherlands).


  • [1] S. Abbasi-Sureshjani, B. Dashtbozorg, B.M. ter Haar Romeny, and F. Fleuret (2017) Exploratory study on direct prediction of diabetes using deep residual networks. Lecture Notes in Computational Vision and Biomechanics 27, pp. 797–802.
  • [2] M. S. Ayhan and P. Berens (2018) Test-time data augmentation for estimation of heteroscedastic aleatoric uncertainty in deep neural networks. 1st Conference on Medical Imaging with Deep Learning.
  • [3] N.H. Cho, J.E. Shaw, S. Karuranga, Y. Huang, J.D. da Rocha Fernandes, A.W. Ohlrogge, and B. Malanda (2018) IDF Diabetes Atlas: Global estimates of diabetes prevalence for 2017 and projections for 2045. Diabetes Research and Clinical Practice 138, pp. 271–281.
  • [4] N. G. Congdon, D. S. Friedman, and T. Lietman (2003) Important causes of visual impairment in the world today. JAMA 290 (15), pp. 2057–2060.
  • [5] B. Dashtbozorg, S. Abbasi-Sureshjani, J. Zhang, F. Huang, E. Bekkers, and B. ter Haar Romeny (2016) Infrastructure for retinal image analysis. Third International Workshop on Ophthalmic Medical Image Analysis (OMIA 2016), pp. 105–112.
  • [6] K. Gu, C. C. Cowie, and M. I. Harris (1999) Diabetes and decline in heart disease mortality in US adults. JAMA 281 (14), pp. 1291–1297.
  • [7] V. Gulshan, L. Peng, and M. Coram (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316 (22), pp. 2402–2410.
  • [8] L. J. Haddock, D. Y. Kim, and S. Mukai (2013) Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes. Journal of Ophthalmology, Article ID 518479.
  • [9] W. H. Herman, W. Ye, S. J. Griffin, R. K. Simmons, M. J. Davies, K. Khunti, G. E.H.M. Rutten, A. Sandbaek, T. Lauritzen, K. Borch-Johnsen, M. B. Brown, and N. J. Wareham (2015) Early detection and treatment of type 2 diabetes reduce cardiovascular morbidity and mortality: a simulation of the results of the Anglo-Danish-Dutch study of intensive treatment in people with screen-detected diabetes in primary care (ADDITION-Europe). Diabetes Care 38 (8), pp. 1449–1455.
  • [10] F. G. Heslinga, J. P.W. Pluim, B. Dashtbozorg, T. T.J.M. Berendschot, A.J.H.M. Houben, R. M.A. Henry, and M. Veta (2019) Approximation of a pipeline of unsupervised retina image analysis methods with a CNN. Proceedings of SPIE 10949, Medical Imaging 2019: Image Processing, 10949N.
  • [11] (2015) Keras. Software library.
  • [12] C. Leibig, V. Allken, M. S. Ayhan, P. Berens, and S. Wahl (2017) Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports 7, pp. 17816.
  • [13] R. Poplin, A.V. Varadarajan, K. Blumer, Y. Liu, M.V. McConnell, G.S. Corrado, L. Peng, and D.R. Webster (2018) Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering 2, pp. 158–164.
  • [14] M.T. Schram, S.J.S. Sep, C.J. van der Kallen, P.C. Dagnelie, A. Koster, N. Schaper, R.M.A. Henry, and C.D.A. Stehouwer (2014) The Maastricht Study: an extensive phenotyping study on determinants of type 2 diabetes, its complications and its comorbidities. European Journal of Epidemiology 29 (6), pp. 439–451.
  • [15] (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Software library.
  • [16] B.M. ter Haar Romeny, E.J. Bekkers, J. Zhang, S. Abbasi-Sureshjani, F. Huang, R. Duits, B. Dashtbozorg, T.T.J.M. Berendschot, I. Smit-Ockeloen, K.A.J. Eppenhof, J. Feng, J. Hannink, J. Schouten, M. Tong, H. Wul, H.W. van Triest, S. Zhu, D. Chen, W. He, L. Xu, P. Hand, and Y. Kang (2016) Brain-inspired algorithms for retinal image analysis. Machine Vision and Applications 27 (8), pp. 1117–1135.
  • [17] D. S. W. Ting, L. R. Pasquale, L. Peng, J. P. Campbell, A. Y. Lee, R. Raman, G. S. W. Tan, L. Schmetterer, P. A. Keane, and T. Y. Wong (2019) Artificial intelligence and deep learning in ophthalmology. British Journal of Ophthalmology 103 (2), pp. 167–175.
  • [18] J. Zhang, B. Dashtbozorg, F. Huang, T. T. J. M. Berendschot, and B. M. ter Haar Romeny (2018) Analysis of retinal vascular biomarkers for early detection of diabetes. VipIMAGE 2017, Lecture Notes in Computational Vision and Biomechanics 27, pp. 811–817.