A Two-phase Decision Support Framework for the Automatic Screening of Digital Fundus Images

11/01/2014 ∙ by Balint Antal, et al. ∙ University of Debrecen (UD)

In this paper, we give a brief review of the current status of automated detection systems for the screening of diabetic retinopathy. We further detail an enhanced detection procedure that consists of two steps. First, a pre-screening algorithm classifies the input digital fundus images based on the severity of abnormalities. If an image is found to be seriously abnormal, it will not be analysed further by robust lesion detector algorithms. As a further improvement, we introduce a novel feature extraction approach based on clinical observations. The second step of the proposed method detects regions of interest with possible lesions on the images that previously passed the pre-screening step. These regions serve as input to the specific lesion detectors for detailed analysis. This procedure can increase the computational performance of a screening system. Experimental results show that both steps of the proposed approach are capable of efficiently excluding a large amount of data from further processing, thus decreasing the computational burden of the automatic screening system.







1 Introduction

Retinal fundus photography is widely used in the diagnosis and in the regular follow-up of the treatment of various eye diseases, such as diabetic retinopathy (DR), age-related macular degeneration (AMD) and glaucoma. DR is one of the most frequent causes of visual impairment in developed countries and is the leading cause of new cases of blindness among those of working age vidr . In 1997, an estimated 124 million people had diabetes worldwide, a figure that was expected to nearly double by 2010. At any point in time, approximately 40% of persons with diabetes have diabetic retinopathy, of which an estimated 5% have the sight-threatening form of this disease. Altogether, nearly 75 people go blind from DR every day, even though treatment is available causes .

Timely detection and organized, well-practised screening programmes are the mainstay of identifying patients at risk of developing symptoms of DR. Several countries have elaborated nation- or region-wide programmes to fulfil this goal. In the United States vanderbilt eyepacs , in the United Kingdom nhs and in The Netherlands eyecheck , digital photography acquisition and reading centre sites are already available in daily routine. Colour digital retinal images are captured at serviceable sites, even outside of health care settings, and the data are then transferred to central locations, where they are double-read and evaluated by specially trained graders. Further health provision for the patient depends upon the result of the grading.

Automated grading of DR based on the detection of the characteristic lesions could safely reduce the burden of manual grading in screening programmes. Promising results, with higher sensitivity than manual graders for patients having referable diabetic retinopathy, have already been reported in Philip . Although the overall specificity of automated grading was lower than that of manual analysis, remarkable financial savings could be achieved by reducing the grading workload. Screening programmes can be organized to reduce the risk of the disease within the population. Although automated screening systems force a compromise between sensitivity and specificity, no alternative high-performance approach is currently available for mass screening. For a fair comparison, automated screening systems should therefore be measured against human experts routinely involved in grading. In this sense, manual grading is also imperfect: graders required to be highly specific missed more than 5% of the cases of referable diabetic retinopathy in one study Philip .

With automated decision support of the grading process, wider access could be provided to the service, and improvements could be realized at both the personal and the community level of DR care. Graders in DR reading centres are taught to recognize patterns that represent lesions such as microaneurysms, dot and blot haemorrhages, lipid exudates and cotton wool spots. With the implementation of computer-aided pattern recognition algorithms, the detection of the above-mentioned lesions is theoretically possible. In the past, much effort has been made by different research groups to develop mature algorithms with sensitivity and specificity close to that of humans Winder . The performance of these algorithms is approaching its limit, though their use is not yet recommended for clinical practice system . However, developments in the field of high-performance computing can advance the spread of such methods Chorley Bader .

The process of analysing fundus images may be performed as a series of steps, and for each step a number of different approaches are available. It is very hard to determine which algorithms are best to employ at each step, since there is no gold standard or consensus even in the detection of the regions of interest. In a literature review by Winder et al. Winder , the following pattern recognition steps were identified as forming the detection pipeline: preprocessing, localization and segmentation of the optic disc, segmentation of the retinal vasculature, localization of the macula and fovea, and detection of retinopathy. Our interest is to advance the automated decision support framework by inserting a preliminary pre-filtering phase before the detailed analysis. Such algorithms usually aim to detect low-quality images; for example, such an approach is proposed in niemeijer . In this paper, we propose two new steps which can be considered in this phase: pre-screening and pre-filtering.

Figure 1: Samples from the image set (both taken from the DRIVE database drive ); (a) abnormal fundus, (b) not obviously abnormal fundus, whose proper grading needs further analysis.

During pre-screening, we classify each image either as severely diseased (highly abnormal) or as to be forwarded for further processing. The aim of this step is twofold. On the one hand, we minimize the risk that an abnormal image passes the screening without a warning, since it is immediately spotted by the automatic system before detailed analysis. On the other hand, we save computational time, since only the not obviously abnormal fundus images are analysed in detail. Figure 1 gives an impression of these two classes. In the analysis of fundus images, as in other fields, machine learning algorithms are often applied for classification based on feature vectors consisting of the intensity values of the image; see e.g. hiv for HIV or nyul for glaucoma detection. Thus, we consider adapting these approaches to DR screening as well. We also improve these techniques with a feature extraction step based on the inhomogeneity characteristics of the diseased retina, supported by clinical observations. Our algorithms are trained and tested on images from publicly available databases, as well as on our own.

As a second (pre-filtering) step, we extract those candidate subregions of fundus images that are expected to contain specific lesions. This step is economically favourable, since the operation of the lesion detection algorithms is computationally the most time-consuming. The most common lesion that algorithms detect on the fundus is the microaneurysm (see Figure 2a), which is an early sign of diabetic retinopathy. A microaneurysm appears as a small red spot on the retina. Microaneurysms may evolve into haemorrhages, which are also red spots, but differ in size and shape (see Figure 2b). The detection of DR-related bright lesions (exudates) also has a rich literature Winder exudate . Exudates appear at an advanced stage of diabetic retinopathy (see Figures 2c and 2d). Disorders of the retinal pigment epithelium (RPE) are usually caused by age-related macular degeneration. The sign of RPE atrophy is the inhomogeneous surface of the retina, as shown in Figures 2e and 2f.

Figure 2: Lesions of the retina; (a) microaneurysms, (b) haemorrhages, (c-d) bright lesions (exudates), (e-f) retinal pigment epithelium.

Our approach aims to find candidate regions containing lesions, based on the fact that, aside from its anatomical parts, the intensity values of the normal retinal surface exhibit only small saliences (see Figure 3). If there is a connected set of salient values with a given cardinality, we can assume that there is a lesion within the examined region. The goal is to preserve only those regions which possibly contain lesions.

Figure 3: Surface of a normal fundus.

The rest of the paper is organized as follows. In section 2, we present our pre-screening approach for classifying the images as highly abnormal or not. Section 3 describes how candidate regions are pre-filtered on the fundus images that passed the pre-screening. The datasets and the corresponding experimental results are given in section 4. Finally, some conclusions are drawn in section 5.

2 Pre-Screening – Classifying the Input Image

As the first step of our approach, we check whether the fundus represented in the image has such severe abnormalities (e.g. large haemorrhages, retinal detachment) that the patient should be sent directly to a clinical expert. In heavily loaded automatic systems, skipping these images enhances performance, since no detailed analysis needs to take place for them. Pre-screening is realized by the application of machine learning algorithms. Next, we summarize the components of pre-screening step by step.

2.1 Pre-processing

As a pre-processing step, we convert the input RGB images to grayscale, as proposed e.g. in exudate , to obtain a suitable representation of possible disorders. Then, we apply adaptive histogram equalization (AHE) as an intensity normalization step, as proposed in youssif ; an example output is shown in Figure 4. Finally, we rescale the images to a size of 90 × 90 pixels.

Figure 4: Contrast enhancement of fundus images by adaptive histogram equalization. Original image is taken from the DRIVE database drive . (a) grayscale image, (b) image after AHE.
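As a rough sketch, this pre-processing chain can be written with NumPy alone. Note that the paper applies adaptive histogram equalization, whereas for brevity this sketch uses the global variant; the grayscale weights and the nearest-neighbour rescaling are likewise assumptions, not the authors' exact choices:

```python
import numpy as np

def to_grayscale(rgb):
    # Luminosity-weighted grayscale conversion (assumed weights; the paper
    # only states that the RGB images are converted to grayscale).
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def equalize_histogram(gray):
    # Global histogram equalization; the paper applies the adaptive
    # variant (AHE), which performs this mapping locally per tile.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf.astype(np.uint8)[gray]

def resize_nearest(img, size=(90, 90)):
    # Nearest-neighbour rescaling to the 90 x 90 input size used for
    # pre-screening.
    rows = np.arange(size[0]) * img.shape[0] // size[0]
    cols = np.arange(size[1]) * img.shape[1] // size[1]
    return img[rows[:, None], cols]
```

A fundus image would then be normalized as `resize_nearest(equalize_histogram(to_grayscale(rgb)))`.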

2.2 Feature vectors and classifiers

We also take advantage of the clinical observation that fundi with severe diabetic retinopathy often show inhomogeneity caused by retinal pigment epithelium (RPE) atrophy, which is the wasting of the pigmented cell layer of the retina retina . Composing feature vectors based on this observation leads to more accurate results in both classification and computational performance, as presented in the results section. To extract these features, we use the following approaches:

  • Inhomogeneity: Let the image be split into disjoint subimages of a given size. Then, within each subimage, we compute the sum of those intensity differences from the first pixel of the subimage that are larger than a given threshold. If this sum is larger than zero, the feature is set to 1, otherwise to 0. See Algorithm 1 for the precise formulation. The subimage size and the threshold are determined experimentally and are constant across the image.

  • Standard deviation:

    For each subimage, we calculate the standard deviation of its intensities. This feature is included for reference purposes.

  • Combined: We calculate both the inhomogeneity and the standard deviation feature and combine them.

  Let i = 1.
  for each subimage S in the image do
     Let d = 0.
     for each pixel p in S do
        Calculate the difference of the intensity of p from the first pixel of S.
        if the difference is larger than a threshold t then
           Add the difference to d.
        end if
     end for
     if d > 0 then
        Let f_i = 1.
     else
        Let f_i = 0.
     end if
     Increment i.
  end for
Algorithm 1 Inhomogeneity feature calculation.
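A minimal NumPy rendering of Algorithm 1 might look as follows; the subimage size and threshold values are illustrative placeholders, since the paper determines both experimentally:

```python
import numpy as np

def inhomogeneity_features(image, subimage_size=10, threshold=20):
    # Sketch of Algorithm 1. subimage_size and threshold are assumed
    # values; the paper determines both experimentally.
    features = []
    h, w = image.shape
    for r in range(0, h - subimage_size + 1, subimage_size):
        for c in range(0, w - subimage_size + 1, subimage_size):
            sub = image[r:r + subimage_size, c:c + subimage_size].astype(int)
            # Differences of every pixel from the first pixel of the subimage.
            diffs = np.abs(sub - sub.flat[0])
            # Sum only those differences exceeding the threshold.
            d = diffs[diffs > threshold].sum()
            # Binary feature: 1 if any above-threshold difference occurred.
            features.append(1 if d > 0 else 0)
    return features
```

On a perfectly homogeneous image every feature is 0, while any subimage containing a sufficiently contrasting spot yields a 1.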

An example feature vector excerpt for a homogeneous and an inhomogeneous image part is given in Table 1.

homogeneous (Figure 3) 4.57, 0.0 4.68, 0.0 4.34, 0.0 3.91, 0.0 3.67, 0.0
inhomogeneous (Figure 2 (e)) 68.55, 1.0 71.41, 1.0 55.30, 0.0 65.64, 1.0 34.30, 1.0
Table 1: A characteristic example feature vector excerpt for a homogeneous and an inhomogeneous retina part. The numbers in each cell are the standard deviation and the inhomogeneity value, respectively.

We use the implementations of the Weka weka libraries to test the classification of the images using the features above. After investigating several classifiers (SVM, k-NN, etc.), we chose the Naive Bayes classifier for this task, which uses a simple but, in specific cases, very effective approach. As can be seen in section 4, this classifier with the proposed features provided accurate results with good computational performance.
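Since the classification relies on Weka's Naive Bayes implementation, a self-contained Gaussian Naive Bayes in NumPy can serve as a stand-in sketch for readers outside the Java ecosystem (the class name and the Gaussian likelihood assumption are ours, not the paper's):

```python
import numpy as np

class GaussianNaiveBayes:
    # Minimal Gaussian Naive Bayes, standing in for the Weka
    # implementation used in the paper.
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # Small epsilon avoids division by zero for constant features.
        self.vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.priors = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each per-class Gaussian,
        # summed over the (assumed independent) feature dimensions.
        ll = -0.5 * (np.log(2 * np.pi * self.vars)[None]
                     + (X[:, None, :] - self.means[None]) ** 2
                     / self.vars[None]).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.priors)[None], axis=1)]
```

Training on the per-image feature vectors and predicting the "highly abnormal" vs. "process further" label then reduces to `GaussianNaiveBayes().fit(X_train, y_train).predict(X_test)`.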

It is also an interesting observation that these features (with a proper training set and classifier) are also useful in detecting low quality images. However, this issue is out of the scope of this paper.

3 Pre-Filtering – Extracting Regions With Lesion Candidates

As the second step of our approach, we extract regions with lesion candidates in the images that passed the pre-screening phase. Since these images must later undergo detailed image analysis to extract specific lesions, this pre-filtering is highly recommended to restrict the input of the corresponding detector algorithms. We now summarize the steps by which the candidate regions are extracted.

3.1 Pre-processing

Similarly to the pre-processing steps discussed for the pre-screening phase, we use the green plane of the image, following literature recommendations youssif . Then, we perform histogram equalization on the image to reduce the vignette effect (see Figure 5) and calculate the background image by applying a strong median filter.

Figure 5: The green plane after histogram equalization.

We use the background image, shown in Figure 6, to perform shade correction by subtracting it from the original image.

Figure 6: The background image.

To suppress noise, we apply a smaller median filter to the shade-corrected image. As the final pre-processing step, we apply unsharp masking to increase the acutance (see Figure 7).

Figure 7: The pre-processed image for candidate region extraction.
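The shade-correction and sharpening chain can be sketched as follows; the filter sizes are assumed values, since the paper determines them experimentally, and the naive sliding-window median is only illustrative (a production system would use an optimized implementation):

```python
import numpy as np

def median_filter(img, size):
    # Naive sliding-window median; the paper uses a strong median filter
    # to estimate the background and a weaker one for denoising.
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out

def prefilter_preprocess(green, background_size=25, noise_size=3, amount=1.0):
    # Shade correction: subtract the median-estimated background.
    background = median_filter(green, background_size)
    shade_corrected = green.astype(float) - background
    # Noise suppression with a small median filter.
    denoised = median_filter(shade_corrected, noise_size)
    # Unsharp masking: add back the difference from a smoothed copy.
    blurred = median_filter(denoised, noise_size)
    return denoised + amount * (denoised - blurred)
```

The parameters `background_size`, `noise_size` and `amount` are hypothetical defaults chosen only to make the sketch runnable.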

3.2 Removal of anatomical parts

Detecting the anatomical parts of the fundus is an important step before lesion detection. For example, the optic disc appears as the brightest circular patch on the fundus, and its presence may disturb the detection of exudates. Removing the vessel system is also relevant, since small portions of it can appear essentially the same as haemorrhages. Besides these two anatomical parts, we also remove the macula, because for certain region sizes some parts of it can appear as locally salient objects. For these tasks, we use the vessel detector published by Staal et al. vessel , the macula detector of Petsatodis od and the optic disc detector described in exudate .

3.3 Statistical analysis of regions

We split the image into disjoint regions of a given size. For each region, we compute the local mean and standard deviation of its intensity values. A pixel is labelled high if its intensity lies sufficiently far above the local mean, and low if it lies sufficiently far below it; otherwise, it remains unlabelled. After labelling, we select connected components composed of pixels with identical labels and with cardinality at least n. If a component satisfies these conditions, we consider it a lesion candidate. The areas which possibly contain lesions are then used as input for specific lesion detectors, designed e.g. for microaneurysms or exudates.
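A sketch of this statistical labelling and connected-component test might read as follows; the deviation factor `k` and the cardinality `n` are assumed parameters, as the exact thresholds are not stated here:

```python
import numpy as np

def label_salient_pixels(image, region_size=25, k=2.0):
    # Per disjoint region, label pixels deviating from the local mean by
    # more than k local standard deviations (k is an assumed parameter).
    labels = np.zeros(image.shape, dtype=int)  # +1 high, -1 low, 0 unlabelled
    h, w = image.shape
    for r in range(0, h, region_size):
        for c in range(0, w, region_size):
            region = image[r:r + region_size, c:c + region_size].astype(float)
            mu, sigma = region.mean(), region.std()
            block = labels[r:r + region_size, c:c + region_size]
            block[region > mu + k * sigma] = 1
            block[region < mu - k * sigma] = -1
    return labels

def has_lesion_candidate(labels, n=10):
    # Keep a component if it is a connected set of identically labelled
    # pixels of cardinality at least n (4-connectivity flood fill).
    visited = np.zeros(labels.shape, dtype=bool)
    h, w = labels.shape
    for r in range(h):
        for c in range(w):
            if labels[r, c] != 0 and not visited[r, c]:
                stack, size, lab = [(r, c)], 0, labels[r, c]
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not visited[ny, nx]
                                and labels[ny, nx] == lab):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if size >= n:
                    return True
    return False
```

A flat image yields no candidates, while a compact bright blob large enough to exceed the cardinality threshold is flagged.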

4 Results

In this section, we present our respective experimental results for the pre-screening and pre-filtering phases of the proposed approach.

4.1 Results on pre-screening

Our first experimental dataset consisted of 34 training and 28 test macula-centred images. Both the training and the test sets contain 50–50% normal and abnormal cases. Ophthalmologists selected these images from three databases and classified them according to whether they contain a serious disorder: the publicly available DRIVE drive and DIARETDB1 diaretdb1 databases, and the database provided by the Moorfields Eye Hospital, London, UK for our research purposes. Our goal is to find the images where the fundus is abnormal, so that obviously diseased cases cannot pass. These images contain sight-threatening disorders and have a grade of R3 in a usual retinopathy grading protocol protocoll . Therefore, we label the elements of the test database as images with a serious disorder (first class) and images to be processed further (second class). Thus, the second class is expected to contain normal or not seriously diseased cases.

For pre-screening, we used a Naive Bayes classifier trained on the combined features extracted from all regions of the images, as described in section 2.2. Thus, a feature vector containing one entry pair per subimage is extracted for each image. With this approach, we successfully classified all elements of the test dataset; that is, the accuracy in this case is 100%.

To make the approach faster, we used backward elimination hiv for feature subset selection. That is, we selected the best 11 regions of each image and extracted the features only from them for classification. In this case, our approach still produced no false predictions, with a very low elapsed time on the order of milliseconds.
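Backward elimination itself can be sketched generically; `score_fn` is a hypothetical callback (e.g. cross-validated accuracy of the Naive Bayes classifier on the chosen feature subset):

```python
def backward_elimination(features, score_fn, keep=11):
    # Greedy backward elimination: repeatedly drop the feature whose
    # removal hurts the score the least, until `keep` features remain.
    # score_fn takes a list of feature indices and returns a score
    # (higher is better), e.g. cross-validated classification accuracy.
    selected = list(features)
    while len(selected) > keep:
        best_score, best_drop = float('-inf'), None
        for f in selected:
            candidate = [g for g in selected if g != f]
            s = score_fn(candidate)
            if s > best_score:
                best_score, best_drop = s, f
        selected.remove(best_drop)
    return selected
```

`keep=11` mirrors the 11 regions retained per image above; the scoring function and its evaluation protocol are up to the caller.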

We also tested our approach on the 1200 images of the Messidor database (kindly provided by the Messidor program partners; see http://messidor.crihan.fr). This database is dedicated to measuring the performance of screening systems by providing a grading score for each image. The grades run from R0 to R3, where R0 represents no retinopathy and R3 the most serious stage of the disease, based on the type and number of lesions appearing in the images. We selected the two classes as follows: abnormal (R3) and images that need further analysis (R0, R1, R2). Our approach achieved an accuracy of 82% with 81% sensitivity and 82% specificity on this dataset. Most of the error originates from the false classification of R2 cases (52% of all misclassified images are from this class), while a smaller portion of R1 and R3 images was also classified wrongly (25% and 23% of all misclassifications, respectively).

4.2 Results on pre-filtering

We tested our approach on the images classified as "to be processed further" by the preceding pre-screening phase, together with the positive samples of the first training set; this database thus consisted of 36 images. We also tested the approach on the 784 images of the Messidor dataset.

On the first dataset, the detector missed only one fundus image that contained lesions. Our results are summarized in detail in Table 2, which lists the value of the region size parameter, the number of correctly/incorrectly (true/false) identified regions, the number of misclassified images and the percentage of remaining pixels. An image is considered misclassified if it contains at least one lesion but none was found, or, conversely, if it contains no lesions but at least one was found by the algorithm.

Size  True  False  Misclassified  Percentage
10    24    10     4              0.05
25    26    10     4              0.34
50    25    9      5              1.28
75    27    3      1              2.5
100   16    7      5              3.47
200   4     4      5              4.82
Table 2: Experimental results on pre-filtering. Size - the size of the region window, True - the number of correctly identified regions, False - the number of falsely identified regions, Misclassified - the number of misclassified images, Percentage - the percentage of the number of remaining pixels after this step.

This parameter setup was found by empirical tests to obtain the highest accuracy. The detector missed only 3.5% of the fundus images containing lesions on the first dataset, while on the Messidor dataset this figure was 5%. A demonstrative example of the selected regions can be seen in Figure 8.

Figure 8: Regions with lesion candidates.

With this candidate region detection, we can reduce the total number of pixels in the database to nearly 2.5% of the original data. To demonstrate how this reduction affects the subsequent detailed image analysis, we tested a specific lesion detector: the computational time of the state-of-the-art microaneurysm detection algorithm fleming was reduced by 90% after this candidate selection.

5 Conclusion

We presented a complementary automatic decision support approach that can separate fundus images containing serious lesions from those that should undergo detailed screening. This step can immediately direct patients with serious lesions to an ophthalmologist without a time-consuming screening procedure. Using a Naive Bayes classifier, we were able to classify all the test images correctly. As a secondary pre-filtering step for images passing pre-screening, we presented an approach that is able to detect areas which presumably contain lesions. As a fair trade-off with accuracy, we gained high computational performance by using only small regions within which to detect the actual lesions. However, it should also be noted that, along with the gain in computational performance, some lesions can be missed using this framework.


This work was supported in part by the János Bolyai grant of the Hungarian Academy of Sciences, and by the TECH08-2 project DRSCREEN - Developing a computer based image processing system for diabetic retinopathy screening of the National Office for Research and Technology of Hungary (contract no.: OM-00194/2008, OM-00195/2008, OM-00196/2008).


  • (1) R. Klein, B. E. K. Klein, S. E. Moss, Visual impairment in diabetes, Ophthalmology 91 (1984) 1–9.
  • (2) Causes of Vision Impairment, http://www.lighthouse.org/research/statistics-on-vision-impairment/causes/.
  • (3) Vanderbilt eye institute.
    URL http://www.vanderbilthealth.com/eyeinstitute/23499
  • (4) Eyepacs.
    URL https://www.eyepacs.org/about.do
  • (5) English national screening programme for diabetic retinopathy.
    URL http://www.retinalscreening.nhs.uk/pages/
  • (6) Eyecheck.
    URL http://www.eyecheck.nl
  • (7) S. Philip, A. D. Fleming, K. A. Goatman, S. Fonseca, P. Mcnamee, G. S. Scotland, G. J. Prescott, P. F. Sharp, J. A. Olson, The efficacy of automated disease/no disease grading for diabetic retinopathy in a systematic screening programme, British Journal of Ophthalmology 91 (11) (2007) 1512–1517.
  • (8) R. Winder, P. Morrow, I. McRitchie, J. Bailie, P. Hart, Algorithms for digital image processing in diabetic retinopathy, Computerized Medical Imaging and Graphics 33 (8) (2009) 608 – 622.
  • (9) M. Abramoff, M. Niemeijer, M. Suttorp-Schulten, M. A. Viergever, S. R. Russel, B. van Ginneken, Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes, Diabetes Care 31 (February 2008) 193–198.
  • (10) M. J. Chorley, D. W. Walker, Performance analysis of a hybrid mpi/openmp application on multi-core clusters, Journal of Computational Science 1 (3) (2010) 168 – 174.
  • (11) M. Bader, M. Mehl, U. Rüde, G. Wellein, Simulation software for supercomputers, Journal of Computational Science 2 (2) (2011) 93 – 94.
  • (12) M. Niemeijer, M. D. Abrámoff, B. van Ginneken, Image structure clustering for image quality verification of color retina images in diabetic retinopathy screening, Medical Image Analysis 10 (2006) 888–898.
  • (13) J. Staal, M. Abramoff, M. Niemeijer, M. Viergever, B. van Ginneken, Ridge based vessel segmentation in color images of the retina, IEEE Transactions on Medical Imaging 23 (2004) 501–509.
  • (14) I. Kozak, P. Sample, J. Hao, W. R. Freeman, R. N. Weinreb, T. Lee, M. H. Goldbaum, Machine learning classifiers detect subtle field defects in eyes of hiv inviduals, Trans Am Ophthalmol Soc 105 (2007) 111–120.
  • (15) J. Meier, R. Bock, G. Michelson, L. Nyúl, J. Hornegger, Effects of preprocessing eye fundus images on appearance based glaucoma classification, Lecture Notes in Computer Science 4673 (2007) 165 – 173.
  • (16) A. Sopharak, K. T. Nwe, Y. A. Moe, M. N. Dailey, B. Uyyanonvara, Automatic exudate detection with a naive bayes classifier, in: The 2008 International Conference on Embedded Systems and Intelligent Technology, February 27-29, 2008.
  • (18) A. A. A. Youssif, A. Z. Ghalwash, A. S. Ghoneim, Comparative study of contrast enhancement and illumination equalization methods for retinal vasculature segmentation, Proc. Cairo International Biomedical Engineering Conferemce.
  • (19) C. Bellman, M. M. Neveu, H. P. N. Scholl, C. R. Hogg, P. R. Rath, S. Jenkins, A. C. Bird, G. E. Holder, Localized retinal electrophysiological and fundus autofluorescence imaging abnormalities in maternal inherited diabetes and deafness, Investigative Ophthalmology & Visual Science 45 (July 2004) 2355–2360.
  • (20) I. H. Witten, E. Frank, Data Mining: Practical machine learning tools and techniques, 2nd Edition, Morgan Kaufmann, San Francisco, 2005.
  • (21) J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, B. van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Transactions on Medical Imaging 23 (2004) 501 – 509.
  • (22) T. S. Petsatodis, A. Diamantis, G. P. Syrcos, A complete algorithm for automatic human recognition based on retina vascular network characteristics, Era1 International Scientific Conference, Peloponnese, Greece.
  • (23) T. Kauppi, V. Kalesnykiene, J.-K. Kämäräinen, L. Lensu, I. Sorri, A. Raninen, R. Voutilainen, H. Uusitalo, H. Kälviäinen, J. Pietilä, Diaretdb1 diabetic retinopathy database and evaluation protocol, Proc. of the 11th Conf. on Medical Image Understanding and Analysis (MIUA2007) (2007) 61–65.
  • (24) S. Harding, R. Greenwood, S. Aldington, J. Gibson, D. Owens, R. Taylor, E. Kohner, P. Scanlon, G. Leese, Grading and disease management in national screening for diabetic retinopathy in england and wales, Diabetic Medicine 20 (December 2003) 965–971.
  • (25) A. D. Fleming, S. Philip, K. A. Goatman, Automated microaneurysm detection using local contrast normalization and local vessel detection, IEEE Transactions on Medical Imaging 25(9) (2006) 1223–1232.