Mini Lesions Detection on Diabetic Retinopathy Images via Large Scale CNN Features

11/19/2019 · Qilei Chen, et al. · UMass Lowell

Diabetic retinopathy (DR) is a complication of diabetes that affects the eyes. DR is a primary cause of blindness in working-age people, and it is estimated that 3 to 4 million people with diabetes are blinded by DR every year worldwide. Early diagnosis has been considered an effective way to mitigate this problem. The ultimate goal of our research is to develop novel machine learning techniques that analyze DR images generated by the fundus camera for automatic DR diagnosis. In this paper, we focus on identifying small lesions on DR fundus images. The results of our analysis, which include the lesion categories and their exact locations in the image, can be used to facilitate the determination of DR severity (indicated by DR stages). Unlike traditional object detection for natural images, lesion detection for fundus images poses unique challenges. Specifically, a lesion instance is usually very small compared with the original resolution of the fundus image, making it difficult to detect. We analyze the lesion-to-image scale carefully and propose a large-size feature pyramid network (LFPN) that preserves more image details for mini lesion instance detection. Our method also includes an effective region proposal strategy to increase the sensitivity. The experimental results show that our proposed method is superior to the original feature pyramid network (FPN) method and Faster RCNN.


I Introduction

Diabetic retinopathy (DR) is a leading ophthalmic disease globally and one of the most common complications of diabetes. The prevalence of DR in diabetic populations is as high as 24.7-37.5% [4]. People with diabetes have a higher risk of developing DR because elevated glucose levels can damage the retinal blood vessels. Early screening and regular checkups with the fundus camera have been reported to be an effective approach to reducing the risk of blindness [18]. Therefore, it is important to develop an effective tool for DR detection in early screenings to improve healthcare outcomes.

The International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scales [15] is a widely used international standard for severity diagnosis based on DR images. The standard defines 5 DR stages, and classification into these stages is based on the location and number of the following 10 lesion categories: 1) blot hemorrhages, 2) micro-aneurysms, 3) hard exudate, 4) cotton wool spot, 5) fibrous proliferation, 6) venous beading, 7) intraretinal microvascular abnormality (IRMA), 8) neovascularization, 9) vitreous hemorrhage, 10) venous loop [13]. DR images normally need to be in a high-resolution format so that the lesions can be shown in detail (see Fig. 1), especially for the small-size lesion categories 1)-4). It is a labor-intensive job for an ophthalmologist to examine the whole image and determine the number, category, and location of all these lesions. In fact, finding lesions in DR images can be considered an instance-level multi-label visual object detection task, and it has become a hot research topic in recent years [3, 14, 17, 19].

Fig. 1: Mini lesion with zoom-in view.

However, most previous methods are designed to detect a single lesion category (e.g., [3, 17]) or to detect multiple lesion categories in a sequence of steps [19]. Also, due to the lack of datasets with detailed location information for lesion instances, previous weakly supervised methods [14] can only find suspicious lesion regions rather than fine-grained individual lesion instances. In this study, we aim to develop a single model that detects all lesion instances of different categories in one round.

Traditional Faster-RCNN and FPN methods are not suitable for small lesion detection in our dataset, as their classification is based on the middle or top feature maps, which may lose the feature details of small lesions. To address this issue, we first propose a large-size feature pyramid network (LFPN) for use in the RCNN [6] method. Our method increases the size of the bottom feature map to match that of the input image, which better preserves the details of small targets in the original images and is thus more effective for detecting small lesions. Furthermore, we find that the standard region proposal strategy based on intersection-over-union (IoU) in the region proposal network (RPN) tends to miss many small true targets in our experiments. To alleviate this problem, we design an effective region proposal strategy, in which small anchors containing the center region of a ground-truth box are set to positive, to help the RPN pay more attention to true mini lesion targets on the fundus image. The experimental results show that our methods considerably improve the performance of lesion detection over the original Faster-RCNN and FPN methods. To the best of our knowledge, our work is the first effort for lesion detection at the instance level, based on the unique DR image dataset we collected with lesion instances labeled by ophthalmologists.

The major contributions of this work can be summarized as follows:

  • Firstly, we propose a large-size feature map method in FPN to preserve details of high-resolution fundus images for detecting mini-size lesions.

  • Secondly, we design a center-focus strategy in RPN to obtain more acceptable anchors for lesion target detection.

  • Thirdly, our lesion detection method outperforms other state-of-the-art approaches in the experiments and provides a strong foundation for DR severity diagnosis.

II Related Work

In this section, we review recent CNN-based object detection methods that are used in our research. We also briefly introduce the International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scales (ICDRDMEDSS) and show the important relationship between lesion detection and severity diagnosis.

II-A CNN-based Object Detection

With the explosive development of convolutional neural networks (CNNs) in recent years, various CNN-based visual object detection methods such as Faster-RCNN [12], SSD [10] and YOLO [11] have been proposed. According to the detection process, these methods can be divided into two categories: one-stage methods such as SSD and YOLO, and two-stage methods such as Faster-RCNN. In one-stage methods, object class classification and location regression are predicted directly from the feature map of the backbone. Unlike one-stage methods, two-stage methods first apply a region proposal network (RPN), which can be described as a binary-label detector. The binary-label detector predicts objectness, and the positive results are used in the second stage for object label classification and location regression. Previous studies show that two-stage methods are often more effective for detecting objects of various sizes. We use Faster-RCNN as the baseline in our experiments.
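
To make this two-stage flow concrete, the following minimal Python sketch traces the pipeline described above; backbone, rpn, and rcnn_head are hypothetical callables standing in for the actual networks, not the implementation of any cited method.

def detect_two_stage(image, backbone, rpn, rcnn_head, objectness_thresh=0.5):
    """Stage 1: the RPN acts as a binary-label detector and proposes
    class-agnostic boxes. Stage 2: the RCNN head predicts the object
    label and refines the location of each positive proposal."""
    features = backbone(image)                  # CNN feature map(s)
    proposals, objectness = rpn(features)       # boxes + binary scores
    positives = [box for box, score in zip(proposals, objectness)
                 if score > objectness_thresh]
    return [rcnn_head(features, roi) for roi in positives]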

II-B Feature Pyramid Networks

Image pyramid construction [1] has been proven to be an effective method for handling the fundamental challenge of recognizing objects at vastly different scales in computer vision. With the prevalence of CNN methods, Lin et al. [8] proposed the Feature Pyramid Network (FPN), built upon CNN features, as a basic component in recognition systems for detecting objects at different scales. Faster-RCNN with FPN achieves better results on COCO [9], a large-scale natural object detection dataset. The standard FPN takes the last residual block of each of the 4 stages of a ResNet [7] backbone as input and then goes through a top-down pathway to construct 4 feature layers, with the size ratio between adjacent layers set to 2. In a standard FPN, the ratio of the original image size to the largest feature map size is 4. Larger feature maps can preserve more details of the objects, which is especially important for small instances. Motivated by this intuition, we modify the standard FPN and extend the feature scale to the original image size.
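
As an illustration of this top-down pathway, here is a short PyTorch sketch (our own reconstruction, not the code of [8]): each map is upsampled by a factor of 2 and merged with the 1x1-convolved bottom-up map by elementwise sum; the 256-channel width and ResNet channel counts are the usual FPN settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fpn_top_down(c_maps, lateral_convs):
    """Standard FPN top-down pathway.
    c_maps: backbone stage outputs [C2, C3, C4, C5], fine to coarse.
    lateral_convs: matching 1x1 convs mapping each Ci to 256 channels."""
    lats = list(lateral_convs)
    p = lats[-1](c_maps[-1])                           # P5 from C5
    pyramid = [p]
    for c, lat in zip(c_maps[-2::-1], lats[-2::-1]):
        p = F.interpolate(p, scale_factor=2) + lat(c)  # upsample + merge
        pyramid.append(p)
    return pyramid[::-1]                               # [P2, P3, P4, P5]

# Toy example with ResNet-style channel counts and strides 4-32.
lats = nn.ModuleList(nn.Conv2d(c, 256, 1) for c in (256, 512, 1024, 2048))
c_maps = [torch.randn(1, c, s, s)
          for c, s in zip((256, 512, 1024, 2048), (64, 32, 16, 8))]
print([tuple(p.shape[2:]) for p in fpn_top_down(c_maps, lats)])
# [(64, 64), (32, 32), (16, 16), (8, 8)]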

II-C Diabetic Retinopathy Diagnosis

The International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scales (ICDRDMEDSS) was proposed by Wilkinson et al. in 2003 as an international standard in which diabetic retinopathy is classified into 5 stages: 1) No DR, 2) Mild non-proliferative DR, 3) Moderate non-proliferative DR, 4) Severe non-proliferative DR, 5) Proliferative DR. The stage diagnosis is based on the category and location of lesions on the fundus image. For example, having more than 20 hemorrhages in each of the 4 quadrants, without signs of proliferative retinopathy, is an indicative condition of the 4th stage. Quadrants are centered at the macula (shown in Fig. 2). EyePACS [2] is a well-known large-scale dataset for DR diagnosis under the ICDRDMEDSS standard. It contains more than 100,000 fundus images, and each image is labeled with an integer ranging from 0 to 4 indicating the stage of DR. In EyePACS, DR diagnosis is treated as an image classification task. In fact, the location and category of lesions on the fundus reveal the details behind a DR diagnosis. Therefore, lesion instance detection can provide a more effective way to assist ophthalmologists in assessing the condition of DR.
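
As a toy illustration of how such counting rules translate into code, the sketch below checks only the hemorrhage clause of the stage-4 condition quoted above; the function name and input format are our own, and the other stage-4 criteria are omitted.

def meets_stage4_hemorrhage_rule(hemorrhages_per_quadrant):
    """True when each of the 4 macula-centered quadrants contains
    more than 20 hemorrhages (the clause quoted above; venous
    beading, IRMA, etc. are not checked in this toy example)."""
    assert len(hemorrhages_per_quadrant) == 4
    return all(count > 20 for count in hemorrhages_per_quadrant)

# Counts per quadrant would come from detected lesion centers.
print(meets_stage4_hemorrhage_rule([25, 31, 22, 27]))  # True
print(meets_stage4_hemorrhage_rule([25, 31, 22, 7]))   # False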

Fig. 2: DR image with 4 quadrants based on the center of the macula according to ICDRDMEDSS.

III Dataset and Method

Data collection is a vital step for lesion detection methods based on convolutional neural networks (CNNs). We developed our own labeling tool, which can easily be used to mark the bounding box and category of lesion instances in an image. After the manual labeling step, we tally the number of object instances and compute the area ratio for each lesion category. The summary information is presented in Section III-A. LFPN is designed to preserve and make full use of small-size object features. The most important part of LFPN is that we use the stride-1 feature map, which is the same size as the input image, for RoI-pooling and classification in RCNN (shown in Fig. 3). This layer preserves the details of small object features as much as possible. Another difference from FPN is that we use the top layer for region proposal and then map the regions of interest (RoIs) to the stride-1 feature map. Details of LFPN can be seen in Fig. 3. For small target proposal, IoU is not an effective metric for locating lesion targets, especially when the object constitutes only a small part in the center of the ground-truth bounding box. We add a center-focused condition to the proposal strategy for the situation mentioned above, which is shown to improve the results in the experiments.

III-A Dataset Analysis

The dataset contains high-resolution images, including fundus pictures from 500 patients and covering all 5 severity stages. All original images were preprocessed to remove the black parts with low pixel values on the left- and right-hand sides, which helps the model focus on the fundus part at a reduced size. There are 10 lesion labels, as mentioned in the introduction, in our labeling tool, and a flexible bounding box tool is provided to mark the location and category of each lesion. Each annotation box represents a single lesion instance and contains 5 values (x, y, w, h, c), where (x, y) is the coordinate of the upper-left corner of the ground-truth box in the fundus image, (w, h) is the size of the box, and c is the label of the lesion. The dataset was randomly divided into 4 equal parts; each part was handled by one specialist and then validated by 3 other specialists.
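
To make the annotation format concrete, here is a small Python sketch of one record together with the lesion-to-image area ratio reported in Table II; the field names are our own choice for the 5 annotated values.

from dataclasses import dataclass

@dataclass
class LesionBox:
    """One annotation: upper-left corner (x, y), box size (w, h),
    and lesion category label c (field names are assumptions)."""
    x: int
    y: int
    w: int
    h: int
    c: int  # 1-10, following the category order in the introduction

def lesion_to_image_ratio(box: LesionBox, img_w: int, img_h: int) -> float:
    """Area of a lesion box relative to the whole fundus image,
    i.e., the kind of quantity reported in Table II."""
    return (box.w * box.h) / (img_w * img_h)

# Example: a micro-aneurysm-sized box on a hypothetical 2000x2000 image.
box = LesionBox(x=512, y=640, w=40, h=36, c=2)
print(f"{lesion_to_image_ratio(box, 2000, 2000):.5%}")  # 0.03600%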

Label  Total  Train  Validation
1      18493  14720  3773
2      7703   6301   1402
3      9316   7403   1913
4      654    537    117
5      34     -      -
6      15     -      -
7      25     -      -
8      49     -      -
9      14     -      -
10     1      -      -
TABLE I: Summary of lesions in 10 different categories. Labels correspond to the order of the category descriptions in the introduction.
Label 1 2 3 4
Ratio 0.07244% 0.05390% 0.31672% 0.23976%
TABLE II: The average lesion-to-image ratios for categories 1)-4).

Table I summarizes the number of instances for each lesion category. We can see that categories 1)-4) make up the majority of the dataset. These four categories are more valuable for early diagnosis because they are indicators of the first three stages of severity [13]. Compared with the other 6 lesion categories, labeling categories 1)-4) is more challenging and labor-intensive, as their area ratios on a fundus image are very small. The average lesion-to-image ratios for categories 1)-4) are listed in Table II.

III-B Large Size Feature Pyramid Network

Fig. 3 illustrates the architecture of our proposed large-size feature pyramid network (LFPN). As in FPN, we upsample the spatial resolution of the feature map by a factor of 2 and then merge each upsampled map with the corresponding bottom-up map. The difference is that we continue to upsample the feature map until it reaches the input size (W, H). The input image itself is treated as a feature map: it goes through a convolutional layer that increases the channel dimension from 3 to d. After that, the new d-dimension layer is merged with the upsampled feature map of size (W, H) by elementwise sum. For example, if ResNet-101 is used as the backbone, we obtain a set of feature maps containing 6 layers {P0, P1, P2, P3, P4, P5}, where layer P0 has the same size as the input data, while FPN only contains P2 to P5.

Fig. 3: The architecture of the large-size feature pyramid network (LFPN). (W, H) is the input image size and d is the channel size of the feature maps.

With its large feature map size, P0 can sufficiently preserve the features of small lesions. In FPN, every feature layer is used for both region proposal and class prediction. In LFPN, we only use the smaller-size feature maps at the top of the pyramid as the region proposal layers, and the proposed regions of interest (RoIs) are then mapped to layer P0. Unlike the application setting of FPN, in which all layers must be present for multi-scale [16] objects, our approach concentrates only on small-scale lesion targets, and the intermediate layers act as a linking chain and feature producers. For a GPU with fixed memory capacity, LFPN can provide a larger feature size and better results than FPN. Compared with Faster-RCNN, the region proposal layer remains the same, but our RoI pooling is performed on the largest layer P0 rather than the top feature layer of the backbone, so the classifier can use more detailed features to obtain better results.
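
The following PyTorch sketch captures the LFPN top-down pathway described in this subsection; it is our reconstruction (the paper's implementation is in MXNet), and the handling of the intermediate stride-2 level P1, which has no backbone lateral here, is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LFPNTopDown(nn.Module):
    """Extend the FPN top-down pathway past P2 down to a stride-1 map
    P0 the same size as the input image. The raw image is lifted from
    3 to d channels by a conv and merged into P0 by elementwise sum."""

    def __init__(self, d=256):
        super().__init__()
        # ResNet-101 stage channels for C2..C5 (strides 4 to 32).
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, d, 1) for c in (256, 512, 1024, 2048))
        self.image_conv = nn.Conv2d(3, d, 1)     # image: 3 -> d channels

    def forward(self, image, c2, c3, c4, c5):
        lats = list(self.laterals)
        p = lats[3](c5)                          # P5: top layer
        pyramid = [p]
        for c, lat in ((c4, lats[2]), (c3, lats[1]), (c2, lats[0])):
            p = F.interpolate(p, scale_factor=2) + lat(c)
            pyramid.append(p)                    # P4, P3, P2
        p = F.interpolate(p, scale_factor=2)     # P1 (stride 2)
        pyramid.append(p)
        p = F.interpolate(p, scale_factor=2) + self.image_conv(image)
        pyramid.append(p)                        # P0: input-size map
        return pyramid[::-1]                     # [P0, P1, ..., P5]

Region proposal would then run on the coarse top of this pyramid, with the proposed RoIs pooled from P0, as described above.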

III-C Center-Focus Target Proposal

The region proposal network (RPN) [12] is a vital part of RCNN methods, as it predicts object bounds and objectness scores at each position on the feature map. In the RPN, an anchor is centered at the sliding window in question and is associated with a scale and aspect ratio [12]. The region proposal step assigns an anchor a positive label if it matches either of the conditions mentioned in [12], and a negative label if the IoU ratio between the anchor and all ground-truth boxes is lower than 0.5.

In fact, manual labeling of small-size lesions is a difficult task even for specialists, and the lesion usually occupies only part of the bounding box, which does not strictly follow the lesion's contour (see the green boxes in Fig. 4). The center region is the most important part of the bounding box when the area ratio of the lesion to the ground-truth box is less than 0.5. The region proposal strategy in [12] may therefore reject some acceptable anchor boxes. For example, in Fig. 4, the small blue anchor fits the whole lesion region perfectly, but its IoU ratio is less than 0.5, so the anchor would be set to negative under the proposal condition in [12].

Fig. 4: The red rectangle regions of the left image are zoomed in and shown on the right. Green rectangles are ground-truth boxes and blue ones are proposal targets.

In addition to the original condition, we introduce a center-focus (CF) target proposal condition to help mitigate the problem of rejecting suitable anchors:

CF(g, a) = 1[c(g) ∈ a] · M(g, a) / S(a)   (1)

positive(a) ⟺ IoU(g, a) > 0.5 or CF(g, a) > θ   (2)

where M(g, a) represents the merged area of the ground truth g and the anchor a, 1[c(g) ∈ a] is 1 when the center c(g) of the ground truth resides in the anchor, S(a) represents the area of the anchor a, and θ is the center-focus threshold. Equation (1) measures whether the anchor contains the center region of the ground-truth box, and equation (2) means that an anchor whose IoU ratio is greater than 0.5, or whose location is near the center of the ground truth, is set to positive.
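
Under our reading of equations (1)-(2), the labeling rule can be sketched in plain Python as follows; boxes are (x1, y1, x2, y2) tuples, and the coverage threshold theta is an assumed value.

def iou(a, g):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], g[2]) - max(a[0], g[0]))
    ih = max(0.0, min(a[3], g[3]) - max(a[1], g[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_g = (g[2] - g[0]) * (g[3] - g[1])
    return inter / (area_a + area_g - inter)

def contains_center(a, g):
    """True when the center of ground truth g lies inside box a."""
    cx, cy = (g[0] + g[2]) / 2.0, (g[1] + g[3]) / 2.0
    return a[0] <= cx <= a[2] and a[1] <= cy <= a[3]

def cf_positive(anchor, gt, theta=0.5):
    """Center-focus labeling: positive if the usual IoU test passes,
    OR the anchor contains the ground-truth center and the overlap
    covers at least a fraction theta of the anchor's own area."""
    iw = max(0.0, min(anchor[2], gt[2]) - max(anchor[0], gt[0]))
    ih = max(0.0, min(anchor[3], gt[3]) - max(anchor[1], gt[1]))
    coverage = (iw * ih) / ((anchor[2] - anchor[0]) * (anchor[3] - anchor[1]))
    return iou(anchor, gt) > 0.5 or (
        contains_center(anchor, gt) and coverage >= theta)

For the small blue anchor in Fig. 4, which lies entirely inside the ground-truth box and covers its center, the coverage is 1.0, so it is labeled positive even though its IoU is below 0.5.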

IV Experiments

In this section, we first introduce a new criterion used in our experiments and then describe the hyperparameter settings and an analysis of the results. We select all images containing the small lesion categories 1)-4) as the experimental dataset. During the experiments, we randomly divide the dataset into two parts, one for training and the other for validation, with a 4:1 ratio. The number of lesion instances in both sets is shown in the last two columns of Table I. Manual lesion labeling is a challenging task, and some lesions may not be marked. Our CNN-based detector can even detect lesions that actually exist but are not marked in the ground truth. It is therefore more reasonable to use the sensitivity of the results as the quality metric on the validation set. In our experiments, horizontal flipping is applied during training for data augmentation. LFPN is implemented on MXNet, and the experiments are performed on 4 NVIDIA TESLA K80 GPUs.

IV-A Center-Focus Criterion

The standard criterion for natural object detection is based on the IoU ratio only [12]: the IoU ratio between a true-positive object and the ground truth should be above the threshold 0.5. In our DR dataset, the bounding box of a ground-truth lesion instance contains a lot of context, as shown by the green rectangles in Fig. 4, and the center region of the ground truth carries the main information of the lesion. A predicted object that has an IoU ratio less than 0.5 but contains the center region is still acceptable in our application. Corresponding to the CF target proposal condition mentioned in Section III-C, we develop our own center-focus criterion: a predicted rectangle is counted as positive when its IoU ratio is more than 0.1 and the rectangle contains the center point of the ground truth.
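
Reusing the iou() and contains_center() helpers from the Section III-C sketch, this criterion reduces to a single predicate (a sketch of our reading, not the evaluation code itself):

def cf_true_positive(pred_box, gt_box):
    """Center-focus criterion: a predicted box is a true positive when
    its IoU with the ground truth exceeds 0.1 AND it contains the
    ground-truth center point. Boxes are (x1, y1, x2, y2)."""
    return iou(pred_box, gt_box) > 0.1 and contains_center(pred_box, gt_box)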

IV-B LFPN for Lesion Detection

We train LFPN-RCNN on our dataset and use two other methods as baselines, namely Faster-RCNN and Faster-RCNN with FPN. The same parameter settings are used for all three methods. We apply ResNet-101 as the backbone, pretrained on ImageNet [5], and all three methods share the same network input size. For LFPN, the top pyramid layers are used for region proposal, and a fixed base number of anchors is used at each location in a layer. In our work, we use a small anchor scale list for all three methods to fit small lesion detection.

Method         Blot Hemorrhages  Micro-aneurysms  Hard Exudate  Cotton-wool Spot
FasterRCNN     84.63%            80.98%           85.85%        72.22%
FPN            90.21%            88.10%           91.79%        74.60%
LFPN           92.76%            91.96%           90.51%        75.40%
FasterRCNN+CF  87.58%            81.37%           89.23%        70.63%
FPN+CF         91.90%            89.02%           88.46%        62.70%
LFPN+CF        93.01%            86.26%           93.79%        79.73%
TABLE III: Summary of sensitivities for the 4 categories. The last three rows show the results with the center-focus (CF) proposal strategy.

During the validation step, we set the maximum number of predictions per image to 100 and the confidence score threshold to 0.1. The sensitivity results with the CF criterion are shown in the first 3 rows of Table III. For lesion categories 1), 2) and 4), the results of our proposed LFPN are superior to those of the other two methods, which indicates that LFPN is more suitable for smaller lesion detection, as lesions of categories 1) and 2) have a lower area ratio on average.

IV-C Detection with Center-Focused Target Proposal

Fig. 5 shows the recall results for ablations of the design choices for the CF proposal. The blue line represents the result of LFPN with the CF proposal strategy, which helps the network focus more on the center of the main feature of each ground truth. We observe that the recall remains very high even when the IoU ratio threshold increases to 0.6. From the last three rows of Table III, we can see that our CF strategy is effective for most lesion categories. The result for micro-aneurysms, however, worsens with the CF strategy. We analyzed the results for each lesion category and found that 1) blot hemorrhages and 2) micro-aneurysms are frequently confused with each other. Indeed, some small hemorrhages are similar in appearance to micro-aneurysms. As the accuracy for hemorrhages increases, the classifier becomes more sensitive to hemorrhages, which may harm the recognition of micro-aneurysms. We will follow up on this issue in our future research.

Fig. 5: Recall vs. IoU overlap ratio.

V Conclusion

In this work, we propose LFPN to detect small lesion instances in DR images. The proposed architecture has two advantages. First, we use a large CNN feature map, which has the same size as the input image and contains the details of small lesion features, and is thus more effective for object classification in the second stage of RCNN. Second, we use the top layer for region proposal, which is efficient in computing resources. To enhance the target proposal, the center-focused condition is applied to the proposal strategy. In fact, small lesion detection in large images is a common problem in medical image processing. Our method can automatically propose many useful lesion regions in large medical images, thus improving doctors' diagnostic accuracy and efficiency and saving medical resources.

References

  • [1] E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden (1984) Pyramid methods in image processing. RCA engineer 29 (6), pp. 33–41. Cited by: §II-B.
  • [2] J. Cuadros and G. Bresnick (2009) EyePACS: an adaptable telemedicine system for diabetic retinopathy screening. Journal of diabetes science and technology 3 (3), pp. 509–516. Cited by: §II-C.
  • [3] L. Dai, B. Sheng, Q. Wu, H. Li, X. Hou, W. Jia, and R. Fang (2017) Retinal microaneurysm detection using clinical report guided multi-sieving cnn. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 525–532. Cited by: §I, §I.
  • [4] G. Danaei, M. M. Finucane, Y. Lu, G. M. Singh, M. J. Cowan, C. J. Paciorek, J. K. Lin, F. Farzadfar, Y. Khang, G. A. Stevens, et al. (2011) National, regional, and global trends in fasting plasma glucose and diabetes prevalence since 1980: systematic analysis of health examination surveys and epidemiological studies with 370 country-years and 2.7 million participants. The Lancet 378 (9785), pp. 31–40. Cited by: §I.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §IV-B.
  • [6] R. Girshick (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §I.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §II-B.
  • [8] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125. Cited by: §II-B.
  • [9] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §II-B.
  • [10] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §II-A.
  • [11] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §II-A.
  • [12] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §II-A, §III-C, §III-C, §IV-A.
  • [13] X. Wang, Y. Lu, Y. Wang, and W. Chen (2018) Diabetic retinopathy stage classification using convolutional neural networks. In 2018 IEEE International Conference on Information Reuse and Integration (IRI), pp. 465–471. Cited by: §I, §III-A.
  • [14] Z. Wang, Y. Yin, J. Shi, W. Fang, H. Li, and X. Wang (2017) Zoom-in-net: deep mining lesions for diabetic retinopathy detection. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 267–275. Cited by: §I, §I.
  • [15] C. Wilkinson, F. L. Ferris III, R. E. Klein, P. P. Lee, C. D. Agardh, M. Davis, D. Dills, A. Kampik, R. Pararajasegaram, J. T. Verdaguer, et al. (2003) Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 110 (9), pp. 1677–1682. Cited by: §I.
  • [16] D. Xiao, Q. Chen, and S. Li (2016) A multi-scale cascaded hierarchical model for image labeling. International Journal of Pattern Recognition and Artificial Intelligence 30 (09), pp. 1660005. Cited by: §III-B.
  • [17] Y. Yang, T. Li, W. Li, H. Wu, W. Fan, and W. Zhang (2017) Lesion detection and grading of diabetic retinopathy via two-stages deep convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 533–540. Cited by: §I, §I.
  • [18] G. Zhang, H. Chen, W. Chen, and M. Zhang (2017) Prevalence and risk factors for diabetic retinopathy in china: a multi-hospital-based cross-sectional study. British Journal of Ophthalmology 101 (12), pp. 1591–1595. Cited by: §I.
  • [19] Y. Zhao, Y. Zheng, Y. Zhao, Y. Liu, Z. Chen, P. Liu, and J. Liu (2018) Uniqueness-driven saliency analysis for automated lesion detection with applications to retinal diseases. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 109–118. Cited by: §I, §I.