Automated eye disease classification method from anterior eye image using anatomical structure focused image classification technique

05/04/2020
by   Masahiro Oda, et al.

This paper presents an automated method for classifying anterior eye images into cases of infective and non-infective diseases. Treatments for infective and non-infective disease cases are different, so distinguishing them from anterior eye images is important for deciding a treatment plan. Ophthalmologists distinguish them empirically; quantitative, computer-assisted classification is therefore necessary. We propose an automated method that classifies anterior eye images into infective or non-infective disease cases. Anterior eye images show large variations in eye position and illumination brightness, which makes classification difficult. If we focus on the cornea, the positions of opacified areas in the cornea differ between infective and non-infective disease cases. Therefore, we solve the anterior eye image classification task using an object detection approach targeting the cornea. This approach can be described as "anatomical structure focused image classification". We use the YOLOv3 object detection method to detect corneas of infective disease and corneas of non-infective disease. The detection result is used to define the classification result of an image. In our experiments using anterior eye images, 88.3% of images were correctly classified by the proposed method.


1 Introduction

Diseases of the eye can be roughly classified into infective and non-infective diseases. Eye infections are caused by bacteria, fungi, and viruses [8], leading to red eyes, pain, itching, and blurry vision. Non-infective diseases include ulcers and trauma. Treatments for infective and non-infective disease cases are different. Distinguishing infective from non-infective disease is therefore an important task in diagnosis. Ophthalmologists distinguish them by observing the anterior eye. However, the criteria for distinguishing the two kinds of disease are based on the experience of individual ophthalmologists, and decisions made by a less experienced ophthalmologist may be wrong. Computer-based decision assistance is necessary to provide diagnoses of a consistent quality.

The cornea and conjunctiva can be observed in anterior eye images. Typical anterior eye images of infective and non-infective disease cases are shown in Fig. 1. In eyes with infective disease, the center of the cornea is opacified and redness of the conjunctiva is observed. In eyes with non-infective disease, the marginal area of the cornea is opacified and the redness of the conjunctiva varies among patients. More samples of anterior eye images of infective and non-infective diseases are shown in Fig. 2. As shown in this figure, not only the position of the opacified area and the redness of the conjunctiva, but also the eye position, the degree of eyelid opening, and the brightness of illumination vary greatly among the images. These variations make automated diagnosis difficult. Automated diagnosis assistance methods for infective and non-infective diseases from anterior eye images have not been developed.

Figure 1: Typical anterior eye images of cases of (a) infective and (b) non-infective diseases. Positions of opacified areas in the cornea are different between them.
Figure 2: Anterior eye image samples of cases of (a) infective and (b) non-infective diseases.

Recently, deep learning techniques have been used to develop automated diagnosis assistance for the eye. However, most such methods process fundus images to perform anatomical structure segmentation and lesion detection [2, 9, 4, 5, 1, 11, 10]. Although anterior eye images contain useful information for diagnosis, they are not commonly utilized in automated processes for diagnosis assistance.

In this paper, we propose an automated method for classifying anterior eye images into infective and non-infective disease cases. To the best of our knowledge, this is the first automated method for diagnosis assistance of infective and non-infective diseases from anterior eye images. As explained above, anterior eye images show many variations in appearance that make classification difficult. We need to focus on a specific anatomical structure that shows different appearances between infective and non-infective disease cases. The appearances of the cornea and conjunctiva are useful for differentiating the two kinds of cases. In particular, the position of the opacified area in the cornea differs between the two disease cases. Therefore, we solve the image classification task using an object detection approach targeting the cornea. Our approach can be described as "anatomical structure focused image classification". We use an object detection approach not only for object detection but also for image classification.

We use the YOLOv3 [7] object detection method. YOLOv3 is trained to detect two targets, the cornea of an infective disease case and the cornea of a non-infective disease case, from anterior eye images. In the inference stage, we classify an anterior eye image into the infective or non-infective disease class based on the detection result of YOLOv3. This process enables image classification based on differences in the appearance of the cornea, which is important for diagnosing infective and non-infective diseases.

2 Method

2.1 Training data

The training data contain anterior eye images taken from infective and non-infective disease patients. We manually provide annotations for the images. For images of infective disease patients, an "infective cornea" label and a bounding box covering the cornea are given as annotations. For images of non-infective disease patients, a "non-infective cornea" label and a bounding box covering the cornea are given as annotations. Samples of annotated images are shown in Fig. 3. The images and corresponding annotations are used to train the object detection method.
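As a concrete illustration of such annotations (not the authors' actual data format), each record could pair an image path with a corneal bounding box and a class id, serialized in the one-line-per-box text format used by several open-source Keras YOLOv3 training scripts. All file names, coordinates, and the exact serialization below are assumptions.

```python
# Hypothetical annotation records: image path, corneal bounding box
# (x_min, y_min, x_max, y_max in pixels), and a class id
# (0 = "infective cornea", 1 = "non-infective cornea").
# File names, coordinates, and the output format are illustrative only.
annotations = [
    {"image": "images/case_001.jpg", "box": (412, 260, 980, 815), "class_id": 0},
    {"image": "images/case_002.jpg", "box": (355, 310, 905, 850), "class_id": 1},
]

def to_training_line(record):
    """Serialize one record as 'path x_min,y_min,x_max,y_max,class_id',
    a line format accepted by several open-source Keras YOLOv3 trainers."""
    x_min, y_min, x_max, y_max = record["box"]
    return f'{record["image"]} {x_min},{y_min},{x_max},{y_max},{record["class_id"]}'

with open("train_annotations.txt", "w") as f:
    for record in annotations:
        f.write(to_training_line(record) + "\n")
```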

Figure 3: Samples of annotations for (a) infective and (b) non-infective disease images. Yellow boxes are bounding boxes with the "infective cornea" or "non-infective cornea" label.

2.2 Anatomical structure focused anterior eye image classification method

In the YOLO framework [6], a convolutional neural network (CNN) is used to estimate bounding boxes, object class labels, and objectness scores from an input image. Multiple bounding boxes can be detected from an input image. We train YOLOv3 using the training images and corresponding annotations. The object classes are "infective cornea" and "non-infective cornea". After training, we input the test images to the trained YOLOv3 and obtain estimated bounding boxes, object class labels, and objectness scores for each image.

Our task requires a two-class classification of each image. However, the estimation result may contain multiple bounding boxes corresponding to multiple object classes for one image. When multiple bounding boxes are estimated from an image, we perform non-maximal suppression [6] using the objectness scores. This process is illustrated in Fig. 4. Estimated bounding boxes having non-maximal objectness scores are removed. After this process, we obtain one or zero estimated bounding boxes per image. The class label corresponding to the remaining bounding box of an image is defined as the estimated class label of the image.
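The image-level decision rule can be sketched as below, assuming a detector that returns per-box objectness scores and class indices as arrays; the interface, names, and numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

CLASS_NAMES = ("infective cornea", "non-infective cornea")  # assumed class order

def classify_image(objectness_scores, class_ids):
    """Map detected boxes to one image-level label.

    objectness_scores : objectness score per detected box
    class_ids         : class index (0 or 1) per detected box
    Returns the class name of the highest-scoring box, or None when no
    box was detected (the image is left "not classified").
    """
    scores = np.asarray(objectness_scores, dtype=float)
    if scores.size == 0:
        return None
    best = int(np.argmax(scores))  # keep only the box with maximal objectness
    return CLASS_NAMES[int(class_ids[best])]

# Example: two competing boxes; the higher-objectness box decides the label.
print(classify_image([0.62, 0.91], [1, 0]))  # -> infective cornea
print(classify_image([], []))                # -> None (not classified)
```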

Figure 4: Example of removing bounding boxes with non-maximal objectness scores. An estimation result may contain multiple bounding boxes; the boxes in the figure are samples of detected bounding boxes, and class labels and objectness scores are also estimated for them. The bounding box having the highest objectness score is selected, and the remaining bounding boxes are removed.

3 Experiments

We applied the proposed method to anterior eye images. The images consist of 100 images of infective disease and 96 images of non-infective disease. These images were taken at multiple medical institutions. Ground truth class labels of the images were assigned by ophthalmologists. Ground truth bounding boxes were given by an engineering researcher.

A Windows 10 PC with an NVIDIA TITAN V GPU was used to perform the experiments. The method was implemented using Keras [3] with the TensorFlow backend.

4 Results

Five-fold cross validation was performed to evaluate the classification performance of the proposed method. As a result, 88.3% of the images (173/196) were correctly classified, 22 images were classified into wrong classes, and one image was not classified (no bounding box was estimated). A confusion matrix of the classification result is shown in Table 1.

5 Discussion

Our method successfully classified the anterior eye images into two classes with high classification accuracy. Automated classification of anterior eye images into infective and non-infective diseases has never been tackled before; this is the first attempt to tackle this classification problem, and it achieved practical performance. Our method can be used as automated diagnosis assistance for eye diseases. As shown in Table 1, the classification accuracies of the infective and non-infective classes were similar: 89.0% (89/100) of infective images and 87.5% (84/96) of non-infective images were correctly classified. This means the proposed classification method successfully extracted feature values useful for the classification from the training images. We addressed the image classification task as a detection task of corneas. By using the bounding box annotations, YOLOv3 focuses on distinguishing the corneas. Between infective and non-infective disease cases, the appearances of the corneas are different. Therefore, distinguishing the corneas based on their appearances is effective for the task of classifying infective and non-infective cases.

We translated the anterior eye image classification task into an object detection task so that our method focuses on finding a specific anatomical structure that contributes to the classification. We selected the cornea as the object detection target because its appearance differs between infective and non-infective disease cases. The "anatomical structure focused image classification" approach resulted in high classification performance.

                               Classification result
                        Infective   Non-infective   Not classified
Ground truth  Infective     89            10               1
          Non-infective     12            84               0
Table 1: Confusion matrix of the classification result
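For reference, the accuracy figures quoted in this section follow directly from Table 1; below is a minimal check in Python, where the counts are taken from Table 1 and everything else is assumed scaffolding.

```python
import numpy as np

# Rows: ground truth (infective, non-infective).
# Columns: classified as infective, non-infective, not classified (Table 1).
confusion = np.array([[89, 10, 1],
                      [12, 84, 0]])

total = confusion.sum()                      # 196 images in the experiment
correct = confusion[0, 0] + confusion[1, 1]  # 173 correctly classified images
print(f"overall accuracy:       {correct / total:.1%}")                       # 88.3%
print(f"infective accuracy:     {confusion[0, 0] / confusion[0].sum():.1%}")  # 89.0%
print(f"non-infective accuracy: {confusion[1, 1] / confusion[1].sum():.1%}")  # 87.5%
```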

Figure 5 shows infective disease images classified into the infective and non-infective disease classes by the proposed method. Non-infective disease images classified into the non-infective and infective disease classes are shown in Fig. 6. The proposed method was robust to differences in eye position, degree of eyelid opening, and brightness of illumination. Wrong classifications may be caused by specular reflections on the corneas. The position of opacified areas in the cornea is one criterion for differentiating infective and non-infective diseases, and both opacified areas and specular reflections on the corneas appear as white regions in the images. Therefore, misclassifications may be caused by the existence of specular reflections. A process for removing specular reflections from the eye is necessary to improve classification accuracy.

Figure 5: Samples of classification results of infective disease images: (a) correctly classified infective disease images; (b) incorrectly classified infective disease images (classified as non-infective).
Figure 6: Samples of classification results of non-infective disease images: (a) correctly classified non-infective disease images; (b) incorrectly classified non-infective disease images (classified as infective).

6 Conclusions

We proposed a method for classifying infective and non-infective disease cases from anterior eye images. Because anterior eye images show wide variations in appearance, our method focuses on distinguishing the cornea. The position of the opacified area in the cornea differs between the two disease cases. In our method, an object detection method finds an infective or non-infective cornea in the anterior eye images, and the two-class classification result of the image is derived from the object detection result. In our experiment, the proposed method correctly classified 88.3% of the images. The accuracy is high enough for the method to be used as diagnosis assistance for the anterior eye. Future work includes classification of images into more detailed disease classes and utilization of information from other anatomical structures.

Acknowledgements.
Parts of this research were supported by the AMED Grant Numbers 18lk1010028s0401, 19lk1010036h0001, and 19hs0110006h0003, the MEXT/JSPS KAKENHI Grant Numbers 26108006, 17H00867, and 17K20099, and the JSPS Bilateral International Collaboration Grants.

References

  • [1] A. Dasgupta and S. Singh (2017) A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation.
  • [2] H. Fu, Y. Xu, S. Lin, D. W. K. Wong, and J. Liu (2016) DeepVessel: retinal vessel segmentation via deep learning and conditional random field. Vol. 9901, pp. 132–139.
  • [3] Keras documentation. https://keras.io/
  • [4] P. Liskowski and K. Krawiec (2016) Segmenting retinal blood vessels with deep neural networks. Vol. 35, pp. 2369–2380.
  • [5] P. Prentas̆ic, M. Heisler, Z. Mammo, S. Lee, A. B. Merkur, E. V. Navajas, M. F. Beg, M. S̆arunic, and S. Loncaric (2016) Segmentation of the foveal microvasculature using deep learning networks. Vol. 21, pp. 132–139.
  • [6] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2015) You only look once: unified, real-time object detection. arXiv:1506.02640.
  • [7] J. Redmon and A. Farhadi (2018) YOLOv3: an incremental improvement. arXiv:1804.02767.
  • [8] S. Watson, M. Cabrera-Aguas, and P. Khoo (2018) Common eye infections. Vol. 41, pp. 67–72.
  • [9] A. Wu, Z. Xu, M. Gao, M. Buty, and D. J. Mollura (2016) Deep vessel tracking: a generalized probabilistic approach via deep learning.
  • [10] Y. Wu, Y. Xia, Y. Song, Y. Zhang, and W. Cai (2018) Multiscale network followed network model for retinal vessel segmentation. Vol. 11071, pp. 119–126.
  • [11] Y. Zhang and A. C. S. Chung (2018) Deep supervision with additional labels for retinal vessel segmentation task. Vol. 11071, pp. 83–91.