Two-Stream Deep Feature Modelling for Automated Video Endoscopy Data Analysis

07/12/2020 ∙ by Harshala Gammulle, et al.

Automating the analysis of imagery of the Gastrointestinal (GI) tract captured during endoscopy procedures has substantial potential benefits for patients, as it can provide diagnostic support to medical practitioners and reduce mistakes caused by human error. To further the development of such methods, we propose a two-stream model for endoscopic image analysis. Our model fuses two streams of deep feature inputs by mapping their inherent relations through a novel relational network model, to better model symptoms and classify the image. In contrast to handcrafted feature-based models, our proposed network is able to learn features automatically and outperforms existing state-of-the-art methods on two public datasets: KVASIR and Nerthus. Our extensive evaluations illustrate the importance of having two streams of inputs instead of a single stream, and also demonstrate the merits of the proposed relational network architecture for combining those streams.


1 Introduction

In medicine, endoscopy procedures on the Gastrointestinal (GI) tract play an important role in supporting domain experts to track down abnormalities within the GI tract of a patient. Such abnormalities may be a symptom of a life-threatening disease such as colorectal cancer. This analysis is typically carried out manually by a medical expert; detecting critical symptoms relies solely on the experience of the practitioner and is susceptible to human error. As such, we seek to automate the process of endoscopic video analysis, providing support to human experts during diagnosis.

Due to advancements in biomedical engineering, extensive research has been performed to support and improve the detection of anomalies via machine learning and computer vision techniques. These methods have shown great promise, and can detect abnormalities that can be easily missed by human experts [9, 25, 13]. Yet automated methods face multiple challenges when analysing endoscopic videos, due to overlaps between symptoms and the difficult imaging conditions.

Most previous endoscopy analysis approaches obtain a set of hand-crafted features and train models to detect abnormalities [2, 16]. For example, in [16] the encoded image features are obtained through a bidirectional marginal Fisher analysis (BMFA) and classified using a support vector machine (SVM). In [18], local binary patterns (LBP) and edge histogram features are used with logistic regression. A limitation of these hand-crafted methods is that they are highly dependent on the domain knowledge of the human designer, and as such risk losing information that best describes the image. Therefore, through the advancement of deep learning approaches and their ability to learn features automatically, research has focused on deep learning methods. However, training these deep learning models from scratch is time consuming and requires a great amount of data. To overcome this challenge, transfer learning has been widely used, whereby a deep neural network trained on a different domain is adapted to the target domain by fine-tuning some or all layers. Such approaches have been widely used for anomaly detection in endoscopy videos obtained from the GI tract. Recent methods [21, 19] for computer aided video endoscopy analysis predominantly extract discriminative features from a pre-trained convolutional neural network (CNN), and classify them using a classifier such as a Logistic Model Tree (LMT) or SVM. In [4], a Bayesian optimisation method is used to optimise the hyper-parameters of a CNN based model for endoscopy data analysis. In [1], the authors tested multiple existing pre-trained CNN network features to better detect abnormalities.

In [14] the authors propose an architecture that consists of two feature extractors. The outputs of these are multiplied using an outer product at each location of the image and are pooled to obtain an image descriptor; this architecture models local pairwise feature interactions. The authors of [26] introduce a hierarchical bilinear pooling framework, where multiple cross-layer bilinear modules are integrated to obtain information from intermediate convolution layers. In [15], several skip connections between different layers are added to detect objects at different scales and aspect ratios. In contrast, the proposed work extracts semantic features from different CNN layers and explicitly models the relationship between these through a novel relation mapping network.
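To make the comparison with these pooling-based fusion schemes concrete, the following minimal NumPy sketch illustrates the outer-product pooling described above, with toy dimensions and sum pooling assumed; it is not the exact implementation of [14].

```python
import numpy as np

# Toy feature maps from two extractors over the same spatial grid
# (values are random; real features would come from CNN layers).
h, w, c1, c2 = 7, 7, 64, 64
feat_a = np.random.rand(h, w, c1)
feat_b = np.random.rand(h, w, c2)

# Outer product of the two feature vectors at each spatial location,
# sum-pooled over all locations, flattened, and normalised.
descriptor = np.einsum('hwi,hwj->ij', feat_a, feat_b).reshape(-1)
descriptor = descriptor / np.linalg.norm(descriptor)
print(descriptor.shape)  # (c1 * c2,) = (4096,)
```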

In this paper, we introduce a relational reasoning approach [24] that is able to map the relationships among individual features extracted by a pre-trained deep neural network. We extract features from the mid layers of a pre-trained deep model and pass them through a relational network, which considers all possible relationships among individual features to classify an endoscopy image. Our primary evaluations are performed on the KVASIR dataset [21], containing endoscopic images across eight classes. We also evaluate the proposed model on the Nerthus dataset [20] to further demonstrate its effectiveness. For both datasets, the proposed method outperforms the existing state-of-the-art.

Figure 1: Proposed Model: The semantic features of the input image are extracted through two layers of a pre-trained ResNet50 network, and the relations among the encoded feature vectors are mapped through the relational network which facilitates the final classification.

2 Method

In this paper, we propose a deep relational model that obtains deep information from two feature streams, which are combined to determine the class of the input endoscopy image. An overview of our proposed architecture is given in Figure 1.

Training a CNN model from scratch is time consuming and requires a large dataset. Therefore, in practice it is more convenient to use a pre-trained network and adapt this to a target domain, and this has been shown to be an effective method in the computer vision [7, 8] and medical domains [1, 21]. To obtain the two feature streams we utilise a pre-trained ResNet50 [10] network, trained on ImageNet [23]. Training on large-scale datasets such as ImageNet [23] improves the ability of the network to capture important patterns in input images and translate them into more discriminative feature vectors, that support different computer vision tasks.
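As an illustration of this transfer-learning setup, the sketch below loads an ImageNet-pretrained ResNet50 in Keras and attaches a small task-specific head. It is a generic example rather than the proposed model; the input size and the classification head are assumptions.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

# Load a ResNet50 pre-trained on ImageNet and reuse it as a feature extractor,
# attaching a small head that is trained on the endoscopy data.
backbone = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False  # keep the pre-trained weights fixed (or fine-tune some layers)

x = layers.GlobalAveragePooling2D()(backbone.output)
outputs = layers.Dense(8, activation='softmax')(x)  # e.g. the 8 KVASIR classes
model = models.Model(backbone.input, outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```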

When extracting features from a pre-trained CNN model, features from earlier layers contain more local information than those from later layers, while later layers contain more semantic information [6]. Thus, combining such features offers more discriminative information and facilitates our final prediction task. In this study, we combine features from an earlier layer and a later layer of the pre-trained CNN model. This allows us to capture spatial and semantic features, both of which are useful for accurate classification of endoscopy images. We avoid features from the final layers as they are over-compressed and do not contain information relating to our task, instead containing information primarily for the task the network was previously trained on (i.e. large-scale object recognition on ImageNet). Our extracted features are further encoded through 1D convolutional and max pooling layers, and passed through a relational network to map the relationship between feature vectors, facilitating the final classification task.

2.1 Semantic Feature Extractor

The input image, $X$, is first passed through the Semantic Feature Extractor (SFE) module. The SFE is based on a ResNet50 pre-trained CNN, and features are extracted from two of its layers. We denote the respective features as,

$\theta_a = f^{SFE}_{a}(X)$,    (1)
$\theta_b = f^{SFE}_{b}(X)$,    (2)

where $w_a \times h_a \times c_a$ and $w_b \times h_b \times c_b$ denote the sizes of the respective three-dimensional feature tensors. We reshape these tensors to two dimensions such that they are of shape $(w_a h_a) \times c_a$ and $(w_b h_b) \times c_b$.
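A minimal sketch of such a two-output feature extractor is given below, assuming a Keras ResNet50 and the layer names reported in Section 3.3 (the names differ across Keras versions, so treat them as illustrative); the stand-in image batch is purely a placeholder.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.models import Model

# Build a feature extractor that taps two intermediate layers of a pre-trained ResNet50.
# Layer names follow Section 3.3; check resnet.summary() for the names in your Keras build.
resnet = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
sfe = Model(inputs=resnet.input,
            outputs=[resnet.get_layer('activation_36').output,
                     resnet.get_layer('activation_40').output])

images = preprocess_input(np.random.rand(2, 224, 224, 3) * 255.0)  # stand-in batch
theta_a, theta_b = sfe.predict(images)  # two 4D maps: (batch, h, w, c)

# Reshape each map into a 2D set of feature vectors, one vector per spatial location.
theta_a = theta_a.reshape(theta_a.shape[0], -1, theta_a.shape[-1])
theta_b = theta_b.reshape(theta_b.shape[0], -1, theta_b.shape[-1])
```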

2.2 Relational Network

The resultant two-dimensional feature vectors are passed through separate 1D convolution functions, $g_a$ and $g_b$, to further encode the features of the individual streams such that,

$\hat{\theta}_a = g_a(\theta_a)$,    (3)
$\hat{\theta}_b = g_b(\theta_b)$.    (4)

Then, through a relational network, $G$, we map all possible relations among the two input feature streams. Our relational network is inspired by the model introduced in [24]. However, there exists a clear distinction between the proposed architecture and that of [24]: [24] utilises a relational network to map the relationships among the pixels of a single input image, whereas the proposed work illustrates that a relational network can be effectively utilised to map the correspondences between two distinct feature streams. We define the output of the relational network, $R$, as,

$R = G(\hat{\theta}_a, \hat{\theta}_b)$,    (5)

where $G$ is composed of $g_{\phi}$ and $h_{\phi}$, which are Multi-Layer Perceptrons (MLPs), such that,

$R = h_{\phi}\big(\sum_{i,j} g_{\phi}(\hat{\theta}^{i}_{a}, \hat{\theta}^{j}_{b})\big)$,    (6)

where $\hat{\theta}^{i}_{a}$ and $\hat{\theta}^{j}_{b}$ denote the $i$-th and $j$-th encoded feature vectors of the respective streams. The resultant vector, $R$, is passed through a decoding function, $f^{D}$, which is composed of a layer of LSTM cells [11] and three fully connected layers, to generate the classification of the input image,

$y = f^{D}(R)$.    (7)
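The following NumPy sketch illustrates the computation in Eqs. (5) and (6) with toy dimensions and randomly initialised MLPs standing in for the learnt $g_{\phi}$ and $h_{\phi}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, hidden, out_dim):
    """Two-layer perceptron with ReLU, returned as a closure over random weights."""
    w1 = rng.standard_normal((in_dim, hidden))
    w2 = rng.standard_normal((hidden, out_dim))
    return lambda x: np.maximum(x @ w1, 0) @ w2

n_a, n_b, d = 6, 6, 32                       # encoded vectors per stream, feature size
theta_a_hat = rng.standard_normal((n_a, d))  # encoded stream one, Eq. (3)
theta_b_hat = rng.standard_normal((n_b, d))  # encoded stream two, Eq. (4)

g_phi = mlp(2 * d, 64, 64)
h_phi = mlp(64, 64, 32)

# Apply g_phi to every cross-stream pair (i, j) and sum the results, then apply h_phi.
pair_sum = sum(g_phi(np.concatenate([theta_a_hat[i], theta_b_hat[j]]))
               for i in range(n_a) for j in range(n_b))
R = h_phi(pair_sum)                          # relational representation, Eq. (6)
print(R.shape)
```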

3 Experiments

3.1 Datasets

We utilise two publicly available endoscopy datasets, KVASIR and Nerthus, to demonstrate the capability of our model to analyse endoscopy images and detect varying conditions within the GI tract.

The KVASIR Dataset [21] was released as part of the medical multimedia challenge presented by MediaEval [22]. It is based on images obtained from the GI tract via an endoscopy procedure. The dataset is composed of images that are annotated and verified by medical doctors, and captures 8 different classes. The classes are based on three anatomical landmarks (z-line, pylorus, cecum), three pathological findings (esophagitis, polyps, ulcerative colitis) and two other classes (dyed and lifted polyps, dyed resection margins) related to the polyp removal process. Overall, the dataset contains 8,000 endoscopic images, with 1,000 image examples per class. We utilise the standard test set released by the dataset authors, where 4,000 samples are used for model training and 4,000 for testing.

The Nerthus Dataset [20] is composed of 2,552 images from 150 colonoscopy videos. The dataset contains 4 different classes defined by the Boston Bowel Preparation Scale (BBPS) score, that ranks the cleanliness of the bowel and is an essential part of a successful colonoscopy (the endoscopy examination of the bowel). The number of examples per class lies within the range 160 to 980, and the data is annotated by medical doctors. We use the training/testing splits provided by the dataset authors.

3.2 Metrics

For the evaluations on the KVASIR dataset we utilise the accuracy, precision, recall, F1-score, specificity and Matthews correlation coefficient (MCC) metrics, as suggested in [21]. The evaluations on the Nerthus dataset utilise the accuracy metric.
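A sketch of how these metrics can be computed with scikit-learn is given below; macro averaging over the eight classes is assumed here, and [21] may define the multi-class averaging slightly differently.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             matthews_corrcoef, confusion_matrix)

def evaluate(y_true, y_pred, n_classes=8):
    """Compute accuracy, macro precision/recall/F1, MCC and macro specificity."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
    mcc = matthews_corrcoef(y_true, y_pred)
    # Per-class specificity = TN / (TN + FP), derived from the confusion matrix, then averaged.
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)
    fp = cm.sum(axis=0) - np.diag(cm)
    spec = np.mean(tn / (tn + fp))
    return acc, prec, rec, f1, mcc, spec
```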

3.3 Implementation Details

We use a pre-trained ResNet50 [10] network and extract features from two layers: ‘activation_36’ and ‘activation_40’. Feature shapes are () and () respectively. For the encoding of each feature stream we utilise a 1D convolution layer with a kernel size of 3 and 32 filters, followed by batch normalisation with ReLU activation [12] and a dropout layer with a dropout rate of 0.25. The LSTM used has 300 hidden units, and its output is further passed through three fully connected layers with dimensionalities of 256, 128 and k (the number of classes), respectively. The model is trained for 100 epochs using the RMSProp optimiser with a learning rate of 0.001 and learning rate decay. Implementation is completed in Keras [5] with a Theano [3] backend.
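The sketch below assembles one encoding stream and the decoder following these settings, assuming TensorFlow/Keras rather than the original Keras+Theano setup; the input shape is a placeholder, and the relational fusion of the second stream is omitted for brevity.

```python
from tensorflow.keras import layers, models, optimizers

inp = layers.Input(shape=(196, 1024))            # hypothetical (locations, channels) shape
x = layers.Conv1D(32, kernel_size=3, padding='same')(inp)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.Dropout(0.25)(x)

x = layers.LSTM(300)(x)                          # decoder: LSTM with 300 hidden units
x = layers.Dense(256, activation='relu')(x)
x = layers.Dense(128, activation='relu')(x)
out = layers.Dense(8, activation='softmax')(x)   # k = 8 classes for KVASIR

model = models.Model(inp, out)
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])
```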

3.4 Results

We use the KVASIR dataset for our primary evaluation and compare our results with recent state-of-the-art models (see Table 1). The first block of results in Table 1 contains the results obtained from various methods introduced for the MediaEval Challenge [22] on the KVASIR data. In [16], a dimensionality reduction method, bidirectional marginal Fisher analysis (BMFA), is combined with a Support Vector Machine (SVM) classifier; while in [18] a method that combines 6 different features (JCD, Edge Histogram, Color Layout, AutoColor Correlogram, LBP, Haralick) and classifies them with a logistic regressor is presented. Aside from hand-crafted feature based methods, in [21] ResNet50 CNN features are extracted and fed to a Logistic Model Tree (LMT) classifier, and in [19] a GoogLeNet based model is employed. The authors of [2] introduce an approach where a collection of hand-crafted features (Tamura, ColorLayout, EdgeHistogram and AutoColorCorrelogram) and deep CNN features (VGGNet and Inception-V3) is obtained, and a multi-class SVM is trained. This model records the highest performance among the previous state-of-the-art methods. However, with two streams of deep feature fusion and relation mapping, our proposed model is able to outperform [2] by 2.3% in accuracy, 5.1% in precision, 4.5% in recall, 5% in F1-score, 5.1% in MCC and 1.4% in specificity.

In [1], the authors test extracting features from input endoscopic images through different pre-trained networks and classifying them with a multi-class SVM. In Table 1 we show these results for ResNet50 features, MobileNet features, and a combined deep feature obtained from multiple pre-trained CNN networks. In our proposed method we also utilise features from a ResNet50 network, yet instead of naively combining features we utilise the proposed relational network to effectively attend to the feature vectors and derive salient features for classification.

Method Accuracy Precision Recall F1-score MCC Specificity
Liu [16] 0.926 0.703 0.703 0.703 0.660 0.958
Petscharnig [19] 0.939 0.755 0.755 0.755 0.720 0.965
Naqvi [18] 0.942 0.767 0.774 0.767 0.736 0.966
Pogorelov [21] 0.957 0.829 0.826 0.826 0.802 0.975
Agrawal [2] 0.961 0.847 0.852 0.847 0.827 0.978
ResNet50 [1] 0.611 - - - - -
MobileNet [1] 0.817 - - - - -
Combined feat. [1] 0.838 - - - - -
Proposed 0.984 0.898 0.897 0.897 0.878 0.992
Table 1: Evaluation results on the KVASIR dataset.

Figure 2 shows the confusion matrix for the evaluation results on the KVASIR dataset. For clarity we represent the classes as 0- ‘dyed-lifted-polyps’, 1- ‘dyed-resection-margins’, 2- ‘esophagitis’, 3- ‘normal-cecum’, 4- ‘normal-pylorus’, 5- ‘normal-z-line’, 6- ‘polyps’, 7- ‘ulcerative-colitis’. Confusions occur primarily between the normal-z-line and esophagitis classes, and a number of classes are classified correctly for all instances.

Figure 2: Confusion matrix illustration for the KVASIR dataset.

To further illustrate the importance of our two-stream architecture and the value of the relational network for combining these feature streams, we visualise (in Figure 3) the activations obtained from the LSTM layer of the proposed model and of two ablation models, each with only one input stream. The ablation model in Figure 3 (b) receives only the $\hat{\theta}_a$ feature stream as input, while the ablation model in Figure 3 (c) receives only the $\hat{\theta}_b$ feature stream as input. In ablation models (b) and (c), the relational network is used, as in [24], to model relationships within a single feature stream.

The activations are obtained for a randomly selected set of 500 images from the KVASIR test-set, and we use t-SNE [17] to plot them in two dimensions.
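A sketch of this visualisation step is shown below; the activation array is a placeholder for the (500, 300) matrix read out of the trained model's LSTM layer (e.g. via a Keras sub-model ending at that layer), and the labels stand in for the corresponding KVASIR classes.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholders for the real LSTM-layer activations and class labels of the 500 test images.
activations = np.random.rand(500, 300)
labels = np.random.randint(0, 8, size=500)

# Project the embeddings to 2D with t-SNE [17] and colour points by class.
points = TSNE(n_components=2, random_state=0).fit_transform(activations)
plt.scatter(points[:, 0], points[:, 1], c=labels, cmap='tab10', s=8)
plt.title('t-SNE of LSTM embeddings')
plt.show()
```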

Considering Fig. 3 (a), we observe that samples from a particular class are tightly grouped and clear separation exists between classes. However, in the ablation models (b) and (c) we observe significant overlaps between the embeddings from different classes, indicating that the model is not capable of discriminating between those classes. These visualisations provide further evidence of the importance of utilising multiple input streams, and how they can be effectively fused together with the proposed relational model to learn discriminative features to support the classification task.

Figure 3: 2D visualisation of embeddings extracted from the LSTM layer of (a) the proposed two-stream model, and of the two ablation models using (b) only the $\hat{\theta}_a$ stream and (c) only the $\hat{\theta}_b$ stream.

To demonstrate the effectiveness of our model on different problem domains, we also evaluated it on the Nerthus dataset [20]. While the task in this dataset, measuring the cleanliness of the bowel via the BBPS score, is less challenging than the abnormality classification task in the KVASIR dataset, it provides a different evaluation scenario with which to investigate the generalisability of the proposed approach. We obtained 100% accuracy when predicting the BBPS value with our proposed model, while the baseline model of [20] achieved only 95% accuracy. This clearly illustrates the applicability of the proposed architecture to different classification tasks within the domain of automated endoscopy image analysis.

4 Conclusion

Endoscopy image analysis is a challenging task, and automating this process can aid both the patient and the medical practitioner. Our approach differs significantly from previous approaches, which obtain handcrafted features or extract pre-trained CNN features and learn a classifier on top of these features. Our relational model, with two discriminative feature streams, is able to map dependencies between the feature streams to help detect and identify salient features, and outperforms state-of-the-art methods on the KVASIR and Nerthus datasets. Furthermore, as our model learns the image to label mapping automatically, it is applicable to detecting abnormalities in other medical domains beyond the analysis of endoscopy images.

5 Acknowledgement

The research presented in this paper was supported by an Australian Research Council (ARC) grant DP170100632.

References

  • [1] T. Agrawal, R. Gupta, and S. Narayanan (2019) On evaluating cnn representations for low resource medical image classification. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1363–1367. Cited by: §1, §2, §3.4, Table 1.
  • [2] T. Agrawal, R. Gupta, S. Sahu, and C. Y. Espy-Wilson (2017) SCL-umd at the medico task-mediaeval 2017: transfer learning based classification of medical images.. In MediaEval, Cited by: §1, §3.4, Table 1.
  • [3] R. Al-Rfou, G. Alain, A. Almahairi, C. Angermueller, D. Bahdanau, N. Ballas, F. Bastien, J. Bayer, A. Belikov, A. Belopolsky, et al. (2016) Theano: a python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688 472, pp. 473. Cited by: §3.3.
  • [4] R. J. Borgli, H. K. Stensland, M. A. Riegler, and P. Halvorsen (2019) Automatic hyperparameter optimization for transfer learning on medical image datasets using bayesian optimization. In 2019 13th International Symposium on Medical Information and Communication Technology (ISMICT), pp. 1–6. Cited by: §1.
  • [5] F. Chollet et al. (2015) Keras. Note: https://keras.io Cited by: §3.3.
  • [6] H. Gammulle, S. Denman, S. Sridharan, and C. Fookes (2017) Two stream lstm: a deep fusion framework for human action recognition. In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on, pp. 177–186. Cited by: §2.
  • [7] H. Gammulle, S. Denman, S. Sridharan, and C. Fookes (2019) Forecasting future action sequences with neural memory networks. British Machine Vision Conference (BMVC). Cited by: §2.
  • [8] H. Gammulle, S. Denman, S. Sridharan, and C. Fookes (2019) Predicting the future: a jointly learnt model for action anticipation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5562–5571. Cited by: §2.
  • [9] X. Guo and Y. Yuan (2019) Triple anet: adaptive abnormal-aware attention network for wce image classification. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 293–301. Cited by: §1.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §2, §3.3.
  • [11] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §2.2.
  • [12] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017-07) Image-to-image translation with conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.3.
  • [13] N. Kumar, A. V. Rajwade, S. Chandran, and S. P. Awate (2017) Kernel generalized-gaussian mixture model for robust abnormality detection. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 21–29. Cited by: §1.
  • [14] T. Lin, A. RoyChowdhury, and S. Maji (2015) Bilinear cnn models for fine-grained visual recognition. In Proceedings of the IEEE international conference on computer vision, pp. 1449–1457. Cited by: §1.
  • [15] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §1.
  • [16] Y. Liu, Z. Gu, and W. K. Cheung (2017) HKBU at mediaeval 2017 medico: medical multimedia task. Cited by: §1, §3.4, Table 1.
  • [17] L. v. d. Maaten and G. Hinton (2008) Visualizing data using t-sne. Journal of machine learning research 9 (Nov), pp. 2579–2605. Cited by: §3.4.
  • [18] S. S. A. Naqvi, S. Nadeem, M. Zaid, and M. A. Tahir (2017) Ensemble of texture features for finding abnormalities in the gastro-intestinal tract.. In MediaEval, Cited by: §1, §3.4, Table 1.
  • [19] S. Petscharnig, K. Schöffmann, and M. Lux (2017) An inception-like cnn architecture for gi disease and anatomical landmark classification.. In MediaEval, Cited by: §1, §3.4, Table 1.
  • [20] K. Pogorelov, K. R. Randel, T. de Lange, S. L. Eskeland, C. Griwodz, D. Johansen, C. Spampinato, M. Taschwer, M. Lux, P. T. Schmidt, et al. (2017) Nerthus: a bowel preparation quality video dataset. In Proceedings of the 8th ACM on Multimedia Systems Conference, pp. 170–174. Cited by: §1, §3.1, §3.4.
  • [21] K. Pogorelov, K. R. Randel, C. Griwodz, S. L. Eskeland, T. de Lange, D. Johansen, C. Spampinato, D. Dang-Nguyen, M. Lux, P. T. Schmidt, et al. (2017) Kvasir: a multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, pp. 164–169. Cited by: §1, §1, §2, §3.1, §3.2, §3.4, Table 1.
  • [22] M. Riegler, K. Pogorelov, P. Halvorsen, C. Griwodz, T. Lange, K. Randel, S. Eskeland, D. Nguyen, D. Tien, M. Lux, et al. (2017) Multimedia for medicine: the medico task at mediaeval 2017. Cited by: §3.1, §3.4.
  • [23] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. Cited by: §2.
  • [24] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap (2017) A simple neural network module for relational reasoning. In Advances in neural information processing systems, pp. 4967–4976. Cited by: §1, §2.2, §3.4.
  • [25] X. Wang, L. Ju, X. Zhao, and Z. Ge (2019) Retinal abnormalities recognition using regional multitask learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 30–38. Cited by: §1.
  • [26] C. Yu, X. Zhao, Q. Zheng, P. Zhang, and X. You (2018) Hierarchical bilinear pooling for fine-grained visual recognition. In Proceedings of the European conference on computer vision (ECCV), pp. 574–589. Cited by: §1.