An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning applied to Gastrointestinal Tract Abnormality Classification

05/08/2020
by Vajira Thambawita, et al.

Precise and efficient automated identification of gastrointestinal (GI) tract diseases can help doctors treat more patients and improve the rate of disease detection and identification. Automatic analysis of GI tract diseases is currently a hot topic in both computer-science and medical journals. Nevertheless, the evaluation of such automatic analyses is often incomplete or simply wrong: algorithms are frequently tested only on small, biased datasets, and cross-dataset evaluations are rarely performed. A clear understanding of evaluation metrics and of how machine learning models behave across datasets is crucial to bring research in the field to a new quality level. Towards this goal, we present comprehensive evaluations of five distinct machine learning models, using Global Features and Deep Neural Networks, that classify 16 key types of GI tract conditions, including pathological findings, anatomical landmarks, polyp-removal conditions, and normal findings, from images captured by common GI tract examination instruments. In our evaluation, we introduce performance hexagons built from six performance metrics, namely recall, precision, specificity, accuracy, F1-score, and the Matthews Correlation Coefficient (MCC), to show how to determine the real capabilities of a model rather than evaluating it superficially. Furthermore, we perform cross-dataset evaluations, using different datasets for training and testing, and thereby demonstrate how challenging it is to build a model that generalizes across different hospitals. Our experiments clearly show that more sophisticated performance metrics and evaluation methods are needed to obtain reliable models, rather than relying on evaluations over splits of the same dataset; in particular, the performance metrics should always be interpreted together rather than relying on a single metric.
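To make the six metrics of the performance hexagon concrete, the following sketch computes them from binary confusion-matrix counts. The function name `performance_hexagon` and the example counts are illustrative assumptions, not taken from the paper; the example is chosen so that accuracy looks high on an imbalanced split while MCC and F1 tell a more cautious story.

```python
import math

def performance_hexagon(tp, fp, tn, fn):
    """Compute the six 'performance hexagon' metrics from binary
    confusion-matrix counts (illustrative helper, not from the paper)."""
    recall = tp / (tp + fn)                      # sensitivity / true positive rate
    precision = tp / (tp + fp)                   # positive predictive value
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"recall": recall, "precision": precision,
            "specificity": specificity, "accuracy": accuracy,
            "f1": f1, "mcc": mcc}

# Hypothetical imbalanced test set: 100 positives, 900 negatives.
metrics = performance_hexagon(tp=90, fp=30, tn=870, fn=10)
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Here accuracy comes out at 0.96 while precision is only 0.75, which illustrates the abstract's point that the metrics should be interpreted together rather than relying on a single one.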
