Objective evaluation metrics for automatic classification of EEG events

by Saeedeh Ziyabari, et al.

The evaluation of machine learning algorithms in biomedical fields for applications involving sequential data lacks standardization. Common quantitative scalar evaluation metrics such as sensitivity and specificity can often be misleading depending on the requirements of the application. Evaluation metrics must ultimately reflect the needs of users yet be sufficiently sensitive to guide algorithm development. Feedback from critical care clinicians who use automated event detection software in clinical applications has been overwhelmingly emphatic that a low false alarm rate, typically measured in units of the number of errors per 24 hours, is the single most important criterion for user acceptance. Though using a single metric is often less insightful than examining performance over a range of operating conditions, there is a need for a single scalar figure of merit. In this paper, we discuss the deficiencies of existing metrics for a seizure detection task and propose several new metrics that offer a more balanced view of performance. We demonstrate these metrics on a seizure detection task based on the TUH EEG Corpus. We show that two promising metrics are the Actual Term-Weighted Value (ATWV), a measure borrowed from the spoken term detection literature, and a new metric, Time-Aligned Event Scoring (TAES), which accounts for the temporal alignment of the hypothesis to the reference annotation. We also demonstrate that state-of-the-art technology based on deep learning, though impressive in its performance, still needs significant improvement before it will meet very strict user acceptance guidelines.
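The core idea behind temporal-alignment scoring can be illustrated with a small sketch. The exact TAES definition is given in the paper; the code below is a simplified, hypothetical illustration of overlap-based event scoring, assuming reference and hypothesis events are represented as `(start, stop)` interval pairs in seconds. Each reference event earns a fractional hit equal to the fraction of its duration covered by hypothesis events, and hypothesis time that overlaps no reference event accrues as false-alarm time.

```python
def overlap(a, b):
    """Length in seconds of the overlap between two (start, stop) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def taes_like_score(ref_events, hyp_events):
    """Simplified temporal-overlap scoring (illustrative only, not the
    paper's exact TAES formula).

    Returns (hits, fa_time): fractional detections summed over reference
    events, and total hypothesis time matching no reference event.
    """
    hits = 0.0
    for r in ref_events:
        dur = r[1] - r[0]
        covered = sum(overlap(r, h) for h in hyp_events)
        hits += min(covered, dur) / dur  # cap at one full hit per event
    fa_time = 0.0
    for h in hyp_events:
        dur = h[1] - h[0]
        covered = sum(overlap(h, r) for r in ref_events)
        fa_time += max(0.0, dur - covered)
    return hits, fa_time

# Example: one 10 s reference seizure; the hypothesis covers 6 s of it
# and extends 4 s beyond it.
hits, fa = taes_like_score([(100.0, 110.0)], [(104.0, 114.0)])
# hits = 0.6 (partial credit for partial overlap), fa = 4.0 seconds
```

Unlike a simple event-counting metric, this style of scoring rewards hypotheses that align tightly in time with the reference annotation, which is why it tracks clinical acceptability (false alarm time) more directly than sensitivity alone.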

