
A critical analysis of metrics used for measuring progress in artificial intelligence

by Kathrin Blagec, et al.

Comparing model performances on benchmark datasets is an integral part of measuring and driving progress in artificial intelligence. A model's performance on a benchmark dataset is commonly assessed with a single performance metric or a small set of metrics. While this enables quick comparisons, it also entails the risk of inadequately reflecting model performance if the metrics do not sufficiently cover all performance characteristics. It is currently unknown to what extent this might affect benchmarking efforts. To address this question, we analysed the current landscape of performance metrics based on data covering 3867 machine learning model performance results from the web-based open platform 'Papers with Code'. Our results suggest that the large majority of metrics currently used to evaluate classification AI benchmark tasks have properties that may result in an inadequate reflection of a classifier's performance, especially when used with imbalanced datasets. While alternative metrics that address these problematic properties have been proposed, they are currently rarely applied as performance metrics in benchmarking tasks. Finally, we noticed that the reporting of metrics was partly inconsistent and partly unspecific, which may lead to ambiguities when comparing model performances.
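The abstract's point about imbalanced datasets can be made concrete with a small sketch (illustrative only, not taken from the paper): on a heavily imbalanced binary task, plain accuracy rewards a degenerate majority-class predictor, while a metric such as the Matthews correlation coefficient (MCC) exposes its lack of discriminative power.

```python
# Illustrative sketch: accuracy vs. Matthews correlation coefficient (MCC)
# on an imbalanced binary classification task. The data and "classifier"
# here are hypothetical, chosen only to make the contrast visible.
import math

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mcc(y_true, y_pred):
    # MCC computed from the binary confusion matrix entries.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Imbalanced test set: 95 negatives, 5 positives.
y_true = [0] * 95 + [1] * 5
# A degenerate model that always predicts the majority class.
y_pred = [0] * 100

print(accuracy(y_true, y_pred))  # 0.95 -- looks strong
print(mcc(y_true, y_pred))       # 0.0  -- no discriminative power
```

The same 0.95 accuracy score would be reported on a benchmark leaderboard as if the model performed well, which is exactly the kind of inadequate reflection of classifier performance the study describes.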

