A critical analysis of metrics used for measuring progress in artificial intelligence

08/06/2020
by Kathrin Blagec, et al.

Comparing model performance on benchmark datasets is an integral part of measuring and driving progress in artificial intelligence. A model's performance on a benchmark dataset is commonly assessed based on a single metric or a small set of performance metrics. While this enables quick comparisons, it may also entail the risk of inadequately reflecting model performance if the metric does not sufficiently cover all performance characteristics. Currently, it is unknown to what extent this might impact current benchmarking efforts. To address this question, we analysed the current landscape of performance metrics based on data covering 3867 machine learning model performance results from the web-based open platform 'Papers with Code'. Our results suggest that the large majority of metrics currently used to evaluate classification AI benchmark tasks have properties that may result in an inadequate reflection of a classifier's performance, especially when used with imbalanced datasets. While alternative metrics that address these problematic properties have been proposed, they are currently rarely applied as performance metrics in benchmarking tasks. Finally, we noticed that the reporting of metrics was partly inconsistent and partly unspecific, which may lead to ambiguities when comparing model performance.
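The abstract's point about imbalanced datasets can be illustrated with a minimal sketch (this example is not from the paper; the choice of accuracy versus balanced accuracy as the contrasting metrics is an assumption for illustration). A degenerate classifier that always predicts the majority class scores highly on accuracy, while balanced accuracy, the mean of per-class recall, exposes it as no better than chance.

```python
# Illustrative sketch (not from the paper): on an imbalanced dataset,
# plain accuracy can look strong while a class-sensitive metric such
# as balanced accuracy reveals a trivial majority-class classifier.

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recall; each class contributes equally,
    # regardless of how many examples it has.
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 95 negatives, 5 positives; the model always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))           # 0.95 -- looks strong
print(balanced_accuracy(y_true, y_pred))  # 0.5  -- no better than chance
```

The same gap motivates metrics such as the Matthews correlation coefficient, which the imbalance-aware alternatives discussed in the literature also address.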
