Common Metrics to Benchmark Human-Machine Teams (HMT): A Review

08/11/2020
by Praveen Damacharla, et al.

A significant amount of work is being invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring the system performance of an HMT is crucial for advancing the design of these systems. Metrics are the enabling tools for devising a benchmark in any system and serve as an evaluation platform for assessing performance, as well as for verification and validation. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems; identifying and classifying common metrics is therefore imperative to establish a benchmark in the HMT field. The key focus of this review is a detailed survey aimed at identifying the metrics employed in different segments of HMT and determining the common metrics that can be used in the future to benchmark HMTs. We organize this review as follows: identification of the metrics used in HMTs to date, followed by classification based on functionality and measurement technique. We also analyze the identified metrics in detail, classifying them as theoretical, applied, real-time, non-real-time, measurable, or observable. We conclude with a detailed analysis of the identified common metrics and their use in benchmarking HMTs.
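The classification axes named in the abstract (theoretical vs. applied, real-time vs. non-real-time, measurable vs. observable) can be pictured as a small taxonomy. The following Python sketch is purely illustrative and not taken from the paper: the names `HMTMetric`, `Basis`, `Timing`, and `Observation`, and the example metrics, are assumptions used to show one way such a metric catalog might be encoded.

```python
from dataclasses import dataclass
from enum import Enum


class Basis(Enum):
    THEORETICAL = "theoretical"
    APPLIED = "applied"


class Timing(Enum):
    REAL_TIME = "real-time"
    NON_REAL_TIME = "non-real-time"


class Observation(Enum):
    MEASURABLE = "measurable"    # directly quantifiable, e.g. a timing or count
    OBSERVABLE = "observable"    # inferred from observation, e.g. a rated construct


@dataclass
class HMTMetric:
    """A single HMT metric tagged along the three classification axes above."""
    name: str
    basis: Basis
    timing: Timing
    observation: Observation


# Hypothetical entries, for illustration only.
catalog = [
    HMTMetric("task completion time", Basis.APPLIED, Timing.REAL_TIME, Observation.MEASURABLE),
    HMTMetric("situation awareness", Basis.THEORETICAL, Timing.NON_REAL_TIME, Observation.OBSERVABLE),
]

# Filtering by one axis, e.g. to see which metrics could be tracked in real time.
real_time = [m.name for m in catalog if m.timing is Timing.REAL_TIME]
print(real_time)  # ['task completion time']
```

A structure like this is only a convenience for tabulating survey results; the paper's actual groupings and metric names should be taken from the full text.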

