On the Definition of Appropriate Trust and the Tools that Come with it

09/21/2023
by Helena Löfström, et al.

Evaluating the efficiency of human-AI interaction is challenging because it involves both subjective and objective quality aspects. With the focus on the human experience of explanations, evaluations of explanation methods have become mostly subjective, making comparative evaluation almost impossible and highly dependent on the individual user. However, it is commonly agreed that one aspect of explanation quality is how effectively the user can detect whether the model's predictions are trustworthy and correct, i.e., whether the explanations can increase the user's appropriate trust in the model. This paper starts from the definitions of appropriate trust found in the literature and compares them with model performance evaluation, showing strong similarities between the two. The paper's main contribution is a novel approach to evaluating appropriate trust that takes advantage of these similarities. It offers several straightforward evaluation methods for different aspects of user performance, including a method for measuring uncertainty and appropriate trust in regression.
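The parallel the abstract draws between appropriate trust and model performance evaluation can be illustrated with a minimal sketch: treat the user's decision to rely on or reject each prediction as a binary classifier over the model's actual correctness, and score it with a standard performance metric. All names and data below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's method): scoring appropriate trust
# like classifier accuracy. user_relied[i] is True if the user accepted
# prediction i; model_correct[i] is True if prediction i was right.

def appropriate_trust_score(user_relied, model_correct):
    """Fraction of cases where the user's reliance matched model correctness."""
    assert len(user_relied) == len(model_correct) and len(user_relied) > 0
    matches = sum(r == c for r, c in zip(user_relied, model_correct))
    return matches / len(user_relied)

# Hypothetical interaction log: the user relied on four predictions
# (three of them correct) and rejected two (one actually wrong).
user_relied   = [True, True, True, True, False, False]
model_correct = [True, True, True, False, False, True]

print(appropriate_trust_score(user_relied, model_correct))  # 4/6 ≈ 0.667
```

A score of 1.0 would mean the user relied on the model exactly when it was correct; lower scores indicate over- or under-trust, which finer-grained metrics (e.g. a confusion matrix over reliance vs. correctness) could separate.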


Related Research

03/25/2022 - A Meta Survey of Quality Evaluation Criteria in Explanation Methods
  Explanation methods and their evaluation have become a significant issue...

12/11/2018 - Metrics for Explainable AI: Challenges and Prospects
  The question addressed in this paper is: If we present to a user an AI s...

10/05/2022 - Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology
  The rapid development of Artificial Intelligence (AI) requires developer...

04/18/2023 - A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective
  User trust in Artificial Intelligence (AI) enabled systems has been incr...

07/26/2019 - How model accuracy and explanation fidelity influence user trust
  Machine learning systems have become popular in fields such as marketing...

04/27/2022 - Exploring How Anomalous Model Input and Output Alerts Affect Decision-Making in Healthcare
  An important goal in the field of human-AI interaction is to help users ...

09/15/2019 - X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
  We present a new explainable AI (XAI) framework aimed at increasing just...
