The Response Shift Paradigm to Quantify Human Trust in AI Recommendations

02/16/2022
by Ali Shafti, et al.

Explainability, interpretability, and how much they affect human trust in AI systems are ultimately problems of human cognition as much as of machine learning, yet the effectiveness of AI recommendations and the trust that end-users place in them are typically not evaluated quantitatively. We developed and validated a general-purpose human-AI interaction paradigm that quantifies the impact of AI recommendations on human decisions. In our paradigm, human users face quantitative prediction tasks: they give a first response, are then shown the AI's recommendation (and explanation), and finally provide an updated final response. The difference between the final and first responses constitutes the shift, or sway, in the human decision, which we use as a metric of the AI recommendation's impact on the human, representing the trust they place in the AI. We evaluated this paradigm with hundreds of users on Amazon Mechanical Turk in a multi-branched experiment that confronted users with good or poor AI systems offering good, poor, or no explainability. Our proof-of-principle paradigm allows the rapidly growing set of XAI/IAI approaches to be compared quantitatively in terms of their effect on the end-user, and opens up the possibility of (machine) learning trust.
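The core shift metric is easy to operationalize. Below is a minimal sketch in Python; the function names, the array-based interface, and the normalized variant (a weight-of-advice-style measure common in the advice-taking literature) are illustrative assumptions rather than the paper's published code.

```python
import numpy as np

def response_shift(first, final):
    """Raw per-trial shift: how far the final response moved from the first.

    A nonzero shift toward the AI's recommendation indicates that the
    user incorporated the advice; zero means the advice was ignored.
    """
    return np.asarray(final, dtype=float) - np.asarray(first, dtype=float)

def normalized_shift(first, final, recommendation):
    """Shift as the fraction of the gap between the user's first response
    and the AI's recommendation that the user closed (an illustrative,
    weight-of-advice-style normalization):
    1.0 = fully adopted the recommendation, 0.0 = ignored it.
    """
    first = np.asarray(first, dtype=float)
    final = np.asarray(final, dtype=float)
    rec = np.asarray(recommendation, dtype=float)
    gap = rec - first
    # Guard against division by zero when the first response already
    # matches the AI's recommendation: define the shift as 0 there.
    safe_gap = np.where(gap == 0, 1.0, gap)
    return np.where(gap == 0, 0.0, (final - first) / safe_gap)

# Example: a user first guesses 40, sees an AI recommendation of 60,
# and updates to 55 -- they closed 75% of the gap toward the AI.
print(response_shift([40.0], [55.0]))            # -> [15.]
print(normalized_shift([40.0], [55.0], [60.0]))  # -> [0.75]
```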

Related research

09/19/2023
Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust
We present an overview of the literature on trust in AI and AI trustwort...

10/02/2022
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Despite the proliferation of explainable AI (XAI) methods, little is und...

02/11/2020
Trust dynamics and user attitudes on recommendation errors: preliminary results
Artificial Intelligence based systems may be used as digital nudging tec...

11/16/2021
Will We Trust What We Don't Understand? Impact of Model Interpretability and Outcome Feedback on Trust in AI
Despite AI's superhuman performance in a variety of domains, humans are ...

02/28/2023
Steering Recommendations and Visualising Its Impact: Effects on Adolescents' Trust in E-Learning Platforms
Researchers have widely acknowledged the potential of control mechanisms...

02/24/2020
SupRB: A Supervised Rule-based Learning System for Continuous Problems
We propose the SupRB learning system, a new Pittsburgh-style learning cl...

04/27/2022
Exploring How Anomalous Model Input and Output Alerts Affect Decision-Making in Healthcare
An important goal in the field of human-AI interaction is to help users ...
