Will We Trust What We Don't Understand? Impact of Model Interpretability and Outcome Feedback on Trust in AI

11/16/2021
by Daehwan Ahn, et al.

Despite AI's superhuman performance in a variety of domains, humans are often unwilling to adopt AI systems. The lack of interpretability inherent in many modern AI techniques is believed to be hurting their adoption, as users may not trust systems whose decision processes they do not understand. We investigate this proposition with a novel experiment in which we use an interactive prediction task to analyze the impact of interpretability and outcome feedback on trust in AI and on human performance in AI-assisted prediction tasks. We find that interpretability led to no robust improvements in trust, while outcome feedback had a significantly greater and more reliable effect. However, both factors had modest effects on participants' task performance. Our findings suggest that (1) factors receiving significant attention, such as interpretability, may be less effective at increasing trust than factors like outcome feedback, and (2) augmenting human performance via AI systems may not be a simple matter of increasing trust in AI, as increased trust is not always associated with equally sizable improvements in performance. These findings invite the research community to focus not only on methods for generating interpretations but also on techniques for ensuring that interpretations impact trust and performance in practice.


