What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on AI Systems

05/23/2023
by Navita Goyal, et al.

AI systems have shown impressive performance at answering questions by retrieving relevant context. However, as models grow increasingly large, it is impossible, and often undesirable, to constrain a model's knowledge or reasoning to only the retrieved context. This creates a mismatch between the information the model uses to derive its answer and the information available to the user who must assess that answer. In this work, we study how users interact with AI systems in the absence of sufficient information to assess AI predictions. Further, we ask whether adding the requisite background alleviates concerns around over-reliance on AI predictions. Our study reveals that users rely on AI predictions even when they lack the information needed to assess their correctness. Providing the relevant background, however, helps users catch AI errors, reducing over-reliance on incorrect AI predictions. On the flip side, background information also increases users' confidence in both their correct and incorrect judgments. Contrary to common expectation, aiding users' perusal of the context and the background through highlights does not alleviate the over-confidence stemming from the availability of more information. Our work aims to highlight the gap between how NLP developers perceive users' informational needs in human-AI interaction and how users actually interact with the information available to them.


