Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

07/28/2022
by   Kenneth Holstein, et al.

In many real-world contexts, successful human-AI collaboration requires humans to productively integrate complementary sources of information into AI-informed decisions. In practice, however, human decision-makers often lack understanding of what information an AI model has access to relative to themselves. Few guidelines exist for how to effectively communicate about unobservables: features that may influence the outcome but are unavailable to the model. In this work, we conducted an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions. Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance. Furthermore, the impacts of these prompts can vary depending on decision-makers' prior domain expertise. We conclude by discussing implications for future research and the design of AI-based decision support tools.

