Human Uncertainty in Concept-Based AI Systems

03/22/2023
by Katherine M. Collins, et al.

Placing a human in the loop may abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks arising from human error and uncertainty within such human-AI interactions is an important and understudied issue. In this work, we study human uncertainty in the context of concept-based models, a family of AI systems that enable human feedback via concept interventions where an expert intervenes on human-interpretable concepts relevant to the task. Prior work in this space often assumes that humans are oracles who are always certain and correct. Yet, real-world decision-making by humans is prone to occasional mistakes and uncertainty. We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans. We show that training with uncertain concept labels may help mitigate weaknesses of concept-based systems when handling uncertain interventions. These results allow us to identify several open challenges, which we argue can be tackled through future multidisciplinary research on building interactive uncertainty-aware systems. To facilitate further research, we release a new elicitation platform, UElic, to collect uncertain feedback from humans in collaborative prediction tasks.
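The concept-intervention setting described above can be sketched in a few lines. The snippet below is a minimal, illustrative concept-bottleneck model (not the paper's actual implementation): inputs map to human-interpretable concepts, concepts map to a label, and a human may override a concept. A hard intervention assumes an oracle; an uncertainty-weighted intervention blends the human's value with the model's own concept prediction according to the human's stated confidence. All weights, shapes, and the `confidence` blending rule here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 8-dim input, 4 concepts, 3 classes.
n_concepts, n_classes = 4, 3
W_xc = rng.normal(size=(8, n_concepts))          # input -> concept logits
W_cy = rng.normal(size=(n_concepts, n_classes))  # concepts -> class logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, intervention=None, confidence=None):
    """Predict class probabilities, optionally intervening on concepts.

    intervention: dict {concept_index: human concept value in [0, 1]}.
    confidence:   dict {concept_index: human certainty in [0, 1]};
                  an uncertain intervention is blended with the model's
                  own concept prediction rather than overwriting it.
    """
    c = sigmoid(x @ W_xc)  # model's predicted concept activations
    if intervention:
        for i, value in intervention.items():
            w = 1.0 if confidence is None else confidence.get(i, 1.0)
            # w = 1.0 recovers the usual oracle-style hard intervention.
            c[i] = w * value + (1.0 - w) * c[i]
    return softmax(c @ W_cy)

x = rng.normal(size=8)
p_plain = predict(x)                                             # no human input
p_hard = predict(x, intervention={0: 1.0})                       # oracle assumption
p_soft = predict(x, intervention={0: 1.0}, confidence={0: 0.6})  # uncertain human
```

The point of the soft variant is that an unsure annotator's input moves the concept only partway, which is one simple way a system could consume the kind of soft labels collected in CUB-S or through UElic.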


Related research

06/27/2022 · Human-AI Collaboration in Decision-Making: Beyond Learning to Defer
Human-AI collaboration (HAIC) in decision-making aims to create synergis...

08/03/2023 · VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception
AI alignment refers to models acting towards human-intended goals, prefe...

09/19/2022 · Concept Embedding Models
Deploying AI-powered systems requires trustworthy models supporting effe...

01/15/2020 · AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings
To facilitate the widespread acceptance of AI systems guiding decision-m...

12/13/2021 · Role of Human-AI Interaction in Selective Prediction
Recent work has shown the potential benefit of selective prediction syst...

11/25/2021 · Meaningful human control over AI systems: beyond talking the talk
The concept of meaningful human control has been proposed to address res...

12/01/2021 · Collaborative AI Needs Stronger Assurances Driven by Risks
Collaborative AI systems (CAISs) aim at working together with humans in ...
