Optimizing AI for Teamwork

04/27/2020
by Gagan Bansal, et al.

In many high-stakes domains such as criminal justice, finance, and healthcare, AI systems may recommend actions to a human expert responsible for final decisions, a context known as AI-advised decision making. When AI practitioners deploy the most accurate system in these domains, they implicitly assume that the system will function alone in the world. We argue that the most accurate AI teammate is not necessarily the best teammate; for example, predictable performance may be worth a slight sacrifice in AI accuracy. We therefore propose training AI systems in a human-centered manner and directly optimizing for team performance. We study this proposal for a specific type of human-AI team, in which the human overseer chooses either to accept the AI recommendation or to solve the task themselves. To optimize team performance, we maximize the team's expected utility, expressed in terms of the quality of the final decision, the cost of verifying, and individual accuracies. Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the improvements in utility, while small and varying across datasets and parameters (such as the cost of a mistake), are real and consistent with our definition of team utility. We discuss the shortcomings of current optimization approaches beyond well-studied loss functions such as log-loss, and encourage future work on human-centered optimization problems motivated by human-AI collaboration.
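The abstract describes maximizing a team's expected utility over two outcomes: the human accepts the AI recommendation, or solves the task themselves. A minimal sketch of such a utility, assuming a simple linear model of rewards and costs (all names, parameter values, and the exact functional form below are illustrative assumptions, not the paper's implementation):

```python
def team_expected_utility(
    p_accept: float,      # probability the human accepts the AI recommendation
    acc_ai: float,        # AI accuracy on the instances it handles
    acc_human: float,     # human accuracy when solving the task alone
    reward_correct: float = 1.0,  # utility of a correct final decision
    cost_mistake: float = 5.0,    # penalty for an incorrect final decision
    cost_solve: float = 0.2,      # cost the human pays to verify/solve themselves
) -> float:
    """Expected team utility under a two-branch accept-or-solve model."""
    # Branch 1: human accepts the AI's recommendation.
    u_accept = acc_ai * reward_correct - (1 - acc_ai) * cost_mistake
    # Branch 2: human solves the task, paying an extra solving cost.
    u_solve = (acc_human * reward_correct
               - (1 - acc_human) * cost_mistake
               - cost_solve)
    return p_accept * u_accept + (1 - p_accept) * u_solve


# Under this toy model, a slightly less accurate but more predictable AI can
# win: what matters is the whole expectation, not acc_ai in isolation.
print(team_expected_utility(p_accept=0.8, acc_ai=0.9, acc_human=0.95))
```

Because the mistake penalty is large relative to the reward, the utility-maximizing training objective can differ substantially from plain accuracy maximization, which is the gap the paper's human-centered optimization targets.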


