"Why not give this work to them?" Explaining AI-Moderated Task-Allocation Outcomes using Negotiation Trees

02/05/2020
by   Zahra Zahedi, et al.

The problem of multi-agent task allocation arises in a variety of scenarios involving human teams. In many such settings, human teammates may act with selfish motives and try to minimize their own cost metrics. In the absence of (1) complete knowledge about the rewards of other agents and (2) the team's overall cost associated with a particular allocation outcome, distributed algorithms can only arrive at sub-optimal solutions within a reasonable amount of time. To address these challenges, we introduce the notion of an AI Task Allocator (AITA) that, with complete knowledge, comes up with fair allocations that strike a balance between individual human costs and the team's performance cost. To ensure that AITA is explicable to the humans, we allow each human agent to question AITA's proposed allocation with counterfactual allocations. In response, we design AITA to provide a replay of the negotiation tree that acts as an explanation, showing why the counterfactual allocation, with the correct costs, would eventually result in a sub-optimal allocation. This explanation also updates a human's incomplete knowledge about their teammates' and the team's actual costs. We then use human-factors studies to investigate whether humans are (1) able to understand the explanations provided and (2) convinced by them. Finally, we show the effect of various kinds of incompleteness on the length of explanations. We conclude that underestimation of others' costs often leads to the need for explanations and, in turn, to longer explanations on average.
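To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how an AITA-style allocator with complete knowledge might evaluate a human's counterfactual allocation against its own proposal using the true costs. All names here (TRUE_COSTS, allocation_cost, best_allocation, the agents and tasks) are hypothetical illustrations; the paper's actual explanation is a negotiation tree, not a single cost comparison.

```python
# Hypothetical sketch: comparing AITA's allocation with a human's
# counterfactual under the true (complete-knowledge) costs.
from itertools import permutations

# True per-agent cost of performing each task (known only to AITA).
TRUE_COSTS = {
    "alice": {"t1": 4, "t2": 7},
    "bob":   {"t1": 6, "t2": 3},
}

def allocation_cost(allocation):
    """Total team cost of an allocation {agent: task} under the true costs."""
    return sum(TRUE_COSTS[agent][task] for agent, task in allocation.items())

def best_allocation(agents, tasks):
    """Brute-force the minimum-cost one-task-per-agent allocation."""
    best, best_cost = None, float("inf")
    for perm in permutations(tasks):
        alloc = dict(zip(agents, perm))
        cost = allocation_cost(alloc)
        if cost < best_cost:
            best, best_cost = alloc, cost
    return best, best_cost

if __name__ == "__main__":
    agents, tasks = ["alice", "bob"], ["t1", "t2"]
    proposed, proposed_cost = best_allocation(agents, tasks)

    # A counterfactual raised by a human who underestimates others' costs:
    # "Why not give t1 to bob and t2 to alice?"
    counterfactual = {"alice": "t2", "bob": "t1"}
    cf_cost = allocation_cost(counterfactual)

    print(f"AITA's allocation {proposed} has true cost {proposed_cost}")
    print(f"Counterfactual {counterfactual} has true cost {cf_cost}")
    # In the paper, this comparison is unrolled as a negotiation tree, so the
    # questioning human also learns which of their cost estimates were wrong.
```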

