Towards human-agent knowledge fusion (HAKF) in support of distributed coalition teams

10/23/2020
by Dave Braines, et al.

Future coalition operations can be substantially augmented through agile teaming between human and machine agents, but in a coalition context these agents may be unfamiliar to the human users and expected to operate across a broad set of scenarios rather than being narrowly defined for particular purposes. In such a setting it is essential that the human agents can rapidly build trust in the machine agents through appropriate transparency of their behaviour, e.g., through explanations. The human agents can also bring their local knowledge to the team, observing the unfolding situation and deciding which key information should be communicated to the machine agents so that they can better account for the particular environment. In this paper we describe the initial steps towards this human-agent knowledge fusion (HAKF) environment through a recap of the key requirements, and an explanation of how these can be fulfilled for an example situation. We show how HAKF has the potential to bring value to both human and machine agents working as part of a distributed coalition team in a complex event processing setting with uncertain sources.
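The bidirectional flow the abstract describes (machine agents exposing explanations to build trust, human agents contributing local knowledge so the machine can account for its environment) might be sketched as follows. This is a minimal illustration only: the `MachineAgent` class, its methods, and the event labels are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class MachineAgent:
    """Toy machine agent that labels events and explains each decision."""
    # Human-supplied local knowledge: event description -> label
    local_knowledge: dict = field(default_factory=dict)

    def classify(self, event: str) -> tuple[str, str]:
        """Return (label, explanation) for an observed event."""
        if event in self.local_knowledge:
            label = self.local_knowledge[event]
            why = f"'{event}' -> '{label}' (from human-supplied local knowledge)"
        else:
            label = "unknown"
            why = f"'{event}' is not covered by current knowledge"
        return label, why

    def tell(self, event: str, label: str) -> None:
        """Human agent contributes local knowledge to the shared model."""
        self.local_knowledge[event] = label

agent = MachineAgent()
label, why = agent.classify("convoy at checkpoint")   # label == "unknown"
# The explanation reveals the gap; the human fills it with local context:
agent.tell("convoy at checkpoint", "routine supply run")
label2, why2 = agent.classify("convoy at checkpoint")  # label2 == "routine supply run"
```

The point of the sketch is the loop, not the classifier: the explanation string is what lets the human see why the agent failed, and the `tell` call is the return channel by which local knowledge reaches the machine.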

Related research

- 05/22/2023: Enabling Team of Teams: A Trust Inference and Propagation (TIP) Model in Multi-Human Multi-Robot Teams. "Trust has been identified as a central factor for effective human-robot ..."
- 03/24/2022: Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent Teaming. "Research in multi-agent teaming has increased substantially over recent ..."
- 04/13/2013: Justificatory and Explanatory Argumentation for Committing Agents. "In the interaction between agents we can have an explicative discourse, ..."
- 02/05/2020: 'Why not give this work to them?' Explaining AI-Moderated Task-Allocation Outcomes using Negotiation Trees. "The problem of multi-agent task allocation arises in a variety of scenar..."
- 11/10/2020: Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences. "Multi-agent reinforcement learning (MARL) has shown recent success in in..."
- 06/09/2011: Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach. "Recent years are seeing an increasing need for on-line monitoring of tea..."
- 10/25/2021: Observable and Attention-Directing BDI Agents for Human-Autonomy Teaming. "Human-autonomy teaming (HAT) scenarios feature humans and autonomous age..."
