Culture-Based Explainable Human-Agent Deconfliction

by Alex Raymond, et al.

Law codes and regulations have helped organise societies for centuries, and as AI systems gain more autonomy, we ask how human-agent systems can operate as peers under the same norms, especially when resources are contended. We posit that agents must be accountable and explainable, able to refer to the rules that justify their decisions. The need for explanations is associated with user acceptance and trust. This paper's contribution is twofold: i) we propose an argumentation-based human-agent architecture that maps human regulations into a culture for artificial agents with explainable behaviour. Our architecture relies on the notion of argumentative dialogues and generates explanations from the history of such dialogues; and ii) we validate our architecture with a user study in the context of human-agent path deconfliction. Our results show that explanations yield a significantly greater improvement in human performance when systems are more complex. Consequently, we argue that the criteria defining the need for explanations should also consider the complexity of the system. Qualitative findings show that when rules are more complex, explanations significantly reduce humans' perception of challenge.

