Levels of explainable artificial intelligence for human-aligned conversational explanations

07/07/2021
by   Richard Dazeley, et al.

Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investment by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day, and the public needs to understand the decision-making process to accept the outcomes. However, the vast majority of applications of XAI/IML focus on providing low-level 'narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insight into an agent's: beliefs and motivations; hypotheses about other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, it surveys current approaches and discusses the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), thereby moving towards high-level 'strong' explanations.


