Natural Language Generation Challenges for Explainable AI

11/20/2019
by Ehud Reiter

Good-quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, be targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective and highlight four specific NLG-for-XAI research challenges.
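To make these desiderata concrete, here is a minimal, hypothetical sketch (not from the paper) of a template-based NLG step that gives an explanation a simple outcome-then-causes structure and surfaces model uncertainty and data-quality caveats. All names (`Prediction`, `explain_prediction`) and the confidence thresholds are illustrative assumptions, not the author's method.

```python
# Illustrative sketch only: a template-based explanation generator that
# hedges its wording by model confidence and flags data-quality issues.
from dataclasses import dataclass, field

@dataclass
class Prediction:
    label: str                  # the AI system's output, e.g. "high risk"
    confidence: float           # model confidence in [0, 1]
    key_factors: list[str] = field(default_factory=list)     # causes, most important first
    missing_fields: list[str] = field(default_factory=list)  # data-quality problems

def explain_prediction(p: Prediction) -> str:
    # Narrative/causal structure: lead with the outcome, then give the reasons.
    hedge = ("almost certainly" if p.confidence > 0.9
             else "probably" if p.confidence > 0.6
             else "possibly")
    sentences = [f"The system judged this case as {hedge} '{p.label}'."]
    if p.key_factors:
        sentences.append("The main factors were " + ", ".join(p.key_factors) + ".")
    # Highlight where data quality affects the output, as the abstract urges.
    if p.missing_fields:
        sentences.append("Note: " + ", ".join(p.missing_fields) +
                         " were missing, which reduces confidence in this result.")
    return " ".join(sentences)

if __name__ == "__main__":
    print(explain_prediction(Prediction(
        label="high risk", confidence=0.72,
        key_factors=["elevated blood pressure", "smoking history"],
        missing_fields=["recent lab results"])))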
