Mediators: Conversational Agents Explaining NLP Model Behavior

06/13/2022
by Nils Feldhus et al.

The human-centric explainable artificial intelligence (HCXAI) community has raised the need for framing the explanation process as a conversation between human and machine. In this position paper, we establish desiderata for Mediators, text-based conversational agents which are capable of explaining the behavior of neural models interactively using natural language. From the perspective of natural language processing (NLP) research, we engineer a blueprint of such a Mediator for the task of sentiment analysis and assess how far along current research is on the path towards dialogue-based explanations.
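To make the idea concrete, here is a minimal sketch of what such a Mediator could look like for sentiment analysis: a toy lexicon-based classifier paired with an agent that answers "why" follow-ups by citing the most influential word. All names here (`Mediator`, `classify`, `LEXICON`) are illustrative assumptions for this sketch, not components described in the paper.

```python
# Toy word-weight lexicon standing in for a neural sentiment model.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}


def classify(text: str) -> tuple[str, dict[str, float]]:
    """Score each token against the lexicon; return label + per-word contributions."""
    contributions = {tok: LEXICON.get(tok, 0.0) for tok in text.lower().split()}
    score = sum(contributions.values())
    label = "positive" if score >= 0 else "negative"
    return label, contributions


class Mediator:
    """Text-based agent that explains the last prediction on request."""

    def __init__(self) -> None:
        self.last = None  # (label, contributions) of the most recent input

    def ask(self, utterance: str) -> str:
        # Follow-up question about the previous prediction.
        if utterance.lower().startswith("why") and self.last:
            label, contribs = self.last
            top = max(contribs, key=lambda t: abs(contribs[t]))
            return (f"I predicted '{label}' mainly because of the word "
                    f"'{top}' (weight {contribs[top]:+.1f}).")
        # Otherwise treat the utterance as a new input to classify.
        label, contribs = classify(utterance)
        self.last = (label, contribs)
        return f"This text is {label}."


mediator = Mediator()
print(mediator.ask("the movie was great"))  # This text is positive.
print(mediator.ask("Why?"))                 # explanation citing 'great'
```

A real Mediator would replace the lexicon with a neural model and the word-weight lookup with a feature-attribution method, but the dialogue loop — predict, then explain on demand — is the part the paper's desiderata target.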

Related research
09/13/2022

The Role of Explanatory Value in Natural Language Processing

A key aim of science is explanation, yet the idea of explaining language...
09/01/2022

In conversation with Artificial Intelligence: aligning language models with human values

Large-scale language technologies are increasingly used in various forms...
07/19/2019

DREAMT – Embodied Motivational Conversational Storytelling

Storytelling is fundamental to language, including culture, conversation...
11/02/2020

An ontology-based chatbot for crises management: use case coronavirus

Today is the era of intelligence in machines. With the advances in Artif...
01/17/2022

Chatbot System Architecture

Conversational agents are one of the most interesting topics in comput...
09/06/2022

"Mama Always Had a Way of Explaining Things So I Could Understand”: A Dialogue Corpus for Learning to Construct Explanations

As AI is more and more pervasive in everyday life, humans have an increa...
09/06/2022

Explaining Machine Learning Models in Natural Conversations: Towards a Conversational XAI Agent

The goal of Explainable AI (XAI) is to design methods to provide insight...