Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

10/07/2019
by Oana-Maria Camburu, et al.

To increase trust in artificial intelligence systems, a growing number of works are enhancing these systems with the capability of producing natural language explanations that support their predictions. In this work, we show that such appealing frameworks are nonetheless prone to generating inconsistent explanations, such as "A dog is an animal" and "A dog is not an animal", which are likely to decrease users' trust in these systems. To detect such inconsistencies, we introduce a simple but effective adversarial framework for generating a complete target sequence, a scenario that has not been addressed so far. Finally, we apply our framework to a state-of-the-art neural model that provides natural language explanations on SNLI, and we show that this model is capable of generating a significant number of inconsistencies.
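To make the idea concrete, below is a minimal, hypothetical sketch of the kind of consistency check the abstract describes. It is not the authors' adversarial framework: it assumes a hypothetical explain(premise, hypothesis) callable that returns an explanation string, and it only flags the simple negation-style contradictions mentioned above (e.g. "A dog is an animal" vs. "A dog is not an animal").

# Illustrative sketch only; NOT the paper's method.
# Assumes a hypothetical `explain(premise, hypothesis)` callable.
import re
from typing import Callable, Iterable, List, Tuple

def simple_negation(explanation: str) -> str:
    """Build a crude negated variant, e.g. 'a dog is an animal' -> 'a dog is not an animal'."""
    return re.sub(r"\bis\b", "is not", explanation, count=1)

def find_inconsistencies(
    explain: Callable[[str, str], str],
    premise: str,
    candidate_hypotheses: Iterable[str],
    reference_hypothesis: str,
) -> List[Tuple[str, str, str]]:
    """Return (hypothesis, explanation, reference_explanation) triples where the
    model's explanation for a candidate input is the negation of its explanation
    for the reference input over the same premise."""
    reference_explanation = explain(premise, reference_hypothesis).strip().lower()
    target = simple_negation(reference_explanation)
    hits = []
    for hypothesis in candidate_hypotheses:
        explanation = explain(premise, hypothesis).strip().lower()
        if explanation == target:
            hits.append((hypothesis, explanation, reference_explanation))
    return hits

The paper's framework goes further: it adversarially generates complete input sequences that elicit a chosen target explanation, whereas this sketch only scans a fixed pool of candidate inputs for a negation-style clash.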


