Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards

05/05/2021
by   Marco Valentino, et al.

An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with step-wise inference and explanation generation capabilities. While human-annotated explanations are used as ground-truth for the inference, there is a lack of systematic assessment of their consistency and rigour. In an attempt to provide a critical quality assessment of Explanation Gold Standards (XGSs) for NLI, we propose a systematic annotation methodology, named Explanation Entailment Verification (EEV), to quantify the logical validity of human-annotated explanations. The application of EEV on three mainstream datasets reveals the surprising conclusion that a majority of the explanations, while appearing coherent on the surface, represent logically invalid arguments, ranging from being incomplete to containing clearly identifiable logical errors. This conclusion confirms that the inferential properties of explanations are still poorly formalised and understood, and that additional work on this line of research is necessary to improve the way Explanation Gold Standards are constructed.
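The core question EEV asks of each annotated explanation is whether its premises, taken together, logically entail the conclusion. As a minimal illustrative sketch of that notion (the toy facts, rule syntax, and `entails` function below are hypothetical, not the paper's actual methodology), an explanation can be modelled as a set of facts and simple "A -> B" rules, with validity checked by forward chaining:

```python
def entails(premises, conclusion):
    """Return True if the conclusion is derivable from the premises
    by forward-chaining simple 'A -> B' rules over atomic facts."""
    facts = {p.strip() for p in premises if "->" not in p}
    rules = [tuple(part.strip() for part in p.split("->"))
             for p in premises if "->" in p]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in facts and rhs not in facts:
                facts.add(rhs)  # derive a new fact from a satisfied rule
                changed = True
    return conclusion in facts

# A complete explanation: the premises entail the conclusion.
complete = entails(
    ["metal conducts electricity -> iron conducts electricity",
     "metal conducts electricity"],
    "iron conducts electricity")

# An incomplete explanation: a required premise is missing,
# so the argument is logically invalid even if it looks plausible.
incomplete = entails(
    ["metal conducts electricity -> iron conducts electricity"],
    "iron conducts electricity")
```

In this toy setting, `complete` is `True` and `incomplete` is `False`; the paper's finding is that many gold-standard explanations resemble the second case, appearing coherent while omitting premises needed for the argument to go through.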


