Grow-and-Clip: Informative-yet-Concise Evidence Distillation for Answer Explanation

01/13/2022
by Yuyan Chen, et al.

Interpreting the predictions of existing Question Answering (QA) models is critical to many real-world intelligent applications, such as QA systems for healthcare, education, and finance. However, existing QA models lack interpretability and provide no feedback or explanation to help end-users understand why a specific prediction is the answer to a question. In this research, we argue that the evidence for an answer is critical to enhancing the interpretability of QA models. Unlike previous work that simply extracts several sentences from the context as evidence, we are the first to explicitly define evidence as the supporting facts in a context that are informative, concise, and readable. We also provide effective strategies to quantitatively measure the informativeness, conciseness, and readability of evidence. Furthermore, we propose the Grow-and-Clip Evidence Distillation (GCED) algorithm, which extracts evidence from contexts by trading off informativeness, conciseness, and readability. We conduct extensive experiments on the SQuAD and TriviaQA datasets with several baseline models to evaluate the effect of GCED on interpreting answers to questions. A human evaluation is also carried out to assess the quality of the distilled evidence. Experimental results show that automatically distilled evidence has human-like informativeness, conciseness, and readability, which can enhance the interpretability of the answers to questions.
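The abstract describes GCED only at a high level. The following Python code is a minimal, illustrative sketch of the grow-and-clip idea, not the authors' implementation: the functions informativeness, conciseness, readability and their weights are hypothetical stand-ins for the paper's actual measures, and the loop merely shows how an evidence set might be grown greedily and then clipped while balancing the three criteria.

# Hypothetical sketch of a grow-and-clip style selection loop.
# NOTE: this is NOT the GCED algorithm from the paper; the scoring
# functions below are simple placeholders for the paper's measures.

def informativeness(evidence, question, answer):
    # Stand-in: fraction of question/answer tokens covered by the evidence.
    targets = set(question.lower().split()) | set(answer.lower().split())
    covered = targets & set(" ".join(evidence).lower().split())
    return len(covered) / max(len(targets), 1)

def conciseness(evidence):
    # Stand-in: fewer total tokens means a higher score.
    length = sum(len(s.split()) for s in evidence)
    return 1.0 / (1.0 + length)

def readability(evidence):
    # Stand-in: prefer moderately sized sentences (roughly 8-25 tokens each).
    if not evidence:
        return 0.0
    ok = sum(1 for s in evidence if 8 <= len(s.split()) <= 25)
    return ok / len(evidence)

def score(evidence, question, answer, weights=(1.0, 0.5, 0.5)):
    # Weighted trade-off of the three criteria; weights are arbitrary here.
    w_i, w_c, w_r = weights
    return (w_i * informativeness(evidence, question, answer)
            + w_c * conciseness(evidence)
            + w_r * readability(evidence))

def grow_and_clip(sentences, question, answer):
    # Grow: greedily add the sentence that most improves the combined score.
    evidence, pool = [], list(sentences)
    while pool:
        best = max(pool, key=lambda s: score(evidence + [s], question, answer))
        if score(evidence + [best], question, answer) <= score(evidence, question, answer):
            break
        evidence.append(best)
        pool.remove(best)
    # Clip: drop any sentence whose removal does not lower the score.
    for i in range(len(evidence) - 1, -1, -1):
        trimmed = evidence[:i] + evidence[i + 1:]
        if score(trimmed, question, answer) >= score(evidence, question, answer):
            evidence = trimmed
    return evidence

if __name__ == "__main__":
    context = [
        "The Amazon River flows through South America and is about 6,400 km long.",
        "It discharges more water than any other river in the world.",
        "Many tourists visit Brazil every year for its beaches and festivals.",
    ]
    print(grow_and_clip(context, "How long is the Amazon River?", "about 6,400 km"))

On this toy context the loop keeps only the sentence that covers the question and answer tokens; the real GCED algorithm and its informativeness, conciseness, and readability measures are defined in the paper and may differ substantially from this sketch.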
