F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

10/13/2020
by Hendrik Schuff, et al.

Explainable question answering systems predict an answer together with an explanation showing why the answer has been selected. The goal is to enable users to assess the correctness of the system and to understand its reasoning process. However, we show that current models and evaluation settings have shortcomings regarding the coupling of answer and explanation, which can cause serious issues in user experience. As a remedy, we propose a hierarchical model and a new regularization term to strengthen the answer-explanation coupling, as well as two evaluation scores to quantify the coupling. We conduct experiments on the HotpotQA benchmark dataset and perform a user study. The user study shows that our models increase the users' ability to judge the correctness of the system, and that scores like F1 are not enough to estimate the usefulness of a model in a practical setting with human users. Our scores are better aligned with user experience, making them promising candidates for model selection.
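The F1 score questioned in the title is the standard token-overlap answer metric used by benchmarks such as HotpotQA. As a minimal sketch (simplified: no lowercasing, article removal, or punctuation normalization, which the official evaluation scripts apply):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer string.

    Counter intersection (&) yields the multiset of tokens shared
    between prediction and gold; F1 is the harmonic mean of the
    resulting precision and recall.
    """
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the Eiffel Tower", "Eiffel Tower")` gives precision 2/3 and recall 1, hence F1 = 0.8. A high value says nothing about whether the accompanying explanation actually supports the answer, which is precisely the coupling the paper's proposed scores aim to capture.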


Related research

- 09/08/2020 · QED: A Framework and Dataset for Explanations in Question Answering
  A question answering system that in addition to providing an answer prov...
- 10/13/2022 · How (Not) To Evaluate Explanation Quality
  The importance of explainability is increasingly acknowledged in natural...
- 12/15/2022 · Saved You A Click: Automatically Answering Clickbait Titles
  Often clickbait articles have a title that is phrased as a question or v...
- 05/24/2023 · Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering
  Explainable question answering (XQA) aims to answer a given question and...
- 12/28/2020 · Causal Perception in Question-Answering Systems
  Root cause analysis is a common data analysis task. While question-answe...
- 03/16/2022 · E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
  The ability to recognize analogies is fundamental to human cognition. Ex...
- 07/14/2020 · XAlgo: Explaining the Internal States of Algorithms via Question Answering
  Algorithms often appear as 'black boxes' to non-expert users. While prio...
