A Framework for Rationale Extraction for Deep QA models

10/09/2021
by Sahana Ramnath et al.

As neural-network-based QA models become deeper and more complex, there is a demand for robust frameworks which can access a model's rationale for its prediction. Current techniques that provide insight into a model's workings either depend on adversarial datasets or propose models with explicit explanation-generation components. Both approaches are time-consuming and hard to extend to existing models and new datasets. In this work, we use 'Integrated Gradients' to extract rationales for existing state-of-the-art models on the task of Reading Comprehension based Question Answering (RCQA). Through detailed analysis and comparison with collected human rationales, we find that although 40-80% of the extracted rationale overlaps with the human rationale (precision), only 6-19% of the human rationale is captured by the extracted rationale (recall).
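The Integrated Gradients attribution used above can be illustrated on a toy differentiable function. The sketch below is not the authors' QA setup: the quadratic "model" `f`, its analytic gradient `grad_f`, and the midpoint Riemann sum are illustrative assumptions; in the paper the gradients would come from a deep QA model's embedding layer. IG for feature i is (x_i - x'_i) times the path integral of the gradient along the straight line from a baseline x' to the input x, and it satisfies the completeness axiom: attributions sum to f(x) - f(x').

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate Integrated Gradients attributions for input x
    relative to a baseline, via a midpoint Riemann sum over the
    straight-line path baseline -> x."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = baseline + a * (x - baseline)  # interpolated input
        total_grad += grad_f(point)
    avg_grad = total_grad / steps              # averaged path gradient
    return (x - baseline) * avg_grad           # per-feature attributions

# Toy "model" (assumption, for illustration): f(x) = sum(x_i^2)
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline)
print(attr)                          # per-feature attributions
print(attr.sum(), f(x) - f(baseline))
```

In a rationale-extraction setting, the per-token attribution scores produced this way would be thresholded or ranked to select the passage words that constitute the model's rationale, which is what the precision/recall comparison against human rationales measures.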


Related research

- 09/25/2021 · More Than Reading Comprehension: A Survey on Datasets and Metrics of Textual Question Answering
  Textual Question Answering (QA) aims to provide precise answers to user'...
- 04/14/2022 · XLMRQA: Open-Domain Question Answering on Vietnamese Wikipedia-based Textual Knowledge Source
  Question answering (QA) is a natural language understanding task within ...
- 12/31/2020 · Coreference Reasoning in Machine Reading Comprehension
  The ability to reason about multiple references to a given entity is ess...
- 07/27/2021 · QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension
  Alongside huge volumes of research on deep learning models in NLP in the...
- 11/01/2021 · Introspective Distillation for Robust Question Answering
  Question answering (QA) models are well-known to exploit data bias, e.g....
- 11/29/2022 · Which Shortcut Solution Do Question Answering Models Prefer to Learn?
  Question answering (QA) models for reading comprehension tend to learn s...
