Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering

11/05/2020
by   Quan Tran, et al.

Interpretability and explainability of deep neural networks are challenging due to their scale, their complexity, and the lack of agreed-upon notions on which the explanation process should rest. Previous work has focused largely on representing the internal components of neural networks through human-friendly visuals and concepts. In real life, by contrast, when making a decision, humans tend to rely on similar situations and associations from the past. Hence, arguably, a promising approach to making a model transparent is to design it so that it explicitly connects the current sample with previously seen ones and bases its decision on those samples. Grounded in that principle, we propose an explainable, evidence-based memory network architecture that learns to summarize the dataset and extract supporting evidence for its decisions. Our model achieves state-of-the-art performance on two popular question answering datasets, TrecQA and WikiQA. Further analysis shows that the model can reliably trace errors made on the validation set back to the training instances that likely caused them. We believe this error-tracing capability offers significant benefits for improving dataset quality in many applications.
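To make the idea concrete, the following is a minimal sketch (not the authors' code) of an evidence-based memory scorer in PyTorch: a learned memory bank summarizes the training set, a question-answer encoding attends over that bank, and the attention weights double as pointers to the "supporting evidence" slots, which is what enables tracing a decision, and its errors, back to training instances. The encoder, dimensions, and scoring head are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceMemoryScorer(nn.Module):
    def __init__(self, hidden_dim: int, memory_size: int):
        super().__init__()
        # Learned summary of the training set: one slot per (compressed) instance.
        self.memory = nn.Parameter(torch.randn(memory_size, hidden_dim) * 0.02)
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)
        # Scores a QA pair from its own encoding plus the retrieved evidence.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, qa_encoding: torch.Tensor):
        # qa_encoding: (batch, hidden_dim) joint encoding of a question-answer pair.
        query = self.query_proj(qa_encoding)                 # (batch, hidden)
        attn = F.softmax(query @ self.memory.T, dim=-1)      # (batch, memory)
        evidence = attn @ self.memory                        # (batch, hidden)
        score = self.classifier(torch.cat([qa_encoding, evidence], dim=-1))
        # attn can be inspected to see which memory slots (training instances)
        # supported the decision -- the basis for the error-tracing analysis.
        return score.squeeze(-1), attn

# Toy usage: score four candidate answers for one question.
model = EvidenceMemoryScorer(hidden_dim=128, memory_size=256)
qa_batch = torch.randn(4, 128)                  # stand-in for encoder outputs
scores, evidence_weights = model(qa_batch)
top_evidence = evidence_weights.topk(k=3, dim=-1).indices   # most supportive slots

In this sketch the memory slots are free parameters; in practice they would be tied to, or initialized from, encodings of actual training examples so that the attention weights point at identifiable instances.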


Related research

- Explainable Fact-checking through Question Answering (10/11/2021): Misleading or false information has been creating chaos in some places a...
- REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement (12/24/2020): When answering a question, people often draw upon their rich world knowl...
- Learning to Compose Neural Networks for Question Answering (01/07/2016): We describe a question answering model that applies to both images and s...
- A Memory Model for Question Answering from Streaming Data Supported by Rehearsal and Anticipation of Coreference Information (05/12/2023): Existing question answering methods often assume that the input content ...
- DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification (04/28/2020): Recently, many methods discover effective evidence from reliable sources...
- Remember Intentions: Retrospective-Memory-based Trajectory Prediction (03/22/2022): To realize trajectory prediction, most previous methods adopt the parame...
- A Road-map Towards Explainable Question Answering: A Solution for Information Pollution (07/04/2019): The increasing rate of information pollution on the Web requires novel s...
