QADiver: Interactive Framework for Diagnosing QA Models

12/01/2018
by   Gyeongbok Lee, et al.

Question answering (QA), the task of extracting from text the answer to a question posed in natural language, has been actively studied, and existing models have shown promise of outperforming humans when trained and evaluated on the SQuAD dataset. However, such performance may not be replicated in real-world settings, and diagnosing the cause is non-trivial due to the complexity of the models. We thus propose a web-based UI that shows how each part of a model contributes to QA performance, by integrating visualization and analysis tools for model explanation. We expect this framework to help QA researchers refine and improve their models.
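The SQuAD comparison above rests on the dataset's two standard answer-level metrics, exact match and token F1, both computed over normalized answer strings. A minimal sketch of that scoring, following the commonly used SQuAD normalization steps (the function names here are illustrative, not from the paper):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    remove articles (a/an/the), and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    # Multiset intersection counts shared tokens with multiplicity.
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` scores 1.0 because articles and case are normalized away, while a partially overlapping span still earns partial credit under F1.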

Related research

01/01/2015  QANUS: An Open-source Question-Answering Platform
In this paper, we motivate the need for a publicly available, generic so...

04/18/2021  Can NLI Models Verify QA Systems' Predictions?
To build robust question answering systems, we need the ability to verif...

07/02/2019  CS563-QA: A Collection for Evaluating Question Answering Systems
Question Answering (QA) is a challenging topic since it requires tacklin...

08/31/2022  Lifelong Learning for Question Answering with Hierarchical Prompts
QA models with lifelong learning (LL) abilities are important for practi...

01/02/2021  Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering
Many Question-Answering (QA) datasets contain unanswerable questions, bu...

06/26/2019  Interpretable Question Answering on Knowledge Bases and Text
Interpretability of machine learning (ML) models becomes more relevant w...

01/12/2019  Semi-interactive Attention Network for Answer Understanding in Reverse-QA
Question answering (QA) is an important natural language processing (NLP...
