Using Interactive Feedback to Improve the Accuracy and Explainability of Question Answering Systems Post-Deployment

04/06/2022
by Zichao Li, et al.

Most research on question answering focuses on the pre-deployment stage, i.e., building an accurate model for deployment. In this paper, we ask: can we further improve QA systems post-deployment, based on user interactions? We focus on two kinds of improvements: 1) improving the QA system's performance itself, and 2) giving the model the ability to explain why an answer is correct or incorrect. We collect a retrieval-based QA dataset, FeedbackQA, which contains interactive feedback from users. We collect this dataset by deploying a base QA system to crowdworkers, who engage with the system and provide feedback on the quality of its answers. The feedback contains both structured ratings and unstructured natural language explanations. We train a neural model on this feedback data that can generate explanations and re-score answer candidates. We show that the feedback data improves not only the accuracy of the deployed QA system but also that of other, stronger non-deployed systems. The generated explanations also help users make informed decisions about the correctness of answers. Project page: https://mcgill-nlp.github.io/feedbackqa/
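The re-scoring idea in the abstract can be illustrated with a minimal sketch: combine a base retriever's candidate scores with a score derived from post-deployment user ratings. This is a toy illustration under assumed names and a simple linear mixing rule; the paper trains a neural re-scorer rather than averaging ratings directly.

```python
# Minimal sketch of feedback-based answer re-scoring. Assumes the deployed
# retriever returns (answer_id, base_score) pairs and that users have rated
# answers on a 1-4 scale. All identifiers here are illustrative.

def feedback_score(answer_id, ratings, default=2.5):
    """Average user rating for an answer, with a neutral prior for unseen ones."""
    r = ratings.get(answer_id)
    return sum(r) / len(r) if r else default

def rerank(candidates, ratings, alpha=0.5):
    """Re-score retriever candidates with aggregated user feedback.

    candidates: list of (answer_id, base_score) from the base QA system.
    alpha: weight given to the feedback signal (illustrative, not tuned).
    Returns candidates sorted by the mixed score, best first.
    """
    scored = [
        (aid, (1 - alpha) * base + alpha * feedback_score(aid, ratings))
        for aid, base in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical structured ratings collected post-deployment.
ratings = {"a1": [4, 4, 3], "a2": [1, 2], "a3": [3]}
candidates = [("a2", 3.0), ("a1", 2.8), ("a3", 2.9)]
print(rerank(candidates, ratings))
```

Note how "a2", the retriever's top candidate, is demoted once its poor user ratings are mixed in, which is the behavior the abstract describes for feedback-driven re-scoring.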

Related research
05/21/2023

Continually Improving Extractive QA via Human Feedback

We study continually improving an extractive question answering (QA) sys...
08/27/2019

Incremental Improvement of a Question Answering System by Re-ranking Answer Candidates using Machine Learning

We implement a method for re-ranking top-10 results of a state-of-the-ar...
12/10/2021

Improving the Question Answering Quality using Answer Candidate Filtering based on Natural-Language Features

Software with natural-language user interfaces has an ever-increasing im...
04/27/2022

Towards Teachable Reasoning Systems

Our goal is a teachable reasoning system for question-answering (QA), wh...
03/18/2022

Simulating Bandit Learning from User Feedback for Extractive Question Answering

We study learning from user feedback for extractive question answering b...
11/01/2020

Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning

The interaction of conversational systems with users poses an exciting o...
04/11/2022

Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions

Current QA systems can generate reasonable-sounding yet false answers wi...
