From Rewriting to Remembering: Common Ground for Conversational QA Models

04/08/2022
by Marco Del Tredici, et al.

In conversational QA, models must leverage information from previous turns to answer upcoming questions. Current approaches, such as Question Rewriting, struggle to extract the relevant information as the conversation unfolds. We introduce the Common Ground (CG), an approach that accumulates conversational information as it emerges and selects the relevant information at every turn. We show that CG offers a more efficient and human-like way to exploit conversational information than existing approaches, leading to improvements on Open Domain Conversational QA.
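
The abstract describes two operations: accumulating conversational information as it emerges, and selecting the subset relevant to the current turn. This listing does not include an implementation, so the sketch below is only a minimal illustration of that accumulate-and-select loop; the CommonGround class, the token-overlap relevance score, and the top_k parameter are assumptions made for the example, not the authors' actual method.

```python
from collections import Counter

class CommonGround:
    """Minimal sketch of an accumulate-and-select loop.

    Statements are accumulated turn by turn; at question time, the
    ones most relevant to the current question are selected. The
    naive token-overlap score is an illustrative placeholder, not
    the scoring used in the paper.
    """

    def __init__(self):
        self.statements = []  # conversational information gathered so far

    def update(self, turn_text):
        # Accumulate information as the conversation unfolds.
        self.statements.append(turn_text)

    def select(self, question, top_k=2):
        # Rank accumulated statements by overlap with the question's
        # tokens (whitespace split, so punctuation stays attached).
        q_tokens = Counter(question.lower().split())
        scored = [
            (sum((Counter(s.lower().split()) & q_tokens).values()), s)
            for s in self.statements
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [s for score, s in scored[:top_k] if score > 0]

# Usage: grow the common ground over turns, then select the
# statements relevant to the latest question.
cg = CommonGround()
cg.update("Marie Curie won the Nobel Prize in Physics in 1903.")
cg.update("She was born in Warsaw.")
cg.update("Her husband Pierre Curie shared the 1903 prize.")
print(cg.select("Who shared the prize with her?"))
```

The point of the sketch is the contrast the abstract draws with Question Rewriting: instead of rewriting each question to be self-contained, relevant context is kept in an explicit store and re-selected at every turn.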

Related research

04/30/2020
Question Rewriting for Conversational Question Answering
Conversational question answering (QA) requires answers conditioned on t...

05/04/2020
DoQA – Accessing Domain-Specific FAQs via Conversational QA
The goal of this work is to build conversational Question Answering (QA)...

06/22/2021
Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering
One of the main challenges in conversational question answering (CQA) is...

04/17/2021
A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering
In recent years, conversational agents have provided a natural and conve...

06/07/2021
GTM: A Generative Triple-Wise Model for Conversational Question Generation
Generating some appealing questions in open-domain conversations is an e...

11/21/2019
What Do You Mean 'Why?': Resolving Sluices in Conversations
In conversation, we often ask one-word questions such as 'Why?' or 'Who?...

02/15/2022
Saving Dense Retriever from Shortcut Dependency in Conversational Search
Conversational search (CS) requires holistic understanding over conv...
