Technical report on Conversational Question Answering

09/24/2019
by Ying Ju, et al.

Conversational Question Answering is a challenging task because it requires understanding the conversational history. In this project, we propose a new system, RoBERTa + AT + KD, which combines a rationale-tagging multi-task objective, adversarial training, knowledge distillation, and a linguistic post-processing strategy. Without data augmentation, our single model achieves 90.4 F1 on the CoQA test set, outperforming the previous state-of-the-art single model by 2.6 F1.
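The abstract does not spell out how the knowledge-distillation component is implemented. As a rough illustration only, the sketch below shows the standard soft-label distillation loss (temperature-scaled KL divergence between teacher and student distributions, in the style of Hinton et al.); the function names and the temperature value are assumptions for this example, not details from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits,
    optionally softened by a temperature > 1."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

In a full training loop this term would typically be mixed with the ordinary cross-entropy on gold answers; the mixing weight and temperature are tuned hyperparameters.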


