Neural Conversational QA: Learning to Reason vs. Exploiting Patterns

09/09/2019
by Abhishek Sharma, et al.

In this paper, we work on the recently introduced ShARC task, a challenging form of conversational QA that requires reasoning over rules expressed in natural language. Attuned to the risk that neural models exploit superficial patterns in data to do well on benchmark tasks (Niven and Kao 2019), we conduct a series of probing experiments and demonstrate that current state-of-the-art models rely heavily on such patterns. To prevent models from learning from these superficial clues, we modify the dataset by automatically generating new instances that reduce the occurrence of those patterns. We also present a simple yet effective model that learns embedding representations to incorporate the dialog history along with the answers to previous follow-up questions. Our model outperforms existing methods on all metrics, and the results show that it is more robust to spurious patterns and learns to reason meaningfully.
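To make the embedding idea concrete, below is a minimal sketch of one way dialog history and previous follow-up answers could be folded into a model's input as learned embeddings. This is not the authors' released code: the class name HistoryAwareEncoder, the dimensions, and the turn/answer encoding scheme are all illustrative assumptions. The four output classes (Yes, No, Irrelevant, More) are the standard ShARC decision labels.

```python
# A minimal sketch (assumed names and dimensions, not the paper's code):
# each token's representation is the sum of a word embedding, a learned
# embedding for the dialog turn it came from, and a learned embedding for
# the user's answer to that turn (Yes / No / no answer yet).
import torch
import torch.nn as nn

class HistoryAwareEncoder(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, max_turns=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        # One embedding per turn index, so the model can distinguish the
        # rule text, the initial question, and each follow-up exchange.
        self.turn_emb = nn.Embedding(max_turns, hidden)
        # 0 = no answer (rule/question tokens), 1 = "Yes", 2 = "No".
        self.ans_emb = nn.Embedding(3, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                       batch_first=True),
            num_layers=2)
        # 4-way decision head: Yes / No / Irrelevant / More (ask follow-up).
        self.classifier = nn.Linear(hidden, 4)

    def forward(self, token_ids, turn_ids, answer_ids):
        x = (self.tok_emb(token_ids) + self.turn_emb(turn_ids)
             + self.ans_emb(answer_ids))
        h = self.encoder(x)                  # (batch, seq, hidden)
        return self.classifier(h.mean(dim=1))  # pooled -> (batch, 4)

# Toy usage: one dialog of 12 tokens (6 rule tokens, then two follow-ups).
model = HistoryAwareEncoder()
tokens = torch.randint(0, 30522, (1, 12))
turns = torch.tensor([[0]*6 + [1]*3 + [2]*3])
answers = torch.tensor([[0]*6 + [1]*3 + [2]*3])  # turn 1: Yes, turn 2: No
logits = model(tokens, turns, answers)
print(logits.shape)  # torch.Size([1, 4])
```

Summing the answer embedding into every token of a follow-up turn is one simple design choice for binding a Yes/No response to its question; the paper's actual architecture may combine these signals differently.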


