Retrieval-guided Counterfactual Generation for QA

10/14/2021
by Bhargavi Paranjape et al.

Deep NLP models have been shown to learn spurious correlations, leaving them brittle to input perturbations. Recent work has shown that counterfactual or contrastive data (i.e., minimally perturbed inputs) can reveal these weaknesses, and that data augmentation using counterfactuals can help ameliorate them. Proposed techniques for generating counterfactuals rely on human annotations, perturbations based on simple heuristics, and meaning representation frameworks. We focus on the task of creating counterfactuals for question answering, which presents unique challenges related to world knowledge, semantic diversity, and answerability. To address these challenges, we develop a Retrieve-Generate-Filter (RGF) technique to create counterfactual evaluation and training data with minimal human supervision. Using an open-domain QA framework and a question generation model trained on original task data, we create counterfactuals that are fluent, semantically diverse, and automatically labeled. Data augmentation with RGF counterfactuals improves performance on out-of-domain and challenging evaluation sets over and above existing methods, in both the reading comprehension and open-domain QA settings. Moreover, we find that RGF data leads to significant improvements in a model's robustness to local perturbations.
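To make the pipeline concrete, here is a minimal Python sketch of a Retrieve-Generate-Filter loop as the abstract describes it. The component interfaces (retrieve, propose_answer, generate_question, read) and the round-trip filtering heuristic are illustrative assumptions, not the authors' released implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Counterfactual:
    question: str
    answer: str
    passage: str


def rgf_counterfactuals(
    seed_question: str,
    retrieve: Callable[[str, int], List[str]],        # open-domain retriever
    propose_answer: Callable[[str], str],             # picks a candidate answer span
    generate_question: Callable[[str, str], str],     # QG model: (passage, answer) -> question
    read: Callable[[str, str], str],                  # RC model: (question, passage) -> answer
    top_k: int = 10,
) -> List[Counterfactual]:
    """Retrieve-Generate-Filter for one seed question (illustrative sketch).

    Retrieve: fetch passages related to the original question.
    Generate: pick a candidate answer span per passage and generate a new
    question for it with a QG model trained on the original task data.
    Filter: keep only questions that differ from the seed and whose proposed
    answer is recovered by a reader (round-trip check), which also supplies
    the label automatically.
    """
    results: List[Counterfactual] = []
    for passage in retrieve(seed_question, top_k):
        answer = propose_answer(passage)
        question = generate_question(passage, answer)
        if question.strip().lower() == seed_question.strip().lower():
            continue  # not a perturbation: identical to the original question
        if read(question, passage) != answer:
            continue  # reader disagrees: drop noisy or unanswerable generations
        results.append(Counterfactual(question, answer, passage))
    return results
```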

Related research

CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation (10/10/2022)
Counterfactual data augmentation (CDA) – i.e., adding minimally perturbe...

DISCO: Distilling Phrasal Counterfactuals with Large Language Models (12/20/2022)
Recent methods demonstrate that data augmentation using counterfactual k...

CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration (09/14/2023)
In recent years, large language models (LLMs) have shown remarkable capa...

IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions (05/23/2023)
Although counterfactual reasoning is a fundamental aspect of intelligenc...

Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions (05/23/2023)
Contrast consistency, the ability of a model to make consistently correc...

CREST: A Joint Framework for Rationalization and Counterfactual Text Generation (05/26/2023)
Selective rationales and counterfactual examples have emerged as two eff...

Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering (11/29/2022)
Question answering (QA) models are shown to be insensitive to large pert...
