CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation

10/10/2022
by Tanay Dixit, et al.

Counterfactual data augmentation (CDA) – i.e., adding minimally perturbed inputs during training – helps reduce model reliance on spurious correlations and improves generalization to out-of-distribution (OOD) data. Prior work on generating counterfactuals considered only restricted classes of perturbations, limiting their effectiveness. We present COunterfactual Generation via Retrieval and Editing (CORE), a retrieval-augmented generation framework for creating diverse counterfactual perturbations for CDA. For each training example, CORE first performs dense retrieval over a task-related unlabeled text corpus using a learned bi-encoder and extracts relevant counterfactual excerpts. CORE then incorporates these excerpts into prompts to a large language model with few-shot learning capabilities for counterfactual editing. Conditioning language model edits on naturally occurring data results in diverse perturbations. Experiments on natural language inference and sentiment analysis benchmarks show that CORE counterfactuals are more effective at improving generalization to OOD data than other DA approaches. We also show that the CORE retrieval framework can be used to encourage diversity in manually authored perturbations.
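
The retrieve-then-edit pipeline the abstract describes can be sketched roughly as follows: a bi-encoder embeds the training example and a task-related unlabeled corpus, the nearest excerpts are retrieved, and a few-shot prompt conditions a language-model edit on them. This is a minimal illustrative sketch, not the paper's implementation; the bi-encoder checkpoint, the toy corpus, the prompt wording, and the `complete` stub standing in for an LLM call are assumptions.

    # Minimal retrieve-then-edit sketch in the spirit of CORE (illustrative only).
    from sentence_transformers import SentenceTransformer, util


    def complete(prompt: str) -> str:
        """Stand-in for any few-shot-capable LLM API; plug in a real client here."""
        raise NotImplementedError


    # 1) Dense retrieval: embed the training example and a task-related unlabeled
    #    corpus with a bi-encoder, then take the most relevant excerpts.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint

    corpus = [
        "The food was bland and the service painfully slow.",
        "A surprisingly heartfelt film, let down by a flat ending.",
        "The hotel room was spotless and the staff went out of their way to help.",
    ]
    corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

    train_example = "The restaurant was fantastic and the staff were friendly."
    query_emb = encoder.encode(train_example, convert_to_tensor=True)

    hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
    excerpts = [corpus[hit["corpus_id"]] for hit in hits]

    # 2) Counterfactual editing: condition a few-shot prompt on the retrieved
    #    excerpts so the edit borrows naturally occurring phrasing rather than
    #    template-style negations.
    prompt = (
        "Rewrite the input so its sentiment flips from positive to negative, "
        "reusing phrasing from the excerpts where it fits.\n\n"
        "Excerpts:\n" + "\n".join(f"- {e}" for e in excerpts) + "\n\n"
        f"Input: {train_example}\n"
        "Counterfactual:"
    )
    counterfactual = complete(prompt)

In practice the retrieved excerpts would come from a large unlabeled corpus and the prompt would include labeled few-shot demonstrations, but the structure above captures the two stages: retrieval supplies naturally occurring counterfactual material, and the language model performs the targeted edit.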
