Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation

01/13/2021
by   Ieva Staliūnaitė, et al.

Determining the plausibility of causal relations between clauses is a commonsense reasoning task that requires complex inference ability. The general approach to this task is to train a large pretrained language model on a task-specific dataset. However, the available training data for the task is often scarce, which leads to instability in model training or reliance on shallow features of the dataset. This paper presents several techniques for making models more robust in the domain of causal reasoning. Firstly, we perform adversarial training by generating perturbed inputs through synonym substitution. Secondly, based on a linguistic theory of discourse connectives, we perform data augmentation using a discourse parser to detect causally linked clauses in large text corpora, and a generative language model to generate distractors. Both methods boost model performance on the Choice of Plausible Alternatives (COPA) dataset, as well as on Balanced COPA, a modified version of the original data developed to avoid superficial cues and thus provide a more challenging benchmark. We show a statistically significant improvement in performance and robustness on both datasets, even with only a small number of additionally generated data points.
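The adversarial-training step relies on meaning-preserving perturbations of the input. As a minimal illustration of synonym substitution, the sketch below swaps words for synonyms drawn from a small hand-made lexicon; the paper's actual synonym source and substitution policy are not specified in this abstract, so the `SYNONYMS` table and the `perturb` helper are hypothetical stand-ins (a real pipeline might draw candidates from a resource such as WordNet).

```python
import random

# Toy synonym lexicon; a stand-in for a real lexical resource
# (assumption: the paper's actual substitution source is not given here).
SYNONYMS = {
    "big": ["large", "huge"],
    "happy": ["glad", "joyful"],
    "fast": ["quick", "rapid"],
}

def perturb(sentence, rng=None, max_subs=1):
    """Return a perturbed variant of `sentence` by replacing up to
    `max_subs` words with synonyms, keeping the meaning intact."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    tokens = sentence.split()
    # Indices of tokens that have a synonym available
    candidates = [i for i, t in enumerate(tokens) if t.lower() in SYNONYMS]
    rng.shuffle(candidates)
    for i in candidates[:max_subs]:
        tokens[i] = rng.choice(SYNONYMS[tokens[i].lower()])
    return " ".join(tokens)
```

In adversarial training, such perturbed inputs would be added to the training set with their original labels, encouraging the model to rely on the causal relation itself rather than on surface wording.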


Related research

04/24/2020
G-DAUG: Generative Data Augmentation for Commonsense Reasoning
Recent advances in commonsense reasoning depend on large-scale human-ann...

07/05/2021
Doing Good or Doing Right? Exploring the Weakness of Commonsense Causal Reasoning Models
Pretrained language models (PLM) achieve surprising performance on the C...

10/22/2022
DiscoSense: Commonsense Reasoning with Discourse Connectives
We present DiscoSense, a benchmark for commonsense reasoning via underst...

04/19/2021
BERTić – The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian
In this paper we describe a transformer model pre-trained on 8 billion t...

05/23/2023
LLM-powered Data Augmentation for Enhanced Crosslingual Performance
This paper aims to explore the potential of leveraging Large Language Mo...

10/21/2020
KnowDis: Knowledge Enhanced Data Augmentation for Event Causality Detection via Distant Supervision
Modern models of event causality detection (ECD) are mainly based on sup...

10/08/2020
Precise Task Formalization Matters in Winograd Schema Evaluations
Performance on the Winograd Schema Challenge (WSC), a respected English ...
