Adversarial Training for Commonsense Inference

05/17/2020
by Lis Pereira et al.

We propose an AdversariaL training algorithm for commonsense InferenCE (ALICE). We apply small perturbations to word embeddings and minimize the resultant adversarial risk to regularize the model. We exploit a novel combination of two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Without relying on any human-crafted features, knowledge bases, or additional datasets other than the target datasets, our model boosts the fine-tuning performance of RoBERTa, achieving competitive results on multiple reading comprehension datasets that require commonsense inference.
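The abstract describes two ways of estimating the embedding-space perturbation: one driven by the gold label and one driven by the model's own prediction (in the spirit of virtual adversarial training). The PyTorch sketch below illustrates how such a combination can be set up on a toy classifier; it is not the authors' released implementation, and names and hyperparameters such as ToyClassifier, epsilon, and xi, as well as the single-step gradient estimates, are assumptions for illustration only.

```python
# Minimal sketch of combining a label-based and a prediction-based adversarial
# perturbation on word embeddings. Toy model and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyClassifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_labels)

    def forward_from_embeddings(self, emb):
        # emb: (batch, seq, dim) -> logits: (batch, num_labels)
        _, h = self.encoder(emb)
        return self.head(h[-1])

def adversarial_losses(model, token_ids, labels, epsilon=1e-2, xi=1e-3):
    emb = model.embed(token_ids)

    # 1) Perturbation estimated from the true label: one normalized gradient
    #    step of the supervised loss with respect to the embeddings.
    emb_1 = emb.detach().clone().requires_grad_(True)
    loss_sup = F.cross_entropy(model.forward_from_embeddings(emb_1), labels)
    grad_1, = torch.autograd.grad(loss_sup, emb_1)
    delta_label = epsilon * grad_1 / (grad_1.norm(dim=-1, keepdim=True) + 1e-8)

    # 2) Perturbation estimated from the model prediction: a direction that
    #    increases the KL divergence from the current output distribution.
    with torch.no_grad():
        p_clean = F.softmax(model.forward_from_embeddings(emb), dim=-1)
    emb_2 = emb.detach().clone().requires_grad_(True)
    noise = torch.randn_like(emb_2)
    logits_noisy = model.forward_from_embeddings(emb_2 + xi * noise)
    kl = F.kl_div(F.log_softmax(logits_noisy, dim=-1), p_clean,
                  reduction="batchmean")
    grad_2, = torch.autograd.grad(kl, emb_2)
    delta_pred = epsilon * grad_2 / (grad_2.norm(dim=-1, keepdim=True) + 1e-8)

    # Adversarial risk terms evaluated at the perturbed embeddings.
    loss_label = F.cross_entropy(
        model.forward_from_embeddings(emb + delta_label), labels)
    loss_pred = F.kl_div(
        F.log_softmax(model.forward_from_embeddings(emb + delta_pred), dim=-1),
        p_clean, reduction="batchmean")
    return loss_label, loss_pred

# Usage: regularize the clean fine-tuning loss with both adversarial terms.
model = ToyClassifier()
ids = torch.randint(0, 1000, (4, 12))
labels = torch.randint(0, 2, (4,))
clean_loss = F.cross_entropy(
    model.forward_from_embeddings(model.embed(ids)), labels)
loss_label, loss_pred = adversarial_losses(model, ids, labels)
total = clean_loss + loss_label + loss_pred
total.backward()
```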

Related research

- 10/25/2020, Commonsense knowledge adversarial dataset that challenges ELECTRA: Commonsense knowledge is critical in human reading comprehension. While ...
- 06/08/2021, Adversarial Training for Machine Reading Comprehension with Virtual Embeddings: Adversarial training (AT) as a regularization method has proved its effe...
- 09/25/2019, FreeLB: Enhanced Adversarial Training for Language Understanding: Adversarial training, which minimizes the maximal risk for label-preserv...
- 09/12/2020, Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge: In this paper, we aim to extract commonsense knowledge to improve machin...
- 06/07/2022, OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection: We propose a multilingual adversarial training model for determining whe...
- 09/03/2018, A3Net: Adversarial-and-Attention Network for Machine Reading Comprehension: In this paper, we introduce Adversarial-and-attention Network (A3Net) fo...