Improving Machine Reading Comprehension via Adversarial Training

11/09/2019
by   Ziqing Yang, et al.

Adversarial training (AT) as a regularization method has proven effective in various tasks, such as image classification and text classification. Although AT has been successfully applied to many natural language processing (NLP) tasks, the mechanism behind it remains unclear. In this paper, we apply AT to machine reading comprehension (MRC) and study its effects from multiple perspectives. We experiment with three kinds of RC tasks: span-based RC, span-based RC with unanswerable questions, and multiple-choice RC. The experimental results show that the proposed method improves performance significantly and consistently on SQuAD1.1, SQuAD2.0, and RACE. With virtual adversarial training (VAT), we explore the possibility of improving RC models with semi-supervised learning and show that examples from a different task are also beneficial. We also find that while AT does little to defend against artificial adversarial examples, it helps the model learn better from examples that contain more low-frequency words.
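In NLP, AT of this kind is typically realized as a norm-bounded perturbation of the word embeddings in the direction of the loss gradient (the fast-gradient formulation of Miyato et al., which work in this line generally follows). The sketch below illustrates that idea on a toy logistic model; the model, embedding values, and epsilon are all illustrative, not the paper's actual setup.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def loss(emb, w, y):
    # binary cross-entropy of a one-layer logistic model on an "embedding"
    p = 1.0 / (1.0 + math.exp(-dot(emb, w)))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def grad_wrt_emb(emb, w, y):
    # gradient of the loss with respect to the embedding vector
    p = 1.0 / (1.0 + math.exp(-dot(emb, w)))
    return [(p - y) * wi for wi in w]

def fgm_perturbation(g, eps=0.1):
    # L2-normalized gradient scaled by epsilon: the worst-case
    # perturbation under a first-order approximation of the loss
    norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
    return [eps * gi / norm for gi in g]

emb = [0.3, -0.5, 0.8, 0.1]   # toy "word embedding" (illustrative values)
w = [0.7, 0.2, -0.4, 0.9]     # toy model weights
y = 1.0

clean_loss = loss(emb, w, y)
r_adv = fgm_perturbation(grad_wrt_emb(emb, w, y))
adv_loss = loss([e + r for e, r in zip(emb, r_adv)], w, y)
# adv_loss exceeds clean_loss; AT would optimize both losses jointly
```

VAT follows the same pattern but computes the perturbation from the KL divergence between the model's own output distributions rather than from labels, which is what makes it usable on unlabeled (semi-supervised) examples, as the abstract explores.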


