Robust Machine Comprehension Models via Adversarial Training

04/17/2018
by Yicheng Wang, et al.

It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering over a 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent's semantic perturbations (e.g., antonyms), we jointly improve the model's semantic-relationship learning capabilities in addition to our AddSentDiverse-based adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under adversarial evaluation while maintaining performance on the regular SQuAD task.
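To illustrate the core idea, here is a minimal sketch of AddSentDiverse-style augmentation in Python. The function name `add_sent_diverse`, the placeholder convention, and the sampling scheme are illustrative assumptions, not the paper's actual implementation: the point is that inserting distractors at random positions and drawing fake answers from a diverse pool removes the superficial cues (e.g., "the distractor is always the last sentence") that AddSent-retrained models exploit.

```python
import random

def add_sent_diverse(passage_sentences, distractor_templates,
                     fake_answer_pool, rng=random):
    """Hypothetical sketch of AddSentDiverse-style augmentation.

    AddSent appends a single distractor sentence at the end of the
    passage, so a retrained model can learn the shortcut "ignore the
    last sentence". This sketch instead (1) fills a distractor
    template with a fake answer sampled from a diverse pool and
    (2) inserts it at a uniformly random position, punishing both
    positional and lexical assumptions.
    """
    # Build the distractor (assumption: templates carry a
    # {FAKE_ANSWER} slot to be filled with a sampled fake answer).
    template = rng.choice(distractor_templates)
    distractor = template.replace("{FAKE_ANSWER}",
                                  rng.choice(fake_answer_pool))

    # Insert at a random position instead of always appending.
    position = rng.randrange(len(passage_sentences) + 1)
    augmented = list(passage_sentences)
    augmented.insert(position, distractor)
    return augmented, position

# Toy usage example:
sentences = ["Tesla was born in 1856.", "He moved to the US in 1884."]
templates = ["Edison was born in {FAKE_ANSWER}."]
fakes = ["1900", "1777", "1492"]
augmented, pos = add_sent_diverse(sentences, templates, fakes)
print(pos, augmented)
```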


