Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness

03/15/2022
by Tejas Gokhale, et al.

Data modification, whether via additional training datasets, data augmentation, debiasing, or dataset filtering, has been proposed as an effective solution for generalizing to out-of-domain (OOD) inputs, in both the natural language processing and computer vision literature. However, the effect of data modification on adversarial robustness remains unclear. In this work, we conduct a comprehensive study of common data modification strategies and evaluate not only their in-domain and OOD performance, but also their adversarial robustness (AR). We also present results on a two-dimensional synthetic dataset to visualize the effect of each method on the training distribution. This work serves as an empirical study towards understanding the relationship between generalizing to unseen domains and defending against adversarial perturbations. Our findings suggest that more data (either via additional datasets or data augmentation) benefits both OOD accuracy and AR. However, data filtering (previously shown to improve OOD accuracy on natural language inference) hurts OOD accuracy on other tasks such as question answering and image classification. We provide insights from our experiments to inform future work in this direction.
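To make the contrast between the strategies concrete, the sketch below builds a two-class 2-D synthetic training set and applies two of the data modification methods the abstract names: noise-based data augmentation (which grows the training distribution) and dataset filtering (which shrinks it). This is an illustrative stand-in, not the paper's actual setup; the toy distribution, the jitter magnitude, and the distance-to-mean filtering heuristic are all assumptions chosen only to show the qualitative effect of each method on the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class 2-D synthetic training set: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),
               rng.normal(+1.0, 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

def augment(X, y, sigma=0.2, copies=2):
    """Additive-noise augmentation: append jittered copies of each point,
    widening the coverage of the training distribution."""
    X_aug = np.vstack([X] + [X + rng.normal(0.0, sigma, X.shape)
                             for _ in range(copies)])
    y_aug = np.tile(y, copies + 1)
    return X_aug, y_aug

def filter_easy(X, y, keep=0.5):
    """Toy dataset filtering: drop the points closest to their class mean,
    a crude proxy for removing 'easy' or biased examples."""
    means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
    dist = np.linalg.norm(X - means[y], axis=1)
    idx = np.argsort(dist)[int(len(X) * (1 - keep)):]  # keep hardest half
    return X[idx], y[idx]

X_aug, y_aug = augment(X, y)
X_flt, y_flt = filter_easy(X, y)
print(len(X), len(X_aug), len(X_flt))  # augmentation grows the set, filtering shrinks it
```

Plotting `X`, `X_aug`, and `X_flt` side by side gives exactly the kind of training-distribution visualization the abstract describes for its synthetic experiment.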


