Measure and Improve Robustness in NLP Models: A Survey

12/15/2021
by Xuezhi Wang, et al.

As NLP models achieve state-of-the-art performance on benchmarks and gain wide adoption in real-world applications, it is increasingly important to ensure that these models can be deployed safely, e.g., that they remain robust under unseen or challenging scenarios. Although robustness is an increasingly studied topic, it has been explored separately in application areas such as vision and NLP, with differing definitions, evaluation protocols, and mitigation strategies across multiple lines of research. In this paper, we aim to provide a unifying survey of how to define, measure, and improve robustness in NLP. We first connect multiple definitions of robustness, then unify various lines of work on identifying robustness failures and evaluating models' robustness. Correspondingly, we present mitigation strategies that are data-driven, model-driven, and inductive-prior-based, offering a more systematic view of how to effectively improve robustness in NLP models. Finally, we conclude by outlining open challenges and future directions to motivate further research in this area.
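One concrete instance of the data-driven evaluation the abstract alludes to is perturbation-based robustness checking: generate challenging variants of an input and measure whether the model's prediction stays stable. The sketch below is only an illustration of that general idea, not the authors' method; the synonym table and the classify function are hypothetical stand-ins for a real augmentation scheme and a real NLP model.

    # Minimal sketch of a perturbation-based robustness check.
    # SYNONYMS and classify() are illustrative assumptions, not a real model.
    import random

    SYNONYMS = {"good": ["great", "decent"], "bad": ["poor", "terrible"], "movie": ["film"]}

    def perturb(sentence, rng):
        """Create a challenging variant by swapping words for synonyms."""
        return " ".join(rng.choice(SYNONYMS.get(w, [w])) for w in sentence.split())

    def classify(sentence):
        """Hypothetical sentiment classifier; a trained model would go here."""
        return "positive" if any(w in sentence for w in ("good", "great", "decent")) else "negative"

    def robustness_rate(sentences, n_variants=5, seed=0):
        """Fraction of inputs whose prediction is stable under perturbation."""
        rng = random.Random(seed)
        stable = 0
        for s in sentences:
            original = classify(s)
            if all(classify(perturb(s, rng)) == original for _ in range(n_variants)):
                stable += 1
        return stable / len(sentences)

    print(robustness_rate(["a good movie", "a bad movie"]))  # 1.0 for this toy model

A low robustness rate under such perturbations is one symptom of the robustness failures the survey categorizes, and data-driven mitigation (e.g., training on the perturbed variants) targets exactly this gap.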
