Evaluating the Robustness of Neural Language Models to Input Perturbations

08/27/2021
by   Milad Moradi, et al.

High-performance neural language models have obtained state-of-the-art results on a wide range of Natural Language Processing (NLP) tasks. However, results on common benchmark datasets often do not reflect model reliability and robustness when applied to noisy, real-world data. In this study, we design and implement various types of character-level and word-level perturbation methods to simulate realistic scenarios in which input texts may be slightly noisy or different from the data distribution on which NLP systems were trained. Conducting comprehensive experiments on different NLP tasks, we investigate the ability of high-performance language models such as BERT, XLNet, RoBERTa, and ELMo to handle different types of input perturbations. The results suggest that language models are sensitive to input perturbations and that their performance can decrease even when small changes are introduced. We highlight that models need to be further improved and that current benchmarks do not reflect model robustness well. We argue that evaluations on perturbed inputs should routinely complement widely-used benchmarks in order to yield a more realistic understanding of the robustness of NLP systems.
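The abstract does not spell out the perturbation methods, but the kind of noise described can be illustrated with a short Python sketch: character-level perturbations that mimic typos (adjacent-character swaps, deletions, random replacements) and word-level perturbations that drop or repeat words. The function names, operations, and rates below are illustrative assumptions, not the authors' implementation.

import random
import string

def char_perturb(text, rate=0.1, seed=None):
    # Illustrative character-level noise (an assumption, not the paper's code):
    # with probability `rate` per letter, swap it with the next character,
    # delete it, or replace it with a random lowercase letter.
    rng = random.Random(seed)
    chars = list(text)
    out, i = [], 0
    while i < len(chars):
        if chars[i].isalpha() and rng.random() < rate:
            op = rng.choice(["swap", "delete", "replace"])
            if op == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], chars[i]])
                i += 2
            elif op == "delete":
                i += 1
            else:
                # replace (also used when a swap is not possible at the end)
                out.append(rng.choice(string.ascii_lowercase))
                i += 1
            continue
        out.append(chars[i])
        i += 1
    return "".join(out)

def word_perturb(text, rate=0.1, seed=None):
    # Illustrative word-level noise (an assumption): randomly drop or repeat words.
    rng = random.Random(seed)
    out = []
    for word in text.split():
        r = rng.random()
        if r < rate / 2:
            continue              # drop this word
        out.append(word)
        if r > 1 - rate / 2:
            out.append(word)      # repeat this word
    return " ".join(out)

if __name__ == "__main__":
    sentence = "Language models should remain robust to small input changes."
    print(char_perturb(sentence, rate=0.15, seed=1))
    print(word_perturb(sentence, rate=0.2, seed=1))

In an evaluation of the kind the paper describes, such perturbations would be applied to the test inputs of each task and model accuracy on the perturbed data would be compared against accuracy on the clean data.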


Related research

08/27/2021  Deep learning models are not robust against noise in clinical text
Artificial Intelligence (AI) systems are attracting increasing interest ...

05/23/2023  On Robustness of Finetuned Transformer-based NLP Models
Transformer-based pretrained models like BERT, GPT-2 and T5 have been fi...

11/16/2021  Improving the robustness and accuracy of biomedical language models through adversarial training
Deep transformer neural network models have improved the predictive accu...

02/11/2023  Evaluating the Robustness of Discrete Prompts
Discrete prompts have been used for fine-tuning Pre-trained Language Mod...

09/21/2020  Improving Robustness and Generality of NLP Models Using Disentangled Representations
Supervised neural networks, which first map an input x to a single repre...

11/15/2022  GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective
Pre-trained language models (PLMs) are known to improve the generalizati...

04/14/2017  How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?
This paper investigates the robustness of NLP against perturbed word for...
