
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing

by Tao Gui et al.

Various robustness evaluation methodologies from different perspectives have been proposed for different natural language processing (NLP) tasks. These methods have often focused on either universal or task-specific generalization capabilities. In this work, we propose TextFlint, a multilingual robustness evaluation platform for NLP tasks that incorporates universal text transformations, task-specific transformations, adversarial attacks, subpopulations, and combinations of these to provide comprehensive robustness analysis. TextFlint enables practitioners to automatically evaluate their models from all of these aspects, or to customize their evaluations as desired, with just a few lines of code. To guarantee user acceptability, all of the text transformations are linguistically based, and we provide a human evaluation for each one. TextFlint generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model's robustness. To validate TextFlint's utility, we performed large-scale empirical evaluations (over 67,000) of state-of-the-art deep learning models, classic supervised methods, and real-world systems. Almost all of the models showed significant performance degradation, including a decline of more than 50% in BERT's prediction accuracy on tasks such as aspect-level sentiment classification, named entity recognition, and natural language inference. Therefore, we call for robustness to be included in model evaluation, so as to promote the healthy development of NLP technology.
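
The "few lines of code" claim refers to TextFlint's Python API. The sketch below illustrates the intended workflow on a single sentiment-analysis sample; the names used here (the Engine class, its run method, the {'x': ..., 'y': ...} sample format, and the SA.json task configuration file) follow the usage example in the project's documentation at the time of writing and should be treated as assumptions rather than a guaranteed current interface.

    # Minimal sketch of a TextFlint robustness evaluation (class and argument
    # names are assumptions based on the project's documented usage).
    from textflint import Engine

    # One sentiment-analysis sample in the toolkit's assumed
    # {'x': text, 'y': label} input format.
    sample = {'x': 'Titanic is my favorite movie. The leading actor is good.',
              'y': 'pos'}

    # Task-specific JSON config selecting which transformations, adversarial
    # attacks, and subpopulations to apply ('SA.json' is a hypothetical path).
    config = 'SA.json'

    engine = Engine()
    # Generates the transformed/augmented data and the analytical report.
    engine.run(sample, config)

The resulting report indicates on which transformations the target model degrades, and the corresponding targeted augmented data can be used to retrain the model against those weaknesses.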



