Understanding Model Robustness to User-generated Noisy Texts

10/14/2021
by Jakub Náplava, et al.

Sensitivity of deep neural models to input noise is known to be a challenging problem. In NLP, model performance often deteriorates with naturally occurring noise, such as spelling errors. To mitigate this issue, models may leverage artificially noised data. However, the amount and type of generated noise have so far been determined arbitrarily. We therefore propose to model the errors statistically from grammatical-error-correction corpora. We present a thorough evaluation of several state-of-the-art NLP systems' robustness in multiple languages, with tasks including morpho-syntactic analysis, named entity recognition, neural machine translation, a subset of the GLUE benchmark, and reading comprehension. We also compare two approaches to addressing the performance drop: a) training the NLP models with noised data generated by our framework; and b) reducing the input noise with an external system for natural language correction. The code is released at https://github.com/ufal/kazitext.
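To make the idea of training on artificially noised data concrete, the sketch below injects random character-level edits (deletion, insertion, substitution, transposition) into clean text. The operations and the uniform error probability are illustrative assumptions for this sketch only; the paper's framework instead estimates error statistics from grammatical-error-correction corpora.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def noise_text(text, error_prob=0.05, rng=None):
    """Apply random character-level edits to simulate user-generated noise.

    Each character independently triggers an edit with probability
    `error_prob`: it may be deleted, duplicated with an extra random
    character, substituted, or swapped with the following character.
    """
    rng = rng or random.Random()
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        c = chars[i]
        if rng.random() < error_prob:
            op = rng.choice(["delete", "insert", "substitute", "swap"])
            if op == "delete":
                pass  # drop the character entirely
            elif op == "insert":
                out.append(c)
                out.append(rng.choice(ALPHABET))  # spurious extra character
            elif op == "substitute":
                out.append(rng.choice(ALPHABET))  # replace with a random letter
            elif op == "swap" and i + 1 < len(chars):
                out.append(chars[i + 1])  # transpose adjacent characters
                out.append(c)
                i += 1
            else:
                out.append(c)  # swap at the last position: keep as-is
        else:
            out.append(c)
        i += 1
    return "".join(out)
```

In a training loop, such a function would be applied on the fly to each input sentence, so the model sees a different noised variant in every epoch; with `error_prob=0.0` the text passes through unchanged.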

