
On the Robustness of Language Encoders against Grammatical Errors

by Fan Yin et al.
Shanghai Jiao Tong University
Peking University

We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors. Specifically, we collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data. This approach also facilitates debugging models on downstream applications. Results confirm that the performance of all tested models is affected, but the degree of impact varies. To interpret model behaviors, we further design a linguistic acceptability task to reveal their ability to identify ungrammatical sentences and the positions of errors. We find that fixed contextual encoders with a simple classifier trained to predict sentence correctness are able to locate error positions. We also design a cloze test for BERT and discover that BERT captures the interaction between errors and specific tokens in context. Our results shed light on understanding the robustness and behaviors of language encoders against grammatical errors.
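The error-simulation step described above can be illustrated with a minimal sketch. The perturbation rules and function name below are hypothetical illustrations of rule-based error injection (article deletion and subject-verb agreement confusion, two errors common among non-native speakers), not the authors' actual attack procedure:

```python
import random

# Hypothetical perturbation rules mimicking common non-native errors:
# dropping an article, or confusing singular/plural verb forms.
ARTICLES = {"a", "an", "the"}
VERB_SWAPS = {"is": "are", "are": "is", "has": "have", "have": "has"}

def perturb(sentence, rng=random.Random(0)):
    """Apply at most one grammatical perturbation to a clean sentence."""
    tokens = sentence.split()
    # Collect candidate positions for each supported error type.
    candidates = []
    for i, tok in enumerate(tokens):
        low = tok.lower()
        if low in ARTICLES:
            candidates.append(("drop", i))
        if low in VERB_SWAPS:
            candidates.append(("swap", i))
    if not candidates:
        return sentence  # nothing to perturb; sentence stays clean
    op, i = rng.choice(candidates)
    if op == "drop":
        tokens.pop(i)  # article-deletion error
    else:
        tokens[i] = VERB_SWAPS[tokens[i].lower()]  # agreement error
    return " ".join(tokens)

print(perturb("the model has a weakness"))
```

Pairing each perturbed sentence with its clean original also yields labeled data for the linguistic acceptability probe the abstract mentions (correct vs. incorrect, plus the error position).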

