Exploring the Capacity of a Large-scale Masked Language Model to Recognize Grammatical Errors

08/27/2021
by Ryo Nagata, et al.

In this paper, we explore in detail the capacity of a language model-based method for grammatical error detection. We first show that 5 to 10 training instances are enough for a BERT-based error detection method to achieve performance equivalent to what a non-language model-based method achieves with the full training data; recall improves much faster with respect to training data size in the BERT-based method than in the non-language model-based method, while precision behaves similarly. These results suggest that (i) the BERT-based method has a good knowledge of the grammar required to recognize certain types of error, and that (ii) it can transform this knowledge into error detection rules by fine-tuning on only a few training samples, which explains its high generalization ability in grammatical error detection. We further show with pseudo error data that it indeed exhibits these properties when learning rules for recognizing various types of error. Finally, based on these findings, we explore a cost-effective method for detecting grammatical errors with feedback comments that explain the relevant grammatical rules to learners.
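The abstract compares detectors by their token-level precision and recall as training data grows. As a minimal sketch of that evaluation setup (the label scheme and example sentence here are hypothetical, not from the paper), grammatical error detection can be scored as binary token classification, where each token is labeled erroneous (1) or correct (0):

```python
def precision_recall(gold, pred):
    """Token-level precision and recall for binary error labels
    (1 = erroneous token, 0 = correct token)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


# Toy example: "He have a apple ." with errors at "have" and "a".
gold = [0, 1, 1, 0, 0]           # reference error labels
pred = [0, 1, 0, 1, 0]           # detector output: one hit, one miss, one false alarm
p, r = precision_recall(gold, pred)
print(p, r)  # 0.5 0.5
```

Tracking these two scores separately at each training-set size is what reveals the asymmetry the paper reports: recall climbs quickly for the BERT-based detector while precision behaves similarly across methods.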


