Pretraining Chinese BERT for Detecting Word Insertion and Deletion Errors

04/26/2022
by Cong Zhou, et al.

Chinese BERT models achieve remarkable progress in dealing with grammatical errors of word substitution. However, they fail to handle word insertion and deletion because BERT assumes the existence of a word at each position. To address this, we present a simple and effective Chinese pretrained model. The basic idea is to enable the model to determine whether a word exists at a particular position. We achieve this by introducing a special token [null], the prediction of which stands for the non-existence of a word. In the training stage, we design pretraining tasks such that the model learns to predict [null] and real words jointly given the surrounding context. In the inference stage, the model readily detects whether a word should be inserted or deleted with the standard masked language modeling function. We further create an evaluation dataset to foster research on word insertion and deletion; it includes human-annotated corrections for 7,726 erroneous sentences. Results show that existing Chinese BERT models perform poorly on detecting insertion and deletion errors. Our approach significantly improves the F1 scores from 24.1% to 78.1% for word insertion and from 26.5% to 68.5% for word deletion.
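The detection recipe described above maps directly onto standard masked language modeling, so a rough sketch helps make it concrete. The snippet below is a minimal illustration, not the authors' released code: the checkpoint path is a placeholder, and it assumes a Chinese BERT whose vocabulary contains the special [null] token described in the abstract. To test for a redundant character (a word-insertion error), the character is replaced with [MASK] and the model is asked whether [null] is the most likely filler; to test for a missing character (a word-deletion error), a [MASK] is inserted between two characters and the model is asked whether a real word beats [null].

```python
# Minimal sketch of insertion/deletion detection with a masked LM that has a
# special [null] token, as described in the abstract. The checkpoint path is a
# placeholder, not a released model.
import torch
from transformers import BertTokenizer, BertForMaskedLM

MODEL_PATH = "path/to/pretrained-null-bert"  # hypothetical checkpoint
tokenizer = BertTokenizer.from_pretrained(MODEL_PATH)
model = BertForMaskedLM.from_pretrained(MODEL_PATH)
model.eval()

# Assumes "[null]" was added to the vocabulary during pretraining.
NULL_ID = tokenizer.convert_tokens_to_ids("[null]")


def predict_at_mask(tokens):
    """Return the id of the most probable token at the single [MASK] position."""
    inputs = tokenizer(" ".join(tokens), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0, mask_pos].argmax().item()


def detect_errors(chars):
    """chars: list of characters of a Chinese sentence.

    Returns (redundant, missing): positions whose character should be deleted,
    and positions where a character should be inserted.
    """
    redundant, missing = [], []
    for i in range(len(chars)):
        # Word-insertion error: mask character i; predicting [null] means no
        # word should exist here, i.e. chars[i] is redundant.
        masked = chars[:i] + [tokenizer.mask_token] + chars[i + 1:]
        if predict_at_mask(masked) == NULL_ID:
            redundant.append(i)
        # Word-deletion error: insert a [MASK] after position i; predicting a
        # real word (not [null]) means a word is missing here.
        inserted = chars[: i + 1] + [tokenizer.mask_token] + chars[i + 1:]
        if predict_at_mask(inserted) != NULL_ID:
            missing.append(i + 1)
    return redundant, missing
```

In practice one would threshold the predicted probabilities rather than take a raw argmax, and batch the masked variants for speed, but the argmax keeps the sketch short.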


Related research

03/01/2022 · "Is Whole Word Masking Always Better for Chinese BERT?": Probing on Chinese Grammatical Error Correction
Whole word masking (WWM), which masks all subwords corresponding to a wo...

11/17/2020 · MVP-BERT: Redesigning Vocabularies for Chinese BERT and Multi-Vocab Pretraining
Despite the development of pre-trained language models (PLMs) significan...

03/12/2022 · MarkBERT: Marking Word Boundaries Improves Chinese BERT
We present a Chinese BERT model dubbed MarkBERT that uses word informati...

06/01/2021 · SHUOWEN-JIEZI: Linguistically Informed Tokenizers For Chinese Language Model Pretraining
Conventional tokenization methods for Chinese pretrained language models...

02/24/2022 · Pretraining without Wordpieces: Learning Over a Vocabulary of Millions of Words
The standard BERT adopts subword-based tokenization, which may break a w...

08/20/2022 · BSpell: A CNN-blended BERT Based Bengali Spell Checker
Bengali typing is mostly performed using English keyboard and can be hig...

06/22/2021 · A Simple and Practical Approach to Improve Misspellings in OCR Text
The focus of our paper is the identification and correction of non-word ...
