Chinese Grammatical Correction Using BERT-based Pre-trained Model

11/04/2020
by Hongfei Wang, et al.

In recent years, pre-trained models have been extensively studied, and several downstream tasks have benefited from their use. In this study, we verify the effectiveness of two methods that incorporate the BERT-based pre-trained model developed by Cui et al. (2020) into an encoder-decoder model for Chinese grammatical error correction. We also analyze the error types and conclude that sentence-level errors are yet to be addressed.
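The abstract does not specify the implementation, but one common way to incorporate a BERT-based pre-trained model into an encoder-decoder system is to warm-start the encoder and decoder from the pre-trained checkpoint. The sketch below is illustrative only: it assumes the Hugging Face Transformers API and the hfl/chinese-bert-wwm-ext checkpoint released by Cui et al. (2020), and it is not the authors' exact setup. The resulting model would still need fine-tuning on Chinese grammatical error correction data before its outputs are meaningful.

```python
# Minimal sketch (not the authors' exact method): warm-starting an
# encoder-decoder model from the Chinese BERT-wwm checkpoint of
# Cui et al. (2020) using Hugging Face Transformers.
from transformers import BertTokenizer, EncoderDecoderModel

checkpoint = "hfl/chinese-bert-wwm-ext"  # Chinese BERT-wwm (Cui et al., 2020)

tokenizer = BertTokenizer.from_pretrained(checkpoint)
# Initialize both encoder and decoder from the pre-trained BERT weights;
# the cross-attention layers are newly initialized and must be fine-tuned.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)

# Generation settings required for seq2seq decoding
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.vocab_size = model.config.encoder.vocab_size

# Illustrative input sentence; before fine-tuning on GEC data the output
# is not a valid correction.
inputs = tokenizer("我昨天去学校的时候下雨。", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```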

