Spelling Error Correction Using a Nested RNN Model and Pseudo Training Data

11/01/2018
by Hao Li, et al.

We propose a nested recurrent neural network (nested RNN) model for English spelling error correction and generate pseudo data based on phonetic similarity to train it. The model fuses orthographic information and context as a whole and is trained in an end-to-end fashion. This avoids feature engineering and does not rely on a noisy channel model as in traditional methods. Experiments show that the proposed method is superior to existing systems in correcting spelling errors.
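The pseudo-data idea is to corrupt clean sentences by swapping words for phonetically similar alternatives, yielding (noisy, correct) training pairs. Below is a minimal Python sketch of that idea; it uses a classic Soundex grouping as a rough stand-in for the paper's phonetic-similarity model, and the function names, vocabulary, and 0.2 replacement rate are illustrative assumptions, not the authors' actual procedure.

```python
import random

def soundex(word: str) -> str:
    """Return the 4-character Soundex code of a word (classic algorithm)."""
    word = word.upper()
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    out, prev = [], codes.get(word[0], "")
    for ch in word[1:]:
        d = codes.get(ch, "")          # vowels, H, W, punctuation -> ""
        if d and d != prev:            # collapse adjacent duplicate codes
            out.append(d)
        prev = d
    return (word[0] + "".join(out) + "000")[:4]

def build_confusion_sets(vocab):
    """Group vocabulary words by Soundex code; words sharing a code are
    treated as phonetically confusable (a crude proxy for phonetic similarity)."""
    groups = {}
    for w in vocab:
        groups.setdefault(soundex(w), []).append(w)
    return {w: [c for c in groups[soundex(w)] if c != w] for w in vocab}

def corrupt_sentence(sentence, confusions, p=0.2, rng=random):
    """Replace each word with a confusable one with probability p,
    producing a pseudo training pair (noisy, clean)."""
    noisy = []
    for w in sentence.split():
        cands = confusions.get(w.lower(), [])
        noisy.append(rng.choice(cands) if cands and rng.random() < p else w)
    return " ".join(noisy), sentence

if __name__ == "__main__":
    vocab = ["there", "their", "weather", "whether", "piece", "peace"]
    confusions = build_confusion_sets(vocab)
    print(corrupt_sentence("whether the weather holds there is peace",
                           confusions, p=0.5))
```

A real pipeline would generate such pairs at scale from clean monolingual text and feed the noisy side to the nested RNN as input and the clean side as the correction target.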


Related research

07/07/2017: A Nested Attention Neural Hybrid Model for Grammatical Error Correction
Grammatical error correction (GEC) systems strive to correct both global...

04/16/2021: Comparison of Grammatical Error Correction Using Back-Translation Models
Grammatical error correction (GEC) suffers from a lack of sufficient par...

09/24/2022: Unsupervised domain adaptation for speech recognition with unsupervised error correction
The transcription quality of automatic speech recognition (ASR) systems ...

07/02/2018: A Simple but Effective Classification Model for Grammatical Error Correction
We treat grammatical error correction (GEC) as a classification problem ...

09/02/2019: An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction
The incorporation of pseudo data in the training of grammatical error co...

06/06/2021: Do Grammatical Error Correction Models Realize Grammatical Generalization?
There has been an increased interest in data generation approaches to gr...

03/17/2022: Type-Driven Multi-Turn Corrections for Grammatical Error Correction
Grammatical Error Correction (GEC) aims to automatically detect and corr...
