The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction

06/04/2019
by Dimitrios Alikaniotis, et al.

Recent work on Grammatical Error Correction (GEC) has highlighted the importance of language modeling, showing that good performance can be achieved simply by comparing the probabilities of proposed edits. At the same time, advances in language modeling now produce linguistic output that is almost indistinguishable from human-written text. In this paper, we up the ante by exploring the potential of more sophisticated language models in GEC and offer key insights into their strengths and weaknesses. We show that, in line with recent results on other NLP tasks, Transformer architectures achieve consistently high performance and provide a competitive baseline for future machine learning models.
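The edit-scoring idea the abstract describes can be sketched as follows: generate candidate corrections for a sentence, score each with a language model, and keep the highest-probability candidate. The snippet below is a minimal illustration only — it uses a toy add-one-smoothed bigram model in place of the Transformer language models studied in the paper, and the corpus and candidate edits are invented for the example.

```python
import math
from collections import Counter

# Tiny training corpus standing in for large-scale LM pretraining data
# (an assumption for illustration, not the authors' setup).
CORPUS = (
    "he has a cat . she has a dog . they have a car . "
    "we have a plan . he has an idea ."
).split()

bigrams = Counter(zip(CORPUS, CORPUS[1:]))
unigrams = Counter(CORPUS)
VOCAB = len(unigrams)  # vocabulary size for add-one smoothing

def log_prob(sentence: str) -> float:
    """Add-one-smoothed bigram log-probability of a tokenized sentence."""
    tokens = sentence.split()
    score = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        score += math.log((bigrams[(prev, cur)] + 1) /
                          (unigrams[prev] + VOCAB))
    return score

def best_edit(original: str, candidates: list[str]) -> str:
    """Keep the original or one of the proposed edits, whichever the
    language model assigns the highest probability."""
    return max([original] + candidates, key=log_prob)

print(best_edit("he have a cat .",
                ["he has a cat .", "he had a cat ."]))
# The LM prefers the agreement-corrected candidate "he has a cat ."
```

In the paper's setting, `log_prob` would instead come from a pretrained Transformer LM's token probabilities; the comparison-of-candidates logic stays the same.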

