Controlled Generation with Prompt Insertion for Natural Language Explanations in Grammatical Error Correction

09/20/2023
by Masahiro Kaneko, et al.

In Grammatical Error Correction (GEC), it is crucial that users understand the reasons behind each correction. Existing studies present tokens, examples, and hints as the basis for a correction, but do not directly explain why the correction was made. Although methods that use Large Language Models (LLMs) to provide direct explanations in natural language have been proposed for various tasks, no such method exists for GEC. Generating explanations for GEC corrections involves aligning input and output tokens, identifying correction points, and presenting the corresponding explanations consistently. However, it is not straightforward to enforce such a complex output format, because explicit control of generation is difficult with prompts alone. This study introduces a method called controlled generation with Prompt Insertion (PI), which enables LLMs to explain the reasons for corrections in natural language. In PI, the LLM first corrects the input text, and then the correction points are automatically extracted using rules. The extracted correction points are sequentially inserted into the LLM's explanation output as prompts, guiding the LLM to generate an explanation for each correction point. We also create an Explainable GEC (XGEC) dataset of correction reasons by annotating NUCLE, CoNLL2013, and CoNLL2014. Although generations from GPT-3 and ChatGPT using the original prompts miss some correction points, controlled generation with PI explicitly guides the models to describe explanations for all correction points, improving performance in generating correction reasons.
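The two mechanical steps of PI, extracting correction points by aligning input and output tokens, then turning each point into an inserted prompt, can be sketched as follows. This is a minimal illustration, not the paper's actual rules: it uses Python's `difflib` for the token alignment, and the prompt template `build_pi_prompts` produces is a hypothetical format.

```python
import difflib

def extract_correction_points(source: str, corrected: str):
    """Align source and corrected tokens and return (before, after) edit
    spans. A rough stand-in for the paper's rule-based extraction."""
    src, tgt = source.split(), corrected.split()
    matcher = difflib.SequenceMatcher(a=src, b=tgt)
    points = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # replace / insert / delete spans are correction points
            points.append((" ".join(src[i1:i2]), " ".join(tgt[j1:j2])))
    return points

def build_pi_prompts(points):
    """Turn each extracted correction point into a prompt fragment that
    would be inserted into the LLM's explanation output (hypothetical
    template; the actual insertion format is defined by the method)."""
    return [
        f'The correction "{before or "(insert)"}" -> "{after or "(delete)"}" is needed because'
        for before, after in points
    ]

points = extract_correction_points("He go to school yesterday .",
                                   "He went to school yesterday .")
# points == [("go", "went")]
prompts = build_pi_prompts(points)
```

Sequentially feeding each such fragment back to the model forces it to produce one explanation per correction point, rather than relying on the prompt alone to elicit a complete, well-formatted list.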


Related research

09/27/2018  Building a Lemmatizer and a Spell-checker for Sorani Kurdish
  The present paper aims at presenting a lemmatization and a word-level er...

04/18/2021  Improving Neural Model Performance through Natural Language Feedback on Their Explanations
  A class of explainable NLP models for reasoning tasks support their deci...

05/28/2023  Decoding the Underlying Meaning of Multimodal Hateful Memes
  Recent studies have proposed models that yielded promising performance f...

09/23/2020  Seq2Edits: Sequence Transduction Using Span-level Edit Operations
  We propose Seq2Edits, an open-vocabulary approach to sequence editing fo...

05/22/2022  Sequence-to-Action: Grammatical Error Correction with Action Guided Sequence Generation
  The task of Grammatical Error Correction (GEC) has received remarkable a...

08/14/2016  Numerically Grounded Language Models for Semantic Error Correction
  Semantic error detection and correction is an important task for applica...

11/04/2022  Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book
  Advances in natural language processing have resulted in large language ...
