Faithful Low-Resource Data-to-Text Generation through Cycle Training

05/24/2023
by Zhuoer Wang, et al.

Methods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not available for specific domains, leading us to seek an unsupervised approach to improve the faithfulness of output text. Since the problem is fundamentally one of consistency between the representations of the structured data and text, we evaluate the effectiveness of cycle training in this work. Cycle training uses two models that are inverses of each other: one that generates text from structured data, and one that generates structured data from natural language text. We show that cycle training, when initialized with a small amount of supervised data (100 samples in our case), achieves nearly the same performance as fully supervised approaches for the data-to-text generation task on the WebNLG, E2E, WTQ, and WSQL datasets. We perform extensive empirical analysis with automated evaluation metrics and a newly designed human evaluation schema to reveal the effectiveness of different cycle training strategies in reducing various types of generation errors. Our code is publicly available at https://github.com/Edillower/CycleNLG.
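To make the cycle-training idea concrete, below is a minimal sketch of one unsupervised cycle step using two T5-style seq2seq models (a data-to-text model and a text-to-data model). The linearized-triple format, model checkpoints, and hyperparameters here are illustrative assumptions, not the authors' exact CycleNLG implementation.

```python
# Minimal sketch of one cycle-training step with two seq2seq models:
# d2t (structured data -> text) and t2d (text -> structured data).
# Assumes Hugging Face transformers and PyTorch are installed.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
d2t = T5ForConditionalGeneration.from_pretrained("t5-base")  # data -> text
t2d = T5ForConditionalGeneration.from_pretrained("t5-base")  # text -> data

opt_d2t = torch.optim.AdamW(d2t.parameters(), lr=3e-5)
opt_t2d = torch.optim.AdamW(t2d.parameters(), lr=3e-5)

def encode(s):
    """Tokenize a string into input ids."""
    return tokenizer(s, return_tensors="pt", truncation=True).input_ids

def cycle_step(linearized_data, raw_text):
    """One data->text->data cycle and one text->data->text cycle."""
    # Data cycle: generate pseudo-text from the structured input, then
    # train the text-to-data model to reconstruct the original data.
    data_ids = encode(linearized_data)
    with torch.no_grad():
        pseudo_text = d2t.generate(data_ids, max_length=128)
    loss_t2d = t2d(input_ids=pseudo_text, labels=data_ids).loss
    opt_t2d.zero_grad(); loss_t2d.backward(); opt_t2d.step()

    # Text cycle: extract pseudo-data from the raw text, then train the
    # data-to-text model to reconstruct the original text.
    text_ids = encode(raw_text)
    with torch.no_grad():
        pseudo_data = t2d.generate(text_ids, max_length=128)
    loss_d2t = d2t(input_ids=pseudo_data, labels=text_ids).loss
    opt_d2t.zero_grad(); loss_d2t.backward(); opt_d2t.step()
    return loss_t2d.item(), loss_d2t.item()

# Example usage with a toy WebNLG-style triple set and an unpaired sentence.
cycle_step("subject: Alan Bean | occupation: astronaut | born: 1932",
           "Alan Bean, born in 1932, served as a NASA astronaut.")
```

In this reading, the small supervised seed (e.g., 100 annotated pairs) would be used to fine-tune both models before the unsupervised cycle loop begins; the reconstruction losses then push the two directions toward mutual consistency, which is what drives faithfulness.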
