ReGen: Reinforcement Learning for Text and Knowledge Base Generation using Pretrained Language Models

08/27/2021
by Pierre L. Dognin, et al.

Automatic construction of relevant Knowledge Bases (KBs) from text, and generation of semantically meaningful text from KBs, are both long-standing goals in Machine Learning. In this paper, we present ReGen, a bidirectional system for generating text from graphs and graphs from text that leverages Reinforcement Learning (RL) to improve performance. Graph linearization enables us to re-frame both tasks as sequence-to-sequence generation problems regardless of the generative direction, which in turn allows the use of Reinforcement Learning for sequence training, where the model is employed as its own critic, leading to Self-Critical Sequence Training (SCST). We present an extensive investigation demonstrating that the use of RL via SCST benefits graph and text generation on the WebNLG+ 2020 and TekGen datasets. Our system provides state-of-the-art results on WebNLG+ 2020, significantly improving upon published results from the WebNLG+ 2020 Challenge for both text-to-graph and graph-to-text generation tasks.
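To make the two ideas in the abstract concrete, the sketch below shows (1) a generic Self-Critical Sequence Training loss, where the reward of the model's own greedy decode serves as the baseline, and (2) a toy linearization of (subject, relation, object) triples into a token sequence. This is a minimal illustration, not the authors' implementation: the marker tokens (<S>, <P>, <O>) and the use of a METEOR-like sequence-level reward are assumptions made for the example.

```python
import torch

def scst_loss(log_probs_sampled, reward_sampled, reward_greedy):
    """Self-Critical Sequence Training loss (sketch).

    log_probs_sampled : (batch,) summed token log-probabilities of sequences
                        drawn by sampling from the model.
    reward_sampled    : (batch,) sequence-level reward (e.g. a METEOR-like
                        score against the reference) for the sampled sequences.
    reward_greedy     : (batch,) reward of the greedy-decoded sequences,
                        used as the "self-critical" baseline.
    """
    # Advantage: how much better sampling did than the model's own greedy output.
    advantage = reward_sampled - reward_greedy
    # Policy gradient: raise the likelihood of sampled sequences that beat the
    # greedy baseline, lower it for those that do worse.
    return -(advantage.detach() * log_probs_sampled).mean()


def linearize_triples(triples):
    """Flatten (subject, relation, object) triples into one token string, so that
    graph-to-text and text-to-graph can both be treated as sequence-to-sequence
    generation. The marker tokens here are hypothetical."""
    parts = []
    for subj, rel, obj in triples:
        parts.extend(["<S>", subj, "<P>", rel, "<O>", obj])
    return " ".join(parts)


if __name__ == "__main__":
    graph = [("ReGen", "uses", "SCST"), ("SCST", "is_a", "RL method")]
    print(linearize_triples(graph))

    log_p = torch.tensor([-12.3, -9.8])    # sampled-sequence log-probabilities
    r_sample = torch.tensor([0.42, 0.31])  # reward of sampled outputs
    r_greedy = torch.tensor([0.40, 0.35])  # reward of greedy outputs (baseline)
    print(scst_loss(log_p, r_sample, r_greedy))
```

Because the baseline is the model's own greedy output, no separate value network is needed; sequences are pushed up only when sampling outperforms what the model would have produced deterministically.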


Related research

Investigating Pretrained Language Models for Graph-to-Text Generation (07/16/2020): Graph-to-text generation, a subtask of data-to-text generation, aims to ...

RDF-to-Text Generation with Reinforcement Learning Based Graph-augmented Structural Neural Encoders (11/20/2021): Considering a collection of RDF triples, the RDF-to-text generation task...

Text Generation with Efficient (Soft) Q-Learning (06/14/2021): Maximum likelihood estimation (MLE) is the predominant algorithm for tra...

Learning to Generate Better Than Your LLM (06/20/2023): Reinforcement learning (RL) has emerged as a powerful paradigm for fine-...

Smart To-Do: Automatic Generation of To-Do Items from Emails (05/05/2020): Intelligent features in email service applications aim to increase produ...

Natural Question Generation with Reinforcement Learning Based Graph-to-Sequence Model (10/19/2019): Natural question generation (QG) aims to generate questions from a passa...

Selective Token Generation for Few-shot Natural Language Generation (09/17/2022): Natural language modeling with limited training data is a challenging pr...
