Controllable Generation from Pre-trained Language Models via Inverse Prompting

03/19/2021
by Xu Zou, et al.

Large-scale pre-trained language models have demonstrated strong capabilities of generating realistic text. However, it remains challenging to control the generation results. Previous approaches such as prompting are far from sufficient, which limits the usage of language models. To tackle this challenge, we propose an innovative method, inverse prompting, to better control text generation. The core idea of inverse prompting is to use generated text to inversely predict the prompt during beam search, which enhances the relevance between the prompt and the generated text and provides better controllability. Empirically, we pre-train a large-scale Chinese language model to perform a systematic study using human evaluation on the tasks of open-domain poem generation and open-domain long-form question answering. Our results show that our proposed method substantially outperforms the baselines and that our generation quality is close to human performance on some of the tasks. Readers can try our poem generation demo at https://pretrain.aminer.cn/apps/poetry.html, while our QA demo can be found at https://pretrain.aminer.cn/app/qa. For researchers, the code is provided at https://github.com/THUDM/InversePrompting.
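The abstract describes inverse prompting only at a high level. The sketch below illustrates the scoring idea under several assumptions: it uses GPT-2 from Hugging Face transformers as a stand-in for the authors' Chinese model, reranks finished candidates rather than scoring partial sequences inside beam search as the paper does, and the inverse-prompt template, the weight `lam`, and all function names are made up for illustration. For the released implementation, see the GitHub link above.

```python
# Minimal sketch of inverse-prompting-style reranking (not the paper's released code).
# Idea: among candidate continuations, prefer the one under which the model can best
# "recover" the original prompt from the generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer  # stand-in model, not the paper's Chinese LM

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def log_likelihood(context: str, target: str) -> float:
    """Sum of log-probabilities the model assigns to `target` given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # The logit at position p predicts the token at position p + 1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    target_positions = range(ctx_ids.size(1) - 1, input_ids.size(1) - 1)
    return sum(
        log_probs[0, pos, input_ids[0, pos + 1]].item() for pos in target_positions
    )


def inverse_prompt_score(prompt: str, candidate: str, lam: float = 1.0) -> float:
    """Combine forward fluency with how well the candidate predicts the prompt back.

    The inverse template ("... is about ...") is a made-up example; the paper
    constructs task-specific inverse prompts (e.g. for poems and QA).
    """
    forward = log_likelihood(prompt, candidate)          # p(candidate | prompt)
    inverse = log_likelihood(f'"{candidate}" is about', f" {prompt}")  # p(prompt | candidate)
    return forward + lam * inverse


# Usage: rerank a handful of candidates for a prompt.
prompt = "Write a short poem about the West Lake."
candidates = [
    "Mist drifts over quiet water, willows lean along the shore.",
    "The stock market rose sharply in early trading today.",
]
best = max(candidates, key=lambda c: inverse_prompt_score(prompt, c))
print(best)
```

The second term is what penalizes fluent but off-topic candidates: an unrelated continuation may score well under the forward model yet assign low probability to the prompt when asked to predict it back.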


Related research

05/02/2023 · Huatuo-26M, a Large-scale Chinese Medical QA Dataset
In this paper, we release a largest ever medical Question Answering (QA)...

10/19/2022 · BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining
Pre-trained language models have attracted increasing attention in the b...

06/13/2023 · WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
We present WebGLM, a web-enhanced question-answering system based on the...

07/25/2023 · FacTool: Factuality Detection in Generative AI – A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synth...

05/19/2022 · RankGen: Improving Text Generation with Large Ranking Models
Given an input sequence (or prefix), modern language models often assign...

02/17/2023 · Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints
The limits of open-ended generative models are unclear, yet increasingly...

04/13/2023 · Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study
Large decoder-only language models (LMs) can be largely improved in term...
