Controllable Text Generation with Neurally-Decomposed Oracle

05/27/2022
by Tao Meng, et al.

We propose a general and efficient framework to control auto-regressive generation models with a NeurAlly-Decomposed Oracle (NADO). Given a pre-trained base language model and a sequence-level Boolean oracle function, we propose to decompose the oracle function into token-level guidance to steer the base model in text generation. Specifically, the token-level guidance is approximated by a neural model trained with examples sampled from the base model, requiring no additional auxiliary labeled data. We present the closed-form optimal solution for incorporating the token-level guidance into the base model for controllable generation. We further provide a theoretical analysis of how the approximation quality of NADO affects the controllable generation results. Experiments on two applications, (1) text generation with lexical constraints and (2) machine translation with formality control, demonstrate that our framework efficiently guides the base model towards the given oracle while maintaining high generation quality.
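As a rough illustration of the setup described above, the sketch below shows how token-level guidance could reweight the base model's next-token distribution at decoding time. The function names (`nado_guided_step`, `sample_guided`), the `base_model`/`r_model` interfaces, and the exact reweighting form q(v | x_<t) ∝ p(v | x_<t) · R(x_<t, v) / R(x_<t) are assumptions made for illustration, not the authors' released implementation.

```python
import math
import torch

def nado_guided_step(base_logits, r_prefix, r_next, eps=1e-8):
    """One decoding step of oracle-guided generation (illustrative sketch).

    base_logits: [vocab_size] next-token logits from the pre-trained base model p.
    r_prefix:    float in (0, 1], a learned estimate R(x_<t) that the oracle
                 will eventually be satisfied given the current prefix.
    r_next:      [vocab_size] tensor, R(x_<t, v) for each candidate next token v.

    Assumed closed-form reweighting:
        q(v | x_<t)  proportional to  p(v | x_<t) * r_next[v] / r_prefix
    """
    log_p = torch.log_softmax(base_logits, dim=-1)
    log_q = log_p + torch.log(r_next + eps) - math.log(r_prefix + eps)
    return torch.softmax(log_q, dim=-1)  # renormalized guided distribution q


def sample_guided(base_model, r_model, prompt_ids, max_len=32):
    """Auto-regressively sample from the guided distribution q (sketch).

    `base_model(prefix) -> logits` and `r_model(prefix) -> (r_prefix, r_next)`
    are hypothetical interfaces standing in for the base LM and the trained
    token-level guidance model; they are not a published API.
    """
    prefix = list(prompt_ids)
    for _ in range(max_len):
        logits = base_model(prefix)          # base model p
        r_prefix, r_next = r_model(prefix)   # neural approximation of the oracle decomposition
        q = nado_guided_step(logits, r_prefix, r_next)
        prefix.append(torch.multinomial(q, num_samples=1).item())
    return prefix
```

In this reading, `r_model` plays the role of the neural approximation mentioned in the abstract: it would be trained on sequences sampled from the base model and labeled by the sequence-level oracle, which is why no additional labeled data is needed.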

