RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning

05/25/2022
by Mingkai Deng, et al.

Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only a small amount of downstream data is available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompts (e.g., embeddings), which fall short in interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompts, on the other hand, are difficult to optimize and are often created by "enumeration (e.g., paraphrasing)-then-selection" heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach based on reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward. To overcome the complexity and stochasticity of the reward signals produced by the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferable between different LMs while retaining significant performance, indicating that LM prompting may not follow human language patterns.
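
The abstract describes the approach only at a high level. Below is a minimal, illustrative sketch of RL-based discrete prompt optimization in that spirit, not the paper's implementation: it assumes a toy prompt-token vocabulary, a placeholder reward function standing in for the frozen downstream LM's task score, and a REINFORCE-style policy-gradient update with batch-wise z-score reward normalization in place of the paper's soft Q-learning and full reward stabilization design.

```python
# Illustrative sketch of discrete prompt optimization with RL.
# Assumptions (not from the paper's code): toy vocabulary, dummy reward,
# REINFORCE-style update instead of the paper's soft Q-learning.

import torch
import torch.nn as nn

VOCAB_SIZE = 50    # toy prompt-token vocabulary (assumption)
PROMPT_LEN = 5     # number of discrete prompt tokens to generate
HIDDEN = 64

class PromptPolicy(nn.Module):
    """Small policy network that emits discrete prompt tokens one at a time."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, HIDDEN)  # +1 for a BOS token
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def sample(self, batch_size):
        tokens, log_probs = [], []
        inp = torch.full((batch_size, 1), VOCAB_SIZE)       # BOS index
        hidden = None
        for _ in range(PROMPT_LEN):
            out, hidden = self.rnn(self.embed(inp), hidden)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            tok = dist.sample()
            tokens.append(tok)
            log_probs.append(dist.log_prob(tok))
            inp = tok.unsqueeze(1)
        return torch.stack(tokens, dim=1), torch.stack(log_probs, dim=1).sum(dim=1)

def task_reward(prompts):
    """Placeholder: in RLPrompt this would feed the generated prompt to the
    frozen task LM and score its downstream output (e.g., accuracy)."""
    return prompts.float().mean(dim=1) / VOCAB_SIZE        # dummy signal

policy = PromptPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    prompts, log_probs = policy.sample(batch_size=16)
    rewards = task_reward(prompts)
    # Reward stabilization (illustrative): z-score normalization within the
    # batch to damp the noisy, drifting reward scale from the LM environment.
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    loss = -(advantages.detach() * log_probs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The structural points carried over from the description are that only the small policy network is trained while the task LM stays frozen, the prompt is sampled as discrete tokens, and the raw task reward is normalized before the update to cope with its noisy scale.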


