Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

by Natasha Jaques et al.

This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: (1) generating novel musical melodies, and (2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
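In outline, the KL-control objective trades off a domain-specific task reward against the policy's divergence from the pre-trained MLE prior. The Python sketch below shows one way the resulting per-step shaped reward could be computed; the function name, the trade-off constant c, and the token-level decomposition of the sequence-level KL term are illustrative assumptions rather than the paper's exact derivation.

    def kl_control_reward(task_reward, log_p_prior, log_pi, c=0.5):
        # Shaped per-step reward under KL-control (hypothetical sketch).
        # task_reward: domain-specific reward for the sampled token
        # log_p_prior: log-prob of the token under the pre-trained MLE prior
        # log_pi:      log-prob of the token under the current RL policy
        # c:           trade-off between task reward and staying near the prior
        #
        # Maximizing this in expectation corresponds to maximizing
        # E[task reward] / c minus the KL divergence from the policy to the
        # prior, assuming the sequence-level KL decomposes token by token.
        return task_reward / c + log_p_prior - log_pi

    # Toy usage: a token the policy now strongly prefers (log_pi = -0.3)
    # but the prior considered unlikely (log_p_prior = -2.0) is penalized,
    # discouraging drift away from what was learned from data.
    r = kl_control_reward(task_reward=1.0, log_p_prior=-2.0, log_pi=-0.3)

Under this parameterization, a larger c weights the prior more heavily relative to the task reward, pulling generated sequences back toward the data distribution.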


RL with KL penalties is better viewed as Bayesian inference

Reinforcement learning (RL) is frequently employed in fine-tuning large ...

OptiGAN: Generative Adversarial Networks for Goal Optimized Sequence Generation

One of the challenging problems in sequence generation tasks is the opti...

Diverse Keyphrase Generation with Neural Unlikelihood Training

In this paper, we study sequence-to-sequence (S2S) keyphrase generation ...

Interactive Music Generation with Positional Constraints using Anticipation-RNNs

Recurrent Neural Networks (RNNs) are now widely used on sequence generat...

Emergence of Hierarchy via Reinforcement Learning Using a Multiple Timescale Stochastic RNN

Although recurrent neural networks (RNNs) for reinforcement learning (RL...
