Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control
This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: (1) generating novel musical melodies, and (2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
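The core recipe described in the abstract (pre-train an MLE prior, then fine-tune a second policy whose reward is penalized for drifting from that prior) can be illustrated as a per-token shaped reward. The sketch below is a minimal illustration, not the paper's implementation: the PyTorch setup, the function name kl_shaped_reward, and the weight c are assumptions, and the paper itself derives off-policy RL variants of this KL-control objective rather than the simple reward shaping shown here.

```python
import torch
import torch.nn.functional as F

def kl_shaped_reward(task_reward, policy_logits, prior_logits, actions, c=0.1):
    """Hypothetical sketch: combine a domain-specific reward with a
    per-token KL penalty toward the frozen MLE-trained prior policy.

    task_reward:   (batch, T) domain incentives (e.g. music-theory rules)
    policy_logits: (batch, T, vocab) logits of the RL policy RNN
    prior_logits:  (batch, T, vocab) logits of the frozen MLE prior RNN
    actions:       (batch, T) token indices sampled from the policy
    c:             assumed weight trading reward against prior proximity
    """
    log_pi = F.log_softmax(policy_logits, dim=-1)
    log_prior = F.log_softmax(prior_logits, dim=-1)
    # Log-probabilities of the tokens that were actually sampled.
    lp_pi = log_pi.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    lp_prior = log_prior.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    # Shaped reward: the sampled (log pi - log prior) term is a
    # single-sample estimate of the KL divergence from the prior.
    return task_reward - c * (lp_pi - lp_prior)
```

Under these assumptions, maximizing the expected sum of this shaped reward (e.g. with a REINFORCE-style update) trades off the domain incentives against staying close to the distribution the MLE RNN learned from data, which is what preserves the learned structure and sample diversity.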