Generating Text with Deep Reinforcement Learning

10/30/2015
by Hongyu Guo, et al.

We introduce a novel schema for sequence-to-sequence learning with a Deep Q-Network (DQN), which decodes the output sequence iteratively. The aim is to let the decoder tackle the easier portions of a sequence first and then turn to the more difficult parts. Specifically, in each iteration an encoder-decoder Long Short-Term Memory (LSTM) network automatically creates, from the input sequence, features that represent the internal state of the DQN and formulates a list of potential actions for it. Take rephrasing a natural sentence as an example: this list can contain ranked candidate words. The DQN then learns to decide which action (e.g., which word) to select from the list to modify the current decoded sequence. The newly modified output sequence is subsequently fed back to the DQN as input for the next decoding iteration. In each iteration we also bias the reinforcement learner's exploration toward sequence portions that were previously difficult to decode. For evaluation, the proposed strategy was trained to decode ten thousand natural sentences. Our experiments indicate that, compared with a left-to-right greedy beam-search LSTM decoder, the proposed method performed competitively when decoding sentences from the training set, but significantly outperformed the baseline, in terms of BLEU score, when decoding unseen sentences.
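
To make the decoding loop concrete, below is a minimal PyTorch sketch of one way this iterative refinement could look. It is an illustration under stated assumptions, not the authors' implementation: the EncoderDecoder and DQN modules, the decode function, the toy sizes, the epsilon-greedy policy, the all-zero initial draft, and the one-word-replacement-per-step action space are all assumptions, and the reward computation, Q-learning updates, and the bias toward previously difficult portions are omitted.

```python
import random
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN, TOP_K = 1000, 64, 128, 5  # toy sizes (assumed)

class EncoderDecoder(nn.Module):
    """LSTM encoder-decoder that proposes TOP_K candidate words per position;
    its decoder outputs double as the state features fed to the DQN."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.encoder = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.decoder = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.proj = nn.Linear(HIDDEN, VOCAB)

    def forward(self, src, current):
        _, state = self.encoder(self.embed(src))           # encode the input sequence
        out, _ = self.decoder(self.embed(current), state)  # re-encode the current draft
        logits = self.proj(out)                            # per-position word scores
        topk = logits.topk(TOP_K, dim=-1).indices          # ranked action list per position
        return out, topk                                   # (state features, candidate actions)

class DQN(nn.Module):
    """Q-network scoring each (position, candidate-word) action."""
    def __init__(self):
        super().__init__()
        self.q = nn.Linear(HIDDEN, TOP_K)

    def forward(self, feats):
        return self.q(feats)  # Q-value for each of the TOP_K candidates at each position

def decode(src, steps=10, eps=0.1):
    """Iteratively refine a draft sequence, applying one word replacement per step.
    Networks here are untrained; a real run would load learned weights."""
    enc_dec, dqn = EncoderDecoder(), DQN()
    current = torch.zeros_like(src)  # start from a placeholder draft (assumed)
    with torch.no_grad():
        for _ in range(steps):
            feats, topk = enc_dec(src, current)
            q = dqn(feats)                     # shape: (1, seq_len, TOP_K)
            if random.random() < eps:          # epsilon-greedy exploration
                pos = random.randrange(src.size(1))
                k = random.randrange(TOP_K)
            else:                              # greedy: best (position, word) action
                idx = q.view(-1).argmax().item()
                pos, k = divmod(idx, TOP_K)
            current = current.clone()
            current[0, pos] = topk[0, pos, k]  # apply the chosen word replacement
    return current

src = torch.randint(0, VOCAB, (1, 8))  # a toy input sentence of 8 token ids
print(decode(src))
```

The point the sketch tries to capture is that the freshly modified draft is re-encoded on every iteration, so the DQN's state features and candidate actions are always conditioned on the current decode, unlike a single left-to-right beam-search pass.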


