Learning to Decode for Future Success

01/23/2017
by Jiwei Li, et al.

We introduce a simple, general strategy for manipulating the behavior of a neural decoder so that it generates outputs with specific properties of interest (e.g., sequences of a pre-specified length). The model can be thought of as a simple version of the actor-critic framework, making decisions by interpolating the actor (the MLE-based token-generation policy) with the critic (a value function that estimates the future value of the desired property). We demonstrate that the approach can incorporate a variety of properties that standard neural sequence decoders cannot handle, such as sequence length and backward probability (the probability of sources given targets), and that it yields consistent improvements in abstractive summarization and machine translation when the property to be optimized is the BLEU or ROUGE score.
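To make the interpolation concrete, below is a minimal greedy-decoding sketch of the idea, steering generation toward a pre-specified length. All names here (decode_step, the toy actor and critic, the toy vocabulary) are illustrative assumptions for this sketch, not the paper's implementation.

```python
# A minimal sketch of the interpolated decoding rule described in the
# abstract. The toy model functions below are assumptions, not the
# authors' code.

import math

def decode_step(prefix, vocab, log_prob_fn, value_fn, lam=0.5):
    """Greedy choice of the next token: interpolate the actor's
    log-probability (the MLE token policy) with the critic's estimate
    of the future value of the desired property."""
    best_token, best_score = None, -math.inf
    for token in vocab:
        score = log_prob_fn(prefix, token) + lam * value_fn(prefix + [token])
        if score > best_score:
            best_token, best_score = token, score
    return best_token

# Toy usage: steer decoding toward a pre-specified length of 5 tokens.
VOCAB = ["a", "b", "<eos>"]
TARGET_LEN = 5

def toy_log_prob(prefix, token):
    # Uniform actor: the toy "MLE" model scores every token equally.
    return math.log(1.0 / len(VOCAB))

def toy_value(prefix):
    # Critic for the length property: ending exactly at TARGET_LEN is
    # ideal; continuing once the sequence must overshoot is penalized.
    if prefix[-1] == "<eos>":
        return -abs(len(prefix) - TARGET_LEN)
    return -max(0, len(prefix) + 1 - TARGET_LEN)

prefix = []
while not prefix or prefix[-1] != "<eos>":
    prefix.append(decode_step(prefix, VOCAB, toy_log_prob, toy_value))
print(prefix)  # ['a', 'a', 'a', 'a', '<eos>'], stopping at the target length
```

Note that the actor's distribution is left untouched; only the decoding-time score is shifted by the critic, which is what lets the same trained model be steered toward different properties by swapping the value function.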


