Discourse-Aware Neural Rewards for Coherent Text Generation

05/10/2018
by Antoine Bosselut, et al.

In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards that model cross-sentence ordering as a means of approximating the desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning using commonly used scores as rewards.
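To make the training setup concrete, below is a minimal sketch (not the authors' implementation) of policy-gradient training with a learned discourse reward. The `TinyGenerator`, the `discourse_reward` stub, and all hyperparameters are illustrative assumptions; in the paper the reward is a neural model trained on cross-sentence ordering, whereas here it is only a placeholder returning a scalar per sampled sequence.

```python
# A minimal sketch of REINFORCE-style training with a learned reward.
# All model names, sizes, and the reward stub are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAX_LEN = 1000, 64, 20

class TinyGenerator(nn.Module):
    """Toy autoregressive generator standing in for the text generator."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def sample(self, batch_size=4):
        """Sample token sequences and return them with their log-probs."""
        tokens = torch.zeros(batch_size, 1, dtype=torch.long)  # BOS = 0
        h, log_probs = None, []
        for _ in range(MAX_LEN):
            emb = self.embed(tokens[:, -1:])
            out, h = self.rnn(emb, h)
            dist = torch.distributions.Categorical(logits=self.out(out[:, -1]))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            tokens = torch.cat([tokens, tok.unsqueeze(1)], dim=1)
        return tokens[:, 1:], torch.stack(log_probs, dim=1)

def discourse_reward(sequences):
    """Placeholder for a learned cross-sentence ordering scorer.
    In the paper this is a neural model trained to score discourse
    structure; here it is a stub returning a per-sequence scalar."""
    return torch.rand(sequences.size(0))

generator = TinyGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(3):  # a few illustrative updates
    seqs, log_probs = generator.sample()
    rewards = discourse_reward(seqs)
    baseline = rewards.mean()                      # simple variance-reduction baseline
    advantage = (rewards - baseline).unsqueeze(1)  # broadcast over time steps
    loss = -(advantage * log_probs).mean()         # REINFORCE policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: mean reward {rewards.mean().item():.3f}")
```

In practice, the stub reward would be replaced by the learned neural scorer, and the generator would typically be pretrained with cross-entropy before the policy-gradient fine-tuning stage.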
