Hierarchical Text Generation and Planning for Strategic Dialogue
End-to-end models for strategic dialogue are challenging to train because linguistic and strategic aspects are entangled in latent state vectors. We introduce an approach to generating latent representations of dialogue moves by inducing sentence representations that maximize the likelihood of subsequent sentences and actions. The effect is to decouple much of the semantics of the utterance from its linguistic realisation. We then use these latent sentence representations for hierarchical language generation, planning and reinforcement learning. Experiments show that using our message representations increases the reward achieved by the model, improves the effectiveness of long-term planning using rollouts, and allows self-play reinforcement learning to improve decision making without diverging from human language. Our hierarchical latent-variable model outperforms previous work both linguistically and strategically.
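To make the setup concrete, the sketch below illustrates the idea of encoding a message into a latent "dialogue move" that is trained to predict the next message and action (rather than to reconstruct the message itself), with a separate word-level decoder handling linguistic realisation. This is a minimal illustrative sketch, not the authors' implementation; all class names, dimensions, and the soft-latent parameterisation are assumptions.

```python
# Illustrative sketch of a hierarchical latent-message model, assuming PyTorch.
# The latent z is optimised to predict the subsequent message/action, which is
# what decouples the semantics of an utterance from its surface wording.
import torch
import torch.nn as nn

class MessageEncoder(nn.Module):
    """Maps a tokenised message to a latent 'dialogue move' z (hypothetical module)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, latent_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        _, h = self.rnn(self.embed(tokens))         # h: (1, batch, hidden_dim)
        return torch.softmax(self.to_latent(h[-1]), dim=-1)  # soft latent z

class NextStepPredictor(nn.Module):
    """Predicts the next message's latent and the agent's action from z,
    supplying the training signal that grounds z in dialogue strategy."""
    def __init__(self, latent_dim=50, num_actions=10):
        super().__init__()
        self.next_latent = nn.Linear(latent_dim, latent_dim)
        self.action_head = nn.Linear(latent_dim, num_actions)

    def forward(self, z):
        return self.next_latent(z), self.action_head(z)

class WordDecoder(nn.Module):
    """Realises a latent dialogue move z as a token sequence (hierarchical generation)."""
    def __init__(self, vocab_size, latent_dim=50, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim + latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, z):                   # teacher-forced decoding
        zs = z.unsqueeze(1).expand(-1, tokens.size(1), -1)
        out, _ = self.rnn(torch.cat([self.embed(tokens), zs], dim=-1))
        return self.out(out)                        # logits over next tokens
```

Because planning and reinforcement learning operate on z rather than on word sequences, rollouts and self-play can explore alternative dialogue moves while the fixed word-level decoder keeps the generated language close to human usage.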