Reinforced Natural Language Interfaces via Entropy Decomposition

09/23/2021
by Xiaoran Wu, et al.

In this paper, we study the technical problem of developing conversational agents that can quickly adapt to unseen tasks, learn task-specific communication tactics, and help listeners complete complex, temporally extended tasks. We find that the uncertainty of language learning can be decomposed into an entropy term and a mutual information term, corresponding to the structural and functional aspects of language, respectively. Combined with reinforcement learning, our method automatically requests human samples for training when adapting to new tasks, and it learns communication protocols that are succinct and helpful for task completion. Human and simulation test results on a referential game and a 3D navigation game demonstrate the effectiveness of the proposed method.
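The decomposition mentioned above rests on a standard information-theoretic identity: for a message variable M and a task variable T, H(M) = I(M;T) + H(M|T), splitting the total uncertainty of the language into a functional part (mutual information with the task) and a structural remainder. The sketch below illustrates this identity on a hypothetical toy joint distribution; the variable names and counts are illustrative assumptions, not data from the paper.

```python
import math
from collections import Counter

# Hypothetical toy joint counts of messages M and task contexts T
# in a referential game (illustrative only, not the paper's data).
joint_counts = {
    ("red", "apple"): 4, ("red", "stop"): 1,
    ("go", "apple"): 1, ("go", "stop"): 4,
}
total = sum(joint_counts.values())
p_joint = {k: v / total for k, v in joint_counts.items()}

# Marginal distributions p(m) and p(t).
p_m, p_t = Counter(), Counter()
for (m, t), p in p_joint.items():
    p_m[m] += p
    p_t[t] += p

def H(dist):
    """Shannon entropy in bits of a probability table."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Mutual information I(M;T) = sum_{m,t} p(m,t) log2( p(m,t) / (p(m) p(t)) ).
I_mt = sum(p * math.log2(p / (p_m[m] * p_t[t]))
           for (m, t), p in p_joint.items())

# Conditional entropy H(M|T) = H(M,T) - H(T).
H_m_given_t = H(p_joint) - H(p_t)

# The decomposition: total message entropy splits exactly into
# a functional term I(M;T) and a structural term H(M|T).
assert abs(H(p_m) - (I_mt + H_m_given_t)) < 1e-9
```

The identity holds for any joint distribution, so the assertion passes regardless of the toy counts chosen; only the relative sizes of the two terms change.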
