Overprotective Training Environments Fall Short at Testing Time: Let Models Contribute to Their Own Training

03/20/2021
by Alberto Testoni, et al.

Despite important progress, conversational systems often generate dialogues that sound unnatural to humans. We conjecture that the reason lies in the mismatch between their training and testing conditions: agents are trained in a controlled "lab" setting but tested in the "wild". During training, they learn to generate an utterance given the human dialogue history; during testing, they must instead interact with each other, and hence deal with noisy data. We propose to fill this gap by training the model on mixed batches containing samples of both human-generated and machine-generated dialogues. We assess the validity of the proposed method on GuessWhat?!, a visual referential game.
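As a rough illustration of the mixed-batch idea, here is a minimal Python sketch. The paper's actual sampling scheme, mixing ratio, and loss are not specified in the abstract; `human_dialogues`, `model.generate_dialogue`, and `machine_ratio` are hypothetical placeholders, not names from the paper.

```python
import random

def make_mixed_batch(human_dialogues, model, batch_size, machine_ratio=0.5):
    """Build a training batch mixing gold human dialogues with
    dialogues the model generates itself (self-play).

    All names here are illustrative placeholders, not the paper's API.
    """
    n_machine = int(batch_size * machine_ratio)
    n_human = batch_size - n_machine

    # Gold samples drawn from the human dialogue corpus.
    batch = random.sample(human_dialogues, n_human)

    # Let the model contribute its own (noisier) dialogues, so that
    # training conditions resemble the interactive setting at test time.
    batch += [model.generate_dialogue() for _ in range(n_machine)]

    random.shuffle(batch)
    return batch
```

The point of the sketch is the exposure to machine-generated data during training: by seeing its own imperfect outputs alongside human data, the model is less brittle when it must condition on another agent's generated utterances at test time.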
