Supervised Seeded Iterated Learning for Interactive Language Learning

10/06/2020
by Yuchen Lu, et al.

Language drift has been one of the major obstacles to training language models through interaction. When word-based conversational agents are trained towards completing a task, they tend to invent a language of their own rather than leverage natural language. Recent literature offers two general methods that partially counter this phenomenon: Supervised Selfplay (S2P) and Seeded Iterated Learning (SIL). While S2P jointly optimizes interactive and supervised losses to counter the drift, SIL changes the training dynamics to prevent language drift from occurring. In this paper, we first highlight their respective weaknesses, namely late-stage training collapse for S2P and a higher negative likelihood on human corpora for SIL. Given these observations, we introduce Supervised Seeded Iterated Learning (SSIL) to combine both methods and minimize their respective weaknesses. We then show the effectiveness of SSIL in the language-drift translation game.
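As a concrete illustration, below is a minimal sketch of how the two ingredients described in the abstract could fit together in code: an S2P-style joint loss anchors the interactive phase to seed human data, while a SIL-style imitation step periodically distills the agent into a fresh student. This is a hedged toy example under stated assumptions, not the paper's implementation: the Speaker model, human_batch corpus, reward signal, loss weight, and loop lengths are hypothetical stand-ins, and the translation-game specifics are omitted.

```python
# Toy sketch of combining S2P and SIL (PyTorch); all names are stand-ins.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, SEQ = 32, 64, 8

class Speaker(nn.Module):
    """Tiny sequence model standing in for the conversational agent."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)  # (batch, seq, vocab) logits

def supervised_loss(model, batch):
    """Cross-entropy against seed human data (the S2P supervised term)."""
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits = model(inputs)
    return F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1))

def interactive_loss(model, batch):
    """Placeholder REINFORCE-style term for the interactive (game) objective."""
    logits = model(batch[:, :-1])
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()
    reward = torch.rand(actions.shape[0], 1)  # stand-in for task reward
    return -(reward * dist.log_prob(actions).sum(-1, keepdim=True)).mean()

def human_batch():
    return torch.randint(0, VOCAB, (16, SEQ))  # stand-in for the seed corpus

teacher = Speaker()
for generation in range(5):
    # 1) Interactive phase with an S2P-style joint loss: the teacher is
    #    trained on the game objective plus a supervised anchor.
    opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
    for _ in range(20):
        loss = interactive_loss(teacher, human_batch()) \
             + 1.0 * supervised_loss(teacher, human_batch())
        opt.zero_grad(); loss.backward(); opt.step()

    # 2) Imitation phase (the SIL-style step): a fresh student distills the
    #    teacher's greedy outputs before the next interactive generation.
    student = Speaker()
    s_opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(20):
        prompts = human_batch()
        with torch.no_grad():
            labels = teacher(prompts[:, :-1]).argmax(-1)
        logits = student(prompts[:, :-1])
        s_loss = F.cross_entropy(logits.reshape(-1, VOCAB), labels.reshape(-1))
        s_opt.zero_grad(); s_loss.backward(); s_opt.step()

    teacher = copy.deepcopy(student)  # student becomes the next teacher
```

The design point this sketch tries to mirror is that the supervised anchor acts during interaction (as in S2P), while the generational student reset acts between interactions (as in SIL); how the two are weighted and scheduled in the actual method is detailed in the paper.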


Related research

03/28/2020
Countering Language Drift with Seeded Iterated Learning
Supervised learning methods excel at capturing statistical properties of...

04/15/2021
Multitasking Inhibits Semantic Drift
When intelligent agents communicate to accomplish shared goals, how do...

09/10/2019
Countering Language Drift via Visual Grounding
Emergent multi-agent communication protocols are very different from...

04/26/2018
Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game
Building intelligent agents that can communicate with and learn from...

12/12/2020
Concept Drift Monitoring and Diagnostics of Supervised Learning Models via Score Vectors
Supervised learning models are one of the most fundamental classes of...

07/21/2022
Leveraging Natural Supervision for Language Representation Learning and Generation
Recent breakthroughs in Natural Language Processing (NLP) have been...