
Countering Language Drift via Visual Grounding

09/10/2019
by Jason Lee, et al.
Facebook AI Research
New York University

Emergent multi-agent communication protocols are very different from natural language and not easily interpretable by humans. We find that agents that were initially pretrained to produce natural language can also experience detrimental language drift: when a non-linguistic reward is used in a goal-based task, e.g. some scalar success metric, the communication protocol may easily and radically diverge from natural language. We recast translation as a multi-agent communication game and examine auxiliary training constraints for their effectiveness in mitigating language drift. We show that a combination of syntactic (language model likelihood) and semantic (visual grounding) constraints gives the best communication performance, allowing pretrained agents to retain English syntax while learning to accurately convey the intended meaning.
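To make the described setup concrete, below is a minimal sketch of one training step combining the three signals: a policy-gradient loss on the task reward, a language-model likelihood term (syntactic constraint), and a visual-grounding likelihood term (semantic constraint). Everything here is an illustrative assumption rather than the paper's actual implementation: the APIs `sender.sample`, `lm.log_prob`, `grounder.log_prob`, the `task_reward_fn` callback, and the weights `alpha` and `beta` are hypothetical placeholders.

```python
import torch  # tensors below follow the PyTorch API


def training_step(sender, lm, grounder, task_reward_fn, batch,
                  alpha=0.1, beta=0.1):
    """One hypothetical update combining the three training signals.

    sender         -- agent that emits a message (an English sentence)
    lm             -- fixed pretrained language model scoring fluency
    grounder       -- image-conditioned model scoring message/image fit
    task_reward_fn -- scalar success metric for the goal-based task
    """
    # 1. Goal-based reward: sample a message, score it on the task, and
    #    apply a REINFORCE-style policy-gradient loss (assumed API).
    message, log_prob = sender.sample(batch["src"])
    reward = task_reward_fn(message, batch["tgt"])
    pg_loss = -(reward.detach() * log_prob).mean()

    # 2. Syntactic constraint: negative log-likelihood of the sampled
    #    message under the fixed pretrained LM keeps the output fluent.
    lm_loss = -lm.log_prob(message).mean()

    # 3. Semantic constraint: visual grounding, e.g. the likelihood of
    #    the message given the paired image, anchors the meaning.
    grounding_loss = -grounder.log_prob(message, batch["image"]).mean()

    # Weighted sum; alpha and beta are illustrative hyperparameters.
    total = pg_loss + alpha * lm_loss + beta * grounding_loss
    total.backward()
    return total.item()
```

In the paper's communication game the task reward comes from a downstream translation step, and the grounding signal comes from images paired with the caption data, which is why both auxiliary terms can be computed on the same batch.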



Related research

12/02/2017  Interactive Reinforcement Learning for Object Grounding via Self-Talking
Humans are able to identify a referred visual object in a complex scene ...

06/26/2017  Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog
A number of recent works have proposed techniques for end-to-end learnin...

03/28/2020  Countering Language Drift with Seeded Iterated Learning
Supervised learning methods excel at capturing statistical properties of...

10/06/2020  Supervised Seeded Iterated Learning for Interactive Language Learning
Language drift has been one of the major obstacles to train language mod...

04/15/2021  Multitasking Inhibits Semantic Drift
When intelligent agents communicate to accomplish shared goals, how do t...

10/27/2022  Natural Language Syntax Complies with the Free-Energy Principle
Natural language syntax yields an unbounded array of hierarchically stru...