Emergent Systematic Generalization in a Situated Agent

10/01/2019
by Felix Hill, et al.

The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we demonstrate strong emergent systematic generalisation in a neural network agent and isolate the factors that support this ability. In environments ranging from a grid-world to a rich interactive 3D Unity room, we show that an agent can correctly exploit the compositional nature of a symbolic language to interpret never-seen-before instructions. We observe this capacity not only when instructions refer to object properties (colors and shapes) but also when they refer to verb-like motor skills (lifting and putting) and abstract modifying operations (negation). We identify three factors that can contribute to this facility for systematic generalisation: (a) the number of object/word experiences in the training set; (b) the invariances afforded by a first-person, egocentric perspective; and (c) the variety of visual input experienced by an agent that perceives the world actively over time. Thus, while neural nets trained in idealised or reduced situations may fail to exhibit a compositional or systematic understanding of their experience, this competence can readily emerge when, like human learners, they have access to many examples of richly varying, multi-modal observations as they learn.
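
To make the kind of evaluation described above concrete, the sketch below is a hypothetical illustration (not the authors' code) of how a compositional train/test split might be constructed: the agent sees every word during training, but certain verb/colour/shape combinations are withheld and used only to measure systematic generalisation. The vocabularies and held-out pairings here are illustrative assumptions.

```python
from itertools import product

# Illustrative vocabularies (hypothetical; the paper's environments use
# colours, shapes, and motor verbs such as "lift" and "put").
colours = ["red", "blue", "green", "yellow"]
shapes = ["ball", "box", "toy", "car"]
verbs = ["lift", "put"]

# Every instruction is a (verb, colour, shape) combination.
all_instructions = list(product(verbs, colours, shapes))

# Hold out specific combinations: each individual word appears in training,
# but never in these particular pairings.
held_out = {("put", "blue", "ball"), ("lift", "green", "box")}

train_set = [i for i in all_instructions if i not in held_out]
test_set = [i for i in all_instructions if i in held_out]

# Systematic generalisation is then measured as success on the held-out
# instructions after training only on train_set.
print(f"train: {len(train_set)} instructions, test: {len(test_set)}")
```

Splits of this form are what the abstract refers to as "never-seen-before instructions": the novelty lies in the combination of familiar words, not in the words themselves.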

Related research

03/11/2020
A Benchmark for Systematic Generalization in Grounded Language Understanding
Human language users easily interpret expressions that describe unfamili...

05/08/2023
How Do In-Context Examples Affect Compositional Generalization?
Compositional generalization–understanding unseen combinations of seen p...

06/05/2018
Learning to Follow Language Instructions with Adversarial Reward Induction
Recent work has shown that deep reinforcement-learning agents can learn ...

09/03/2020
Grounded Language Learning Fast and Slow
Recent work has shown that large text-based neural language models, trai...

11/29/2020
Self-supervised Visual Reinforcement Learning with Object-centric Representations
Autonomous agents need large repertoires of skills to act reasonably on ...

07/10/2021
What underlies rapid learning and systematic generalization in humans
Despite the groundbreaking successes of neural networks, contemporary mo...

07/19/2018
Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks
Systematic compositionality is the ability to recombine meaningful units...
