Compositional Generalization in Grounded Language Learning via Induced Model Sparsity

07/06/2022
by Sam Spilsbury et al.

We provide a study of how induced model sparsity can help achieve compositional generalization and better sample efficiency in grounded language learning problems. We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations. We show that standard neural architectures do not always yield compositional generalization. To address this, we design an agent that contains a goal identification module that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal. The output of the goal identification module is the input to a value iteration network planner. Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations. We examine the internal representations of our agent and find the correct correspondences between words in its dictionary and attributes in the environment.
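The goal-identification idea described above can be sketched as a sparse word-to-attribute correlation that is composed across instruction words to score grid cells. This is a minimal illustration, not the paper's implementation: the embedding sizes, the low-temperature softmax used here to encourage sparsity, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: instruction vocabulary, object attributes, embedding dim.
VOCAB, ATTRS, DIM = 6, 6, 8

# Learned embeddings (random stand-ins here) for words and attributes.
word_emb = rng.normal(size=(VOCAB, DIM))
attr_emb = rng.normal(size=(ATTRS, DIM))

def sparse_correlation(word_ids, temperature=0.1):
    """Soft word-to-attribute correlation matrix. A low softmax temperature
    pushes each word toward a single attribute, standing in for the
    induced sparsity described in the abstract."""
    scores = word_emb[word_ids] @ attr_emb.T           # (n_words, ATTRS)
    probs = np.exp(scores / temperature)
    return probs / probs.sum(axis=1, keepdims=True)

def goal_map(instruction, grid_attrs):
    """Score every cell by how well its attributes match each instruction
    word, then compose by taking the product over words, so only cells
    matching all words score highly.
    grid_attrs: (H, W, ATTRS) multi-hot attribute observations."""
    corr = sparse_correlation(instruction)             # (n_words, ATTRS)
    per_word = grid_attrs @ corr.T                     # (H, W, n_words)
    return per_word.prod(axis=-1)                      # (H, W)

# Toy 2x2 grid: only cell (1, 0) carries any attributes (indices 2 and 4).
grid = np.zeros((2, 2, ATTRS))
grid[1, 0, [2, 4]] = 1.0
scores = goal_map(np.array([2, 4]), grid)
goal = np.unravel_index(scores.argmax(), scores.shape)
```

In the paper the resulting goal map is fed to a value iteration network planner; here `goal` simply marks the highest-scoring cell.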

Related research

10/23/2022  When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks
Humans can reason compositionally whilst grounding language utterances t...

03/28/2017  A Deep Compositional Framework for Human-like Language Acquisition in Virtual Environment
We tackle a task where an agent learns to navigate in a 2D maze-like env...

01/27/2022  Recursive Decoding: A Situated Cognition Approach to Compositional Generation in Grounded Language Understanding
Compositional generalization is a troubling blind spot for neural langua...

10/02/2022  Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language
Deep learning models struggle with compositional generalization, i.e. th...

08/26/2015  Alignment-based compositional semantics for instruction following
This paper describes an alignment-based model for interpreting natural l...

07/19/2018  Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks
Systematic compositionality is the ability to recombine meaningful units...

02/12/2020  Deep compositional robotic planners that follow natural language commands
We demonstrate how a sampling-based robotic planner can be augmented to ...
