Generalization in Multimodal Language Learning from Simulation

08/03/2021
by Aaron Eisermann, et al.

Neural networks can be powerful function approximators, able to model high-dimensional feature distributions from a subset of examples drawn from the target distribution. Naturally, they perform well at generalizing within the limits of their target function, but they often fail to generalize outside of the explicitly learned feature space. It therefore remains an open research question whether and how neural network-based architectures can be deployed for systematic reasoning. Many studies have shown evidence of poor generalization, but they often work with abstract data or are limited to single-channel input. Humans, however, learn and interact through a combination of multiple sensory modalities and rarely rely on just one. To investigate compositional generalization in a multimodal setting, we generate an extensible dataset with multimodal input sequences from simulation. We investigate the influence of the underlying training data distribution on compositional generalization in a minimal LSTM-based network trained in a supervised, time-continuous setting. We find that compositional generalization fails in simple setups but improves with the number of objects and actions, and particularly with a high degree of color overlap between objects. Furthermore, multimodality strongly improves compositional generalization in settings where a pure vision model struggles to generalize.
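The abstract mentions a minimal LSTM-based network processing multimodal input sequences in a time-continuous setting. As a rough illustration of what such a model can look like, the sketch below implements a single LSTM cell in numpy that fuses per-timestep vision and language features by concatenation. All dimensions and the fusion strategy are hypothetical assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultimodalLSTMCell:
    """Minimal LSTM cell fusing vision and language features per timestep
    by concatenation (a hypothetical sketch, not the paper's exact model)."""

    def __init__(self, vision_dim, lang_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = vision_dim + lang_dim + hidden_dim
        # One stacked weight matrix for the four LSTM gates (i, f, o, g).
        self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, in_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, vision_feat, lang_feat, h, c):
        # Fuse modalities with the previous hidden state, then gate.
        z = np.concatenate([vision_feat, lang_feat, h])
        gates = self.W @ z + self.b
        i, f, o, g = np.split(gates, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

# Run a short multimodal sequence through the cell.
cell = MultimodalLSTMCell(vision_dim=8, lang_dim=4, hidden_dim=16)
h, c = np.zeros(16), np.zeros(16)
for t in range(5):
    vision = np.random.default_rng(t).normal(size=8)        # e.g. CNN features
    lang = np.random.default_rng(t + 100).normal(size=4)    # e.g. word embedding
    h, c = cell.step(vision, lang, h, c)
print(h.shape)
```

Concatenation is only one fusion choice; in a multimodal setting like the one studied here, the point is simply that the recurrent state integrates both channels at every timestep, so the language input can disambiguate what vision alone cannot.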


