On the Generalization of Learned Structured Representations

04/25/2023
by Andrea Dittadi, et al.

Despite tremendous progress over the past decade, deep learning methods generally fall short of human-level systematic generalization. It has been argued that explicitly capturing the underlying structure of data should allow connectionist systems to generalize in a more predictable and systematic manner. Indeed, evidence in humans suggests that interpreting the world in terms of symbol-like compositional entities may be crucial for intelligent behavior and high-level reasoning. Another common limitation of deep learning systems is that they require large amounts of training data, which can be expensive to obtain. In representation learning, large datasets are leveraged to learn generic data representations that may be useful for efficient learning of arbitrary downstream tasks.

This thesis is about structured representation learning. We study methods that learn, with little or no supervision, representations of unstructured data that capture its hidden structure. In the first part of the thesis, we focus on representations that disentangle the explanatory factors of variation of the data. We scale up disentangled representation learning to a novel robotic dataset, and perform a systematic large-scale study on the role of pretrained representations for out-of-distribution generalization in downstream robotic tasks.

The second part of this thesis focuses on object-centric representations, which capture the compositional structure of the input in terms of symbol-like entities, such as objects in visual scenes. Object-centric learning methods learn to form meaningful entities from unstructured input, enabling symbolic information processing on a connectionist substrate. In this study, we train a selection of methods on several common datasets, and investigate their usefulness for downstream tasks and their ability to generalize out of distribution.
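The downstream-evaluation paradigm the abstract alludes to (freeze a pretrained representation, fit a small readout on in-distribution data, then measure performance on an out-of-distribution split) can be sketched in a few lines. The following is a minimal, hypothetical PyTorch example, not the thesis code: the encoder, the data loaders, and the feature dimension are placeholders supplied by the caller.

```python
# Minimal sketch (hypothetical, not the thesis code): probe a frozen pretrained
# encoder with a linear readout, then report accuracy on an OOD split.
import torch
import torch.nn as nn


def probe_frozen_encoder(encoder: nn.Module,
                         train_loader, ood_loader,
                         feat_dim: int, num_classes: int,
                         epochs: int = 10, lr: float = 1e-3) -> float:
    """Train a linear readout on frozen features, then evaluate it OOD."""
    encoder.eval()                      # the learned representation stays fixed
    readout = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(readout.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in train_loader:       # in-distribution training data
            with torch.no_grad():
                z = encoder(x)          # e.g. a disentangled or object-centric code
            opt.zero_grad()
            loss = loss_fn(readout(z), y)
            loss.backward()
            opt.step()

    correct, total = 0, 0
    with torch.no_grad():
        for x, y in ood_loader:         # held-out split with novel factor values
            pred = readout(encoder(x)).argmax(dim=-1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)      # downstream accuracy under distribution shift
```

Comparing this accuracy across different pretrained encoders (and across OOD splits) is one simple way to operationalize "usefulness for downstream tasks" and "generalization out of distribution" as studied in the thesis.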


Related research

- Generalization and Robustness Implications in Object-Centric Learning (07/01/2021)
- On the Binding Problem in Artificial Neural Networks (12/09/2020)
- Combining Representation Learning with Logic for Language Processing (12/27/2017)
- On the Transition from Neural Representation to Symbolic Knowledge (08/03/2023)
- Multi-Object Representation Learning with Iterative Variational Inference (03/01/2019)
- Robust and Controllable Object-Centric Learning through Energy-based Models (10/11/2022)
- Learning to generalize to new compositions in image understanding (08/27/2016)
