
A Perspective on Objects and Systematic Generalization in Model-Based RL

by   Sjoerd van Steenkiste, et al.

To meet the diverse challenges of real-world problems, an intelligent agent must be able to dynamically construct a model of its environment. Objects facilitate the modular reuse of prior knowledge and the combinatorial construction of such models. In this work, we argue that dynamically bound features (objects) do not simply emerge in connectionist models of the world. We identify several requirements that must be fulfilled to overcome this limitation and highlight corresponding inductive biases.
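To make the notion of "modular reuse of prior knowledge" concrete, the following is a minimal, hypothetical sketch of an object-centric (slot-based) scene representation: each object occupies its own feature slot, and a single shared dynamics function is applied to every slot, so the same knowledge is reused regardless of how many objects the scene contains. All names and the toy dynamics here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def shared_dynamics(slot, dt=0.1):
    """Advance one object's [x, y, vx, vy] state by one Euler step.

    The same parameters (here: none, just the integration rule) serve
    every object -- this sharing is what enables modular reuse.
    """
    x, y, vx, vy = slot
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def step_scene(slots, dt=0.1):
    """Apply the shared per-object model to every slot in the scene."""
    return np.stack([shared_dynamics(s, dt) for s in slots])

# A scene with two object slots.
scene = np.array([
    [0.0, 0.0, 1.0, 0.0],   # object A, moving right
    [1.0, 1.0, 0.0, -1.0],  # object B, moving down
])
next_scene = step_scene(scene)

# Combinatorial generalization: the same function handles a scene with
# more objects than it was "built" for, with no change to the model.
bigger_scene = np.vstack([scene, [[2.0, 2.0, 0.5, 0.5]]])
print(step_scene(bigger_scene).shape)  # (3, 4)
```

By contrast, a monolithic model that maps a fixed-size scene vector to the next scene vector would have to relearn the same physics for every object position and every scene size, which is the failure mode the abstract's argument targets.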
