
Deep Sets for Generalization in RL

by Tristan Karch et al.

This paper investigates encoding object-centered representations in the reward function and policy architectures of a language-guided reinforcement learning agent. This is done using a combination of object-wise permutation-invariant networks inspired by Deep Sets and gated-attention mechanisms. In a 2D procedurally generated world where agents pursue goals expressed in natural language while navigating and interacting with objects, we show that these architectures generalize strongly to out-of-distribution goals. We study generalization to varying numbers of objects at test time and further extend the object-centered architectures to goals involving relational reasoning.
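To illustrate the two mechanisms named in the abstract, here is a minimal sketch of an object-centered encoder that combines a Deep Sets architecture (a shared network applied to each object, followed by sum pooling, which makes the output invariant to object ordering) with gated attention (the language-goal embedding produces a sigmoid gate that modulates each object's features). All layer sizes, weight names, and the single-layer form of each component are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_set_gated_encoder(objects, goal_embedding, W_gate, W_phi, W_rho):
    """Permutation-invariant encoding of a set of objects, modulated
    by a language-goal embedding via gated attention (hypothetical
    single-layer version of each component).

    objects:        (n_objects, d_obj) per-object feature vectors
    goal_embedding: (d_goal,)          embedding of the language goal
    W_gate, W_phi, W_rho: weight matrices (shapes assumed for this sketch)
    """
    # Gated attention: the goal yields a sigmoid gate over feature
    # dimensions, broadcast across all objects.
    gate = 1.0 / (1.0 + np.exp(-goal_embedding @ W_gate))  # (d_obj,)
    gated = objects * gate                                 # (n_objects, d_obj)

    # Deep Sets: shared network phi per object, then sum pooling,
    # which is invariant to the order (and handles any number) of objects.
    per_object = np.tanh(gated @ W_phi)                    # (n_objects, d_hid)
    pooled = per_object.sum(axis=0)                        # (d_hid,)

    # rho maps the pooled set code to the final representation.
    return np.tanh(pooled @ W_rho)                         # (d_out,)

# Toy dimensions, chosen only for illustration.
d_obj, d_goal, d_hid, d_out, n_objects = 8, 6, 16, 4, 5
W_gate = rng.normal(size=(d_goal, d_obj))
W_phi = rng.normal(size=(d_obj, d_hid))
W_rho = rng.normal(size=(d_hid, d_out))

objs = rng.normal(size=(n_objects, d_obj))
goal = rng.normal(size=(d_goal,))

z = deep_set_gated_encoder(objs, goal, W_gate, W_phi, W_rho)
z_perm = deep_set_gated_encoder(objs[::-1], goal, W_gate, W_phi, W_rho)
assert np.allclose(z, z_perm)  # shuffling objects does not change the code
```

The sum pooling is what allows the same weights to process any number of objects, which is the property the paper exploits when testing generalization to varying object counts.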



