
DECSTR: Learning Goal-Directed Abstract Behaviors using Pre-Verbal Spatial Predicates in Intrinsically Motivated Agents

06/12/2020
by Ahmed Akakzia, et al. (Inria, UPMC)

Intrinsically motivated agents freely explore their environment and set their own goals. Such goals are traditionally represented as specific states, but recent works introduced the use of language to facilitate abstraction. Language can, for example, represent goals as sets of general properties that surrounding objects should verify. However, language-conditioned agents are trained simultaneously to understand language and to act, which seems to contrast with how children learn: infants demonstrate goal-oriented behaviors and abstract spatial concepts very early in their development, before mastering language. Guided by these findings from developmental psychology, we introduce a high-level state representation based on natural semantic predicates that describe spatial relations between objects and that are known to emerge early in infancy. In a robotic manipulation environment, our DECSTR system explores this representation space by manipulating objects, and efficiently learns to achieve any reachable configuration within it. It does so by leveraging an object-centered modular architecture, a symmetry inductive bias, and a new form of automatic curriculum learning for goal selection and policy learning. As with children, language acquisition takes place in a second phase, independently from goal-oriented sensorimotor learning. This is done via a new goal generation module, conditioned on instructions describing expected transformations in object relations. We present ablation studies for each component and highlight several advantages of targeting abstract goals over specific ones. We further show that using this intermediate representation enables efficient language grounding, by evaluating agents on sequences of language instructions and their logical combinations.
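To make the idea of a predicate-based state representation concrete, the following is a minimal sketch of how spatial relations between objects can be mapped to a binary semantic configuration vector. The specific predicates (`close`, `above`), thresholds, and function names are illustrative assumptions, not the paper's exact implementation; note that a symmetric predicate like closeness is evaluated over unordered object pairs, while an asymmetric one like "above" is evaluated over ordered pairs.

```python
from itertools import combinations, permutations
import numpy as np

def close(p1, p2, eps=0.1):
    # Hypothetical closeness predicate: true when object centers
    # lie within a distance threshold (eps is an assumed parameter).
    return float(np.linalg.norm(p1 - p2) < eps)

def above(p1, p2, eps=0.05):
    # Hypothetical "above" predicate: object 1 sits roughly on top
    # of object 2 (higher in z, aligned in the horizontal plane).
    return float(p1[2] - p2[2] > eps and np.linalg.norm(p1[:2] - p2[:2]) < eps)

def semantic_configuration(positions):
    # Map raw 3D object positions to a binary predicate vector.
    # Symmetric predicates use unordered pairs; asymmetric ones use ordered pairs.
    n = len(positions)
    sym = [close(positions[i], positions[j]) for i, j in combinations(range(n), 2)]
    asym = [above(positions[i], positions[j]) for i, j in permutations(range(n), 2)]
    return np.array(sym + asym)

# With 3 objects this yields 3 unordered + 6 ordered = 9 predicates.
pos = np.array([[0.0, 0.0, 0.0],    # object 0 on the table
                [0.0, 0.0, 0.06],   # object 1 stacked on object 0
                [1.0, 0.0, 0.0]])   # object 2 far away
cfg = semantic_configuration(pos)   # e.g. close(0,1)=1, above(1,0)=1, rest 0
```

A goal in this space is then simply a target predicate vector, abstracting away the exact coordinates at which the configuration is realized.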

