Feature-Attending Recurrent Modules for Generalization in Reinforcement Learning

12/15/2021
by Wilka Carvalho, et al.

Deep reinforcement learning (Deep RL) has recently seen significant progress in developing algorithms for generalization. However, most algorithms target a single type of generalization setting. In this work, we study generalization across three disparate task structures: (a) tasks composed of spatial and temporal compositions of regularly occurring object motions; (b) tasks composed of active perception of and navigation towards regularly occurring 3D objects; and (c) tasks composed of remembering goal information over sequences of regularly occurring object configurations. These diverse task structures all share an underlying idea of compositionality: task completion always involves combining recurring segments of task-oriented perception and behavior. We hypothesize that an agent can generalize within a task structure if it can discover representations that capture these recurring task segments. For our tasks, this corresponds to representations for recognizing individual object motions, for navigating towards 3D objects, and for navigating through object configurations. Taking inspiration from cognitive science, we term such representations for recurring segments of an agent's experience "perceptual schemas". We propose Feature-Attending Recurrent Modules (FARM), which learns a state representation where perceptual schemas are distributed across multiple, relatively small recurrent modules. We compare FARM to recurrent architectures that leverage spatial attention, which reduces observation features to a weighted average over spatial positions. Our experiments indicate that our feature-attention mechanism better enables FARM to generalize across the diverse object-centric domains we study.
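The abstract highlights two design choices worth making concrete: a state representation distributed across several small recurrent modules, and attention over observation features (channels) rather than over spatial positions. The sketch below is not the authors' implementation; the class names, layer sizes, sigmoid gating, and module count are illustrative assumptions, and it omits details such as how modules exchange information with each other.

```python
import torch
import torch.nn as nn


class FeatureAttendingModule(nn.Module):
    """One small recurrent module that attends over observation features
    (channels). Hypothetical sketch; sizes and gating are assumptions."""

    def __init__(self, obs_channels: int, hidden_size: int):
        super().__init__()
        # The module's hidden state produces per-channel attention weights.
        self.to_channel_weights = nn.Linear(hidden_size, obs_channels)
        self.rnn = nn.GRUCell(obs_channels, hidden_size)

    def forward(self, obs_feats: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # obs_feats: [B, C, H, W] convolutional features; h: [B, hidden_size].
        attn = torch.sigmoid(self.to_channel_weights(h))      # [B, C] feature weights
        gated = obs_feats * attn[:, :, None, None]            # emphasize attended channels
        pooled = gated.mean(dim=(2, 3))                       # pool out space -> [B, C]
        return self.rnn(pooled, h)


class SpatialAttentionBaseline(nn.Module):
    """Comparison architecture: reduce features to a weighted average over
    spatial positions, then update a single recurrent state."""

    def __init__(self, obs_channels: int, hidden_size: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, obs_channels)
        self.rnn = nn.GRUCell(obs_channels, hidden_size)

    def forward(self, obs_feats: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        b, c, hgt, wid = obs_feats.shape
        q = self.query(h)                                      # [B, C] query from state
        logits = torch.einsum("bc,bchw->bhw", q, obs_feats)    # score each position
        weights = torch.softmax(logits.view(b, -1), dim=-1)    # softmax over H*W
        flat = obs_feats.view(b, c, -1)                        # [B, C, H*W]
        context = torch.einsum("bcn,bn->bc", flat, weights)    # weighted spatial average
        return self.rnn(context, h)


class FARMState(nn.Module):
    """State representation distributed across several small feature-attending
    modules; their hidden states are concatenated for the policy."""

    def __init__(self, obs_channels: int, hidden_size: int = 128, n_modules: int = 4):
        super().__init__()
        self.modules_ = nn.ModuleList(
            [FeatureAttendingModule(obs_channels, hidden_size) for _ in range(n_modules)]
        )

    def forward(self, obs_feats, hiddens):
        # hiddens: list of [B, hidden_size] tensors, one per module.
        new_hiddens = [m(obs_feats, h) for m, h in zip(self.modules_, hiddens)]
        return torch.cat(new_hiddens, dim=-1), new_hiddens
```

In this sketch, every module reads the same convolutional feature map but uses its own hidden state to choose which channels to emphasize, and the policy would consume the concatenated module states. The spatial-attention counterpart instead scores each position and averages features across positions, which is the kind of baseline the abstract contrasts FARM against.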


