Possibility Before Utility: Learning And Using Hierarchical Affordances

by Robby Costales, et al.

Reinforcement learning algorithms struggle on tasks with complex hierarchical dependency structures. Humans and other intelligent agents do not waste time assessing the utility of every high-level action in existence, but instead only consider ones they deem possible in the first place. By focusing only on what is feasible, or "afforded", at the present moment, an agent can spend more time both evaluating the utility of and acting on what matters. To this end, we present Hierarchical Affordance Learning (HAL), a method that learns a model of hierarchical affordances in order to prune impossible subtasks for more effective learning. Existing works in hierarchical reinforcement learning provide agents with structural representations of subtasks but are not affordance-aware, and by grounding our definition of hierarchical affordances in the present state, our approach is more flexible than the multitude of approaches that ground their subtask dependencies in a symbolic history. While these logic-based methods often require complete knowledge of the subtask hierarchy, our approach is able to utilize incomplete and varying symbolic specifications. Furthermore, we demonstrate that relative to non-affordance-aware methods, HAL agents are better able to efficiently learn complex tasks, navigate environment stochasticity, and acquire diverse skills in the absence of extrinsic supervision – all of which are hallmarks of human learning.
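The core idea in the abstract — evaluate utility only over subtasks an affordance model deems possible — can be illustrated with a minimal sketch. This is not the paper's implementation; the array values, the `select_subtask` helper, and the hard boolean mask are all illustrative assumptions standing in for HAL's learned affordance model and subtask utilities.

```python
import numpy as np

def select_subtask(q_values, afforded_mask):
    """Pick the highest-utility subtask among those deemed possible.

    q_values: per-subtask utility estimates (hypothetical values).
    afforded_mask: boolean output of a learned affordance model;
    True means the subtask is judged feasible in the current state.
    """
    # Impossible subtasks are masked out before the utility argmax,
    # so their utility estimates are never acted on.
    masked = np.where(afforded_mask, q_values, -np.inf)
    return int(np.argmax(masked))

# Example: 4 subtasks, only subtasks 1 and 3 currently afforded.
q = np.array([2.0, 0.5, 3.0, 1.0])
afforded = np.array([False, True, False, True])
print(select_subtask(q, afforded))  # selects subtask 3
```

Note that subtask 2 has the highest raw utility but is never considered, since it is not afforded — this is the "possibility before utility" ordering the title refers to.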




Related papers:

- Learning Temporally Extended Skills in Continuous Domains as Symbolic Actions for Planning
- Learning how to Interact with a Complex Interface using Hierarchical Reinforcement Learning
- Hierarchical Reinforcement Learning in Complex 3D Environments
- How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents
- Model-based Utility Functions
- Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis
- An Improved Intelligent Agent for Mining Real-Time Databases Using Modified Cortical Learning Algorithms
