Possibility Before Utility: Learning And Using Hierarchical Affordances

03/23/2022
by Robby Costales, et al.

Reinforcement learning algorithms struggle on tasks with complex hierarchical dependency structures. Humans and other intelligent agents do not waste time assessing the utility of every high-level action in existence, but instead only consider ones they deem possible in the first place. By focusing only on what is feasible, or "afforded", at the present moment, an agent can spend more time both evaluating the utility of and acting on what matters. To this end, we present Hierarchical Affordance Learning (HAL), a method that learns a model of hierarchical affordances in order to prune impossible subtasks for more effective learning. Existing works in hierarchical reinforcement learning provide agents with structural representations of subtasks but are not affordance-aware, and by grounding our definition of hierarchical affordances in the present state, our approach is more flexible than the multitude of approaches that ground their subtask dependencies in a symbolic history. While these logic-based methods often require complete knowledge of the subtask hierarchy, our approach is able to utilize incomplete and varying symbolic specifications. Furthermore, we demonstrate that relative to non-affordance-aware methods, HAL agents are better able to efficiently learn complex tasks, navigate environment stochasticity, and acquire diverse skills in the absence of extrinsic supervision – all of which are hallmarks of human learning.
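To illustrate the core "possibility before utility" idea in code, the sketch below masks out subtasks that a learned affordance model deems infeasible before a high-level policy ranks the remainder by utility. This is a minimal illustration of affordance-based pruning, not the paper's actual implementation; the names (`AffordanceModel`, `select_subtask`), the network architecture, and the 0.5 threshold are assumptions made for the example.

```python
# Illustrative sketch of affordance-aware subtask selection (assumed design,
# not HAL's actual architecture).
import torch
import torch.nn as nn


class AffordanceModel(nn.Module):
    """Predicts, for each subtask, the probability that it is achievable now."""

    def __init__(self, state_dim: int, num_subtasks: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_subtasks),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Sigmoid output: independent "is this subtask afforded?" estimates.
        return torch.sigmoid(self.net(state))


def select_subtask(q_values: torch.Tensor,
                   affordances: torch.Tensor,
                   threshold: float = 0.5) -> int:
    """Pick the highest-utility subtask among those deemed afforded.

    Subtasks whose predicted affordance falls below `threshold` are masked
    out before the argmax; if nothing clears the threshold, fall back to a
    plain argmax over all subtasks.
    """
    mask = affordances >= threshold
    if not mask.any():
        return int(q_values.argmax())
    masked_q = q_values.masked_fill(~mask, float("-inf"))
    return int(masked_q.argmax())
```

In this toy setup, the affordance model would be trained separately (e.g., from observed subtask completions) and consulted at every high-level decision, so the policy spends its evaluation budget only on subtasks that currently appear possible.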

Related research:

- Learning Temporally Extended Skills in Continuous Domains as Symbolic Actions for Planning (07/11/2022). Problems which require both long-horizon planning and continuous control...
- Learning how to Interact with a Complex Interface using Hierarchical Reinforcement Learning (04/21/2022). Hierarchical Reinforcement Learning (HRL) allows interactive agents to d...
- Hierarchical Reinforcement Learning in Complex 3D Environments (02/28/2023). Hierarchical Reinforcement Learning (HRL) agents have the potential to d...
- How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents (10/07/2021). This work addresses the problem of sensing the world: how to learn a mul...
- Model-based Utility Functions (11/16/2011). Orseau and Ring, as well as Dewey, have recently described problems, inc...
- Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis (09/14/2023). Open-ended learning benefits immensely from the use of symbolic methods ...
- An Improved Intelligent Agent for Mining Real-Time Databases Using Modified Cortical Learning Algorithms (01/02/2016). Cortical Learning Algorithms based on the Hierarchical Temporal Memory, ...
