Discovering and Exploiting Sparse Rewards in a Learned Behavior Space

11/02/2021
by   Giuseppe Paolo, et al.

Learning optimal policies in sparse-reward settings is difficult, as the learning agent receives little to no feedback on the quality of its actions. In these situations, a good strategy is to focus on exploration, in the hope of discovering a reward signal to improve on. A learning algorithm capable of dealing with this kind of setting has to be able to (1) explore possible agent behaviors and (2) exploit any discovered reward. Efficient exploration algorithms have been proposed that require a predefined behavior space, which associates to each agent its resulting behavior in a space known to be worth exploring. The need to define this space in advance is a limitation of these algorithms. In this work, we introduce STAX, an algorithm designed to learn a behavior space on the fly and to explore it while efficiently optimizing any discovered reward. It does so by separating the exploration and learning of the behavior space from the exploitation of the reward through an alternating two-step process. In the first step, STAX builds a repertoire of diverse policies while learning a low-dimensional representation of the high-dimensional observations generated during the policies' evaluation. In the exploitation step, emitters are used to optimize the performance of the discovered rewarding solutions. Experiments conducted on three different sparse-reward environments show that STAX performs comparably to existing baselines while requiring much less prior information about the task, as it autonomously builds the behavior space.
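The alternating two-step loop described in the abstract can be sketched in a few lines. The snippet below is a minimal toy illustration, not the authors' implementation: the environment, the linear "encoder" standing in for the learned low-dimensional representation, the novelty threshold, and the hill-climbing "emitter" are all simplified assumptions chosen to make the structure of the algorithm (explore for diversity, then exploit any discovered reward) visible and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, POLICY_DIM, LATENT_DIM = 32, 4, 2
PROJ = rng.normal(size=(POLICY_DIM, OBS_DIM))   # fixed toy "environment"
TARGET = rng.normal(size=POLICY_DIM)            # hidden rewarding region

def evaluate(policy):
    # Toy stand-in for a rollout: a high-dimensional observation plus a
    # sparse reward, non-zero only near the hidden target.
    obs = np.tanh(policy @ PROJ)
    reward = max(0.0, 1.0 - float(np.linalg.norm(policy - TARGET)))
    return obs, reward

def encode(obs, W):
    # Linear encoder standing in for the learned behavior descriptor.
    return obs @ W

def novelty(descriptor, archive_descriptors, k=5):
    # Mean distance to the k nearest descriptors already in the repertoire.
    if not archive_descriptors:
        return np.inf
    d = np.linalg.norm(np.array(archive_descriptors) - descriptor, axis=1)
    return float(np.mean(np.sort(d)[:k]))

archive = []  # repertoire of (policy, descriptor, reward)
W = rng.normal(size=(OBS_DIM, LATENT_DIM)) * 0.1

for generation in range(50):
    # Step 1: exploration -- add behaviorally novel policies.
    for _ in range(10):
        p = rng.normal(size=POLICY_DIM)
        obs, r = evaluate(p)
        desc = encode(obs, W)
        if novelty(desc, [d for _, d, _ in archive]) > 0.1:
            archive.append((p, desc, r))
    # (The periodic update of W from collected observations is omitted.)

    # Step 2: exploitation -- an "emitter", here reduced to one
    # hill-climbing mutation of the best rewarding solution so far.
    rewarding = [a for a in archive if a[2] > 0.0]
    if rewarding:
        best = max(rewarding, key=lambda a: a[2])
        child = best[0] + 0.1 * rng.normal(size=POLICY_DIM)
        obs, r = evaluate(child)
        if r > best[2]:
            archive.append((child, encode(obs, W), r))

best_reward = max(r for _, _, r in archive)
print(f"repertoire size: {len(archive)}, best reward: {best_reward:.3f}")
```

In the actual algorithm the encoder is trained online on the observations produced by the repertoire, and the emitters are population-based local optimizers rather than a single mutation; the point of the sketch is only the separation of the two phases.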


Related research

- Learning in Sparse Rewards settings through Quality-Diversity algorithms (03/02/2022)
  In the Reinforcement Learning (RL) framework, the learning is guided thr...

- Sparse Reward Exploration via Novelty Search and Emitters (02/05/2021)
  Reward-based optimization algorithms require both exploration, to find r...

- Learning Achievement Structure for Structured Exploration in Domains with Sparse Reward (04/30/2023)
  We propose Structured Exploration with Achievements (SEA), a multi-stage...

- Unsupervised Learning and Exploration of Reachable Outcome Space (09/12/2019)
  Performing Reinforcement Learning in sparse rewards settings, with very ...

- Knowledge is reward: Learning optimal exploration by predictive reward cashing (09/17/2021)
  There is a strong link between the general concept of intelligence and t...

- Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis (06/17/2022)
  Each year, expert-level performance is attained in increasingly-complex ...

- Time-Myopic Go-Explore: Learning A State Representation for the Go-Explore Paradigm (01/13/2023)
  Very large state spaces with a sparse reward signal are difficult to exp...
