Discovering Options for Exploration by Minimizing Cover Time

03/02/2019
by Yuu Jinnai, et al.

One of the main challenges in reinforcement learning is solving tasks with sparse rewards. We show that the difficulty of discovering a distant rewarding state in an MDP is bounded by the expected cover time of a random walk over the graph induced by the MDP's transition dynamics. We therefore propose to accelerate exploration by constructing options that minimize cover time. The proposed algorithm finds an option that provably reduces the expected number of steps a uniform random walk needs to visit every state in the state space. We show empirically that the proposed algorithm reduces learning time in several domains with sparse rewards.
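The abstract does not spell out how such an option is built, so the following is a minimal sketch of one spectral heuristic consistent with the cover-time argument: the expected cover time of a uniform random walk shrinks as the graph's algebraic connectivity (the second-smallest eigenvalue of its Laplacian) grows, so a natural choice is to add an option connecting the two states at the extremes of the corresponding eigenvector (the Fiedler vector). The function name, the NumPy-only implementation, and the toy path graph below are illustrative assumptions, not the authors' code.

import numpy as np

def covering_option_endpoints(adjacency):
    """Given a symmetric adjacency matrix of the MDP's state-transition
    graph, return a pair of states to connect with a new option.

    Heuristic (assumed here): cover time drops as algebraic connectivity
    grows, so connect the two states at the extremes of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue).
    """
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                       # second-smallest eigenvalue's eigenvector
    return int(np.argmin(fiedler)), int(np.argmax(fiedler))

# Toy example: a 6-state path graph 0-1-2-3-4-5.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

src, dst = covering_option_endpoints(A)
print(src, dst)  # the two endpoints of the path

On the path graph the returned states are the two endpoints; an option connecting them turns the path into a cycle and shortens the walk's effective diameter, which is exactly the kind of shortcut that tightens the cover-time bound.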

