Directed Exploration in PAC Model-Free Reinforcement Learning

08/31/2018
by Min-hwan Oh et al.

We study an exploration method for model-free RL that generalizes counter-based exploration bonus methods and takes into account the long-term exploratory value of actions rather than a single-step look-ahead. We propose a model-free RL method that modifies Delayed Q-learning and utilizes this long-term exploration bonus with provable efficiency. We show that the proposed method finds a near-optimal policy in polynomial time (PAC-MDP), and we provide experimental evidence that it is an efficient exploration method.
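The sketch below is not the authors' algorithm (the paper modifies Delayed Q-learning); it is a minimal tabular illustration of the idea described in the abstract, under assumed names and hyperparameters: a classic one-step count bonus versus an auxiliary value function that bootstraps that bonus, so the exploratory value of under-visited regions propagates back to the actions that lead toward them.

```python
import numpy as np
from collections import defaultdict

# Minimal sketch (hypothetical names, not the paper's Delayed Q-learning variant):
# Q holds task value, E holds exploratory value, N holds visit counts.
class LongTermBonusQLearner:
    def __init__(self, n_actions, alpha=0.1, gamma=0.99, beta=0.05):
        self.n_actions = n_actions
        self.alpha = alpha   # learning rate (assumed)
        self.gamma = gamma   # discount factor (assumed)
        self.beta = beta     # weight on the exploration bonus (assumed)
        self.Q = defaultdict(lambda: np.zeros(n_actions))  # task value
        self.E = defaultdict(lambda: np.zeros(n_actions))  # long-term exploratory value
        self.N = defaultdict(lambda: np.zeros(n_actions))  # state-action visit counts

    def act(self, s):
        # Act greedily w.r.t. task value plus the propagated exploration bonus.
        return int(np.argmax(self.Q[s] + self.beta * self.E[s]))

    def update(self, s, a, r, s_next):
        self.N[s][a] += 1
        # One-step count bonus, as in counter-based exploration methods.
        bonus = 1.0 / np.sqrt(self.N[s][a])

        # Standard Q-learning update driven by the environment reward only.
        td_q = r + self.gamma * self.Q[s_next].max() - self.Q[s][a]
        self.Q[s][a] += self.alpha * td_q

        # The E update bootstraps the count bonus, so exploratory value
        # is propagated over multiple steps rather than used as a
        # single-step look-ahead.
        td_e = bonus + self.gamma * self.E[s_next].max() - self.E[s][a]
        self.E[s][a] += self.alpha * td_e
```

In this toy version the only difference from a plain counter-based bonus is that the bonus enters a Bellman-style backup of its own; the paper's contribution is a Delayed Q-learning modification of this idea with a PAC-MDP guarantee, which the sketch does not attempt to reproduce.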

Related research

10/25/2021 - Recurrent Off-policy Baselines for Memory-based Continuous Control
05/19/2020 - Safe Learning for Near Optimal Scheduling
04/11/2018 - DORA The Explorer: Directed Outreaching Reinforcement Action-Selection
07/14/2020 - Single-partition adaptive Q-learning
09/05/2022 - SlateFree: a Model-Free Decomposition for Reinforcement Learning with Slate Actions
11/28/2016 - Improving Policy Gradient by Exploring Under-appreciated Rewards
04/05/2016 - Bounded Optimal Exploration in MDP
