Online Reinforcement Learning for Real-Time Exploration in Continuous State and Action Markov Decision Processes
This paper presents a new method to learn online policies in continuous-state, continuous-action, model-free Markov decision processes, with two properties that are crucial for practical applications. First, the policies are implementable at very low computational cost: once the policy is computed, the action corresponding to a given state is obtained in time logarithmic in the number of samples used. Second, our method is versatile: it does not rely on any a priori knowledge of the structure of optimal policies. We build upon the Fitted Q-Iteration algorithm, which represents the Q-value as the average of several regression trees. Our algorithm, the Fitted Policy Forest algorithm (FPF), computes a regression forest representing the Q-value and transforms it into a single tree representing the policy, while controlling the size of the policy through resampling and leaf merging. We introduce an adaptation of Multi-Resolution Exploration (MRE) that is particularly suited to FPF. We assess the performance of FPF on three classical reinforcement learning benchmarks: the "Inverted Pendulum", the "Double Integrator", and "Car on the Hill", and show that FPF equals or outperforms other algorithms, even though those algorithms rely on policy representations specifically chosen to fit each of the three problems. Finally, we show that the combination of FPF and MRE finds nearly optimal solutions in problems where ϵ-greedy approaches would fail.
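To make the Fitted Q-Iteration backbone concrete, here is a minimal sketch of the batch loop with extremely randomized trees and a greedy policy read off the learned Q-forest. The toy one-dimensional dynamics, the discretized action grid used for the max, and all names below are illustrative assumptions, not the paper's benchmarks; FPF's forest-to-single-tree conversion with resampling and leaf merging is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)

def step(s, a):
    # Hypothetical 1D dynamics: state is a position, action a bounded push.
    s_next = np.clip(s + 0.1 * a, -1.0, 1.0)
    reward = -s_next ** 2  # reward peaks at the origin
    return s_next, reward

# Batch of one-step transitions (s, a, r, s'), as Fitted Q-Iteration assumes.
S = rng.uniform(-1, 1, size=2000)
A = rng.uniform(-1, 1, size=2000)
S2, R = step(S, A)

gamma = 0.95
actions = np.linspace(-1, 1, 9)  # action grid used to approximate max_a' Q(s', a')
q = None
for _ in range(30):
    if q is None:
        target = R  # first iteration: Q_1(s, a) = immediate reward
    else:
        q_next = np.stack([
            q.predict(np.column_stack([S2, np.full_like(S2, a)]))
            for a in actions
        ])
        target = R + gamma * q_next.max(axis=0)
    # Refit the Q-value as an average of regression trees on the new targets.
    q = ExtraTreesRegressor(n_estimators=50, random_state=0)
    q.fit(np.column_stack([S, A]), target)

def policy(s):
    # Greedy action with respect to the learned Q-forest.
    qs = [q.predict([[s, a]])[0] for a in actions]
    return actions[int(np.argmax(qs))]
```

Note that evaluating this greedy policy requires one forest prediction per candidate action; FPF's contribution is precisely to replace this with a single policy tree, so that acting only costs one logarithmic-depth tree traversal.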