Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning

12/05/2012
by Jake Bouvrie, et al.

Many problems in sequential decision making and stochastic control have natural multiscale structure: sub-tasks are assembled to accomplish complex goals. Systematically inferring and leveraging hierarchical structure, particularly beyond a single level of abstraction, has remained a longstanding challenge. We describe a fast multiscale procedure for repeatedly compressing, or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of sub-problems at different scales is automatically determined. Coarsened MDPs are themselves independent, deterministic MDPs, and may be solved using existing algorithms. The multiscale representation delivered by this procedure decouples sub-tasks from each other and can lead to substantial improvements in convergence rates both locally within sub-problems and globally across sub-problems, yielding significant computational savings. A second fundamental aspect of this work is that these multiscale decompositions yield new transfer opportunities across different problems: solutions of sub-tasks at different levels of the hierarchy may be transferred to new problems. Localized transfer of policies and potential operators at arbitrary scales is emphasized. Finally, we demonstrate compression and transfer in a collection of illustrative domains, including examples involving discrete and continuous state spaces.
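The abstract describes the approach only at a high level. As a rough illustration of the general compress-then-solve pattern, the sketch below builds a coarse MDP by naive uniform state aggregation (not the paper's homogenization operator), solves it by value iteration, and lifts the coarse values back to warm-start value iteration on the original problem. The toy chain MDP, the two-cluster partition, and all function names are illustrative assumptions, not the authors' construction.

# Minimal, hypothetical sketch of a compress-then-solve scheme for MDPs.
# This is NOT the paper's homogenization procedure; it uses naive state
# aggregation purely to illustrate the coarse/fine workflow.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8, V0=None):
    """Standard value iteration. P: (S, A, S) transitions, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S) if V0 is None else V0.copy()
    while True:
        Q = R + gamma * P @ V          # (S, A) action values
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

def compress(P, R, clusters):
    """Aggregate a fine MDP into a coarse one by averaging within clusters."""
    K, A = len(clusters), R.shape[1]
    Pc = np.zeros((K, A, K))
    Rc = np.zeros((K, A))
    for i, ci in enumerate(clusters):
        Rc[i] = R[ci].mean(axis=0)
        for j, cj in enumerate(clusters):
            # average probability of jumping from cluster i into cluster j
            Pc[i, :, j] = P[np.ix_(ci, range(A), cj)].sum(axis=2).mean(axis=0)
    return Pc, Rc

# Toy 8-state chain with two deterministic actions (left / right).
S, A = 8, 2
P = np.zeros((S, A, S))
R = np.zeros((S, A))
for s in range(S):
    P[s, 0, max(s - 1, 0)] = 1.0          # action 0: move left
    P[s, 1, min(s + 1, S - 1)] = 1.0      # action 1: move right
R[S - 1, :] = 1.0                          # reward for reaching the last state

clusters = [list(range(0, 4)), list(range(4, 8))]   # assumed 2-cluster partition
Pc, Rc = compress(P, R, clusters)
Vc, _ = value_iteration(Pc, Rc)            # solve the coarse problem first

# Lift coarse values to the fine scale and warm-start the fine-scale solve.
V0 = np.zeros(S)
for i, ci in enumerate(clusters):
    V0[ci] = Vc[i]
V_fine, policy = value_iteration(P, R, V0=V0)
print("coarse values:", np.round(Vc, 3))
print("fine values:  ", np.round(V_fine, 3))
print("policy:       ", policy)

In this toy setting the warm start simply reduces the number of fine-scale iterations; the paper's contribution is a principled, repeatable coarsening that also decouples sub-problems and supports transfer across tasks.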
