Hyperspace Neighbor Penetration Approach to Dynamic Programming for Model-Based Reinforcement Learning Problems with Slowly Changing Variables in A Continuous State Space

06/10/2021
by   Vincent Zha, et al.

Slowly changing variables in a continuous state space constitute an important category of reinforcement learning problems and see application in many domains, such as modeling a climate control system in which temperature, humidity, etc. change slowly over time. However, this subject is less addressed in recent studies. Classical methods and their variants, such as Dynamic Programming with Tile Coding, which discretizes the state space, fail to handle slowly changing variables because they cannot capture the tiny change in each transition step: it is computationally expensive or impossible to establish a sufficiently granular grid. In this paper, we introduce a Hyperspace Neighbor Penetration (HNP) approach that solves this problem. At each transition step, HNP captures the state's partial "penetration" into its neighboring hyper-tiles in the gridded hyperspace, so a transition need not cross a tile boundary for its change to be captured. HNP therefore allows a very coarse grid, which keeps the computation feasible. HNP assumes near-linearity of the transition function in a local region, which is commonly satisfied. In summary, HNP can be orders of magnitude more efficient than classical methods in handling slowly changing variables in reinforcement learning. We have made an industrial implementation of HNP with great success.
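The "penetration" idea described above can be read as a soft assignment of a continuous state to the corner nodes of the hyper-tile that encloses it: instead of snapping the state to a single tile, the fractional position in each dimension distributes weight over neighboring grid nodes, so even a sub-tile move shifts weight toward the neighbor it penetrates. The sketch below illustrates this with multilinear-interpolation weights; it is a minimal illustration of the concept, not the paper's implementation, and all function and parameter names (`penetration_weights`, `interpolate_value`, `lo`, `hi`, `n_tiles`) are assumptions of this sketch.

```python
import numpy as np

def penetration_weights(state, lo, hi, n_tiles):
    """Distribute a continuous state over the corner nodes of its
    enclosing hyper-tile (multilinear interpolation), so a tiny,
    sub-tile transition still changes the weights.
    Illustrative sketch; names are not from the paper."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    # position in grid coordinates, with n_tiles nodes per dimension
    pos = (np.asarray(state, float) - lo) / (hi - lo) * (n_tiles - 1)
    pos = np.clip(pos, 0, n_tiles - 1)
    base = np.floor(pos).astype(int)
    base = np.minimum(base, n_tiles - 2)   # keep base + 1 in range
    frac = pos - base                      # penetration into the next tile
    d = len(base)
    weights = {}
    # enumerate the 2^d corner nodes of the enclosing hyper-tile
    for corner in range(2 ** d):
        idx, w = [], 1.0
        for k in range(d):
            bit = (corner >> k) & 1
            idx.append(base[k] + bit)
            w *= frac[k] if bit else (1.0 - frac[k])
        weights[tuple(idx)] = w
    return weights

def interpolate_value(V, state, lo, hi):
    """Value of a continuous state as the weighted sum of the values
    stored at the neighboring grid nodes it penetrates."""
    w = penetration_weights(state, lo, hi, np.array(V.shape))
    return sum(wt * V[idx] for idx, wt in w.items())
```

For example, in one dimension with node values `[0, 1, 2]` over `[0, 2]`, a state at 0.5 sits halfway into the first tile and receives value 0.5; a slow transition from 0.5 to 0.51 changes the weights even though both states fall in the same tile, which is the property a hard tile assignment lacks.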

Related research:

09/10/2019  Multi-Step Greedy and Approximate Real Time Dynamic Programming
Real Time Dynamic Programming (RTDP) is a well-known Dynamic Programming...

08/03/2022  Contractivity of Bellman Operator in Risk Averse Dynamic Programming with Infinite Horizon
The paper deals with a risk averse dynamic programming problem with infi...

07/11/2012  Dynamic Programming for Structured Continuous Markov Decision Problems
We describe an approach for exploiting structure in Markov Decision Proc...

11/28/2018  Toward breaking the curse of dimensionality: an FPTAS for stochastic dynamic programs with multidimensional actions and scalar states
We propose a Fully Polynomial-Time Approximation Scheme (FPTAS) for stoc...

02/08/2018  Learning and Querying Fast Generative Models for Reinforcement Learning
A key challenge in model-based reinforcement learning (RL) is to synthes...

12/13/2020  Reinforcement Learning with Subspaces using Free Energy Paradigm
In large-scale problems, standard reinforcement learning algorithms suff...

02/14/2012  A temporally abstracted Viterbi algorithm
Hierarchical problem abstraction, when applicable, may offer exponential...
