The Concept of Criticality in Reinforcement Learning

10/16/2018
by Yitzhak Spielberg, et al.

N-step algorithms for optimal control in reinforcement learning carry a well-known bias-variance trade-off: a small n results in a large bias, while a large n leads to large variance. This holds regardless of the particular algorithm, whether n-step SARSA, n-step Expected SARSA, or n-step Tree Backup, yet it has rarely been addressed in current research, and the literature offers no straightforward recipe for the best choice of n. While all current n-step algorithms use a fixed value of n over the entire state space, we extend the framework of n-step updates by allowing each state its own step number. We propose a solution within the context of human-aided reinforcement learning. Our approach is based on the observation that a human can learn more efficiently if she receives input regarding the criticality of a given state, and thus the amount of attention she needs to invest in learning in that state. This observation is related to the idea that each state of the MDP has a certain measure of criticality, which indicates how much the choice of action in that state influences the return. In our algorithm, the RL agent utilizes the criticality measure, a function provided by a human trainer, to locally choose the best step number n for the update of the Q-function.
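The abstract's core idea, a per-state step number n derived from a criticality measure and plugged into an n-step SARSA update, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy chain environment, the linear mapping from criticality to n, and the direction of that mapping (more critical state, smaller n) are all assumptions made here for concreteness. In the paper's setting the criticality function would be supplied by a human trainer.

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, s, actions, eps):
    """Random action with probability eps, else a random greedy action."""
    if random.random() < eps:
        return random.choice(actions)
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

class ChainEnv:
    """Toy 5-state chain: moving right from state 3 reaches the goal
    (state 4) for reward 1; every other transition yields reward 0."""
    actions = ["L", "R"]
    def reset(self):
        return 0
    def step(self, s, a):
        s2 = min(s + 1, 4) if a == "R" else max(s - 1, 0)
        done = (s2 == 4)
        return s2, (1.0 if done else 0.0), done

def varying_n_sarsa_episode(env, Q, criticality, eps=0.1, gamma=0.9,
                            alpha=0.5, n_max=8):
    """One episode of n-step SARSA in which each updated state s_t gets
    its own step number n_t from criticality(s_t) in [0, 1]. Updates are
    applied from the stored trajectory after the episode ends, which is
    simpler to read than the usual online n-step buffer."""
    traj = []                          # (state, action, reward) triples
    s = env.reset()
    a = epsilon_greedy(Q, s, env.actions, eps)
    done = False
    while not done:
        s2, r, done = env.step(s, a)
        traj.append((s, a, r))
        if not done:
            a = epsilon_greedy(Q, s2, env.actions, eps)
            s = s2
    T = len(traj)
    for t in range(T):
        s_t, a_t, _ = traj[t]
        # Assumed mapping: criticality 1 -> n = 1, criticality 0 -> n = n_max.
        n = max(1, n_max - int(criticality(s_t) * (n_max - 1)))
        G, disc = 0.0, 1.0
        end = min(t + n, T)
        for i in range(t, end):        # accumulate up to n discounted rewards
            G += disc * traj[i][2]
            disc *= gamma
        if end < T:                    # bootstrap from Q at step t + n
            s_n, a_n, _ = traj[end]
            G += disc * Q[(s_n, a_n)]
        Q[(s_t, a_t)] += alpha * (G - Q[(s_t, a_t)])
    return Q
```

Running a few hundred episodes with a hypothetical criticality function such as `lambda s: s / 4.0` drives the Q-values toward the optimal right-moving policy, with near-goal (here, "critical") states updated by short, low-variance targets and far states by longer multi-step returns.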

Related research

01/13/2022  Criticality-Based Varying Step-Number Algorithm for Reinforcement Learning
In the context of reinforcement learning we introduce the concept of cri...

07/12/2014  Extreme State Aggregation Beyond MDPs
We consider a Reinforcement Learning setup where an agent interacts with...

06/29/2020  Exploring Optimal Control With Observations at a Cost
There has been a current trend in reinforcement learning for healthcare ...

06/04/2022  Adaptive Tree Backup Algorithms for Temporal-Difference Reinforcement Learning
Q(σ) is a recently proposed temporal-difference learning method that int...

07/22/2021  A reinforcement learning approach to resource allocation in genomic selection
Genomic selection (GS) is a technique that plant breeders use to select ...

12/30/2016  Adaptive Lambda Least-Squares Temporal Difference Learning
Temporal Difference learning or TD(λ) is a fundamental algorithm in the ...

02/27/2023  Taylor TD-learning
Many reinforcement learning approaches rely on temporal-difference (TD) ...
