A Gentle Lecture Note on Filtrations in Reinforcement Learning

08/06/2020
by W. J. A. van Heeswijk, et al.

This note aims to provide a basic intuition for the concept of filtrations as used in the context of reinforcement learning (RL). Filtrations are often used to formally define RL problems, yet their implications may not be evident to those without a background in measure theory. Essentially, a filtration is a construct that captures the partial knowledge available up to time t, without revealing any future information that may already have been simulated but has not yet been revealed to the decision-maker. We illustrate this with simple examples from the finance domain, on both discrete and continuous outcome spaces. Furthermore, we show that the notion of a filtration is not strictly needed: basing decisions solely on the current problem state (which is possible due to the Markov property) suffices to eliminate future knowledge from the decision-making process.
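As a minimal sketch of the intuition described above (not code from the note itself; all names are illustrative), consider a simulator that generates an entire price path up front, as in a coin-flip model of a stock. The filtration at time t corresponds to the prefix of the path revealed so far, while a Markov policy discards even that history and acts on the current state alone:

```python
import random

def simulate_prices(T, p_up=0.5, s0=100, seed=0):
    """Simulate an entire price path up front, as a simulator would.

    Each step the price moves up or down by 1 with probability p_up.
    """
    rng = random.Random(seed)
    path = [s0]
    for _ in range(T):
        step = 1 if rng.random() < p_up else -1
        path.append(path[-1] + step)
    return path

def observable_history(path, t):
    """The filtration at time t: only the outcomes revealed up to t.

    The simulator already knows path[t+1:], but the decision-maker
    must not base its action on that future information.
    """
    return path[: t + 1]

def markov_policy(state):
    """A Markov policy needs only the current state, not the history."""
    return "buy" if state < 100 else "hold"

path = simulate_prices(T=5)
history = observable_history(path, t=2)  # information available at t=2
action = markov_policy(history[-1])      # decision depends on S_t alone
```

The point of the last two lines mirrors the note's conclusion: although `history` is what the filtration formally permits the agent to know, the Markov policy only ever reads `history[-1]`, so future (already-simulated) outcomes can never leak into the decision.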
