Exploring Optimal Control With Observations at a Cost

06/29/2020
by Rui Aguiar, et al.

A current trend in the reinforcement-learning-for-healthcare literature is to prepare clinical datasets by filling gaps with the most recent result of a non-administered test, known as the last-observation-carried-forward (LOCF) value, under the assumption that it remains an accurate indicator of the patient's current state. These values are carried forward without recording how they were imputed, leading to ambiguity. We model this problem using OpenAI Gym's Mountain Car, and address when to observe the patient's physiological state and, in part, how to intervene, under the assumption that the agent can only act after making an observation. So far, we have found two results. First, for an LOCF implementation of the state space, augmenting the state with a counter for each state variable tracking the time since it was last observed improves the predictive performance of the agent, supporting the notion of "informative missingness". Second, using a neural-network-based dynamics model to predict the most probable next value of each unobserved state variable, instead of carrying forward the last observed value as in LOCF, further improves the agent's performance, leading to faster convergence and reduced variance.
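The state augmentation described above can be sketched with a minimal wrapper. This is an illustrative sketch, not the authors' implementation: the class name, the fixed observation cost, and the two-variable (position, velocity) state are all assumptions chosen to mirror the Mountain Car setting. When the agent skips an observation, each variable is carried forward (LOCF) and its staleness counter is incremented; the counters are appended to the state so the agent can exploit "informative missingness".

```python
import numpy as np

class CostlyObservationState:
    """Sketch of an LOCF state with per-variable staleness counters.

    The augmented state is [LOCF values | steps since each value was
    observed]. A dynamics model, if available, could replace the plain
    carry-forward in the `else` branch with a predicted next value.
    """

    def __init__(self, n_vars=2, obs_cost=0.1):
        self.obs_cost = obs_cost
        self.last_obs = np.zeros(n_vars)    # last observed values (LOCF)
        self.staleness = np.zeros(n_vars)   # steps since each var was observed

    def step(self, true_state, observe):
        """Return (augmented_state, reward_penalty) for one time step."""
        penalty = 0.0
        if observe:
            # Paying the cost refreshes the observation and resets counters.
            self.last_obs = np.asarray(true_state, dtype=float)
            self.staleness[:] = 0.0
            penalty = -self.obs_cost
        else:
            # Carry the stale values forward and record how stale they are.
            self.staleness += 1.0
        return np.concatenate([self.last_obs, self.staleness]), penalty
```

Under this sketch, an agent whose policy conditions on the staleness counters can learn both when an observation is worth its cost and how much to trust a stale value.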
