Identifying Decision Points for Safe and Interpretable Reinforcement Learning in Hypotension Treatment

01/09/2021
by Kristine Zhang, et al.

Many batch RL health applications first discretize time into fixed intervals. However, this discretization both loses resolution and forces a policy computation at each (potentially fine) interval. In this work, we develop a novel framework to compress continuous trajectories into a few interpretable decision points: places where the batch data support multiple alternative actions. We apply our approach to generate treatment recommendations for a cohort of hypotensive patients. The reduced state space enables faster planning and allows easy inspection by a clinical expert.
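The abstract's core idea is that most states along a trajectory admit only one well-supported action, so only the few states where the batch data support multiple alternatives need to be surfaced as decision points. A minimal sketch of that idea, under an assumed criterion (the paper's actual method is not specified here): count state-action occurrences in the batch and flag states where at least two distinct actions each have sufficient support.

```python
from collections import defaultdict

def find_decision_points(trajectories, min_support=2):
    """Flag states where the batch data support multiple alternatives.

    Hypothetical criterion for illustration: a state is a decision point
    if at least two distinct actions were each observed >= min_support
    times at that state. `trajectories` is a list of (state, action)
    sequences.
    """
    # Count how often each action was taken in each state.
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for state, action in traj:
            counts[state][action] += 1
    # Keep only states with two or more well-supported alternatives.
    return {
        s for s, acts in counts.items()
        if sum(1 for c in acts.values() if c >= min_support) >= 2
    }

# Toy batch: at "s0" clinicians split between fluids and vasopressors,
# while at "s1" the data support only one action.
batch = [
    [("s0", "fluid"), ("s1", "vasopressor")],
    [("s0", "vasopressor"), ("s1", "vasopressor")],
    [("s0", "fluid"), ("s1", "vasopressor")],
    [("s0", "vasopressor"), ("s1", "vasopressor")],
]
print(find_decision_points(batch))  # only "s0" qualifies
```

Planning then only needs to choose among alternatives at the flagged states, which is what makes the compressed policy both faster to compute and easy for a clinician to audit.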


Related research

- Trajectory Inspection: A Method for Iterative Clinician-Driven Design of Reinforcement Learning Studies (10/08/2020). Treatment policies learned via reinforcement learning (RL) from observat...
- Interpretable Control by Reinforcement Learning (07/20/2020). In this paper, three recently introduced reinforcement learning (RL) met...
- ACERAC: Efficient reinforcement learning in fine time discretization (04/08/2021). We propose a framework for reinforcement learning (RL) in fine time disc...
- Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains (06/03/2020). Reinforcement learning algorithms have had tremendous successes in onlin...
- Predicting the Need for Blood Transfusion in Intensive Care Units with Reinforcement Learning (06/26/2022). As critically ill patients frequently develop anemia or coagulopathy, tr...
- Multiscale Inverse Reinforcement Learning using Diffusion Wavelets (11/24/2016). This work presents a multiscale framework to solve an inverse reinforcem...
