Extracting Clinician's Goals by What-if Interpretable Modeling

10/28/2021
by Chun-Hao Chang, et al.

Although reinforcement learning (RL) has achieved tremendous success in many fields, applying RL to real-world settings such as healthcare is challenging when the reward is hard to specify and no exploration is allowed. In this work, we focus on recovering clinicians' rewards in treating patients. We incorporate what-if reasoning to explain clinicians' actions based on future outcomes. We use generalized additive models (GAMs), a class of accurate, interpretable models, to recover the reward. In both simulation and a real-world hospital dataset, we show that our model outperforms baselines. Finally, our model's explanations match several clinical guidelines for treating patients, whereas the previously used linear model often contradicts them.
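The abstract is terse about mechanics, but its core ingredient, a GAM that maps patient-state features to an estimated reward, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration under assumed details, not the authors' implementation: the data, the proxy reward label, and the spline-plus-linear construction (scikit-learn's SplineTransformer feeding a Ridge regressor) are all stand-ins for whatever GAM variant the paper actually uses. Because the fitted model is additive, each feature's effect on the recovered reward can be read off one shape function at a time, which is the interpretability property the abstract relies on.

```python
# Minimal GAM reward-model sketch (illustrative only, not the
# authors' implementation). Each state feature is expanded into its
# own spline basis; a linear layer on top makes the model additive,
# r(x) = f1(x1) + f2(x2) + ..., so each f_j is inspectable alone.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical data: rows are patient states (e.g. two vitals);
# y is a stand-in reward label derived from observed clinician behavior.
X = rng.normal(size=(500, 2))
y = -X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# SplineTransformer expands each column independently, so the linear
# model on the concatenated bases is a generalized additive model.
gam = make_pipeline(SplineTransformer(n_knots=8, degree=3),
                    Ridge(alpha=1.0))
gam.fit(X, y)

# Probe one feature's shape function by varying it alone while holding
# the other at its mean -- loosely analogous to a what-if query about
# how the recovered reward responds to a single state variable.
grid = np.linspace(-2.0, 2.0, 5)
probe = np.column_stack([grid, np.full_like(grid, X[:, 1].mean())])
print(gam.predict(probe))  # should trace the learned -x0^2 shape
```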

Related research

- Challenges of Real-World Reinforcement Learning (04/29/2019)
  Reinforcement learning (RL) has proven its worth in a series of artifici...
- InferNet for Delayed Reinforcement Tasks: Addressing the Temporal Credit Assignment Problem (05/02/2021)
  The temporal Credit Assignment Problem (CAP) is a well-known and challen...
- Leveraging Reward Consistency for Interpretable Feature Discovery in Reinforcement Learning (09/04/2023)
  The black-box nature of deep reinforcement learning (RL) hinders them fr...
- Explaining Conditions for Reinforcement Learning Behaviors from Real and Imagined Data (11/17/2020)
  The deployment of reinforcement learning (RL) in the real world comes wi...
- Disturbing Reinforcement Learning Agents with Corrupted Rewards (02/12/2021)
  Reinforcement Learning (RL) algorithms have led to recent successes in s...
- Knowing the Past to Predict the Future: Reinforcement Virtual Learning (11/02/2022)
  Reinforcement Learning (RL)-based control system has received considerab...
