Synthesizing Policies That Account For Human Execution Errors Caused By State-Aliasing In Markov Decision Processes

09/15/2021
by Sriram Gopalakrishnan, et al.

When humans are given a policy to execute, errors and deviations in execution can arise if there is uncertainty in identifying a state. Hence, an algorithm that computes a policy for a human to execute ought to account for these effects. An optimal MDP policy that is poorly executed (because of a human agent) may be much worse than another policy that is executed with fewer errors. In this paper, we consider the problems of erroneous execution and execution delay when computing policies for a human agent acting in a setting modeled by a Markov Decision Process (MDP). We present a framework that models the likelihood of policy execution errors and of non-policy actions, such as inaction (delays), due to state uncertainty. We then give a hill-climbing algorithm to search for good policies that account for these errors, and use the best policy found by hill climbing to seed a branch-and-bound algorithm that finds the optimal policy. We report experimental results in a Gridworld domain and analyze the performance of the two algorithms. We also present human studies that verify whether our assumptions about policy execution under state aliasing are reasonable.
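The abstract's error model and hill-climbing search can be illustrated with a small sketch. Everything below is an assumption for illustration, not the paper's actual formulation: a random 4-state, 2-action MDP, a confusion matrix `C` where `C[s, s']` is the probability the human perceives state `s` as `s'`, and a fixed inaction probability `p_delay` modeled as a zero-reward self-loop. The executed policy is evaluated under this noise, and hill climbing greedily swaps single-state actions until no swap improves the value.

```python
import numpy as np

# Hypothetical small MDP; sizes and rewards are illustrative, not from the paper.
n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> next-state dist
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # reward R[s, a]

# State-aliasing model: C[s, s'] = prob. the human perceives state s as s'.
C = np.full((n_states, n_states), 0.1 / (n_states - 1))
np.fill_diagonal(C, 0.9)

p_delay = 0.1  # prob. of inaction (zero-reward self-loop) under state uncertainty

def execution_dist(policy, s):
    """Distribution over actions the human actually takes in true state s."""
    d = np.zeros(n_actions)
    for s_perceived in range(n_states):
        d[policy[s_perceived]] += C[s, s_perceived]
    return (1.0 - p_delay) * d  # the remaining p_delay mass is inaction

def evaluate(policy):
    """Mean discounted value of the *executed* (not nominal) policy."""
    P_pi = np.zeros((n_states, n_states))
    r_pi = np.zeros(n_states)
    for s in range(n_states):
        d = execution_dist(policy, s)
        for a in range(n_actions):
            P_pi[s] += d[a] * P[s, a]
            r_pi[s] += d[a] * R[s, a]
        P_pi[s, s] += p_delay  # delay: stay in place, collect no reward
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    return V.mean()

def hill_climb(policy):
    """Greedy single-state action swaps until no swap improves the value."""
    best = evaluate(policy)
    improved = True
    while improved:
        improved = False
        for s in range(n_states):
            for a in range(n_actions):
                cand = policy.copy()
                cand[s] = a
                v = evaluate(cand)
                if v > best + 1e-12:
                    policy, best = cand, v
                    improved = True
    return policy, best

policy0 = np.zeros(n_states, dtype=int)
policy, value = hill_climb(policy0)
print(policy, value)
```

Note the key design point: `evaluate` scores the noisy executed policy, not the nominal one, which is why the hill climber can prefer a nominally suboptimal policy that degrades less under aliasing. The paper's branch-and-bound step would then use this hill-climbing value as a lower bound for pruning.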


Related research:

- Acting in Delayed Environments with Non-Stationary Markov Policies (01/28/2021)
- On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes (03/25/2012)
- Bayesian regularization of empirical MDPs (08/03/2022)
- A Bayesian Approach to Identifying Representational Errors (03/28/2021)
- Tableaux for Policy Synthesis for MDPs with PCTL* Constraints (06/30/2017)
- Recommending the optimal policy by learning to act from temporal data (03/16/2023)
- Prioritizing emergency evacuations under compounding levels of uncertainty (09/30/2022)
