Exploitation vs Caution: Risk-sensitive Policies for Offline Learning

05/27/2021
by   Giorgio Angelotti, et al.

Offline model learning for planning is a branch of machine learning that trains agents to act in an unknown environment using a fixed batch of previously collected experiences. The limited size of the data set hampers the estimation of the value function of the underlying Markov Decision Process (MDP), and thus bounds the real-world performance of the resulting policy. In this context, recent work has shown that planning with a discount factor lower than the one used during the evaluation phase yields better-performing policies; however, the optimal discount factor is ultimately chosen by cross-validation. Our aim is to show that searching for a sub-optimal solution of a Bayesian MDP can lead to better performance than current offline baselines. Hence, we propose Exploitation vs Caution (EvC), an algorithm that automatically selects, from a set of policies obtained by solving several MDPs with different discount factors and transition dynamics, the one that best solves a risk-sensitive Bayesian MDP. On the one hand, the Bayesian formalism elegantly accounts for model uncertainty; on the other, the risk-sensitive utility function guarantees robustness. We evaluated the proposed approach in several simple discrete environments covering a fair variety of MDP classes, and compared the results against state-of-the-art offline learning for planning baselines such as MOPO and MOReL. In the tested scenarios EvC is more robust than these approaches, suggesting that sub-optimally solving an Offline Risk-sensitive Bayesian MDP (ORBMDP) could define a sound framework for planning under model uncertainty.
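The abstract only describes EvC at a high level, so the sketch below is a minimal, hypothetical illustration of the idea rather than the paper's actual algorithm. It samples transition models from a Dirichlet posterior over observed transition counts, plans with several discount factors on the posterior-mean model to build the candidate set, and selects the policy with the highest low-quantile return across the sampled models, used here as a simple stand-in risk-sensitive utility. All function names, the choice of prior and utility, and the assumption that state 0 is the initial state are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_models(counts, n_samples, rng):
    """Sample transition models from a Dirichlet posterior, where
    counts[s, a] holds observed next-state counts (assumed prior: uniform)."""
    S, A, _ = counts.shape
    models = np.empty((n_samples, S, A, S))
    for s in range(S):
        for a in range(A):
            models[:, s, a, :] = rng.dirichlet(counts[s, a] + 1.0, size=n_samples)
    return models

def value_iteration(P, R, gamma, tol=1e-8):
    """Standard value iteration on model P (S, A, S); returns a greedy policy."""
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)            # shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)
        V = V_new

def policy_value(P, R, gamma, policy):
    """Exact value of a fixed deterministic policy via a linear solve."""
    S = P.shape[0]
    P_pi = P[np.arange(S), policy]          # (S, S)
    R_pi = R[np.arange(S), policy]          # (S,)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)

def evc_select(counts, R, gammas, gamma_eval, n_samples=200, quantile=0.1, seed=0):
    """Hypothetical EvC-style selection: plan with several discount factors,
    then pick the policy maximizing a low quantile of posterior returns."""
    rng = np.random.default_rng(seed)
    models = sample_models(counts, n_samples, rng)
    mean_model = counts + 1.0               # posterior-mean transition model
    mean_model /= mean_model.sum(axis=-1, keepdims=True)
    candidates = [value_iteration(mean_model, R, g) for g in gammas]
    best, best_score = None, -np.inf
    for pi in candidates:
        # Evaluate each candidate at the true discount factor, across
        # posterior samples, from the (assumed) initial state 0.
        returns = [policy_value(P, R, gamma_eval, pi)[0] for P in models]
        score = np.quantile(returns, quantile)
        if score > best_score:
            best, best_score = pi, score
    return best
```

Under this reading, the quantile parameter plays the role of the caution knob: a low quantile penalizes policies whose return varies widely across plausible models, which is one simple way to make the Bayesian objective risk-sensitive.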


