A Possibilistic Model for Qualitative Sequential Decision Problems under Uncertainty in Partially Observable Environments

01/23/2013
by Régis Sabbadin, et al.

In this article we propose a qualitative (ordinal) counterpart of the Partially Observable Markov Decision Process (POMDP) model, in which both the uncertainty and the preferences of the agent are modeled by possibility distributions. This qualitative counterpart relies on a recently developed possibilistic theory of decision under uncertainty. One advantage of such a qualitative framework is that it escapes a classical obstacle of stochastic POMDPs: even with a finite state space, the belief state space of a stochastic POMDP is infinite. In the possibilistic framework, by contrast, the belief state space remains finite, although it is exponentially larger than the state space.
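To make the finiteness argument concrete, here is a minimal sketch in Python, not taken from the paper: the state set, the qualitative scale, and the function name are illustrative assumptions. A possibilistic belief state is a possibility distribution over a finite state space S, with degrees drawn from a finite ordinal scale L and normalized so that at least one state receives the top degree. Since each belief is a map from S into L, there are at most |L|^|S| of them, so the belief space is finite.

```python
# Minimal sketch (assumed names and values, not the paper's implementation):
# enumerate all normalized possibility distributions over a finite state set.
from itertools import product

states = ["s0", "s1", "s2"]     # finite state space S (illustrative)
scale = [0.0, 0.5, 1.0]         # finite qualitative scale L, with top degree 1

def possibilistic_beliefs(states, scale):
    """Yield every normalized possibility distribution pi: S -> L."""
    top = max(scale)
    for degrees in product(scale, repeat=len(states)):
        # Normalization: at least one state must be fully possible.
        if max(degrees) == top:
            yield dict(zip(states, degrees))

beliefs = list(possibilistic_beliefs(states, scale))
# Finite, though exponential in |S|: exactly |L|^|S| - (|L|-1)^|S| beliefs.
print(len(beliefs))             # 3^3 - 2^3 = 19
```

The enumeration is exhaustive, which is what a probabilistic POMDP cannot offer: there, beliefs range over the continuous probability simplex, so the belief space is infinite even for a finite S.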


Related research

Quantum POMDPs (06/11/2014)
We present quantum observable Markov decision processes (QOMDPs), the qu...

Learning from Humans as an I-POMDP (04/01/2012)
The interactive partially observable Markov decision process (I-POMDP) i...

Qualitative Possibilistic Mixed-Observable MDPs (09/26/2013)
Possibilistic and qualitative POMDPs (pi-POMDPs) are counterparts of POM...

Autonomous sPOMDP Environment Modeling With Partial Model Exploitation (12/22/2020)
A state space representation of an environment is a classic and yet powe...

Partially Observable Planning and Learning for Systems with Non-Uniform Dynamics (07/09/2019)
We propose a neural network architecture, called TransNet, that combines...

Monte Carlo Bayesian Reinforcement Learning (06/27/2012)
Bayesian reinforcement learning (BRL) encodes prior knowledge of the wor...

Counterfactual equivalence for POMDPs, and underlying deterministic environments (01/11/2018)
Partially Observable Markov Decision Processes (POMDPs) are rich environ...
