Scalable Accelerated Decentralized Multi-Robot Policy Search in Continuous Observation Spaces

03/16/2017
by Shayegan Omidshafiei, et al.

This paper presents the first approach for solving continuous-observation Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and their semi-Markovian counterparts, Dec-POSMDPs. This contribution is especially important in robotics, where a vast number of sensors provide continuous observation data. A continuous-observation policy representation is introduced using Stochastic Kernel-based Finite State Automata (SK-FSAs). An SK-FSA search algorithm, Entropy-based Policy Search using Continuous Kernel Observations (EPSCKO), is then introduced and applied to the first continuous-observation Dec-POMDP/Dec-POSMDP domain, where it significantly outperforms state-of-the-art discrete approaches. The methodology applies equally to Dec-POMDPs and Dec-POSMDPs, although the empirical analysis focuses on Dec-POSMDPs due to their higher scalability. To accelerate convergence, an entropy-injection approach to policy search is also developed for both continuous and discrete observation cases and shown to improve convergence rates without degrading policy quality.
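The abstract does not include implementation details, so the sketch below is only a rough illustration of the kind of policy representation it describes: a toy stochastic kernel-based finite state automaton in which each node emits a fixed action and the next node, given a continuous observation, is sampled from a distribution obtained by kernel-weighting a set of stored observation prototypes. The class name, the Gaussian (RBF) kernel choice, the prototype parameterization, and all hyperparameters are assumptions for illustration, not the authors' SK-FSA implementation.

```python
# Minimal sketch of an SK-FSA-style policy (illustrative assumptions only).
import numpy as np

class SKFSA:
    def __init__(self, n_nodes, n_actions, obs_dim,
                 n_prototypes=16, bandwidth=0.5, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.n_nodes = n_nodes
        # Each node deterministically emits one (macro-)action.
        self.node_actions = self.rng.integers(0, n_actions, size=n_nodes)
        # Per node: observation prototypes and the successor node each prototype votes for.
        self.prototypes = self.rng.normal(size=(n_nodes, n_prototypes, obs_dim))
        self.successors = self.rng.integers(0, n_nodes, size=(n_nodes, n_prototypes))
        self.bandwidth = bandwidth

    def transition(self, node, observation):
        """Sample the next automaton node given a continuous observation."""
        diffs = self.prototypes[node] - observation                       # (n_prototypes, obs_dim)
        w = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / self.bandwidth ** 2)  # RBF kernel weights
        # Accumulate kernel mass per successor node -> stochastic transition distribution.
        probs = np.zeros(self.n_nodes)
        np.add.at(probs, self.successors[node], w)
        probs /= probs.sum()
        return self.rng.choice(self.n_nodes, p=probs)

    def act(self, node):
        return self.node_actions[node]

if __name__ == "__main__":
    policy = SKFSA(n_nodes=5, n_actions=3, obs_dim=2)
    node = 0
    obs = np.array([0.3, -1.2])          # continuous observation from a sensor
    node = policy.transition(node, obs)  # stochastic, kernel-weighted node transition
    print("next node:", node, "action:", policy.act(node))
```

One common way to realize the entropy injection mentioned above, used here purely as an interpretation, is to deliberately broaden the sampling distribution over such automaton parameters during policy search whenever it concentrates too quickly, which preserves exploration while the search converges.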


Related research

02/20/2015
Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions
The focus of this paper is on solving multi-robot planning problems in c...

09/18/2017
POMCPOW: An online algorithm for POMDPs with continuous state, action, and observation spaces
Online solvers for partially observable Markov decision processes have b...

07/22/2018
Optimal Continuous State POMDP Planning with Semantic Observations: A Variational Approach
This work develops novel strategies for optimal planning with semantic o...

08/10/2021
Decentralized Observation of Discrete-Event Systems: At Least One Can Tell
We introduce a new decentralized observation condition which we call "at...

10/10/2019
Sparse tree search optimality guarantees in POMDPs with continuous observation spaces
Partially observable Markov decision processes (POMDPs) with continuous ...

05/14/2023
A Surprisingly Simple Continuous-Action POMDP Solver: Lazy Cross-Entropy Search Over Policy Trees
The Partially Observable Markov Decision Process (POMDP) provides a prin...

03/15/2012
A Scalable Method for Solving High-Dimensional Continuous POMDPs Using Local Approximation
Partially-Observable Markov Decision Processes (POMDPs) are typically so...
