Optimal Continuous State POMDP Planning with Semantic Observations: A Variational Approach

07/22/2018
by Luke Burks, et al.

This work develops novel strategies for optimal planning with semantic observations using continuous-state Partially Observable Markov Decision Processes (CPOMDPs). Two major innovations are presented in relation to Gaussian mixture (GM) CPOMDP policy approximation methods. While existing methods have many desirable theoretical properties, they are hampered by an inability to efficiently represent and reason over hybrid continuous-discrete probabilistic models. The first major innovation is the derivation of closed-form variational Bayes GM approximations of Point-Based Value Iteration Bellman policy backups, using softmax models of continuous-discrete semantic observation probabilities. A key benefit of this approach is that dynamic decision-making tasks can be performed with complex non-Gaussian uncertainties, while also exploiting continuous dynamic state-space models (thus avoiding cumbersome and costly discretization). The second major innovation is a new clustering-based technique for mixture condensation that scales well to very large GM policy functions and belief functions. Simulation results for a target search and interception task with semantic observations show that the GM policies resulting from these innovations are more effective than those produced by other state-of-the-art GM and Monte Carlo-based policy approximations, while requiring significantly less modeling overhead and runtime cost. Additional results demonstrate the robustness of this approach to model errors.
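To make the clustering-based condensation idea concrete, the minimal sketch below shows one generic way to shrink a large Gaussian mixture: cluster components by their means and merge each cluster by moment matching (preserving total weight, mean, and covariance). This is an illustrative assumption, not the paper's specific condensation algorithm; the function name condense_gm, the n_keep parameter, and the use of scikit-learn's KMeans are all hypothetical choices for the sketch.

```python
# Hypothetical sketch of clustering-based Gaussian mixture condensation.
# NOT the authors' algorithm: components are grouped by k-means on their
# means, then each group is merged with a moment-preserving combination.
import numpy as np
from sklearn.cluster import KMeans

def condense_gm(weights, means, covs, n_keep):
    """Reduce a GM (weights, means, covs) to n_keep components."""
    weights = np.asarray(weights, dtype=float)   # shape (N,)
    means = np.asarray(means, dtype=float)       # shape (N, d)
    covs = np.asarray(covs, dtype=float)         # shape (N, d, d)

    # Group components whose means lie close together.
    labels = KMeans(n_clusters=n_keep, n_init=10).fit_predict(means)

    new_w, new_mu, new_cov = [], [], []
    for k in range(n_keep):
        idx = np.where(labels == k)[0]
        w_sum = weights[idx].sum()
        # Moment-preserving merge: total weight, weighted mean, and
        # covariance including spread between the merged components.
        mu = np.average(means[idx], axis=0, weights=weights[idx])
        cov = np.zeros_like(covs[0])
        for i in idx:
            diff = (means[i] - mu)[:, None]
            cov += weights[i] * (covs[i] + diff @ diff.T)
        cov /= w_sum
        new_w.append(w_sum)
        new_mu.append(mu)
        new_cov.append(cov)
    return np.array(new_w), np.array(new_mu), np.array(new_cov)
```

In a point-based backup scheme like the one described above, a condensation step of this general kind would be applied after each Bellman backup so that the number of GM components in the policy and belief functions stays bounded as planning proceeds.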


