Decision-making under uncertainty is a ubiquitous robotics problem wherein a robot collects data from its environment and decides subsequent tasks to execute. While low-cost robotics platforms and sensors have increased the affordability of multi-robot systems, derivation of policies dictating robot decisions remains a challenge. This decision-making problem is even more complex in noisy settings with imperfect communication, requiring a formal framework for its treatment.
A general representation of the multi-agent planning under uncertainty problem is the Decentralized Partially Observable Markov Decision Process (Dec-POMDP), which extends single-agent POMDPs to decentralized domains. Because Dec-POMDPs use primitive actions (atomic actions assumed to each take a single time unit to execute), they have exceedingly large policy spaces, which severely limits planning scalability. Recent efforts have extended Dec-POMDPs to use macro-actions (temporally extended actions), resulting in the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP) [2, 3]. The result is a scalable asynchronous multi-robot decision-making framework which plans over the space of high-level robot tasks (e.g., Open-the-valve or Find-the-key) with non-deterministic durations.
Despite the increased action-space scalability offered by Dec-POSMDPs, they have so far been limited to planning over the space of discrete observations. To date, no algorithms exist for continuous-observation Dec-POSMDPs (or Dec-POMDPs). This is a major research gap, especially important in the context of robotics, where a vast number of real-world sensors provide continuous observation data. Application of Dec-POSMDPs to continuous problems such as robot navigation currently mandates observation space discretization, resulting in loss of valuable sensor information which could otherwise better inform the decision-making policy. Several approaches have targeted single-agent continuous-observation POMDPs, including partitioning of continuous spaces into lossless discrete spaces, Gaussian mixtures for belief representation, continuous-observation classifiers, and learned discrete representations for continuous state spaces. This paper expands this body of work beyond the single-agent case, targeting scalable treatment of continuous-observation Dec-POSMDPs. The methods presented are applicable to domains with continuous underlying state spaces, as shown in some of the experiments used for evaluation.
In order to develop solvers for continuous-observation Dec-POSMDPs, we build on current state-of-the-art discrete policy search methods [9, 2, 3]. Unfortunately, these algorithms suffer from convergence speed limitations—an issue which was identified in prior work but remains untreated. These issues must be addressed before the foundations of these discrete algorithms can be extended to the continuous case, where such convergence issues are exacerbated. To resolve this, we first introduce a maximal entropy injection approach targeting convergence acceleration for both discrete and continuous algorithms, without degrading overall policy quality. The approach is shown to significantly outperform existing search acceleration methods.
The paper’s key contribution is a stochastic kernel-based policy representation and search algorithm, allowing direct mapping of continuous observations to robot decisions (with no discretization necessary). This algorithm leverages the proposed entropy injection acceleration method and is evaluated on a multi-robot nuclear contamination domain—the first ever continuous-observation Dec-POMDP/Dec-POSMDP domain—in which discrete policy search algorithms perform extremely poorly. Failure modes of discrete methods are analyzed and compared to the superior continuous policy behavior. The contributions introduced in this paper can be readily applied to Dec-POMDPs and Dec-POSMDPs. However, as we are motivated by applications to extremely large action-observation spaces, the notation used and experiments conducted focus on the more scalable Dec-POSMDP framework.
II-A Decentralized Planning using Macro-Actions
This section summarizes the Dec-POSMDP, a multi-robot decentralized decision-making under uncertainty framework targeting action-space scalability. For a more detailed introduction to Dec-POSMDPs, we refer readers to [9, 2, 3].
The Dec-POSMDP is a belief-space framework in which agents execute macro-actions (temporally-extended actions) with non-deterministic completion times, and receive noisy high-level observations of their post-MA state. Macro-actions (MAs) are abstractions of low-level POMDPs involving primitive actions and observations, allowing execution of high-level tasks (e.g., Park-the-car). Throughout, a generic parameter of the $i$-th robot is denoted with superscript $(i)$, a joint team parameter with an overbar, and a joint team parameter at a given timestep with a corresponding time subscript. Each MA executes until an $\epsilon$-neighborhood of its belief milestone is reached. This neighborhood defines the MA termination condition, or goal belief node.
Upon completion of an MA, each robot makes a macro (or high-level) observation of the underlying high-level system state, and calculates its own final belief state. Thus far, both Dec-POMDPs and Dec-POSMDPs have seen only limited application to finite discrete observation spaces. Due to its action-space scalability, we focus on the Dec-POSMDP, defined by the following components:
- the set of heterogeneous robots;
- the belief space, with local belief milestones and the joint environment (or high-level) state space;
- the joint MA space, composed of a finite set of MAs for each robot;
- the space of all joint MA-observations;
- the high-level transition probability model under MAs, from one joint belief state to another;
- the high-level reward of taking a joint MA at a given joint belief state;
- the joint observation likelihood model;
- the reward discount factor.
Macro-observations and final beliefs are jointly denoted as the MA-observation. Trajectories of MAs and received MA-observations are denoted as the MA-history. The transition probability from one joint belief state to another under a joint MA, over a given number of timesteps, follows from the high-level transition model. The generalized high-level team reward for a discrete-time Dec-POSMDP during execution of a joint MA accumulates the discounted rewards up to the first timestep at which any robot completes its current MA.
The joint high-level policy dictates MA selection: each robot's high-level policy maps its MA-history to the next MA to be executed. The joint Dec-POSMDP value under a policy is the expected discounted sum of generalized team rewards, and the optimal joint high-level policy is the one maximizing this value.
Solving the Dec-POSMDP results in a joint high-level decision-making policy dictating the MA executed by each robot based on its MA-history. Each MA is, itself, a policy over low-level actions and observations. Thus, decision-making using the Dec-POSMDP allows abstraction of task-level actions from low-level actions, leading to significantly improved planning scalability over Dec-POMDPs.
II-B Dec-POSMDP Policy Search Algorithms
So far, research efforts have focused on Dec-POSMDP policy search for discrete observation spaces, resulting in several algorithms: Masked Monte Carlo Search (MMCS), MacDec-POMDP Heuristic Search (MDHS), and the Graph-based Direct Cross Entropy method (G-DICE). These algorithms use Finite State Automata (FSAs) for policy representation, with each robot's FSA-based policy consisting of a set of FSA nodes. FSA-based decision-making is two-fold: each robot begins execution in an initial FSA node, where an MA output function assigns it an MA. Following MA execution, the robot receives a high-level observation and selects its next FSA node using a transition function. The graph-based nature of FSAs allows their application to infinite-horizon domains.
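To make the two-fold decision scheme concrete, the following is a minimal Python sketch of deterministic FSA policy execution; the node indices, MA names, and observation labels are invented for illustration and are not from the paper's domains.

```python
# Hypothetical FSA-based decision-making: an output function maps each node to
# an MA, and a transition function maps (node, observation) to the next node.
ma_output = {0: "goto-waste-zone", 1: "collect", 2: "goto-base"}   # node -> MA
transition = {                                                     # (node, obs) -> node
    (0, "at-waste-zone"): 1, (0, "at-base"): 0,
    (1, "holding-waste"): 2, (1, "empty-handed"): 1,
    (2, "at-base"): 0, (2, "at-waste-zone"): 2,
}

def step(node, observation):
    """One FSA decision step: return the MA to execute and the next FSA node."""
    ma = ma_output[node]
    next_node = transition[(node, observation)]
    return ma, next_node

ma, node = step(0, "at-waste-zone")
print(ma, node)  # goto-waste-zone 1
```

Because the policy is a finite graph rather than a finite tree, the same loop can run indefinitely, which is what enables infinite-horizon execution.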
Though Dec-POSMDPs have increased the size of solvable planning domains beyond their Dec-POMDP counterparts, major algorithm limitations still exist. MMCS is a greedy algorithm which succumbs to local optimality issues. MDHS uses lower and upper bound value heuristics to bias search towards promising policy regions, by initiating an empty (partial) FSA and incrementally assigning nodes actions and transitions. Partial policies with high upper bounds are expanded incrementally. Yet, the number of child policies per expansion grows rapidly with the size of the observation space, severely limiting usage for large observation spaces.
G-DICE is a cross entropy-based algorithm which iteratively updates policies using two sampling distributions at each FSA node: an MA distribution and a node transition distribution, each governed by a parameter vector. Each iteration samples the distributions a fixed number of times, resulting in a set of deterministic FSA policies. Maximum likelihood estimates (MLE) of the parameters are calculated using the best policies. To prevent convergence to local optima, smooth parameter updates,

$$\theta_{k+1} = \alpha\,\theta^{MLE}_{k+1} + (1-\alpha)\,\theta_k,$$

are used, with iteration number $k$ and learning rate $\alpha$. For sufficiently small values of $\alpha$, this process minimizes the cross entropy between each sampling distribution and a unit mass centered at the optimal policy. G-DICE is executed until convergence, after which the best deterministic policy from the history of samples is returned.
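The smoothed MLE update can be sketched for a single categorical MA distribution at one FSA node; the elite samples, number of MAs, and learning rate below are illustrative values, not the paper's settings.

```python
import numpy as np

def smoothed_update(theta, best_samples, n_categories, alpha=0.2):
    """Smoothed MLE update for one categorical sampling distribution.

    theta: current category probabilities at this FSA node.
    best_samples: MA indices chosen by the elite (best-valued) sampled policies.
    alpha: learning rate interpolating between the new MLE and old parameters.
    """
    counts = np.bincount(best_samples, minlength=n_categories)
    theta_mle = counts / counts.sum()                 # MLE from elite samples
    return alpha * theta_mle + (1 - alpha) * theta    # smoothed interpolation

theta = np.full(3, 1.0 / 3.0)           # initialize to a uniform distribution
theta = smoothed_update(theta, [0, 0, 1], n_categories=3)
print(theta)  # mass shifts toward MA 0 while remaining a valid distribution
```

With a small `alpha`, each iteration nudges the sampling distribution toward the elite samples instead of jumping straight to the MLE, which is what preserves exploration.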
Using smooth parameter updates and sampling distributions initialized from a uniform distribution allows G-DICE to trade off exploration and exploitation in the policy space, outperforming other Dec-POSMDP search approaches given a fixed computational budget. Yet, G-DICE suffers from sample degeneracy and convergence issues related to the sampling distributions, and in its current form only applies to discrete observation settings. The following sections resolve these issues, resulting in a scalable, accelerated continuous-observation search algorithm.
III Accelerated Policy Search
Prior to extending to continuous observations, this section treats the sampling distribution degeneracy issue in sampling-based Dec-POSMDP approaches. It also introduces a maximal entropy injection scheme which is then embedded in the proposed continuous-observation Dec-POSMDP algorithm.
III-A Sampling Distribution Degeneracy Problem
A major issue with sampling distribution-based approaches such as G-DICE occurs when the learning rate is not low enough, causing the underlying sampling distributions to rapidly converge to degenerate distributions far from the optimum. All subsequent search iterations return identical samples of the policy space, stifling exploration altogether. Yet, one benefit of a high learning rate is fast convergence, especially useful for complex Dec-POSMDPs with large observation spaces and computationally expensive trajectory sampling and evaluation. Sampling distribution-based approaches such as G-DICE often require hand-tuned selection of the learning rate for good performance, even after which convergence may be excessively slow, hindering experimentation and analysis. This trade-off was noted in prior work, where it was left as future work. Recall that the motivation behind the Dec-POSMDP framework is scalability to very large multi-robot planning domains. Although policy search is conducted offline, hindrance of human-in-the-loop analysis due to slow convergence is undesirable. A naïve solution is to set the learning rate arbitrarily low, but this implies arbitrarily high convergence time (on the order of many days for complex domains). These foundational issues must first be resolved before extending these algorithms to treat the more complex continuous observation case.
Several works have targeted this degeneracy problem. One approach uses dynamic smoothing of learning rates,

$$\alpha_k = \beta - \beta\left(1 - \frac{1}{k}\right)^q,$$

where $\beta$ is the baseline rate and $q$ is the drop-off rate. The result is a monotonically decreasing $\alpha_k$ which initially starts high.
Another approach involves the addition of a noise term to the sampling distribution at each iteration to prevent degeneration. Linearly decreasing noise injection has also been investigated, in which noise starts at a maximum allowable level and decays at a fixed noise drop-off rate.
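Both acceleration schedules can be sketched as simple functions of the iteration number. The dynamic smoothing form below follows the common cross-entropy-method formulation, and all constants (`beta`, `q`, noise levels) are illustrative choices rather than values from the paper.

```python
def dynamic_alpha(k, beta=0.9, q=7):
    """Dynamically smoothed learning rate: starts near beta at k=1 and
    decreases monotonically with iteration k (cross-entropy-method style)."""
    return beta - beta * (1.0 - 1.0 / k) ** q

def injected_noise(k, noise_max=0.1, drop_rate=0.01):
    """Linearly decreasing noise level, clipped at zero."""
    return max(noise_max - drop_rate * (k - 1), 0.0)

for k in (1, 10, 100):
    print(k, round(dynamic_alpha(k), 4), injected_noise(k))
```

Note that both schedules are open loop: they depend only on the iteration counter, never on the state of the sampling distributions, which is exactly the limitation the next paragraph raises.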
These approaches are not ideal, as they are agnostic to Dec-POSMDP value function convergence, meaning they do not adapt to domain-specific behaviors. Thus, their sub-parameters (baseline rate, drop-off rate, maximum noise, and noise drop-off rate) typically need significant tuning to alleviate convergence issues for individual domains.
III-B Maximal Entropy Injection
A principled approach combining policy exploration with fast convergence is desired, without reliance on sensitive dynamic smoothing or noise terms. As degenerate distributions have minimal entropy, an intuitive idea is to simultaneously monitor policy value convergence and underlying sampling distribution entropy to alleviate degeneracy issues.
In the proposed acceleration approach, search is conducted as usual for iterations in which the policy value has not converged, allowing policy space exploration. Once convergence occurs, the entropies of the sampling distributions are calculated. If a distribution's entropy is significantly below the maximum entropy for its distribution family, degeneracy has likely occurred. Maximum entropy distributions are well-studied, and closed-form results are known for many families and constraint sets. For Dec-POSMDPs, these entropy calculations are computationally cheap, as the sampling distributions are categorical, with corresponding discrete uniform maximal entropy distributions.
In post-degeneracy iterations, each sampling distribution's entropy is increased by incrementally combining its parameters $\theta$ with the max entropy distribution parameters $\theta^{ME}$,

$$\theta \leftarrow (1 - \lambda)\,\theta + \lambda\,\theta^{ME},$$

where $\lambda$ is the entropy injection rate. This encourages policy space exploration while still allowing usage of high learning rates for fast convergence. In practice, the entropy injection rate has a low value (between 1% and 3% per iteration). As this process is repeated only in post-convergence iterations, there is low sensitivity to $\lambda$, as entropy is incrementally increased whenever necessary. Injection stops as soon as the policy value diverges, allowing unhindered exploration. This acceleration approach is evaluated in Section V-A and also integrated into the proposed continuous-observation search algorithm in the next section.
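A minimal sketch of the injection test for one categorical distribution follows, assuming a mixing-toward-uniform update; the degeneracy threshold (half of the maximum entropy) is an invented heuristic for illustration, not the paper's criterion.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a categorical distribution (natural log)."""
    p = np.asarray(p)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def inject_entropy(theta, lam=0.02, threshold_ratio=0.5):
    """If entropy falls well below the uniform (max entropy) distribution's,
    mix the parameters incrementally toward uniform. lam is the injection
    rate (1%-3% per iteration in practice); threshold_ratio is illustrative."""
    theta = np.asarray(theta, dtype=float)
    max_ent = np.log(len(theta))                  # entropy of discrete uniform
    if entropy(theta) < threshold_ratio * max_ent:
        uniform = np.full_like(theta, 1.0 / len(theta))
        theta = (1 - lam) * theta + lam * uniform
    return theta

degenerate = np.array([0.98, 0.01, 0.01])         # near-degenerate distribution
print(inject_entropy(degenerate))                 # nudged toward uniform
```

Because the mix is a convex combination of two valid distributions, the result always remains a valid distribution, and repeated application raises entropy only as long as the degeneracy test keeps firing.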
IV Continuous-Observation Dec-POSMDP Search
This section focuses on multi-robot policy search in continuous observation spaces. It first presents an extension of traditional discrete, deterministic FSAs to allow representation of continuous policies. A continuous-observation Dec-POSMDP search algorithm is then introduced.
IV-A Stochastic Kernel-Based Finite State Automata
We first extend the notion of deterministic policies used in existing Dec-POSMDP algorithms to stochastic policies. In a stochastic FSA, the MA output function and node transition function provide robots with probability distributions over MAs and next-nodes during policy execution, rather than deterministic MA and transition assignments. The resulting stochastic decision-making scheme allows robots to escape cycles of incorrect decisions which may otherwise occur in deterministic FSAs. While it has been shown that finite-horizon Dec-POMDPs have at least one optimal deterministic policy (i.e., guaranteed to at least equal the performance of the optimal stochastic policy), in approximate searches, stochastic FSAs often result in a higher joint value [18, 16]. One can readily modify cross entropy-based search to provide such a stochastic policy by simply using the underlying sampling distributions to define the policy, rather than the best sampled deterministic policy (as done in G-DICE).
A second issue is extension of FSAs to support continuous observations, a formidable task as continuous observation spaces are uncountably infinite. Existing Dec-POSMDP algorithms are, thus, inapplicable. To resolve this, we assume policy smoothness over the observation space, a characteristic which occurs naturally in many robotics domains. In other words, the controller structure should induce similar decisions from similar observation chains. This typical assumption is also made by the continuous state-action MDP and POMDP literature [19, 8, 7].
We exploit this smoothness assumption and introduce Stochastic Kernel-based Finite State Automata (SK-FSAs) for policy representation (Fig. 1), which have a similar structure to previously proposed controllers. Policy execution in SK-FSAs is similar to that of traditional FSAs. Each robot's SK-FSA node outputs a categorical MA distribution, which the robot samples to select its next MA (Fig. 1). Following MA execution, the robot receives a continuous high-level observation, which the SK-FSA node transition function uses to output a corresponding node transition distribution. Note the distinction between the transition function and the transition distribution: the transition function maps continuous observations to the probability simplex over next-nodes. Given an observation, it outputs an infinitesimal 'slice' representing a categorical transition distribution over next-nodes. Fig. 1 illustrates such a slice, evaluated at a particular high-level observation. The robot samples this categorical distribution, transitions to its next SK-FSA node, and repeats this process indefinitely.
We propose the use of kernel logistic regression (KLR) to represent node transition functions. KLR is a non-parametric multi-class classification model (i.e., model complexity grows with the number of kernel points). In SK-FSAs, node transition functions use KLR with high-level observation inputs and output probabilities over next-nodes. KLR is a natural model for stochastic policies as it is a probabilistic classifier (i.e., SK-FSA transition distributions correspond to KLR probabilities). Our approach uses KLR with radial basis function (RBF) kernels over the observation space,

$$K(o, o') = \exp\left(-\frac{\|o - o'\|^2}{2\sigma^2}\right),$$

where $\sigma$ is the kernel radius. RBF kernels are preferred as they provide smooth classification outputs while allowing non-linear decision boundaries, in contrast to linear kernels. The next section discusses SK-FSA policy search, including details on kernel basis selection and kernel weight training.
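A small sketch of evaluating one SK-FSA transition 'slice' follows, assuming a softmax over kernel-weighted scores (a common KLR formulation); the basis points, weight matrix, and kernel radius are invented for illustration.

```python
import numpy as np

def rbf_kernel(o, basis, sigma=0.5):
    """RBF kernel evaluations between one observation o and all basis points."""
    d2 = ((basis - o) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def transition_distribution(o, basis, weights):
    """One KLR transition 'slice': a softmax over next-nodes of kernel-weighted
    scores. weights has shape (n_next_nodes, n_basis_points)."""
    k = rbf_kernel(o, basis)
    scores = weights @ k
    e = np.exp(scores - scores.max())   # numerically stable softmax
    return e / e.sum()

basis = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # illustrative 2D observations
weights = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
p = transition_distribution(np.array([0.1, 0.0]), basis, weights)
print(p)  # transition probabilities favor the next-node anchored near (0, 0)
```

The RBF kernel makes the slice vary smoothly with the observation: nearby observations produce nearby transition distributions, which is the smoothness assumption in action.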
IV-B Entropy-based Policy Search over SK-FSAs
This section introduces an SK-FSA search algorithm titled Entropy-based Policy Search using Continuous Kernel Observations (EPSCKO). EPSCKO consists of three steps: cross entropy search for MA distributions (as done in G-DICE), memory-bounded KLR training for SK-FSA node transition functions, and entropy injection for search acceleration (as in Section III-B). In each EPSCKO iteration, decision trajectories are sampled from the SK-FSA policy, and the best trajectories (evaluated using Equation 4) are used for the policy update.
We first detail the KLR training approach and then present the overall algorithm. As the transition function uses a kernel-based representation over the observation space, it requires a set of observation kernel basis points and weights. In EPSCKO, the kernel weights constitute the node transition parameter vector; references to transition parameters in this section refer to this vector.
The computational cost of training KLR models grows rapidly with the training input size. For a sustainable training time, EPSCKO uses a memory-bounded kernel basis consisting of the continuous observations received during evaluation of the best policies in each of the latest iterations. In each iteration, the bundle of observations in the best decision trajectories is pushed to a fixed-length first-in, first-out (FIFO) circular queue. The KLR training outputs are the corresponding sampled node transitions taken along these same trajectories. The non-parametric nature of KLR ensures that node transition function complexity increases in regions with high observation density, so the policy naturally focuses on prominent observation space regions. The result is a compact yet informative policy representation.
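The memory-bounded basis can be sketched with a fixed-length deque; the bundle contents and queue length below are placeholders, not values from the paper.

```python
from collections import deque

# Memory-bounded kernel basis: a FIFO circular queue holding observation
# bundles from the best trajectories of the latest iterations (length 3 here).
kernel_queue = deque(maxlen=3)

for iteration in range(5):
    bundle = [(iteration, 0.0), (iteration, 1.0)]   # placeholder observation bundle
    kernel_queue.append(bundle)                     # oldest bundle popped automatically

# Flatten the retained bundles into the current kernel basis.
basis = [obs for bundle in kernel_queue for obs in bundle]
print(len(kernel_queue), len(basis))  # 3 6
```

Because the queue length bounds the basis size, KLR training cost stays fixed per iteration no matter how long the search runs.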
To counter convergence to locally optimal SK-FSAs, EPSCKO uses a weighted log-likelihood function to train the KLR model. Weights decay geometrically with age, such that observation bundles sampled in more recent algorithm iterations are given higher value; given learning rate $\alpha$, a bundle's weight decays by a factor of $(1-\alpha)$ for each iteration it has spent in the FIFO kernel queue. This weighting is derived from recursive application of (7), and is analogous to the smoothing step used in G-DICE. For each robot, the weighted log-likelihood function is maximized over the transition parameters for KLR training: each bundle contributes the log-likelihood of its sampled node transitions given its observations, scaled by the bundle's weight. The partial derivatives with respect to the parameter components take the standard softmax-regression form, involving an indicator function over the sampled transitions. The log-likelihood can be maximized using a quasi-Newton method (our implementation uses the Broyden-Fletcher-Goldfarb-Shanno algorithm). To improve the generalization of the learned model, regularization is used during weight training.
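A sketch of weighted softmax-regression training by quasi-Newton optimization is shown below, standing in for the KLR training step; the feature matrix, labels, per-sample weights, and regularization constant are synthetic, and SciPy's BFGS routine is used in place of a hand-rolled optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def neg_weighted_loglik(theta_flat, K, y, w, n_classes, reg=1e-3):
    """Negative weighted log-likelihood of softmax regression over kernel
    features K, with L2 regularization; w holds per-sample (bundle) weights."""
    theta = theta_flat.reshape(n_classes, K.shape[1])
    scores = K @ theta.T
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    log_p = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    ll = (w * log_p[np.arange(len(y)), y]).sum()     # weighted log-likelihood
    return -ll + reg * (theta_flat ** 2).sum()       # L2 regularization

rng = np.random.default_rng(0)
K = rng.normal(size=(20, 4))           # synthetic kernel feature matrix
y = (K[:, 0] > 0).astype(int)          # synthetic next-node labels, 2 classes
w = np.ones(20)                        # bundle weights (uniform here)
res = minimize(neg_weighted_loglik, np.zeros(8), args=(K, y, w, 2), method="BFGS")
print(res.fun)
```

Replacing the uniform `w` with geometrically decayed bundle weights yields the recency-weighted training described above, without changing the optimizer.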
EPSCKO is outlined in Algorithm 1. It begins by specifying an empty SK-FSA policy and a fixed-length FIFO circular kernel basis queue for each robot. The best-value-so-far and worst-joint-value are initialized to negative infinity. To encourage policy space exploration, the SK-FSA parameter vectors are initialized such that the associated distributions are uniform.
The main algorithm loop updates the SK-FSA policy over iterations, using the maximal entropy injection scheme detailed in Section III-B to accelerate search. Entropy injection is initially disabled, and a flag indicating successful entropy injection in the current iteration is set to False. The team's SK-FSA policies are evaluated repeatedly, with the perceived continuous observation and node transition trajectories saved for KLR training. MA selections and node transitions from policies exceeding the previous iteration's worst joint value are tracked, and the best-value-so-far is saved. The trajectory lists are pruned to retain only the best trajectories, and the continuous observations and node transitions from this list are pushed to the FIFO queue, causing the oldest trajectories to be popped. The iteration's worst joint value is then updated.
At this point, the algorithm checks whether the Dec-POSMDP joint value has converged. If so, entropy injection is enabled to counter convergence to a local optimum. This does not imply entropy injection will occur, only that it is allowed to occur. Each robot subsequently updates its MA distribution parameter vector using the smoothed MLE approach. As discussed earlier, weighted log-likelihood maximization is used to train the KLR model for each node transition function.
Next, if maximal entropy injection is allowed, the entropies of the sampling distributions are calculated and, if necessary, injection occurs. As the transition function is continuous and non-linear, an approximate measure of its entropy is calculated using transition distributions sampled at its underlying set of observation kernels. This approximation was found to work well in practice (Section V-B) and is computationally efficient, as it avoids domain re-sampling. To increase the entropy of the node transition function, a continuous uniform distribution is injected using the update rule in Equation 10. If entropy injection is conducted for any robot, the current iteration's worst joint value is reset to negative infinity. This critical step ensures that trajectories sampled in the next iteration can actually be used for policy exploration.
EPSCKO is an anytime algorithm applicable to continuous-observation Dec-POMDPs and Dec-POSMDPs. The approach also offers memory advantages over discretization: SK-FSA memory usage scales with the bounded kernel basis size, in contrast to discretized FSAs, whose transition tables grow exponentially with the observation dimension at a given discretization resolution.
This section first validates maximal entropy search acceleration, which resolves a long-standing convergence issue for sampling-based Dec-POSMDP algorithms. Then, EPSCKO is evaluated against discrete approaches in the first ever continuous-observation Dec-POMDP/Dec-POSMDP domain.
V-A Accelerated Policy Search
We evaluate the policy search acceleration approaches discussed in Section III on the benchmark Navigation Among Movable Obstacles (NAMO) grid-world domain. Fig. 2 shows convergence trends for all approaches. A low learning rate is needed in G-DICE to find the optimal policy, requiring many iterations. 50 policies are sampled per iteration, with 1000 trajectories used to approximate policy value in each iteration, so a very large total number of policy evaluations is conducted. This computationally expensive evaluation becomes prohibitively large as domain complexity grows. Increasing the learning rate causes fast convergence to a sub-optimal solution, after which exploration stops due to sampling distribution degeneration.
Existing search acceleration approaches are also evaluated. Dynamic smoothing with a moderate baseline rate slightly improves value. However, the decay rate is static, with no closed-loop feedback from the underlying sampling distributions. The result is a sub-optimal policy which then quickly converges to the same value as the high-learning-rate baseline. Linearly decreasing noise injection performs similarly, with a fast initial increase in value and subsequent degeneration to a sub-optimal policy.
The proposed entropy injection method significantly outperforms the above approaches. The same baseline learning rate as the previous methods is used with a 3% entropy injection rate, resulting in much faster convergence. Sensitivity to the learning rate and injection rate is low, as value convergence monitoring is conducted in all iterations. While some initial tuning of the entropy injection rate is necessary, the key insight is that post-tuning results converge much faster and are more conducive to additional experimentation and analysis (e.g., with domain/policy structure). Oscillations in the plots are due to post-convergence injections, which reset the underlying sampling distributions and force further policy space exploration. In practice, the best policy found in a fixed number of iterations would be returned by the algorithm.
V-B Continuous Observation Domain
To evaluate EPSCKO, a multi-robot continuous-observation nuclear contamination domain is considered (Fig. 4). This first-ever continuous-observation Dec-POMDP/Dec-POSMDP domain involves 3 robots cleaning up nuclear waste. The MAs are Navigate to base, Navigate to waste zone, Correct position, and Collect nuclear contaminant. Following MA execution, each robot receives a noisy high-level observation of its 2D state. The above MAs have non-deterministic durations and a 30% failure probability (due to nuclear contaminant degrading the robots). This causes poor performance of observation-agnostic policies which memorize chains of MAs rather than make informed decisions using the observations.
Robots are initially at the base and must first navigate to the waste zone prior to attempting collection. Robots which execute the Navigate to base MA terminate with a random continuous state in a region centered on the base (brown region marked 'B' in Fig. 4). The Navigate to waste zone MA results in a random terminal state within two large regions surrounding the nuclear zone (everything interior to the gray regions marked 'L' in Fig. 4, including the green regions marked 'S'). Collection attempts are only possible if the robot is within the waste zone (green regions marked 'S' in Fig. 4). Collections attempted outside these small contamination regions result in wasted time, which further discounts the team's future joint rewards. Robots can attempt a Correct position MA, which re-samples their state to be within these smaller regions. However, repeated attempts may be necessary due to the 30% MA failure probability.
After a successful collection, each robot must return to the base to deposit the waste before attempting another collection. Each collection results in a joint team reward, discounted over time. This domain is particularly challenging due to the high failure rate of MAs and the presence of a continuous, non-linear decision boundary in the nuclear zone center, where the trade-off between the correction and collection MAs must be considered by robots given their noisy observations.
Fig. 3 compares the best values obtained using continuous-observation and discrete-observation policy search (EPSCKO, G-DICE with maximal entropy injection, and MDHS). A fixed time horizon was used for evaluation, with each MA taking several time units on average to complete. The same number of FSA nodes was used for both discrete and continuous policies. G-DICE and MDHS results are shown for a range of observation discretization factors, with uniform discretization in each observation dimension. EPSCKO significantly outperforms the discrete approaches, more than doubling the mean policy value of the best discrete-observation case. MDHS faces the policy expansion issues discussed in Section II-B.
G-DICE policy values initially increase with higher discretization resolutions, yet a drop-off occurs beyond a certain discretization factor. While initially counterintuitive, as higher discretization factors imply increased precision regarding important decision boundaries in the continuous domain, Fig. 4 reveals the underlying problem. These plots show the normalized count of observation samples used to compute node discrete policies for coarse and fine discretization cases, with discounting of old observation samples using (7). In other words, they provide a measure of which discrete observation bins have informed each G-DICE policy throughout its iterations. The core issue for discrete policies is that no correlation exists between decisions at nearby observation bins. Fine discretization meshes result in cyclic processes in which observation bins with no previous samples are encountered, causing the robot to make a poor MA selection. Nearby observation bins do not inform the robot during this process, leading it to repeatedly make incorrect decisions. This issue is especially compounded in this domain due to delays caused by the high MA failure probabilities, which reduce the overall number of observations received by robots. The result is a highly uninformative policy with no observations made in many bins, in contrast to policies with a lower discretization factor (Fig. 4).
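The bin-coverage issue can be illustrated with a toy simulation (all numbers are illustrative, not from the experiments): under uniform discretization, the bin count grows quadratically in the per-dimension factor d for a 2D observation space, so a fixed observation budget leaves many fine-resolution bins unsampled and thus uninformed.

```python
import random

random.seed(0)
n_samples = 500                       # observations received during policy search
results = {}
for d in (2, 5, 20):                  # discretization factor per dimension
    bins = set()
    for _ in range(n_samples):
        x, y = random.random(), random.random()   # a 2D continuous observation
        bins.add((int(x * d), int(y * d)))        # the bin this sample informs
    results[d] = len(bins)
    print(f"d={d}: {len(bins)}/{d * d} bins informed")
```

At coarse resolutions every bin is informed; at fine resolutions a substantial fraction of bins never receives a sample, mirroring the uninformative fine-mesh policies observed for G-DICE.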
To build intuition on continuous-policy decision-making, Fig. 5 plots the transition functions for a 6-node EPSCKO policy. For each node, colored 3D manifolds represent the probabilities of transitioning to each next-node given a continuous observation. Circles plotted beneath the transition functions indicate the base and nuclear zone locations. Colorbars indicate the transition manifold color associated with each node and the highest-probability MA executed in it.
Consider a robot policy starting at the first node (far left in Fig. 5), which has two major manifolds (beige and green). Observations under a prominent green manifold region indicate a high probability of transitioning to the node whose colorbar is green, which executes Navigate to waste zone. This green manifold is centered on the base, which makes intuitive sense, as the Navigate to waste zone MA should only be executed if the robot is confident it is at the base. The robot thus most likely transitions to that node, where a complex transition function manifold is encountered. Two beige peaks are centered on the small inner regions of the nuclear zone, indicating a transition to the node which executes Collect nuclear contaminant. Thus, when the robot is confident that it is in the center of the nuclear zone, it attempts a collection MA. Yet, for observations outside the inner nuclear zone, the red and blue manifolds are most prominent. These indicate high probabilities of transitioning to the nodes which execute Correct position. Thus, the robot most likely performs a position correction before continuing policy execution and attempting waste collection. This process continues indefinitely, or until the time horizon is reached. Recall that SK-FSA policies are stochastic, so this discussion describes only the 'most likely' continuous-policy behaviors.
This paper presented an approach for solving continuous-observation multi-robot planning under uncertainty problems. Entropy injection for policy search acceleration was presented, targeting convergence issues of existing algorithms, which are exacerbated in the continuous case. Stochastic Kernel-based Finite State Automata (SK-FSAs) were introduced for policy representation in continuous domains, with the Entropy-based Policy Search using Continuous Kernel Observations (EPSCKO) algorithm for continuous policy search. EPSCKO was shown to significantly outperform discrete search approaches on a complex multi-robot nuclear contamination mission, the first continuous-observation Dec-POMDP/Dec-POSMDP domain. Future work includes extending the framework to continuous-time planning.
-  D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, “The complexity of decentralized control of Markov decision processes,” Math. of Oper. Research, vol. 27, no. 4, pp. 819–840, 2002.
-  C. Amato, G. Konidaris, A. Anders, G. Cruz, J. How, and L. Kaelbling, “Policy search for multi-robot coordination under uncertainty,” in Robotics: Science and Systems XI (RSS), 2015.
-  S. Omidshafiei, A.-A. Agha-Mohammadi, C. Amato, and J. P. How, “Decentralized control of partially observable Markov decision processes using belief space macro-actions,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on. IEEE, 2015, pp. 5962–5969.
-  F. A. Oliehoek and C. Amato, A Concise Introduction to Decentralized POMDPs. Springer, 2016.
-  J. Hoey and P. Poupart, “Solving POMDPs with continuous or large discrete observation spaces,” in IJCAI, 2005, pp. 1332–1338.
-  J. M. Porta, N. Vlassis, M. T. Spaan, and P. Poupart, “Point-based value iteration for continuous POMDPs,” Journal of Machine Learning Research, vol. 7, no. Nov, pp. 2329–2367, 2006.
-  H. Bai, D. Hsu, and W. S. Lee, “Integrated perception and planning in the continuous space: A POMDP approach,” The International Journal of Robotics Research, vol. 33, no. 9, pp. 1288–1302, 2014.
-  S. Brechtel, T. Gindele, and R. Dillmann, “Solving continuous POMDPs: Value iteration with incremental learning of an efficient space representation,” in ICML (3), 2013, pp. 370–378.
-  S. Omidshafiei, A.-A. Agha-Mohammadi, C. Amato, S.-Y. Liu, J. P. How, and J. Vian, “Graph-based cross entropy method for solving multi-robot decentralized POMDPs,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE, 2016, pp. 5395–5402.
-  A. Costa, O. D. Jones, and D. P. Kroese, “Convergence properties of the cross-entropy method for discrete optimization,” Oper. Res. Lett., vol. 35, no. 5, pp. 573–580, 2007.
-  Z. I. Botev and D. P. Kroese, “Global likelihood optimization via the cross-entropy method, with an application to mixture models,” in Winter Simulation Conference. WSC, 2004, pp. 529–535.
-  D. P. Kroese, S. Porotsky, and R. Y. Rubinstein, “The cross-entropy method for continuous multi-extremal optimization,” Methodology and Computing in Applied Probability, vol. 8, no. 3, pp. 383–407, 2006.
-  C. Thiery and B. Scherrer, “Improvements on learning tetris with cross entropy,” ICGA Journal, vol. 32, no. 1, pp. 23–33, 2009.
-  L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer Science & Business Media, 2013, vol. 31.
-  C. E. Shannon, “A mathematical theory of communication,” ACM SIGMOBILE Mob. Comp. and Comm. Rev., vol. 5, no. 1, 2001.
-  C. Amato, D. S. Bernstein, and S. Zilberstein, “Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs,” Auton. Agents and Multi-Agent Sys., vol. 21, no. 3, pp. 293–320, 2010.
-  F. Oliehoek, Value-based planning for teams of agents in stochastic partially observable environments. Amsterdam University Press, 2010.
-  D. S. Bernstein, C. Amato, E. A. Hansen, and S. Zilberstein, “Policy iteration for decentralized control of Markov decision processes,” J. of Artif. Intell. Res., vol. 34, no. 1, p. 89, 2009.
-  S. W. Carden, “Convergence of a Q-learning variant for continuous states and actions,” J. of Artif. Intell. Res., vol. 49, pp. 705–731, 2014.
-  J. Zhu and T. Hastie, “Kernel logistic regression and the import vector machine,” Journal of Computational and Graphical Statistics, 2012.
-  M. Stilman and J. J. Kuffner, “Navigation among movable obstacles: Real-time reasoning in complex environments,” International Journal of Humanoid Robotics, vol. 2, no. 04, pp. 479–503, 2005.