Adaptive Sampling using POMDPs with Domain-Specific Considerations

We investigate improving Monte Carlo Tree Search based solvers for Partially Observable Markov Decision Processes (POMDPs) when applied to adaptive sampling problems. We propose improvements in rollout allocation, the action exploration algorithm, and plan commitment. The first allocates a different number of rollouts depending on how many actions the agent has taken in an episode. We find that rollouts are more valuable after some initial information is gained about the environment. Thus, a linear increase in the number of rollouts, i.e., allocating a fixed number at each step, is not appropriate for adaptive sampling tasks. The second alters which actions the agent chooses to explore when building the planning tree. We find that, by using knowledge of the number of rollouts allocated, the agent can more effectively choose actions to explore. The third improvement determines how many actions the agent should take from one plan. Typically, an agent will take the first action from the planning tree and then call the planner again from the new state. Using statistical techniques, we show that it is possible to greatly reduce the number of rollouts by increasing the number of actions taken from a single planning tree without affecting the agent's final reward. Finally, we demonstrate experimentally, on simulated and real aquatic data from an underwater robot, that these improvements can be combined to yield better adaptive sampling. The code for this work is available at https://github.com/uscresl/AdaptiveSamplingPOMCP.


I Introduction

Adaptive sampling is the process by which an agent, such as an underwater or aerial robot, intelligently samples its environment by building an internal model and selecting sampling positions that improve that model [12]. Adaptive sampling is often preferred to full workspace coverage plans when either (a) the robot cannot cover the entire workspace due to a constrained time or energy budget, or (b) an approximate model of the workspace is acceptable. Adaptive sampling can also make use of domain-specific information when it is available. For example, researchers monitoring algal blooms in aquatic ecosystems find areas of high chlorophyll concentration more valuable to study. Such use-cases naturally lend themselves to the integration of Bayesian optimization in adaptive sampling [5]. Solving adaptive sampling problems exactly is known to be NP-hard [16]. However, these problems can be approached with exact solvers [3], sampling-based planners [11], or Monte Carlo tree search (MCTS)-based solvers, which sample random trajectories from the final reward distribution [18, 2]. Here we focus on using MCTS-based solvers to effectively sample complex environments. These iterative solvers estimate the reward of a state by following random trajectories from that state; sampling the final reward distribution by following a random trajectory from a state at a leaf of the planning tree is called a rollout. Rollouts are used in local planners, such as POMCP [20], to sample discounted rewards over trajectories drawn from an unknown reward distribution.

Fig. 1: Fig. 1(a) shows an agent’s trajectories on a hyperspectral orthomosaic collected in Clearlake, California. Blue is low value, red is high value. The baseline trajectory overlaps with itself (wasted time), goes outside the bounds of the orthomosaic (no reward outside the workspace), and mostly samples near the starting position. The trajectory from our proposed method avoids such behavior and samples regions further away. Fig. 1(b) shows POMDP planning with the POMCP planner. Portions in grey are areas we study and improve in this work.

In adaptive sampling, this reward distribution is defined by some objective function over samples. Typically, rollouts are used to build an estimate of the mean reward for an action. By performing more rollouts, the planner improves its estimate of the expected reward for a particular sequence of actions. Often, planning for adaptive sampling is done online: a fixed number of environment steps (typically one) are enacted after planning for a fixed number of iterations, and this process is repeated until the finite budget (e.g., path length or energy [11]) is exhausted. Here, we show that this process of committing to a fixed number of steps and rollouts at each invocation of the planner can be modified to reduce the total number of rollouts needed over the entire episode. We find that, in information gathering problems, there is a period when the information gathered is sufficient to predict the field accurately enough to make more useful plans. The intuition behind our result is that this period should be allocated more rollouts than the period when less information is known, or when gathering more samples does not result in as much reward. Additionally, more environment steps can be enacted from a single POMCP planning tree because the reward for future actions can be accurately predicted. We cast the adaptive sampling problem as a Partially Observable Markov Decision Process (POMDP), which is solved using a local solver that iteratively updates the expected rewards for action sequences by sampling a reward distribution using rollouts. Specifically, we investigate minimizing the number of rollouts needed to achieve comparable accumulated reward by altering three parameters: the number of rollouts to perform, the choice of which actions to take in the planning tree during a planning iteration, and how many steps of the planned trajectory to follow. Fig. 1(a) shows sample trajectories of a drone on a lake dataset for our method and the baseline POMCP solver.

II Background

Gaussian Processes are widely used modeling tools for adaptive sampling because of their non-parametric and continuous representation of the sensed quantity with uncertainty quantification [11, 13, 18]. Gaussian processes approximate an unknown function from its known outputs by computing the similarity between points using a kernel function [19]. Gaussian Processes are particularly useful for modeling the belief over the underlying function from observations in the POMDP formulation of Bayesian optimization [21]. Online Adaptive Sampling consists of constructing an optimal path by alternating between planning and acting. A plan is developed which attempts to maximize some objective function f by taking the actions described by a partial trajectory p. The partial trajectory is executed and the samples collected along it are added to the model of the environment. These partial trajectories are concatenated to form the full trajectory P. The plan and act iterations are interleaved until the cost c(P) exceeds some budget B. Formally, this is described by Eq. (1).

P* = argmax_{P ∈ Ψ} f(P)   subject to   c(P) ≤ B,        (1)

where Ψ is the space of full trajectories and P* is the optimal trajectory [11]. Typically, f is an objective function describing the quality of the model of the environment built from the samples along P. In this work, the objective function is f(x) = μ(x) + β σ²(x), where μ(x) is the Gaussian process estimate of the underlying function value at x and σ²(x) is the variance of the Gaussian process estimate at x. This objective function is commonly used in Bayesian adaptive sampling, with the parameter β trading off exploration and exploitation. The exploration term of the objective function, σ²(x), exhibits submodularity characteristics [8]. Formally, a function h is submodular if, for all sets X ⊆ Y ⊆ V and all elements e ∈ V \ Y, we have h(X ∪ {e}) − h(X) ≥ h(Y ∪ {e}) − h(Y) [16]. This naturally describes the diminishing returns exhibited in many adaptive sampling problems, where a sample provides more information the fewer samples have been taken before it. Partially Observable Markov Decision Processes (POMDPs) are a framework for solving estimation problems when observations do not fully describe the state. It has been shown that Bayesian optimization can be cast as a Bayesian Search Game [21], in which observations are samples from the underlying world state and the robot’s model of the underlying function is a belief state. In this game, an agent has to select points in the domain that maximize the value of an unknown function g. Samples from g constitute observations, which are partially observable components of the overall state. If this state is augmented with the state of the robot, x_r, and actions are constrained to locally feasible robot motions, this formulation extends naturally to adaptive sampling [18]. To represent the underlying belief at each state, a Gaussian process may be used. We formulate the adaptive sampling problem as a POMDP as shown in Table I.

POMDP | Adaptive Sampling
States | Robot position x_r, underlying unknown function g
Actions | Neighboring search points
Observations | Robot position x_r, sampled value g(x_r)
Belief | Gaussian Process over g
Rewards | f(x_r) = μ(x_r) + β σ²(x_r)
TABLE I: Adaptive Sampling as a POMDP [21]
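To make the belief and reward in Table I concrete, the following is a minimal sketch assuming scikit-learn's GaussianProcessRegressor as the belief model; the class name, the RBF kernel choice, and the trade-off weight beta are illustrative assumptions, not the released implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


class GPBelief:
    """Belief over the unknown function g, updated from (position, sample) pairs."""

    def __init__(self, beta=1.0):
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
        self.X, self.y = [], []   # sampled positions (array-like, e.g. (x, y)) and values
        self.beta = beta          # exploration-exploitation trade-off

    def update(self, x, sample):
        """Incorporate an observation (x_r, g(x_r)) into the belief."""
        self.X.append(x)
        self.y.append(sample)
        self.gp.fit(np.asarray(self.X), np.asarray(self.y))

    def reward(self, x):
        """Objective f(x) = mu(x) + beta * sigma^2(x), used as the POMDP reward."""
        mu, sigma = self.gp.predict(np.asarray([x]), return_std=True)
        return float(mu[0] + self.beta * sigma[0] ** 2)
```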

Partially Observable Monte Carlo Planning (POMCP): POMDPs have been used for adaptive sampling and informative path planning in many situations [17, 14, 18, 10]. Many of these works use a traditional dynamic programming solution to solve the underlying POMDP, which is infeasible for the large state spaces or complex belief representations typically present in adaptive sampling problems. Recently, attention has shifted to solvers which are locally accurate and rely on probabilistic rollout updates [18]. A state-of-the-art algorithm for solving large POMDPs online is the Partially Observable Monte-Carlo Planning (POMCP) solver [20]. POMCP uses a Monte Carlo tree search (MCTS) which propagates reward information from simulated rollouts in the environment. At every iteration of the planner, the agent searches through the generated tree T, choosing actions using Upper Confidence Tree (UCT) [15] exploration adapted to partially observable environments, until it reaches a leaf node h of T. From node h, the planner performs a rollout using a pre-defined policy (usually a random policy) until the agent reaches the planning horizon. The reward the agent collects while simulating this trajectory is then backpropagated up through the visited states. Finally, the first node the agent visited during the rollout is added as a child of h, and the tree is expanded. Once the specified number of iterations (rollouts) is completed, the tree is considered adequate for planning. The action from the root node with the highest expected reward is then chosen, and the agent executes that action in the environment. At each observation node of T, the observation is estimated with the Gaussian process mean μ(x_r), where x_r is the agent’s position at that node. To update the belief, the state-observation pair is integrated into the Gaussian Process. Multi-Armed Bandits (MAB) are a family of problems in which several actions are available and each action a has an unknown associated reward distribution R_a. At each time step, the agent chooses an action and receives a reward drawn from R_a. The goal of the agent is to maximize the cumulative reward over time or, equivalently, to minimize regret, defined as the difference between the agent’s cumulative reward and that of an optimal policy. There is a natural exploration-exploitation trade-off in the MAB problem because at each step the agent gains more information about the reward distribution of the action it samples [4]. This framework provides the mechanism for action selection in a variety of rollout-based algorithms, including POMCP [20], and applies when each rollout can be viewed as a draw from the reward distribution conditioned on the currently selected actions. In contrast to optimal action selection algorithms, there is a family of algorithms which seek to identify the action with the highest mean reward. These algorithms work in two settings: fixed-budget and fixed-confidence. In the fixed-budget setting, the agent must identify the best action using a fixed number of samples. In the fixed-confidence setting, the agent must identify the best action with a prescribed confidence in the fewest number of samples [6].
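As a point of reference for the exploration algorithms discussed later, here is a minimal sketch of UCT-style action selection at a single tree node; the dictionaries and the exploration constant c are illustrative assumptions rather than POMCP's actual data structures.

```python
import math


def uct_select(node_visits, action_visits, action_values, c=1.0):
    """Pick the child action maximizing mean value plus a UCT exploration bonus.

    node_visits   -- total visit count N of the parent node
    action_visits -- dict action -> visit count n(a)
    action_values -- dict action -> running mean of sampled discounted returns
    c             -- exploration constant (problem-dependent)
    """
    def score(a):
        n = action_visits[a]
        if n == 0:
            return float("inf")  # try every action at least once
        return action_values[a] + c * math.sqrt(math.log(node_visits) / n)

    return max(action_visits, key=score)
```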

III Formulation and Approach

Most online informative path planners use the same planning duration at every point in the planning sequence. We propose to modify the POMCP [20] planner, an online, rollout-based POMDP planner, using knowledge about the underlying problem. Our method selects how many rollout iterations to use at each environment interaction and which actions should be tried in the planner. Using the resulting tree, it also adaptively selects how much of the tree to incorporate into the executed plan, based on the rewards received during these rollouts. We show that we can produce similar models of the environment with fewer overall iterations of the (expensive) rollout procedure. An overview of how our improvements fit into the planning and action pipeline with POMCP is shown in Fig. 1(b).

III-A Rollout Allocation

The first improvement we propose is to alter how the overall rollout budget is handled. Typically, the total rollout budget is divided evenly, and a fixed number of rollouts is allocated each time the planner is called to compute a new partial trajectory. This results in a linear increase in the number of rollouts used as the number of environment steps increases. We propose that the rollout allocation method should take advantage of three key ideas: cold-starting, submodularity, and starvation. The idea of cold-starting is well studied in adaptive sampling [13] and captures the notion that the planner cannot make useful decisions with little information to plan on. Planning with little information is of limited use, since predictions at far-away points will generally revert to the global mean with high variance. Typically, this is handled by having the robot perform a pre-determined pilot survey to gather initial information about the workspace [13]; this strategy wastes sampling time if too many pre-programmed samples are taken. The lack of early information interacts with a second property of adaptive sampling: the submodularity of the objective function, which makes effective sampling early in the episode especially important, since early samples provide the largest benefit. Additionally, because this information is used in later planning, the importance of good early information gathering is compounded. There is, of course, a trade-off: plans which allocate too many rollouts early on could suffer from starvation of rollouts at later stages. This can cause the planner to make poor decisions precisely when there is rich information to plan on and the areas of interest are well understood. This rollout allocation trade-off is an instance of the exploration-exploitation trade-off common in information gathering tasks.

III-B Exploration Algorithm

MCTS-based planners, such as POMCP, treat action selection at each layer of the tree as an MAB problem. In the usual MCTS algorithm, the objective is to optimally explore possible trajectories by choosing an action at each layer according to some optimal action selection criterion [20]. We propose using best-arm identification algorithms instead of optimal exploration algorithms, because the final goal of POMCP is to choose and execute the action with the highest mean reward, not to maximize the cumulative reward during the search. Shifting from the cumulative-reward setting to the fixed-budget setting allows the exploration algorithm to set the exploration-exploitation trade-off based on the number of rollouts allocated at each planning step. When many rollouts are allowed, the algorithm can be more explorative and consider each action for longer, while with fewer rollouts the algorithm becomes more exploitative. A fixed-budget action selection algorithm can only be used for the first action in the tree, as these are the only actions for which the total number of combined rollouts is fixed. In this work, we investigate three exploration algorithms. The first, Upper Confidence Tree (UCT), is an optimal exploration algorithm and the default choice for most MCTS-based solvers, including POMCP [20]. UCT trades off exploration and exploitation by adding to each action an exploration bonus based on the number of times the parent and child have been explored. UCT does not take a budget into account, but instead tries to maximize the sum of the reward samples during planning; because of this, UCT may be highly explorative [1]. The remaining two algorithms are fixed-budget algorithms that explicitly incorporate the number of rollouts allotted and attempt to maximize the probability that the chosen action is, in fact, the best action. The first, UGapEb, uses an upper bound on the simple regret of the actions to choose the action which is most likely to become the best one to exploit [6]. This algorithm incorporates the total budget into its confidence interval to make the most efficient use of the provided budget. It contains a difficult-to-estimate parameter which requires the gap between actions to be known ahead of time. Both UCT and UGapEb additionally depend on a difficult-to-estimate scaling parameter which is multiplied by the bound to set the exploration-exploitation trade-off. In this work, we use the difference between the highest and lowest discounted rewards ever seen, but this may not be the optimal bound and requires the agent to know these values before exploring the workspace. The final exploration algorithm is Successive Rejects. This algorithm is unique in that it does not use a confidence bound like UCT and UGapEb. Instead, it pulls each action a fixed number of times in successive rounds and eliminates one action per round until a single action remains [1]. This algorithm is preferable in some situations because it is parameter-free, whereas UCT and UGapEb have hard-to-tune parameters. However, it may waste rollouts on actions that are obviously inferior.
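As an illustration of the parameter-free alternative, the following is a minimal sketch of the Successive Rejects procedure [1] for picking the first action of the tree under a fixed rollout budget; the pull(a) callback, which would run one rollout after taking candidate action a, and the other names are assumptions made for this example.

```python
import math


def successive_rejects(pull, K, budget):
    """Fixed-budget best-arm identification by Successive Rejects.

    pull(a) -- samples one reward for arm a (e.g., one rollout after first action a)
    K       -- number of arms (candidate first actions)
    budget  -- total number of pulls (rollouts) allowed at this planning step
    """
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    surviving = list(range(K))
    sums, counts = [0.0] * K, [0] * K
    n_prev = 0

    for k in range(1, K):                                    # K - 1 elimination phases
        n_k = math.ceil((budget - K) / (log_bar * (K + 1 - k)))
        for a in surviving:                                  # pull each surviving arm equally
            for _ in range(max(n_k - n_prev, 0)):
                sums[a] += pull(a)
                counts[a] += 1
        n_prev = n_k
        # reject the surviving arm with the lowest empirical mean
        worst = min(surviving, key=lambda a: sums[a] / max(counts[a], 1))
        surviving.remove(worst)

    return surviving[0]                                      # estimated best first action
```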

III-C Plan Commitment

Each call to the POMCP planner produces a planning tree of estimated rewards for action sequences. Typically, the agent executes the first action with the highest expected reward and re-plans with the subtree it descended into. Since an adaptive sampling agent approximates the underlying function using an initial belief, the generative model changes after each call to the POMCP planner: every time the agent takes an action in the environment and receives new samples, the initial belief for the underlying POMDP changes. Hence, the tree must be discarded and completely re-planned after incorporating the observation from the environment into the initial belief state. We propose to take more than one action from the tree when there is certainty at lower levels about the optimality of the action. For the agent's performance to be unaffected by taking further actions, two considerations must be accounted for. The first is the quality of the estimate of the reward for an action: if rollouts are spread evenly, the number of rollouts performed for lower actions (deeper in the tree) will be exponentially smaller than for higher actions (closer to the root), making their reward estimates poor. The second is the quality of the estimate of the observation at locations further away: the initial belief greatly affects the quality of observation estimates, and trajectories far from what the agent has already seen will generally have worse predicted observations. With these two considerations, the agent may be able to extract more than one plan step from the tree without significant deterioration in the accumulated reward. The simplest method is to take a fixed number of actions from the MCTS tree and execute them. This does not account for the quality of the reward estimates for those actions; with this method, the agent may take actions from the tree even when it has an inadequate understanding of the underlying world state. If the agent accounts for the statistics of the reward samples for each action, more sophisticated methods can be used. We use a method based on UGapEc, a fixed-confidence MAB scheme [6]. UGapEc determines whether there is an action whose mean reward is higher than that of all other actions with a prescribed probability. This algorithm can be used to verify whether a fixed-confidence threshold is met by checking whether the algorithm would choose not to explore further. Another method, which uses a statistical test similar in spirit to UGapEc, is a two-tailed Welch's t-test [22]. This test assumes the distributions the samples come from are Gaussian, but does not assume the standard deviations are known or equal. Since the error in the estimate of the standard deviation is quadratic in the number of samples, our estimate of the standard deviation deteriorates much faster than our estimate of the mean. Because of this, a more careful test is needed than a simple Gaussian confidence interval test, which may underestimate the sample standard deviation [9]. The unequal-variances two-tailed t-test evaluates the null hypothesis that the reward distributions for two actions have identical expected values, returning a p-value for that hypothesis. A threshold is set, and if the p-value of the null hypothesis is below the threshold, the action is considered safe to take. This method is statistically robust: it prevents the action from being chosen both when there are not enough samples of each action and when the means are too close to distinguish with the number of samples gathered. The test uses a Student's t-distribution [7] and calculates the t statistic and the degrees of freedom ν with Eq. (2), from which the p-value can be computed.

t = (m_1 − m_2) / sqrt(s_1²/n_1 + s_2²/n_2),    ν = (s_1²/n_1 + s_2²/n_2)² / [ (s_1²/n_1)²/(n_1 − 1) + (s_2²/n_2)²/(n_2 − 1) ]        (2)

where m_i is the sample mean reward, s_i is the sample standard deviation, and n_i is the sample size for the reward distribution of action i. We compare the reward distributions of the two actions with the highest expected reward to determine the p-value. We ignore the other actions because the asymmetric nature of an MCTS tree causes the worst actions to have very few rollouts.
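A minimal sketch of this commitment check, assuming SciPy's Welch t-test (ttest_ind with equal_var=False); the function name, the reward-list arguments, and the default threshold are illustrative.

```python
from scipy import stats


def commit_to_action(best_rewards, second_rewards, p_threshold=0.05):
    """Decide whether to commit to the best child action deeper in the tree.

    best_rewards, second_rewards -- lists of discounted returns sampled by rollouts
                                    for the two actions with the highest mean reward
    Returns True if the unequal-variances (Welch) t-test rejects the hypothesis that
    the two means are equal at the given p-value threshold.
    """
    if len(best_rewards) < 2 or len(second_rewards) < 2:
        return False  # not enough samples to run the test
    t_stat, p_value = stats.ttest_ind(best_rewards, second_rewards, equal_var=False)
    return p_value < p_threshold
```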

IV Experiments

Fig. 2: Results from a grid search over possible rollout curves. Fig. 2(a) presents the curves that were searched over. Fig. 2(b) shows the three best curves and a linear allocation curve, colored by their mean accumulated reward at the end of the episode.
Fig. 3: Comparison of Exploration Algorithms (Figs. 3(a), 3(b), and 3(c)) and Plan Commitment Algorithms (Figs. 3(d) and 3(e)). "UCT, Fixed" is the baseline, which evenly splits the rollouts at each step and uses the UCT exploration algorithm (the default for MCTS). Other results use a curved rollout allocation. For plan commitment, Fig. 3(d) shows the reward accumulation and Fig. 3(e) shows the number of rollouts used in all POMCP calls for the whole episode. A small offset value is added to methods which overlap.

We assess the performance of our improvements in three environments. The first is a test function used to evaluate the effectiveness of sequential Bayesian optimization with POMCP [18]. We use a dynamic (time-varying) two-dimensional function as the underlying ground truth for testing our individual improvements; it corresponds to a Gaussian curve circling a fixed point twelve times.

(3)

In this environment, the agent starts at the bottom-center of the time box and progresses towards the top, choosing actions in the x-y plane. A subsampled image of this environment (low values removed for clarity) can be seen in Fig. 6. In the other two environments, called Validation Environment 1 (Fig. 5(a)) and Validation Environment 2 (Fig. 5(d)), we use chlorophyll concentration data collected with a YSI Ecomapper robot as input data. These datasets span a bounded three-dimensional volume. We interpolate these data with a Gaussian process to create the underlying function to estimate. In these scenarios, the robot travels a fixed distance between samples and can freely travel in any direction at any point. The robot starts at the center of the environment at zero depth. For all environments, the agent is allowed to take 200 environment steps and is assumed to have a point motion model that can move to neighboring locations. We use the objective function f(x) = μ(x) + β σ²(x), with different values of β for the dynamic function and for the validation environments. All experiments are run with five seeds each.

IV-A Grid Search for Rollout Allocation

To find the proper form of the rollout allocation, and to test the assertion that different parts of the POMDP planning process need different numbers of rollouts, we perform a grid search over different curves that describe the rollout allocation. For each curve, if it would allocate fewer than one rollout per action, we allow the planner to perform a single rollout per action. We parameterize these curves by cumulative beta distributions because of their flexibility in representing many different kinds of curves. Each curve is determined by an α and a β parameter, and we search over a grid of α and β values. These curves can be seen in Fig. 2(a). The results of this experiment are shown in Fig. 2(b), and indicate that an exponential increase in the rollout allocation is desirable while a very flat curve is undesirable. The grid search identifies a best curve for the dynamic function. We empirically find that this curve does worse on the validation environments, possibly due to overfitting, and that a different curve works best there; we use that curve for tests involving a curved rollout allocation on the validation environments.
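A minimal sketch of turning a cumulative beta distribution into a per-step rollout schedule, assuming SciPy's beta distribution; the budget, step count, and the α and β values shown are placeholders rather than the values found by the grid search.

```python
import numpy as np
from scipy.stats import beta as beta_dist


def rollout_schedule(total_rollouts, n_steps, alpha, beta, min_per_step=1):
    """Allocate a total rollout budget across environment steps with a beta-CDF curve.

    The cumulative beta distribution evaluated at each step fraction gives the share of
    the budget that should have been spent by that step; differencing it yields the
    per-step allocation. A floor keeps every step above min_per_step rollouts (the paper
    floors the allocation at one rollout per available action).
    """
    fractions = np.linspace(0.0, 1.0, n_steps + 1)
    cumulative = beta_dist.cdf(fractions, alpha, beta) * total_rollouts
    per_step = np.diff(cumulative)
    return np.maximum(np.round(per_step).astype(int), min_per_step)


# Example: a back-loaded schedule that allocates more rollouts late in the episode.
schedule = rollout_schedule(total_rollouts=20000, n_steps=200, alpha=4.0, beta=1.0)
```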

IV-B Comparison of Exploration Algorithms

We test the effectiveness of alternative exploration algorithms to UCT and the interaction between the rollout allocation method and the exploration algorithm. We test the three exploration algorithms described in Section III-B, UGapEb, UCT, and Successive Rejects, on the three environments. In Fig. 3(a), all beta-curve-based methods outperform the fixed method and all allocators work almost equally well, with UCT having a slight edge. In Fig. 3(b), UGapEb and Successive Rejects with curved rollout allocation perform approximately equally, but outperform UCT with both fixed and curved rollout allocations. In Fig. 3(c), all three curved allocators are outperformed by a fixed rollout allocation, likely because the rollout curve for this environment was not chosen by grid search. Among the curved allocators, UGapEb outperforms the others by a significant margin.

IV-C Comparison of Plan Commitment Algorithms

We test the methods described in Section III-C for determining how many steps to take once the MCTS tree is generated. We test the unequal-variances t-test and UGapEc methods with different parameters against a baseline method, which takes only the first action, across five seeds. Figs. 3(d) and 3(e) show the comparison of all these combinations against the baseline. UGapEc and the baseline largely overlap because UGapEc cannot confidently predict whether the chosen action is the best action with so few rollouts, and a larger epsilon does not make sense for the scale of the rewards. We believe that UGapEc may be of use in complex environments where the agent cannot make strong assumptions about the underlying reward distributions and many more rollouts are required by the POMCP algorithm. The unequal-variances t-test performs the best among the options. Within the t-test parameters, a p-value threshold of 0.1 requires slightly fewer rollouts than a threshold of 0.05 for similar reward. However, choosing 0.1 implies riskier behavior, which can have a negative effect in complex environments and real-world datasets such as our validation environments. Hence, we choose the unequal-variances t-test with p = 0.05 as our best choice for plan commitment. Fig. 3(d) shows the accumulated reward over a trajectory execution for the baseline and our choice. Fig. 3(e) makes clear that the algorithms use vastly different numbers of rollouts to obtain this result. Hence, we see that the t-test plan commitment algorithm significantly reduces the rollouts needed to solve the adaptive sampling problem.

Fig. 4: Comparison of the combined proposed improvements (Proposed Method) against the baseline for all environments. Figs. 4(a) and 4(b) are the reward and number of rollouts used by the agent in the dynamic function environment; Figs. 4(c) and 4(d) are the reward and number of rollouts used by the agent in Validation Environment 1; Figs. 4(e) and 4(f) are the reward and number of rollouts used by the agent in Validation Environment 2.

IV-D Comparison of Baseline with Combined Improvements

Fig. 4 shows the combined effect of all improvements from the preceding experiments: a curved rollout allocation, the UGapEb exploration algorithm, and the t-test plan commitment algorithm. We compare against a baseline which uses an equal number of rollouts at each step, uses the UCT exploration algorithm, and takes only one action before re-planning. We compare our method and this baseline on each environment. Figs. 4(a) and 4(b) show that the combined features achieve a much higher reward in fewer rollouts on the dynamic environment. Figs. 4(c) and 4(d) show that the agent again receives a higher reward with many fewer rollouts than the baseline method. Figs. 4(e) and 4(f) indicate that our method is comparable to the baseline in terms of reward but achieves this reward in fewer rollouts.

Fig. 5: Fig. 5(a) shows a dataset collected with an underwater robot; Figs. 5(b) and 5(c) show example trajectories from the baseline implementation and our proposed implementation, respectively. Fig. 5(d) shows another, more complex, dataset collected in the same location; Figs. 5(e) and 5(f) show example trajectories from the baseline implementation and our proposed implementation, respectively.

V Conclusion

We present improvements for online adaptive sampling with a Monte Carlo-based POMDP solver that exploit specific knowledge of the adaptive sampling problem structure. This allows the agent to estimate a non-parametric function by taking samples of an underlying phenomenon, such as the concentration of chlorophyll in a body of water. First, we show that by changing the rollout allocation to more heavily favor later stages of planning, a better overall model of the environment can be created. We believe this is because later stages have more information to plan on and can therefore develop better and longer plans. We show that searching for an optimal allocation curve can lead to large performance increases, and that even reasonably chosen curves improve performance. Second, we show that the agent's total reward can increase by changing the action exploration algorithm to one that explicitly incorporates the number of rollouts allocated to each planning step. This works together with the rollout allocation to improve action selection when few rollouts are allocated. We also show that by modifying the number of steps the agent takes from a planning tree, the overall planning can be made more efficient: a statistical test can be used to determine whether an action is confidently the best action, and with this test we are able to reduce the number of rollouts needed to reach a comparable accumulated reward. Finally, we show that these improvements are synergistic and, when used together, can greatly improve planning over a fixed-step, optimal-exploration, fixed-rollout-allocation planner.

References

  • [1] J. Audibert and S. Bubeck (2010) Best Arm Identification in Multi-Armed Bandits.
  • [2] G. Best, O. M. Cliff, T. Patten, R. R. Mettu, and R. Fitch (2019) Dec-MCTS: Decentralized planning for multi-robot active perception. The International Journal of Robotics Research 38(3), pp. 316–337.
  • [3] J. Binney and G. S. Sukhatme (2012) Branch and bound for informative path planning. In 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 2147–2154.
  • [4] D. Bouneffouf, I. Rish, and C. Aggarwal (2020) Survey on Applications of Multi-Armed and Contextual Bandits. In 2020 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8.
  • [5] J. Das, K. Rajan, S. Frolov, F. Py, J. Ryan, D. A. Caron, and G. S. Sukhatme (2010) Towards marine bloom trajectory prediction for AUV mission planning. In 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 4784–4790.
  • [6] V. Gabillon, M. Ghavamzadeh, and A. Lazaric (2012) Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence. In Advances in Neural Information Processing Systems 25, pp. 3212–3220.
  • [7] W. S. Gosset (1908) The Probable Error of a Mean. Originally published under the pseudonym "Student".
  • [8] C. Guestrin, A. Krause, and A. P. Singh (2005) Near-optimal sensor placements in Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning (ICML '05), Bonn, Germany, pp. 265–272.
  • [9] J. Gurland and R. C. Tripathi (1971) A Simple Approximation for Unbiased Estimation of the Standard Deviation. The American Statistician 25(4), pp. 30–32.
  • [10] J. Hai-Feng, C. Yu, D. Wei, and P. Shuo (2019) Underwater Chemical Plume Tracing Based on Partially Observable Markov Decision Process. International Journal of Advanced Robotic Systems 16(2).
  • [11] G. A. Hollinger and G. S. Sukhatme (2014) Sampling-based robotic information gathering algorithms. The International Journal of Robotics Research 33(9), pp. 1271–1287.
  • [12] J. Hwang, N. Bose, and S. Fan (2019) AUV Adaptive Sampling Methods: A Review. Applied Sciences 9(15), 3145.
  • [13] S. Kemna, O. Kroemer, and G. S. Sukhatme (2018) Pilot Surveys for Adaptive Informative Sampling. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 6417–6424.
  • [14] S. Kim, A. Bouman, G. Salhotra, D. D. Fan, K. Otsu, J. Burdick, and A. Agha-mohammadi (2021) PLGRIM: Hierarchical value learning for large-scale exploration in unknown environments. arXiv preprint arXiv:2102.05633.
  • [15] L. Kocsis and C. Szepesvári (2006) Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pp. 282–293.
  • [16] A. Krause and C. Guestrin (2011) Submodularity and its applications in optimized information gathering. ACM Transactions on Intelligent Systems and Technology 2(4), pp. 1–20.
  • [17] P. F. J. Lermusiaux, T. Lolla, P. J. Haley Jr., K. Yigit, M. P. Ueckermann, T. Sondergaard, and W. G. Leslie (2016) Science of Autonomy: Time-Optimal Path Planning and Adaptive Sampling for Swarms of Ocean Vehicles. In Springer Handbook of Ocean Engineering, pp. 481–498.
  • [18] R. Marchant, F. Ramos, and S. Sanner (2014) Sequential Bayesian optimisation for spatial-temporal monitoring. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI '14), pp. 553–562.
  • [19] C. E. Rasmussen and C. K. I. Williams (2006) Gaussian Processes for Machine Learning. MIT Press.
  • [20] D. Silver and J. Veness (2010) Monte-Carlo Planning in Large POMDPs. In Advances in Neural Information Processing Systems 23, pp. 2164–2172.
  • [21] M. Toussaint (2014) The Bayesian Search Game. In Theory and Principled Methods for the Design of Metaheuristics, pp. 129–144.
  • [22] B. L. Welch (1947) The generalisation of "Student's" problem when several different population variances are involved. Biometrika 34(1–2), pp. 28–35.

VI Appendix

VI-A Comparison of Time Savings to Baseline

Because our proposed method reduces the number of rollouts, the time to compute a plan is reduced. Additionally, rollouts from later environment steps are cheaper to compute because the remaining budget is lower; since our method allocates more rollouts to these later steps, it finishes the episode faster. These two effects combine to produce a saving in wall-clock time for the entire episode, as shown in Table II. Experiments were run on a server with two Intel Xeon Gold processors and 256 GB of RAM.

Method | Dynamic Function | Validation Environment 1 | Validation Environment 2
Baseline | 2061.84 | 2687.73 | 3542.72
Proposed Method | 1371.77 | 2497.36 | 3120.90
TABLE II: Wall-clock time (seconds) required to complete five episodes.

VI-B Visualization of Dynamic Function

We present a visualization of the dynamic function used for testing. The function can be seen in Fig. 6.

Fig. 6: The dynamic (time-varying) two-dimensional function used for testing the effectiveness of our method, described by Eq. (3). Note that this is a subsampled image, showing only values above a threshold, for clarity.

VI-C Future Work

Currently, the rollout allocation algorithm requires either an expensive grid search or an a priori guess. We would like to determine the correct rollout curve online and adapt it to the information the agent has seen. Future directions may also include environments where the underlying reward distributions are farther from Gaussian. In this case, methods like UGapEc or other MAB methods that do not make assumptions about the underlying distributions may perform better.