Upper and Lower Bounds for End-to-End Risks in Stochastic Robot Navigation

10/29/2021
by   Apurva Patil, et al.
The University of Texas at Austin

We present novel upper and lower bounds to estimate the collision probability of motion plans for autonomous agents with discrete-time linear Gaussian dynamics. Motion plans generated by planning algorithms cannot be perfectly executed by autonomous agents in reality due to the inherent uncertainties in the real world. Estimating collision probability is crucial to characterize the safety of trajectories and to plan risk-optimal trajectories. Our approach is an application of standard results in probability theory, including the inequalities of Hunter, Kounias, Fréchet, and Dawson. Using a ground robot navigation example, we numerically demonstrate that our method is considerably faster than the naive Monte Carlo sampling method and that the proposed bounds are significantly less conservative than Boole’s bound commonly used in the literature.


I Introduction

I-A Motivation

Motion plans for mobile robots in obstacle-filled environments can be generated by autonomous trajectory planning algorithms [14]. For real-time implementations, robots are typically equipped with a trajectory tracking controller to mitigate the effects of modeling errors, disturbances, and measurement noise. Since the planned trajectory cannot be tracked perfectly in stochastic environments, collisions with obstacles occur with a nonzero probability in general, even when the planned trajectory itself is collision-free. To address this issue, risk-aware motion planning has received considerable attention over the years [22], [3], [31]. Optimal planning under set-bounded uncertainty provides some solutions against worst-case disturbances [16], [15]. However, in many cases, modeling uncertainties with unbounded distributions, such as Gaussian distributions, has a number of advantages over a set-bounded approach [3]. In the case of unbounded uncertainties, it is in general difficult to guarantee safety against all realizations of noise. This motivates the need for an efficient risk estimation technique that can both characterize the safety of trajectories and be embedded in planning algorithms to allow explicit trade-offs between control optimality and safety. Assuming that a planned trajectory of finite length in a known configuration space is given, we present several upper and lower bounds for the collision probability while tracking the trajectory. This probability will hereafter be called the end-to-end probability of failure. The analysis in this paper assumes that the system dynamics are discrete-time. However, in Section V-A, we also study the performance of our discrete-time risk bounds in the continuous-time setting as the underlying discretization is refined.

The paper is organized as follows: in Section I, we formally define the problem of end-to-end risk analysis, review state-of-the-art literature, and state the contributions of this paper. In Section II, we review probability inequalities from the literature based on which we derive upper and lower bounds of the end-to-end risks in Section III. In Section IV, we demonstrate the results of our analysis using a ground robot navigation example. Finally, Sections V and VI are devoted to discussion and conclusion.

I-B Problem Formulation

Let $\mathcal{X} \subseteq \mathbb{R}^{n_x}$ be a known configuration space, where $n_x \in \mathbb{N}$. Let $\mathcal{X}_{\mathrm{obs}} \subset \mathcal{X}$, $\mathcal{X}_{\mathrm{free}} = \mathcal{X} \setminus \mathcal{X}_{\mathrm{obs}}$, and $\mathcal{X}_{\mathrm{goal}} \subset \mathcal{X}_{\mathrm{free}}$ be the obstacle region, obstacle-free region, and target region, respectively. Given an initial position $\bar{x}_0 \in \mathcal{X}_{\mathrm{free}}$ of the robot, a planning algorithm generates a trajectory by designing a finite, optimal sequence of control inputs $\{\bar{u}_k\}_{k=0}^{T-1}$ such that the end point of the trajectory satisfies $\bar{x}_T \in \mathcal{X}_{\mathrm{goal}}$. We call the finite sequence $\{\bar{x}_k\}_{k=0}^{T}$ the planned trajectory, which satisfies the nominal (noise-free) dynamics

$\bar{x}_{k+1} = A_k \bar{x}_k + B_k \bar{u}_k, \qquad k = 0, \dots, T-1.$

Let $x_k$ be the actual position of the robot during the execution of the plan and $u_k$ be the control input applied at time step $k$. In order to compensate for the effects of motion and sensing uncertainties, we assume the robot executes the planned trajectory in a closed-loop fashion [30]. We call the finite sequence $\{x_k\}_{k=0}^{T}$ the executed trajectory. For the purpose of analysis, in this paper, the system dynamics and the control policy are assumed to be linear. For nonlinear systems, we assume that the dynamics are linearized around the planned trajectories. Such an approach is shown to be effective in many control applications [26], [27]. We assume that the executed trajectory satisfies

$x_{k+1} = A_k x_k + B_k u_k + w_k$

at each time step $k$, where $w_k \sim \mathcal{N}(0, W_k)$ is a Gaussian white noise that models the motion uncertainty. The sensor model is given by

$y_k = C_k x_k + v_k \qquad\qquad (1)$

for $k = 0, \dots, T$, where $v_k \sim \mathcal{N}(0, V_k)$ is a Gaussian white noise that models the noise in the measurements. The sequence of system matrices $A_k$, $B_k$, $C_k$ and noise covariances $W_k$, $V_k$ is assumed to be known a priori.

Since our main focus is to evaluate the risk of a given trajectory plan, we assume that the trajectory is already provided. If $E_k := \{x_k \in \mathcal{X}_{\mathrm{obs}}\}$ represents the event that the robot collides with the obstacles at time step $k$ while tracking the planned trajectory, the end-to-end probability of failure in the trajectory tracking phase can be formulated as

$P_{\mathrm{fail}} = P\left( \bigcup_{k=0}^{T} E_k \right). \qquad\qquad (2)$

I-C Literature Review

Monte Carlo and other sampling-based methods [2], [10], [6] provide accurate estimates of (2) by computing the fraction of simulated executions that collide with the obstacles. However, these methods are often computationally expensive due to the large number of simulation runs needed to obtain reliable estimates, and they are cumbersome to embed in planning algorithms.
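
For concreteness, the following MATLAB sketch illustrates such a Monte Carlo estimate of (2). The function handles closedLoopStep and inObstacle, as well as the planned-trajectory array xbar, are hypothetical placeholders for the reader’s own closed-loop simulator and collision test; they are not part of the paper.

    % Monte Carlo estimate of the end-to-end failure probability (2) (a sketch).
    % xbar:           n-by-(T+1) planned trajectory
    % closedLoopStep: @(x, k) one step of the noisy closed-loop execution (placeholder)
    % inObstacle:     @(x) true if x lies in the obstacle region (placeholder)
    % M:              number of simulated executions
    function pFail = mcFailureProbability(xbar, closedLoopStep, inObstacle, M)
        T = size(xbar, 2) - 1;
        nFail = 0;
        for m = 1:M
            x = xbar(:, 1);                  % the robot starts at the planned initial state
            collided = false;
            for k = 1:T
                x = closedLoopStep(x, k);    % noisy closed-loop transition
                if inObstacle(x)
                    collided = true;         % one collision makes the whole run a failure
                    break;
                end
            end
            nFail = nFail + collided;
        end
        pFail = nFail / M;                   % fraction of colliding executions
    end

Reliable estimates of small failure probabilities require M to be large, which is what makes this baseline slow compared to the analytical bounds discussed next.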

Various analytical approaches have also been proposed in the literature. In general, $E_0, \dots, E_T$ are statistically dependent events. Using the law of total probability, we can reformulate (2) as

$P_{\mathrm{fail}} = 1 - \prod_{k=0}^{T} P\left( \bar{E}_k \mid \bar{E}_0, \dots, \bar{E}_{k-1} \right), \qquad\qquad (3)$

where $\bar{E}_k$ represents the event that the robot is collision-free at time step $k$. It is challenging to compute (3) exactly because it requires evaluating integrals of multivariate distributions over non-convex regions. One approach to estimate (3) is to assume that the event $\bar{E}_k$ is independent of the other events, or that it depends only on $\bar{E}_{k-1}$ [31], and subsequently approximate (3) as $1 - \prod_{k} P(\bar{E}_k)$ or $1 - \prod_{k} P(\bar{E}_k \mid \bar{E}_{k-1})$, respectively. However, these assumptions do not hold in general and can result in overly conservative estimates, or can even underestimate the failure probability.

Another popular approach is to use Boole’s inequality (also called Bonferroni’s first-order inequality) to compute an upper bound of (2) [3], [19], [18], [4]. The inequality is given as

$P\left( \bigcup_{k=0}^{T} E_k \right) \le \sum_{k=0}^{T} P(E_k). \qquad\qquad (4)$

This approach again ignores the dependence among the events and can result in overly conservative estimates, especially as the time discretization is refined. When these estimates are used in planning risk-optimal paths, the algorithms might either find overly conservative paths or fail to find a feasible path even if one exists.

In contrast to the previous approaches, the approach presented in [20] accounts for the fact that the distribution of the state at each time step along the trajectory is conditioned on the previous time steps being collision-free. It truncates the estimated distributions of the robot’s positions with respect to the obstacles and approximates the truncated distributions as Gaussians. However, the truncated distributions are not exactly Gaussian, and this approximation leads to an estimate that might not remain statistically consistent [8].

I-D Contributions

The main contributions of the paper are summarized as follows. In this work, we account for the fact that the events of collision at different time steps are statistically dependent. Unlike [20], we compute the joint distribution of the entire robot trajectory without attempting to approximate the conditional state distributions. Using this joint trajectory distribution, we derive both upper and lower bounds for the end-to-end probabilities of failure. Our upper bounds are considerably tighter than the estimates obtained by Boole’s inequality, which is commonly used in the literature. The lower bounds, on the other hand, are useful for predicting how conservative the computed upper bounds are. Further, we show, in simulation, the validity and performance of our bounds using a ground robot navigation example. We demonstrate that our method is considerably faster than the Monte Carlo sampling method. The approach presented in this paper is quite general and can be applied to estimate the discrete-time risks in stochastic navigation for any motion plan generated by an arbitrary planning algorithm.

II Probability Bounds

In this section, we summarize first- and second-order inequalities for the probability of the union of events $A_1, \dots, A_n$. These inequalities require computation of the terms $P(A_i)$ and $P(A_i \cap A_j)$, $i \neq j$, which are often easy to calculate. We define

$S_1 := \sum_{i=1}^{n} P(A_i) \qquad \text{and} \qquad S_2 := \sum_{1 \le i < j \le n} P(A_i \cap A_j). \qquad\qquad (5)$

Following are the upper bounds for the probability of union of events:

  • Kwerel’s upper bound:

    $P\left( \bigcup_{i=1}^{n} A_i \right) \le S_1 - \frac{2}{n} S_2. \qquad\qquad (6)$

    Here, $n$ is the total number of events in the union. This inequality was proved by Kwerel [13], [12] as well as by Sathe, Pradhan, and Shah [25]. It is the sharpest upper bound for the probability of the union of events based on the knowledge of $S_1$ and $S_2$ alone.

  • Kounias’ upper bound:

    $P\left( \bigcup_{i=1}^{n} A_i \right) \le S_1 - \max_{j} \sum_{i \neq j} P(A_i \cap A_j). \qquad\qquad (7)$
  • Hunter’s upper bound:

    $P\left( \bigcup_{i=1}^{n} A_i \right) \le S_1 - \max_{\tau \in \mathcal{T}} \sum_{(i,j) \in \tau} P(A_i \cap A_j). \qquad\qquad (8)$

    Here, $\tau$ is a spanning tree of the graph whose vertices are $A_1, \dots, A_n$, with $A_i$ and $A_j$ joined by an edge of weight $P(A_i \cap A_j)$, and $\mathcal{T}$ is the set of all such spanning trees. Kruskal’s minimum spanning tree algorithm [11] can be used to find the tree $\tau$ which attains the maximum of $\sum_{(i,j) \in \tau} P(A_i \cap A_j)$. Kounias’ inequality (7) uses the maximum of this sum over only a subset of all spanning trees. Hence, Hunter’s bound is sharper than Kounias’ bound. Also, Hunter’s bound is always at least as good as Kwerel’s upper bound (6).

    In this work, we also compute a suboptimal Hunter’s bound by choosing the particular spanning tree having the edges $(A_i, A_{i+1})$, $i = 1, \dots, n-1$:

    $P\left( \bigcup_{i=1}^{n} A_i \right) \le S_1 - \sum_{i=1}^{n-1} P(A_i \cap A_{i+1}). \qquad\qquad (9)$

    Compared to (8), the bound in (9) is cheaper to compute. It also possesses a time-additive structure similar to Boole’s bound (4); hence, this bound could be embedded in a risk-aware motion planning framework.

Following are the lower bounds for the probability of union of events:

  • Fréchet’s lower bound:

    $P\left( \bigcup_{i=1}^{n} A_i \right) \ge \max_{i} P(A_i).$

  • Bonferroni’s second-order lower bound [23]:

    $P\left( \bigcup_{i=1}^{n} A_i \right) \ge S_1 - S_2. \qquad\qquad (10)$
  • Dawson and Sankoff’s lower bound [7]: If $S_1 > 0$,

    $P\left( \bigcup_{i=1}^{n} A_i \right) \ge \frac{2}{a+1} S_1 - \frac{2}{a(a+1)} S_2,$

    where $a = 1 + \lfloor 2 S_2 / S_1 \rfloor$ and $\lfloor \cdot \rfloor$ denotes the integer part. It is the sharpest lower bound for the probability of the union of events based on the knowledge of $S_1$ and $S_2$ alone. This optimality was proved by Galambos [9]. A numerical sketch of all of the above bounds is given after this list.
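
As an illustration, the following MATLAB sketch evaluates all of the above bounds once the univariate terms p(i) = P(A_i) and the bivariate terms P2(i,j) = P(A_i ∩ A_j) are available (as computed in Section III-C); the variable names are illustrative and not from the paper. Hunter’s bound uses a maximum-weight spanning tree, found here with a small Prim-type loop (Kruskal’s algorithm [11], as mentioned above, yields the same tree).

    % First- and second-order probability bounds of Section II (a sketch).
    % p:  n-by-1 vector with p(i) = P(A_i)
    % P2: n-by-n symmetric matrix with P2(i,j) = P(A_i ∩ A_j), zero diagonal
    n  = numel(p);
    S1 = sum(p);
    S2 = sum(sum(triu(P2, 1)));

    boole   = min(1, S1);                        % Boole / first-order Bonferroni (4)
    kwerel  = min(1, S1 - 2*S2/n);               % Kwerel's upper bound (6)
    kounias = min(1, S1 - max(sum(P2, 2)));      % Kounias' upper bound (7)

    % Hunter's upper bound (8): maximum-weight spanning tree of the complete graph
    % whose edge (i,j) carries weight P2(i,j).
    inTree = false(n, 1);  inTree(1) = true;
    best = P2(:, 1);                             % heaviest known edge from each node into the tree
    treeWeight = 0;
    for step = 2:n
        cand = best;  cand(inTree) = -inf;
        [wmax, v] = max(cand);                   % heaviest edge leaving the current tree
        treeWeight = treeWeight + wmax;
        inTree(v) = true;
        best = max(best, P2(:, v));              % update best connections through the new node
    end
    hunter    = min(1, S1 - treeWeight);
    hunterSub = min(1, S1 - sum(diag(P2, 1)));   % suboptimal Hunter (9): consecutive time steps

    frechet    = max(p);                         % Fréchet's lower bound
    bonferroni = max(0, S1 - S2);                % second-order Bonferroni lower bound (10)
    a          = 1 + floor(2*S2/S1);             % Dawson and Sankoff's lower bound (requires S1 > 0)
    dawson     = 2*S1/(a + 1) - 2*S2/(a*(a + 1));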

III End-to-End Risk Analysis

In this section, we present a method to estimate the end-to-end risks based on the bounds given in Section II.

III-A Trajectory Tracking Controller

During an actual execution of the planned trajectory, the robot will likely deviate from the plan due to motion and sensing uncertainties. In order to compensate for these uncertainties, we assume the robot executes the plan using a linear feedback controller. In this paper, we use the LQG controller [30] and briefly present the derivation of the control policy. For $k = 0, \dots, T$, let

$e_k := x_k - \bar{x}_k$

be the deviation of the robot from the planned trajectory. The deviation is governed by

$e_{k+1} = A_k e_k + B_k \tilde{u}_k + w_k, \qquad \tilde{u}_k := u_k - \bar{u}_k. \qquad\qquad (11)$

We assume the robot is initially at $\bar{x}_0$ and thus $e_0 = 0$. Let $\tilde{u}_k = \pi_k(\tilde{y}_0, \dots, \tilde{y}_k)$ be a feedback policy at time $k$, based on the measurement deviations $\tilde{y}_k := y_k - C_k \bar{x}_k = C_k e_k + v_k$. The optimal control law can be derived by solving the following optimization problem:

$\min_{\pi_0, \dots, \pi_{T-1}} \ \mathbb{E} \left[ e_T^\top Q_T e_T + \sum_{k=0}^{T-1} \left( e_k^\top Q_k e_k + \tilde{u}_k^\top R_k \tilde{u}_k \right) \right], \qquad\qquad (12)$

where $Q_k \succeq 0$ and $R_k \succ 0$ are given weight matrices. The solution of Problem (12) can be obtained using the separation principle. The optimal controller is given as

$\tilde{u}_k = -L_k \hat{e}_k, \qquad\qquad (13)$

where $L_k$ are the LQR gains and $\hat{e}_k$ are the state estimates based on the measurements $\tilde{y}_0, \dots, \tilde{y}_k$. The LQR gains are computed as

$L_k = \left( R_k + B_k^\top P_{k+1} B_k \right)^{-1} B_k^\top P_{k+1} A_k,$

where $P_k$ is obtained using the backward Riccati recursion:

$P_k = Q_k + A_k^\top P_{k+1} A_k - A_k^\top P_{k+1} B_k \left( R_k + B_k^\top P_{k+1} B_k \right)^{-1} B_k^\top P_{k+1} A_k, \qquad P_T = Q_T.$

The state estimates $\hat{e}_k$ are determined by the Kalman filter. Let $\Sigma_k^-$ and $\Sigma_k^+$ be the a priori and a posteriori covariances, respectively, at time $k$. The a priori and a posteriori state estimates are computed as

$\hat{e}_k^- = A_{k-1} \hat{e}_{k-1} + B_{k-1} \tilde{u}_{k-1}, \qquad \hat{e}_k = \hat{e}_k^- + K_k \left( \tilde{y}_k - C_k \hat{e}_k^- \right), \qquad\qquad (14)$

where $K_k$ are the Kalman gains that are evaluated as

$K_k = \Sigma_k^- C_k^\top \left( C_k \Sigma_k^- C_k^\top + V_k \right)^{-1},$

and $\Sigma_k^-$, $\Sigma_k^+$ are computed using the forward Riccati recursion with the initial condition $\Sigma_0^+ = 0$:

$\Sigma_k^- = A_{k-1} \Sigma_{k-1}^+ A_{k-1}^\top + W_{k-1}, \qquad \Sigma_k^+ = \left( I - K_k C_k \right) \Sigma_k^-.$
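
A minimal MATLAB sketch of these two recursions, assuming time-invariant system matrices A, B, C, noise covariances W, V, and cost weights Q, R for brevity (the time-varying case of the paper simply indexes them by k):

    % LQR gains L{k} and Kalman gains K{k} of Section III-A (a time-invariant sketch).
    function [L, K] = lqgGains(A, B, C, W, V, Q, R, T)
        n = size(A, 1);
        % Backward Riccati recursion for the LQR gains.
        P = Q;                                       % terminal condition P_T = Q_T
        L = cell(T, 1);
        for k = T:-1:1
            L{k} = (R + B'*P*B) \ (B'*P*A);
            P    = Q + A'*P*A - A'*P*B*L{k};
        end
        % Forward Riccati recursion for the Kalman gains.
        Sig = zeros(n);                              % the robot starts exactly on the plan
        K   = cell(T, 1);
        for k = 1:T
            SigPrior = A*Sig*A' + W;                 % a priori covariance
            K{k}     = SigPrior*C' / (C*SigPrior*C' + V);
            Sig      = (eye(n) - K{k}*C)*SigPrior;   % a posteriori covariance
        end
    end

By the separation principle, the two recursions are independent of each other, so both gain sequences can be precomputed offline for a given plan.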

III-B Distribution of the Closed-Loop Trajectory

Combining (11), (13) and (14), the state deviation $e_k$ and its a priori estimate $\hat{e}_k^-$ jointly evolve as a linear Gaussian recursion [20]:

$z_{k+1} = F_k z_k + G_k q_k, \qquad z_k := \begin{bmatrix} e_k \\ \hat{e}_k^- \end{bmatrix}, \qquad q_k := \begin{bmatrix} w_k \\ v_{k+1} \end{bmatrix},$

where $F_k$ and $G_k$ are block matrices assembled from $A_k$, $B_k$, $C_{k+1}$, the LQR gains $L_k$, and the Kalman gains $K_{k+1}$. Stacking $z_k$ for all time steps, we can write the equation of the closed-loop trajectory as

$\mathbf{z} := \begin{bmatrix} z_0^\top & z_1^\top & \cdots & z_T^\top \end{bmatrix}^\top = \mathbf{G} \mathbf{q}, \qquad \mathbf{q} := \begin{bmatrix} q_0^\top & q_1^\top & \cdots & q_{T-1}^\top \end{bmatrix}^\top,$

where $\mathbf{G}$ is the corresponding block lower-triangular matrix. Assuming $e_0 = 0$ and $\hat{e}_0^- = 0$, the distribution of the closed-loop trajectory can be written as $\mathbf{z} \sim \mathcal{N}(0, \Sigma)$, where

$\Sigma = \mathbf{G} \, \mathrm{Cov}(\mathbf{q}) \, \mathbf{G}^\top. \qquad\qquad (15)$

Since $x_k = \bar{x}_k + e_k$, the stacked vector of actual positions is Gaussian with mean equal to the planned trajectory and covariance given by the corresponding blocks of $\Sigma$.
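
The following MATLAB sketch propagates this joint distribution and assembles the stacked covariance Σ recursively, assuming the block matrices F{k}, G{k} of the joint recursion and the covariances Qn{k} of the stacked noise at each step have already been formed (their exact entries follow [20] and are not repeated here); all variable names are illustrative.

    % Covariance of the stacked closed-loop trajectory (Section III-B sketch).
    % F{k}, G{k}: block matrices of the joint recursion z_{k-1} -> z_k
    % Qn{k}:      covariance of the stacked noise entering step k
    nz    = size(F{1}, 1);
    Sigma = zeros(nz*(T+1));                   % covariance of [z_0; z_1; ...; z_T]
    idx   = @(k) (k*nz + 1):((k + 1)*nz);      % block index of time step k (k = 0..T)
    P = zeros(nz);                             % Cov(z_0) = 0: the robot starts on the plan
    Sigma(idx(0), idx(0)) = P;
    for k = 1:T
        P = F{k}*P*F{k}' + G{k}*Qn{k}*G{k}';   % marginal covariance of z_k
        Sigma(idx(k), idx(k)) = P;
        for j = 0:k-1                          % cross-covariances Cov(z_j, z_k)
            Cjk = Sigma(idx(j), idx(k-1)) * F{k}';
            Sigma(idx(j), idx(k)) = Cjk;
            Sigma(idx(k), idx(j)) = Cjk';
        end
    end

The position blocks of Sigma give the marginals of $x_k$ and the pairwise joints of $(x_j, x_k)$ needed for the integrals (16) and (17) in the next subsection.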

III-C Computation of the Bounds

We can obtain the exact end-to-end probability of failure by integrating the distribution of the closed-loop trajectory over finite regions. However, as stated earlier, evaluating such an integral in a high-dimensional space is computationally expensive. Instead, we make use of the probability inequalities listed in Section II to obtain bounds for the end-to-end probability of failure. The main task in evaluating these bounds is to compute the univariate probabilities $P(E_k)$, $k = 0, \dots, T$, and the bivariate joint probabilities $P(E_j \cap E_k)$, $j \neq k$. These are computed from the distributions of $x_k$ and $(x_j, x_k)$, respectively:

$x_k \sim \mathcal{N}\left( \bar{x}_k, \Sigma_{kk} \right), \qquad \begin{bmatrix} x_j \\ x_k \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \bar{x}_j \\ \bar{x}_k \end{bmatrix}, \begin{bmatrix} \Sigma_{jj} & \Sigma_{jk} \\ \Sigma_{kj} & \Sigma_{kk} \end{bmatrix} \right),$

where the position blocks $\Sigma_{jk}$ are obtained by marginalizing $\Sigma$. Then,

$P(E_k) = \int_{\mathcal{X}_{\mathrm{obs}}} \mathcal{N}\left( x;\, \bar{x}_k, \Sigma_{kk} \right) dx \qquad\qquad (16)$

and

$P(E_j \cap E_k) = \int_{\mathcal{X}_{\mathrm{obs}}} \int_{\mathcal{X}_{\mathrm{obs}}} \mathcal{N}\!\left( \begin{bmatrix} x \\ x' \end{bmatrix};\, \begin{bmatrix} \bar{x}_j \\ \bar{x}_k \end{bmatrix}, \begin{bmatrix} \Sigma_{jj} & \Sigma_{jk} \\ \Sigma_{kj} & \Sigma_{kk} \end{bmatrix} \right) dx \, dx'. \qquad\qquad (17)$

If the obstacles in the configuration space are polyhedral, the method used in this work for the computation of the integrals (16) and (17) is summarized in the appendix.

IV Simulation Results

In this section, we demonstrate, in simulation, the validity of our bounds for the end-to-end risks using a ground robot navigation example. The configuration space is a bounded subset of $\mathbb{R}^2$, and the planned trajectory is assumed to satisfy the nominal dynamics $\bar{x}_{k+1} = \bar{x}_k + \bar{u}_k$. The executed trajectory satisfies the linearized robot dynamics

$x_{k+1} = x_k + u_k + w_k, \qquad w_k \sim \mathcal{N}(0, W_k), \qquad\qquad (18)$

for $k = 0, \dots, T-1$, where the covariance $W_k$ is proportional to the length of the planned step times the identity matrix. (18) is a natural model for ground robots whose location uncertainty grows linearly with the distance traveled. The sensor model is given as per (1). The control input $u_k$ is computed using the LQG feedback control policy to minimize the deviation of the robot from the planned trajectory, as explained in Section III-A.

First, we plan the trajectories using RRT* with an instantaneous safety criterion [21] (i.e., at every time step, the confidence ellipse with a fixed safety level is collision-free) and compute our bounds for these plans. For a given configuration space, four trajectories planned with four increasing instantaneous safety levels are shown in Fig. 1, and our bounds for the end-to-end probabilities of failure versus the instantaneous safety level are plotted in Fig. 2.

Fig. 1: Trajectories planned with the instantaneous safety criterion [21]. The red polygons represent the obstacle region $\mathcal{X}_{\mathrm{obs}}$. Panels (a)-(d) show the trajectories with their confidence ellipses, at four increasing safety levels, at all the time steps.

We validate our bounds by comparing them with the failure probabilities computed using Monte Carlo simulations (shown in black). Each trajectory execution of a Monte Carlo simulation is sampled from the closed-loop trajectory distribution given in (15). Bonferroni’s second-order lower bounds (shown with a red dashed line) are trivial for all the paths in this example. As evident from the graph, Hunter’s upper bound (or its suboptimal version) together with Dawson and Sankoff’s lower bound provides a close approximation to the Monte Carlo estimates of the end-to-end probability of failure. The graph shows that the bounds presented in this work are significantly less conservative than Boole’s bound.

Fig. 2: End-to-end probabilities of failure estimated by Monte Carlo simulations (plotted in black) and their analytical bounds for the trajectories generated with different instantaneous safety levels.

Next, we demonstrate a larger statistical evaluation over a set of trajectories planned using RRT* in randomly generated environments (random initial, goal, and obstacle positions). These trajectories are nominally safe, i.e., only the planned positions are ensured to be collision-free. Table I compares the mean absolute errors of the different bounds with respect to Monte Carlo simulations.

Estimates            Mean Absolute Error [%]   Avg. Time [s]
Monte Carlo          0                         46.83
Upper bounds
  Boole              40.59                     0.01
  Kwerel             38.15                     2.43
  Kounias            13.34                     2.42
  Hunter             8.63                      2.41
  Hunter suboptimal  10.25                     0.18
Lower bounds
  Bonferroni         54.88                     2.44
  Fréchet            40.08                     0.01
  Dawson             16.74                     2.44
TABLE I: Comparison of the different bounds over the evaluated trajectories in terms of mean absolute error with respect to Monte Carlo simulations and computation time. Computation is performed in MATLAB on a consumer laptop.

If the purpose of risk estimation is verification and performance analysis, then it can be performed offline. However, when risk estimation is part of an online motion planning algorithm, its computation time plays an important role. The computation times for our MATLAB implementation of these bounds and of the Monte Carlo method are also reported in Table I. From the data presented, we can draw the following conclusions. First, the bounds presented in this work require significantly less computation time than the Monte Carlo method. Second, our upper bounds provide considerably tighter estimates than Boole’s bound at the expense of some additional computational overhead. Dawson and Sankoff’s and Hunter’s estimates provide, respectively, the best lower and upper bounds of the risk among all. Finally, Hunter’s suboptimal bound, even though slightly more conservative, is computationally cheaper than Hunter’s bound. As it also possesses the time-additive structure, this bound could be embedded in a risk-aware motion planning framework.

V Discussion

V-A Risk Bounds for Continuous-Time Systems

Although our results so far are restricted to discrete-time systems, in practice we are often interested in the safety of continuous-time systems. Hence, it is of natural interest to study the impact of increased sampling rates on the aforementioned bounds and how they can be used to imply the safety of continuous-time systems. Consider the configuration space and the trajectories from the first example of Section IV. Assume the trajectories are planned for a continuous-time system whose time discretization yields (18). For this system, the probability bounds of Boole, suboptimal Hunter, and Dawson at different rates of time discretization are plotted against the instantaneous safety levels in Figs. 3, 4, and 5, respectively. The probabilities obtained using Monte Carlo simulations for a high rate of time discretization are plotted in black in all three figures. The lower bounds for the discrete-time risks at all rates of time discretization remain valid for the continuous-time risks, since the discrete-time collision events form a subset of their continuous-time counterparts. Moreover, Fig. 5 shows that Dawson and Sankoff’s lower bound becomes sharper as the sampling rate increases. Similarly, it can be shown that Fréchet’s bound also increases in sharpness at higher sampling rates. On the contrary, Figs. 3 and 4 show that Boole’s and Hunter’s suboptimal upper bounds lose sharpness as the sampling rate increases. However, there is a remarkable difference in the rates at which they lose sharpness. Boole’s bound quickly diverges to the trivial upper bound of 1 as the sampling rate is increased, unlike the suboptimal Hunter’s bound. It can be shown that Hunter’s and Kounias’ bounds also lose sharpness at higher sampling rates, but they still perform better than Boole’s bound. Hence, these inequalities can be useful even for systems operated in continuous time.

More investigation and comparison of our bounds at high sampling rates against the continuous-time risk estimates computed in the existing literature [29], [17] are left for future work. Another direction for using these bounds in continuous-time settings could be similar to [1], in which the authors use the reflection principle and apply Boole’s inequality over intervals instead of discrete time steps to compute risk bounds in continuous time. A reflection principle similar to [1] could be used with the new bounds presented in this work to obtain less conservative risk estimates for continuous-time models.

Fig. 3: Boole’s upper bounds of the end-to-end probabilities of failure for different sampling rates versus instantaneous safety levels. The black graph shows the end-to-end probabilities of failure estimated by Monte Carlo simulations at the finest time discretization.
Fig. 4: Hunter’s suboptimal upper bounds of the end-to-end probabilities of failure for different sampling rates versus instantaneous safety levels. The black graph shows the Monte Carlo estimates at the finest time discretization.
Fig. 5: Dawson and Sankoff’s lower bounds of the end-to-end probabilities of failure for different sampling rates versus instantaneous safety levels. The black graph shows the Monte Carlo estimates at the finest time discretization. A zoomed view of a small portion of the graph is shown.

V-B Higher-Order Probability Bounds

In this work, we have implemented first- and second-order probability bounds. The question naturally arises whether we can consider bounds of order higher than two. We have seen Bonferroni’s first- and second-order bounds in (4) and (10), respectively. The classical inclusion-exclusion principle states that

$P\left( \bigcup_{i=1}^{n} A_i \right) = \sum_{k=1}^{n} (-1)^{k+1} S_k, \qquad\qquad (19)$

where $S_1$ and $S_2$ are given by (5). In general, $S_k$, $k = 1, \dots, n$, is defined as

$S_k := \sum_{1 \le i_1 < \cdots < i_k \le n} P\left( A_{i_1} \cap \cdots \cap A_{i_k} \right).$

The sum of the first $m$ terms on the right side of (19) provides an upper bound to $P\left( \bigcup_{i=1}^{n} A_i \right)$ when $m$ is odd and a lower bound when $m$ is even, producing Bonferroni’s $m$-th order bound. However, it is generally not true that Bonferroni’s bounds increase in sharpness with the order [28]. Hence, Bonferroni’s higher-order inequalities might not give sharper bounds than the ones considered in this work. A third-order upper bound computed using the cherry-trees approach presented in [5] can be sharper than Hunter’s upper bound. Sharper higher-order upper and lower bounds can be computed using linear programming algorithms [23], [24]. Of course, higher-order bounds are associated with higher computational complexities.

VI Conclusion

In this work, we presented several upper and lower bounds for the probability of collision while tracking a given motion plan under stochastic uncertainties. Our approach makes no independence assumptions on the events of collision at different time steps and computes less conservative bounds for the failure probability than Boole’s bound, which is commonly used in the literature. The approach is quite general and can be applied to any discrete-time trajectory tracking scenario regardless of the choice of linear feedback control law and trajectory generation algorithm. We also studied the performance of the derived discrete-time risk bounds in the continuous-time setting and showed that our bounds perform better than Boole’s bound even when the underlying discretization is refined. Future work includes incorporating these bounds into planning algorithms to generate risk-optimal trajectories.

Appendix

Computation of $P(E_k)$ and $P(E_j \cap E_k)$ when the obstacles are polyhedral

Assume that the obstacle region is decomposed into the disjoint union of $M$ polyhedrons, $\mathcal{X}_{\mathrm{obs}} = \bigcup_{i=1}^{M} \mathcal{O}_i$. Each polyhedron $\mathcal{O}_i$ can be represented as a conjunction of $m_i$ linear constraints as follows:

$\mathcal{O}_i = \left\{ x : a_{ij}^\top x \ge b_{ij}, \ j = 1, \dots, m_i \right\}. \qquad\qquad (20)$

The vector $a_{ij}$ is the unit normal of the $j$-th constraint of the polyhedron $\mathcal{O}_i$, pointing inside the polyhedron. Define

$N_i := \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{i m_i} \end{bmatrix}, \qquad c_i := \begin{bmatrix} b_{i1} & b_{i2} & \cdots & b_{i m_i} \end{bmatrix}^\top. \qquad\qquad (21)$

Let $d_{ij}(x_k) := a_{ij}^\top x_k - b_{ij}$ be a univariate random variable corresponding to the perpendicular distance between the $j$-th constraint of the polyhedron $\mathcal{O}_i$ and $x_k$, as shown in Fig. 6. Since $x_k$ is Gaussian, it can be shown that [3]

$d_{ij}(x_k) \sim \mathcal{N}\left( \mu_{ij}, \sigma_{ij}^2 \right),$

where

$\mu_{ij} = a_{ij}^\top \bar{x}_k - b_{ij} \qquad \text{and} \qquad \sigma_{ij}^2 = a_{ij}^\top \Sigma_{kk} a_{ij}.$

Fig. 6: The polyhedral obstacle $\mathcal{O}_i$, composed of $m_i$ linear constraints. The black dot represents $x_k$. The perpendicular distances between $x_k$ and the linear constraints are also shown. For convenience, the subscript $i$ is removed from $\mathcal{O}_i$, $a_{ij}$, and $b_{ij}$ in the figure.

A Computation of $P(E_k)$

We can write $P(E_k)$ as

$P(E_k) = P\left( x_k \in \mathcal{X}_{\mathrm{obs}} \right) = \sum_{i=1}^{M} P\left( x_k \in \mathcal{O}_i \right), \qquad\qquad (22)$

where, since the obstacles are disjoint,

$P\left( x_k \in \mathcal{O}_i \right) = \int_{\mathcal{O}_i} \mathcal{N}\left( x;\, \bar{x}_k, \Sigma_{kk} \right) dx. \qquad\qquad (23)$

Using (20), (23) can be written as

$P\left( x_k \in \mathcal{O}_i \right) = P\left( \bigcap_{j=1}^{m_i} \left\{ a_{ij}^\top x_k \ge b_{ij} \right\} \right). \qquad\qquad (24)$

The event $\left\{ a_{ij}^\top x_k \ge b_{ij} \right\}$ is equivalent to $\left\{ d_{ij}(x_k) \ge 0 \right\}$. Hence, (24) can be written as

$P\left( x_k \in \mathcal{O}_i \right) = P\left( d_{i1}(x_k) \ge 0, \ \dots, \ d_{i m_i}(x_k) \ge 0 \right).$

Defining the vector of distances

$d_i(x_k) := \begin{bmatrix} d_{i1}(x_k) & \cdots & d_{i m_i}(x_k) \end{bmatrix}^\top = N_i^\top x_k - c_i,$

it can be shown that

$d_i(x_k) \sim \mathcal{N}\left( \mu_d, \Sigma_d \right),$

where

$\mu_d = N_i^\top \bar{x}_k - c_i \qquad \text{and} \qquad \Sigma_d = N_i^\top \Sigma_{kk} N_i,$

where $N_i$ is defined in (21). Then, $P\left( x_k \in \mathcal{O}_i \right)$ can be computed as

$P\left( x_k \in \mathcal{O}_i \right) = \int_{[0, \infty)^{m_i}} \mathcal{N}\left( y;\, \mu_d, \Sigma_d \right) dy. \qquad\qquad (25)$

Computing the integral (25) is much easier than evaluating the integral (16). We use MATLAB’s mvncdf function to compute the integral (25) over the hypercube.
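
A minimal MATLAB sketch of this computation for one obstacle, using the notation of (20) and (21). The small diagonal jitter added to the covariance is a pragmatic safeguard (not from the paper) for the case where the number of constraints exceeds the dimension of $x_k$, which makes $N_i^\top \Sigma_{kk} N_i$ singular and is rejected by mvncdf.

    % P(x_k in O_i) for one polyhedral obstacle via (25) (a sketch).
    % N:  d-by-m matrix of unit inward normals (columns a_j);  c: m-by-1 offsets b_j
    % mu, Sigma: mean and covariance of the position x_k
    function p = polyCollisionProb(N, c, mu, Sigma)
        m    = numel(c);
        muD  = N'*mu - c;                        % mean of the signed distances d_j
        SigD = N'*Sigma*N + 1e-9*eye(m);         % their covariance (tiny jitter, see text)
        % the robot is inside the obstacle iff every signed distance is nonnegative
        p = mvncdf(zeros(1, m), inf(1, m), muD', SigD);
    end

$P(E_k)$ is then the sum of this quantity over the disjoint obstacles, and the bivariate terms of the next subsection follow analogously by stacking the distance vectors of the two time steps.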

B Computation of $P(E_j \cap E_k)$

Similarly to (22), we can write $P(E_j \cap E_k)$ as

$P(E_j \cap E_k) = \sum_{i=1}^{M} \sum_{l=1}^{M} P\left( x_j \in \mathcal{O}_i, \ x_k \in \mathcal{O}_l \right),$

where

$P\left( x_j \in \mathcal{O}_i, \ x_k \in \mathcal{O}_l \right) = \int_{\mathcal{O}_i} \int_{\mathcal{O}_l} \mathcal{N}\!\left( \begin{bmatrix} x \\ x' \end{bmatrix};\, \begin{bmatrix} \bar{x}_j \\ \bar{x}_k \end{bmatrix}, \begin{bmatrix} \Sigma_{jj} & \Sigma_{jk} \\ \Sigma_{kj} & \Sigma_{kk} \end{bmatrix} \right) dx \, dx'. \qquad\qquad (26)$

Using (20), (26) can be written as

$P\left( x_j \in \mathcal{O}_i, \ x_k \in \mathcal{O}_l \right) = P\left( \bigcap_{p=1}^{m_i} \left\{ a_{ip}^\top x_j \ge b_{ip} \right\} \cap \bigcap_{q=1}^{m_l} \left\{ a_{lq}^\top x_k \ge b_{lq} \right\} \right). \qquad\qquad (27)$

The events $\left\{ a_{ip}^\top x_j \ge b_{ip} \right\}$ and $\left\{ a_{lq}^\top x_k \ge b_{lq} \right\}$ are equivalent to $\left\{ d_{ip}(x_j) \ge 0 \right\}$ and $\left\{ d_{lq}(x_k) \ge 0 \right\}$, respectively. Hence, (27) can be written as

$P\left( x_j \in \mathcal{O}_i, \ x_k \in \mathcal{O}_l \right) = P\left( d_i(x_j) \ge 0, \ d_l(x_k) \ge 0 \right).$

Defining the stacked vector of distances

$d_{il}(x_j, x_k) := \begin{bmatrix} d_i(x_j) \\ d_l(x_k) \end{bmatrix} = \begin{bmatrix} N_i^\top x_j - c_i \\ N_l^\top x_k - c_l \end{bmatrix},$

it can be shown that

$d_{il}(x_j, x_k) \sim \mathcal{N}\left( \mu_d, \Sigma_d \right),$

where

$\mu_d = \begin{bmatrix} N_i^\top \bar{x}_j - c_i \\ N_l^\top \bar{x}_k - c_l \end{bmatrix} \qquad \text{and} \qquad \Sigma_d = \begin{bmatrix} N_i^\top \Sigma_{jj} N_i & N_i^\top \Sigma_{jk} N_l \\ N_l^\top \Sigma_{kj} N_i & N_l^\top \Sigma_{kk} N_l \end{bmatrix}, \qquad\qquad (28)$

where $N_i$ and $N_l$ are defined as in (21). Then, $P\left( x_j \in \mathcal{O}_i, \ x_k \in \mathcal{O}_l \right)$ can be computed as

$P\left( x_j \in \mathcal{O}_i, \ x_k \in \mathcal{O}_l \right) = \int_{[0, \infty)^{m_i + m_l}} \mathcal{N}\left( y;\, \mu_d, \Sigma_d \right) dy. \qquad\qquad (29)$

Again, MATLAB’s mvncdf function can be used to compute the integral (29) over the hypercube.

Acknowledgment

We would like to thank Ali Reza Pedram for the helpful suggestions on the probability bounds.

References

  • [1] K. Ariu, C. Fang, M. Arantes, C. Toledo, and B. Williams (2017) Chance-constrained path planning with continuous time safety guarantees. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.
  • [2] L. Blackmore, M. Ono, A. Bektassov, and B. C. Williams (2010) A probabilistic particle-control approximation of chance-constrained stochastic predictive control. IEEE Transactions on Robotics 26 (3), pp. 502–517.
  • [3] L. Blackmore, M. Ono, and B. C. Williams (2011) Chance-constrained optimal path planning with obstacles. IEEE Transactions on Robotics 27 (6), pp. 1080–1094.
  • [4] L. Blackmore and M. Ono (2009) Convex chance constrained predictive control without sampling. In AIAA Guidance, Navigation, and Control Conference, pp. 5876.
  • [5] J. Bukszár and A. Prékopa (2001) Probability bounds with cherry trees. Mathematics of Operations Research 26 (1), pp. 174–192.
  • [6] G. C. Calafiore and M. C. Campi (2006) The scenario approach to robust control design. IEEE Transactions on Automatic Control 51 (5), pp. 742–753.
  • [7] D. Dawson and D. Sankoff (1967) An inequality for probabilities. Proceedings of the American Mathematical Society 18 (3), pp. 504–507.
  • [8] K. M. Frey, T. J. Steiner, and J. How (2020) Collision probabilities for continuous-time systems without sampling. In Proceedings of Robotics: Science and Systems, Corvallis, Oregon, USA.
  • [9] J. Galambos (1977) Bonferroni inequalities. The Annals of Probability, pp. 577–581.
  • [10] L. Janson, E. Schmerling, and M. Pavone (2018) Monte Carlo motion planning for robot trajectory optimization under uncertainty. In Robotics Research, pp. 343–361.
  • [11] J. B. Kruskal (1956) On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society 7 (1), pp. 48–50.
  • [12] S. M. Kwerel (1975) Bounds on the probability of the union and intersection of m events. Advances in Applied Probability 7 (2), pp. 431–448.
  • [13] S. M. Kwerel (1975) Most stringent bounds on aggregated probabilities of partially specified dependent probability systems. Journal of the American Statistical Association 70 (350), pp. 472–479.
  • [14] S. M. LaValle (2006) Planning algorithms. Cambridge University Press.
  • [15] B. T. Lopez, J. E. Slotine, and J. P. How (2019) Dynamic tube MPC for nonlinear systems. In 2019 American Control Conference (ACC), pp. 1655–1662.
  • [16] A. Majumdar and R. Tedrake (2013) Robust online motion planning with regions of finite time invariance. In Algorithmic Foundations of Robotics X, pp. 543–558.
  • [17] K. Oguri, M. Ono, and J. W. McMahon (2019) Convex optimization over sequential linear feedback policies with continuous-time chance constraints. In 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 6325–6331.
  • [18] M. Ono, M. Pavone, Y. Kuwata, and J. Balaram (2015) Chance-constrained dynamic programming with application to risk-aware robotic space exploration. Autonomous Robots 39 (4), pp. 555–571.
  • [19] M. Ono and B. C. Williams (2008) An efficient motion planning algorithm for stochastic dynamic systems with constraints on probability of failure. In AAAI, pp. 1376–1382.
  • [20] S. Patil, J. Van Den Berg, and R. Alterovitz (2012) Estimating probability of collision for safe motion planning under Gaussian motion and sensing uncertainty. In 2012 IEEE International Conference on Robotics and Automation, pp. 3238–3244.
  • [21] A. R. Pedram, J. Stefarr, R. Funada, and T. Tanaka (2021) Rationally inattentive path-planning via RRT. In 2021 American Control Conference (ACC), pp. 3440–3446.
  • [22] R. Pepy and A. Lambert (2006) Safe path planning in an uncertain-configuration space using RRT. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5376–5381.
  • [23] A. Prékopa (1988) Boole-Bonferroni inequalities and linear programming. Operations Research 36 (1), pp. 145–162.
  • [24] A. Prékopa (2003) Probabilistic programming. Handbooks in Operations Research and Management Science 10, pp. 267–351.
  • [25] Y. Sathe, M. Pradhan, and S. Shah (1980) Inequalities for the probability of the occurrence of at least m out of n events. Journal of Applied Probability 17 (4), pp. 1127–1132.
  • [26] T. Schouwenaars, B. De Moor, E. Feron, and J. How (2001) Mixed integer programming for multi-vehicle path planning. In 2001 European Control Conference (ECC), pp. 2603–2608.
  • [27] T. Schouwenaars, A. Richards, E. Feron, and J. How (2001) Plume avoidance maneuver planning using mixed integer linear programming. In AIAA Guidance, Navigation, and Control Conference and Exhibit, pp. 4091.
  • [28] S. J. Schwager (1984) Bonferroni sometimes loses. The American Statistician 38 (3), pp. 192–197.
  • [29] S. K. Shah, C. D. Pahlajani, and H. G. Tanner (2011) Probability of success in stochastic robot navigation with state feedback. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3911–3916.
  • [30] R. F. Stengel (1994) Optimal control and estimation. Courier Corporation.
  • [31] D. Strawser and B. Williams (2018) Approximate branch and bound for fast, risk-bound stochastic path planning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7047–7054.