Non Holonomic Collision Avoidance of Dynamic Obstacles under Non-Parametric Uncertainty: A Hilbert Space Approach

by   Unni Krishnan R Nair, et al.

We consider the problem of an agent/robot with non-holonomic kinematics avoiding many dynamic obstacles. The state and velocity noise of both the robot and the obstacles, as well as the robot's control noise, are modelled as non-parametric distributions, since the Gaussian assumptions of standard noise models are often violated in real-world scenarios. Under these assumptions, we formulate a robust MPC that samples robot controls effectively in a manner that drives the robot towards the goal state while avoiding obstacles under the duress of such non-parametric noise. In particular, the MPC incorporates a distribution-matching cost that aligns the distribution of the current collision cone with a certain desired distribution whose samples are collision-free. This cost is posed as a distance function in a Hilbert space, whose minimization typically results in the collision cone samples becoming collision-free. We show tangible performance gains over methods that model the collision cone distribution by linearizing Gaussian approximations of the original non-parametric state and obstacle distributions. We also show superior performance over methods that pose a chance-constrained formulation of the Gaussian approximations of non-parametric noise without subjecting those approximations to further linearization. The performance gain is shown both in terms of trajectory length and control costs, which vindicates the efficacy of the proposed method. To the best of our knowledge, this is the first presentation of non-holonomic collision avoidance of moving obstacles in the presence of non-parametric state, velocity and actuator noise models.




I Introduction

Collision avoidance of dynamic obstacles by kinematically constrained planar robots is a well-studied problem in the literature [1, 19]. However, collision avoidance in the presence of uncertainty for such systems has been sparsely reported, all the more so when state and control noise (which includes velocity and actuator noise) are characterized by non-parametric models, as is typically encountered in real-world, practical scenarios. Recently, [13, 8] detailed collision avoidance as well as trajectory tracking under the duress of non-parametric noise, but in settings where the kinematic evolution can be expressed in the form of linear models. In this paper, we propose a formulation for kinematically constrained systems whose state evolves according to the popular unicycle kinematics [4].

This paper proposes a novel MPC formulation that outputs actuator controls in the form of changes in linear and angular velocities when subject to non-parametric state, velocity, control and lane-boundary noise of both the ego agent and other dynamic/static participants. The MPC formulation takes the form of matching the moments of a certain desired collision cone distribution (whose samples are all collision-free) and the current distribution of the ego agent's collision cone vis-a-vis other dynamic participants. We also define a desired distribution for velocity samples that avoid the lane boundaries; the MPC formulation likewise tries to match the moments of this distribution to the current distribution induced by the ego agent's state. By collision cone, we indicate the analytical counterpart of the popular velocity obstacle [5] that has been successfully employed in many works [1, 18, 6]. The collision cone (2) distribution is non-parametric as well, since it is composed of agent and obstacle states and velocities whose distributions are non-parametric. The matching of higher-order moments of two non-parametric distributions can be accomplished by embedding these distributions into a Reproducing Kernel Hilbert Space (RKHS) [14, 15] through the kernel trick, as shown in Figure 1. The proposed MPC formulation selects, from a set of feasible controls, those actuator controls that minimize this distance in the RKHS.

Fig. 1: Probability density functions in the physical space can be represented as functions (points) in the RKHS, and therefore the similarity between two densities can be expressed as the distance between the corresponding points.

In particular, the paper makes the following contributions:

  1. It proposes a novel method utilizing RKHS-based collision avoidance [13] for kinematically constrained robots/agents whose state evolution can be modelled by unicycle kinematics under the duress of non-parametric noise. Typically, the dynamic obstacle avoidance problem is solved for a holonomic robot and a non-holonomic controller is then used to best track the resulting trajectory. We circumvent this by sampling achievable velocities directly in the space of the non-holonomic robot, and then choosing an optimum sample.

  2. We also provide a framework for collision avoidance in a realistic setting by bringing in lane/corridor constraints with non-parametric noise, which bound the robot's configuration space, as typical workspaces are always bounded. We formulate lane keeping as a distribution matching problem as well, analogous to the obstacle avoidance problem stated above, yielding a unified approach to dealing with non-parametric uncertainty.

  3. Moreover, the RKHS formulation provides an easily tunable parameter, d, through which a trade-off between cost and robustness is possible. We showcase this through ablations on this parameter in the results section.

Favourable comparisons in terms of trajectory length and control costs, as well as qualitative evaluations vis-a-vis many dynamic participants, vindicate the efficacy of the proposed framework in Sections V-C and V-D respectively. It is to be noted that the performance gain over other distribution-matching frameworks such as KLD was established in our previous work [13]. The proposed method also compares favourably against two variants that approximate the non-parametric noise with Gaussian models: one that linearizes the Gaussian approximations of the states and velocities of all participants to obtain a linearized collision cone distribution, and one that does not subject the Gaussian models to such linearizations. Both variants are posed as chance-constrained optimization along the lines reported in [7, 6]. We further showcase real-time performance in the control of a non-holonomic robot in a realistic Gazebo setting under the duress of non-parametric noise.

II Related Work

While the literature abounds in collision avoidance formulations for dynamic obstacles from both single-agent and multi-agent standpoints [5, 18], it is sparse when it comes to collision avoidance under uncertainty. The trade-off comes in the form of avoiding just the mean evolution of the dynamic obstacles, with heightened collision chances, versus avoiding the entire uncertainty distribution, which gives rise to highly conservative maneuvers. Earlier methods tended to err on the side of caution by growing the obstacles by the size of the uncertainty, often referred to as bounding-volume methods [9, 17]. However, the limitations of such approaches were exemplified in [6], where it was shown that chance-constrained formulations offer tunable trade-offs.

Posing the problem as a robust MPC with probabilistic chance constraints is by nature intractable. The primary difficulty lies in computing an analytical form for the chance constraints. Notable exceptions exist when the random variables under consideration have Gaussian distributions and the chance constraints are defined over affine inequalities [3, 2]. Formulations along the lines of [12, 11, 6] have developed reliable surrogates that give tight approximations to chance constraints defined over non-linear (possibly non-convex) inequalities. However, all these algorithms require a fundamental assumption on the nature of the uncertainty of the random variables involved (the state and actuation of the robot). In general, closed-form surrogates can only be derived if the random variables follow a Gaussian distribution. Extensions to the non-Gaussian and non-parametric case are very complex and an active field of research.

The challenge of robust MPC in the presence of non-parametric uncertainty for collision avoidance was first approached in [13], wherein an RKHS embedding of a moment-matching cost was folded into the original MPC cost function. These approaches were inspired by the formulations of [14, 15] and showcased their efficacy on systems whose state evolution was linear with respect to the current state.

III Preliminaries

In this paper, we represent vectors in bold letters, e.g., x; matrices in capital letters; and sets in calligraphic letters. ||x|| denotes the Euclidean norm of x. For a random variable x, E[x] denotes the mean, P(·) denotes the probability of an event, and p(·) denotes the probability density function. The subscript t indicates the value at time t, while the superscript indicates whether a quantity belongs to the robot or an obstacle. Commonly used symbols and notations are summarized in Table I; other notation is defined at its first use.

Symbol          Description
p_t^r, v_t^r    (Position, Velocity) of the robot at time t
θ_t, ω_t        (Heading, Angular velocity) of the robot at time t
p_t^o, v_t^o    (Position, Velocity) of the obstacle at time t
u_t             Control input to the robot at time t
f               Deterministic collision avoidance constraint
P_f             Distribution of f under uncertainty
η               Probability of satisfaction of the chance constraint

III-A Robot State Estimation

The state estimate of the robot at time t is given by (x_t, y_t, θ_t). We model the state and the executed control as random variables following non-parametric probability distributions. We define the state evolution over time in (1):

    x_{t+1} = x_t + (v_t + δv_t) cos(θ_t) Δt
    y_{t+1} = y_t + (v_t + δv_t) sin(θ_t) Δt        (1)
    θ_{t+1} = θ_t + (ω_t + δω_t) Δt

where (δv_t, δω_t) is a random variable described by an unknown non-parametric probability distribution and denotes the uncertainty in executing the given control. v_t is the magnitude of the velocity of the robot at time t, while ω_t is the angular velocity and θ_t is the heading of the robot at time t. Though the distribution of (δv_t, δω_t) is unknown, we assume that we have access to its samples at every instant t. We apply a particle-filter-based approach on (1) to propagate the noise through time.
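The sample-based propagation of (1) can be sketched in a few lines. The bimodal noise model below is purely illustrative: the paper only assumes access to samples of the noise, not to any parametric form.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_samples(states, v_cmd, w_cmd, dt, noise_sampler):
    """Propagate N unicycle state samples [x, y, theta] one time step.

    noise_sampler(n) returns an (n, 2) array of (dv, dw) actuator-noise
    samples drawn from the (unknown, non-parametric) noise distribution.
    """
    dv, dw = noise_sampler(states.shape[0]).T    # per-sample actuator noise
    v, w = v_cmd + dv, w_cmd + dw                # executed velocities
    x, y, th = states.T
    return np.column_stack([x + v * np.cos(th) * dt,
                            y + v * np.sin(th) * dt,
                            th + w * dt])

def bimodal_noise(n):
    # illustrative non-Gaussian noise: a two-mode mixture on dv
    mix = rng.random(n) < 0.5
    dv = np.where(mix, rng.normal(0.05, 0.01, n), rng.normal(-0.05, 0.02, n))
    dw = rng.normal(0.0, 0.02, n)
    return np.column_stack([dv, dw])

# 25 particles, all starting at the origin, under command (v, w) = (1.0, 0.1)
particles = propagate_samples(np.zeros((25, 3)), 1.0, 0.1, 0.1, bimodal_noise)
```

Repeating this step per time instant plays the role of the particle-filter prediction, carrying the non-parametric uncertainty forward as a cloud of samples.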

III-B Obstacle State Estimation

We assume that we have access to the state estimates of the obstacles via a sensor such as a camera or a LiDAR. Sensors are prone to noise, and hence the obstacle state estimate is also a random variable described by an unknown non-parametric probability distribution. We estimate it using another particle filter that tracks the obstacles.

III-C Collision Avoidance Chance Constraints

III-C1 Collision avoidance

Consider the ego robot and an obstacle modeled as circular disks with radii R_r and R_o respectively. Based on the velocity obstacle method [6], we express the collision avoidance constraint at a given time t as in (2):

    f_t = (r_t · v_t^rel)² − ||v_t^rel||² (||r_t||² − R²) ≤ 0,    (2)

where r_t = p_t^o − p_t^r is the relative position, v_t^rel = v_t^r − v_t^o is the relative velocity, and R = R_r + R_o. Satisfying f_t ≤ 0 keeps the relative velocity outside the collision cone.


III-C2 Chance Constraints

According to the state estimation described in Sections III-A and III-B, the positions and velocities of the robot and obstacles are random variables described by unknown probability distributions. Hence, the collision avoidance constraint described in (2) should be satisfied in a probabilistic manner, as given in (3):

    P(f_t ≤ 0) ≥ η.    (3)
That is, a control u is required that moves the mass of the distribution of f_t to the left of zero. Figure 2 shows how the shape of the distribution can be manipulated by u to obtain a desired distributional shift.
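On samples, the constraint and its chance-constrained version reduce to vectorized checks. A minimal sketch, with the collision cone written in the standard discriminant form of the velocity obstacle (f ≤ 0 meaning collision-free); all names are illustrative:

```python
import numpy as np

def collision_cone(p_r, v_r, p_o, v_o, R):
    """Collision cone value per sample pair; f <= 0 means the relative
    velocity ray misses the combined disc of radius R (collision-free).
    All inputs are (n, 2) arrays of positions/velocities."""
    r = p_o - p_r                                # relative position
    v = v_r - v_o                                # relative velocity
    rv = np.einsum('ij,ij->i', r, v)
    return rv**2 - np.einsum('ij,ij->i', v, v) * (
        np.einsum('ij,ij->i', r, r) - R**2)

def chance_satisfied(f_samples, eta):
    """Empirical surrogate for the chance constraint P(f <= 0) >= eta."""
    return float(np.mean(f_samples <= 0.0)) >= eta

# Driving straight at an obstacle 5 m ahead: inside the cone (f > 0).
f_headon = collision_cone(np.zeros((1, 2)), np.array([[1.0, 0.0]]),
                          np.array([[5.0, 0.0]]), np.zeros((1, 2)), R=1.0)
# Same speed, obstacle offset 3 m laterally: outside the cone (f < 0).
f_offset = collision_cone(np.zeros((1, 2)), np.array([[1.0, 0.0]]),
                          np.array([[5.0, 3.0]]), np.zeros((1, 2)), R=1.0)
```

With sampled uncertainty, `chance_satisfied` simply counts the fraction of collision-free sample pairs and compares it to the confidence level η.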

III-D Corridor/Lane Constraints

III-D1 Keeping within the corridor

The corridor is defined by two linear boundaries within which the robot must remain at all times; the corridor is given by two lines of the form

    a x + b y + c_1 = 0,    a x + b y + c_2 = 0.

The collision avoidance condition at time t with respect to the corridor boundaries requires the robot to remain on the interior side of both lines, which, analogously to (2), we write as a constraint f_t^lane ≤ 0.    (6)
III-D2 Chance Constraints for the corridor

The collision avoidance constraint described in (6) should be satisfied in a probabilistic manner, as given in (7):

    P(f_t^lane ≤ 0) ≥ η.    (7)
III-E Problem Formulation

We formulate the collision avoidance problem as an optimization problem: for every obstacle, we satisfy the chance constraints while optimizing a trajectory-following cost.

    min_{u_t ∈ U}  Σ_t ||v_t^r − v_t^d||²    (8a)
    s.t.  P(f_t ≤ 0) ≥ η,                     (8b)

where v_t^d is some desired velocity that the robot needs to attain at time t and U is the set of feasible control commands.

IV Methodology

We now present the method to sample the set of feasible controls, U, for a non-holonomic robot, and a method to solve the chance constraints presented in (3). In general, a chance constraint of the form (8b) is intractable and is replaced by surrogate constraints. For non-parametric state and control noise, we model the surrogate constraint as a distribution matching problem [13], [8]. With this, we reformulate (8) as

    min_{u_t ∈ U}  Σ_t ||v_t^r − v_t^d||² + w · L_dist(P_{f_t}, P_f^des),    (9)

where w is the weight on the distribution-matching term, L_dist is a metric of similarity between the distribution P_{f_t} of the constraint under the current control and the desired distribution P_f^des, whose samples avoid collision. Here, we fold the chance constraint into the main cost function and manifest a surrogate distribution-matching cost [13], [8]. It should be noted that we do not have access to the parametric forms of these distributions, but we assume access to their samples. We elaborate on the desired distribution in Section IV-A and describe the method to sample the feasible set of controls in Section IV-B.

IV-A Desired Distribution

A control input that can shape the distribution of the constraints to have an appropriate shape, e.g., appropriate mass to the left of zero as shown in Figure 2, is a solution of the chance-constrained optimization. The solution distribution should be as similar as possible to the desired distribution, in aspects such as the behaviour around the tail of the distribution. Put more simply, the desired distribution is an estimate of what the solution distribution might look like. Thus, given a feasible chance-constrained optimization, there always exists a desired distribution. However, the essential question is whether we can always construct such a desired distribution. The construction of the desired distribution is equivalent to solving the following deterministic optimization problem, (10a)-(10e), for each noise sample.

Fig. 2: The shape of the distribution can be manipulated by u. An appropriate shape is one where most of the probability distribution's mass lies to the left of zero.

The different constraint values computed for the samples obtained from (10a)-(10e) are precisely the samples of the desired distribution. A desired distribution can be constructed as long as we can solve (10a)-(10e); in other words, the construction guarantee of the desired distribution is tied to the existence guarantee of a solution of (10a)-(10e). Now, since f represents the velocity obstacle function, (10a)-(10e) represent a non-convex optimization problem for which absolute solution guarantees are intractable to obtain. We therefore focus instead on the conditions under which a solution to (10a)-(10e) is likely to be obtained.

Assumption 1: A control ū exists that satisfies the collision avoidance constraint when the states and velocities of the robot and obstacles are fixed at their mean values, i.e., f evaluated at the means under ū satisfies f ≤ 0.

Assumption 1 essentially means that a collision avoidance control can be obtained if we disregard the uncertainty and consider just the mean. This is a fair assumption, since the velocity obstacle is indeed known to produce collision-free motion in the deterministic case with a very high success rate [18]. Now, the solution computed with respect to the mean would always result in some non-zero probability of collision avoidance. For example, if the uncertainty is Gaussian, the control input computed based on the mean of the uncertainty leads to a substantial probability of collision avoidance. This, in turn, implies that there will always be samples around the mean that are collision-free with respect to the control input computed considering just the mean of the uncertainty. Thus, if we choose samples within a ball of some radius around the mean of the uncertainty, we are likely to obtain a solution to (10a)-(10e) and hence can construct the desired distribution.

To summarize, the desired distribution can be constructed as long as the deterministic counterpart of (8), i.e., the deterministic MPC without the surrogate cost, admits a solution. Small perturbations around the mean state and control are then considered and solved for; as long as a solution exists for the mean MPC and for these perturbations, a desired distribution can be obtained. Note that the distributions of the constraints are all functions of the states of the robot and obstacle as well as the control input to the robot, which are themselves random variables.
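The construction above can be sketched as a two-step procedure: under Assumption 1, find a control that is collision-free for the mean, then evaluate the constraint on samples within a ball around the mean and keep the collision-free values as samples of the desired distribution. The names and the candidate-search step are illustrative, not the paper's exact procedure:

```python
import numpy as np

def desired_distribution(f_of, u_candidates, mean_state, state_samples, delta):
    """Sketch of the desired-distribution construction (Sec. IV-A).

    f_of(state, u): collision-cone value for one state under control u.
    Returns a mean-feasible control and the collision-free constraint
    values of nearby samples, or (None, None) if Assumption 1 fails.
    """
    # Step 1: a control that satisfies the constraint at the mean.
    u_star = min(u_candidates, key=lambda u: f_of(mean_state, u))
    if f_of(mean_state, u_star) > 0.0:
        return None, None                        # mean problem infeasible
    # Step 2: evaluate f on samples within a ball of radius delta.
    near = [s for s in state_samples
            if np.linalg.norm(s - mean_state) <= delta]
    f_vals = np.array([f_of(s, u_star) for s in near])
    return u_star, f_vals[f_vals <= 0.0]         # collision-free samples only
```

For instance, with a toy constraint f(s, u) = ||s|| − u, any candidate control larger than the mean-state norm yields a non-empty desired distribution.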

IV-B Sampling feasible controls

We solve the optimization problem in (8) by sampling a set of feasible control commands and choosing the best control from the sampled set. The sampling approach makes solving (8) independent of the motion model of the robot. For simplicity, we sample a set of feasible controls using a differential drive motion model. Algorithm 1 describes how the feasible sample set is populated.

1:  n_v ← number of steps in the magnitude of linear velocity
2:  n_ω ← number of steps in the magnitude of angular velocity
3:  [v_min, v_max] ← bounds on the magnitude of linear velocity
4:  [ω_min, ω_max] ← bounds on the magnitude of angular velocity
5:  U ← ∅
6:  for every time step do
7:     for v ∈ linspace(v_min, v_max, n_v) do
8:        for ω ∈ linspace(ω_min, ω_max, n_ω) do
9:           U ← U ∪ {(v, ω)}
10:       end for
11:    end for
12: end for
Algorithm 1: Sampling controls using the unicycle model

Essentially, we take n_v steps between the bounds on the magnitude of linear velocity and n_ω steps between the bounds on the magnitude of angular velocity to obtain samples of each. All combinations of the sampled linear and angular velocities form the set of feasible controls U. The number of steps is chosen to ensure adequate coverage of the space of velocities that the ego vehicle can attain.
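Algorithm 1 amounts to a Cartesian product of two linearly spaced grids; a minimal sketch:

```python
import numpy as np
from itertools import product

def sample_controls(v_bounds, w_bounds, n_v, n_w):
    """Feasible control set for the unicycle model: every combination of
    n_v linear-velocity steps and n_w angular-velocity steps."""
    vs = np.linspace(v_bounds[0], v_bounds[1], n_v)
    ws = np.linspace(w_bounds[0], w_bounds[1], n_w)
    return [(v, w) for v, w in product(vs, ws)]

U = sample_controls((0.0, 1.0), (-0.5, 0.5), n_v=5, n_w=5)  # 25 (v, w) pairs
```

The grid resolutions n_v and n_w trade off coverage of the reachable velocity space against the per-step cost of evaluating every candidate control.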

IV-C Solving Collision Avoidance Chance Constraints

IV-C1 Distribution matching

A probability distribution is characterized by its moments. From [10], for given random variables x and y, the distributions of x and y become more similar as their higher-order moments become similar:

    E[x^i] ≈ E[y^i],  i = 1, 2, …, d,    (11)

where d refers to the order up to which the moments of x and y are similar. To the best of our knowledge, there is no method to deduce the exact mapping between the similarity of two distributions and the order of moments to match, starting from the first moment. Taking the concepts from [16], [13] formulates a workaround using the embedding of distributions in a Reproducing Kernel Hilbert Space (RKHS) and the Maximum Mean Discrepancy (MMD) distance:

    L_dist = || Σ_{i=1}^{m} α_i k(x_i, ·) − Σ_{j=1}^{n} β_j k(y_j, ·) ||²,    (12)

where m and n are the numbers of samples from the respective distributions of the random variables x and y, k(·, ·) represents a polynomial kernel of degree d, and α and β are the weight vectors corresponding to the samples x_i and y_j. For samples drawn with equal probability, α and β take the values given in (13):

    α_i = 1/m,   β_j = 1/n.    (13)
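With uniform weights, (12)-(13) can be evaluated directly from kernel Gram matrices. A sketch for scalar samples; the polynomial-kernel offset c = 1 is an assumption, as the paper specifies only the degree d:

```python
import numpy as np

def poly_kernel(x, y, d, c=1.0):
    """Polynomial kernel k(x, y) = (x*y + c)^d on 1-D sample arrays."""
    return (np.outer(x, y) + c) ** d

def mmd_sq(x, y, d):
    """Squared MMD between the empirical embeddings of x and y (eq. 12),
    with uniform weights alpha_i = 1/m, beta_j = 1/n (eq. 13).
    Gram-matrix means implement the double sums with 1/m, 1/n weights."""
    return (poly_kernel(x, x, d).mean()
            - 2.0 * poly_kernel(x, y, d).mean()
            + poly_kernel(y, y, d).mean())

f_current = np.array([0.4, 0.9, 1.3])     # constraint mass right of zero
f_desired = np.array([-1.2, -0.8, -0.3])  # collision-free samples
# mmd_sq(f_current, f_desired, 2) is positive and shrinks as the
# current distribution is pushed towards the desired one
```

Because the embedding is a norm in feature space, the value is zero only when the two empirical embeddings coincide, which is what makes it usable as the matching cost L_dist in (9).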


IV-C2 Collision Avoidance

Algorithm 2 describes the procedure for obtaining an optimal control, u*, that avoids the obstacles under uncertainty at time t.

1:  {p_t^r} ← samples of the position of the robot at time t
2:  {(p_t^o, v_t^o)} ← samples of the (position, velocity) of the obstacle at time t
3:  for every u ∈ U do
4:     evaluate the trajectory-following cost using (8a)
5:     evaluate the distribution-matching cost using (12)
6:     record the total cost (9) for u
7:  end for
8:  return the control u* with the minimum total cost
Algorithm 2: Collision avoidance at time t

Essentially, for each sample control in the feasible set we calculate the distributions of the collision cone and lane constraints over all noise samples, along with the corresponding desired distributions. Finally, we calculate the cost defined in (9) for each control and select the optimal control u* that yields the minimum cost.
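The selection loop of Algorithm 2 then reduces to scoring each sampled control with the combined cost (9) and returning the minimizer. A self-contained sketch with illustrative names; the inline MMD uses a degree-d polynomial kernel with uniform weights, as in (12)-(13):

```python
import numpy as np

def _mmd_sq(x, y, d):
    # squared MMD with a degree-d polynomial kernel, uniform sample weights
    k = lambda a, b: (np.outer(a, b) + 1.0) ** d
    return k(x, x).mean() - 2.0 * k(x, y).mean() + k(y, y).mean()

def best_control(controls, f_current, f_desired, traj_cost, w, d=2):
    """For each sampled control u, score the trajectory-following cost plus
    the weighted distribution-matching surrogate, and return the minimizer.

    f_current(u): collision-cone samples under control u;
    f_desired: samples of the desired (collision-free) distribution.
    """
    return min(controls,
               key=lambda u: traj_cost(u)
                             + w * _mmd_sq(f_current(u), f_desired, d))
```

Because every candidate control is scored independently, the loop is trivially parallelizable across the sampled control set.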

V Results and Discussion

All simulations were carried out on a commodity laptop with MATLAB as the simulation engine. The robot's state was evolved according to unicycle kinematics. Non-Gaussian and at times multi-modal distributions were used to model the robot's state, velocity and control noise, as well as the state and velocity noise of the dynamic obstacles and participants. Twenty-five samples each for the robot and obstacles, drawn from a Pearson distribution, were used to model the uncertainty in the state, velocity and actuator controls.

V-A Infeasibility of an optimization-based approach

Figure 3 shows that the cost-function surface has many kinks and valleys where a gradient-based optimisation technique may get stuck, converge slowly, or converge to a sub-optimal solution. Instead, by sampling in the control space, we are able to converge very close to an optimal solution while maintaining tight bounds on the frequency of the control output, which is very important for real-time control.

Fig. 3: Notice the uneven cost surface with multiple kinks and valleys.

V-B Qualitative Results

Figure 4 shows the various stages of obstacle avoidance in the presence of five obstacles. For every snapshot in which an avoidance manoeuvre is performed, we show below it the desired distribution of collision cones in black and the current distribution in blue for the closest obstacle. When a majority of the robot samples are colliding with samples of the obstacles, this distribution tends to lie to the right of the coordinate axis. Progressively, as the obstacle gets avoided, the collision cone distribution moves to the left of the axis and, as can be seen in Figure 4, largely coincides in its moments (such as the mean) with the desired distribution. Also shown at the top of the distribution plots is a horizontal bar whose length indicates the number of samples of the obstacle distribution that are avoided. By showcasing that many samples of the ego robot avoid many samples of the obstacles, we verify the efficiency of the algorithm. Figure 6 shows how the behaviour of the avoidance maneuver changes with the tunable parameter d in the presence of static obstacles and lanes/corridors. With small d we are able to pass through tight spaces, while larger d prioritises safer maneuvers and more clearance. In these figures the obstacle is shown in a shade of orange and the robot in blue; the mean sample is drawn filled in the respective colour, whereas the other samples are shown as unfilled circles. We also show the proposed method working in a real-time Gazebo simulation in the presence of lane/corridor constraints and stationary and dynamic obstacles.

V-C Ablation Studies

As the RKHS hyper-parameter d increases, higher-order moments of the distributions strive to match each other, as shown in Figures 4(d), 4(e), 4(f). This results in enhanced collision avoidance at the cost of an increasingly conservative maneuver. We show this for a robot avoiding a single obstacle in Figures 4(a), 4(b), 4(c). Figure 4(a) depicts the scenario for d = 1, where the agent gets close to the obstacle and not all samples of the obstacle are avoided. As d increases to 2 and 3, more samples of the obstacle are avoided by the robot samples, as seen in Figures 4(b), 4(c). However, the conservativeness of the maneuver, measured as the deviation from the optimal (straight-line) path, also increases with increasing d, as shown in the bar chart of Figure 7. We further compare with the following two formulations. In the first, the collision cone distribution is obtained by further linearization of the Gaussian approximations of the distributions [20]. In the second, the mean and variance are derived from the Gaussian approximation of the noise, and the surrogate chance constraint is then constructed along the lines of [7, 6]. We consistently perform better than both with respect to the control cost and the deviation, as shown in Figure 7.

(a) 5 obstacles
Fig. 4: Obstacle avoidance in the presence of 5 dynamic obstacles. For every snapshot in which we are executing an avoidance maneuver, we show below it the desired distribution of collision cones in black and the current distribution in blue for the closest obstacle.
(a) d = 1
(b) d = 2
(c) d = 3
(d) d = 1
(e) d = 2
(f) d = 3
Fig. 5: 4(a), 4(b), 4(c) are snapshots of the collision avoidance simulation for d = 1, 2, 3 respectively. An increase in d results in an increase in the clearance between the robot and the obstacles, which in turn improves the probability of collision avoidance. 4(d), 4(e), 4(f) are the distributions of velocity obstacles obtained after solving the RKHS-based approach for different values of d. We see that as d increases, more of the tail end starts to match.
Fig. 6: 5(a) and 5(b) show the ego robot avoiding a static table and reaching a goal behind it. With small d we are able to go between the legs of the table, whereas with higher values of d we route around it. 5(c) and 5(d) show the ego vehicle prioritising staying away from the lane boundaries with increasing d while reaching the same goal.
(a) Control costs
(b) Deviation plots
(c) Time duration
Fig. 7: 6(a) shows the average control cost, 6(b) the average deviation from the optimal trajectory, and 6(c) the time taken to avoid. All costs were obtained with different collision avoidance methods for a single obstacle, observed across n different instances. The proposed RKHS formulation consistently results in a lower-cost, more optimal solution compared to the listed baseline approaches. Furthermore, the approach based on surrogate constraints is often infeasible for higher confidence levels.
(a) Control costs
(b) Deviation plots
(c) Time duration
Fig. 8: 7(a) shows the average control cost, 7(b) the average deviation from the optimal trajectory, and 7(c) the time taken to avoid. All costs were obtained with different collision avoidance methods for the given number of obstacles, observed across n different instances.

V-D Quantitative Comparisons

In this section, we compare the proposed RKHS formulation against a chance-constrained linearized formulation that approximates the non-parametric distribution with Gaussian models, as described in Section V-C. The comparisons are shown in the bar plots of Figure 7(b), which compares the deviations in trajectory length from the optimal trajectory, and Figure 7(a), which compares the L2 norm of the control or actuator costs. The blue bar in each plot represents the proposed formulation, while the orange bar is for the linearization method. Figures 7(a), 7(b) and 7(c) show the respective comparisons for the cases of one, two and three obstacles between the proposed method and the linearized method.

It is evident from these plots that the RKHS formulation outperforms our baseline on both trajectory deviations and control costs. Additional results based on the proposed method can be found at

VI Conclusion

This paper proposed a novel formulation based on Reproducing Kernel Hilbert Space embeddings of non-parametric agent and obstacle distributions, and employed it to solve the problem of an ego agent avoiding several dynamic obstacles under the duress of non-parametric state, velocity, lane-boundary and actuator noise. The formulation was validated for a kinematically constrained agent whose state evolves according to unicycle kinematics. Performance gains across parameters such as trajectory length and time were shown with respect to two formulations: the first, where the collision cone distribution is obtained by further linearization of the Gaussian approximations with respect to the state and velocity variables; and the second, where the collision cone distribution is not analytically computed, but its mean and variance are derived from a Gaussian approximation that is then used to construct the surrogate chance constraint. To the best of the authors' knowledge, this is the first paper to have attacked this problem.


  • [1] J. Alonso-Mora, P. Gohl, S. Watson, R. Siegwart, and P. Beardsley (2014) Shared control of autonomous vehicles based on velocity space optimization. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1639–1645. Cited by: §I, §I.
  • [2] L. Blackmore, M. Ono, A. Bektassov, and B. C. Williams (2010) A probabilistic particle-control approximation of chance-constrained stochastic predictive control. IEEE transactions on Robotics 26 (3), pp. 502–517. Cited by: §II.
  • [3] S. Boyd (2008) Stochastic programming. Lecture Notes, Stanford University. Cited by: §II.
  • [4] A. De Luca, G. Oriolo, and C. Samson (1998) Feedback control of a nonholonomic car-like robot. In Robot motion, planning and control, J.-P. Laumond (Ed.), Lecture Notes in Control and Information Sciences 229. Cited by: §I.
  • [5] P. Fiorini and Z. Shiller (1998) Motion planning in dynamic environments using velocity obstacles. The International Journal of Robotics Research 17 (7), pp. 760–772. Cited by: §I, §II.
  • [6] B. Gopalakrishnan, A. K. Singh, M. Kaushik, K. M. Krishna, and D. Manocha (2017) PRVO: probabilistic reciprocal velocity obstacle for multi robot navigation under uncertainty. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pp. 1089–1096. Cited by: §I, §I, §II, §II, §V-C.
  • [7] B. Gopalakrishnan, A. K. Singh, and K. M. Krishna (2015) Closed form characterization of collision free velocities and confidence bounds for non-holonomic robots in uncertain dynamic environments. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 4961–4968. Cited by: §I, §V-C.
  • [8] B. Gopalakrishnan, A. K. Singh, K. M. Krishna, and D. Manocha (2018) Solving chance constrained optimization under non-parametric uncertainty through hilbert space embedding. External Links: 1811.09311 Cited by: §I, §IV, §IV.
  • [9] D. Hennes, D. Claes, W. Meeussen, and K. Tuyls (2012) Multi-robot collision avoidance with localization uncertainty. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 147–154. Cited by: §II.
  • [10] B. G. Lindsay and P. Basak (2000) Moments determine the tail of a distribution (but not much else). The American Statistician 54 (4), pp. 248–251. External Links: ISSN 00031305, Link Cited by: §IV-C1.
  • [11] B. Luders, M. Kothari, and J. How (2010) Chance constrained rrt for probabilistic robustness to environmental uncertainty. In AIAA guidance, navigation, and control conference, pp. 8160. Cited by: §II.
  • [12] A. Mesbah, S. Streif, R. Findeisen, and R. Braatz (2014-06) Stochastic nonlinear model predictive control with probabilistic constraints. pp. 2413–2419. External Links: ISBN 978-1-4799-3274-0, Document Cited by: §II.
  • [13] S. N. J. Poonganam, B. Gopalakrishnan, V. S. S. B. K. Avula, A. K. Singh, K. M. Krishna, and D. Manocha (2020-04) Reactive navigation under non-parametric uncertainty through hilbert space embedding of probabilistic velocity obstacles. IEEE Robotics and Automation Letters 5 (2), pp. 2690–2697. External Links: Document, ISSN 2377-3774 Cited by: item 1, §I, §I, §II, §IV-C1, §IV, §IV.
  • [14] B. Schölkopf, K. Muandet, K. Fukumizu, S. Harmeling, and J. Peters (2015) Computing functions of random variables via reproducing kernel hilbert space representations. Statistics and Computing 25 (4), pp. 755–766. Cited by: §I, §II.
  • [15] A. Scibior, C. Simon-Gabriel, I. O. Tolstikhin, and B. Schölkopf (2016) Consistent kernel mean estimation for functions of random variables. In Advances in Neural Information Processing Systems, pp. 1732–1740. Cited by: §I, §II.
  • [16] C. Simon-Gabriel, A. Scibior, I. O. Tolstikhin, and B. Schölkopf (2016) Consistent kernel mean estimation for functions of random variables. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 1732–1740. Cited by: §IV-C1.
  • [17] J. Snape, J. Van Den Berg, S. J. Guy, and D. Manocha (2011) The hybrid reciprocal velocity obstacle. IEEE Transactions on Robotics 27 (4), pp. 696–706. Cited by: §II.
  • [18] J. van den Berg, Ming Lin, and D. Manocha (2008-05) Reciprocal velocity obstacles for real-time multi-agent navigation. In 2008 IEEE International Conference on Robotics and Automation, Vol. , pp. 1928–1935. External Links: Document, ISSN 1050-4729 Cited by: §I, §II, §IV-A.
  • [19] D. Wilkie, J. Van Den Berg, and D. Manocha (2009) Generalized velocity obstacles. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5573–5578. Cited by: §I.
  • [20] H. Zhu and J. Alonso-Mora (2019-04) Chance-constrained collision avoidance for mavs in dynamic environments. IEEE Robotics and Automation Letters 4 (2), pp. 776–783. External Links: Document, ISSN 2377-3774 Cited by: §V-C.