NH-TTC: A gradient-based framework for generalized anticipatory collision avoidance

07/12/2019 ∙ by Bobby Davis, et al.

We propose NH-TTC, a general method for fast, anticipatory collision avoidance for autonomous robots having arbitrary equations of motion. Our proposed approach exploits implicit differentiation and subgradient descent to locally optimize the non-convex and non-smooth cost functions that arise from planning over the anticipated future positions of nearby obstacles. The result is a flexible framework capable of supporting high-quality, collision-free navigation with a wide variety of robot motion models in various challenging scenarios. We show results for different navigation tasks, with our method controlling various numbers of agents (with and without reciprocity), on both physical differential-drive robots and simulated robots with different motion models and kinematic and dynamic constraints, including acceleration-controlled agents, differential-drive agents, and smooth car-like agents. The resulting paths are high quality and collision-free, while needing only a few milliseconds of computation as part of an integrated sense-plan-act navigation loop.




1 Introduction

Recent trends in robotics, machine learning, and computer graphics have significantly advanced the state-of-the-art in autonomous navigation of mobile robots and intelligent agents. Over the past two decades, numerous global approaches have been proposed, including incremental sampling-based planners (Karaman and Frazzoli, 2011; LaValle and Kuffner Jr, 2001) and receding horizon formulations (Falcone et al., 2007; Williams et al., 2017), which compute a sequence of controls for the robot by optimizing a cost function that accounts for goal-oriented safe navigation. In many situations, though, an important task for the robot is to immediately react to its local surroundings while still making its best effort to follow its global plan. Whether it is an autonomous car driving on a highway, or a semi-autonomous smart-shelf navigating in an automated warehouse, a robot should be able to observe its surroundings, anticipate the expected behavior of nearby obstacles, and react accordingly, all within a tight sense-plan-act loop.

Many successful recent techniques for anticipatory local navigation have been based on the concept of velocity obstacles (VO) (Fiorini and Shiller, 1998; van den Berg et al., 2008), that is, the set of all possible relative velocities that will lead to a collision between two robots. VO-based formulations of collision avoidance are naturally anticipatory in that they penalize upcoming, future collisions. Recent work has extended the basic idea of VOs to allow efficient real-time solutions for complex scenarios (Guy et al., 2009), to provide formal guarantees of collision avoidance with multiple interacting agents (van den Berg et al., 2011b), and to support a variety of non-holonomic motion models (Wilkie et al., 2009; Bareiss and van den Berg, 2015). Broadly speaking, these methods work by computing (conservative) linear approximations of the complex action spaces that are known to be collision-free with respect to a given dynamic obstacle. Optimal controls can then be found using geometric optimization techniques, leading to an overall approach that is computationally efficient, but can be overly conservative, especially in complex scenarios.

Inspired by the success of these geometric optimization techniques, we seek to propose a more generalized framework for representing, specifying, and accounting for the anticipatory collision avoidance needs of mobile robots. While previous local methods typically plan for robot actions in some intermediate space (e.g., a velocity space), we propose a method that allows robots to quickly plan directly in the space of their controls. This allows us to naturally support planning over a wide variety of robot types with various kinematic or dynamic constraints.

Our NH-TTC approach, short for non-holonomic time-to-collision, leverages gradient-based optimization techniques to develop an “anytime” framework for iteratively refining a robot’s control to minimize the expected costs of the resulting trajectories. We achieve this by borrowing techniques from trajectory optimization. Specifically, we compose a cost function consisting of two terms: a time-to-collision based trajectory cost and a cost-to-goal term. In addition, to enable quick optimizations, we consider only trajectories created by a single, constant control. The result is a high quality estimate of the true cost of a control that can still be quickly optimized even for complex, non-holonomic dynamics models with kinematic constraints.

Two key technical challenges must be addressed to enable our approach:

  1. Because the new cost term we introduce is a function of the expected future states of the robot, it will have discontinuities in its values (e.g., an arbitrarily small change in control can move a robot discontinuously into a collision course).

  2. There is typically no closed-form, analytical solution that can represent the dynamics of future collision that will develop over time for arbitrary robot motion models.

Together, these two issues can prevent the use of many traditional gradient-based techniques for finding optimal controls. The lack of a closed-form solution means we cannot easily compute the true gradient of the cost function with respect to the robot controls. In addition, the discontinuity in the cost means that even if we can estimate this gradient well for parts of the solution space, it will be undefined at the discontinuities.

In this paper, we propose NH-TTC, a generalized framework for fast, anticipatory, optimization-based steering that addresses the above challenges by exploiting implicit differentiation and subgradient descent to locally optimize the nonconvex and nondifferentiable cost functions that arise from planning over expected future positions of neighboring obstacles. We show the applicability of our solution both on physical robots and simulated agents including smooth differential drives, double integrators, and car-like agents. Moreover, the method can be easily extended to more complicated navigation tasks such as chasing dynamic goals and multi-robot navigation in shared environments.

Our proposed framework occupies an important middle ground between reactive methods that only account for local state and planning-based approaches which offer full trajectory optimization. We can provide fast computation appropriate for use in a closed-loop, reactive control scheme, while still providing high-quality paths that avoid the long-term inefficiencies of purely local optimization.

Organization. The remainder of the paper is organized as follows. Section 2 reviews relevant work on decentralized planning and local collision avoidance. Section 3 casts local navigation as a control optimization problem, and Section 4 details our generalized optimization framework that relies on subgradient descent and implicit differentiation. Section 5 describes how our method can be applied to different dynamics models, including a smooth car, and Section 6 provides details regarding our experimental setup and the implementation of NH-TTC. In Section 7, we show the applicability of NH-TTC to simulated agents navigating amidst dynamic obstacles. Section 8 shows how our approach can be extended to support dynamic goals, and to enable collision avoidance in a multi-robot setting. Section 9 presents results on physical robots and analyzes the performance of NH-TTC. Finally, in Section 10, we conclude, and discuss limitations along with future research directions.

2 Related Work

There is an extensive amount of literature on planning global trajectories among static and moving obstacles, including sampling-based approaches (Karaman and Frazzoli, 2011; Jaillet and Siméon, 2004), uncertainty-aware planners (Bry and Roy, 2011; van den Berg et al., 2011a), anytime and receding horizon approaches (Falcone et al., 2007; Mattingley et al., 2011), and, more recently, end-to-end deep learning techniques (Pan et al., 2018), to name just a few. Given the current focus of our work on real-time mobile robot navigation, in the rest of this section we restrict ourselves to the most relevant work on decentralized planning and local collision avoidance.

Our method falls under the class of anticipatory collision avoidance methods, in which the robot is able to predict how its neighborhood evolves over time and react accordingly (Fox et al., 1997). Most such anticipatory methods rely on the concept of Velocity Obstacles (VO), introduced by Fiorini and Shiller (1998), which defines the set of all relative velocities that will lead to a collision between a holonomic robot and moving obstacles at some moment in time. VOs provide a tractable alternative to inevitable collision states (Fraichard and Asama, 2004; Petti and Fraichard, 2005), and have been successfully extended to account for reciprocity between agents, allowing their application to decentralized multiagent navigation problems (van den Berg et al., 2008; Guy et al., 2009). Further extensions have been proposed, including VO definitions for reciprocal collision avoidance between robots having more complicated dynamics (van den Berg et al., 2011c; Rufli et al., 2013; Bareiss and van den Berg, 2013), multi-robot teams walking in formation (Kimmel et al., 2012; Karamouzas and Guy, 2015), and formulations that account for uncertainty in the future trajectory of obstacles (Fulgenzi et al., 2007; Snape et al., 2011).

Wilkie et al. (2009) introduced the Generalized Velocity Obstacles (GVO) approach, which defines collisions with obstacles in the control space and enables safe navigation of kinematically constrained robots by selecting a control outside the “control obstacles” space. Our approach is closely related to GVOs, as we also plan directly in the control space of the robots. However, GVO relies on a discretization of the control space, which can become prohibitively expensive for real-time implementation on robots with complex dynamics, and it also assumes linear motion for the obstacles. In contrast, we use numerical optimization methods to search for a locally optimal control in an anytime fashion, and we can support different motion models for the obstacles.

To address the challenges that sampling-based VOs pose for real-time multi-robot navigation, the popular ORCA framework was proposed by van den Berg et al. (2011b). ORCA conservatively approximates VOs as half-planes, allowing robots to quickly find collision-free velocities outside of the union of all VOs by solving a convex optimization problem through linear programming. Since the original work of van den Berg et al. on holonomic robots, many ORCA-based approaches have been proposed that linearize the VOs or learn such constraints, including approaches for steering differential drives, car-like robots, and other non-holonomic agents (Giese et al., 2014; Snape et al., 2010; Alonso-Mora et al., 2013, 2012; Long et al., 2017).

More closely related to our work is the Generalized Reciprocal Velocity Obstacles (GRVO) approach of Bareiss and van den Berg (2015), which provides a generalized framework for local navigation supporting robots with both linear and nonlinear equations of motion. Similar to GVO, GRVO plans over potential robot controls, but does so in an indirect manner by representing a set of controls over time as a single high-level target velocity. GRVO then “ORCA-fies” this space of target velocities, using linear approximations to represent which target velocities may lead to collisions. However, GRVO and other ORCA-like approaches can be overly conservative in their approximations, forbidding large sets of admissible controls due to the linearization of control constraints. Furthermore, these approaches all typically use a binary indicator cost function to assess the quality of a given control input: a control is either strictly forbidden or fully allowed. Together, these factors can lead to inefficient robot behavior, especially in highly constrained scenarios.

To address these issues, we propose an alternative, gradient-based, framework, where the robot selects a new control by directly optimizing an anticipatory cost function inspired by recent work in understanding collision avoidance between pedestrians (Karamouzas et al., 2014; Olivier et al., 2012). We note that while gradient-based steering has been explored before for local navigation (Forootaninia et al., 2017; Dutra et al., 2017), such approaches focus only on holonomic agents. Importantly, most of the existing work in anticipatory local navigation, including recent socially-inspired approaches and probabilistic-based frameworks (Mavrogiannis and Knepper, 2018; Wolinski and Lin, 2018), plans in an intermediate, higher-level space (such as velocities). In contrast, we focus on reactive local navigation directly in the control space. Here we assume that a robot can interact with other agents that may or may not react to it, and propose a generalized local steering approach that can be applied to a variety of robot motion models and navigation settings.

3 NH-TTC Problem Formulation

Our work considers the problem of a robot that must traverse among moving obstacles while navigating to a goal position. Here, we formulate the problem as one where the robot is following a tight sense-plan-act loop many times a second. As such, our approach is similar to classic “reactive” planning approaches as the robot is given only a few milliseconds to compute new controls each time step in response to its immediate sensor input. Existing reactive approaches typically have one of two key limitations: either they need to use simple dynamics models for the robots (e.g., ORCA assumes direct velocity control), or they work in some intermediate representation (e.g., positional fields or velocity-space) that must be translated somehow to robot controls. In contrast, we directly compute an exact control that is (locally) optimal with respect to some cost function, which allows both the robot and the obstacles in its environment to have (different) arbitrary dynamics functions.

Our approach has two key features that enable strong practical performance. First, the approach is anytime: it can quickly find an acceptable solution (typically, well under a millisecond) and iteratively refine it as time is available. Second, as in full trajectory optimization approaches, our approach is anticipatory, penalizing controls based on their expected future impact. Unlike full trajectory optimization approaches, we focus on trajectories generated by a single control, allowing NH-TTC to quickly find good paths.

3.1 Notation and Preliminaries

We assume our environment contains a single robot navigating to a goal position while avoiding a set of obstacles (see Section 8.2 for the application of our approach to multi-agent settings with several robots navigating simultaneously in a shared environment). We assume both the robot and the obstacles in its environment follow known continuous-time dynamics functions that define their future states as follows:

    dx/dt = f(x(t), u(t)),     dx_i/dt = g_i(x_i(t)),

where x is the robot's state, u ∈ U is a valid control input, x_i is the state of obstacle i, and f, g_i are (possibly non-linear) continuous-time equations of motion. The set U is used to encode constraints on the robot's dynamics, such as maximum control limits. Likewise, we can define a collection of valid states X that can be used to constrain any aspect of the robot's state such that x(t) ∈ X. This is needed to specify state constraints that are not directly part of a robot's control. For example, an acceleration-controlled robot may have a maximum velocity.

To determine collisions between the robot and the obstacles, we model them both as disks. These disks are defined by projecting both the robot and obstacle states into a common Euclidean workspace, typically 2D or 3D, and then finding the minimal covering disk. As such, we define the robot's position as p(t) = π(x(t)), where π maps from the state space to the Euclidean workspace. The function π is chosen to place the center of the collision disk, with radius r, so as to wrap the true shape of the robot as closely as possible. Similarly, let p_i(t) = π_i(x_i(t)) and r_i be the center of the collision disk and its radius for obstacle i at time t, respectively, where the function π_i maps the obstacle's state space to the workspace.

3.2 Optimization-based Formulation

Given the above notation, we can formally define the problem as follows. We are given the robot's current state, x_0, a set of obstacle states over time, x_i(t), and the robot's goal position, g, which we assume has been computed by a high-level planning approach. The task for the robot is to find a collision-free trajectory, x(t), that approaches the goal as fast as possible while obeying the constraints of the robot dynamics f, the control constraint set U, and accounting for the state constraint set X.

Given an arbitrary trajectory x(t), we seek to construct a cost function C which evaluates how well the trajectory does at providing efficient, collision-free motion towards the robot's current goal. Similar to other trajectory optimization approaches, we break this cost into two terms:

    C(x(·)) = C_goal(x(·)) + C_col(x(·)),

where C_goal evaluates how closely the trajectory comes to meeting its goal state (or goal position), and C_col assigns a penalty to trajectories which have a high risk of collision. Given sufficient computation time, our goal would be to find the complete trajectory which minimizes this cost function, e.g., via sampling as in Bekris and Kavraki (2007), Karaman and Frazzoli (2011), and Denny et al. (2013), or using a POMDP-like formulation as in Platt et al. (2010) and Amato et al. (2016). However, these methods typically take several seconds or longer to converge, and so are inappropriate for the real-time setting considered here.

In order to allow the fast computation needed for use in a tight reactive planning loop, we consider only trajectories that are generated by a single, constant control that is executed indefinitely. As a result, we can reparameterize our cost function in terms of a single control u:

    C(u) = C_goal(u) + C_col(u),    (3)

and reformulate the task to one of finding the optimal control. That is, finding the single control that leads to the optimal fixed-control trajectory as defined by Equation 3.

In practice, a robot would not follow this fixed-control trajectory indefinitely. Rather, it will re-run this optimization many times a second, updating its planned trajectory as it approaches the goal and as local conditions change (similar in spirit to a receding horizon planner). Additionally, robots typically have some limits on what controls they may take at a given instant, such as maximum accelerations or steering limits, which leads to a set of admissible controls U at any time. The result is a constrained optimization problem,

    u* = argmin_{u ∈ U} C(u),    (4)

that the robot must solve at each step of its sense-plan-act loop. To ensure state constraints are satisfied, we must also check if the resulting trajectory respects the state constraints X, and, if not, project to the nearest control that respects these constraints within a small time horizon.

3.3 NH-TTC Cost Function

Figure 3: Gradient-Based Optimization in Control Space. (a) Scenario: A differential-drive robot, shown in red, has to reach the ‘x’ mark while avoiding a non-reactive obstacle shown in gray. (b) Cost Field: The corresponding cost field of the robot, visualized by sampling the robot's feasible controls. To better show the gradient, values are plotted in log scale, with the color corresponding to log C(u).

In general, many different cost fields can be considered in Equation 3. Inspired by trajectory optimization techniques, we find that the sum of two simple cost functions works well in a wide variety of scenarios: a cost-to-goal and a time-to-collision based trajectory cost.

Goal Cost (C_goal):

Because a fixed-control trajectory has no end point, we evaluate the position of the robot at some time T into the future. The goal cost is then defined by how close the robot will be to its goal position at that time:

    C_goal(u) = k_goal ||p(T) - g||,    (5)

where k_goal is a scaling constant. Section 8.1 discusses an extension to this cost function to better support cases where a robot is chasing after a moving target.
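As a minimal sketch (with hypothetical names `p_T`, `goal`, and `k_goal` standing in for the quantities above), the goal cost is simply a scaled Euclidean distance:

```python
import numpy as np

def goal_cost(p_T, goal, k_goal=1.0):
    """Distance-to-goal penalty: k_goal * ||p(T) - g||."""
    p_T = np.asarray(p_T, dtype=float)
    goal = np.asarray(goal, dtype=float)
    return k_goal * np.linalg.norm(p_T - goal)
```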

Collision Cost (C_col).

Again, as our fixed-control trajectory has no endpoint, we evaluate collisions only for the next T seconds. Inspired by recent findings that suggest the urgency humans place on a collision follows an inverse power-law relationship with how imminent such a collision is (Karamouzas et al., 2014), we penalize trajectories with a term that is inversely proportional to the time until the nearest collision:

    C_col(u) = k_col / min_i τ_i(u),    (6)

where k_col is a scaling constant and τ_i(u) computes the minimum time to collision between the robot's trajectory (as determined by the control, the current robot position, and the dynamics of the robot) and the expected future trajectory of obstacle i. The result of this human-inspired collision cost is a natural balance between strongly avoiding urgent collisions and (when necessary) taking controls that will lead to a collision far enough in the future that the robot will have a chance to re-plan well before the collision happens. Note that, to improve efficiency, we only consider the first collision that happens within the next T seconds.
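As an illustrative sketch of this penalty, the snippet below computes the inverse-TTC cost for the special case of two constant-velocity disks, where the time to collision has a closed form (NH-TTC itself computes it numerically for arbitrary dynamics; the horizon and scaling parameters here are assumptions for illustration):

```python
import math

def ttc_disks(p_rel, v_rel, r_sum):
    """Time to collision of two constant-velocity disks.

    p_rel, v_rel: relative position/velocity (obstacle minus robot);
    r_sum: sum of the two radii. Returns math.inf if there is no collision.
    """
    # Solve ||p_rel + t * v_rel|| = r_sum for the smallest t >= 0.
    a = v_rel[0] ** 2 + v_rel[1] ** 2
    b = 2.0 * (p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1])
    c = p_rel[0] ** 2 + p_rel[1] ** 2 - r_sum ** 2
    if c <= 0.0:
        return 0.0                      # disks already overlap
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return math.inf                 # paths never come within r_sum
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else math.inf

def collision_cost(p_rel, v_rel, r_sum, k_col=1.0, horizon=10.0):
    """Inverse power-law penalty on the most imminent collision."""
    tau = ttc_disks(p_rel, v_rel, r_sum)
    if tau >= horizon:
        return 0.0                      # no collision within the horizon
    return k_col / tau if tau > 0.0 else math.inf
```

For a head-on approach at 1 m/s from 4 m away with radii summing to 1 m, the disks touch after 3 s, giving a cost of k_col/3; receding obstacles contribute no penalty.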

An important property of our cost function is that it rises to infinity as the time until the nearest collision approaches zero. This means controls which lead to immediate collisions have effectively an infinite penalty. As a result, in the limit as planning frequency approaches infinity, our approach is guaranteed to be collision-free so long as the optimizer has sufficient time to converge to a finite cost (and assuming a collision-free control exists).

3.4 Challenges

Despite the straightforward nature of its components C_goal and C_col, the resulting cost function is difficult to optimize, as it is constrained, non-convex, and discontinuous. Specifically, the time to collision has a sharp discontinuity between a glancing collision and no collision, jumping from a finite value to infinity, respectively. As an example of these discontinuities, Figure 3 shows the cost field, C(u), for a robot avoiding a single (passive) oncoming obstacle. Here, the robot is assumed to be a differential-drive robot that can directly control its linear and angular velocity.

As can be seen in Figure 3, there are sharp boundaries between the colliding and collision-free controls. Controls that drive backwards (the left half of the figure) have higher costs due to the goal cost term, and controls that lie along the top-left or bottom-right quadrants have additional penalties from the collision cost term, as they will lead to imminent collisions. The optimal control (linear velocity = 0.3 m/s, rotational velocity = 0.03 rad/s) steers towards the goal, while gently avoiding any potential collisions.

Due to the discontinuities, classical gradient descent will be ineffective (as will other standard, higher-order optimization techniques). Note that smoothing this cost field, as in Karamouzas et al. (2017), is not possible with arbitrary dynamics, since we do not in general know a priori where the discontinuities lie. Additionally, the time to collision itself can be difficult to compute, as we may have no closed-form solution for the position of the robot or the obstacles at any given time, depending on their dynamics model. The following section describes our approach to overcoming these difficulties.

4 Control Optimization

Per our problem formulation, optimizing Equation 4 will allow us to compute a per-time-step control for the robot that will result in kinematically feasible, anticipatory, goal-oriented navigation in constrained settings. Our proposed NH-TTC approach relies on two techniques. First, we use subgradient descent-based optimization (Shor, 1985; Bertsekas, 1999), which allows us to find local minima even in the presence of a discontinuous, non-smooth cost function. Second, we approximate the robot and obstacle dynamics via Runge-Kutta integration and compute the control gradients of the robot's position and time to collision through the use of implicit differentiation.

4.1 Subgradient Descent

In order to optimize Equation 4, we use the subgradient descent algorithm. As in standard gradient descent, the control is iteratively updated based on the gradient of the cost with respect to the control u:

    u_{k+1} = u_k - α_k d_k,

where the search direction, d_k, is based on the gradient ∇_u C(u_k). However, at points where C is not smooth, and therefore ∇_u C is undefined, we choose either the left or right gradient. This is known as the subgradient, which we will call ∂C. While this gives us an optimization direction, it does not define the step size α_k, i.e., how far to update our control in that direction.

Numerous techniques have been proposed to choose an appropriate update size α_k. Experimentally, we found a Polyak update-based approach (Polyak, 1987) to perform well in our domain. This method assumes the optimal possible control cost, C*, is known in advance. Then, at each descent iteration k, we compute the difference between the current cost, C(u_k), and the optimal value of the function, C*, and scale the result by the squared magnitude of the current subgradient search direction, d_k:

    α_k = (C(u_k) - C*) / ||d_k||^2.
Given the complex and dynamic nature of our cost function, it is generally not possible to know the optimal value C* in advance. As such, we compute an approximate optimal cost by first taking the best cost seen in any of the iterations so far, C_best, and then subtracting a small amount δ_k:

    C*_k ≈ C_best - δ_k.

As has been suggested in the literature (Shor, 1998; Nedic and Bertsekas, 2001), the amount δ_k we subtract from C_best gradually decreases with increasing iteration count, k. The resulting update to α_k guarantees that the subgradient descent will converge to a true local minimum as k approaches infinity.

While it is possible to directly use the subgradient as the search direction (i.e., d_k = ∂C(u_k)), we found convergence to be improved by adding “momentum” to the search direction (Nesterov, 2003). That is, our search direction at iteration k is based in part on the current subgradient and in part on the previous search direction, d_{k-1}. Experimentally, we found an update rule of the following momentum form to work well:

    d_k = ∂C(u_k) + β d_{k-1},

where β weights the contribution of the previous search direction.

Subgradient descent does not guarantee a cost decrease at each iteration, so the best control seen across the entire optimization is used as our final result. Additionally, we project the control computed at each iteration onto the control constraints, U, to ensure the controls remain feasible. The resulting projected subgradient descent algorithm is shown in full in Algorithm 1. Here, we assume that the robot is given a time budget, t_budget, to compute its new control.

Input : initial control guess u_0, control constraints U, time budget t_budget
Output : best control found, u_best
while elapsed time < t_budget do
       u_{k+1} = u_k - α_k d_k
       Project u_{k+1} into U
       Update u_best and d_{k+1}
end while
Algorithm 1 Subgradient-Based Control Optimization
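A minimal sketch of Algorithm 1's ingredients — a Polyak step with a shrinking offset, momentum, projection, and best-so-far tracking — applied here to a toy non-smooth objective rather than the NH-TTC cost (the parameter values and the fixed iteration budget are illustrative, not the paper's):

```python
import numpy as np

def projected_subgradient(cost, subgrad, project, u0, iters=2000,
                          delta=0.5, momentum=0.25):
    """Projected subgradient descent with a Polyak-style step size.

    Returns the best control seen, since subgradient steps need not
    decrease the cost monotonically.
    """
    u = project(np.asarray(u0, dtype=float))
    best_u, best_c = u.copy(), cost(u)
    d_prev = np.zeros_like(u)
    for k in range(1, iters + 1):
        c = cost(u)
        if c < best_c:
            best_u, best_c = u.copy(), c
        d = subgrad(u) + momentum * d_prev   # momentum-augmented direction
        d_prev = d
        # Polyak step toward an estimated optimum slightly below the best
        # cost seen so far; the offset shrinks as iterations accumulate.
        target = best_c - delta / np.sqrt(k)
        alpha = (c - target) / max(d @ d, 1e-12)
        u = project(u - alpha * d)
    return best_u, best_c

# Toy non-smooth objective: f(u) = |u0 - 1| + |u1 + 2| with controls
# constrained to the box [-1.5, 1.5]^2 (constrained optimum: u = (1, -1.5)).
cost = lambda u: abs(u[0] - 1.0) + abs(u[1] + 2.0)
subgrad = lambda u: np.array([np.sign(u[0] - 1.0), np.sign(u[1] + 2.0)])
project = lambda u: np.clip(u, -1.5, 1.5)
u_star, c_star = projected_subgradient(cost, subgrad, project, [0.0, 0.0])
```

In a real-time loop, the `for` loop would instead be bounded by the time budget, and the projection would be the robot's actual control-constraint projection.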

4.2 Subgradient Computation

To run the subgradient algorithm, we need to be able to compute the gradient, ∇_u C. This is non-trivial, as there may not be a closed-form solution for the robot position, the obstacle position, or the time to collision. Below, we first discuss how to compute the cost function, and then focus on the computation of the gradient.

4.2.1 Cost Computation

To compute the goal cost, C_goal (Equation 5), we need to compute the robot's position at time T. As we may not have a closed-form solution for x(T), we approximate it using fourth-order Runge-Kutta integration (RK4). To improve accuracy, we iteratively run multiple steps of RK4, such that each step covers at most some small time interval h. This parameter allows for tuning the accuracy of our position estimation, at the expense of computation time. Once x(T) has been computed, we pass it through the position mapping function π to obtain the Euclidean position p(T), which, along with the goal position, fully defines the goal cost.
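A compact sketch of this substepped propagation, applied here to a differential-drive model for illustration (the dynamics function and step size are assumptions, not the paper's exact settings):

```python
import math

def rk4_step(f, x, u, h):
    """One classical fourth-order Runge-Kutta step of dx/dt = f(x, u)."""
    k1 = f(x, u)
    k2 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + h * ki for xi, ki in zip(x, k3)], u)
    return [xi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def propagate(f, x, u, t, h_max=0.1):
    """Integrate to time t using substeps no longer than h_max."""
    n = max(1, math.ceil(t / h_max))
    h = t / n
    for _ in range(n):
        x = rk4_step(f, x, u, h)
    return x

# Example dynamics: differential drive, state (x, y, theta), control (v, omega).
f = lambda x, u: [u[0] * math.cos(x[2]), u[0] * math.sin(x[2]), u[1]]
```

With v = omega = 1, integrating over t = π traces half of a unit-radius circle, which gives a quick sanity check against the analytic solution.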

To compute the collision cost, C_col (Equation 6), we need to compute the most imminent time to collision over all the obstacles. Assuming both objects are approximately circular, the time to collision τ between the robot and obstacle i can be defined as the time at which the two disks touch, i.e.:

    ||p(τ) - p_i(τ)|| = r + r_i.    (11)

However, for many systems, solving this equation for τ is not feasible, as there may not be a closed-form solution for p(t) and/or p_i(t), or the resulting equation is too complex. Instead, we utilize a similar approach to that used to compute the goal cost. We forward propagate the state of the robot and the state of each obstacle using RK4, and perform linear continuous collision checks between the resulting states to estimate the first moment of collision that may have occurred during the integration steps (see Figure 4).
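The linear continuous collision check between consecutive propagated states can be sketched as follows: within each substep both disks are assumed to move at constant velocity, and the earliest admissible root of the resulting quadratic gives the collision time (a simplified 2-D sketch; the sampling step h and input format are assumptions):

```python
import math

def first_collision_time(robot_pts, obst_pts, r_sum, h):
    """Earliest collision time between piecewise-linear disk trajectories.

    robot_pts, obst_pts: lists of 2-D positions sampled every h seconds
    (e.g., RK4 states projected to the workspace). Between samples, both
    disks are assumed to move at constant velocity.
    """
    for i in range(len(robot_pts) - 1):
        # Relative position/velocity over the segment [i*h, (i+1)*h].
        px = obst_pts[i][0] - robot_pts[i][0]
        py = obst_pts[i][1] - robot_pts[i][1]
        vx = ((obst_pts[i + 1][0] - obst_pts[i][0])
              - (robot_pts[i + 1][0] - robot_pts[i][0])) / h
        vy = ((obst_pts[i + 1][1] - obst_pts[i][1])
              - (robot_pts[i + 1][1] - robot_pts[i][1])) / h
        a = vx * vx + vy * vy
        b = 2.0 * (px * vx + py * vy)
        c = px * px + py * py - r_sum * r_sum
        if c <= 0.0:
            return i * h                # already touching at segment start
        disc = b * b - 4.0 * a * c
        if a > 0.0 and disc >= 0.0:
            t = (-b - math.sqrt(disc)) / (2.0 * a)
            if 0.0 <= t <= h:
                return i * h + t        # first contact within this segment
    return math.inf
```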

Knowing how to compute the (propagated) cost C(u), we next show how to compute the gradient of the cost with respect to the controls, ∇_u C.

4.2.2 Goal Cost Gradient

As Equation 5 does not directly rely on the control u, we must apply the chain rule to compute the gradient:

    ∇_u C_goal = (∂C_goal/∂p(T)) (∂p(T)/∂x(T)) (∂x(T)/∂u).    (12)

The first term can be directly computed as:

    ∂C_goal/∂p(T) = k_goal (p(T) - g)^T / ||p(T) - g||.    (13)

The second term, ∂p(T)/∂x(T), can be computed directly given the projection function π.

All that remains is to compute the third term, ∂x(T)/∂u. Given a discrete-time dynamics function x_{t+1} = F(x_t, u), this gradient can be computed iteratively via the multivariate chain rule as:

    ∂x_{t+1}/∂u = (∂F/∂x_t)(∂x_t/∂u) + ∂F/∂u.    (14)

We start from ∂x_0/∂u = 0, as the current position is independent of the upcoming control, and apply Equation 14 iteratively until the gradient at some user-defined time, T, is obtained. However, we may not have a closed-form solution for x(t), and therefore no closed-form solution for the discrete-time dynamics. While we could compute the gradient of the RK4 dynamics directly, as we do for the forward propagation, we find it more computationally efficient (while sufficiently accurate) to compute the gradient using a series of trapezoidal integration steps on the (known) continuous dynamics.

Figure 4: Time to Collision Computation: Robot and obstacle states are forward propagated using RK4 integration, and then linear continuous collision checks are done between each discrete state.

In our case, the trapezoidal integration is defined as follows:


Using these approximate dynamics, we can iteratively compute the partial derivatives required by Equation 14 as follows:


where the half step partial derivatives are computed as:


The full iterative process for computing the derivative at time T is shown in Algorithm 2 (here, we initialize ∂x_0/∂u = 0). Finally, we combine the resulting gradient as in Equation 12 to compute the total gradient for the goal cost term.

Input : current state x_0, control u, evaluation time T
Output : position gradient ∂x(T)/∂u
while t < T do
       advance the state and the gradient one integration step (Equation 14)
end while
Algorithm 2 Position Control Gradient
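The iterative chain-rule propagation of the position gradient can be sketched as below for a differential-drive model. For brevity, this sketch uses explicit Euler substeps rather than the trapezoidal scheme described above, but the recurrence per step is the same; a finite-difference check confirms the propagated Jacobian:

```python
import numpy as np

def position_gradient(x0, u, T, h=0.01):
    """d p(T) / d u for a differential drive via the iterated chain rule.

    State x = (px, py, theta), control u = (v, omega). Each Euler substep
    applies  dx_{k+1}/du = (I + h df/dx) dx_k/du + h df/du,
    starting from dx_0/du = 0.
    """
    x = np.asarray(x0, dtype=float)
    J = np.zeros((3, 2))                   # dx/du, initially zero
    n = int(round(T / h))
    for _ in range(n):
        v, w = u
        s, c = np.sin(x[2]), np.cos(x[2])
        fx = np.array([[0.0, 0.0, -v * s],  # df/dx at the current state
                       [0.0, 0.0,  v * c],
                       [0.0, 0.0,  0.0]])
        fu = np.array([[c, 0.0],            # df/du at the current state
                       [s, 0.0],
                       [0.0, 1.0]])
        J = J + h * (fx @ J + fu)           # chain-rule gradient update
        x = x + h * np.array([v * c, v * s, w])
    return x[:2], J[:2, :]                  # position and its control Jacobian
```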

4.2.3 Collision Cost Gradient

Similar to the goal cost term, the collision cost term, C_col (Equation 6), does not depend directly on u, so we must again compute its gradient via the chain rule:

    ∇_u C_col = -(k_col / τ_{i*}(u)^2) ∇_u τ_{i*}(u),

where i* is the obstacle with the closest time-to-collision.

However, unlike in the goal cost gradient, the time to collision, τ, cannot generally be written explicitly as a function of the controls u, which prevents us from computing ∇_u τ directly. To address this issue, we propose the use of implicit differentiation. This allows us to have an analytic expression of the collision-cost gradient implicitly written as a function of τ. The implicit relationship between τ and u is shown in Equation 11. Using this relationship, we can find ∂τ/∂u_j by taking the derivative of Equation 11 with respect to u_j (the j-th element of the control), and solving for ∂τ/∂u_j:

    ∂τ/∂u_j = - ( d(τ) · ∂p(τ)/∂u_j ) / ( d(τ) · v_rel(τ) ),

where d(t) = p(t) - p_i(t) is the relative position and v_rel(t) = dp/dt - dp_i/dt is its time derivative.

Note that dp/dt and dp_i/dt follow from the known continuous-time dynamics of the robot and the obstacle, and ∂p(τ)/∂u_j can be computed via the chain rule, as shown in Section 4.2.2, as:

    ∂p(τ)/∂u_j = (∂p/∂x) (∂x(τ)/∂u_j),

where ∂x(τ)/∂u_j is dependent on the dynamics model and can be computed with Algorithm 2, setting T = τ. If τ is infinite (i.e., there is no collision), this derivative is 0.
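For the special case of constant-velocity disks, where the touching condition can also be solved in closed form (making the result easy to verify), implicit differentiation of F(τ, u) = ||d(τ)||² - R² = 0 gives the TTC gradient directly (a sketch under these simplifying assumptions, with the robot's velocity playing the role of the control):

```python
import numpy as np

def ttc_and_gradient(p0, u, q0, w, R):
    """TTC of constant-velocity disks and its gradient d tau / d u,
    via implicit differentiation of F(tau, u) = ||d(tau)||^2 - R^2 = 0,
    where d(tau) = (p0 + u*tau) - (q0 + w*tau) and R is the radius sum."""
    p0, u, q0, w = (np.asarray(z, dtype=float) for z in (p0, u, q0, w))
    rel_p, rel_v = p0 - q0, u - w
    a = rel_v @ rel_v
    b = 2.0 * (rel_p @ rel_v)
    c = rel_p @ rel_p - R * R
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return np.inf, np.zeros_like(u)     # no collision: gradient is zero
    tau = (-b - np.sqrt(disc)) / (2.0 * a)
    if tau < 0.0:
        return np.inf, np.zeros_like(u)     # collision lies in the past
    d = rel_p + tau * rel_v                 # relative position at contact
    dF_dtau = 2.0 * (d @ rel_v)             # partial of F w.r.t. tau
    dF_du = 2.0 * d * tau                   # partial of F w.r.t. each u_j
    return tau, -dF_du / dF_dtau            # implicit function theorem
```

For a robot at the origin driving at speed v straight toward a static obstacle 4 m away with R = 1, the gap closes in τ = 3/v seconds, so dτ/dv = -3/v²; the implicit-differentiation result matches this analytic value.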

4.3 State Constraints

When the agent is subject to state constraints, such as an acceleration-controlled robot with a maximum velocity constraint, a small modification needs to be applied to NH-TTC. Since we are generating trajectories with a single control, we cannot always guarantee that such constraints will be satisfied. For example, applying a non-zero acceleration will eventually violate any velocity magnitude constraint. To address this issue, we enforce state constraints by modifying the continuous time dynamics. For example, in the acceleration-controlled system, we can provide a (soft) constraint on the velocity to be no more than as follows:



is the acceleration. While it may be natural to zero out the acceleration when the constraint is violated, this would also drive the gradient of the velocity with respect to the acceleration to zero, and no optimization would occur. Instead, we limit the acceleration to a very small value to avoid this vanishing-gradient problem.

The above formulation only enforces the state constraint when computing the cost, , and its gradient, . As such, to more strictly enforce the constraint, we also modify the projection in the subgradient descent algorithm (Algorithm 1). In addition to the projection onto the control constraint set, we also project the control such that, one timestep in the future, the state constraint isn’t violated. In the above acceleration controlled system, for example, by solving for the new velocity, we can modify the acceleration to enforce the state constraint as follows:


where is projected onto . Note that this method is only applicable when the state constraints are on states we can solve for explicitly.
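Both mechanisms can be sketched for an acceleration-controlled robot with a speed limit; the constants, function names, and hard-clip form below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

V_MAX = 1.0   # illustrative speed limit
EPS = 1e-3    # small residual gain instead of zeroing the acceleration

def constrained_vdot(v, a):
    """Soft velocity constraint: if the commanded acceleration would push
    the speed further past V_MAX, shrink it to EPS*a instead of zeroing it,
    so the gradient d(vdot)/da = EPS stays nonzero."""
    if abs(v) >= V_MAX and v * a > 0:
        return EPS * a
    return a

def project_control(v, a, dt):
    """Stricter enforcement used in the subgradient projection step:
    clip a so that the speed one timestep ahead obeys the limit."""
    lo = (-V_MAX - v) / dt
    hi = (V_MAX - v) / dt
    return np.clip(a, lo, hi)

# Saturated state: the soft dynamics keep a small, nonzero gradient
print(constrained_vdot(1.0, 2.0))       # small but nonzero (0.002)
# Projection: from v = 0.95 with dt = 0.1, a = 2.0 is clipped to about 0.5
print(project_control(0.95, 2.0, 0.1))
```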

5 Dynamics Models

The exact form of the cost gradient computation (Equations 12-21) will vary based on the dynamics of the robot under consideration. For the sake of illustration, we show in Section 5.1 how to apply our method to a robot with a smooth car dynamics model inspired by an autonomous driving task. As this smooth car model is relatively complex, with 2nd order, acceleration-controlled, car-like dynamics, we step through all the relevant equations needed to apply our framework in detail. Our framework supports a large variety of dynamics models, several of which are discussed more briefly in Section 5.2.

5.1 Case Study: Smooth Car

We define a smooth car by a 5d state space in which each state is represented by 2d position, orientation, linear velocity, and steering angle as . See Figure 5 for an illustration. To make the velocity and the steering angle of the car vary continuously over time, we define a 2d control that represents the car's linear acceleration and the rate of change of the steering angle. In addition to constraints on and , we also impose state constraints on and . As described in Section 4.3, we introduce the following functions to help enforce the state constraints:


Using these functions, we can define the continuous time dynamics of the system as:


where is the length of the car. Note that and are modified to maintain the velocity and steering angle constraints during forward propagation.
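A runnable sketch of such a smooth-car model follows, using hard clamps in place of the paper's constraint functions (which are not reproduced here); the car length and limits are illustrative:

```python
import numpy as np

L = 0.5                     # car length (illustrative)
V_MAX, PHI_MAX = 2.0, 0.6   # illustrative limits on speed and steering angle

def smooth_car_dynamics(state, control):
    """Continuous dynamics of the 5d smooth car: state (x, y, theta, v, phi),
    control (a, psi) = (linear acceleration, steering-angle rate). Speed and
    steering angle are clamped to respect the state constraints during
    forward propagation (a simplification of the paper's soft limits)."""
    x, y, theta, v, phi = state
    a, psi = control
    v_c = np.clip(v, -V_MAX, V_MAX)
    phi_c = np.clip(phi, -PHI_MAX, PHI_MAX)
    return np.array([
        v_c * np.cos(theta),      # x-dot
        v_c * np.sin(theta),      # y-dot
        v_c * np.tan(phi_c) / L,  # theta-dot (bicycle model)
        a,                        # v-dot
        psi,                      # phi-dot
    ])

def rk4_step(f, state, control, dt):
    k1 = f(state, control)
    k2 = f(state + 0.5 * dt * k1, control)
    k3 = f(state + 0.5 * dt * k2, control)
    k4 = f(state + dt * k3, control)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.0, 0.0, 0.0, 1.0, 0.0])  # driving straight at 1 m/s
s = rk4_step(smooth_car_dynamics, s, np.zeros(2), 0.1)
print(s[:2])  # advanced about 0.1 m along +x
```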

Figure 5: Smooth Car: Definitions of the states of the smooth car , in addition to the collision disk . Given zero controls, the car will follow the trajectory labeled .

From these dynamics, we can compute the partial derivatives with respect to both the state and the controls (only non-zero derivatives are shown). The partial derivatives with respect to the state are:


and the partial derivatives with respect to the controls are:


Using the continuous time dynamics and these partial derivatives, we can compute discrete time dynamics via trapezoid integration, and find the partial derivatives necessary for Algorithm 2. The discrete time dynamics are approximated as:


Using this, we can analytically compute the form of the partial derivatives with respect to the state at time and the controls, as required by Equation 14 (again only showing non-zero elements). First, the partials of the -component of the state with respect to the controls are:


Similarly, the partials of the -component of the state are:


The partials of the orientation are:


Finally the partials with respect to the velocity and steering angle are:




We also need to define the function, , mapping the state, , to the Euclidean workspace, and its derivatives. Because we are modeling the car from the center of the rear axle, we can minimize the encompassing area of the collision avoidance circle by shifting the collision center to lie on the center of the car rather than on the rear axle (Figure 5):


The non-zero derivatives of the collision disk center are:


The above derivatives, along with those of the continuous dynamics (Equation 26), and those of the trapezoidal integration (Equations 30-34), fully define the goal cost gradient (Equation 12).

To compute collisions, we also define the radius of the collision disk, assuming a 2-to-1 length-to-width ratio for the car:


Equation 37, together with the continuous dynamics (Equation 26) and the offset collision circle center (Equation 35), allows us to compute the linear continuous collision checks between RK4 integration steps in order to estimate the time to collision with any obstacles in the scene. After computing the time to collision, we can compute the collision cost using Equation 6, and the collision cost gradient by combining Equations 21, 30-34, and 36. Combining the collision cost gradient with the goal cost gradient gives the full gradient, which can then be used in Algorithm 1 for the control update.
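The linear continuous collision check between consecutive RK4 samples reduces to solving a quadratic in time for two disks in relative linear motion. A hedged sketch (the function name and signature are ours):

```python
import numpy as np

def segment_ttc(p_rel, v_rel, radius, dt):
    """Earliest time in [0, dt] at which two disks, moving linearly with
    relative position p_rel and relative velocity v_rel, come within
    `radius` (the sum of the two radii) of each other; inf if none."""
    # Solve |p_rel + t*v_rel|^2 = radius^2 for the smallest t >= 0.
    c = p_rel @ p_rel - radius**2
    if c <= 0.0:
        return 0.0                  # already in collision
    a = v_rel @ v_rel
    b = 2.0 * (p_rel @ v_rel)
    disc = b * b - 4 * a * c
    if a == 0.0 or disc < 0.0:
        return np.inf               # paths never touch
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if 0.0 <= t <= dt else np.inf

# Head-on approach: a gap of 2 - 0.5 = 1.5 closed at 1 m/s gives t = 1.5
print(segment_ttc(np.array([2.0, 0.0]), np.array([-1.0, 0.0]), 0.5, 5.0))
```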

5.2 Other Dynamics Models

While our framework supports many robot dynamics models, for the majority of our results we consider the following five models, which span a range of 1st and 2nd order dynamics, with or without kinematic constraints, and include both holonomic and non-holonomic systems.

  • Velocity (V): Here the state is the 2d position, $(x, y)$, and the controls are the 2d velocities, $(v_x, v_y)$. The continuous time dynamics are:

    $\dot{x} = v_x, \quad \dot{y} = v_y$
  • Acceleration (A): Here the state is the 2d position and the 2d velocity, $(x, y, v_x, v_y)$, and the controls are the 2d accelerations, $(a_x, a_y)$. The continuous time dynamics are:

    $\dot{x} = v_x, \quad \dot{y} = v_y, \quad \dot{v}_x = a_x, \quad \dot{v}_y = a_y$
  • Differential Drive (DD): Here the state is the 2d position and the 1d orientation, $(x, y, \theta)$, and the controls are the linear and angular velocities, $(v, \omega)$. The continuous time dynamics are:

    $\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega$
  • Smooth Differential Drive (SDD): Here the state is the 2d position, the 1d orientation, and the linear and angular velocity, $(x, y, \theta, v, \omega)$, and the controls are the linear and angular accelerations, $(u_v, u_\omega)$. The continuous time dynamics are:

    $\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega, \quad \dot{v} = u_v, \quad \dot{\omega} = u_\omega$
  • Simple Car (Car): Here the state is the 2d position and the 1d orientation, $(x, y, \theta)$, and the controls are the linear velocity and the steering angle, $(v, \phi)$. The continuous time dynamics are:

    $\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \frac{v}{L}\tan\phi$

    where $L$ is the length of the car.
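The five models above each take only a few lines of Python; the state and control layouts follow the list, and the function names and the default car length are illustrative:

```python
import numpy as np

# Continuous-time dynamics f(state, control) for the five models above.

def velocity(s, u):                # s = (x, y), u = (vx, vy)
    return np.array([u[0], u[1]])

def acceleration(s, u):            # s = (x, y, vx, vy), u = (ax, ay)
    return np.array([s[2], s[3], u[0], u[1]])

def diff_drive(s, u):              # s = (x, y, th), u = (v, w)
    return np.array([u[0] * np.cos(s[2]), u[0] * np.sin(s[2]), u[1]])

def smooth_diff_drive(s, u):       # s = (x, y, th, v, w), u = (av, aw)
    return np.array([s[3] * np.cos(s[2]), s[3] * np.sin(s[2]), s[4], u[0], u[1]])

def simple_car(s, u, length=0.5):  # s = (x, y, th), u = (v, phi)
    return np.array([u[0] * np.cos(s[2]), u[0] * np.sin(s[2]),
                     u[0] * np.tan(u[1]) / length])

# A differential-drive robot pointing along +x and turning left:
print(diff_drive(np.zeros(3), np.array([1.0, 0.5])))  # [1.0, 0.0, 0.5]
```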

6 Implementation Details

In all of our experiments, unless otherwise specified, we use the following constraints for the linear velocity, angular velocity, linear acceleration, angular acceleration, steering angle, and steering angle velocity, respectively: , , , , , and . The cost parameters, and , are both set to 1. For the time-to-collision search, is set to 5s and is set to 0.1s. The time when we compute the goal distance, , is set to 1s. Trajectories are planned using 10ms of planning time, and controls are updated at 10Hz. All results were generated on a single thread of an Intel Xeon 3.0GHz processor. For the real robot results, each agent planned in its own thread. We implemented our subgradient-based optimization framework in C++, using Eigen (Guennebaud et al., 2010) to efficiently handle matrix and vector operations.

Our implementation contains a few algorithmic optimizations. First, as the obstacle trajectories are static throughout all optimization iterations per planning step, we pre-compute the trajectories at the beginning of each planning step. Second, we compute the collision checks for each linear segment against every obstacle before moving to the next linear segment. This allows the time-to-collision search to exit as soon as the first collision is found. As such, collision times can be quick to compute for nearby obstacles, allowing for many optimization iterations even in dense scenarios. Finally, we chose a high resolution for , such that our time-to-collision search was very accurate for every dynamics model. However, for dynamics where RK4 and the linear collision check are sufficiently accurate over longer time periods (such as an acceleration controlled system), could be increased to greatly improve performance. This is due to the fact that the max number of iterations of the collision search is defined by .
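The early-exit structure of the time-to-collision search can be sketched as follows, assuming the obstacle trajectories have already been pre-computed at the RK4 sample times (all names are illustrative):

```python
import numpy as np

def seg_hit_time(p, v, r, dt):
    """Smallest t in [0, dt] with |p + t*v| <= r, else inf."""
    c = p @ p - r * r
    if c <= 0.0:
        return 0.0
    a, b = v @ v, 2.0 * (p @ v)
    disc = b * b - 4 * a * c
    if a == 0.0 or disc < 0.0:
        return np.inf
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if 0.0 <= t <= dt else np.inf

def ttc_search(robot_pts, obstacles, radius, dt):
    """Check each linear segment against every obstacle before advancing,
    so the search exits at the first segment containing a collision."""
    for k in range(len(robot_pts) - 1):
        best = np.inf
        for obs in obstacles:
            p = robot_pts[k] - obs[k]                    # relative position
            v = ((robot_pts[k + 1] - obs[k + 1]) - p) / dt  # relative velocity
            best = min(best, seg_hit_time(p, v, radius, dt))
        if best < np.inf:
            return k * dt + best  # early exit: first collision found
    return np.inf

# Robot moving +x at 1 m/s toward a static obstacle at (2, 0), radius sum 0.5:
dt = 0.5
robot = [np.array([k * dt, 0.0]) for k in range(6)]
obst = [[np.array([2.0, 0.0])] * 6]
print(ttc_search(robot, obst, 0.5, dt))  # 1.5
```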

In the following sections, the figures are rendered as follows: all agents are colored by their dynamics model, or grey if they are non-reactive to the controlled robots. Velocity agents are colored dark blue, acceleration agents are green, differential drive agents are red, smooth differential drive agents are cyan, simple car agents are yellow, and smooth car agents are orange. For models with orientation in their state, a black arrow shows their forward direction. Simple car and smooth car agents are drawn with both their physical extents and the corresponding collision disks. Goal positions are marked by X marks, with the color corresponding to the dynamics model of the agent. For a video of all robot experiments and planned trajectories, please see Extension 1.

7 Single-Agent Collision Avoidance

A key application of anticipatory collision avoidance is to allow a robot to avoid nearby dynamic obstacles as it moves to its goal. Below, we consider two scenarios in simulation, where the agent is navigating in environments containing non-reactive obstacles.

7.1 Three Obstacle Scenario

In this scenario, a single agent avoids three non-reactive linear velocity obstacles, each with a different velocity, while traversing from left to right. We show results in Figure 8 for both a velocity-controlled agent and an acceleration-controlled agent. Both dynamics models are able to find paths that slip between the first and second obstacles. Note that changing dynamics models from velocity-controlled to acceleration-controlled has a noticeable effect on the robot's path, with the acceleration-controlled robot taking a smoother path due to the bounds on the allowed acceleration. Both robots reach their goals at about the same time (8 s for the acceleration-controlled robot vs 8.1 s for the velocity-controlled one). This is because the acceleration-controlled robot still has a limit on the velocity it can take, implemented as a state constraint.

(a) Velocity Robot

(b) Acceleration Robot
Figure 8: Three-Obstacles: A single robot avoids three non-reactive linear velocity obstacles, while navigating from left to right. In (a), the robot is velocity controlled. In (b), it is acceleration controlled. The trajectories traced by the robot and the obstacles so far are displayed as solid lines. The X marks denote the goal location of each robot.

(a) s

(b) s

(c) s
Figure 12: Smooth Differential Drive Robot Among Random Obstacles: The cyan robot moves to a random goal position, indicated by the colored x, while avoiding a number of grey dynamic obstacles following linear paths. The recent trajectories traced by the robot and the obstacles are displayed with solid lines.

7.2 Random Velocity-Agents Scenario

In a more challenging scenario, an agent attempts to navigate to randomly generated goals while avoiding 40 non-reactive linear velocity obstacles. Time lapses from one trial of this scenario are shown in Figure 12. This is a rather difficult scenario, as the randomly moving agents can box the agent in, creating situations in which collisions are impossible to avoid. Even though such challenging situations are unlikely to happen in real life, they allow us to experimentally evaluate the performance of our method in extreme conditions. Note that this is the only scenario we tested for which our method produces any collisions.

Table 1 shows how our method performs in this scenario for different agent dynamics models. In general, increasing the complexity of the dynamics model increases the average number of colliding frames. Even so, in this very challenging scenario, collisions are very rare, typically occurring in less than 0.5% of frames. Even in the worst average case, where a smooth car-like robot has only ms of optimization time, collisions occur in only 1.3% of the frames. For nearly all of the dynamics models, increasing the optimization time reduces both the average number of colliding frames and the variance in the number of colliding frames. It is important to note, though, that most of the cost improvement occurs within the first 5 ms.


Table 1: Percent of Collision-Free Frames: The average percent and standard deviation (reported in percentage points) of collision-free frames for the scenario in Figure 12, shown for each dynamics model at 1 ms, 5 ms, and 10 ms of optimization time. Experiments were run for 1000 frames, and averaged over 1000 runs per dynamics model.

8 Extending NH-TTC

The NH-TTC framework we propose can easily be extended to tasks beyond a single robot navigating around dynamic obstacles. In Section 8.1 we show how to adapt the cost function to support a robot chasing a dynamic target. In Section 8.2 we show how to extend the framework to support decentralized navigation of multiple controlled robots in a shared environment.

8.1 Dynamic Goals

The goal cost function defined in Equation 5 focuses on a single static goal. However, in many cases robots can have dynamic goals (e.g., chasing a moving target). In such cases, the robot needs to understand the moving nature of its goal to successfully reach it. Our framework can be easily adapted to support such dynamic goals. Simply greedily minimizing the distance to a dynamic goal can result in oscillations around the goal as it moves. To remedy this oscillatory behavior, we can compute the goal cost, , at multiple temporal points, and then average those costs:


This requires the robot not only to reach the goal, but then to stay on top of it as it keeps moving. In effect, it requires the robot’s velocity to synchronize with that of the goal. We show this type of dynamic goal behavior in the following scenario:


In this scenario, a smooth car-like robot is attempting to follow a passive car moving at a constant speed, while a third, slower-moving car has gotten between them. The robot tries to maintain a small following distance behind the lead obstacle, but the trailing obstacle starts too close for the robot to move in between. As such, the robot needs to wait for a gap to open between the obstacles before sliding in. See Figure 13 for an overview.

Figure 13: Car Following: A smooth car-like robot, shown in orange, navigates to follow the lead car on the right.
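The time-averaged goal cost described above can be sketched as follows; the squared Euclidean distance and the choice of sample times are illustrative stand-ins for the goal-distance term of Equation 5:

```python
import numpy as np

def dynamic_goal_cost(robot_pos_fn, goal_pos_fn, times):
    """Average the squared goal distance over several future time points,
    so the optimizer must match the goal's motion, not just touch it once.
    robot_pos_fn / goal_pos_fn map a time to a 2d position (illustrative)."""
    costs = [np.sum((robot_pos_fn(t) - goal_pos_fn(t)) ** 2) for t in times]
    return np.mean(costs)

# A goal moving at 1 m/s along +x:
goal = lambda t: np.array([1.0 * t, 0.0])
matched = lambda t: np.array([1.0 * t, 0.0])  # same velocity: zero cost
lagging = lambda t: np.array([0.5 * t, 0.0])  # half speed: cost grows over time
ts = [0.5, 1.0, 1.5]
print(dynamic_goal_cost(matched, goal, ts),   # 0.0 for the matched robot
      dynamic_goal_cost(lagging, goal, ts))   # positive for the lagging one
```

Averaging over several future times, rather than a single one, is what penalizes a velocity mismatch and suppresses the oscillations around a moving goal.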

8.2 Reciprocity in Multirobot Scenarios

Simply optimizing the cost function in Equation 3 is not the correct behavior when the obstacles are also actively avoiding collisions. In these scenarios, avoiding the full collision at every time step can result in oscillatory behavior. This is due to the fact that each agent tries to resolve the collision by itself, without accounting for the fact that the other agent, by symmetry, is facing the exact same situation. Consequently, once each agent chooses a control that resolves the collision, it will then select a goal-directed control, which will introduce a new collision that needs to be resolved. This pattern of alternating controls continues until the two agents move past each other, leading to jerky motion (see Figure 16a).

(a) No Reciprocity
(b) With Reciprocity
Figure 16: Reciprocity: Variants of a two agent scenario obtained with our framework, where the velocity controlled agents want to swap sides. In (a), reciprocity is disabled and each agent takes full responsibility to resolve the collision. In (b), reciprocity is enabled using Equation 44. This shows the smoothing effect reciprocity has on the resulting trajectories.

A common approach to address this issue is to enable reciprocity between the two agents which allows the agents to share the effort of averting the collision during mutual collision avoidance tasks (van den Berg et al., 2008). In particular, inspired by the approach taken by ORCA (only avoiding half the collision), we only update the control to halfway between the new optimal control and the previous control:


This modification results in significantly smoother paths, while still fully avoiding collisions (see Figure 16b). Even if the obstacle is not avoiding the agent, the agent will still converge to a collision-free control after a few time steps.
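The halfway update itself is a one-liner; the toy loop below shows that repeatedly applying it converges geometrically to the optimizer's control, which is why an agent facing a non-reacting obstacle still reaches a collision-free control after a few time steps (variable names are illustrative):

```python
import numpy as np

def reciprocal_update(u_prev, u_opt):
    """Move only halfway from the previous control toward the new optimum,
    sharing the avoidance effort with the other (also-reacting) agent."""
    return 0.5 * (u_prev + u_opt)

u = np.array([1.0, 0.0])       # previous control
u_star = np.array([0.0, 1.0])  # optimizer's new control
for _ in range(5):             # repeated updates converge toward u_star
    u = reciprocal_update(u, u_star)
print(u)  # close to [0, 1]: the gap halves on every step
```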

In the following set of results, there are multiple agents planning in a decentralized manner.

8.3 2 vs 1 Oncoming

In this scenario, two agents standing on one side of an environment have to move toward a third agent that is standing on the opposite side. As shown in Figure 20a, using the ORCA framework to plan for holonomic, velocity-controlled agents results in the lone agent staying put until the other two agents have moved around it. This is due to the fact that ORCA conservatively approximates the set of forbidden velocities with half-planes, throwing away too many feasible velocities that the agents could have taken. In contrast, by using our subgradient-based optimization framework, all three agents are able to quickly resolve the collisions and reach their goals in a timely manner, as depicted in Figure 20b. In addition, as compared to ORCA and many of its extensions, such as Alonso-Mora et al. (2013) and Bareiss and van den Berg (2015), our approach plans directly in the agent's control space, allowing us to find smooth and collision-free paths for different motion models without being required to cast controls into an intermediate velocity space. As an example, see Figure 20c for trajectories obtained by NH-TTC for differential-drive robots.

(a) Velocity Robots (V) - ORCA (van den Berg et al. (2011b))
(b) Velocity Robots (V) - NH-TTC
(c) Differential Drive Robots (DD) - NH-TTC
Figure 20: 2 vs 1 Oncoming: Two simulated robots move from right to left while a third moves from left to right. All figures show the robots after 4 seconds of simulation. The traces of the robots are shown as colored disks, which are light at their initial positions and dark at their current positions. (a) Using the ORCA framework, the standalone robot is reluctant to move forward until the other two robots have moved around it. (b) In contrast, using NH-TTC, all three robots are able to safely reach their goals in a timely manner. (c) Similar behavior is obtained when NH-TTC plans for differential-drive agents.

8.4 Heterogeneous Circle

To highlight how our approach can handle interactions between heterogeneous agents, i.e., agents that can have different motion models and state spaces, we consider a scenario with five agents, each with a different robot model highlighted in Section 5.2. The agents are attempting to move to antipodal points on a circle. Figure 21 shows the paths taken by each agent. As can also be observed in the companion video, NH-TTC generates controls that lead to collision-free and smooth paths. Note that the jitter in the initial portion of the velocity-controlled agent's path comes from the implicit coordination between the agents. After the first few seconds, once the agents have come to an implicit consensus on the paths to take, the paths are smooth for the rest of the execution.

Figure 21: 5-Agent Circle: Each of the agents attempts to move to its antipodal position on a circle. The dark blue agent is velocity controlled, the green agent is acceleration controlled, the red agent is differential drive controlled, the light blue agent is smooth differential drive controlled, and the yellow agent is a simple car.

9 Analysis and Experimentation

9.1 Physical Robot Results

To test the applicability of our method to real robots, we implemented our framework on three Turtlebot2 robots (a differential drive system). We used an OptiTrack system for position and orientation localization, and used the internal odometry of the robots to get the linear and angular velocities. Each robot communicated its current pose and velocity to the other robots, and asynchronously planned over the latest received state of the world.

(a) Following
(b) Robot Controls
Figure 24: Physical Robots - Following Scenario: One robot attempts to slip between two unresponsive robots (a simulated version is shown in Figure 13). In (b), the controls executed by the controlled robot are shown, as measured by wheel encoders and gyroscope. The robot is able to quickly accelerate and slide between the two agents as soon as an appropriate gap opens up.

We tested the applicability of our approach to physical robots on two scenarios:

  • “Car” Following: Two non-reactive robots moving at constant speeds of 0.2 and 0.15 m/s, respectively, with a single controlled robot having a maximum speed of 0.3 m/s. As in the simulated version of this scenario, we check the goal distance at multiple temporal points in the future to reduce oscillatory behavior.

  • 3-Robot Circle: Three robots simultaneously try to move to antipodal points on a circle.

In all of our physical robot experiments, we left the maximum linear velocity at 0.3 m/s, but reduced the maximum rotational velocity to 0.5 rad/s in order to match the actual limits of the robots used in the experiments. Results from these experiments can be seen in the companion video.

To further study the quality of the generated trajectories, we plotted the controls taken by the robot over time, as measured by its wheel encoders and gyroscope, in Figures 24 and 27. While there is some inherent noise in the measured controls, the robots were able to smoothly achieve their goals without colliding, taking admissible controls that stayed within the given limits in all scenarios tested.

In the car-following scenario, while in motion, the average linear acceleration of the controlled robot was 0.008 and its average rotational acceleration was 0.002. In the first 8s, the robot gradually adapts its linear and angular speed to smoothly slip between the two obstacles. Then, in the next 7s, it starts decelerating in order to align itself with the speed and orientation of the leader obstacle, after which it maintains an almost zero linear and angular acceleration.

(a) 3-Robot Circle
(b) Robots’ Controls
Figure 27: Physical Robots - 3-Robot Circle: Three robots avoid each other while moving to antipodal positions on a circle. The controls executed by each robot, as measured by wheel encoders and gyroscope, are shown in (b), illustrating the ability of the differential drives to smoothly adapt their linear and angular velocities to resolve impending collisions.

In the 3-robot circle scenario, averaged across the times when the three differential drive robots were moving, the average linear acceleration was 0.008 and the average rotational acceleration was 0.011. Here, the robots are able to quickly resolve the collisions within the first 5s, and then smoothly adapt their orientations and accelerate to reach their goals. It is worth noting that the quick rotational velocity changes that the robots exhibit during the first 4s are due to the symmetric nature of the scenario; the robots are on track to arrive at the center of the environment at nearly the same time, and attempt to break the symmetry by implicitly agreeing on whether to perform clockwise or counterclockwise avoidance maneuvers.

9.2 Performance Analysis

We analyze the performance of our NH-TTC approach in the car-following scenario (see Section 8.1) by varying the time that the robot has at its disposal to plan for a new control, as well as the maximum steering angle velocity that the car can attain. Figure 28 reports the total cost of the controlled robot averaged over 1,000 runs for various planning times, ranging from 1 ms to 15 ms, and four distinct control bounds. Overall, as can be seen in the figure, our subgradient descent implementation requires only a small amount of optimization time to start finding low-cost trajectories. Using, for example, a very tight constraint of 1 rad/s on the steering angle velocity, NH-TTC is able to find near-optimal solutions within 5 ms of planning per time step. As we increase the control bounds from 1 rad/s to 20 rad/s, the quality of the trajectories returned by NH-TTC remains nearly unchanged, while the cost obtained still exhibits low variance across different runs. This highlights the ability of our approach to efficiently search the control space regardless of its size. In contrast, a sampling-based control approach would require more and more time to find good trajectories as the range of the robot's valid steering angle increases, making it impractical for real-time planning settings.

Figure 28: Trajectory Cost Over Optimization Time: Performance analysis of NH-TTC for various control bounds in the car-following scenario shown in Figure 13. The plot depicts the sum of the costs of the taken controls as a function of the available planning time for a variety of maximum steering angle velocities, . Solid lines denote averages over 1,000 runs, with each run lasting 300 frames, and shaded regions denote 90% confidence intervals. For a large variety of bounds, our approach is able to quickly find locally optimal solutions exhibiting low variance.

To further show the robustness of our subgradient-based optimization, we test its sensitivity to the initial control, , given as input to Algorithm 1. In particular, we chose a snapshot from the random scenario shown in Figure 32a, where a simple car-like robot is interacting closely with a large number of dynamic obstacles. We ran NH-TTC using five different initial controls, while allowing 5 ms of planning time for the car. Across all five runs, NH-TTC completes between 164 and 185 subgradient descent iterations within the given planning time. Figures 32b-c show the evolution over time of the best cost seen so far and the corresponding control for each of the runs, where the first descent iteration is delayed by 0.3 ms to pre-compute the obstacle trajectories. As highlighted by the cost field in Figure 32c, the car has to solve a complex optimization problem with many local minima. However, while the initial costs across all of the runs have a large spread, NH-TTC is able to quickly converge to similar near-optimal solutions in only 3 ms.

(a) Scenario
(b) Control Cost Over Time
(c) Cost Field
Figure 32: Effect of Initial Control Guess: (a) A simple car navigating to the goal (the yellow X), avoiding the random velocity agents. (b) The log cost of the best control found so far for a variety of initializations over 5 ms of planning. (c) The cost field corresponding to the simple car’s controls. To better show the gradient, values are plotted in log scale with the color corresponding to . The evolution of each optimization run is shown in its corresponding color.

10 Conclusion

In this paper we have proposed NH-TTC, a new generalizable framework for anticipatory collision avoidance. Our method is able to optimize directly in the control space of the robot in an anytime fashion, allowing collision-free trajectories to be computed over very short planning times for a wide variety of robot motion models. To do so, we cast local navigation as a control optimization problem and employ an anticipatory cost function that focuses on the expected future values of robot controls. As such a function is non-convex and suffers from discontinuities, we minimize it using subgradient descent, and use implicit differentiation to capture the dynamics of future collisions for arbitrary motion models.

In essence, our approach can be considered a hybrid between full trajectory optimization methods and local reactive planning methods. Similar to trajectory optimization approaches, we decompose our cost function into two terms: a time-to-collision-based trajectory cost, which in our case is discounted based on the risk of future collisions, and a cost-to-goal term that accounts for the goal progress of the robot. However, analogous to local navigation methods, we define our trajectory by a single control propagated over time, allowing us to quickly resolve collisions with obstacles and further refine our solution in an anytime fashion. To the best of our knowledge, NH-TTC is the first “local” approach that demonstrates high-quality trajectories on a variety of kinematically constrained motion models in real-time, reactive settings.

Limitations: While our approach performs well in many scenarios, there are certain types of motion models and scenarios we cannot properly address. Since we are optimizing a single control, we are unable to operate on unstable systems (such as a humanoid robot), where no single control can be held over a long horizon. In addition, this single-control optimization limits the type of maneuvers we can generate, as we greedily optimize goal distance in addition to finding collision-free controls. Finally, our Polyak-based step size for the search direction (Equation 8) is domain-agnostic, as most of the time we do not know the optimal cost and have to rely on the current best estimate. This inefficiency, along with the optimizations discussed in Section 6, could be improved upon if the dynamics model of the robot is known a priori.

Future Work: We are excited to test the application of NH-TTC to other mobile robot types, especially those having 3D dynamics such as quadrotors. In the future, we would also like to extend NH-TTC to account for motion and sensing uncertainty in the future trajectories of obstacles. Prior work on uncertainty-aware local navigation (Hennes et al., 2012; Forootaninia et al., 2017) can provide some interesting ideas in this direction. Furthermore, we would like to relax some of the assumptions that our framework makes, such as that the robot will maintain a constant control input over a finite time horizon. Finally, we currently fit a single collision disk around each robot. Even though the center of the disk is not necessarily tied to the position of the robot’s state, we may still underestimate the true time-to-collision value between a robot and a given obstacle, which can lead to conservative avoidance maneuvers (e.g., when simulating a simple car-like robot). To address this issue we would like to better approximate the geometry of the robot. A simple approach is to wrap a sequence of disks around the true shape of the robot and determine the minimal time to collision between each disk and a given obstacle. A better alternative may be to compute a tight fitting bounding shape for a robot using the medial axis transform as in Ma et al. (2018).


This work was supported in part by the National Science Foundation under Grants IIS-1748541 and CNS-1544887.


  • Alonso-Mora et al. (2012) Alonso-Mora J, Breitenmoser A, Beardsley P and Siegwart R (2012) Reciprocal collision avoidance for multiple car-like robots. In: IEEE International Conference on Robotics and Automation. pp. 360–366.
  • Alonso-Mora et al. (2013) Alonso-Mora J, Breitenmoser A, Rufli M, Beardsley P and Siegwart R (2013) Optimal reciprocal collision avoidance for multiple non-holonomic robots. In: Distributed Autonomous Robotic Systems. Springer, pp. 203–216.
  • Amato et al. (2016) Amato C, Konidaris G, Anders A, Cruz G, How JP and Kaelbling LP (2016) Policy search for multi-robot coordination under uncertainty. The International Journal of Robotics Research 35(14): 1760–1778.
  • Bareiss and van den Berg (2013) Bareiss D and van den Berg J (2013) Reciprocal collision avoidance for robots with linear dynamics using LQR-obstacles. In: IEEE International Conference on Robotics and Automation. pp. 3847–3853.
  • Bareiss and van den Berg (2015) Bareiss D and van den Berg J (2015) Generalized reciprocal collision avoidance. The International Journal of Robotics Research 34(12): 1501–1514.
  • Bekris and Kavraki (2007) Bekris KE and Kavraki LE (2007) Greedy but safe replanning under kinodynamic constraints. In: IEEE International Conference on Robotics and Automation. pp. 704–710.
  • Bertsekas (1999) Bertsekas DP (1999) Nonlinear programming. Athena scientific Belmont.
  • Bry and Roy (2011) Bry A and Roy N (2011) Rapidly-exploring random belief trees for motion planning under uncertainty. In: IEEE International Conference on Robotics and Automation. pp. 723–730.
  • Denny et al. (2013) Denny J, Morales M, Rodriguez S and Amato NM (2013) Adapting RRT growth for heterogeneous environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 1772–1778.
  • Dutra et al. (2017) Dutra T, Marques R, Cavalcante-Neto JB, Vidal CA and Pettré J (2017) Gradient-based steering for vision-based crowd simulation algorithms. Computer Graphics Forum 36(2).
  • Falcone et al. (2007) Falcone P, Borrelli F, Asgari J, Tseng HE and Hrovat D (2007) Predictive active steering control for autonomous vehicle systems. IEEE Transactions on Control Systems Technology 15(3): 566–580.
  • Fiorini and Shiller (1998) Fiorini P and Shiller Z (1998) Motion planning in dynamic environments using velocity obstacles. International Journal of Robotics Research 17: 760–772.
  • Forootaninia et al. (2017) Forootaninia Z, Karamouzas I and Narain R (2017) Uncertainty models for TTC-based collision avoidance. In: Robotics: Science and Systems.
  • Fox et al. (1997) Fox D, Burgard W and Thrun S (1997) The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine 4(1): 23–33.
  • Fraichard and Asama (2004) Fraichard T and Asama H (2004) Inevitable collision states - A step towards safer robots? Advanced Robotics 18(10): 1001–1024.
  • Fulgenzi et al. (2007) Fulgenzi C, Spalanzani A and Laugier C (2007) Dynamic obstacle avoidance in uncertain environment combining PVOs and occupancy grid. In: IEEE International Conference on Robotics and Automation. pp. 1610–1616.
  • Giese et al. (2014) Giese A, Latypov D and Amato NM (2014) Reciprocally-rotating velocity obstacles. In: IEEE International Conference on Robotics and Automation.
  • Guennebaud et al. (2010) Guennebaud G, Jacob B and Others (2010) Eigen v3. http://eigen.tuxfamily.org.
  • Guy et al. (2009) Guy SJ, Chhugani J, Kim C, Satish N, Lin M, Manocha D and Dubey P (2009) ClearPath: Highly parallel collision avoidance for multi-agent simulation. In: ACM SIGGRAPH/Eurographics Symposium on Computer Animation. pp. 177–187.
  • Hennes et al. (2012) Hennes D, Claes D, Meeussen W and Tuyls K (2012) Multi-robot collision avoidance with localization uncertainty. In: International Conference on Autonomous Agents and Multiagent Systems - Volume 1. International Foundation for Autonomous Agents and Multiagent Systems, pp. 147–154.
  • Jaillet and Siméon (2004) Jaillet L and Siméon T (2004) A PRM-based motion planner for dynamically changing environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 2. pp. 1606–1611.
  • Karaman and Frazzoli (2011) Karaman S and Frazzoli E (2011) Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research 30(7): 846–894.
  • Karamouzas and Guy (2015) Karamouzas I and Guy SJ (2015) Prioritized group navigation with formation velocity obstacles. In: IEEE International Conference on Robotics and Automation. pp. 5983–5989.
  • Karamouzas et al. (2014) Karamouzas I, Skinner B and Guy SJ (2014) Universal power law governing pedestrian interactions. Physical Review Letters 113: 238701.
  • Karamouzas et al. (2017) Karamouzas I, Sohre N, Narain R and Guy SJ (2017) Implicit crowds: Optimization integrator for robust crowd simulation. ACM Trans. Graph. 36(4): 136:1–136:13.
  • Kimmel et al. (2012) Kimmel A, Dobson A and Bekris K (2012) Maintaining team coherence under the velocity obstacle framework. In: International Conference on Autonomous Agents and Multiagent Systems - Volume 1. pp. 247–256.
  • LaValle and Kuffner Jr (2001) LaValle SM and Kuffner Jr JJ (2001) Randomized kinodynamic planning. The International Journal of Robotics Research 20(5): 378–400.
  • Long et al. (2017) Long P, Liu W and Pan J (2017) Deep-learned collision avoidance policy for distributed multiagent navigation. Robotics and Automation Letters 2(2): 656–663.
  • Ma et al. (2018) Ma Y, Manocha D and Wang W (2018) Efficient reciprocal collision avoidance between heterogeneous agents using CTMAT. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, pp. 1044–1052.
  • Mattingley et al. (2011) Mattingley J, Wang Y and Boyd S (2011) Receding horizon control. IEEE Control Systems 31(3): 52–65.
  • Mavrogiannis and Knepper (2018) Mavrogiannis CI and Knepper RA (2018) Multi-agent path topology in support of socially competent navigation planning. The International Journal of Robotics Research.
  • Nedic and Bertsekas (2001) Nedic A and Bertsekas DP (2001) Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization 12(1): 109–138.
  • Nesterov (2003) Nesterov Y (2003) Introductory lectures on convex optimization: A basic course, volume 87. Springer.
  • Olivier et al. (2012) Olivier AH, Marin A, Crétual A and Pettré J (2012) Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers. Gait Posture 36(3): 399–404.
  • Pan et al. (2018) Pan Y, Cheng CA, Saigol K, Lee K, Yan X, Theodorou E and Boots B (2018) Agile autonomous driving via end-to-end deep imitation learning. In: Robotics: Science and Systems.
  • Petti and Fraichard (2005) Petti S and Fraichard T (2005) Safe motion planning in dynamic environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 2210–2215.
  • Platt et al. (2010) Platt R, Tedrake R, Kaelbling LP and Lozano-Pérez T (2010) Belief space planning assuming maximum likelihood observations. In: Robotics: Science and Systems.
  • Polyak (1987) Polyak BT (1987) Introduction to Optimization. Translations Series in Mathematics and Engineering. Optimization Software.
  • Rufli et al. (2013) Rufli M, Alonso-Mora J and Siegwart R (2013) Reciprocal collision avoidance with motion continuity constraints. IEEE Transactions on Robotics 29(4): 899–912.
  • Shor (1998) Shor N (1998) Nondifferentiable Optimization and Polynomial Problems, volume 24. Springer Science & Business Media.
  • Shor (1985) Shor NZ (1985) Minimization methods for non-differentiable functions. In: Springer Series in Computational Mathematics. Springer.
  • Snape et al. (2010) Snape J, van den Berg J, Guy SJ and Manocha D (2010) Smooth and collision-free navigation for multiple robots under differential-drive constraints. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 4584–4589.
  • Snape et al. (2011) Snape J, van den Berg J, Guy SJ and Manocha D (2011) The hybrid reciprocal velocity obstacle. IEEE Transactions on Robotics 27(4): 696–706.
  • van den Berg et al. (2011a) van den Berg J, Abbeel P and Goldberg K (2011a) LQG-MP: Optimized path planning for robots with motion uncertainty and imperfect state information. The International Journal of Robotics Research 30(7): 895–913.
  • van den Berg et al. (2011b) van den Berg J, Guy SJ, Lin M and Manocha D (2011b) Reciprocal n-body collision avoidance. In: Robotics Research: The 14th International Symposium ISRR, Springer Tracts in Advanced Robotics, volume 70. Springer, pp. 3–19.
  • van den Berg et al. (2008) van den Berg J, Lin M and Manocha D (2008) Reciprocal velocity obstacles for real-time multi-agent navigation. In: IEEE International Conference on Robotics and Automation. pp. 1928–1935.
  • van den Berg et al. (2011c) van den Berg J, Snape J, Guy SJ and Manocha D (2011c) Reciprocal collision avoidance with acceleration-velocity obstacles. In: IEEE International Conference on Robotics and Automation. pp. 3475–3482.
  • Wilkie et al. (2009) Wilkie D, van den Berg J and Manocha D (2009) Generalized Velocity Obstacles. In: IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 5573–5578.
  • Williams et al. (2017) Williams G, Wagener N, Goldfain B, Drews P, Rehg JM, Boots B and Theodorou EA (2017) Information theoretic MPC for model-based reinforcement learning. In: IEEE International Conference on Robotics and Automation.
  • Wolinski and Lin (2018) Wolinski D and Lin MC (2018) Generalized warpdriver: Unified collision avoidance for multi-robot systems in arbitrarily complex environments. In: Robotics: Science and Systems.