I Introduction
Multirobot motion planning presents several challenges; key among them are trajectory planning, formation control, and the assignment of vehicles to goal states. When the assignment of vehicles to goal states is fixed beforehand, many methods have been proposed. These include control-theoretic approaches [40, 11], graph-based techniques [14], and optimization methods [27, 2]. Approximations that assist in dimensionality reduction have also been proposed, such as sequential methods [8] and hybrid approaches [10, 37].
When the goal states of multivehicle teams are not set a priori, it is necessary to determine which vehicle to allocate to which terminal goal. There is a large body of literature developing allocation and assignment algorithms [5], and these methods have been applied to engineering problems in diverse forms, including sensor coverage [26], weapon–target assignment [1], and network routing [24]. Dynamics are not considered in this body of work, which commonly studies worst-case system performance under equilibrium conditions using a game-theoretic framework [38, 25, 21]. Without consideration of dynamics, these methods have limited applicability to vehicle coordination problems, and there exists a technical gap relating vehicle–goal allocation and motion planning, as most prior work assumes each is done independently. The critical dependency between assignment and trajectory planning is illustrated in Figure 1.
There have been recent attempts to close this gap, as in [17], but that work only evaluates discrete spatial waypoints on a rectangular grid without considering vehicle dynamics, so the paths generated are not feasible for most physical systems. Morgan et al. [31] generalize this concept to linear, discrete-time dynamics and construct a nonlinear programming problem that is solved with sequential convex programming in conjunction with an auction algorithm [5]. However, both [31] and [17] restrict the cost function to a sum of individual vehicle costs (similar to the cost function presented in [5]), which excludes many vehicle optimization problems of interest, including assembling the desired formation shape in minimal time.

We present how the optimal assignment and trajectory can be found simultaneously from the viscosity solution to a single Hamilton–Jacobi (HJ) PDE, a necessary and sufficient condition for optimality. We assume the vehicles can each have unique linear dynamics. Remarkably, we show that the global viscosity solution can be found as the solution to a linear bottleneck assignment problem (LBAP) [15]. This fact can be utilized to construct a level set method that scales polynomially with the number of vehicles while still ensuring a global optimum.
Our formulation has a close relation to reachability analysis, which can be useful for the design and analysis of heterogeneous systems. When the dynamics of some vehicles differ greatly from the rest, some formations may not be feasible (e.g., a slow-moving vehicle attempting to join a formation with a much faster vehicle). The zero sublevel set of the viscosity solution of a related HJ PDE defines an implicit surface representation of the backwards reachable set [28], and from this one can determine whether the formation can be achieved given the unique collection of vehicle dynamics.
Traditionally, numerical solutions to HJ equations require a dense, discrete grid of the solution space [33, 30]. Computing the elements of this grid scales poorly with dimension, limiting its use to problems of dimension four or less. This exponential dimensional scaling in optimization is sometimes referred to as the “curse of dimensionality” [4, 3]. A recent result [9] presented a numerical method based on the Hopf formula [18] that does not require a grid and can be used to efficiently compute solutions of a certain class of Hamilton–Jacobi PDEs. However, that method applies only to systems whose time-independent Hamiltonian depends solely on the gradient of the solution, and it has limited use for general linear control problems. Recently, the applicable classes of systems were expanded, and generalizations of the Hopf formula have been used to solve optimal linear control problems in high dimensions [19] and differential games as applied to multivehicle, collaborative pursuit–evasion problems [20].

The key contributions of this paper are to formulate the multivehicle coordination problem as a single Hamilton–Jacobi equation, to show that its solution can be found from an LBAP, and to use the generalized Hopf formula to simultaneously solve for the optimal trajectory, vehicle control, and goal assignment. By utilizing the generalized Hopf formula, we create a level set method that finds the viscosity solution of the HJ equation in a computationally efficient manner.
We introduce a multivehicle dynamical system and formulate the coordination problem in Section II. Section III formulates the optimal vehicle coordination problem as the viscosity solution to a single Hamilton–Jacobi PDE. Section IV constructs a level set method for the multivehicle coordination problem and utilizes the generalized Hopf formula as a computational tool to find the viscosity solution to the HJ PDE of the joint, multivehicle system. Section V applies the method to several examples, including a case of planar motion planning with four vehicles. This example shows that assignment and path planning cannot be decoupled: even when only a single vehicle has a different initial condition, a different assignment may be optimal for coordination.
II Problem Formulation
We consider a system that consists of $N$ vehicles, where each vehicle $i \in \{1, \dots, N\}$ has linear dynamics

(1) $\dot{x}_i(s) = A_i x_i(s) + B_i u_i(s)$

for $s \in [0, t]$, where $x_i(s) \in \mathbb{R}^{n_i}$ is the system state and $u_i(s)$ is the control input, constrained to a convex admissible control set $\mathcal{A}_i$. We add the assumption that no eigenvalue of the matrix $A_i$ has a strictly positive real part. We let $x_i(\cdot)$ denote a state trajectory for vehicle $i$ that evolves in time with measurable control sequence $u_i(\cdot)$, according to (1), starting from initial state $x_i$. The trajectory is a solution of (1) in that it satisfies (1) almost everywhere.

II-A Multi-Vehicle Model
For the set of $N$ vehicles, we construct a joint state space with state vector $x = \left(x_1^\top, x_2^\top, \dots, x_N^\top\right)^\top$ and control $u = \left(u_1^\top, u_2^\top, \dots, u_N^\top\right)^\top$, with joint dynamics written as

(2) $\dot{x}(s) = A x(s) + B u(s),$

where $A$ and $B$ are block-diagonal with blocks $A_i$ and $B_i$, respectively, and the joint admissible control set is $\mathcal{A} = \mathcal{A}_1 \times \dots \times \mathcal{A}_N$. We reiterate the above definition of the joint state space with the fact that when the quantities $x$, $u$, $A$, $B$, and $\mathcal{A}$ appear without subscripts, they refer to the joint system in (2), and when subscripts are used, they refer to a specific individual vehicle as defined in (1).
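Because the vehicles are dynamically decoupled, the joint dynamics matrices are block-diagonal in the per-vehicle blocks. As a minimal sketch (the helper function and example matrices below are illustrative, not from the paper), the joint matrices can be assembled as:

```python
import numpy as np

def joint_dynamics(As, Bs):
    """Assemble block-diagonal joint dynamics from per-vehicle (A_i, B_i).

    Illustrative helper: stacks each vehicle's state and control so the
    joint system evolves as x_dot = A x + B u.
    """
    n = sum(A.shape[0] for A in As)   # total state dimension
    m = sum(B.shape[1] for B in Bs)   # total control dimension
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    r = c = 0
    for Ai, Bi in zip(As, Bs):
        ni, mi = Ai.shape[0], Bi.shape[1]
        A[r:r + ni, r:r + ni] = Ai    # vehicle block on the diagonal
        B[r:r + ni, c:c + mi] = Bi
        r += ni
        c += mi
    return A, B

# Example: a single integrator joined with a double integrator.
A1, B1 = np.array([[0.0]]), np.array([[1.0]])
A2, B2 = np.array([[0.0, 1.0], [0.0, 0.0]]), np.array([[0.0], [1.0]])
A, B = joint_dynamics([A1, A2], [B1, B2])
```

Heterogeneous state and control dimensions pose no difficulty, since each vehicle occupies its own diagonal block.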
II-B Vehicle Coordination
We assume there exists a set of $N$ goals, and for each vehicle $i$ we associate closed convex goal sets $\mathcal{G}_i^1, \dots, \mathcal{G}_i^N$, with the understanding that $x_i \in \mathcal{G}_i^j$ means that vehicle $i$ is assigned to goal $j$. The objective is to have one vehicle at each goal; it does not matter which vehicle is at which goal. This objective can be expressed by the requirement that the multivehicle state, $x$, belongs to the following goal set

(3) $\Omega = \left\{ x : \exists\, \sigma \in \mathcal{P} \text{ such that } x_i \in \mathcal{G}_i^{\sigma(i)} \text{ for all } i \right\},$

where $\mathcal{P}$ denotes the set of all permutations of $\{1, \dots, N\}$. We represent $\Omega$ implicitly with the function $J$ such that

(4) $J(x) \le 0 \iff x \in \Omega,$
and use it to construct a cost functional for the system trajectory $x(\cdot)$, given terminal time $t$, as

(5) $K\left(x(\cdot), u(\cdot)\right) = \int_0^t L\left(u(s)\right) ds + J\left(x(t)\right),$

where the function $L$ represents the rate at which cost is accrued over time. The value function is defined as the minimum cost among all admissible controls for a given initial state $x$, with

(6) $\varphi(x, t) = \min_{u(\cdot)} K\left(x(\cdot), u(\cdot)\right).$
III Hamilton–Jacobi Equations for Optimal Coordination
The value function in (6) satisfies the dynamic programming principle [6, 12] and also satisfies the following initial value Hamilton–Jacobi (HJ) equation, with $\varphi$ being the viscosity solution of

(7) $\frac{\partial \varphi}{\partial s}(x, s) + H\left(x, \nabla_x \varphi(x, s)\right) = 0, \qquad \varphi(x, 0) = J(x),$

for $s \in [0, t]$, where the Hamiltonian is defined by

(8) $H(x, p) = \max_{u \in \mathcal{A}} \left\{ \langle -A x - B u,\, p \rangle - L(u) \right\},$

where $p$ in (8) denotes the costate. We denote by $p(\cdot)$ the costate trajectory, which can be shown to satisfy the characteristic (adjoint) equation of (7) almost everywhere, with the initial costate denoted by $p(0)$. With a slight abuse of notation, we will hereafter use $x(\cdot)$ to denote a state trajectory when the initial state and control sequence can be inferred through context.
III-A System Hamiltonian
We consider a time-optimal formulation with $L = c_{\mathcal{A}}$, where $c_{\mathcal{A}}$ is the characteristic function of the set of admissible controls, defined by

$c_{\mathcal{A}}(u) = \begin{cases} 0 & \text{if } u \in \mathcal{A}, \\ +\infty & \text{otherwise.} \end{cases}$

In this case, the integral term in (5) disappears (as long as $u(s)$ remains in $\mathcal{A}$), and $K$ is simply the value of $J$ at the terminal time, $t$. For this problem, $\varphi(x, t) \le 0$ means that there exists a feasible trajectory of duration $t$ connecting the state $x$ to the goal set; whereas $\varphi(x, t) > 0$ means that such trajectories do not exist under the system dynamics and initial conditions. Since each vehicle has independent dynamics, the Hamiltonian in (8) is of the form

(9) $H(x, p) = \sum_{i=1}^{N} H_i(x_i, p_i),$

where each vehicle’s Hamiltonian is given by

(10) $H_i(x_i, p_i) = \max_{u_i \in \mathcal{A}_i} \langle -A_i x_i - B_i u_i,\, p_i \rangle.$
III-B Linear Bottleneck Assignment
Our goal is solving the Hamilton–Jacobi PDE in (7), but any $J$ that satisfies (4) is, in general, nonconvex and presents numerical challenges. We show that we can overcome this challenge by alternatively solving for the global value function with the following linear bottleneck assignment problem [15]:

(11) $\varphi(x, t) = \min_{\sigma \in \mathcal{P}} \max_{i \in \{1, \dots, N\}} \varphi_i^{\sigma(i)}(x_i, t),$

where $\varphi_i^j$ is the viscosity solution to the single-vehicle HJ equation

(12) $\frac{\partial \varphi_i^j}{\partial s}(x_i, s) + H_i\left(x_i, \nabla_{x_i} \varphi_i^j(x_i, s)\right) = 0, \qquad \varphi_i^j(x_i, 0) = J_i^j(x_i),$

with Hamiltonian defined by (10). The function $J_i^j$ is an implicit surface representation of $\mathcal{G}_i^j$ such that

(13) $J_i^j(x_i) \le 0 \iff x_i \in \mathcal{G}_i^j.$
The solution to (11) can be found using an appropriate linear bottleneck assignment algorithm (for example, [7, Section 6.2]), requiring only $N^2$ evaluations of lower-dimensional viscosity solutions and avoiding computation involving all $N!$ permutations of vehicle–goal pairs. Also of note is that if each HJ equation in (12) has convex initial data, this enables the use of computationally efficient techniques that can guarantee convergence to the appropriate viscosity solution. Additionally, since each of the $N^2$ solutions is independent, the solutions can be computed in parallel.
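For intuition on the bottleneck objective, the following sketch solves a tiny LBAP instance by exhaustive enumeration (the cost values are hypothetical, and real implementations would use the polynomial-time threshold algorithms cited above rather than enumerating all N! assignments):

```python
import itertools

def lbap_bruteforce(cost):
    """Minimize over assignments sigma the bottleneck max_i cost[i][sigma(i)].

    Illustrative only: enumerates all N! assignments, whereas the threshold
    algorithms cited in the text run in polynomial time.
    """
    n = len(cost)
    best_val, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        bottleneck = max(cost[i][perm[i]] for i in range(n))
        if bottleneck < best_val:
            best_val, best_perm = bottleneck, perm
    return best_val, best_perm

# Hypothetical 3-vehicle instance: cost[i][j] plays the role of the
# single-vehicle viscosity solution for vehicle i and goal j at a fixed time.
cost = [[4.0, 2.0, 8.0],
        [3.0, 7.0, 5.0],
        [9.0, 6.0, 1.0]]
val, perm = lbap_bruteforce(cost)  # vehicle i is assigned goal perm[i]
```

Only the N-by-N table of single-vehicle values enters the assignment step, which is what allows the per-vehicle solutions to be computed independently and in parallel.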
We introduce a set of mild regularity assumptions that guarantee the Hamilton–Jacobi equations have a unique viscosity solution [41, Chapter 7, p. 63]:

Assumption 1: Each Hamiltonian $H_i$ is continuous.

Assumption 2: There exists a constant $C > 0$ such that for all $x_i$ and for all $p_i, q_i$, the inequalities $\left| H_i(x_i, p_i) - H_i(x_i, q_i) \right| \le C \left( 1 + \| x_i \| \right) \| p_i - q_i \|$ and $\left| H_i(x_i, 0) \right| \le C$ hold, with $C$ independent of $x_i$, $p_i$, and $q_i$.

Assumption 3: For any compact set $\mathcal{K}$, there exists a constant $C_{\mathcal{K}} > 0$ such that for all $x_i, y_i$ and for all $p_i \in \mathcal{K}$, the inequality $\left| H_i(x_i, p_i) - H_i(y_i, p_i) \right| \le C_{\mathcal{K}} \| x_i - y_i \|$ holds, with $C_{\mathcal{K}}$ depending only on $\mathcal{K}$.

Assumption 4: Each terminal cost function $J_i^j$ is continuous.
Lemma 1
If each vehicle Hamiltonian, $H_i$, meets Assumptions 1–3, then the Hamiltonian defined in (9) also meets Assumptions 1–3.
The proof is given in the appendix.
Theorem 2
Under Assumptions 1–4, each $\varphi_i^j$ is the unique viscosity solution to (12) for all $i$ and $j$, and there exists a $J$ that satisfies (4) such that, with the Hamiltonian given by (9), (11) is a viscosity solution to (7).
We will prove the theorem constructively by proposing a particular $J$ given as

(14) $J(x) = \min_{\sigma \in \mathcal{P}} \hat{J}_\sigma(x),$

with

(15) $\hat{J}_\sigma(x) = \max_{i \in \{1, \dots, N\}} J_i^{\sigma(i)}(x_i),$

where $J_i^j$ is the implicit representation of $\mathcal{G}_i^j$, defined in (13). We see that for any $\sigma$, $\hat{J}_\sigma(x) \le 0$ when $x_i \in \mathcal{G}_i^{\sigma(i)}$ for all $i$, which implies $J(x) \le 0$ for all $x \in \Omega$. We also see that if $x \notin \Omega$, then for every $\sigma$ there exists an $i$ such that $x_i \notin \mathcal{G}_i^{\sigma(i)}$, so $\hat{J}_\sigma(x) > 0$ for every $\sigma$, which implies $J(x) > 0$. Therefore the proposed $J$ in (14) satisfies (4) and is an implicit surface representation of the set $\Omega$. Furthermore, since by assumption each $J_i^j$ is continuous, each $\hat{J}_\sigma$ is continuous, as the max of a finite number of continuous functions is continuous. This implies $J$ is continuous, as the minimum of a finite number of continuous functions is continuous. Therefore $J$ satisfies Assumption 4.
From Lemma 1, the system Hamiltonian defined in (9) meets Assumptions 1–3, which implies that (7) has a unique viscosity solution, denoted $\varphi$, when the initial cost function is given by (14) [41, Theorem 8.1, p. 70]. The uniqueness of the solution implies that the viscosity solution is equivalent to the value function in (6). It follows that

(16)

holds, since the right-hand side corresponds to a particular admissible choice. Denoting by $u^\ast(\cdot)$ the control that optimizes (6) and substituting, we have

(17)

where we use the notation $\left[\,\cdot\,\right]_i$ to represent the $i$th block of a vector. Likewise, (12) has a unique viscosity solution, $\varphi_i^j$, for each $i$ and $j$ with initial cost given by $J_i^j$, and as such

(18)

for each $i$, with $u_i^\ast(\cdot)$ denoting the control that optimizes the corresponding single-vehicle problem. Note that a strict inequality in one direction would contradict that $u_i^\ast(\cdot)$ is the optimal single-vehicle control, while a strict inequality in the other direction would contradict that $u^\ast(\cdot)$ is the optimal control of the entire system. Therefore equality holds, the value function in (6) reduces to the linear bottleneck assignment in (11), and we arrive at our result.
IV A Level Set Method with the Generalized Hopf Formula
First, we introduce the Fenchel–Legendre transform of a convex, proper, lower semicontinuous function $f$, denoted $f^\star$, which is defined as [16]

(19) $f^\star(p) = \sup_{x} \left\{ \langle x, p \rangle - f(x) \right\}.$
We propose to find the viscosity solutions of (12) using the generalized Hopf formula.
Theorem 3
Each viscosity solution of (12) can be found from the formula
(20)  
with
(21) 
Proceeding similarly to [20], we apply a change of variables to (12) with

(22)

which results in the following system

(23)

By utilizing this change of variables, we construct an equivalent HJ equation

(24)

From Lemma 5 (given in the appendix), the transformed Hamiltonian meets Assumptions 1–3, and by the composition rule the transformed initial data is continuous [39, Theorem 4.7] and meets Assumption 4. Therefore (24) has a unique viscosity solution, equivalent to the cost functional of the transformed problem, whose trajectories satisfy (23) almost everywhere. Since the change of variables reduces to the identity at the terminal time, the transformed and original value functions agree there. Thus, we can find the viscosity solution of (12) by finding the viscosity solution to (24).
Since $\mathcal{A}_i$ is assumed closed and convex, the Hamiltonian is convex in the costate, and by Assumption 4 the initial data is continuous. By Assumption 1, the Hamiltonian is continuous, and therefore (20) gives an exact, pointwise viscosity solution of (24) [23, Section 5.3.2, p. 215]. Formula (20) shows that we can compute a viscosity solution to (12) by solving a finite-dimensional optimization problem. This avoids constructing a discrete spatial grid and is numerically efficient to compute even when the dimension of the state space is large. Additionally, no spatial derivative approximations are needed with Hopf formula-based methods, which eliminates the numeric dissipation introduced by the Lax–Friedrichs scheme [34], a scheme necessary to maintain numeric stability in grid-based methods.
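For intuition, consider the classic special case of the Hopf formula with a time-independent Hamiltonian and a one-dimensional integrator, where no change of variables is needed. The sketch below (all numeric values hypothetical) evaluates the formula with initial data $J(x) = |x|$, whose conjugate is the indicator of $[-1, 1]$, and speed bound `ubar`; a dense grid over the costate stands in for the nonlinear program used in the text:

```python
import numpy as np

def hopf_value(x, t, ubar, num=2001):
    """Classic Hopf formula for phi_t + ubar*|phi_x| = 0, phi(x, 0) = |x|.

    phi(x, t) = sup_{|p| <= 1} ( x*p - t*ubar*|p| ), since the conjugate of
    J(x) = |x| is the indicator of the interval [-1, 1].
    """
    p = np.linspace(-1.0, 1.0, num)
    return float(np.max(x * p - t * ubar * np.abs(p)))

# The analytic viscosity solution is max(|x| - ubar*t, 0): the zero sublevel
# set grows outward from the origin at speed ubar.
val = hopf_value(3.0, 1.0, 2.0)  # analytic value: 3 - 2*1 = 1
```

Note that no spatial grid over $x$ is ever built: the optimization is over the costate alone, queried pointwise at whatever $(x, t)$ is of interest.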
IV-A Numerical Optimization of the Hopf Formula
We transcribe the Hopf formula into a nonlinear programming problem by approximating the integral in (20) with an $M$-point quadrature rule sampled on a time grid $\{s_1, \dots, s_M\} \subset [0, t]$ with corresponding weights $\{w_1, \dots, w_M\}$. Additionally, we make a simple, invertible change of variable; substituting into (20) results in the following unconstrained optimization problem that solves (12):

(25)

with the objective now being defined as

(26)

The variable transformation is done so that when the matrix $A$ has at least one eigenvalue with strictly negative real part, we avoid computing a matrix exponential with exponentially unstable poles, which would arise from evaluating the matrix exponential of the negated dynamics. This divergence would cause sensitivity in the evaluation of the Hopf formula in (20) with respect to small changes of the decision variable. By utilizing the variable transformation and optimizing (25), we avoid this sensitivity.
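The conditioning issue can be seen numerically. Assuming for illustration a Hurwitz matrix $A$ (all eigenvalues with strictly negative real part), the matrix exponential of $-tA$ grows without bound in $t$ while that of $tA$ decays; the small sketch below (eigendecomposition in place of a dedicated `expm` routine, matrix values hypothetical) demonstrates the gap:

```python
import numpy as np

def expm_via_eig(M):
    """Matrix exponential via eigendecomposition (M assumed diagonalizable)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# Hypothetical Hurwitz dynamics matrix with eigenvalues -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 20.0
growth = np.linalg.norm(expm_via_eig(-t * A))  # e^{-tA}: blows up with t
decay = np.linalg.norm(expm_via_eig(t * A))    # e^{tA}: well conditioned
```

Any optimization objective containing the first quantity is extremely sensitive to its inputs at long horizons, which is exactly what the change of variable removes.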
Remark 4
Often the expression in (21) is known in closed form, and we present one such common case. Recall that $f^\star$ denotes the Fenchel–Legendre transform of a function $f$, defined in (19), and suppose the admissible control set is the closed convex set

$\mathcal{A}_i = \left\{ u : \| u \| \le 1 \right\},$

where $\| \cdot \|$ is any norm. Then the support function of this set defines a norm, which we denote $\| \cdot \|_\ast$, the dual norm to $\| \cdot \|$ [16]. From this we write the Hamiltonian in closed form as

(27)
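As a numerical sanity check of the dual-norm identity (values hypothetical; the support function of the unit $\infty$-norm ball is the 1-norm), the conjugate of the ball's indicator can be evaluated by maximizing the inner product over the ball's extreme points:

```python
import itertools
import numpy as np

def ball_indicator_conjugate(p, extreme_points):
    """Conjugate of the indicator of a set: sup over the set of <p, u>.

    For a norm ball the supremum of a linear function is attained at an
    extreme point, so enumerating the corners of the infinity-norm unit
    ball is exact here.
    """
    return max(float(np.dot(p, u)) for u in extreme_points)

# Corners of the unit infinity-norm ball in R^2: (+/-1, +/-1).
corners = [np.array(c) for c in itertools.product([-1.0, 1.0], repeat=2)]
p = np.array([3.0, -2.0])
approx = ball_indicator_conjugate(p, corners)
exact = float(np.linalg.norm(p, 1))  # dual of the infinity norm is the 1-norm
```

The same identity is what lets the quadrature terms in the transcribed problem be evaluated in closed form rather than by an inner maximization.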
IV-B Time-Optimal Control to a Goal Set
The task of determining the control that drives the system into $\Omega$ in minimal time can be accomplished by finding the smallest $t$ such that

(28) $\varphi(x, t) \le 0.$

When the system is constrained controllable, the set of times for which $\Omega$ is reachable with respect to the admissible controls contains an open interval $(t^\ast, \infty)$ [13]. This ensures that if the state $x$ is outside the set $\Omega$ initially, then $\varphi(x, 0) > 0$ and there exists a time $t^\ast$ such that $\varphi(x, t) \le 0$ for all $t \ge t^\ast$. This implies standard root-finding algorithms can be used to find $t^\ast$. As noted in [19], we solve for the minimum time to reach the set by constructing a Newton iteration, starting from an initial guess, $t_0$, with

(29) $t_{k+1} = t_k - \frac{\varphi(x, t_k)}{\frac{\partial \varphi}{\partial t}(x, t_k)},$

where $\varphi(x, t_k)$ is the solution to (25) at time $t_k$. The value function must satisfy the HJ equation, so the time derivative in (29) can be evaluated as the negative of the Hamiltonian at the minimizing argument of (25). We iterate until convergence at the optimal time to reach, which we denote $t^\ast$. This process is summarized in Algorithm 1.
The optimal control and trajectory for each vehicle are found directly from the necessary conditions of optimality established by Pontryagin’s principle [35], by noting that the optimal trajectory must satisfy the system dynamics driven by the optimal control, where the optimal costate trajectory is given by the adjoint dynamics initialized at the minimizing argument of (25). This implies our optimal control is

(30)

for all time, provided the gradient exists.
V Results
We present results for two examples. The first is a toy example of two vehicles, each with differing single integrator dynamics. This example allows easy visualization of the level set propagation, and its solution can be verified by comparison with methods such as [29], which cannot be applied to the higher-dimensional second example that follows. The LBAP was solved using [7, Algorithm 6.1], and the optimization used sequential quadratic programming [32, Chapter 18] with initial conditions set for each vehicle. To prevent a singular control condition, a slight smoothing was applied to the Hamiltonians using [36] with a small smoothing parameter.
V-A Toy Problem
We present a two-vehicle problem in which each vehicle has single integrator dynamics with its own control bound, and the states are the linear positions of the respective vehicles. The level set contours of the joint space are shown in Figure 2 for ten equally spaced time samples. Notice that this seemingly simple heterogeneous system can give rise to counterexamples for the assignment formulations given in [31], where the sum of vehicle distances is used as the assignment metric, and [17], where the time of arrival of the vehicles is used. Take the initial state shown in Figure 2. Under the time-of-arrival metric, one assignment appears to be the clear choice; however, the opposite assignment is the global optimum in this example, as both vehicles reach their desired goal states at the time when the zero level set of the global value function intersects the initial state, giving the minimum time-to-reach. For the distance metric, the misassignment is even more pronounced.
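The counterexample structure can be reproduced with a hypothetical travel-time matrix (the numbers below are illustrative, not the paper's): a sum-of-costs assignment and the bottleneck assignment can disagree, and only the bottleneck metric reflects when the formation is actually complete:

```python
# Hypothetical travel-time matrix (illustrative numbers only): t[i][j] is
# the minimum time for vehicle i to reach goal j under its own dynamics.
t = [[1.0, 4.0],
     [4.0, 5.0]]

# Assignment "direct": vehicle 1 -> goal 1, vehicle 2 -> goal 2.
# Assignment "swapped": vehicle 1 -> goal 2, vehicle 2 -> goal 1.
sum_direct = t[0][0] + t[1][1]       # 6.0: preferred by a sum-of-costs metric
sum_swapped = t[0][1] + t[1][0]      # 8.0
max_direct = max(t[0][0], t[1][1])   # 5.0: formation complete at slowest arrival
max_swapped = max(t[0][1], t[1][0])  # 4.0: preferred by the bottleneck metric
```

The formation is assembled only when the last vehicle arrives, so the swapped assignment finishes a full time unit earlier even though its summed cost is larger.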
V-B Planar Motion
We choose, for each vehicle $i$, linear dynamics whose state consists of the spatial position and velocity of the robot. The control is constrained to lie in a 2-norm ball. The robots are tasked with reaching the goal formation, and coming to rest, in minimum time. The goal sets each have a fixed radius, with centers located at four spatial positions. Since the 2-norm is self-dual, the Hamiltonian for each vehicle follows in closed form from (27). Since $A$ contains eigenvalues with negative real part, we optimize using the variable transformation of Section IV-A. The initial conditions of the vehicles were fixed, except where noted below. Only a few iterations of (29) were needed to solve for the minimum time to reach the goal formation. Figures (a) and (b) show the optimal paths found from (30). Figure (c) shows the optimal paths when the initial condition for vehicle 4 was changed, and a different optimal assignment results.
VI Conclusions and Future Work
We presented how to formulate vehicle coordination problems with unknown goal assignments as the viscosity solution to a single Hamilton–Jacobi equation. We showed that the solution of this single HJ PDE is equivalent to decomposing the problem and performing a linear bottleneck assignment using the viscosity solutions of independent single-vehicle problems. This allows quadratic computational scaling in the number of vehicles. Finally, a level set method based on the generalized Hopf formula was presented for efficient computation of the vehicle value functions, each of which can be computed in parallel. The Hamilton–Jacobi formulation presented has other advantages for multirobot systems, such as the compensation of time delays, which can be induced in several ways including computation, sensing, and inter-robot communication; see, for example, [22].
Future work includes expanding the class of allowable vehicle dynamics from general linear models to certain types of nonlinear dynamics, and allowing for dependencies between vehicles both in the dynamics models and in the cost functional.
Appendix
Lemma 5
Given that $H_i$ satisfies Assumptions 1–3, the transformed Hamiltonian, $\hat{H}_i$, appearing in (24) also satisfies Assumptions 1–3.
From Assumption 1, the transformed Hamiltonian, $\hat{H}_i$, is continuous by the composition rule [39, Theorem 4.7], and since $\hat{H}_i$ does not depend on the state, it trivially meets Assumption 3. It remains to show that $\hat{H}_i$ meets Assumption 2. We know Assumption 2 holds for $H_i$ for all states and all pairs of costates; therefore the corresponding bound holds for $\hat{H}_i$ with the costates mapped through the matrix exponential of the transformation. When $s$ is restricted to the bounded interval $[0, t]$, the norm of this matrix exponential is bounded by a constant for any $s$. Since this bound holds for all $s \in [0, t]$, it follows that Assumption 2 is met.