I. Introduction
On-road autonomous vehicles require obstacle motion prediction to efficiently plan future trajectories. Though motion planning has been extensively studied in recent decades, safe planning with assurance in highly uncertain environments remains challenging, especially for safety-critical systems such as autonomous vehicles. One problem arises from the highly uncertain trajectories of surrounding vehicles due to sensing, localization, maneuver, or intention uncertainties. Probabilistic approaches using Extended Kalman Filters (EKFs) or Unscented Kalman Filters (UKFs) can be leveraged for state estimation and prediction for target vehicles. However, they work best under the assumption of a Gaussian distribution, which does not always hold in practice. Although particle filters are proven effective against arbitrary distributions, determining the number of particles required to guarantee convergence is tricky and sometimes computationally heavy. More importantly, they are only capable of providing statistical guarantees in belief space. The Constrained Iterative Linear Quadratic Regulator (CILQR) has been proposed for collision avoidance for autonomous vehicles
[cILQR, cILQR2]. Unfortunately, previous work only validates the optimization framework with deterministic trajectories or persistent predictions for target vehicles. The first research question raised in this study is how to incorporate a sophisticated prediction for target vehicles into the Iterative Linear Quadratic Regulator (ILQR) framework and assure safe planning under prediction uncertainty. The second challenge the existing CILQR framework faces is the tuning of the cost function weights, which heavily influence the planned trajectory. For instance, if the reference tracking term in the cost function dominates the velocity term, a lane-keeping behavior behind a slow vehicle is encouraged. If the dominance reverses, overtaking behavior is encouraged instead. Unfortunately, there is no study of the weight tuning problem in CILQR. The second research question we aim to address is how to design a scenario-aware mechanism to adaptively generate appropriate weights for diverse scenarios.
For the first question, we use reachability analysis to enclose all kinematically possible trajectories of a target vehicle in an envelope represented by a set, considering the uncertainty of its initial states (position, yaw, etc.). We allow the target vehicle to behave non-deterministically over the short-term prediction horizon. Using this envelope, a motion planner can safely avoid all possible extreme scenarios in the short term. However, as the size of the envelope grows with time, it might block the entire traversable path, which is why it is only used for short-term prediction. To avoid being overly conservative and to avoid local optima, an adaptive least squares filter is used as the long-term predictor. This component can easily be substituted by more sophisticated and more accurate predictors. Our optimization framework solves the collision avoidance problem in terms of occupancy from the short-term and the long-term prediction in a unified manner. To address the second question, we design a scenario-based analytical function that takes the states of the ego vehicle and the target vehicle as input and produces the weights of a cost function as output. It allows the ego vehicle to execute multiple behaviors (such as lane keeping and overtaking) under a single planner.
Our work makes the following contributions:

We incorporate a hybrid prediction spanning short-term and long-term horizons into a safety-assured and efficient planner for highly uncertain surrounding traffic environments.

We propose an analytical function based on the state difference between the ego and target vehicles and the target vehicle velocity. This function increases the generality of a single planner under the same set of cost function weights.

The whole framework achieves real-time performance, with an average computation time of around 65 ms over a 4 s prediction horizon in simulations with a 0.1-second discretization step.
The rest of this paper is organized as follows. Section II reviews important related work on motion planning methods and prediction. Section III details the proposed framework and the necessary background material. Section IV presents the problem formulation, including the cost function and vehicle model. Section V presents the experimental results. Conclusions and future work are given in Section VI.
II. Related Work
II-A. Trajectory optimization-related
Trajectory optimization can be formulated as a constrained quadratic program [rw1]. Ziegler et al. [benz] used Sequential Quadratic Programming (SQP) to solve the nonlinear and nonconvex problem, but the computation time is around 0.5 s, which is not suitable for real-time implementation. Xu et al. [rw3] proposed an efficient real-time motion planner with trajectory optimization that discretizes the plan space and selects the best trajectory based on a cost function.
Control methods such as Model Predictive Control (MPC) have also been used for this purpose. Two-layer MPC [2mpc], which reduces the computation cost compared to one-layer MPC, has been proposed for motion planning with obstacle avoidance. Arab et al. [arab2016motion] presented a Sparse-RRT* and nonlinear-MPC-based motion planner for aggressive maneuvers. Ji et al. [mmpc] presented a path planning and tracking framework using 3D potential fields and multi-constrained MPC. Borrelli et al. [borrelli2005mpc] proposed a nonlinear MPC to stabilize the vehicle along a desired path, but there is a trade-off between planning speed and the prediction horizon. Heavy computation still stands as a barrier to nonlinear MPC.
II-B. ILQR-related
The iterative linear quadratic regulator (ILQR) is an optimization-based method for nonlinear systems with lower computation time thanks to dynamic programming. However, very few works have considered constraints in ILQR. Control-limited differential dynamic programming (DDP), proposed in [lqr1], deals with upper- and lower-bound constraints on control inputs. Extended LQR [lqr2, lqr3] handles collision avoidance for circular obstacles by penalizing the distance from the center of the obstacle. Chen et al. proposed a constrained ILQR for autonomous vehicle motion planning [cILQR]. It includes an elliptical fit that over-approximates obstacles and utilizes barrier functions to incorporate various constraints into the cost function. However, its test cases do not demonstrate the ability to handle target vehicles with lateral motion, nor can it use the same set of weights to perform both overtaking and lane-keeping behaviors over a wide variety of scenarios. Chen et al. [cILQR2] proposed an improved barrier function to impose hard constraints on the optimization process (i.e., the ego vehicle will not traverse into infeasible regions). However, it still does not solve the weight-tuning problem. Another significant drawback of [cILQR, cILQR2] is that they only consider deterministic trajectories or persistent predictions for target vehicles.
II-C. Prediction-related
Readers are referred to a comprehensive literature review of trajectory prediction in Ref. [lefevre2014survey]. Althoff et al. perform online verification of a planned trajectory using reachability analysis [althoff2014online], i.e., “planning-then-verification”. They propagate the reachability of an ego vehicle and its surrounding vehicles simultaneously over the whole planning horizon to check for overlap. A replanning step is needed if the path is verified to be unsafe. In our framework, by contrast, the reachability is plugged into the optimization, i.e., the planned trajectory is already guaranteed to be safe to execute without replanning. In addition, we argue for using a short-term horizon instead of the whole planning horizon for safety, to avoid over-conservative planning. We are inspired by the spirit of combining short-term and long-term planning in Ref. [liu2018robot]. They use two-layer planners, whereas we propose a single planner that combines short-term and long-term planning in a unified framework to improve efficiency. The differences are analyzed in detail in the methodology section.
III. Methodology
In this section, we discuss the sequential framework containing a mixture of short-term and long-term target trajectory predictions using reachability analysis and an adaptive, constrained ILQR optimization-based motion planner.
III-A. Prediction of dynamic obstacles
The idea of hybrid prediction and planning is inspired by Liu et al. [liu2018robot]. The authors proposed combining a safety-oriented short-term planner and an efficiency-oriented long-term planner for safe human-robot interaction. The short-term planner considers the prediction uncertainty of the human trajectory, which is critical for safety. The disadvantage is that the uncertainty propagates over time and makes the planner excessively conservative. The prediction uncertainty is not considered in the long-term planner for the sake of efficiency. In every iteration, the overlap between the long-term planned trajectory and the short-term reachable region is checked. The long-term plan is executed if no intersection is detected, i.e., the long-term planner is already safe; otherwise, they switch to the short-term planner. To further improve efficiency, we propose to optimize the short-term and long-term collision avoidance at the same time in a unified framework, without two separate planners and extra intersection checking. A high-level illustration is shown in Figure 1. The short-term prediction considers the uncertainty of the target vehicle's initial state (e.g., perception error in the localization) and the uncertainty of its control actions over the short-term prediction horizon under a kinematically feasible but possibly non-deterministic assumption. The reachable positions are illustrated as rectangles. Note that they will be further inflated into ellipses to account for the vehicle's shape. The long-term prediction does not consider uncertainty; each predicted mass-point position is also inflated into an ellipse. Another significant difference is that Liu et al. use a tailored convex optimization framework leveraging the safe set and the convex feasible set.
III-A.1 Short-term prediction with reachability analysis
In this paper we use a prototype continuous model checker based on the Flow* library [chen2013flow], a nonlinear reachability analysis tool. Given the uncertainty of the states and a prediction horizon, the tool computes the reachable states. All possible future trajectories fall within the reachable region with a theoretical guarantee, since the reachability is an over-approximation of the dynamics.
The dynamics of a plant are defined as ordinary differential equations (ODEs), i.e., $\dot{x} = f(x)$. The reachability analysis essentially solves an initial value problem: given an interval $X_0$ of initial states at time 0 and a time interval $[0, T]$, we compute an enclosure of the states reachable from $X_0$ over $[0, T]$. Solving this directly requires the explicit solution of the ODEs, which is only possible in simple cases. Instead, Flow* does not solve for the explicit form; iterative algorithms derive a Taylor model, consisting of a polynomial and a safe remainder, to over-approximate the solution. Implementation details can be found in Ref. [chen2012taylor].
Definition III.1
(Taylor Model): A Taylor Model (TM) of order $k$ over a domain $D$ is denoted by a pair $(p, I)$, wherein $p$ is a Taylor polynomial of degree at most $k$, and $I$ is a remainder interval. Given a TM $(p, I)$ and a function $f$, both over the same domain $D$, $f$ is over-approximated by $(p, I)$, denoted by $f \in (p, I)$, i.e., $\forall x \in D: f(x) \in p(x) + I$.
Definition III.2
(Taylor Polynomial) Given a $k$-times differentiable multivariate function $f$ over the domain $D \subseteq \mathbb{R}^n$, the order-$k$ Taylor approximation of $f$ expanded at the center $x_0 \in D$ is called a Taylor polynomial:
$$p(x) = \sum_{|\alpha| \le k} \frac{D^{\alpha} f(x_0)}{\alpha!} (x - x_0)^{\alpha} \qquad (1)$$
In this work, we consider the uncertainty of the initial states of the target vehicle's $x$ position, $y$ position, and yaw. Their initial bounds are user-defined parameters reflecting perception errors. Note that the future control actions of the target vehicle over the prediction horizon are not observable. To preserve the safety guarantee, we augment the ODEs with interval-bounded control actions, i.e., the acceleration and the steering rate. The upper and lower bounds used in our paper follow [zhang2018lane]. The velocity interval bounds the maximum deceleration and acceleration, and the steering interval bounds the maximal change in the steering angle within a second. Non-deterministic behaviors of the target vehicle are thus allowed in deriving the enclosure of the reachable region.
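As a rough illustration of how the short-term envelope behaves, the sketch below rolls a kinematic bicycle model forward under corner-case constant controls and takes per-step bounding boxes. Unlike the Taylor-model reachability computed by Flow*, this sampling is not a sound over-approximation; the control bounds, wheelbase, and initial state are assumed values for illustration only.

```python
import itertools
import math

# Crude sampled illustration of the short-term envelope: roll the bicycle
# kinematics forward under corner-case constant controls and take per-step
# bounding boxes. This is NOT the sound over-approximation Flow* computes;
# it only hints at how the reachable set grows with the horizon.
A_BOUNDS = (-6.0, 2.0)       # accel bounds [m/s^2], assumed values
DDELTA_BOUNDS = (-0.3, 0.3)  # steering-rate bounds [rad/s], assumed values
dt, steps = 0.1, 5           # 0.5 s short-term horizon, 0.1 s step

def rollout(a, ddelta, x=0.0, y=0.0, v=8.0, theta=0.0, delta=0.0, L=2.7):
    """Integrate the bicycle kinematics under constant (a, ddelta)."""
    traj = []
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v * math.tan(delta) / L * dt
        v = max(v + a * dt, 0.0)
        delta += ddelta * dt
        traj.append((x, y))
    return traj

trajs = [rollout(a, dd) for a, dd in itertools.product(A_BOUNDS, DDELTA_BOUNDS)]
boxes = []
for k in range(steps):
    xs = [t[k][0] for t in trajs]
    ys = [t[k][1] for t in trajs]
    boxes.append((min(xs), max(xs), min(ys), max(ys)))

widths = [b[1] - b[0] for b in boxes]
# The envelope grows with the prediction horizon, which is why it is only
# used over the short term.
assert all(w2 >= w1 - 1e-12 for w1, w2 in zip(widths, widths[1:]))
```

Each box here would then be inflated into an ellipse accounting for the vehicle's shape, as described above.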
III-A.2 Long-term prediction with recursive least squares filter
In this paper, we use an adaptive linear autonomous dynamical model $x_{t+1} = A_t x_t$, where $A_t$ is a time-variant parameter matrix. With each new observation of the true state, $A_t$ is updated accordingly. The cost function and its gradient form are as follows:
(2) 
(3) 
where $\theta$ is a vector containing the parameters of $A_t$ that we want to optimize, $\hat{x}$ is the predictive model output, and $\lambda$ is the forgetting factor, which can be tuned as an adaptation rate. The explicit update form can be found in the literature [goodwin2014adaptive, liu2015safe]. The advantage of such a linear filter is its efficiency and adaptation ability. The disadvantage is that kinematic constraints, road geometry, and drivers' intentions are not considered. Sophisticated long-term prediction methods such as LSTM and RMIN [Dong2019RMIN] networks could replace it. However, considering the real-time demand of the whole framework and the purpose of using long-term prediction to avoid the local-optimum problem of short-term planning, the linear adaptive filter is applied in our work.
III-B. Iterative Linear Quadratic Regulator
Autonomous vehicle obstacle-free motion planning (i.e., lane keeping) can be formulated as a standard ILQR problem with nonlinear system dynamics:
$$\min_{u_0, \dots, u_{N-1}} \; J = \sum_{k=0}^{N-1} c(x_k, u_k) + c_f(x_N) \qquad (4)$$
$$\text{s.t.} \quad x_{k+1} = f(x_k, u_k) \qquad (5)$$
where $x_k$ and $u_k$ are the state and the control input at time step $k$. Equation 5 is the system dynamics constraint, a transition function mapping the state and control at step $k$ to the state at step $k+1$.
Since the standard LQR only solves optimization problems with quadratic costs and linear system constraints, this problem must be reformulated. By linearizing the system constraint at multiple points, we can relax the nonlinear problem into the linear-quadratic form required by LQR. The steps are listed below.

Start with a feasible initial control guess and obtain the corresponding state trajectory using the system dynamics constraint. If no feasible initial guess is available, an all-zero initialization with small perturbations also works.

Calculate the derivatives of the dynamics and the cost function along the trajectory.
(6) (7) (8) 
Run the LQR backward pass on the state and action perturbations.

Run forward pass with real nonlinear dynamics.

Update the nominal states and actions based on the forward pass.

Iterate the whole process until the cost value converges.
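The backward/forward structure of the steps above can be sketched on a toy problem. For a linear system with quadratic cost a single sweep is already optimal, but the loop mirrors the ILQR iteration; the double-integrator dynamics and weights below are illustrative assumptions, not the paper's planner.

```python
import numpy as np

# Toy ILQR-style loop on a double integrator x_{k+1} = A x_k + B u_k with
# quadratic cost. All dynamics and weights are illustrative assumptions.
dt, N = 0.1, 40
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])       # running state cost
R = np.array([[0.01]])        # control effort cost
Qf = np.diag([100.0, 10.0])   # terminal cost

def backward_pass():
    # Riccati recursion from the terminal cost back to k = 0.
    Ks, P = [], Qf
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    return Ks[::-1]            # time-ordered feedback gains

x0 = np.array([5.0, 0.0])      # start 5 m from the goal, at rest
x = x0
for _ in range(3):             # iterate until (here: trivially) converged
    Ks = backward_pass()
    # Forward pass with the feedback policy u_k = -K_k x_k.
    x, traj_cost = x0, 0.0
    for K in Ks:
        u = -K @ x
        traj_cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    traj_cost += x @ Qf @ x

print("final state:", x)       # the closed loop drives toward the origin
```

For a genuinely nonlinear system, the backward pass would instead use the local Jacobians of the dynamics and the quadratic expansion of the cost computed in step 2.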
Although dynamic programming provides an efficient solution for ILQR, ILQR itself is constraint-free and requires the cost function to be quadratic. The constraint-free nature makes it insufficient for direct application to the autonomous vehicle motion planning problem, while the quadratic requirement can be fulfilled by applying a Taylor approximation.
III-C. Constrained Iterative Linear Quadratic Regulator
The constrained ILQR (CILQR) algorithm incorporates different constraints into the objective function through barrier functions. Constraints fall into two categories by linearity. Any nonlinear constraint is first linearized via a Taylor expansion; then a barrier function is applied and quadratized with a second-order Taylor expansion. Equation 9 and Equation 10 demonstrate this process. The quadratized barrier function can then be incorporated into the ILQR algorithm.
An exponential barrier function is chosen, as it has easy-to-derive analytic derivatives, in contrast to a computationally expensive finite-difference method. The detailed algorithm can be found in [cILQR].
(9) 
(10) 
where $c_k$ is the constraint function at time step $k$. Ideally, a barrier function (e.g., a log function) that imposes a hard constraint on obstacle avoidance is preferred over the exponential barrier due to its numerical nature (it is undefined in the infeasible region). However, if the trajectory initialization is infeasible, the optimization fails because the barrier is undefined there. We therefore impose large weights on the soft exponential constraints to ensure their dominance in the cost function.
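A minimal sketch of an exponential barrier and the analytic derivatives used during quadratization is shown below. The barrier parameters q1, q2 and the acceleration limit are assumed values, and the Hessian uses a common Gauss-Newton-style approximation rather than any form specified in the paper.

```python
import numpy as np

# Exponential barrier for an inequality constraint c(x) <= 0, with analytic
# first- and second-order terms for quadratization (parameters assumed).
q1, q2 = 1.0, 5.0

def barrier(c):
    # Nearly zero deep inside the feasible region, grows fast past c = 0.
    return q1 * np.exp(q2 * c)

def barrier_grad(c, c_grad):
    # Chain rule: d/dx [q1 * exp(q2 * c(x))] = q1*q2*exp(q2*c) * dc/dx.
    return q1 * q2 * np.exp(q2 * c) * c_grad

def barrier_hess(c, c_grad):
    # Gauss-Newton-style approximation that drops the Hessian of c itself.
    return q1 * q2**2 * np.exp(q2 * c) * np.outer(c_grad, c_grad)

# Acceleration limit a <= a_max written in negative null form c = a - a_max.
a_max = 3.0
for a in (0.0, 2.9, 3.5):
    c = a - a_max
    print(a, barrier(c))  # tiny inside the limit, large once it is violated
```

Because the barrier is smooth everywhere, an infeasible initialization still yields finite (if large) costs, which is the soft-constraint behavior described above.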
One drawback of CILQR is its need for specific sets of weights tuned for specific sets of scenarios. For instance, a set of weights tuned for an overtaking maneuver will always tend to encourage overtaking, even if the target vehicle's velocity is high. To improve the generality of the original CILQR and decrease the dependence on behavior-level planning, we propose analytical functions that account for the state difference between the ego vehicle and the target vehicle along with the velocity of the target vehicle. These functions compute cost function weights adaptively and allow the planner to perform multiple tasks without switching weights.
III-D. Adaptive Weight Tuning Function
Intuitively, we want a lane keeping behavior when the forward target vehicle is far away from the ego vehicle irrespective of the speed of the target vehicle. When the distance gap closes, if the target vehicle is travelling close to the desired reference speed, a lane keeping behavior is desired. If the same target vehicle is travelling at a slow speed, a safe overtaking maneuver is desired. As a result, behaviors are closely related to target vehicle velocity and the difference between ego vehicle state and target vehicle state.
In terms of cost function weights, the larger the velocity weight, the more the planner encourages the ego vehicle to follow the reference velocity. The larger the reference weight, the more the planner discourages the ego vehicle from exploring options away from the reference trajectory.
Based on this observation, weights that affect the velocity and reference tracking term can be expressed in the following way:
(11)  
(12) 
where $v_{tgt}$ is the target vehicle speed, $\Delta d$ is the distance between the target and ego vehicles, and the remaining four scalars are tuning parameters.
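Since the exact analytic forms of Eqs. (11) and (12) are not reproduced here, the following hypothetical sigmoid-style functions only sketch the intended qualitative behavior: the velocity weight grows when the target is slow, and the reference weight grows with headway. All function shapes and parameter values are assumptions.

```python
import math

# Hypothetical smooth weighting functions in the spirit of Eqs. (11)-(12);
# the paper's actual analytic forms and parameters are not reproduced here.
def velocity_weight(v_target, d, k1=0.5, c1=11.0):
    # Grows when the target vehicle is slow -> encourages overtaking.
    return 1.0 / (1.0 + math.exp(k1 * (v_target - c1)))

def reference_weight(v_target, d, k2=0.2, c2=20.0):
    # Grows with headway distance -> encourages lane keeping when far away.
    return 1.0 / (1.0 + math.exp(-k2 * (d - c2)))

# Slow lead vehicle close ahead: the velocity term dominates (overtake).
assert velocity_weight(8.0, 10.0) > velocity_weight(14.0, 10.0)
# Large headway: the reference (lane-keeping) term dominates.
assert reference_weight(8.0, 60.0) > reference_weight(8.0, 5.0)
```

The key design point is smoothness: because the weights vary continuously with the scene, the planner transitions between lane keeping and overtaking without a discrete behavior switch.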
IV. Problem Formulation
IV-A. System Dynamics
The kinematic bicycle model shown in Figure 2 is used in this paper. The kinematic model considers a single pair of wheels at the centers of the front and rear axles instead of left and right pairs, and a zero-lateral-slip assumption is made. The state is $[x, y, v, \theta]$ and the control input is $[a, \delta]$, where $x$ and $y$ are the position coordinates of the rear wheel, $v$ is the vehicle velocity, $\theta$ is the orientation of the vehicle, and $a$ and $\delta$ are the acceleration and steering angle, respectively. Equation 13 represents the transition function with a nonzero steering angle: the horizontal and vertical position increments are computed along a circular arc of curvature $\kappa$, where $d$ is the distance travelled at time $t$ with discretization step $\Delta t$.
(13) 
However, when the steering angle is zero, the curvature $\kappa$ is zero, which causes a numerical issue (a division by zero) in Equation 13. Hence, the different update shown in Equation 14 is used.
(14) 
Note that in Figure 2 the steering angle is present but the wheel slip angle is not. We assume the vehicle does not slip, which eliminates the need to include the slip angle in the model.
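A discrete-time version of the kinematic bicycle update, with the zero-steering special case of Equation 14 handled separately, might look like the following; the wheelbase and the exact discretization are assumptions of this sketch.

```python
import math

# Discrete kinematic bicycle step (rear-axle reference point), handling the
# zero-steering case separately as in Eqs. (13)-(14). The wheelbase L and
# the exact discretization are assumptions of this sketch.
def bicycle_step(x, y, v, theta, a, delta, dt, L=2.7):
    d = v * dt + 0.5 * a * dt**2     # distance travelled this step
    v_next = v + a * dt
    if abs(delta) < 1e-6:
        # Straight-line update: curvature is zero, so Eq. (14) applies.
        return x + d * math.cos(theta), y + d * math.sin(theta), v_next, theta
    kappa = math.tan(delta) / L      # path curvature for steering angle delta
    theta_next = theta + d * kappa
    # Exact integration along a circular arc of curvature kappa.
    x_next = x + (math.sin(theta_next) - math.sin(theta)) / kappa
    y_next = y - (math.cos(theta_next) - math.cos(theta)) / kappa
    return x_next, y_next, v_next, theta_next

# Straight driving: heading unchanged, position advances by v * dt.
sx, sy, sv, sth = bicycle_step(0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.1)
assert abs(sx - 1.0) < 1e-9 and abs(sy) < 1e-12 and abs(sth) < 1e-12
```

The explicit branch avoids the division by zero noted above while keeping the two updates continuous as the steering angle approaches zero.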
IV-B. Objective Function
The general objective function in Equation 4 is made specific in Equation 15. The terms in the summation represent the control effort cost, the reference trajectory tracking cost, the reference velocity tracking cost, and the constraint costs (i.e., control limits and obstacle avoidance). The final term is the end-state cost, which is the sum of the latter three terms, as there is no control effort at the end state.
$$J = \sum_{k=0}^{N-1} \big( c_{ctrl}(u_k) + c_{ref}(x_k) + c_{vel}(x_k) + c_{cons}(x_k, u_k) \big) + c_f(x_N) \qquad (15)$$
We choose to separate the velocity term from the reference trajectory tracking term because of the “pulling” effect described in [cILQR]. By taking only the closest point on the reference trajectory as the tracking point, we remove the time information otherwise encoded in pure-pursuit tracking.
IV-B.1 Control effort cost
The acceleration and the steering angle are weighted differently, as shown in Equation 16, but both are judged against zero, as we desire a minimum amount of control effort.
$$c_{ctrl}(u_k) = w_a a_k^2 + w_{\delta} \delta_k^2 \qquad (16)$$
IV-B.2 Reference tracking and velocity cost
The reference tracking term assigns a cost to each state based on its distance to the closest point on the reference trajectory. The closest distance can be calculated either against the nearest waypoint or from the state's projection onto the reference trajectory. The velocity cost penalizes the ego vehicle for the difference between its velocity and the reference velocity. The combined cost is written in matrix form in Equation 17, where the quadratic form is taken over the difference between the ego vehicle state and the reference state at time $t$.
$$c_{ref}(x_k) + c_{vel}(x_k) = (x_k - x_k^{ref})^T Q (x_k - x_k^{ref}) \qquad (17)$$
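As a small numerical illustration of the quadratic tracking-plus-velocity cost, the sketch below weights the state difference to a reference point with a diagonal matrix; the weight values and state layout are assumptions, not the paper's tuned parameters.

```python
import numpy as np

# Quadratic tracking + velocity cost sketch: the state difference to the
# closest reference point is weighted by a diagonal matrix whose entries
# play the role of the adaptive weights (values assumed).
w_ref, w_vel = 5.0, 2.0
Q = np.diag([w_ref, w_ref, w_vel, 0.0])   # penalize x, y, v; ignore yaw

def tracking_cost(state, reference):
    diff = np.asarray(state) - np.asarray(reference)
    return diff @ Q @ diff

# On the reference at the reference speed: zero cost.
on_ref = tracking_cost([10.0, 0.0, 15.0, 0.0], [10.0, 0.0, 15.0, 0.0])
# 1.5 m lateral offset and 3 m/s slow: positive cost.
off_ref = tracking_cost([10.0, 1.5, 12.0, 0.0], [10.0, 0.0, 15.0, 0.0])
assert on_ref == 0.0 and off_ref > 0.0
```

Raising `w_vel` relative to `w_ref` shifts the trade-off toward speed keeping, which is exactly the lever the adaptive weight functions of Section III-D manipulate.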
IV-B.3 Constraint cost
All inequality constraints can be expressed in the negative null form shown in Equation 18, in which $b$ is the maximum or minimum boundary value and $g$ is a function of the decision variable $x$.
$$c(x) = g(x) - b < 0 \qquad (18)$$
Linear constraints such as the acceleration and steering limits can be written directly in this form (e.g., $a - a_{max} < 0$).
Then, a barrier function can be used as stated in Equation 9.
The ego vehicle is represented by two circles centered at the middles of the rear and front axles. For the obstacle avoidance term, we can either use the closest distance between the two vehicles with a minimum safety distance, or use a geometric collision check as the inequality constraint. Since we later investigate the worst-case performance of different planning strategies, the minimum safety distance is set to zero and the problem is formulated as a geometry-based cost function. Obstacles are formulated as ellipses, with major and minor axes adjusted for the ego vehicle's geometry, for ease of computing the Jacobian and the Hessian of the cost function in the LQR backward pass. The fitted shape is the minimum-area ellipse, and the inequality constraint is shown in Equation 20, which involves the heading angle of the obstacle at time $t$, the semi-major and semi-minor axis lengths, and the position difference between the ego vehicle and the obstacle.
(19)  
(20) 
For the short-term prediction, the output of the reachability analysis is a sequence of rectangular bounding boxes; each box represents the upper and lower bounds of the mass point's future position. We further inflate the boxes into ellipses to account for the vehicle's shape. These ellipses are treated as the obstacles over the short-term prediction horizon.
The same inequality constraint is repeated for the circle at the front center of the ego vehicle. The reformulated inequality constraints can then be used after linearization and the barrier function transformation.
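A sketch of the rotated-ellipse check in the spirit of Equations 18 and 20 is shown below; it returns a negative value when the ego circle center is safely outside the obstacle ellipse. The axis lengths and heading are illustrative values, and the exact algebraic form of Equation 20 is not reproduced.

```python
import numpy as np

# Rotated-ellipse obstacle check: the ego circle center must stay outside
# an ellipse aligned with the obstacle's heading. Axis lengths are assumed
# to already include inflation for the ego vehicle's circle radius.
def ellipse_constraint(p_ego, p_obs, heading, a, b):
    """Negative null form (Eq. (18) style): value < 0 means safely outside."""
    d = np.asarray(p_ego) - np.asarray(p_obs)
    c_h, s_h = np.cos(heading), np.sin(heading)
    # Rotate the offset into the obstacle's body frame.
    local = np.array([c_h * d[0] + s_h * d[1], -s_h * d[0] + c_h * d[1]])
    # Inside the ellipse -> value > 0 -> constraint violated.
    return 1.0 - (local[0] / a) ** 2 - (local[1] / b) ** 2

# Obstacle at the origin heading along +x, with 4 m x 1.5 m semi-axes.
assert ellipse_constraint((10.0, 0.0), (0.0, 0.0), 0.0, 4.0, 1.5) < 0  # clear
assert ellipse_constraint((1.0, 0.5), (0.0, 0.0), 0.0, 4.0, 1.5) > 0   # inside
```

Because this expression is smooth in the ego position, its Jacobian and Hessian are cheap to evaluate, which is the motivation for the elliptical fit stated above.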
V. Experimental Results
In this section, the environment is set up with two lanes; solid black lines and red dashed lines represent lane separators and lane centers, respectively. The reference velocity for all test cases is set to 15 m/s (roughly 35 mph), a common speed limit. The acceleration and the steering angle are limited to fixed bounds.
The ego vehicle is pictured in blue with its transparency decreasing over time (i.e., white is the initial frame; the less transparent the color, the later the frame). Target vehicles are depicted in red, with transparency working the same way. The finite planning horizon is set to 4 seconds with a discretization of 0.1 seconds.
Three subsections are presented below:

The ego vehicle overtakes a slow-travelling target vehicle in front, with an opposing vehicle travelling in the opposite lane. Comparisons against CILQR with fixed weights are also conducted in this section.

The ego vehicle performs an evasive maneuver against an incoming cut-in vehicle and then follows behind the target vehicle with a car-following behavior.

The target vehicle speeds up unexpectedly during the overtaking maneuver, which shows the importance of the short-term prediction compared to approaches using deterministic trajectories [cILQR, cILQR2].
V-A. General Performance Evaluation
In this scenario, a slow vehicle travelling at 8 m/s is initially 20 meters ahead of the ego vehicle in the adjacent lane. There is an incoming vehicle travelling in the opposing lane, and the task is to safely overtake the front vehicle. As seen in the velocity profile of Figure 3, the ego vehicle first slows down behind the target vehicle to ensure safety and then pulls out to overtake while avoiding the incoming vehicle. During the overtaking maneuver, the ego vehicle smoothly accelerates to the reference velocity of 15 m/s. This test case demonstrates the basic ability of the proposed planning framework. It is an extreme case designed to test the performance of the planner; the ideal action is probably to overtake after the opposing vehicle has passed.
For comparison with the original CILQR method, we use the transient behavior percentage as a performance benchmark. We define transient behavior as a hesitating maneuver with a sinusoidal-like trajectory. Transient behavior is potentially dangerous because these sinusoidal trajectories deviate significantly from the lane center and could pose collision hazards to surrounding vehicles, especially when a surrounding vehicle needs to change into the ego vehicle's lane. A set of weights is tuned for overtaking in the CILQR method, and a set of adaptive weighting function parameters is tuned to encourage overtaking against target vehicles with velocities below 13 m/s. The results are listed in Table I. For CILQR, when the target vehicle velocity exceeds 12 m/s, all planned trajectories are transient. For the proposed method, no transient behaviors are observed, and planned trajectories are either car-following or overtaking. To achieve the two different behaviors with CILQR, two sets of weights would need to be tuned manually, which is avoided in the proposed method. This indicates that the proposed framework is a safer planner with increased generality.
Method                 | CILQR                                  | Adaptive Weight Tuning
Behavior               | Overtaking | Transient | Car following | Overtaking | Transient | Car following
Occurrences            | 48         | 22        | 0             | 56         | 0         | 14
Percentage (%)         | 68.57      | 31.43     | 0             | 80.00      | 0         | 20.00
Target velocity (m/s)  | 8-11.9     | 12-14.9   | -             | 8-13.5     | -         | 13.6-14.9
V-B. Adjacent Vehicle Violently Cuts In
Because of blind spots, drivers in adjacent lanes might perform a lane change without full awareness of the environment. This is a dangerous situation: the target vehicle is unlikely to yield, and the distance between the two vehicles is small, which limits the reaction time.
In this scenario, a target vehicle that is initially 5 meters ahead of the ego vehicle in the adjacent lane violently cuts into the ego vehicle's lane, as shown in Figure 4. The trajectories of the ego vehicle and the target vehicle are depicted as solid blue and red lines in the top figure, respectively. The bottom plot shows the velocity and acceleration profiles: the blue and purple lines are the velocities of the ego vehicle and the target vehicle, respectively, and the red line is the acceleration of the ego vehicle. The target trajectory is generated by a fifth-order polynomial fit and completes the lane change in 3 seconds.
The planner reacts to the cut-in vehicle by slowing down and performing a yielding turn to avoid it. As the target vehicle passes by, the ego vehicle then follows it with a car-following behavior. We further test the safety margin of the planner with decreased cut-in vehicle velocity and shorter reaction time. The failure cases require the adjacent vehicle to travel at a speed of around 8 m/s and finish the cut-in maneuver within 0.3 to 0.4 seconds. As such a maneuver is hardly kinematically feasible, the safety of the planner is assured.
We then compare the planned trajectory against a planner with deterministic knowledge of the obstacle, in which the obstacle's future trajectory is used directly as the prediction to generate an optimal solution as the baseline. The baseline optimal solution, shown in Figure 5, is to speed up and go around the incoming target vehicle. Compared to this solution, the proposed planner is more conservative about overtaking the target vehicle as it cuts in, which is safer in reality than accelerating past it.
V-C. Target Vehicle Suddenly Accelerates during Overtaking Maneuver
In this scenario, the target vehicle being overtaken suddenly accelerates to avoid being overtaken, imitating an aggressive driving style. Green boxes show the reachability analysis at a given time step, predicting 0.5 seconds ahead with a step of 0.1 seconds. They are also drawn in Figure 8 for visualization against the planned trajectory for the next 0.5 seconds (shown as blue asterisk lines).
In Figure 8, only the long-term prediction is used for the whole 4 s planning horizon. Note that the short-term prediction is disabled in this case; the visible green reachable set is only used to investigate its capability for safety guarantees. The target vehicle, pictured in red, starts to accelerate at around the 105-meter mark in Figure 8(a). At the labeled time step, the planned trajectory 0.5 seconds ahead intersects the reachable set, as labeled in Figure 8(b), demonstrating that the planned trajectory is not guaranteed to be collision-free. Not surprisingly, at the succeeding time step the ego vehicle collides with the target vehicle, because the long-term planner does not react to the acceleration in time.
In Figure 9, all planned trajectories at each time step take the reachability analysis into account, and the planner avoids the collision case shown in Figure 8. Note that to examine the worst-case performance of the two motion planning frameworks, the obstacle avoidance constraint is relaxed from keeping a minimum distance to zero distance-keeping, so that the performance is not affected by another parameter. As a result, for extreme cases the short-term prediction keeps the planned trajectory robust and safe against sudden changes in the target vehicle's motion. The reason is that planning to avoid the reachable region allows the target vehicle to perform any non-deterministic but kinematically feasible behavior over the short-term horizon.
The average loop time for these experiments is 60 ms with a standard deviation of 25 ms. The simulation is written in Matlab and runs on a laptop with a 2.80 GHz Intel Core i7-7700HQ CPU. With a C++ implementation, performance could be further improved.
VI. Conclusion
In this paper, the existing CILQR framework is improved in two respects. First, we added uncertainty awareness to the framework and made it more robust to sudden target vehicle maneuvers. Safety can be guaranteed because the reachability is an enclosure of all possible trajectories of the target vehicle over the short-term horizon, while the long-term prediction is used to avoid over-conservativeness. Second, an adaptive weight calculation function is added to the algorithm, allowing the planner to perform multiple maneuvers without retuning cost function weights.
Several interesting problems remain. The first is the trade-off between hard obstacle avoidance constraints and numerical stability when incorporating the barrier function. The second is the inclusion of jerk in the cost function for ride comfort; one possible approach is to expand the current state-space model to include jerk or to introduce jerk constraints on the acceleration.
VII. Acknowledgment
This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA875018C0092. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.