 # A Family of Iterative Gauss-Newton Shooting Methods for Nonlinear Optimal Control

This paper introduces a family of iterative algorithms for unconstrained nonlinear optimal control. We generalize the well-known iLQR algorithm to different multiple-shooting variants, combining advantages such as straightforward initialization and closed-loop forward integration. All algorithms have similar computational complexity, i.e. linear complexity in the time horizon, and can be derived in the same computational framework. We compare the full-step variants of our algorithms and present several simulation examples, including a high-dimensional underactuated robot subject to contact switches. Simulation results show that our multiple-shooting algorithms can achieve faster convergence, better local contraction rates and much shorter runtimes than classical iLQR, which makes them a superior choice for nonlinear model predictive control applications.


## I Introduction

### I-A Overview and Motivation

In this paper, we discuss a family of iterative Gauss-Newton shooting methods for numerically solving unconstrained optimal control problems, and illustrate the effectiveness of our algorithms with various robotics examples. We outline the connection between a number of ‘direct’ optimal control methods and Gauss-Newton methods from the class of Differential Dynamic Programming (DDP)  algorithms. Additionally, we present a natural extension arising from this connection and introduce a family of hybrid Gauss-Newton Multiple Shooting methods.

In direct approaches to optimal control, infinite-dimensional optimal control problems are transcribed into finite-dimensional Nonlinear Programs (NLPs). Two prominent ways of transcription by ‘shooting’ are direct single shooting (SS) and direct multiple shooting (MS), which differ in the choice of decision variables. In single shooting, the control inputs are the sole decision variables. Generally speaking, the control trajectory is discretized in a piece-wise polynomial fashion (for simplicity, we focus on piece-wise constant controls in this paper, cf. Fig. 1a). A corresponding state trajectory is obtained by means of numerical forward integration of the system dynamics, starting at a given initial state. SS is therefore often called a ‘sequential’ approach. In multiple shooting, the same discretization scheme is employed for the control inputs, but additionally, intermediate states are added to the decision variables. This provides several advantages, but requires the introduction of additional matching constraints to ensure continuity of the state trajectory. The technique of introducing these additional degrees of freedom into the original problem, combined with adding matching constraints, is called ‘lifting’, and results in a ‘simultaneous’ method.

The formulation of both SS and MS as standard NLPs is straightforward, and any state-of-the-art NLP solver can be used to solve them. It is important to note that, under the assumption of a piece-wise polynomial control parameterization, the intrinsic sparsity structure of the underlying optimal control problem allows both SS and MS to achieve linear time complexity by performing a Riccati recursion.

Classical single shooting often does not perform well for unstable systems due to the pure open-loop forward integration of the system dynamics. In DDP, this is handled by performing a closed-loop forward integration, using a feedforward plus a time-varying state-feedback control law. The Riccati backward sweep designs the time-varying feedback gains on the fly without additional computational cost. DDP is an exact-Hessian method, requiring the computation of second derivatives of the dynamics. While this gives the algorithm quadratic convergence, computing these derivatives can be impractical for systems with complex dynamics. For that reason, Hessian-approximating variants of DDP have become quite popular in the robotics community [6, 7, 8, 9, 10].

An important Hessian-approximating variant of DDP is the iterative Linear-Quadratic Regulator (iLQR), which is also known as Sequential Linear Quadratic Optimal Control. This method can be classified as closed-loop single shooting using a Gauss-Newton Hessian approximation and a Riccati backward sweep to solve linear-quadratic (LQ) subproblems. The Gauss-Newton Hessian approximation is based on the assumption that the objective function can be locally approximated as a sum of quadratic terms, and requires only first-order derivatives of the system dynamics. This comes at the cost of only linear convergence, however. The Gauss-Newton approach can be lifted, too. While increasing the number of decision variables initially appears to be a drawback, it is important to emphasize that the lifted problem can be solved at approximately the same computational cost as the original non-lifted problem, and the lifting can lead to a significant increase in convergence speed. Therefore, the fundamental motivation for this paper is to combine the benefits of iLQR with a multiple-shooting approach.

### I-B Contribution

In this work, we derive a lifted equivalent of iLQR, called Gauss-Newton Multiple Shooting (GNMS), which introduces the intermediate states as additional decision variables. Next, we extend this relationship to form an entire family of open-loop multiple-shooting algorithms, denoted GNMS(M), and closed-loop multiple-shooting algorithms, denoted iLQR-GNMS(M). The latter is shown to be a generalization of iLQR and can be considered multiple-shooting iLQR. We outline the relationship between these algorithms and existing methods. We give simulation examples including a complex underactuated robot and compare the performance of the full-step algorithms using data gained from hardware experiments. Furthermore, we show the benefits of iLQR-GNMS(M) methods for nonlinear model predictive control.

### I-C Outline

This paper is structured as follows. In Section II, we derive GNMS, and present the basic update routine for state and control trajectories. Using these update equations, we generalize iLQR and GNMS to a family of algorithms in Section III. Section IV showcases several simulation results, based on data gained from hardware experiments. A discussion and outlook concludes the paper in Section V.

## II Gauss-Newton Multiple Shooting

Fig. 1: Intuition about different shooting variants. (a) A zero-order-hold control parameterization, with a constant control input at each stage. (b) Single shooting, where the state trajectory is obtained through a single forward integration over the whole problem horizon. (c) In GNMS, intermediate states at every time-index n are introduced as additional decision variables. (d) In hybrid versions of GNMS, the multiple-shooting intervals span several control intervals. The intermediate states at the beginning of the multiple-shooting intervals are decision variables, and the states in between are obtained by forward integration.

In the following, a brief derivation of the unconstrained Gauss-Newton Multiple Shooting method is presented. We show the derivation using an intuitive value-function approach in order to highlight the close relationship between GNMS and iLQR. However, from the beginning, we lift the optimization problem and introduce intermediate states as additional decision variables besides the controls. In that sense, with each control decision variable accompanied by a state decision variable, GNMS is closely related to the original multiple-shooting algorithm.

Consider the following discrete-time, finite-horizon, nonlinear optimal control problem

$$\min_{u_n,\,x_n} \Big\{ \Phi(x_N) + \sum_{n=0}^{N-1} L_n(x_n, u_n, n) \Big\} \qquad (1)$$

$$\text{s.t.} \quad x_{n+1} - f_n(x_n, u_n, n) = 0, \qquad x_0 = x_{\mathrm{init}} \qquad (2)$$

with state vector $x_n$ and control input vector $u_n$. Let $L_n(x_n, u_n, n)$ be the intermediate cost at time-step $n$ and $\Phi(x_N)$ the terminal cost at the time horizon $N$.
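To make the problem structure of Equations (1)-(2) concrete, the following minimal Python sketch evaluates the cost of a trajectory and performs a single-shooting rollout for a hypothetical scalar system. The dynamics `f`, the cost weights and the goal state are illustrative assumptions, not taken from the paper.

```python
def f(x, u):
    """Hypothetical slightly nonlinear, unstable scalar dynamics (illustrative only)."""
    return 1.2 * x + 0.1 * x**3 + u

def total_cost(xs, us, x_goal=1.0, q=1.0, r=0.1, qf=10.0):
    """Evaluate Phi(x_N) + sum_n L_n(x_n, u_n) along a trajectory, cf. Eq. (1)."""
    cost = sum(0.5 * q * (x - x_goal)**2 + 0.5 * r * u**2
               for x, u in zip(xs[:-1], us))
    return cost + 0.5 * qf * (xs[-1] - x_goal)**2

def rollout(x0, us):
    """Single-shooting rollout: integrate the dynamics forward from x0, cf. Eq. (2)."""
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))
    return xs
```

In single shooting only `us` would be decision variables and `xs` would always come from `rollout`; the lifted methods below instead treat the intermediate states as decision variables too.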

### Ii-a Forming LQ subproblems

The optimal control law is computed in an iterative way. In each iteration $k$, we construct an LQ optimal control problem around the state trajectory $x_0^{[k]}, \dots, x_N^{[k]}$, with $x_0^{[k]} = x_{\mathrm{init}}$, and the control trajectory $u_0^{[k]}, \dots, u_{N-1}^{[k]}$. In the first iteration, $k = 0$, the LQ problem is hence constructed around the initial guesses for the state and control trajectories. Possible initialization strategies are summarized in Section III-D.

At each iteration, we numerically forward integrate all multiple-shooting intervals using the respective control inputs $u_n^{[k]}$, starting at every state $x_n^{[k]}$. Fig. 1c shows a sketch of the multiple-shooting intervals in GNMS, where the resulting state at the end of each interval is denoted $F^{[k]}(x_n^{[k]}, u_n^{[k]})$. Accordingly, we define the ‘defect’ between the integrated trajectory segment and the next intermediate state as

$$d_n^{[k]} = F^{[k]}\big(x_n^{[k]}, u_n^{[k]}\big) - x_{n+1}^{[k]} \qquad (3)$$
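The defect in Equation (3) can be computed directly from the state and control decision variables. A minimal sketch, assuming a hypothetical integrator `F` that simulates the dynamics over one shooting interval:

```python
def defects(F, xs, us):
    """Compute d_n = F(x_n, u_n) - x_{n+1} for every shooting interval, Eq. (3).

    `xs` holds the N+1 state decision variables, `us` the N controls, and
    `F` integrates the dynamics over one interval (hypothetical interface).
    """
    return [F(x, u) - x_next for x, u, x_next in zip(xs[:-1], us, xs[1:])]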

Defining state and control increments $\delta x_n^{[k]} = x_n^{[k+1]} - x_n^{[k]}$ and $\delta u_n^{[k]} = u_n^{[k+1]} - u_n^{[k]}$ for every single time-stage $n$, we can write the nonlinear system dynamics constraint (2) in terms of the simulated interval as

$$x_{n+1}^{[k]} + \delta x_{n+1}^{[k]} - F^{[k]}\big(x_n^{[k]} + \delta x_n^{[k]},\; u_n^{[k]} + \delta u_n^{[k]}\big) = 0 \qquad (4)$$

which can also be considered a matching condition that ensures the continuity of the state trajectory w.r.t. state and control increments. Performing a first-order Taylor expansion of Equation (4) w.r.t. $\delta x_n^{[k]}$ and $\delta u_n^{[k]}$, denoting the sensitivities w.r.t. state and control as $A_n^{[k]}$ and $B_n^{[k]}$, and taking into account the defects as defined by Equation (3), results in the following affine system dynamics constraint

$$\delta x_{n+1}^{[k]} - A_n^{[k]}\, \delta x_n^{[k]} - B_n^{[k]}\, \delta u_n^{[k]} - d_n^{[k]} = 0 \qquad (5)$$

Analogously performing a second-order Taylor expansion of the nonlinear cost function (1) gives rise to the following LQ optimal control problem

$$\min_{\delta u_n,\,\delta x_n} \Big\{ q_N + \delta x_N^\top \mathbf{q}_N + \tfrac{1}{2}\,\delta x_N^\top Q_N\, \delta x_N + \sum_{n=0}^{N-1} \big[ q_n + \delta x_n^\top \mathbf{q}_n + \delta u_n^\top \mathbf{r}_n + \tfrac{1}{2}\,\delta x_n^\top Q_n\, \delta x_n + \tfrac{1}{2}\,\delta u_n^\top R_n\, \delta u_n + \delta u_n^\top P_n\, \delta x_n \big] \Big\} \qquad (6)$$

$$\text{s.t.} \quad \delta x_{n+1} - A_n\, \delta x_n - B_n\, \delta u_n - d_n = 0 \qquad (7)$$

$$x_0 = x_{\mathrm{init}} \qquad (8)$$

where we assume symmetric positive semi-definite $Q_n$, $Q_N$ and symmetric positive definite $R_n$. Scalars are denoted $q_n$ and vectors $\mathbf{q}_n$, $\mathbf{r}_n$. Here, and in the following subsection, we drop the superscript indices $[k]$ for better readability.

### II-B Computing the Optimal Control by Riccati Recursion

Considering the LQ subproblem (6)-(8), the optimal control and state updates can be computed using a value-function approach. Assume a quadratic value function of the form

$$V_n(\delta x_n) = s_n + \delta x_n^\top \mathbf{s}_n + \tfrac{1}{2}\,\delta x_n^\top S_n\, \delta x_n \qquad (9)$$

with scalar $s_n$, vector $\mathbf{s}_n$ and symmetric matrix $S_n$. The optimal control update can be derived by minimizing the value function as a function of $\delta u_n$.

As Equation (9) is quadratic in $\delta x_N$ at time $N$, it remains quadratic during back-propagation in time, given the affine system dynamics and the linear-quadratic cost in Equations (6)-(8). Due to Bellman’s Principle of Optimality, the optimal control at time $n$ can be computed from

$$V_n(\delta x_n) = \min_{\delta u_n} \big[\, q_n + \delta x_n^\top \mathbf{q}_n + \tfrac{1}{2}\,\delta x_n^\top Q_n\, \delta x_n + \delta u_n^\top P_n\, \delta x_n + \delta u_n^\top\big(\mathbf{r}_n + \tfrac{1}{2} R_n\, \delta u_n\big) + V_{n+1}^*\big(A_n\, \delta x_n + B_n\, \delta u_n + d_n\big) \big]$$

Inserting Equation (9) and the affine system dynamics (7) and minimizing the overall expression w.r.t. $\delta u_n$ leads to an optimal control update of the form

$$\delta u_n^* = -H_n^{-1} \mathbf{h}_n - H_n^{-1} G_n\, \delta x_n \qquad (10)$$

where we have defined

$$\mathbf{h}_n = \mathbf{r}_n + B_n^\top\big(\mathbf{s}_{n+1} + S_{n+1} d_n\big), \qquad G_n = P_n + B_n^\top S_{n+1} A_n, \qquad H_n = R_n + B_n^\top S_{n+1} B_n$$

and define the feedforward update $\mathbf{l}_n = -H_n^{-1} \mathbf{h}_n$ and the feedback gain matrix $L_n = -H_n^{-1} G_n$. After equating coefficients with a quadratic value-function ansatz (9) for time-step $n$, we obtain the following recursive Riccati difference equations for $S_n$, $\mathbf{s}_n$ and $s_n$

$$S_n = Q_n + A_n^\top S_{n+1} A_n - L_n^\top H_n L_n \qquad (11)$$

$$\mathbf{s}_n = \mathbf{q}_n + A_n^\top\big(\mathbf{s}_{n+1} + S_{n+1} d_n\big) + G_n^\top \mathbf{l}_n + L_n^\top\big(\mathbf{h}_n + H_n \mathbf{l}_n\big) \qquad (12)$$

$$s_n = q_n + s_{n+1} + d_n^\top \mathbf{s}_{n+1} + \tfrac{1}{2}\, d_n^\top S_{n+1} d_n + \mathbf{l}_n^\top\big(\mathbf{h}_n + \tfrac{1}{2} H_n \mathbf{l}_n\big) \qquad (13)$$

for $n = N-1, \dots, 0$. For the final time-step $N$ we obtain the terminal conditions $S_N = Q_N$, $\mathbf{s}_N = \mathbf{q}_N$ and $s_N = q_N$, and the recursion is subsequently swept backwards. Note that Equation (13) does not contribute to the control update and can therefore be omitted in practice.
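The backward pass of Equations (10)-(12) can be sketched as follows for time-varying matrices stored as lists of NumPy arrays. This is an illustrative implementation under the stated assumptions (the scalar recursion (13) is omitted, as noted above), not the authors' reference code:

```python
import numpy as np

def riccati_sweep(A, B, Q, R, P, q, r, QN, qN, d):
    """Backward Riccati sweep for the LQ subproblem (6)-(8) with defects d_n.

    Implements Eqs. (10)-(12) over lists of per-stage matrices/vectors and
    returns feedforward terms l_n and feedback gains L_n (sketch interface).
    """
    N = len(A)
    S, s = QN, qN                          # terminal conditions S_N = Q_N, s_N = q_N
    ls, Ls = [None] * N, [None] * N
    for n in reversed(range(N)):
        h = r[n] + B[n].T @ (s + S @ d[n])
        G = P[n] + B[n].T @ S @ A[n]
        H = R[n] + B[n].T @ S @ B[n]
        ls[n] = -np.linalg.solve(H, h)     # l_n = -H_n^{-1} h_n
        Ls[n] = -np.linalg.solve(H, G)     # L_n = -H_n^{-1} G_n
        # Riccati difference equations (11)-(12); use S_{n+1} before overwriting
        S_new = Q[n] + A[n].T @ S @ A[n] - Ls[n].T @ H @ S_dummy if False else \
                Q[n] + A[n].T @ S @ A[n] - Ls[n].T @ H @ Ls[n]
        s = q[n] + A[n].T @ (s + S @ d[n]) + G.T @ ls[n] + Ls[n].T @ (h + H @ ls[n])
        S = S_new
    return ls, Ls
```

For a one-dimensional system with zero defects this reduces to the familiar LQR recursion, e.g. $A = B = Q = R = 1$ gives the gain $L_0 = -(R + B S_1 B)^{-1} B S_1 A = -1/2$ after one step.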

### II-C Updating State and Control Trajectories

Finally, using Equations (7) and (10), and readopting the superscript indices for the iteration count, we obtain equations for a forward sweep resulting in a full-step update for the control and state decision variables $u_n^{[k+1]}$ and $x_n^{[k+1]}$,

$$u_n^{[k+1]} = u_n^{[k]} + \mathbf{l}_n^{[k]} + L_n^{[k]}\big(x_n^{[k+1]} - x_n^{[k]}\big) \qquad (14)$$

$$x_{n+1}^{[k+1]} = x_{n+1}^{[k]} + \big(A_n^{[k]} + B_n^{[k]} L_n^{[k]}\big)\big(x_n^{[k+1]} - x_n^{[k]}\big) + B_n^{[k]} \mathbf{l}_n^{[k]} + d_n^{[k]} \qquad (15)$$

with initial condition $x_0^{[k+1]} = x_{\mathrm{init}}$. The updated decision variables are dynamically consistent w.r.t. the LQ subproblem dynamics. The nonlinear optimal control problem is solved iteratively, starting from Section II-A and solving LQ subproblems at each iteration, until convergence.
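A minimal sketch of the forward sweep (14)-(15), operating on per-stage NumPy arrays from iteration k; the list-based interface is an illustrative assumption:

```python
import numpy as np

def forward_sweep(xs, us, A, B, ls, Ls, d, x_init):
    """Full-step update of states and controls via Eqs. (14)-(15).

    `xs`, `us` are the iteration-k trajectories, `A`, `B` the sensitivities,
    `ls`, `Ls` the feedforward/feedback terms and `d` the defects (sketch).
    """
    N = len(us)
    xs_new, us_new = [x_init], []
    for n in range(N):
        dx = xs_new[n] - xs[n]                          # x^{[k+1]}_n - x^{[k]}_n
        us_new.append(us[n] + ls[n] + Ls[n] @ dx)       # Eq. (14)
        xs_new.append(xs[n + 1] + (A[n] + B[n] @ Ls[n]) @ dx
                      + B[n] @ ls[n] + d[n])            # Eq. (15)
    return xs_new, us_new
```

Note that the new state trajectory absorbs the previous defects $d_n$, which is how the lifted update closes the gaps between shooting intervals within the LQ model.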

## III A Family of iLQR-GNMS Algorithms

Equations (14) and (15) present the GNMS update rule, where all states and controls (except for the fixed initial state $x_0$) are decision variables. For every time-step, both states and controls are updated using a linear forward sweep. Considering Equations (14) and (15), we can now draw connections between GNMS and other existing algorithms and extend them to a larger family of ‘hybrid’ variants.

### III-A Connection to iLQR and Single Shooting

Interestingly, full-step iLQR employs the very same control update rule as in Equation (14). In fact, GNMS can be transcribed into iLQR by substituting the state update equation (15) with a numeric forward integration of the nonlinear system (2) using the time-varying state-feedback control law provided by Equation (14). In this case, the forward integration naturally results in a dynamically consistent state trajectory, all defects become zero, and the formulation from Section II-B reduces to the well-known iLQR Riccati recursion. Moreover, standard unconstrained single shooting can be recovered by additionally ignoring the state-feedback gains and running the forward integration purely open-loop.
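This transcription amounts to replacing the linear state update (15) by a nonlinear closed-loop rollout under the feedback law (14). A scalar sketch, with a hypothetical discrete dynamics function `f`:

```python
def closed_loop_rollout(f, xs_ref, us_ref, ls, Ls, x_init):
    """iLQR-style state update: nonlinear rollout under the time-varying
    feedback law of Eq. (14). `xs_ref`/`us_ref` are the previous iterate,
    `ls`/`Ls` the feedforward terms and gains (scalar sketch)."""
    xs, us = [x_init], []
    for n in range(len(us_ref)):
        u = us_ref[n] + ls[n] + Ls[n] * (xs[n] - xs_ref[n])  # feedback law (14)
        xs.append(f(xs[n], u))   # nonlinear integration => all defects are zero
        us.append(u)
    return xs, us
```

With non-zero gains, a perturbed initial state is pulled back towards the reference trajectory during the rollout, which illustrates why the closed-loop variants behave better on unstable systems; setting `Ls` to zero recovers open-loop single shooting.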

### III-B Hybrid Algorithms

Consider a case where the overall time horizon of $N$ stages is split into an integer number $M$ of multiple-shooting intervals, while the control input discretization is kept at its original resolution. Without loss of generality, let us assume that the MS integration intervals start at time indices $n_j$ with $j = 0, \dots, M-1$ and $n_0 = 0$. Fig. 1d sketches an example of such a hybrid case. Every interval is simulated using the nonlinear system dynamics (2) and the initial states and controls of that interval. All intermediate states within an interval are overwritten by the integration. For an open-loop forward integration, the control trajectory remains as is, but for a closed-loop forward integration, we additionally overwrite the controls using the given feedback control law. Note that in this case, the defect equation (3) remains valid, but the defect is zero along the multiple-shooting intervals; the only non-zero defects occur at the interval boundaries $n_j$. In this setting, the LQ approximation, Riccati recursion and state and control updates (14)-(15) can be performed as described before. This gives rise to two ‘hybrid’ GNMS variants:

• GNMS(M), using solely the feedforward control and thus performing an open-loop forward integration on each of the multiple-shooting intervals, which themselves are multiples of the control interval. Hence, standard single shooting is the limit case GNMS(1).

• iLQR-GNMS(M), using the full state feedback controller (14) and a closed-loop forward integration of each multiple-shooting interval. In other words, this is equivalent to a multiple-shooting variant of iLQR. The standard iLQR algorithm is the limit case of iLQR-GNMS(1), with only one multiple-shooting interval.

Note that both GNMS(N) and iLQR-GNMS(N), with the number of multiple-shooting intervals equal to the number of stages, revert to the standard GNMS formulation as introduced in Section II. Table I provides a compact overview of the algorithmic variants and compares their features.
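The hybrid forward simulation can be sketched as follows for a scalar system: states inside each of the M intervals are overwritten by open-loop integration, so non-zero defects can only remain at the interval boundaries. The function interface and the divisibility assumption (M divides N) are illustrative simplifications:

```python
def hybrid_rollout(f, xs, us, M):
    """Sketch of the GNMS(M) forward simulation over N = len(us) stages.

    Splits the horizon into M equal shooting intervals: interior states are
    overwritten by integrating `f`, boundary states remain decision variables,
    so defects (3) are zero except at interval boundaries. Assumes M | N.
    """
    N = len(us)
    span = N // M
    xs_sim = list(xs)          # copy: keep the decision variables intact
    defects = [0.0] * N
    for j in range(M):
        for n in range(j * span, (j + 1) * span):
            x_next = f(xs_sim[n], us[n])
            if (n + 1) % span == 0:                   # boundary: defect survives
                defects[n] = x_next - xs_sim[n + 1]
            else:                                      # interior: overwrite state
                xs_sim[n + 1] = x_next
    return xs_sim, defects
```

For `M == N` every stage is a boundary and nothing is overwritten, recovering plain GNMS; for `M == 1` the whole horizon is one open-loop rollout, recovering single shooting.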

### III-C Main Iteration and Implementation

We emphasize that all algorithmic variants feature linear complexity in the time horizon, $O(N)$. All algorithms execute almost identical linear algebra operations during one major iteration and therefore have very similar computational effort. Since the discussed family of GNMS algorithms only differs in a few features, it can be summarized in one framework, given in Algorithm 1. From a software-engineering perspective, the algorithmic variants are easy to implement and can all be treated at once, given a proper design of classes and interfaces. We provide an open-source C++ implementation of all discussed algorithms.

### III-D Initialization

The GNMS variants listed in Table I differ in their requirements for initialization. For iLQR and SS, the nominal state and control trajectories are first updated through a forward integration. This implies that, for unstable systems, an initialization with a stabilizing initial control policy, which keeps the first rollout in the vicinity of the expected optimum, is essential. For iLQR, the initially provided state trajectory serves as the state reference trajectory for the feedback controller. For SS, it is irrelevant, except for the initial state. Common choices for SS and iLQR initial guesses are policies that stabilize the given initial state or draw the system towards the goal state, for example simple LQR or PD controllers. Generally, the increased effort of initial-guess design for SS and iLQR can be a significant disadvantage. In the worst case, a poor initial guess can lead to a local minimum with a solution far from the desired behavior.

Multiple-shooting algorithms, by contrast, offer greater flexibility and simplicity in initial-guess design, and are often more robust w.r.t. bad initial policies. It is a well-known fact that the convergence of multiple-shooting methods can be accelerated through an ‘educated’ initial guess, such as a direct interpolation between initial and desired final state. For the hybrid algorithms iLQR-GNMS(M) and GNMS(M), it often depends on the system characteristics whether a stabilizing control policy is required, or whether the multiple-shooting intervals are short enough to prevent significant divergence during integration. In the video attachment, we show two simulation examples where initialization with a bad state-feedback controller significantly extends the runtime of iLQR compared to GNMS, or even causes iLQR to fail.

Note that, when all possible GNMS variants are initialized with a dynamically consistent state trajectory and corresponding control trajectory, the defects for the first iteration are zero and the feedforward control updates are identical.
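The ‘educated’ interpolation guess mentioned above can be sketched as follows for a scalar system; in contrast to a dynamically consistent initialization, such a guess generally produces non-zero defects in the first iteration:

```python
def interpolated_initial_guess(x0, x_goal, N):
    """Linearly interpolate the state decision variables between the initial
    and the desired final state, with zero controls (scalar sketch).

    This is a common multiple-shooting initialization; it is generally not
    dynamically consistent, so first-iteration defects are non-zero.
    """
    xs = [x0 + (x_goal - x0) * n / N for n in range(N + 1)]
    us = [0.0] * N
    return xs, us
```

No stabilizing policy is needed for this guess, which is one of the initialization advantages of the lifted methods discussed in this subsection.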

## IV Results and Comparison

### IV-A An Illustrative, One-Dimensional System

Fig. 2: Results for a one-dimensional illustrative system; see the main text for a detailed description.

As an illustrative example, we present a simple one-dimensional system, which is slightly nonlinear, unstable and constructed to help the reader build an intuition about the methods. The system dynamics are discretized with a fixed time-step. The cost function is a quadratic cost of form (6) with a desired terminal state. Fig. 2 shows results for iLQR, GNMS and the hybrid variants with five multiple-shooting intervals, GNMS(5) and iLQR-GNMS(5). We plot the state, control and defect trajectories for the first iteration of the algorithms, along with the initial guess and the converged solution.

The state and control trajectories illustrate the relationship between the algorithms: the multiple-shooting intervals of GNMS(5) and iLQR-GNMS(5) start with states and controls lying on the respective GNMS trajectories. For GNMS(5), the system is simulated open-loop, the controls are identical to GNMS, and the state trajectories on the multiple-shooting intervals start to diverge. By contrast, for iLQR-GNMS(5), in every multiple-shooting interval both state and control trajectories converge asymptotically towards the simulated iLQR state and control trajectories. For the hybrid variants, a defect occurs every 0.6 s; for GNMS, the defect is evenly distributed across all time intervals. Due to the long shooting intervals, GNMS(5) requires one more iteration to catch up with the other algorithms in terms of overall cost. Importantly, the control update plot shows that the asymptotic contraction rates, which are defined as

$$C = \lim_{k \to \infty} \frac{\lVert U^{[k+1]} - U^* \rVert_2}{\lVert U^{[k]} - U^* \rVert_2} \qquad (16)$$

are not the same. In this example, GNMS and GNMS(5) show better contraction than iLQR. Asymptotic contraction rates are investigated in more detail in Section IV-C.
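The asymptotic contraction rate (16) can be estimated numerically from a sequence of iterates by approximating the limit with the last available ratio of successive error norms. A sketch for scalar-valued control trajectories stored as Python lists:

```python
import math

def contraction_rate(iterates, u_star):
    """Estimate the asymptotic contraction rate C of Eq. (16).

    `iterates` is a list of control trajectories U^{[k]}, `u_star` the
    converged solution U*; the limit is approximated by the last ratio
    of successive 2-norm errors (illustrative estimator).
    """
    errs = [math.sqrt(sum((u - us) ** 2 for u, us in zip(U, u_star)))
            for U in iterates]
    return errs[-1] / errs[-2]
```

Smaller values of the estimate indicate faster linear convergence; this kind of estimator underlies the comparisons in Section IV-C, where the regression slopes in a semi-logarithmic plot play the same role.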

### IV-B Quadruped Trot Optimization Example

The quadrupedal robot ‘HyQ’ is an 18-DoF, floating-base, underactuated robot subject to contacts with the environment, cf. Fig. 3.

Fig. 3: The quadruped HyQ

In this paper, the contacts are not incorporated as constraints, but added to the system dynamics using an explicit contact model. We employ a static, plain environment and a ‘soft’ contact model, consisting of a nonlinear spring in surface-normal direction and a nonlinear damping term. Using this formulation, the contact force is a function of the current robot state only. It is clear that such a soft contact model presents only a rough approximation of the complicated physics of contact, and it also introduces a number of potential disadvantages, such as increased stiffness and nonlinearity of the combined system dynamics. However, the contact model allows a straightforward computation of derivatives, which creates an ideal test-bed for comparing our shooting algorithms. We obtain exact discrete sensitivities $A_n^{[k]}$ and $B_n^{[k]}$ by evaluating a sensitivity differential equation on the multiple-shooting intervals.

The example task considered is the optimization of a periodic trotting gait. To achieve the trotting gait, we impose a time-varying quadratic penalty on the leg joint positions. Furthermore, we penalize the intermediate and final position of the robot’s trunk and the intermediate and final velocities of the leg joints. An in-depth description of the cost modelling used to achieve different gait patterns is given in the related literature. In the following, the trotting gait optimization is used to compare the algorithms developed in this paper. For a meaningful comparison, we initialize all algorithms with identical state trajectories and control policies. The initial guess corresponds to standing still in a steady state. We optimize over 36 states, 12 control inputs and a total time horizon of 2.5 seconds with a time-step of 1 ms (2500 stages).

Fig. 4: Comparing different algorithms from the family of Gauss-Newton shooting methods w.r.t. cost descent, total defects and control update norm, illustrated with the example of a quadruped trot optimization problem.

Fig. 4 compares iLQR, GNMS and iLQR-GNMS(M) with three different numbers of multiple-shooting intervals in terms of cost descent, control update norms and total defects. Note that SS and the open-loop GNMS(M) variants are unstable here due to the strong instability of the system. All remaining algorithms converge to the same minimum within 20 iterations. iLQR and iLQR-GNMS(25) show short phases of increasing cost, which we accept in this simulation example. Since the provided initial guess is dynamically consistent, the initial defects are zero. GNMS, having the largest number of multiple-shooting intervals (2500), also shows the largest total defect sum after the first iteration. The multiple-shooting iLQR methods, all having significantly fewer continuity constraints to enforce, feature a lower total defect.

As expected for a Gauss-Newton method, all approaches show linear convergence. Considering the control update norms, we see that GNMS and iLQR feature a similar contraction rate for this example. In fact, the contraction rate of GNMS is slightly better, which is visually hard to distinguish here, but is detailed in the following example. For the hybrid multiple-shooting iLQR variants, we observe a significantly better contraction rate than for both iLQR and GNMS. When applying an identical termination criterion, based on the relative change of the cost function and a defect threshold, to all algorithms, all lifted methods converge in fewer iterations than iLQR. Furthermore, all displayed iLQR-GNMS(M) variants converge notably faster than GNMS. Screen recordings of the optimized trotting motions are provided in the video attachment.

### IV-C Local Contraction Rates for Quadruped Trot Tracking

While Section IV-B gives an optimization example for a single motion, starting with an initial guess far from the optimal solution, we now show a comparison based on statistical data from 1000 runs: the trotting gait from Section IV-B is now considered in a tracking MPC problem. All algorithms are initialized with an optimal, dynamically consistent solution, but the initial state is locally perturbed. The state perturbations are sampled from hardware experiments. For every perturbation, we let the different algorithms iterate until convergence. Fig. 5 compares average asymptotic contraction rates for four different algorithms. It shows the normalized difference between a fully converged optimal feedforward trajectory and trajectories obtained at previous iterations. Furthermore, it shows first-order regressions approximating the local contraction rates, in terms of the slopes of the difference norms in the semi-logarithmic plot. It can be seen that GNMS outperforms iLQR in terms of local contraction rate. GNMS(50) shows a contraction rate similar to GNMS. The example indicates better local convergence for iLQR-GNMS(50) than for classical iLQR, GNMS and GNMS(50).

Fig. 5: A statistical comparison of local contraction rates for four different algorithms. This plot shows the normalized difference between solutions to perturbed optimal control problems and a fully converged reference solution. The normalization is w.r.t. the iLQR control trajectory. The data is averaged from 1000 samples; the corresponding standard deviations are plotted as error-bars. Estimates for the contraction rates are indicated by straight lines. GNMS outperforms iLQR in terms of local contraction rate, and gets closer to the true optimal solution in fewer iterations. In contrast to the example in Section IV-B, GNMS(50) is stable and shows a contraction rate similar to GNMS. iLQR-GNMS(50) clearly outperforms all other algorithms with a significantly better local contraction rate. After 4 iterations, it is on average 0.1% away from the fully converged reference solution, while iLQR is on average 14% away.

Fig. 6 generalizes the result from Fig. 5 to a range of multiple-shooting intervals $M$, showing numerically approximated asymptotic contraction rates, Equation (16), as a function of $M$. Again, GNMS(M) is unstable for overly long multiple-shooting intervals, similar to the limiting case of open-loop single shooting. For closed-loop shooting, the asymptotic contraction rates for all multiple-shooting variants are better than for iLQR, and the contraction rates for the hybrid variants outperform the limiting case GNMS. In this example, the relative improvement over iLQR is up to a factor of two. Note that the resulting iLQR-GNMS(M) contraction rates differ slightly from the ones in Fig. 4 for the same $M$, which is due to the different problem setting. However, both experiments exhibit the same trend.

Fig. 6: Comparing asymptotic contraction rates as a function of the number of multiple-shooting intervals M. For GNMS(M), the integration on the multiple-shooting intervals is only stable for M≥20. For this range of ‘stable’ multiple-shooting intervals, the contraction rates are almost constant and similar to the limiting case GNMS. The iLQR multiple-shooting methods are stable for all M. Here, the contraction rates for intermediate numbers of multiple-shooting intervals are distinctly better than for the limit cases iLQR and GNMS.

### IV-D Nonlinear MPC on HyQ

The suitability of iLQR for nonlinear model predictive control (NMPC) in robotics applications has been shown many times before, e.g. in [6, 20, 21]. In this section, we show that GNMS and its hybrid variants are even more promising for NMPC applications. First, they converge faster to the optimal solution, cf. Section IV-C. A second advantage of the multiple-shooting variants of the presented algorithms is that the forward integrations can be parallelized. Therefore, the achievable MPC cycle time decreases approximately linearly with the number of available CPU cores. By combining faster update rates with better contraction rates, our multiple-shooting algorithms outperform classical iLQR-NMPC.

In the following simulation example, we compare the NMPC performance of our Gauss-Newton shooting algorithms against iLQR-NMPC in a HyQ simulation environment. In each NMPC cycle, we run an adapted version of the main iteration in Algorithm 1 and ‘warm-start’ it with the previous solution. In such a setting, we can separate an NMPC iteration into a ‘preparation’ and a ‘feedback’ phase, thus minimizing the latency between receiving a state measurement and sending an updated policy to the control system. Our NMPC loop is described in Algorithm 2.

In this experiment, we run a trotting gait on HyQ in closed-loop MPC in a simulation environment. For the NMPC optimal control problem, we choose a time-step size of 4 ms. We parallelize the integration of all multiple-shooting intervals and the sensitivity computation on four cores, and run both the simulator and the MPC controller on the same notebook equipped with an Intel Core i7 (2.8 GHz) processor. For four different algorithmic combinations, we record the executed trot under identical conditions for 18 seconds and compute the resulting accumulated intermediate cost. A summary of the achieved average cost and NMPC frequencies is given in Fig. 7. In these experiments, iLQR results in the highest accumulated cost. The multiple-shooting variants outperform iLQR, with relative cost differences of up to 5%. At the same time, due to shorter runtimes, the multiple-shooting variants achieve up to 40% higher MPC frequencies, with a maximum of 103 Hz.

In our simulation, all four algorithm variants run in a stable and robust fashion. The relatively small cost difference is an indicator of better convergence, but the main reason why the multiple-shooting variants should be preferred over iLQR in real-world applications is the superior control bandwidth. The algorithms in this paper have been validated in hardware experiments on two different quadruped platforms, where a variety of motions and gaits was implemented. However, an in-depth description of the experimental setups, the optimized computational framework and practical tuning considerations is beyond the scope of this paper. The interested reader is therefore referred to our related work, where we apply the GNMS algorithm for full-body NMPC on the quadrupeds HyQ and ANYmal, explain the robotic setup in detail and present a variety of hardware experiments. As an outlook, a video sequence of GNMS-NMPC running on hardware is provided in the attachment.

Fig. 7: Comparing four different solvers for HyQ-MPC. The left plot compares costs accumulated over 18.0 seconds of trotting-MPC execution. iLQR shows the highest accumulated cost. The multiple-shooting variants outperform iLQR, but the relative cost difference is small. GNMS runs at 96.3% of the iLQR cost, GNMS(25) at 95.9% and iLQR-GNMS(25) at 95.0%. Regarding the achieved average MPC frequencies, shown on the right, we find up to 40% higher frequencies for the multiple-shooting variants. The maximum average MPC frequency, 103 Hz, is achieved using GNMS.

## V Conclusion and Outlook

In this paper, we have shown how the well-known iLQR algorithm can be lifted and transformed into Gauss-Newton Multiple Shooting, GNMS. We have generalized the concept to form a family of Gauss-Newton shooting algorithms, which can be distinguished into sequential and simultaneous, as well as closed-loop and open-loop, variants. Some algorithms partially overwrite decision variables by means of a numeric forward integration. All presented variants have approximately the same computational cost and feature linear time-complexity. Furthermore, all discussed algorithms share a large number of computational routines, and it is not difficult to implement all of the presented variants in a single software framework.

We have compared the performance of the algorithms in different simulation experiments, which indicate that the lifted algorithms can outperform classical iLQR. While not included in this paper for reasons of compactness, similar results were obtained for other rigid-body dynamic systems, including a 6-DoF fixed-base arm model. A more fundamental investigation formalizing the conditions that result in improved convergence rates for GNMS(M) and iLQR-GNMS(M) is subject to ongoing work.

In the application examples, we limited the comparison to full-step variants of all considered algorithms. However, for even more nonlinear dynamics or cost functions, where the LQ optimal control problem is a poor approximation of the nonlinear problem, a globalization strategy may be required. For single-shooting methods, a straightforward solution is to employ a line-search scheme. This is simple to implement, as it suffices to search over the cost for different control update step sizes. For multiple-shooting approaches, however, there are additional continuity constraints, and we need to line-search over a merit function which trades off the costs and the defects. This typically requires introducing additional tuning variables or implementing a complex filter scheme . Our open-source reference implementation  provides a line-search scheme using a simple merit function.
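To make the idea concrete, the following is a minimal sketch of a backtracking line search over such a merit function. The interface `evaluate(alpha)`, the l1 penalty weight `mu`, and all names are illustrative assumptions, not taken from our implementation; `evaluate` stands in for a forward rollout of the candidate update scaled by step size `alpha`.

```python
import numpy as np

def merit(cost, defects, mu):
    """l1 merit function: cost plus a weighted l1-norm of the defects.
    mu is a penalty weight trading off cost decrease against continuity."""
    return cost + mu * sum(np.linalg.norm(d, 1) for d in defects)

def backtracking_line_search(evaluate, alpha0=1.0, beta=0.5, mu=10.0, max_iters=10):
    """Backtrack over the step size alpha until the merit function decreases.
    `evaluate(alpha)` is assumed to return (total cost, list of defect vectors)
    for the candidate update scaled by alpha."""
    cost0, defects0 = evaluate(0.0)           # current iterate (zero step)
    m_ref = merit(cost0, defects0, mu)
    alpha = alpha0
    for _ in range(max_iters):
        cost, defects = evaluate(alpha)
        if merit(cost, defects, mu) < m_ref:  # accept first improving step
            return alpha
        alpha *= beta                         # otherwise shrink the step
    return 0.0                                # no acceptable step found
```

Note that the full step `alpha0 = 1.0` is tried first, so well-conditioned problems where the full-step variant suffices incur only a single extra rollout.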

For complex robot trajectory optimization problems, we do not generally recommend prioritizing one of the presented algorithms over another. While the multiple-shooting algorithms allow for advanced initialization strategies and are more robust w.r.t. bad initial guesses, they may require slightly more tuning effort when the full-step algorithm is not sufficient. By contrast, in NMPC applications with well-defined cost functions and warm-starting, additional globalization steps are rarely required at all. Here, our multiple-shooting algorithms offer significant advantages: better local contraction rates and much shorter runtimes.

The focus of this paper is on unconstrained optimal control problems without general (in)equality path constraints. The lifting approach naturally transfers to equality-constrained variants of iLQR, such as . The inclusion of general (in)equality path constraints is part of ongoing work. One option to include them in the existing framework is to replace the standard Riccati backward sweep with a dedicated solver for constrained LQ optimal control problems . In this way, general (in)equality path constraints can be included while retaining linear time-complexity.
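For reference, the standard unconstrained Riccati backward sweep that such a constrained solver would replace can be sketched as follows. This is a generic textbook recursion for a discrete-time LQ problem with dynamics x_{k+1} = A_k x_k + B_k u_k and stage cost x'Q x + u'R u; the function name and interface are illustrative, not our framework's API.

```python
import numpy as np

def lq_backward_sweep(A, B, Q, R, Qf):
    """Discrete-time Riccati backward sweep for an unconstrained LQ problem.
    A, B, Q, R are lists of stage matrices, Qf is the terminal cost weight.
    Returns feedback gains K_k such that u_k = -K_k @ x_k."""
    N = len(A)
    S = Qf                         # cost-to-go Hessian, initialized at terminal
    gains = []
    for k in reversed(range(N)):
        H = R[k] + B[k].T @ S @ B[k]                 # input Hessian
        K = np.linalg.solve(H, B[k].T @ S @ A[k])    # optimal feedback gain
        S = Q[k] + A[k].T @ S @ (A[k] - B[k] @ K)    # Riccati recursion
        gains.append(K)
    return list(reversed(gains))   # reorder gains as k = 0, ..., N-1
```

Each backward step costs a fixed number of matrix operations, which is what gives the sweep (and its constrained replacements) linear complexity in the horizon length.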

While this work treats algorithms using a Gauss-Newton Hessian approximation, the approach similarly transfers to exact-Hessian methods, resulting in a multiple-shooting DDP algorithm that combines the advantages of simultaneous methods, quadratic convergence and closed-loop integration.

## Acknowledgements

The authors would like to thank Dimitris Kouzoupis, Gianluca Frison, Mario Zanon and Timothy Sandy for fruitful discussions. This research was supported by the Swiss National Science Foundation through the NCCR Digital Fabrication, the NCCR Robotics and a Professorship Award to Jonas Buchli. Further, this research was supported by the EU via FP7-ITN-TEMPO (607 957) and H2020-ITN-AWESCO (642 682), by the Federal Ministry for Economic Affairs and Energy (BMWi) via eco4wind and DyConPV, and by DFG via Research Unit FOR 2401.

## References

•  D. Mayne, “A second-order gradient method for determining optimal trajectories of non-linear discrete-time systems,” International Journal of Control, vol. 3, no. 1, pp. 85–95, 1966.
•  H. G. Bock and K.-J. Plitt, “A multiple shooting algorithm for direct solution of optimal control problems,” in Proceedings of the IFAC World Congress, 1984.
•  M. Diehl, H. G. Bock, H. Diedam, and P.-B. Wieber, “Fast direct multiple shooting algorithms for optimal robot control,” in Fast motions in biomechanics and robotics, pp. 65–93, Springer, 2006.
•  J. Albersmeyer and M. Diehl, “The Lifted Newton Method and its Application in Optimization,” SIAM Journal on Optimization, vol. 20, no. 3, pp. 1655–1684, 2010.
•  C. V. Rao, S. J. Wright, and J. B. Rawlings, “Application of interior-point methods to model predictive control,” Journal of Optimization Theory and Applications, vol. 99, pp. 723–757, Dec 1998.
•  J. Koenemann, A. Del Prete, Y. Tassa, E. Todorov, O. Stasse, M. Bennewitz, and N. Mansard, “Whole-body model-predictive control applied to the HRP-2 humanoid,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3346–3351, 2015.
•  Y. Tassa, T. Erez, and E. Todorov, “Synthesis and stabilization of complex behaviors through online trajectory optimization,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4906–4913, Oct 2012.
•  M. Neunert, F. Farshidian, A. W. Winkler, and J. Buchli, “Trajectory optimization through contacts and automatic gait discovery for quadrupeds,” IEEE Robotics and Automation Letters (RA-L), 2017.
•  B. Ponton, S. Schaal, and L. Righetti, “Risk sensitive nonlinear optimal control with measurement uncertainty,” CoRR, 2016.
•  F. Farshidian, M. Neunert, A. W. Winkler, G. Rey, and J. Buchli, “An efficient optimal planning and control framework for quadrupedal locomotion,” in IEEE International Conference on Robotics and Automation (ICRA), 2017.
•  E. Todorov and W. Li, “A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems,” in American Control Conference, 2005. Proceedings of the 2005, pp. 300–306, IEEE, 2005.
•  A. Sideris and J. E. Bobrow, “An efficient sequential linear quadratic algorithm for solving nonlinear optimal control problems,” Transactions on Automatic Control, vol. 50, no. 12, pp. 2043–2047, 2005.
•  A. Schäfer, Efficient Reduced Newton-Type Methods for Solution of Large-Scale Structured Optimization Problems with Application to Biological and Chemical Processes. PhD thesis, University of Heidelberg, 2005.
•  “The ‘Control Toolbox’ – An Open Source Library for Robotics and Optimal Control.” https://adrlab.bitbucket.io/ct, 2017. [Online; accessed 25-November-2017].
•  C. Semini, N. G. Tsagarakis, E. Guglielmino, M. Focchi, F. Cannella, and D. G. Caldwell, “Design of HyQ–a hydraulically and electrically actuated quadruped robot,” Institution of Mechanical Engineers, Journal of Systems and Control Engineering, vol. 225, pp. 831–849, 2011.
•  M. Giftthaler, M. Neunert, M. Stäuble, M. Frigerio, C. Semini, and J. Buchli, “Automatic Differentiation of Rigid Body Dynamics for Optimal Control and Estimation,” Advanced Robotics, November 2017.
•  R. P. Dickinson and R. J. Gelinas, “Sensitivity analysis of ordinary differential equation systems—a direct method,” Journal of Computational Physics, vol. 21, no. 2, pp. 123–143, 1976.
•  M. Neunert, M. Stäuble, M. Giftthaler, C. D. Bellicoso, J. Carius, C. Gehring, M. Hutter, and J. Buchli, “Whole-Body Nonlinear Model Predictive Control Through Contacts for Quadrupeds,” 2017. arXiv:1712.02889 [cs.RO].
•  M. Neunert, C. de Crousaz, F. Furrer, M. Kamel, F. Farshidian, R. Siegwart, and J. Buchli, “Fast nonlinear model predictive control for unified trajectory optimization and tracking,” in IEEE International Conference on Robotics and Automation (ICRA), 2016.
•  M. Giftthaler, F. Farshidian, T. Sandy, L. Stadelmann, and J. Buchli, “Efficient Kinematic Planning for Mobile Manipulators with Non-holonomic Constraints Using Optimal Control,” in IEEE International Conference on Robotics and Automation (ICRA), 2017.
•  M. Diehl, H. Bock, J. P. Schlöder, R. Findeisen, Z. Nagy, and F. Allgöwer, “Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations,” Journal of Process Control, vol. 12, no. 4, pp. 577 – 585, 2002.
•  S. Schaal, “The SL simulation and real-time control software package,” tech. rep., Los Angeles, CA, 2009.
•  M. Hutter, C. Gehring, D. Jud, A. Lauber, C. D. Bellicoso, V. Tsounis, J. Hwangbo, K. Bodie, P. Fankhauser, M. Bloesch, R. Diethelm, S. Bachmann, A. Melzer, and M. Hoepflinger, “Anymal - a highly mobile and dynamic quadrupedal robot,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 38–44, Oct 2016.
•  J. Nocedal and S. J. Wright, Numerical Optimization. Springer, 1999.
•  A. Sideris and L. A. Rodriguez, “A Riccati approach for constrained linear quadratic optimal control,” International Journal of Control, vol. 84, no. 2, pp. 370–380, 2011.
•  G. Frison, Algorithms and Methods for Fast Model Predictive Control. PhD thesis, Technical University of Denmark, 2015.