Neural Lander: Stable Drone Landing Control using Learned Dynamics

by Guanya Shi et al.

Precise trajectory control near ground is difficult for multi-rotor drones, due to the complex ground effects caused by interactions between multi-rotor airflow and the environment. Conventional control methods often fail to properly account for these complex effects and fall short in accomplishing smooth landing. In this paper, we present a novel deep-learning-based robust nonlinear controller (Neural-Lander) that improves control performance of a quadrotor during landing. Our approach combines a nominal dynamics model with a Deep Neural Network (DNN) that learns the high-order interactions. We employ a novel application of spectral normalization to constrain the DNN to have bounded Lipschitz behavior. Leveraging this Lipschitz property, we design a nonlinear feedback linearization controller using the learned model and prove system stability with disturbance rejection. To the best of our knowledge, this is the first DNN-based nonlinear feedback controller with stability guarantees that can utilize arbitrarily large neural nets. Experimental results demonstrate that the proposed controller significantly outperforms a baseline linear proportional-derivative (PD) controller in both 1D and 3D landing cases. In particular, we show that compared to the PD controller, Neural-Lander can decrease error in the z direction from 0.13 m to zero, and mitigate average x and y drifts by 90% and 34%, respectively, in 1D landing. Meanwhile, Neural-Lander can decrease z error from 0.12 m to zero, in 3D landing. We also empirically show that the DNN generalizes well to new test inputs outside the training domain.




I Introduction

Unmanned Aerial Vehicles (UAVs) require high precision control of aircraft positioning, especially during landing and take-off. This problem is challenging largely due to complex interactions of rotor airflows with the ground. The aerospace community has long identified the change in aerodynamic forces when helicopters or aircraft fly close to the ground. Such ground effects cause an increased lift force and a reduced aerodynamic drag, which can be both helpful and disruptive in flight stability [1], and the complications are exacerbated with multiple rotors. Therefore, performing automatic landing of UAVs is risk-prone, and requires expensive high-precision sensors as well as carefully designed controllers.

Compensating for ground effect is a long-standing problem in the aerial robotics community. Prior work has largely focused on mathematical modeling (e.g. [2]) as part of system identification (ID). These ground-effect models are later used to approximate aerodynamic forces during flights close to the ground and combined with controller design for feed-forward cancellation (e.g. [3]). However, existing theoretical ground effect models are derived based on steady-flow conditions, whereas most practical cases exhibit unsteady flow. Alternative approaches, such as integral or adaptive control methods, often suffer from slow response and delayed feedback. [4] employs Bayesian Optimization for open-air control but not for take-off/landing. Given these limitations, the precision of existing fully automated systems for UAVs is still insufficient for landing and take-off, thereby necessitating the guidance of a human UAV operator during those phases.

To capture complex aerodynamic interactions without being overly constrained by conventional modeling assumptions, we take a machine-learning (ML) approach to build a black-box ground effect model using Deep Neural Networks (DNNs). However, incorporating black-box models into a UAV controller faces three key challenges. First, DNNs are notoriously data-hungry, and it is challenging to collect sufficient real-world training data. Second, due to high dimensionality, DNNs can be unstable and generate unpredictable output, which makes the system susceptible to instability in the feedback control loop. Third, DNNs are often difficult to analyze, which makes it difficult to design provably stable DNN-based controllers.

The aforementioned challenges pervade previous works using DNNs to capture high-order non-stationary dynamics. For example, [5, 6] use DNNs to improve system ID of helicopter aerodynamics but not for downstream controller design. Other approaches aim to generate reference inputs or trajectories from DNNs [7, 8, 9, 10]. However, such approaches can lead to challenging optimization problems [7], or heavily rely on a well-designed closed-loop controller and require a large amount of labeled training data [8, 9, 10]. A more classical approach of using DNNs is direct inverse control [11, 12, 13], but the non-parametric nature of a DNN controller also makes it challenging to guarantee stability and robustness to noise. [14] proposes a provably stable model-based Reinforcement Learning method based on Lyapunov analysis. However, their approach requires a potentially expensive discretization step and relies on the native Lipschitz constant of the DNN.

Contributions. In this paper, we propose a learning-based controller, Neural-Lander, to improve the precision of quadrotor landing with guaranteed stability. Our approach directly learns the ground effect on coupled unsteady aerodynamics and vehicular dynamics. We use deep learning for system ID of residual dynamics and then integrate it with nonlinear feedback linearization control.

We train DNNs with spectral normalization of layer-wise weight matrices. We prove that the resulting controller is globally exponentially stable under bounded learning errors. This is achieved by exploiting the Lipschitz bound of spectrally normalized DNNs. It has earlier been shown that spectral normalization of DNNs leads to good generalization, i.e. stability in a learning-theoretic sense [15]. It is intriguing that spectral normalization simultaneously guarantees stability both in a learning-theoretic and a control-theoretic sense.

We evaluate Neural-Lander for trajectory tracking of a quadrotor during take-off, landing and near-ground maneuvers. Neural-Lander is able to land a quadrotor much more accurately than a naive PD controller with a pre-identified system. In particular, we show that compared to the PD controller, Neural-Lander can decrease error in the $z$ direction from 0.13 m to zero, and mitigate $x$ and $y$ drifts by 90% and 34%, respectively, in 1D landing. Meanwhile, Neural-Lander can decrease $z$ error from 0.12 m to zero, in 3D landing. We also demonstrate that the learned ground-effect model can handle temporal dependency, and is an improvement over the steady-state theoretical models in use today.

II Problem Statement: Quadrotor Landing

Given quadrotor states as global position $p \in \mathbb{R}^3$, velocity $v \in \mathbb{R}^3$, attitude rotation matrix $R \in \mathrm{SO}(3)$, and body angular velocity $\omega \in \mathbb{R}^3$, we consider the following dynamics:

$$\dot{p} = v, \qquad m\dot{v} = m g + R f_u + f_a,$$
$$\dot{R} = R S(\omega), \qquad J\dot{\omega} = J\omega \times \omega + \tau_u + \tau_a, \tag{1}$$

where $g = [0, 0, -g]^\top$ is the gravity vector, $S(\cdot)$ is the skew-symmetric mapping, and

$f_u = [0, 0, T]^\top$ and $\tau_u = [\tau_x, \tau_y, \tau_z]^\top$ are the total thrust and body torques from four rotors predicted by a nominal model. We use $\eta = [T, \tau_x, \tau_y, \tau_z]^\top$ to denote the output wrench. The linear equation $\eta = B_0 u$ relates the control input $u = [n_1^2, n_2^2, n_3^2, n_4^2]^\top$ of squared motor speeds to the output wrench, with its nominal relation given as:

$$B_0 = \begin{bmatrix} c_T & c_T & c_T & c_T \\ 0 & c_T l_{\mathrm{arm}} & 0 & -c_T l_{\mathrm{arm}} \\ -c_T l_{\mathrm{arm}} & 0 & c_T l_{\mathrm{arm}} & 0 \\ -c_Q & c_Q & -c_Q & c_Q \end{bmatrix}, \tag{2}$$

where $c_T$ and $c_Q$ denote some empirical coefficient values for the force and torque generated by an individual rotor, and $l_{\mathrm{arm}}$ denotes the length of each rotor arm.

The key difficulty of precise landing is the influence of the unknown disturbance forces $f_a$ and torques $\tau_a$, which originate from complex aerodynamic interactions between the quadrotor and the environment. For example, during the landing process, when the quadrotor is close to the ground, the vertical aerodynamic force $f_{a,z}$ will be significant. Also, as the velocity increases, air drag will be exacerbated, which contributes to $f_a$.

Problem Statement: For (1), our goal is to learn the unknown disturbance forces $f_a$ and torques $\tau_a$ from partial states and control inputs, in order to improve the controller accuracy. In this paper, we are only interested in position dynamics (the first two equations in eq. 1). As we mainly focus on landing and take-off, the attitude dynamics is limited and the aerodynamic disturbance torque $\tau_a$ is bounded. We take a deep learning approach by approximating $f_a$ using a Deep Neural Network (DNN), followed by spectral normalization to guarantee the stability of the DNN outputs. We then design an exponentially-stabilizing controller with better robustness than one using only the nominal system dynamics. Training is done offline, and the learned dynamics is applied in the onboard controller in real time.

III Learning Stable DNN Dynamics

To learn the residual dynamics, we employ a deep neural network with Rectified Linear Unit (ReLU) activation. In general, DNNs equipped with ReLU converge faster during training, demonstrate more robust behavior with respect to hyperparameter changes, and have fewer vanishing gradient problems compared to other activation functions such as sigmoid and tanh [16].

III-A ReLU Deep Neural Networks

A ReLU deep neural network represents the functional mapping from the input $x$ to the output $f(x; \theta)$, parameterized by the DNN weights $\theta = W^1, \ldots, W^{L+1}$:

$$f(x; \theta) = W^{L+1}\phi(W^{L}(\phi(W^{L-1}(\cdots \phi(W^1 x)\cdots)))), \tag{3}$$

where the activation function $\phi(\cdot) = \max(\cdot, 0)$ is called the element-wise ReLU function. ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. However, deep neural networks are usually trained by first-order gradient-based optimization, which is highly dependent on the curvature of the training objective and can be very unstable [17]. To alleviate this issue, we apply the spectral normalization technique [15] in the feedback control loop to guarantee stability.

III-B Spectral Normalization

Spectral normalization stabilizes DNN training by constraining the Lipschitz constant of the objective function. Spectral normalization has also been shown to generalize well [18], and in machine learning generalization is a notion of stability. Mathematically, the Lipschitz constant of a function, $\|f\|_{\mathrm{Lip}}$, is defined as the smallest value such that

$$\forall x, x': \quad \|f(x) - f(x')\|_2 \le \|f\|_{\mathrm{Lip}} \, \|x - x'\|_2.$$

It is known that the Lipschitz constant of a general differentiable function $f$ is the maximum spectral norm (maximum singular value) of its gradient over its domain:

$$\|f\|_{\mathrm{Lip}} = \sup_x \sigma(\nabla f(x)).$$
The ReLU DNN in eq. 3 is a composition of functions. Thus we can bound the Lipschitz constant of the network by constraining the spectral norm of each layer $W^l$. For a linear map $g(x) = Wx$, the spectral norm of each layer is given by $\|g\|_{\mathrm{Lip}} = \sigma(W)$. Using the fact that the Lipschitz norm of the ReLU activation function is equal to 1, together with the inequality $\|g_1 \circ g_2\|_{\mathrm{Lip}} \le \|g_1\|_{\mathrm{Lip}} \cdot \|g_2\|_{\mathrm{Lip}}$, we can find the following bound on $\|f\|_{\mathrm{Lip}}$:

$$\|f\|_{\mathrm{Lip}} \le \prod_{l=1}^{L+1} \sigma(W^l). \tag{4}$$
In practice, we can apply spectral normalization to the weight matrices in each layer during training as follows:

$$\bar{W}^l = W^l / \sigma(W^l) \cdot \gamma^{1/(L+1)}, \tag{5}$$

where $\gamma$ is the intended Lipschitz constant of the entire network.
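The per-layer normalization can be sketched concretely. The following minimal numpy sketch is not the authors' implementation: power iteration is the standard estimator of the spectral norm $\sigma(W)$, and `gamma_layer` stands in for a generic per-layer budget such as $\gamma^{1/(L+1)}$.

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration."""
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def normalize_layer(W, gamma_layer=1.0):
    """Rescale W so its spectral norm is at most gamma_layer."""
    sigma = spectral_norm(W)
    return W if sigma <= gamma_layer else W * (gamma_layer / sigma)

W = np.array([[3.0, 0.0], [0.0, 1.0]])       # sigma(W) = 3
W_bar = normalize_layer(W, gamma_layer=1.0)
print(np.linalg.norm(W_bar, 2))              # ≈ 1.0
```

In a training loop this rescaling is applied after every gradient step, so the bound in eq. 4 holds for the weights actually used by the controller.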
The following lemma bounds the Lipschitz constant of a ReLU DNN with spectral normalization.

Lemma III.1

For a multi-layer ReLU network $f(x; \theta)$, defined in eq. 3 without an activation function on the output layer, using spectral normalization the Lipschitz constant of the entire network satisfies:

$$\|f(x; \bar{\theta})\|_{\mathrm{Lip}} \le \gamma,$$

with spectrally-normalized parameters $\bar{\theta} = \bar{W}^1, \ldots, \bar{W}^{L+1}$.

As in eq. 4, the Lipschitz constant can be written as a composition of spectral norms over all layers. The proof follows from the spectral norms constrained as in eq. 5.
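This lemma can be sanity-checked numerically. The sketch below uses illustrative layer sizes and a numpy stand-in for the trained network (not the paper's model): each random layer is capped at $\gamma^{1/(L+1)}$, and empirical difference quotients then never exceed $\gamma$.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 2.0
n_layers = 3                       # three weight matrices -> L + 1 = 3
cap = gamma ** (1.0 / n_layers)    # per-layer spectral-norm budget

def sn(W):
    """Cap the spectral norm of W at the per-layer budget."""
    return W * min(1.0, cap / np.linalg.norm(W, 2))

Ws = [sn(rng.standard_normal((16, 4))),
      sn(rng.standard_normal((16, 16))),
      sn(rng.standard_normal((3, 16)))]

def f(x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0)   # 1-Lipschitz ReLU hidden layers
    return Ws[-1] @ h              # linear output layer (no activation)

# Empirical Lipschitz ratios stay below the certified bound gamma.
ratios = []
for _ in range(1000):
    x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
    ratios.append(np.linalg.norm(f(x1) - f(x2)) / np.linalg.norm(x1 - x2))
print(max(ratios) <= gamma)        # True
```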

III-C Constrained Training

We apply first-order gradient-based optimization to train the ReLU DNN. Estimating $f_a$ in (1) boils down to optimizing the parameters $\theta$ in the ReLU network model in eq. 3, given observed values of the input and the target output. In particular, we want to control the Lipschitz constant of the ReLU network.

The optimization objective is as follows, where we minimize the prediction error with a constrained Lipschitz constant:

$$\min_{\theta} \; \frac{1}{T}\sum_{t=1}^{T} \|y_t - f(\zeta_t; \theta)\|_2^2 \quad \text{subject to} \quad \|f\|_{\mathrm{Lip}} \le \gamma. \tag{6}$$

Here $y_t$ is the observed disturbance force and $\zeta_t$ is the observed states and control inputs. According to the upper bound in eq. 4, we can substitute the constraint by minimizing the spectral norm of the weights in each layer. We use stochastic gradient descent (SGD) to optimize eq. 6 and apply spectral normalization to regulate the weights. From Lemma III.1, the trained ReLU DNN has a bounded Lipschitz constant.
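The constrained training step can be read as projected SGD: take a gradient step, then rescale any layer whose spectral norm exceeds its budget. This minimal numpy example is a hedged sketch, with a toy 1D regression standing in for the $(\zeta_t, y_t)$ data; the paper's PyTorch architecture, learning rates, and $\gamma$ are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: regress y = sin(3x), a stand-in for (states, inputs) -> f_a.
X = rng.uniform(-1, 1, (256, 1))
Y = np.sin(3 * X)

# Two-layer ReLU net: y = W2 @ relu(W1 x + b1) + b2
H = 32
W1, b1 = rng.standard_normal((H, 1)) * 0.5, np.zeros((H, 1))
W2, b2 = rng.standard_normal((1, H)) * 0.5, np.zeros((1, 1))

gamma = 4.0                 # target Lipschitz bound (illustrative)
per_layer = gamma ** 0.5    # two weight layers -> gamma^(1/2) each

def project(W, cap):
    """Spectral-norm projection: rescale W if its norm exceeds cap."""
    s = np.linalg.norm(W, 2)
    return W if s <= cap else W * (cap / s)

lr = 0.05
for step in range(500):
    Z = W1 @ X.T + b1            # (H, N) pre-activations
    A = np.maximum(Z, 0)         # ReLU
    P = W2 @ A + b2              # (1, N) predictions
    E = P - Y.T                  # prediction error
    # Manual backprop for the mean squared loss.
    gW2 = E @ A.T / len(X); gb2 = E.mean(axis=1, keepdims=True)
    dZ = (W2.T @ E) * (Z > 0)
    gW1 = dZ @ X / len(X); gb1 = dZ.mean(axis=1, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    # Projection step keeps the Lipschitz bound (Lemma III.1).
    W1, W2 = project(W1, per_layer), project(W2, per_layer)

lip_bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
print(lip_bound <= gamma + 1e-9)  # True by construction
```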

IV Neural-Lander Controller Design

We design our controller to allow 3D landing trajectory tracking for quadrotors. Our controller integrates a DNN-based dynamic learning module with a proportional-derivative (PD) controller. In order to keep the design simple, we re-design the PD controller to account for the disturbance force term learned from the ReLU DNN. We solve for the resulting nonlinear controller using fixed-point iteration.

IV-A Reference Trajectory Tracking

The position tracking error is defined as $\tilde{p} = p - p_d$. We design an integral controller with the composite variable:

$$s = \dot{\tilde{p}} + 2\Lambda \tilde{p} + \Lambda^2 \int_0^t \tilde{p}(\tau)\, d\tau, \tag{7}$$

with $\Lambda$ as a positive diagonal matrix. Then $s = 0$ is a manifold on which $\tilde{p} \to 0$ exponentially quickly. Having transformed the position tracking problem into a velocity tracking one, we would like the actual force exerted by the rotors to satisfy:

$$\bar{f}_d = m\dot{v}_r - K_v s - m g, \tag{8}$$

with reference velocity $v_r = \dot{p}_d - 2\Lambda\tilde{p} - \Lambda^2 \int_0^t \tilde{p}(\tau)\,d\tau$ (so that $s = v - v_r$), so that the closed-loop dynamics would simply become $m\dot{s} + K_v s = \epsilon$, where $\epsilon$ is the force approximation error. Hence, these exponentially-stabilizing dynamics guarantee that $p$ and $v$ converge exponentially and globally to $p_d$ and $\dot{p}_d$ with bounded error, if $\epsilon$ is bounded [19, 20] (see Sec. V). Let $\bar{f}_d$ denote the total desired force vector from the quadrotor; then the total thrust $T$ and desired force direction $\hat{k}_d$ can be computed from eq. 8,

$$T = \bar{f}_d \cdot \hat{k}, \qquad \hat{k}_d = \bar{f}_d / \|\bar{f}_d\|, \tag{9}$$

with $\hat{k} = R e_3$ being the direction of rotor thrust (typically the $z$-axis of the quadrotor). Using $\hat{k}_d$ and fixing a desired yaw angle, a desired attitude $R_d \in \mathrm{SO}(3)$ or a desired value of any attitude representation can be obtained [21]. We assume the attitude controller comes in the form of a desired torque $\tau_d$ to be generated by the four rotors. One such example is:


$$\tau_d = J\dot{\omega}_d - J\omega \times \omega - K_\omega \tilde{\omega}, \tag{10}$$

where $\tilde{\omega} = \omega - \omega_d$, with $K_\omega$ a positive-definite gain matrix; or see [20] for SO(3) tracking control. Note that eq. 10 guarantees exponential tracking of a desired attitude trajectory within some bounded error in the presence of some disturbance torques.
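The thrust decomposition above can be sketched concretely. This is a hedged illustration: the hover numbers and the identity attitude are illustrative, and the paper's sign conventions are assumed.

```python
import numpy as np

def thrust_decomposition(f_d, R):
    """Split a desired force vector into a scalar thrust along the
    current body z-axis and the desired thrust direction."""
    k_hat = R @ np.array([0.0, 0.0, 1.0])   # current rotor-thrust axis
    T = float(f_d @ k_hat)                  # total thrust command
    k_des = f_d / np.linalg.norm(f_d)       # desired body z-axis
    return T, k_des

# Hover example: level attitude, desired force pointing straight up
# with magnitude m*g for the 1.47 kg Intel Aero used in Sec. VI.
f_d = np.array([0.0, 0.0, 1.47 * 9.81])
T, k_des = thrust_decomposition(f_d, np.eye(3))
```

From `k_des` and a fixed yaw angle, the desired attitude is then reconstructed as in [21].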

IV-B Learning-based Discrete-time Nonlinear Controller

Using the methods described in Sec. III, we define $\hat{f}_a(\zeta, u)$ as the approximation to the disturbance aerodynamic forces, with $\zeta$ being the partial states used as input features. Then the desired total force is revised as $f_d = \bar{f}_d - \hat{f}_a(\zeta, u)$.

Because of the dependency of $\hat{f}_a$ on $u$, the control synthesis problem here uses a non-affine control input $u$:

$$B_0 u = \begin{bmatrix} (\bar{f}_d - \hat{f}_a(\zeta, u)) \cdot \hat{k} \\ \tau_d \end{bmatrix}. \tag{11}$$
We propose the following fixed-point iterative method for solving eq. 11:

$$u_k = B_0^{-1}\begin{bmatrix} (\bar{f}_d - \hat{f}_a(\zeta, u_{k-1})) \cdot \hat{k} \\ \tau_d \end{bmatrix}, \tag{12}$$

where $u_{k-1}$ is the control input from the previous time-step of the controller. The stability of the system and the convergence of the controller eq. 12 will be proved in Sec. V.
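The fixed-point iteration can be illustrated on a scalar toy problem. This is a hedged sketch: the nominal input-to-force map is taken as the identity for clarity (the real controller goes through the actuation matrix), and `fa_hat` is a stand-in for the learned DNN with Lipschitz constant 0.2 < 1, matching the contraction condition established in Sec. V.

```python
import numpy as np

def fa_hat(u):
    """Stand-in for the learned disturbance force: 0.2-Lipschitz in u,
    as spectral normalization would guarantee for the real DNN."""
    return 0.2 * np.tanh(u)

def solve_control(f_d, u0=0.0, tol=1e-10, max_iter=100):
    """Fixed-point iteration u_k = f_d - fa_hat(u_{k-1}):
    repeatedly re-solve the nominal problem with the previous
    iterate's predicted disturbance subtracted out."""
    u = u0
    for _ in range(max_iter):
        u_next = f_d - fa_hat(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

u_star = solve_control(f_d=5.0)
# At the fixed point, the non-affine relation u + fa_hat(u) = f_d holds.
```

Because the map is a contraction, convergence is geometric, which is what allows the iteration to run inside a high-rate control loop.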

V Nonlinear Stability Analysis

The closed-loop tracking error analysis provides direct guidance on how to tune the neural network and controller parameters to improve control performance and robustness.

V-A Control Allocation as Contraction Mapping

We first show that $u_k$ converges to the solution of eq. 11 when all states are fixed.

Lemma V.1

Fixing all current states, define the mapping $\mathcal{F}(u)$ based on eq. 12:

$$\mathcal{F}(u) = B_0^{-1}\begin{bmatrix} (\bar{f}_d - \hat{f}_a(\zeta, u)) \cdot \hat{k} \\ \tau_d \end{bmatrix}. \tag{13}$$

If $\hat{f}_a(\zeta, u)$ is $L_a$-Lipschitz continuous in $u$, and $\sigma(B_0^{-1}) \cdot L_a < 1$, then $\mathcal{F}(\cdot)$ is a contraction mapping, and $u_k$ converges to the unique solution of $u^* = \mathcal{F}(u^*)$.

Proof: With $\mathcal{U}$ being a compact set of feasible control inputs, and given fixed states, for any $u_1, u_2 \in \mathcal{U}$:

$$\|\mathcal{F}(u_1) - \mathcal{F}(u_2)\| \le \sigma(B_0^{-1})\, L_a \|u_1 - u_2\|.$$

Thus, $\|\mathcal{F}(u_1) - \mathcal{F}(u_2)\| < \|u_1 - u_2\|$. Hence, $\mathcal{F}(\cdot)$ is a contraction mapping.

V-B Stability of Learning-based Nonlinear Controller

Before continuing to prove stability of the full system, we make the following assumptions.

Assumption 1

The desired states along the position trajectory $p_d(t)$, $\dot{p}_d(t)$, and $\ddot{p}_d(t)$ are bounded.

Note that trajectory generation can guarantee tight bounds through optimization [21, 22] or simple clipping.

Assumption 2

The control input $u$ updates much faster than the position controller, and the one-step difference of the control signal satisfies $\|u_k - u_{k-1}\| \le \rho \|s\|$ with a small positive constant $\rho$.

Tikhonov's Theorem (Theorem 11.1 in [23]) provides a foundation for such a time-scale separation, where $u$ converges much faster than the slower position dynamics. From eq. 13, we can derive the following approximate relation:

By using the fact that the frequencies of attitude control and motor speed control are much higher than that of the position controller in practice, we can safely assume that the changes of $\bar{f}_d$, $\hat{k}$, and $R$ in one update step become negligible. Furthermore, the change of $\tau_d$ can be limited internally by the attitude controller. It leads to:

With these terms being small, and the contraction property from Lemma V.1, we can deduce that $u_k$ rapidly converges to a small ultimate bound between each position-controller update.

Assumption 3

The approximation error of $\hat{f}_a(\zeta, u)$ over the compact sets of feasible states and control inputs is upper bounded by $\epsilon_m = \sup \|\epsilon(\zeta, u)\|$, where $\epsilon = f_a - \hat{f}_a$.

DNNs have been shown to generalize well to the set of unseen events that are from almost the same distribution as a training set [24, 25]. This empirical observation is also theoretically studied in order to shed more light toward an understanding of the complexity of these models [26, 18, 27, 28]. Our experimental results show that our proposed training method in Sec. III generalizes well on unseen events and results in a better performance on unexplored data (Sec. VI-C). Composing our stability result rigorously with generalization error would be an interesting direction for future work. Based on these assumptions, we can now present our overall robustness result.

Theorem V.2

Under Assumptions 1-3, for a time-varying desired trajectory $p_d(t)$, the controller defined in eqs. 12 and 8, with gains satisfying the contraction condition of Lemma V.1, achieves exponential convergence of $s$ to an error ball determined by the approximation error:


where is further broken into with a constant and .

We begin the proof by selecting a Lyapunov function based on $s$, namely $\mathcal{V} = \frac{1}{2} m \|s\|^2$; then by applying the controller eq. 8, we get the time-derivative of $\mathcal{V}$:

$$\dot{\mathcal{V}} = s^\top(-K_v s + \epsilon).$$

Let $\lambda = \lambda_{\min}(K_v)$ denote the minimum eigenvalue of the positive-definite matrix $K_v$. By applying the Lipschitz property of the network approximator (Lemma III.1) and Assumption 2, we obtain

Using the Comparison Lemma [23], we define $\mathcal{W} = \sqrt{\mathcal{V}}$ and integrate the resulting differential inequality to obtain

It can be shown that this leads to finite-gain stability and input-to-state stability (ISS) [29]. Furthermore, the hierarchical combination between $s$ and $\tilde{p}$ in eq. 7 yields (14). Note that disabling the integral control in eq. 7 results in a larger ultimate bound.

By designing the controller gain $K_v$ and the Lipschitz constant of the DNN, we can ensure exponential tracking within the stated bound.

VI Experiments

In our experiments, we evaluate both the generalization performance of our DNN as well as the overall control performance of Neural-Lander. The experimental setup is composed of 17 motion-capture cameras, a communication router for sending signals, and the drone. The data was collected from an Intel Aero quadrotor weighing 1.47 kg with an onboard computer (2.56 GHz Intel Atom x7 processor, 4 GB DDR3 RAM). We retrofitted the drone with eight reflective markers to allow for accurate position, attitude and velocity estimation at 100 Hz. The Intel Aero drone and the test space are shown in Fig. 1.

VI-A Bench Test

To identify a good nominal model, we first performed bench tests to estimate the mass $m$, rotor diameter $D$, air density $\rho$, gravity $g$, and thrust coefficient $c_T$. The nondimensional thrust coefficient $C_T$ is defined as $C_T = T / (\rho n^2 D^4)$, with $n$ the propeller speed. Note that $C_T$ is a function of propeller speed, and here we used a nominal value measured at the idle RPM for the following data collection session. How $C_T$ changes with $n$ is also discussed in Sec. VI-C.

VI-B Real-World Flying Data and Preprocessing

In order to estimate the effect of the disturbance force $f_a$, we collected states and control inputs while flying the drone close to the ground, manually controlled by an expert pilot.

Fig. 1: Intel Aero drone during experiments.
Fig. 2: Training data trajectory.

Our training data is shown in Fig. 2. We collected a single trajectory with varying heights and velocities. The trajectory has two parts. Part I (0 s-250 s in Fig. 2) contains maneuvers at different fixed heights (0.05 m-1.5 m) with random $x$ and $y$ motions. This can be used to estimate the ground effect. Part II (250 s-350 s in Fig. 2) includes random $x$, $y$, and $z$ motions to cover the feasible state space as much as possible. For this part, we aim to learn non-dominant aerodynamics such as air drag. We note that our training data is quite modest in size by the standards of deep learning.

Since our learning task is to regress $f_a$ from states and control inputs, we also need output data for $f_a$. We utilized the relation from eq. 1 to calculate $f_a = m\dot{v} - mg - Rf_u$. Here $f_u$ is calculated based on the nominal $C_T$ from the bench test (Sec. VI-A). Our training set consists of sequences of $(\zeta_t, y_t)$ pairs, where $y_t$ is the observed value of $f_a$. The entire dataset was split into training (60%), test (20%) and validation (20%) sets for model hyper-parameter tuning.
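The labeling step can be sketched by inverting the translational dynamics on logged data. In this hedged sketch, the gravity sign convention and the hover example are assumptions; the mass matches the 1.47 kg Intel Aero reported above.

```python
import numpy as np

m, g = 1.47, 9.81                     # Intel Aero mass (kg), gravity
g_vec = np.array([0.0, 0.0, -g])      # gravity vector (convention assumed)

def observed_fa(v_dot, R, f_u):
    """Invert m*v_dot = m*g + R*f_u + f_a to label the residual
    disturbance force from logged acceleration, attitude, and thrust."""
    return m * v_dot - m * g_vec - R @ f_u

# Steady hover far from the ground: zero acceleration and nominal thrust
# exactly cancelling gravity should label the disturbance as zero.
f_u = np.array([0.0, 0.0, m * g])     # nominal thrust along body z
fa = observed_fa(np.zeros(3), np.eye(3), f_u)
```

In practice the acceleration `v_dot` would be differentiated (and filtered) from the 100 Hz motion-capture velocity estimates.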

Fig. 3: (a) Learned $\hat{f}_{a,z}$ compared to the ground effect model with respect to height $z$, with the other dimensions of the state fixed. Ground truth points are from hovering data at different heights. (b) Learned $\hat{f}_{a,z}$ with respect to rotation speed $n$ (height and velocity fixed), compared to $C_T$ measured in the bench test. (c) Heatmaps of learned $\hat{f}_{a,z}$ versus $z$ and $v_z$, with other dimensions fixed. (Left) Learned $\hat{f}_{a,z}$ from the ReLU network with spectral normalization. (Right) Learned $\hat{f}_{a,z}$ from the ReLU network without spectral normalization.

VI-C DNN Prediction Performance

We train $\hat{f}_a$ using a deep ReLU network, where the input $\zeta$ consists of the global height, global velocity, attitude, and control input. We build the ReLU network using PyTorch, an open-source deep learning library [30]. Our ReLU network consists of four fully-connected hidden layers, with input and output dimensions of 12 and 3, respectively. We use spectral normalization (SN, eq. 5) to bound the Lipschitz constant.

To investigate how well our DNN can estimate $\hat{f}_{a,z}$, especially when close to the ground, we compare it with a well-known 1D steady ground effect model [1, 3]:

$$T(n, z) = \frac{T(n, \infty)}{1 - \mu \left(\frac{r}{4z}\right)^2}, \tag{15}$$

where $T$ is the thrust generated by the propellers, $n$ is the rotation speed, $n_0$ is the idle RPM, $r$ is the propeller radius, and $\mu$ depends on the number and arrangement of propellers ($\mu = 1$ for a single propeller, but $\mu$ must be tuned for multiple propellers). Note that $T(n, \infty)$ is a function of $n$. Thus, we can derive $f_{a,z}$ from eq. 15.
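The steady model can be evaluated numerically as follows. This is a hedged sketch: a Cheeseman-Bennett-style form is assumed, and the thrust and propeller-radius values are illustrative, not the paper's bench-test numbers.

```python
def ground_effect_thrust(T_inf, z, r_prop, mu=1.0):
    """Steady 1D in-ground-effect model (form assumed):
    T(z) = T_inf / (1 - mu * (r_prop / (4 z))**2), valid for z > r_prop / 4."""
    return T_inf / (1.0 - mu * (r_prop / (4.0 * z)) ** 2)

T_inf = 14.4    # out-of-ground-effect thrust, N (illustrative)
r = 0.115       # propeller radius, m (illustrative)

near = ground_effect_thrust(T_inf, 0.10, r)   # close to the ground
far = ground_effect_thrust(T_inf, 2.00, r)    # effectively out of ground effect
# Thrust augmentation grows sharply as the rotor approaches the ground.
```

The corresponding disturbance estimate is the excess thrust, `T(z) - T_inf`, which is what Fig. 3(a) compares against the learned $\hat{f}_{a,z}$.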

Fig. 3(a) shows the comparison between the estimated $\hat{f}_{a,z}$ from the DNN and the theoretical ground effect model eq. 15 as we vary the global height $z$. We see that our DNN achieves much better estimates than the theoretical ground effect model. We further investigate the trend of $\hat{f}_{a,z}$ with respect to the rotation speed $n$. Fig. 3(b) shows the learned $\hat{f}_{a,z}$ over the rotation speed at a given height, in comparison with the $C_T$ measured in the bench test. We observe that the increasing trend of the estimates is consistent with the bench test results.

To understand the benefits of SN, we compare $\hat{f}_{a,z}$ predicted by DNNs trained both with and without SN. Fig. 3(c) shows the results. Note that $v_z$ from -1 m/s to 1 m/s is covered in our training set but -2 m/s to -1 m/s is not. We see differences in:

  1. Ground effect: $\hat{f}_{a,z}$ increases as $z$ decreases, which is also shown in Fig. 3(a).

  2. Air drag: $\hat{f}_{a,z}$ increases as the drone descends ($v_z < 0$) and decreases as the drone ascends ($v_z > 0$).

  3. Generalization: the spectrally-normalized DNN is much smoother and can also generalize to new input domains not contained in the training set.

In [18], the authors theoretically show that spectral normalization can provide tighter generalization guarantees on unseen data, which is consistent with our empirical results. An interesting future direction is to connect generalization theory more tightly with our robustness guarantees.

Fig. 4: PD and Neural-Lander performance in 1D take-off and landing. Means (solid curves) and standard deviations (shaded areas) of 10 trajectories.

Fig. 5: PD and Neural-Lander performance in 3D take-off and landing. Means (solid curves) and standard deviations (shaded areas) of 10 trajectories.

VI-D Control Performance

We used a PD controller as the baseline and implemented both the baseline and Neural-Lander without an integral term in (7)-(8). First, we tested the two controllers on the 1D take-off/landing task, i.e., raising the drone from the ground to a fixed height and then returning it to the ground, as shown in Fig. 4. Second, we compared the controllers on the 3D take-off/landing task, i.e., tracking a take-off/landing trajectory that also includes horizontal motion, as shown in Fig. 5. For both tasks, we repeated the experiments 10 times and computed the means and the standard deviations of the take-off/landing trajectories.

From Figs. 4 and 5, we can conclude that the main benefits of Neural-Lander are: (a) in both 1D and 3D cases, Neural-Lander can control the drone to precisely land on the ground surface while the baseline controller cannot land due to the ground effect; (b) in both 1D and 3D cases, Neural-Lander mitigates drifts in the $x$ and $y$ directions, as it also learned non-dominant aerodynamics such as air drag.

In our experiments, we observed that a naive un-normalized DNN can even result in a crash, which further underscores the importance of spectral normalization.

VII Conclusions

In this paper, we present Neural-Lander, a deep-learning-based nonlinear controller with guaranteed stability for precise quadrotor landing. Compared to traditional ground effect models, Neural-Lander is able to significantly improve control performance. The main benefits are: (1) our method can learn from coupled unsteady aerodynamics and vehicle dynamics, and provides more accurate estimates than theoretical ground effect models; (2) our model can capture both the ground effect and the non-dominant aerodynamics, and outperforms the conventional controller in all directions ($x$, $y$ and $z$); (3) we provide rigorous theoretical analysis of our method and guarantee the stability of the controller, which also implies generalization to unseen domains.

Future work includes further generalizing Neural-Lander to handle unseen state and disturbance domains, such as those generated by a wind fan array. Another interesting direction would be to capture long-term temporal correlations with RNNs.


The authors thank Joel Burdick, Mory Gharib and Daniel Pastor Moreno. The work is funded in part by Caltech’s Center for Autonomous Systems and Technologies and Raytheon Company.