Unmanned Aerial Vehicles (UAVs) require high-precision control of aircraft positioning, especially during landing and take-off. This problem is challenging largely due to complex interactions of rotor airflows with the ground. The aerospace community has long identified the change in aerodynamic forces when helicopters or aircraft fly close to the ground. Such ground effects cause an increased lift force and a reduced aerodynamic drag, which can be both helpful and disruptive for flight stability, and the complications are exacerbated with multiple rotors. Therefore, performing automatic landing of UAVs is risk-prone, and requires expensive high-precision sensors as well as carefully designed controllers.
Compensating for ground effect is a long-standing problem in the aerial robotics community. Prior work has largely focused on mathematical modeling as part of system identification (ID). These ground-effect models are later used to approximate aerodynamic forces during flights close to the ground and are combined with controller design for feed-forward cancellation. However, existing theoretical ground-effect models are derived under steady-flow conditions, whereas most practical cases exhibit unsteady flow. Alternative approaches, such as integral or adaptive control methods, often suffer from slow response and delayed feedback. Other work employs Bayesian Optimization for open-air control but not for take-off/landing. Given these limitations, the precision of existing fully automated systems for UAVs is still insufficient for landing and take-off, necessitating the guidance of a human UAV operator during those phases.
To capture complex aerodynamic interactions without being overly constrained by conventional modeling assumptions, we take a machine-learning (ML) approach and build a black-box ground-effect model using Deep Neural Networks (DNNs). However, incorporating black-box models into a UAV controller faces three key challenges. First, DNNs are notoriously data-hungry, and it is challenging to collect sufficient real-world training data. Second, due to high dimensionality, DNNs can be unstable and generate unpredictable outputs, which makes the system susceptible to instability in the feedback control loop. Third, DNNs are often difficult to analyze, which makes it difficult to design provably stable DNN-based controllers.
The aforementioned challenges pervade previous works using DNNs to capture high-order non-stationary dynamics. For example, [5, 6] use DNNs to improve system ID of helicopter aerodynamics, but not for downstream controller design. Other approaches aim to generate reference inputs or trajectories from DNNs [7, 8, 9, 10]. However, such approaches can lead to challenging optimization problems, or heavily rely on a well-designed closed-loop controller and require a large amount of labeled training data [8, 9, 10]. A more classical approach of using DNNs is direct inverse control [11, 12, 13], but the non-parametric nature of a DNN controller also makes it challenging to guarantee stability and robustness to noise.
Recent work [14] proposes a provably stable model-based Reinforcement Learning method based on Lyapunov analysis. However, that approach requires a potentially expensive discretization step and relies on the native Lipschitz constant of the DNN.
Contributions. In this paper, we propose a learning-based controller, Neural-Lander, to improve the precision of quadrotor landing with guaranteed stability. Our approach directly learns the ground effect from coupled unsteady aerodynamics and vehicle dynamics. We use deep learning for system ID of the residual dynamics and then integrate it with nonlinear feedback-linearization control.
We train DNNs with spectral normalization of layer-wise weight matrices. We prove that the resulting controller is globally exponentially stable under bounded learning errors. This is achieved by exploiting the Lipschitz bound of spectrally normalized DNNs. It has earlier been shown that spectral normalization of DNNs leads to good generalization, i.e., stability in a learning-theoretic sense. It is intriguing that spectral normalization simultaneously guarantees stability in both a learning-theoretic and a control-theoretic sense.
We evaluate Neural-Lander for trajectory tracking of a quadrotor during take-off, landing, and near-ground maneuvers. Neural-Lander is able to land a quadrotor much more accurately than a naive PD controller with a pre-identified system. In particular, we show that compared to the PD controller, Neural-Lander can decrease the vertical error from 0.13 m to zero and mitigate horizontal drifts by 90% and 34%, respectively, in 1D landing. Likewise, Neural-Lander can decrease the vertical error from 0.12 m to zero in 3D landing (demo videos: https://youtu.be/C_K8MkC_SSQ). We also demonstrate that the learned ground-effect model can handle temporal dependency, and is an improvement over the steady-state theoretical models in use today.
II Problem Statement: Quadrotor Landing
Given the quadrotor states as global position p, velocity v, attitude rotation matrix R, and body angular velocity ω, we consider the following dynamics:
Here g is the gravity vector, and f_u and τ_u are the total thrust and body torques from the four rotors predicted by a nominal model. We use η = [f_u; τ_u] to denote the output wrench. A linear equation relates the control input u of squared motor speeds to the output wrench, with its nominal relation given as:
where c_T and c_Q denote empirical coefficient values for the force and torque generated by an individual rotor, and l denotes the length of each rotor arm.
The key difficulty of precise landing is the influence of unknown disturbance forces f_a and torques τ_a, which originate from complex aerodynamic interactions between the quadrotor and the environment. For example, during the landing process, when the quadrotor is close to the ground, the vertical aerodynamic force is significant. Also, as the vehicle speed increases, air drag is exacerbated, which also contributes to f_a.
Problem Statement: For the dynamics in eq. 1, our goal is to learn the unknown disturbance forces f_a and torques τ_a from partial states and control inputs, in order to improve controller accuracy. In this paper, we are only interested in the position dynamics (the first two equations in eq. 1). As we mainly focus on landing and take-off, the attitude dynamics is limited and the aerodynamic disturbance torque τ_a is bounded. We take a deep-learning approach by approximating f_a with a Deep Neural Network (DNN), followed by spectral normalization to guarantee the stability of the DNN outputs. We then design an exponentially stabilizing controller with superior robustness compared to using only the nominal system dynamics. Training is done offline, and the learned dynamics is applied in the onboard controller in real time.
III Learning Stable DNN Dynamics
To learn the residual dynamics, we employ a deep neural network with Rectified Linear Unit (ReLU) activations. In general, DNNs equipped with ReLU converge faster during training, behave more robustly with respect to hyperparameter changes, and suffer less from vanishing gradients compared to other activation functions such as sigmoid and tanh.
III-A ReLU Deep Neural Networks
A ReLU deep neural network represents the functional mapping from the input x to the output f(x; θ), parameterized by the DNN weights θ:
where the activation function σ(·) = max(·, 0) is the element-wise ReLU function. ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. However, deep neural networks are usually trained by first-order gradient-based optimization, which is highly dependent on the curvature of the training objective and can be very unstable. To alleviate this issue, we apply the spectral normalization technique in the feedback control loop to guarantee stability.
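For concreteness, the mapping in eq. 3 can be sketched in a few lines of NumPy. This is a hedged illustration only; `mlp_forward` and the layer-list representation are our own naming (biases omitted for brevity), not the paper's code:

```python
import numpy as np

def relu(x):
    """Element-wise ReLU activation: max(x, 0)."""
    return np.maximum(x, 0.0)

def mlp_forward(x, weights):
    """Forward pass of a ReLU MLP: W_L @ relu(W_{L-1} @ ... relu(W_1 @ x)).
    `weights` is a list of weight matrices; no activation on the output layer."""
    for W in weights[:-1]:
        x = relu(W @ x)
    return weights[-1] @ x

# Tiny example: identity hidden layer, summing output layer.
W = [np.eye(2), np.array([[1.0, 1.0]])]
y = mlp_forward(np.array([1.0, -2.0]), W)  # relu gives [1, 0], sum gives [1.0]
```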
III-B Spectral Normalization
Spectral normalization stabilizes DNN training by constraining the Lipschitz constant of the objective function. Spectral normalization has also been shown to generalize well, and in machine learning generalization is a notion of stability. Mathematically, the Lipschitz constant of a function f is defined as the smallest value K such that ||f(x) - f(x')|| <= K ||x - x'|| for all x, x'. It is known that the Lipschitz constant of a general differentiable function is the maximum spectral norm (maximum singular value) of its gradient over its domain.
The ReLU DNN in eq. 3 is a composition of functions. We can thus bound the Lipschitz constant of the network by constraining the spectral norm of each layer. For a linear map g(x) = Wx, the Lipschitz constant is the spectral norm σ(W). Using the fact that the Lipschitz constant of the ReLU activation function equals 1, together with the inequality that the Lipschitz constant of a composition is at most the product of the constituents' Lipschitz constants, we find that the Lipschitz constant of the network is bounded by the product of the spectral norms of its weight matrices:
In practice, we apply spectral normalization to the weight matrix W in each layer during training by rescaling it by its spectral norm together with a factor of the desired Lipschitz bound γ, so that the per-layer spectral norms, and hence their product, remain bounded by γ:
The following lemma bounds the Lipschitz constant of a ReLU DNN with spectral normalization.
Lemma III.1: For a multi-layer ReLU network f(x; θ) defined in eq. 3 without an activation function on the output layer, spectral normalization guarantees that the Lipschitz constant of the entire network is bounded by γ, with spectrally normalized parameters θ = {W̄_1, ..., W̄_{L+1}}.
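The lemma can be checked numerically: the product of the per-layer spectral norms is a valid Lipschitz bound for any ReLU network of the form in eq. 3. A small NumPy sketch (function names are ours; biases omitted):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def net(x, weights):
    """ReLU MLP forward pass, no activation on the output layer."""
    for W in weights[:-1]:
        x = relu(W @ x)
    return weights[-1] @ x

def lipschitz_bound(weights):
    """Product of layer spectral norms; ReLU is 1-Lipschitz, so this
    upper-bounds the Lipschitz constant of the whole network."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))
```

For any two inputs x, x', the output difference ||net(x) - net(x')|| never exceeds `lipschitz_bound(weights)` times ||x - x'||.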
III-C Constrained Training
We apply first-order gradient-based optimization to train the ReLU DNN. Estimating f_a in eq. 1 boils down to optimizing the parameters θ of the ReLU network in eq. 3, given observed input values and the target outputs. In particular, we want to control the Lipschitz constant of the ReLU network.
The optimization objective is as follows, where we minimize the prediction error with constrained Lipschitz constant:
Here y_t is the observed disturbance force and x_t collects the observed states and control inputs. According to the upper bound in eq. 4, we can substitute the Lipschitz constraint by minimizing the spectral norm of the weights in each layer. We use stochastic gradient descent (SGD) to optimize eq. 6 and apply spectral normalization to regulate the weights. By Lemma III.1, the trained ReLU DNN has a bounded Lipschitz constant.
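One simple way to realize the constrained training of eq. 6 with plain SGD is to interleave gradient steps with a spectral-norm projection of each weight matrix; the per-layer target gamma**(1/L) follows the product bound in eq. 4. The sketch below is illustrative (our own function names, not the paper's implementation):

```python
import numpy as np

def project_weights(weights, gamma):
    """Clip each layer's spectral norm to gamma**(1/L) so that the product
    of all layer norms, and hence the network Lipschitz bound, stays <= gamma."""
    L = len(weights)
    target = gamma ** (1.0 / L)
    out = []
    for W in weights:
        s = np.linalg.norm(W, 2)  # spectral norm (largest singular value)
        out.append(W if s <= target else W * (target / s))
    return out

def sgd_step_with_sn(weights, grads, lr, gamma):
    """One SGD step on the prediction loss followed by the spectral projection."""
    stepped = [W - lr * g for W, g in zip(weights, grads)]
    return project_weights(stepped, gamma)
```

In a full training loop, `grads` would come from backpropagating the prediction error of eq. 6 through the network.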
IV Neural-Lander Controller Design
We design our controller to allow 3D landing trajectory tracking for quadrotors. Our controller integrates a DNN-based dynamics learning module with a proportional-derivative (PD) controller. To keep the design simple, we re-design the PD controller to account for the disturbance force term learned by the ReLU DNN. We solve for the resulting nonlinear controller using fixed-point iteration.
IV-A Reference Trajectory Tracking
The position tracking error is defined as p̃ = p - p_d. We design an integral controller with a composite variable s that combines the velocity error, the position error, and its integral, with Λ a positive diagonal gain matrix. Then s = 0 defines a manifold on which the tracking error converges to zero exponentially quickly. Having transformed the position tracking problem into a composite-variable tracking problem, we would like the actual force exerted by the rotors to satisfy:
so that the closed-loop dynamics simply become exponentially stable error dynamics in s. Hence, these exponentially stabilizing dynamics guarantee that the position and velocity converge exponentially and globally to their desired values with bounded error, provided the force approximation error is bounded [19, 20] (see Sec. V). Let f_d denote the total desired force vector from the quadrotor; then the total thrust and the desired force direction can be computed from eq. 8,
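As a hedged sketch of a tracking law of this form, assume position dynamics m·dv/dt = m·g + f + f_a and a PD-type composite variable; the gain matrices `Lam` and `K`, the signature, and the variable names below are illustrative, not the paper's exact eq. 8:

```python
import numpy as np

def desired_force(p, v, p_d, v_d, a_d, f_a_hat, m, g_vec, Lam, K):
    """Desired total force for a composite-variable tracking controller.
    f_a_hat is the (learned) disturbance estimate; Lam, K are diagonal gains."""
    e_p, e_v = p - p_d, v - v_d
    s = e_v + Lam @ e_p            # composite tracking variable
    v_r_dot = a_d - Lam @ e_v      # reference acceleration
    # Cancel gravity and the estimated disturbance; feedback -K @ s drives s -> 0.
    return m * v_r_dot - K @ s - m * g_vec - f_a_hat
```

At hover with zero error and zero disturbance estimate, the desired force reduces to -m·g, i.e., an upward thrust exactly canceling gravity.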
with k̂ being the direction of rotor thrust (typically the z-axis of the quadrotor body frame). Using the desired force direction and fixing a desired yaw angle, a desired attitude in SO(3) (or a desired value of any attitude representation) can be obtained. We assume the attitude controller comes in the form of a desired torque τ_d to be generated by the four rotors. One such example is:
where the torque law is a PD-type feedback on the attitude and angular-velocity errors; see the cited references for SO(3) tracking control. Note that eq. 10 guarantees exponential tracking of a desired attitude trajectory within some bounded error in the presence of bounded disturbance torques.
IV-B Learning-based Discrete-time Nonlinear Controller
Using the methods described in Sec. III, we define f̂_a(ζ, u) as the approximation to the disturbance aerodynamic forces, with ζ being the partial states used as input features. The desired total force is then revised to subtract this learned disturbance estimate.
Because f̂_a depends on the control input u, the control synthesis problem here has a non-affine control input:
We propose the following fixed-point iterative method for solving eq. 11:
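The iteration in eq. 12 can be sketched generically. Here `F` stands for the mapping that embeds the DNN estimate f̂_a(ζ, u) into the desired-force computation at the current states; the function name and stopping criterion are ours, for illustration:

```python
import numpy as np

def solve_control(F, u0, n_iter=50, tol=1e-8):
    """Fixed-point iteration u_{k+1} = F(u_k) for the non-affine control
    allocation problem; stops early once successive iterates agree to tol."""
    u = u0
    for _ in range(n_iter):
        u_next = F(u)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Toy contraction with Lipschitz constant 0.5: unique fixed point at u* = 2.
u_star = solve_control(lambda u: 0.5 * u + 1.0, np.array([0.0]))
```

By Lemma V.1 below, when F is a contraction the iterates converge to the unique control solving eq. 11.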
V Nonlinear Stability Analysis
The closed-loop tracking error analysis provides direct guidance on how to tune the neural network and controller parameters to improve control performance and robustness.
V-A Control Allocation as Contraction Mapping
We first show that the control sequence u_k converges to the solution of eq. 11 when all states are fixed.
Fixing all current states, define the mapping F(·) based on eq. 12:
Lemma V.1: If f̂_a(ζ, u) is Lipschitz continuous in u with a sufficiently small Lipschitz constant, then F(·) is a contraction mapping, and u_k converges to the unique solution of u* = F(u*).
Proof: Let U be a compact set of feasible control inputs. Given fixed states, for any u_1, u_2 in U, the Lipschitz property of f̂_a together with the small-gain condition yields ||F(u_1) - F(u_2)|| < ||u_1 - u_2||. Hence F(·) is a contraction mapping, and the claim follows from the Banach fixed-point theorem.
V-B Stability of Learning-based Nonlinear Controller
Before continuing to prove stability of the full system, we make the following assumptions.
Assumption 1: The desired position trajectory p_d and its first and second derivatives are bounded.
Assumption 2: The control loop u_k updates much faster than the position controller, and the one-step difference of the control signal is bounded by a small positive constant ρ times the tracking error.
Tikhonov's Theorem (Theorem 11.1 in [23]) provides a foundation for such a time-scale separation, where the fast dynamics of u_k converge much faster than the slower position dynamics. From eq. 13, we can derive the following approximate relation:
Using the fact that the frequencies of attitude control and motor-speed control are much higher than that of the position controller in practice, we can safely assume that the changes of the attitude, the desired force, and the DNN input features within one update step are negligible. Furthermore, the attitude tracking error can be limited internally by the attitude controller. This leads to:
With ρ a small constant, and from Lemma V.1, we can deduce that u_k rapidly converges to a small ultimate bound between position-controller updates.
Assumption 3: The approximation error of f̂_a over the compact sets of feasible states and controls is upper bounded by ε_m = sup ||f_a - f̂_a||.
DNNs have been shown to generalize well to the set of unseen events that come from almost the same distribution as the training set [24, 25]. This empirical observation has also been studied theoretically to shed more light on the complexity of these models [26, 18, 27, 28]. Our experimental results show that our proposed training method in Sec. III generalizes well to unseen events and results in better performance on unexplored data (Sec. VI-C). Composing our stability result rigorously with generalization error would be an interesting direction for future work. Based on these assumptions, we can now present our overall robustness result.
We begin the proof by selecting a Lyapunov function based on the composite variable s, namely V = (1/2) m ||s||^2; then, by applying the controller eq. 8, we get the time derivative of V:
Let λ_min(·) denote the minimum eigenvalue of the positive-definite gain matrix. By applying the Lipschitz property of the network approximator (Lemma III.1) and Assumption 2, we obtain:
Using the Comparison Lemma, we define the convergence rate and ultimate bound in terms of these constants to obtain:
It can be shown that this leads to finite-gain stability and input-to-state stability (ISS). Furthermore, the hierarchical combination of the composite variable and the position error in eq. 7 yields eq. 14. Note that the analysis also covers the case where the integral term in eq. 7 is disabled.
By designing the controller gain and the Lipschitz constant of the DNN so that the contraction and stability conditions hold, we achieve exponential tracking within a small ultimate bound.
VI Experiments
In our experiments, we evaluate both the generalization performance of our DNN and the overall control performance of Neural-Lander. The experimental setup is composed of 17 motion-capture cameras, a communication router for sending signals, and the drone. The data was collected from an Intel Aero quadrotor weighing 1.47 kg with an onboard computer (2.56 GHz Intel Atom x7 processor, 4 GB DDR3 RAM). We retrofitted the drone with eight reflective markers to allow accurate position, attitude, and velocity estimation at 100 Hz. The Intel Aero drone and the test space are shown in Fig. 1.
VI-A Bench Test
To identify a good nominal model, we first performed bench tests to estimate m, D, ρ, g, and C_T, which are the mass, rotor diameter, air density, gravitational acceleration, and thrust coefficient, respectively. The nondimensional thrust coefficient C_T is defined as C_T = T / (ρ n^2 D^4), where T is the rotor thrust and n is the rotor speed. Note that C_T is a function of propeller speed, and here we used a nominal value measured at the idle RPM for the following data-collection session. How C_T changes with rotor speed is discussed in Sec. VI-C.
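Assuming the standard propeller convention C_T = T / (ρ n² D⁴) with n in revolutions per second (our reading of the definition above; the paper's exact nondimensionalization may differ in constants), the bench-test computation can be sketched as:

```python
def thrust_coefficient(T, rho, rpm, D):
    """Nondimensional thrust coefficient C_T = T / (rho * n^2 * D^4),
    with n the rotor speed in rev/s and D the rotor diameter in meters.
    T is measured thrust in newtons, rho is air density in kg/m^3."""
    n = rpm / 60.0  # convert RPM to revolutions per second
    return T / (rho * n**2 * D**4)
```

C_T is linear in measured thrust at a fixed rotor speed, so doubling T doubles the computed coefficient.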
VI-B Real-World Flying Data and Preprocessing
In order to estimate the effect of the disturbance force f_a, we collected states and control inputs while flying the drone close to the ground, manually controlled by an expert pilot.
Our training data is shown in Fig. 2. We collected a single trajectory with varying heights and velocities. The trajectory has two parts. Part I (0 s-250 s in Fig. 2) contains maneuvers at different fixed heights (0.05 m-1.5 m) with random horizontal motions; this can be used to estimate the ground effect. Part II (250 s-350 s in Fig. 2) includes random motions in all three axes to cover the feasible state space as much as possible. With this part, we aim to learn non-dominant aerodynamics such as air drag. We note that our training data is quite modest in size by the standards of deep learning.
Since our learning task is to regress f_a from states and control inputs, we also need output data for f_a. We utilized the relation in eq. 1 to calculate it: the observed disturbance force is the residual between the measured acceleration dynamics and the nominal thrust model from the bench test (Sec. VI-A). Our training set consists of sequences of input-output pairs (x_t, y_t), where y_t is the observed value of f_a. The entire dataset was split into training (60%), test (20%), and validation (20%) sets for model hyper-parameter tuning.
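The preprocessing step can be sketched as follows, assuming the position dynamics m·dv/dt = m·g + R f_u + f_a from eq. 1 and a finite-difference estimate of the acceleration from logged velocities (function and variable names are ours, for illustration):

```python
import numpy as np

def residual_force(v, dt, m, g_vec, R_seq, f_u_seq):
    """Observed disturbance force f_a = m * v_dot - m*g - R @ f_u per time step.
    v:        (T, 3) logged velocity samples at uniform spacing dt
    R_seq:    (T, 3, 3) attitude rotation matrices
    f_u_seq:  (T, 3) nominal thrust vectors in the body frame"""
    v_dot = np.gradient(v, dt, axis=0)               # finite-difference acceleration
    f_nominal = np.einsum('tij,tj->ti', R_seq, f_u_seq)  # rotate thrust to world frame
    return m * v_dot - m * g_vec - f_nominal
```

As a sanity check, a perfect hover (constant velocity, thrust exactly canceling gravity) yields a zero residual.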
VI-C DNN Prediction Performance
We train f̂_a using a deep ReLU network whose inputs correspond to global height, global velocity, attitude, and control input. We build the ReLU network using PyTorch, an open-source deep learning library. Our ReLU network consists of four fully-connected hidden layers, with input and output dimensions of 12 and 3, respectively. We use spectral normalization (SN, eq. 5) to bound the Lipschitz constant.
For comparison, we consider a theoretical ground-effect model (eq. 15), where T is the thrust generated by the propellers, n is the rotation speed, n_0 is the idle RPM, and the model coefficient depends on the number and arrangement of propellers (known in closed form for a single propeller, but it must be tuned for multiple propellers). Note that T is a function of n; thus, we can derive the model-predicted ground-effect force from the measured thrust relation.
Fig. 3(a) shows the comparison between the disturbance force estimated by the DNN and the theoretical ground-effect model (eq. 15) as we vary the global height. We see that our DNN achieves much better estimates than the theoretical ground-effect model. We further investigate the trend of the estimated force with respect to the rotation speed n. Fig. 3(b) shows the learned force over the rotation speed at a given height, in comparison with the C_T measurements from the bench test. We observe that the increasing trend of the estimates is consistent with the bench-test results.
To understand the benefits of SN, we compared the forces predicted by DNNs trained with and without SN. Fig. 3(c) shows the results. Note that vertical velocities from -1 m/s to 1 m/s are covered in our training set, but -2 m/s to -1 m/s are not. We see differences in:
Ground effect: the predicted force increases as the height decreases, which is also shown in Fig. 3(a).
Air drag: the predicted force increases as the drone descends and decreases as the drone ascends.
Generalization: the spectrally normalized DNN is much smoother and also generalizes to new input domains not contained in the training set.
Prior theoretical work on spectrally normalized margin bounds shows that spectral normalization can provide tighter generalization guarantees on unseen data, which is consistent with our empirical results. An interesting future direction is to connect generalization theory more tightly with our robustness guarantees.
VI-D Control Performance
We used a PD controller as the baseline and implemented both the baseline and Neural-Lander without an integral term in eqs. 7-8. First, we tested the two controllers on the 1D take-off/landing task, i.e., moving the drone vertically from the ground to a hover height and then returning it to the ground, as shown in Fig. 4. Second, we compared the controllers on the 3D take-off/landing task, i.e., moving the drone along a 3D trajectory from the ground to a hover point and then returning it to the ground, as shown in Fig. 5. For both tasks, we repeated the experiments multiple times and computed the means and standard deviations of the take-off/landing trajectories.
From Figs. 4 and 5, we can conclude that the main benefits of Neural-Lander are: (a) in both the 1D and 3D cases, Neural-Lander controls the drone to land precisely on the ground surface, while the baseline controller cannot land due to the ground effect; (b) in both cases, Neural-Lander mitigates drifts in the horizontal directions, as it has also learned non-dominant aerodynamics such as air drag.
In experiments, we observed that a naive un-normalized DNN can even result in a crash, which underscores the importance of spectral normalization.
VII Conclusion
In this paper, we presented Neural-Lander, a deep-learning-based nonlinear controller with guaranteed stability for precise quadrotor landing. Compared to traditional ground-effect models, Neural-Lander significantly improves control performance. The main benefits are: (1) our method can learn from coupled unsteady aerodynamics and vehicle dynamics, and provides more accurate estimates than theoretical ground-effect models; (2) our model can capture both the ground effect and other non-dominant aerodynamics, and outperforms the conventional controller in all directions; (3) we provide rigorous theoretical analysis of our method and guarantee the stability of the controller, which also implies generalization to unseen domains.
Future work includes further generalizing Neural-Lander to handle unseen state and disturbance domains, such as those generated by a wind-fan array. Another interesting direction is capturing long-term temporal correlations with RNNs.
The authors thank Joel Burdick, Mory Gharib and Daniel Pastor Moreno. The work is funded in part by Caltech’s Center for Autonomous Systems and Technologies and Raytheon Company.
-  I. Cheeseman and W. Bennett, “The effect of ground on a helicopter rotor in forward flight,” 1955.
-  K. Nonaka and H. Sugizaki, “Integral sliding mode altitude control for a small model helicopter with ground effect compensation,” in American Control Conference (ACC), 2011. IEEE, 2011, pp. 202–207.
-  L. Danjun, Z. Yan, S. Zongying, and L. Geng, “Autonomous landing of quadrotor based on ground effect modelling,” in Control Conference (CCC), 2015 34th Chinese. IEEE, 2015, pp. 5647–5652.
-  F. Berkenkamp, A. P. Schoellig, and A. Krause, “Safe controller optimization for quadrotors with Gaussian processes,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 493–496. [Online]. Available: https://arxiv.org/abs/1509.01066
-  P. Abbeel, A. Coates, and A. Y. Ng, “Autonomous helicopter aerobatics through apprenticeship learning,” The International Journal of Robotics Research, vol. 29, no. 13, pp. 1608–1639, 2010.
-  A. Punjani and P. Abbeel, “Deep learning helicopter dynamics models,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on. IEEE, 2015, pp. 3223–3230.
-  S. Bansal, A. K. Akametalu, F. J. Jiang, F. Laine, and C. J. Tomlin, “Learning quadrotor dynamics using neural network for flight control,” in Decision and Control (CDC), 2016 IEEE 55th Conference on. IEEE, 2016, pp. 4653–4660.
-  Q. Li, J. Qian, Z. Zhu, X. Bao, M. K. Helwa, and A. P. Schoellig, “Deep neural networks for improved, impromptu trajectory tracking of quadrotors,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017, pp. 5183–5189.
-  S. Zhou, M. K. Helwa, and A. P. Schoellig, “Design of deep neural networks as add-on blocks for improving impromptu trajectory tracking,” in Decision and Control (CDC), 2017 IEEE 56th Annual Conference on. IEEE, 2017, pp. 5201–5207.
-  C. Sánchez-Sánchez and D. Izzo, “Real-time optimal control via deep neural networks: study on landing problems,” Journal of Guidance, Control, and Dynamics, vol. 41, no. 5, pp. 1122–1135, 2018.
-  S. Balakrishnan and R. Weil, “Neurocontrol: A literature survey,” Mathematical and Computer Modelling, vol. 23, no. 1-2, pp. 101–117, 1996.
-  M. T. Frye and R. S. Provence, “Direct inverse control using an artificial neural network for the autonomous hover of a helicopter,” in Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on. IEEE, 2014, pp. 4121–4122.
-  H. Suprijono and B. Kusumoputro, “Direct inverse control based on neural network for unmanned small helicopter attitude and altitude control,” Journal of Telecommunication, Electronic and Computer Engineering (JTEC), vol. 9, no. 2-2, pp. 99–102, 2017.
-  F. Berkenkamp, M. Turchetta, A. P. Schoellig, and A. Krause, “Safe model-based reinforcement learning with stability guarantees,” in Proc. of Neural Information Processing Systems (NIPS), 2017. [Online]. Available: https://arxiv.org/abs/1705.08551
-  T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral normalization for generative adversarial networks,” arXiv preprint arXiv:1802.05957, 2018.
-  T. Salimans and D. P. Kingma, “Weight normalization: A simple reparameterization to accelerate training of deep neural networks,” in Advances in Neural Information Processing Systems, 2016, pp. 901–909.
-  P. L. Bartlett, D. J. Foster, and M. J. Telgarsky, “Spectrally-normalized margin bounds for neural networks,” in Advances in Neural Information Processing Systems, 2017, pp. 6240–6249.
-  J. Slotine and W. Li, Applied Nonlinear Control. Prentice Hall, 1991.
-  S. Bandyopadhyay, S.-J. Chung, and F. Y. Hadaegh, “Nonlinear attitude control of spacecraft with a large captured object,” Journal of Guidance, Control, and Dynamics, vol. 39, no. 4, pp. 754–769, 2016.
-  D. Morgan, G. P. Subramanian, S.-J. Chung, and F. Y. Hadaegh, “Swarm assignment and trajectory optimization using variable-swarm, distributed auction assignment and sequential convex programming,” Int. J. Robotics Research, vol. 35, no. 10, pp. 1261–1285, 2016.
-  D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in 2011 IEEE International Conference on Robotics and Automation, May 2011, pp. 2520–2525.
-  H. Khalil, Nonlinear Systems, ser. Pearson Education. Prentice Hall, 2002.
-  C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, “Understanding deep learning requires rethinking generalization,” arXiv preprint arXiv:1611.03530, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
-  B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro, “A pac-bayesian approach to spectrally-normalized margin bounds for neural networks,” arXiv preprint arXiv:1707.09564, 2017.
-  G. K. Dziugaite and D. M. Roy, “Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data,” arXiv preprint arXiv:1703.11008, 2017.
-  B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro, “Exploring generalization in deep learning,” in Advances in Neural Information Processing Systems, 2017, pp. 5947–5956.
-  S.-J. Chung, S. Bandyopadhyay, I. Chang, and F. Y. Hadaegh, “Phase synchronization control of complex networks of Lagrangian systems on adaptive digraphs,” Automatica, vol. 49, no. 5, pp. 1148–1161, 2013.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in pytorch,” 2017.