Neural-Swarm: Decentralized Close-Proximity Multirotor Control Using Learned Interactions

03/06/2020 ∙ by Guanya Shi, et al. ∙ California Institute of Technology

In this paper, we present Neural-Swarm, a nonlinear decentralized stable controller for close-proximity flight of multirotor swarms. Close-proximity control is challenging due to the complex aerodynamic interaction effects between multirotors, such as downwash from higher vehicles to lower ones. Conventional methods often fail to properly capture these interaction effects, resulting in controllers that must maintain large safety distances between vehicles, and thus are not capable of close-proximity flight. Our approach combines a nominal dynamics model with a regularized permutation-invariant Deep Neural Network (DNN) that accurately learns the high-order multi-vehicle interactions. We design a stable nonlinear tracking controller using the learned model. Experimental results demonstrate that the proposed controller significantly outperforms a baseline nonlinear tracking controller with up to four times smaller worst-case height tracking errors. We also empirically demonstrate the ability of our learned model to generalize to larger swarm sizes.




I Introduction

The ongoing commoditization of unmanned aerial vehicles (UAVs) is propelling interest in advanced control methods for large aerial swarms [7, 24]. Potential applications are plentiful, including manipulation, search, surveillance, and mapping, among many others. Many settings require the UAVs to fly in close proximity to each other, also known as dense formation control. For example, consider a search-and-rescue mission where the aerial swarm must enter and search a collapsed building. In such scenarios, close-proximity flight enables the swarm to navigate the building much faster than swarms that must maintain large distances between vehicles.

A major challenge of close-proximity control is that the small distance between UAVs creates complex aerodynamic interactions. For instance, one multirotor flying above another causes the so-called downwash effect on the lower one, which is difficult to model using conventional approaches [14]. In lieu of better downwash interaction modeling, one must require a large safety distance between vehicles, e.g., for the small Crazyflie 2.0 quadrotor [11]. However, the downwash force for two Crazyflie quadrotors hovering one above the other is well within their thrust capabilities, which suggests that proper modeling of downwash and other interaction effects can lead to more precise dense formation control.

In this paper, we propose a learning-based controller, Neural-Swarm, to improve the precision of close-proximity control of homogeneous multirotor swarms. In particular, we train a regularized permutation-invariant deep neural network (DNN) to predict the residual interaction forces not captured by nominal models of free-space aerodynamics. The DNN only requires relative positions and velocities of neighboring multirotors as inputs, similar to existing collision-avoidance techniques [31], which enables a fully decentralized computation. We use the predicted interaction forces as a feed-forward term in the multirotors’ position controller, which enables close-proximity flight. Our solution is computationally efficient and can run in real-time on a small 32-bit microcontroller. We validate our approach on different tasks using two to five quadrotors. To our knowledge, our approach is the first that models interactions between more than two multirotor vehicles.

Fig. 1: We learn the complex interactions between multirotors using regularized deep sets and design a stable nonlinear controller for close-proximity flight.

From a learning perspective, we leverage two state-of-the-art tools to arrive at effective DNN models. The first is spectral normalization [2], which ensures the DNN is Lipschitz continuous. As in our prior work [28], Lipschitz continuity enables us to derive stability guarantees, and also helps the DNN generalize well on test examples that lie outside the training set. We also employ deep sets [33] to encode multi-vehicle interactions in an index-free or permutation-invariant manner, enabling better generalization to new formations and varying number of vehicles.

Related Work

The use of DNNs to learn higher-order residual dynamics or control outputs is becoming increasingly common across a range of control and reinforcement learning settings [28, 17, 30, 5, 20, 27, 13]. The closest approach to ours is the Neural Lander [28], which uses a DNN to capture the interaction between a single UAV and the ground, i.e., the well-studied ground effect [4, 32, 14]. In contrast, our work focuses on learning inter-vehicle aerodynamic interactions between several multirotors.

The interaction between two rotor blades has been studied in a lab setting to optimize the placement of rotors on a multirotor [29]. However, it remains an open question how this influences the flight of two or more multirotors in close proximity. Interactions between two multirotors can be estimated using a propeller velocity field model [12]. Unfortunately, this method is hard to generalize to the multi-robot case and only considers the stationary case, so it does not apply to many scenarios such as the swapping task in Fig. 1. We instead use a learning-based method that can directly estimate the interaction forces of multiple neighboring robots from training data.

For motion planning, empirical models have been used to avoid harmful interactions [24, 23, 8, 21]. Typical safe interaction shapes are ellipsoids or cylinders and such models work for homogeneous and heterogeneous multirotor teams. Estimating such shapes requires potentially dangerous flight tests and the shapes are in general conservative. In contrast, we use learning to estimate the interaction forces accurately and use those forces in the controller to improve trajectory tracking performance in close-proximity flight. The learned forces can potentially be used for motion planning as well.

II Problem Statement: Swarm Interactions

II-A Single Multirotor Dynamics

A single multirotor’s state comprises the global position p ∈ R³, global velocity v ∈ R³, attitude rotation matrix R ∈ SO(3), and body angular velocity ω ∈ R³. We consider the following dynamics:

ṗ = v,    m v̇ = m g + R f_u + f_a,    (1a)
Ṙ = R S(ω),    J ω̇ = J ω × ω + τ_u + τ_a,    (1b)

where m and J are the mass and inertia matrix of the system, respectively; S(·) is a skew-symmetric mapping; g = [0, 0, −g]⊤ is the gravity vector; and f_u = [0, 0, T]⊤ and τ_u = [τ_x, τ_y, τ_z]⊤ are the total thrust and body torques from the rotors, respectively. The output wrench η = [T, τ_x, τ_y, τ_z]⊤ is linearly related to the control input u through η = B₀ u, where u is the vector of squared motor speeds for a vehicle with M rotors and B₀ is the actuation matrix. The key difficulty stems from the disturbance forces f_a and disturbance torques τ_a generated by other multirotors.

II-B Swarm Dynamics

Consider n homogeneous multirotors. To simplify notation, we use x^(i) to denote the state of the i-th multirotor. Then Eq. 1 can be simplified as:

ẋ^(i) = Φ(x^(i), u^(i)) + [0; f_a^(i); 0; τ_a^(i)],    (2)

where Φ(x^(i), u^(i)) is the nominal dynamics and f_a^(i) and τ_a^(i) are the unmodeled force and torque from interactions with other multirotors.

We use x^(ij) to denote the relative state component between robots i and j, e.g., p^(ij) = p^(j) − p^(i). For robot i, the unmodeled force and torque in Eq. 2 are functions of the relative states of its neighbors,

f_a^(i) = f_a(N_i),    τ_a^(i) = τ_a(N_i),    (3)

where N_i is the set of the relative states of the neighbors of i. Note that here we assume the swarm system is homogeneous, i.e., each robot has the same functions Φ, f_a, and τ_a.

II-C Problem Statement & Approach

We aim to improve the control performance of a multirotor swarm during close formation flight by learning the unknown interaction terms f_a and τ_a. Here, we focus on the position dynamics Eq. 1a, so f_a is our primary concern.

We first approximate f_a using a permutation-invariant deep neural network (DNN), and then incorporate the DNN in our exponentially-stabilizing controller. Training is done offline, and the learned interaction dynamics model is applied in the on-board controller in real-time.

III Learning Approach

We employ state-of-the-art deep learning methods to capture the unknown (or residual) multi-vehicle interaction effects. In particular, we require that the deep neural nets (DNNs) have strong Lipschitz properties (for stability analysis), can generalize well to new test cases, and use compact encodings to achieve high computational and statistical efficiency. To that end, we employ deep sets [33] and spectral normalization [2] in conjunction with a standard feed-forward neural architecture.¹

¹ An alternative approach is to discretize the input space and employ convolutional neural networks (CNNs), which also yields a permutation-invariant encoding. However, CNNs suffer from two limitations: 1) they require much more training data and computation; and 2) they are restricted to a pre-determined resolution and input domain.

III-A Permutation-Invariant Neural Networks

The permutation-invariant aspect of the interaction term Eq. 3 can be characterized as:

f_a(x^(i1), …, x^(iK)) = f_a(x^(iσ(1)), …, x^(iσ(K)))    (4)

for any permutation σ of the neighbor indices. Since our goal is to learn the function f_a using DNNs, we need to guarantee that the learned DNN is permutation-invariant. The following lemma (a corollary of Theorem 7 in [33]) gives the necessary and sufficient condition for a DNN to be permutation-invariant.

Lemma 1 (adapted from Thm 7 in [33])

A continuous function f(x₁, …, x_K), with x_k ∈ [0, 1], is permutation-invariant if and only if it is decomposable into ρ(Σ_{k=1}^{K} φ(x_k)), for some functions φ and ρ.

The proof from [33] is highly non-trivial and only holds for a fixed number of vehicles K. Furthermore, their proof technique (which is likely loose) involves a large expansion in the intrinsic dimensionality of the latent representation (specifically K + 1) compared to the dimensionality of the input elements. We will show in our experiments that φ and ρ can be learned using relatively compact DNNs, and can generalize well to larger swarms.

Lemma 1 implies we can consider the following “deep sets” [33] architecture to approximate f_a:

f̂_a(N_i) = ρ( Σ_{x^(ij) ∈ N_i} φ(x^(ij); θ_φ); θ_ρ ),    (5)

where φ and ρ are two DNNs, and θ_φ and θ_ρ are their corresponding parameters. The output of φ is a hidden state that represents the “contribution” of each neighbor, and ρ is a nonlinear mapping from the summation of these hidden states to the total effect. The advantages of this approach are:

  • Representation ability. Since Lemma 1 is necessary and sufficient, we do not lose approximation power by using this constrained framework. We demonstrate strong empirical performance using relatively compact DNNs for φ and ρ.

  • Computational and sampling efficiency and scalability. Since the input dimension of φ is always the same as in the single-neighbor case, the feed-forward computational complexity of Eq. 5 grows linearly with the number of neighboring vehicles. Moreover, given training data from n vehicles, under the homogeneous dynamics assumption, we can reuse the same data n times. In practice, we found that a few minutes of flight data is sufficient to accurately learn interactions between two to five multirotors.

  • Generalization to varying swarm size. Given learned φ and ρ, Eq. 5 can be used to predict interactions for any swarm size. In other words, a model trained on swarms of a certain size may also accurately model (slightly) larger swarms. In practice, we found that, trained with data from three-multirotor swarms, our model gives good predictions for five-multirotor swarms.
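The deep-sets architecture of Eq. 5 can be sketched in PyTorch as follows. The layer widths, the 6-dimensional relative state (relative position and velocity), and the class name are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class DeepSetInteraction(nn.Module):
    """Permutation-invariant model f_a_hat = rho(sum_j phi(x_ij)) (Eq. 5).
    Layer widths and the 6-dim relative state are illustrative choices."""
    def __init__(self, rel_state_dim=6, hidden=32, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(              # per-neighbor encoder
            nn.Linear(rel_state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(              # decoder on the pooled state
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, neighbors):
        # neighbors: (num_neighbors, rel_state_dim); sum-pooling makes the
        # output independent of the neighbor ordering and count
        return self.rho(self.phi(neighbors).sum(dim=0))

model = DeepSetInteraction()
x = torch.randn(4, 6)                          # four neighbors
# permuting the neighbor set does not change the prediction
assert torch.allclose(model(x), model(x[[3, 1, 0, 2]]), atol=1e-5)
```

Because the pooling is a sum, the same trained `phi`/`rho` pair accepts any number of neighbors, which is what enables the varying-swarm-size generalization above.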

III-B Spectral Normalization for Robustness and Generalization

To improve robustness and generalization of DNNs, we use spectral normalization [2] during training. Spectral normalization stabilizes DNN training by constraining the network's Lipschitz constant. Spectrally normalized DNNs have been shown to generalize well, which is an indication of stability in machine learning, and to be robust, which can be used to provide control-theoretic stability guarantees [19, 28].

Mathematically, the Lipschitz constant ‖f‖_Lip of a function f is defined as the smallest value such that:

‖f(x) − f(x′)‖₂ ≤ ‖f‖_Lip ‖x − x′‖₂ for all x, x′.    (6)

Let f(x; θ) = W^(L+1) σ(W^(L) σ(⋯ σ(W^(1) x))) be a ReLU DNN parameterized by the DNN weights θ = {W^(1), …, W^(L+1)}, where the activation function σ(·) = max(·, 0) is the element-wise ReLU function. In practice, we apply the spectral normalization to the weight matrices in each layer after each batch gradient descent as follows:

W^(l) ← W^(l) / σ_max(W^(l)) · γ^(1/(L+1)),    l = 1, …, L+1,    (7)

where σ_max(W^(l)) is the maximum singular value of W^(l) and γ is a hyperparameter. With Eq. 7, ‖f‖_Lip will be upper bounded by γ. Since a spectrally normalized f is Lipschitz continuous, it is robust to noise Δx, i.e., ‖f(x + Δx) − f(x)‖₂ is always bounded by γ‖Δx‖₂. In this paper, we apply the spectral normalization on both the φ and ρ DNNs in Eq. 5.
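The per-layer rescaling of Eq. 7 can be sketched as below. This is a simplified NumPy variant that only shrinks layers whose spectral norm exceeds the per-layer budget γ^(1/n); the function name and the "only shrink" choice are our illustrative assumptions:

```python
import numpy as np

def spectral_normalize(weights, gamma=2.0):
    """After each batch gradient step, rescale the weight matrices so the
    product of their spectral norms (an upper bound on the ReLU network's
    Lipschitz constant) is at most gamma, per Eq. 7. This sketch only
    shrinks layers that exceed their per-layer budget gamma**(1/n)."""
    n = len(weights)
    budget = gamma ** (1.0 / n)
    normalized = []
    for W in weights:
        sigma_max = np.linalg.norm(W, 2)   # maximum singular value of W
        normalized.append(W * min(1.0, budget / sigma_max))
    return normalized
```

For weights well above the budget this reduces to the W ← W/σ_max(W)·γ^(1/(L+1)) update of Eq. 7, and the product of the layers' spectral norms (hence the Lipschitz bound) stays at most γ.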

III-C Data Collection

Learning a DNN to approximate f_a requires collecting close formation flight data. However, the downwash effect causes the nominally controlled multirotors (without compensation for the interaction forces) to move apart from each other, see Fig. 1. Thus, we use a cumulative/curriculum learning approach: first, we collect data for two multirotors without a DNN and learn a model. Second, we repeat the data collection using our learned model as a feed-forward term, which allows closer-proximity flight of the two vehicles. Third, we repeat the procedure with an increasing number of vehicles, using the current best model.

Note that our data collection and learning are independent of the controller used and independent of the compensation. In particular, if we actively compensate for a learned f̂_a, this only affects f_u in (1a) and not the observed f_a.

IV Nonlinear Decentralized Controller Design

Our Neural-Swarm controller is a nonlinear feedback linearization controller using the learned interaction term f̂_a. Note that Neural-Swarm is decentralized, since f̂_a is a function of the neighbor set N_i of vehicle i. Moreover, the computational complexity of f̂_a grows linearly with the size of N_i, since we employ deep sets to encode N_i.

IV-A Reference Trajectory Tracking

Similar to [28], we employ an integral controller that accounts for the predicted residual dynamics, which in our case are the multi-vehicle interaction effects. For vehicle i, we define the position tracking error as p̃ = p − p_d and the composite variable s as:

s = ṗ̃ + 2Λ p̃ + Λ² ∫ p̃ dt = v − v_r,    (8)

where v_r = ṗ_d − 2Λ p̃ − Λ² ∫ p̃ dt is the reference velocity. We design the total desired rotor force as:

f_d = m v̇_r − K s − m g − f̂_a.    (9)

Note that the position control law in Eq. 9 is decentralized, because we only consider the relative states N_i in the controller.
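A minimal sketch of this position law follows. The mass, gains, and the simplification of evaluating v̇_r with the desired acceleration are our illustrative assumptions (on-board, v̇_r is computed exactly):

```python
import numpy as np

# Illustrative gains/mass; not the flight values used in the paper.
m = 0.034                                  # Crazyflie-scale mass (kg)
g_vec = np.array([0.0, 0.0, -9.81])        # gravity vector g
K = np.diag([0.4, 0.4, 0.4])               # composite-variable gain
Lam = np.diag([2.0, 2.0, 2.0])             # Lambda in Eq. 8

def desired_force(p, v, p_d, v_d, a_r, int_err, f_a_hat):
    """Position law of Eq. 9: f_d = m*a_r - K s - m g - f_a_hat, with
    s = v - v_r and v_r = v_d - 2*Lam p_tilde - Lam^2 * int(p_tilde).
    Here a_r stands in for the derivative of v_r."""
    p_tilde = p - p_d
    v_r = v_d - 2.0 * Lam @ p_tilde - Lam @ Lam @ int_err
    s = v - v_r
    return m * a_r - K @ s - m * g_vec - f_a_hat

# hover with zero error and no predicted interaction: thrust balances gravity
z = np.zeros(3)
print(desired_force(z, z, z, z, z, z, z))  # -> [0, 0, m*9.81]
```

The learned f̂_a enters purely as a feed-forward term, so the feedback structure (and its gains) is unchanged from the nominal controller.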

Using f_d, the desired total thrust T_d and desired attitude R_d can be easily computed [1]. Given T_d and R_d, we can use any attitude controller to compute τ_d, for example robust nonlinear tracking control with global exponential stability [1], or geometric tracking control on SO(3) [18]. From this process, we get the desired wrench η_d = [T_d, τ_d]⊤, and then the desired control signal of each vehicle is u_d = B₀⁻¹ η_d, which can be computed in a decentralized manner for each vehicle.

IV-B Nonlinear Stability and Robustness Analysis

Note that since f̂_a ≠ f_a in general, we cannot guarantee that the tracking error p̃ converges to zero. However, under some mild assumptions, we can guarantee input-to-state stability (ISS) using exponential stability [6] for all the vehicles.

Assumption 1

The desired position trajectory p_d(t) and its first and second derivatives are bounded for all t.

Assumption 2

Define the learning error as ε = f_a − f̂_a, with two components: ε = ε̄ + ε̃(t), where ε̄ is some constant bias and ε̃(t) is a time-varying term. We assume that for vehicle i, ‖ε̃(t)‖ is upper bounded by d_i.

Theorem 2

Under Assumptions 1 and 2, for vehicle i tracking some desired trajectory p_d(t), Eq. 9 achieves exponential convergence of the tracking error to an error ball:

lim_{t→∞} ‖p̃(t)‖ = d_i / (λ_min(Λ)² λ_min(K)).    (10)
For vehicle i, consider the Lyapunov function V = (1/2) m ‖s‖². With controller Eq. 9, we get the time-derivative of V:

V̇ = s⊤(m v̇ − m v̇_r) = s⊤(−K s + ε̃(t)),

where the constant bias ε̄ is compensated by the integral term in s. Using ‖ε̃(t)‖ ≤ d_i, we have

V̇ ≤ −(2 λ_min(K)/m) V + d_i ‖s‖.

Using the Comparison Lemma [15], we obtain

‖s(t)‖ ≤ ‖s(0)‖ e^{−(λ_min(K)/m) t} + d_i / λ_min(K).

Note that s = ṗ̃ + 2Λ p̃ + Λ² ∫ p̃ dt, and the hierarchical combination between s and p̃ results in lim_{t→∞} ‖p̃‖ ≤ lim_{t→∞} ‖s‖ / λ_min(Λ)², yielding Eq. 10.
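The ISS behavior of the composite variable can be sanity-checked numerically: simulating the simplified scalar-gain closed loop m ṡ = −K s + ε̃(t) with ‖ε̃‖ ≤ d (all constants below are made-up illustration values), ‖s‖ should decay exponentially and settle inside the ball of radius d/λ_min(K):

```python
import numpy as np

# Simplified closed loop m*s_dot = -K s + eps(t), K = k*I, ||eps(t)|| <= d.
m, k, d = 0.034, 0.5, 0.01          # mass (kg), scalar gain, disturbance bound
dt, steps = 1e-4, 40000
s = np.array([0.3, -0.2, 0.1])      # initial composite tracking error
for i in range(steps):
    t = i * dt
    eps = d * np.array([np.sin(50*t), np.cos(50*t), 0.0])  # ||eps|| = d
    s = s + dt * (-k*s + eps) / m   # forward-Euler integration
# the bound predicts ||s|| ends inside the ball d/lambda_min(K) = d/k
print(np.linalg.norm(s), d / k)
```

The final error norm stays below d/k even though the disturbance persists, matching the Comparison Lemma bound after the exponential transient dies out.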

V Experiments

We use a slightly modified Crazyflie 2.0 (CF) as our quadrotor platform, a small, lightweight, commercially available quadrotor. We use the Crazyswarm [26] package to control multiple Crazyflies simultaneously. Each quadrotor is equipped with four reflective markers for pose tracking using a motion capture system. The nonlinear controller, extended Kalman filter, and neural network evaluation run on-board the STM32 microcontroller.

For data collection, we use the uSD card extension board and store binary encoded data at regular intervals. Each dataset is timestamped using the on-board microsecond timer, and the clocks are synchronized before takeoff using broadcast radio packets. The drift of the clocks of different Crazyflies can be ignored for our short flight times.

V-A Calibration and System Identification

Prior to learning the residual term , we first calibrate the nominal dynamics model . We found that existing motor thrust models [3, 10] are not very accurate, because they only consider a single motor and ignore the effect of the battery state of charge. We calibrate each Crazyflie by mounting the whole quadrotor on a load cell which is directly connected to a custom extension board. We collect the current battery voltage, PWM signals (identical for all 4 motors), and measured force from the load cell for various motor speeds. We use this data to find two polynomial functions. The first computes the PWM signal given the current battery voltage and desired force. The second computes the maximum achievable force, given the current battery voltage. This second function is important for thrust mixing when motors are saturated [9].
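The first calibration map can be illustrated with a bilinear least-squares fit on synthetic stand-in data. The coefficient values, data ranges, and the bilinear form are made up for illustration; the authors' polynomial orders are not specified here:

```python
import numpy as np

# Synthetic stand-ins for logged load-cell samples: (voltage, force, pwm).
# The "true" relationship below is invented for illustration only.
rng = np.random.default_rng(1)
volt = rng.uniform(3.0, 4.2, 500)            # battery voltage (V)
force = rng.uniform(0.0, 0.6, 500)           # measured thrust (N)
pwm = 20000 + 40000*force - 5000*(volt - 3.6) + 8000*force*(volt - 3.6)

# Fit pwm ~ c0 + c1*force + c2*volt + c3*force*volt by least squares.
A = np.column_stack([np.ones_like(pwm), force, volt, force*volt])
coef, *_ = np.linalg.lstsq(A, pwm, rcond=None)

def pwm_for(force_des, volt_now, c=coef):
    """First calibration map: PWM command for a desired force at the
    current battery voltage."""
    return c[0] + c[1]*force_des + c[2]*volt_now + c[3]*force_des*volt_now
```

The second map (maximum achievable force given voltage) would be fit the same way with force as the dependent variable at full throttle.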

We notice that the default motors and propellers can only produce a limited total force with a full battery, resulting in a best-case thrust-to-weight ratio of 1.4. Thus, we replaced the motors with more powerful ones (with the same physical dimensions) to improve the best-case thrust-to-weight ratio to 2.6. We take the remaining parameters (e.g., the inertia matrix and the thrust-to-torque ratio) from the existing literature [10].

V-B Data Collection and Learning

We utilize two types of data collection tasks: random walk and swapping. For random walk, we implement a simple reactive collision avoidance approach based on artificial potentials on-board each Crazyflie [16]. The host computer randomly selects new goal points within a small cube for each vehicle at a fixed frequency. Those goal points act as an attractive force, while neighboring vehicles contribute a repulsive force. For swapping, we place vehicles in different horizontal planes on a cylinder and let them move to the opposite side. All vehicles are vertically aligned at one time instant, causing a large interaction force; see Fig. 4, 2, and 1 for examples with two, three, and four vehicles. The random walk data helps us to explore the whole space quickly, while the swapping data ensures that we have data for a specific task of interest. For both task types, we varied the scenarios from two to four vehicles, and collected one minute of data for each scenario.

To learn the interaction function f_a, we collect the timestamped states x^(i) for each vehicle i. We then compute the observed value of f_a^(i) as m v̇ − m g − R f_u from Eq. 1a, where f_u is calculated based on our system identification in Sec. V-A. Our training data consists of sequences of (N_i, f_a^(i)) pairs, where N_i is the set of the relative states of the neighbors of i. In practice, we compute the relative states from our collected data as relative global position and relative global velocity, since the attitude information is not dominant for f_a. In this work, we only learn the z-component of f_a, since we found the other two components are very small and do not significantly alter the nominal dynamics.
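The label computation just described can be sketched as follows, using finite-difference acceleration on the logged velocities. The function and argument names are hypothetical, and the commanded rotor force is assumed to be already rotated into the world frame:

```python
import numpy as np

def interaction_force_labels(t, v, f_u_world, m=0.034, g=9.81):
    """Observed residual from Eq. 1a rearranged: f_a = m*v_dot - m*g_vec - R f_u.

    t: (N,) timestamps; v: (N, 3) world-frame velocity; f_u_world: (N, 3)
    commanded rotor force already rotated into the world frame (R @ f_u).
    """
    g_vec = np.array([0.0, 0.0, -g])
    v_dot = np.gradient(v, t, axis=0)   # finite-difference acceleration
    return m * v_dot - m * g_vec - f_u_world
```

As a consistency check: for a vehicle hovering at constant velocity with thrust exactly canceling gravity, the labels come out as zero, i.e., no residual interaction force.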

Since our swarm is homogeneous, each vehicle has the same function f_a. Thus, we stack all vehicles' data and train on them together, which implies more training data overall for larger swarms. Let D_i denote the training data of vehicle i, where each input-output pair is (N_i, f_{a,z}^(i)). We use the ReLU network class for both the φ and ρ neural networks, and our training loss is the squared prediction error of Eq. 5 summed over all vehicles' data, where θ_φ and θ_ρ are the neural network weights to be learned. Both our φ DNN and our ρ DNN have four layers.

We use PyTorch [25] for training and for the implementation of spectral normalization (see Sec. III-B) of φ and ρ. We found that spectral normalization is particularly important for the small Crazyflie quadrotors, because their IMUs are directly mounted on the PCB frame, causing noisier measurements compared to bigger quadrotors.

Using the learned weights θ_φ and θ_ρ, we generate C code to evaluate both networks efficiently on-board the quadrotor, similar to prior work [22]. The STM32 microcontroller evaluates each network quickly enough that we can compute f̂_a for six or fewer neighbors, which is sufficient for real-time operation.
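Exporting the trained weights for an on-board evaluator might look like the following illustrative helper (this is our sketch, not the authors' code generator; the array name is hypothetical):

```python
import numpy as np

def export_layer_to_c(name, W):
    """Emit a C array literal for one weight matrix of the phi/rho MLPs,
    for a hand-rolled on-board evaluator (illustrative helper only)."""
    rows = ",\n  ".join(
        "{" + ", ".join(f"{x:.6f}f" for x in row) + "}" for row in W)
    return (f"static const float {name}[{W.shape[0]}][{W.shape[1]}] = {{\n"
            f"  {rows}\n}};")

print(export_layer_to_c("phi_w1", np.eye(2)))
```

Baking the weights into `static const float` arrays keeps them in flash and lets the microcontroller evaluate the MLP with plain loops, no dynamic allocation.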

V-C Neural-Swarm Control Performance

Controller        | 2 CF Swap | 3 CF Swap | 4 CF Swap | 5 CF Swap
Baseline          | 0.094     | 0.139     | 0.209     | 0.314
Trained w/ 2 CF   | 0.027     | 0.150     | 0.294     | N.A.
Trained w/ 3 CF   | 0.026     | 0.082     | 0.140     | 0.159
Trained w/ 4 CF   | 0.024     | 0.061     | 0.102     | 0.150
TABLE I: Maximum z-error (in meters) for swapping tasks with varying swarm sizes and neural networks. Training on more vehicles leads to the best overall performance for all swarm sizes.

We study the performance and generalization of different controllers on a swapping task using a varying number of quadrotors. An example of our swapping task for two vehicles is shown in Fig. 1. The swapping task for multiple vehicles causes them to align vertically at one point in time, with small vertical distances between neighbors. This task is challenging, because: i) the lower vehicles experience downwash from multiple vehicles flying above; ii) the different velocity vectors of the vehicles create interesting effects, including one where the interaction force is positive for a short period of time (see Fig. 3(b) for an example); and iii) for the cases with more than two vehicles, the aerodynamic effect is not a simple superposition of each pair (see Fig. 3(c-f) for examples).

We use the following four controllers: 1) the baseline controller, which uses our position tracking controller Eq. 9 with f̂_a = 0 and a nonlinear attitude tracking controller [18]; 2) – 4) the same controller with the same gains, but with f̂_a computed using different neural networks (trained on data from flying 2, 3, and 4 quadrotors, respectively). Note that all controllers, including the baseline, always include the integral compensation term. Though an integral gain can cancel steady-state error during set-point regulation, it struggles with complex time-variant interactions between vehicles. This issue is also reflected in the tracking error bound of Theorem 2, where the tracking error converges to a ball whose radius scales with the bound on the time-varying part of the learning error. For our baseline the entire interaction force f_a is unmodeled, so when f_a changes quickly, as in the swapping task, the baseline will not perform well.

Fig. 2: Three vehicles moving using different control policies (corresponding to Table I, column 2). Each quadrotor flies at a different fixed height ( vertical separation) and swaps sides such that all vehicles align vertically at and . Our approach trained on 3 or 4 vehicles controls the height much better than the baseline approach.

We repeat the swapping task for each controller six times, and report the maximum z-error that occurred for any vehicle over the whole flight. We also verified that the x- and y-error distributions are similar across the different controllers and do not report those numbers for brevity.

Results. Our results, summarized in Table I, show three important outcomes: i) our controller successfully reduces the worst-case z-error by a factor of two to four (e.g., 0.024 m instead of 0.094 m for the two-vehicle case); ii) our controller successfully generalizes to cases with more vehicles when trained with at least three vehicles (e.g., the controller trained with three quadrotors significantly improves flight performance even when flying five quadrotors); and iii) our controllers do not marginalize small-vehicle cases (e.g., the controller trained with four quadrotors works very well for the two-vehicle case). The observed maximum z-error for the test cases with three to five quadrotors is larger compared to the two-vehicle case because we occasionally saturate the motors during flight.

Fig. 2 depicts an example of the swapping task for three quadrotors (showing two out of the six swaps), which corresponds to column 2 of Table I. We observe that: i) when trained on at least three quadrotors, our approach significantly outperforms the baseline controller; and ii) the performance degrades significantly when only trained on two quadrotors, since the training data does not include data on superpositions.

Fig. 3: Predictions generated by the φ and ρ networks trained with 3 CF data. Each heatmap gives the predicted interaction force for a vehicle at different horizontal and vertical (global) positions; the (global) positions of neighboring vehicles are represented by blue stars. A more detailed explanation is in Sec. V-D.

V-D Learned Neural Network Visualization

Fig. 3 depicts the predicted interaction force, trained with flight data from 3 multirotors. The color encodes the magnitude of the predicted force for a single multirotor positioned at different global coordinates. The blue stars indicate the (global) coordinates of neighboring multirotors; all quadrotors are in the same vertical plane. For example, in Fig. 3(c) two quadrotors hover at the positions marked by the blue stars; placing a third quadrotor anywhere in the plane, the heatmap color at that position gives its estimated interaction force. All quadrotors are assumed to be stationary except in Fig. 3(b), where the neighbor is moving.

We observe that the interaction between quadrotors is non-stationary and sensitive to relative velocity, as well as not a simple superposition between pairs. In Fig. 3(b), the vehicle's neighbor is moving, and the prediction becomes significantly different from Fig. 3(a), where the neighbor is just hovering. Moreover, in Fig. 3(b) there is an interesting region with a relatively large positive interaction force, which is consistent with our observations in flight experiments. We can also observe that the interactions are not a simple superposition of different pairs. For instance, Fig. 3(e) shows a significantly stronger updraft effect outside the downwash region than expected from a simple superposition of the prediction in Fig. 3(a).

Our approach generalizes well from data for 3 vehicles to a larger 4-vehicle system. In Fig. 3, all the predictions are from φ and ρ networks trained with 3 CF data, but the predictions for a 4-vehicle team (as shown in Fig. 3(e-f)) are still reasonable and work well in real flight tests (see Table I and Fig. 2). For this 4 CF swapping task, we compare the ground truth interaction force and its prediction in Fig. 4. As before, the prediction is computed using neural networks trained with 3 CF flight data. We found that 1) for multirotors 3 and 4, the interaction force is so large that we cannot fully compensate for it within our thrust limits; and 2) the prediction matches the ground truth very well, even for complex interactions (e.g., multirotor 2 in Fig. 4), which indicates that our approach generalizes well.

Fig. 4: Ground truth interaction force and its prediction for a 4-vehicle swapping task. The prediction is generated by neural networks trained with data from three vehicles. Our method generalizes well and predicts the force with high accuracy.

VI Conclusion

In this paper, we present a decentralized controller that enables close-proximity flight of homogeneous multirotor teams. Our solution, Neural-Swarm, uses deep neural networks to learn the interaction forces between multiple quadrotors and only relies on relative positions and velocities of neighboring vehicles. We demonstrate in flight tests that our training method generalizes well to a varying number of neighbors, is computationally efficient, and reduces the worst-case height error by a factor of two or better. To our knowledge, our solution is the first that models interactions between more than two multirotors.

There are many directions for future work. First, one can extend our work to heterogeneous swarms, which may require extending the neural net architecture beyond spectrally normalized deep sets. Second, one can use the learned interaction forces for motion planning and control of dynamically changing formations. Third, one can learn the interaction torques τ_a in addition to f_a to further improve flight performance during aggressive maneuvers.


  • [1] S. Bandyopadhyay, S. Chung, and F. Y. Hadaegh (2016) Nonlinear attitude control of spacecraft with a large captured object. Journal of Guidance, Control, and Dynamics 39 (4), pp. 754–769. Cited by: §IV-A.
  • [2] P. L. Bartlett, D. J. Foster, and M. Telgarsky (2017) Spectrally-normalized margin bounds for neural networks. In Conference on Neural Information Processing Systems (NIPS), pp. 6240–6249. External Links: Link Cited by: §I, §III-B, §III.
  • [3] Bitcraze (2015)(Website) External Links: Link Cited by: §V-A.
  • [4] I. Cheeseman and W. Bennett (1955) The effect of ground on a helicopter rotor in forward flight. Aeronautical Research Council Reports And Memoranda. Cited by: §I.
  • [5] R. Cheng, A. Verma, G. Orosz, S. Chaudhuri, Y. Yue, and J. Burdick (2019) Control regularization for reduced variance reinforcement learning. In International Conference on Machine Learning (ICML), pp. 1141–1150. External Links: Link Cited by: §I.
  • [6] S. Chung, S. Bandyopadhyay, I. Chang, and F. Y. Hadaegh (2013) Phase synchronization control of complex networks of lagrangian systems on adaptive digraphs. Automatica 49 (5), pp. 1148–1161. Cited by: §IV-B.
  • [7] S. Chung, A. A. Paranjape, P. M. Dames, S. Shen, and V. Kumar (2018) A survey on aerial swarm robotics. IEEE Transactions on Robotics (T-RO) 34 (4), pp. 837–855. External Links: Link, Document Cited by: §I.
  • [8] M. Debord, W. Hönig, and N. Ayanian (2018) Trajectory planning for heterogeneous robot teams. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7924–7931. External Links: Link, Document Cited by: §I.
  • [9] M. Faessler, D. Falanga, and D. Scaramuzza (2017) Thrust mixing, saturation, and body-rate control for accurate aggressive quadrotor flight. IEEE Robotics and Automation Letters (RA-L) 2 (2), pp. 476–482. External Links: Link, Document Cited by: §V-A.
  • [10] J. Förster (2015) System identification of the crazyflie 2.0 nano quadrocopter. Master’s Thesis, ETH Zurich, ETH Zurich, Zurich, (en). External Links: Document Cited by: §V-A, §V-A.
  • [11] W. Hönig, J. A. Preiss, T. K. S. Kumar, G. S. Sukhatme, and N. Ayanian (2018) Trajectory planning for quadrotor swarms. IEEE Trans. Robotics 34 (4), pp. 856–869. External Links: Link, Document Cited by: §I.
  • [12] K. P. Jain, T. Fortmuller, J. Byun, S. A. Mäkiharju, and M. W. Mueller (2019) Modeling of aerodynamic disturbances for proximity flight of multirotors. In 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Vol. , pp. 1261–1269. External Links: Document, Link, ISSN 2575-7296 Cited by: §I.
  • [13] T. Johannink, S. Bahl, A. Nair, J. Luo, A. Kumar, M. Loskyll, J. A. Ojea, E. Solowjow, and S. Levine (2019) Residual reinforcement learning for robot control. In International Conference on Robotics and Automation (ICRA), pp. 6023–6029. External Links: Link, Document Cited by: §I.
  • [14] X. Kan, J. Thomas, H. Teng, H. G. Tanner, V. Kumar, and K. Karydis (2019-10) Analysis of ground effect for small-scale uavs in forward flight. IEEE Robotics and Automation Letters 4 (4), pp. 3860–3867. External Links: Document, ISSN 2377-3766, Link Cited by: §I, §I.
  • [15] H.K. Khalil (2002) Nonlinear systems. Pearson Education, Prentice Hall. External Links: ISBN 9780130673893, LCCN 95045804 Cited by: §IV-B.
  • [16] O. Khatib (1985) Real-time obstacle avoidance for manipulators and mobile robots. In IEEE International Conference on Robotics and Automation (ICRA), pp. 500–505. External Links: Link, Document Cited by: §V-B.
  • [17] H. M. Le, A. Kang, Y. Yue, and P. Carr (2016) Smooth imitation learning for online sequence prediction. In International Conference on Machine Learning (ICML), Vol. 48, pp. 680–688. External Links: Link Cited by: §I.
  • [18] T. Lee, M. Leok, and N. H. McClamroch (2010) Geometric tracking control of a quadrotor UAV on SE(3). In IEEE Conference on Decision and Control (CDC), pp. 5420–5425. External Links: Link, Document Cited by: §IV-A, §V-C.
  • [19] A. Liu, G. Shi, S. Chung, A. Anandkumar, and Y. Yue (2019) Robust regression for safe exploration in control. CoRR abs/1906.05819. External Links: Link, 1906.05819 Cited by: §III-B.
  • [20] C. D. McKinnon and A. P. Schoellig (2019) Learn fast, forget slow: safe predictive learning control for systems with unknown and changing dynamics performing repetitive tasks. IEEE Robotics and Automation Letters (RA-L) 4 (2), pp. 2180–2187. External Links: Link, Document Cited by: §I.
  • [21] D. Mellinger, A. Kushleyev, and V. Kumar (2012) Mixed-integer quadratic program trajectory generation for heterogeneous quadrotor teams. In IEEE International Conference on Robotics and Automation (ICRA), pp. 477–483. External Links: Link, Document Cited by: §I.
  • [22] A. Molchanov, T. Chen, W. Hönig, J. A. Preiss, N. Ayanian, and G. S. Sukhatme (2019) Sim-to-(multi)-real: transfer of low-level robust control policies to multiple quadrotors. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 59–66. External Links: Link, Document Cited by: §V-B.
  • [23] D. Morgan, S. Chung, and F. Y. Hadaegh (2014) Model predictive control of swarms of spacecraft using sequential convex programming. Journal of Guidance, Control, and Dynamics 37 (6), pp. 1725–1740. Cited by: §I.
  • [24] D. Morgan, G. P. Subramanian, S. Chung, and F. Y. Hadaegh (2016) Swarm assignment and trajectory optimization using variable-swarm, distributed auction assignment and sequential convex programming. International Journal of Robotics Research (IJRR) 35 (10), pp. 1261–1285. External Links: Link, Document Cited by: §I, §I.
  • [25] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Conference on Neural Information Processing Systems (NeurIPS), pp. 8024–8035. External Links: Link Cited by: §V-B.
  • [26] J. A. Preiss, W. Hönig, G. S. Sukhatme, and N. Ayanian (2017) Crazyswarm: A large nano-quadcopter swarm. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3299–3304. External Links: Link, Document Cited by: §V.
  • [27] M. Saveriano, Y. Yin, P. Falco, and D. Lee (2017) Data-efficient control policy search using residual dynamics learning. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4709–4715. External Links: Link, Document Cited by: §I.
  • [28] G. Shi, X. Shi, M. O’Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S. Chung (2019) Neural Lander: stable drone landing control using learned dynamics. In International Conference on Robotics and Automation (ICRA), pp. 9784–9790. External Links: Link, Document Cited by: §I, §I, §III-B, §IV-A.
  • [29] D. Shukla and N. Komerath (2018) Multirotor drone aerodynamic interaction investigation. Drones 2 (4). External Links: Link, ISSN 2504-446X, Document Cited by: §I.
  • [30] A. J. Taylor, V. D. Dorobantu, H. M. Le, Y. Yue, and A. D. Ames (2019) Episodic learning with control lyapunov functions for uncertain robotic systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6878–6884. External Links: Link, Document Cited by: §I.
  • [31] J. van den Berg, S. J. Guy, M. C. Lin, and D. Manocha (2009) Reciprocal n-body collision avoidance. In International Symposium on Robotics Research (ISRR), Vol. 70, pp. 3–19. External Links: Link, Document Cited by: §I.
  • [32] D. Yeo, E. Shrestha, D. A. Paley, and E. M. Atkins (2015) An empirical model of rotorcraft UAV downwash for disturbance localization and avoidance. In AIAA Atmospheric Flight Mechanics Conference. External Links: Document, Link Cited by: §I.
  • [33] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Póczos, R. Salakhutdinov, and A. J. Smola (2017) Deep sets. In Conference on Neural Information Processing Systems (NIPS), pp. 3391–3401. External Links: Link Cited by: §I, §III-A, §III-A, §III-A, §III, Lemma 1.