Touch the Wind: Simultaneous Airflow, Drag and Interaction Sensing on a Multirotor

03/04/2020
by   Andrea Tagliabue, et al.
MIT
Carnegie Mellon University

Disturbance estimation for Micro Aerial Vehicles (MAVs) is crucial for robustness and safety. In this paper, we use novel, bio-inspired airflow sensors to measure the airflow acting on a MAV, and we fuse this information in an Unscented Kalman Filter (UKF) to simultaneously estimate the three-dimensional wind vector, the drag force, and other interaction forces (e.g. due to collisions, interaction with a human) acting on the robot. To this end, we present and compare a fully model-based and a deep learning-based strategy. The model-based approach considers the MAV and airflow sensor dynamics and its interaction with the wind, while the deep learning-based strategy uses a Long Short-Term Memory (LSTM) neural network to obtain an estimate of the relative airflow, which is then fused in the proposed filter. We validate our methods in hardware experiments, showing that we can accurately estimate relative airflow of up to 4 m/s, and we can differentiate drag and interaction force.


I Introduction

The deployment of MAVs in uncertain and constantly changing atmospheric conditions [1, 2, 3] requires the ability to estimate and adapt to disturbances such as the aerodynamic drag force applied by wind gusts. Simultaneously, as new interaction-based missions [4, 5, 6] arise, so does the need to differentiate between forces caused by aerodynamic disturbances and forces from other sources of interaction [7, 8, 9, 10]. Differentiating between aerodynamic drag force and interaction force can be extremely important for safety reasons: for example, the controller of a robot should react differently depending on whether a large disturbance is caused by a wind gust or by a human trying to interact with the machine [11].

Distinguishing between drag and interaction disturbances can be challenging, as they both apply forces to the center of mass (CoM) of the multirotor that cannot be easily differentiated by examining the inertial information commonly available from the robot’s onboard IMU or odometry estimator. Successful approaches for this task include a model-based method that measures the change in thrust-to-power ratio of the propellers caused by wind [12] and an approach which monitors the frequency component of the total disturbance (estimated via inertial information) to distinguish between the two possible sources of force [13].

This work presents a strategy for simultaneously estimating the interaction force and the aerodynamic drag disturbances using novel bio-inspired, whisker-like sensors that measure the airflow around a multirotor, as shown in Fig. 1. Our approach takes inspiration from the way insects sense airflow [14]: by measuring the deflections that the aerodynamic drag force causes on the appendages of some of their receptors. By fusing the information of four airflow sensors distributed across the surface of the robot, we can create a three-dimensional estimate of the relative velocity of the MAV with respect to the surrounding airflow. This information is then fused in a UKF-based force estimator that uses an aerodynamic model together with the robot's pose and velocity to predict the wind, the expected drag force, and other interaction forces.

Fig. 1: MAV equipped with four bio-inspired airflow sensors used to estimate a three-dimensional wind vector from which we can distinguish aerodynamic drag from other forces (e.g., due to interaction).

To account for the complex aerodynamic interactions between sensors and propellers [15, 16], we extend this model-based approach (based on first-order physical principles) with a data-driven strategy. This strategy employs a Recurrent Neural Network (RNN) based on an LSTM to provide an estimate of the relative airflow of the robot, which is then fused in the proposed model-based estimation scheme. We experimentally show that our approach achieves an accurate estimate of the relative airflow with respect to the robot at velocities up to 4 m/s, and enables interaction forces and aerodynamic drag forces to be distinguished. We experimentally compare the model-based and learning-based approaches, highlighting their advantages and disadvantages.

To summarize, the contributions of this paper are: 1) model- and deep learning-based strategies to simultaneously estimate wind, drag force, and other interaction forces using novel bio-inspired sensors similar to the one discussed in [17]; and 2) experimental validation of our approaches, showing that we can accurately estimate relative airflow of up to 4 m/s and distinguish between interaction force and aerodynamic force.

II Related Work

Distinguishing between interaction and aerodynamic disturbances is a challenging task, and most current approaches focus on estimating one or the other.

Aerodynamic disturbances: Accurate wind or airflow sensing is at the heart of the techniques employed for aerodynamic disturbance estimation. A common strategy is based on directly measuring the airflow surrounding the robot via dedicated sensors, such as pressure sensors [18], ultrasonic sensors [19], or whisker-like sensors [20]. Other strategies estimate the airflow via its inertial effects on the robot, for example using model-based approaches [21, 22], learning-based approaches [23, 24], or hybrid (model-based and learning-based) solutions [25].

Generic wrench-like disturbances: Multiple related works focus instead on estimating wrench disturbances, without explicitly separating out the effects of the drag force due to wind: [7, 26, 6, 8] propose model-based approaches which utilize a UKF for wrench estimation, while [27] proposes a factor graph-based estimation scheme.

III Sensor Design

Fig. 2: Illustration of an airflow sensor and its reference frame S, with the main components labeled.

Measuring the three-dimensional relative wind acting on a MAV requires lightweight, economical, and multi-directional sensors. This paper adopts sensors based on [17], which satisfy these requirements.

III-A Sensor design and considerations

The sensors, shown in Fig. 2, consist of a base and an easily exchangeable tip. The base is composed of a magnetic field sensor connected to a conditioning circuit that interfaces with the robot via I2C, and a 3D-printed case that encloses the sensor. The tip consists of a planar spring mounted in a 3D-printed enclosure that mates with the base, with a permanent magnet attached to its bottom and a carbon-fiber rod glued to the spring's top. Eight foam fins are attached to the other end of this rod. When the sensor is subjected to airflow, the drag force of the air on the fins causes a rotation about the center of the planar spring, which results in a displacement of the magnet. This displacement is then measured by the magnetic sensor. The fins are placed with even angular distribution in order to achieve homogeneous drag for different airflow directions. Foam and carbon fiber were chosen as the materials of the fin structure due to their low density, which is crucial to minimize the inertia of the sensor. See [17] for more information about the sensor characteristics and manufacturing procedure.

Due to the complex aerodynamic interactions between the relative airflow and the rotor-blade wakes, the sensor placement needs to be chosen carefully [15, 16]. To determine the best locations, we attached short pieces of string both directly on the vehicle and on metal rods extending away from it horizontally and vertically. We then flew the hexarotor indoors and observed that the pieces of string on top of the vehicle and on the propeller guards were mostly unaffected by the rotor wakes. These are therefore the two locations chosen to mount the sensors, as seen in Fig. 1. They are distributed so that the relative airflow coming from any direction excites at least one sensor (that is, for at least one sensor, the relative airflow is not aligned with its length).

III-B Sensor measurements

The sensors detect the magnetic field $\mathbf{m}_i = [m_{x,i},\, m_{y,i},\, m_{z,i}]^\top$, but the model outlined in Section IV-B requires the deflection angles $\alpha_i$ and $\beta_i$ of the $i$-th sensor, which correspond to the rotation of the carbon-fiber rod about the $x$- and $y$-axes of reference frame $S_i$. At the spring's equilibrium, the rod is straight and $m_{x,i} = m_{y,i} = 0$, where $s = 1$ if the magnet's north pole is facing the carbon-fiber rod. The angles are then

$$\alpha_i = \arctan\left(\frac{m_{y,i}}{s\, m_{z,i}}\right), \qquad \beta_i = \arctan\left(\frac{m_{x,i}}{s\, m_{z,i}}\right). \tag{1}$$

Note that if the magnet was assembled with the south pole facing upward instead, $s = -1$ must be used in Eq. 1.
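A minimal sketch of this conversion, following the arctangent form of Eq. 1 reconstructed above; the pairing of field components to angles reflects our frame assumption.

```python
import numpy as np

def deflection_angles(m: np.ndarray, s: int = 1):
    """Deflection angles (alpha_i, beta_i) of one whisker from its 3-axis
    magnetic field measurement m = [m_x, m_y, m_z], per Eq. 1.

    s = +1 if the magnet's north pole faces the carbon-fiber rod,
    s = -1 if the magnet was assembled with the south pole facing upward.
    """
    m_x, m_y, m_z = m
    alpha = np.arctan(m_y / (s * m_z))  # rotation about the sensor's x-axis
    beta = np.arctan(m_x / (s * m_z))   # rotation about the sensor's y-axis
    return alpha, beta
```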

IV Model-Based Approach

In this section, we present the model-based approach used to simultaneously estimate airflow, interaction force, and aerodynamic drag force on a MAV. The estimation scheme is based on the UKF [28] approach presented in our previous work [6, 8], augmented with the ability to estimate a three-dimensional wind vector via the relative airflow measurements provided by the whiskers. Here we summarize the approach and present a measurement model for the airflow sensors. A diagram of the most important signals and system-level blocks related to our approach is included in Fig. 3.

Reference frame definition

We consider an inertial reference frame W, a body-fixed reference frame B attached to the CoM of the robot, and the $i$-th sensor reference frame $S_i$, with $i \in \{1, \dots, 4\}$, as shown in Fig. 2.

IV-A MAV dynamic model

We consider a MAV of mass $m$ and inertia tensor $J$; the dynamic equations of the robot can be written as

$$\begin{aligned}
\dot{p} &= v, \\
m\dot{v} &= R f_t + m g + f_i + f_d, \\
\dot{R} &= R\,[\omega]_\times, \\
J\dot{\omega} &= \tau - \omega \times J\omega,
\end{aligned} \tag{2}$$

where $p$ and $v$ represent the position and velocity of the MAV, respectively, $R$ is the rotation matrix representing the attitude of the robot (i.e., such that a vector $x_B$ in the body frame maps to the inertial frame as $x_W = R\, x_B$), and $[\cdot]_\times$ denotes the skew-symmetric matrix. The vector $f_t$ is the thrust force produced by the propellers along the $z$-axis of the body frame, $g$ is the gravitational acceleration, and $f_i$ is the interaction force expressed in the inertial frame. For simplicity we have assumed that interaction and aerodynamic disturbances do not cause any torque on the MAV, due to its symmetric shape and the fact that interactions (in our hardware setup) can only safely happen in proximity of the center of mass of the robot. The vector $\tau$ represents the torque generated by the propellers and $\omega$ the angular velocity of the MAV, both expressed in the body reference frame. Here $f_d$ is the aerodynamic drag force on the robot, expressed as an isotropic drag [29]

$$f_d = \left(c_1 + c_2 \|v_r\|\right) v_r, \tag{3}$$

where $c_1$ and $c_2$ are drag coefficients and $v_r$ is the velocity vector of the relative airflow acting on the CoM of the MAV (expressed in the inertial frame),

$$v_r = v_w - v, \tag{4}$$

where $v_w$ is the velocity vector of the wind expressed in the inertial frame.
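As a concrete reference, a minimal sketch of Eqs. 3 and 4; the coefficient values passed in at the end are hypothetical placeholders for the identified $c_1$, $c_2$ of Section VI-A1.

```python
import numpy as np

def relative_airflow_world(v_wind: np.ndarray, v_mav: np.ndarray) -> np.ndarray:
    """Relative airflow at the CoM in the inertial frame, Eq. 4: v_r = v_w - v."""
    return v_wind - v_mav

def drag_force(v_r: np.ndarray, c1: float, c2: float) -> np.ndarray:
    """Isotropic drag model of Eq. 3; c1, c2 are the identified coefficients."""
    return (c1 + c2 * np.linalg.norm(v_r)) * v_r

# Example: 3 m/s headwind against a MAV flying at 2 m/s along x
# (c1, c2 are illustrative values, not the identified ones).
v_r = relative_airflow_world(np.array([-3.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]))
f_d = drag_force(v_r, c1=0.1, c2=0.05)
```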

Fig. 3: Diagram of the most important signals used by each step of the proposed model-based approach for simultaneous estimation of wind, drag force, and interaction force.

IV-B Airflow sensor model

We consider the $i$-th airflow sensor to be rigidly attached to the body reference frame B, with $i \in \{1, \dots, 4\}$. The reference frame $S_i$ of each sensor is translated with respect to B by a vector $t_i$ and rotated according to the rotation matrix $R_{S_i}$. To derive a model of the whiskers subject to aerodynamic drag, we make the following assumptions: each whisker is massless, and its tilt angle is not significantly influenced by accelerations of the base (due to the high stiffness of its spring and the low mass of the fins), but is subject to the aerodynamic drag force $f_{a,i}$.

We further assume that each sensor can be modeled as a stick hinged at the base via a linear torsional spring. Each sensor outputs the displacement angles $\alpha_i$ and $\beta_i$, which correspond to the rotation of the stick around the $x$- and $y$-axes of the reference frame $S_i$. We can then express the aerodynamic drag force acting on the aerodynamic surface of each sensor as a function of the (small) angular displacements:

$$f_{a,i} = \frac{k}{l}\, \Pi \begin{bmatrix} \alpha_i \\ \beta_i \end{bmatrix}, \tag{5}$$

where $k$ represents the stiffness of the torsional spring, $l$ the length of the sensor, and

$$\Pi = \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ 0 & 0 \end{bmatrix} \tag{6}$$

captures the assumption that the aerodynamic drag acting on the $z$-axis of the sensor is small (given the fin shapes) and has a negligible effect on the sensor deflection.

We now consider the aerodynamic force acting on a whisker. Assuming a non-isotropic drag, proportional to the squared relative velocity w.r.t. the relative airflow, we obtain

$$f_{a,i} = \frac{1}{2}\,\rho\,\|v_{r,i}\|\,\operatorname{diag}\!\left(c_x A_x,\; c_y A_y,\; c_z A_z\right) v_{r,i}, \tag{7}$$

where $\rho$ is the density of the air, $A_j$ is the aerodynamic section along each axis $j \in \{x, y, z\}$ of $S_i$, and $c_j$ the corresponding drag coefficient. Due to the small vertical surface of the fins of the sensor, we assume $c_z A_z \approx 0$. The vector $v_{r,i}$ is the velocity of the relative airflow experienced by the $i$-th whisker, expressed in the $i$-th whisker reference frame, and can be obtained as

$$v_{r,i} = R_{S_i}^\top \left( v_r^B - \omega \times t_i \right), \tag{8}$$

where $v_r^B$ is the relative airflow at the CoM of the robot expressed in the body frame, given by

$$v_r^B = R^\top (v_w - v). \tag{9}$$
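Putting Eqs. 5-9 together, the following sketch predicts one whisker's deflection angles from the filter state; the lumped coefficient $\kappa$ and the sign conventions follow the reconstructions above and Section VI-A2.

```python
import numpy as np

def sensor_deflection(v_w, v, omega, R_WB, R_BS, t_S, kappa):
    """Predicted deflection angles [alpha_i, beta_i] of one whisker.

    v_w, v : wind and MAV velocity in the inertial frame (Eq. 4)
    omega  : body-frame angular velocity
    R_WB   : attitude of the body frame; R_BS: orientation of the sensor frame
    t_S    : sensor position offset in the body frame
    kappa  : lumped coefficient l*rho*c*A / (2*k), identified in Section VI-A2
    """
    v_r_B = R_WB.T @ (v_w - v)                        # Eq. 9: airflow at the CoM
    v_r_S = R_BS.T @ (v_r_B - np.cross(omega, t_S))   # Eq. 8: airflow at the whisker
    # Eqs. 5 and 7 combined: quadratic map from airflow to angles. The
    # z-component does not contribute (c_z * A_z ~ 0), and the signs follow
    # the frame convention assumed in Eq. 6.
    n = np.linalg.norm(v_r_S)
    return np.array([-kappa * n * v_r_S[1], kappa * n * v_r_S[0]])
```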

IV-C Model-based estimation scheme

IV-C1 Process model, state and output

We discretize the MAV dynamic model described in Eq. 2, augmenting the state vector with the unknown wind $v_w$ and unknown interaction force $f_i$ that are to be estimated. We assume that these two state variables evolve as a random walk:

$$v_w[k+1] = v_w[k] + n_{v_w}[k], \qquad f_i[k+1] = f_i[k] + n_{f_i}[k], \tag{10}$$

where $n_{v_w}$ and $n_{f_i}$ represent white Gaussian process noise, with covariances used as tuning parameters.

The full, discrete-time state of the system used for estimation is

$$x = \begin{bmatrix} p^\top & v^\top & q^\top & \omega^\top & v_w^\top & f_i^\top \end{bmatrix}^\top, \tag{11}$$

where $q$ is the more computationally efficient quaternion-based attitude representation of the robot, obtained from the rotation matrix $R$.

The filter output is then

$$y = \begin{bmatrix} \hat{v}_w^\top & \hat{f}_i^\top & \hat{f}_d^\top & \hat{v}_r^{B\,\top} \end{bmatrix}^\top, \tag{12}$$

where $\hat{f}_d$ is obtained from Eq. 3 and Eq. 4, and $\hat{v}_r^B$ is obtained from Eq. 9.

IV-C2 Measurements and measurement model

We assume that two sets of measurements are available asynchronously:

Odometry

The filter fuses odometry measurements (position $p_m$, attitude $q_m$, linear velocity $v_m$, and angular velocity $\omega_m$) provided by a cascaded state estimator,

$$z_o = \begin{bmatrix} p_m^\top & q_m^\top & v_m^\top & \omega_m^\top \end{bmatrix}^\top; \tag{13}$$

the odometry measurement model is linear, as shown in [6].

Airflow sensors

We assume that the sensors are sampled synchronously, providing the measurement vector

$$z_a = \begin{bmatrix} \alpha_1 & \beta_1 & \alpha_2 & \beta_2 & \alpha_3 & \beta_3 & \alpha_4 & \beta_4 \end{bmatrix}^\top. \tag{14}$$

The associated measurement model for the $i$-th sensor can be obtained by combining Eq. 5 and Eq. 7:

$$\begin{bmatrix} \alpha_i \\ \beta_i \end{bmatrix} = \kappa_i\, \|v_{r,i}\| \begin{bmatrix} -v_{r,i,y} \\ v_{r,i,x} \end{bmatrix}, \tag{15}$$

where $\kappa_i = \frac{l \rho\, c A}{2k}$ lumps the physical parameters of the $i$-th sensor, and $v_{r,i}$ is obtained using information about the attitude of the robot $q$, its velocity $v$, its angular velocity $\omega$, and the estimated wind $v_w$, as described in Eq. 8 and Eq. 9. The synchronous measurement update is obtained by stacking Eq. 15 for every sensor $i \in \{1, \dots, 4\}$.

IV-C3 Prediction and update step

Prediction

The prediction step (producing the a priori state estimate) [28] is performed using the Unscented Quaternion Estimator (USQUE) [30] prediction technique for the attitude quaternion. The process model is propagated using the commanded thrust force and torque output of the position and attitude controller on the MAV.

Update

The odometry measurement update step is performed using the linear Kalman filter update step [28], while the airflow-sensor measurement update is performed via the Unscented Transformation [28] due to the non-linearities in the associated measurement model.
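To make the nonlinear update concrete, the following is a minimal numpy sketch of a single Unscented-Transformation measurement update in the style of [28]; the sigma-point weights follow the common Merwe parameterization, which is our assumption, and the quaternion part of the state would additionally need the USQUE treatment of [30], omitted here for brevity.

```python
import numpy as np

def unscented_measurement_update(x, P, z, R_meas, h,
                                 alpha=1e-3, beta=2.0, kappa=0.0):
    """One UKF measurement update with a nonlinear model z = h(x).

    x, P      : prior state mean and covariance
    z, R_meas : measurement and its noise covariance
    h         : measurement function, e.g. the stacked whisker model (Eq. 15)
    """
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = np.vstack([x, x + S.T, x - S.T])      # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))       # mean weights
    Wc = Wm.copy()                                 # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)

    Z = np.array([h(s) for s in sigmas])           # propagate through h
    z_hat = Wm @ Z
    dZ = Z - z_hat
    dX = sigmas - x
    Pzz = dZ.T @ (Wc[:, None] * dZ) + R_meas       # innovation covariance
    Pxz = dX.T @ (Wc[:, None] * dZ)                # state-measurement covariance
    K = Pxz @ np.linalg.inv(Pzz)                   # Kalman gain
    return x + K @ (z - z_hat), P - K @ Pzz @ K.T
```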

V Deep-Learning Based Approach

Fig. 4: Signal diagram of the interface between the model-based and the learning-based approach.

In this section we present a deep-learning-based strategy, which makes use of an RNN based on the LSTM architecture to create an estimate of the relative airflow using the airflow sensors and other measurements available on board the robot. The complexity of modeling the aerodynamic interference caused by the airflow between the propellers, the body of the robot, and the surrounding air, as observed in the literature [15, 16] and in our own experimental results, motivates the use of a learning-based strategy to map sensor measurements to relative airflow.

V-A Output and inputs

The output of the network is the body-frame relative airflow $v_r^B$ of the MAV. The inputs to the network are the airflow sensor measurements $z_a$, the angular velocity of the robot $\omega$, the raw acceleration measurement from the IMU, and the normalized throttle commanded to each of the six propellers (which ranges between 0 and 1). The sign of the throttle is flipped for the propellers spinning counterclockwise, in order to provide information to the network about the spinning direction of each propeller. The choice of inputs is dictated by the derivation of our model-based approach: from Eq. 8 and Eq. 7 we observe that the relative airflow depends on the deflection angles of the sensors and on the angular velocity of the robot. The acceleration from the IMU is included to provide information about hard-to-model effects, such as the orientation of the body frame w.r.t. gravity (which causes small changes in the angles measured by the sensors), as well as the effects of accelerations of the robot. Information about the throttle and spinning direction of the propellers is instead added to try to capture the complex aerodynamic interactions caused by their induced velocity. We chose to express every output and input of the network in the body reference frame, in order to make the network invariant to the orientation of the robot, thus potentially reducing the amount of training data needed.

V-B Network architecture

We employ an LSTM architecture, which is able to capture time-dependent effects [31, 32], such as, in our case, the dynamics of the airflow surrounding the robot and the dynamics of the sensors. We chose a 2-layer LSTM with a hidden-state size of 16, an input size of 20, and an output size of 3. We add a single fully connected layer to the output of the network, mapping the hidden state to the desired output size.
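As a reference, a sketch of the described network in PyTorch [33]; the class and layer names are ours, and the input layout follows Section V-A (4 sensors x 2 deflection angles, 3-axis gyroscope, 3-axis accelerometer, 6 signed normalized throttles).

```python
import torch
import torch.nn as nn

class AirflowLSTM(nn.Module):
    """2-layer LSTM (hidden size 16) with a linear head, as described above."""

    def __init__(self, input_size: int = 20, hidden_size: int = 16,
                 num_layers: int = 2, output_size: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # x: (batch, seq_len, 20)
        return self.head(out[:, -1])   # body-frame airflow at the last step
```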

V-C Interface with the model-based approach

The UKF treats the LSTM output as a new sensor which provides relative airflow measurements $\hat{v}_r^B$, replacing the airflow-sensor measurement model provided in Section IV. The output of the LSTM is fused via the measurement model in Eq. 9, using the Unscented Transformation. A block diagram of the interface between the learning-based and the model-based approach is shown in Fig. 4.

VI Experimental Evaluation

VI-A System identification

VI-A1 Drag force

Estimating the drag force acting on the vehicle is required to differentiate between force due to relative airflow and force due to other interactions with the environment. To this end, the vehicle was commanded to follow a circular trajectory at speeds of 1 to 5 m/s, keeping its altitude constant (see Section VI-B for more information about the trajectory generator). In this scenario, the thrust produced by the MAV's propellers is

$$f_t = \frac{mg}{\cos\phi\,\cos\theta}, \tag{16}$$

where $m$ is the vehicle's mass, $g$ is the gravitational acceleration, and $\phi$ and $\theta$ are respectively the roll and pitch angles of the MAV. The drag force is then

$$f_d = -f_t \left(e_3^\top \hat{v}_B\right) \hat{v}_B, \tag{17}$$

where $e_3 = [0, 0, 1]^\top$, and $\hat{v}_B$ is the unit vector in the direction of the vehicle's velocity in body frame. By fitting a second-degree polynomial to the collected data, we obtain the coefficients $c_1$ and $c_2$ (see Eq. 3).
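A sketch of this fit, assuming the collected data has been reduced to speed/drag-magnitude pairs via Eqs. 16 and 17; function and variable names are illustrative.

```python
import numpy as np

def fit_drag_coefficients(speeds: np.ndarray, drag_mags: np.ndarray):
    """Fit ||f_d|| = c1*v + c2*v^2 (the second-degree polynomial of Eq. 3)
    to speed/drag-magnitude samples from the circular-trajectory flights."""
    A = np.column_stack([speeds, speeds**2])   # regressors [v, v^2]
    (c1, c2), *_ = np.linalg.lstsq(A, drag_mags, rcond=None)
    return c1, c2
```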

VI-A2 Sensor parameter identification

The parameters required to fuse the output of the $i$-th airflow sensor are its position $t_i$ and rotation $R_{S_i}$ with respect to the body frame B of the MAV, and the lumped coefficient $\kappa_i$ mapping the relative airflow to the measured deflection angles. The coefficient $\kappa_i$ can be obtained by re-arranging Eq. 15 and solving the least-squares problem

$$\kappa_i = \arg\min_{\kappa}\; \sum_{k} \left\| \begin{bmatrix} \alpha_i[k] \\ \beta_i[k] \end{bmatrix} - \kappa\, \|v_{r,i}[k]\| \begin{bmatrix} -v_{r,i,y}[k] \\ v_{r,i,x}[k] \end{bmatrix} \right\|^2, \tag{18}$$

where the velocity $v_{r,i}$ is obtained from indoor flight experiments (assuming no wind, so that $v_r = -v$), or from wind tunnel experiments. Wind tunnel experiments have also been used to validate our model choice (quadratic relationship between wind speed and sensor deflection), as shown in Fig. 5. Furthermore, these experiments also confirmed our assumption on the structure of the drag in Eq. 7, i.e., the variation of the sensor's deflection with respect to the direction of the wind is small, and therefore it can be considered that $c_x A_x \approx c_y A_y$.
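A least-squares sketch of Eq. 18 under the lumped model of Eq. 15; the data layout and names are our assumptions.

```python
import numpy as np

def fit_lumped_coefficient(angles: np.ndarray, v_r_S: np.ndarray) -> float:
    """Least-squares fit of kappa_i in Eq. 18.

    angles : (N, 2) measured [alpha, beta] per sample
    v_r_S  : (N, 3) relative airflow in the sensor frame per sample
             (from no-wind indoor flights, v_r = -v, or a wind tunnel)
    """
    norms = np.linalg.norm(v_r_S, axis=1)
    # Regressor per Eq. 15: predicted angles are kappa * ||v|| * [-v_y, v_x].
    phi = np.column_stack([-v_r_S[:, 1], v_r_S[:, 0]]) * norms[:, None]
    # Stack both angle channels and solve the scalar least-squares problem.
    return float(phi.ravel() @ angles.ravel() / (phi.ravel() @ phi.ravel()))
```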

Fig. 5: Roll deflection angle of the sensor as a function of the wind speed, for the case where the wind vector is aligned with a fin (1), and the case where it is most misaligned with a fin (2).

VI-A3 LSTM training

We train the LSTM using two datasets collected in indoor flight. In the first flight, the hexarotor follows a circular trajectory at a set of constant velocities ranging from 1 to 5 m/s, spaced 1 m/s apart. In the second dataset, we command the robot via a joystick, performing aggressive maneuvers and reaching velocities up to 5.5 m/s. Since the robot flies indoors (and thus wind can be considered to be zero), we assume that the relative airflow of the MAV corresponds to the opposite of its estimated velocity, which we use to train the network. The network is implemented and trained using PyTorch [33]. The data is pre-processed by re-sampling it at 50 Hz, since the network inputs used for training have different rates (e.g., 200 Hz for the acceleration data from the IMU and 50 Hz for the airflow sensors). The network is trained for 400 epochs on sequences of 5 samples, using the Adam optimizer [34] with the Mean Squared Error (MSE) loss. Unlike the model-based approach, the LSTM does not require any knowledge of the position and orientation of the sensors, nor the identification of the lumped coefficient $\kappa_i$ for each sensor. Once the network has been trained, however, it is not possible to reconfigure the position or the type of sensors used without re-training.
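A minimal training-loop sketch consistent with this setup, reusing the AirflowLSTM sketch from Section V-B; the batch size and learning rate shown are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, inputs: torch.Tensor, targets: torch.Tensor):
    """Train on resampled 50 Hz data.

    inputs  : (N, 5, 20) sequences of 5 samples of the 20 network inputs
    targets : (N, 3) body-frame relative airflow (negated indoor velocity)
    """
    loader = DataLoader(TensorDataset(inputs, targets),
                        batch_size=64, shuffle=True)   # batch size assumed
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr assumed
    loss_fn = torch.nn.MSELoss()
    for epoch in range(400):                           # 400 epochs, as above
        for x, y in loader:
            optim.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optim.step()
```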

VI-B Implementation details

VI-B1 System architecture

We use a custom-built hexarotor with a mass of 1.31 kg. The pose of the robot is provided by a motion capture system, while odometry information is obtained by an estimator running on board, which fuses the pose information with the inertial data from an IMU. Our algorithms run on the onboard Nvidia Jetson TX2 and are interfaced with the rest of the system via ROS. We use the Aerospace Controls Laboratory's snap-stack [35] for controlling the MAV.

VI-B2 Sensor driver

The sensors are connected via I2C to the TX2. A ROS node (the sensor driver) reads the magnetic field data at 50 Hz and publishes the deflection angles $\theta_i = [\alpha_i, \beta_i]^\top$ computed as in Eq. 1. Slight manufacturing imperfections are handled via an initial calibration of the offset angles. The sensor driver rejects outliers by comparing each component of $\theta_i$ with a low-pass filtered version. If the difference is large, the measurement is discarded, but the low-pass filter is updated nevertheless. Therefore, if the sensor deflects very rapidly and the measurement is incorrectly regarded as an outlier, the low-pass filtered value quickly approaches the true value and subsequent false positives do not occur.
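A sketch of this outlier-rejection logic; the smoothing factor and threshold are illustrative assumptions, not the values used on the vehicle.

```python
import numpy as np

class DeflectionOutlierFilter:
    """Low-pass-based outlier rejection as described for the sensor driver."""

    def __init__(self, alpha: float = 0.2, threshold: float = 0.1):
        self.alpha = alpha          # low-pass smoothing factor (assumed)
        self.threshold = threshold  # max allowed deviation [rad] (assumed)
        self.filtered = None

    def update(self, theta: np.ndarray):
        """Return theta if it is an inlier, else None. The low-pass state is
        updated either way, so a rapid true deflection is rejected only
        briefly before the filtered value catches up."""
        if self.filtered is None:
            self.filtered = theta.copy()
            return theta
        outlier = np.any(np.abs(theta - self.filtered) > self.threshold)
        self.filtered += self.alpha * (theta - self.filtered)  # always update
        return None if outlier else theta
```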

Fig. 6: Comparison of the relative velocity estimated by the model-based (UKF) and the learning-based (LSTM) approaches. We assume that the ground truth (GT) is given by the velocity of the robot.

VI-B3 Trajectory generator

A trajectory generator ROS node commands the vehicle to follow a circular path at different constant speeds or a line trajectory between two points with a maximum desired velocity. This node also handles the finite state machine transitions: take off, flight to the initial position of the trajectory, execution of the trajectory, and landing where the vehicle took off. We use this trajectory generator to identify the drag coefficient of the MAV (see Section VI-A), to collect data for training, and to execute the experiments described below.

VI-C Relative airflow estimation

For this experiment, we commanded the vehicle with a joystick around our flight space at different speeds, to show the ability of our approach to estimate the relative airflow. Since the space is indoors (no wind), we assume that the relative airflow is opposite to the velocity of the MAV. We thus compare the velocity of the MAV (obtained from a motion capture system) to the opposite of the relative airflow estimated via the model-based strategy and the deep-learning-based strategy.

Figure 6 shows the results of the experiment. Each subplot presents one body-frame component of the velocity of the vehicle. The ground truth (GT), in red, is the MAV's velocity obtained via the motion capture system; the green dotted line represents the relative airflow in body frame as estimated by the deep-learning-based strategy (LSTM); and the blue dashed line represents the same quantity as estimated by the fully model-based strategy (UKF). The root mean squared errors of the UKF and LSTM estimates for this experiment are shown in Table I. The results demonstrate that both approaches are effective, but show that the LSTM is more accurate.

VI-D Wind gust estimation

To demonstrate the ability to estimate wind gusts, we flew the vehicle in a straight line, commanded by the trajectory generator outlined in Section VI-B, along the diagonal of the flight space, while a leaf blower pointed approximately at the middle of this trajectory. Figure 7 shows in red the wind speed estimated by the UKF, drawn at the 2D position where the estimate was produced, together with the leaf blower pose obtained with the motion capture system. As expected, the estimated wind speed increases in the area affected by the leaf blower.

Fig. 7: In this plot the vehicle is flown in a straight line at high speed, from left to right, while a leaf blower (shown in black) aims at the middle of its trajectory. The red arrows indicate the intensity of the estimated wind speed.
Method  RMS error (x)  RMS error (y)  RMS error (z)  Unit
UKF     0.44           0.34           0.53           m/s
LSTM    0.38           0.31           0.28           m/s
TABLE I: RMS errors of the UKF and LSTM estimates of the relative velocity of the robot on the joystick dataset.

VI-E Simultaneous estimation of drag and interaction forces

Our approach can differentiate between drag and interaction forces, as the following experiment shows. The experiment has four main parts: hovering with no external force; hovering in a wind field generated by three leaf blowers; pulling the vehicle with an attached string while it is still immersed in the wind field; and turning off the leaf blowers so that only the interaction force remains. Figure 8 shows the forces acting on the MAV in the world frame as estimated by the UKF: the drag force $\hat{f}_d$ and the interaction force $\hat{f}_i$. As expected, the estimated drag force is close to zero when no wind is present, even when the MAV is pulled, and similarly the estimated interaction force is approximately zero when the vehicle is not pulled, even when the leaf blowers are acting on it. Drag and interaction forces are therefore differentiated correctly. Note that the leaf blowers turn on quickly, and thus the drag force resembles a step, while the interaction force was caused by manually pulling the MAV with a string, following approximately a ramp from 0 to 4 N as measured with a dynamometer. The UKF estimates this force at about 6 N, potentially due to inaccuracies in our external force ground-truth measurement procedure and a mis-calibration of the commanded throttle-to-thrust mapping. The wind generated by the leaf blowers has an average speed of 3.6 m/s at the distance where the vehicle was flying, while, according to our model, a drag force of approximately 1.2 N (as shown in Fig. 8) corresponds to a wind speed of 3 m/s. The difference is due to the fact that the leaf blowers are not perfectly aimed at the MAV, and the wind field they generate is narrow.

Fig. 8: Simultaneous estimation of drag and interaction force. Vertical bars separate the four phases of the experiment.

VII Conclusion

We presented a model-based and a learning-based approach to estimate the relative airflow, the drag force, and the interaction force acting on a hexarotor using bio-inspired sensors. The results obtained in flight experiments show that our approach accurately identifies the relative airflow experienced by a multirotor in flight, and that we are able to detect wind gusts acting on the MAV. Via experimental results, we showed that the proposed deep-learning-based strategy is more accurate than the model-based strategy and does not require a significant amount of training data; however, it does not offer the flexibility to re-position or easily change sensors without re-training the network. Additionally, we showed that we can correctly distinguish between drag and interaction forces. Future work includes leveraging our drag estimation results for improved trajectory tracking performance. We additionally plan to further evaluate our deep-learning-based approach, and to compare different learning algorithms, data collection, and training strategies.

Acknowledgment

This work was funded by the Air Force Office of Scientific Research MURI FA9550-19-1-0386 and by Ford Motor Company. The authors would like to thank Parker Lusk for his help in the system setup.

References

  • [1] “Zipline - vital, on-demand delivery for the world,” https://flyzipline.com/, (Accessed on 02/24/2020).
  • [2] “Flyability — drones for indoor inspection and confined space,” https://www.flyability.com/, (Accessed on 02/24/2020).
  • [3] “Skydio 2: The drone you’ve been waiting for. – skydio, inc.” https://www.skydio.com/, (Accessed on 02/24/2020).
  • [4] “(STTR) Navy FY13A - airborne sensing for ship airwake surveys,” https://www.navysbir.com/n13_A/navst13a-015.htm, (Accessed on 02/26/2020).
  • [5] “Voliro airborne robotics,” https://www.voliro.com/, (Accessed on 02/24/2020).
  • [6] A. Tagliabue, M. Kamel, R. Siegwart, and J. Nieto, “Robust collaborative object transportation using multiple mavs,” The International Journal of Robotics Research, vol. 38, no. 9, pp. 1020–1044, 2019.
  • [7] F. Augugliaro and R. D’Andrea, “Admittance control for physical human-quadrocopter interaction,” in 2013 European Control Conference (ECC).   IEEE, 2013, pp. 1805–1810.
  • [8] A. Tagliabue, M. Kamel, S. Verling, R. Siegwart, and J. Nieto, “Collaborative transportation using mavs via passive force control,” in 2017 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 5766–5773.
  • [9] T. Lew, T. Emmei, D. D. Fan, T. Bartlett, A. Santamaria-Navarro, R. Thakker, and A.-a. Agha-mohammadi, “Contact inertial odometry: Collisions are your friend,” arXiv preprint arXiv:1909.00079, 2019.
  • [10] A. Paris, B. T. Lopez, and J. P. How, “Dynamic landing of an autonomous quadrotor on a moving platform in turbulent wind conditions,” arXiv preprint arXiv:1909.11071, 2019.
  • [11] “’My fingers were almost cut off by a drone’ - BBC news,” https://www.bbc.com/news/uk-40697682, (Accessed on 02/24/2020).
  • [12] T. Tomić, K. Schmid, P. Lutz, A. Mathers, and S. Haddadin, “The flying anemometer: Unified estimation of wind velocity from aerodynamic power and wrenches,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 1637–1644.
  • [13] T. Tomić and S. Haddadin, “Simultaneous estimation of aerodynamic and contact forces in flying robots: Applications to metric wind estimation and collision detection,” in 2015 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2015, pp. 5290–5296.
  • [14] S. P. Sane, A. Dieudonné, M. A. Willis, and T. L. Daniel, “Antennal mechanosensors mediate flight control in moths,” Science, vol. 315, no. 5813, pp. 863–866, 2007.
  • [15] S. Prudden, A. Fisher, M. Marino, A. Mohamed, S. Watkins, and G. Wild, “Measuring wind with small unmanned aircraft systems,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 176, pp. 197–210, 2018.
  • [16] P. Ventura Diaz and S. Yoon, “High-fidelity computational aerodynamics of multi-rotor unmanned aerial vehicles,” in 2018 AIAA Aerospace Sciences Meeting, 2018, p. 1266.
  • [17] S. Kim and C. Velez, “A magnetically transduced whisker for angular displacement and moment sensing,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2019.
  • [18] P. Bruschi, M. Piotto, F. Dell’Agnello, J. Ware, and N. Roy, “Wind speed and direction detection by means of solid-state anemometers embedded on small quadcopters,” Procedia Engineering, vol. 168, pp. 802–805, 2016.
  • [19] D. Hollenbeck, G. Nunez, L. E. Christensen, and Y. Chen, “Wind measurement and estimation with small unmanned aerial systems (suas) using on-board mini ultrasonic anemometers,” in 2018 International Conference on Unmanned Aircraft Systems (ICUAS).   IEEE, 2018, pp. 285–292.
  • [20] W. Deer and P. E. Pounds, “Lightweight whiskers for contact, pre-contact, and fluid velocity sensing,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1978–1984, 2019.
  • [21] Y. Demitrit, S. Verling, T. Stastny, A. Melzer, and R. Siegwart, “Model-based wind estimation for a hovering vtol tailsitter uav,” in 2017 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 3945–3952.
  • [22] L. Sikkel, G. de Croon, C. De Wagter, and Q. Chu, “A novel online model-based wind estimation approach for quadrotor micro air vehicles using low cost mems imus,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 2141–2146.
  • [23] G. Shi, X. Shi, M. O’Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung, “Neural lander: Stable drone landing control using learned dynamics,” in 2019 International Conference on Robotics and Automation (ICRA).   IEEE, 2019, pp. 9784–9790.
  • [24] S. Allison, H. Bai, and B. Jayaraman, “Estimating wind velocity with a neural network using quadcopter trajectories,” in AIAA Scitech 2019 Forum, 2019, p. 1596.
  • [25] A. S. Marton, A. R. Fioravanti, J. R. Azinheira, and E. C. de Paiva, “Hybrid model-based and data-driven wind velocity estimator for the navigation system of a robotic airship,” arXiv preprint arXiv:1907.06266, 2019.
  • [26] C. D. McKinnon and A. P. Schoellig, “Unscented external force and torque estimation for quadrotors,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 5651–5657.
  • [27] B. Nisar, P. Foehn, D. Falanga, and D. Scaramuzza, “Vimo: Simultaneous visual inertial model-based odometry and force estimation,” IEEE Robotics and Automation Letters, 2019.
  • [28] D. Simon, Optimal state estimation: Kalman, H infinity, and nonlinear approaches.   John Wiley & Sons, 2006.
  • [29] A. Tagliabue, X. Wu, and M. W. Mueller, “Model-free online motion adaptation for optimal range and endurance of multicopters,” in 2019 International Conference on Robotics and Automation (ICRA).   IEEE, 2019, pp. 5650–5656.
  • [30] J. L. Crassidis and F. L. Markley, “Unscented filtering for spacecraft attitude estimation,” Journal of guidance, control, and dynamics, vol. 26, no. 4, pp. 536–542, 2003.
  • [31] Z. C. Lipton, J. Berkowitz, and C. Elkan, “A critical review of recurrent neural networks for sequence learning,” arXiv preprint arXiv:1506.00019, 2015.
  • [32] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning.   MIT press, 2016.
  • [33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems, 2019, pp. 8024–8035.
  • [34] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [35] Aerospace Controls Laboratory, “snap-stack: Autopilot code and host tools for flying snapdragon flight-based vehicles,” https://gitlab.com/mit-acl/fsw/snap-stack, (Accessed on 02/23/2020).