I Introduction
The deployment of MAVs in uncertain and constantly changing atmospheric conditions [1, 2, 3] requires the ability to estimate and adapt to disturbances such as the aerodynamic drag force applied by wind gusts. Simultaneously, as many new interaction-based missions [4, 5, 6] arise, so does the need to better differentiate between forces caused by aerodynamic disturbances and forces from other sources of interaction [7, 8, 9, 10]. Differentiating between aerodynamic drag force and interaction force can be extremely important for safety reasons: for example, the controller of a robot should react differently depending on whether a large disturbance is caused by a wind gust or by a human trying to interact with the machine [11].
Distinguishing between drag and interaction disturbances can be challenging, as they both apply forces to the center of mass (CoM) of the multirotor that cannot be easily differentiated by examining the inertial information commonly available from the robot's onboard IMU or odometry estimator. Successful approaches for this task include a model-based method that measures the change in the thrust-to-power ratio of the propellers caused by wind [12], and an approach that monitors the frequency content of the total disturbance (estimated via inertial information) to distinguish between the two possible sources of force [13].
This work presents a strategy for simultaneously estimating the interaction force and the aerodynamic drag disturbances using novel bio-inspired, whisker-like sensors that measure the airflow around a multirotor, as shown in Fig. 1. Our approach takes inspiration from the way insects sense airflow [14]: by measuring the deflections caused by the aerodynamic drag force acting on the appendages of certain receptors. By fusing the information of four heterogeneous airflow sensors distributed across the surface of the robot, we can create a three-dimensional estimate of the relative velocity of the MAV with respect to the surrounding airflow. This information is then fused in a UKF-based force estimator that uses an aerodynamic model together with the robot's pose and velocity to predict the wind, the expected drag force, and other interaction forces.
To account for the complex aerodynamic interactions between sensors and propellers [15, 16], we extend this model-based approach (based on first-order physical principles) with a data-driven strategy. This strategy employs a Recurrent Neural Network (RNN) based on an LSTM architecture to provide an estimate of the relative airflow of the robot, which is then fused in the proposed model-based estimation scheme. We experimentally show that our approach achieves an accurate estimate of the relative airflow with respect to the robot at velocities of up to m/s, and enables interaction forces and aerodynamic drag forces to be distinguished. We experimentally compare the model-based and learning-based approaches, highlighting their advantages and disadvantages.
To summarize, the contributions of this paper are: 1) model-based and deep-learning-based strategies to simultaneously estimate wind, drag force, and other interaction forces using novel bio-inspired sensors similar to the one discussed in [17]; and 2) experimental validation of our approaches, showing that we can accurately estimate relative airflow of up to m/s and distinguish between interaction force and aerodynamic force.
II Related Work
Distinguishing between interaction and aerodynamic disturbances is a challenging task, and most current approaches focus on estimating one or the other. Aerodynamic disturbances: accurate wind or airflow sensing is at the heart of the techniques employed for aerodynamic disturbance estimation. A common strategy is to directly measure the airflow surrounding the robot via dedicated sensors, such as pressure sensors [18], ultrasonic sensors [19], or whisker-like sensors [20]. Other strategies estimate the airflow via its inertial effects on the robot, for example using model-based approaches [21, 22], learning-based approaches [23, 24], or hybrid (model-based and learning-based) solutions [25]. Generic wrench-like disturbances: multiple related works focus instead on estimating wrench disturbances, without explicitly differentiating the effects of the drag force due to wind: [7, 26, 6, 8] propose model-based approaches that use a UKF for wrench estimation, while [27] proposes a factor-graph-based estimation scheme.
III Sensor Design
Measuring the 3D relative wind affecting a MAV requires sensors that are lightweight, economical, and multidirectional. This paper adopts sensors based on [17], which satisfy these requirements.
III-A Sensor design and considerations
The sensors, shown in Fig. 2, consist of a base and an easily exchangeable tip. The base is composed of a magnetic field sensor connected to a conditioning circuit that interfaces with the robot via I2C, and a 3D-printed case that encloses the sensor. The tip consists of a planar spring mounted in a 3D-printed enclosure that fits with the base, with a permanent magnet attached to its bottom and a carbon-fiber rod glued on the spring's top. Eight foam fins are attached to the other end of this rod. When the sensor is subjected to airflow, the drag force from the air on the fins causes a rotation about the center of the planar spring, which results in a displacement of the magnet. This displacement is then measured by the magnetic sensor. The fins are placed with an even angular distribution in order to achieve homogeneous drag for different airflow directions. Foam and carbon fiber were chosen as the materials of the fin structure due to their low density, which is crucial to minimize the inertia of the sensor. See [17] for more information about the sensor characteristics and manufacturing procedure.
Due to the complex aerodynamic interactions between the relative airflow and the blade rotor wakes, the sensor placement needs to be chosen carefully [15, 16]. To determine the best locations, we attached short pieces of string both directly on the vehicle and on metal rods extending away from it horizontally and vertically. We then flew the hexarotor indoors and observed that the pieces of string on top of the vehicle and on the propeller guards were mostly unaffected by the blade rotor wakes. Therefore, these are the two locations chosen to mount the sensors, as seen in Fig. 1. They are distributed so that the relative airflow coming from any direction excites at least one sensor (that is, for at least one sensor, the relative airflow is not aligned with its length).
III-B Sensor measurements
The sensors detect the magnetic field $\mathbf{B}_i$, but the model outlined in Section IV-B requires the deflection angles $\theta_{x,i}$ and $\theta_{y,i}$ of the $i$-th sensor, which correspond to the rotation of the carbon-fiber rod about the $x$ and $y$ axes in its reference frame $S_i$. At the spring's equilibrium, the rod is straight and $\mathbf{B}_i = [0,\ 0,\ B_z]^T$, where $B_z > 0$ if the magnet's north pole is facing the carbon-fiber rod. The angles are then
$\theta_{x,i} = \arctan\!\left(\frac{B_y}{B_z}\right), \qquad \theta_{y,i} = \arctan\!\left(\frac{B_x}{B_z}\right)$ (1)
Note that if the magnet were assembled with the south pole facing upward instead, $-B_z$ must be used in Eq. 1.
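As an illustration, the angle extraction can be sketched as follows. This is a minimal reading of the measurement model above; the exact axis pairing and sign conventions are our assumptions, not taken from the sensor datasheet:

```python
import numpy as np

def deflection_angles(B, north_up=True):
    """Convert a 3-axis magnetic field reading into the two deflection
    angles of the whisker rod. Axis/sign conventions are assumptions."""
    s = 1.0 if north_up else -1.0   # flip sign if the south pole faces the rod
    Bx, By, Bz = B
    theta_x = np.arctan2(By, s * Bz)  # rotation about the sensor x axis
    theta_y = np.arctan2(Bx, s * Bz)  # rotation about the sensor y axis
    return theta_x, theta_y
```

With the rod at rest (field purely along the rod axis), both angles are zero regardless of the magnet's polarity.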
IV Model-Based Approach
In this section, we present the model-based approach used to simultaneously estimate airflow, interaction force, and aerodynamic drag force on a MAV. The estimation scheme is based on the UKF [28] approach presented in our previous work [6, 8], augmented with the ability to estimate a three-dimensional wind vector via the relative airflow measurements provided by the whiskers. Here we summarize the approach and present a measurement model for the airflow sensors. A diagram of the most important signals and system-level blocks related to our approach is included in Fig. 3.
Reference frame definition
IV-A MAV dynamic model
We consider a MAV of mass $m$ and inertia tensor $\mathbf{J}$; the dynamic equations of the robot can be written as

$\dot{\mathbf{p}} = \mathbf{v}, \quad m\dot{\mathbf{v}} = \mathbf{R}\mathbf{f}_t + m\mathbf{g} + \mathbf{f}_d + \mathbf{f}_i, \quad \dot{\mathbf{R}} = \mathbf{R}\,[\boldsymbol{\omega}]_\times, \quad \mathbf{J}\dot{\boldsymbol{\omega}} = -\boldsymbol{\omega}\times\mathbf{J}\boldsymbol{\omega} + \boldsymbol{\tau}$ (2)

where $\mathbf{p}$ and $\mathbf{v}$ represent the position and velocity of the MAV, respectively, $\mathbf{R}$ is the rotation matrix representing the attitude of the robot (i.e., such that a vector expressed in the body frame $B$ is mapped to the inertial frame $W$), and $[\cdot]_\times$ denotes the skew-symmetric matrix operator. The vector $\mathbf{f}_t$ is the thrust force produced by the propellers along the $z$ axis of the body frame, $\mathbf{g}$ is the gravitational acceleration, and $\mathbf{f}_i$ is the interaction force expressed in the inertial frame. For simplicity, we assume that interaction and aerodynamic disturbances do not cause any torque on the MAV, thanks to its symmetric shape and the fact that interactions (in our hardware setup) can only safely happen in proximity of the center of mass of the robot. The vector $\boldsymbol{\tau}$ represents the torque generated by the propellers and $\boldsymbol{\omega}$ the angular velocity of the MAV, both expressed in the body reference frame. Here $\mathbf{f}_d$ is the aerodynamic drag force on the robot, expressed as an isotropic drag [29]

$\mathbf{f}_d = \left(c_1 + c_2\,\lVert\mathbf{v}_r\rVert\right)\mathbf{v}_r$ (3)

where $\mathbf{v}_r$ is the velocity vector of the relative airflow acting on the CoM of the MAV (expressed in the inertial frame)

$\mathbf{v}_r = \mathbf{v}_w - \mathbf{v}$ (4)

and $\mathbf{v}_w$ is the velocity vector of the wind expressed in the inertial frame.
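The drag model above can be sketched as follows. Variable names are ours; $c_1$ and $c_2$ are the drag coefficients identified in Section VI-A:

```python
import numpy as np

def drag_force(v, v_w, c1, c2):
    """Isotropic drag (Eq. 3): the magnitude grows with a first- and a
    second-order term in the relative airflow speed."""
    v_r = v_w - v                      # relative airflow at the CoM (Eq. 4)
    speed = np.linalg.norm(v_r)
    return (c1 + c2 * speed) * v_r     # drag acts along the relative airflow
```

With no wind, the drag simply opposes the vehicle's velocity, consistent with the indoor assumption used later in Section VI.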
IV-B Airflow sensor model
We consider the $i$-th airflow sensor to be rigidly attached to the body reference frame $B$, with $i = 1, \dots, 4$. The reference frame $S_i$ of each sensor is translated with respect to $B$ by a vector $\mathbf{t}_i$ and rotated according to the rotation matrix $\mathbf{R}_i$. To derive a model of the whiskers subject to aerodynamic drag, we make the following assumptions: each whisker is massless; its tilt angle is not significantly influenced by the accelerations of the base (due to the high stiffness of its spring and the low mass of the fins), but is subject to the aerodynamic drag force $\mathbf{f}_{s,i}$.
We further assume that each sensor can be modeled as a stick hinged at the base via a linear torsional spring. Each sensor outputs the displacement angles $\theta_{x,i}$ and $\theta_{y,i}$, which correspond to the rotation of the stick around the $x$ and $y$ axes of the reference frame $S_i$. We can then express the aerodynamic drag force acting on the aerodynamic surface of each sensor as a function of the (small) displacement angles

$\mathbf{f}_{s,i} = \frac{k_s}{l}\,\mathbf{S} \begin{bmatrix} \theta_{x,i} \\ \theta_{y,i} \end{bmatrix}$ (5)

where $k_s$ represents the stiffness of the torsional spring, $l$ the length of the sensor, and

$\mathbf{S} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ 0 & 0 \end{bmatrix}$ (6)

captures the assumption that the aerodynamic drag acting along the axis of the sensor is small (given the fin shapes) and has a negligible effect on the sensor deflection.
We now consider the aerodynamic force acting on a whisker. Assuming a non-isotropic drag, proportional to the square of the relative airflow velocity, we obtain

$\mathbf{f}_{s,i} = \frac{1}{2}\rho\,\mathrm{diag}\!\left(c_{d,x}A_x,\ c_{d,y}A_y,\ c_{d,z}A_z\right)\lVert\mathbf{v}_{r,i}\rVert\,\mathbf{v}_{r,i}$ (7)

where $\rho$ is the density of the air, $A_j$ is the aerodynamic section along each dimension $j \in \{x, y, z\}$ of $S_i$, and $c_{d,j}$ the corresponding drag coefficient. Due to the small vertical surface of the fins of the sensor, we assume $c_{d,z}A_z \approx 0$. The vector $\mathbf{v}_{r,i}$ is the velocity of the relative airflow experienced by the $i$-th whisker, expressed in the $i$-th whisker reference frame, and can be obtained as

$\mathbf{v}_{r,i} = \mathbf{R}_i^T\left({}^B\mathbf{v}_r - \boldsymbol{\omega}\times\mathbf{t}_i\right)$ (8)

where ${}^B\mathbf{v}_r$ is the relative airflow at the CoM of the robot expressed in the body frame, given by

${}^B\mathbf{v}_r = \mathbf{R}^T\left(\mathbf{v}_w - \mathbf{v}\right)$ (9)
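The whole whisker measurement chain (Eqs. 5-9) can be sketched as follows. This is a sketch under the stated assumptions: parameter names, the axis pairing between planar drag and deflection, and the scalar stiffness-over-length ratio are ours:

```python
import numpy as np

def whisker_deflection(v, v_w, omega, R_WB, R_BSi, t_i, rho, cdA_xy, k_over_l):
    """Predicted deflection angles of the i-th whisker from the vehicle
    state and the wind. cdA_xy holds the two planar drag coefficients
    times areas; k_over_l is the spring stiffness over sensor length."""
    v_r_B = R_WB.T @ (v_w - v)                         # body-frame airflow (Eq. 9)
    v_r_i = R_BSi.T @ (v_r_B - np.cross(omega, t_i))   # at the whisker (Eq. 8)
    speed = np.linalg.norm(v_r_i)
    f = 0.5 * rho * cdA_xy * speed * v_r_i[:2]         # planar drag on the fins (Eq. 7)
    # torque balance about the spring (Eq. 5): theta = (l/k_s) * S^T f
    th = f / k_over_l
    theta_x, theta_y = -th[1], th[0]                   # axis pairing is an assumption
    return theta_x, theta_y
```

With zero airflow and zero angular velocity, both predicted deflection angles vanish, as expected from the equilibrium condition of Section III-B.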
IV-C Model-based estimation scheme
IV-C1 Process model, state and output
We discretize the MAV dynamic model described in Eq. 2, augmenting the state vector with the unknown wind $\mathbf{v}_w$ and the unknown interaction force $\mathbf{f}_i$ that are to be estimated. We assume that these two state variables evolve as

$\mathbf{v}_w[k+1] = \mathbf{v}_w[k] + \mathbf{n}_{v_w}, \qquad \mathbf{f}_i[k+1] = \mathbf{f}_i[k] + \mathbf{n}_{f_i}$ (10)

where $\mathbf{n}_{v_w}$ and $\mathbf{n}_{f_i}$ represent white Gaussian process noise, with covariances used as tuning parameters.
The full, discrete-time state of the system used for estimation is

$\mathbf{x} = \left[\mathbf{p}^T,\ \mathbf{q}^T,\ \mathbf{v}^T,\ \boldsymbol{\omega}^T,\ \mathbf{v}_w^T,\ \mathbf{f}_i^T\right]^T$ (11)

where $\mathbf{q}$ is the more computationally efficient quaternion-based attitude representation of the robot, obtained from the rotation matrix $\mathbf{R}$.
IV-C2 Measurements and measurement model
We assume that two sets of measurements are available asynchronously:
Odometry
The filter fuses odometry measurements (position $\mathbf{p}$, attitude $\mathbf{q}$, linear velocity $\mathbf{v}$, and angular velocity $\boldsymbol{\omega}$) provided by a cascaded state estimator:

$\mathbf{z}_{\mathrm{odom}} = \left[\mathbf{p}^T,\ \mathbf{q}^T,\ \mathbf{v}^T,\ \boldsymbol{\omega}^T\right]^T$ (13)

The odometry measurement model is linear, as shown in [6].
Airflow sensors
We assume that the sensors are sampled synchronously, providing the measurement vector

$\mathbf{z}_s = \left[\theta_{x,1},\ \theta_{y,1},\ \dots,\ \theta_{x,4},\ \theta_{y,4}\right]^T$ (14)
The associated measurement model for the $i$-th sensor can be obtained by combining Eq. 5 and Eq. 7:

$\begin{bmatrix} \theta_{x,i} \\ \theta_{y,i} \end{bmatrix} = \frac{l}{k_s}\,\mathbf{S}^T\,\frac{1}{2}\rho\,\mathrm{diag}\!\left(c_{d,x}A_x,\ c_{d,y}A_y,\ 0\right)\lVert\mathbf{v}_{r,i}\rVert\,\mathbf{v}_{r,i}$ (15)

where $\mathbf{v}_{r,i}$ is obtained using information about the attitude $\mathbf{q}$ of the robot, its velocity $\mathbf{v}$ and angular velocity $\boldsymbol{\omega}$, and the estimated wind speed $\mathbf{v}_w$, as described in Eq. 8 and Eq. 9. The synchronous measurement update is obtained by repeating Eq. 15 for every sensor $i = 1, \dots, 4$.
IV-C3 Prediction and update step
Prediction
The prediction step (producing the a priori state estimate) [28] is performed using the Unscented Quaternion Estimator (USQUE) [30] prediction technique for the attitude quaternion. The process model is propagated using the commanded thrust force and torque output by the position and attitude controller of the MAV.
Update
The measurement updates are performed asynchronously, fusing the odometry and airflow-sensor measurements described above via the standard UKF update equations [28].
V Deep-Learning Based Approach
In this section we present a deep-learning based strategy, which makes use of an RNN based on the LSTM architecture to estimate the relative airflow using the airflow sensors and other measurements available on board the robot. The complexity of modeling the aerodynamic interference between the propellers, the body of the robot, and the surrounding air, observed both in the literature [15, 16] and in our own experimental results, motivates the use of a learning-based strategy to map sensor measurements to relative airflow.
V-A Output and inputs
The output of the network is the relative airflow of the MAV. The inputs to the network are the airflow sensor measurements, the angular velocity of the robot, the raw acceleration measurement from the IMU, and the normalized throttle commanded to the six propellers (which ranges between 0 and 1). The sign of the throttle is flipped for the propellers spinning counter-clockwise, in order to inform the network of the spinning direction of each propeller. The choice of inputs is dictated by our model-based derivation: from Eq. 8 and Eq. 7 we observe that the relative airflow depends on the deflection angles of the sensors and on the angular velocity of the robot. The acceleration from the IMU is included to provide information about hard-to-model effects, such as the orientation of the body frame w.r.t. gravity (which causes small changes in the angles measured by the sensors), as well as the effects of accelerations of the robot. Information about the throttle and spinning direction of the propellers is added to try to capture the complex aerodynamic interactions caused by their induced velocity. We chose to express every input and output of the network in the body reference frame, in order to make the network invariant to the orientation of the robot, thus potentially reducing the amount of training data needed.
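For concreteness, the 20-dimensional input vector (8 deflection angles from 4 sensors, 3 gyro rates, 3 raw accelerations, and 6 signed throttles) can be assembled as in this sketch; names and ordering are our assumptions:

```python
import numpy as np

def assemble_input(theta, omega, accel, throttle, ccw_mask):
    """Build the 20-dimensional network input: 8 deflection angles
    (4 sensors x 2 axes), 3 gyro rates, 3 raw accelerations, and
    6 throttles with flipped sign for counter-clockwise propellers."""
    thr = np.asarray(throttle, dtype=float).copy()
    thr[ccw_mask] *= -1.0             # encode each propeller's spin direction
    x = np.concatenate([np.ravel(theta), omega, accel, thr])
    assert x.shape == (20,)           # sanity check on the input layout
    return x
```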
V-B Network architecture
We employ an LSTM architecture, which is able to capture time-dependent effects [31, 32], such as, in our case, the dynamics of the airflow surrounding the robot and the dynamics of the sensors. We chose a 2-layer LSTM with the size of the hidden state set to 16 (the input size is 20 and the output size is 3). We add a single fully connected layer to the output of the network, mapping the hidden state to the desired output size.
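A minimal PyTorch sketch of this architecture follows. The layer sizes come from the text; the batch-first layout and the choice to read only the last time step of the sequence are our assumptions:

```python
import torch
import torch.nn as nn

class AirflowLSTM(nn.Module):
    """2-layer LSTM (hidden size 16) followed by a single fully connected
    layer mapping the hidden state to the 3D relative airflow."""
    def __init__(self, input_size=20, hidden_size=16, output_size=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):                 # x: (batch, seq_len, 20)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # airflow estimate at the last sample
```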
V-C Interface with the model-based approach
The UKF treats the LSTM output as a new sensor providing relative airflow measurements, replacing the airflow sensor measurement model of Section IV. The output of the LSTM is fused via the measurement model in Eq. 9, using the Unscented Transformation. A block diagram of the interface between the learning-based and model-based approaches is shown in Fig. 4.
VI Experimental Evaluation
VI-A System identification
VI-A1 Drag force
Estimating the drag force acting on the vehicle is required to differentiate between the force due to relative airflow and the force due to other interactions with the environment. To this purpose, the vehicle was commanded to follow a circular trajectory at speeds of 1 to 5 m/s, keeping its altitude constant (see Section VI-B for more information about the trajectory generator). In this scenario, the thrust produced by the MAV's propellers is

$\lVert\mathbf{f}_t\rVert = \frac{mg}{\cos\phi\,\cos\theta}$ (16)

where $m$ is the vehicle's mass, $g$ is the gravitational acceleration, and $\phi$ and $\theta$ are respectively the roll and pitch angles of the MAV. The drag force is then

(17)

where $\hat{\mathbf{e}}_v$ is the unit vector in the direction of the vehicle's velocity in the body frame. By fitting a second-degree polynomial to the collected data, we obtain $c_1$ and $c_2$ (see Eq. 3).
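The coefficient fit reduces to a linear least-squares problem in $c_1$ and $c_2$; the sketch below illustrates this under the assumption that the drag vanishes at zero speed (hence no constant term), which is not necessarily the authors' exact fitting procedure:

```python
import numpy as np

def fit_drag_coefficients(speeds, drag_magnitudes):
    """Fit |f_d| = c1*s + c2*s^2 to (speed, drag) pairs collected on the
    circular trajectories; returns (c1, c2)."""
    s = np.asarray(speeds, dtype=float)
    A = np.column_stack([s, s**2])        # second-degree polynomial, no constant
    (c1, c2), *_ = np.linalg.lstsq(A, np.asarray(drag_magnitudes), rcond=None)
    return c1, c2
```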
VI-A2 Sensor parameter identification
The parameters required to fuse the output of the $i$-th airflow sensor are its position $\mathbf{t}_i$ and rotation $\mathbf{R}_i$ with respect to the body frame $B$ of the MAV, and a lumped coefficient mapping the relative airflow to the measured deflection angles. This coefficient can be obtained by rearranging Eq. 15 and solving the resulting least-squares problem

(18)

where the velocity is obtained from indoor flight experiments (assuming no wind, so that the relative airflow is the opposite of the vehicle's velocity) or from wind tunnel experiments. Wind tunnel experiments have also been used to validate our model choice (quadratic relationship between wind speed and sensor deflection), as shown in Fig. 5. Furthermore, these experiments confirmed our assumption on the structure of the drag model, i.e., that the variation of the sensor's deflection with respect to the direction of the wind is small and can therefore be neglected.
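Under the quadratic deflection model validated in Fig. 5, the lumped coefficient for one deflection axis reduces to a one-dimensional least-squares fit. The sketch below illustrates the idea with our own notation, not the authors' exact formulation of Eq. 18:

```python
import numpy as np

def fit_lumped_coefficient(deflections, speeds):
    """Closed-form 1D least-squares estimate of the lumped coefficient c
    in the quadratic model theta = c * speed^2."""
    s2 = np.asarray(speeds, dtype=float) ** 2
    th = np.asarray(deflections, dtype=float)
    return float(s2 @ th) / float(s2 @ s2)
```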
VI-A3 LSTM training
We train the LSTM using two datasets collected in indoor flight. In the first flight, the hexarotor follows a circular trajectory at a set of constant velocities ranging from 1 to 5 m/s, spaced 1 m/s apart. In the second dataset, we command the robot via a joystick, making aggressive maneuvers and reaching velocities up to 5.5 m/s. Since the robot flies indoors (where the wind can be considered to be zero), we assume that the relative airflow of the MAV is the opposite of its estimated velocity, which we use to train the network. The network is implemented and trained using PyTorch [33]. The data is preprocessed by resampling it at 50 Hz, since the inputs of the network used for training have different rates (e.g., 200 Hz for the acceleration data from the IMU and 50 Hz for the airflow sensors). The network is trained for 400 epochs on sequences of 5 samples, with a learning rate of 10, using the Adam optimizer [34] and the Mean Squared Error (MSE) loss. Unlike the model-based approach, the LSTM does not require any knowledge of the position and orientation of the sensors, nor the identification of the lumped coefficient for each sensor. Once the network has been trained, however, it is not possible to reconfigure the position or the type of sensors used without retraining.

VI-B Implementation details
VI-B1 System architecture
We use a custom-built hexarotor with a mass of 1.31 kg. The pose of the robot is provided by a motion capture system, while odometry information is obtained by an estimator running onboard, which fuses the pose information with the inertial data from an IMU. Our algorithms run on the onboard Nvidia Jetson TX2 and are interfaced with the rest of the system via ROS. We use the Aerospace Controls Laboratory's snapstack [35] for controlling the MAV.
VI-B2 Sensor driver
The sensors are connected via I2C to the TX2. A ROS node (the sensor driver) reads the magnetic field data at 50 Hz and publishes the deflection angles as in Eq. 1. Slight manufacturing imperfections are handled via an initial calibration of the offset angles. The sensor driver rejects outliers by comparing each component of the measured deflection with a low-pass filtered version. If the difference is large, the measurement is discarded, but the low-pass filter is updated nevertheless. Therefore, if the sensor deflects very rapidly and the measurement is incorrectly regarded as an outlier, the low-pass filtered value quickly approaches the true value and subsequent false positives do not occur.

VI-B3 Trajectory generator
A trajectory generator ROS node commands the vehicle to follow a circular path at different constant speeds or a line trajectory between two points with a maximum desired velocity. This node also handles the finite state machine transitions: take off, flight to the initial position of the trajectory, execution of the trajectory, and landing where the vehicle took off. We use this trajectory generator to identify the drag coefficient of the MAV (see Section VIA), to collect data for training, and to execute the experiments described below.
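The outlier rejection performed by the sensor driver described above can be sketched as follows; the filter gain and threshold are placeholders, not the values used on the vehicle:

```python
import numpy as np

class OutlierRejector:
    """Compare each new deflection angle against a low-pass filtered
    version; discard it if the deviation is large, but keep updating the
    filter so it can catch up with fast, genuine deflections."""
    def __init__(self, alpha=0.2, threshold=0.3):
        self.alpha = alpha          # low-pass filter gain (placeholder)
        self.threshold = threshold  # max allowed deviation [rad] (placeholder)
        self.filtered = None

    def step(self, theta):
        theta = np.asarray(theta, dtype=float)
        if self.filtered is None:
            self.filtered = theta.copy()
            return theta
        outlier = np.abs(theta - self.filtered) > self.threshold
        accepted = np.where(outlier, self.filtered, theta)
        # update the filter with the raw measurement regardless,
        # so a sustained fast deflection stops being flagged
        self.filtered += self.alpha * (theta - self.filtered)
        return accepted
```

A sudden step in the measurement is initially rejected, but because the filter keeps tracking the raw signal, the step is accepted after a few samples.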
VI-C Relative airflow estimation
For this experiment, we commanded the vehicle with a joystick around our flight space at different speeds, to show the ability of our approach to estimate the relative airflow. Since the space is indoors (no wind), we assume that the relative airflow is opposite to the velocity of the MAV. We thus compare the velocity of the MAV (obtained from a motion capture system) to the opposite of the relative airflow estimated via the model-based strategy and the deep-learning based strategy.
Figure 6 shows the results of the experiment. Each subplot presents one component of the velocity of the vehicle in the body frame. The ground truth (GT) in red is the MAV's velocity obtained via the motion capture system, the green dotted line represents the relative airflow velocity in the body frame as estimated via the deep-learning based strategy (LSTM), and the blue dashed line represents the same quantity as estimated by the fully model-based strategy (UKF). The root mean squared errors of the UKF and LSTM estimates for this experiment are shown in Table I. The results demonstrate that both approaches are effective, but show that the LSTM is more accurate.
VI-D Wind gust estimation
To demonstrate the ability to estimate wind gusts, we flew the vehicle in a straight line, commanded by the trajectory generator outlined in Section VI-B, along the diagonal of the flight space, while a leaf blower was pointed approximately at the middle of this trajectory. Figure 7 shows in red the wind speed estimated by the UKF, drawn at the 2D position where each estimate was produced, and in green the leaf blower pose obtained with the motion capture system. As expected, the estimated wind speed increases in the area affected by the leaf blower.
Method  RMS error (x)  RMS error (y)  RMS error (z)  Unit
UKF  0.44  0.34  0.53  m/s
LSTM  0.38  0.31  0.28  m/s
VI-E Simultaneous estimation of drag and interaction forces
Our approach can differentiate between drag and interaction forces, as shown in the following experiment. There are four main parts to the experiment: hovering with no external force, hovering in a wind field generated by three leaf blowers, simultaneously pulling the vehicle with a string attached to it while the vehicle is still immersed in the wind field, and turning off the leaf blowers so that only the interaction force remains. Figure 8 shows the drag and interaction forces acting on the MAV in the world frame as estimated by the UKF. As expected, the drag force is close to zero when no wind is present, even when the MAV is pulled; similarly, the interaction force is approximately zero when the vehicle is not pulled, even when the leaf blowers are acting on it. Therefore, drag and interaction forces are differentiated correctly. Note that the leaf blowers turn on quickly, so the estimated drag force resembles a step, while the interaction force was caused by manually pulling the MAV with a string, following approximately a ramp from 0 to 4 N as measured with a dynamometer. The UKF estimates this force at about 6 N, potentially due to inaccuracies in our external force ground-truth measurement procedure and miscalibration of the commanded throttle-to-thrust mapping. As for the wind speed generated by the leaf blowers, it has an average value of 3.6 m/s at the distance where the vehicle was flying; according to our model, a drag force of approximately 1.2 N, as shown in Fig. 8, should correspond to a wind speed of 3 m/s. The difference is due to the fact that the leaf blowers are not perfectly aimed at the MAV, and the wind field that they generate is narrow.
VII Conclusion
We presented a model-based and a learning-based approach to estimate the relative airflow, the drag force, and the interaction force acting on a hexarotor using bio-inspired sensors. The results obtained in flight experiments show that our approach accurately identifies the relative airflow experienced by a multirotor in flight, and that we are able to detect wind gusts acting on the MAV. Via experimental results, we showed that the proposed deep-learning based strategy is more accurate than the model-based strategy, and does not require a significant amount of training data. The deep-learning based strategy, however, does not allow the flexibility to reposition or easily change sensors without retraining the network. Additionally, we showed that we can correctly distinguish between drag and interaction forces. Future work includes leveraging our drag estimation results for improved trajectory tracking performance. We additionally plan to further evaluate our deep-learning based approach, and to compare different learning algorithms, data collection, and training strategies.
Acknowledgment
This work was funded by the Air Force Office of Scientific Research MURI FA9550-19-1-0386 and by Ford Motor Company. The authors would like to thank Parker Lusk for his help with the system setup.
References
 [1] “Zipline  vital, ondemand delivery for the world,” https://flyzipline.com/, (Accessed on 02/24/2020).
 [2] “Flyability — drones for indoor inspection and confined space,” https://www.flyability.com/, (Accessed on 02/24/2020).
 [3] “Skydio 2: The drone you’ve been waiting for. – skydio, inc.” https://www.skydio.com/, (Accessed on 02/24/2020).
 [4] “(STTR) Navy FY13A  airborne sensing for ship airwake surveys,” https://www.navysbir.com/n13_A/navst13a015.htm, (Accessed on 02/26/2020).
 [5] “Voliro airborne robotics,” https://www.voliro.com/, (Accessed on 02/24/2020).
 [6] A. Tagliabue, M. Kamel, R. Siegwart, and J. Nieto, “Robust collaborative object transportation using multiple MAVs,” The International Journal of Robotics Research, vol. 38, no. 9, pp. 1020–1044, 2019.
 [7] F. Augugliaro and R. D’Andrea, “Admittance control for physical human-quadrocopter interaction,” in 2013 European Control Conference (ECC). IEEE, 2013, pp. 1805–1810.
 [8] A. Tagliabue, M. Kamel, S. Verling, R. Siegwart, and J. Nieto, “Collaborative transportation using MAVs via passive force control,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 5766–5773.
 [9] T. Lew, T. Emmei, D. D. Fan, T. Bartlett, A. Santamaria-Navarro, R. Thakker, and A.-a. Agha-mohammadi, “Contact inertial odometry: Collisions are your friend,” arXiv preprint arXiv:1909.00079, 2019.
 [10] A. Paris, B. T. Lopez, and J. P. How, “Dynamic landing of an autonomous quadrotor on a moving platform in turbulent wind conditions,” arXiv preprint arXiv:1909.11071, 2019.
 [11] “’My fingers were almost cut off by a drone’  BBC news,” https://www.bbc.com/news/uk40697682, (Accessed on 02/24/2020).
 [12] T. Tomić, K. Schmid, P. Lutz, A. Mathers, and S. Haddadin, “The flying anemometer: Unified estimation of wind velocity from aerodynamic power and wrenches,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1637–1644.
 [13] T. Tomić and S. Haddadin, “Simultaneous estimation of aerodynamic and contact forces in flying robots: Applications to metric wind estimation and collision detection,” in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 5290–5296.
 [14] S. P. Sane, A. Dieudonné, M. A. Willis, and T. L. Daniel, “Antennal mechanosensors mediate flight control in moths,” Science, vol. 315, no. 5813, pp. 863–866, 2007.
 [15] S. Prudden, A. Fisher, M. Marino, A. Mohamed, S. Watkins, and G. Wild, “Measuring wind with small unmanned aircraft systems,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 176, pp. 197–210, 2018.
 [16] P. Ventura Diaz and S. Yoon, “High-fidelity computational aerodynamics of multirotor unmanned aerial vehicles,” in 2018 AIAA Aerospace Sciences Meeting, 2018, p. 1266.

 [17] S. Kim and C. Velez, “A magnetically transduced whisker for angular displacement and moment sensing,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
 [18] P. Bruschi, M. Piotto, F. Dell’Agnello, J. Ware, and N. Roy, “Wind speed and direction detection by means of solid-state anemometers embedded on small quadcopters,” Procedia Engineering, vol. 168, pp. 802–805, 2016.
 [19] D. Hollenbeck, G. Nunez, L. E. Christensen, and Y. Chen, “Wind measurement and estimation with small unmanned aerial systems (suas) using onboard mini ultrasonic anemometers,” in 2018 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2018, pp. 285–292.
 [20] W. Deer and P. E. Pounds, “Lightweight whiskers for contact, pre-contact, and fluid velocity sensing,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1978–1984, 2019.
 [21] Y. Demitrit, S. Verling, T. Stastny, A. Melzer, and R. Siegwart, “Model-based wind estimation for a hovering VTOL tailsitter UAV,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 3945–3952.
 [22] L. Sikkel, G. de Croon, C. De Wagter, and Q. Chu, “A novel online model-based wind estimation approach for quadrotor micro air vehicles using low cost MEMS IMUs,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 2141–2146.
 [23] G. Shi, X. Shi, M. O’Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung, “Neural lander: Stable drone landing control using learned dynamics,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 9784–9790.
 [24] S. Allison, H. Bai, and B. Jayaraman, “Estimating wind velocity with a neural network using quadcopter trajectories,” in AIAA Scitech 2019 Forum, 2019, p. 1596.
 [25] A. S. Marton, A. R. Fioravanti, J. R. Azinheira, and E. C. de Paiva, “Hybrid model-based and data-driven wind velocity estimator for the navigation system of a robotic airship,” arXiv preprint arXiv:1907.06266, 2019.
 [26] C. D. McKinnon and A. P. Schoellig, “Unscented external force and torque estimation for quadrotors,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 5651–5657.
 [27] B. Nisar, P. Foehn, D. Falanga, and D. Scaramuzza, “Vimo: Simultaneous visual inertial modelbased odometry and force estimation,” IEEE Robotics and Automation Letters, 2019.
 [28] D. Simon, Optimal state estimation: Kalman, H infinity, and nonlinear approaches. John Wiley & Sons, 2006.
 [29] A. Tagliabue, X. Wu, and M. W. Mueller, “Model-free online motion adaptation for optimal range and endurance of multicopters,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 5650–5656.
 [30] J. L. Crassidis and F. L. Markley, “Unscented filtering for spacecraft attitude estimation,” Journal of guidance, control, and dynamics, vol. 26, no. 4, pp. 536–542, 2003.
 [31] Z. C. Lipton, J. Berkowitz, and C. Elkan, “A critical review of recurrent neural networks for sequence learning,” arXiv preprint arXiv:1506.00019, 2015.
 [32] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT press, 2016.
 [33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “PyTorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems, 2019, pp. 8024–8035.
 [34] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
 [35] Aerospace Controls Laboratory, “snapstack: Autopilot code and host tools for flying snapdragon flightbased vehicles,” https://gitlab.com/mitacl/fsw/snapstack, (Accessed on 02/23/2020).