Accurate position tracking with a single UWB anchor

05/21/2020 ∙ by Yanjun Cao, et al. ∙ Technische Universität München ∙ Universität Tübingen

Accurate localization and tracking are fundamental requirements for robotic applications. Localization systems like GPS, optical tracking, and simultaneous localization and mapping (SLAM) are used for daily-life activities, research, and commercial applications. Ultra-wideband (UWB) technology provides another avenue to accurately locate devices both indoors and outdoors. In this paper, we study a localization solution with a single UWB anchor, instead of the traditional multi-anchor setup. Besides the challenge of a single UWB ranging source, the only other sensor we require is a low-cost 9 DoF inertial measurement unit (IMU). Under this configuration, we propose continuous monitoring of UWB range changes to estimate the robot speed while it moves along a straight line. Combining the speed estimate with orientation estimates from the IMU, the system becomes temporally observable. We use an extended Kalman filter (EKF) to estimate the pose of the robot. With our solution, we can effectively correct the accumulated error and maintain accurate tracking of a moving robot.

I Introduction

Accurate localization and tracking are fundamental services for an autonomous system. Many options are available for localization: GPS for open outdoor areas, motion capture systems in a laboratory, or visual systems. However, they are generally limited by the environment, or require time-consuming, labor-intensive setup work and expensive infrastructure [15]. Ultra-wideband (UWB) technology provides another avenue to accurately locate devices both indoors and outdoors. Most available UWB systems are based on multi-anchor arrangements, which need labor-intensive setup work such as mounting anchors and calibration. Furthermore, it is often difficult to set up such systems outdoors or in an unstructured environment. We believe that a localization system which can accurately track devices without complex setup is highly desirable.

Tracking with a single anchor is attractive because one can easily drop an anchor in the environment as a reference. However, it is also quite challenging: a single source of distance information is generally too limited for tracking. Current research in underwater robotics proposes fusing the distance to an acoustic anchor with odometry, but it usually relies on very expensive sensors (e.g., high-accuracy IMUs and Doppler velocity logs) [6, 23]. Our goal is to enable single-anchor localization with low-cost UWB and IMU sensors. Robots and IoT devices can easily be equipped with these two sensors, while velocity sensors (encoders, Doppler velocity logs, etc.) are much rarer and generally too expensive for IoT devices. Moreover, UWB is becoming pervasive: at the time of writing, it is present in the latest Apple iPhone [13] for spatial awareness.

Fig. 1: Trajectory of a real-world experiment with a differential wheel robot, the Duckiebot. The robot only has IMU and UWB sensors; there is no encoder or other velocity sensor on the robot. A UWB anchor is placed at (0,0), at the bottom right. The robot is controlled manually to drive a "TUM"-like trajectory on a basketball court on the TUM Garching campus.

Getting odometry from low-cost IMUs is challenging. Velocity, integrated from the acceleration measured by the IMU, drifts quickly and cannot be used for odometry. Unfortunately, velocity is crucial for the observability of a mobile robot localization system, especially for single-anchor localization [16]. To solve this problem, we propose a novel algorithm to estimate velocity by combining UWB and IMU measurements. An EKF fuses the range, orientation, and speed estimates to estimate the robot pose. Simulations and real-world experiments validate our algorithm. We believe our system can unlock localization for a large number of devices in practical applications.

II Related Work

Using UWB technology to locate devices has recently become popular. Most applications are based on a multi-anchor configuration [22, 17, 29, 7, 18, 19]. We aim at simplifying the infrastructure to a single anchor as reference. Many researchers have studied single-anchor localization, especially for underwater robotics [6, 24]. Underwater robots usually use acoustic sensors, top-of-the-line IMUs, and expensive Doppler sensors. Guo et al. [8] study a cooperative relative localization algorithm. They propose an optimization-based single-beacon localization algorithm to get an initial position for collaborative localization. However, they only observe a sine-like moving pattern, and they require a velocity sensor. Similarly, a recent work by Nguyen et al. [20] uses odometry measurements from optical flow sensors. In our study, we only use UWB and a low-cost IMU, dropping the need for a velocity sensor.

To better understand the single-anchor localization problem, which is typically non-linear because of distance and angle, an observability study is necessary. Based on the groundwork of Hermann and Krener [9], researchers have studied the observability of range-only localization systems, from one fixed anchor [24] to relative range and bearing measurements between two mobile robots [16]. Batista et al. [1, 12] used an augmented state to linearize the problem, enabling classical observability analysis methods. A recent study [27] explores leader-follower experiments for drones with UWB ranging between robots, also with velocity measurements either from a motion capture system or from optical flow. However, all these studies assume that velocities are available as a direct measurement, which we do not have.

Although we do not use velocity sensors, the system still needs velocity to become observable. Getting a reliable velocity from a low-cost IMU or UWB is challenging: the integration of acceleration drifts dramatically with low-cost MEMS IMU sensors [14, 28]. For position estimation, IMUs are therefore often combined with other sensor measurements, like GPS, multi-anchor UWB [10], and cameras [11].

One straightforward way to estimate velocity is from the distance change to a UWB anchor when the robot moves along a radial line from the anchor. This situation rarely lasts in reality, but the range change pattern can still be used as a speed estimator. We propose a method based on simple geometric relations under the assumption that the robot moves at constant velocity. The estimated speed, coupled with data from the IMU gyroscope, provides a velocity estimate that keeps the system observable. Finally, we use an EKF to fuse range, orientation, and velocity estimates to get the robot pose. The contributions of this paper are:

  • a speed estimator using only UWB range information, which turns an unobservable system into an observable one;

  • error analysis for the speed estimator to help design a sensor fusion algorithm;

  • a loosely coupled tracking algorithm fusing IMU, UWB, and the proposed speed estimation;

  • simulation and real-world experiments to validate our methodology.

III Proposed Method

III-A System Definition and Observability Analysis

In this paper, we consider a robot moving on a 2D plane in proximity of a UWB anchor. The robot has a state vector $\mathbf{x} = [x, y, \theta, v, \omega]^T$, where $(x, y)$ are the coordinates of the robot, $\theta$ is the heading, and $v$ and $\omega$ are the linear and angular velocities. The system kinematics are described as:

$$\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega, \quad \dot{v} = a, \quad \dot{\omega} = \alpha \qquad (1)$$

where $a$ and $\alpha$ are the linear and angular acceleration, respectively. The measurement functions are

$$h_1(\mathbf{x}) = \sqrt{(x - x_a)^2 + (y - y_a)^2}, \quad h_2(\mathbf{x}) = \theta \qquad (2)$$

where $(x_a, y_a)$ are the coordinates of the UWB anchor. $h_1$ is the range measurement function for the UWB sensor. $h_2$ is a heading measurement function that takes the output orientation of a complementary filter, which fuses the measurements of the accelerometer, gyroscope, and magnetometer of the IMU.

In control theory, the observability of a system refers to the ability to reconstruct its initial state from the control inputs and outputs. For a linear time-invariant system, if the observability matrix is nonsingular, the system is observable [3]. We follow the approach of Hermann and Krener [9], which uses differential geometry to analyze the observability of non-linear systems.

To easily compare the impact of velocity, we extend the measurement functions (2) to a typical system that has linear and angular velocity measurements as in (3), similar to [24]. In addition, we define the anchor position as the origin and map the distance measurement $d$ to $\frac{1}{2}d^2$ to simplify, as in [25]. Then we have:

$$h_1 = \tfrac{1}{2}(x^2 + y^2), \quad h_2 = \theta, \quad h_3 = v, \quad h_4 = \omega \qquad (3)$$

We rewrite the model in (1) in the control-affine format [9]:

$$\dot{\mathbf{x}} = f_0(\mathbf{x}) + f_1(\mathbf{x})\,u_1 + f_2(\mathbf{x})\,u_2$$

with state $\mathbf{x} = [x, y, \theta, v, \omega]^T$ and control input $\mathbf{u} = [a, \alpha]^T$. The corresponding vector fields on the state space are:

$$f_0 = [v\cos\theta,\ v\sin\theta,\ \omega,\ 0,\ 0]^T, \quad f_1 = [0, 0, 0, 1, 0]^T, \quad f_2 = [0, 0, 0, 0, 1]^T$$

Next we find the Lie derivatives of the observation functions along these vector fields. The zero-order Lie derivatives are $L^0 h_1 = h_1$, $L^0 h_2 = h_2$, $L^0 h_3 = h_3$, and $L^0 h_4 = h_4$, which are the same as the observation functions. The non-constant first-order derivatives are $L_{f_0} h_1 = v(x\cos\theta + y\sin\theta)$ and $L_{f_0} h_2 = \omega$; the remaining first-order Lie derivatives, such as $L_{f_1} h_3 = 1$ and $L_{f_2} h_4 = 1$, are constant or zero.

We write the observation space $\mathcal{G}$ spanned by $L^0 h_i$ and $L^1 h_i$ for $i = 1, \dots, 4$. Note that all constant Lie derivatives are eliminated when computing the state derivatives in (4). Finally, we compute the gradients of the functions in $\mathcal{G}$ and obtain the observability matrix:

$$\mathcal{O} = \begin{bmatrix} x & y & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ v\cos\theta & v\sin\theta & v(y\cos\theta - x\sin\theta) & x\cos\theta + y\sin\theta & 0 \end{bmatrix} \qquad (4)$$

where $(x, y) \neq (0, 0)$ because the origin is occupied by the anchor.

$\mathcal{O}$ has full rank when $v \neq 0$ and $y\cos\theta - x\sin\theta \neq 0$, which are reasonable assumptions. If the robot is static, it is difficult to locate it from just the range and the orientation. The second condition means that the robot moves along a radial line from the anchor, which rarely lasts in practice. However, if we do not have a velocity measurement, which is the situation we consider, the row $[0, 0, 0, 1, 0]$ contributed by $h_3 = v$ disappears. The dimension of the observation space is reduced to four, and therefore the system does not meet the observability rank condition.

Therefore, velocity is crucial for system observability, and we estimate it from inertial measurements and UWB ranging.
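This rank argument can be checked symbolically. The following minimal sketch (our illustration, not the authors' code) builds the stacked gradients of the zero- and first-order Lie derivatives for the unicycle model and measurement set reconstructed above, and compares the generic rank with and without the speed measurement $h_3 = v$; the helper name `obs_matrix` is our own.

```python
# Symbolic check of the observability rank condition (a sketch, assuming the
# unicycle model of Eq. (1) and the measurement set of Eq. (3)).
import sympy as sp

x, y, th, v, w = sp.symbols('x y theta v omega', real=True)
state = sp.Matrix([x, y, th, v, w])
f0 = sp.Matrix([v * sp.cos(th), v * sp.sin(th), w, 0, 0])  # drift vector field

# Observation functions of Eq. (3): squared-range form, heading, v, omega.
h_all = [sp.Rational(1, 2) * (x**2 + y**2), th, v, w]

def obs_matrix(h_list):
    """Stack gradients of zero- and first-order Lie derivatives along f0."""
    rows = []
    for h in h_list:
        grad = sp.Matrix([h]).jacobian(state)   # d(L^0 h)
        rows.append(grad)
        lie1 = (grad * f0)[0]                   # L_f0 h
        rows.append(sp.Matrix([lie1]).jacobian(state))  # d(L^1 h)
    return sp.Matrix.vstack(*rows)

# Generic rank (sympy treats symbols as generic, i.e., v != 0, off radial line).
print(obs_matrix(h_all).rank())            # 5 -> rank condition satisfied
print(obs_matrix(h_all[:2] + [w]).rank())  # 4 -> deficient without h3 = v
```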

III-B Speed Estimation Model

Fig. 2: An illustration of our speed estimator. Three pairs of range measurements are used to calculate the speed.

By observing the range change pattern as a robot moves on a straight line, we propose a speed estimator based on simple geometric relations. As shown in Fig. 2, three pairs of range and time measurements, $(d_1, t_1)$, $(d_2, t_2)$, and $(d_3, t_3)$, are given when the robot passes points A, B, and C at a constant velocity $v$. $h$ is the virtual distance from the anchor to the motion line, and $x$ is the length from the starting point A to the virtual intersection D (the foot of the perpendicular). Based on the Pythagorean theorem, we can get the algebraic solution of the moving speed $v$:

$$v^2 = \frac{(t_3 - t_1)(d_1^2 - d_2^2) - (t_2 - t_1)(d_1^2 - d_3^2)}{(t_2 - t_1)(t_3 - t_1)(t_3 - t_2)} \qquad (5)$$

where

$$d_1^2 = h^2 + x^2, \quad d_2^2 = h^2 + \big(x - v(t_2 - t_1)\big)^2, \quad d_3^2 = h^2 + \big(x - v(t_3 - t_1)\big)^2$$

From these three functions, one can also solve for $h$ and $x$; as we are interested only in the velocity, we just show the solution for $v$. For simplicity, assuming the ranging measurements have a fixed frequency $f$, we have $t_2 - t_1 = t_3 - t_2 = 1/f$. Then

$$v = f\,\sqrt{\frac{d_1^2 - 2d_2^2 + d_3^2}{2}} \qquad (6)$$
Fig. 3: A simulation of speed estimation using noise-free ranging measurements to one anchor. The speed is tracked correctly during constant-velocity phases. The estimator produces erroneous peaks during speed changes, which can be avoided by filtering.

The positive root is the current speed (as the kinematics model shows, the robot can only move forward). We present a simulation with ten stages and different configurations of linear and angular velocities, as shown in Fig. 3. The velocities change at the beginning of each stage and remain constant thereafter. The range measurements (cyan) are continuously fed into the estimator. As we can see, the velocity is estimated correctly (green); the changing pattern of the range in each phase reveals the speed value.

The peaks between stages are caused by velocity changes and should be expected, as the changes break the constant-velocity assumption. Even though we cannot estimate the velocity correctly while it is suddenly changing, our algorithm does not lose generality, as constant-velocity motion is predominant in most real-world scenarios. One can filter the estimates over these short periods and maintain a correct pose estimate. Furthermore, our sensor fusion algorithm provides additional tolerance to velocity changes.
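For illustration, here is a minimal sketch (ours, not the authors' code) of the fixed-frequency estimator in (6); the function name and the noise-free check at the end are our own assumptions.

```python
# Speed from three consecutive UWB ranges sampled at frequency f (Eq. (6)).
import math

def speed_from_ranges(d1, d2, d3, f):
    """Return the speed estimate, or None if noise makes it undefined."""
    s = d1**2 - 2.0 * d2**2 + d3**2   # equals 2 v^2 / f^2 for noise-free ranges
    if s <= 0.0:                       # measurement noise can push s negative
        return None
    return f * math.sqrt(s / 2.0)

# Noise-free check: anchor at the origin, motion line y = 5, speed 2 m/s, f = 10 Hz.
f = 10.0
ranges = [math.hypot(-3.0 + 2.0 * k / f, 5.0) for k in range(3)]
print(speed_from_ranges(*ranges, f))   # prints ~2.0
```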

III-C Speed Estimator Error Analysis

UWB ranging is considered fairly accurate. This is true compared with WiFi or Bluetooth technologies, which provide meter-level accuracy, but UWB still only achieves decimeter-level accuracy. For instance, the DW1000 from Decawave [4] provides an accuracy of about 10 cm using a two-way ranging (TWR) time-of-flight (TOF) protocol, which is still too noisy to compute the speed from range measurements directly. In this section, we analyze the error propagation for the speed estimator and design the speed estimation algorithm accordingly.

The range measurement model is expressed as $d = \tilde{d} + \epsilon$, where the measurement $d$ is the true range $\tilde{d}$ plus some noise $\epsilon$. We denote standard deviations by $\sigma$, e.g., $\sigma_{d_1}$ for the standard deviation of $d_1$.

To determine the standard deviation $\sigma_v$ of the speed computed from three range measurements, we apply error propagation as in [2] (Ch. 4) to (6). We get:

$$\sigma_v^2 = \sum_{i=1}^{3} \left(\frac{\partial v}{\partial d_i}\right)^2 \sigma_{d_i}^2$$

which can be rewritten as:

$$\sigma_v^2 = \frac{f^2\left(d_1^2\,\sigma_{d_1}^2 + 4 d_2^2\,\sigma_{d_2}^2 + d_3^2\,\sigma_{d_3}^2\right)}{2\left(d_1^2 - 2d_2^2 + d_3^2\right)}$$

Fig. 4: The velocity magnitude is estimated only from range measurements. The robot moves from the point (10, 0) to (10, 250) at 10.0 m/s with a time step of 0.01 s; the anchor is located at (0, 0). The deviation of the direct speed estimate (blue) increases with the range. Using a variable window size average filter, our speed estimator (green) gives the correct speed.

Since $d_1$, $d_2$, and $d_3$ are measurements from the same sensor, we have $\sigma_{d_1} = \sigma_{d_2} = \sigma_{d_3} = \sigma_d$, which is fixed for the UWB sensors in our setup. $f$ is a constant depending on the ranging update rate. Then we simplify the equation above as:

$$\sigma_v^2 = \frac{f^2 \sigma_d^2 \left(d_1^2 + 4 d_2^2 + d_3^2\right)}{2\left(d_1^2 - 2 d_2^2 + d_3^2\right)}$$

namely,

$$\sigma_v = f \sigma_d \sqrt{\frac{d_1^2 + 4 d_2^2 + d_3^2}{2\left(d_1^2 - 2 d_2^2 + d_3^2\right)}} \qquad (7)$$

From the above equation, we can see that if the three measurements $d_1$, $d_2$, and $d_3$ are similar, the denominator becomes small and the error becomes very large, a condition we want to avoid. Similar $d_1$, $d_2$, and $d_3$ can be caused by an excessively fast sample rate or by very slow motion.

To avoid operating on nearly identical ranging measurements, we update the speed based on the range change instead of at regular time intervals. In other words, the estimator is only triggered to compute the speed when the range measurement difference exceeds a given threshold. This way, the noise in the estimation is effectively suppressed. Note that the update rate then depends on the speed of the robot and on the direction of motion: if the robot moves quickly along a radial line from the anchor, the range difference quickly reaches the threshold and the update rate is high, and vice versa.
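To make (7) concrete, here is a small numeric check (our illustration; the sample values are assumptions, not the paper's parameters) contrasting per-sample estimation with the trigger-on-range-change strategy:

```python
# Predicted standard deviation of the speed estimate, Eq. (7).
import math

def speed_sigma(d1, d2, d3, f, sigma_d):
    s = d1**2 - 2.0 * d2**2 + d3**2
    if s <= 0.0:
        return float('inf')
    return f * sigma_d * math.sqrt((d1**2 + 4.0 * d2**2 + d3**2) / (2.0 * s))

# Radial motion at 2 m/s with sigma_d = 0.1 m:
print(speed_sigma(5.0, 5.2, 5.4, 10.0, 0.1))  # every sample at 10 Hz: ~32 m/s, unusable
print(speed_sigma(4.0, 6.0, 8.0, 1.0, 0.1))   # wait for ~2 m range change: ~0.37 m/s
```

Waiting for the range to change by a threshold effectively lowers $f$ and widens the denominator, which is exactly why the trigger dramatically reduces the estimator noise.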

Furthermore, we can also see that the standard deviation of the speed is positively correlated with $d_2$ in the numerator of (7), where $d_2$ is the intermediate measurement of the distance from the robot to the anchor. As Fig. 4 shows, the deviation of the speed estimate (blue) increases as the range measurement increases. Thus, if the robot is very far away from the anchor, the standard deviation becomes large, leading to noisy speed estimates.

To solve this problem, we implement a variable window size filter, based on the distance from the robot to the anchor, to smooth the speed estimate. As Fig. 4 shows, while the deviation of the direct speed estimate grows with the range, our variable-window speed estimate tracks the actual speed well.
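A sketch of such a filter follows (our reading of the idea; the linear window-size rule and its parameters are assumptions, not the paper's exact values): smooth harder when the robot is far from the anchor, where (7) predicts a noisier raw estimate.

```python
# Variable-window averaging of raw speed estimates (Sec. III-C).
from collections import deque

class VariableWindowFilter:
    def __init__(self, meters_per_tap=5.0, min_taps=3, max_taps=50):
        self.meters_per_tap = meters_per_tap  # assumed growth rate of the window
        self.min_taps = min_taps
        self.max_taps = max_taps
        self.buf = deque(maxlen=max_taps)

    def update(self, raw_speed, range_to_anchor):
        self.buf.append(raw_speed)
        # Grow the averaging window linearly with distance to the anchor.
        n = int(min(self.max_taps,
                    max(self.min_taps, range_to_anchor / self.meters_per_tap)))
        recent = list(self.buf)[-n:]
        return sum(recent) / len(recent)
```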

III-D Sensor Fusion System

Fig. 5: Sensor Fusion system.

The block diagram in Fig. 5 shows our localization system. As mentioned above, the system has two kinds of sensors: UWB sensors for range measurements and low-cost IMUs, with gyroscopes, accelerometers, and magnetometers, for orientation. The UWB range measurement is used in two pipelines: first, it goes directly to the EKF; at the same time, it is fed to a Kalman filter and then into the speed estimator. The speed estimator output goes into the EKF (red line). The robot heading is estimated by a complementary filter [26], which provides a quaternion estimate of the orientation using an algebraic solution from the observation of gravity and the magnetic field. Finally, the range measurements, robot heading, and speed estimate are fed into the EKF to estimate the robot pose.

Algorithm 1 outlines the approach in detail. Sensor readings from the IMU are fed into a complementary filter to obtain an accurate heading in an environment free of magnetic interference (line 11). The UWB range and the computed heading are fed into an EKF for classical state estimation of position, heading, and linear and angular velocities (line 12 in Alg. 1).

The novelty of our system is the speed estimation, which corrects the velocity estimate in the EKF. As explained above, the UWB range measurements are fed into a separate Kalman filter to obtain smoothed range measurements $\bar{d}$ (lines 13-14 in Alg. 1). When the change in $\bar{d}$ is greater than the threshold and the robot moves in a linear trajectory, the speed estimator calculates the speed. This speed is then used to correct the velocity estimated by the EKF (lines 16-19). A variable window size filter is applied to get a smooth speed estimate (lines 6-7), as discussed in III-C.

input : UWB range d with timestamp t; IMU readings (accelerometer, gyroscope, magnetometer)
output : robot pose estimate (x, y, θ)
1 K = ([range, timestamp], ...) # keyRangePairs
2 Function VelEstimator(d, t):
3        newKeyPair = [d, t]
4        append newKeyPair to K
5        v_raw = speed from the last three key pairs in K via (5);
6        w = window size from the current range # Sec. III-C
7        vel = average of the last w raw speed estimates
8        return vel
9
10 while True do
11        θ = ComplementaryFilter(acc, gyro, mag)
12        EKF predict and update with range d and heading θ
13        d̄ = KalmanFilter(d) # smoothed range
14        Δd = |d̄ − d̄_last|
15        isLinear = angular rate below threshold # near-straight motion check
16        if Δd > δ and isLinear then
17               drop outdated key pairs from K
18               vel = VelEstimator(d̄, t)
19               EKF velocity correction with vel
20               d̄_last = d̄
21               t_last = t
22        end if
23        publish pose estimate from the EKF
24 end while
Algorithm 1 State estimation with a single UWB anchor.
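For concreteness, here is a compact EKF sketch of the fusion loop above (our illustration, not the authors' implementation; the class name, noise values, and the scalar-update structure are assumptions).

```python
# EKF over the unicycle state [x, y, theta, v, omega], fusing UWB range,
# IMU heading, and the UWB-only speed estimate of Eq. (6)/(7).
import numpy as np

class SingleAnchorEKF:
    def __init__(self, x0, P0):
        self.x = np.asarray(x0, float)   # [x, y, theta, v, omega]
        self.P = np.asarray(P0, float)

    def predict(self, dt, q=0.1):
        x, y, th, v, w = self.x
        self.x = np.array([x + v*np.cos(th)*dt, y + v*np.sin(th)*dt,
                           th + w*dt, v, w])
        F = np.eye(5)                     # Jacobian of the motion model
        F[0, 2], F[0, 3] = -v*np.sin(th)*dt, np.cos(th)*dt
        F[1, 2], F[1, 3] = v*np.cos(th)*dt, np.sin(th)*dt
        F[2, 4] = dt
        self.P = F @ self.P @ F.T + q * dt * np.eye(5)  # simplistic process noise

    def _update(self, z, h, H, r):
        S = H @ self.P @ H.T + r          # scalar innovation covariance
        K = (self.P @ H.T) / S
        self.x = self.x + K * (z - h)
        self.P = (np.eye(5) - np.outer(K, H)) @ self.P

    def update_range(self, d, sigma=0.2):
        x, y = self.x[:2]                 # anchor at the origin
        rng = np.hypot(x, y)
        H = np.array([x/rng, y/rng, 0.0, 0.0, 0.0])
        self._update(d, rng, H, sigma**2)

    def update_heading(self, theta_meas, sigma=0.1):
        # Angle wrapping of the innovation omitted for brevity.
        H = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
        self._update(theta_meas, self.x[2], H, sigma**2)

    def update_speed(self, v_est, sigma_v):
        # Velocity correction from the speed estimator; sigma_v from Eq. (7).
        H = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
        self._update(v_est, self.x[3], H, sigma_v**2)
```

Weighting the speed update by the distance-dependent $\sigma_v$ of (7) is what makes the loose coupling tolerant to occasional bad speed estimates.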

IV Experiments

We validate our algorithm through simulations and real-robot experiments. We use two types of robots: a DJI M100 quadcopter and a Duckiebot [21]. The robots are equipped with UWB modules [30] based on the Decawave DW1000 [4]. The quadcopter experiment specifically validates our speed estimation algorithm and gives a quantitative localization error with GPS as reference. The ground mobile robot experiment shows that our algorithm can track very simple robots even along complex trajectories. From the error summary in Table I, we can see that our method improves the accuracy by a factor of more than two in both simulations and real-robot experiments. Note that we use RMSE for the simulation and absolute trajectory error (ATE) for the real-robot experiments, because we have exact ground truth in simulation but not in the real-robot experiments.

Error (m) Simulation (RMSE) Drone Exp. (ATE)
Without speed estimator 1.73 2.81
With speed estimator 0.48 1.05
TABLE I: Error comparison between the EKF with and without the speed estimator.

IV-A Simulation

In our simulations, we generate a random trajectory with a differential wheel robot kinematics model [5]. There are five stages of motion, with different heading and speed settings. The anchor is set at position (0,0) and the robot starts at the point (10,0). The range measurement is corrupted by white Gaussian noise with 0.2 m standard deviation, which is similar to the actual UWB measurement deviation. The noise added to the orientation has a standard deviation of 0.1 rad.
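A sketch of the measurement generation under the parameters just listed (our illustration; the paper's exact five-stage velocity schedule is not reproduced):

```python
# Generate noisy range and heading measurements for one constant-velocity stage.
import numpy as np

rng = np.random.default_rng(0)
dt, anchor = 0.01, np.array([0.0, 0.0])
pose = np.array([10.0, 0.0, np.pi / 2])      # start at (10, 0); [x, y, theta]
v, w = 1.0, 0.2                               # assumed stage velocities

ranges, headings = [], []
for _ in range(1000):
    # Differential-drive kinematics [5], integrated with Euler steps.
    pose += dt * np.array([v * np.cos(pose[2]), v * np.sin(pose[2]), w])
    ranges.append(np.linalg.norm(pose[:2] - anchor) + rng.normal(0.0, 0.2))
    headings.append(pose[2] + rng.normal(0.0, 0.1))
```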

Fig. 6: The trajectories of the EKF with our speed estimator (proposed method, green) and without the speed estimator (vanilla EKF, red), compared with the ground truth trajectory (gray).

As the top of Fig. 9 shows, our method (green) tracks the ground truth velocity (gray) most of the time, which results in an accurate trajectory in Fig. 6. However, the vanilla EKF model (red) cannot recover from the drift errors accumulated during the first stage. This shows that our algorithm can correct the accumulated error as long as the speed estimator gives accurate estimates. More specifically, as the bottom plot of Fig. 9 shows, the RMSE of our method drops rapidly once correct speed estimates become available (around step 1200). We reduce the RMSE of the vanilla EKF by more than 70% and achieve an accuracy of 0.48 m, which is impressive given the limited information.

Fig. 9: The top figure shows the velocity estimates of our method and the vanilla EKF. Although there are some delays, our algorithm tracks the actual speed most of the time, whereas the EKF without the speed estimator only follows the general trend of the speed. The bottom figure compares the RMSE: the error of our algorithm drops rapidly once correct speed estimates become available.

IV-B Speed Estimation for Flying Robot

We use a quadcopter (DJI M100) to validate our tracking and speed estimation in the real world, which also illustrates the potential for 3D applications. We program a triangular trajectory with a speed parameter of 2 m/s for the M100. First, we compare the estimated results with the velocity feedback from the DJI_SDK software. Fig. 11 shows that our algorithm gives a correct speed estimate (red) based on the range measurements (cyan). Then we calculate the ATE between our estimated trajectory and the GPS trajectory. As Fig. 10 and Table I show, the trajectory from our algorithm is much closer to the GPS trajectory than that of the EKF without our speed estimator. With only a single-anchor setup, we get around 1 m ATE, a clear improvement over the vanilla EKF with its error of 2.8 m. Please also note that the DJI velocity feedback in Fig. 11 only serves as a reference, not as ground truth: during the first ten seconds, the robot is hovering, yet the speed reported by the DJI_SDK is around 0.3 m/s.

Fig. 10: Experiments with a DJI M100 quadcopter. The robot is programmed to fly in a triangle trajectory. The anchor is placed on the ground. Our trajectory is much closer to the GPS trajectory than the vanilla EKF without speed estimator.
Fig. 11: In the quadcopter experiment, we can estimate the speed (red) simply by using the range measurement (cyan). The reference from the DJI_SDK is plotted in green.

IV-C Tracking of Ground Mobile Robot

The ground robot platform is a simple indoor robot, the Duckiebot [21]. Our version of the Duckiebot has a Raspberry Pi 2 on-board computer. We add a 9 DoF IMU and a UWB module to the robot, as shown in Fig. 1. The robot does not have wheel encoders to measure speed or displacement. We manually control the robot to drive a trajectory shaped like the university logo (TUM) on an outdoor basketball court and use our localization system to track the robot pose. Fig. 1 shows that the algorithm tracks the trajectory correctly. This experiment provides a qualitative evaluation, indicating that our algorithm can easily be applied to simple robots and IoT devices.

V Conclusion and Discussion

In this paper, we propose a localization algorithm for robots equipped with a low-cost IMU and UWB sensor in an environment configured with only a single UWB anchor. We estimate the speed from UWB range changes, which makes the system temporally observable. Our algorithm effectively reduces the accumulated error by more than 60%. With this algorithm, a large number of devices can be localized, including IoT devices and cellphones with IMU and UWB sensors.

References

  • [1] P. Batista, C. Silvestre, and P. Oliveira (2011) Single range aided navigation and source localization: observability and filter design. Systems & Control Letters 60 (8), pp. 665–673. External Links: ISSN 01676911 Cited by: §II.
  • [2] G. Bohm and G. Zech (2010) Introduction to statistics and data analysis for physicists. Vol. 1, Desy Hamburg. Cited by: §III-C.
  • [3] R. C. Dorf and R. H. Bishop (2011) Modern control systems. Pearson. Cited by: §III-A.
  • [4] Decawave DW1000 (Website). External Links: Link Cited by: §III-C, §IV.
  • [5] L. Feng, H. R. Everett, and J. Borenstein (1994) Where am I?: sensors and methods for autonomous mobile robot positioning. Technical report Cited by: §IV-A.
  • [6] B. Ferreira, A. Matos, and N. Cruz (2010) Single beacon navigation: localization and control of the MARES AUV. In OCEANS 2010 MTS/IEEE SEATTLE, pp. 1–9. Cited by: §I, §II.
  • [7] C. Gentner and M. Ulmschneider (2017) Simultaneous localization and mapping for pedestrians using low-cost ultra-wideband system and gyroscope. In 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–8. External Links: ISBN 978-1-5090-6299-7 Cited by: §II.
  • [8] K. Guo, Z. Qiu, W. Meng, L. Xie, and R. Teo (2017) Ultra-wideband based cooperative relative localization algorithm and experiments for multiple unmanned aerial vehicles in GPS denied environments. International Journal of Micro Air Vehicles 9 (3), pp. 169–186. External Links: ISSN 1756-8293 Cited by: §II.
  • [9] R. Hermann and A. Krener (1977) Nonlinear controllability and observability. IEEE Transactions on automatic control 22 (5), pp. 728–740. Cited by: §II, §III-A, §III-A.
  • [10] J. D. Hol, F. Dijkstra, H. Luinge, and T. B. Schön (2009) Tightly coupled UWB/IMU pose estimation. In 2009 IEEE International Conference on Ultra-Wideband, pp. 688–692. External Links: ISBN 978-1-4244-2930-1 Cited by: §II.
  • [11] J. D. Hol (2011) Sensor fusion and calibration of inertial sensors, vision, ultra-wideband and gps. Ph.D. Thesis, Linköping University Electronic Press. Cited by: §II.
  • [12] G. Indiveri, D. De Palma, and G. Parlangeli (2016) Single range localization in 3-d: observability and robustness issues. IEEE Transactions on Control Systems Technology 24 (5), pp. 1853–1860. External Links: ISSN 1063-6536, 1558-0865 Cited by: §II.
  • [13] Apple (2019) iPhone 11 – technical specifications. External Links: Link Cited by: §I.
  • [14] M. Kok, J. D. Hol, and T. B. Schön (2017) Using inertial sensors for position and orientation estimation. Foundations and Trends in Signal Processing 11 (1), pp. 1–153. External Links: ISSN 1932-8346, 1932-8354, 1704.06053 Cited by: §II.
  • [15] D. Lymberopoulos, J. Liu, X. Yang, R. R. Choudhury, S. Sen, and V. Handziski (2015) Microsoft indoor localization competition: experiences and lessons learned. In ACM SIGMOBILE Mobile Computing and Communications Review, Vol. 18, pp. 24–31. External Links: ISSN 15591662 Cited by: §I.
  • [16] A. Martinelli and R. Siegwart (2005) Observability analysis for mobile robot localization. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1471–1476. Cited by: §I, §II.
  • [17] G. Miraglia, K. N. Maleki, and L. R. Hook (2017) Comparison of two sensor data fusion methods in a tightly coupled UWB/IMU 3-d localization system. In 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), pp. 611–618. External Links: ISBN 978-1-5386-0774-9 Cited by: §II.
  • [18] M. W. Mueller, M. Hamer, and R. D’Andrea (2015) Fusing ultra-wideband range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1730–1736. External Links: ISBN 978-1-4799-6923-4 Cited by: §II.
  • [19] T. M. Nguyen, A. H. Zaini, K. Guo, and L. Xie (2016) An ultra-wideband-based multi-UAV localization system in GPS-denied environments. In 2016 International Micro Air Vehicles Conference, Cited by: §II.
  • [20] T. Nguyen, Z. Qiu, M. Cao, T. H. Nguyen, and L. Xie (2019) Single landmark distance-based navigation. IEEE Transactions on Control Systems Technology, pp. 1–8. External Links: ISSN 1063-6536, 1558-0865, 2374-0159 Cited by: §II.
  • [21] L. Paull, J. Tani, H. Ahn, J. Alonso-Mora, L. Carlone, M. Cap, Y. F. Chen, C. Choi, J. Dusek, and Y. Fang (2017) Duckietown: an open, inexpensive and flexible platform for autonomy education and research. In IEEE International Conference on Robotics and Automation (ICRA), pp. 20–25. Cited by: §IV-C, §IV.
  • [22] A. Prorok and A. Martinoli (2014) Accurate indoor localization with ultra-wideband using spatial models and collaboration. The International Journal of Robotics Research 33 (4), pp. 547–568. Cited by: §II.
  • [23] J. O. Reis, P. T. M. Batista, P. Oliveira, and C. Silvestre (2018) Source localization based on acoustic single direction measurements. IEEE Transactions on Aerospace and Electronic Systems 54 (6), pp. 2837–2852. External Links: ISSN 0018-9251, 1557-9603, 2371-9877 Cited by: §I.
  • [24] A. Ross and J. Jouffroy (2005) Remarks on the observability of single beacon underwater navigation. In Proc. Intl. Symp. Unmanned Unteth. Subm. Tech, Cited by: §II, §II, §III-A.
  • [25] Q. Shi, X. Cui, S. Zhao, J. Wen, and M. Lu (2019) Range-only collaborative localization for ground vehicles. In Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, pp. 2063–2077. Cited by: §III-A.
  • [26] R. G. Valenti, I. Dryanovski, and J. Xiao (2015) Keeping a good attitude: a quaternion-based orientation filter for IMUs and MARGs. Sensors 15 (8), pp. 19302–19330. Cited by: §III-D.
  • [27] S. van der Helm, M. Coppola, K. N. McGuire, and G. C. de Croon (2019) On-board range-based relative localization for micro air vehicles in indoor leader–follower flight. Autonomous Robots, pp. 1–27. Cited by: §II.
  • [28] O. J. Woodman (2007) An introduction to inertial navigation. Technical report University of Cambridge, Computer Laboratory. Cited by: §II.
  • [29] L. Yao, Y. A. Wu, L. Yao, and Z. Z. Liao (2017) An integrated IMU and UWB sensor based indoor positioning system. In 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–8. External Links: ISBN 978-1-5090-6299-7 Cited by: §II.
  • [30] YCHIOT (Wenzhou Yanchuang IoT Technology) UWB modules. External Links: Link Cited by: §IV.