I Introduction
Accurate localization and tracking are fundamental services for an autonomous system. Many options are available for localization: GPS for open outdoor areas, motion capture systems in a laboratory, visual systems. However, they are generally limited by the environment or by time-consuming, labor-intensive setup work and expensive infrastructure [15]. Ultra-wideband (UWB) technology provides another avenue to accurately locate devices both indoors and outdoors. Most available UWB systems are based on multi-anchor arrangements, which need some labor-intensive setup work, like mounting anchors and calibration. Furthermore, it is often difficult to set up such systems outdoors or in an unstructured environment. We believe that a localization system which can accurately track devices without complex setup is highly desirable.
Tracking with a single anchor is attractive because one can easily drop an anchor in the environment as a reference. However, it is also quite challenging: a single source of distance information is generally too limited for tracking. Current research in underwater robotics proposes fusing the distance to an acoustic anchor with odometry, but it usually relies on very expensive sensors (e.g., high-accuracy IMUs, Doppler anemometers) [6, 23]. Our goal is to enable single-anchor localization with low-cost UWB and IMU sensors. Robots and IoT devices can easily be equipped with these two sensors, while velocity sensors (encoders, Doppler anemometers, etc.) are much rarer and generally too expensive for IoT devices. Moreover, UWB is becoming pervasive: at the time of writing, it is present in the latest Apple iPhone [13] for spatial awareness.
Getting odometry from low-cost IMUs is challenging. Velocity, integrated from the acceleration measured by the IMU, drifts quickly and cannot be used for odometry. Unfortunately, velocity is crucial for the observability of a mobile robot localization system, especially for single-anchor localization [16]. To solve this problem, we propose a novel algorithm to estimate velocity by combining UWB and IMU measurements. An EKF fuses the range, orientation, and speed estimates to estimate the robot pose. Simulations and real-world experiments validate our algorithm. We believe our system can unlock the localization of a large number of devices in practical applications.
II Related Work
Using UWB technology to locate devices has recently become popular. Most applications are based on a multi-anchor configuration [22, 17, 29, 7, 18, 19]. We aim at simplifying the infrastructure to a single anchor as reference. Many researchers have studied single-anchor localization, especially for underwater robotics [6, 24]. Underwater robots usually use acoustic sensors, top-of-the-line IMUs, and expensive Doppler sensors. Guo et al. [8] study a cooperative relative localization algorithm: they propose an optimization-based single-beacon localization algorithm to obtain an initial position for collaborative localization. However, they only observe a sine-like moving pattern, and they require a velocity sensor. Similarly, a recent work by Nguyen et al. [20] uses odometry measurements from optical flow sensors. In our study, we use only UWB and a low-cost IMU, dropping the need for a velocity sensor.
To better understand the single-anchor localization problem, which is nonlinear because of its distance and angle measurements, an observability study is necessary. Building on the groundwork of Hermann and Krener [9], researchers have studied the observability of range-only localization systems, from a single fixed anchor [24] to relative range and bearing measurements between two mobile robots [16]. Batista et al. [1, 12] used an augmented state to linearize the problem, enabling classical observability analysis methods. A recent study [27] explores leader–follower experiments for drones with UWB ranging between robots, also with velocity measurements from either a motion capture system or optical flow. However, all these studies assume that velocities are available as a direct measurement, which we do not have.
Although we do not use velocity sensors, the system still needs velocity information to become observable. Obtaining a reliable velocity from a low-cost IMU or UWB alone is challenging: the integration of acceleration drifts dramatically with low-cost MEMS IMU sensors [14, 28]. For position estimation, IMUs are therefore often combined with other sensor measurements, such as GPS, multi-anchor UWB [10], and cameras [11].
One straightforward way to estimate velocity is from the distance change to a UWB anchor while the robot moves along a radial line from the anchor. This situation rarely persists in reality, but the range change pattern can still be used as a speed estimator. We propose a method based on simple geometric relations under the assumption that the robot moves at constant velocity. The estimated speed, coupled with data from the IMU gyroscope, provides a velocity estimate that keeps the system observable. Finally, we use an EKF to fuse range, orientation, and velocity estimates to obtain the robot pose. The contributions of this paper are:

a speed estimator using only UWB range information, which renders an otherwise unobservable system observable;

an error analysis for the speed estimator that informs the design of the sensor fusion algorithm;

a loosely coupled tracking algorithm fusing IMU, UWB, and the proposed speed estimation;

simulation and realworld experiments to validate our methodology.
III Proposed Method
III-A System Definition and Observability Analysis
In this paper, we consider a robot moving on a 2D plane in proximity of a UWB anchor. The robot has a state vector $\mathbf{x} = [x\; y\; \theta\; v\; \omega]^T$, where $(x, y)$ are the coordinates of the robot, $\theta$ is the heading, and $v$ and $\omega$ are the linear and angular velocities. The system kinematics are described as:

$$\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega, \quad \dot{v} = a, \quad \dot{\omega} = \alpha \tag{1}$$

where $a$ and $\alpha$ are the linear and angular acceleration, respectively. The measurement functions are

$$h_1 = \sqrt{(x - x_a)^2 + (y - y_a)^2}, \qquad h_2 = \theta \tag{2}$$

where $(x_a, y_a)$ are the coordinates of the UWB anchor. $h_1$ is the range measurement function for the UWB sensor. $h_2$ is the heading measurement function: it takes the output orientation of a complementary filter, which fuses the measurements of the accelerometer, gyroscope, and magnetometer into a heading estimate.
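As a minimal sketch, the kinematics (1) and measurements (2) can be written in code; the Euler discretization and the example anchor position are our illustrative choices, not part of the paper:

```python
import numpy as np

def propagate(state, u, dt):
    """One Euler step of the kinematics (1).
    state = [x, y, theta, v, omega]; u = [a, alpha]."""
    x, y, th, v, w = state
    a, alpha = u
    return np.array([
        x + v * np.cos(th) * dt,   # x-dot = v cos(theta)
        y + v * np.sin(th) * dt,   # y-dot = v sin(theta)
        th + w * dt,               # theta-dot = omega
        v + a * dt,                # v-dot = a
        w + alpha * dt,            # omega-dot = alpha
    ])

def measure(state, anchor):
    """Measurement functions (2): UWB range to the anchor and heading."""
    x, y, th = state[0], state[1], state[2]
    xa, ya = anchor
    return np.array([np.hypot(x - xa, y - ya), th])
```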
In control theory, the observability of a system refers to the ability to reconstruct its initial state from the control inputs and outputs. For a linear time-invariant system, if the observability matrix is nonsingular, the system is observable [3]. For nonlinear systems, we follow the differential-geometry approach of Hermann and Krener [9].
To easily assess the impact of velocity, we first extend the measurement functions (2) to those of a typical system with linear and angular velocity measurements, as in (3), similar to [24]. In addition, we define the anchor position as the origin and map the distance measurement to $(x^2 + y^2)/2$ to simplify the derivation, as in [25]. Then we have:

$$h_1 = \frac{x^2 + y^2}{2}, \quad h_2 = \theta, \quad h_3 = v, \quad h_4 = \omega \tag{3}$$
We rewrite the model in (1) in control-affine form [9]:

$$\dot{\mathbf{x}} = f_0(\mathbf{x}) + f_1(\mathbf{x})\,a + f_2(\mathbf{x})\,\alpha$$

with state $\mathbf{x} = [x\; y\; \theta\; v\; \omega]^T$ and control input $\mathbf{u} = [a\; \alpha]^T$, where the vector fields on the state space are:

$$f_0 = [v\cos\theta\;\; v\sin\theta\;\; \omega\;\; 0\;\; 0]^T, \quad f_1 = [0\;0\;0\;1\;0]^T, \quad f_2 = [0\;0\;0\;0\;1]^T$$

Next we find the Lie derivatives of the observation functions along these vector fields. The zeroth-order Lie derivatives are $\mathcal{L}^0 h_1 = h_1$, $\mathcal{L}^0 h_2 = h_2$, $\mathcal{L}^0 h_3 = h_3$, and $\mathcal{L}^0 h_4 = h_4$, which are the same as the observation functions. Then we compute the first-order derivatives: $\mathcal{L}_{f_0} h_1 = v\,(x\cos\theta + y\sin\theta)$; $\mathcal{L}_{f_0} h_2 = \omega$; $\mathcal{L}_{f_1} h_3 = \mathcal{L}_{f_2} h_4 = 1$; all the other first-order Lie derivatives vanish or are constant.

We write the observation space $\mathcal{G}$ spanned by the Lie derivatives $\mathcal{L}^i h_j$. Note that all constant Lie derivatives are eliminated when computing the state derivatives as in (4). Finally, we compute the gradients of $\mathcal{G}$ with respect to the state and get:

$$\mathcal{O} = \begin{bmatrix} x & y & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ v\cos\theta & v\sin\theta & v\,(y\cos\theta - x\sin\theta) & x\cos\theta + y\sin\theta & 0 \end{bmatrix} \tag{4}$$

where $(x, y) \neq (0, 0)$ because the origin is occupied by the anchor.
$\mathcal{O}$ has full rank when $v \neq 0$ and $x\sin\theta - y\cos\theta \neq 0$, which are reasonable assumptions. If the robot is static, it is difficult to locate it from the range and the orientation alone. The second condition means that the robot moves along a radial line from the anchor, which rarely persists in practice. However, if we do not have velocity measurements, which is the situation we consider, the fourth row of (4) becomes $[0\; 0\; 0\; 0\; 0]$. The dimension of the observation space is reduced to four, and therefore the system does not meet the observability rank condition. Velocity is thus crucial for system observability, and we estimate it from inertial measurements and UWB ranging.
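The rank condition can be checked numerically; the following is a sketch with arbitrary test states of our choosing, not part of the paper's derivation:

```python
import numpy as np

def obs_matrix(x, y, th, v):
    """Gradient matrix (4) of the observation space for state
    [x, y, theta, v, omega] with measurements (3)."""
    return np.array([
        [x, y, 0.0, 0.0, 0.0],                       # grad h1
        [0.0, 0.0, 1.0, 0.0, 0.0],                   # grad h2
        [0.0, 0.0, 0.0, 1.0, 0.0],                   # grad h3
        [0.0, 0.0, 0.0, 0.0, 1.0],                   # grad h4
        [v * np.cos(th), v * np.sin(th),
         v * (y * np.cos(th) - x * np.sin(th)),
         x * np.cos(th) + y * np.sin(th), 0.0],      # grad L_f0 h1
    ])

# Moving robot, heading not radial: full rank (observable)
print(np.linalg.matrix_rank(obs_matrix(3.0, 4.0, 0.0, 1.0)))                  # 5
# Static robot (v = 0): rank deficient
print(np.linalg.matrix_rank(obs_matrix(3.0, 4.0, 0.0, 0.0)))                  # 4
# Motion along the radial line through the anchor: rank deficient
print(np.linalg.matrix_rank(obs_matrix(3.0, 4.0, np.arctan2(4.0, 3.0), 1.0))) # 4
```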
III-B Speed Estimation Model
By observing the range change pattern as a robot moves along a straight line, we propose a speed estimator based on simple geometric relations. As shown in Fig. 2, three pairs of range and time measurements, $(d_1, t_1)$, $(d_2, t_2)$, and $(d_3, t_3)$, are given when the robot passes points A, B, and C at constant velocity. $h$ is the virtual distance from the anchor to the motion line, and $e$ is the length from the starting point to the virtual intersection. Based on the Pythagorean theorem, we can derive an algebraic solution for the moving speed $v$:

$$d_i^2 = h^2 + \bigl(e + v\,(t_i - t_1)\bigr)^2, \quad i = 1, 2, 3 \tag{5}$$

From these three equations, we can solve for $h$, $e$, and $v$. As we are interested only in the velocity, we just show:

$$v^2 = \frac{(t_3 - t_1)\,(d_2^2 - d_1^2) - (t_2 - t_1)\,(d_3^2 - d_1^2)}{(t_2 - t_1)\,(t_3 - t_1)\,(t_2 - t_3)}$$

For simplicity, assuming the ranging measurements have a fixed frequency $f$, we have $t_2 - t_1 = t_3 - t_2 = 1/f$. Then

$$v = f\,\sqrt{\frac{d_1^2 + d_3^2 - 2 d_2^2}{2}} \tag{6}$$
The positive root is the current speed (as the kinematics model shows, the robot can only move forward). We present a simulation that includes ten stages with different configurations of linear and angular velocities, as shown in Fig. 3. The velocities change at the beginning of each stage and remain constant thereafter. The range measurements (cyan) are continuously fed into the estimator, and, as the figure shows, the velocity is correctly recovered (green); the changing pattern of the range in each stage reveals the speed value.

The peaks between stages are caused by velocity changes and are to be expected, as such changes break the constant-velocity assumption. Even though we cannot estimate the velocity correctly while it is suddenly changing, our algorithm does not lose its generality, as constant-velocity motion is still the predominant motion in most real-world scenarios: one can skip the speed estimate over these transition periods and still maintain a correct pose estimate. Furthermore, our sensor fusion algorithm also provides tolerance to velocity changes.
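Under the fixed-rate, constant-velocity assumption, (6) reduces to a few lines of code. The following is a sketch; the guard against a non-positive radicand, which measurement noise can produce, is our addition:

```python
import math

def estimate_speed(d1, d2, d3, f):
    """Speed from three consecutive UWB ranges sampled at rate f (Hz), Eq. (6)."""
    s = d1**2 + d3**2 - 2.0 * d2**2
    if s <= 0.0:
        return None  # noise made the radicand non-positive; no estimate
    return f * math.sqrt(s / 2.0)

# Anchor at the origin, robot at (3, 4) moving along +x at 2 m/s, 10 Hz ranging
d1 = math.hypot(3.0, 4.0)
d2 = math.hypot(3.2, 4.0)
d3 = math.hypot(3.4, 4.0)
print(estimate_speed(d1, d2, d3, 10.0))  # ~2.0
```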
III-C Speed Estimator Error Analysis
UWB ranging is considered fairly accurate. This is true compared with Wi-Fi or Bluetooth technologies, which provide meter-level accuracy, but UWB only achieves decimeter-level accuracy. For instance, the DW1000 from Decawave [4] provides an accuracy on the order of ten centimeters using a two-way ranging (TWR) time-of-flight (TOF) protocol, which is still too noisy for computing the speed from range measurements directly. In this section, we analyze the error propagation for the speed estimator and design the speed estimation algorithm accordingly.
The range measurement model is expressed as $\tilde{d} = d + \epsilon$, where the measurement $\tilde{d}$ is the true range $d$ plus some noise $\epsilon$. We represent standard deviations as $\sigma$, e.g., $\sigma_{d_1}$ for the standard deviation of $d_1$. To determine the properties of the speed estimate computed from three range measurements, we compute the error propagation as in [2] (Ch. 4). For (6), we get:

$$\sigma_v^2 = \left(\frac{\partial v}{\partial d_1}\right)^{\!2}\sigma_{d_1}^2 + \left(\frac{\partial v}{\partial d_2}\right)^{\!2}\sigma_{d_2}^2 + \left(\frac{\partial v}{\partial d_3}\right)^{\!2}\sigma_{d_3}^2$$

this can be rewritten as:

$$\sigma_v^2 = \frac{f^4}{4v^2}\left(d_1^2\,\sigma_{d_1}^2 + 4 d_2^2\,\sigma_{d_2}^2 + d_3^2\,\sigma_{d_3}^2\right)$$

Since $d_1$, $d_2$, and $d_3$ are measurements from the same sensor, we have $\sigma_{d_1} = \sigma_{d_2} = \sigma_{d_3} = \sigma_d$, which is about 0.2 m for the UWB sensors in our situation. $f$ is a constant depending on the ranging update rate. Then we simplify the equation above as:

$$\sigma_v^2 = \frac{f^4\,\sigma_d^2}{4v^2}\left(d_1^2 + 4 d_2^2 + d_3^2\right)$$

namely,

$$\sigma_v = f\,\sigma_d\,\sqrt{\frac{d_1^2 + 4 d_2^2 + d_3^2}{2\left(d_1^2 + d_3^2 - 2 d_2^2\right)}} \tag{7}$$
From the above equation, we can see that if the three measurements $d_1$, $d_2$, and $d_3$ are similar, the denominator becomes small and the error becomes very large, a condition we want to avoid. Similar $d_1$, $d_2$, and $d_3$ can be caused by an excessively fast sample rate or by very slow motion.
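A direct evaluation of (7) illustrates the blow-up; the example numbers below are illustrative values of ours, not from the paper:

```python
import math

def speed_std(d1, d2, d3, f, sigma_d):
    """Standard deviation of the speed estimate, Eq. (7)."""
    s = d1**2 + d3**2 - 2.0 * d2**2
    if s <= 0.0:
        return float("inf")  # degenerate: no usable estimate
    return f * sigma_d * math.sqrt((d1**2 + 4.0 * d2**2 + d3**2) / (2.0 * s))

# Slow motion sampled at 10 Hz: consecutive ranges nearly equal,
# the denominator of (7) collapses and the deviation explodes
print(speed_std(5.00, 5.01, 5.03, 10.0, 0.2))
# Waiting until the range has changed by ~0.5 m between samples
# (lower effective rate) shrinks the deviation dramatically
print(speed_std(5.0, 5.5, 6.1, 2.0, 0.2))
```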
To avoid sequences of similar ranging measurements, we update the speed based on the range change, instead of updating at regular intervals as usual. In other words, the estimator is triggered to compute the speed only when the range measurement difference exceeds a given threshold. This way, the noise in the estimate can be effectively suppressed. Note that the update rate then depends on the speed of the robot and also on the direction of motion: if the robot moves quickly along a radial line from the anchor, the range difference quickly reaches the threshold and the update rate is high, and vice versa.
Furthermore, the standard deviation of the speed is positively correlated with the $d_2$ term in the numerator of (7), and $d_2$ is the intermediate measurement of the distance from the robot to the anchor. As Fig. 4 shows, the deviation of the direct speed estimation (blue) increases as the range increases; if the robot is very far from the anchor, the speed estimate becomes noisy. To solve this problem, we implement a variable-window-size filter, based on the distance from the robot to the anchor, to smooth the speed estimation. As Fig. 4 also shows, our variable-window speed estimation tracks the actual speed well.
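One possible realization of the range-dependent window is sketched below; the class name, window-growth rule, and constants are illustrative choices of ours, not the paper's:

```python
from collections import deque

def window_size(rng, base=3, gain=0.5, max_win=15):
    """Grow the averaging window with range, since sigma_v in (7)
    grows with the distance to the anchor. Constants are illustrative."""
    return min(max_win, base + int(gain * rng))

class VariableWindowFilter:
    """Moving average over a window whose size depends on the current range."""
    def __init__(self, max_win=15):
        self.buf = deque(maxlen=max_win)

    def update(self, v_raw, rng):
        """Smooth a raw speed estimate, choosing the window from the range."""
        self.buf.append(v_raw)
        recent = list(self.buf)[-window_size(rng):]
        return sum(recent) / len(recent)
```

Close to the anchor the filter reacts quickly; far away it trades latency for noise suppression.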
III-D Sensor Fusion System
The block diagram in Fig. 5 shows our localization system. As mentioned above, the system has two kinds of sensors: UWB sensors for range measurement and low-cost IMUs, with gyroscopes, accelerometers, and magnetometers, for orientation. The UWB range measurement is used in two pipelines: first, it goes directly to the EKF; at the same time, it is fed through a Kalman filter and then into the speed estimator. The speed estimate goes into the EKF (red line). The robot heading is estimated by a complementary filter [26] that provides a quaternion estimate of the orientation using an algebraic solution from the observation of the gravity and magnetic field vectors. Finally, the range measurements, robot heading, and speed estimate are fed into the EKF to estimate the robot pose.
Algorithm 1 outlines the approach in detail. Sensor readings from the IMU are fed into a complementary filter to obtain an accurate heading in environments free of magnetic interference, as shown in line 10. The UWB range and the computed heading are fed into an EKF for a classical state estimation of position, heading, and linear and angular velocities (lines 11–12 in Alg. 1).
The novelty of our system is the speed estimation, which corrects the velocity estimate in the EKF. As explained above, the UWB range measurements are fed into a separate Kalman filter to obtain smoothed range measurements (lines 13–14 in Alg. 1). When the change in the smoothed range exceeds the threshold and the robot moves on a linear trajectory, the speed estimator computes the speed, which is then used to correct the velocity estimated by the EKF (lines 15–19). A variable-window-size filter is applied to obtain a smooth speed estimate (lines 6–7), as discussed in Section III-C.
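The loosely coupled correction can be sketched as a standard EKF step in which the speed estimate is treated as an extra scalar observation of $v$. This is a simplified sketch of the idea behind Alg. 1; the noise values and test state are illustrative, not the paper's:

```python
import numpy as np

def ekf_predict(x, P, dt, Q):
    """Propagate state [x, y, theta, v, omega] and covariance (Euler step)."""
    px, py, th, v, w = x
    x_pred = np.array([px + v * np.cos(th) * dt, py + v * np.sin(th) * dt,
                       th + w * dt, v, w])
    F = np.eye(5)                                  # Jacobian of the motion model
    F[0, 2] = -v * np.sin(th) * dt; F[0, 3] = np.cos(th) * dt
    F[1, 2] =  v * np.cos(th) * dt; F[1, 3] = np.sin(th) * dt
    F[2, 4] = dt
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, hx, H, R):
    """Generic EKF measurement update: z is the measurement, hx = h(x)."""
    y = z - hx                                     # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Speed estimate from Section III-B used as a direct observation of v
H_speed = np.array([[0.0, 0.0, 0.0, 1.0, 0.0]])
x = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
P = np.eye(5)
x, P = ekf_update(x, P, np.array([2.0]), H_speed @ x, H_speed,
                  np.array([[1e-4]]))
print(x[3])  # v pulled toward the estimated speed, ~2.0
```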
IV Experiments
We validate our algorithm through simulations and real-robot experiments. We used two types of robots: a DJI M100 quadcopter and a Duckiebot [21]. The robots are equipped with Decawave DW1000-based [4] UWB modules [30]. The quadcopter experiment specifically validates our speed estimation algorithm and gives a quantitative localization error based on GPS. The ground mobile robot experiment shows that our algorithm can track very simple robots even along complex trajectories. From the error summary in Table I, we can see that our method improves the accuracy by more than a factor of two in both simulations and real-robot experiments. Please note that we use RMSE for the simulation and absolute trajectory error (ATE) for the real-robot experiments, because we have exact ground truth in simulation but not in the real-robot experiments.
Error (m)                Simulation (RMSE)   Drone Exp. (ATE)
Without speed estimator  1.73                2.81
With speed estimator     0.48                1.05
IV-A Simulation
In our simulations, we generate a random trajectory with a differential-drive robot kinematics model [5]. There are five stages of motion, with different heading and speed settings. The anchor is set at position (0, 0) and the robot starts at the point (10, 0). The range measurement is corrupted by white Gaussian noise with 0.2 m standard deviation, which is similar to the actual UWB measurement deviation. The noise added to the orientation has a standard deviation of 0.1 rad.
As the top of Fig. 9 shows, our method (green) can track the ground-truth velocity (gray) most of the time, which results in an accurate trajectory in Fig. 6. However, the vanilla EKF model (red) cannot recover from the drift errors accumulated during the first stage. This shows that our algorithm can correct the accumulated error as long as the speed estimator gives accurate estimates. More specifically, as the bottom of Fig. 9 shows, the RMSE of our method drops rapidly once the speed estimation becomes available (around 1200 steps). We reduce the RMSE of the vanilla EKF by more than 70% and achieve an accuracy of 0.48 m, which is impressive given the limited information.
IV-B Speed Estimation for a Flying Robot
We used a quadcopter (DJI M100) to validate our tracking and speed estimation in the real world, which also illustrates the potential for 3D applications. We programmed a triangular trajectory with a speed parameter of 2 m/s for the M100. First, we compare the estimated results with the velocity feedback from the DJI_SDK software. Fig. 11 shows that our algorithm gives correct speed estimates (red) based on the range measurements (cyan). Then we calculate the ATE between our estimated trajectory and the GPS trajectory. As Fig. 10 and Table I show, the trajectory from our algorithm is much closer to the GPS trajectory than that of the EKF without our speed estimator. With only a single-anchor setup, we obtain around 1 m ATE, a large improvement over the vanilla EKF's accuracy of 2.8 m. Please also note that the DJI velocity feedback in Fig. 11 serves only as a reference, not as ground truth: for the first ten seconds, the robot is hovering, yet the speed reported by the DJI_SDK is around 0.3 m/s.
IV-C Tracking of a Ground Mobile Robot
The ground robot platform is a simple indoor robot, the Duckiebot [21]. Our version of the Duckiebot has an RPi 2 onboard computer. We add a 9-DOF IMU and a UWB module to the robot, as shown in Fig. 1. The robot does not have wheel encoders to measure speed or displacement. We manually control the robot to drive a university-logo trajectory (TUM) on an outdoor basketball court and use our localization system to track the robot pose. Fig. 1 shows that the algorithm tracks the trajectory correctly. This experiment provides a qualitative evaluation, indicating that our algorithm can easily be applied to simple robots and IoT devices.
V Conclusion and Discussion
In this paper, we propose a localization algorithm for robots that are equipped with low-cost IMUs and UWB sensors in an environment configured with only a single UWB anchor. We estimate the speed from UWB range changes, which makes the system observable during constant-velocity motion. Our algorithm effectively reduces the accumulated errors by 60%. With this algorithm, a large number of devices can be localized, including IoT devices and cellphones with IMU and UWB sensors.
References
 [1] (2011) Single range aided navigation and source localization: observability and filter design. Systems & Control Letters 60 (8), pp. 665–673. External Links: ISSN 01676911 Cited by: §II.
 [2] (2010) Introduction to statistics and data analysis for physicists. Vol. 1, Desy Hamburg. Cited by: §IIIC.
 [3] (2011) Modern control systems. Pearson. Cited by: §IIIA.
 [4] (Website) External Links: Link Cited by: §IIIC, §IV.
 [5] (1994) Where am I?: sensors and methods for autonomous mobile robot positioning. Technical report Cited by: §IVA.
 [6] (2010) Single beacon navigation: localization and control of the MARES AUV. In OCEANS 2010 MTS/IEEE SEATTLE, pp. 1–9. Cited by: §I, §II.
 [7] (2017) Simultaneous localization and mapping for pedestrians using low-cost ultra-wideband system and gyroscope. In 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–8. External Links: ISBN 9781509062997 Cited by: §II.
 [8] (2017) Ultra-wideband based cooperative relative localization algorithm and experiments for multiple unmanned aerial vehicles in GPS-denied environments. International Journal of Micro Air Vehicles 9 (3), pp. 169–186. External Links: ISSN 17568293 Cited by: §II.
 [9] (1977) Nonlinear controllability and observability. IEEE Transactions on Automatic Control 22 (5), pp. 728–740. Cited by: §II, §IIIA, §IIIA.
 [10] (2009) Tightly coupled UWB/IMU pose estimation. In 2009 IEEE International Conference on Ultra-Wideband, pp. 688–692. External Links: ISBN 9781424429301 Cited by: §II.
 [11] (2011) Sensor fusion and calibration of inertial sensors, vision, ultra-wideband and GPS. Ph.D. Thesis, Linköping University Electronic Press. Cited by: §II.
 [12] (2016) Single range localization in 3d: observability and robustness issues. IEEE Transactions on Control Systems Technology 24 (5), pp. 1853–1860. External Links: ISSN 10636536, 15580865 Cited by: §II.
 [13] (2019) iPhone 11 technical specifications. External Links: Link Cited by: §I.
 [14] (2017) Using inertial sensors for position and orientation estimation. Foundations and Trends in Signal Processing 11 (1), pp. 1–153. External Links: ISSN 19328346, 19328354, 1704.06053 Cited by: §II.
 [15] (2015) Microsoft indoor localization competition: experiences and lessons learned. In ACM SIGMOBILE Mobile Computing and Communications Review, Vol. 18, pp. 24–31. External Links: ISSN 15591662 Cited by: §I.
 [16] (2005) Observability analysis for mobile robot localization. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1471–1476. Cited by: §I, §II.
 [17] (2017) Comparison of two sensor data fusion methods in a tightly coupled UWB/IMU 3d localization system. In 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), pp. 611–618. External Links: ISBN 9781538607749 Cited by: §II.
 [18] (2015) Fusing ultra-wideband range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1730–1736. External Links: ISBN 9781479969234 Cited by: §II.
 [19] (2016) An ultra-wideband-based multi-UAV localization system in GPS-denied environments. In 2016 International Micro Air Vehicles Conference, Cited by: §II.
 [20] (2019) Single landmark distance-based navigation. IEEE Transactions on Control Systems Technology, pp. 1–8. External Links: ISSN 10636536, 15580865, 23740159 Cited by: §II.
 [21] (2017) Duckietown: an open, inexpensive and flexible platform for autonomy education and research. In IEEE International Conference on Robotics and Automation (ICRA), pp. 20–25. Cited by: §IVC, §IV.
 [22] (2014) Accurate indoor localization with ultra-wideband using spatial models and collaboration. The International Journal of Robotics Research 33 (4), pp. 547–568. Cited by: §II.
 [23] (2018) Source localization based on acoustic single direction measurements. IEEE Transactions on Aerospace and Electronic Systems 54 (6), pp. 2837–2852. External Links: ISSN 00189251, 15579603, 23719877 Cited by: §I.
 [24] (2005) Remarks on the observability of single beacon underwater navigation. In Proc. Intl. Symp. Unmanned Unteth. Subm. Tech, Cited by: §II, §II, §IIIA.
 [25] (2019) Range-only collaborative localization for ground vehicles. In Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, pp. 2063–2077. Cited by: §IIIA.
 [26] (2015) Keeping a good attitude: a quaternion-based orientation filter for IMUs and MARGs. Sensors 15 (8), pp. 19302–19330. Cited by: §IIID.
 [27] (2019) On-board range-based relative localization for micro air vehicles in indoor leader–follower flight. Autonomous Robots, pp. 1–27. Cited by: §II.
 [28] (2007) An introduction to inertial navigation. Technical report University of Cambridge, Computer Laboratory. Cited by: §II.
 [29] (2017) An integrated IMU and UWB sensor based indoor positioning system. In 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–8. External Links: ISBN 9781509062997 Cited by: §II.
 [30] YCHIOT Wenzhou Yanchuang IOT Technology. External Links: Link Cited by: §IV.