Graph Optimization Approach to Localization with IMU and Ultra-Wideband Measurements

02/28/2018 · Chen Wang, et al. · Nanyang Technological University

An ultra-wideband (UWB) aided localization system is presented. Existing filter-based methods using UWB require a kinetic model, which is generally hard to obtain because of the complex structure of robots. This paper proposes a graph optimization approach to localization with inertial measurement unit (IMU) and UWB measurements, which converts the localization problem into an optimization problem on a Lie manifold so that a kinetic model can be avoided. The method makes full use of the measurement data to localize a robot with acceptable computational complexity. This is achieved by minimizing the trajectory error based on several constrained equations arising from the different types of sensor measurements. Our first contribution is graph realization on a manifold instead of in Euclidean space for UWB-based systems, where a new type of edge is defined and created in order to apply existing graph optimization tools. Our second contribution is an online approach that enables deployment on robots with ultra-low-power processors. Experiments under a variety of scenarios verify the stability of the method and demonstrate that the algorithm achieves a much higher localization accuracy than the filter-based methods.

I Introduction

Accurate, efficient and reliable localization plays an important role in real-time robot applications such as path following, target searching and obstacle avoidance. However, existing localization technologies are difficult or expensive to deploy directly in some application scenarios. For example, the widely used global positioning system (GPS) can only be applied in outdoor environments due to non-line-of-sight (NLOS) blockage in indoor environments. Vision-based real-time simultaneous localization and mapping (SLAM) technologies [1, 2, 3] suffer from unacceptable drift without odometer correction and need significant computational resources for dense mapping, which makes them unsuitable for ultra-low-power processors. WiFi-based localization [4, 5, 6] suffers from large accuracy fluctuations caused by signal variation, and its low localization accuracy makes it inapplicable to robots such as unmanned aerial vehicles (UAVs). Motion capture systems provide millimeter-level accuracy and enable challenging tasks such as formation and swarming [7, 8], but they are very expensive and only applicable to small-scale spaces.

An alternative method that utilizes ultra-wideband (UWB) communication has attracted researchers' attention due to its robustness to multipath and non-line-of-sight effects. Robots carrying UWB modules are able to exchange information and calculate their relative distances to other UWB modules (referred to as anchors when their positions are known) by measuring the time of arrival (TOA), which is then used to estimate the positions of the robots. UWB modules can also be carried by other robots that may not have information about their locations.

However, it is still difficult to apply the existing algorithms directly on low-power processors for real-time applications. First, many algorithms leveraging optimization, e.g., [9, 10, 11, 12], assume that the mobile robots receive multiple measurements concurrently. Neglecting the minor time differences between consecutive measurements introduces estimation error, especially when the system is highly dynamic and operates in real time. For example, the triangulation [13] and multidimensional scaling (MDS) algorithms [11, 12] determine the location of a point by forming triangles to it from known points. Second, some algorithms account for such minor time differences but need an accurate kinetic model. Representative examples are moving horizon estimation [10] and the filter-based methods, including the extended Kalman filter (EKF) and the particle filter [14, 15, 16, 17, 18, 19].

An obvious issue is that an accurate kinetic model is hard to obtain due to the complex structure of robots, and a simplified or linearized kinetic model degrades performance or even causes divergence. Since a filter only utilizes the current measurement and a priori estimation, it may diverge due to short-term measurement loss. Last, the recent trend towards machine-learning-based methods has stimulated a new wave of research, but it is generally still difficult to achieve good real-time performance [20, 21]. Moreover, learning-based algorithms need to be pre-trained, which is sometimes infeasible. These challenges open space for accurate, reliable, flexible, and portable localization techniques.

To this end, we propose a novel framework for range-based localization that leverages pose graph optimization. It jointly imposes trajectory smoothness and range constraints on a sliding pose trajectory window, which has several advantages. First, a robot kinetic model is avoided, which makes the system flexible and portable. Second, the accuracy is dramatically improved, especially in scenarios where existing methods do not work well. Third, since more measurements are involved in the trajectory estimation, the proposed framework is robust to outliers and even short-term measurement loss. Last but not least, extensive experiments demonstrate that the proposed framework is lightweight and can be easily deployed on micro low-power real-time systems. In summary, the main contributions of this paper are:

  1. A range-based localization framework is proposed based on a pose graph that jointly optimizes the trajectory smoothness and range constraints on a sliding trajectory window.

  2. An algorithm is designed for outlier rejection, which makes our system robust to NLOS effects.

  3. Our algorithm is implemented in the robot operating system (ROS). The experiments demonstrate its stability and portability, as well as its high accuracy, especially in the altitude direction, which is quite challenging for existing methods due to an asymmetrical anchor distribution.

II Related Works

The localization algorithms using UWB fall into three main categories, i.e., filtering, machine learning, and geometry optimization, which are reviewed respectively below.

II-A Filtering Methods

Due to their simplicity and efficiency, filtering methods are prevalent in engineering and have many practical applications. Since the range measurement model is non-linear, many UWB localization systems rely on the extended Kalman filter (EKF) and the particle filter. For instance, an EKF estimator is applied to estimate the positions of objects using range measurements to fixed UWB anchors [14, 15]. To improve stability, a sustained EKF estimator initialized by a trilateration algorithm is used to aid the navigation of quadcopters [16]. Combined with non-linear regression, an EKF-based UWB localization system is able to support multi-UAV formation in GPS-denied environments [17]. To improve accuracy, the EKF estimator is also applied to fuse information from UWB, IMU, and vision sensors [18]. To investigate the effects of collaboration on relative positioning, UWB together with a relative range and bearing model is developed, resulting in a unified localization technique based on the particle filter [19].

Despite their efficiency, the drawbacks of filtering methods are also obvious: an accurate kinetic model of the robot may not be available, and the state estimation is sensitive to measurement outliers. Moreover, we find that filtering methods do not perform well when the UWB anchors are distributed asymmetrically, which is the usual case in practical applications due to structural constraints and the difficulty of placing anchors in some locations such as the ceiling.

II-B Machine Learning Methods

Machine learning methods have an advantage in localization and target tracking due to their adaptability to uncertain and dynamic environments. Morelande [22] used a Bayesian algorithm to localize a large number of nodes with only a few anchor nodes, which can also handle incomplete data sets by exploiting prior knowledge and probabilities. To investigate neural-network-based localization, Shareef [23] compared the performance of localization schemes using the multi-layer perceptron (MLP), radial basis function (RBF) networks and recurrent neural networks (RNN). In their simulations, the RBF network achieved the best localization accuracy, while the MLP had the lowest computational and memory requirements. To avoid using specialized ranging hardware or assisting mobile devices, the support vector machine (SVM) has been used to localize a wireless sensor network based on connectivity information alone [24]; the localization error can be upper-bounded by any small threshold given an appropriate training data size. In addition, support vector regression based localization and decision tree based localization [25, 26] also achieve good results.

However, learning-based algorithms need to be pre-trained, which is sometimes infeasible, and their real-time performance is still limited. Moreover, the computational and memory resources required for training on large amounts of data may restrict their application in complex environments.

II-C Geometry Optimization Methods

The literature on geometry optimization methods focuses on finding Euclidean positions for the vertices of a graph given a set of edge lengths. The main problems are: the existence of a unique solution relies on the topology of the graph [27]; communication links suffer from temporary failures when the nodes are moving [28]; the limited communication channel restricts the application of distributed localization algorithms [28]; and the noise on the range measurements compounds the difficulty of finding a correct solution.

A theory of network localization in terms of graph rigidity is proposed in [29] to find a unique solution. However, incorrect estimates may arise in the presence of measurement noise. To account for measurement noise, Wang [30] proposes a noise model to capture the noise characteristics of UWB measurements. They use a maximum likelihood estimator for pre-localization in Cartesian coordinates and for obtaining the noise covariance matrix, which is then used to update the state estimate recursively. However, this algorithm still relies on a simplified kinetic model and is limited to 2-D localization. To improve efficiency, Franco [31] proposes a method for calibration-free, infrastructure-free localization in sensor networks that circumvents a priori calibration. However, this method estimates multiple fixed anchors rather than moving nodes. Other theoretical works such as distributed localization address the geometric relations between robots and fixed anchors [32, 33], but they assume that the mobile robots receive multiple measurements concurrently. Neglecting the minor time differences between consecutive measurements introduces estimation error, especially when the system is highly dynamic and operates in real time.

Fig. 1: The localization system. One robot moves in a 2-D plane or 3-D space, and UWB anchors are placed at fixed positions. The robot is equipped with a UWB module. Each edge in the graph represents a constrained equation. At each time instant, we can construct a range constrained equation and a trajectory smoothness constrained equation. If the robot receives an NLOS measurement, the corresponding range constrained equation and trajectory smoothness constrained equation are not added to the cost function.

III Preliminary

A manifold is a topological space that locally resembles Euclidean space [34]. A group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The Lie group SE(3) is a kind of manifold [34]. Considering robot motion in 3-D space, the poses of the robot can be regarded as rigid transformations, which belong to the Lie group SE(3). A rigid transformation $T \in SE(3)$ can be represented by:

$T = \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix}$ (1)

where $R \in SO(3)$ is the rotation matrix and $t \in \mathbb{R}^3$ is the translation vector. For the rotation matrix $R$, we can obtain its corresponding vector $\phi \in \mathbb{R}^3$ through the exponential map:

$R = \exp(\phi^{\wedge})$ (2)

where $\phi^{\wedge}$ is the skew-symmetric matrix of $\phi$. A transformation $T$ can likewise be represented by a vector $\xi = [\phi^{\top}, \rho^{\top}]^{\top} \in \mathbb{R}^6$ via the exponential map, i.e.,

$T = \exp(\xi^{\wedge})$ (3)

Therefore, the mapping function $f$ from a transformation to its vector representation can be defined as:

$f(T) = \log(T)^{\vee} = \xi$ (4)

Different from translations, the rotation matrices span a non-Euclidean space. Therefore, localization based on transformations becomes an optimization problem on the manifold, and the function $f$ can be applied to build the relationship between transformations. For example, the relative transformation between two poses $T_i$ and $T_j$, satisfying $T_{ij} = T_i^{-1} T_j$, can be represented by the corresponding vector $\xi_{ij}$, i.e.,

$\xi_{ij} = f(T_i^{-1} T_j)$ (5)
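For concreteness, the following minimal Python sketch implements the exponential and logarithm maps of (2)-(4) using SciPy's rotation utilities; the helper names (skew, se3_exp, se3_log) and the example values are illustrative and not part of the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(v):
    """Skew-symmetric matrix v^ such that (v^) w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def _left_jacobian(phi):
    """Matrix V relating the translation part of xi to the translation of T."""
    theta = np.linalg.norm(phi)
    S = skew(phi)
    if theta < 1e-8:
        return np.eye(3) + 0.5 * S
    return (np.eye(3)
            + (1 - np.cos(theta)) / theta**2 * S
            + (theta - np.sin(theta)) / theta**3 * S @ S)

def se3_exp(xi):
    """Map xi = [phi, rho] in R^6 to a 4x4 transform T = exp(xi^), as in (3)."""
    phi, rho = xi[:3], xi[3:]
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(phi).as_matrix()   # exp(phi^), as in (2)
    T[:3, 3] = _left_jacobian(phi) @ rho
    return T

def se3_log(T):
    """Inverse mapping f(T) = log(T)^v = xi, as in (4)."""
    phi = Rotation.from_matrix(T[:3, :3]).as_rotvec()
    rho = np.linalg.solve(_left_jacobian(phi), T[:3, 3])
    return np.concatenate([phi, rho])

# Relative pose between two poses, represented as a vector as in (5).
T_i = se3_exp(np.array([0.1, 0.0, 0.0, 1.0, 0.0, 0.0]))
T_j = se3_exp(np.array([0.0, 0.2, 0.0, 0.0, 1.0, 0.0]))
xi_ij = se3_log(np.linalg.inv(T_i) @ T_j)
```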

IV Problem Formulation

IV-A UWB-based Localization

The robot position at time instant $t_i$ is denoted by $\mathbf{p}_i$, so that the robot trajectory can be denoted by $\mathcal{P} = \{\mathbf{p}_1, \dots, \mathbf{p}_n\}$. Our UWB ranging algorithm uses two-way TOA measurements to calculate the distance. For a localization system with fixed UWB anchors, the robot equipped with a UWB module at position $\mathbf{p}_i$ is able to range to one of the fixed UWB anchors at position $\mathbf{a}_k$, as shown in Fig. 1. The corresponding distance measurement between $\mathbf{p}_i$ and $\mathbf{a}_k$ is denoted by $\hat{d}_i$, which is obtained by multiplying the speed of light with the measured time of flight:

$\hat{d}_i = \dfrac{c}{2}\left[(t_{\mathrm{rx}} - t_{\mathrm{tx}}) - t_{\mathrm{delay}}\right] + \eta$ (6)

where $t_{\mathrm{tx}}$ and $t_{\mathrm{rx}}$ are the synchronized time instants when the UWB ranging signal is sent and received relative to the robot's clock, respectively, and $t_{\mathrm{delay}}$ is the processing time delay of the UWB anchor. $\eta$ is assumed to be zero-mean Gaussian noise. At time instant $t_i$, the range constrained equation is defined as:

$e^{r}_{i} = \hat{d}_i - \lVert \mathbf{p}_i - \mathbf{a}_k \rVert$ (7a)
$E^{r}_{i} = w_r \, H_\delta(e^{r}_{i})$ (7b)

where $H_\delta(\cdot)$ is the Pseudo-Huber loss function, defined as $H_\delta(e) = \delta^2\left(\sqrt{1 + (e/\delta)^2} - 1\right)$, which approximates a quadratic function for small values of $e$ and a linear function for large values of $e$. This ensures that the derivatives are continuous for all degrees. $w_r$ is the weight term given by:

$w_r = \dfrac{\kappa_r}{\sigma_{\mathrm{UWB}}^{2}}$ (8)

where $\kappa_r$ is a free parameter and $\sigma_{\mathrm{UWB}}^{2}$ is the variance of the UWB measurement noise. Equation (7) only gives a constraint on a set of sparse points and fails to form a smooth trajectory. To solve this problem, a trajectory smoothness constrained equation between adjacent positions $\mathbf{p}_i$ and $\mathbf{p}_{i+1}$ is introduced, which is defined as:

$e^{s}_{i} = \lVert \mathbf{p}_{i+1} - \mathbf{p}_i \rVert$ (9a)
$E^{s}_{i} = w_s \, H_\delta(e^{s}_{i})$ (9b)

where $H_\delta(\cdot)$ is again the Pseudo-Huber loss function, and $w_s$ is the weight term designed as:

$w_s = \dfrac{\kappa_s}{\left(v_{\max}\,\Delta t_i / 3\right)^{2}}$ (10)

where $\Delta t_i$ is the time interval between positions $\mathbf{p}_i$ and $\mathbf{p}_{i+1}$, $\kappa_s$ is a free parameter, and $v_{\max}$ is the maximum velocity of the robot. The maximum distance between two consecutive positions is $v_{\max}\,\Delta t_i$. Therefore, the design of the weight term can be interpreted as the probabilistic distance between two consecutive positions by applying the 3-$\sigma$ rule under the assumption that the linear velocity of the robot follows a normal distribution.

Compared with existing graph optimization methods, the design of the trajectory smoothness edge addresses the drawback of neglecting the time difference between consecutive measurements. In addition, the trajectory smoothness edge (9) is constructed without the need for a robot kinetic model, which avoids the estimation error induced by a simplified or inaccurate kinetic model, a well-known issue in model-based moving horizon estimation [10] and filtering methods. Therefore, our proposed method can obtain higher accuracy in the 2-D plane and 3-D space than existing methods, as demonstrated later in the experiments.

At each time instant when a UWB measurement is received, we construct a range constrained equation (7) and a trajectory smoothness constrained equation (9); thus, a trajectory can be obtained by solving the following cost function:

$\mathcal{P}^{*} = \operatorname*{arg\,min}_{\mathcal{P}} \; E(\mathcal{P})$ (11a)
$E(\mathcal{P}) = \sum_{i} E^{r}_{i} + \sum_{i} E^{s}_{i}$ (11b)

where $\mathcal{P}^{*} = \{\mathbf{p}^{*}_1, \dots, \mathbf{p}^{*}_n\}$ is the estimated trajectory, with $\mathbf{p}^{*}_i$ being the estimated robot position at time instant $t_i$.
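As an illustration of how (7)-(11) can be assembled in practice, the sketch below stacks weighted range and smoothness residuals and hands them to SciPy's least-squares solver; the anchor layout, noise level, and the use of the built-in soft_l1 loss in place of the Pseudo-Huber function are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative setup: 4 fixed anchors and a short trajectory of n positions.
anchors = np.array([[0.0, 0.0, 0.5],
                    [5.0, 0.0, 1.0],
                    [5.0, 5.0, 0.5],
                    [0.0, 5.0, 2.0]])       # assumed anchor positions
sigma_uwb = 0.05                             # assumed UWB noise std (m)
v_max, dt = 2.0, 1.0 / 32.46                 # max speed (m/s) and UWB period (s)

def residuals(x, ranges, anchor_ids):
    """Stack the range residuals (7) and smoothness residuals (9),
    each scaled by the square root of its weight, as in (11)."""
    p = x.reshape(-1, 3)                     # trajectory {p_1, ..., p_n}
    # Range residuals: measured distance minus distance to the ranged anchor.
    d_pred = np.linalg.norm(p - anchors[anchor_ids], axis=1)
    r_range = (ranges - d_pred) / sigma_uwb
    # Smoothness residuals: distance between consecutive positions,
    # weighted by the 3-sigma bound on the displacement v_max * dt.
    r_smooth = np.linalg.norm(np.diff(p, axis=0), axis=1) / (v_max * dt / 3.0)
    return np.concatenate([r_range, r_smooth])

# Synthetic measurements along a short straight-line trajectory.
n = 20
truth = np.column_stack([np.linspace(1, 2, n), np.linspace(1, 1.5, n), np.full(n, 1.0)])
anchor_ids = np.arange(n) % len(anchors)     # sequential (round-robin) ranging
ranges = np.linalg.norm(truth - anchors[anchor_ids], axis=1) + np.random.normal(0, sigma_uwb, n)

x0 = np.tile(np.array([2.5, 2.5, 1.0]), n)   # initial guess
# loss='soft_l1' plays the role of the Pseudo-Huber loss H_delta in (7b)/(9b).
sol = least_squares(residuals, x0, args=(ranges, anchor_ids), loss='soft_l1')
trajectory = sol.x.reshape(-1, 3)
```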

IV-B Fusion with Other Information

The graph constructed in Section IV-A can only provide position estimates. In this section, an example of fusing orientation with the range and trajectory smoothness constraints is provided. In the experiments, an IMU [35] is used to verify this method. The robot pose at time instant $t_i$ is denoted by $T_i \in SE(3)$, so that the robot trajectory can be denoted by $\mathcal{T} = \{T_1, \dots, T_n\}$. For adjacent poses $T_i$ and $T_{i+1}$, we have the following relationship:

$T_{i,i+1} = T_i^{-1}\, T_{i+1}$ (12)

Then, the trajectory smoothness constrained equation between adjacent poses $T_i$ and $T_{i+1}$ is designed as:

$\mathbf{e}^{s}_{i} = f\!\left(T_i^{-1}\, T_{i+1}\right)$ (13a)
$E^{s}_{i} = H_\delta\!\left((\mathbf{e}^{s}_{i})^{\top}\, \Omega_s\, \mathbf{e}^{s}_{i}\right)$ (13b)

where $f$ is defined in (4), $H_\delta(\cdot)$ is the Pseudo-Huber loss function, and the matrix $\Omega_s$ is the weight term designed as:

$\Omega_s = \begin{bmatrix} \Sigma_{R}^{-1} & 0 \\ 0 & \Sigma_{t}^{-1} \end{bmatrix}$ (14)

where $\Sigma_{R}$ is the covariance matrix of the rotation and $\Sigma_{t}$ is the covariance matrix of the translation, designed as:

$\Sigma_{t} = \kappa_t \left(v_{\max}\,\Delta t_i / 3\right)^{2} I_3$ (15)

where $\Delta t_i$ is the time interval between $T_i$ and $T_{i+1}$, $v_{\max}$ is the maximum velocity of the robot, and $\kappa_t$ is a free parameter.

Since the acceleration of the robot can be obtained from the IMU, if we can also acquire the velocity of the robot from other sensors such as optical flow, the weight term in (10) and all diagonal elements of $\Sigma_{t}$ can be designed as:

$w = \dfrac{\kappa}{\left[\left(v_i\,\Delta t_i + \tfrac{1}{2}\,a_i\,\Delta t_i^{2}\right)/3\right]^{2}}$ (16)

where $v_i$ and $a_i$ are the robot velocity and acceleration at time instant $t_i$. The range constrained equation in (7) can also be rewritten in terms of the pose:

$e^{r}_{i} = \hat{d}_i - \lVert \mathrm{trans}(T_i) - \mathbf{a}_k \rVert$ (17a)
$E^{r}_{i} = w_r \, H_\delta(e^{r}_{i})$ (17b)

Therefore, combining (13) and (17), a trajectory can be obtained from the following cost function:

$\mathcal{T}^{*} = \operatorname*{arg\,min}_{\mathcal{T}} \; E(\mathcal{T})$ (18a)
$E(\mathcal{T}) = \sum_{i} E^{r}_{i} + \sum_{i} E^{s}_{i}$ (18b)

where $\mathcal{T}^{*} = \{T^{*}_1, \dots, T^{*}_n\}$ is the estimated trajectory, with $T^{*}_i$ being the estimated robot pose at time instant $t_i$.
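To illustrate the pose-level smoothness edge of (12)-(13), the sketch below evaluates a weighted relative-pose residual for two adjacent poses; it uses the rotation-vector parameterization as an approximation of the exact SE(3) logarithm, and the weight values and names are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_smoothness_residual(R_i, p_i, R_j, p_j, Omega_sqrt):
    """Smoothness residual between adjacent poses, in the spirit of (12)-(13a):
    the relative pose (R_i^T R_j, R_i^T (p_j - p_i)) mapped to a 6-vector
    [rotation vector, relative translation], scaled by a square-root weight."""
    phi = Rotation.from_matrix(R_i.T @ R_j).as_rotvec()   # rotation part of the relative pose
    rho = R_i.T @ (p_j - p_i)                             # translation part of the relative pose
    return Omega_sqrt @ np.concatenate([phi, rho])

# Illustrative weight: tighter on rotation than on translation.
dt, v_max = 1.0 / 32.46, 2.0
sigma_rot, sigma_trans = 0.05, v_max * dt / 3.0
Omega_sqrt = np.diag([1.0 / sigma_rot] * 3 + [1.0 / sigma_trans] * 3)

R_i, p_i = np.eye(3), np.array([1.0, 1.0, 1.0])
R_j = Rotation.from_rotvec([0.0, 0.0, 0.02]).as_matrix()
p_j = np.array([1.03, 1.0, 1.0])
r = pose_smoothness_residual(R_i, p_i, R_j, p_j, Omega_sqrt)  # small for smooth motion
```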

V Optimization

We have proposed two cost functions, (11) and (18), for localization. Cost function (11) is optimized in Euclidean space, while cost function (18) is optimized on the Lie manifold.

Assume that the residuals in (7), (9), (13) and (17) are re-weighted by their corresponding weight terms. Then, the constrained equations in (7), (9), (13) and (17) can be rewritten in the common form:

$E_{\ast} = H_\delta(r_{\ast})$ (19)

where the subscript $\ast$ stands for one of the constraint types above and $r_{\ast}$ is the corresponding re-weighted residual.

V-A Optimization in Euclidean Space

From (7), (9) and (19), we know that the re-weighted residuals are functions of the robot positions. Stacking the positions of the trajectory into a vector $\mathbf{x}$, it is easy to obtain the first-order Taylor expansion of each residual around an initial guess $\breve{\mathbf{x}}$:

$r_{\ast}(\breve{\mathbf{x}} + \Delta\mathbf{x}) \approx r_{\ast}(\breve{\mathbf{x}}) + J_{\ast}\, \Delta\mathbf{x}$ (20)

where the subscript $\ast$ stands for the range or smoothness constraint, $J_{\ast}$ is the Jacobian of $r_{\ast}$ evaluated at $\breve{\mathbf{x}}$, and $\Delta\mathbf{x}$ is an increment of $\mathbf{x}$. Then, we can obtain:

$E_{\ast}(\breve{\mathbf{x}} + \Delta\mathbf{x}) \approx c_{\ast} + 2\,\mathbf{b}_{\ast}^{\top} \Delta\mathbf{x} + \Delta\mathbf{x}^{\top} H_{\ast}\, \Delta\mathbf{x}$ (21)

where $c_{\ast} = r_{\ast}^{\top} r_{\ast}$, $\mathbf{b}_{\ast} = J_{\ast}^{\top} r_{\ast}$, and $H_{\ast} = J_{\ast}^{\top} J_{\ast}$. Therefore, for cost function (11), we have:

$E(\breve{\mathbf{x}} + \Delta\mathbf{x}) \approx c + 2\,\mathbf{b}^{\top} \Delta\mathbf{x} + \Delta\mathbf{x}^{\top} H\, \Delta\mathbf{x}$ (22)

where $c = \sum_{\ast} c_{\ast}$, $\mathbf{b} = \sum_{\ast} \mathbf{b}_{\ast}$, and $H = \sum_{\ast} H_{\ast}$. Taking the derivative with respect to the increment $\Delta\mathbf{x}$ and setting it to zero, we have:

$H\, \Delta\mathbf{x} = -\mathbf{b}$ (23)

However, if the matrix $H$ is singular, the increment $\Delta\mathbf{x}$ cannot be obtained uniquely. The Levenberg-Marquardt method [36] introduces a damping factor $\lambda$ to solve this problem:

$(H + \lambda I)\, \Delta\mathbf{x} = -\mathbf{b}$ (24)

The optimized solution is obtained by:

$\mathbf{x}^{*} = \breve{\mathbf{x}} + \Delta\mathbf{x}$ (25)

where $\breve{\mathbf{x}}$ is the initial guess. The matrix $H + \lambda I$ is sparse and can be decomposed by LU or QR decomposition. The optimization process (20)-(25) continues until the maximum number of iterations is reached; then the estimated trajectory is obtained.
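A compact numerical sketch of the damped update (20)-(25) is shown below. It forms H and b from a finite-difference Jacobian and applies the Levenberg-Marquardt step on a toy trilateration problem; the fixed damping factor and dense solve are simplifications for illustration, not the paper's solver.

```python
import numpy as np

def numerical_jacobian(res_fn, x, eps=1e-6):
    """Finite-difference Jacobian of the stacked residual vector."""
    r0 = res_fn(x)
    J = np.zeros((r0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (res_fn(x + dx) - r0) / eps
    return J

def levenberg_marquardt(res_fn, x0, lam=1e-3, max_iter=10):
    """Iterate the damped normal equations (24)-(25): (H + lam*I) dx = -b."""
    x = x0.copy()
    for _ in range(max_iter):
        r = res_fn(x)
        J = numerical_jacobian(res_fn, x)
        H = J.T @ J                      # Gauss-Newton approximation of the Hessian
        b = J.T @ r
        dx = np.linalg.solve(H + lam * np.eye(x.size), -b)
        x = x + dx                       # update as in (25)
    return x

# Toy example: trilaterate a single 3-D position from four range measurements.
anchors = np.array([[0, 0, 0.5], [5, 0, 1], [5, 5, 0.5], [0, 5, 2]], dtype=float)
truth = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.05, 4)
res = lambda p: ranges - np.linalg.norm(anchors - p, axis=1)
estimate = levenberg_marquardt(res, x0=np.array([2.5, 2.5, 1.5]))
```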

V-B Optimization on Manifold

The cost function (18) on the manifold cannot be optimized in Euclidean space directly. The idea is to map the cost function (18) into Euclidean space through the mapping function (4). Each pose $T_i$ can be represented by its vector $\xi_i = f(T_i)$, so that the cost function (18) can be written as:

$\Xi^{*} = \operatorname*{arg\,min}_{\Xi} \; E(\Xi)$ (26a)
$E(\Xi) = \sum_{i} E^{r}_{i}(\xi_i) + \sum_{i} E^{s}_{i}(\xi_i, \xi_{i+1})$ (26b)

Therefore, similar to the process (20)-(25), the estimated trajectory can be obtained in Euclidean space. Then, the estimated poses can be recovered through the mapping in (4).

V-C Outlier Rejector

UWB localization may be affected by NLOS scenarios that induce multipath signal components. NLOS measurements lead to unsmooth and inaccurate localization. In this paper, an outlier rejector is designed to reject NLOS measurements, so that only LOS measurements are used for localization.

In our proposed method, each time a new LOS ranging measurement is acquired, a range constrained equation and a trajectory smoothness constrained equation are added to the cost function; the system then optimizes the trajectory and obtains the updated estimate.

It is preferred that the robot starts in an initial environment without NLOS measurements, and the outlier rejector is designed to start working after the first few distance measurements have been received. Therefore, at time instant $t_n$, the robot has obtained the estimated positions $\{\mathbf{p}^{*}_1, \dots, \mathbf{p}^{*}_n\}$. At time instant $t_{n+1}$, the robot receives a new distance measurement $\hat{d}_{n+1}$ from one of the fixed UWB anchors $\mathbf{a}_k$. The distance measurement is considered an NLOS measurement and rejected if the following condition is satisfied:

$\left|\, \hat{d}_{n+1} - \lVert \mathbf{p}^{*}_n - \mathbf{a}_k \rVert \,\right| > \gamma + \dfrac{v_{\max}}{f_{\mathrm{UWB}}}$ (27)

where $\mathbf{p}^{*}_n$ is the latest estimated position at $t_n$, $\gamma$ is the NLOS parameter, $v_{\max}$ is the maximum velocity of the robot, and $f_{\mathrm{UWB}}$ is the frequency of the UWB sensor. At time instant $t_{n+1}$, the outlier rejector checks the distance measurement $\hat{d}_{n+1}$. If $\hat{d}_{n+1}$ is not rejected, a range constrained equation and a trajectory smoothness constrained equation for position $\mathbf{p}_{n+1}$ are added to the cost function; the system then optimizes the trajectory, obtains the updated estimated positions, and updates the latest position. An example is shown in Fig. 1: at one time instant the robot receives an NLOS measurement, so the corresponding range constrained equation and trajectory smoothness constrained equation are not added to the cost function, and the corresponding position is not optimized.

The robot is trapped in an extremely bad environment if the outlier rejector keeps rejecting consecutive incoming distance measurements. When using our proposed algorithm, the robot is required to avoid moving through such extremely bad environments.
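A minimal sketch of the rejection test in (27) follows; the threshold structure (an NLOS parameter plus the largest displacement in one ranging period) matches the description above, while the parameter values and names are assumptions.

```python
import numpy as np

def is_nlos(d_new, p_latest, anchor, gamma=0.3, v_max=2.0, f_uwb=32.46):
    """Reject a new range measurement if it deviates from the distance
    predicted by the latest estimated position by more than the NLOS
    parameter plus the largest possible displacement in one ranging period."""
    predicted = np.linalg.norm(p_latest - anchor)
    return abs(d_new - predicted) > gamma + v_max / f_uwb

# Example: a measurement inflated by multipath is flagged and skipped.
p_latest = np.array([2.0, 3.0, 1.0])
anchor = np.array([0.0, 0.0, 0.5])
d_clean, d_nlos = 3.64, 5.10
print(is_nlos(d_clean, p_latest, anchor))   # False: keep, add edges to the graph
print(is_nlos(d_nlos, p_latest, anchor))    # True: reject, do not add edges
```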

V-D Sliding Trajectory Window

Considering that the computation increases as the number of constrained equations increases, a sliding trajectory window is designed to reduce the computational requirement. At each time instant we optimize the trajectory within the sliding window instead of the whole trajectory, as shown in Fig. 1, where only the latest positions in the sliding trajectory window are optimized. In this case, the cost function (11) is restricted to the positions inside the window, so the number of constrained equations in the cost function decreases accordingly and only the corresponding part of the trajectory is re-estimated. For the window size shown in Fig. 1, only the latest positions inside the window are optimized at each time instant. We also apply the sliding trajectory window to the cost function (18).
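The bookkeeping of the sliding window can be kept, for example, in a bounded deque of the most recent states, re-optimizing only that slice at each update. The sketch below shows the window management only; optimize_window is a placeholder for solving (11) restricted to the window, and the window size is illustrative.

```python
from collections import deque
import numpy as np

WINDOW_SIZE = 10

window = deque(maxlen=WINDOW_SIZE)   # holds (position_guess, range, anchor_id)
fixed_trajectory = []                # positions that have slid out of the window

def optimize_window(window):
    """Placeholder for solving cost function (11) over the window only."""
    return [p for p, _, _ in window]  # in practice: run the LM solver on these states

def on_uwb_measurement(d_new, anchor_id, last_estimate):
    """Append the newest state, drop the oldest one, and re-optimize the window."""
    if len(window) == window.maxlen:
        fixed_trajectory.append(window[0][0])          # oldest state leaves the window
    window.append((np.array(last_estimate), d_new, anchor_id))
    refined = optimize_window(window)
    for i, p in enumerate(refined):                    # write back refined positions
        window[i] = (p, window[i][1], window[i][2])
    return refined[-1]                                 # latest position estimate
```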

Fig. 2: Real-time localization system on ROS platform.

V-E Computational Complexity

In our proposed method, graph optimization, the outlier rejector and the sliding trajectory window are combined efficiently to offer accurate localization. Suppose that $N$ is the total number of distance measurements received by the robot and $I$ is the maximum number of iterations used for the Levenberg-Marquardt method. Without the sliding trajectory window, the computational complexity grows with the full trajectory length $N$. After applying the sliding trajectory window, the complexity depends only on the window size $W$, i.e., the number of positions in the window. Therefore, the computational complexity decreases greatly because $W$ is far smaller than $N$. This is verified in the experiments shown in Tables II-IV with different window sizes.
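To make the effect of the window size concrete, the toy benchmark below times a dense solve of the damped normal equations (24) for a full-trajectory problem versus a window-sized one; the random Jacobian and dense solver are illustrative stand-ins, since the actual system exploits sparsity.

```python
import time
import numpy as np

def solve_time(num_states, dim=3, trials=5):
    """Time solving (H + lam*I) dx = -b for a problem with num_states positions."""
    n = num_states * dim
    rng = np.random.default_rng(0)
    J = rng.standard_normal((2 * n, n))          # stand-in Jacobian of the residuals
    H = J.T @ J + 1e-3 * np.eye(n)               # damped normal matrix, as in (24)
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    for _ in range(trials):
        np.linalg.solve(H, -b)
    return (time.perf_counter() - t0) / trials

print("window of 10 states :", solve_time(10))
print("full 300 states     :", solve_time(300))  # markedly slower per update
```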

VI Estimation Error Analysis

In this section, we analyze the estimation error of the proposed method. We choose cost function (11) for the analysis; the analysis for cost function (18) is similar. From (22), the change in cost function (11) produced by an increment can be obtained by:

(28)

where the remainder denotes the high-order nonlinear terms and $\breve{\mathbf{x}}$ is the initial guess.

From Lemma 2.4 in [36], the damping factor $\lambda$ is chosen such that $H + \lambda I$ is positive definite; then the cost is monotonically decreasing. Considering the measurement noise, the cost function can be rewritten as:

(29)

where $\varepsilon(\mathbf{x})$ is the noise term and $\bar{E}(\mathbf{x})$ is the cost function in the absence of noise. Suppose that $\mathbf{x}^{*} = \bar{\mathbf{x}}^{*} + \Delta\mathbf{e}$, with $\bar{\mathbf{x}}^{*}$ being the optimal solution in the absence of noise and $\Delta\mathbf{e}$ being the estimation error. Then we can obtain the following equation:

(30)

where $\boldsymbol{\epsilon}$ collects the noise-induced terms. The Hessian matrix $H$ is real and symmetric, so there exists an orthogonal matrix $Q$ such that $H = Q \Lambda Q^{\top}$, where $\Lambda$ is a diagonal matrix denoted by $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_m)$. Thus,

(31)

where $\Delta\mathbf{e}' = Q^{\top} \Delta\mathbf{e}$ and $\boldsymbol{\epsilon}' = Q^{\top} \boldsymbol{\epsilon}$. Then, equation (31) becomes:

(32)

If the Hessian is positive definite, i.e., $\lambda_{\min} > 0$, then the norm of the estimation error satisfies:

(33)

As shown in Fig. 1, each position has at least three constrained equations, which contribute to convergence to the truth. The localization workflow is shown in Fig. 2.
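The chain of steps behind the bound in (30)-(33) can be sketched as follows; this is a reconstruction under the stated assumptions (linearized optimality conditions and a positive definite Hessian), not the paper's exact derivation.

```latex
% Sketch: noise-to-error bound via the eigendecomposition of H (assumptions noted above).
\begin{align*}
  H\,\Delta\mathbf{e} &= \boldsymbol{\epsilon},
      & &\text{noise-perturbed optimality condition, cf.\ (30)} \\
  Q^{\top} H Q &= \Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_m),
      & &\text{$H$ real symmetric, $Q$ orthogonal, cf.\ (31)} \\
  \Delta e'_j &= \epsilon'_j / \lambda_j,
      \quad \Delta\mathbf{e}' = Q^{\top}\Delta\mathbf{e},\;
      \boldsymbol{\epsilon}' = Q^{\top}\boldsymbol{\epsilon},
      & &\text{componentwise solution, cf.\ (32)} \\
  \|\Delta\mathbf{e}\| &= \|\Delta\mathbf{e}'\|
      = \Big(\textstyle\sum_j (\epsilon'_j/\lambda_j)^2\Big)^{1/2}
      \le \frac{\|\boldsymbol{\epsilon}\|}{\lambda_{\min}},
      & &\text{if } \lambda_{\min} > 0,\ \text{cf.\ (33)}
\end{align*}
```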

VII Experimental Results

VII-A Hardware

We use a UAV to test the performance of the proposed localization algorithm. The quadrotor UAV consists of four rotors configured in a cross shape, as shown in Fig. 3, and is equipped with a UWB module, an IMU sensor and a Pixhawk autopilot.

The UWB transceivers used in our experiments are P440 modules from Time Domain, shown in Fig. 3. Our UWB ranging algorithm uses two-way TOA measurements to calculate the distance as in (6). The large bandwidth of the UWB signal, from 3.1 GHz to 5.3 GHz, provides relatively steady measurements over its ranging area with a small ranging error. The IMU sensor is a myAHRS+ (attitude heading reference system).

Fig. 3: Experiment environment and platform.

VII-B Experimental Environment

The UAV tests were carried out in the area shown in Fig. 3. The four anchors are non-coplanar and placed at fixed, known positions, and the initial position of the UAV is known. The NLOS parameter in (27) and the remaining parameter of the outlier rejector are chosen accordingly. For the system parameters, the noise variances in (8) and (14) are obtained from the outputs of the UWB and IMU sensors. The selections of the weight term and covariance matrix are presented in (10) and (14).

| UAV | Proposed Method (2-D) | Proposed Method (3-D) | EKF (3-D) | UKF (3-D) | Nonlinear Regression (3-D) | MDS (3-D) |
|---|---|---|---|---|---|---|
| Mean Error of x (m) | 0.015 | 0.023 | 0.114 | 0.062 | 0.102 | - |
| Mean Error of y (m) | 0.014 | 0.022 | 0.123 | 0.066 | 0.116 | - |
| Mean Error of z (m) | 0.024 | 0.077 | 0.353 | 0.232 | 0.346 | - |
| Translation Error (m) | 0.032 | 0.083 | 0.391 | 0.249 | 0.379 | 0.698 |
TABLE I: Position estimation 2-norm error comparison

In our experiments, the UAV sends requests to the four fixed UWB anchors sequentially to get the distance measurements. The maximum velocity of the UAV is 2 m/s. The frequency of the IMU (100.3 Hz) is about three times that of the UWB (32.46 Hz). The ground truth is provided by a VICON system with millimeter-level accuracy. In order to analyze the localization accuracy, the mean and root mean square error (RMSE) of the translation are given by:

$\mathrm{mean} = \dfrac{1}{N}\sum_{i=1}^{N} \lVert \mathbf{e}_i \rVert$ (34)
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} \lVert \mathbf{e}_i \rVert^{2}}$ (35)

where $\mathbf{e}_i$ is the estimation error at time instant $t_i$.
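The error metrics in (34)-(35) can be computed directly from matched estimated and ground-truth trajectories, for example as below (array shapes and names are illustrative):

```python
import numpy as np

def translation_errors(estimated, ground_truth):
    """Mean and RMSE of the translation error norms, as in (34)-(35).
    Both inputs are (N, 3) arrays of positions at matched time instants."""
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return err.mean(), np.sqrt(np.mean(err ** 2))

# Example with a small synthetic offset.
gt = np.zeros((100, 3))
est = gt + np.random.normal(0, 0.03, gt.shape)
mean_err, rmse = translation_errors(est, gt)
```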

Since the IMU and UWB measurements may not be obtained at the same time instant, a time synchronizer filter is used to synchronize incoming IMU and UWB measurements. At each time instant when a UWB measurement is received, we obtain a ranging measurement for constructing the range constrained equation (17); the IMU measurement whose time stamp is closest to the UWB time stamp is then chosen for constructing the trajectory smoothness constrained equation (13). Since the IMU has a much higher data rate, the remaining IMU measurements are not used.
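ROS's message_filters package provides approximate time synchronization for this purpose; a standalone sketch of the same nearest-timestamp pairing is shown below, with the tolerance value as an assumption.

```python
import numpy as np

def pair_nearest_imu(uwb_stamps, imu_stamps, tol=0.02):
    """For each UWB time stamp, pick the index of the IMU sample whose time
    stamp is closest; pairs farther apart than `tol` seconds are dropped."""
    imu_stamps = np.asarray(imu_stamps)
    pairs = []
    for i, t in enumerate(uwb_stamps):
        j = int(np.argmin(np.abs(imu_stamps - t)))
        if abs(imu_stamps[j] - t) <= tol:
            pairs.append((i, j))          # (UWB index, matched IMU index)
    return pairs

# Example: 32.46 Hz UWB stamps matched against 100.3 Hz IMU stamps.
uwb_t = np.arange(0, 1, 1 / 32.46)
imu_t = np.arange(0, 1, 1 / 100.3)
matched = pair_nearest_imu(uwb_t, imu_t)
```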

Fig. 4: The UAV moved along a circle or a rectangle in a 2-D plane. In the left figure, the red line is the ground truth from the VICON system, and the blue line is the estimate using only UWB measurements. The right figure is the corresponding box plot.

VII-C Comparison with Existing Methods

The UAV moved along a circle or a rectangle in our experiments. Localization in the 2-D plane is based on the cost function in (11), and localization in 3-D space on the cost function in (18).

From more than 50 experiments in the 2-D plane, one result is chosen and shown in Fig. 4, where the ground truth and estimated trajectory are shown as red and blue lines, respectively. The mean error and root mean square error of the translation over all experiments are 3.2 and 1.7 centimeters, respectively. The corresponding box plot of the translation error in the 2-D plane shows that about 75 percent of translation errors are within 4 centimeters.

Fig. 5: The UAV moved in 3-D space. In the left figure, the red line is the ground truth from the VICON system, and the blue line is the estimate combining UWB and IMU measurements. The right figure is the corresponding box plot.

From more than 50 experiments in 3-D space, one result is shown in Fig. 5. The corresponding box plot of the translation error in 3-D space shows that about 75 percent of translation errors are within 8 centimeters. In the comparison with existing methods implemented in our lab [11, 16, 17], the same devices and the same experimental environment are used, so the comparison is fair and convincing. The translation error of multidimensional scaling [11] is about 70 centimeters. The comparison with EKF/UKF and nonlinear regression methods, which also use UWB-IMU data fusion, is presented in Table I. The mean errors of the existing methods are about 6 to 12 centimeters in the x and y directions and about 25 to 35 centimeters in the z direction, whereas the mean errors of our proposed algorithm are about 2.3 and 2.2 centimeters in the x and y directions and about 7.7 centimeters in the z direction, which is much more accurate than the EKF/UKF and nonlinear regression methods.

Next, we analyze two important parameters: the window size and the maximum number of iterations. Then, the outlier rejector experiment and the comparison with a barometer are presented.

Fig. 6: Outlier rejector experiment.

VII-D Parameter Selection

From the analysis of computational complexity, we know that there are two important parameters to be analyzed: the window size and the maximum number of iterations. Their effects on localization accuracy are analyzed on a 2.2 GHz Intel Core i7 processor.

VII-D1 Window Size

The window size determines the number of latest positions in the sliding trajectory window that are optimized. The statistics of the estimation error and the update rate of localization with different window sizes in the 2-D plane and 3-D space are shown in Tables II-IV. It is obvious that a larger window size increases the computation and decreases the update rate of localization.

VII-D2 Maximum Number of Iterations

The maximum number of iterations also determines the computational complexity. From Table II and Table III, we can see that better localization accuracy is obtained if we increase the maximum number of iterations.

From Table II, we know that the mean error of the translation is not affected when the update rate of the localization is close to or higher than that of the UWB (32.46 Hz). In this situation, the mean error of the translation is within 3.4 centimeters. However, some UWB measurements are not utilized for optimization when the update rate of the localization is much lower than that of the UWB, and the mean error of the translation becomes greater (4.4 centimeters with 300 positions in the moving window). Similar results can also be found in Table III and Table IV.

When applied to ultra-low-power processors such as the Raspberry Pi and the Intel Atom x7-Z8750, the maximum number of iterations is suggested to be set to 10 and the window size to 10. In this case, the update rate of the proposed method is about 33.2 Hz on the Raspberry Pi and 176.5 Hz on the Intel Atom x7-Z8750, both higher than the UWB update rate.

Fig. 7: Comparison between the barometer sensor (blue line) and our method (red line) in estimating altitude.
| Window Size | 10 | 20 | 50 | 100 | 300 |
|---|---|---|---|---|---|
| Mean Error (m) | 0.032 | 0.033 | 0.033 | 0.034 | 0.044 |
| Standard Deviation (m) | 0.018 | 0.019 | 0.018 | 0.018 | 0.028 |
| Update Rate (Hz) | 418.8 | 285.6 | 118.12 | 53.44 | 20.42 |
TABLE II: Translation error versus window size with a maximum of 10 iterations, in the 2-D plane

| Window Size | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|
| Mean Error (m) | 0.028 | 0.028 | 0.028 | 0.028 | 0.091 |
| Standard Deviation (m) | 0.017 | 0.018 | 0.017 | 0.018 | 0.037 |
| Update Rate (Hz) | 242.6 | 140.1 | 63.54 | 30.58 | 16.52 |
TABLE III: Translation error versus window size with a maximum of 20 iterations, in the 2-D plane

| Window Size | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|
| Mean Error (m) | 0.067 | 0.067 | 0.067 | 0.069 | 0.167 |
| Standard Deviation (m) | 0.068 | 0.066 | 0.067 | 0.068 | 0.098 |
| Update Rate (Hz) | 254.5 | 134.1 | 61.92 | 33.82 | 17.83 |
TABLE IV: Translation error versus window size with a maximum of 20 iterations, in 3-D space

VII-E Outlier Rejector Experiment

An outlier rejector experiment with four fixed UWB anchors and one fixed robot is shown in Fig. 6. The distance measurements from the four anchors should be almost constant if there are no NLOS measurements. In this experiment, anchors UWB100-UWB103 have NLOS measurements; for example, anchor UWB103 has NLOS measurements during two separate time intervals. The outlier rejector rejected these NLOS measurements, and only the LOS measurements were used for localization.

The positions of the four fixed UWB anchors (UWB100-UWB103) are fixed and known. The experimental results show that the robot estimate remains accurate and close to the ground truth throughout the experiment.

VII-F Comparison with Barometer

The general way to estimate altitude is to use sensors such as a barometer or a laser rangefinder. It can be seen from Fig. 7 that the barometer reading is noisy, whereas our proposed estimate is smooth due to the trajectory smoothness constrained equation between adjacent positions. In addition, our proposed method can also be applied over uneven ground, where a downward laser rangefinder cannot. Therefore, our method represents a notable advance in range-based localization: it not only obtains promising altitude estimates that existing methods cannot achieve, but also performs better than the barometer sensor.

VIII Conclusion

In this paper, a range-based localization system was proposed based on a graph optimization approach. The proposed algorithm does not require a kinetic model and can be implemented in real time on a ROS platform. It has a sliding trajectory window that can be adjusted, together with the maximum number of iterations, to suit the available computing resources. Compared with existing localization methods, better localization accuracy is obtained in the 2-D plane and 3-D space, especially in the altitude direction. High accuracy can be guaranteed as long as the update rate of the localization is not lower than that of the UWB, which can be achieved by tuning the maximum number of iterations or the window size.

References

  • [1] J. Engel, V. Koltun, and D. Cremers, "Direct Sparse Odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-14, Apr. 2017.
  • [2] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," European Conference on Computer Vision, Springer, pp. 834-849, 2014.
  • [3] A. Handa, T. Whelan, J. McDonald, and A. J. Davison, "A Benchmark for RGB-D Visual Odometry, 3D Reconstruction and SLAM," International Conference on Intelligent Robots and Systems, 2014.
  • [4] Z. Chen, H. Zou, H. Jiang, Q. Zhu, Y. Soh, and L. Xie, "Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization," Sensors, vol. 15, no. 1, pp. 715-732, Jan. 2015.
  • [5] S. Sen, J. Lee, K.-H. Kim, and P. Congdon, "Avoiding Multipath to Revive Inbuilding WiFi Localization," Proceedings of the Annual International Conference on Mobile Systems, Applications, and Services, ACM, pp. 1-14, May 2013.
  • [6] Z.-A. Deng, Y. Hu, J. Yu, and Z. Na, "Extended Kalman Filter for Real Time Indoor Localization by Fusing WiFi and Smartphone Inertial Sensors," Micromachines, vol. 6, no. 4, pp. 523-543, Apr. 2015.
  • [7] S. Lupashin, M. Hehn, M. W. Mueller, A. P. Schoellig, M. Sherback, and R. D'Andrea, "A Platform for Aerial Robotics Research and Demonstration: The Flying Machine Arena," Mechatronics, vol. 24, no. 1, pp. 41-54, Feb. 2014.
  • [8] A. Kushleyev, D. Mellinger, and V. Kumar, "Towards a Swarm of Agile Micro Quadrotors," Autonomous Robots, vol. 35, no. 4, pp. 287-300, 2013.
  • [9] Z. Lin, M. Fu, and Y. Diao, "Distributed Self Localization for Relative Position Sensing Networks in 2D Space," IEEE Transactions on Signal Processing, vol. 63, no. 14, pp. 3751-3761, Jun. 2015.
  • [10] A. Haber and M. Verhaegen, "Moving Horizon Estimation for Large-Scale Interconnected Systems," IEEE Transactions on Automatic Control, vol. 58, no. 11, pp. 2834-2847, Oct. 2013.
  • [11] W. Cui, C. Wu, W. Meng, B. Li, Y. Zhang, and L. Xie, "Dynamic Multidimensional Scaling Algorithm for 3-D Mobile Localization," IEEE Transactions on Instrumentation and Measurement, vol. 65, no. 12, pp. 2853-2865, Oct. 2016.
  • [12] Y. Shang and W. Ruml, "Improved MDS-based Localization," in IEEE INFOCOM 2004, pp. 2640-2651.
  • [13] J. S. Esteves, A. Carvalho, and C. Couto, "Generalized Geometric Triangulation Algorithm for Mobile Robot Absolute Self-Localization," in IEEE International Symposium on Industrial Electronics, 2003, pp. 1-6.
  • [14] A. Ledergerber, M. Hamer, and R. D'Andrea, "A Robot Self-Localization System Using One-Way Ultra-Wideband Communication," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3131-3137, Oct. 2015.
  • [15] M. W. Mueller, M. Hamer, and R. D'Andrea, "Fusing Ultra-Wideband Range Measurements with Accelerometers and Rate Gyroscopes for Quadrocopter State Estimation," in Robotics and Automation (ICRA), 2015 IEEE International Conference on, IEEE, 2015, pp. 1730-1736.
  • [16] K. Guo, Z. Qiu, C. Miao, A. H. Zaini, C.-L. Chen, W. Meng, and L. Xie, "Ultra-Wideband-Based Localization for Quadcopter Navigation," Unmanned Systems, vol. 4, no. 1, pp. 23-34, Jan. 2016.
  • [17] T. M. Nguyen, A. H. Zaini, K. Guo, and L. Xie, "An Ultra-Wideband-based Multi-UAV Localization System in GPS-denied Environments," The International Micro Air Vehicle Conference and Competition, 2016.
  • [18] A. Benini, A. Mancini, and S. Longhi, "An IMU/UWB/Vision-based Extended Kalman Filter for Mini-UAV Localization in Indoor Environment Using 802.15.4a Wireless Sensor Network," Journal of Intelligent and Robotic Systems, vol. 70, pp. 461-476, Aug. 2012.
  • [19] A. Prorok and A. Martinoli, "Accurate Indoor Localization with Ultra-Wideband Using Spatial Models and Collaboration," The International Journal of Robotics Research, vol. 33, no. 4, pp. 547-568, Apr. 2014.
  • [20] T. Van Nguyen, Y. Jeong, H. Shin, and M. Z. Win, "Machine Learning for Wideband Localization," IEEE Journal on Selected Areas in Communications, vol. 33, no. 7, pp. 1357-1380, Jun. 2015.
  • [21] M. A. Alsheikh, S. Lin, D. Niyato, and H.-P. Tan, "Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications," IEEE Communications Surveys and Tutorials, vol. 16, no. 4, pp. 652-656, 2015.
  • [22] M. R. Morelande, B. Moran, and M. Brazil, "Bayesian Node Localisation in Wireless Sensor Networks," in 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2008, pp. 2545-2548.
  • [23] A. Shareef, Y. Zhu, and M. Musavi, "Localization Using Neural Networks in Wireless Sensor Networks," in Proceedings of the International Conference on Mobile and Ubiquitous Systems: Networking and Services, 2008, pp. 1-7.
  • [24] D. A. Tran and T. Nguyen, "Localization in Wireless Sensor Networks Based on Support Vector Machines," IEEE Transactions on Parallel and Distributed Systems, pp. 981-994, 2008.
  • [25] W. Kim, J. Park, and H. J. Kim, "Target Localization Using Ensemble Support Vector Regression in Wireless Sensor Networks," in Wireless Communications and Networking Conference, 2010, pp. 1-5.
  • [26] Z. Merhi, M. Elgamel, and M. Bayoumi, "A Lightweight Collaborative Fault Tolerant Target Localization System for Wireless Sensor Networks," IEEE Transactions on Mobile Computing, vol. 8, no. 12, pp. 1690-1704, Oct. 2009.
  • [27] B. Hendrickson, "Conditions for Unique Graph Realizations," SIAM Journal on Computing, pp. 65-84, 1992.
  • [28] P. Barooah and J. P. Hespanha, "Estimation on Graphs from Relative Measurements," IEEE Control Systems, pp. 57-74, 2007.
  • [29] J. Aspnes, T. Eren, D. K. Goldenberg, A. S. Morse, W. Whiteley, Y. R. Yang, B. Anderson, and P. N. Belhumeur, "A Theory of Network Localization," IEEE Transactions on Mobile Computing, pp. 1663-1678, 2006.
  • [30] X. Wang, M. Fu, and H. Zhang, "Target Tracking in Wireless Sensor Networks Based on the Combination of KF and MLE Using Distance Measurements," IEEE Transactions on Mobile Computing, vol. 11, no. 4, pp. 567-576, Feb. 2012.
  • [31] C. Di Franco, A. Prorok, N. Atanasov, B. Kempke, P. Dutta, V. Kumar, and G. J. Pappas, "Calibration-free Network Localization Using Non-line-of-sight Ultra-wideband Measurements," in Proceedings of the 16th ACM/IEEE International Conference, New York, USA: ACM Press, 2017, pp. 235-246.
  • [32] Y. Diao and Z. Lin, "Barycentric Coordinate Based Distributed Localization Algorithm for Sensor Networks," IEEE Transactions on Signal Processing, pp. 4760-4771, 2014.
  • [33] U. A. Khan, S. Kar, and J. M. F. Moura, "DILAND: An Algorithm for Distributed Sensor Localization with Noisy Distance Measurements," IEEE Transactions on Signal Processing, pp. 1940-1947, 2009.
  • [34] B. C. Hall, "Lie Groups, Lie Algebras, and Representations," Springer, 2015.
  • [35] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, "IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation," Robotics: Science and Systems XI, 2015.
  • [36] X.-L. Luo, L.-Z. Liao, and H. W. Tam, "Convergence Analysis of the Levenberg-Marquardt Method," Optimization Methods and Software, vol. 22, no. 4, pp. 659-678, Aug. 2007.