Robust Navigation In GNSS Degraded Environment Using Graph Optimization

06/23/2018 ∙ by Ryan M. Watson, et al.

Robust navigation in urban environments has received a considerable amount of both academic and commercial interest over recent years. This is primarily due to large commercial organizations such as Google and Uber stepping into the autonomous navigation market. Most of this research has shied away from Global Navigation Satellite System (GNSS) based navigation. The aversion to utilizing GNSS data is due to the degraded nature of the data in urban environments (e.g., multipath, poor satellite visibility). This degradation makes it such that traditional GNSS positioning methods (e.g., extended Kalman filters, particle filters) perform poorly. However, recent advances in robust graph-theoretic sensor fusion methods, primarily applied to Simultaneous Localization and Mapping (SLAM) based robotic applications, can also be applied to GNSS data processing. This paper utilizes one such method, the factor graph, in conjunction with several robust optimization techniques to evaluate their applicability to robust GNSS data processing. The goals of this study are two-fold. First, for GNSS applications, we will experimentally evaluate the effectiveness of robust optimization techniques within a graph-theoretic estimation framework. Second, by releasing the software developed and data sets used for this study, we will introduce a new open-source front-end to the Georgia Tech Smoothing and Mapping (GTSAM) library for the purpose of integrating GNSS pseudorange observations.


Introduction

Traditionally, GNSS data is not utilized to its full potential for autonomously navigating vehicles in urban environments. This is largely ascribable to the possibility of GNSS observables being degraded (e.g., multipath, poor satellite geometry) when confronted with an urban environment. However, the inclusion of GNSS data in such systems would provide substantial information to their navigation algorithms. So, the ability to safely incorporate GNSS observables into existing inference algorithms, even when confronted with environments that have the potential to degrade GNSS observables, is of obvious importance.

To overcome the aforementioned issues, we will leverage the advances made within the robotics community surrounding graph-based simultaneous localization and mapping (SLAM) to efficiently and robustly process GNSS data. Within this community, the advances surrounding optimization have been made on two fronts: optimization efficiency and robustness. For robust graph optimization, the literature can be divided into two broad subsets: traditional M-Estimators [1], and more recent graph based robust methods [2, 3, 4]. This paper will evaluate the effectiveness of the mentioned robust optimization techniques when applied specifically to GNSS pseudorange data processing.

The remainder of this paper is organized as follows. First, the technical approach utilized for this study is described, which begins with a discussion on factor graph optimization and then evolves into a discussion on how to make that optimization more robust to erroneous data. Next, the experimental setup and collected data sets are discussed. Then, the results obtained using the previously described models and data sets are provided. Finally, the paper ends with concluding remarks and proposed steps for continued research.

I Technical Approach

This section provides a concise overview of the sensor fusion approach utilized in this study. For the reasons mentioned above, it was decided to use a graph-theoretic approach for sensor fusion as opposed to the more common Kalman [5] or particle filters [6]. To describe our approach, first, the factor graph [7] is discussed. Then, a discussion is provided on the incorporation of GNSS pseudorange observables into the factor graph framework. Finally, a discussion is provided on methods to make these graph-theoretic approaches more tolerant to measurement faults, specifically in the context of GNSS.

I-A Factor Graphs

The factor graph was proposed in [7] as a mathematical tool to factorize functions of many variables into smaller subsets. This formulation can be utilized to infer the posterior distribution of the GNSS based state estimation problem. The factorization is represented as a bipartite graph, G = (X, F, E), where there are two types of vertices: the states to be estimated, X, and the probabilistic constraints applied to those states, F. An edge exists between a state vertex and a factor vertex only if that factor is a constraint applied at that time-step. An example factor graph is depicted in Figure 1, where x_t represents the states to be estimated at time-step t (i.e., user position, user velocity, zenith tropospheric delay, GNSS receiver clock bias), l represents the constraint applied to the state by a measurement (i.e., a GNSS pseudorange observable), and m represents a probabilistic constraint applied between time-steps (e.g., incorporation of inertial navigation into the factor graph).


Fig. 1: Example factor graph for GNSS data processing

In the graph, e_i represents an error function, or probabilistic constraint, applied to the state at the specified time-step. When Gaussian noise is assumed, the error function is equivalent to the innovation residual of the traditional Kalman filter, as shown in Eq. 1. Utilizing this information, it is easy to see that the optimal state estimate can be calculated through a traditional non-linear least-squares formulation (e.g., Levenberg-Marquardt [8]) that minimizes the error over the graph, as shown in Eq. 2.

e_i(x_i) = z_i - h_i(x_i)   (1)

X̂ = argmin_X Σ_i ‖ z_i - h_i(x_i) ‖²_{Σ_i}   (2)
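To make Eqs. 1 and 2 concrete, the following sketch works through a hypothetical 2-D ranging example (not the paper's GNSS model): a position is estimated from range measurements to known anchors by minimizing the stacked residuals e_i(x) = z_i - h_i(x) with a few Gauss-Newton steps, the iteration underlying non-linear least-squares solvers such as Levenberg-Marquardt. All names and values here are illustrative.

```python
import numpy as np

# Hypothetical 2-D ranging example: estimate a position from ranges to
# known anchors by minimizing the stacked residuals of Eq. 1 via
# Gauss-Newton, the non-linear least-squares iteration behind Eq. 2.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])
z = np.linalg.norm(anchors - truth, axis=1)  # noise-free range measurements

def residuals(x):
    # e_i(x) = z_i - h_i(x), with h_i the predicted range to anchor i
    return z - np.linalg.norm(anchors - x, axis=1)

def jacobian(x):
    # d e_i / d x = (a_i - x) / ||a_i - x||
    d = anchors - x
    return d / np.linalg.norm(d, axis=1, keepdims=True)

x = np.array([5.0, 5.0])  # initial guess
for _ in range(10):
    J, e = jacobian(x), residuals(x)
    x = x - np.linalg.solve(J.T @ J, J.T @ e)  # Gauss-Newton step

print(np.round(x, 3))  # converges to the true position
```

In the factor graph setting, each row of the stacked residual corresponds to one factor vertex, and the optimizer solves the same normal equations over the whole graph.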

I-A1 Pseudorange Factor

With the general factor graph framework discussed, the discussion can now move to optimization for GNSS navigation applications. To construct the pseudorange factor, the dual-frequency, undifferenced GNSS observables are utilized. Because undifferenced data is being used, methods for mitigating other GNSS error sources (e.g., troposphere and ionosphere delays, and GNSS receiver clock bias) are incorporated in the measurement models.

To mitigate the ionospheric delay, the dispersive nature of the ionosphere is leveraged, and a linear combination of the GPS L1 and L2 frequencies (1575.42 MHz and 1227.60 MHz, respectively) is formed to produce ionospheric-free (IF) pseudorange measurements in order to eliminate the delay to first order [9]. The IF pseudorange combination can be seen in Eq. 3: where f_1 and f_2 are the L1 and L2 frequencies, and ρ_1 and ρ_2 are the pseudorange measurements on the L1 and L2 frequencies. The superscript j in Eq. 3 is used to designate the measurement between the platform and satellite j.

ρ_IF^j = ( f_1² ρ_1^j - f_2² ρ_2^j ) / ( f_1² - f_2² )   (3)

Using the IF combination, the pseudorange measurements are modeled as shown in Eq. 4: where r^j is the geometric range between the receiver and satellite j, δt_u is the receiver's clock bias, δt^j is the GNSS satellite's clock bias, T_z is the tropospheric delay in the zenith direction, M(el^j) is a user-to-satellite elevation-angle-dependent mapping function, δR^j is the relativistic correction, δPCO^j is the phase center offset between the orbit products and the propagating satellite's antenna, and δDCB^j is the correction term to mitigate the differential code bias. For this study, all of the above-mentioned models are incorporated through the utilization of GPSTk [10], which is an open-source GNSS software package. All additional un-modeled error components are included in Eq. 4 as the term ε^j.

ρ̂_IF^j = r^j + c·(δt_u - δt^j) + M(el^j)·T_z + δR^j + δPCO^j + δDCB^j + ε^j   (4)

Using the modeled pseudorange observable, we can construct the pseudorange factor as shown in Eq. 5. In Eq. 5, the superscript j designates the measurement between the platform and satellite j, ρ_IF^j is the observed pseudorange value, ρ̂_IF^j is the estimated pseudorange value as calculated in Eq. 4, and Σ^j is the uncertainty in the pseudorange observable.

e^j = ‖ ρ_IF^j - ρ̂_IF^j ‖²_{Σ^j}   (5)
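As an illustration of Eqs. 3 and 5, the following sketch forms the IF combination and a whitened pseudorange residual. The frequencies are the GPS L1/L2 values quoted above; the range, delay, and function names are purely illustrative, not part of the released software.

```python
import numpy as np

# Sketch of the ionosphere-free (IF) combination of Eq. 3 and the
# whitened residual used by the pseudorange factor of Eq. 5.
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1 and L2 frequencies (Hz)

def if_combination(p1, p2):
    """Ionosphere-free combination of L1/L2 pseudoranges (meters)."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

def pseudorange_residual(p_obs, p_modeled, sigma):
    """Whitened residual (observed minus modeled, scaled by sigma)."""
    return (p_obs - p_modeled) / sigma

# First-order ionospheric delay scales as 1/f^2, so a 5 m delay on L1
# appears as 5*(F1/F2)^2 m on L2; the IF combination removes it.
rho = 20.2e6  # illustrative geometric range (m)
p1 = rho + 5.0
p2 = rho + 5.0 * (F1 / F2) ** 2
p_if = if_combination(p1, p2)
print(abs(p_if - rho) < 1e-3)  # delay eliminated to first order
```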

I-B Fault Tolerance

Due to the degraded nature of the GNSS data in urban environments (i.e., subject to multipath and poor satellite visibility), methods to make the state estimation process more robust must be incorporated into the optimization routine. For this study, we will broadly classify these robust optimization techniques as traditional M-estimators and more recent graph-based techniques. Both methods are described in detail below.

I-B1 Traditional M-Estimators

The field of M-estimators dates back to the seminal paper published by Huber [11], which was later extended into a comprehensive survey on the subject [1]. The field of research related to M-estimation can be reduced to minimizing the influence of erroneous data by replacing the quadratic cost function with a modified cost function, ρ(e), which penalizes observables that do not conform to the user-defined observation model.

All cost functions can be classified as being redescending or non-redescending. A redescending cost function conforms to the property that lim_{|e|→∞} ψ(e) = 0, where ψ = ρ′ is the influence function, which implies that the weight approaches 0 as the magnitude of the residual approaches infinity. For this study, two M-estimators were selected: one M-estimator that is redescending, the Cauchy cost function [12], and one that is non-redescending, the Huber cost function [11].

The M-estimators utilized in this study are shown in greater detail in Table I. In Table I, ρ(e) represents the modified cost function, ψ(e) is the first derivative of the cost function with respect to the residual, w(e) = ψ(e)/e specifies the weight applied to corresponding entries in the information matrix, and k is the user-defined M-estimator kernel width.

M-Estimator   ρ(e)                                       ψ(e)                               w(e)
Huber         e²/2 if |e| ≤ k; k(|e| - k/2) otherwise    e if |e| ≤ k; k·sgn(e) otherwise   1 if |e| ≤ k; k/|e| otherwise
Cauchy        (k²/2)·log(1 + (e/k)²)                     e / (1 + (e/k)²)                   1 / (1 + (e/k)²)
TABLE I: Selected M-Estimators
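As a sketch of the weight column of Table I, the following implements the Huber and Cauchy weight functions w(e). The default kernel widths are commonly used tuning constants for unit-variance residuals, not values taken from this study.

```python
import numpy as np

# Weight functions w(e) from Table I, which scale entries of the
# information matrix; k is the M-estimator kernel width.
def huber_weight(e, k=1.345):
    # Non-redescending: weight decays like k/|e| but never reaches 0.
    e = np.abs(e)
    return np.where(e <= k, 1.0, k / e)

def cauchy_weight(e, k=2.3849):
    # Redescending: weight tends to 0 as |e| tends to infinity.
    return 1.0 / (1.0 + (np.asarray(e) / k) ** 2)

res = np.array([0.5, 3.0, 10.0])
print(huber_weight(res))   # large residuals are down-weighted
print(cauchy_weight(res))  # outliers receive nearly zero weight
```

In an iteratively reweighted least-squares loop, these weights would multiply the information (inverse covariance) of each measurement at every iteration.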

I-B2 Graph Based Approaches

Next, three more recent advances in robust optimization (switchable constraints, dynamic covariance scaling, and max-mixtures) are discussed in the context of robust GNSS optimization.

Switchable Constraints

Switchable constraints were first introduced by [2] as a method to reject false-positive loop-closure constraints in the simultaneous localization and mapping (SLAM) problem. Additionally, this method has been evaluated for its ability to mitigate the effect of multipath in urban environments [13, 14]. The robustness of this optimization technique is granted through the incorporation of a new switchable constraint for every factor of interest in the graph. The switchable constraint can be thought of as an observation weight that is optimized concurrently with the state estimates. The modified cost function that must be optimized to find the minimum error over the entire graph is shown in Eq. 6, where s^j is the switchable constraint, γ^j is the initial estimate for the switch constraint, and Ψ(s^j) is a real-valued function such that 0 ≤ Ψ(s^j) ≤ 1. A graphical depiction of the modified graph is shown in Figure 2, where the residual associated with one measurement at a given epoch exceeds the residual threshold.

X̂, Ŝ = argmin_{X,S} Σ_j ‖ Ψ(s^j)·( ρ_IF^j - ρ̂_IF^j ) ‖²_{Σ^j} + Σ_j ‖ γ^j - s^j ‖²_{Ξ^j}   (6)


Fig. 2: Example GNSS factor graph with switchable constraint
Dynamic Covariance Scaling

A pitfall of the switchable constraint robust optimization algorithm is that an additional latent variable is added to the optimization process every time an observable's residual exceeds the defined threshold. This means that the optimizer is not only working over the original search space composed of the state vector, but is now also optimizing over all added switchable constraints. The inclusion of additional variables into the optimization process not only increases computation cost, but could also potentially decrease convergence speed. To overcome these issues, a closed-form solution to switchable constraints was introduced in [3] as dynamic covariance scaling. The closed-form solution can be seen in Eq. 7, where Φ is the inverse of the prior uncertainty on the switchable constraint, and χ² is the initial error in the state.

s^j = min( 1, 2Φ / (Φ + χ²) )   (7)
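A minimal sketch of the scaling in Eq. 7, assuming the closed form s = min(1, 2Φ/(Φ + χ²)); the function and parameter names below are hypothetical.

```python
# Dynamic covariance scaling factor (Eq. 7): phi is the inverse of the
# prior uncertainty on the switchable constraint, chi2 the (squared)
# initial error of the measurement.
def dcs_scale(chi2, phi=1.0):
    return min(1.0, 2.0 * phi / (phi + chi2))

# Inlier-like residuals keep full weight; outliers are scaled down
# without adding any new variables to the optimization.
print(dcs_scale(0.5))  # 1.0
print(dcs_scale(9.0))  # 0.2
```

Because s is computed in closed form at each iteration, the search space stays the size of the original state vector, which is the efficiency argument made above.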
Max-Mixtures

The two previous robust models are still confined to uni-modal Gaussian error distributions, which in many situations will not fully capture the true distribution of the data in trying environments. To overcome this limitation, the next robust method extends the traditional uni-modal Gaussian factor graph optimization to a Gaussian mixture model. Generally, to move from a traditional uni-modal Gaussian optimization to a more complicated distribution, the sum of multiple Gaussian components is utilized, as shown in Eq. 8.

p(z_i | x_i) = Σ_k w_k · N( z_i ; μ_k, Σ_k )   (8)

However, this solution greatly complicates the calculation of the maximum likelihood estimate because the logarithm cannot be "pushed" inside of the summation operator. To overcome this additional complexity, [4] recommends the utilization of the max operator as a Gaussian component selector to approximate the true Gaussian summation. For GNSS data processing, this means that each observable can be modeled using two independent distributions: one distribution that defines data free of outliers, and a second distribution that represents faulty data (i.e., the null hypothesis). The null hypothesis can be modeled as a Gaussian distribution centered at the mean of the error-free observable distribution, but with a substantially larger variance. This null hypothesis is tested against the nominal hypothesis within the optimizer to iteratively find the weighting applied to the information matrix.
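The max-operator component selection described above can be sketched as follows. The two-hypothesis structure (a nominal component and a wide null-hypothesis component sharing the same mean) follows the text, while the specific standard deviations and names are illustrative assumptions.

```python
import math

# Max-mixture component selection: keep whichever Gaussian hypothesis
# assigns the higher likelihood to the current residual.
def gaussian_loglik(r, sigma):
    return -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def select_component(residual, sigma_nominal=1.0, sigma_null=10.0):
    ll_nom = gaussian_loglik(residual, sigma_nominal)
    ll_null = gaussian_loglik(residual, sigma_null)
    return ("nominal", sigma_nominal) if ll_nom >= ll_null else ("null", sigma_null)

print(select_component(0.8))   # small residual: nominal hypothesis wins
print(select_component(25.0))  # large residual: null hypothesis wins
```

Because the max replaces the sum, the log-likelihood of the selected component is again quadratic, so the factor graph machinery of Eqs. 1-2 applies unchanged at each iteration.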

II Algorithm Implementation and Performance

II-A Software Implementation

For implementation, this work uses the Georgia Tech Smoothing and Mapping (GTSAM) library [15] as the estimation engine. GTSAM is an open-source C++ library that abstracts the process of formulating an estimation problem as a factor graph. Importantly for our proposed application of robust GNSS processing, it allows developers to easily add custom measurement factor types, which allows for the flexible integration of different sensors. GTSAM also includes a variety of nonlinear optimizers for efficiently solving graphs over a specified subset of measurements.

II-B Experimental Data Sets

To evaluate the algorithm's positioning performance, two urban data-sets were collected (the East-North ground trace for one of the collected data-sets is depicted in Figure 5) that contain multiple scenarios known to degrade GNSS data. A depiction of the encountered scenarios is provided in Figure 3. To see how these adverse scenarios can affect GNSS positioning performance, the real-time, tightly-coupled GNSS/INS NovAtel positioning solution is compared to the post-processed RTK solution in Figure 4. As can be seen in Figure 4, the adverse environments encountered when driving in an urban setting can induce large fluctuations in positioning error.

These data-sets provide approximately 4 hours of GNSS/INS data. Each data-set contains dual-frequency GNSS pseudorange and carrier-phase observables at 10 Hz. Additionally, 50 Hz IMU data measured by the navigation grade IMU contained in the NovAtel SPAN system is included.

(a) Clear sky
(b) Partial satellite occlusion
(c) Total satellite occlusion
Fig. 3: Multiple scenarios encountered during the GNSS data collection, two of which are known to degrade GNSS observables (i.e., partial and total satellite occlusion).
Fig. 4: RSOS positioning difference between the reference solution generated by RTK and the positioning solution generated by tightly-coupled GNSS/INS NovAtel’s real-time filter.
Fig. 5: East-North ground trace for example data-set generated by Google Earth.

II-C Reference Solution

To generate an accurate comparison solution, static data was collected concurrently with the roving platform. The static data sets provide 1 Hz dual-frequency GNSS pseudorange and carrier-phase observables collected by a NovAtel SPAN receiver. This data is used to generate a Kalman filter/smoother Carrier-Phase Differential GPS (CP-DGPS) positioning solution with respect to the local reference station. The CP-DGPS reference solution was calculated using RTKLIB [16].

II-D Evaluation

Using the data-set shown in Figure 5, the factor graph formulation with the above-mentioned noise models can be tested against the RTK positioning solution. As a visual comparison, the RTK positioning solution is compared to the solution obtained when utilizing a factor graph with standard ℓ2 optimization in Figure 6. As seen in the figure, the two solutions agree fairly well.

The results for the six cost functions detailed in the previous sections are provided in Table II. This table provides the mean, median, and max deviation of the Residual Sum of Squares (RSOS) difference between the RTK positioning solution and the factor graph solution. From these results, it should be noted that no one optimization routine has a clear advantage. This may be attributed to the fact that, although the data was collected in an environment that will degrade the observables (i.e., building-induced multipath and poor satellite geometry), the environment may not have been harsh enough to fully leverage the benefits that can be gained by the robust noise models.

Fig. 6: East-North positioning comparison in meters between the reference solution and the factor graph.
Method           median (m)   mean (m)   max (m)
ℓ2 (non-robust)  4.27         10.58      427.38
Huber            5.16         12.30      450.00
Cauchy           5.17         12.50      873.15
DCS              4.28         10.63      547.42
Switch Factor    4.29         11.07      564.44
Max-Mix          4.29         11.34      566.50
TABLE II: Optimization statistics comparison when no artificial faults are included.

To more clearly understand when the robust noise models become beneficial, simulated faults were incorporated into the study. The artificial faults were simulated as Gaussian random variables with a mean of zero and a standard deviation of 50 meters. The simulated faults were added to the measured pseudorange observables at random, such that up to 49% of the observables can contain artificial faults.
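The fault-injection scheme just described can be sketched as follows; only the zero-mean, 50 m standard deviation and the fault-probability cap come from the text, while the function name, array size, and seed are illustrative.

```python
import numpy as np

# Inject zero-mean Gaussian faults (sigma = 50 m) into a random subset
# of pseudoranges; fault_prob may range up to 0.49 as in the study.
def inject_faults(pseudoranges, fault_prob, sigma=50.0, seed=0):
    rng = np.random.default_rng(seed)
    faulty = rng.random(len(pseudoranges)) < fault_prob
    faults = rng.normal(0.0, sigma, len(pseudoranges))
    return pseudoranges + faulty * faults, faulty

obs = np.full(1000, 2.02e7)  # illustrative pseudoranges (m)
corrupted, mask = inject_faults(obs, fault_prob=0.3)
print(mask.mean())  # fraction of faulted observations, near 0.3
```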

With the artificial faults added to the pseudorange observables, we first look at how the traditional M-estimators, which are detailed in Table I, perform. The increase in RSOS positioning error as the percentage of faults increases is depicted in Figure 7. From Figure 7, a clear advantage can be seen for the M-estimators with respect to both the mean and median RSOS positioning error when compared to traditional ℓ2 optimization. When a comparison is conducted only between the M-estimators, a rather large RSOS positioning error decrease is granted when the Cauchy cost function is utilized in place of the Huber. This benefit is not believed to be a general statement about the estimators; instead, it is probably a byproduct of the limited diversity of data utilized for this study. For example, it was noted in [17], where robust loop-closure methods for solving the SLAM problem were analyzed, that both the Cauchy and the Huber cost functions performed similarly.

Fig. 7: RSOS positioning error in meters for M-estimators as the percentage of artificially degraded observations is increased. (A) depicts the median RSOS positioning error and (B) depicts the mean RSOS positioning error.

The same analysis was conducted for the graph-based robust inference methods. The RSOS positioning error as a function of the percentage of artificial faults can be seen in Figure 8. From this plot it can be noted that dynamic covariance scaling performs considerably worse than any of the other robust estimation techniques evaluated in this study. Additionally, it should be noted how relatively little the positioning performance is affected when the switchable constraints or max-mixtures approaches are utilized at lower observation fault probabilities; however, when the data is highly faulty (i.e., when around 35 percent of the observations contain faults), the switch factor method provides a clear advantage.

Fig. 8: RSOS positioning error in meters for graph-based robust methods as the percentage of artificially degraded observations is increased. (A) depicts the median RSOS positioning error and (B) depicts the mean RSOS positioning error.

III Conclusion

For autonomous navigation systems, there is a requirement that the system can robustly infer its desired states when confronted with adverse environments. To meet this requirement, we evaluated several robust optimization techniques from various fields for the application of GNSS pseudorange data processing in an urban environment. From this analysis, it has been shown that traditional M-Estimators can aid the optimization scheme when confronted with an adverse environment; however, it has also been shown that including Switchable Constraints in the graph considerably outperforms the other methods implemented in this study, specifically as the number of faulty observables increases.

Additionally, to allow the GNSS community to build upon the advances made within other research communities, the software developed and data sets used for this evaluation have been released publicly at https://github.com/wvu-navLab/RobustGNSS, which provides a GNSS front-end integrated with several robust noise models to the GTSAM library.

Acknowledgment

The material in this report was approved for public release in June 2017, case number 88ABW-2017-3109.

References

  • [1] P. Huber, Robust Statistics. Wiley New York, 1981.
  • [2] N. Sunderhauf and P. Protzel, “Switchable Constraints for Robust Pose Graph SLAM,” in Intelligent Robots and Systems, 2012.
  • [3] P. Agarwal, G. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard, “Robust Map Optimization Using Dynamic Covariance Scaling,” in International Conference on Robotics and Automation, 2013.
  • [4] E. Olson and P. Agarwal, “Inference on Networks of Mixtures for Robust Robot Mapping,” 2012.
  • [5] R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
  • [6] S. Thrun, “Particle Filters in Robotics,” in In Proceedings of Uncertainty in AI, 2002.
  • [7] F. Kschischang, B. Frey, and H.-A. Loeliger, “Factor Graphs and the Sum-Product Algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, 2001.
  • [8] M. Lourakis, “A Brief Description of the Levenberg-Marquardt Algorithm Implemented by levmar,” 2005.
  • [9] P. Misra and P. Enge, Global Positioning System: Signals, Measurements and Performance Second Edition. Lincoln, MA: Ganga-Jamuna Press, 2006.
  • [10] B. Tolman, R. B. Harris, T. Gaussiran, D. Munton, J. Little, R. Mach, S. Nelsen, and B. Renfro, “The GPS Toolkit: Open Source GPS Software,” in Proceedings of the 16th International Technical Meeting of the Satellite Division of the Institute of Navigation, (Long Beach, California), 2004.
  • [11] P. Huber, “Robust Estimation of a Location Parameter,” The Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73–101, 1964.
  • [12] D. J. Olive, “Applied Robust Statistics,” Preprint M-02-006, 2008.
  • [13] N. Sünderhauf, M. Obst, G. Wanielik, and P. Protzel, “Multipath mitigation in gnss-based localization using robust optimization,” in Intelligent Vehicles Symposium (IV), 2012 IEEE, pp. 784–789, IEEE, 2012.
  • [14] N. Sünderhauf, M. Obst, S. Lange, G. Wanielik, and P. Protzel, “Switchable constraints and incremental smoothing for online mitigation of non-line-of-sight and multipath effects,” in Intelligent Vehicles Symposium (IV), 2013 IEEE, pp. 262–268, IEEE, 2013.
  • [15] F. Dellaert, “Factor graphs and GTSAM: A hands-on introduction,” tech. rep., Georgia Institute of Technology, 2012.
  • [16] T. Takasu, “RTKLIB: Open Source Program Package for RTK-GPS,” in FOSS4G 2009 Tokyo, Japan, November 2, 2009.
  • [17] G. H. Lee, F. Fraundorfer, and M. Pollefeys, “Robust pose-graph loop-closures with expectation-maximization,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, pp. 556–563, IEEE, 2013.