Centimeter-level Positioning by Instantaneous Lidar-aided GNSS Ambiguity Resolution

High-precision vehicle positioning is key to the implementation of modern driving systems in urban environments. Global Navigation Satellite System (GNSS) carrier phase measurements can provide millimeter- to centimeter-level positioning, provided that the integer ambiguities are correctly resolved. Abundant code measurements are often used to facilitate integer ambiguity resolution (IAR), however, they suffer from signal blockage and multipath in urban canyons. In this contribution, a lidar-aided instantaneous ambiguity resolution method is proposed. Lidar measurements, in the form of 3D keypoints, are generated by a learning-based point cloud registration method using a pre-built HD map and integrated with GNSS observations in a mixed measurement model to produce precise float solutions, which in turn increase the ambiguity success rate. Closed-form expressions of the ambiguity variance matrix and the associated Ambiguity Dilution of Precision (ADOP) are developed to provide a priori evaluation of such lidar-aided ambiguity resolution performance. Both analytical and experimental results show that the proposed method enables successful instantaneous IAR with limited GNSS satellites and frequencies, leading to centimeter-level vehicle positioning.


I Introduction

With the growing interest in the development of autonomous driving systems, accurate positioning of vehicles in urban environments has become a fundamental requirement for such applications. To achieve the current goal of autonomous driving, namely Level 4 systems that enable driverless operation within a restricted domain, decimeter- to centimeter-level positioning accuracy is required to keep a vehicle on the road within its lane [12]. It has been suggested that for typical road geometries and driving scenarios, the positioning accuracy of a passenger vehicle on urban roads should be at least 10 cm, 10 cm and 48 cm in the lateral (across the road), longitudinal (along the road) and vertical directions, respectively [26]. Realizing and maintaining such high accuracy therefore demands a rigorous integration of multiple measuring sensors. As a result, an array of sensors including, but not limited to, Global Navigation Satellite System (GNSS) receivers and Light Detection and Ranging (lidar) devices is commonly found on modern vehicles. While GNSS is considered an essential positioning technology as it provides globally referenced solutions, its signals suffer from blockage and multipath effects in cities due to the high density of buildings [10, 39]. This is all the more so because high-precision GNSS positioning requires multi-frequency carrier phase signals transmitted by a modest number of visible satellites, a condition that is difficult to meet in urban areas [40]. In contrast, lidar is not subject to the aforementioned error sources and provides abundant measurements in cities thanks to their rich geometric features, yet it only offers locally referenced positioning solutions [41]. In this contribution, we therefore aim to exploit the complementary advantages of GNSS and lidar for urban positioning, with a particular focus on the ultra-precise carrier phase measurements.

Although GNSS code (pseudo-range) measurements are easily accessible and serve standard positioning services, it is their carrier phase counterparts that can deliver precise parameter solutions [1, 27, 36, 10, 8, 13]. GNSS carrier phase measurements are approximately two orders of magnitude more precise than the corresponding code measurements and are crucial for achieving centimeter-level positioning. The challenge of using carrier phase measurements, however, is that they are biased by a) unknown integer-valued ambiguities and b) instrumental phase delays [27]. The latter can be eliminated by the widely used positioning technique of real-time kinematic (RTK). In RTK, observations of a nearby reference GNSS station are subtracted from those of the to-be-positioned receiver to remove or largely reduce common sources of GNSS errors such as clock offsets, instrumental biases and atmospheric delays [22]. However, the former, i.e. the unknown integer ambiguities, have to be estimated as real-valued float solutions first. Depending on the precision of the float solutions, integer ambiguity resolution (IAR) is then employed to map the float ambiguities to their correct integers. If performed successfully, IAR yields ambiguity-resolved carrier phase measurements that can act like ultra-precise code measurements for positioning.

Whether or not IAR is successful is determined by the probability of correct integer estimation, the so-called ambiguity ‘success rate’ [29]. The success rate is driven by the float ambiguity variance matrix, which in turn is governed by the number and precision of the measurements. In the event that the ambiguity success rate is low, one must refrain from fixing the float ambiguities, as doing so often leads to unacceptably large positioning errors. On the other hand, for the vehicle positioning case, where the location of the moving GNSS receiver is highly time-varying, successful IAR has to be carried out in an instantaneous manner so as to maintain continuous centimeter-level positioning. Instantaneous or single-epoch IAR is only possible when a large number of measurements from multiple satellite systems/frequencies are available [32], which is often not the case in densely built-up areas due to signal blockage. In such environments, complementary sensing devices are needed for additional measurements.

As a prominent example of such complementary devices, lidar is capable of collecting feature-rich point clouds of the surrounding environment in urban canyons, providing 3D point maps that can be used to mitigate GNSS errors, see, e.g., [37, 38, 3]. In particular, lidar’s aiding role in GNSS IAR has been investigated in recent studies [24, 25, 17, 18], which rely on features described by geometric characteristics that can be unavailable in complex environments. Meanwhile, recent advances in deep learning have offered data-driven approaches for producing point feature descriptors that are more detailed and invariant than those of traditional methods, thus improving the chance of successful point cloud registration [43, 7]. The goal of the present contribution is to leverage such learning-based lidar features for high-precision positioning and to present a GNSS-lidar integration method that realizes successful instantaneous IAR. To enable an a priori prediction of the method’s IAR performance, we develop closed-form analytical expressions for both the lidar-aided float ambiguity variance matrix and the associated Ambiguity Dilution of Precision (ADOP) [28, 21]. As an intrinsic measure of the average precision of the float ambiguities, ADOP indicates the extent to which the mapping of the float ambiguities to their integers is successful. The applicability of the presented analytical IAR measures is supported by experimental results, showing how lidar measurements provide conditions under which instantaneous IAR becomes successful even for the single-frequency GNSS data of low-cost receivers.

The remainder of this paper proceeds as follows. In Section II, a lidar-based method using a high-definition (HD) map for aiding GNSS ambiguity resolution is proposed. Accordingly, lidar measurements are generated by registering the measured point cloud with the HD map, which contains georeferenced scans of the road environment collected at a previous time, via a learning-based keypoint extraction strategy [42]. This learning-based point cloud registration method is used to produce lidar observations for positioning. Section III presents the mixed measurement model with which one can integrate the stated lidar observations with their GNSS counterparts. The lidar observations take the form of matched keypoints’ coordinates, aiding the GNSS observations in estimating the float ambiguities. Analytical measures for the precision of the lidar-aided float ambiguities are then presented in Section IV. We thereby show how the precision of the integrated solution competes with that of the lidar-only solution to reduce the ADOP, thus improving the corresponding IAR performance. Section V presents the configuration of the simulated experiment, which is followed by Section VI presenting the corresponding numerical results. A discussion of the results is given in Section VII. Finally, concluding remarks are drawn in Section VIII.

II Lidar observations generated by point cloud registration

Fig. 1: Overview of the proposed lidar-aided ambiguity resolution method.

The key to enabling GNSS carrier phase ambiguity resolution is to improve the precision of float solutions with redundant measurements. In this paper, lidar measurements are integrated with GNSS code observations by employing the Weighted Least Squares (WLS) method to obtain the float solutions. To achieve the ‘minimum-variance’ float solutions, the weight matrix underlying WLS is taken as the inverse of the measurements’ variance matrix [30]. The measurements are produced by registering rover (online) scans collected by the laser scanner on-board a vehicle with reference (offline) scans obtained from a pre-built HD map. The corresponding keypoints are extracted to estimate a rigid transformation which provides the position of the origin of a rover scan, i.e., the vehicle, provided that the laser scanner is calibrated to align with the center of the vehicle. This lidar positioning method has been shown to be effective in terms of availability and accuracy of positioning in our previous work [42]. Fig. 1 depicts the workflow of the proposed lidar-aided ambiguity resolution method, in which the lidar observation generation discussed in this section is reflected in the steps in the top left part.
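As an illustration of this weighting principle, the following minimal sketch (with generic placeholder names A, y and Q rather than the paper's exact matrices) shows a weighted least-squares solve in which the weight matrix is taken as the inverse of the measurements' variance matrix:

```python
import numpy as np

def wls_solve(A, y, Q):
    """Minimum-variance weighted least squares with W = Q^{-1}.

    A : (m, n) design matrix, y : (m,) observation vector,
    Q : (m, m) variance matrix of the observations.
    Returns the estimate x_hat and its variance matrix Q_xx.
    """
    W = np.linalg.inv(Q)        # weight = inverse of the variance matrix
    N = A.T @ W @ A             # normal matrix
    Q_xx = np.linalg.inv(N)     # variance matrix of the estimated parameters
    x_hat = Q_xx @ (A.T @ W @ y)
    return x_hat, Q_xx
```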

Several coordinate systems are used throughout the following sections. We define the e-frame as the geocentric WGS84 frame, which is used for all GNSS measurements and the computation of the positioning solutions. The point clouds are originally collected in a Cartesian frame with the sensor at its center, namely the l-frame. To integrate the generated lidar measurements with GNSS, reference scans in the HD map are first transformed and aligned to a local frame with an arbitrary origin, the c-frame, and then to the e-frame.

II-A HD map definition

An HD map includes information needed for vehicle positioning and can take various forms [19]. In this research, we use an HD map consisting of accurately georeferenced point clouds of the road environment collected at a previous time. Each point cloud is stored in its original l-frame with the laser scanner at the origin, together with a georeferencing matrix that transforms it to the e-frame, so that the positioning solutions are expressed in this coordinate system. The point clouds that make up the HD map are referred to as reference scans and must be available prior to the vehicle positioning tasks. A procedure for preparing an HD map using the KITTI dataset [6] is provided in Section V-B.
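Purely as an illustration of this storage scheme (the paper does not prescribe a data format), an HD-map entry could pair the raw l-frame point cloud with its georeferencing matrix:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ReferenceScan:
    points_l: np.ndarray   # (N, 3) point cloud in its original l-frame
    T_l_to_e: np.ndarray   # (4, 4) georeferencing matrix to the e-frame

    def points_e(self) -> np.ndarray:
        """Return the reference scan expressed in the e-frame."""
        hom = np.hstack([self.points_l, np.ones((len(self.points_l), 1))])
        return (self.T_l_to_e @ hom.T).T[:, :3]
```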

II-B Learning-based keypoint extraction

The registration between a pair of rover and reference scans requires correctly corresponding keypoints. For fast and accurate keypoint extraction, we use MS-SVConv, a multi-scale deep neural network that outputs feature vectors from point clouds [11]. The model needs to be pre-trained on a large number of point clouds with ground truth alignments before it can be used to compute feature vectors.

Assuming that for one epoch a rover scan is collected by the lidar sensor and a nearby reference scan sharing an overlap with it is identified from the HD map, using position estimates from less demanding techniques such as Standard Point Positioning (SPP), the pre-trained MS-SVConv model is applied to produce a feature vector per point, based on which keypoint matching is performed. Since the keypoints extracted by MS-SVConv often contain incorrect correspondences, a subset of keypoints with their feature vectors is selected to register the two point clouds using Random Sample Consensus (RANSAC) [4], which finds an outlier-free set of keypoint correspondences to estimate a transformation matrix that georeferences the rover scan from the l-frame to the e-frame. The positions of successfully matched keypoints are then used as lidar observations for vehicle positioning.
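To make the registration step concrete, here is a small self-contained sketch of feature-based RANSAC registration under simplifying assumptions: putative keypoint correspondences (e.g., nearest neighbours in MS-SVConv feature space) are assumed to be given as two aligned arrays, and the rigid transformation is estimated from random minimal samples with a Kabsch solver. It is an illustration only, not the authors' exact RANSAC configuration or thresholds.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R P + t - Q|| for matched (n, 3) arrays."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def ransac_register(src, dst, n_iter=2000, inlier_thresh=0.3, rng=None):
    """RANSAC over putative correspondences src[i] <-> dst[i] (both (n, 3))."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)     # minimal sample
        R, t = kabsch(src[idx], dst[idx])
        residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine on the outlier-free correspondence set
    R, t = kabsch(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```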

II-C Lidar observation equations

The lidar observations are constructed from the estimation of the transformation from the rover scan to the reference scan, in which the translational parameters are identical to the vehicle position. The coordinates of the keypoints from the rover scan in the l-frame serve as the measurements, corresponding with the known coordinates of their matched counterparts from the reference scan in the e-frame. Thus, each keypoint provides 3 observations. For pairs of matched keypoints, the observation equation of the lidar measurements is therefore as follows

(1)

in which denotes the Kronecker product [9], and is a matrix of ones. is the vector of the estimated vehicle position in e-frame and is a matrix of the unknown rotational parameters. The Jacobian matrix is given by . By defining as the vector of the measured coordinates of one keypoint in l-frame and concatenating all the keypoints, the vectors of measured keypoint coordinates, measurement residuals and known keypoint coordinates, namely , and , are formed as

For integration with GNSS observations in the WLS method which will be discussed in the next section, the uniform weight matrix of lidar measurements is defined by

(2)

with the root mean squared residual distance , in which is the residual distance between a registered keypoint and its correspondence in the reference scan in the RANSAC estimation.
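In code, the uniform weighting described by (2) can be sketched as follows; the only input assumed here is the vector of RANSAC keypoint residual distances:

```python
import numpy as np

def lidar_weight_matrix(ransac_residuals):
    """Uniform lidar weight matrix from the RANSAC keypoint residual distances.

    ransac_residuals : (n,) residual distance of each registered keypoint to
                       its correspondence in the reference scan.
    Returns the (3n, 3n) weight matrix (1 / s^2) * I, with s the RMS residual.
    """
    n = len(ransac_residuals)
    s2 = np.mean(np.asarray(ransac_residuals) ** 2)  # squared RMS residual distance
    return np.eye(3 * n) / s2
```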

III GNSS-lidar integration and ambiguity resolution

In this section, we discuss the model that integrates lidar and double-differenced (DD) GNSS observations to deliver the float solutions, including the ambiguities, and the use of these solutions for IAR. This is also reflected in the top-right and bottom parts of Fig. 1. We base our method on the assumptions that the collected lidar and GNSS measurements are time-synchronized and are both referenced to the center of the vehicle.

III-A Double-differenced GNSS observation equations

Using to denote the vectors of DD observed-minus-computed code and carrier phase measurements for visible satellites and frequencies collected in one epoch, respectively, the two can be concatenated as . Assuming that the baseline between the GNSS receiver installed on the vehicle and the reference station is short enough so that the DD atmospheric delays can be ignored, the linearized GNSS observation equation is given as [14, 13]

(3)

where with diagonal matrix links the DD ambiguities with the GNSS measurements by the wavelength of each observed frequency. The DD satellite-to-receiver range vector can be linearized as , with being its approximate version. The coefficient matrix has the dimensions of , with the matrix containing the satellite-to-receiver direction unit vectors, and the matrix forming the between-satellite differences. Therefore, is a vector of the increments to the unknown vehicle position. Similar to the lidar observation equation (1), the vector contains the residuals of the GNSS code and carrier phase measurements.

The GNSS measurements are weighted according to the satellite elevation angles . The dimensionless weight matrix for undifferenced GNSS measurements can hence be constructed as

(4)

Accordingly, the weight matrices of the DD code and carrier phase measurements are given as and , respectively, where and denote the zenith-referenced standard deviations of the undifferenced code and phase observations [16]. The phase observations are assumed to be 100 times more precise than their code counterparts in our implementation. Thus , with being the phase-to-code variance ratio. The factor in both expressions indicates that the variances of the measurements are doubled due to differencing.
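A hedged sketch of the GNSS weighting described above is given below. The exact elevation-dependent weighting function is not shown in the text, so the common sin²(elevation) model is assumed for illustration; the doubling of variances due to between-receiver differencing is applied together with a generic between-satellite differencing matrix D:

```python
import numpy as np

def undiff_weights(elev_deg):
    """Elevation-dependent weights for the undifferenced observations.

    The paper weights by satellite elevation; the exact function is not given
    here, so the common sin^2(elevation) model is assumed for illustration.
    """
    return np.diag(np.sin(np.radians(elev_deg)) ** 2)

def dd_variance(elev_deg, sigma_zenith, D):
    """Variance matrix of double-differenced observations.

    sigma_zenith : zenith-referenced standard deviation of the undifferenced
                   code (or phase) observations.
    D            : (m-1, m) between-satellite differencing matrix; the
                   between-receiver differencing doubles the variances.
    """
    W = undiff_weights(elev_deg)
    Q_undiff = 2.0 * sigma_zenith ** 2 * np.linalg.inv(W)  # factor 2: differencing
    return D @ Q_undiff @ D.T
```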

III-B Float solution by GNSS-lidar integration

With the observation equations (1) and (3) in place, which share the common unknown vehicle position , we can estimate the float solution using the WLS principle. Due to the intertwining of the unknown rotational parameters and the measurements in the lidar observation equations, the mixed measurement model is employed to perform the estimation [42, 30]. The mixed model combining measurements from the two sensors can be formed as follows

(5)

where the vector of unknown parameters , with being the vector form of a close approximation of , contains the DD ambiguities, the vehicle position and the rotational parameters of the rover scan. and are the vectors of measurements and residuals. The mixed model is linearized using the first-order Taylor expansion of about the point to give the expression on the right-hand side of (5), in which is the approximated version of in each iteration, giving the unknown increment vector . Hence, . The Jacobian matrices and for and are given as follows

(6)

with denoting a block diagonal matrix. The Jacobian sub-matrices are given by , and , in which matrix is formed by the lidar measurements (). Application of WLS to the mixed model (5) gives the float increment solution and their variance matrix as

(7)

with . The weight matrix is constructed as . The float solution is iteratively computed as , in which is replaced by the solution from the previous iteration. The solution convergence is considered reached by a stopping criterion, for instance, when the magnitude of is smaller than a specified threshold.
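The iterative WLS scheme of (7) can be sketched as follows; `build_system` is a hypothetical callback that re-linearizes the mixed model (Jacobian, misclosure vector and weight matrix) about the current solution:

```python
import numpy as np

def iterate_wls(build_system, x0, tol=1e-4, max_iter=20):
    """Iterative WLS solution of a linearized model, in the spirit of (7).

    build_system(x) is expected to return the Jacobian A, the misclosure
    vector dy (observed minus computed at x) and the weight matrix W,
    re-linearized about the current solution x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        A, dy, W = build_system(x)
        N = A.T @ W @ A
        dx = np.linalg.solve(N, A.T @ W @ dy)  # float increment solution
        x = x + dx
        if np.linalg.norm(dx) < tol:           # stopping criterion on |dx|
            break
    Q_xx = np.linalg.inv(N)                    # variance matrix of the float solution
    return x, Q_xx
```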

III-C Integer ambiguity resolution

The float solution obtained from (7) contains the estimated vehicle position (), the DD ambiguities, which are real-valued (), and the rotational parameters for the georeferencing of the rover scan (). For simplicity, we use to denote the non-ambiguity unknown parameters here. In order to utilize the highly precise carrier phase measurements for positioning, the float ambiguities need to be mapped to the correct integers using an ambiguity resolution method to yield the fixed solution [32], provided that the estimated float parameters follow the normal distribution:

(8)

In this paper, we use the well known LAMBDA method for IAR, which employs the Integer Least-Squares (ILS) ambiguity estimator [27]. By assessing the float and , LAMBDA outputs the fixed integer ambiguities , as well as the evaluated formal success rate [35]. In order to determine whether the fixed ambiguities should be accepted, an acceptance test is needed. For example, the formal success rate can be required to be higher than a given threshold such as 99.9%. Otherwise, the float solution is retained. Once is accepted, the fixed solution of the remaining parameters, say and its variance matrix , can be obtained by [27]

(9)

where the first 3 elements of are the fixed position .
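For illustration, the fixed solution of (9) can be computed with the standard conditional least-squares update used alongside LAMBDA; the block names below are generic placeholders for the partitions of the float variance matrix:

```python
import numpy as np

def fixed_solution(b_float, a_float, a_fixed, Q_bb, Q_ba, Q_aa):
    """Conditional (fixed) solution of the non-ambiguity parameters, cf. (9).

    b_float : float estimate of the position/rotation parameters
    a_float : float DD ambiguities; a_fixed : integer ambiguities from LAMBDA
    Q_bb, Q_ba, Q_aa : blocks of the float variance matrix.
    """
    K = Q_ba @ np.linalg.inv(Q_aa)
    b_fixed = b_float - K @ (a_float - a_fixed)   # condition on the fixed ambiguities
    Q_bb_fixed = Q_bb - K @ Q_ba.T                # variance matrix of the fixed solution
    return b_fixed, Q_bb_fixed
```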

IV Theoretical performance assessment using ADOP

So far we have discussed the model that integrates GNSS and lidar measurements for IAR. To provide a prediction of the method’s IAR performance before taking any measurements, one needs the variance matrix of the float ambiguity solutions. Such a variance matrix enables us to evaluate the ambiguity success rate a priori and to quantify the IAR performance using the associated ADOP, which indicates the upper bound of the ambiguity success rate [28, 36]. In this section, we therefore present closed-form expressions for both the ambiguity variance matrix and its ADOP. Given these closed-form analytical measures, we then evaluate the formal ambiguity success rates and ADOP of the proposed lidar-aided model with the aid of the Ps-LAMBDA software [34].

IV-A Variance matrix of the float solution

First, expressions for the precision of the float ambiguities and the positioning solution are developed to support the evaluation. Applying the WLS principle yields the normal matrix of the estimated parameters for lidar as [30, 42]

(10)

with . The reduced normal matrix of the estimated position is therefore , which leads to the variance matrix of the lidar-only positioning solution as follows

(11)

On the other hand, the variance matrix of the float solution obtained with GNSS code observations is given as

(12)

with the projection matrix . Therefore, (11) and (12) can be combined to give the variance matrix of the integrated float solution:

(13)

Likewise, the variance matrix of float ambiguities computed using both code and lidar measurements takes the following form

(14)

in which one can replace with (12) to obtain the variance matrix of the GNSS-only float ambiguities . The ambiguity variance matrix (14) can serve as input to evaluate formal ambiguity success rates.

IV-B ADOP of lidar-aided ambiguity resolution

Next to the formal ambiguity success rates, one can also evaluate the ADOP of one’s measurement model to assess the underlying IAR performance. ADOP is defined on the basis of the ambiguity variance matrix as follows [28]

(15)

where denotes the determinant of a matrix. The smaller the ADOP, the higher the ambiguity success rate becomes [13, 31]. For a minimum success rate of 99.9%, ADOP should be smaller than 0.12 cycles, whereas for 99%, it should be smaller than 0.14 cycles [21]. For the single-epoch GNSS-only model (3), ADOP can be expressed as [21]:

(16)

with , and , where denotes the diagonal entries of . In order to compare the ADOP of the GNSS-only model (16), denoted by , with that of the lidar-aided method, say , one can express their ratio as follows (see Appendix)

Fig. 2: ADOP-ratio (17) (solid lines) and its three approximate versions (19) (dashed lines) as functions of the number of visible satellites for a single-frequency GNSS receiver, when 44 matched lidar keypoints are given. (a) ADOP-ratios for precise lidar measurements. (b) ADOP-ratios for less precise lidar measurements.

(17)

where the eigenvalues are the roots of the characteristic equations

(18)

The ADOP-ratio (17) tells us the extent to which the ADOP of the lidar-aided model is smaller than that of the GNSS-only model given in (16). The first expression of (17) indicates that the precision of the integrated solution in (13) competes with that of the lidar-only solution in (11) to reduce the ADOP-ratio. Consider the hypothetical scenario where the precision of the lidar measurements is extremely poor; the ADOP-ratio then reduces to 1, that is, . This would imply that the lidar measurements do not contribute to the integrated solutions. Now consider another extreme case where the lidar measurements are significantly more precise than the GNSS code measurements, so that the precision of the integrated solutions becomes almost identical to that of the lidar-only solutions, i.e., or . Given the small phase-to-code variance ratio , such an extreme case would make the ADOP-ratio close to zero. This again makes sense: the more precise the lidar measurements, the smaller the ADOP-ratio (17).
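As a numerical counterpart to (15) and (17), the following sketch evaluates the ADOP from an ambiguity variance matrix, a commonly used ADOP-based approximation of the success rate (an indicator of its upper bound, not the Ps-LAMBDA value), and the ADOP-ratio of two competing models:

```python
import numpy as np
from scipy.stats import norm

def adop(Q_aa):
    """ADOP in cycles: the 2n-th root of the determinant of the ambiguity
    variance matrix, cf. (15)."""
    n = Q_aa.shape[0]
    _, logdet = np.linalg.slogdet(Q_aa)   # numerically robust determinant
    return float(np.exp(logdet / (2 * n)))

def adop_success_rate(adop_value, n):
    """Commonly used ADOP-based approximation of the ambiguity success rate,
    P = (2*Phi(1/(2*ADOP)) - 1)^n, often taken as an upper-bound indicator."""
    return (2.0 * norm.cdf(1.0 / (2.0 * adop_value)) - 1.0) ** n

def adop_ratio(Q_aa_aided, Q_aa_gnss_only):
    """Ratio ADOP(lidar-aided) / ADOP(GNSS-only), cf. (17)."""
    return adop(Q_aa_aided) / adop(Q_aa_gnss_only)
```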

The second expression of (17) reveals the link between the IAR performance of the GNSS-lidar integrated solution and the eigenvalues defined in (18). These eigenvalues are in fact the stationary values of the objective function  [28], with being an arbitrary unit direction vector. The smallest one, i.e., , indicates the minimum reduction in the variance of the lidar-only positioning solution when GNSS code data is integrated with the lidar data. Likewise, the largest eigenvalue indicates the maximum reduction in the variance of the positioning solution.
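Assuming that the stationary values in question are the generalized eigenvalues of the lidar-only and integrated position variance matrices (the exact matrices entering (18) are not reproduced in this text), they can be computed as follows:

```python
import numpy as np
from scipy.linalg import eigh

def variance_reduction_eigs(Q_pos_lidar_only, Q_pos_integrated):
    """Stationary values of the ratio f' Q_lidar f / f' Q_int f over unit
    directions f, i.e., generalized eigenvalues of the two (3x3) symmetric
    positive-definite position variance matrices. This is one interpretation
    of (18); the smallest and largest values bound the variance reduction
    obtained by adding GNSS code data to the lidar data.
    """
    # eigh(A, B) solves A v = w B v for symmetric A and positive-definite B
    w = eigh(Q_pos_lidar_only, Q_pos_integrated, eigvals_only=True)
    return np.sort(w)
```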

Each of these three eigenvalues can serve to ‘approximate’ the ADOP-ratio (17). By assuming that the eigenvalues are equal to one another, three different approximate versions of (17) are obtained as follows

(19)

Fig. 2 presents the ADOP-ratio (17) (solid lines) and its three approximate versions (19) (dashed lines) as functions of the number of visible satellites when a single-frequency GNSS receiver is aided with 44 correctly matched lidar keypoints, which is the empirical minimum number of keypoints per epoch found in our experiment (cf. Section VI). The phase-to-code variance ratio is assumed to be . In Fig. 2(a), precise lidar measurements are considered with m, which is the mean value obtained from our experiment, whereas Fig. 2(b) shows the results for less precise lidar measurements with m, which is approximately the maximum point spacing in the point clouds used in the experiment, collected by a Velodyne HDL-64E scanner [33]. In either case, it is observed that the ADOP-ratio (17) gets closer to 1 as the number of satellites grows, meaning that the lidar integration cannot contribute much to the GNSS-only IAR performance when the receiver tracks a rather large number of satellites (e.g., ). Interestingly however, the close-to-zero ADOP-ratios for show that the lidar integration can indeed be instrumental when not too many satellites are tracked. This is often the case in urban canyons, where GNSS receivers frequently lose track of GNSS signals. For the cases where standalone GNSS-only IAR is not possible, i.e. when , the integrated solution becomes almost identical to the lidar-only solution, i.e., . That is why the ADOP-ratio is shown to be almost zero in the figure. In the following, this notion will be made more precise by comparing the underlying ADOPs and ambiguity success rates of the GNSS-only and lidar-aided models under various scenarios.

IV-C Ambiguity resolution performance compared

Fig. 3: ADOP and ambiguity success rates of the GNSS-only and lidar-aided models against different numbers of satellites. (a) Results for the optimistic precision settings. (b) Results for the pessimistic precision settings.

We now analyze the IAR performance using (16) and (17), as well as the ambiguity success rates under various configurations for both the GNSS-only and lidar-aided models, to study how many satellites they require for successful IAR. We first present and discuss the ambiguity resolution performance of the GNSS-only model in a generalized scenario without considering satellite elevation angles, i.e., by simplifying to equal weights. The precision of the undifferenced code and phase data, namely and , are assumed to be 0.2 m and 0.002 m, respectively, for high-grade receivers. Hence, the precision of the phase observations is about 1% of their wavelengths. Fig. 3 depicts the ADOP and ambiguity success rates (evaluated by Ps-LAMBDA) computed in different setups with respect to the number of tracked satellites. It is shown in Fig. 3(a) that with dual-frequency GNSS-only data, the ADOP and the corresponding success rate can reach 0.097 cycles and 99.9% with only 5 satellites, meaning that successful ambiguity resolution with single-system, dual-frequency observations is indeed possible if the corresponding GNSS code data are not imprecise (e.g., m). In this case, there is no need for the inclusion of lidar data. In comparison, when only single-frequency data from 5 satellites are available for the GNSS-only model, the ADOP and the success rate are evaluated as about 0.547 cycles and 11.2%, respectively, which is insufficient for IAR. In order to constrain the ADOP to 0.12 cycles, the number of satellites has to be increased to 8, while Ps-LAMBDA only reports a success rate higher than 99.9% when even more satellites are tracked.

Note that the use of equal weights in this approximation ignores the elevation-dependent effects on the GNSS signals, an assumption that does not always hold in practice. This suggests that the ambiguity resolution performance can be expected to be even poorer in practice. Moreover, low-grade GNSS receivers can have much less precise code observations. For example, assuming that the code data are severely affected by the low quality of the receiver and/or antenna, by setting a larger code standard deviation and keeping the phase precision the same as above, the ADOP for single-frequency data becomes 1.247 cycles for 5 satellites, and at least 10 satellites are needed to bring it below the 0.12-cycle threshold, as illustrated in Fig. 3(b). To ensure an evaluated success rate of 99.9%, at least 11 satellites are required. Therefore, it is not feasible to pursue instantaneous IAR using the GNSS-only model with single-frequency data, unless a large number of satellites from multiple systems can be tracked.

We now examine the impact of the lidar measurements on the float solutions and thereby on ambiguity resolution. Similar to the optimistic configuration above, with m and m, each lidar observation is assumed to be slightly more precise than a code observation, with m. Using the same number of keypoints as in Fig. 2 and the eigenvalues obtained using (18), the ADOP and success rate of the lidar-aided model with respect to the number of satellites are also presented in Fig. 3(a). The integrated data consistently achieve similar or even better ambiguity resolution performance than dual-frequency GNSS-only data, with the two quantities always being around 0.02 cycles and 100%. This advantage is more evident when only a few satellites are tracked, as lidar becomes the main contributor to the float solutions. Notably, in the case of tracking only 2 or 3 satellites, the GNSS-only model fails to resolve the integer ambiguities due to the lack of measurements, whereas the lidar-aided model can still enable IAR since the lidar measurements contribute to the estimation of the three positional unknowns. Furthermore, as lidar provides a large number of measurements, the difference between the ambiguity resolution performances obtained with single-frequency and dual-frequency GNSS data becomes negligible.

On the other hand, lidar measurements can suffer from observational noise, environmental changes, dynamic objects, etc., leading to a lower precision. In a pessimistic scenario with m for the lidar measurements, as well as m for imprecise code measurements, the ADOP and the corresponding success rates are shown in Fig. 3(b). Notably, due to the lower precision of the code observations, the IAR performance of the GNSS-only model decreases, requiring more satellites to be tracked for an ambiguity success rate of 99.9%. In contrast, the lidar-aided counterpart behaves similarly to the previous results, with the ADOP and success rate consistently being around 0.05 cycles and 99.9%, suggesting that successful ambiguity resolution can still be achieved with any number of satellites and/or frequencies under such pessimistic assumptions.

IV-D ADOP evaluation of lidar-aided ambiguity resolution

So far we have learned that with the tracking of a sufficient number of GNSS satellites, IAR can be realized without the lidar contribution, and the same holds true when there are limited GNSS observations but a number of lidar keypoints. To study the lidar precision required for successful IAR, which is the other factor affecting the performance of the lidar-aided model, let us now consider the situation in which the number of corresponding keypoints is either 4 or 44, these being the theoretical minimum that makes (1) solvable and the empirical minimum used in the previous comparison, respectively. and are maintained as m and m in this analysis. In contrast to the previous investigation, this analysis is established in an elevation-weighted scenario using GNSS satellites with high elevation angles, so as to study the ambiguity resolution performance for vehicle positioning in urban canyons, where satellites with low elevation angles can be blocked by buildings. The satellite skyplot in Fig. 4 indicates the geometry of the satellites used in this analysis with respect to the receiver; the satellite elevation angles are all above 40°. These satellites are included in descending order of their elevation angles for the evaluation. Since the IAR performance can be sensitive to the geometric distribution of the keypoints when their number is low, each ADOP value is computed as the average of 100 trials with keypoints randomly selected around the receiver.

Fig. 4: Skyplot of the satellites in the analyzed elevation-weighted scenario; all satellite elevation angles are above 40°.

Fig. 5: ADOP evaluation results for different combinations of the number of satellites and the lidar precision. (a) ADOP evaluation results with single-frequency GNSS data and 4 correctly matched keypoints; (b) ADOP evaluation results with dual-frequency GNSS data and 4 correctly matched keypoints; (c) ADOP evaluation results with single-frequency GNSS data and 44 correctly matched keypoints.

Fig. 5 presents the ADOP values for numerous combinations of the number of satellites and the lidar precision for the lidar-aided instantaneous ambiguity resolution method. For comparison, to obtain ADOP values below 0.12 cycles computed using (16), the GNSS-only model requires at least 9 high-elevation satellites for single-frequency GNSS data, which can be difficult to access in densely built-up areas for vehicle positioning, while 5 high-elevation satellites with dual-frequency GNSS data are needed to produce a similar ADOP. With the lidar-aided model, on the other hand, using single-frequency GNSS observations and the theoretical minimum of 4 keypoints, the number of high-elevation satellites needed for an ADOP of 0.12 cycles reduces to 7, provided that the lidar measurements are sufficiently precise, as shown in Fig. 5(a). Similarly, the lidar precision requirement can be relaxed to 0.77 m for 0.14 cycles, or the success rate upper bound of 99%. It is therefore evident that only 4 matched keypoints can already reduce the number of required satellites by 2 for successful single-frequency IAR. In addition, Fig. 5(b) shows that by introducing dual-frequency GNSS observations, the ADOP is always below 0.12 cycles for 3 or more satellites, whereas for 2 satellites the ADOP increases above 0.12 cycles but remains below 0.14 cycles for poor lidar precision. Nonetheless, it is observed in Fig. 5(c) that once the number of keypoints increases to the empirical minimum of 44, the ADOP can be kept below 0.12 cycles for any number of satellites from 2 upwards, even with the lowest lidar precision considered, which corresponds with the results in Fig. 3(b). It should be remarked that, using the proposed keypoint extraction strategy, the number of corresponded keypoints can be expected to be at least equal to the empirical minimum upon successful registration. Conceivably, lidar-aided IAR is even more accessible when dual-frequency GNSS data is used with this configuration, hence the ADOP values for dual-frequency GNSS observations and 44 correctly matched keypoints are omitted. In summary, the theoretical ADOP analysis has established that even with the theoretical minimum number of keypoints (i.e., 4), successful IAR is possible for any number of satellites using the lidar-aided method when dual-frequency GNSS data is available. More importantly, when a reasonable number of lidar keypoints are present (e.g., 44), the proposed method can substantially improve the ambiguity resolution performance to the extent that even single-frequency IAR becomes feasible with only a few high-elevation satellites, without requiring highly precise lidar measurements. This is a great advantage for vehicle positioning, since the GNSS-only model can fail IAR in urban canyons due to restricted satellite visibility.

V Experimental setup

The proposed instantaneous lidar-aided ambiguity resolution method is evaluated in an experiment simulated using GNSS and lidar measurements from two real datasets. In this section, the data collection and pre-processing details are presented. To verify the performance predictions made earlier in Section IV, we collected GNSS data from a 30-minute session of observations on a stationary point in a controlled environment, while the lidar data was obtained from the KITTI dataset [6] and simulated around the same point to build the HD map.

V-A GNSS data collection

The GNSS raw observations used in the experiment were collected using a low-cost u-blox F9P dual-frequency receiver with an ANN–MB patch antenna between 1:56:29 AM and 2:26:28 AM, GPST (GPS time) on 29 June 2021, at the sampling rate of 1 Hz. The antenna was placed on a fixed point in an open-sky environment in Melbourne, Australia. A survey-grade GNSS receiver Leica GS16 was also used to measure the same point simultaneously to provide the ground truth coordinates as reference. From now on, this point will be referred to as Target. The equipment configuration is shown in Fig. 6.

Fig. 6: Equipment used for GNSS data collection, including ANN–MB patch antenna, u-blox F9P receiver and Leica GS16.

For the data obtained from both receivers, differential code and phase observations are derived using a nearby Continuously Operating Reference Station (CORS), namely EMEL, which is equipped with a Trimble NETR9 receiver. A short baseline Target–EMEL, illustrated in Fig. 7, is thus formed with an approximate length of 1470 m. Defining the empirical success rate as the proportion of positioning epochs with correctly fixed integer ambiguities, we take the DD ambiguities computed from the GPS+QZSS (L1/L2) solutions obtained with the u-blox F9P and EMEL as the benchmark ambiguities to evaluate the IAR capability of the proposed method, as the formal ambiguity success rates are assessed to be higher than 99.9% for all epochs with this configuration.

Fig. 7: Target–EMEL baseline in Melbourne, Australia. Left: Target point; middle: baseline configuration; right: EMEL CORS, basemap from Google, 2018.

V-B KITTI lidar data pre-processing

In Section II we presented the process of producing lidar measurements by learning-based point cloud registration. Here we describe the pre-processing of the lidar data obtained from the KITTI dataset for simulating the pre-built and georeferenced HD map in the e-frame, so that the generated lidar measurements can be integrated with their GNSS counterparts. A total of 3600 point clouds collected in Karlsruhe, Germany with a Velodyne HDL-64E are split into equal numbers of rover and reference scans. In other words, we assume that the rover scans are collected by the lidar sensor on board the rover vehicle, whereas the reference scans are used for the HD map. Due to the lack of SPP positions in the dataset, the rover-reference scan matches are pre-assigned with a 3 s time interval to ensure that each pair of rover and reference scans contains a reasonable amount of overlap to enable registration.

The point clouds are originally in the l-frame; they need to be transformed first to the c-frame, which is equivalent to the l-frame of the first scan in the sequence, and then to the e-frame to produce lidar measurements that can be combined with their GNSS counterparts. For each point cloud in the sequence, the transformation to the c-frame is given by the ground truth poses of KITTI. Hence, for each pair of rover and reference scans we have transformations and , and the transformation to align the two can be obtained as

(20)

Note that this transformation is not used for positioning, but rather for simulating the reference scans at locations near Target, so that the estimated vehicle positions should coincide with Target. A transformation matrix that georeferences the rover scan is defined with only a translation that moves its origin to the ground truth coordinates of Target measured by the survey-grade receiver. The reference scan is therefore georeferenced in the e-frame using:

(21)

which is applied to the matched keypoints found in the reference scans upon successful registration so that the computed lidar measurements are in e-frame.
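The transformation chain described above can be sketched with homogeneous 4x4 matrices; the compositions below are one plausible reading of (20) and (21), since the exact expressions are not reproduced in this text:

```python
import numpy as np

def align_rover_to_reference(T_rover_to_c, T_ref_to_c):
    """One plausible reading of (20): map the rover scan into the reference
    scan's l-frame via the shared c-frame (both inputs are 4x4 KITTI poses)."""
    return np.linalg.inv(T_ref_to_c) @ T_rover_to_c

def translation_to_target(target_e):
    """Pure-translation matrix moving a scan origin to Target's e-frame
    coordinates, as used for the georeferencing step (cf. (21))."""
    T = np.eye(4)
    T[:3, 3] = np.asarray(target_e, dtype=float)
    return T

def apply_transform(T, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    hom = np.hstack([points, np.ones((len(points), 1))])
    return (T @ hom.T).T[:, :3]
```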

VI Results

In this section, we present the experimental results in terms of the positioning accuracy and IAR performance of the proposed lidar-aided ambiguity resolution method. Using the GNSS and lidar data prepared in Section V, the proposed method is tested by resolving the position of Target for 1800 epochs using GPS (L1)+lidar data to simulate the limited satellite visibility in urban environments. In addition, other positioning approaches using GPS (L1), GPS (L1/L2), GPS+QZSS (L1/L2) and Lidar-only measurements are used for comparison. In terms of the precision of the GNSS code and carrier phase measurements, and are chosen as 0.2 m and 0.002 m, respectively, as the GNSS data were collected in a controlled environment even though a low-cost receiver was used. The RMS residual distance used for weighting the lidar measurements is computed as approximately 0.15 m on average.

VI-A Keypoint matching accuracy

Keypoint matching to produce the lidar measurements is the first step of the proposed positioning method. Feature vectors are computed for all points in the scans using MS-SVConv, and 3000 pairs of them per epoch are randomly selected to estimate the transformation aligning the rover scan to the reference scan using RANSAC [4]. Instead of training and testing the deep learning model on the same dataset, we use the model pre-trained on the ETH dataset [23] and perform inference on the KITTI lidar data to test the transferability of MS-SVConv. In order to numerically evaluate the accuracy of the matched keypoints, we use the SRE (scaled registration error) [5], a scalar measure of registration accuracy that considers both rotational and translational errors. For a registered point cloud and its ground truth counterpart with corresponded points, namely and , given that is the geometric centroid of , the SRE is computed with

(22)

where denotes the Euclidean norm. Since the registration error of each point is scaled by its distance to the centroid, SRE reflects the phenomenon that rotational errors have a larger impact on points further from the laser scanner. Registration is successfully conducted for all of the 1800 pairs of rover and reference scans, averaging 134 keypoints per epoch, with a minimum of 44. Fig. 8 shows the distribution of the SRE values, in which 1044 epochs have an SRE smaller than 0.005, while the maximum is 0.027. In comparison, Fontana et al. [5] obtained a mean SRE of 0.408 using the Iterative Closest Point (ICP) algorithm on multiple public datasets. Therefore, all of the registrations are considered accurate, and MS-SVConv exhibits good transferability to point clouds from different environments.
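For reference, a literal reading of the SRE description above can be implemented as below: each point's registration error is divided by its distance to the centroid of the ground-truth cloud and the result is averaged; the exact normalization of (22) in [5] may differ in detail.

```python
import numpy as np

def scaled_registration_error(registered, ground_truth):
    """Scaled registration error (a literal reading of the description of (22)).

    registered   : (N, 3) points of the registered cloud
    ground_truth : (N, 3) corresponding points of the ground-truth cloud
    """
    registered = np.asarray(registered, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    centroid = ground_truth.mean(axis=0)                        # geometric centroid
    errors = np.linalg.norm(registered - ground_truth, axis=1)  # per-point error
    scales = np.linalg.norm(ground_truth - centroid, axis=1)    # distance to centroid
    return float(np.mean(errors / scales))
```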

Fig. 8: Distribution of SRE values of the tested scans registered using MS-SVConv.

VI-B Positioning accuracy

To demonstrate the accuracy of the tested positioning approaches, we compute the RMSE (root mean squared error) of the solutions with respect to the ground truth. Lidar-only positioning solutions are produced using the generated lidar measurements, since the estimated translational parameters are equivalent to the sought unknown positions. For the GNSS-involved methods, we use an acceptance test in which the integer ambiguities are fixed only when the formal ambiguity success rate is evaluated as equal to or above 99.9%; otherwise, the float solutions are retained.

Positioning Method | Horizontal RMSE [m] | Vertical RMSE [m] | 3D RMSE [m]
GPS (L1) | 0.648 | 1.190 | 1.355
GPS (L1/L2) | 0.052 | 0.065 | 0.083
GPS+QZSS (L1/L2) | 0.009 | 0.018 | 0.020
Lidar-only | 0.031 | 0.021 | 0.038
GPS (L1)+lidar | 0.008 | 0.015 | 0.017
TABLE I: Horizontal, Vertical and 3D RMSE of the Positioning Solutions from All Tested Methods (ambiguities are fixed when the formal success rate is at least 99.9%).

Positioning errors per epoch, time series of the number of satellites and the ADOP of the tested methods are provided in Fig. 9. In addition, Table I lists the horizontal, vertical and 3D RMSE of the 5 positioning approaches. Over the positioning duration, 6 to 7 GPS satellites and 3 QZSS satellites could be tracked. According to the ADOP analysis in Section IV, at least 8 satellites are required to achieve a success rate of 99.9% with one frequency. This corresponds with the GPS (L1) results in Fig. 9(a), in which all of the solutions are float because of the high ADOP (or low success rates). Fig. 3(a) also suggested that the minimum number of satellites needed for an ADOP of 0.12 cycles with two frequencies is 5, which agrees with Fig. 9(b), in which all solutions are fixed for GPS (L1/L2) since sufficient satellites are observed. However, two epochs with wrongly fixed solutions are found when the number of satellites decreases from 7 to 6. Unsurprisingly, the accuracy of GPS (L1) is the lowest of all, giving a meter-level 3D RMSE of 1.355 m, whereas GPS (L1/L2) improves it to 0.083 m by having more observations to enable IAR. In comparison, GPS+QZSS (L1/L2) can successfully fix the integer ambiguities for all epochs thanks to the additional satellites, significantly improving the precision of the positioning results. The horizontal and 3D RMSE also decrease dramatically to 0.009 m and 0.02 m, offering millimeter- to centimeter-level accuracy.

Moving on to the positioning methods that integrate lidar measurements: although the lidar registration is considered highly accurate in this experiment (Fig. 8), the horizontal, vertical and 3D RMSE of the Lidar-only positioning solutions are 0.031 m, 0.021 m and 0.038 m, respectively, which is less accurate than GPS+QZSS (L1/L2). However, as predicted in Fig. 5(c), with abundant lidar keypoints of decimeter-level precision and more than two GNSS satellites, single-frequency IAR is feasible. Fig. 9(e) shows that by integrating lidar and GPS (L1) observations, the ADOP is always below 0.12 cycles and all ambiguities are correctly fixed, giving a positioning accuracy that is even higher than that of GPS+QZSS (L1/L2) thanks to the contribution of lidar, with horizontal, vertical and 3D RMSE of 0.008 m, 0.015 m and 0.017 m.

Fig. 9: East-North-Up errors, numbers of satellites and ADOP of the positioning results from all tested methods (for (a) to (e), ambiguities are fixed when the formal success rate is at least 99.9%). Grey: float solutions; red: wrongly-fixed solutions; green: correctly-fixed solutions; purple: positioning solutions for the Lidar-only method. (a) GPS (L1) results. (b) GPS (L1/L2) results. (c) GPS+QZSS (L1/L2) results. (d) Lidar-only results. (e) GPS (L1)+lidar results. (f) GPS (L1) results by fixing all ambiguities.

Fig. 10 presents the cumulative distribution functions (CDF) of the horizontal and 3D errors of the tested methods. The superiority of the proposed lidar-aided method in terms of positioning accuracy is clearly shown, outperforming all the other positioning methods. Moreover, to demonstrate the precision improvement of the GPS (L1)+lidar solutions brought by the lidar-aided ambiguity resolution, we evaluate the square-root precision gain in the East-North-Up directions in Fig. 11, which shows how many times the precision of the positioning solutions increases by fixing the integer ambiguities. Due to the considerably larger number of measurements and the higher precision of lidar compared with the code observations, the float solutions are almost the same as the Lidar-only ones, which can be observed from Fig. 9(d) and 9(e). However, by correctly fixing the integer ambiguities, the carrier phase observations further improve the positioning precision by around 6 times horizontally and 2 times vertically, which explains the higher accuracy and precision of the GPS (L1)+lidar results compared with Lidar-only, as shown in Table I and Fig. 9(e). Note that the precision of the Lidar-only solutions in the three directions is homogeneous in this experiment, and the lower vertical precision gain is caused by the lower precision of the GNSS observations in the Up direction.

Fig. 10: Cumulative distribution functions of the 2D and 3D errors of the positioning solutions from all tested methods. (a) 2D CDF. (b) 3D CDF.
Fig. 11: Square-root precision gain of the GPS (L1)+lidar positioning solutions (ambiguities are fixed when the formal success rate is at least 99.9%).

VI-C Ambiguity resolution performance

In order to assess the ambiguity resolution performance in terms of the proportion of correctly fixed epochs, full ambiguity resolution is applied by removing the acceptance test and forcing the resolved integer ambiguities to be fixed in all epochs for the demonstrated positioning methods. Fig. 9(f) shows the positioning errors per epoch, the number of tracked satellites and the ADOP time series of the GPS (L1) results, with the empirical success rate computed as 22.7% and the mean formal success rate evaluated as 48.2%. Note that the GPS+QZSS (L1/L2) and Lidar-only results are not applicable for empirical success rate evaluation, since the former provides the benchmark ambiguities and the latter does not utilize GNSS observations. The results of GPS (L1/L2) and the proposed method, namely GPS (L1)+lidar, are also omitted as they are identical to those in Fig. 9(b) and 9(e), since all fixed solutions are already accepted when the acceptance test is present. The empirical success rates of these two methods are 99.9% and 100%, respectively, while their formal success rates are both computed as above 99.9%. We previously concluded that IAR with single-frequency GNSS-only observations from fewer than 8 satellites is not feasible, which is reflected here by the low empirical success rate of GPS (L1). In comparison, the integration of lidar data substantially increases the empirical success rate without requiring additional GNSS measurements and correctly fixes all the integer ambiguities, while keeping the ADOP below 0.12 cycles and achieving ambiguity resolution performance comparable to that of the GNSS-only model with dual-frequency data.

VII Discussion

VII-A Quality of lidar measurements and HD map

Although we have shown analytically that decimeter-level precision of the lidar measurements can enable successful IAR (Fig. 5), one limitation of our experiment is that we have used highly accurate lidar data, as the SRE of the registered keypoints suggests (Fig. 8). In terms of the HD map, the reference scans are georeferenced with the validated ground truth information provided in the KITTI dataset. In practice, for large-scale HD map products of urban road environments, such accurate georeferencing can be difficult to achieve, and the maps may be produced with larger errors, decreasing the accuracy of the derived lidar measurements. On the other hand, since the reference scans are meant to be acquired at a previous time, the registration accuracy may be influenced by environmental differences between the rover and reference scans if the HD map is not up-to-date, which is not reflected in our experiment. Furthermore, due to the lack of GNSS raw observations synchronized with the lidar data in KITTI, the rover-reference scan pairs are pre-assigned with a 3 s interval, which corresponds to a distance of a few meters. In reality, the nearest reference scan in the HD map should be identified using real-time position estimates of the vehicle from less demanding techniques such as SPP.

VII-B Number of lidar keypoints and empirical success rate

It has been demonstrated in Section IV that the number of correctly matched keypoints does not need to be large to ensure the ambiguity success rate upper bound of 99.9%, provided that the precision of the lidar measurements is at the decimeter level, or that a few satellites can be tracked. In order to determine a recommended number of lidar keypoints for consistently successful IAR in a practical environment, we have repeated the experiment using the GPS (L1)+lidar positioning setup with the number of lidar keypoints limited to a range between 5 and 45, applying full ambiguity resolution. The empirical success rates against different numbers of keypoints are shown in Fig. 12, which indicates that the empirical success rate exceeds 99% with only 10 keypoints, and rises above 99.9% when 35 or more keypoints are used. It should be remarked that this number of correspondences between registered rover and reference scans can be easily obtained, as the empirical minimum number of keypoints per epoch in our experiment is 44.

Fig. 12: Empirical success rates using different numbers of keypoints and full ambiguity resolution.

VII-C Runtime efficiency

The keypoint matching and positioning stages of the proposed method are realized with the Torch-Points3D [2] implementation of MS-SVConv and with MATLAB [20], respectively. On a platform consisting of an AMD Ryzen 3800XT CPU and an NVIDIA RTX 3070 GPU, the former takes approximately 0.85 s and the latter 0.05 s to complete the computation for each epoch. Therefore, the proposed instantaneous lidar-aided ambiguity resolution method has the potential for real-time vehicle positioning.

VIII Concluding remarks

In this contribution we proposed an instantaneous lidar-aided ambiguity resolution method focusing on vehicle positioning in urban canyons, where GNSS signals are prone to blockage and multipath. The lidar measurements are generated by a keypoint extraction strategy via learning-based point cloud registration between rover scans and reference scans from a pre-built HD map. A mixed measurement model is employed to integrate such lidar measurements with their DD GNSS counterparts to obtain precise float solutions and enable instantaneous IAR. Closed-form expressions of the ambiguity variance matrix and the corresponding ADOP are developed to provide an a priori evaluation of the ambiguity resolution performance in terms of the numbers of available satellites and keypoints, as well as the precision of the measurements (cf. (17)).

Our analytical study has shown that when limited GNSS satellites and/or frequencies are accessible, which is often the case in urban environments, the proposed lidar-aided method can significantly reduce the ADOP value compared with the GNSS-only approach, thus enabling successful instantaneous IAR (Fig. 2). Moreover, a moderate number of lidar keypoints can reduce the number of satellites that need to be tracked, to the extent that IAR becomes feasible with single-frequency data from only 2 or 3 satellites (Fig. 5). The numerical results from a simulated experiment illustrate that the proposed method achieves an empirical ambiguity success rate of 100% and a 3D positioning RMSE of 0.017 m using GPS L1 and lidar measurements, thereby outperforming both the Lidar-only and GPS+QZSS (L1/L2) positioning methods (Fig. 9). Future work will undertake real-world experiments to further examine the performance of the proposed method in GNSS-challenged urban canyons.

Acknowledgments

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The first author acknowledges the financial support from The University of Melbourne through the Melbourne Research Scholarship.

Appendix: Supplementary proofs

An application of the determinant factorization rule [15, 21] to the ambiguity variance matrix (14) gives

(23)

Substitution of the identities ,  [21], and the phase-to-code variance ratio into (23) yields

(24)

The above expression for the determinant of the ambiguity variance matrix, together with the ADOP definition (15), gives

(25)

For the GNSS-only case, we have . This simplifies the last term in (25) to , from which the ADOP of the GNSS-only model (16) follows. For the integrated GNSS-lidar case however, we have instead (cf. 13). The first expression of the ADOP-ratio (17) follows then by substituting into , showing that

(26)

Finally, the second expression of the ADOP-ratio (17) follows from the definition of the generalized eigenvalues (18), that is

(27)

References

  • [1] G. Blewitt (1989) Carrier phase ambiguity resolution for the Global Positioning System applied to geodetic baselines up to 2000 km. Journal of Geophysical Research: Solid Earth 94 (B8), pp. 10187–10203.
  • [2] T. Chaton, N. Chaulet, S. Horache, and L. Landrieu (2020) Torch-Points3D: a modular multi-task framework for reproducible deep learning on 3D point clouds. In 2020 International Conference on 3D Vision (3DV), pp. 1–10.
  • [3] K. Chiang, G. Tsai, H. Chu, and N. El-Sheimy (2020) Performance enhancement of INS/GNSS/Refreshed-SLAM integration for acceptable lane-level navigation accuracy. IEEE Transactions on Vehicular Technology 69 (3), pp. 2463–2476.
  • [4] M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381–395.
  • [5] S. Fontana, D. Cattaneo, A. L. Ballardini, M. Vaghi, and D. G. Sorrenti (2021) A benchmark for point clouds registration algorithms. Robotics and Autonomous Systems 140, pp. 103734.
  • [6] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research (IJRR).
  • [7] S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu (2020) A survey of deep learning techniques for autonomous driving. Journal of Field Robotics 37 (3), pp. 362–386.
  • [8] C. Gunther and P. Henkel (2012) Integer ambiguity estimation for satellite navigation. IEEE Transactions on Signal Processing 60 (7), pp. 3387–3393.
  • [9] H. V. Henderson, F. Pukelsheim, and S. R. Searle (1983) On the history of the Kronecker product. Linear and Multilinear Algebra 14 (2), pp. 113–120.
  • [10] B. Hofmann-Wellenhof, H. Lichtenegger, and E. Wasle (2008) GNSS – global navigation satellite systems: GPS, GLONASS, Galileo, and more. Springer, New York.
  • [11] S. Horache, J. Deschaud, and F. Goulette (2021) 3D point cloud registration with multi-scale architecture and unsupervised transfer learning. In 2021 International Conference on 3D Vision (3DV), pp. 1351–1361.
  • [12] N. Joubert, T. G. R. Reid, and F. Noble (2020) Developments in modern GNSS and its impact on autonomous vehicle architectures. In 2020 IEEE Intelligent Vehicles Symposium (IV), pp. 2029–2036.
  • [13] A. Khodabandeh, S. Zaminpardaz, and N. Nadarajah (2021) A study on multi-GNSS phase-only positioning. Measurement Science and Technology 32 (9), pp. 095005.
  • [14] A. Khodabandeh and P. J. G. Teunissen (2018) On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases. Journal of Geodesy 92 (6), pp. 637–658.
  • [15] K. Koch (1999) Parameter estimation and hypothesis testing in linear models. Springer, Berlin.
  • [16] R. B. Langley, P. J. G. Teunissen, and O. Montenbruck (2017) Introduction to GNSS. In Springer Handbook of Global Navigation Satellite Systems, pp. 3–23.
  • [17] W. Li, X. Cui, and M. Lu (2020) High-precision positioning and mapping using feature-based RTK/LiDAR/INS integrated system for urban environments. In Proceedings of the 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2020), pp. 2628–2640.
  • [18] W. Li, G. Liu, X. Cui, and M. Lu (2021) Feature-aided RTK/LiDAR/INS integrated positioning system with parallel filters in the ambiguity-position-joint domain for urban environments. Remote Sensing 13 (10), pp. 2013.
  • [19] R. Liu, J. Wang, and B. Zhang (2020) High definition map for automated driving: overview and analysis. The Journal of Navigation 73 (2), pp. 324–341.
  • [20] MATLAB (2021) 9.11.0.1837725 (r2021b). The MathWorks Inc., Natick, Massachusetts. Cited by: §VII-C.
  • [21] D. Odijk and P. J. G. Teunissen (2008) ADOP in closed form for a hierarchy of multi-frequency single-baseline GNSS models. Journal of Geodesy 82 (8), pp. 473. Cited by: §I, §IV-B, Acknowledgments.
  • [22] R. Odolinski and P. J. G. Teunissen (2017) Low-cost, high-precision, single-frequency GPS–BDS RTK positioning. GPS Solutions 21 (3), pp. 1315–1330. Cited by: §I.
  • [23] F. Pomerleau, M. Liu, F. Colas, and R. Siegwart (2012) Challenging data sets for point cloud registration algorithms. The International Journal of Robotics Research 31 (14), pp. 1705–1711. Cited by: §VI-A.
  • [24] C. Qian, H. Zhang, W. Li, B. Shu, J. Tang, B. Li, Z. Chen, and H. Liu (2020) A LiDAR aiding ambiguity resolution method using fuzzy one-to-many feature matching. Journal of Geodesy 94 (10), pp. 1–18. Cited by: §I.
  • [25] C. Qian, H. Zhang, W. Li, J. Tang, H. Liu, and B. Li (2020) Cooperative GNSS-RTK ambiguity resolution with GNSS, INS, and LiDAR data for connected vehicles. Remote Sensing 12 (6), pp. 949. Cited by: §I.
  • [26] R. G. R., S. E. Houts, R. Cammarata, G. Mills, S. Agarwal, A. Vora, and G. Pandey (2019) Localization requirements for autonomous vehicles. SAE International Journal of Connected and Automated Vehicles 2 (3), pp. 173–190. External Links: Document, Link Cited by: §I.
  • [27] P. J. G. Teunissen (1995) The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation. Journal of Geodesy 70, pp. 1–2. Cited by: §I, §III-C.
  • [28] P. J. G. Teunissen (1997) A canonical theory for short GPS baselines. Part IV: precision versus reliability. Journal of Geodesy 71 (9), pp. 513–525. Cited by: §I, §IV-B, §IV-B, §IV.
  • [29] P. J. G. Teunissen (1999) An optimality property of the integer least-squares estimator. Journal of Geodesy 73 (11), pp. 587–593. Cited by: §I.
  • [30] P. J. G. Teunissen (2000) Adjustment theory: an introduction. Delft University Press. Note: Series on Mathematical Geodesy and Positioning Cited by: §II, §III-B, §IV-A.
  • [31] P. J. G. Teunissen (2000) ADOP based upper bounds for the bootstrapped and the least squares ambiguity success. Artificial Satellites 35 (4), pp. 171–179. Cited by: §IV-B.
  • [32] P. J. G. Teunissen (2017) Carrier phase integer ambiguity resolution. In Springer handbook of global navigation satellite systems, pp. 661–685. Cited by: §I, §III-C.
  • [33] Velodyne LiDAR HDL-64E high definition real-time 3D LiDAR. Note: https://autonomoustuff.com/products/velodyne-hdl-64eAccessed: 2021-11-01 Cited by: §IV-B.
  • [34] S. Verhagen, B. Li, and P. J. G. Teunissen (2013) Ps-LAMBDA: ambiguity success rate evaluation software for interferometric applications. Computers & Geosciences 54, pp. 361–376. Cited by: §IV.
  • [35] S. Verhagen and B. Li (2012) LAMBDA software package: matlab implementation, version 3.0. Delft University of Technology and Curtin University, Perth, Australia. Cited by: §III-C.
  • [36] S. Verhagen (2005) On the reliability of integer ambiguity resolution. Navigation 52 (2), pp. 99–110. Cited by: §I, §IV.
  • [37] W. Wen, G. Zhang, and L. Hsu (2019) Correcting NLOS by 3D LiDAR and building height to improve GNSS single point positioning. Navigation 66 (4), pp. 705–718. Cited by: §I.
  • [38] W. Wen, G. Zhang, and L. Hsu (2020) Object-detection-aided GNSS and its integration with lidar in highly urbanized areas. IEEE Intelligent Transportation Systems Magazine 12 (3), pp. 53–69. Cited by: §I.
  • [39] W. Wen, G. Zhang, and L. Hsu (2021) GNSS outlier mitigation via graduated non-convexity factor graph optimization. IEEE Transactions on Vehicular Technology. External Links: Document Cited by: §I.
  • [40] J. Xiong, J. W. Cheong, Z. Xiong, A. G. Dempster, M. List, F. Wöske, and B. Rievers (2020) Carrier-phase-based multi-vehicle cooperative positioning using V2V sensors. IEEE Transactions on Vehicular Technology 69 (9), pp. 9528–9541. External Links: Document Cited by: §I.
  • [41] J. Zhang, W. Wen, F. Huang, X. Chen, and L. Hsu (2021) Continuous GNSS-RTK aided by LiDAR/inertial odometry with intelligent GNSS selection in urban canyons. In Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), pp. 4198–4207. Cited by: §I.
  • [42] J. Zhang, K. Khoshelham, and A. Khodabandeh (2021) Seamless vehicle positioning by lidar-GNSS integration: standalone and multi-epoch scenarios. Remote Sensing 13 (22). External Links: Link, ISSN 2072-4292, Document Cited by: §I, §II, §III-B, §IV-A.
  • [43] Z. Zhang, Y. Dai, and J. Sun (2020) Deep learning based point cloud registration: an overview. Virtual Reality & Intelligent Hardware 2 (3), pp. 222–246. Cited by: §I.