Biographies
Sriramya Bhamidipati
is a Ph.D. student in the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign, where she also received her master's degree in 2017. She obtained her B.Tech. in Aerospace from the Indian Institute of Technology Bombay in 2015. Her research interests include GPS, power and control systems, artificial intelligence, computer vision and unmanned aerial vehicles.
Grace Xingxin Gao is an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. Before joining Stanford University, she was an assistant professor at the University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree at Stanford University. Her research is on robust and secure positioning, navigation and timing with applications to manned and unmanned aerial vehicles, robotics and power systems.
Abstract
Urban navigation using GPS and a fisheye camera suffers from multipath effects in GPS measurements and data association errors in pixel intensities across image frames. We propose a Simultaneous Localization and Mapping (SLAM)-based Integrity Monitoring (IM) algorithm to compute the position protection levels while accounting for multiple faults in both GPS and vision. We perform graph optimization using the sequential data of GPS pseudoranges, pixel intensities, vehicle dynamics and satellite ephemeris to simultaneously localize the vehicle as well as the landmarks, namely the GPS satellites and key image pixels, in the world frame. We estimate the fault mode vector by analyzing the temporal correlation across the GPS measurement residuals and the spatial correlation across the vision intensity residuals. In particular, to detect and isolate the vision faults, we developed a superpixel-based piecewise Random Sample Consensus (RANSAC) technique to perform spatial voting across image pixels. For an estimated fault mode, we compute the protection levels by applying worst-case failure slope analysis to the linearized GraphSLAM framework.
We perform ground vehicle experiments in the semi-urban area of Champaign, IL and demonstrate the successful detection and isolation of multiple faults. We also validate the tighter protection levels and lower localization errors achieved via the proposed algorithm as compared to SLAM-based IM that utilizes only GPS measurements.
1 Introduction
Integrity Monitoring (IM) serves as an important performance metric to assess the estimated navigation solution [ochieng2003gps]. Vehicles operating in urban areas face challenges [joerger2017towards] due to static infrastructure, such as buildings and thick foliage; dynamic obstacles, such as traffic and pedestrians; and environmental conditions, such as shadows, sunlight and weather. GPS receivers obtain fewer measurements in urban environments due to degraded satellite visibility. They also suffer from received signal faults caused by multipath and satellite faults caused by anomalies in the broadcast navigation message.
To address the above-mentioned challenges, one possible solution is to incorporate additional redundancy through the sensor fusion of GPS and vision. Vision sensors perform well in urban areas due to the feature-rich surroundings [hol2011sensor]. Sensor fusion [krishnaswamy2008sensor] integrates measurements from multiple sensors to improve the accuracy of the vehicle's navigation solution and provide robust performance, since the inherent limitations of individual sensors, such as GPS and camera, are reliably corrected by combining these complementary sensors in a sensor fusion framework. However, vision measurements have faults of their own: occlusion and illumination variations in multiple pixel intensities induce data association errors across images, termed vision faults [miro2006towards]. Therefore, there is a need for the development of sensor-fusion-based IM techniques that account for multiple faults in both GPS and vision.
Rich literature exists on urban IM approaches for GPS-based navigation systems that utilize external information sources. In [velaga2012map], the authors developed a sequential map-aided IM technique that checks for outliers in position and in the Geographic Information System (GIS) using traditional RAIM [walter2008shaping] and a weight-based topological map-matching process, respectively. Another paper [binjammaz2013gps] developed three phases of integrity checks that assess the position quality via traditional Receiver Autonomous Integrity Monitoring (RAIM), speed integrity via GPS Doppler and map-matching accuracy via a fuzzy inference system. However, these approaches have practical limitations because an offline map database is not always available and its accuracy cannot be guaranteed due to the dynamic changes in urban surroundings. Another line of prior work [li2017lane, toledo2009lane] utilizes the odometry information obtained from Dead-Reckoning (DR) sensors, such as inertial measurement units, wheel speed encoders and cameras, to perform GPS-based IM. The drawbacks of these approaches are that they do not address the faults associated with the DR sensors, and also do not account for the simultaneous occurrence of faults across multiple sensor sources.
In this paper, we leverage the generalized and flexible platform developed in our prior work [bhamidipati2018multiple], namely Simultaneous Localization and Mapping (SLAM)-based Fault Detection and Isolation (FDI), as the basis for assessing sensor fusion integrity. Another extension of the SLAM-based FDI platform, described in our prior work [bhamidipati2018receivers], assesses the integrity of cooperative localization using a network of receivers. SLAM [cadena2016past], a well-known technique in robotics, utilizes sensor measurements to estimate the landmarks in a three-dimensional (3D) map while simultaneously localizing the robot within it. Analogous to this, our prior work [bhamidipati2018multiple] on SLAM-based FDI combines the sequential data of GPS measurements, the receiver motion model and the satellite orbital model in a graph framework to simultaneously localize the robot, which is the GPS receiver, and the landmarks in the map, which are the GPS satellites.
A key feature of this platform is that it utilizes graph optimization techniques [latif2014robust] and therefore does not require any prior assumption regarding the distribution of states. Given that we localize the landmarks as well, our SLAM-based FDI does not require any prior information regarding the surrounding 3D maps.
We propose a SLAM-based IM algorithm using GPS and a fisheye camera to compute the error bounds, termed protection levels, of the estimated navigation solution by applying worst-case failure slope analysis [salos2013receiver, joerger2014solution] to the GraphSLAM framework. In this work, we consider the GPS satellites as global landmarks and the key image pixels as additional local landmarks in the world frame. Here, the world frame represents the Earth-Centered, Earth-Fixed (ECEF) frame. We simultaneously update the state vectors of the vehicle, GPS satellites and key image pixels and thereafter perform multiple FDI. We constrain the graph via GPS pseudoranges, raw fisheye images, vehicle dynamics and satellite ephemeris.
For vision measurements, we opt for a fisheye camera mounted on the vehicle and pointed upwards for the following reasons. Firstly, given its wide Field-Of-View (FOV), the image pixels are spatially spread out in different directions with respect to the vehicle, thereby compensating for the restricted spatial geometry of the limited global landmarks, i.e., GPS satellites. Secondly, given that the camera points upwards, the unstructured skyline of the buildings aids in resolving the attitude of the vehicle. Thirdly, the fisheye image captures the open-sky section with respect to the vehicle, which is used to distinguish the Line-Of-Sight (LOS) GPS satellites from the Non-Line-Of-Sight (NLOS) ones [shytermeja2014proposed].
The rest of the paper is organized as follows: Section II describes our SLAM-based IM algorithm that utilizes GPS and a fisheye camera; Section III experimentally validates the proposed algorithm in performing multiple FDI of GPS and vision faults and assessing the corresponding localization accuracy and size of protection levels; Section IV concludes the paper.
2 SLAM-based IM using GPS and Fisheye Camera
We outline the high-level architecture and later explain the details of the proposed algorithm. In this work, we focus on the measurement faults that are more frequently observed in urban areas, namely GPS and vision faults. Even though the formulation of the proposed algorithm is capable of addressing other faults, for simplicity, we consider no measurement faults associated with the receiver motion model and the satellite orbital model. For reference, details regarding addressing satellite faults using SLAM-based FDI are described in our prior work [bhamidipati2018multiple]. In Fig. 1, we show the architecture of our SLAM-based IM algorithm using GPS and fisheye camera, which is summarized as follows:

During initialization, we initialize a 3D map using the PVT of the receiver and satellites computed via an established GPS receiver algorithm [lashley2010valid]. We set the initial value of all GPS measurement fault statuses to a neutral value. For the vision module, we perform an initial calibration to estimate the scaling from the image frame to the global frame.

Firstly, we preprocess the raw image obtained from the fisheye camera using our hybrid sky detection algorithm to distinguish the sky pixels from the non-sky pixels. The detected sky pixels are used to distinguish the LOS and NLOS satellites and thereafter formulate the corresponding GPS measurement covariance.

We consider the non-sky pixels along with GPS pseudoranges and carrier-to-noise density values, the receiver motion model and the satellite orbital model as input measurements to our algorithm. We combine the measurements in an extended graph optimization module to estimate the overall state vector, which consists of the state vector of the vehicle, GPS satellites and key image pixels, using an M-estimator [shevlyakov2008redescending]-based Levenberg-Marquardt algorithm [lourakis2005brief].

We independently analyze the measurement residuals against an empirical distribution to detect and isolate GPS faults. We develop a superpixel-based [li2015superpixel] piecewise Random Sample Consensus (RANSAC) [conte2009vision] technique to perform spatial voting for the detection and isolation of vision faults. Based on the estimated fault status of the measurements, we estimate the measurement fault mode, which has binary entries indicating whether each measurement is non-faulty or faulty.

Finally, utilizing the estimated fault mode and overall state vector, we formulate the failure slope for the GraphSLAM framework and subsequently compute the protection levels using worst-case failure mode slope analysis [salos2013receiver, joerger2014solution].
The proposed SLAM-based IM algorithm using GPS and fisheye camera consists of three main modules, namely measurement preprocessing, extended graph optimization and IM for GraphSLAM. We describe the details as follows:
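The M-estimator-based Levenberg-Marquardt optimization at the heart of the pipeline can be sketched on a toy problem. The sketch below uses a simple linear model y = a*x + b in place of the paper's full GraphSLAM cost; the Huber weight, the damping value and the model itself are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber M-estimator weight: 1 for small residuals, k/|r| beyond k."""
    r = np.abs(r)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

def lm_robust_fit(x, y, theta0, n_iter=50, lam=1e-3):
    """Damped (Levenberg-Marquardt-style), Huber-reweighted least squares
    for the toy linear model y = a*x + b."""
    theta = np.asarray(theta0, float)
    J = np.column_stack([x, np.ones_like(x)])   # Jacobian of a*x + b
    for _ in range(n_iter):
        r = y - J @ theta                        # current residuals
        w = huber_weight(r)                      # down-weight outliers
        H = J.T @ (w[:, None] * J) + lam * np.eye(2)  # damped normal matrix
        theta = theta + np.linalg.solve(H, J.T @ (w * r))
    return theta
```

A gross outlier, analogous to a multipath-corrupted pseudorange, is down-weighted by the Huber function, so the fit stays close to the inlier model.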
2.1 Preprocessing the measurements
We consider the following measurements as inputs to our SLAM-based IM algorithm: GPS pseudoranges and carrier-to-noise density values from the GPS receiver, pixel intensities from the fisheye image, the control input obtained from the vehicle motion model and the satellite ephemeris decoded from the navigation message.
1. Vision module:
We preprocess the raw image obtained from the fisheye camera using our hybrid sky detection algorithm to distinguish the sky pixels from the non-sky pixels. The pipeline of our hybrid sky detection is seen in Fig. 1(a). Our hybrid sky detection takes into account not only the pixel intensities but also prior knowledge regarding the spatial location of the sky pixels.
We convert the raw image to grayscale and then perform a median blur operation. The median blur [wang2010new] is a nonlinear filter that reduces the noise in the image while keeping the edges relatively sharp. Next, we compute the gradient by combining the magnitudes obtained via two Sobel operations [gao2010improved], one executed in the horizontal and the other in the vertical direction. An example of the image obtained from a fisheye camera operating in urban areas and pointing upwards is seen in Fig. 1(b).
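As a concrete illustration of the gradient step above, the two Sobel responses can be combined into a single magnitude image. A minimal NumPy sketch (nearest-edge padding is an implementation choice, not part of the paper):

```python
import numpy as np

def sobel_gradient_magnitude(gray):
    """Correlate with horizontal and vertical Sobel kernels and
    combine the two responses into a gradient magnitude."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])   # responds to horizontal changes
    ky = kx.T                            # responds to vertical changes
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                   # 3x3 correlation, written explicitly
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```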
We observe that the probability of sky is highest close to the center and decreases exponentially outwards [haque2008hybrid]. Therefore, the corresponding location-based sky probability is given by
(1)
where the 2D image coordinates are drawn from the predefined domain of the image, the cardinality of the image domain acts as a normalization term, and the 2-norm residual is taken with respect to the predefined center coordinates in the 2D image frame.
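Since Eq. (1) is not reproduced here, the sketch below assumes one plausible realization of the stated behavior: a prior that peaks at the image centre and decays exponentially with the 2-norm distance from it. The decay length `scale` is an illustrative parameter, not the paper's normalization.

```python
import numpy as np

def location_sky_prior(h, w, scale=None):
    """Location-based sky prior: 1 at the image centre, decaying
    exponentially with distance from it (assumed form of Eq. (1))."""
    if scale is None:
        scale = 0.25 * (h + w)              # assumed decay length
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0   # predefined centre coordinates
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)          # 2-norm residual from the centre
    return np.exp(-r / scale)
```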
Combining the location probability with Otsu's method of intensity thresholding [moghaddam2012adotsu], we compute the hybrid optimal border, seen in Fig. 1(c), that separates the sky region from the infrastructure region. We minimize the weighted within-class variance of the sky and infrastructure intensities to estimate Otsu's intensity threshold as
(2)
where

the intensity vector stacks all the pixel intensities in the fisheye image;

the class weights are those associated with the sky and the building infrastructure, respectively;

and the class variances are those of the pixel intensities associated with the sky and the building infrastructure, respectively.
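Otsu's threshold of Eq. (2) can be computed by scanning all candidate thresholds and minimizing the weighted within-class variance of the two classes. A histogram-based NumPy sketch, assuming an 8-bit intensity range:

```python
import numpy as np

def otsu_threshold(intensities, nbins=256):
    """Otsu's method: pick the threshold minimizing the weighted
    within-class variance of the two intensity classes (Eq. (2))."""
    hist, _ = np.histogram(intensities, bins=nbins, range=(0, 256))
    p = hist / hist.sum()                       # intensity probabilities
    best_t, best_var = 0, np.inf
    for t in range(1, nbins):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, nbins) * p[t:]).sum() / w1
        var0 = ((np.arange(t) - mu0) ** 2 * p[:t]).sum() / w0
        var1 = ((np.arange(t, nbins) - mu1) ** 2 * p[t:]).sum() / w1
        within = w0 * var0 + w1 * var1          # within-class variance
        if within < best_var:
            best_var, best_t = within, t
    return best_t
```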
Utilizing Eqs. (1) and (2), we compute the hybrid sky probability at any 2D image coordinate as
(3)
where the maximum and minimum intensity values in the fisheye image normalize the pixel intensity. If the hybrid sky probability at a pixel exceeds the predefined sky threshold, the pixel is categorized as a sky pixel, and as a non-sky pixel otherwise. The sky-enclosed area in the fisheye image is seen in Fig. 1(d).
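The exact fusion in Eq. (3) is not reproduced here; the sketch below assumes a simple product of the location prior and the min-max-normalized intensity, gated by the Otsu border and thresholded at a sky probability `p_sky`, purely for illustration:

```python
import numpy as np

def hybrid_sky_mask(gray, p_loc, t_otsu, p_sky=0.5):
    """Assumed fusion of Eqs. (1)-(2) into Eq. (3): product of the
    location prior and normalized intensity, gated by the Otsu border."""
    i_min, i_max = float(gray.min()), float(gray.max())
    p_int = (gray - i_min) / max(i_max - i_min, 1e-9)  # min-max normalization
    p_hybrid = p_loc * p_int
    # sky pixels: high hybrid probability AND brighter than the Otsu border
    return (p_hybrid > p_sky) & (gray >= t_otsu)
```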
Next, using the detected non-sky pixels, we describe the vision measurement model in Eq. (4), which is formulated via the omnidirectional camera model [caruso2015large] and direct image alignment [engel2014lsd]. Direct image alignment computes the depth maps in an incremental fashion and compares the pixel intensities across the image frames directly, such that the spatial context of the image is preserved. This vision measurement model is utilized later in our extended graph optimization module to formulate the corresponding vision-odometry-based component of the cost function.
(4) 
where the pixel noise term and the remaining notation are from [caruso2015large], the subscript refers to the keyframe,

denotes the intensity of any 2D pixel coordinates in the keyframe and denotes the image domain of the keyframe; a detailed explanation regarding keyframe selection and the estimation of semi-dense depth maps is given in prior literature [caruso2015large];

denotes the intensity of any 2D pixel coordinates in the current frame and denotes the image domain consisting of non-sky pixels;

denotes the map from 3D world coordinates, denoted by to 2D pixel in image frame;

denotes the 3D warp function that unprojects the pixel coordinates and transforms them by a relative state vector, which indicates the difference between the current vehicle pose and that of the keyframe; here, x denotes the 3D vehicle position, the attitude term denotes the 3D orientation, and the rotation matrix and translation vector are those of the relative state vector;

denotes the inverse mapping of 2D pixel coordinates to 3D world coordinates via an inverse distance, applied to the transformed 2D pixel coordinates; we calibrate the associated camera parameters during initialization;

denotes the inverse distance of the pixel coordinates in the keyframe.
2. GPS module:
In the GPS module, considering the visible satellites, we describe the GPS measurement model as
(5) ρᵏ = ‖x − xᵏ‖ + c(δt − δtᵏ) + ηᵏ
where x and xᵏ denote the 3D position of the vehicle and of the kth satellite, respectively; δt and δtᵏ represent the receiver clock bias and the satellite clock bias corrections, respectively, scaled to range by the speed of light c; and ηᵏ represents the measurement noise related to the kth satellite.
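A minimal evaluation of the standard pseudorange model of Eq. (5), with the clock biases expressed in seconds and scaled by the speed of light:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def expected_pseudorange(x_veh, x_sat, dt_rx, dt_sat):
    """Noise-free pseudorange of Eq. (5): geometric range plus the
    receiver/satellite clock-bias terms scaled to meters."""
    return np.linalg.norm(x_veh - x_sat) + C * (dt_rx - dt_sat)
```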
We also formulate the measurement covariance of each satellite via the measured carrier-to-noise density values and the sky area detected via Eq. (3) in the vision preprocessing module. Note that the classification of a satellite as either LOS or NLOS depends on the unknown state vectors of the vehicle and the satellite. Therefore, the measurement covariance of the satellite is given by
(6) 
where

denotes the vehicle state vector at the current time instant, comprising the 3D position, 3D velocity, clock bias, clock drift and 3D attitude;

denotes the state vector of the satellite, comprising its 3D position, 3D velocity, clock bias and clock drift corrections;

the vision coefficients take one set of constant predetermined values when the projection of the satellite's state vector in the image frame falls on detected sky, i.e., when the hybrid sky probability at that projection exceeds the predefined sky threshold explained in Eq. (3), and another set of values otherwise.
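The switching behavior of Eq. (6) can be sketched as follows: a C/N0-driven variance that uses one pair of coefficients when the satellite projects onto detected sky (LOS) and an inflated pair otherwise (NLOS). The coefficient values and the exponential C/N0 form are illustrative assumptions, not the paper's calibrated constants.

```python
def pseudorange_variance(cn0_dbhz, p_sky_at_sat, p_sky_thresh=0.5,
                         a_los=1.0, b_los=10.0, a_nlos=5.0, b_nlos=50.0):
    """Sketch of Eq. (6): variance decreases with C/N0; coefficients
    switch on whether the satellite's image projection lies on sky."""
    if p_sky_at_sat >= p_sky_thresh:   # projects onto open sky -> LOS
        a, b = a_los, b_los
    else:                               # occluded by infrastructure -> NLOS
        a, b = a_nlos, b_nlos
    return a + b * 10.0 ** (-cn0_dbhz / 10.0)
```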
2.2 Extended graph optimization
In our extended graph optimization module, our cost function consists of four error terms, namely GPS pseudoranges, non-sky pixel intensities, the receiver motion model and the satellite orbital model, as follows:
(7)  
where

denotes the overall state vector, comprising the state vector of the vehicle, GPS satellites and key image pixels in the world frame, and is estimated during the graph optimization;

denotes the M-estimator used to transform the corresponding weighted residuals; details regarding the choice of M-estimator are explained in our prior work [bhamidipati2018multiple];

denotes the fault status associated with the GPS pseudorange of the satellite, estimated at the previous time instant; similarly, denotes the estimated vision fault status of any 2D pixel at the previous time instant;

denotes the GPS measurement model; denotes the motion model of the receiver and denotes the satellite orbital model; and denotes the estimated state vector of the vehicle and satellite, respectively, at the previous time instant; and denote the motion control inputs of the vehicle and satellite, respectively;

and denotes the predicted covariance matrices of the vehicle state vector and the satellite state vector at the current time instant; an explanation regarding the estimation of these covariances is given in our prior work [bhamidipati2018multiple];

denotes the measurement covariance of the satellite, estimated from Eq. (6); similarly, the covariance associated with the intensity of each non-sky pixel is estimated based on the corresponding section of [engel2014lsd].
The first three terms in the cost function, given in Eq. (7), correspond to the residuals associated with the GPS pseudoranges, satellite ephemeris and vehicle state vector, whose details are provided in our prior work [bhamidipati2018multiple]. The last term represents the summation of intensity residuals across non-sky pixels based on the vision measurement model explained in Eq. (4). In particular, we perform sub-graph optimization at each instant, as seen in Eq. (8), where the cost function is formulated using the past history of measurements.
(8) 
where the window length is the number of time instants utilized in the sub-graph optimization thread, and the output is our SLAM-based IM estimate of the overall state vector computed during the sub-graph optimization. We estimate the key image pixels in the world frame via the inverse mapping defined in Eq. (4). Details regarding mapping, which involves periodically executing full-graph optimization, are given in our prior work [bhamidipati2018multiple].
2.3 IM for GraphSLAM framework
We compute the protection levels associated with the estimated vehicle position using worst-case failure mode slope analysis [salos2013receiver, joerger2014solution]. This is justified because the worst-case failure mode slope is derived for a weighted least squares estimator, and graph optimization via the M-estimator-based Levenberg-Marquardt algorithm is also a nonlinear weighted least squares problem. However, there are certain design challenges involved in applying worst-case failure slope analysis to the protection level computation of the GraphSLAM framework. Firstly, the worst-case failure slope is derived for a linear measurement model, whereas the cost function associated with GraphSLAM is nonlinear; we therefore linearize the formulation of graph optimization at the estimated overall state vector. Secondly, GraphSLAM is a sequential methodology, whereas the worst-case failure slope falls under snapshot techniques for integrity monitoring. Therefore, we linearize our graph formulation not only over the current time instant, but over the past time history of measurements, so as to incorporate the temporal aspect in the protection level computation. Thirdly, the graph optimization for the SLAM framework consists of a large number of states and measurements, and evaluating all possible fault modes associated with the measurements is computationally cumbersome. Therefore, we directly compute a single fault mode based on the measurement fault status estimated via the multiple FDI module.
1) Multiple FDI module:
Based on the overall state vector estimated by the extended graph optimization explained in Section 2.2, we independently compute the measurement residuals associated with GPS pseudoranges and non-sky pixel intensities. In our multiple FDI module, we evaluate the GPS residuals by analyzing the temporal correlation of their non-faulty error distribution, and the vision residuals using the spatial correlation across image pixels.
GPS faults:
To detect and isolate GPS faults in pseudoranges, we evaluate each residual against an empirical Gaussian distribution, which represents the measurement error distribution during non-faulty conditions. This is justified because we observe that the GPS measurements follow a Gaussian distribution during non-faulty conditions, as explained in our prior work [bhamidipati2018multiple]. We replicate the non-faulty conditions of GPS measurements by executing the initialization procedure in open-sky conditions. Thereafter, the deviation of a measurement residual from the Cumulative Distribution Function (CDF) of its empirical Gaussian distribution is categorized as a fault, and the corresponding fault status is computed in Eq. (9). The justification for this formulation of the fault status is explained in our prior work [bhamidipati2018receivers].
(9)
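Since the exact expression of Eq. (9) is not reproduced here, the following assumes one natural form: a fault status that grows with the deviation of the residual's empirical-Gaussian CDF from 0.5, approaching 1 deep in the tails.

```python
import math

def gps_fault_status(residual, mu, sigma):
    """Assumed form of Eq. (9): fault status as the deviation of the
    residual's empirical-Gaussian CDF from its median value 0.5."""
    cdf = 0.5 * (1.0 + math.erf((residual - mu) / (sigma * math.sqrt(2.0))))
    return abs(2.0 * cdf - 1.0)   # 0 at the median, -> 1 in the tails
```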
Vision faults: Unlike GPS faults, vision faults caused by data association errors exhibit high spatial correlation across image pixels and low temporal correlation. This is justified because vision faults are localized to a group of neighboring pixels and are not isolated to a standalone pixel. We developed a superpixel-based piecewise RANSAC technique that performs spatial voting across the image pixels to detect and isolate vision faults. RANSAC [conte2009vision], a popular outlier detection method in image processing, estimates the optimal fitting parameters of a model via random sampling of data containing both inliers and outliers.
The steps involved in the superpixel-based piecewise RANSAC technique are described as follows. First, we segment the image into clusters, known as superpixels, based on the color similarity and space proximity between image pixels using superpixel segmentation [li2015superpixel]; the total number of superpixels into which the image is segmented is predefined during initialization, and a subset of these depict non-sky pixels. For each non-sky superpixel, we form a pixel intensity vector that stacks the intensities of the pixels within the superpixel. We represent the received intensity, i.e., the keyframe pixel intensities, versus the expected intensity, i.e., the transformed current pixel intensities, as a two-dimensional (2D) plot. Next, we estimate the fitted line using RANSAC that passes through the optimal set of inliers and thereafter compute the fraction of outliers in the superpixel. Then, utilizing the estimated model parameters of the fitted line, we evaluate the corresponding fraction of outliers at all the other non-sky superpixels. Finally, the fault status of each superpixel is computed as the product of all the estimated outlier fractions, as seen in Eq. (10), and the same fault status is assigned to all the pixels within that superpixel. This procedure is repeated for all the non-sky superpixels to compute the fault status of all the non-sky pixels in the keyframe. Our algorithm relies on an underlying assumption that there is a sufficient number of superpixels to reach a common consensus. If the number of superpixels associated with non-sky pixels is small, such as in an open-sky setting, a predefined penalty is assigned to the vision fault status.
(10) 
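A minimal RANSAC line fit over a superpixel's received-vs-expected intensity plot, returning the fitted model and the outlier fraction; the iteration count and inlier tolerance are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(x, y, n_iter=200, tol=5.0):
    """Fit y = m*x + c by RANSAC: repeatedly sample two points, keep the
    line with the most inliers, and report the outlier fraction."""
    best_inliers = np.zeros(len(x), bool)
    best_model = (0.0, 0.0)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                      # degenerate sample, skip
        m = (y[j] - y[i]) / (x[j] - x[i])
        c = y[i] - m * x[i]
        inliers = np.abs(y - (m * x + c)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (m, c)
    return best_model, 1.0 - best_inliers.mean()   # model, outlier fraction
```

Per Eq. (10), the fault status of a superpixel would then be the product of the outlier fractions obtained by evaluating its fitted model across all non-sky superpixels.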
2) Protection level computation
In Eq. (11), based on the design solutions explained in Section 2.3.1, we linearize the overall measurement model of the graph optimization framework using a first-order approximation. For simplicity, we derive the protection levels using the measurements of the current time instant, but the same formulation is applicable for extension to the past history of measurements.
(11) 
where

denotes the overall measurement vector that concatenates the GPS pseudoranges, the control input of the vehicle, the satellite ephemeris and the keyframe pixel intensities, evaluated against an estimated overall state vector; the remaining term denotes the overall measurement noise;

denotes the linearized overall measurement model that vertically stacks the Jacobians associated with the GPS pseudoranges, the vehicle motion model and satellite orbital model, and the non-sky pixel intensities;

denotes the overall fault vector associated with the overall measurement vector and thereby stacks the measurement faults obtained from individual sensor sources.
As described in Eq. (7) of the graph optimization module, we express the M-estimator-based Levenberg-Marquardt formulation, which is a weighted nonlinear least squares problem, as
(12) 
where

denotes the estimation matrix of the graph-optimization framework and denotes the pseudoinverse matrix;

denotes the M-estimator-based weight functions for the GPS pseudoranges, the vehicle motion model and satellite orbital model, and the non-sky image pixel intensities, respectively, evaluated at the estimated state; details regarding the choice of M-estimator and the corresponding weight functions are explained in our prior work [bhamidipati2018multiple];

denotes the iterative damping factor associated with the Levenberg-Marquardt algorithm.
Next, we define the overall test statistic as the summation of the weighted squared residuals across all the measurements. We consider an assumption that the overall test statistic is central chi-square distributed under non-faulty conditions and noncentral chi-square distributed under the presence of GPS faults, vision faults or both.
(13)
such that
(14) 
where the number of redundant measurements is the difference between the number of overall measurements and the number of overall states, and the noncentrality parameter is that associated with the overall test statistic during faulty conditions.
According to the worst-case failure mode slope analysis [salos2013receiver], as seen in Fig. 4, the protection level is calculated as the projection into the position domain of the measurement faults that would generate a noncentrality parameter in the overall test statistic with the maximum slope. In particular, the noncentrality parameter is estimated from the false-alarm and misdetection rates, which are set according to the predefined integrity requirements.
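Deriving the noncentrality parameter from the false-alarm and misdetection rates can be done numerically: the detection threshold comes from the central chi-square at the false-alarm rate, and the noncentrality parameter is found by bisection so that the noncentral chi-square misses that threshold at the misdetection rate. SciPy supplies the distributions; the upper bracketing bound is an assumption.

```python
from scipy.stats import chi2, ncx2

def noncentrality(dof, p_fa, p_md, hi=1e4):
    """Noncentrality parameter lambda such that a noncentral chi-square
    with `dof` degrees of freedom falls below the false-alarm threshold
    with probability p_md."""
    T = chi2.ppf(1.0 - p_fa, dof)           # detection threshold
    lo_, hi_ = 0.0, hi
    for _ in range(200):                     # bisection: cdf decreases in lambda
        mid = 0.5 * (lo_ + hi_)
        if ncx2.cdf(T, dof, mid) > p_md:
            lo_ = mid                        # lambda too small
        else:
            hi_ = mid
    return 0.5 * (lo_ + hi_)
```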
In Eq. (15), we formulate the measurement fault mode using the GPS and vision fault statuses estimated in Eqs. (9) and (10). For this, we consider a predefined fault threshold, such that if the fault status exceeds it, the measurement is flagged as faulty in the computation of protection levels. Given that we consider measurement faults in only GPS and vision, the fault entries of the receiver and satellite motion models are set to zero in this work. However, the corresponding fault vector, which comprises the exact measurement fault magnitudes, is still unknown. According to [salos2013receiver], for a given fault mode, the worst-case fault direction that maximizes the integrity risk is the one that maximizes the failure mode slope, seen in Fig. 4. In this context, we define the square of the failure mode slope as the ratio of the squared state estimation error in the position of the vehicle over the overall test statistic. Using the linearized equations seen in Eqs. (11), (12) and (13), we derive the failure slope for the graph optimization framework in terms of the unknown fault vector. For this, we adopt an approximation that is valid after the iterative convergence of the graph optimization at any time instant.
(15) 
Considering the number of nonzero entries in the fault mode estimated via the multiple FDI module, we define the fault matrix and rearrange the rows of the associated matrices to match the rows of the fault matrix. Thereafter, we define a transformed fault vector in terms of the original fault vector. Based on the above-mentioned steps, we describe the failure slope formulation of the GraphSLAM framework in Eq. (16).
(16)  
where the extraction matrix extracts the vehicle 3D position from the overall state vector, and the residual matrix and the state gain matrix are defined from the linearized measurement model and the estimation matrix.
Referring to [joerger2014solution], for a given fault mode but unknown fault vector, the worst-case failure slope equals the maximum eigenvalue of the corresponding failure slope formulation. Therefore, we express the worst-case failure slope of the GraphSLAM framework as
(17) 
Next, we compute the protection level, seen in Eq. (18), as the y-coordinate corresponding to the integrity metric along the line passing through the origin with slope given by the worst-case failure slope.
(18) 
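The eigenvalue computation behind Eq. (17) has the structure of a generalized Rayleigh quotient: for a ratio (aᵀf)²/(fᵀBf) over fault directions f, the worst case equals the largest eigenvalue of B⁻¹aaᵀ, which, being rank one, also has the closed form aᵀB⁻¹a. The vector a and matrix B below are generic stand-ins for the paper's residual and gain matrices, used only to illustrate the eigenvalue identity.

```python
import numpy as np

def worst_case_slope_sq(a, B):
    """Worst case of (a^T f)^2 / (f^T B f) over fault directions f:
    the largest eigenvalue of B^{-1} a a^T (equal to a^T B^{-1} a)."""
    M = np.linalg.solve(B, np.outer(a, a))
    return float(np.max(np.linalg.eigvals(M).real))
```

The protection level of Eq. (18) is then read off along the line through the origin with this worst-case slope, evaluated at the noncentrality parameter.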
3 Experiment Results
We validate the performance of the proposed SLAM-based IM algorithm that utilizes both GPS and fisheye camera. We conduct real-world experiments on a moving ground vehicle in the semi-urban area of Champaign, IL along the route shown in Fig. 5. Our experimental setup comprises a commercial off-the-shelf GPS receiver and a fisheye camera fitted with a wide field-of-view lens. During the initial stretch, the ground vehicle operates in open-sky conditions, thereby experiencing no GPS faults but few visual features. In Fig. 5, the blue highlighted region suffers from vision challenges, namely illumination variations due to sunlight and shadows, which cause data association errors across images. Similarly, the red highlighted region is enclosed by tall buildings, which leads to multipath effects in the GPS measurements. For instance, we showcase the true overlap of the GPS satellite positions over the fisheye image, where a subset of the visible satellites is affected by multipath.
Fig. 6 shows the average fault status of GPS pseudoranges and vision superpixels, indicated in red and blue, respectively. Given that the ground vehicle initially navigates in open-sky conditions, the average GPS fault status estimated via our multiple FDI module is low, whereas the average vision fault status is high due to the featureless surroundings. As the vehicle passes through the red highlighted region shown in Fig. 5, which represents the semi-urban area, the average fault status of vision is low but that of GPS increases due to multipath.
We further analyze the performance of our multiple FDI module in the challenging semi-urban area, during which the ground vehicle experiences GPS faults due to multipath and vision faults due to illumination variations. Fig. 6(a) plots the individual GPS fault status of a subset of the visible satellites. In accordance with the skyplot shown in Fig. 5, our proposed SLAM-based IM algorithm successfully flags the multipath-affected satellites as faulty while accurately estimating the high-elevation satellite as non-faulty. During the same duration, we also analyze the vision fault status associated with the superpixels. In Fig. 6(b), at each time instant, we plot the top four fault statuses of the superpixels, such that each marker represents a superpixel. We observe that in the urban region, the value of the associated vision fault status decreases due to the feature-rich tall buildings. However, when the vehicle enters the blue highlighted region seen in Fig. 6(b), the illumination variations induced by the bright sunlight cause the fault status associated with certain superpixels to show an increasing trend.
We demonstrate the improved performance of the SLAM-based IM algorithm that utilizes GPS and fisheye camera, seen in Fig. 7(a), as compared to the SLAM-based IM algorithm that utilizes GPS only, seen in Fig. 7(b). By utilizing GPS and fisheye camera, we demonstrate higher localization accuracy, with a lower Root Mean Square Error (RMSE) and standard deviation than those obtained employing GPS only. We also validate that a lower mean size of protection levels is estimated using GPS and fisheye camera than using GPS only, thereby achieving tighter protection levels.
4 Conclusions
We proposed a Simultaneous Localization and Mapping (SLAM)-based Integrity Monitoring (IM) algorithm using GPS and fisheye camera that estimates the protection levels of the GraphSLAM framework while accounting for multiple faults in GPS and vision. We developed a hybrid sky detection algorithm to distinguish the non-sky and sky pixels, which are later used in the graph optimization and the GPS measurement covariance, respectively. Utilizing the GPS pseudoranges, non-sky pixel intensities, and the receiver and satellite motion models, we performed graph optimization via the M-estimator-based Levenberg-Marquardt algorithm. We simultaneously estimated the state vector of the vehicle, GPS satellites and key image pixels in the world frame. We estimated the fault mode vector by independently evaluating the measurement residuals against an empirical Gaussian distribution for GPS faults and using our superpixel-based piecewise RANSAC for vision faults. We computed the protection levels via the worst-case failure slope analysis, which estimates the maximum eigenvalue associated with the failure slope formulation of the linearized GraphSLAM framework.
We conducted real-world experiments using a ground vehicle in a semi-urban region to analyze the performance of our proposed SLAM-based IM algorithm that utilizes GPS and fisheye camera. We successfully detected and isolated multiple measurement faults in GPS and vision. We demonstrated higher localization accuracy using our proposed algorithm, with a lower RMSE and standard deviation than the GPS-only variant. We also validated that the mean size of protection levels estimated using GPS and fisheye camera is lower than that estimated using GPS only.