Positioning aiding using LiDAR in GPS signal loss scenarios

05/17/2019 · Szymon Krupinski et al. · Jacobs University Bremen

In the presented scenario, an autonomous surface vehicle (ASV) equipped with a laser scanner navigates on an inland waterway surrounded and crossed by man-made structures such as bridges and locks. The GPS receiver on board experiences signal loss and multipath reflections when the view of the sky is obscured by a bridge or tall buildings. In both cases, a potentially dangerous situation arises, as the robot has no positioning data or only inaccurate data. A sensor data processing scheme is proposed in which these gaps are smoothly filled by positioning data generated from scan matching and registration of the laser data. This article shows preliminary results of positioning data improvement during trials in a harbor-river environment.


I Introduction

I-A Motivation

In the domain of underwater robotics, the standard suite of navigational sensors includes an inertial measurement unit, a Doppler velocity log (DVL) and a GPS receiver for initial surface positioning fixes. It is sometimes augmented with acoustic positioning beacons such as long or ultra-short baseline (LBL/USBL) systems, which provide a low-frequency and relatively noisy estimate but are free from drift accumulation over time. Subject to the quality of the chosen components, this setup tends to provide reasonable positioning accuracy in zones covered by acoustic localization. In its absence, positioning quality quickly deteriorates during the mission.

For surface vehicles, localization is greatly simplified by the near-constant availability of the GPS signal. Its ubiquity means that alternative solutions are rarely sought. Unlike autonomous underwater vehicles (AUVs), surface vehicles very rarely carry a DVL and might not even be equipped with a simpler mechanical equivalent. This restricts what can be done in case of GPS signal failure even further. Such failures are a fairly regular occurrence in man-made (harbor) or complex natural environments (canyons, mangroves). While autonomous operations in harbours, rivers and navigation canals today represent only a small fraction of the work done, it is precisely in these environments that unreliable positioning can lead to costly and dangerous consequences, such as a collision.

The proposition in this article is to turn to the payload sensor for additional positioning clues. As an illustrative example, a data-rich 3-D laser scanner is used. In the case of AUVs, the same idea could be applied to a bathymetric or imaging sonar.

The ultimate motivation for developing a LiDAR-equipped vehicle with robust navigation is to use it for autonomous data gathering in harbor, river and coastal environments. In the future, this technology can serve to develop autonomous water taxis, ferries and other ships.

I-B Related work

A number of well-established methods exist for positioning without constant availability of GPS data. Underwater vehicle positioning is a notable application, since GPS, or, for that matter, any other signal using electromagnetic (EM) waves, cannot penetrate water beyond a limited skin depth. Thus, the initial GPS fix is taken on the surface, and positioning is then carried out using a combination of dead reckoning, integration of inertial measurements and acoustic distance fixes, if available.

A couple of filtering techniques have achieved the status of a near-standard positioning solution in marine robotics and similar domains, namely Kalman (Extended or Unscented) and particle filtering [8], [6]. They permit integrating several heterogeneous sensor measurements of complementary characteristics and computing an optimal estimate of the current vehicle state.

Several advanced, mostly experimental autonomous platforms are also equipped with complementary environment-mapping algorithms, together constituting simultaneous localization and mapping (SLAM). The technique has also been applied to underwater robots by Ribas [12], Mallios [6] and others. While SLAM represents a complete localization solution, it creates a significant memory overhead, due to the need to maintain a map of the environment, and a computational one, due to the constant arrival of new sensor data and map updates. In the case of survey vehicles, creating a product-grade map of the scene for immediate navigation purposes would be impractical. Thus, some authors propose localisation on an existing (partial) map, notably using techniques of scan matching [8]. Matching a current scan to a global map yields a global position candidate. The approach taken in this work makes use of relative matching of consecutive scans without a global map.

LiDARs, or 3-D laser scanners, measure distance along an array of laser beams typically mounted on a rotating head, so that they can sweep a large volume of the environment. They are commonly used in autonomous driving [7], but they are also making their way underwater, where they enable applications such as autonomous, millimetric-precision surveys [9]. 3-D scanners produce a considerable volume of data in the form of structured clouds of 3-D points, although some devices can also output depth images. In addition to the geometric information, the intensity of laser reflection is often recorded for every point.

In addition to sonar and LiDAR, vision-based ASV navigation has been explored, for example by Dunbabin et al. [1], Wang et al. [16], and Heidarsson and Sukhatme [2]. Despite producing information-rich data at a high frequency, standard imaging techniques do not provide exact geometric information about the environment. Thus, with decreasing sensor prices, LiDARs are becoming more commonplace in autonomous vehicles, including ASVs.

The idea of calculating relative displacement from incremental LiDAR scan matching is not new. Tang et al. [15] make use of such a technique for indoor navigation of a terrestrial robot. However, data collected in a building environment has many easy-to-exploit characteristics, such as near-perfect planes created by the walls, short sensing distances and the existence of a ground plane. Techniques using bathymetric sonar readings and a known depth map, commonly referred to as “terrain-based navigation”, belong to the same family, although with a slightly different geometry of the problem (as explored by Lucido et al. [5], Li et al. [4] and others).

Fig. 1: An example from [15] of matching wall contours detected by a LiDAR device in an indoor application.

I-C Investigated scenario

In the presented scenario, an autonomous surface vehicle (ASV) navigates on an inland waterway surrounded and crossed by man-made structures such as bridges and locks. The GPS receiver on board experiences signal loss when the view of the sky is obscured by a bridge. In other situations, for example when navigating close to a tall building or a canal wall, the positioning signal is known to be distorted by multipath reflections, giving false position readings. In both cases, a potentially dangerous situation arises, as the robot has no or inaccurate positioning data.

Horizontal Dilution of Precision (HDOP) gives an estimate of the current precision due to satellite geometry and conditions. It can serve as a criterion for whether using the GPS input is safe. In practice, however, a low HDOP does not guarantee a correct position fix. During the tests, the two positioning methods were run in parallel in order to better analyse such situations and to enable the design of a future robust switching strategy.
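As an illustration of such a gate (not part of the authors' implementation), the sketch below extracts HDOP from a standard NMEA GGA sentence and applies a hypothetical threshold; the function names and the threshold value are assumptions for illustration only.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Extract HDOP (the 9th comma-separated field) from a NMEA GGA sentence, e.g.
// "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47".
// Returns a negative value if the sentence cannot be parsed.
double parseHdopFromGga(const std::string& sentence) {
    std::vector<std::string> fields;
    std::stringstream ss(sentence);
    std::string field;
    while (std::getline(ss, field, ',')) fields.push_back(field);
    if (fields.size() < 9 || fields[0].find("GGA") == std::string::npos)
        return -1.0;
    try { return std::stod(fields[8]); } catch (...) { return -1.0; }
}

// Hypothetical gate: trust GPS only while HDOP is low. As noted above, a low
// HDOP does not guarantee a correct fix, which is why both positioning
// methods were run in parallel during the trials instead of hard switching.
bool gpsLooksUsable(double hdop, double threshold = 2.0) {
    return hdop > 0.0 && hdop <= threshold;
}
```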

Fig. 2: Still image frames from the video camera installed on board the vehicle, showing environments likely to cause GPS signal disturbance.
Fig. 3: Red circles show the discontinuities and false indications of the GPS sensor as the vehicle crosses bridges and locks on the Charles River.
Fig. 4: Part of the point cloud representing one of the bridge pillars in front of the vehicle, as seen in the left-hand side of Fig. 2.

Since a blackout of the GPS sensor means that there is no alternative positioning data, the technique can be validated using parts of the vehicle's route where the GPS signal was of good quality and can thus serve as ground truth.

II Proposed Navigation Scheme

The positioning aiding discussed here is intended to complement generally reliable GPS indications. Due to the nature of incremental scan matching, if one starts collecting scans only after a detected signal loss, the calculated position may be based on an already false last estimate. Additionally, given the difficulty of automatically detecting failures using simple metrics such as HDOP, it is reasonable to assume that the point cloud processing has to run continuously. Given that the vehicle runs on a limited energy source, it is imperative to propose an efficient algorithm.

In general, the problem of scan matching considered here is an optimisation problem of finding a transformation $T$ (here, an affine transformation composed of a rotation and a translation) that minimises the distances between all corresponding points $p_i$ and $q_i$ (in homogeneous coordinates) of two point clouds $P$ and $Q$ respectively:

$$T^{*} = \operatorname*{arg\,min}_{T} \sum_{i} \left\lVert T\,p_i - q_i \right\rVert^{2}$$

If the point clouds/scans $P$ and $Q$ are expressed in frames of reference $F_P$ and $F_Q$ respectively, then the final calculated transformation $T^{*}$ can be directly used as the relative rotation/displacement between the vehicle's poses at which the scans were taken. Given the high scan rate (less than 0.1 s to complete a scan), the deformation of the point cloud due to the vehicle's motion can be neglected, contrary to the equivalent use of a rotating-head sonar in [12]. For the trial campaign, two algorithms were considered. They are summarised below and illustrated in Figs. 6 and 7.

  • Full 6-D registration: feed two point clouds captured during separate scans into the regular PCL registration pipeline; output: three components of rotation (yaw, pitch, roll) and a translation vector.

  • Reduced 3-D matching: down-project the point clouds onto the x-y plane and use image-based registration to obtain three components of the relative motion: yaw, x- and y-translation.
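For context, when point correspondences are known, the minimisation above admits a classical closed-form solution (the Kabsch method). The sketch below, using Eigen, is a minimal illustration of that solution under the assumption of known correspondences; it is not the registration pipeline used in the trials, which relies on PCL.

```cpp
#include <Eigen/Dense>
#include <vector>

// Closed-form rigid alignment for *known* correspondences p_i <-> q_i:
// finds R, t minimising sum_i || R*p_i + t - q_i ||^2 (Kabsch method).
Eigen::Matrix4d rigidAlign(const std::vector<Eigen::Vector3d>& P,
                           const std::vector<Eigen::Vector3d>& Q) {
    // Centroids of both clouds.
    Eigen::Vector3d cp = Eigen::Vector3d::Zero(), cq = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < P.size(); ++i) { cp += P[i]; cq += Q[i]; }
    cp /= P.size(); cq /= Q.size();

    // Cross-covariance of the centred points.
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (size_t i = 0; i < P.size(); ++i)
        H += (P[i] - cp) * (Q[i] - cq).transpose();

    // SVD of the cross-covariance yields the optimal rotation.
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {            // guard against a reflection
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    Eigen::Vector3d t = cq - R * cp;

    // Pack into a homogeneous 4x4 transformation.
    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    T.topLeftCorner<3,3>() = R;
    T.topRightCorner<3,1>() = t;
    return T;
}
```

ICP, used later in the pipeline, essentially iterates this closed-form step while re-estimating the correspondences.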

While the full 6-D registration makes it possible to exploit state-of-the-art algorithms bundled with PCL, its performance is directly related to the number of points in the point cloud. In calm weather, where roll and pitch are bounded to a few degrees, trying to calculate them is often counterproductive, especially for a sensor mounted on a Wave Adaptive Modular Vessel (WAM-V) platform, which has inherent stability due to its mechanical design (www.wam-v.com/tech). It can also be noted that the z-translation of the point cloud can be expected to be null. In order to speed up the calculations, the initial pose guess can be based on these criteria, as sketched below.
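A minimal sketch of how such a planar initial guess could be constructed, assuming roll, pitch and z-translation fixed at zero; the function and its inputs are illustrative, not taken from the trial code.

```cpp
#include <Eigen/Geometry>

// Initial pose guess under the planar-motion assumption: roll, pitch and
// z-translation are fixed at zero; yaw and x/y displacement are predicted,
// e.g. from the previous matching result (values here are placeholders).
Eigen::Matrix4f planarInitialGuess(float yaw, float dx, float dy) {
    Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();
    guess.topLeftCorner<3,3>() =
        Eigen::AngleAxisf(yaw, Eigen::Vector3f::UnitZ()).toRotationMatrix();
    guess(0,3) = dx;
    guess(1,3) = dy;   // guess(2,3) stays 0: no vertical motion expected
    return guess;
}
```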

Fig. 5: The Velodyne HDL-32E LiDAR sensor, as used during the trials, is characterised by its small size.

PCL's canonical point cloud registration includes stages of filtering, correspondence rejection and pre-alignment before the final fine alignment is carried out using the Iterative Closest Point (ICP) algorithm [13]. However, if scans from a very short period are used as input, the displacement/rotation between them is typically very low, and they can thus be considered pre-aligned. For the implementation of the pre-alignment, the Fast Point Feature Histograms (FPFH) descriptor [14] bundled with PCL was chosen after a number of alternatives were tried.
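A condensed sketch of this kind of pipeline, close to the standard PCL registration examples, is given below; the leaf size and search radii are illustrative placeholders, not the values tuned during the trials.

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>
#include <pcl/search/kdtree.h>

using PointT = pcl::PointXYZ;
using CloudT = pcl::PointCloud<PointT>;

// Downsample one scan and compute its normals and FPFH descriptors.
pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeFpfh(const CloudT::Ptr& cloud, CloudT::Ptr& keypoints) {
    pcl::VoxelGrid<PointT> vg;                    // reduce the point count
    vg.setInputCloud(cloud);
    vg.setLeafSize(0.25f, 0.25f, 0.25f);          // placeholder leaf size
    vg.filter(*keypoints);

    pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<PointT, pcl::Normal> ne;
    ne.setInputCloud(keypoints);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(1.0);                      // placeholder radius
    ne.compute(*normals);

    pcl::PointCloud<pcl::FPFHSignature33>::Ptr fpfh(
        new pcl::PointCloud<pcl::FPFHSignature33>);
    pcl::FPFHEstimation<PointT, pcl::Normal, pcl::FPFHSignature33> fe;
    fe.setInputCloud(keypoints);
    fe.setInputNormals(normals);
    fe.setSearchMethod(tree);
    fe.setRadiusSearch(2.0);                      // must exceed the normal radius
    fe.compute(*fpfh);
    return fpfh;
}

// FPFH-based pre-alignment (SAC-IA) followed by ICP fine alignment.
Eigen::Matrix4f registerScans(const CloudT::Ptr& src, const CloudT::Ptr& tgt) {
    CloudT::Ptr srcKp(new CloudT), tgtKp(new CloudT);
    auto srcF = computeFpfh(src, srcKp);
    auto tgtF = computeFpfh(tgt, tgtKp);

    pcl::SampleConsensusInitialAlignment<PointT, PointT, pcl::FPFHSignature33> sac;
    sac.setInputSource(srcKp);  sac.setSourceFeatures(srcF);
    sac.setInputTarget(tgtKp);  sac.setTargetFeatures(tgtF);
    CloudT preAligned;
    sac.align(preAligned);

    pcl::IterativeClosestPoint<PointT, PointT> icp;
    icp.setInputSource(preAligned.makeShared());
    icp.setInputTarget(tgtKp);
    CloudT refined;
    icp.align(refined);

    // Compose the two stages into the scan-to-scan transformation.
    return icp.getFinalTransformation() * sac.getFinalTransformation();
}
```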

While the initial version of the algorithm conformed to the scheme given in Fig. 6, some variations were tried in the field.

A simpler method was also introduced, which projects all points in the cloud onto the x-y plane to form an image. The size and pixel dimensions of this image are important parameters: points further away are dropped if they do not fit the canvas, and the resolution of the image essentially determines the expected precision. A desired side-effect of this operation is that a dense point cloud segment, e.g. from a nearby wall or bridge pillar, forms a bright line or zone in the image, while the number of points to process is significantly reduced. The consecutive scan is likely to produce a similar characteristic shape; the relative rotation and translation can thus be computed with high confidence thanks to this correspondence. The resulting images strongly resemble imaging sonar captures of a sea bottom and structures.
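A minimal sketch of the down-projection and one plausible image-registration step follows. The paper does not specify the exact image-registration method, so cv::phaseCorrelate is used here as an assumed stand-in for the translation estimate, with yaw left to a separate search.

```cpp
#include <opencv2/imgproc.hpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Project a point cloud onto the x-y plane as an occupancy-style image.
// Points outside the canvas are dropped; the resolution bounds the precision.
cv::Mat projectToImage(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                       int size = 512, float metersPerPixel = 0.2f) {
    cv::Mat img = cv::Mat::zeros(size, size, CV_32F);
    const float half = size * metersPerPixel / 2.0f;
    for (const auto& p : cloud) {
        int u = static_cast<int>((p.x + half) / metersPerPixel);
        int v = static_cast<int>((p.y + half) / metersPerPixel);
        if (u >= 0 && u < size && v >= 0 && v < size)
            img.at<float>(v, u) += 1.0f;   // dense structures become bright
    }
    return img;
}

// Relative x/y translation between two consecutive projected scans.
// Yaw would be handled separately, e.g. by scanning a small angle range
// and keeping the rotation that maximises the phase-correlation response.
cv::Point2d estimateTranslation(const cv::Mat& prev, const cv::Mat& curr) {
    return cv::phaseCorrelate(prev, curr);   // shift in pixels
}
```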

Fig. 6: Method 1: Canonical PCL point cloud registration pipeline.
Fig. 7: Method 2: simplified 3-D registration of scans.
Fig. 8: Results of reduction of point clouds from the dataset to 2-D images in the simple algorithm (points dilated for better visibility).

III Implementation and Results

Fig. 9: The REx IV ASV used for data collection. During the trials, it was remotely controlled from a chase boat. Image source: [10].

The navigation algorithm was tested on the Reef Explorer IV (REx IV) vehicle belonging to the MIT Sea Grant research group, navigating on the section of the Charles River between the Boston University Bridge and the Charlestown Bridge, which features tall city architecture, multiple bridges of different types, locks and canal walls. The vehicle (more information available at [10]) is of WAM-V design and carries a top-mounted Velodyne HDL-32E LiDAR sensor, a camera and a GPS receiver. During normal operation in the open space of the river, the GPS provides a satisfactory estimate of position, with uncertainty restricted to less than a meter.

The data processing and positioning pipeline was implemented in the Robot Operating System (ROS) [11], using the Point Cloud Library (PCL, pointclouds.org) for all operations on 3-D point clouds. This library contains an extensive and slowly maturing framework for registration of point clouds, which provides the necessary functions to build the discussed positioning algorithms [3].
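A minimal ROS node skeleton of the kind this implementation implies could look as follows; the node name is invented, and /velodyne_points is the conventional topic of the standard Velodyne driver rather than a detail confirmed by the paper.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Receive Velodyne scans, convert them to PCL clouds and hand consecutive
// pairs to a registration function such as the one sketched above.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::fromROSMsg(*msg, *cloud);
    // ... buffer the cloud and match it against the previous scan ...
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "lidar_positioning_aid");   // illustrative node name
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, cloudCallback);
    ros::spin();
    return 0;
}
```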

Fig. 10: Yaw calculated using scan matching reflects all the variations of the compass/GPS heading, but a drift grows over the test period.
Fig. 11: Travelled distance estimated by scan matching.
Fig. 12: The experiment in which the GPS localisation was used as ground truth shows some imperfections in the estimation but also the potential of the scan matching technique.

Analysis of several collected data sets reveals some properties of the data coming from the LiDAR device:

  1. Almost no points are registered on the water surface

  2. The clouds are relatively sparse

  3. Complex structures with girders, pipes, etc. return point clouds where normal estimation is unreliable

  4. HDOP is not an infallible indicator of GPS errors, especially in the multipath scenario

The above observations have direct consequences for the choice and performance of the scan matching algorithms. Property 1) excludes existing algorithms which rely on the detection of a base plane. Properties 2) and 3) render algorithms which rely on surface normals less robust; yet FPFH, which uses information about normals, was still the best-performing descriptor. Point 4) argues against using HDOP to select which source of localisation, GPS or scan matching, is to be trusted more. More work needs to be dedicated to finding an estimator which allows detecting the early stage of a signal blackout.

Fig. 13: Crossing Charles River Dam Road caused a GPS blackout followed by a period of incorrect estimation, smoothed by the device's internal filter. The scan matching-based positioning kept track of the ASV's progression under the bridge, although the reconstruction of the trajectory is rather noisy. In this trial, the first method was used in post-processing.

The processing rates obtained on a regular laptop PC were 10 Hz and above with the simplified method and between 1 Hz and 8 Hz with the full PCL processing chain. The latter varied with the number of points present in either of the scan point clouds. Depending on the presence of shore or structures near the vehicle, successful matching was achieved for 6,000-27,000 points per scan, with mostly failures below this range. On average, fewer than 10% of the points were filtered out at the outlier rejection stage. Given that the scan rate was 10 Hz, the simplified method performed in real time, but its results were characterised by a lower degree of precision and a higher percentage of unresolved scan matches. In good conditions, the ICP stage was virtually unnecessary, as the pre-alignment with FPFH produced a nearly ideal estimate. The impact that eliminating ICP would have on the final result has, however, not yet been quantified.

An additional stage of processing was introduced based on experience gained during the trials: consecutive scans were compared with respect to the number of points they contained. Too large a discrepancy would normally signify that the incoming scan was anomalous (e.g. due to a momentary blinding of the Velodyne sensor) and had to be dropped.
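A sketch of such a guard, with an assumed ratio threshold; the actual criterion used during the trials is not specified in the paper.

```cpp
#include <algorithm>
#include <cstddef>

// Drop scans whose point count differs too much from the previous one,
// e.g. after a momentary blinding of the sensor. The ratio threshold is
// an illustrative value, not the one tuned during the trials.
bool scanLooksAnomalous(std::size_t prevPoints, std::size_t currPoints,
                        double maxRatio = 2.0) {
    if (prevPoints == 0 || currPoints == 0) return true;
    double ratio = static_cast<double>(std::max(prevPoints, currPoints)) /
                   static_cast<double>(std::min(prevPoints, currPoints));
    return ratio > maxRatio;
}
```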

IV Conclusions

IV-A Achieved objectives

Despite a low degree of precision, the preliminary results show that the method can be used to render navigation in cluttered environments more robust. At this time it is difficult to achieve precision and real-time processing at the same time, but active work on this topic continues. A potential speed-up can be achieved either by eliminating the fine alignment step in the right circumstances or by skipping the pre-alignment when the scans are separated by a very short time. As an added benefit of processing the LiDAR data on board, obstacle avoidance can be performed on the same data with a reduced performance hit.

IV-B Further work

Further work has to be invested into merging the two localisation sources into one coherent position estimate. The candidates outlined in the introductory section of this article are considered: Kalman and particle filters. Finding a parameter more robust than HDOP, which could be used to tune the filters online, is a desirable result.

Acknowledgment

The authors wish to acknowledge the kind help and advice received during the trials from the MIT Sailing Pavilion team and the MIT Sea Grant colleagues.

References

  • [1] M. Dunbabin, B. Lang, and B. Wood, “Vision-based docking using an autonomous surface vehicle,” 2008 IEEE International Conference on Robotics and Automation, pp. 26–32, 2008.
  • [2] H. K. Heidarsson and G. S. Sukhatme, “Obstacle detection from overhead imagery using self-supervised learning for autonomous surface vehicles,” 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3160–3165, 2011.
  • [3] D. Holz, A. E. Ichim, F. Tombari, R. B. Rusu, and S. Behnke, “Registration with the point cloud library: A modular framework for aligning in 3-D,” IEEE Robotics & Automation Magazine, vol. 22, no. 4, pp. 110–124, 2015.
  • [4] P. Li, G. Sheng, X. Zhang, J. Wu, B. Xu, X. Liu, and Y. Zhang, “Underwater terrain-aided navigation system based on combination matching algorithm,” ISA transactions, 2018.
  • [5] L. Lucido, B. Pesquet-Popescu, J. Opderbecke, V. Rigaud, R. Deriche, Z. Zhang, P. Costa, and P. Larzabal, “Segmentation of bathymetric profiles and terrain matching for underwater vehicle navigation,” International Journal of Systems Science, vol. 29, no. 10, pp. 1157–1176, 1998.
  • [6] A. Mallios, P. Ridao, D. Ribas, F. Maurelli, and Y. Petillot, “EKF-SLAM for AUV navigation under probabilistic sonar scan-matching,” in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on.   IEEE, 2010, pp. 4404–4411.
  • [7] F. Maurelli, D. Droeschel, T. Wisspeintner, S. May, and H. Surmann, “A 3d laser scanner system for autonomous vehicle navigation,” in Advanced Robotics, 2009. ICAR 2009. International Conference on.   IEEE, 2009, pp. 1–6.
  • [8] F. Maurelli, Y. Petillot, A. Mallios, P. Ridao, and S. Krupinski, “Sonar-based AUV localization using an improved particle filter approach,” in OCEANS 2009-EUROPE.   IEEE, 2009, pp. 1–9.
  • [9] D. McLeod, J. Jacobson, M. Hardy, and C. Embry, “Autonomous inspection using an underwater 3d lidar,” in Oceans-San Diego, 2013.   IEEE, 2013, pp. 1–8.
  • [10] MIT. (2018) MIT marine autonomy bay : Main - rex iv browse. [Online]. Available: http://oceanai.mit.edu/pavlab/pmwiki/pmwiki.php?n=Main.RexIV
  • [11] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source robot operating system,” in ICRA workshop on open source software, vol. 3, no. 3.2.   Kobe, Japan, 2009, p. 5.
  • [12] D. Ribas, P. Ridao, J. D. Tardós, and J. Neira, “Underwater SLAM in man-made structured environments,” Journal of Field Robotics, vol. 25, no. 11-12, pp. 898–921, 2008.
  • [13] S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in 3-D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on.   IEEE, 2001, pp. 145–152.
  • [14] R. B. Rusu, “Semantic 3d object maps for everyday manipulation in human living environments,” Ph.D. dissertation, CS Department, TU Muenchen, Germany, October 2009.
  • [15] J. Tang, Y. Chen, X. Niu, L. Wang, L. Chen, J. Liu, C. Shi, and J. Hyyppä, “LiDAR scan matching aided inertial navigation system in GNSS-denied environments,” Sensors, vol. 15, no. 7, pp. 16710–16728, 2015.
  • [16] K. Wang, Y. Liu, and L. Li, “Vision-based tracking control of underactuated water surface robots without direct position measurement,” IEEE Transactions on Control Systems Technology, vol. 23, pp. 2391–2399, 2015.