CMU-GPR Dataset: Ground Penetrating Radar Dataset for Robot Localization and Mapping

07/15/2021
by   Alexander Baikovitz, et al.
Carnegie Mellon University

There has been exciting recent progress in using radar as a sensor for robot navigation due to its increased robustness to varying environmental conditions. However, within these different radar perception systems, ground penetrating radar (GPR) remains under-explored. By measuring structures beneath the ground, GPR can provide stable features that are less sensitive to changes in weather, scene, and lighting, making it a compelling choice for long-term spatio-temporal mapping. In this work, we present the CMU-GPR dataset, an open-source ground penetrating radar dataset for research in subsurface-aided perception for robot navigation. In total, the dataset contains 15 distinct trajectory sequences in 3 GPS-denied, indoor environments. Measurements from a GPR, wheel encoder, RGB camera, and inertial measurement unit were collected with ground truth positions from a robotic total station. In addition to the dataset, we also provide utility code to convert raw GPR data into processed images. This paper describes our recording platform, the data format, utility scripts, and proposed methods for using this data.


I Introduction

We present the CMU-GPR dataset, which is, to the best of our knowledge, the first open-source ground penetrating radar (GPR) dataset available to researchers interested in subsurface-aided perception for robot navigation. Radar-based perception has been shown to perform more robustly than conventional spatial or visual sensors in inclement weather [RadarRobotCarDatasetICRA2020, sheeny2020radiate, yspark-2019-icra-ws]. While recent work has focused on using millimeter-wave radar systems to construct surface-level models, there has been growing interest in using subsurface information from GPR for localization [Cornick2016, Ort2020].

GPR presents a modality to recognize a location in situations where the visual environment may change by perceiving typically constant subsurface features. For instance, a robot operating in a mining environment may encounter substantial changes in the surface environment due to mining operations, yet underground features remain more consistent. Similarly, GPR-based localization can be effective in sparsely featured environments, such as monotonous tunnels and open roads, where subsurface geologic diversity can enable lane tracking with respect to a prior map.

In this contribution, we produce a dataset and utility functions to empower researchers to explore how GPR can be used as a tool for robot navigation applications. The dataset collected contains subsurface measurements from a low-cost, off-the-shelf, single-channel GPR system along with a wheel encoder, IMU, RGB camera, and robotic total station.

Figure 1: The SuperVision platform for collecting subsurface data with a GPR sensor during motion. (a) A cross section of the subsurface containing different flat boundaries and a pipe. (b) Schematic of the SuperVision platform used for data collection.

II The SuperVision Platform

SuperVision, shown in Figure 1(b), is a custom, manually-pulled experimental rig used for acquiring subsurface data from GPR. SuperVision uses a quad-core Intel NUC for onboard compute and wirelessly transmits data from an XSENS MTI-30 9-axis inertial measurement unit, a YUMO quadrature encoder with 1024 PPR, an Intel RealSense D435 camera, and a Sensors and Software Noggin 500 GPR. Ground truth data was acquired by a Leica TS15 total station. The base station logs measurements from both the onboard computer and the total station to ensure time synchronization.

III Data Collection

The CMU-GPR dataset contains short trajectories from three distinct, GPS-denied environments: a basement (nsh_b), a factory floor (nsh_h), and a parking garage (gates_g). The dataset consists of 15 traversals in which the manually-pulled experimental rig revisits previous locations, whether through forward-backward motion or by closing loops. Several distinct trajectories contain similar features, which is relevant to research on re-localization using subsurface information; these sequences are described further on the project website. Additionally, we provide trajectories from outdoor environments without ground truth information, which offer additional data for training models.

Figure 2(a) shows the directory structure for a single trajectory sequence. Each zip file contains data from a sequence, where each .csv represents a different measurement type. The maximum size of a zipped dataset file is 4 GB. The formats for each measurement type are summarized in Figure 2(b). In addition to the data, the project website contains relevant IMU noise and bias parameters as well as the factory extrinsic calibration of the system.

Figure 2: (a) Directory layout for the CMU-GPR dataset. (b) Data format by sensor measurement type.
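As a concrete illustration of this layout, the following snippet sketches one way to read the per-sensor .csv files of an unzipped sequence into pandas DataFrames. The directory path and file names used here are placeholders; the authoritative file names and column layouts are those given in Figure 2 and on the project website.

```python
# Minimal loading sketch. File and directory names below are hypothetical;
# consult Figure 2(b) and the project website for the actual layout.
from pathlib import Path

import pandas as pd


def load_sequence(sequence_dir):
    """Load every measurement .csv in an unzipped sequence into a DataFrame,
    keyed by file stem (one table per sensor measurement type)."""
    measurements = {}
    for csv_path in sorted(Path(sequence_dir).glob("*.csv")):
        measurements[csv_path.stem] = pd.read_csv(csv_path)
    return measurements


# Example usage with a hypothetical path for a basement (nsh_b) sequence.
data = load_sequence("cmu_gpr/nsh_b/sequence_01")
print({name: len(df) for name, df in data.items()})
```

Keeping each sensor stream in its own table preserves the native timestamps, so downstream code can interpolate or associate measurements across sensors as needed.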

IV Development Tools

Along with the datasets relevant to GPR-based localization, we provide utility code, written in Python, that processes the raw GPR data to construct images. A modular routine that processes 1D measurements and constructs an image with uniform spacing is described in Section IV-A. Additionally, we provide a script to generate submaps, which can be used for training or evaluating GPR sensor models.

IV-A Signal Processing

Utility code to process raw GPR signals and images is provided in signal_processing_utils.py and used in metric_gpr_image.py. The implementation accepts raw, unevenly spaced GPR images and produces processed brightness scans. The base pipeline performs rubber band interpolation, mean background subtraction, dewow filtering, triangular bandpass filtering, zero time correction, SEC gain, wavelet denoising, and Gaussian filtering.
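To make the pipeline concrete, the sketch below illustrates a few of these steps (mean background subtraction, dewow filtering, SEC gain, and Gaussian smoothing) on a 2D B-scan array. It is a simplified stand-in rather than the released implementation; the remaining steps (rubber band interpolation, triangular bandpass filtering, zero time correction, and wavelet denoising) are omitted, and the function names are illustrative only.

```python
# Simplified stand-in for part of the processing chain; not the API of
# signal_processing_utils.py. Input: a B-scan of shape
# (samples_per_trace, num_traces), i.e. one column per GPR trace.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter1d


def mean_background_subtraction(scan):
    """Remove the average trace to suppress horizontal banding."""
    return scan - scan.mean(axis=1, keepdims=True)


def dewow(scan, window=31):
    """Remove low-frequency 'wow' by subtracting a running mean along the
    time/depth axis of each trace."""
    return scan - uniform_filter1d(scan, size=window, axis=0)


def sec_gain(scan, alpha=0.02):
    """Exponential compensation gain: amplify later (deeper) samples to
    counteract attenuation with depth."""
    t = np.arange(scan.shape[0])[:, None]
    return scan * np.exp(alpha * t)


def process_bscan(raw_scan):
    """Apply a reduced version of the base pipeline to a raw B-scan."""
    scan = mean_background_subtraction(raw_scan.astype(float))
    scan = dewow(scan)
    scan = sec_gain(scan)
    return gaussian_filter(scan, sigma=1.0)  # light spatial smoothing
```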

IV-B Image Construction

Beyond processing the raw data, the utility code also simplifies image acquisition. The MetricGprImage object stores the GPR data for a sequence and allows the client to access images by time of acquisition. The ImageConstructor object provides a higher level of abstraction, allowing the client to create a traditional radargram of the entire sequence or to automatically generate all valid submap images.

Figure 3: API for constructing processed images from the CMU-GPR dataset.
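For reference, the snippet below sketches hypothetical usage of these two objects. All method names here are placeholders chosen for illustration; the actual interface is the one shown in Figure 3 and in the released utility code.

```python
# Hypothetical usage sketch; constructor and method names are placeholders,
# not the actual API of metric_gpr_image.py (see Figure 3 for the real one).
from metric_gpr_image import MetricGprImage, ImageConstructor

# Load one sequence (path assumed for illustration).
gpr = MetricGprImage("cmu_gpr/nsh_b/sequence_01")

# Access a processed image around a query acquisition time (name assumed).
image = gpr.image_at(timestamp=1626370000.0)

# Higher-level construction: a whole-sequence radargram or all valid submaps.
builder = ImageConstructor(gpr)
radargram = builder.full_radargram()   # name assumed
submaps = builder.generate_submaps()   # name assumed
```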

V Approaches

The dataset proposed in this paper was used for robot localization in unknown, GPS-denied environments using a GPR sensor. In that work, Baikovitz et al. [baikovitz2021ground] utilized learned sensor models to incorporate GPR submaps into a factor graph-based estimation framework. These learned models provided relative motion predictions during loop closures, thereby correcting for accumulated drift in the position estimates. The goal of that work was to demonstrate how a low-cost, off-the-shelf, single-channel GPR system can be used effectively for robot localization.
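As an illustration of this style of estimation, the sketch below fuses simple odometry factors with a single GPR-derived loop-closure factor in a 2D pose graph. GTSAM is used here only as one possible backend; the original work does not prescribe a particular library, and all keys, poses, and noise values are illustrative.

```python
# Pose-graph sketch: odometry between consecutive poses plus one relative-motion
# factor from a GPR submap match at a revisit. GTSAM is an assumed backend;
# all numeric values are illustrative.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.005]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
gpr_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10, 0.10, 0.05]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))

# Wheel-encoder/IMU odometry between consecutive poses (slightly drifting guesses).
for i in range(3):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    initial.insert(i + 1, gtsam.Pose2(1.05 * (i + 1), 0.02 * (i + 1), 0.0))

# A GPR submap match between pose 3 and the revisited pose 0 provides a
# relative-motion measurement that closes the loop and corrects drift.
graph.add(gtsam.BetweenFactorPose2(3, 0, gtsam.Pose2(-3.0, 0.0, 0.0), gpr_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))
```

In the learned-sensor-model setting, the relative pose attached to the loop-closure factor would come from comparing the current GPR submap against a previously acquired one, with the noise model reflecting the model's predicted uncertainty.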

We see many avenues for extending the use of this dataset and GPR in robotics. One limitation of a localization-only approach is that prior locations must be revisited in order to correct for drift. This can be avoided by maintaining an online map of subsurface structures, such as pipes, mines, and natural caves, and localizing with respect to it, which amounts to solving a simultaneous localization and mapping (SLAM) problem. Finding the right representation for GPR data is likely the largest challenge for robust, GPR-based SLAM. We observe that a 2D image representation performs well compared to raw 1D traces, which do not provide sufficiently distinctive features, and to 3D point clouds, which are sensitive to noise and to variable subsurface composition. Some prior work addresses automatic detection of hyperbolic features in 2D GPR images; however, these methods are often heuristic-based and have not been investigated for use in a SLAM graph [qingxuhyperbola2017].

VI Discussion

Our motivation for providing this contribution is to encourage others in the field to take similar steps in making GPR-based perception datasets available to researchers. We believe that providing this data to the research community will spur further development of robust GPR-based localization systems for real-world deployment.

References