SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous Driving

02/25/2023
by Jiawei Hou, et al.

Automatic underground parking has attracted considerable attention as the scope of autonomous driving expands. The autonomous vehicle must perceive its environment, track its own location, and build a reliable map of the scenario. Mainstream solutions combine well-trained neural networks with simultaneous localization and mapping (SLAM) methods, which require large numbers of carefully labeled images and estimates from multiple sensors. However, there is a lack of underground parking scenario datasets with multiple sensors and well-labeled images that support both SLAM tasks and perception tasks such as semantic segmentation and parking slot detection. In this paper, we present SUPS, a simulated dataset for underground automatic parking that supports multiple tasks with multiple sensors and multiple semantic labels aligned with successive images according to timestamps. We aim to remedy the shortcomings of existing datasets through the variability of environments and the diversity and accessibility of sensors in the virtual scene. Specifically, the dataset records frames from four surrounding fisheye cameras, two forward pinhole cameras, and a depth camera, along with data from LiDAR, an inertial measurement unit (IMU), and GNSS. Pixel-level semantic labels are provided for objects, especially ground signs such as arrows, parking lines, lanes, and speed bumps. Perception, 3D reconstruction, depth estimation, SLAM, and other related tasks are supported by our dataset. We also evaluate state-of-the-art SLAM algorithms and perception models on our dataset. Finally, we open-source our virtual 3D scene, built with the Unity engine, and release our dataset at https://github.com/jarvishou829/SUPS.
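Because every sensor stream in SUPS is stamped on a common clock, a typical first step when consuming such a dataset is to associate frames across streams by nearest timestamp. The Python sketch below illustrates one way to do this; the directory layout and per-sensor timestamps.txt files are hypothetical placeholders for illustration, not the actual format of the released dataset.

    # Hypothetical sketch: pairing camera frames with the nearest LiDAR sweep
    # and IMU sample by timestamp. File names and layout are assumptions,
    # not the actual structure of https://github.com/jarvishou829/SUPS.
    from pathlib import Path
    from bisect import bisect_left

    def load_timestamps(index_file: Path) -> list[float]:
        """Read one timestamp (seconds) per line from a plain-text index file."""
        with open(index_file) as f:
            return [float(line.split()[0]) for line in f if line.strip()]

    def nearest_index(timestamps: list[float], query: float) -> int:
        """Index of the timestamp closest to `query`; list must be sorted."""
        pos = bisect_left(timestamps, query)
        if pos == 0:
            return 0
        if pos == len(timestamps):
            return len(timestamps) - 1
        before, after = timestamps[pos - 1], timestamps[pos]
        return pos if after - query < query - before else pos - 1

    def align_streams(root: Path) -> list[dict]:
        """Pair each front-camera frame with the nearest LiDAR and IMU records."""
        cam_ts = load_timestamps(root / "front_pinhole" / "timestamps.txt")  # assumed file
        lidar_ts = load_timestamps(root / "lidar" / "timestamps.txt")        # assumed file
        imu_ts = load_timestamps(root / "imu" / "timestamps.txt")            # assumed file
        return [
            {
                "camera_frame": i,
                "lidar_sweep": nearest_index(lidar_ts, t),
                "imu_sample": nearest_index(imu_ts, t),
                "timestamp": t,
            }
            for i, t in enumerate(cam_ts)
        ]

    if __name__ == "__main__":
        for pair in align_streams(Path("SUPS"))[:5]:
            print(pair)

Nearest-neighbor matching on sorted timestamps costs O(log n) per query via binary search, which is ample for offline alignment of recorded streams.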


