EU Long-term Dataset with Multiple Sensors for Autonomous Driving

09/07/2019 · Zhi Yan, et al. · Czech Technical University in Prague

The field of autonomous driving has grown tremendously over the past few years, along with the rapid progress in sensor technology. One of the major purposes of using sensors is to provide environment perception for vehicle understanding, learning and reasoning, and ultimately interacting with the environment. In this article, we introduce a multisensor framework allowing the vehicle to perceive its surroundings and locate itself in a more efficient and accurate way. Our framework integrates up to eleven heterogeneous sensors, including various cameras and lidars, a radar, an IMU (Inertial Measurement Unit), and a GPS/RTK (Global Positioning System / Real-Time Kinematic), and exploits ROS (Robot Operating System) based software to process the sensory data. In addition, we present a new dataset (https://epan-utbm.github.io/utbm_robocar_dataset/) for autonomous driving, collected with our instrumented vehicle and publicly available to the community, which captures many new research challenges (e.g. highly dynamic environments) and is especially suited to long-term autonomy (e.g. creating and maintaining maps).


I Introduction

Both academic research and industrial innovation in autonomous driving have seen tremendous growth in the past few years and are expected to continue growing rapidly in the coming years. This can be explained by two factors: the rapid development of hardware (e.g. sensors and computers) and software (e.g. algorithms and systems), and the growing societal need for safe, efficient, and low-cost travel.

A general framework for autonomous navigation of an unmanned vehicle consists of four modules: sensors, perception and localization, path planning and decision making, and motion control. It typically requires the vehicle to answer three questions: "Where am I?", "What's around me?", and "What should I do?". As shown in Fig. 1, the vehicle acquires external environmental data (e.g. images, distances and velocities of objects) and self-measurements (e.g. position, orientation, velocity and odometry) through various sensors. Sensory data are then delivered to the perception and localization module, helping the vehicle understand its surroundings and localize itself in a pre-built map. Moreover, the vehicle is expected to understand not only what has happened but also what is going on around it [7], and it may simultaneously update the map with a description of the local environment for long-term autonomy [13, 6]. Afterwards, depending on the pose of the vehicle and of other objects, a path is generated by the global planner and can be adjusted by the local planner according to real-time circumstances. The motion control module then calculates motor parameters to execute the path and sends commands to the actuators. Following the loop across these four components, the vehicle navigates autonomously in a typical "see-think-act" cycle.

Fig. 1: A general multisensor based framework for a map based autonomous driving system.

Effective perception and accurate localization are among the most essential capabilities for an autonomous vehicle to operate safely and reliably in our daily life. The former includes the measurement of internal (e.g. velocity and orientation of the vehicle) and external (e.g. human, object and traffic sign recognition) environmental information, while the latter mainly includes visual odometry / SLAM (Simultaneous Localization And Mapping), localization within a map, and place recognition / re-localization. These two tasks are closely related, and both are affected by the sensors used and by the way their data are processed.

Nowadays, heterogeneous sensing systems are commonly used in robotics and autonomous vehicles in order to produce comprehensive environmental information. Commonly used sensors include various cameras, 2D/3D lidar (LIght Detection And Ranging), radar (RAdio Detection And Ranging), IMU (Inertial Measurement Unit), and GNSS (Global Navigation Satellite System). Their combined use is mainly due to the fact that different sensors have different properties, and each category has its own pros and cons [15]. On the other hand, ROS (Robot Operating System) [11] has become the de facto standard platform for software development in robotics, and a growing number of researchers and companies develop autonomous vehicle software based on it. As evidence, seven emerging ROS-based autonomous driving systems were presented at ROSCon (https://roscon.ros.org/) 2017, while this number was zero in 2016.

In this article, we report our progress in building an autonomous car (see Fig. 2) at the University of Technology of Belfort-Montbéliard (UTBM) in France, with a focus on the recently completed multisensor framework. Firstly, we introduce the sensors used for efficient perception and accurate localization in autonomous driving, explaining the reasons for choosing them, their installation positions, and some trade-offs made in the system configuration. Secondly, we introduce a new dataset (https://epan-utbm.github.io/utbm_robocar_dataset/) for autonomous driving, recorded with our multisensor platform in both urban and suburban areas, in which all the sensors are calibrated, the data are approximately synchronized (i.e. at the software level), and ground truth for vehicle localization is provided. This dataset includes many new features of urban driving, such as sloping roads, shared zones, and diversions, and since it captures daily and seasonal changes, it is especially suitable for research on long-term vehicle autonomy [9]. Additionally, we implemented state-of-the-art methods as baselines for lidar odometry benchmarking, with ground-truth trajectories recorded by GPS/RTK. Finally, we illustrate the characteristics of the proposed system via a horizontal comparison with other vehicle platforms and their related datasets.

Fig. 2: The UTBM autonomous driving platform.

Starting to work with an autonomous vehicle can be challenging and time consuming, because one has to face difficulties in design and implementation from the hardware level (especially the various sensors) to the software level. A further purpose of this article is therefore to summarize our experience and help readers quickly overcome similar issues. We hope these descriptions will give the community a practical reference.

The rest of the article is organized as follows. Section II introduces our multisensor framework for autonomous driving, including both hardware and software perspectives. Section III presents our proposed dataset, baseline methods and challenges. Section IV gives an overview of some related work. The article is concluded by a short summary in Section V.

II The Framework

So far, there is no almighty and perfect sensor; all have limitations and edge cases. For example, GNSS is extremely easy to use for navigation and works in all weather conditions, but its update frequency and accuracy are usually insufficient for autonomous driving. Moreover, buildings and infrastructure in urban environments are likely to obstruct the signals, leading to positioning failures in many everyday scenes such as urban canyons, tunnels, and underground parking lots. Among visual and range sensors, 3D lidar is generally very accurate and has a large field of view (FoV). However, the sparse, purely geometric data (i.e. point clouds) obtained from this kind of sensor are of limited use for semantic perception tasks. Furthermore, when the vehicle travels at high speed, relevant information is not easily extracted due to scan distortion. 2D lidars have similar problems, with further limitations due to the availability of a single scan channel and a reduced FoV. Nevertheless, 2D lidars are usually cheaper than 3D ones, have mature algorithm support, and have long been widely used in mobile robotics for mapping and localization. Visual cameras encode rich semantic and texture information in images, but suffer from low robustness to lighting and illumination changes. Radar is very robust to lighting and weather changes, but lacks range-sensing accuracy. In summary, it is difficult to rely on a single sensor type for the efficient perception and accurate localization in autonomous driving addressed by this article. Hence, it is important for researchers and industry to leverage the advantages of different sensors and make the multisensor system complementary to its individual components. Table I summarizes typical advantages and disadvantages of the commonly used sensors.

Sensor | Pros | Cons
GNSS | easy to use; not very weather-sensitive | low positioning accuracy; limited in urban areas
Lidar | high positioning accuracy; fast data collection; usable day and night | high equipment cost; high computational cost; ineffective during rain
Camera | low equipment cost; provides intuitive images | low positioning accuracy; affected by lighting
Radar | reliable detection; unaffected by the weather | low positioning accuracy; slow data collection
TABLE I: Pros and cons of the commonly used sensors for autonomous driving

II-A Hardware

Fig. 3: The sensors used and their mounting positions.

The sensor configuration of our autonomous car is illustrated in Fig. 3. Its design mainly adheres to two principles: strengthen the visual scope as much as possible, and maximize the overlapping area perceived by multiple sensors. In particular:

  • Two stereo cameras, i.e. a front-facing Bumblebee XB3 and a back-facing Bumblebee2, are mounted on the front and rear of the roof, respectively. Both cameras use CCD sensors in global shutter mode, which is advantageous over rolling shutter cameras when the vehicle is driving at high speed: in global shutter mode every pixel of an image is exposed at the same instant in time, whereas in rolling shutter mode the exposure typically moves as a wave from one side of the image to the other.

  • Two Velodyne HDL-32E lidars are mounted side by side on the front portion of the vehicle roof. Each Velodyne lidar has 32 scan channels, a 360° horizontal and 40° vertical FoV, and a measuring range of up to 100 m. It is noteworthy that when multiple Velodyne lidars operate in proximity to one another, as in our case, sensory data may be affected by one sensor picking up a reflection intended for another. In order to reduce the likelihood of the lidars interfering with each other, we used the built-in phase-locking feature to control where the laser firings overlap during data recording, and post-processed the data to remove data shadows behind each lidar sensor. Details are given in Section II-B2.

  • Two Pixelink PL-B742F industrial cameras with fisheye lenses are installed in the middle of the roof, facing the lateral sides of the vehicle. Each camera has a CMOS global shutter sensor that freezes high-speed motion, while the fisheye lens captures an extremely wide angle of view. This setting, on the one hand, increases the vehicle's perception of the environment on both lateral sides, which has not been well studied so far, and on the other hand, adds a semantic complement to the Velodyne lidars.

  • An ibeo LUX 4L lidar is embedded in the front bumper close to the y-axis of the car. It provides four scanning layers, an 85° (or 110° if only two layers are used) horizontal FoV, and a measurement range of up to 200 m. Together with a radar, it is extremely important for ensuring the safety of the vehicle itself as well as of other objects (especially humans) in front of the vehicle.

  • A Continental ARS 308 radar is mounted close to the ibeo LUX lidar and is very reliable for the detection of moving objects. While less angularly accurate than lidar, radar works in almost every condition and can even use reflections to see behind obstacles. Our framework is designed to detect and track objects in front of the car by "cross-checking" radar and lidar data.

  • A SICK LMS100-10000 laser rangefinder (i.e. a 2D lidar) facing the road is mounted on one side of the front bumper. It measures its surroundings in two-dimensional polar coordinates and provides a 270° FoV. Due to its downward tilt, the sensor scans the road surface and delivers information about road markings and road boundaries. The combined use of the ibeo LUX and the SICK lidars is also recommended by the industrial community, i.e. the former for object detection (dynamics) and the latter for road understanding (statics).

  • A Magellan ProFlex 500 GNSS receiver is placed in the car with two antennas on the roof. One antenna is mounted on the z-axis perpendicular to the car's rear axle for receiving satellite signals, and the other is placed at the rear of the roof for synchronizing with an RTK base station. With the RTK enhancement, the GPS positioning is corrected and the positioning error is reduced from the meter level to the centimeter level.

  • An Xsens MTi-28A53G25 IMU is also placed inside the vehicle, outputting linear acceleration, angular velocity, and absolute orientation, among other measurements.

It is worth mentioning that one trade-off we made in our sensor configuration is the side-by-side use of two Velodyne 32-layer lidars rather than a single lidar or another model. The reason is twofold. First, in the single-lidar solution, the lidar must be mounted on a "tower" in the middle of the roof in order to eliminate occlusions caused by the roof, which is not an attractive option from an industrial design point of view. Second, a 64-layer lidar is more expensive than two 32-layer lidars, which in turn cost more than two 16-layer lidars. We therefore use a pair of 32-layer lidars as a trade-off between sensing efficiency and hardware cost.

Regarding the reception of sensory data, the ibeo LUX lidar and the radar are connected to a customized control unit used for real-time vehicle handling and low-level control such as steering, acceleration and braking. This setting is necessary because the real-time response of these two sensors on the CAN bus is extremely important for driving safety. All the lidars (via a high-speed Ethernet network), the radar (via RS-232), the cameras (via IEEE 1394), and the GPS/IMU (via USB) are connected to a DELL Precision Tower 3620 workstation. The latter is used only for data collection, while a dedicated embedded automation computer will serve as the master computer running the most essential system modules such as SLAM, point cloud clustering, sensor fusion, localization, and path planning. A gaming laptop (with a high-performance GPU) will then serve as a slave unit responsible for processing computationally intensive and algorithmically complex jobs, especially visual computing. In addition, our current system is equipped with two 60 Ah external car batteries that provide more than one hour of autonomy.

II-B Software

Our software system is based entirely on ROS. For data collection, all the sensors are physically connected to the DELL workstation and all ROS nodes run locally. This setting maximizes data synchronization at the software level (timestamped by ROS); data synchronization at the hardware level is beyond the scope of this article. The ROS-based software architecture diagram and the publishing frequency of each sensor for data collection are shown in Fig. 4. It is worth pointing out that the collection is done with a CPU-only (Intel i7-7700) computer without any data delay. This is mainly because we only record the raw data and leave post-processing to offline playback.
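Since the synchronization is purely timestamp based, approximately synchronized streams can be consumed offline with the standard ROS message_filters package. The following is a minimal sketch of this idea; the topic names and message types are assumptions for illustration and may differ from those in the released rosbags.

#!/usr/bin/env python
# Illustrative sketch (not part of the released software): consuming two of the
# recorded streams with software-level (timestamp based) synchronization.
# Topic names and message types are assumptions; check the rosbag with
# `rosbag info` for the actual ones.
import rospy
import message_filters
from sensor_msgs.msg import PointCloud2, Image

def callback(cloud, image):
    # Messages whose ROS timestamps fall within the 'slop' window arrive together.
    rospy.loginfo("lidar stamp: %s, camera stamp: %s",
                  cloud.header.stamp.to_sec(), image.header.stamp.to_sec())

if __name__ == "__main__":
    rospy.init_node("approx_sync_example")
    cloud_sub = message_filters.Subscriber("/velodyne_points", PointCloud2)
    image_sub = message_filters.Subscriber("/camera/image_raw", Image)
    # Approximate time policy: pair messages whose stamps differ by < 0.1 s.
    sync = message_filters.ApproximateTimeSynchronizer(
        [cloud_sub, image_sub], queue_size=10, slop=0.1)
    sync.registerCallback(callback)
    rospy.spin()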

Fig. 4: ROS-based software architecture diagram for data collection. The data are saved in rosbag format. Please note that, in order to help readers reproduce the system, we indicate the ROS package name instead of the ROS node name for each sensor driver; the ROS master actually communicates with the node provided by the package.

II-B1 Sensor Calibration

Like most other multisensor systems, our cameras and lidars are both intrinsically and extrinsically calibrated. The intrinsic calibration of the monocular cameras as well as the extrinsic calibration of the stereo cameras were performed with a chessboard using the ROS camera_calibration package, while the lidars use their factory intrinsic parameters. All other sensors were then calibrated with respect to the Velodyne lidars. The extrinsic parameters of the lidars were estimated by minimizing the voxel-wise distance of the points from different sensors while driving the car in a structured environment with several landmarks. To calibrate the extrinsic transform between the stereo camera and the Velodyne lidar, we drove the car facing the corner of a building and manually aligned the two point clouds on three planes, i.e. two walls and the ground. The aligned sensor data are visualized in Fig. 5. As we can see, after calibration the points from all the lidars and the stereo cameras are properly aligned. In addition, the point clouds produced by the two Velodyne HDL-32E lidars are as dense as those obtained by a costly Velodyne HDL-64E (i.e. a 64-layer lidar).
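As a minimal illustration of how such extrinsic parameters are used, the sketch below expresses a secondary sensor's points in the reference Velodyne frame via a 4x4 homogeneous transform; the numerical rotation and translation values are placeholders, not our calibration result.

# Illustrative sketch: expressing a secondary sensor's point cloud in the
# reference (Velodyne) frame using a 4x4 extrinsic transform T_ref_sensor.
# The rotation/translation values below are placeholders, not our calibration.
import numpy as np

def transform_points(points_xyz, T_ref_sensor):
    """points_xyz: (N, 3) array in the sensor frame; returns (N, 3) in the reference frame."""
    n = points_xyz.shape[0]
    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])      # (N, 4)
    return (homogeneous @ T_ref_sensor.T)[:, :3]

# Example placeholder extrinsics: 90 deg yaw and a small translation.
yaw = np.deg2rad(90.0)
T = np.eye(4)
T[:3, :3] = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                      [np.sin(yaw),  np.cos(yaw), 0.0],
                      [0.0,          0.0,         1.0]])
T[:3, 3] = [0.5, -0.4, 0.1]   # metres: sensor origin expressed in the reference frame

cloud_sensor = np.random.rand(1000, 3)           # stand-in for a real scan
cloud_ref = transform_points(cloud_sensor, T)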

Fig. 5: A ROS Rviz screenshot of the collected data with calibrated sensors. The UTBM robocar is in the centre of the image with a truck in front. The red ring points come from the Velodyne, white points from the SICK, and colored dots from the ibeo LUX lidar. The point clouds in front of and behind the car are from the two Bumblebee stereo cameras.

II-B2 Configuration of the two Velodyne lidars

As mentioned above, the two Velodyne lidars have to be properly configured in order to work efficiently. Firstly, the phase-lock feature of each sensor needs to be set to synchronize the relative rotational position of the two lidars, based on the Pulse Per Second (PPS) signal, which can be obtained from the GPS receiver connected to the lidar's interface box. In our case, with the two sensors placed on the left and right sides of the roof, the left one has its phase-lock offset set to 90° and the right one to 270°, as shown in Fig. 6.

Fig. 6: Phase offset setting of two side-by-side installed Velodyne lidars

Secondly, Eq. 1 [4] can be used to remove any spurious data due to blockage or reflections from the opposing sensor (i.e. data shadows behind each other, see Fig. 7):

θ = 2 · arctan(d / (2L))   (1)

where θ is the subtended angle, d is the diameter of the far sensor, and L is the distance between the sensor centers.

Fig. 7: Data shadows behind a pair of Velodyne lidars.
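A possible way to apply Eq. 1 in post-processing, assuming the bearing of the opposing sensor in each lidar's frame is known, is to discard returns whose azimuth falls within the shadow sector. The sketch below illustrates this with placeholder geometry values, not the exact dimensions of our setup.

# Illustrative post-processing sketch: remove returns falling inside the data
# shadow behind the opposing lidar (Eq. 1). The sensor diameter, baseline and
# bearing below are placeholders, not the exact values of our setup.
import numpy as np

d = 0.085   # diameter of the far sensor [m] (placeholder)
L = 0.60    # distance between the two sensor centres [m] (placeholder)
theta = 2.0 * np.arctan(d / (2.0 * L))          # subtended shadow angle, Eq. (1)

def remove_shadow(points_xyz, bearing_to_other_sensor):
    """Drop points whose azimuth lies within +/- theta/2 of the opposing sensor."""
    azimuth = np.arctan2(points_xyz[:, 1], points_xyz[:, 0])
    diff = np.arctan2(np.sin(azimuth - bearing_to_other_sensor),
                      np.cos(azimuth - bearing_to_other_sensor))  # wrap to [-pi, pi]
    keep = np.abs(diff) > theta / 2.0
    return points_xyz[keep]

scan = np.random.uniform(-50, 50, size=(2000, 3))     # stand-in for a Velodyne scan
filtered = remove_shadow(scan, bearing_to_other_sensor=np.pi / 2)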

Moreover, in order to avoid the network congestion caused by the sensors' broadcast data, we configure each Velodyne (and likewise the SICK and the ibeo LUX lidars) to transmit its packets to a specific (i.e. non-broadcast) destination IP address (in our case, the IP address of the workstation) via a unique port.

III Dataset

Our recording software is fully implemented in ROS. Data collection was carried out on Ubuntu 16.04 LTS (64-bit) with ROS Kinetic. The vehicle was driven by a human and all ADAS (Advanced Driver Assistance System) functions were disabled. The data collection was performed in the downtown area (for the long-term data) and a suburb (for the roundabout data) of Montbéliard in France. The vehicle speed was limited to 50 km/h following the French traffic rules. As one would expect, the urban scene during the day (recording time around 15h to 16h) is highly dynamic, while the evening (recording time around 21h) is relatively calm. Light and vegetation (especially street trees) are abundant in summer, while winter is generally poorly lit, with little vegetation and roads sometimes even covered with ice and snow. All data were recorded in rosbag files for easy sharing with the community. The data collection itineraries, which were carefully selected after many trials, can be seen in Fig. 8.

(a) Itinerary for long-term data.
(b) Itinerary for roundabout data.
Fig. 8: Data collection itineraries drawn on Google Maps.
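Since all data are stored as rosbag files, a recording can be inspected offline with the standard rosbag Python API. The sketch below is illustrative only; the bag filename and topic name are placeholders, and `rosbag info` lists the actual topics of a downloaded bag.

# Illustrative sketch: inspecting one of the recordings offline with the
# standard rosbag API. Filename and topic are placeholders; run `rosbag info`
# on a downloaded bag to list the actual topic names.
import rosbag

with rosbag.Bag("utbm_robocar_dataset_example.bag") as bag:
    # Print a quick summary of what was recorded and how many messages each topic holds.
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print("%-40s %-35s %6d msgs" % (topic, info.msg_type, info.message_count))

    # Iterate over a single stream, e.g. GPS fixes (assumed topic name).
    for topic, msg, t in bag.read_messages(topics=["/gps/fix"]):
        print(t.to_sec(), msg.latitude, msg.longitude)
        break   # remove to process the whole bag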

For the long-term data, we focus on environments that are closely related to periodic changes [8, 14], such as daily, weekly and seasonal changes. Each recording is about 5 km long; the route passes through the city centre, a park, a residential area, a commercial area and a bridge over the river Doubs, and includes a small and a big road loop (for loop-closure purposes). The RTK base station was placed at a fixed location on a mound, marked by the red dot in Fig. 8 (357 m above sea level), in order to communicate with the GNSS receiver in the car with minimal signal occlusion. With these settings, we recorded data during the day, at night, during the week, and in summer and winter (with snow), always following the same itinerary. At the same time, we captured many new research challenges such as uphill/downhill roads, shared zones, road diversions, and highly dynamic/dense traffic.

Moreover, roundabouts are very common in France as well as in other European countries. This road condition is not easy to handle, even for humans; the key is to accurately predict the behavior of other vehicles. To promote research on this topic, we recorded data in the area near the UTBM Montbéliard campus, which contains 10 roundabouts of various sizes within a range of approximately 0.75 km (see Fig. 8).

III-A Lidar Odometry Benchmarking

As part of the dataset, we establish several baselines for lidar odometry, which is one of the core challenges of our dataset. Specifically, we forked the implementations of the following state-of-the-art methods and evaluated them on our dataset:

  • loam_velodyne [17] is one of the most advanced lidar odometry methods, providing real-time SLAM for 3D lidar and achieving state-of-the-art performance on the KITTI benchmark [3]. The implementation is robust in both structured (urban) and unstructured (highway) environments, and a scan restoration mechanism is devised for high-speed driving.

  • LeGO-LOAM [12] is a lightweight and ground-optimized variant of LOAM, mainly addressing the deterioration of LOAM's performance when resources are limited and the environment is noisy. Point cloud segmentation is performed in LeGO-LOAM to discard points that may represent unreliable features after ground separation.

As an example, Fig. 9(a) and Fig. 9(b) show the odometry results of the loam_velodyne and LeGO-LOAM algorithms, respectively, on one recording round. Users are encouraged to evaluate their methods, compare them with the provided baselines on devices with different levels of computational capability, and submit their results to our baseline GitHub repository. However, only real-time performance is accepted, as it is critically important for vehicle localization in autonomous driving.

(a) Lidar odometry generated by loam_velodyne.
(b) Lidar odometry generated by LeGO-LOAM.
Fig. 9: Evaluation example of the baseline methods.
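A common way to compare an estimated odometry trajectory against the GPS/RTK ground truth is the absolute trajectory error (ATE) after a rigid alignment. The sketch below illustrates this generic recipe with synthetic data; it is not the exact evaluation protocol of our benchmark, and the trajectories are assumed to be already time-associated and expressed in metric coordinates.

# Illustrative sketch: absolute trajectory error (ATE) of an estimated
# trajectory against time-associated GPS/RTK ground truth, after a rigid
# (rotation + translation) alignment. This is a generic evaluation recipe,
# not the exact protocol of our benchmark.
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (Horn/Umeyama, no scale). est, gt: (N, 3)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    R, t = align_rigid(est, gt)
    err = (est @ R.T + t) - gt
    return np.sqrt((err ** 2).sum(axis=1).mean())

# Example with synthetic data: a ground-truth path and a noisy, rotated estimate.
gt = np.cumsum(np.random.randn(500, 3) * 0.1, axis=0)
noise = np.random.randn(500, 3) * 0.05
est = gt @ np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]).T + noise
print("ATE RMSE: %.3f m" % ate_rmse(est, gt))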

III-B Long-term Autonomy

Towards an off-the-shelf autonomous driving system, long-term vehicle localization and mapping are necessary. To this end, we introduce the concepts of "self-aware localization" and "liability-aware long-term mapping" to improve the robustness of vehicle localization in real-life, changing environments. More specifically, for the former, the vehicle should be able to wake up in any previously known location [2], while "liability-aware long-term mapping" enables the vehicle to maintain the map over the long term by monitoring the variation of landmarks and the quality of map alignment [13]. In this article, we present multiple sessions of driving data with variations in lighting and landmarks. We propose long-term localization and mapping as open problems and encourage researchers to investigate potential solutions on our long-term dataset.

IV Related Work

Over the past few years, numerous platforms and resources for autonomous driving have emerged and grabbed public attention. The AnnieWAY platform (http://www.mrt.kit.edu/annieway/) with its famous KITTI dataset (http://www.cvlibs.net/datasets/kitti/) [3] has long been highly influential in the community. This dataset is the most widely used visual perception dataset for autonomous driving, recorded with a sensing system comprising an OXTS RT 3003 GPS/IMU integrated system, a Velodyne HDL-64E 3D lidar, two Point Grey Flea 2 grayscale cameras, and two Point Grey Flea 2 color cameras. With this configuration, the instrumented vehicle produces 10 lidar frames per second with 100k points per frame for lidar-based localization and 3D object detection, two grayscale images for visual odometry, and two color images for optical flow estimation, object detection, tracking and semantic understanding benchmarks.

The RobotCar (https://ori.ox.ac.uk/application/robotcar/) from the University of Oxford is another powerful, competitive platform. Its publicly available dataset (https://robotcar-dataset.robots.ox.ac.uk/) [10] is the first multisensor long-term on-road driving dataset. The Oxford RobotCar is equipped with a Point Grey Bumblebee XB3 stereo camera, three Point Grey Grasshopper2 fisheye cameras, two SICK LMS-151 2D lidars and a SICK LD-MRS 3D lidar. With this configuration, the three fisheye cameras cover a 360° FoV, and the 2D/3D lidars and stereo cameras yield data streams at 11 fps and 16 fps, respectively. The dataset was collected over a period of one year and covers around 1000 km in total.

Other datasets, including Cityscapes (https://www.cityscapes-dataset.com/) [1], BDD100K (https://bair.berkeley.edu/blog/2018/05/30/bdd/) [16], and ApolloScape (http://apolloscape.auto/self_localization.html) [5], mainly focus on visual perception tasks such as object detection, semantic segmentation, and lane/drivable area segmentation, and only visual data (i.e. images) are released. As the present article focuses on multisensor perception and localization, we do not detail these datasets here.

For a deeper analysis, KITTI provides a relatively comprehensive set of challenges for both perception and localization, and its hardware configuration, i.e. a combination of 3D lidar and stereo cameras, is widely used as a prototype by autonomous vehicle companies. However, the KITTI dataset still has two limitations. First, the data were captured in a single session, so long-term variations of the scene, e.g. lighting and season, are not covered. Second, the visual cameras do not cover the full FoV, leaving blind spots. The Oxford dataset investigates vision-based perception and localization under varying seasons, weather and times of day, but data from the most advanced 3D lidar sensors are not included. In this article, we leverage the pros of the KITTI and Oxford platform designs and eliminate the cons: a multisensor framework combining four lidars (including two Velodynes) and four cameras is proposed to provide stronger range and visual sensing.

Apart from hardware configurations and dataset collection, there are widely cited open-source repositories, such as Apollo (https://github.com/ApolloAuto/apollo), Autoware (https://github.com/CPFL/Autoware), and Udacity (https://github.com/udacity/self-driving-car), which provide researchers with a platform to contribute and share autonomous driving software.

V Conclusion

In this article, we presented our autonomous driving platform with a focus on a multisensor framework for efficient perception and accurate localization. To build the framework, we integrated up to eleven heterogeneous sensors, including various lidars and cameras, a radar, and a GPS/IMU, in order to enhance the vehicle's visual scope and perception capability. By exploiting the heterogeneity of the different sensory data, the vehicle is also expected to have better situational awareness and ultimately improve the safety of autonomous driving for human society.

Leveraging our instrumented car, a ROS-based dataset has been cumulatively recorded and made publicly available to the community. This dataset is full of new research challenges and, as it contains periodic changes, it is especially suitable for long-term autonomy studies. We hope our efforts will support further development and help solve related problems in autonomous driving.

Acknowledgment

This work was supported by the Quality Research Bonus (BQR) of the University of Technology of Belfort-Montbéliard (UTBM), the Contrat de Plan État-Région (CPER) 2015-2020 Mobilitech, and the PHC Barrande programme under grant agreement No. 40682ZH (3L4AV).

The authors would like to thank Dr. Abdeljalil Abbas-Turki, Dr. Olivier Lamotte, Dr. Jocelyn Buisson, and Fahad Lateef for their help in building the dataset, and the Lincoln Centre for Autonomous Systems (L-CAS) for hosting the dataset.

References

  • [1] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3213–3223. Cited by: §IV.
  • [2] R. Dubé, D. Dugas, E. Stumm, J. I. Nieto, R. Siegwart, and C. Cadena (2017) SegMatch: segment based place recognition in 3d point clouds. In IEEE International Conference on Robotics and Automation (ICRA), pp. 5266–5272. Cited by: §III-B.
  • [3] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: 1st item, §IV.
  • [4] (2018) HDL-32E user manual. Velodyne. Note: 63-9113 Rev. M Cited by: §II-B2.
  • [5] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang (2018) The apolloscape dataset for autonomous driving. CoRR abs/1803.06184. External Links: Link, 1803.06184 Cited by: §IV.
  • [6] T. Krajník, J. P. Fentanes, G. Cielniak, C. Dondrup, and T. Duckett (2014) Spectral analysis for long-term robotic mapping. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3706–3711. Cited by: §I.
  • [7] T. Krajník, J. P. Fentanes, J. M. Santos, and T. Duckett (2017) FreMEn: frequency map enhancement for long-term mobile robot autonomy in changing environments. IEEE Transactions on Robotics 33 (4), pp. 964–977. Cited by: §I.
  • [8] T. Krajnik, T. Vintr, S. Molina, J. P. Fentanes, G. Cielniak, O. M. Mozos, G. Broughton, and T. Duckett (2019) Warped hypertime representations for long-term autonomy of mobile robots. IEEE Robotics and Automation Letters 4 (4), pp. 3310–3317. Cited by: §III.
  • [9] L. Kunze, N. Hawes, T. Duckett, M. Hanheide, and T. Krajnik (2018) Artificial intelligence for long-term robot autonomy: a survey. IEEE Robotics and Automation Letters 3, pp. 4023–4030. Cited by: §I.
  • [10] W. Maddern, G. Pascoe, C. Linegar, and P. Newman (2017) 1 year, 1000 km: the oxford robotcar dataset. The International Journal of Robotics Research 36 (1), pp. 3–15. Cited by: §IV.
  • [11] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng (2009) ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software, Cited by: §I.
  • [12] T. Shan and B. Englot (2018) LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758–4765. Cited by: 2nd item.
  • [13] L. Sun, Z. Yan, A. Zaganidis, C. Zhao, and T. Duckett (2018) Recurrent-octomap: learning state-based map refinement for long-term semantic mapping with 3d-lidar data. IEEE Robotics and Automation Letters 3 (4), pp. 3749–3756. Cited by: §I, §III-B.
  • [14] T. Vintr, Z. Yan, T. Duckett, and T. Krajnik (2019) Spatio-temporal representation for long-term anticipation of human presence in service robotics. In IEEE International Conference on Robotics and Automation (ICRA), pp. 2620–2626. Cited by: §III.
  • [15] Z. Yan, L. Sun, T. Duckett, and N. Bellotto (2018) Multisensor online transfer learning for 3d lidar-based human detection with a mobile robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain. Cited by: §I.
  • [16] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell (2018) BDD100K: A diverse driving video database with scalable annotation tooling. CoRR abs/1805.04687. External Links: Link, 1805.04687 Cited by: §IV.
  • [17] J. Zhang and S. Singh (2014) LOAM: lidar odometry and mapping in real-time.. In Robotics: Science and Systems, Vol. 2, pp. 9. Cited by: 1st item.