3D Reactive Control and Frontier-Based Exploration for Unstructured Environments

08/01/2021, by Shakeeb Ahmad, et al., University of Colorado Boulder

The paper proposes a reliable and robust planning solution to the long range robotic navigation problem in extremely cluttered environments. A two-layer planning architecture is proposed that leverages both the environment map and the direct depth sensor information to ensure maximal information gain out of the onboard sensors. A frontier-based pose sampling technique is used with a fast marching cost-to-go calculation to select a goal pose and plan a path to maximize robot exploration rate. An artificial potential function approach, relying on direct depth measurements, enables the robot to follow the path while simultaneously avoiding small scene obstacles that are not captured in the map due to mapping and localization uncertainties. We demonstrate the feasibility and robustness of the proposed approach through field deployments in a structurally complex warehouse using a micro-aerial vehicle (MAV) with all the sensing and computations performed onboard.


I Introduction

Robotic exploration and navigation through unknown and unstructured environments have many real-world applications, ranging from firefighting and surveillance to search and rescue operations after natural disasters. A successful autonomous robotic mission may yield very valuable information about a previously unknown environment that could be hazardous for humans. The recent DARPA Subterranean (SubT) Challenge [7] poses such a problem for large-scale, complex underground environments. Many of these environments require reliable and repeatable solutions to 3D navigation and planning. A popular vehicle choice is the quadrotor UAV due to its traversability and hover-in-place capabilities. This paper addresses 3D exploration and planning for a quadrotor UAV in unknown environments with extreme levels of clutter.

Fig. 1: A side view of the depth sensors' configuration on the quadrotor UAV. The environment map is incrementally built using the upward- and downward-facing depth cameras and the long-range 360° Ouster lidar. The path is generated using the map-based frontier view planner and followed by the APF reactive controller, which uses direct information from all the depth cameras to locally avoid small obstacles.

Typical motion planners rely on one of two types of information: instantaneous sensor measurements or a map representation of the environment. The map is typically built incrementally as the vehicle traverses an unknown environment, fusing the environment information as seen by the robot. OctoMap [17] and Voxblox [22] have been shown in the past to provide dense representations of an environment in a computationally feasible way. Given the occupancy grid of an area already explored by the robot, Yamauchi [27, 26] first proposed the concept of frontier-based exploration. He proposed that efficient exploration of unknown areas could be achieved by detecting map frontiers, grouping them together, and prioritizing them based on a user-defined objective. He further showed [28] that the technique had the potential to work in a multi-robot setting where each robot could share map frontiers, making the exploration resilient to the loss of one or multiple robots in the network. The concept of frontiers has been widely used since then to bias path planners towards the unseen areas in a map [12, 13, 15]. Our path planning method is primarily based on this notion. The environment representation is obtained in the form of a Euclidean Signed Distance Field (ESDF), a concept that is well-known in computer graphics and has proven very valuable to many path planners, especially in mobile robotics [8, 23]. We use Voxblox as an open-source and computationally feasible solution to obtain this information. Frontiers are first grouped together to obtain multiple goal poses of interest at any replan time instant. The fast marching method is then used to calculate the appropriate cost-to-go from the robot's position to the sampled goal poses, and the lookahead path is generated by following the gradient of the cost-to-go map.

In order to ensure reliable navigation through a highly cluttered environment with obstacles of heterogeneous nature, a planner relying on a single type of information may not be sufficient. This is because map-based motion plans rely heavily on the mapping accuracy. Small and thin objects are typically difficult to capture in maps due to their reduced representation. Building a very high resolution map is computationally costly for a small-scale robot such as an MAV. Moreover, small perturbations in the onboard localization add to the mapping uncertainty. As a result, the motion plans may pass extremely close to obstacles or may completely ignore small scene obstacles that do not exist in the map, rendering the solution ineffective, especially for a UAV that is highly sensitive to collisions. Reactive controllers inspired by potential functions [18] and biology [4] solve many of these issues by planning relative to the sensor frame without building an environment map. Some efforts have been made recently to plan inside the depth image space [2, 11, 9, 20, 1]. We complement the mapping-based planning solution with a potential function reactive controller that senses and avoids small obstacles using direct depth information from the onboard sensors. In the rest of the paper, we use the terms global and local for the map-based planner and the reactive controller respectively. Existing long-range UAV planning methods in the literature, such as [3, 5, 6, 8], have been demonstrated in underground tunnels and small, confined, nearly-empty rooms. These environments exhibit generally predictable structure; for instance, underground mines typically consist of long corridors connected through junctions, with most of the obstacles appearing close to the walls. We demonstrate our planning approach in a highly cluttered and unstructured environment containing obstacles that are heterogeneous in shape and size. In one of our runs, the UAV travelled over 300 m in a tightly-spaced environment. To the best of our knowledge, this work is the first attempt to demonstrate long-range UAV exploration in environments of extreme clutter. The solution is part of the autonomy stack of our team (MARBLE) for the DARPA Subterranean Challenge Phases I and II [21].

The rest of the paper is organized as follows. Section II motivates the problem and introduces the sensing modalities. Section III details the map-based path planner followed by the discussion of the vision-based reactive controller in Section IV. The results from the physical UAV experiments are presented in Section V. Finally, Section VI concludes the discussion.

II Problem Setup

In order to maximize the field-of-view (FOV) in a practically feasible manner, the UAV considered in this paper is equipped with 4 depth sensors: a 360-degree long-range 3D lidar sensor and upward-, downward- and forward-facing depth cameras. The 3D lidar sensor used for our application is an Ouster OS1, which is well-suited for localization and mapping due to its long range and wide horizontal FOV. However, like many other long range lidar sensors, it suffers from a narrow vertical FOV, sparse pointclouds, and performance degradation at close ranges. In contrast, depth cameras, such as the Intel RealSense, provide higher resolution depth information and a wider vertical FOV, which are essential to detect and avoid small and thin obstacles at close ranges. Therefore, the local obstacle avoidance relies only on the upward-, downward- and forward-facing depth cameras, while the map for the global planner is built primarily using the 3D lidar sensor, complemented by the upward- and downward-facing depth cameras for vertical awareness. In order to compensate for the cameras' limited horizontal FOV, we constrain the UAV to nonholonomic unicycle kinematics in its body frame, commanded by forward, vertical, and steering velocities. This ensures that there is a depth camera in every allowed direction of motion for the local obstacle avoidance. The pose of the robot is expressed in the world frame and drawn from the set of all possible robot poses.

The outer planning loop generates a lookahead path at each replan time instant using the most recent environment map available. With the intention of planning in a computationally feasible manner onboard an MAV with limited resources, the planned lookahead path is purely geometric. The inner loop is primarily based on the Artificial Potential Function (APF) controller and is responsible for generating steering and translational velocity commands for the robot, given a lookahead path. In order to follow the path, we choose a lookahead point on the path that is a constant distance away from the robot at all times. This point serves as the attractor for the APF controller and moves along the path as the robot navigates, until the next replan time instant or the end of the path. Since the controller directly generates the forward, vertical, and steering velocity commands, the nonholonomic constraints are respected during path following. The controller relies on direct sensor information from all the depth cameras in order to generate the repulsive velocity vector. This two-tier collision avoidance, i.e., map-based and map-less, plays an important role in improving navigation safety in practice by dealing with the uncertainties that arise from imperfect path following coupled with mapping and localization uncertainties during path planning. The next two sections provide further insight into the two planning layers.
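To make the path-following step concrete, the following minimal Python sketch illustrates one way to pick the lookahead point a constant distance ahead of the robot along a piecewise-linear path. This is an illustration only, not the onboard implementation; the waypoint representation and the lookahead distance are assumptions.

    import numpy as np

    def lookahead_point(path, robot_pos, lookahead_dist=1.5):
        """Return the point on a piecewise-linear path a fixed arc length ahead of
        the robot (illustrative helper; the distance value is a tunable example)."""
        path = np.asarray(path, dtype=float)
        # Start from the path vertex closest to the robot.
        i0 = int(np.argmin(np.linalg.norm(path - robot_pos, axis=1)))
        remaining = lookahead_dist
        for i in range(i0, len(path) - 1):
            seg = path[i + 1] - path[i]
            seg_len = float(np.linalg.norm(seg))
            if seg_len >= remaining:
                return path[i] + seg * (remaining / seg_len)
            remaining -= seg_len
        return path[-1]  # near the end of the path the attractor stops at the last waypoint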

III Global Path Planning

1:  path
2:  FindFrontierVoxels
3:  FilterFrontierContiguous
4:  SampleViewPoses
5:  ComputeFMM
6:  
7:  pathFollowGradient
Algorithm 1 Planner

III-A Preliminaries

It is assumed that the robot is able to actively build and maintain a consistent 3D voxelized grid cell representation of the environment using onboard state estimation and depth sensors. The map $\mathcal{M}$ is a set of voxelized grid cells $m_i$, defined as

(1)

Every voxel in the map corresponds to a physical position in the real environment. This position transform is defined as

$\mathbf{p}_i = r\, m_i + \mathbf{p}_0$   (2)

where $\mathbf{p}_i$ is the position of the voxel $m_i$, $r$ is the voxel size, and $\mathbf{p}_0$ is the map datum position.

Two functions are defined at each voxel position $\mathbf{p}_i$. The first is the voxel occupancy probability $P_{occ}(\mathbf{p}_i) \in [0, 1]$, with $0$ and $1$ referring to the voxel as free and occupied respectively. Unseen voxels begin with an occupancy probability of $0.5$. The second is the distance to the closest occupied cell, $d(\mathbf{p}_i)$. This is a Euclidean Signed Distance Field (ESDF), where negative values of $d(\mathbf{p}_i)$ are the distance to the closest free cell when $\mathbf{p}_i$ is occupied. In practice, we use Voxblox to maintain the ESDF [22].

For the purpose of global planning, we consider only positions within the map for online computation.

(3)

The space of safe robot configurations restricts the geometric position of the robot to all voxel cells that are at least a safety distance $d_{safe}$ away from the nearest occupied voxel. The safe voxel set is defined as

(4)

where the mapping in (4) takes the robot position in $\mathbb{R}^3$ to the nearest map voxel.
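As a concrete illustration of the grid bookkeeping above, the following helpers convert between world positions and voxel indices and check membership in the safe voxel set. This is a minimal sketch; the voxel size, datum, and safety distance values are placeholders, and the ESDF is assumed to be stored as a dense 3D array.

    import numpy as np

    VOXEL_SIZE = 0.2          # r: voxel edge length in meters (example value)
    MAP_DATUM = np.zeros(3)   # p0: world position of voxel index (0, 0, 0) (example value)
    D_SAFE = 0.6              # minimum allowed ESDF distance to the nearest occupied voxel (example)

    def voxel_of(position):
        """Nearest voxel index for a world position (the position-to-voxel mapping)."""
        return tuple(np.round((np.asarray(position) - MAP_DATUM) / VOXEL_SIZE).astype(int))

    def position_of(voxel_idx):
        """World position of a voxel center: p = r * index + p0, cf. (2)."""
        return VOXEL_SIZE * np.asarray(voxel_idx, dtype=float) + MAP_DATUM

    def is_safe(position, esdf):
        """A position is safe if its voxel's ESDF value is at least D_SAFE."""
        return esdf[voxel_of(position)] >= D_SAFE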

Fig. 2: A 2D diagram of the frontier goal pose sampling and planning approach on an occupancy grid. 1) Planning starts with dual occupancy-grid and ESDF maps. 2) Label frontier voxels as free cells adjacent to unseen cells. 3) Cluster, threshold, and greedily group frontier voxels. 4) Randomly sample goal poses that view frontiers. 5) Calculate cost to a subset of goal poses until the best is found, using the fast marching method. 6) Follow the wave front arrival time gradient to generate paths to goal poses and select the best path.

III-B Frontier

In order to direct the robot's navigation towards exploring the environment, we consider sampling goal poses that focus the robot's sensor field of view on the map's frontier. A frontier is the interface between the seen and unseen portions of the environment, i.e., the set of seen and free voxels neighboring at least one unseen voxel. More formally, the frontier is defined in (5) below,

$\mathcal{F} = \{\, m_i \in \mathcal{M}_{free} : N_{26}(m_i) \cap \mathcal{M}_{unseen} \neq \emptyset \,\}$   (5)

where $\mathcal{M}_{free}$ is the set of seen and free voxels, $\mathcal{M}_{unseen}$ is the set of unseen voxels, and $N_{26}(m_i)$ is the set of 26-neighboring voxels to the voxel $m_i$.
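The frontier test above maps directly onto array operations. The sketch below is an illustration, not the onboard code; it assumes the occupancy grid is stored as a dense array with the cell labels noted in the comments.

    import numpy as np
    from scipy import ndimage

    # Assumed cell labels for this sketch: 0 = seen and free, 1 = occupied, -1 = unseen.
    def find_frontier_voxels(grid):
        """Boolean mask of frontier voxels: seen-and-free cells with at least one
        unseen cell among their 26 neighbors."""
        unseen = (grid == -1)
        free = (grid == 0)
        kernel = np.ones((3, 3, 3), dtype=bool)  # 26-connectivity (center included, which is harmless)
        unseen_nearby = ndimage.binary_dilation(unseen, structure=kernel)
        return free & unseen_nearby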

In 2D and simple 3D environments, these voxels would focus the robot’s exploration quite well. However, in more complex and unstructured 3D environments, frontier voxels often show up in small orphaned groups due to sensor noise and occlusions. Focusing the robot’s attention on these voxels would waste valuable flight time on exploring very small portions of the missing map. In order to rectify this issue, we consider a contiguous cluster-based filtering method to pre-select larger regions of interest for the robot to explore.

III-B1 Contiguous Clustering

Frontier voxels are considered contiguous if the distance between their centers is less than or equal to $\sqrt{3}\,r$. This ensures that diagonally adjacent voxels are considered contiguous. The contiguous clustering procedure separates frontier voxels into connected clusters through a segmentation algorithm. Once the frontier voxels are grouped into their exclusive cluster sets, those above a size threshold are kept in the frontier subset for goal pose sampling. Clusters are obtained using the open-source Point Cloud Library (PCL) implementation of Euclidean Cluster Extraction [24].
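On a voxel grid, Euclidean clustering with a $\sqrt{3}\,r$ threshold is equivalent to 26-connected component labeling, so the same grouping and size filtering can be sketched without PCL as follows. This is an illustration only; the minimum cluster size is an assumed value.

    import numpy as np
    from scipy import ndimage

    def filter_frontier_contiguous(frontier_mask, min_cluster_size=20):
        """Group frontier voxels into 26-connected clusters and keep only clusters
        above a size threshold (a stand-in for PCL Euclidean Cluster Extraction)."""
        structure = np.ones((3, 3, 3), dtype=bool)       # 26-connectivity
        labels, num = ndimage.label(frontier_mask, structure=structure)
        keep = np.zeros_like(frontier_mask, dtype=bool)
        clusters = []
        for k in range(1, num + 1):
            cluster = (labels == k)
            if cluster.sum() >= min_cluster_size:
                clusters.append(np.argwhere(cluster))    # voxel indices in this cluster
                keep |= cluster
        return keep, clusters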

III-C Pose Sampling

1:  , Groups
2:  while  \Groups do
3:      Sample \Groups
4:     
5:     for  GetFrontierNeighbors do
6:        if  Groups then
7:           
8:        end if
9:     end for
10:     Groups
11:  end while
12:  for  Groups do
13:     while True do
14:         SamplePose
15:        if CheckAdmissible then
16:           if !CheckOcclusion then
17:              
18:              Break
19:           end if
20:        end if
21:     end while
22:  end for
Algorithm 2 SampleGoalPoses

The global planner samples goal poses relative to the clustered frontier using Algorithm 2. The remaining set of clustered frontier voxels is first grouped using a greedy radius-based procedure. Then, one goal pose is sampled per group. Goal poses are sampled relative to each group's centroid voxel using a random uniform distribution in polar coordinates, given in (6) below, where the elevation angle is bounded by half the robot's vertical field of view. A goal pose is admissible if it is far enough from an occupied voxel and its line of sight to the group centroid voxel is unoccluded. This technique was adapted from an underwater surface inspection algorithm [10].

(6)

Each sampled goal pose is assigned a corresponding information gain. The gain function used in this work is the number of frontier voxels in the sensor field of view at the sampled goal pose. This encourages planning to poses that explore a large amount of unseen space.
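The sketch below illustrates this sampling step: it draws a candidate pose around a group centroid in polar coordinates, checks admissibility against the ESDF and a coarse line-of-sight test, and scores the pose by the number of frontier voxels inside a simplified field-of-view wedge. All numeric parameters and the sensor-FOV model are assumptions, not the values used onboard.

    import numpy as np

    def sample_goal_pose(centroid, esdf, frontier_pts, rng, r_min=1.0, r_max=4.0,
                         vfov=np.radians(45.0), d_safe=0.6, step=0.2):
        """Sample one admissible goal pose near a frontier-group centroid.
        esdf(p) is assumed to be a callable returning the distance from point p
        to the nearest occupied voxel; frontier_pts is an (N, 3) array."""
        while True:
            # Uniform polar sample: range, azimuth, and elevation within half the vertical FOV.
            r = rng.uniform(r_min, r_max)
            az = rng.uniform(-np.pi, np.pi)
            el = rng.uniform(-vfov / 2.0, vfov / 2.0)
            p = centroid + r * np.array([np.cos(el) * np.cos(az),
                                         np.cos(el) * np.sin(az),
                                         np.sin(el)])
            # Admissible: far enough from obstacles and an unoccluded view of the centroid.
            if esdf(p) < d_safe or not line_of_sight(p, centroid, esdf, step):
                continue
            yaw = np.arctan2(centroid[1] - p[1], centroid[0] - p[0])  # face the frontier group
            gain = frontier_in_fov(p, yaw, frontier_pts)              # gain = visible frontier voxels
            return p, yaw, gain

    def line_of_sight(a, b, esdf, step):
        """Coarse occlusion check: sample points along the segment and require free space."""
        n = max(int(np.linalg.norm(b - a) / step), 1)
        return all(esdf(a + t * (b - a)) > 0.0 for t in np.linspace(0.0, 1.0, n + 1))

    def frontier_in_fov(p, yaw, frontier_pts, hfov=np.radians(90.0), max_range=5.0):
        """Count frontier voxels inside a simple horizontal FOV wedge (assumed sensor model)."""
        d = frontier_pts - p
        in_range = np.linalg.norm(d, axis=1) <= max_range
        ang = (np.arctan2(d[:, 1], d[:, 0]) - yaw + np.pi) % (2 * np.pi) - np.pi
        return int(np.sum(in_range & (np.abs(ang) <= hfov / 2.0)))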

III-D Fast Marching Method

The arrival time of a wave front, starting at the robot's current voxel location, to each voxel in the set of seen and free voxels is calculated using a fast approximation of the solution to the Eikonal equation given below [14, 19],

$v(m_i)\,\lvert \nabla T(m_i) \rvert = 1$   (7)

where $T(m_i)$ is the wave front arrival time at voxel $m_i$ (zero at the robot's current voxel) and $v(m_i)$ is the wave propagation speed at the voxel, given in the equation below

(8)

This speed function penalizes paths that travel close to obstacles in order to avoid collisions and maximize the sensor field of view along the path. The wave front propagation is run until the goal pose with the greatest utility is found. Given that the current voxel arrival time increases with the fast marching algorithm's run time, there is an upper bound on the maximum utility of any goal pose whose wave front arrival time has not yet been evaluated. Therefore, the fast marching calculation can terminate early once it satisfies the following stopping criterion:

(9)

This guarantees that the cost-to-go to the best goal pose has already been found without performing any extraneous computation. After selecting the goal pose with the highest utility, the 3D Sobel operator is used to follow the voxel gradient path of the wave front arrival time $T$ from the selected goal pose back to the robot's current position. The robot then uses this path to generate control commands.
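To illustrate the mechanics, the sketch below computes arrival times over the seen-and-free space with an off-the-shelf fast marching solver (scikit-fmm, used here purely as a stand-in for the solvers of [14][19]) and extracts a path by descending the arrival-time field. The speed function shown is an assumed saturated-ESDF form, not the paper's equation (8), and the neighbor descent is a simple substitute for the 3D Sobel gradient following.

    import numpy as np
    import skfmm  # scikit-fmm, an illustrative stand-in for the FMM solvers in [14][19]

    def compute_arrival_times(esdf, free_mask, robot_idx, voxel_size=0.2, d_max=2.0):
        """Wave-front arrival time from the robot voxel over the seen-and-free space.
        The speed map is a saturated function of the ESDF (assumed form), so motion
        near obstacles is slow and paths are pushed toward open space."""
        speed = np.clip(esdf / d_max, 0.05, 1.0)
        phi = np.ones_like(esdf)
        phi[robot_idx] = -1.0                           # wave source (zero contour) at the robot voxel
        phi = np.ma.MaskedArray(phi, mask=~free_mask)   # exclude occupied and unseen voxels
        return skfmm.travel_time(phi, speed, dx=voxel_size)

    def follow_arrival_gradient(arrival_time, goal_idx, max_iters=100000):
        """Descend the arrival-time field from the goal back to the robot, returning
        the voxel path in robot-to-goal order (simple 26-neighbor descent)."""
        T = np.ma.filled(arrival_time, np.inf)
        path = [tuple(goal_idx)]
        for _ in range(max_iters):
            i, j, k = path[-1]
            nbrs = [(i + di, j + dj, k + dk)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1) for dk in (-1, 0, 1)
                    if (di, dj, dk) != (0, 0, 0)]
            nbrs = [n for n in nbrs if all(0 <= n[d] < T.shape[d] for d in range(3))]
            best = min(nbrs, key=lambda n: T[n])
            if T[best] >= T[path[-1]]:
                break                                   # reached the minimum, i.e., the robot voxel
            path.append(best)
        return path[::-1]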

Fig. 3: Exploration performance of the global planning algorithm in a simulated 40 m x 40 m x 3 m Unreal Engine 3D maze environment. The UAV was constrained in its maximum flight speed and turn rate, and had a limited sensor FoV and range. The left figure shows the path traveled by the simulated UAV. The UAV tends to center itself in the maze corridors because of the ESDF speed map, which leads to efficient flight paths and large sensor fields of view. The right figure shows the mean exploration volume ratio vs. time, with bounds across the simulation runs.

Paths produced by following the wave arrival time map gradient to each frontier goal pose are not only obstacle-aware, but they also tend to produce en-route poses that reduce depth sensor occlusion over the length of the path. This results in a greater overall exploration rate because the sensor field of view is implicitly maximized by penalizing obstacle proximity when calculating goal pose costs.

The global planning algorithm was validated in the same simulated environment as that of [25], which utilizes a UAV with a single RGBD sensor. The exploration ratio across the simulation runs is shown in Fig. 3, along with the path taken by the UAV for one of the runs. The planner was able to explore the environment in a matter of minutes during each run.

IV Local Reactive Control

The goal of the reactive control problem is to navigate the UAV along the global path safely while respecting the vehicle kinematics. This planning layer ensures that the robot is repelled away from obstacles that would not otherwise be avoided due to imperfect localization, mapping, and path following. To this end, we exploit the APF technique using direct sensor information from the forward-, upward- and downward-facing depth cameras onboard the UAV. The repulsive potential acting on the robot is the sum of the repulsive potentials from the individual cameras transformed to the robot's body frame.

We assume that any 3D point $(x, y, z)$ in a depth camera's frame can be projected to its image coordinates $(u, v)$ using the pinhole camera model as

$u = f\,\dfrac{x}{z} + c_u, \qquad v = f\,\dfrac{y}{z} + c_v$   (10)

where $f$ defines the focal length of the camera lens and $(c_u, c_v)$ is the location of the optical center of the camera in the image coordinates. It is important to note that the number of depth image points representing a scene object increases as the inverse square of the depth of the object for a pinhole camera. For instance, a rectangular obstacle of length $l$ and height $h$ located at depth $z$ parallel to the camera's optical frame has a cross-sectional area of $lh$. The number of pixels it occupies inside the image can be computed as the occupied image area $f^2 lh / z^2$. However, we would like the repulsive potential function to depend on the object's distance and not on the number of points it occupies inside the image. Therefore, it is not suitable to simply sum the conventional APF repulsive potential gradient vectors generated by all the image points.
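A quick numeric check of this inverse-square relationship (the focal length and obstacle dimensions below are arbitrary example values):

    def occupied_image_area(l, h, z, f=380.0):
        """Image-plane area of an l-by-h rectangle facing the camera at depth z under
        the pinhole model: each dimension scales by f/z, so the area scales as 1/z^2."""
        return (f * l / z) * (f * h / z)   # = f^2 * l * h / z^2

    print(occupied_image_area(0.5, 0.3, 1.0))  # 21660.0 pixels at 1 m depth
    print(occupied_image_area(0.5, 0.3, 2.0))  # 5415.0 pixels at 2 m: doubling depth quarters the count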

Given a pointcloud of size $N$ from a camera, we calculate the depth image-based repulsive potential gradient vector using a slightly modified form as

(11)

where $k$ is a positive constant, $d_j$ is the depth of the $j$th pixel, and $d_{max}$ refers to the maximum sensing horizon. By using (11), all the points inside a depth image are implicitly treated as a single small obstacle exerting one repulsive potential, regardless of the number of depth image points. Assuming the small rectangular obstacle and a camera of infinite resolution, the depth image-based repulsive potential gradient vector (11) reduces to

(12)

integrating which with respect to the depth yields the repulsive potential function in its conventional form [18], as if it were generated by a single obstacle.
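Since the exact form of (11) did not survive in this copy, the sketch below shows one plausible realization of the idea described above: per-pixel repulsive contributions are averaged so that the resulting vector depends on obstacle distance rather than on how many pixels the obstacle covers. The weighting is an assumption, not the paper's equation; camera intrinsics are passed in explicitly.

    import numpy as np

    def repulsive_gradient(depth, fx, fy, cx, cy, k_rep=1.0, d_max=4.0):
        """Repulsive potential gradient vector from one depth image, expressed in the
        camera frame. Averaging over valid pixels keeps the magnitude a function of
        obstacle distance, not pixel count (illustrative form only)."""
        v_idx, u_idx = np.indices(depth.shape)
        valid = np.isfinite(depth) & (depth > 0.0) & (depth < d_max)
        if not np.any(valid):
            return np.zeros(3)
        d = depth[valid]
        # Back-project valid pixels into the camera frame using the pinhole model (10).
        pts = np.stack([(u_idx[valid] - cx) / fx * d,
                        (v_idx[valid] - cy) / fy * d,
                        d], axis=1)
        dist = np.linalg.norm(pts, axis=1)
        dirs = pts / dist[:, None]                      # unit vectors from the camera to the points
        mag = k_rep * (1.0 / dist - 1.0 / d_max)        # conventional APF magnitude per point [18]
        return -(dirs * mag[:, None]).mean(axis=0)      # average and push away from the obstacle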

The global planner generates a lookahead point that continually updates to be a constant distance away from the robot. The attractive potential gradient vector is applied to the robot in the direction of the lookahead point at all times. Consequently, the resultant potential gradient vector acting on the UAV is given as

(13)

where the repulsive term represents the sum of the individual repulsive potential gradient vectors transformed to the body frame. This term regulates the speed of the UAV depending upon the level of clutter around it. A user-defined threshold can be set based on the desired aggressiveness of the robot in reaction to nearby obstacles. The velocity commands for the UAV are finally generated to follow the resultant 3D vector as

(14)

where the forward, vertical, and steering velocity commands are expressed in the body frame, and each is bounded by a maximum velocity for its axis.
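Because (13) and (14) are missing from this copy, the sketch below shows one way the mapping could look: the attractive and repulsive vectors are combined in the body frame and the resultant is converted to forward, vertical, and yaw-rate commands with per-axis saturation. Gains, limits, and the yaw mapping are assumptions, not the onboard values.

    import numpy as np

    def velocity_commands(lookahead_body, repulse_body, k_att=1.0,
                          v_fwd_max=1.0, v_z_max=0.5, yaw_rate_max=0.8):
        """Map the resultant potential gradient vector, expressed in the body frame,
        to unicycle-style commands: forward velocity, vertical velocity, and yaw rate."""
        direction = lookahead_body / (np.linalg.norm(lookahead_body) + 1e-6)
        resultant = k_att * direction + repulse_body          # attractive + repulsive, cf. (13)
        # Steer toward the resultant's heading; only the body x-axis produces forward motion.
        yaw_rate = np.clip(np.arctan2(resultant[1], resultant[0]), -yaw_rate_max, yaw_rate_max)
        v_fwd = np.clip(resultant[0], 0.0, v_fwd_max)          # nonholonomic: no sideways command
        v_z = np.clip(resultant[2], -v_z_max, v_z_max)
        return v_fwd, v_z, yaw_rate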

V Experimental Evaluation

To evaluate the performance of the proposed exploration and control strategies, experiments were conducted on a UAV platform, as shown in Fig. 4. The demands of real-time operation, along with size, weight, and power considerations, limit the computation available for autonomy.

V-A Aerial Robot Platform

The UAV control system begins with the Pixracer R15, a PX4-based flight controller that provides attitude stabilization and velocity control. For sensing, an Ouster OS-1-64 LiDAR and a Lord Microstrain 3DM-GX-15 IMU support localization and mapping, while a forward-facing Intel RealSense D435i, as well as upward-facing and downward-facing Camboard pico flexx depth cameras, are utilized for obstacle avoidance. Several open-source software packages, including Google Cartographer [16] for LiDAR-inertial SLAM, Voxblox [22] for ESDF mapping, and OctoMap [17] for mapping visualization, provide input to the proposed global and local planners, which all operate onboard an Intel NUC 7i7DNBE computer.

Fig. 4: The quadrotor UAV platform used for the hardware experiments.

V-B Local Control Demonstration

Worst-case local control scenarios occur when mapping fails to capture obstacles in the environment and frontier-based exploration plans an infeasible path that would collide with an undetected obstacle. In particular, small or thin obstacles, such as cabling and wiring, are less likely to be captured in the map. More generally, localization, mapping, and path-following errors can also lead to close encounters or collisions. In scenarios such as these, it is the responsibility of the local planner to rapidly sense the proximity of obstacles and ensure an evasive maneuver is taken.

To evaluate the performance of the local controller, we placed obstacles of different shapes and sizes in our lab and let the UAV sense and avoid them. Manual goal points are placed behind obstacles, causing the attractive potential field to pull the UAV directly towards the obstacle. Once the agent senses the object’s relative proximity, the repulsive potentials push the vehicle in the opposing direction. The resulting trajectories and repulsive potential gradient vectors from a single-obstacle scenario are presented in Fig. 7. In all evaluations, the competing attractive and repulsive potentials cause the agent to decelerate and steer away from the obstacle, before continuing towards the goal point. A second, more complex scenario is visualized in Fig. 10, which has been designed to test the controller’s ability to sense and avoid thin obstacles, as well as near-field objects underneath the vehicle.

Fig. 7: Results from single-obstacle local planner flight tests. The black circles represent the UAV's physical bounds and the rounded cyan square represents the obstacle. The black lines show the path followed by the UAV during each of the maneuvers, with red arrows representing the repulsive potential gradient vectors. The UAV starts from the respective start locations for each of the runs and maneuvers towards goal points that are (a) 2 m and (b) 5 m behind the obstacle. The approximate obstacle location is overlaid on the plots.
Fig. 10: The local planner triggers the repulsive potentials when the UAV is within close proximity of a narrow 22 mm wide pole and an 80 cm tall, vertically-prominent box. The response is shown from (a) a side view and (b) a top-down view. The traversed trajectory, the spherical vehicle bounds, and the repulsive potential gradient vectors are represented as the black lines, the gray spheres, and the red arrows respectively. The approximate locations and geometries of the obstacles are shown by the solid teal objects.

V-C Exploration Demonstration

This experimental scenario is set in a 100 m x 80 m x 6 m warehouse which houses manufacturing operations. Unlike a typical structured warehouse, this environment is highly cluttered and heterogeneous. Many different activities take place on the manufacturing floor, leading to an object diversity that likely ranges in the hundreds or thousands. Looking up, steel-truss ceilings support the roof, featuring exposed sprinkler, ductwork, and conduit networks, as well as light fixtures and ceiling-to-floor electrical cabling. For a UAV navigating in such an environment, even a single close contact with an obstacle, for instance an electrical cable, can prove detrimental to the safety of the exploration mission.

This demanding environment provided an opportunity to evaluate the full proposed solution: frontier-based exploration with potential-field based local control. Given the size of the space, three independent flight tests were conducted in various regions of the warehouse, the results of which are shown in Fig. 17. Clutter in the environment creates occlusions, requiring the agent to precisely modulate position and heading within confined areas to reach its goal pose and continue exploration. A large vertical maneuver is highlighted in Fig. 17e, where the vehicle passes through a narrow slot before flying up towards an unexplored region. With many thin obstacles in the environment, it is likely that some will go undetected during map generation and path planning. Two instances of the local controller recovering the agent from near-collisions are shown in Fig. 17d and Fig. 17f. To provide a sense of the speed of exploration, Fig. 18 captures the map and trajectory at different moments during the third mission. For a more quantitative understanding of the rate of exploration, Fig. 19 presents the volume of map explored as a function of mission time.

The results shared herein validate that the proposed solution is capable of exploring real-world environments that are highly cluttered and unstructured. The interplay between the frontier-based exploration and the potential-based control proved effective in preventing collisions with a variety of obstacles and enabling continued exploration of confined spaces.

Fig. 17: Maps and trajectories (a, b, c) from three exploration missions in an unstructured warehouse environment, with detailed visualizations and first-person-view images during salient maneuvers (d, e, f). UAV trajectories are shown as black lines, repulsive potential gradient vectors as red arrows, and resulting positions as axes. During the third field deployment (c), the UAV (d) maneuvered through a tight space by flying above a lathe machining tool while remaining below low-hanging fluorescent light fixtures. A safety shield protruding up from the lathe was likely difficult to map due to its thinness, but the local planner sensed its proximity and reactively pushed the vehicle away from it. Soon after, (e) the global planner led the vehicle away from that highly-cluttered area by sending it through a narrow slot and up to an open, unexplored area. The UAV returns to the machine area in (f), again flying directly above the lathe, and is pushed up by the repulsive potential field. For explicit visualization of the occupancy grid, the OctoMaps with 0.2 m leaf size are shown. The ceiling has been removed to provide visibility of the building interior. The distances travelled by the UAV in the three deployments are (a) 152.5 m, (b) 118.0 m, and (c) 304.08 m.
Fig. 18: Time progression of an exploration mission in an unstructured warehouse environment. For explicit visualization of the occupancy grid, the OctoMap with 0.2 m leaf size is shown. The ceiling has been removed to provide visibility of the building interior.

VI Conclusion

This paper proposed a two-tier planning approach for long-range UAV navigation in highly complex indoor environments. Efficient traversal towards unexplored map frontiers is achieved using fast marching to perform multi-goal cost-to-go calculations over a Euclidean Signed Distance Field (ESDF) that is generated incrementally on the robot using Voxblox. In many cluttered indoor environments, a single type of perception scheme is not necessarily sufficient to react to the diverse set of obstacles under uncertainty. In order to ensure reliable and safe UAV operations in such scenarios, we extended the Artificial Potential Function (APF) approach to instantaneous, high-resolution depth images to serve as the path-following controller. A thorough evaluation of the planning architecture was performed through real-world field deployments in a structurally complex warehouse, where robustness of the perception and planning solution is of paramount importance, along with exploration efficiency, due to the extreme levels of clutter.

A video of the field deployments is posted at https://youtu.be/UKipgkjVM4k.

Fig. 19: Volume explored by the UAV as a function of time for all three deployments.

Acknowledgment

This work was supported through the DARPA Subterranean Challenge, cooperative agreement number HR0011-18-2-0043. The authors would like to thank Jeff Popiel, the CEO of Geotech Environmental Equipment, for accommodating our request to conduct autonomous deployments at the warehouse. We would also like to thank Zoe Turin from the Bio-Inspired Perception and Robotics Lab at CU Boulder for her help with the hardware and experiments.

References

  • [1] S. Ahmad and R. Fierro (2019) Real-time quadrotor navigation through planning in depth space in unstructured environments. In 2019 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1467–1476. Cited by: §I.
  • [2] S. Ahmad, Z. N. Sunberg, and J. S. Humbert (2021) APF-PF: probabilistic depth perception for 3D reactive obstacle avoidance. In 2021 American Control Conference (ACC), pp. 32–39. External Links: Document Cited by: §I.
  • [3] A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart (2016) Receding horizon "next-best-view" planner for 3D exploration. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1462–1468. Cited by: §I.
  • [4] J. Conroy, G. Gremillion, B. Ranganathan, and J. S. Humbert (2009) Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Autonomous robots 27 (3), pp. 189. Cited by: §I.
  • [5] T. Dang, S. Khattak, F. Mascarich, and K. Alexis (2019) Explore locally, plan globally: a path planning framework for autonomous robotic exploration in subterranean environments. In 2019 19th International Conference on Advanced Robotics (ICAR), pp. 9–16. Cited by: §I.
  • [6] T. Dang, M. Tranzatto, S. Khattak, F. Mascarich, K. Alexis, and M. Hutter (2020) Graph-based subterranean exploration path planning using aerial and legged robots. Journal of Field Robotics 37 (8), pp. 1363–1388. Cited by: §I.
  • [7] DARPA (2020)(Website) External Links: Link Cited by: §I.
  • [8] M. Dharmadhikari, T. Dang, L. Solanka, J. Loje, H. Nguyen, N. Khedekar, and K. Alexis (2020) Motion primitives-based path planning for fast and agile exploration using aerial robots. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 179–185. Cited by: §I, §I.
  • [9] G. Dubey, S. Arora, and S. Scherer (2017) Droan—disparity-space representation for obstacle avoidance. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1324–1330. Cited by: §I.
  • [10] B. Englot and F. S. Hover (2013) Three-dimensional coverage planning for an underwater inspection robot. The International Journal of Robotics Research 32 (9-10), pp. 1048–1073. Cited by: §III-C.
  • [11] A. T. Fragoso, C. Cigla, R. Brockers, and L. H. Matthies (2018) Dynamically feasible motion planning for micro air vehicles using an egocylinder. In Field and Service Robotics, pp. 433–447. Cited by: §I.
  • [12] L. Freda and G. Oriolo (2005) Frontier-based probabilistic strategies for sensor-based exploration. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 3881–3887. Cited by: §I.
  • [13] H. H. González-Banos and J. Latombe (2002) Navigation strategies for exploring indoor environments. The International Journal of Robotics Research 21 (10-11), pp. 829–848. Cited by: §I.
  • [14] M. S. Hassouna and A. A. Farag (2007) Multistencils fast marching methods: a highly accurate solution to the eikonal equation on cartesian domains. IEEE transactions on pattern analysis and machine intelligence 29 (9), pp. 1563–1574. Cited by: §III-D.
  • [15] L. Heng, A. Gotovos, A. Krause, and M. Pollefeys (2015) Efficient visual exploration and coverage with a micro aerial vehicle in unknown environments. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1071–1078. Cited by: §I.
  • [16] W. Hess, D. Kohler, H. Rapp, and D. Andor (2016) Real-time loop closure in 2d lidar slam. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1271–1278. Cited by: §V-A.
  • [17] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. Note: Software available at http://octomap.github.com External Links: Link, Document Cited by: §I, §V-A.
  • [18] O. Khatib (1986) Real-time obstacle avoidance for manipulators and mobile robots. In Autonomous robot vehicles, pp. 396–404. Cited by: §I, §IV.
  • [19] D. Kroon (2020) Accurate fast marching. External Links: Link Cited by: §III-D.
  • [20] L. Matthies, R. Brockers, Y. Kuwata, and S. Weiss (2014) Stereo vision-based obstacle avoidance for micro air vehicles using disparity space. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 3242–3249. Cited by: §I.
  • [21] M. T. Ohradzansky, E. R. Rush, D. G. Riley, A. B. Mills, S. Ahmad, S. McGuire, H. Biggie, K. Harlow, M. J. Miles, E. W. Frew, C. Heckman, and J. S. Humbert (2021) Multi-agent autonomy: advancements and challenges in subterranean exploration. Journal of Field Robotics. Cited by: §I.
  • [22] H. Oleynikova, Z. Taylor, M. Fehr, R. Siegwart, and J. Nieto (2017) Voxblox: incremental 3d euclidean signed distance fields for on-board mav planning. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §I, §III-A, §V-A.
  • [23] R. Reinhart, T. Dang, E. Hand, C. Papachristos, and K. Alexis (2020) Learning-based path planning for autonomous exploration of subterranean environments. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I.
  • [24] R. B. Rusu and S. Cousins (2011-May 9-13) 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China. Cited by: §III-B1.
  • [25] L. Schmid, M. Pantic, R. Khanna, L. Ott, R. Siegwart, and J. Nieto (2020) An efficient sampling-based method for online informative path planning in unknown environments. IEEE Robotics and Automation Letters 5 (2), pp. 1500–1507. Cited by: §III-D.
  • [26] B. Yamauchi, A. Schultz, and W. Adams (1998) Mobile robot exploration and map-building with continuous localization. In Proceedings. 1998 IEEE International Conference on Robotics and Automation (Cat. No. 98CH36146), Vol. 4, pp. 3715–3720. Cited by: §I.
  • [27] B. Yamauchi (1997) A frontier-based approach for autonomous exploration. In Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97.’Towards New Computational Principles for Robotics and Automation’, pp. 146–151. Cited by: §I.
  • [28] B. Yamauchi (1998) Frontier-based exploration using multiple robots. In Proceedings of the second international conference on Autonomous agents, pp. 47–53. Cited by: §I.