iRotate: Active Visual SLAM for Omnidirectional Robots

03/22/2021
by Elia Bonetto et al., Max Planck Society

In this letter, we present an active visual SLAM approach for omnidirectional robots. The goal is to generate control commands that allow such a robot to simultaneously localize itself and map an unknown environment while maximizing the amount of information gained and consuming as little energy as possible. Leveraging the robot's independent translation and rotation control, we introduce a multi-layered approach for active V-SLAM. The top layer decides on informative goal locations and generates highly informative paths to them. The second and third layers actively re-plan and execute the path, exploiting the continuously updated map. Moreover, they allow the robot to maximize its visibility of 3D visual features in the environment. Through rigorous simulations, real robot experiments and comparisons with state-of-the-art methods, we demonstrate that our approach achieves similar coverage and lower overall map entropy while keeping the traversed distance up to 36% shorter than that of the other methods. Code and implementation details are provided as open-source.



I Introduction

Mobile robots that assist humans in everyday tasks are increasingly popular, both in workplaces and homes. Self-localization and environment mapping are the key building-block state-estimation functionalities of such robots, which enable robustness in higher-level task performance, such as navigation, manipulation and interaction with humans. Most often, these robots operate in a previously-unknown environment, e.g., in a person’s house. To develop the aforementioned state-estimation functionalities, a popular approach is V-SLAM – visual simultaneous localization and mapping using on-board RGB cameras [1].

Many V-SLAM methods are passive processes, e.g., [2, 3], in which the robots are required to follow external control inputs and use their sensors only to build the map. Active V-SLAM, on the other hand, refers to those V-SLAM methods in which a robot makes ‘on-the-fly’ decisions regarding its own movement actions, such that both the map estimate and the higher-level tasks can benefit from those actions. State-of-the-art methods for active V-SLAM are typically exploration-oriented, i.e., they focus on mapping the environment faster while trying to keep the map uncertainty low [1]. This is done by first choosing a goal point on the map from a set of possible goals, deciding among them based on which one yields the most informative path, and then executing the chosen path.

Our novel insight in this work is that during path execution the robot can continuously and actively select its camera heading to maximize environment coverage, without jeopardizing its 2D position. This leverages the omnidirectional nature of the robot, whose position and orientation can be controlled independently. Coverage includes both the observation of previously unseen areas of the map and map refinement. By performing active coverage, the robot can substantially speed up information acquisition and quickly lower the overall map uncertainty. Intuitively, this can significantly reduce the total distance the robot has to travel to map the environment, as also evidenced by our rigorous experimental results. Total travel distance translates to energy consumption and autonomy time for various robots, which is critical for many applications.

Fig. 1: Our omnidirectional robot actively mapping an office reception area.

Based on the above insight, the Active V-SLAM method we propose in this paper has the following novel contributions, each of which adds a layer of activeness.


  • In the exploration stage, the computation of the optimal path to a chosen frontier-point goal (in our case, the goal set consists of frontier points) takes into account the possible heading directions at all waypoints on the path in order to maximize the path utility. This way we exploit all the map information available up to that time instant.

  • While executing the chosen optimal path, an intermediate refinement is done every time a waypoint is reached. In this, we re-compute the next waypoint’s optimal heading based on the updated occupancy map. This allows us to account for dynamic variations in the map entropy and new information which, for example, loop closures or the robot’s own movements might have brought between the time of reaching that waypoint and the time of the previous frontier point selection.

  • Based on the real-time distribution of 3D features, we continuously control the heading of the camera. This enables us to incorporate the latest information acquired by the robot, which was unavailable during both of the previous steps.

Thus, our V-SLAM approach is active throughout the mapping process, rather than only at fixed intervals of frontier-point selection. Furthermore, we also present (and experimentally compare) variations of the classical utility functions, based on the Shannon entropy of the map, to account for the presence of obstacles, weighted-average utilities and the balance between the exploration and coverage behaviors of the robot. We present our results and comparisons both in simulation and on a real robot. The latter is a Festo Didactic Robotino, operating in an office environment (Fig. 1 and 2). It has an omnidirectional drive system and an on-board RGB-D camera, in addition to on-board computational units. Our method’s source code and instructions for use are available open-source (https://github.com/eliabntt/active_v_slam).

The rest of the paper is structured as follows. In Sec. II we review the current state-of-the-art in active SLAM. Our method is described thoroughly in Sec. III, while the experimental results are presented in Sec. IV. In Sec. V we conclude with comments on future work.

II Related Work

Active SLAM algorithms address the exploration/exploitation dilemma in SLAM [4, 5], i.e., finding a balance between exploring new regions and exploiting previously acquired information to reduce overall map uncertainty. Most active SLAM algorithms consist of the following workflow: i) selection of candidate goals, ii) path generation (a set of waypoints) for those goals, iii) computation of path utilities, and iv) executing/tracking the maximum-utility path. Waypoints may either be reachable robot positions or full poses, including both position and heading. We further describe the state-of-the-art for these components.

Candidate goals are usually found along frontiers [6, 7, 4, 8]. Other approaches, like [9] and [10], rely on persistent RRT-like structures, while in [11, 12] the robot chooses the next set of actions based on the current global state.

Computing utilities for all possible paths to a goal is usually an intractable problem, and many approximations are used. In [13], after running a local RRT generation, the authors select the first node of the best path and use it to simulate IMU trajectories and a parallel robust visual-inertial odometry system to predict pose covariances. In [10], the authors use a series of approximations to take loop closures into account, as opposed to [4], where a simulated iSAM is used to predict uncertainties. Another approach is to use heuristics and thresholds to select the action to take, with local planners used to execute it; e.g., in [12] attractors are placed manually in the environment and an MPC is employed for local planning. Similarly, [11] uses a set of approximations to make the non-convex MPC problem convex and solves it to quickly compute sub-optimal maximum-utility paths. However, as shown in [14, 1], an accurate method to simulate uncertainty evolution and loop closures remains to be found. In our approach, we overcome the issue of simulating these by i) planning and choosing the most informative global path using the available map information, and ii) executing that path with continuous local re-planning (the second and third novelties mentioned in Sec. I). This allows us to utilize, in real time, the information acquired by the robot along the path.

The utility of the global paths is usually computed as a linear combination of several metrics [1]. These metrics usually comprise the consistency of the map, the path entropy and the robot’s uncertainty. The balance between exploration and exploitation is either performed implicitly within the utility formulation [4] or based on hand-crafted heuristics and threshold values [12, 11]. In our approach, for the utility computation of the global paths, we use the map entropy at a given location as the measure of information gain and the distance between that location and the starting position of the robot as a discount factor, similar to [13]. We show that doing so is indeed useful for both exploration speed and energy consumption. Moreover, to compute the path utility, we not only consider the final waypoint, as in [9, 8], but also include the contribution of each waypoint on the path. Even if the final waypoint embeds a higher information gain, considering it alone ignores the process that brings the robot to that waypoint. To address this, we use a weighted-average scheme over each path to compute its utility: every waypoint’s contribution is exponentially discounted with respect to its distance from the starting point.

III Proposed Approach

III-A Problem Description and Notations

Let there be an omnidirectional robot, traversing on a 2D plane in a 3D environment, with a given pose and velocity. Let the robot be equipped with an RGB-D camera with an associated maximum sensing distance and horizontal field of view (FOV), a 2D laser range finder, and an IMU. Let the environment be represented by a bounded 2D grid map, where each cell has an occupancy probability. We assume that the robot begins exploration from a collision-free state and that all map cells initially belong to the unknown space. The robot’s goal is to quickly, efficiently (i.e., with minimal energy consumption or travelled distance) and autonomously map all the observable cells as free or occupied. The problem is considered solved when the explored space covers the whole map except the space that cannot be mapped (for example, unreachable locations). Some cells of this map are further identified as frontiers. These are cells located at the boundaries between free and unknown space, in regions large enough for the robot to traverse. At any given time instant, the robot’s state, the map state, the set of observed visual features, and the graph of previously visited locations generated by the V-SLAM back-end are available to the robot. A graph node is defined by its coordinates in the map frame and by the orientation of the robot. While solving this problem, the robot must also avoid both static and dynamic obstacles. Global and local path planning have to be performed without any a priori knowledge of the environment or of any landmarks in it.

We propose an active SLAM approach to solve the above-described mapping problem. The crux of our approach is to explore the environment by generating and executing paths that maximize information acquisition by the robot’s sensors. More precisely, at a given instant the robot i) identifies a suitable exploration goal position, ii) based on the information acquired up to that instant, generates a collision-free path, consisting of contiguous poses, with the maximum possible future information gain, and iii) executes the path by continuously re-planning its heading direction to incorporate all information acquired along the way and keep as many visual features as possible in its FOV. We leverage the omnidirectional nature of the robot platform, i.e., the position and orientation of the robot can be controlled independently. In this work, we use Shannon entropy as a measure of map uncertainty. The map entropy is therefore given by

H(m) = Σ_{c ∈ m} H(c)    (1)

where H(c) = -[ p_c log p_c + (1 - p_c) log(1 - p_c) ] is the Shannon entropy of a map cell c, and p_c represents its occupancy probability; H(c) = 0 if p_c ∈ {0, 1}.
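The per-cell Shannon entropy summed over the grid can be sketched numerically as follows (a minimal sketch; the array representation and function name are illustrative, not the paper's implementation):

```python
import numpy as np

def map_entropy(occupancy):
    """Shannon entropy of an occupancy grid, summed over all cells.

    `occupancy` holds per-cell occupancy probabilities in [0, 1]; fully
    known cells (p equal to 0 or 1) contribute zero entropy.
    """
    p = np.asarray(occupancy, dtype=float)
    h = np.zeros_like(p)
    mask = (p > 0.0) & (p < 1.0)
    q = p[mask]
    h[mask] = -(q * np.log2(q) + (1.0 - q) * np.log2(1.0 - q))
    return float(h.sum())
```

A fully mapped grid thus has zero entropy, while every unknown cell (p = 0.5) contributes one bit.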

We further describe our goal selection approach in Sec. III-B and the path utility formulation based on Shannon entropy in Sec. III-C. Utilities form the basis for both global path planning (Sec. III-D) and path execution with local re-planning (Sec. III-E). In Sec. III-F, we outline the finite state machine that coordinates the proposed active SLAM approach.

III-B Goal selection

Reachable frontier map cells are defined as those which have no obstacles in the neighbouring cells and belong to a region big enough for the robot to traverse. We use the frontier extraction algorithm from [15], modifying it in the following way. At first, frontier clusters are identified. Next, candidate goal locations are selected at the centroids of those clusters. If a centroid is not reachable, a greedy search is performed in its proximity to identify the next reachable frontier cell in that cluster. These are then marked as candidate goals. Further, if any of the candidate goals has been a candidate in the near past, it is discarded, assuming that it was unreachable during previous attempts. This is followed by the process of global path generation (Sec. III-D) to the candidate goals, and utility computation of those paths (Sec. III-C). The candidate goal with the maximum-utility path is then chosen as the actual goal position. If, for some reason, frontiers are not available, the robot either performs a complete in-place rotation or tries to move to a randomly picked, previously visited location.
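A sketch of this selection step might look as follows; `reachable` and the cluster lists are hypothetical stand-ins for the output of the frontier-extraction module described above:

```python
import numpy as np

def candidate_goals(frontier_clusters, reachable, recent_goals):
    """Pick one candidate goal per frontier cluster.

    frontier_clusters: list of (N_i, 2) arrays of frontier cell coordinates.
    reachable: predicate (x, y) -> bool on map cells (hypothetical helper).
    recent_goals: goals attempted in the near past; these are discarded.
    """
    goals = []
    for cluster in frontier_clusters:
        cluster = np.asarray(cluster, dtype=float)
        centroid = tuple(cluster.mean(axis=0))
        if reachable(*centroid):
            goal = centroid
        else:
            # Greedy fallback: nearest reachable frontier cell in the cluster.
            order = np.argsort(np.linalg.norm(cluster - centroid, axis=1))
            goal = next((tuple(cluster[i]) for i in order
                         if reachable(*cluster[i])), None)
        if goal is not None and goal not in recent_goals:
            goals.append(goal)
    return goals
```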

III-C Utilities

We use utility as a measure of information gain for an attainable robot pose (including both position and orientation) on the map that lies along a path leading to the goal location. Given a pose, we obtain the set of map cells that are visible from it. A cell is deemed visible if there exists a ray from the cell to the queried pose that does not intercept any obstacle and lies within the robot camera’s FOV, hypothesizing the robot at that pose. With every visible cell we associate a utility value. The total utility of a pose along a path is given as

U(q) = Σ_{c ∈ C_q} u(c)    (2)

where C_q is the set of cells visible from pose q and u(c) is the per-cell utility value.

We propose three different ways to compute the per-cell utility, which are also experimentally compared in Sec. IV. The first way is to define it as the plain Shannon entropy of the cell. In the second way, we take the presence of obstacles directly into account in the utility computation:

(3)

where a constant (set to a fixed value in our experiments) weighs the probability of a cell being an obstacle. A utility of this form favors the selection of poses from which obstacles are visible. Visual features usually lie where there are obstacles that create contrast, so favoring robot headings towards obstacles improves the likelihood of observing more features. Empty areas, on the other hand, are usually easy to map. Furthermore, the correct representation of the obstacle cells influences the final map quality and navigability.
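Since the inline formulas are garbled in this copy, the following sketch assumes a simple additive obstacle bonus for the second utility; the constant `lam` and the exact functional form are assumptions, not the paper's verbatim definition:

```python
import math

def cell_entropy(p):
    """Shannon entropy of a single occupancy cell (zero when p is 0 or 1)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def utility_plain(p):
    """First utility: the plain Shannon entropy of the cell."""
    return cell_entropy(p)

def utility_obstacle(p, lam=1.0):
    """Second utility: entropy plus a term rewarding likely-obstacle cells,
    so poses that look at obstacles (where visual features lie) score higher.
    The additive form and `lam` are assumptions."""
    return cell_entropy(p) + lam * p
```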

The third proposed way of computing the utility is based on the following idea. While moving towards a frontier point, the robot could prioritize the re-observation of previously explored cells instead of seeking out only new areas. To this end, we first define

(4)

as our weighting function, which is itself a function of the distance between the queried pose and the final goal pose, given as

(5)

Two constants act as parameters to tune the re-observation priorities (set to fixed values in our experiments). Using this weighting function, the third utility is given as

(6)
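With equations (4)-(6) garbled in this copy, the sketch below assumes a logistic weight on the distance to the goal; the shape and constants are illustrative assumptions that merely reproduce the described behavior (re-observation favored while the goal is still far, new cells favored near it):

```python
import math

def reobservation_weight(d, a=1.0, b=2.0):
    """Distance-based weight w(d) in (0, 1). The paper's exact form and
    constants are elided in this copy, so a logistic shape is assumed:
    w -> 1 far from the goal, w -> 0 close to it."""
    return 1.0 / (1.0 + math.exp(-a * (d - b)))

def utility_distance_weighted(h_cell, previously_seen, d, a=1.0, b=2.0):
    """Third utility (assumed form): entropy scaled up for previously seen
    cells when the robot is far from the goal, and for new cells when near."""
    w = reobservation_weight(d, a, b)
    return (w if previously_seen else (1.0 - w)) * h_cell
```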

III-D Global path generation – first-level activeness

For every available candidate goal, we generate a path with an A* planner applied to the global occupancy grid. Every path obtained in this way is reduced to a set of waypoints, each defined by a 2D position and a heading. The path length between consecutive waypoints is restricted to a fixed value. This discretization is done for several reasons. First, it lowers the overall computational requirement of the complete active SLAM approach. Second, it accommodates the robot’s dynamic constraints, e.g., it allows the robot sufficient time to rotate such that the required heading at the subsequent waypoint is achievable and unnecessary in-place rotations are avoided.
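The waypoint discretization can be sketched as follows (the fixed spacing used in the paper is elided in this copy, so the 1 m default is purely illustrative):

```python
import numpy as np

def discretize_path(path_xy, spacing=1.0):
    """Reduce a dense planner path to waypoints roughly `spacing` apart.

    Keeps the first point and always appends the final goal point.
    """
    path = np.asarray(path_xy, dtype=float)
    waypoints = [path[0]]
    acc = 0.0
    for prev, cur in zip(path[:-1], path[1:]):
        acc += float(np.linalg.norm(cur - prev))
        if acc >= spacing:
            waypoints.append(cur)
            acc = 0.0
    if not np.array_equal(waypoints[-1], path[-1]):
        waypoints.append(path[-1])
    return np.array(waypoints)
```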

While the A* planner also returns a heading for every waypoint, a naive but non-informative approach is to interpolate the A* trajectory, following the direction of movement, to obtain waypoint headings. We propose a novel informative method, which we refer to as our first-level activeness. In this method, for every waypoint we replace the interpolated heading with the one that maximizes the waypoint’s utility value:

θ*_i = argmax_θ U(x_i, y_i, θ)    (7)

where (x_i, y_i) is the i-th waypoint’s position and the maximization is over candidate headings θ.

However, instead of directly using (2) to compute this utility, we update the visible-cell set such that the cells already visible from the waypoints earlier on the path are not counted again. The final path utility is then given by

U(P) = ( Σ_i γ^{d_i} U(q_i) ) / ( Σ_i γ^{d_i} )    (8)

where γ is a discount factor and d_i is the path-distance between waypoint q_i and the current robot location. Instead of the state-of-the-art approach [13], which uses a plain weighted sum, we use a weighted average over all waypoint utilities. This choice enables us to capture a more comprehensive contribution of the whole path in terms of overall information gain. Finally, the highest-utility path is selected and its waypoints are used by the local path execution module, as described next.

III-E Local re-planning and execution

We use a nonlinear model predictive control (NMPC) based method for local path execution. The NMPC input is the current robot state, the obstacle states and the desired waypoint provided by the global planner. The output is a trajectory of linear and angular velocity commands, the first step of which is executed at every timestep. The goal of the NMPC is to track the waypoints with minimum energy consumption while avoiding obstacles (formulated as space constraints). It is also constrained by linear and angular velocity limits. Note that while A* provides an obstacle-free path, it is necessary to include obstacle constraints in the local re-planning and execution to avoid collisions with any newly discovered obstacle cell. We further describe two key novelties of our overall approach, which ensure active information acquisition.

III-E1 Local waypoint refinement – second-level activeness

The second-level activeness is carried out immediately before sending a waypoint to the NMPC. Here, we exploit the fact that the robot has already moved and mapped new areas of the environment since the instant the global path was computed. The information considered during the previous global planning and optimal heading generation could have changed drastically, e.g., due to loop-closure events, intermediate robot movements or newly mapped obstacles. Therefore, each time the robot reaches a waypoint, our method re-computes the optimal heading of the subsequent waypoint using the same procedure as described at the end of Subsec. III-D, but with the updated map information.

III-E2 Real-time heading refinement – third-level activeness

The V-SLAM back-end computes a set of 3D features in real time. In the third-level activeness, which is carried out in between two consecutive waypoints, the key idea is to keep as many 3D visual features as possible in the robot camera’s FOV. For every detection step in V-SLAM (i.e., every node added to the graph), we obtain the 3D locations of all the features that are stored and could be used for loop closures. We use this information to actively steer the heading of the camera. At any given time instant while traversing between two waypoints, we have i) the desired optimal heading of the next waypoint, as computed by the second-level activeness, and ii) the orientation that the robot should achieve to retain the maximum number of features within its FOV. At every time-step we re-compute the latter with a 3D ray-tracing technique on the updated octomap, using the horizontal FOV of the robot camera, the maximum distance of the features from it, and the next waypoint’s 2D position. Given the distance between the current robot position and the next waypoint, we then compute, in real time, the current desired orientation for the robot as

(9)

where two constant weights balance the terms. The NMPC’s objective is now modified to track a continuously updated waypoint whose position component is unchanged but whose orientation component is updated to this desired orientation. Varying the two weights allows trading off keeping 3D visual features within the FOV against achieving the previously desired orientation. For instance, in our experiments we set them to keep the features within the FOV as long as the robot is far enough from the next waypoint. Overall, the third-level activeness increases the chance of loop closures and place recognition (especially while revisiting areas), thereby improving map quality and lowering trajectory errors.
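Equation (9) is garbled in this copy; the sketch below assumes a distance-dependent convex blend between the waypoint-optimal heading and the feature-retaining heading, which reproduces the described behavior (the feature heading dominates while the next waypoint is still far). The weighting form and the constants `k1`, `k2` are assumptions:

```python
import math

def blend_heading(theta_wp, theta_feat, d, k1=1.0, k2=1.0):
    """Blend the waypoint-optimal heading with the feature-retaining heading.

    d is the distance to the next waypoint; the weight on the feature
    heading grows with d, so far from the waypoint the camera keeps the
    features in view, and near it the waypoint heading takes over.
    """
    w = (k1 * d) / (k1 * d + k2)          # in [0, 1), -> 1 as d grows
    # Interpolate on the circle via the shortest angular difference.
    diff = math.atan2(math.sin(theta_feat - theta_wp),
                      math.cos(theta_feat - theta_wp))
    return theta_wp + w * diff
```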

III-F FSM

A finite state machine (FSM) coordinates the execution of the three active SLAM layers introduced in this work. A simplified scheme of the FSM is depicted in Algorithm 1. The NMPC states (WAITING, OPERATING or GOAL REACHED) determine what the FSM does. If the NMPC is waiting for commands (WAITING or GOAL REACHED states), the FSM manages the request of available goals and the generation of optimal paths towards them. If the NMPC is actively working (OPERATING), the FSM stays idle.

Result: Exploration
state = get NMPC state;
if state = WAITING or state = GOAL REACHED then
        if waypoints is not empty then
               // Second-level activeness
               optimize best heading;
               // ‘In’ NMPC Third-level activeness
               send waypoint to NMPC;
               popfront(waypoints);
              
       else
               endpoints = require frontier points;
               if endpoints is empty then
                      if map graph is empty then
                             endpoints = 360 rotation in place;
                            
                     else
                             endpoints = random graph node;
                            
                      end if
                     
               end if
              // First-level activeness
               waypoints = getOptimalSolution(endpoints);
              
        end if
       
else if state != OPERATING then
        recovery;
       
Algorithm 1 Pseudocode of the finite state machine (FSM) coordinating our proposed active SLAM layers.
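Algorithm 1 can be sketched as a single dispatch function in Python; the callables (`frontier_fn`, `planner_fn`) and the return convention are hypothetical stand-ins for the paper's modules:

```python
def fsm_step(nmpc_state, waypoints, frontier_fn, graph_nodes, planner_fn):
    """One tick of the coordinating FSM (cf. Algorithm 1).

    Returns (action, payload, waypoints) where action names the branch taken.
    """
    if nmpc_state in ("WAITING", "GOAL_REACHED"):
        if waypoints:
            # Second-level activeness would re-optimize this heading here;
            # the NMPC then applies third-level activeness while tracking it.
            wp = waypoints.pop(0)
            return ("SEND_WAYPOINT", wp, waypoints)
        endpoints = frontier_fn()
        if not endpoints:
            # No frontiers: rotate in place, or fall back to a visited node
            # (the paper picks a random node; first node used here for
            # determinism in this sketch).
            endpoints = (["ROTATE_IN_PLACE"] if not graph_nodes
                         else [graph_nodes[0]])
        # First-level activeness: heading-optimized global path generation.
        return ("PLAN", None, planner_fn(endpoints))
    if nmpc_state != "OPERATING":
        return ("RECOVERY", None, waypoints)
    return ("IDLE", None, waypoints)
```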

IV Experiments and Results

Our robot, a Festo Didactic Robotino (Fig. 2), has a 3-wheel omnidirectional drive base. We augmented the robot hardware with an extended physical structure containing a 2D laser range finder (Hokuyo A2M8), an IMU, an RGB-D camera (Intel RealSense D435i) and a single-board computer (Intel Core i7-3612QE CPU @ 2.10 GHz). A virtual model of the same robot was created for the simulation experiments.

IV-A Simulation Experiments

IV-A1 Setup and Implementation

For validation of the proposed algorithm we use the Gazebo simulator. We test our method in two different environments: i) the AWS Robomaker Small House World (https://github.com/aws-robotics/aws-robomaker-small-house-world) (Fig. 5a), and ii) a modified version of Gazebo’s Cafe environment (Fig. 5b) using models from 3DGems [16].

We use real-hardware-like parameters for all the sensors in the simulation. The LRF provides a complete sweep at a fixed angular resolution, and the camera is used at a fixed resolution, with its horizontal FOV and maximum sensing depth matching the real device. The NMPC horizon is a fixed number of steps of constant duration. For all experiments, the maximum translational and angular speeds of the robot are limited. The ground-truth maps were obtained using pgm_map_creator (https://github.com/hyfan1116/pgm_map_creator) and edited to remove unwanted artifacts. Maps have a fixed cell size and are compared w.r.t. the ground truth using the following metrics: i) balanced accuracy score (BAC) on the three classes free, occupied and unknown, ii) root mean squared absolute trajectory error (ATE RMSE), and iii) the number of loop closures triggered per meter of distance travelled. Map entropy is monitored throughout the entire experiment and presented as entropy normalized w.r.t. the actually explored area.
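The balanced accuracy score over the three map classes is the unweighted mean of per-class recall, which keeps the dominant free class from masking errors on the rarer occupied cells; a sketch:

```python
def balanced_accuracy(truth, pred, classes=("free", "occupied", "unknown")):
    """Balanced accuracy over map classes: mean of per-class recall.

    truth, pred: per-cell class labels of equal length. Classes absent
    from the ground truth are skipped.
    """
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(truth) if t == c]
        if not idx:
            continue
        hit = sum(1 for i in idx if pred[i] == c)
        recalls.append(hit / len(idx))
    return sum(recalls) / len(recalls)
```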

We use RTAB-Map [2], a graph- and feature-based visual SLAM framework, as our V-SLAM back-end. A 2D occupancy map is generated by projecting the octomap built from our RGB-D sensor. This allows us to map obstacles that would otherwise be hidden from the 2D LRF.

Fig. 2: Our robot platform – Festo Didactic’s Robotino with additional structure and hardware. Real robot (left), Gazebo model (right).
(a) AWS’s small house
(b) Gazebo’s edited Cafè
Fig. 5: Simulated Environments

As our overall active SLAM approach is highly modular (different ways to compute the path utility and three levels of activeness), we combine our individual contributions into various other approaches, including some from the state-of-the-art, to compare against our overall approach.

First, in addition to our novel approach of generating path utilities (Subsec. III-D), which includes the first-level activeness, we consider two additional utility computation methods. These are i) OL: using only the optimized goal position instead of the waypoints on the path, and ii) INTER: the orientation of the robot is simply interpolated along the trajectory to the goal position and the total utility is computed as a plain sum of the utilities. Both of these neglect our first-level activeness. Active SLAM approaches using these for utility computation are prefixed by OL and INTER, respectively. Approaches using our complete procedure of global path generation (Subsec. III-D) are prefixed with A. Based on the combination of activeness levels and path utility computation approach, we performed experiments with the following methods: A_1_2_3 (our complete approach), A_1, OL_0, OL_1, OL_1_3, OL_2, OL_2_3 and INTER_0. The numbers in the suffix signify the activeness levels enabled, with 0 referring to none of them enabled. We also performed a longer-duration run of our complete approach A_1_2_3. Finally, we provide experiments with two additional methods, where our complete approach is used, but instead of the plain Shannon entropy scheme we use the obstacle-aware and distance-weighted utilities introduced in Subsec. III-C. These methods are named A_O and A_DW_O. Note that the method OL_0 closely corresponds to [8], and INTER_0 to [4].

Simulation results are averaged over 20 successful trials of ten minutes each per method, with the same starting location on the map. Variability across trials results from the different trajectories the robot takes in each run. Since RTAB-Map’s execution rate is not fixed, but varies with the movement of the robot, the plots are presented after post-processing of the data. This consists of a bucketing procedure over time windows of 2 seconds, with empty windows forward-filled with the previous values. While we show plots and tables for both simulation environments, we only discuss the results for Gazebo’s Cafe environment; the results for the AWS environment are quite similar.
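The 2-second bucketing with forward-filling can be sketched as follows (assuming timestamps are sorted, in seconds, and relative to the run start):

```python
def bucket_series(timestamps, values, width=2.0):
    """Resample an irregular time series into fixed-width buckets.

    Each bucket takes the most recent sample falling within it; empty
    buckets are forward-filled with the previous value.
    """
    t_end = timestamps[-1]
    out, i, last = [], 0, None
    t = 0.0
    while t <= t_end:
        while i < len(timestamps) and timestamps[i] < t + width:
            last = values[i]
            i += 1
        out.append(last)
        t += width
    return out
```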

IV-A2 Results

(a) Path length evolution
(b) Time evolution
Fig. 8: Cafè environment: comparison of proposed methods.
(a) Path length evolution
(b) Time evolution
Fig. 11: Cafè environment: comparison of OL-type methods
(a) Path length evolution
(b) Time evolution
Fig. 14: Cafè environment: comparison of different utilities
BAC ATE RMSE Loops per m
mean std mean [m] std [m] mean std
A_1_2_3
INTER_0
OL_0
A_1
TABLE I: Cafè environment: comparison of proposed methods.
BAC ATE RMSE Loops per m
mean std mean [m] std [m] mean std
OL_2
OL_2_3
OL_1
OL_1_3
TABLE II: Cafè environment: comparison of OL-type methods.
BAC ATE RMSE Loops per m
mean std mean [m] std [m] mean std
A_1_2_3
A_1_2_3
OL_2_3
A_O
A_DW_O
TABLE III: Cafè environment: comparison of different utilities.

We first compare our full method A_1_2_3 with A_1, OL_0 [8] and INTER_0 [4]. From Tab. I, we can see that while in terms of BAC all four methods perform equally, there are notable differences in terms of ATE RMSE and triggered loop closures. In INTER_0, the smooth movement of the robot, associated with the interpolated trajectory, leads to a much lower ATE RMSE. However, in INTER_0 the robot travels the most without any gain either in terms of explored space compared to both A_1 and OL_0, or in terms of the final map entropy compared to A_1_2_3 and A_1. Moreover, in Fig. 8 we can notice how, despite exploring a much larger area per meter travelled, A_1_2_3 continuously keeps a much lower normalized entropy. Most importantly, our complete approach A_1_2_3 (and A_1) successfully explores the environment while achieving a trajectory length shorter than that of OL_0.

In Tab. II and Fig. 11, we compare the methods OL_1, OL_1_3, OL_2 and OL_2_3. We see from Fig. 11 that the path lengths are considerably reduced when the second and third levels of activeness are enabled. What is also evident from Fig. 11 is that with both activeness levels enabled, we successfully explore the environment while keeping the overall entropy lower w.r.t. the base method OL_0. From Tab. II, we see that the ATE RMSE of OL_2_3 and OL_1 are comparable, especially taking into account the difference between these methods in terms of the rotational movement performed by the robot. By design, the second and third levels of activeness cause the robot to make more rotational movements. Such movements are known to increase the uncertainty in pose estimates of the robot. However, as is evident, in our active approach they still do not adversely affect the robot localization. Furthermore, the number of loop closures per meter is higher when the second and third levels of activeness are enabled. Overall, we find that OL_2_3 is, among these four methods, the best-performing one, with a clear pattern of lower normalized entropy and almost complete exploration while travelling the shortest path.

Finally, in Fig. 14 and Tab. III, we compare OL_2_3 against our full approach A_1_2_3, the approaches with different utilities (A_O and A_DW_O), and the extended-duration run of A_1_2_3. We observe that, with shorter paths, all our approaches, when compared to OL_2_3, keep the normalized entropy low both during the experiment and at its end, while reaching a comparable amount of explored area in the same time duration. The ATE RMSE is also better for A_1_2_3, A_O and A_DW_O compared to OL_2_3, while the loop closures are comparable. We can observe that the method A_DW_O, as expected, is linked to a lower normalized entropy and a lower distance travelled compared to OL_2_3. The presence of obstacles in the utility improves every considered metric (entropy reduction, ATE RMSE and loop-closure frequency). This justifies the introduction of such a term in the utility function of an active V-SLAM approach. Since the obstacle cells are the hardest to refine, and the ones for which the system must be most vigilant, exploiting them directly in the utility computation is beneficial. The objective of the extended run is to show that our approach has no issue in completely exploring the environment while keeping the path shorter and reaching the same final normalized entropy. Moreover, we find that, if run for a longer period of time, our approach further refines its results and reaches better metrics overall.

Fig. 17: Small house: comparison of proposed methods. (a) Path length evolution; (b) time evolution.
Fig. 20: Small house: comparison of OL-type methods. (a) Path length evolution; (b) time evolution.
Fig. 23: Small house: comparison of different utilities. (a) Path length evolution; (b) time evolution.
TABLE IV: Small house: comparison of proposed methods (A_1_2_3, INTER_0, OL_0, A_1) by BAC (mean/std), ATE RMSE (mean/std, in m) and loop closures per meter (mean/std).
TABLE V: Small house: comparison of OL-type methods (OL_2, OL_2_3, OL_1, OL_1_3) by BAC (mean/std), ATE RMSE (mean/std, in m) and loop closures per meter (mean/std).
TABLE VI: Small house: comparison of different utilities (A_1_2_3 in its standard and extended runs, OL_2_3, A_O, A_DW_O) by BAC (mean/std), ATE RMSE (mean/std, in m) and loop closures per meter (mean/std).

Iv-B Real Robot Experiments

Fig. 26: Comparison of our complete approach with the state-of-the-art methods on our real robot in an office reception area. (a) Path length evolution; (b) time evolution.
TABLE VII: Real robot results: comparison of proposed methods and utilities (A_1_2_3, A_O, A_DW_O, OL_0, OL_2_3) by explored area (mean/std), normalized entropy (mean/std) and loop closures per meter (mean/std).

The experiments with our real robot consist of five runs of five minutes each in a custom environment, as depicted in Fig. 1. For safety reasons, the speed of the robot was limited to m/s. For this set of experiments, ground truth for both the robot position and the map was not available. Therefore, the analysis can only be done with respect to the area explored, the distance travelled, the final map entropy and the number of loop closures, as shown in Fig. 26 and Tab. VII.  The real robot results generally follow the same trend as our simulation experiments, further validating our approach.  Most importantly, as seen in Fig. 26, our complete approach A_1_2_3 achieves the shortest path length and explores the largest area while keeping a fairly low entropy.  A_O and A_DW_O, i.e., the methods using our newly proposed utilities, perform best in terms of final normalized entropy and average loop closures.  Furthermore, we can see in Tab. VII that OL_0 achieves more loop closures than OL_2_3, despite having a much higher normalized entropy; this is slightly in contrast to the simulation results.  Overall, even though the real robot experiments are not as statistically significant as the simulation ones, they clearly indicate that our three levels of activeness improve the final results, bringing benefits to all the comparison metrics.  Lastly, it must be noted that A_O and A_DW_O yield longer paths. This is most likely linked to the very low velocity of the robot: while these two methods favour longer and more informative paths, the OL methods usually favour the nearest frontier. With A_O and A_DW_O, the robot suffers from several start-and-stop instances and the continuous need to re-compute goals.
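The map entropy used as a metric here is typically the Shannon entropy of the occupancy grid, averaged over cells so that different map sizes are comparable. A minimal sketch of that computation (the exact normalization in the paper may differ; this assumes a grid of per-cell occupancy probabilities, with unknown cells at 0.5):

```python
import numpy as np

def normalized_map_entropy(occ_grid):
    """Shannon entropy of an occupancy grid, averaged per cell.

    occ_grid: array of occupancy probabilities in [0, 1]; unknown
    cells at 0.5 contribute the maximum per-cell entropy of 1 bit.
    """
    # Clip to avoid log(0) on fully certain cells.
    p = np.clip(np.asarray(occ_grid, dtype=float), 1e-9, 1 - 1e-9)
    cell_entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return cell_entropy.mean()  # in [0, 1] bits per cell
```

A fully unknown map thus scores 1 bit per cell, while a perfectly mapped one scores near 0, which is why lower normalized entropy indicates a better-refined map in the comparisons above.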

V Conclusions

In this work, we presented a novel active V-SLAM method for omnidirectional robots. Our approach consists of three levels of activeness. In the first level, the robot selects goals and decides on the most informative paths to them. The second level re-optimizes waypoints along the path so that the real-time updated map information is continuously exploited. The third level of activeness ensures that the robot's orientations maximize visual feature visibility, which eventually yields better localization accuracy in addition to lower map entropy. The most significant result of our approach is that it requires up to 36% less path traversal to obtain similarly good coverage and lower map entropy compared to methods that are tuned versions of the state-of-the-art approaches [4]. We demonstrated the efficacy of our method both in simulation and in real robot scenarios. A drawback of our method is that the robot encounters some start-stop situations in which the local paths are replanned. While this may not be desirable in some applications, one way to avoid such behavior is to employ a better computational unit.  Future directions of our work include active V-SLAM with moving objects and persons in the environment, and the extension of our approach to aerial robots.

References

  • [1] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,” IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.
  • [2] M. Labbé and F. Michaud, “RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation,” Journal of Field Robotics, vol. 36, no. 2, pp. 416–446, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21831
  • [3] A. Rosinol, M. Abate, Y. Chang, and L. Carlone, “Kimera: an open-source library for real-time metric-semantic localization and mapping,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 1689–1696.
  • [4] H. Carrillo, P. Dames, V. Kumar, and J. A. Castellanos, “Autonomous robotic exploration using a utility function based on Rényi’s general theory of entropy,” Autonomous Robots, vol. 42, no. 2, pp. 235–256, Feb 2018. [Online]. Available: https://doi.org/10.1007/s10514-017-9662-9
  • [5] R. Sim and N. Roy, “Global A-optimal robot exploration in SLAM,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005, pp. 661–666.
  • [6] B. Yamauchi, “A frontier-based approach for autonomous exploration,” in Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97. ’Towards New Computational Principles for Robotics and Automation’, 1997, pp. 146–151.
  • [7] H. Umari and S. Mukhopadhyay, “Autonomous robotic exploration based on multiple rapidly-exploring randomized trees,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 1396–1402.
  • [8] A. Dai, S. Papatheodorou, N. Funk, D. Tzoumanikas, and S. Leutenegger, “Fast frontier-based information-driven autonomous exploration with an MAV,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 9570–9576.
  • [9] L. Schmid, M. Pantic, R. Khanna, L. Ott, R. Siegwart, and J. Nieto, “An efficient sampling-based method for online informative path planning in unknown environments,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1500–1507, 2020.
  • [10] J. Vallvé and J. Andrade-Cetto, “Active pose SLAM with RRT*,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 2167–2173.
  • [11] Y. Chen, S. Huang, and R. Fitch, “Active SLAM for mobile robots with area coverage and obstacle avoidance,” IEEE/ASME Transactions on Mechatronics, vol. 25, no. 3, pp. 1182–1192, 2020.
  • [12] C. Leung, S. Huang, and G. Dissanayake, “Active SLAM using model predictive control and attractor based exploration,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 5026–5031.
  • [13] C. Papachristos, S. Khattak, and K. Alexis, “Uncertainty-aware receding horizon exploration and mapping using aerial robots,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 4568–4575.
  • [14] C. Stachniss, G. Grisetti, and W. Burgard, “Information gain-based exploration using Rao-Blackwellized particle filters,” in Robotics: Science and Systems, vol. 2, 2005, pp. 65–72.
  • [15] D. Mammolo, “Active SLAM in crowded environments,” 2019. [Online]. Available: http://hdl.handle.net/20.500.11850/341706
  • [16] A. Rasouli and J. K. Tsotsos, “The effect of color space selection on detectability and discriminability of colored objects,” arXiv preprint arXiv:1702.05421, 2017.