Autonomous operation in robotics applications requires robots to have access to a consistent model of the surrounding environment in order to support safe planning and decision making. Towards this goal, a robot must be able to create a map of the environment, localize itself within it, and control its own motion. Active SLAM refers to the joint resolution of these three core problems in mobile robotics, with the ultimate goal of creating the most accurate and complete model of an unknown environment. Active SLAM can be seen as a decision-making process in which the robot has to choose its own future control actions, balancing the exploration of new areas against the exploitation of those already seen to improve the accuracy of the resulting map.
During the last decades, active SLAM has received increasing attention (the number of publications on active SLAM has grown from 53 in 2010 to over 660 in 2022, a twelve-fold increase, and approaches 5500 if the search is extended to include belief-space planning, active exploration, and simultaneous planning, localization and mapping; source: dimensions.ai) and has been studied in different forms across multiple communities, with the ambition of deploying autonomous agents in real-world applications (e.g., search and rescue in hazardous environments, underground or planetary exploration). This diversity has broadened the scope of the problem and provided a wider context, yielding numerous approaches based on different concepts and theories that have made the field flourish; but it has also created a disconnect between research lines that could mutually benefit from each other. With this survey, we seek to fill this gap by providing a general problem statement and a unified review of related works.
Currently, active SLAM is at a decisive point, driven by novel opportunities in spatial perception and artificial intelligence (AI). These include, for instance, the application of breakthroughs in neural networks to prediction beyond line-of-sight, reasoning over novel environment representations, or leveraging new SLAM techniques to process dynamic and deformable scenes. Throughout this paper, we give a fresh picture of active SLAM that goes beyond the classical (but still mainstream) entropy computation over discretized grids. In addition, we identify the open challenges that need to be addressed for active SLAM to have an impact on real applications, shaping future lines of research and describing how they can benefit from cross-fertilization between research fields. Among those challenges, we emphasize the urgent need for benchmarks and reproducible research.
I-A Historical Perspective
Ever since the first mobile robots were built in the late 1940s, the ambition that they could perform autonomous tasks has been one of the major focuses of robotics research. To operate autonomously, a robot needs to form a model of the surrounding environment, including localization and mapping, and perform safe navigation [siegwart11]. While the former involves estimating the position of the robot and creating a symbolic representation of the environment, the latter refers to planning and controlling the movements of the robot to safely achieve a goal location. Localization, mapping, and planning have often been investigated in combination, resulting in multiple research areas such as SLAM, active localization, active mapping, and active SLAM.
Localization and mapping were treated deterministically and solved independently until probabilistic approaches went mainstream in the 1990s, when researchers realized that both tasks were correlated and dependent on one another. SLAM thereby refers to the problem of incrementally building the map of an environment while simultaneously locating the robot within it [thrun05]. This problem has attracted significant attention from the robotics community in the last decades; see [durrant06, grisetti10, cadena16] and the references therein.
SLAM, however, is a passive method that is not concerned with guiding the navigation process. In contrast, active approaches do consider the navigation aspects of the problem. Bajcsy [bajcsy85], Cowan and Kovesi [cowan88], and Aloimonos et al. [aloimonos88] were the first to study and analyze the problem of active perception (also referred to as active information acquisition [atanasov14]) in the late 1980s. Bajcsy [bajcsy98] would later formally define it as the problem of actively acquiring data in order to achieve a certain goal, necessarily involving a decision-making process. For the cases in which the objective is to improve localization, mapping, or both, the problems are respectively referred to as active localization, active mapping, and active SLAM.
Active mapping was the first of these problems to be addressed, dating back to the work of Connolly [connolly85] in 1985. Better known since then as the next best view problem, active mapping tackles the search for the optimal movements to create the best possible representation of an environment. Subsequent examples date to the 1990s [maver93, whaite97, pito99], always under the assumption of perfectly known sensor localization. This problem has been primarily addressed in the computer vision community to reconstruct objects and scenes from multiple viewpoints, since the nature of projective geometry for monocular cameras, occlusions, and the limited field of view often make it impossible to do so from a single viewpoint; see [gallos19, zeng20b] and the references therein.
In a similar vein, active localization aims to improve the estimation of the robot’s pose by determining how it should move, assuming the map of the environment is known. The first relevant works can be traced back to 1998, when Fox et al. [fox98] and Borghi and Caglioti [borghi98] formulated it as the problem of determining the robot motion so as to minimize its future expected (i.e., a posteriori) uncertainty. In particular, it is in [fox98] where the foundations of the current workflow were laid: (i) goal identification, (ii) utility computation, and (iii) action selection (we will extensively review these stages later in this survey). Other relevant subsequent work can be found in [jensfelt01, mostegel14, gottipati19, xie20], but also in the related literature on perception-aware planning [strader20] and planning under uncertainty [roy99].
Finally, active SLAM unifies the previous problems, and allows a robot to operate autonomously in an initially unknown environment. It refers to the application of active perception to SLAM and can be defined as the problem of controlling a robot which is performing SLAM in order to reduce the uncertainty of its localization and the map representation [carrillo12]. Historically, active SLAM has been referred to with different terminology, which has significantly hindered knowledge sharing and dissemination within the robotics community. Relevant seminal works can be found under the names of active exploration [thrun91], adaptive exploration [feder99, bourgault02], integrated exploration [makarenko02, stachniss04], autonomous SLAM [newman03], simultaneous planning, localization and mapping [stachniss09], belief-space planning (BSP) [platt10], or simply robotic exploration [stachniss03, sim05]. It was not until 2002, when Davison and Murray [davison02] coined the term active SLAM, that the robotics community started adopting this nomenclature. Thrun and Möller [thrun91] demonstrated that, in order to solve robotic exploration, agents have to switch between two opposing principles depending on the expected costs and gains: exploring new areas and revisiting those already seen, i.e., the so-called exploration-exploitation dilemma. The first approach in which a robot chooses actions that maximize the knowledge of the two variables of interest is attributed to Feder et al. [feder99], who also separate the procedure into three major stages as in [fox98]. Table I contains a subset of relevant works that have followed [feder99]. This table differentiates the main aspects of each approach, including the type of sensors, the state representation, and the theoretical foundations.
I-B About Previous Surveys
Only two works have previously addressed the problem of surveying active SLAM research. The first of them, published in 2016, is a section of a more general survey on SLAM carried out by Cadena et al. [cadena16]. The other, by Lluvia et al. [lluvia21], conducts a more extensive survey on “Active Mapping and Robot Exploration”. Table II summarizes the topics they address, along with those covered in the present survey.
Cadena et al. [cadena16] describe both the history and the main aspects of the problem, and identify three open challenges: deciding when to stop performing active SLAM, accurately predicting the effect of future actions, and the lack of mathematical guarantees of optimality. However, the brevity of the active SLAM section prevented delving into a detailed discussion of the most relevant works or providing a more unified mathematical formulation of the problem. Moreover, since [cadena16] was published, many relevant contributions have been proposed and new open problems have arisen. For instance, progress has been made on the way uncertainties of the robot location and the map are represented and quantified. Furthermore, recent work has also opened new research endeavors, including deep learning (DL).
Lluvia et al. [lluvia21] also provide a thorough historical review and relate the different communities that have been trying to solve this problem under different nomenclatures. Similar to [cadena16], they do not attempt to present a unified mathematical formulation of active SLAM, nor do they cover utility computation, a field which has been mostly overlooked in the literature. They delve, nevertheless, into the optimization of vantage points and the trajectories to reach them, a new problem that has attracted significant attention from the control community and has seen many contributions in recent years. In [lluvia21], the authors present a comparison between representative works in active SLAM, although with a limited scope. In contrast to [lluvia21], we present a more complete analysis and a broader set of open challenges, which extends the ones identified in [cadena16].
Table I: Overview of representative active SLAM approaches and their main characteristics.

| Reference | SLAM Approach | Sensors | Environment Representation | Formulation | Goal Locations | Utility Function | Validation Environment | Stopping Criterion |
|---|---|---|---|---|---|---|---|---|
| Feder et al. [feder99] | EKF | Sonar | Landmark map | Traditional | Local vicinity | | Sim. & real | - |
| Bourgault et al. [bourgault02] | EKF | Lidar | OG map | Traditional | Local vicinity | MI | Real | - |
| Stachniss et al. [stachniss04] | FastSLAM [montemerlo02] | Lidar | OG map | Traditional | Frontiers & re-visiting | Particle's volume & distance | Sim. & real | Particle's volume |
| Stachniss et al. [stachniss05] | RBPF | Lidar | OG map | Traditional | Frontiers & re-visiting | MI & distance | Sim. & real | - |
| Leung et al. [leung06] | EKF | Lidar | Landmark map | MPC | Unknown space & re-visiting | | Simulation | - |
| Valencia et al. [valencia12] | Pose SLAM [ila09] | Lidar | OG map | Traditional | Frontiers & re-visiting | Entropy | Simulation | - |
| Carlone et al. [carlone14jirs] | RBPF | Lidar | OG map | Traditional | Frontiers & re-visiting | KLD | Simulation | - |
| Indelman et al. [indelman15] | GTSAM | Camera | Landmark map | BSP | - | Robot's & distance | Simulation | - |
| Zhu et al. [zhu15] | RGBDSLAM [endres13] | RGBD | Octomap | Traditional | Frontiers | Coverage & distance | Simulation | - |
| Bircher et al. [bircher16] | ROVIO [bloesch15] & mapping | Stereo & IMU | Octomap | MPC | RRT paths | Coverage & distance | Sim. & real | Coverage |
| Papachristos et al. [papachristos17] | ROVIO [bloesch15] & dense mapping | Stereo & IMU | Octomap | MPC | RRT paths | | Sim. & real | Min. utility |
| Umari et al. [umari17] | Gmapping [grisetti07] | Lidar | OG map | Traditional | Frontiers | Map's MI & distance | Sim. & real | - |
| Carrillo et al. [carrillo18] | ICP & iSAM [kaess08] | Lidar | OG map | Traditional | Frontiers | Shannon-Rényi entropy | Sim. & real | Time |
| Jadidi et al. [jadidi18] | Pose SLAM [ila09] | Lidar | COM | Traditional | Frontiers | MI & distance | Simulation | Coverage |
| Palomeras et al. [palomeras19b] | ICP & g2o [grisetti11] | Lidar | Octomap | Traditional | Random | Coverage | Sim. & real | Coverage |
| Chaplot et al. [chaplot19] | Neural networks | RGB | OG map | DRL | Local vicinity | Coverage | Sim. & real | - |
| Niroui et al. [niroui19] | Gmapping [grisetti07] | Lidar | OG map | DRL | Frontiers | Map's MI & distance | Sim. & real | - |
| Placed et al. [placed20] | Gmapping [grisetti07] | Lidar | OG map | DRL | Local vicinity | Robot's | Simulation | - |
| Suresh et al. [suresh20] | ICP & iSAM [kaess08] | Sonar | Octomap | MPC | RRT paths & re-visiting | Robot's & coverage | Sim. & real | - |
| Chen et al. [chen20b] | Linear SLAM [zhao19] | Camera | Landmark map | MPC | Local vicinity | Graph's | Sim. & real | Coverage |
Table II: Topics covered by previous active SLAM surveys and by this one.

| Topic | Cadena et al. [cadena16] | Lluvia et al. [lluvia21] | Ours |
|---|---|---|---|
| Traditional scheme: Env. representation | Yes | Yes | Yes |
| Alternative approaches: Continuous domain | No | Yes | Yes |
| Open problems: State prediction | Yes | Yes | Yes |
I-C Paper Structure
The remainder of this manuscript is organized as follows. Section II provides a unified problem formulation for active SLAM and describes the three subproblems (or stages) it has traditionally been divided into. Sections III to V cover those three stages separately. In particular, Section III deals with the identification of vantage points, Section IV with utility computation, and Section V with selection and execution of the optimal action. Sections VI and VII consider, on the other hand, alternative continuous-state optimization and DL methods. Section VIII is devoted to multi-robot active SLAM. Section IX outlines the open research questions in active SLAM. Finally, Section X concludes the manuscript.
II The Active SLAM Problem
II-A Problem Formulation
Active SLAM can be framed within the wider mathematical framework of partially observable Markov decision processes (POMDPs), after some particularization. POMDPs model decision-making problems under both action and observation uncertainties and can be formally defined as the 7-tuple $(\mathcal{X}, \mathcal{A}, T, \mathcal{Z}, O, r, \gamma)$. In particular, a POMDP consists of the agent's state space $\mathcal{X}$, a set of actions $\mathcal{A}$, a transition function between states $T : \mathcal{X} \times \mathcal{A} \rightarrow \mathcal{P}(\mathcal{X})$, where $\mathcal{P}(\mathcal{X})$ is the space of probability density functions (pdfs) over $\mathcal{X}$, an observation space $\mathcal{Z}$, the conditional likelihood of making any of those observations $O : \mathcal{X} \times \mathcal{A} \rightarrow \mathcal{P}(\mathcal{Z})$, where $\mathcal{P}(\mathcal{Z})$ is the space of pdfs over $\mathcal{Z}$, a reward scalar mapping $r : \mathcal{X} \times \mathcal{A} \rightarrow \mathbb{R}$, and the discount factor $\gamma \in [0, 1)$, which allows working with finite rewards even when planning over infinite time horizons.
Contrary to the fully observable case, agents in a POMDP cannot reliably determine their own true state, $x_t \in \mathcal{X}$. Instead, they maintain an internal belief or information state, $b_t$, which represents the posterior probability over states at time $t$, given the available data collected up to that time [kaelbling98, thrun05, sigaud13]:

$$b_t \triangleq p(x_t \mid z_{1:t}, a_{0:t-1}) \tag{1}$$

where $z_{1:t}$ is the set of all available observations and $a_{0:t-1}$ the set of past control actions (both collectively referred to as the history $\mathcal{H}_t$). The belief space, $\mathcal{B}$, of probability density functions over the set $\mathcal{X}$ is defined as:

$$\mathcal{B} \triangleq \left\{ b \in \mathcal{P}(\mathcal{X}) \ : \ b(x) \geq 0 \ \forall x \in \mathcal{X}, \ \int_{\mathcal{X}} b(x)\, dx = 1 \right\} \tag{2}$$
In order to evaluate the effect of future actions, agents must be capable of predicting posterior belief distributions, that is, the pdf over $\mathcal{X}$ after performing a certain action, $a_t$, and taking a future observation $z_{t+1}$:

$$b_{t+1} = p(x_{t+1} \mid \mathcal{H}_t, a_t, z_{t+1}) \tag{3}$$

Since the future measurements are unknown to the agent, their expected value has to be studied instead. Consider that an agent in the state defined by $b_t$ executes a certain action $a_t$, and transitions to another state with pdf $b_{t+1}$. Then, the likelihood of making an observation $z_{t+1}$ will be given by [sigaud13]:

$$p(z_{t+1} \mid b_t, a_t) = \int_{\mathcal{X}} p(z_{t+1} \mid x_{t+1}) \int_{\mathcal{X}} p(x_{t+1} \mid x_t, a_t)\, b_t(x_t)\, dx_t\, dx_{t+1} \tag{4}$$

where $p(z_{t+1} \mid x_{t+1})$ is the observation model and $p(x_{t+1} \mid x_t, a_t)$ the motion model.
Since the belief is a sufficient statistic, optimal policies for the original POMDP may be found by solving an equivalent continuous-space MDP over $\mathcal{B}$ [astrom65, kaelbling98]. Such an MDP is defined by the 5-tuple $(\mathcal{B}, \mathcal{A}, \tau, \rho, \gamma)$, where the transition and reward functions are $\tau : \mathcal{B} \times \mathcal{A} \rightarrow \mathcal{P}(\mathcal{B})$ and $\rho : \mathcal{B} \times \mathcal{A} \rightarrow \mathbb{R}$. To preserve consistency, this belief-dependent reward function builds on the expected rewards of the original POMDP:

$$\rho(b_t, a_t) = \mathbb{E}_{x \sim b_t}\big[ r(x, a_t) \big] = \int_{\mathcal{X}} b_t(x)\, r(x, a_t)\, dx \tag{5}$$
Then, the decision at time $t$ will be provided by the (control/action) policy $\pi$, which maps elements from the space of pdfs over $\mathcal{X}$ to the action space:

$$\pi : \mathcal{B} \rightarrow \mathcal{A}, \qquad a_t = \pi(b_t) \tag{6}$$

The optimal policy, $\pi^*$, that yields the highest expected rewards for every belief state can be found via:

$$\pi^* = \operatorname*{arg\,max}_{\pi} \ \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t\, \rho\big(b_t, \pi(b_t)\big) \right] \tag{7}$$

where the expectation is taken w.r.t. the future observations. In general, computing the optimal policy for MDPs with continuous state spaces is hard, and most works resort to approximate solutions or problem simplifications [kaelbling98, araya10].
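Before turning to the active SLAM particularization, the belief machinery above can be made concrete with one step of a discrete Bayes filter: predict with the motion model, correct with the observation likelihood, as in (1) and (4). The 1-D grid world, motion model, and beacon sensor below are toy assumptions chosen only to make the update tangible.

```python
import numpy as np

# Hypothetical 1-D grid world used only for illustration: 5 states, 2 actions.
N_STATES = 5

def motion_model(a):
    """p(x' | x, a): row x holds the pdf over successor states x'."""
    T = np.zeros((N_STATES, N_STATES))
    shift = 1 if a == "right" else -1
    for x in range(N_STATES):
        x_next = int(np.clip(x + shift, 0, N_STATES - 1))
        T[x, x_next] += 0.8          # intended move succeeds
        T[x, x] += 0.2               # or the robot stays put
    return T

def observation_model(z):
    """p(z | x'): likelihood of range reading z from a beacon at cell 4."""
    dist = np.abs(np.arange(N_STATES) - 4)
    return np.exp(-0.5 * (dist - z) ** 2)

def belief_update(b, a, z):
    """One Bayes-filter step: predict with the motion model, correct with p(z|x')."""
    b_pred = motion_model(a).T @ b          # prediction: sum_x p(x'|x,a) b(x)
    b_post = observation_model(z) * b_pred  # correction: multiply by likelihood
    return b_post / b_post.sum()            # normalize so b stays a pdf

b = np.full(N_STATES, 1.0 / N_STATES)       # uniform prior: fully uncertain
b = belief_update(b, "right", z=2)          # act, then observe a range reading of 2
print(b)                                    # posterior concentrates near the beacon range
```

The posterior remains a valid pdf after each step, which is exactly what allows the belief to serve as the sufficient statistic used in (5)-(7).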
The active SLAM problem requires, however, some variation and particularization of the above general POMDP formulation. Let us consider a robot capable of moving in an unknown environment while performing SLAM. That is, at every time step, the robot can change its own linear and angular velocities; moreover, the robot is able to process the sensor data into a map representation, $m$, and an estimate of its own state (e.g., pose), $x$. Thus, the state space can be defined as the joint space of robot states and maps, $\mathcal{X} \triangleq \mathcal{X}_{r} \times \mathcal{M}$.
The evolution of both the state and the measurements in SLAM is governed by probabilistic laws [thrun05], as (1) and (4) express. However, two assumptions regarding these equations are worth mentioning in the context of active SLAM, since they further simplify its resolution. First, the robot's state is commonly assumed Gaussian, with a pdf having mean $\hat{x}_t$ and covariance $\Sigma_t$ (see, e.g., [valencia12, indelman15]). Secondly, though less prevalent than the former, some works (e.g., [platt10]) also assume maximum likelihood (ML) observations, which allows rewriting the expected measurements as:

$$\hat{z}_{t+1} = \operatorname*{arg\,max}_{z_{t+1}} \ p(z_{t+1} \mid b_t, a_t) \tag{8}$$
In addition, in active SLAM the reward typically reflects the agent's knowledge of the system (i.e., it involves the uncertainty in the belief rather than focusing on reaching specific states). These reward functions are known as utility functions and may be defined mathematically as the scalar mapping $U : \mathcal{B} \rightarrow \mathbb{R}$. This reward mapping, however, is inconsistent with both POMDPs (where the reward depends on $x$ and $a$) and belief MDPs (where the reward is restricted to the form in (5)). To circumvent this limitation, $\rho$-POMDP [araya10] extends the POMDP formulation to allow the inclusion of the belief's uncertainty in the objective. This enables the use of information-oriented criteria rather than control-oriented ones, without losing basic properties such as Markovianity.
Finally, considering a finite horizon and ML observations, the discount factor and the expectation over future measurements in (7) can be dropped, and active SLAM can be reduced to the following optimization for open-loop planning settings:

$$\hat{a}_{t:t+T-1} = \operatorname*{arg\,max}_{a_{t:t+T-1}} \ U(b_{t+T}) \tag{9}$$

where $\hat{a}_{t:t+T-1}$ is the optimal sequence of actions to execute over the future planning horizon ($T$ lookahead steps).
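The open-loop optimization can be illustrated by exhaustively enumerating short action sequences and propagating a Gaussian belief under the ML-observation assumption; the linear-Gaussian models, noise values, and log-determinant utility below are illustrative assumptions, not a prescribed implementation.

```python
import itertools
import numpy as np

# Toy open-loop planner: enumerate T-step action sequences, propagate a
# Gaussian belief (mean, covariance), and keep the highest-utility sequence.

def propagate(mean, cov, a, Q=np.eye(2) * 0.1):
    """Predict step of an assumed linear motion model x' = x + a + noise."""
    return mean + a, cov + Q

def ml_update(cov, R=np.eye(2) * 0.5):
    """Kalman correction with the expected (ML) observation of the full state."""
    K = cov @ np.linalg.inv(cov + R)
    return (np.eye(2) - K) @ cov

def utility(cov):
    """Information-oriented utility: lower posterior uncertainty is better."""
    return -np.linalg.slogdet(cov)[1]

def plan(mean, cov, actions, horizon):
    best_seq, best_u = None, -np.inf
    for seq in itertools.product(actions, repeat=horizon):   # open-loop search
        m, P = mean, cov
        for a in seq:
            m, P = propagate(m, P, np.asarray(a, float))
            P = ml_update(P)                                 # ML-observation assumption
        if utility(P) > best_u:
            best_seq, best_u = seq, utility(P)
    return best_seq, best_u

seq, u = plan(np.zeros(2), np.eye(2), actions=[(1, 0), (0, 1)], horizon=2)
```

Note the exponential growth of the enumeration with the horizon, which is precisely why the decoupling described next restricts the candidate action set.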
II-B Decoupling Active SLAM into Three Subproblems
While the previous section provided a unified formulation for active SLAM, for computational convenience active SLAM has been traditionally decoupled into three subproblems (or stages) [fox98, feder99, makarenko02], which will be briefly described hereafter and covered in detail in Sections III to V:
Identification of potential actions: the first stage determines a reduced subset of candidate actions to execute, solely to bound the computational burden.
Utility computation: the expected cost and gain of performing each candidate action have to be estimated.
Action selection and execution: finally, the last stage involves finding and executing the optimal action(s).
The entire process should be repeated iteratively until the whole environment is accurately modeled, although in practice it runs until some stopping condition is met.
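The three-stage cycle can be sketched end-to-end on a toy 1-D map; the candidate identification, utility, and execution helpers below are illustrative stand-ins for the methods surveyed in Sections III to V, not any particular published approach.

```python
# Toy end-to-end loop on a 1-D "map" of unknown cells (-1) and free cells (0).

def identify_candidates(m):
    """Stage 1: frontier-like goals, i.e., known free cells adjacent to unknown ones."""
    return [i for i in range(1, len(m) - 1)
            if m[i] == 0 and (m[i - 1] == -1 or m[i + 1] == -1)]

def utility(m, pose, goal):
    """Stage 2: expected newly observed cells minus a travel-cost term."""
    gain = sum(1 for j in (goal - 1, goal + 1) if 0 <= j < len(m) and m[j] == -1)
    return gain - 0.1 * abs(goal - pose)

def execute(m, goal):
    """Stage 3: move to the goal and 'sense' its neighborhood as free."""
    for j in (goal - 1, goal, goal + 1):
        if 0 <= j < len(m):
            m[j] = 0
    return m, goal

m, pose = [-1] * 9, 4
m[4] = 0                                   # only the start cell is known
while (cands := identify_candidates(m)):   # stop when no frontiers remain
    goal = max(cands, key=lambda g: utility(m, pose, g))
    m, pose = execute(m, goal)
print(m)
```

Here the stopping condition is full coverage (no frontiers left); real systems use the richer stopping criteria listed in Table I.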
III Stage 1: Identification of Potential Actions
The first stage in traditional active SLAM approaches consists of generating the set of available actions the robot could execute (i.e., goals the robot can reach); this can be understood as a way to reduce (and discretize) the search space of potential actions. Early works simply used random goals or required human interaction, until the concept of frontiers was introduced by Yamauchi [yamauchi97]. This resulted in improved exploration strategies and has become consolidated as the most common approach. Nevertheless, the advent of neural networks has led to new ways of evaluating the space of potential goals. In this section, we present the most important methods to identify goal locations. Since they strongly depend on the representation of the environment estimated by the SLAM pipeline, we start by providing a brief description of the different representations used in active SLAM.
III-A Representation of the Environment
We review four different types of map representations: topological, metric, metric-semantic, and hybrid maps.
III-A1 Topological maps
use lightweight graphs to describe information about the topology of the environment. Historically, vertices in such a graph represent convex regions of free space, while edges model the connections between them. The construction of these graphs is a segmentation problem, usually solved over an occupancy grid; see [bormann16] for a survey on these methods. Although these maps allow leveraging graph theory, which provides powerful tools for planning and exploration, they are not frequently used in active SLAM [mu16, fermin17].
III-A2 Metric maps
are the most used representations to encode information about the environment in active SLAM. They can be further divided into two categories: sparse and dense maps. The former rely on a sparse set of interest points (or landmarks) to represent a scene, and have been especially used in optimal control [leung06, atanasov15, chen20b] and belief-space planning [indelman15] approaches. Dense maps can be based on point clouds, meshes or, more typically, a discretization of the environment into cells that encode a certain metric (e.g., occupancy, distance to obstacles). Occupancy grid (OG) maps, first proposed in the late eighties for perception and navigation by Elfes [elfes89] and Moravec [moravec89], assign to each cell its probability of being occupied. They have been used in numerous active SLAM frameworks, such as [carrillo18, gottipati19, chaplot19, niroui19, placed21]. Their extensions to 3D include OctoMaps [hornung13], Supereight [vespa18] and voxel maps [muglikar20], all of which have been used in active SLAM frameworks [palomeras19, selin19, deng20b, dai20, batinovic21]. Jadidi et al. [jadidi18] use continuous occupancy maps (COM), which allow the use of continuous optimization methods over them. There exist many other dense representations that encode more sophisticated metrics, such as those based on signed distance fields (SDF), like Voxblox [oleynikova17]. Still, they are seldom used in active SLAM frameworks [saulnier20].
III-A3 Metric-semantic maps
go beyond geometric modeling and associate semantic information to classical metric maps. Instead of geometric features, a sparse map can capture objects, described by a semantic category, pose, and shape [bowman17, nicholson18]. Active object-level SLAM has been considered in [eidenberger10, atanasov14b]. Examples of dense metric-semantic maps include Voxblox++ [grinvald19] and Kimera [rosinol20] (which build upon an SDF), and Fusion++ [mccormac18] and [zheng19] (based on voxel maps). Despite being used in some SLAM formulations (see [cadena16, rosinol20] and the references therein), they have scarcely been used in active SLAM. A notable exception is the work of Asgharivaskasi and Atanasov [asgharivaskasi21, asgharivaskasi22], which develops a multi-class (semantic) OctoMap and uses a closed-form lower bound on the Shannon mutual information between the map and range-category observations to select informative robot trajectories.
III-A4 Hybrid and hierarchical maps
combine some of the previous representations to enhance the decision-making process. Hybrid metric-topological maps have been applied to tackle either navigation [thrun96] or SLAM [tomatis03]. Rosinol et al. [rosinol21] combine metric, semantic, and topological representations into a single model, a 3D scene graph. These hierarchical representations break down metric-semantic maps into interconnected high-level entities, paving the way for high-level reasoning. The use of hybrid maps in active SLAM, however, is mostly unexplored, with [gomez19] being among the very few works that have successfully integrated them.
III-B Detecting Goal Locations
The identification of all possible destinations the robot could travel to quickly proves intractable because of the dimensions of the map and the action set [burgard05]. In practice, a finite subset of them is identified, allowing for computational tractability despite not guaranteeing global optimality [indelman15]. The simplest approach consists of randomly selecting the goal destinations [gonzalez02, tovar06]. Random-based exploration requires low computational resources and works under the assumption that every spot in the environment has the same information associated. In 1997, Yamauchi [yamauchi97] revolutionized the field by introducing the concept of frontiers, i.e., the areas that lie between known and unknown regions. Since its proposal, frontier-based exploration has been by far the most used approach and has been tailored to different map representations. Frontiers have been effectively identified for topological maps as nodes with no neighbors in certain directions [mu16]. For 2D OG maps, a plethora of geometric frontier-detection methods have been developed to circumvent the computational cost of searching the entire space [quin21]. Keidar and Kaminka [keidar12] propose the wavefront frontier detector (WFD) and the fast frontier detector (FFD). WFD starts the search from the robot's location and restricts it to the free space; FFD performs the search after each scan is collected, following the intuition that frontiers are bound to appear in recently scanned regions. Following this idea, the same authors present the incremental WFD [keidar14], which restricts the search to recently scanned areas. Quin et al. [quin21] improve the performance of the previous algorithms by only evaluating a subset of the observed free space. Refer to [holz10, quin21] for further discussion.
Umari and Mukhopadhyay [umari17] first present a frontier search method over a 2D OG based on rapidly-exploring random trees (RRTs) that grow both globally and locally to sample recently scanned regions. This strategy, often combined with computer vision algorithms, has been widely used [wu19, placed21]. The sample-based frontier detector algorithm [qiao18] reduces the computational load of the previous methods by only storing the nodes of the search tree. Frontier identification in 3D maps is less frequent, since 3D maps are more expensive to store and analyze, and are often incomplete due to the sensed volume. Apart from simple search techniques [dornhege13, dai20], most methods evaluate map portions incrementally [zhu15, batinovic21] or along surfaces [senarathne16]. Alternatively, the authors of [shen12] propose a method that disperses random particles over the known 3D space. Regardless of the method used, after detecting frontiers, a clustering step is frequently required to prevent the frontier set from becoming high-dimensional (e.g., using K-means [lu20] or mean-shift [umari17]).
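A minimal WFD-style detector illustrates the core idea of searching for frontiers through free space only: breadth-first expansion from the robot cell, collecting free cells that border unknown ones. The cell convention (0 free, 1 occupied, -1 unknown) is a common assumption for the sketch, not the exact algorithm of [keidar12].

```python
import numpy as np
from collections import deque

def find_frontiers(grid, start):
    """BFS from the robot cell through free space; return free cells touching unknown."""
    rows, cols = grid.shape
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    frontiers, seen, queue = [], {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        adj = [(r + dr, c + dc) for dr, dc in nbrs
               if 0 <= r + dr < rows and 0 <= c + dc < cols]
        if any(grid[n] == -1 for n in adj):      # free cell bordering unknown space
            frontiers.append((r, c))
        for n in adj:                            # expand through free space only
            if n not in seen and grid[n] == 0:
                seen.add(n)
                queue.append(n)
    return frontiers

grid = np.array([[0,  0, -1],
                 [0,  1, -1],
                 [0,  0,  0]])
print(find_frontiers(grid, start=(0, 0)))
```

Restricting the expansion to free cells is what makes WFD-style searches cheap: occupied and unknown cells are never enqueued, so the visited set stays small even on large grids.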
Shortly after the concept of frontiers was proposed, Newman et al. [newman03] and Stachniss et al. [stachniss04] realized that, for a robot with high uncertainty, potential loop closure areas encode more information than frontiers. Similarly, Grabowski et al. [grabowski03] observe that regions of interest where sensor readings overlap may be more informative than new frontiers. In other words, these works explicitly account for the exploration-exploitation dilemma in the frontier detection step. It is a common practice in active SLAM to include potential loop closure regions —along with frontiers— in the set of goal candidates [valencia12, palomeras19b], or to switch between exploring new frontiers and revisiting known places [stachniss04, kim15, suresh20].
In contrast to frontier-based approaches, some active SLAM formulations identify goal locations locally in the robot's vicinity. Note, however, that decisions will then be only locally optimal, and a short decision-making horizon may induce wrong behaviors [feder99, van12]. This strategy is typical in deep reinforcement learning (DRL) approaches [zhu18, placed20], in which local optimality is alleviated by network memorization. Following the idea that evaluating larger neighborhoods leads to more robust decisions, the authors of [valencia12] use RRT-based paths to several configurations over the free space as the action set, and in [indelman15] the entire environment is considered under the umbrella of continuous-domain optimization.
IV Stage 2: Utility Computation
The second and main stage in classical active SLAM approaches focuses on the evaluation of each possible destination according to some criterion, in order to estimate the effect that executing the set of actions to reach it would have. Naive utility formulations using just geometric or time-dependent functions often result in undesirable behaviors for active SLAM [charrow15, valencia12, placed21iros], since they do not properly capture the uncertainty in the belief. The exploration-exploitation dilemma can be solved more effectively by quantifying the expected uncertainty of the two target random variables: the robot location and the map. There is a plethora of metrics, and the choice of which one to use mainly depends on how the variables of interest are represented. Metrics based on information theory usually target OG maps, while those based on the theory of optimal design are more suitable for Gaussian distributions over landmarks and poses. We review each choice below.
IV-A Naive Cost Functions
The simplest (and first broadly used) metrics are naive geometric functions, such as the Euclidean distance to the goal location [yamauchi97], the time required to reach it [dai20], or the expected size of the visited area after executing a set of actions [gonzalez02, tovar06, umari17]. In fact, the latter can be seen as an approximation of the map's entropy, which is strongly related to the number of known cells in an OG map [makarenko02]. Since these metrics are computed over Euclidean or temporal spaces, they can be used regardless of the chosen map representation. Application examples in the literature include voxel [palomeras19b, batinovic21] and semantic maps [gomez19]. Stachniss et al. [stachniss05] show that combining distance and information-based functions results in better exploration strategies, and this has since been a common approach, especially for multi-robot configurations [chen20]. However, manual tuning is needed to overcome discrepancies between the multiple terms involved [julia12, umari17].
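A minimal sketch of such a combined utility follows, with an explicit weight playing the role of the manual tuning the literature refers to; the coverage-count information term, sensor range, and parameter values are illustrative assumptions.

```python
import numpy as np

def combined_utility(grid, robot, goal, sensor_range=2.0, lambda_=0.2):
    """Expected newly observed cells at the goal, discounted by travel distance."""
    unknown = np.argwhere(grid == -1)
    # crude information term: unknown cells the goal's sensor footprint would cover
    gain = sum(1 for c in unknown
               if np.linalg.norm(c - np.asarray(goal)) <= sensor_range)
    cost = np.linalg.norm(np.asarray(goal) - np.asarray(robot))  # Euclidean travel cost
    return gain - lambda_ * cost   # lambda_ is the hand-tuned balancing weight

grid = np.full((6, 6), -1)
grid[:3, :3] = 0                               # explored corner, rest unknown
near, far = (2, 2), (2, 5)                     # two hypothetical candidate goals
u_near = combined_utility(grid, (0, 0), near)
u_far = combined_utility(grid, (0, 0), far)
print(u_near, u_far)
```

Because the gain term counts cells while the cost term measures metres, the two addends live on different scales; this is exactly the discrepancy that forces manual tuning of the weight.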
IV-B Information Theory (IT)
The most common approach to assess utility in active SLAM uses information theory (IT) to quantify the uncertainty in the joint belief state. Within it, there exist different metrics that allow for such quantification, although all of them build on the same concept: entropy. The notion of entropy was introduced by Shannon [shannon48] and can be defined as a measure of a variable’s uncertainty, randomness, or surprise; this is in fact strongly related to its associated information [cover99].
Early exploration strategies use only the map representation as the variable of interest [yamauchi97, colares16, dai20], thereby assuming no error in the robot localization. However, soon after the first of these works emerged, it was observed that high uncertainty in the robot state estimation leads to wrong expected map uncertainties [bourgault02]. The entropy of the SLAM posterior after executing a candidate action $a$ can be computed as [stachniss05]:

$$H\big[p(x_{1:t+T}, m \mid \mathcal{H}_t, a, \hat{z})\big] = - \iint p(x_{1:t+T}, m \mid \mathcal{H}_t, a, \hat{z}) \, \log p(x_{1:t+T}, m \mid \mathcal{H}_t, a, \hat{z}) \, dx_{1:t+T}\, dm \tag{10}$$

where $\hat{z}$ are the expected (ML) future measurements, which may be estimated using, e.g., ray-casting techniques [carrillo12].
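A simple way to approximate those expected measurements over a grid is to step along each sensor ray until an occupied cell is hit; the following sketch assumes a binary grid (0 free, 1 occupied) and a fixed step size, purely as an illustration of the idea.

```python
import numpy as np

def ray_cast(grid, origin, angle, max_range=10.0, step=0.1):
    """March along one beam; return the predicted range to the first occupied cell."""
    r = 0.0
    while r < max_range:
        x = origin[0] + r * np.cos(angle)
        y = origin[1] + r * np.sin(angle)
        i, j = int(np.floor(x)), int(np.floor(y))
        if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
            break                        # beam left the map: no hit
        if grid[i, j] == 1:
            return r                     # predicted (ML) range reading for this beam
        r += step
    return max_range                     # no obstacle within sensor range

grid = np.zeros((10, 10))
grid[5, :] = 1                           # a wall across row 5
print(ray_cast(grid, origin=(0.5, 3.5), angle=0.0))
```

Casting one such ray per beam of the simulated sensor from a candidate pose yields the expected scan used to predict the posterior map.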
The computation of the previous joint entropy is known to be intractable in general [stachniss05]. To overcome this, most approaches resort to entropy approximations that first compute the utility of the two variables independently and then combine them heuristically [bourgault02, stachniss05, vallve14, suresh20]. Let us first consider the case of graph-based SLAM, in which the problem is described using a graph representation where nodes represent the robot poses and edges encode the existing constraints between them; see [thrun05, durrant06, grisetti10, cadena16] and the references therein. The joint entropy in (10) can be approximated by [valencia12]:
$$H\big[p(x_{1:t+T}, m \mid \cdot)\big] \approx H\big[p(x_{1:t+T} \mid \cdot)\big] + H\big[p(m \mid \hat{x}_{1:t+T}, \cdot)\big],$$
i.e., the map entropy is evaluated only at the ML trajectory estimate $\hat{x}_{1:t+T}$.
The mismatch between the magnitudes of the addends in the previous equation is the main drawback of this approximation, calling for the addition of weighting parameters to balance the contributions of the two terms [blanco08, carlone14jirs]. Carrillo et al. [carrillo18] circumvent this by embedding a metric of the robot’s uncertainty in a combined Shannon-Rényi utility function, a popular approach that also appears in [popovic20]. On the other hand, the expectation-maximization (EM) algorithm [wang20] embeds the impact of the robot’s uncertainty directly in a virtual map. A similar approximation can be done for particle-filter SLAM, which represents the belief over robot trajectories as a set of particles [thrun05, durrant06, montemerlo02]. The only difference is that the integral in (10) is now approximated by the weighted mean over all possible solutions (i.e., particles) [stachniss05].
The first term in (10) refers to the robot’s state entropy, which can be computed as a function of the log-determinant of the posterior covariance, assuming an $n$-dimensional Gaussian distribution with covariance $\Sigma$:
$$H\big[p(x_{1:t+T} \mid \cdot)\big] = \frac{1}{2} \log\big((2\pi e)^{n} \det\Sigma\big).$$
On the other hand, the second term is the expected map’s entropy, and its computation depends on the representation chosen. For instance, in landmark-based maps it can be computed in the same way as the robot’s entropy, under the same Gaussianity assumption [davison05]. For discrete metric maps, and assuming cells are independent of each other, it can be defined as [stachniss09]:
$$H\big[p(m)\big] = -\sum_{i} \Big(p_i \log p_i + (1 - p_i) \log(1 - p_i)\Big),$$
with $p_i$ being the occupancy probability of cell $i$. This entropy measure has been used in both 2D [makarenko02, vallve14] and 3D OG maps [palomeras19, dai20, lu20]. More efficient approaches that only evaluate the entropy of cells in the robot’s vicinity have been proposed in the context of particle-filter SLAM [blanco08, du11, carlone14jirs].
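Both entropy terms are straightforward to evaluate numerically. A minimal sketch (in nats), assuming a Gaussian pose belief and an independent-cell occupancy grid as above:

```python
import numpy as np

def robot_entropy(cov):
    """Differential entropy of an n-D Gaussian pose belief:
    H = 0.5 * log((2*pi*e)^n * det(cov))."""
    cov = np.asarray(cov, dtype=float)
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)        # numerically stable log-det
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

def map_entropy(occ_probs):
    """Shannon entropy of an occupancy grid under cell independence:
    sum of per-cell binary entropies; unknown cells (p = 0.5) dominate,
    while fully known cells (p = 0 or 1) contribute nothing."""
    h = 0.0
    for p in occ_probs:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * np.log(q)
    return h
```

A shrinking pose covariance or a grid resolving toward 0/1 occupancy both lower the corresponding term, which is the behavior the joint-entropy approximation rewards.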
The most common metric to assess utility in active SLAM is not Shannon’s entropy of the SLAM posterior, but its expected reduction. This utility function is known as mutual information (MI) [bourgault02, stachniss03] and is defined as the difference between the entropy of the actual state and the expected entropy after executing an action, i.e., the information gain:
$$I[x, m; u] = H\big[p(x, m \mid d)\big] - \mathbb{E}_{z}\Big[H\big[p(x, m \mid d, u, z)\big]\Big],$$
where the expectation is taken w.r.t. the future measurements $z$.
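A toy single-cell sketch of MI as expected entropy reduction (the measurement outcomes and their probabilities below are illustrative assumptions, not a real sensor model):

```python
import math

def binary_entropy(p):
    """Entropy (nats) of a Bernoulli occupancy probability."""
    return -sum(q * math.log(q) for q in (p, 1 - p) if q > 0)

def information_gain(prior_probs, posteriors_per_meas, meas_probs):
    """MI = H(prior) - E_z[H(posterior | z)] over grid cells.
    `posteriors_per_meas[k]` holds the cell posteriors under measurement
    outcome k, weighted by `meas_probs[k]`."""
    h_prior = sum(binary_entropy(p) for p in prior_probs)
    h_post = 0.0
    for w, post in zip(meas_probs, posteriors_per_meas):
        h_post += w * sum(binary_entropy(p) for p in post)
    return h_prior - h_post
```

An observation that resolves an unknown cell (p = 0.5) toward 0.9 or 0.1 yields a positive gain, while an uninformative observation yields zero, matching the definition above.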
Kullback-Leibler divergence (KLD), or relative entropy [kullback51], has also been used as a utility function. KLD measures the change in the shape of a pdf (as MI does), but also how much its mean has translated [mihaylova02]. It is defined as follows:
$$D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log\frac{p(x)}{q(x)}\, dx,$$
where $p$ and $q$ can be the prior and posterior distributions (as in MI) [houthooft16], or the estimated and true posteriors under the assumption that the latter can be somehow approximated [fox03, kontitsis13, carlone14jirs].
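For Gaussian beliefs the KLD has a well-known closed form that makes the mean-translation term explicit; a minimal sketch:

```python
import numpy as np

def gaussian_kld(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) for multivariate Gaussians. Unlike MI, it also
    penalizes how far the mean has moved, not only the change in the
    shape of the pdf."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    cov0, cov1 = np.asarray(cov0, float), np.asarray(cov1, float)
    n = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, ld0 = np.linalg.slogdet(cov0)
    _, ld1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - n + ld1 - ld0)
```

Two identical Gaussians give zero divergence, while a pure mean shift (same covariance) still produces a positive value, which is precisely the property the text highlights.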
For OG maps, the three metrics above (entropy, MI, and KLD) ultimately rely on counting cells in the map, making them discrete and thus ill-suited for continuous optimization techniques. To mitigate this issue, Deng et al. [deng20, deng20b] propose a differentiable cost-utility function for both 2D OG and voxel maps that can be used with continuous optimization methods (albeit the approach still assumes perfect robot localization).
In the context of information-theoretic planning, there exists a problem variation in which the uncertainty of only a subset of variables is reduced. The motivation comes from the fact that maximizing information of all variables does not always imply maximizing that of the subset of interest. This problem variation has been referred to as focused active inference [levine13]. In general, focused active inference is more computationally intensive than the standard case, since it requires marginalization of the (posterior) Fisher information matrix via, e.g., the Schur complement. Kopitkov and Indelman [kopitkov17, kopitkov19] present a method based on the matrix determinant lemma that does not require the posterior covariance to calculate entropy, considering both the unfocused (entropy over all variables) and focused (entropy over a subset of variables) cases.
IV-C Theory of Optimal Experimental Design (TOED)
There exists a second group of utility functions, built upon the theory of optimal experimental design (TOED), that tries to quantify uncertainty directly in the task space (i.e., from the variance of the variables of interest). Unlike information-theoretic metrics, which target binary probabilities in the grid map, task-driven metrics apply to Gaussian variables. Following TOED, a set of actions to execute in active SLAM will be preferred over another if the covariance of the joint posterior is smaller, i.e., the posterior covariance matrix, $\Sigma$, has to be minimized. In order to compare the matrices associated with different candidates, several functions —known as optimality criteria— have been proposed, such as the trace (originally known as $A$-optimality) [chernoff53], the maximum/minimum eigenvalue ($E$-optimality) [ehrenfeld55], or the determinant ($D$-optimality) [wald43]. The latter was often disregarded in active SLAM because its traditional formulation did not allow for checking task completion and generated precision errors ($\det\Sigma \to 0$ rapidly when there are low-variance terms) [mihaylova02, sim05]. However, Carrillo et al. [carrillo12] show these problems can be solved using Kiefer’s formulation of $D$-optimality [kiefer74], thus re-establishing it as an effective measure of uncertainty for active SLAM.
On the basis of TOED, Kiefer [kiefer74] proposes a family of mappings, parametrized by a scalar $p$:
$$f_p(\Sigma) = \left(\frac{1}{n}\,\mathrm{trace}\big(\Sigma^{p}\big)\right)^{1/p},$$
which can be particularized for the different values of $p$ and expressed in terms of the eigenvalues of $\Sigma$, $\lambda_k$, by leveraging the properties of the matrix power:
$$f_p(\Sigma) = \left(\frac{1}{n}\sum_{k=1}^{n}\lambda_k^{p}\right)^{1/p}.$$
In essence, utility functions are functionals of the eigenvalues of $\Sigma$. The cases $p=1$ and $p=-1$, together with the limits $p \to 0$ and $p \to \pm\infty$, result in the four modern optimality criteria:
T-optimality criterion ($p=1$) captures the average variance:
$$T\text{-}opt(\Sigma) = \frac{1}{n}\sum_{k=1}^{n}\lambda_k.$$
D-optimality criterion ($p \to 0$) captures the volume of the covariance (hyper)ellipsoid:
$$D\text{-}opt(\Sigma) = \exp\left(\frac{1}{n}\sum_{k=1}^{n}\log\lambda_k\right) = \det(\Sigma)^{1/n}.$$
A-optimality criterion ($p=-1$) captures the harmonic mean variance, sensitive to a lower-than-average value:
$$A\text{-}opt(\Sigma) = \left(\frac{1}{n}\sum_{k=1}^{n}\lambda_k^{-1}\right)^{-1}.$$
E-optimality criterion ($p \to \pm\infty$) captures the radii of the covariance (hyper)ellipsoid:
$$E\text{-}opt(\Sigma) = \max_k \lambda_k \;\;(p \to \infty) \quad \text{or} \quad \min_k \lambda_k \;\;(p \to -\infty).$$
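The whole family, including its limiting cases, can be sketched in a few lines (a minimal illustration; E-optimality is exposed here through the min-eigenvalue limit $p \to -\infty$):

```python
import numpy as np

def kiefer_criterion(cov, p):
    """Kiefer's family f_p(Sigma) = ((1/n) * sum(lambda_k^p))^(1/p),
    with the modern criteria recovered as particular/limiting cases."""
    lam = np.linalg.eigvalsh(np.asarray(cov, float))  # eigenvalues of Sigma
    if p == np.inf:                    # limit p -> +inf: largest eigenvalue
        return float(lam.max())
    if p == -np.inf:                   # limit p -> -inf: smallest eigenvalue
        return float(lam.min())
    if p == 0:                         # limit p -> 0: geometric mean (D-opt)
        return float(np.exp(np.mean(np.log(lam))))
    return float(np.mean(lam ** float(p)) ** (1.0 / p))

t_opt = lambda cov: kiefer_criterion(cov, 1)        # average variance
d_opt = lambda cov: kiefer_criterion(cov, 0)        # det(Sigma)^(1/n)
a_opt = lambda cov: kiefer_criterion(cov, -1)       # harmonic mean variance
e_opt = lambda cov: kiefer_criterion(cov, -np.inf)  # smallest radius
```

By the power-mean inequality the criteria are ordered T ≥ D ≥ A ≥ E for any covariance, which is one way to see how each one weighs the low-variance directions differently.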
Optimality criteria were first used in active SLAM by Feder et al. [feder99], where utility was computed as the area of the covariance ellipses describing the uncertainty in the robot’s and map’s posteriors. Since then, many active SLAM methods based on TOED have been proposed, mostly based on $A$-optimality [sim05, leung06] and, recently, on $D$-optimality [placed20, suresh20]. Even so, TOED criteria are not as popular as IT approaches. Note that both the map and robot pose uncertainties must be described by a covariance matrix $\Sigma$, either by using a full covariance matrix in landmark-based representations or by including the effect of the map’s uncertainty in the pose covariance [carlone17].
Monotonicity. One of the most important assumptions in active SLAM is that uncertainty increases as exploration takes place. However, the seminal work in [carrillo15] notes how monotonicity is lost for some utility functions under certain conditions, concluding that only $D$-optimality guarantees this property and is thereby the only appropriate utility function for this task. Kim and Kim [kim17] and Rodríguez-Arévalo et al. [rodriguez18] demonstrate, however, that monotonicity depends on how the error and uncertainty are represented rather than on the utility function chosen. In [rodriguez18], the authors prove that only differential representations guarantee monotonicity for all utility functions. In summary, the representation of uncertainty is a key issue in active SLAM, since certain representations do not guarantee monotonicity during exploration and may thus lead to incorrect decisions.
IV-D The Graphical Structure of the Problem
Quantifying uncertainty via scalar mappings of the covariance matrix may be computationally intensive, mostly because the covariance is a large and dense matrix. Therefore, most works resort to reasoning over the Fisher information matrix (FIM), i.e., the inverse of the covariance, which is generally sparser. Still, its evaluation is expensive, especially for large state spaces. To circumvent this issue, some works have shown that analyzing the connectivity (i.e., the Laplacian) of the underlying pose-graph in active graph-SLAM is equivalent to computing optimality criteria. This observation traces back to the work of Cheng [cheng81], who realized that, in general, a graph with the maximum number of spanning trees is $D$-optimal. Khosoussi et al. [khosoussi14] take this line of research further, showing that the classical $D$- and $E$-optimality criteria are related to the number of spanning trees of the graph and its algebraic connectivity, respectively, for the case of 2D graph-SLAM with constant uncertainty along the trajectory. In [khosoussi19] and [chen21], these results are extended to the synchronization problem, and the $T$-optimality criterion is also related to the average node degree of the graph. Placed and Castellanos [placed21iros, placed21] study the general active graph-SLAM problem formulated over the Lie group $SE(n)$, showing the existing relationships between modern optimality criteria of the FIM and graph connectivity indices when the edges of the pose-graph are weighted appropriately, and reporting substantial reductions in computation time. These results have been used in coverage problems [chen20b], multi-robot exploration [chen20], and to develop a stopping criterion [placed22].
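Kirchhoff's matrix-tree theorem makes these connectivity quantities cheap to evaluate; a minimal sketch on an unweighted pose-graph (the edge weighting used in [placed21] is omitted here for simplicity):

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian of an undirected pose-graph with n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def spanning_trees(L):
    """Kirchhoff's theorem: the number of spanning trees equals any
    cofactor of L (here, the determinant after deleting row/col 0)."""
    return round(float(np.linalg.det(L[1:, 1:])))

def algebraic_connectivity(L):
    """Second-smallest Laplacian eigenvalue (Fiedler value)."""
    return float(np.sort(np.linalg.eigvalsh(L))[1])
```

Adding a loop-closure edge to a chain of poses increases both quantities, which mirrors how loop closures increase the information (and hence the optimality criteria) of the SLAM estimate.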
The graph structure of the problem has also recently been exploited in conjunction with IT utility functions. Kitanov and Indelman [kitanov19] relate the number of spanning trees of the graph to entropy (which ultimately depends on the covariance determinant) and its node degree to the Von Neumann entropy. The latter has also been applied to the focused case, thus relating the graph topology to the marginalized FIM [shienman21].
V Stage 3: Action Selection and Execution
Once every possible destination has an associated utility value, the last stage of active SLAM involves the selection of the optimal destination. This can be formulated as an optimization problem w.r.t. the set of actions to reach every possible goal location, cf. (9). When the set of candidate destinations is discrete (and typically consists of a handful of options), the solution of the optimization problem can be obtained via enumeration [yamauchi97, dornhege13, umari17]. For the case of TOED-based utility functions, it will be a minimization or maximization problem depending on whether the covariance ($\Sigma$) or the FIM ($Y = \Sigma^{-1}$) is analyzed. Since the eigenvalues of $Y$ are the reciprocals of those of $\Sigma$, it holds that, e.g., $D\text{-}opt(\Sigma) = D\text{-}opt(Y)^{-1}$ [placed21], and the optimization problem is
$$a^{*} = \operatorname*{argmin}_{a}\; D\text{-}opt\big(\Sigma(a)\big) = \operatorname*{argmax}_{a}\; D\text{-}opt\big(Y(a)\big).$$
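A minimal enumeration sketch of this selection step, using D-optimality of the FIM as the representative criterion (the candidate FIMs below are illustrative placeholders):

```python
import numpy as np

def d_opt(mat):
    """D-optimality: det(mat)^(1/n), via the stable log-determinant."""
    n = mat.shape[0]
    _, logdet = np.linalg.slogdet(mat)
    return float(np.exp(logdet / n))

def select_action(candidate_fims):
    """Enumerate the discrete candidates and maximize D-opt of the FIM,
    which is equivalent to minimizing D-opt of the covariance, since
    their eigenvalues are reciprocal."""
    scores = [d_opt(np.asarray(Y, float)) for Y in candidate_fims]
    return int(np.argmax(scores))
```

For a handful of frontiers this brute-force enumeration is exactly how discrete selection is done in practice; the cost lies in predicting each candidate's FIM, not in the argmax itself.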
Information-based utility functions will seek to minimize entropy (or, equivalently, to maximize MI). Following [valencia12], the optimal set of discrete actions can be found as:
$$u^{*} = \operatorname*{argmin}_{u}\; H\big[p(x, m \mid d, u)\big].$$
In any case, after selecting the most informative destination, it all comes down to navigating to it using, for example, sampling-based planning methods such as RRT [lavalle01], probabilistic road maps (PRM) [kavraki96], or their asymptotically optimal variants [karaman11]. Note that despite selecting the optimal destination among a discrete set of candidates, the executed path to reach it rarely represents an optimal solution to the original problem (9); this suboptimality arises because the approaches seen so far decouple the problem into first computing and evaluating a set of goal locations, and then computing a path to one of them.
VI Belief-Space Planning and Optimal Control
As a potential solution to the suboptimality induced by classical decoupled approaches, there exists a second family of methods in which the future trajectory of the robot is directly optimized. These methods represent an alternative to the traditional scheme and may be divided into two categories, depending on whether or not they discretize the action space. The first category relies on path planning algorithms to generate a discrete set of candidate paths towards the unknown space, in order to later evaluate their utility. Works by Oriolo et al. [oriolo04] and Freda et al. [freda05] are among the first to apply these algorithms to exploration, evaluating robot configurations inside the not-previously-sensed free space. In contrast to discrete frontier optimization, which compares utility only at candidate locations, these methods evaluate it over the paths to reach them, guaranteeing that the executed path is optimal among the considered subset.
On the other hand, globally optimal solutions have been considered under the umbrella of continuous-state POMDPs. Although their resolution would ideally require computing a policy over the infinite-dimensional space of posteriors of the joint state space [van12b], and computing an exact solution is known to be intractable in general [madani99], active SLAM as a continuous-state POMDP can be approximately solved under the frameworks of belief-space planning (BSP) or optimal control. Such optimization techniques require a continuous utility function, which can be obtained directly from complex continuous representations of the environment [jadidi18] or inferred from discretized representations. For example, Vallvé and Andrade-Cetto [vallve14] compute a dense entropy field by evaluating the posteriors over the discretized configuration space.
VI-A Belief-Space Planning (BSP)
Continuous-domain BSP optimizes the future trajectory of the robot without discretizing the action space, but rather performing a continuous optimization. Bai et al. [bai14] and Kontitsis et al. [kontitsis13] use sampling-based methods to maximize an objective function that rewards uncertainty reduction and goal achievement. Platt et al. [platt10] apply linear quadratic regulation (LQR) to compute locally optimal policies. Van Den Berg et al. [van12] relax the assumption that future observations are consistent with the current robot pose belief (ML observations). Indelman et al. [indelman15] extend [van12] to the case where the belief describes both robot poses and unknown landmarks in the environment, while also modeling missed observations. Porta et al. [porta05] generalize value iteration to continuous-state POMDPs while assuming state-dependent reward functions. Van den Berg et al. [van12b] present a highly efficient method for solving continuous POMDPs in which beliefs can be modeled using Gaussian distributions over the state space. Prentice and Roy [prentice09] develop a belief-space variant of the PRM algorithm called the belief road map (BRM), incorporating predicted uncertainty of future position estimates into the planning process. Valencia et al. [valencia13] contribute a pose-SLAM path-planning approach that leverages the BRM to find a path to the goal with the lowest accumulated pose uncertainty.
VI-B Active SLAM as Optimal Control
Converting a POMDP formulation of active SLAM into an equivalent continuous-space MDP, as discussed in Section II, leads in general to a stochastic optimal control problem. Depending on the transition and observation models, noise distribution, and reward function, the problem may be simplified further. Le Ny et al. [leny09] and Atanasov et al. [atanasov14] show that when the transition and observation models are linear in the state and the noise is Gaussian, the time evolution of the belief state may be obtained by the Kalman filter and the covariance is independent of the measurement realizations. If the reward function depends only on the covariance, as is the case for the negative entropy and the MI, the active SLAM problem reduces from a stochastic to a deterministic optimal control problem. Deterministic optimal control problems are easier to solve, and techniques such as linear quadratic Gaussian (LQG) regulation [koga21], search-based [atanasov14, schlotfeldt19], and sampling-based [hollinger14, lan16, kantaros19] motion planning are applicable. If the assumptions necessary for the deterministic reduction cannot be satisfied, the stochastic active SLAM problem may be solved by first obtaining an open-loop control sequence under deterministic dynamics, followed by a closed-loop feedback policy under stochastic dynamics linearized around the open-loop trajectory [koga22].
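The deterministic reduction can be checked numerically: the Kalman covariance recursion below depends only on the models, never on the measurement values, so any covariance-based reward can be evaluated deterministically over a candidate control sequence (the linear models here are generic placeholders, not from any cited work):

```python
import numpy as np

def predict_covariances(P0, A, C, Q, R, horizon):
    """Propagate the Kalman covariance over a planning horizon:
      predict:  P <- A P A^T + Q
      update:   P <- (I - K C) P,  with K the Kalman gain.
    No measurement realization appears anywhere, which is what makes
    entropy/MI rewards deterministic in the linear-Gaussian case."""
    P = np.asarray(P0, float)
    out = []
    for _ in range(horizon):
        P = A @ P @ A.T + Q                      # open-loop uncertainty growth
        S = C @ P @ C.T + R                      # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
        P = (np.eye(P.shape[0]) - K @ C) @ P     # information from sensing
        out.append(P.copy())
    return out
```

Comparing the final covariance (or its entropy) across candidate control sequences then turns planning into a deterministic search, as described above.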
In the presence of state or action constraints, the optimal control formulation of active SLAM can be approached using differential dynamic programming (DDP) or model predictive control (MPC). Rahman and Waslander [rahman21] introduce an augmented Lagrangian formulation of iterative LQG, which captures belief-state constraints via a penalty function. The approach iterates between iLQG trajectory optimization in an unconstrained stochastic optimal control problem and Lagrange multiplier updates for the penalty function. This and several other works [murali19, rahman21, koga21] develop differentiable formulations of sensor field-of-view constraints amenable to gradient-based optimization. Carlone and Lyons [Carlone14icra] split the environment into convex regions and formulate the problem using mixed-integer programming. Chen et al. [chen20b] employ a spectrahedral description of the convex hull of the space of orientations and relax non-convex obstacle constraints using a convex half-space representation.
Striking a suitable balance between exploration and exploitation in active SLAM is challenging because the effects of potential future loop closures are not easy to capture in the predicted evolution of the belief. Leung et al. [leung06, leung08] introduce attractor states to guide the robot based on three modes (explore, improve map, and improve localization), determined using uncertainty thresholds. Attractor states were combined with a right-invariant extended Kalman filter in [xu21] to achieve active range-bearing landmark-based SLAM.
VII Deep-Learning-based Approaches
Advances in DL have created new opportunities in using neural networks to solve active SLAM; these techniques follow a completely different scheme, circumventing the split into three stages that characterizes traditional approaches. Usually, goal identification is not required due to the chosen action set, and utility computation and selection of the best action are both embedded in the network. In this section, we particularly focus on DRL methods for autonomous robotic exploration and discuss the design of the state, action, and reward spaces, as well as the problems of partial observability, generalization, and the necessity for training environments.
VII-A Deep Reinforcement Learning (DRL)
The first question that arose in early work on learning-based active SLAM was which type of learning was suitable for this decision-making problem, in which (i) agents must learn directly from interaction with the environment, (ii) the state may not be fully observable, and (iii) policies have to generalize to other scenarios for which no a priori knowledge exists. These premises soon set supervised learning aside and led the community to explore deep reinforcement learning (DRL) [gallos19], the natural extension of RL with neural networks, to attack (PO)MDPs. In particular, it is model-free DRL that has attracted the most attention, although isolated approaches that combine it with model-based planning in an end-to-end fashion do exist [karkus17]. In contrast to traditional approaches, path planning is no longer required, decision-making is embedded in the network, and the SLAM module only affects the agent during training. Still, note that SLAM will be required during testing if the system is expected to build a map of the environment, and for some DRL configurations (e.g., if the network requires the robot’s pose [hu20] or the map [zhu18] as inputs, or for navigation [li19]). Contrary to traditional approaches, the computational effort in DRL-based approaches is mostly confined to the training phase, while the testing phase reduces to a forward pass through the network.
The great success of the work by Mnih et al. [mnih15] boosted research in model-free DRL, and several value- and policy-based methods emerged shortly after. The behavior of deep Q-networks (DQNs) [mnih15] improves using the double [hasselt16] and double-dueling (D3QN) [hessel18] architectures. Rather than using a value function approximator, proximal policy optimization (PPO) [schulman17] directly maximizes the future expected rewards. Actor-critic techniques combine value-iteration and policy-gradient methods, e.g., the deep deterministic policy gradient (DDPG) [lillicrap15], asynchronous advantage actor-critic (A3C) [mnih16], and soft actor-critic (SAC) [haarnoja18]. We refer the reader to [arulkumaran17] and [zeng20] for surveys of these methods. Although all these strategies were initially proposed for different decision-making problems (e.g., video games), they have been successfully applied to robotic exploration.
VII-B On the Reward Function Design and the Action Set
Tai and Liu [tai16] are among the first to employ DRL for robotic exploration, extracting the next best actions to execute from a discrete set using a 2-layer DQN. Convergence speed towards an optimal policy notably improves using parallel simulation environments, e.g., with A3C [mirowski17] and asynchronous DDPG [tai17], at the cost, however, of increased training complexity. In any case, the above works use purely extrinsic reward functions, which have been shown to solve the obstacle-avoidance problem rather than active SLAM [placed20]. As a response, the notions of motivation and curiosity [ryan00] were exploited to design intrinsic rewards, giving rise to curiosity-driven methods that motivate agents to visit unknown configurations [bellemare16]. Similarly, the idea of uncertainty minimization —borrowed from traditional active SLAM— led to uncertainty-aware approaches. This is the case of [zhelo18, oh19], which encourage the visit of high-covariance states, and of [chaplot19b, gottipati19], where the reward encodes the belief accuracy; all of these use A3C. True uncertainty metrics inherited from classic theories have also been introduced in the reward function design, such as the entropy [li19], MI in DQN [chen19] and A3C [niroui19], or the KLD between priors and posteriors [houthooft16]. Recently, metrics from TOED have been embedded in the learning process. For instance, an optimality criterion of the virtual map generated by the EM algorithm is combined with graph neural networks in [chen20c], and $D$-optimality is used in [placed20] together with D3QN and prioritized experience replay [schaul16]. The design of effective reward functions is still an open problem, crucial to achieving fast training and good generalization.
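As an illustration of such a shaped reward, a sketch combining sparse extrinsic terms with an intrinsic uncertainty-reduction term (all weights and term choices below are illustrative assumptions, not the design of any cited work):

```python
import numpy as np

def d_opt(cov):
    """D-optimality of a covariance: det(cov)^(1/n)."""
    cov = np.asarray(cov, float)
    _, logdet = np.linalg.slogdet(cov)
    return float(np.exp(logdet / cov.shape[0]))

def step_reward(prev_cov, new_cov, collided, reached_goal,
                w_info=1.0, w_goal=10.0, w_crash=-10.0):
    """Per-step reward = intrinsic pay for reducing D-opt of the pose
    covariance between consecutive steps, plus sparse extrinsic bonuses
    and penalties (illustrative weights)."""
    r = w_info * (d_opt(prev_cov) - d_opt(new_cov))   # uncertainty reduction
    if reached_goal:
        r += w_goal
    if collided:
        r += w_crash
    return r
```

With only the extrinsic terms the agent is rewarded for not crashing, which is the failure mode noted above; the intrinsic term is what couples the policy to the SLAM belief.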
In most learning-based active SLAM works, the action set is a discrete set of robot velocities to be executed over a fixed time horizon [zhelo18, chaplot19b, gottipati19, placed20]. Therefore, in contrast to approaches that distribute possible destinations inside a local neighborhood [li19], the SLAM estimates are not required for navigation. In fact, this formulation also selects candidate actions independently of the map. At first glance, the use of such local action sets could be regarded as a myopic approach that does not account for long-term rewards, but network memorization makes uncertainty reduction during long-term exploration possible. Policy-gradient methods also enable continuous action spaces, e.g., by directly extracting optimal continuous velocity commands [tai17, hu20]. In contrast to the above methods, [niroui19] and [chen20c] use the network to estimate the utility of detected frontiers, thereby linking the action sets of traditional and DRL approaches.
VII-C Partial Observability and Generalization
Partial observability and generalization are two inherent and often-forgotten concepts in active SLAM. First of all, the uncertainty about the observations and actions taken, together with the limited observations of the environment, makes the problem not fully observable. Consequently, agents are unable to distinguish their own true state from single observations, and learned policies are bound to be suboptimal [zhu21]. This issue can be alleviated by expanding the network inputs with previous observations or rewards [mirowski17] or, alternatively, by adding memory to the agent so it can learn from previous data on its own [hausknecht15, karkus17]. In general, network inputs directly consist of sensor data in both training and testing phases [tai16, tai17, placed20], although some architectures require more complex inputs (e.g., a map) [zhu18, hu20] or even assume the availability of ground-truth information (thereby circumventing partial observability) during training [chaplot19].
The second element intrinsic to active SLAM is the lack of prior knowledge of the environment, which makes the agents’ ability to transfer their knowledge to new scenarios crucial. Over-fitting can be mitigated by using sparser network inputs [shi19, placed20] or by expanding the training data [zhelo18], although the latter significantly increases training time. The related task of transferring trained agents to real scenarios (sim-to-real) is still an open research problem, and few efforts have been made in this direction [tai17].
VII-D Training Environments
The use of DRL introduced a major challenge during training: the need for a simulation environment in which to acquire data online. Unlike supervised methods, training with offline data is not possible, and real-world training seems infeasible. To overcome this problem, many works use their own simplified simulation scenarios, consisting of grid-worlds and agents that move discretely in them. Network inputs are thereby restricted to ground-truth locations [niroui19] or complete map representations [zhu18]. In order to feed more realistic data, such as images [tai16] or laser measurements [tai17, placed20], more complex simulators are required.
Stage [gerkey03] is one of the simplest engines used in the literature [niroui19], although it restricts perception to two-dimensional bitmapped environments. Gazebo [koenig04] is a much more complete simulator that allows for 3D simulations, realistic rendering, visual sensors, etc. In addition, it is tightly integrated into the widespread Robot Operating System (ROS), which makes its use commonplace [tai16, tai17, placed20]. CoppeliaSim/V-REP [rohmer13] also allows for online mesh manipulation, but it is not an open-source solution and is less integrated into ROS, limiting its adoption. Combining a physics engine (i.e., robot motion and sensor models) with a DRL framework is not always straightforward. Zamora et al. [zamora16] present a powerful framework that integrates the RL toolkit OpenAI Gym [brockman16] with ROS and Gazebo.
In contrast to the above platforms, initially designed for robotics simulations and later adapted to DRL, there is a second family of simulators born in the age of AI. They tend to prioritize training speed over the breadth of simulation capabilities. DeepMind Lab [beattie16] allows agents to move discretely in low-textured, game-like scenarios, and provides access to a visual sensor and velocity. Habitat-Sim [savva19] takes a leap forward by supporting physics simulation and different robot and visual sensor models. More interestingly, it has the powerful capability of rendering simulation environments from image datasets, e.g., Replica [straub19]. iGibson [li21] also provides fast visual rendering and physics simulation, and includes simulation of lidar and optical flow sensors. The ROS ecosystem is already integrated in [li21], whereas [savva19] requires the use of external libraries. Despite their potential, none of these platforms has yet been used for DRL in the context of active SLAM.
VIII Multi-robot Active SLAM
The active SLAM problem can be extended to a multi-agent setting, where robots optimize their sensing trajectories collaboratively to estimate a common map of the environment. Each robot has its own state and action spaces. Applying an active SLAM algorithm to the joint state and action spaces can generate desirable behavior but becomes computationally infeasible as the number of robots increases, because the complexity of centralized algorithms scales exponentially with the number of robots [atanasov15]. Such algorithms also require collecting all robot measurements and performing joint optimization on a centralized server before communicating the planned actions back to the individual robots. If the robot team is small and connectivity is maintained at all times, centralized algorithms can be used to plan all robot trajectories simultaneously. For example, Charrow et al. [Charrow_AuRo14] achieve multi-robot target tracking by maximizing the MI between the target location and range-only observations over a set of motion primitives. However, larger teams with intermittent communication and limited onboard computation require decentralized algorithms, where individual robots solve smaller instances of the active SLAM problem, or fully distributed algorithms, where the robots exchange information only with their neighbors. Kantaros et al. [kantaros19, kantaros21] propose an informative planning222 Informative path planning can be considered a generalization of active SLAM to include objectives beyond the quality of localization and mapping, e.g., for target tracking or environmental monitoring. technique which constructs random trees of control sequences and is particularly simple to distribute. The algorithm scales to very large numbers of sensors and targets and is probabilistically complete and asymptotically optimal.
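A toy sketch of the coordinate-descent idea used to sidestep the exponential joint action space (in the spirit of decentralized planning such as [atanasov15]; the per-action FIMs and the log-det objective below are illustrative stand-ins for real trajectory scoring):

```python
import numpy as np

def joint_info(fims):
    """Toy team objective: log-det of the total information contributed
    by the prior and all robots' chosen actions."""
    _, logdet = np.linalg.slogdet(sum(fims))
    return logdet

def coordinate_descent_plan(per_robot_candidates, prior_fim):
    """Each robot greedily picks its best action *given* the choices
    already made by the preceding robots, so the search cost grows
    linearly (not exponentially) with the number of robots."""
    chosen = []
    for candidates in per_robot_candidates:       # one pass over the team
        best = max(
            candidates,
            key=lambda Y: joint_info([prior_fim] + chosen + [np.asarray(Y, float)]),
        )
        chosen.append(np.asarray(best, float))
    return chosen
```

Because each robot conditions on its predecessors' choices, complementary behavior (e.g., covering directions the others left uncertain) emerges without ever enumerating the joint action space.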
A particularly important instance of the problem is collaborative multi-robot exploration, where the robots aim to coordinate how to efficiently explore different regions of the environment. Early works such as [burgard00, burgard05] present an approach for choosing appropriate frontiers while simultaneously taking into account their utility and the cost of reaching them. Each time a target point is assigned to some specific robot, the utility of the unexplored area visible from that frontier is reduced. This mechanism is used to assign different frontiers to different robots. Colares and Chaimowicz [colares16] develop a decentralized multi-robot formulation of the classical frontier-based exploration method. The authors use an objective function that captures the frontier entropy and distance, and a robot coordination factor that penalizes regions that other robots are already exploring. Atanasov et al. [atanasov15] consider a multi-agent active information acquisition problem, in which an information measure is maximized over a discrete space of agent trajectories, and propose a decentralized planning scheme using coordinate descent in the space of agent trajectories. Schlotfeldt et al. [schlotfeldt18] introduce an anytime search-based planning formulation that progressively reduces the suboptimality of the multi-agent plans while respecting real-time constraints. Instead of using search-based planning, Ossenkopf et al. [ossenkopf19] generate candidate robot actions using RRT*. The sampling is biased to prioritize exploration, map improvement, or localization improvement. The map and robot state entropy is evaluated along the planned trajectories in two stages: short-horizon exact computation using filter updates, and long-horizon approximation using predicted loop closures. Lauri et al. [lauri17] introduce a decentralized $\rho$-POMDP, allowing the specification of an information-theoretic objective.
The authors show that a multi-agent A* algorithm that searches the joint policy space can be applied to belief-dependent rewards to achieve cooperative target tracking with periodic communication. Hu et al. [hu20] design a hierarchical control approach for cooperative exploration, combining a high-level region-assignment layer and a low-level safe-navigation layer. The former uses dynamic Voronoi partitions to assign different regions to individual robots; the latter achieves collision-free navigation to successive frontier points using DRL.
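The high-level region-assignment layer of [hu20] relies on dynamic Voronoi partitions; a minimal static sketch of the underlying idea, assuming point robots, discrete cells, and Euclidean distances (the actual method updates the partition online), is:

```python
import math

def voronoi_partition(robots, cells):
    """Assign each free cell to its nearest robot (a discrete Voronoi
    partition); each robot then pursues frontiers only inside its region."""
    regions = {r: [] for r in robots}
    for c in cells:
        nearest = min(robots, key=lambda r: math.dist(robots[r], c))
        regions[nearest].append(c)
    return regions
```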
Another important instance is collaborative multi-robot active estimation, where the goal is to seek actions that actively reduce the uncertainty over relevant random variables. For instance, Kontitsis et al. [kontitsis13] develop a multi-robot active SLAM method that uses a relative entropy optimization technique [botev13] to select trajectories which minimize both localization and map uncertainties. Indelman [indelman18] develops a collaborative multi-robot BSP framework, which incorporates reasoning about future observations of environments that are unknown at planning time. That approach has been extended in [regev16] to a decentralized setting. Best et al. [best16] propose the self-organizing map algorithm, considering the problem of multi-robot path planning for active perception and data collection tasks. Chen et al. [chen20] leverage graph connectivity indices and their relationship to optimality criteria to achieve multi-robot active graph-SLAM. Each robot aims to improve the pose graphs of the other agents by sharing its observations when it moves near areas where they have low connectivity.
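One concrete instance of the connectivity-optimality link exploited in [chen20] is the Matrix-Tree theorem: the number of spanning trees of the pose graph, obtained as the determinant of a reduced graph Laplacian, grows with loop closures and is closely related to the D-optimality of the graph-SLAM estimate. A minimal exact-arithmetic sketch (illustrative only, with unit edge weights; not the specific indices used in [chen20]):

```python
from fractions import Fraction

def spanning_trees(num_nodes, edges):
    """Number of spanning trees of a pose graph via the Matrix-Tree theorem:
    the determinant of the Laplacian with one row and column removed."""
    n = num_nodes
    L = [[Fraction(0)] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    # reduced Laplacian: drop row and column 0
    M = [row[1:] for row in L[1:]]
    # exact Gaussian elimination to compute the determinant
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if M[r][col] != 0), None)
        if pivot is None:
            return 0  # disconnected graph: no spanning tree
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            factor = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= factor * M[col][c]
    return int(det)
```

Adding a loop-closure edge to a chain of poses strictly increases the count, which is why sharing observations near weakly connected areas is informative.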
IX Open Research Questions
Active SLAM still requires substantial research to support the deployment of fully autonomous robots in complex environments. The challenges involved span many research fields, so cooperation among them is crucial to achieve real-world impact. In this section, we present what we consider the most important open research questions. While some of them are long-known challenges already under intense investigation, others have not received such attention.
IX-A Prediction Beyond Line-of-Sight
The resolution of active SLAM relies on fast and precise predictions of the future states of the variables of interest. Accurately predicting the scene and the robot pose after executing a set of candidate actions can make the difference between taking the right decision and not. The expected sensed space and the resulting map representation have traditionally been predicted using a sensor model together with ray-casting techniques [burgard05, carrillo18]. Recent related work, however, addresses the problem of scene completion and occupancy anticipation from a DL perspective. Fehr et al. [fehr19] use a neural network to augment the measurements of a depth sensor, and Ramakrishnan et al. [ramakrishnan20] directly predict augmented OG maps beyond the sensor’s field-of-view using auto-encoders (AE). Rather than using raw sensor measurements, Katyal et al. [katyal21] and Hayoun et al. [hayoun20] extend an input OG map beyond the line-of-sight, also using AE. Shrestha et al. [shrestha19] instead predict maps of occupancy probabilities with variational AE. Dai et al. [dai18] perform scene prediction over 3D SDF-based maps. All these methods seem promising for fast and precise online map prediction beyond line-of-sight, although their integration into active SLAM is yet to be accomplished and brings numerous associated challenges. For instance: How does scene prediction behave in unstructured environments? How to account for uncertainty in the predictions? Is measurement prediction more reliable and informative than direct map/scene prediction? How to predict the effect of a specific set of non-myopic actions on the map, rather than augmenting the whole scene? Regarding the latter, [richter17, richter18] and [asraf20] subordinate predictions to candidate actions.
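The traditional ray-casting prediction mentioned above can be sketched in a few lines, assuming a ternary occupancy grid and axis-aligned unit-step rays (a simplified sensor model, not that of any cited work): from a candidate pose, each ray accumulates the unknown cells it reaches before hitting an obstacle, yielding the expected newly sensed space.

```python
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def visible_unknown(grid, pose, rays, max_range):
    """Classic ray-casting prediction: cast rays from a candidate pose and
    count the unknown cells reached before an obstacle or the map border."""
    seen = set()
    x0, y0 = pose
    for dx, dy in rays:
        for step in range(1, max_range + 1):
            x, y = x0 + dx * step, y0 + dy * step
            if not (0 <= x < len(grid) and 0 <= y < len(grid[0])):
                break  # ray left the map
            if grid[x][y] == OCCUPIED:
                break  # ray blocked by an obstacle
            if grid[x][y] == UNKNOWN:
                seen.add((x, y))
    return len(seen)
```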
On the other hand, the robot state is straightforwardly estimated using motion models or path planners. However, predicting its associated uncertainty is not trivial and requires more attention. The work of Asraf and Indelman [asraf20] is among the very few efforts to combine data-driven scene prediction with BSP. In addition, they use the predicted observations to forecast the posterior uncertainty over the robot trajectory. Besides the robot’s movement, it is the appearance of loop closures (exploitation) that most significantly affects the uncertainty of the new states, making its forecast critical. Although some isolated works have partially studied this problem [stachniss04, fairfield10, placed21], it remains an open challenge.
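A deliberately simplified sketch of such an uncertainty forecast, assuming a scalar pose variance, additive motion noise, and a fixed contraction factor at predicted loop closures (all hypothetical parameters, in the spirit of [stachniss04] rather than a faithful reproduction of any cited method):

```python
def forecast_variance(path, visited, motion_noise=0.05,
                      loop_gain=0.5, sigma0=0.1):
    """Toy uncertainty forecast along a candidate path: the variance grows
    with every motion step, and a predicted loop closure (revisiting a
    previously mapped place) contracts it by loop_gain."""
    var, history = sigma0, []
    for pose in path:
        var += motion_noise          # odometry noise accumulates
        if pose in visited:          # predicted loop closure
            var *= loop_gain
        history.append(var)
    return history
```

Even this caricature exhibits the key qualitative behavior: paths ending at known places are forecast to be less uncertain than purely exploratory ones.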
IX-B From Active SLAM to Active Spatial Perception
Most active SLAM approaches reason over geometric representations of the environment (e.g., OG maps). However, when we humans explore new environments, we are mostly interested in their semantic elements (e.g., presence of objects, rooms) rather than the shape of the environment per se. Modern SLAM systems are now able to build 3D metric-semantic maps in real time from semantically labeled images, see [rosinol20] and the references therein. These maps include both occupancy information and semantic labels of entities (e.g., chairs, tables, humans) in the environment. Very recent work goes even further and develops spatial perception systems that infer hierarchical map representations, in the form of 3D scene graphs [rosinol21, armeni19, hughes22]. These are high-level representations of an environment that capture everything from low-level geometry (e.g., a 3D mesh of the environment) to high-level semantics (e.g., objects, people, rooms, buildings). While there is a growing amount of work on estimating these high-level representations from sensor data, their use in active SLAM is still uncharted territory. Early efforts in this direction include the work of Ravichandran et al. [ravichandran22], which focuses on object search using 3D scene graphs.
Active metric-semantic information acquisition, or Active Spatial Perception, has the potential to impact many aspects of robot autonomy: (i) by leveraging semantic knowledge, a robot can more effectively predict unseen space (e.g., predict the presence of rooms or objects in each room), (ii) the use of semantics can further enhance SLAM performance (e.g., allowing for novel loop closure detection methods [hughes22]), and (iii) hierarchical representations may enable novel and more computationally efficient planning methods. However, each opportunity comes with many open research questions, for instance: How to quantify uncertainty over metric-semantic or even hierarchical scene representations? How to leverage hierarchical structures to improve computation? How to perform spatial prediction in hierarchical representations?
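To fix ideas, a minimal data structure conveying the hierarchical flavor of a 3D scene graph is shown below (the layer numbering and labels are illustrative only, not those of [rosinol21] or [hughes22]); planning over such a structure could, for instance, reason over rooms before descending to objects and geometry.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str    # semantic label, e.g., "kitchen" or "chair"
    layer: int    # hypothetical layers: 1=objects, 2=rooms, 3=building
    children: list = field(default_factory=list)

def count_layer(root, layer):
    """Count the nodes at a given layer of the hierarchy."""
    hits = int(root.layer == layer)
    return hits + sum(count_layer(c, layer) for c in root.children)
```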
IX-C Robust Online Belief Space Planning and Active SLAM
Another key aspect is data association, i.e., the association between measurements and the corresponding landmarks (or other entities in the map representation). In perceptually aliased and ambiguous environments, determining the correct data association is challenging, and incorrect associations may lead to catastrophic failures. The research community has been investigating approaches for robust perception to allow reliable and efficient operation in ambiguous environments (see, e.g., [sunderhauf12, olson13, Yang20ral-GNC, indelman16csm, hsiao19, shelly22]). Yet, these approaches focus on inference (rather than planning), i.e., actions are given. Only recently has ambiguous data association also been considered within BSP and, in particular, active SLAM. Pathak et al. [pathak18] incorporate, for the first time, reasoning about future data association hypotheses within a BSP framework, enabling autonomous hypothesis disambiguation. Another related work in this context is [hsiao20], which also reasons about ambiguous data association in future beliefs while utilizing the graphical model presented in [hsiao19]. A first-moment approximation to Bayesian inference with random sets of targets, known as the probability hypothesis density (PHD) filter, has been successfully applied to active target tracking problems [dames20]. However, explicitly considering all possible data associations leads to an exponential growth in the number of hypotheses, and determining the optimal action sequence quickly becomes intractable. Shienman and Indelman [shienman22] recently presented an approach that utilizes only a distilled subset of hypotheses to solve BSP problems while reasoning about data association, providing performance guarantees in a myopic setting. Nevertheless, BSP and active SLAM in these challenging settings remain open problems. More generally, finding an appropriate simplification of the original BSP or active SLAM problem that is easier to solve, with no or bounded loss in performance, is an exciting and novel direction [indelman16ral, elimelech22, shienman22, barenboim22ijcai].
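The exponential hypothesis growth, and the effect of keeping only a distilled subset, can be caricatured as follows (hypothetical landmark names and likelihoods; the selection rule of [shienman22] is far more principled than simple top-k pruning): with m ambiguous measurements and two candidate landmarks each, full enumeration yields 2^m hypotheses, while pruning bounds the count at every step.

```python
def grow_hypotheses(hypotheses, association_likelihoods, keep=3):
    """Each ambiguous measurement branches every hypothesis; keeping only
    the `keep` most likely ones bounds the otherwise exponential growth."""
    branched = [(h + (a,), w * l)
                for h, w in hypotheses
                for a, l in association_likelihoods.items()]
    branched.sort(key=lambda hw: hw[1], reverse=True)
    return branched[:keep]

# Start from a single empty hypothesis of weight 1, then process two
# ambiguous measurements, each compatible with landmarks lmA and lmB.
hyps = [((), 1.0)]
for likelihoods in [{"lmA": 0.7, "lmB": 0.3}, {"lmA": 0.6, "lmB": 0.4}]:
    hyps = grow_hypotheses(hyps, likelihoods)
```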
IX-D Reasoning in Dynamic and Deformable Scenes
One of the most common assumptions in active SLAM is that the environment is static, or at best only slightly dynamic. Real scenes, however, usually contain moving agents and even deformable elements (e.g., clothes, water). Handling these elements would greatly improve the robot’s autonomy, reasoning ability, and awareness, and would facilitate its deployment in real-world scenarios.
The study of dynamic environments has long been a topic of interest for the path planning [van05] and the SLAM [saputra18] communities; but its investigation in the context of active SLAM has been typically restricted to the action execution step (i.e., re-planning) [trivun15, maurovic17]. However, many other relevant aspects emerge when considering reasoning with dynamic elements: How to foresee their effects in planning? How to integrate them in the utility function? How to maintain a lightweight representation?
Non-rigid environments present an even greater challenge. Planning for mobile robots in deformable environments started receiving some attention a couple of decades ago [anshelevich00, rodriguez06]. Medical applications have also stimulated progress on SLAM in deformable environments [newcombe15, lamarca20]. However, to date, no efforts have been made towards developing a deformable active SLAM framework. We believe this is partly due to the unavailability and complexity of simulators for mobile robots in deformable environments, and partly due to the difficulty in extending the current map representations to deformable scenes. Given the importance of obtaining accurate robot trajectory estimates towards mapping deformable environments, active SLAM can play a major role in this area.
IX-E Towards Meaningful and Autonomous Stopping Criteria
Unlike in coverage and exploration of known environments, determining when the active SLAM task is complete is non-trivial. Active SLAM is a computationally expensive process: a vast amount of resources is required to estimate and optimize utility online, conditioning the execution of other tasks. It is therefore crucial to understand when this process can be considered complete so that other tasks can be prioritized. Cadena et al. [cadena16] already identified this issue as an open research question, but little research has been done on the topic. Even recent active SLAM works still rely on traditional temporal [carrillo18, placed21iros] or spatial [chen20b, xu22] constraints to decide when exploration has terminated. These metrics, however, cannot be used in truly unknown environments, nor do they assert task completion (see [placed22]). TOED-based metrics have been identified as a promising tool [cadena16, lluvia21, placed22] to determine when a given exploration strategy is no longer adding information. Nevertheless, many questions remain to be answered: How to guarantee task completion? How to transition between exploration strategies? Also, the advent of DRL approaches raises a new question: when to stop training?
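An information-driven stopping rule of this kind can be sketched in a few lines, assuming a scalar expected-information-gain signal per decision step (the threshold and patience values are arbitrary, and this is only one of many possible TOED-style criteria):

```python
def should_stop(gain_history, threshold=0.05, patience=3):
    """Declare exploration complete once the best candidate's expected
    information gain has stayed below `threshold` for `patience`
    consecutive decision steps."""
    if len(gain_history) < patience:
        return False
    return all(g < threshold for g in gain_history[-patience:])
```

Requiring several consecutive low-gain steps guards against stopping prematurely when a single decision step happens to be uninformative.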
IX-F Reproducible Research in Active SLAM
The increasing attention towards active SLAM creates the need for new benchmarks to objectively evaluate new findings against existing research. This has long been a challenge in the robotics community [del2006benchmarks], where real-life robotics experiments are often difficult to replicate across research groups. In related problems, such as SLAM, static datasets are commonly used for benchmarking. However, in active SLAM, the agent must interact with the environment, making the use of datasets impractical. In recent years, a significant effort has been made in robotics to address challenges in benchmarking [calli2015benchmarking] and reproducibility [pineau2021improving]. Despite these efforts, such benchmarks are still lacking in active perception.
Typically, in active SLAM, researchers select a set of scenarios (e.g., platform, task, and environment) representative of the desired application, and experiments are conducted in simulation via customized simulators or in the real world via specialized hardware. While such an evaluation is adequate for validation, the chosen scenario may not be general enough, or may not be specified in sufficient detail to be reproduced. Consequently, one-to-one comparisons are rarely made between approaches. While targeting more general embodied agents, several open-source datasets [ammirato2018active] and simulators [savva19, hall2022benchbot, xu22] show promise for active SLAM research. Also, some open-source frameworks [umari17, placed21] make the comparison and testing of new algorithms straightforward, requiring only the decision-making module to be modified. While some works take advantage of these simulators and datasets [chaplot19], establishing a proper methodology for evaluating active SLAM, particularly regarding generalization from simulation to the real world, remains an open question. Besides, there is a dire need to establish adequate performance metrics for active SLAM that go beyond the commonly used exploration time and coverage. Improving the quality of estimates is the main objective of active SLAM, and should therefore be measured.
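Two such estimate-quality metrics, map entropy and absolute trajectory error, can be computed with a few lines (simplified setting: a flat list of cell occupancy probabilities and trajectories assumed to be already aligned):

```python
import math

def map_entropy(prob_grid):
    """Shannon entropy of an occupancy grid (in bits): lower values mean a
    more certain map, complementing coverage as an evaluation metric."""
    h = 0.0
    for p in prob_grid:
        if 0.0 < p < 1.0:
            h -= p * math.log2(p) + (1 - p) * math.log2(1 - p)
    return h

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error: RMSE over aligned pose positions."""
    n = len(estimated)
    return math.sqrt(sum(math.dist(e, g) ** 2
                         for e, g in zip(estimated, ground_truth)) / n)
```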
IX-G Practical Applications
Although active SLAM methods have practical relevance in many real-world problems such as search and rescue, where constructing a sound representation of the environment is time critical, very few practical implementations and deployments of active SLAM have been described in the literature. Walter et al. [walter08] propose a partially autonomous system for underwater ship hull inspection. Kim and Eustice [kim15] deploy a complete active SLAM system. Palomeras et al. [palomeras19, palomeras19b] report the autonomous exploration of complex underwater environments for environmental preservation purposes. Fairfield and Wettergreen [fairfield10] investigate terrestrial applications and autonomous mapping of abandoned underground mines. A roughly similar application but in the archaeological context of catacomb exploration is presented in [serafin16]. Strader et al. [strader20] report experiments of active perception in a Mars-analogue environment. Finally, assistive mapping examples for office-like environments can be found in [newman03, li20, hsiao20]. Aerial applications of active SLAM are significantly less common. Chen et al. [chen20b] propose an MPC framework to address coverage tasks while maintaining low uncertainty estimates.
Overall, there are very few reports of field experiments involving active SLAM systems. Moreover, as of 2022 there is a large imbalance between patents using the terms SLAM and active SLAM: about 39,000 for the former and only 31 for the latter (using the queries “simultaneous localization and mapping” and “active slam” OR “active simultaneous localization and mapping”, restricted to priority dates after 1992, in the Google Patents search platform). This indicates that the technology readiness level of active SLAM is not in a deployment phase but in early development. Furthermore, it raises the question of whether active SLAM is important for all applications or whether human supervision is still preferred. Among the roadblocks preventing the transition from theory to applications (including the challenges mentioned in the previous sections), we also remark that the high computational complexity of active SLAM often clashes with application constraints, e.g., the low computational budget available on aerial robots.
X Conclusion
The active SLAM problem, which consists of actively controlling a robot so that it can estimate the most accurate and complete model of the environment, has been a topic of interest in the robotics community for more than three decades, and is now receiving renewed attention, also thanks to the novel opportunities offered by learning-based methods. Despite the role of active SLAM in many applications, the disparity and lack of unification in the literature have prevented the research community from providing a cohesive framework, bringing algorithms to maturity, and transitioning them to real applications. In this paper, we take a step toward this goal by taking a fresh look at the problem and providing a comprehensive survey to serve as a guide for researchers and practitioners.
In particular, we present a unified active SLAM formulation under the umbrella of POMDPs, highlighting the most common assumptions in the literature. Then, we discuss the traditional resolution scheme, which decouples the problem into goal identification, utility computation, and action selection. We delve into each stage, reviewing the most important theories and presenting state-of-the-art techniques. We then review alternative approaches that have drawn great interest and have undergone major advances in recent years, including (continuous) BSP and modern approaches based on DRL. Finally, we discuss relevant work in multi-robot active SLAM.
Besides discussing the historical evolution and current trends in active SLAM, we also identify the most relevant open challenges in this field. These include prediction beyond line-of-sight and active spatial perception, among others. We also emphasize the need for a unified formulation and evaluation metrics that allow for direct comparison between works. Reproducibility and benchmarking need to be addressed for this field to mature and achieve real-world impact.