Monte Carlo Localization in Hand-Drawn Maps

04/02/2015 · by Bahram Behzadian, et al.

Robot localization is one of the most important problems in robotics. Most existing approaches assume that a map of the environment is available beforehand and focus on accurate metrical localization. In this paper, we address the localization problem when no map of the environment is available beforehand and the robot relies on a hand-drawn map from a non-expert user. We address this problem by expressing the robot pose in the pixel coordinates of the drawing and simultaneously estimating a local deformation of the hand-drawn map. Experiments show that we are able to localize the robot in the correct room with a robustness of up to 80%.


I Introduction

Localization, the problem of estimating a robot's pose in the environment, is probably one of the most studied and best understood problems in mobile robotics. Several solutions have been presented in the last decades, most of them based on probabilistic inference over the space of possible robot configurations [22]. Although the existing approaches have been demonstrated to be robust, efficient, and very accurate [3, 6, 16], they mostly rely on one major assumption: the existence of an already available map of the environment, built beforehand with the same sensing modality as the robot.

In some circumstances, however, building such a map can be a nuisance for the user. This is the case, for instance, for vacuum cleaners, lawn mowers, pool cleaners, and many other service robots. Often, when the user buys such a robot, he or she wants to use it immediately, without waiting for an expensive and time-consuming mapping routine to complete. In other cases, building an a-priori map is not even possible, for instance in environments that may harm humans, e.g., a mine field or a toxic factory. Moreover, mapping algorithms may converge to local minima, and the resulting maps might be unusable for navigation. Although automatic tools to detect such inconsistencies exist [14], they require an expert user to analyze the data and correct the map.

In this paper, we address the localization problem when no map of the environment is available beforehand. We propose an algorithm that solely requires a hand-drawn map of the environment, sketched by the user. We believe that drawing such a map puts no burden on the user and is an intuitive task. Furthermore, we do not assume that the map is metrically accurate or proportional up to a single scale. Objects might be missing, and the deformation of the map might change at different locations. This reduces the ability to accurately localize the robot in a metric sense, since the map is bent in different ways and distances are not respected. To address these problems, we extend the Monte Carlo localization algorithm in two ways. First, we express the robot pose in the pixel coordinates of the drawing. This resolves the metrical issues and provides a sort of normalization with respect to the deformation. Second, we augment the state space of the robot with a deformation parameter and track the local deformation of the drawing over time.

Fig. 1: Occupancy grid of the dataset FR079 (top), built using a SLAM algorithm; a hand-drawn map used to localize the robot (middle); and their overlay (bottom).

II Related Work

Robot localization is a widely studied problem [22] and several approaches have been proposed in the past, such as Markov localization [7], Monte Carlo localization (MCL) [3], and multiple hypothesis tracking (MHT) [9]. Those approaches rely on the existence of a prior map and a static environment. Some researchers extended those approaches to handle changes in the environment over time [23, 12, 1]. However, they still rely on metrical maps and on some prior information about the environment, acquired beforehand.

Little work has been done on localization without prior maps. Koenig and Simmons [11] propose a localization approach where the user provides only a topological sketch of the environment. Other authors used floor plan maps available from construction blueprints. Ito et al. [8] propose a localization algorithm that relies on blueprints of the environment. They employ an RGB-D sensor to estimate the walls of the environment and match them to the floor plan. To improve the initialization, they rely on a map of WiFi signal strengths. Setalaphruk et al. [17] employ a multi-hypothesis filter on blueprints. They first compute a Voronoi diagram of the floor plan and store its intersections as landmarks. They then compute the Voronoi intersections from the current readings and match them to those computed from the map. Contrary to our approach, they assume that the blueprint of the environment is metrically correct and that its scale is known.

Most of the works using hand-drawn maps exploit them as a means of communication between the robot and a human operator. Kawamura et al. [10] present an approach where an operator first sketches a rough map of the environment with a set of waypoints to follow. They use a library of known objects to associate perceptions with the sketch map and triangulate the robot position using angle measurements. Skubic et al. [21] propose a sketch interface for guiding a robot. The user draws a sketch of the environment and instructs the robot to go to certain locations. The system computes a path in the hand-drawn map, and the robot executes it with an open-loop controller, without localization. Shah et al. [18] propose a similar approach. The focus of their work, however, is on how to extract qualitative instructions for the robot. They translate the instructions into robot commands and localize the robot using an overhead camera system. Yun and Miura [24] proposed and evaluated a quantitative measure of navigability on hand-drawn maps. The work only focuses on qualitative aspects of navigability but does not address the ability to localize in such maps. Moreover, the sketch maps they considered are made of line segments and are automatically generated. Skubic et al. [20] propose an approach to qualitatively reason about sketch maps. The authors are able to extract information such as "an obstacle is on the right of the robot" and give qualitative commands such as "turn left" or "go straight". The work has been extended by Chronis and Skubic [2] with the use of reactive controllers to guide the robot. Forbus et al. [5] developed nuSketch, a qualitative reasoning framework exploiting topological properties and Voronoi diagrams. They provide answers to queries like finding places with certain properties or properties of paths. Our work is orthogonal to these approaches, and the proposed localization algorithm can be integrated with any of those control interfaces.

Localization using hand-drawn maps has received very little attention from the robotics community. Parekh et al. [15] presented a technique to perform scene matching between a map of the environment and a sketch using spatial relations. They assume a set of objects is present in both the sketch and the map, with known segmentation, and use particle swarm optimization (PSO) to compute the alignment and the sketch deformation. Matsuo and Miura [13] extended this work in a simultaneous localization and mapping (SLAM) framework. They assume, however, that the sketch map is composed of rectangles corresponding to buildings in the scene. Shah and Campbell [19] present an algorithm for controlling a mobile robot using a qualitative map consisting of landmarks and path waypoints. They assume that both the sketch map and the real environment are made of point landmarks and also assume known data associations between them. In contrast, our approach does not make any assumption on the format of the sketch map and treats it as a raster image. Moreover, we do not attempt to transform the hand-drawn map to reflect the real world, but directly localize the robot in the hand-drawn map itself. To the best of our knowledge, the proposed approach is the first attempt to localize a robot in a generic hand-drawn map with no further assumptions.

III Localization in Hand-Drawn Maps

In this section, we describe the extensions we made to the original Monte Carlo localization algorithm [3] for localizing in hand-drawn maps. We propose two main extensions. First, we augment the state space of the robot with an additional variable that represents the local deformation of the map. Second, we localize the robot in the pixel coordinate frame of the map, instead of the world coordinate frame. To do so, we extend both the motion and the observation model to be locally projected onto the hand-drawn map.

III-A Monte Carlo Localization in Pixel Coordinates

The goal of our work is to estimate the pose of the robot $x_t$ and a local scale $s_t$ at time $t$, given the history of odometry measurements $u_{1:t}$ and observations $z_{1:t}$. Formally, this is equivalent to recursively estimating the following posterior:

$$p(x_t, s_t \mid z_{1:t}, u_{1:t}, m) = \eta \, p(z_t \mid x_t, s_t, m) \iint p(x_t \mid x_{t-1}, u_t, s_t) \, p(s_t \mid s_{t-1}) \, p(x_{t-1}, s_{t-1} \mid z_{1:t-1}, u_{1:t-1}, m) \; \mathrm{d}x_{t-1} \, \mathrm{d}s_{t-1} \quad (1)$$

where $\eta$ is a normalization factor and $m$ is the hand-drawn map. The motion model $p(x_t \mid x_{t-1}, u_t, s_t)$ denotes the probability that the robot ends up in state $x_t$ given that it executes the odometry readings $u_t$ in state $x_{t-1}$. The distribution $p(s_t \mid s_{t-1})$ represents the Brownian motion of the scale parameter, and the observation model $p(z_t \mid x_t, s_t, m)$ denotes the likelihood of making the observation $z_t$ given the robot's pose $x_t$, the local scale $s_t$, and the map $m$.

Following the MCL approach, we approximate the distribution as a weighted sum of Dirac delta functions. The recursive estimation is performed using the sequential importance resampling (SIR) algorithm [4]. For the proposal, we sample the pose and the scale from the motion model and the Brownian process, respectively. Under the chosen proposal distribution, we compute the weight of each particle according to the observation model and the recursive term. The particle set is then resampled, at each iteration, according to the weights of the particles.
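To make the filter concrete, the following minimal sketch shows one SIR iteration with the augmented state. It is an illustration under our own assumptions, not the authors' implementation: the particle layout `[x, y, theta, s]`, the `params` container, and the helpers `sample_proposal` and `observation_likelihood` (sketched in the following subsections) are conventions we introduce here.

```python
import numpy as np

def mcl_step(particles, weights, u, scan, sketch_map, params, rng):
    """One sequential importance resampling step of the augmented MCL.

    particles: (N, 4) array, one row [x, y, theta, s] per particle, with
    (x, y, theta) in pixel coordinates of the sketch and s the local scale.
    """
    # 1. Proposal: sample the pose from the projected motion model and
    #    the scale from the Brownian process (Eqs. 2-5 below).
    particles = sample_proposal(particles, u, params, rng)
    # 2. Importance weighting with the observation model p(z | x, s, m).
    weights = np.array([observation_likelihood(scan, p, sketch_map, params)
                        for p in particles])
    weights /= weights.sum()
    # 3. Low-variance (systematic) resampling according to the weights.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx].copy(), np.full(n, 1.0 / n)
```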

To compute the number of particles needed, one can use the KLD-sampling approach of Fox [6]. The algorithm estimates a discrete approximation of the target distribution using the weighted set of particles. During resampling, it computes the Kullback-Leibler divergence each time a new particle is resampled and stops the process when the divergence is below a confidence level.
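For reference, a sketch of the KLD-sampling bound of [6] is given below; the function name and default values are our own choices.

```python
import numpy as np
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, quantile=0.99):
    """KLD-sampling bound of Fox [6]: the number of particles needed so
    that, with probability `quantile`, the KL divergence between the
    particle approximation and the true posterior stays below `epsilon`.
    k is the number of histogram bins holding at least one particle."""
    if k <= 1:
        return 1
    z = norm.ppf(quantile)         # upper quantile of the standard normal
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z) ** 3
    return int(np.ceil(n))
```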

III-B Proposal Distribution

The purpose of the proposal distribution is to provide a means for sampling a new set of particles given the current one. In the original MCL algorithm, the proposal distribution is the robot motion model. In our work, we need to provide a proposal distribution for both the robot pose $x_t$ and the local scale $s_t$. We modified the original motion model described in the MCL algorithm to project the motion of the robot into the image coordinates. Let $x_{t-1}^{[i]}$ and $s_{t-1}^{[i]}$ be the pose and scale associated with the $i$-th particle at time $t-1$ and $u_t$ the incremental odometry measurement. The new pose of the particle is computed as follows:

$$x_t^{[i]} = x_{t-1}^{[i]} \oplus s_{t-1}^{[i]} \left( u_t \oplus \epsilon \right) \quad (2)$$

where $u_t$ represents the odometry reading and $\epsilon$ is a sample from the noise term. We sample $\epsilon$ from a normal distribution and a wrapped normal distribution, respectively, for the translational and the rotational part:

$$\epsilon_{xy} \sim \mathcal{N}(\mathbf{0}, \Sigma_{xy}) \quad (3)$$
$$\epsilon_{\theta} \sim \mathcal{WN}(0, \sigma_{\theta}^2) \quad (4)$$

where $\Sigma_{xy}$ and $\sigma_{\theta}$ are the covariance matrix for the translation and the standard deviation for the rotation. The scale follows a Brownian motion and is sampled according to the following process:

$$s_t^{[i]} = s_{t-1}^{[i]} + \sigma_s \, \epsilon_s, \qquad \epsilon_s \sim \mathcal{N}(0, 1) \quad (5)$$

where $\epsilon_s$ is a sample from a normal distribution and $\sigma_s$ is the standard deviation of the scale increment. Note that we include a standard deviation term in the Wiener process. This is to account for smaller variations than its original formulation. One can formulate Eq. 5 using the standard formulation of the Wiener process by including an additional scaling term to the time increment.

Intuitively, given the pose of a particle and its estimated scale, we first sample a zero-mean transformation from the odometry noise and perturb the odometry measurement accordingly. We then project the perturbed odometry onto the hand-drawn map and apply the projected transformation to the robot pose. The scale sampling has been chosen to be locally close to the scale of the previous step, but still able to explore the whole space. The Brownian motion was a natural choice for that, given its statistical properties.
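A minimal sketch of this proposal sampler (Eqs. 2-5) under the particle layout used above follows. The parameter names `sigma_xy`, `sigma_theta`, and `sigma_s` are our stand-ins for the learned noise values, and we approximate the wrapped normal by wrapping a normal sample.

```python
import numpy as np

def sample_proposal(particles, u, params, rng):
    """Sample the proposal of Eqs. 2-5: perturb the incremental odometry
    u = (dx, dy, dtheta), project it onto the sketch with the particle's
    local scale, and let the scale perform one Brownian step."""
    out = particles.copy()
    n = len(out)
    # Zero-mean noise: Gaussian on the translation (Eq. 3), wrapped
    # Gaussian on the rotation (Eq. 4), approximated here by wrapping.
    eps_xy = rng.multivariate_normal(np.zeros(2), params.sigma_xy, size=n)
    eps_th = rng.normal(0.0, params.sigma_theta, size=n)
    dx = u[0] + eps_xy[:, 0]
    dy = u[1] + eps_xy[:, 1]
    dth = u[2] + eps_th
    # Project the perturbed metric odometry into pixel coordinates:
    # rotate into the map frame and stretch by the local scale s (Eq. 2).
    th, s = out[:, 2], out[:, 3]
    out[:, 0] += s * (np.cos(th) * dx - np.sin(th) * dy)
    out[:, 1] += s * (np.sin(th) * dx + np.cos(th) * dy)
    out[:, 2] = np.mod(th + dth + np.pi, 2.0 * np.pi) - np.pi
    # Brownian motion of the scale (Eq. 5), clipped to stay positive.
    out[:, 3] = np.maximum(s + rng.normal(0.0, params.sigma_s, size=n), 1e-3)
    return out
```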

III-C Observation Model

After we sample the particles according to the proposal distribution, we follow the importance sampling principle and compute the weights of the particle set. With our choice of proposal, the weight of each particle must be proportional to the observation model $p(z \mid x, s, m)$, where we omit the time index for notational simplicity. Intuitively, this model describes the likelihood of the measurement $z$, given the hand-drawn map $m$, the local scale $s$, and the pose of the robot in image coordinates $x$. In this work, we consider 2D laser range finders as sensors. The measurements consist of a set of range values $r_i$ along a set of corresponding directions $\beta_i$. We employ the beam-based model [22] and modify it to project the observations onto the hand-drawn map.

Formally, let $r_i$ be the $i$-th range measurement along the angle $\beta_i$. Let us trace a ray on the map from the robot pose to the closest obstacle in the map and let $d_i$ be the measured distance. The original formulation of the beam-based model considers $r_i$ as being expressed in world coordinates and describes the measurement distribution as a mixture of four components:

$$p(r_i \mid x, m) = z_{\mathrm{hit}} \, p_{\mathrm{hit}}(r_i) + z_{\mathrm{short}} \, p_{\mathrm{short}}(r_i) + z_{\mathrm{max}} \, p_{\mathrm{max}}(r_i) + z_{\mathrm{rand}} \, p_{\mathrm{rand}}(r_i) \quad (6)$$

where $p_{\mathrm{hit}}$ models the measurement noise, $p_{\mathrm{short}}$ models the unexpected obstacles not present in the map, $p_{\mathrm{max}}$ models the sensor maximum range, and $p_{\mathrm{rand}}$ models a uniformly distributed random measurement. The functions are defined as follows:

$$p_{\mathrm{hit}}(r_i) = \mathcal{N}(r_i; \, d_i, \, \sigma_{\mathrm{hit}}^2) \quad (7)$$
$$p_{\mathrm{short}}(r_i) = \mathcal{E}_{[0, d_i]}(r_i; \, \lambda) \quad (8)$$
$$p_{\mathrm{max}}(r_i) = \mathcal{U}_{[r_{\mathrm{max}} - w, \, r_{\mathrm{max}}]}(r_i) \quad (9)$$
$$p_{\mathrm{rand}}(r_i) = \mathcal{U}_{[0, \, r_{\mathrm{max}}]}(r_i) \quad (10)$$

Here, $\mathcal{E}_{[0, d_i]}(\cdot; \, \lambda)$ denotes a truncated exponential distribution with parameter $\lambda$ and support between $0$ and $d_i$, $\mathcal{U}_{[a, b]}$ denotes a uniform distribution between $a$ and $b$, and $w$ is a window parameter. To account for the deformations, we need to project the real measurements coming from the sensor into the image coordinate frame by applying the estimated transformation. In our case, this entails scaling all the ranges according to the estimated scale $s$. The resulting observation model is

$$p(z \mid x, s, m) = \prod_i p(s \, r_i \mid x, m) \quad (11)$$
All the parameters of the model have been learned from real data. To collect the data for the learning phase, we positioned the robot at fixed locations in the environment and drew a few different sketches for each location. Then, for each sketch and location, we performed a grid search to find the best scale. Given the scale, we computed the maximum-likelihood estimate of the parameters, following the approach described in [22].
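The sketch below evaluates the projected beam model of Eqs. 6-11 for one particle. It is an illustrative reading of the equations, not the authors' code: the ray-casting routine `ray_cast`, the attribute names on `params` (`sigma_hit`, `lam`, `window`, and the mixture weights), and the pre-subsampled `scan.ranges`/`scan.angles` arrays are all assumptions of ours.

```python
import numpy as np

def observation_likelihood(scan, particle, sketch_map, params):
    """Projected beam-based model (Eqs. 6-11) for one particle.
    Real ranges are scaled into pixel units by the local scale s; the
    expected range d_i comes from ray casting on the sketch."""
    x, y, th, s = particle
    r_max = s * params.r_max                       # max range in pixels
    log_l = 0.0
    for r, beta in zip(scan.ranges, scan.angles):
        r_px = s * r                               # project range (Eq. 11)
        d = ray_cast(sketch_map, x, y, th + beta)  # expected range (pixels)
        # Eq. 7: Gaussian noise around the expected range.
        p_hit = (np.exp(-0.5 * ((r_px - d) / params.sigma_hit) ** 2)
                 / (np.sqrt(2.0 * np.pi) * params.sigma_hit))
        # Eq. 8: truncated exponential on [0, d] for unexpected obstacles.
        p_short = (params.lam * np.exp(-params.lam * r_px)
                   / (1.0 - np.exp(-params.lam * d))) if r_px <= d else 0.0
        # Eq. 9: uniform window of width w at the maximum range.
        p_max = 1.0 / params.window if r_px >= r_max - params.window else 0.0
        # Eq. 10: uniform random measurement over [0, r_max].
        p_rand = 1.0 / r_max
        p = (params.z_hit * p_hit + params.z_short * p_short
             + params.z_max * p_max + params.z_rand * p_rand)
        log_l += np.log(max(p, 1e-300))            # guard against underflow
    return np.exp(log_l)
```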

IV Experimental Results

We evaluated our approach in both simulated and real environments. For the simulation, we used the Gazebo simulator available in ROS. We created two virtual environments, resembling a simulated room (Room) and a simulated apartment (Apartment). Figure 2 depicts the two simulated environments together with a simulated robot. In the real-world experiment, we tested our algorithm in our building (FR079), whose map and a full dataset are publicly available. We chose this dataset for two reasons. First, it presents very challenging conditions for localization algorithms, given the presence of many similar-looking rooms. Second, the data is publicly available online, and other researchers can use it to replicate our results. Since not everyone has the same artistic capabilities, we asked nine people to walk in the environment and draw a sketch. Figure 5 shows all nine sketches together.

We used the same parameters for all our experiments and the simulated robot. For the proposal distribution, we have a fixed covariance matrix $\Sigma_{xy}$ for the translational component, $\sigma_{\theta}$ for the rotational noise, and $\sigma_s$ for the scale noise. With respect to the observation model, we set the maximum range $r_{\mathrm{max}}$ according to our sensor specification, and we estimated the rest of the parameters from data: $\lambda$ for the exponential distribution, $w$ for the max-range window, and the mixture weights $z_{\mathrm{hit}}$, $z_{\mathrm{short}}$, $z_{\mathrm{max}}$, $z_{\mathrm{rand}}$. We also subsampled the range measurements to only 10 beams per scan, to compensate for the independence assumption of the laser beams. We initialized the filter by drawing the particles uniformly over a square region of the map indicated by the user for the position, uniformly between $-\pi$ and $\pi$ for the orientation, and uniformly between 0.01 and 1 for the scale. The square region was about the size of a room, simulating a plausible initial guess from the user.
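For concreteness, a sketch of this initialization under the particle layout used above; the `region` tuple convention is an assumption of ours.

```python
import numpy as np

def initialize_particles(n, region, rng):
    """Initial particle set as described above: position uniform over a
    user-indicated square region of the sketch (pixel coordinates),
    orientation uniform over the circle, scale uniform in [0.01, 1].
    region = (x_min, y_min, x_max, y_max) is an assumed convention."""
    x = rng.uniform(region[0], region[2], size=n)
    y = rng.uniform(region[1], region[3], size=n)
    th = rng.uniform(-np.pi, np.pi, size=n)
    s = rng.uniform(0.01, 1.0, size=n)
    return np.stack([x, y, th, s], axis=1)
```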

Fig. 2: Simulated Apartment (left) and Room (right) environment in Gazebo. The area covered by the simulated laser scanner is shown in blue.

IV-A Simulated Experiments

For the simulation, we used the Room environment as a proof-of-concept scenario. We let the robot start in the lower right corner of the room and executed 4 different paths. We simulated a weak prior on the initial robot position by sampling the particles uniformly in a square centered at the robot position. We obtained a success rate of 100% in localizing the robot. Videos of the whole experiment are available on YouTube (https://www.youtube.com/playlist?list=PL2DAq2wc_lgJnSTYusQjeck-gc3fc2jtP).

Fig. 3: Hand-drawn map of the Apartment (left) and the Room (right) environment. The dashed squares represent the rooms we used in our experiment.

For the second experiment, in the Apartment environment, we simulated a series of navigation tasks where the robot starts in a certain location and is instructed to reach a specific room. We believe this is the natural application of our approach, where the user sketches the map together with a rough location of the robot and asks the robot to reach a particular room by indicating it on the map. We set up our experiments in the following way. After we drew a sketch of the environment, we subdivided it into small squares, each representing a room to be reached. We then randomly generated 10 different sequences of navigation tasks, in the form of "go from room A to room B". For each sequence, we performed 10 runs of our localization algorithm with different random seeds. We considered a sequence a success if the robot, at the end of the trajectory, is localized in the correct room.

Fig. 4: Overlay of the hand-drawn map over the Apartment environment. The hand-drawn map has been manually aligned. Note the non-uniform scaling and the distortions present.

Figure 3 shows the hand-drawn map used in this experiment, together with our subdivision. To understand the differences between the real map and the hand-drawn one, Figure 4 shows an overlay of the two, where we manually rotated, scaled, and aligned the maps. Even under manual alignment, one can see that the scaling of the sketch is not uniform with respect to the real map and that many distortions are present. Table I shows the results of the experiment, together with the sequences we used. Our approach has an overall success rate of 93%. We only had a few failures on the paths from 10 to 6 and from 13 to 6. Note that these are the most challenging paths, since they are the longest and traverse the whole map. We also had some problems from 13 to 10. This was due to the ambiguity between the two corners, where the robot was mistakenly localized in the wrong one.

Room A   Room B   Chance of success
1        6        100%
1        10       100%
6        1        100%
6        10       100%
8        1        100%
8        6        100%
10       1        100%
10       6        70%
13       6        80%
13       10       80%
Total             93%

TABLE I: Success rate for the Apartment environment

All the trajectories for this experiment are publicly available on YouTube (https://www.youtube.com/playlist?list=PL2DAq2wc_lgI4x4TK1PQOsovRKk-99OwS).

IV-B Real World Experiments

Figure 5 shows the hand-drawn maps of Building 079 used for this experiment. The numbers in the figure represent the different rooms we identified in the environment. In a way similar to the simulated experiments, we randomly generated 7 navigation sequences and, for each sequence, performed 10 runs of our localization algorithm with different random seeds. Table II shows the localization results with the real data for each run. The ratio difference denotes the absolute difference between the ratio (length/width) of the original occupancy grid map and that of each hand-drawn map. Figure 6 illustrates the success rate as a function of the difference in ratios. We see that localization has a high success rate, almost 80%, when the difference in ratio of the hand-drawn map is relatively low. The success rate declines when this difference increases. The table shows the highest failure rate on the test run from room 9 to 12. Room number 9 is fully occupied with furniture, which heavily distorted the image of the walls in the laser scan; therefore, the robot was not able to localize properly in the beginning and randomly localized itself in any of the other similar-looking rooms. The lowest success rate was obtained when using map no. 2, where the user drew the doors in such an unusual way that the robot could not properly recognize the entrances and exits. The localization results for AIS map-0 (https://www.youtube.com/playlist?list=PL2DAq2wc_lgJu9ftPMaozs0fl9kwrkXCZ) and map-3 (https://www.youtube.com/playlist?list=PL2DAq2wc_lgJdzfJJDNUNvom0UWCU3Mz4) are publicly available on YouTube.

In addition to FR079, we also tested our method on the Intel dataset. The videos from the Intel dataset are publicly available on YouTube (https://www.youtube.com/watch?v=uQhK19jpa2I&index=2&list=PL2DAq2wc_lgI9JDwGfDLvZVds_vwpk3OS).


Fig. 5: Sketch maps (ais-0 through ais-8) for the FR079 environment used for the experiments.
room-to-room       ais-3  ais-0  ais-5  ais-6  ais-1  ais-2  ais-8  ais-4  ais-7
ratio difference    0.03   0.24   0.25   0.94   1.05   1.12   1.14   1.22   1.64
15 to 16             100    100    100     80    100     60    100     20     40
14 to 13              80    100      0     40     60      0     60    100      0
4 to 6               100    100    100      0     20      0      0     80      0
9 to 12                0      0     60      0      0      0      0      0      0
5 to 11              100    100      0      0      0      0      0      0     20
CR to 12              80    100    100      0      0      0    100      0      0
11 to OR             100     70      0     60     80      0      0    100     40
success rate (%)      80     81     51     25     37      8     37     42     14

TABLE II: Results of the real world experiments (per-run success rates in %).
Fig. 6: Percentage of FR079 runs successfully localized.

V Discussion

We believe our approach will also work with blueprints of the environment, given their metrical accuracy.

VI Conclusions

In this paper, we addressed the problem of robot localization when no accurate map of the environment is available and the robot has to rely solely on a hand-drawn map sketched by the user. To do so, we extended the classical Monte Carlo localization algorithm in two ways. First, we localize with respect to the image coordinate frame. Second, we track, together with the pose of the robot, a local deformation of the hand-drawn map. Since no metric information is available in the hand-drawn map, we evaluate the localization in terms of coarse localization at the level of rooms of the environment. We evaluated our approach in both simulated and real environments and achieved correct localization, up to the room level, in about 80% of the cases when the ratio of the sketch map resembles the real environment. We believe this is a starting point that addresses a very challenging problem with potential applications. In the future, we plan to extend our approach to incorporate more sophisticated distortion models and to employ it for navigation purposes.

References

  • Biswas and Veloso [2014] Joydeep Biswas and Manuela Veloso. Episodic non-markov localization: Reasoning about short-term and long-term features. In Proceedings of the IEEE International Conference on Robotics and Automation, 2014.
  • Chronis and Skubic [2004] George Chronis and Marjorie Skubic. Robot navigation using qualitative landmark states from sketched route maps. In Proceedings of the IEEE International Conference on Robotics and Automation, 2004.
  • Dellaert et al. [1999] Frank Dellaert, Dieter Fox, Wolfram Burgard, and Sebastian Thrun. Monte Carlo localization for mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, 1999.
  • Doucet et al. [2001] Arnaud Doucet, Nando De Freitas, and Neil Gordon. Sequential Monte Carlo methods in practice. Springer, 2001.
  • Forbus et al. [2004] Kenneth D Forbus, Jeffrey Usher, and Vernell Chapman. Qualitative spatial reasoning about sketch maps. AI magazine, 25, 2004.
  • Fox [2003] D. Fox. Adapting the sample size in particle filters through KLD-sampling. International Journal of Robotics Research, 22, 2003.
  • Fox et al. [1999] D. Fox, W. Burgard, and S. Thrun. Markov localization for mobile robots in dynamic environments. Journal of Artificial Intelligence Research, 11, 1999.
  • Ito et al. [2014] Seigo Ito, Felix Endres, Markus Kuderer, Gian Diego Tipaldi, Cyrill Stachniss, and Wolfram Burgard. W-RGB-D: Floor-plan-based indoor global localization using a depth camera and WiFi. In Proceedings of the IEEE International Conference on Robotics and Automation, 2014.
  • Jensfelt and Kristensen [2001] Patric Jensfelt and Steen Kristensen. Active global localization for a mobile robot using multiple hypothesis tracking. IEEE Transactions on Robotics and Automation, 17, 2001.
  • Kawamura et al. [2002] Kazuhiko Kawamura, A Bugra Koku, D Mitchell Wilkes, Richard Alan Peters, and A Sekmen. Toward egocentric navigation. International Journal of Robotics and Automation, 17, 2002.
  • Koenig and Simmons [1996] Sven Koenig and Reid G Simmons. Passive distance learning for robot navigation. In ICML, 1996.
  • Krajník et al. [2014] Tomáš Krajník, João Santos, Bianca Seemann, and Tom Duckett. Froctomap: an efficient spatio-temporal environment representation. Advances in Autonomous Robotics Systems, 2014.
  • Matsuo and Miura [2012] Keisuke Matsuo and Jun Miura. Outdoor visual localization with a hand-drawn line drawing map using FastSLAM with PSO-based mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
  • Mazuran et al. [2014] Mladen Mazuran, Gian Diego Tipaldi, Luciano Spinello, Wolfram Burgard, and Cyrill Stachniss. A Statistical Measure for Map Consistency in SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, 2014.
  • Parekh et al. [2007] Gaurav Parekh, Marjorie Skubic, Ozy Sjahputera, and James M. Keller. Scene matching between a map and a hand drawn sketch using spatial relations. In Proceedings of the IEEE International Conference on Robotics and Automation, 2007.
  • Röwekämper et al. [2012] Jörg Röwekämper, Christoph Sprunk, Gian Diego Tipaldi, Cyrill Stachniss, Patrick Pfaff, and Wolfram Burgard. On the Position Accuracy of Mobile Robot Localization based on Particle Filters Combined with Scan Matching. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
  • Setalaphruk et al. [2003] Vachirasuk Setalaphruk, Atsushi Ueno, Izuru Kume, Yasuyuki Kono, and Masatsugu Kidode. Robot navigation in corridor environments using a sketch floor map. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), 2003.
  • Shah et al. [2010] Danelle Shah, Joseph Schneider, and Mark Campbell. A robust sketch interface for natural robot control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010.
  • Shah and Campbell [2013] Danelle C Shah and Mark E Campbell. A qualitative path planner for robot navigation using human-provided maps. International Journal of Robotics Research, 32, 2013.
  • Skubic et al. [2002] Marjorie Skubic, Sam Blisard, Andy Carle, and Pascal Matsakis. Hand-drawn maps for robot navigation. In AAAI Spring Symposium, Sketch Understanding Session, page 23, 2002.
  • Skubic et al. [2007] Marjorie Skubic, Derek Anderson, Samuel Blisard, Dennis Perzanowski, and Alan Schultz. Using a hand-drawn sketch to control a team of robots. Autonomous Robots, 22, 2007.
  • Thrun et al. [2005] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.
  • Tipaldi et al. [2013] Gian Diego Tipaldi, Daniel Meyer-Delius, and Wolfram Burgard. Lifelong localization in changing environments. International Journal of Robotics Research, 32, 2013.
  • Yun and Miura [2008] Jooseop Yun and Jun Miura. A quantitative measure for the navigability of a mobile robot using rough maps. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008.