## 1 Introduction

Precise navigation and localization abilities are crucial for animals and robots. These elementary skills are highly developed in complex animals like mammals [20], [22]. Experimental studies have shown that transient neural firing patterns of place cells in the hippocampus can predict the animal's location or even complete future routes [21], [8], [13], [3]. These neural findings have inspired many computational models based on attractor networks [11], [24], [16], [5], which have been successfully applied to humanoid robot motion planning tasks [23], [27].

In this work we take inspiration from simpler insects, which are able to navigate precisely using sparse perceptual and proprioceptive sensory information. For example, the desert ant Cataglyphis (Figure 1a) employs basic path integration, visual piloting, and systematic search strategies to navigate back to its nest from locations several hundred meters away [30]. While computational models of such navigation skills lead to a better understanding of their neural implementation in the insect [14], they also have a strong impact on a large number of practical robotic applications. In particular, non-industrial autonomous systems like household robots (Figure 1b) or robotic toys require precise navigation features based on low-cost sensory hardware [15], [17], [18].

Despite this obvious need for precise localization in low-cost systems, most related work in mobile robot navigation falls into one of two extremes. Either computationally and monetarily expensive sensors like cameras or laser range finders are installed [26], [7], [12], or simplistic navigation strategies such as random walks are used, as in currently purchasable household robots [29]. For limited sensing, only a few navigation strategies have been proposed. O'Kane et al. investigated how complex the sensor system of a robot really has to be in order to localize itself. To this end, they used a minimalist approach with contact sensors, a compass, and angular and linear odometers in three different sensor configurations [19]. Erickson et al. addressed the localization problem for a blind robot equipped with only a clock and a contact sensor. They used probabilistic techniques, discretizing the boundary of the environment into small cells. A probability of the robot being in each cell at the current time step is allocated to every cell and updated in each iteration. For active localization, they proposed an entropy-based approach to determine uncertainty-reducing motions [6]. Stavrou and Panayiotou, on the other hand, proposed a localization method based on Monte Carlo Localization [4] using only a single short-range sensor in a fixed position [25].

The open challenge that remains is how to efficiently localize a robot equipped with only a single binary sensor and an odometer. Such tasks arise, for example, with autonomous lawn mowing robots. These usually use a wire signal to detect whether they are on their assigned area or not. There are also sensors that detect the moisture on the surface and can thereby detect grass [1]. All these sensors return a binary signal indicating whether the sensor is inside the field or outside. The aforementioned approaches either cannot be applied to such localization tasks because they require short-range sensors [25], or they try to solve the problem with even less sensor information [6]. A direct comparison between these approaches and our method is therefore not possible, since different sensor types are used. However, to be able to compare the estimation accuracy of our method with the aforementioned algorithms, we created our test environments similar to the test environment used in [25].

We propose an efficient and simple method for global localization based only on odometry data and binary measurements, which can be used in real time on a low-cost robot with limited computational resources. This work demonstrates for the first time how basic navigation principles of the Cataglyphis ant can be implemented in an autonomous outdoor robot. Our approach has the potential to replace the naive random walk behavior currently installed in most low-cost robots with an intelligent localization and planning strategy inspired by insects.
The methods we use are discussed in Section 2. We first present a wall following scheme, which is required to steer the robot along the boundary line of the map based on the binary measurement data. Inspired by path integration, we determine dominant points (DPs) to represent the path estimated from the odometry. We then generate a first estimate of the pose of the robot using land navigation, comparing the shape of the estimated path with the given boundary line. The generated pose estimate allows us to systematically search for the true pose in the given area using a particle filter, which leads to a more accurate localization of the robot. A particle filter is required here since other localization techniques, such as the Kalman filter, cannot handle binary measurement data. In Section 3, we evaluate our approach in two simulated environments and show that it leads to accurate pose estimates. We conclude in Section 4.

## 2 Methods

For our proposed localization method, we assume that a map of the environment is given as a boundary map, defined as a polygon with vertices $m_i \in \mathbb{R}^2$, $i = 1, \dots, K$. In Figure 2 such a boundary map is shown. As sensor signal we only receive the binary information $s_t \in \{0, 1\}$, where $s_t = 0$ means that the sensor is outside of the map and $s_t = 1$ means the sensor is within the boundary defined by the polygon. We use a simple differential drive robot controlled by the desired linear and angular velocities $v$ and $\omega$. Furthermore, odometry information is assumed to be given. We assume the transform between the sensor position $x_S$ and the odometry frame, which is also the main frame of the robot and has the position $x_R$ in world coordinates, is known as

$$x_S = x_R + R(\theta) \begin{pmatrix} l_x \\ l_y \end{pmatrix}. \tag{1}$$

Here, $R(\theta)$ is the two-dimensional rotation matrix with the orientation angle $\theta$ of the robot, and $l_x$, $l_y$ are the distances between the odometry frame and the sensor position in robot coordinates. In Figure 3 the setup of the differential drive robot is depicted. For our proposed localization method, it is required that there is a lever arm between the robot's frame and the sensor, such that if only the orientation of the robot changes, the position of the sensor changes too.
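Equation (1) maps the robot pose to the world position of the binary sensor. A minimal sketch in Python; the lever-arm values used in the example are illustrative and not taken from the paper:

```python
import math

def sensor_position(x, y, theta, lx, ly):
    """World position of the binary sensor given the robot pose (x, y, theta)
    and the lever arm (lx, ly) from the odometry frame to the sensor,
    expressed in robot coordinates (Equation 1)."""
    # R(theta) applied to the lever arm, added to the robot position
    sx = x + math.cos(theta) * lx - math.sin(theta) * ly
    sy = y + math.sin(theta) * lx + math.cos(theta) * ly
    return sx, sy
```

Rotating the robot in place moves the sensor, which is exactly the property the localization method relies on: with a lever arm of 0.3 m along the robot's x-axis, a quarter turn moves the sensor from (0.3, 0) to (0, 0.3).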

### 2.1 Wall Follower

Since only at the boundary are we able to generate sensor data useful for localization, a wall follower is required. The proposed control algorithm steers the robot along the boundary line even if the binary measurements are corrupted by noise. To increase the scanning area, the robot moves in wiggly lines along the boundary such that the mean of the sensor detection is $\bar{s}_{des} = 0.5$. For the update step of the sensor mean we use exponential smoothing,

$$\bar{s}_t = (1 - \alpha)\,\bar{s}_{t-1} + \alpha\, s_t, \tag{2}$$

starting with an initial mean of $\bar{s}_0 = 1$ (assumption: start within the field). Here $\alpha$ defines the update rate of the mean $\bar{s}_t$. We then calculate the difference between the desired mean and the actual mean,

$$e_t = \bar{s}_{des} - \bar{s}_t. \tag{3}$$

The difference $e_t$ is scaled onto the range $[-1, 1]$. We now use relative linear and angular velocities $\tilde{v} \in [0, 1]$ and $\tilde{\omega} \in [-1, 1]$ for the control algorithm; they are multiplied with the maximum allowed velocities $v_{max}$ and $\omega_{max}$ of the given robotic system. Using the difference from Equation (3), we update the relative forward velocity of the differential drive robot, again using exponential smoothing,

$$\tilde{v}_t = (1 - \beta)\,\tilde{v}_{t-1} + \beta\,(1 - |e_t|). \tag{4}$$

Here, $\beta$ defines an update rate and $|e_t|$ the absolute value of $e_t$. The relative velocity increases the longer and the better the desired mean $\bar{s}_{des}$ is reached. Thereby, the relative velocity always stays in the range $[0, 1]$. The relative angular velocity of the robot is determined from the deviation to the desired mean and a stabilizing term that ensures robustness against noise-corrupted measurements,

$$\tilde{\omega}_t = e_t + (-1)^{\lfloor c / n_c \rfloor}\,(1 - |e_t|). \tag{5}$$

Here $c$ is a counter variable and $n_c$ the counter divider, which generates the alternating wiggle; $n_c$ should thus be chosen according to the frequency of the operating system. A diagram of the controller as well as pseudo code can be found in Figure 4 and Algorithm 1, respectively.
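One control update of the wall follower (Equations 2 to 5) can be sketched as follows. The exact form of the wiggle term, the clamping, and all parameter values are assumptions; the paper describes the controller only at the level of the smoothing updates and the counter-driven stabilizing term:

```python
def wall_follower_step(state, s, alpha=0.05, beta=0.05, n_c=50,
                       v_max=0.5, w_max=1.0):
    """One control update of the wall follower (sketch of Equations 2-5).
    `s` is the binary sensor reading (1 = inside the field).
    Parameter names and the wiggle term are assumptions."""
    # Equation (2): exponential smoothing of the sensor mean
    state["s_mean"] = (1.0 - alpha) * state["s_mean"] + alpha * s
    # Equation (3): deviation from the desired mean of 0.5, scaled to [-1, 1]
    e = 2.0 * (0.5 - state["s_mean"])
    # Equation (4): relative forward velocity grows while the error stays small
    state["v_rel"] = (1.0 - beta) * state["v_rel"] + beta * (1.0 - abs(e))
    # Equation (5): error term plus an alternating wiggle driven by a counter
    state["counter"] += 1
    wiggle = 1.0 if (state["counter"] // n_c) % 2 == 0 else -1.0
    w_rel = max(-1.0, min(1.0, e + 0.5 * wiggle))
    # scale the relative velocities by the platform's maximum velocities
    return state["v_rel"] * v_max, w_rel * w_max
```

The controller state is a small dictionary, initialized with `{"s_mean": 1.0, "v_rel": 0.0, "counter": 0}` to reflect the assumption of starting inside the field.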

### 2.2 Path Integration / Land Navigation

In order to get a first pose estimate for the robot, we use land navigation, comparing the given map with the path driven along the boundary line.

First, we generate a piecewise linear function of the orientation of the boundary line with respect to the traveled length $l$. This function represents the shape of the boundary and is defined as

$$\varphi_M(l) = \theta_i \quad \text{for } l_i \le l < l_{i+1}, \tag{6}$$

with

$$l_{i+1} = l_i + \lVert m_{i+1} - m_i \rVert. \tag{7}$$

Here, $\lVert \cdot \rVert$ defines the Euclidean norm of the vector $m_{i+1} - m_i$

. We start with $l_1 = 0$ and $\theta_1 = 0$. Thereby, $\theta_{i+1} = \theta_i + \Delta\theta_i$, with $m_i$ consisting of the vertices of the map and $\Delta\theta_i$ as the difference in orientation between the segments $(m_i, m_{i+1})$ and $(m_{i+1}, m_{i+2})$.

Second, in order to represent the path driven by the robot as vertices connected by straight lines (similar to the map representation), we generate dominant points (DPs). For this, we simply use the position $\hat{p}_t$ estimated from the odometry of our robot, starting with $\hat{p}_0$ at the first contact with the boundary line. We also initialize a temporary set $T = \emptyset$, which holds all current points that can still be combined into a straight line. In every iteration step we add the new position estimate $\hat{p}_t$ from the odometry data to the temporary set and then check if

$$\max_{p \in T} d\big(p,\, \overline{p_{DP}\, \hat{p}_t}\big) > d_{min} \tag{8}$$

and

$$|T| > n_{min} \tag{9}$$

are both true. If this is the case, we add the point $\hat{p}_t$ to the set of DPs and reset the temporary set to $T = \emptyset$. Here, $d(p, \overline{p_{DP}\, \hat{p}_t})$ defines the shortest distance between the point $p$ and the line given by the last DP $p_{DP}$ and $\hat{p}_t$. Pseudo code for the algorithm can be found in Algorithm 2. The parameters $d_{min}$ and $n_{min}$ are problem specific and have to be tuned accordingly.
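The dominant-point extraction can be sketched as follows. The thresholds `d_min` and `n_min`, and the exact form of the two conditions, are assumptions based on the description above:

```python
import math

def point_line_distance(p, a, b):
    """Shortest distance from point p to the line through a and b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)
    # perpendicular distance to the infinite line through a and b
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def update_dominant_points(dps, temp, p, d_min=0.05, n_min=5):
    """Sketch of the dominant-point (DP) extraction: the new odometry point p
    is appended to the temporary set; once enough points are stored and some
    point deviates too far from the straight line between the last DP and p,
    p becomes a new DP and the temporary set is reset."""
    temp.append(p)
    if len(temp) >= n_min:
        max_dev = max(point_line_distance(q, dps[-1], p) for q in temp)
        if max_dev > d_min:
            dps.append(p)
            temp.clear()
    return dps, temp
```

Feeding in collinear odometry points adds no DP; as soon as the path turns a corner, the deviation condition fires and the corner point is stored.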

Third, every time a new DP is accumulated we generate a piecewise orientation function $\varphi_R(l)$ as presented above using the current set of DPs, as long as the accumulated length between the DPs is larger than $l_{min}$. This ensures that the algorithm does not start the comparison too early. The parameter $l_{min}$ represents the minimal required portion of the boundary line needed to identify unique positions and has to be chosen or trained according to the given map. We then compare $\varphi_R$ with $\varphi_M$ for all vertices of the boundary line in order to find a suitable vertex at which the robot could be. For each vertex $k$, we shift $\varphi_M$ such that the vertex lies at $l = 0$. We then evaluate these functions at $N$ linearly distributed points from $0$ to $L_R$, which results in vectors, where $L_R$ is the path length driven by the robot along the boundary line. We then calculate the correlation error

$$e_k = \frac{1}{N} \sum_{j=1}^{N} \big(\varphi_R(l_j) - \varphi_{M,k}(l_j)\big)^2, \tag{10}$$

where $\varphi_{M,k}$ is the evaluation of the shifted map orientation function at the same $N$ linearly distributed points. We now have $K$ correlation errors between the actual path driven by the robot along the boundary line and every vertex of the polygon map. In Figure 5 such correlation errors are graphically displayed.

Fourth, we search for the vertex with the minimal correlation error and check whether this error is below a certain threshold $e_{min}$. The parameter $e_{min}$ has to be trained according to the given map. If we do not find a suitable vertex, we simply drive further along the wall until a vertex fulfills the condition above. Thereby, we limit the stored path of the robot to the length of the circumference of the map. If a suitable vertex is found, which we from now on call the matching vertex $m_{k^*}$, the first guess of the robot's position can be determined as $\hat{x}_R = m_{k^*}$. Moreover, we can also define an orientation estimate as

$$\hat{\theta} = \theta_t + \Delta\theta_{k^*}, \tag{11}$$

where $\Delta\theta_{k^*}$ is the orientation offset between the driven path and the boundary at the matching vertex.
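The orientation profile of Equations (6) and (7) and the correlation error of Equation (10) could be sketched as follows. Angle wrapping at ±π is ignored for brevity, and the uniform sampling scheme is an assumption:

```python
import math

def orientation_profile(vertices, closed=True):
    """Piecewise-constant segment orientation as a function of arc length
    along a polygonal path (sketch of Equations 6 and 7). Returns the
    profile function and the total path length."""
    lengths, angles = [0.0], []
    pts = vertices + [vertices[0]] if closed else vertices
    for a, b in zip(pts[:-1], pts[1:]):
        lengths.append(lengths[-1] + math.hypot(b[0] - a[0], b[1] - a[1]))
        angles.append(math.atan2(b[1] - a[1], b[0] - a[0]))
    def phi(l):
        # find the segment containing arc length l
        for i in range(len(angles)):
            if l < lengths[i + 1]:
                return angles[i]
        return angles[-1]
    return phi, lengths[-1]

def correlation_error(phi_robot, phi_map, L, n=100):
    """Mean squared difference of two orientation profiles evaluated at
    n linearly distributed points over the common length L (sketch of
    Equation 10)."""
    return sum((phi_robot(l) - phi_map(l)) ** 2
               for l in (j * L / (n - 1) for j in range(n))) / n
```

For a unit square, the profile steps from 0 to π/2 after one unit of arc length, and comparing a profile with itself yields a correlation error of zero, as expected for a perfect match.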

### 2.3 Systematic Search

Using the land navigation from above, we now have a first estimate of where the robot is located. Based on this estimate we are able to start a systematic search in this region to determine a more accurate and more certain pose estimate. For this, we use a particle filter, since other localization techniques, such as the Kalman filter, cannot handle binary measurement data. The general idea of the particle filter [28] is to represent the probability distribution of the posterior by a set of samples, called particles, instead of using a parametric form as the Kalman filter does. Here each particle

$$x_t^{[i]}, \quad i = 1, \dots, N, \tag{12}$$

represents a concrete instantiation of the state at time $t$, where $N$ denotes the number of particles used. The belief $bel(x_t)$ is then approximated by the set of particles $\mathcal{X}_t = \{x_t^{[1]}, \dots, x_t^{[N]}\}$. The Bayes filter posterior is used to include the likelihood of a state hypothesis,

$$bel(x_t) = p\big(x_t \mid z_{1:t},\, u_{1:t}\big). \tag{13}$$

Here $z_{1:t}$ and $u_{1:t}$ represent the measurement history and the input signal history, respectively.

The idea of our method is to drastically reduce the number of particles required for global localization with a particle filter by sampling particles only in an area around the position estimate retrieved by land navigation as shown above. This is nothing else than the transformation of a global localization problem into a local one. For this, we simply use a Gaussian distribution with the mean $\mu = (\hat{x}, \hat{y}, \hat{\theta})$ and a parametrized covariance matrix $\Sigma$. The parameters can be chosen according to the mean and uncertainty of the error of the initial pose estimate, as we will see in Section 3. In Figure 6 an example distribution of the particles around the estimated pose is shown. After sampling the particles, we are able to further improve the localization by executing a systematic search along the border line using the particle filter. Thereby, the weighting of the particles happens as follows: if a particle would see the same as the sensor has measured, we give this particle the weight $w^{[i]} = w_{hit}$, where $w^{[i]}$ is the weight of the i-th particle. If the particle would not see the same, then $w^{[i]} = w_{miss}$. Here the parameter $w_{hit}$ has to be larger than $w_{miss}$.
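The particle initialization and the binary weighting scheme described above can be sketched as follows. The concrete weights `w_hit`/`w_miss` and the diagonal covariance are illustrative assumptions:

```python
import random

def sample_particles(mean_pose, std, n=500):
    """Sample particles around the land-navigation pose estimate.
    `std` is an assumed per-dimension standard deviation (sx, sy, stheta),
    i.e. a diagonal covariance."""
    mx, my, mtheta = mean_pose
    sx, sy, st = std
    return [(random.gauss(mx, sx), random.gauss(my, sy),
             random.gauss(mtheta, st)) for _ in range(n)]

def weight_particles(particles, predicted, measured, w_hit=0.9, w_miss=0.1):
    """Binary-measurement weighting: a particle whose predicted reading
    agrees with the sensor gets weight w_hit, otherwise w_miss
    (w_hit > w_miss). `predicted` maps a particle to its expected
    binary reading given the map. Weights are normalized."""
    weights = [w_hit if predicted(p) == measured else w_miss
               for p in particles]
    total = sum(weights)
    return [w / total for w in weights]
```

The `predicted` callable would, in the full system, check whether the particle's sensor position (via Equation 1) lies inside the boundary polygon.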

## 3 Results

In this section, we evaluate the proposed algorithms with regard to stability and performance. Stability is defined as the fraction of localization experiments where the robot could localize itself from an initially lost pose in the presence of sensor noise; for this, the Euclidean distance between the true and the estimated pose had to be below a fixed threshold. Performance is defined as the mean accuracy of the pose estimation and the time required to generate the pose estimate. For the evaluation, we created a simulation environment using Matlab with a velocity and an odometry motion model for a differential drive robot as presented in [28]. For realistic conditions, we calibrated the noise parameters in experiments to match the true motion model of a real Viking MI 422P, a purchasable autonomous lawn mower. For this calibration, we tracked the lawn mower movements using a visual tracking system (OptiTrack) and computed the transition model parameters through maximum likelihood estimation. The parameters can be found in Table 1. We used a fixed sampling frequency for the simulated system together with maximum linear and angular velocities $v_{max}$ and $\omega_{max}$ that correspond approximately to the standard for purchasable low-cost robots. The counter divider $n_c$ was set such that the resulting period time of the wiggly lines is a trade-off between the enlargement of the scanning surface and the movement that can be performed on a real robot. The relative position of the binary sensor with regard to the robot frame, Equation (1), is given by the lever arm $(l_x, l_y)$.

| Vel. Motion Model | Odom. Motion Model
---|---|---
$\alpha_1$ | 0.0346 | 0.0849
$\alpha_2$ | 0.0316 | 0.0412
$\alpha_3$ | 0.0755 | 0.0316
$\alpha_4$ | 0.0566 | 0.0173
$\alpha_5$ | 0.0592 | -
$\alpha_6$ | 0.0678 | -

### 3.1 Wall Follower

As mentioned before, only at the boundary are we able to generate data useful for localization, since only there can we detect changes in the signal of the given binary sensor. Hence, we first evaluate the performance of the presented wall following algorithm. We use the mean squared error (MSE) to measure the deviation of the path of the sensor from the given boundary line of the map. An example of such a path along the map is given in Figure 7. Let $x_{S,i}$ be the i-th position of the sensor and $b_i$ the closest point on the boundary line to $x_{S,i}$; then

$$MSE = \frac{1}{T} \sum_{i=1}^{T} \lVert x_{S,i} - b_i \rVert^2, \tag{14}$$

where $T$ represents the number of time steps required to complete one round along the boundary line. We also evaluate the mean velocity of the robot driving along the boundary,

$$\bar{v} = \frac{U}{T_U}, \tag{15}$$

where $U$ is the circumference of the map and $T_U$ the time required by the robot to complete one round. In Figure 8, Figure 9 and Figure 10, the average simulation results over 10 simulations for different noise factors and different update rates from Equation (2) and Equation (4) are presented. The map used is shown in Figure 7. The noise factor determines the randomness of the binary signals: a factor of 0 leads to always accurate measurements, whereas a factor of 1 implies totally random measurement results. Hence, a noise factor of $\epsilon$ means that a fraction $\epsilon$ of the output signals of the binary sensor is random.

The presented wall follower leads to a stable and accurate wall following behavior, as exemplarily shown in Figure 7. With increasing noise factor, the MSE becomes larger. However, the algorithm is stable enough to steer the robot along the boundary line even for substantial noise factors. The algorithm seems to work best, i.e., leads to a small MSE, for small update rates. However, a small update rate leads to large changes in the linear and angular velocities, which may not be feasible on a real robot or may cause large odometry errors. We discuss this problem in more detail in Section 4. The second update rate seems to have no strong influence, neither on the MSE nor on the mean velocity.
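The noise factor described above corresponds to a simple measurement corruption model, sketched here under that interpretation: with probability equal to the noise factor, the sensor returns a random bit instead of the true reading.

```python
import random

def noisy_binary_reading(true_reading, noise_factor, rng=random):
    """With probability `noise_factor`, return a random bit instead of the
    true binary reading; noise_factor = 0 is noise-free and
    noise_factor = 1 is fully random."""
    if rng.random() < noise_factor:
        return rng.randint(0, 1)
    return true_reading
```

Note that a random bit still equals the true reading half of the time, so a noise factor of 1 leaves the sensor output entirely uninformative rather than always wrong.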

### 3.2 Land Navigation

In order to evaluate the performance and stability of the proposed shape comparison method, we repeatedly simulated the differential drive robot in two different maps (Figure 11) with randomly chosen starting positions. We trained the parameters specifically for the given maps, using the proposed wall following algorithm with fixed update rates and simulated sensor noise. The resulting parameters are presented in Table 2.

Map 1 | | | |
---|---|---|---|
Map 2 | | | |

We then calculated the difference between the estimated position $\hat{x}_R$ and the true position $x_R$, as well as between the estimated orientation $\hat{\theta}$ and the true orientation $\theta$. Histograms of these errors are depicted in Figure 12 and Figure 13. We also measured the mean time for finding this first pose estimate in both maps. The results show that the proposed algorithm is able to compute accurate estimates of the robot's pose. These estimates can then be used to sample particles for a particle filter to systematically search the region of interest for the correct pose.

### 3.3 Systematic Search

We now demonstrate how the position estimate from above can be further used to systematically search for a better estimate and to execute planning tasks. As proposed, we first sample particles around the position estimate using a Gaussian distribution,

$$x^{[i]} \sim \mathcal{N}(\mu, \Sigma), \quad \mu = (\hat{x}, \hat{y}, \hat{\theta}). \tag{16}$$

The sampled particles and the pose estimate, together with the true position of the robot, are then visualized. In the following step we systematically search for the true pose of the robot by following the boundary line. The particle filter algorithm weights the particles and performs a resampling step if required. We stop the wall following once we are certain enough about the pose of the robot, i.e., once the variance of the particles is less than a certain threshold. In Figure 17, the robot after the systematic search for a better pose estimate is shown. Now we are able to execute certain tasks with the robot, for example mowing the lawn. Thereby, the robot has to move away from the boundary, and the pose estimate becomes more uncertain, as shown in Figure 18. After reaching the boundary line again, the particle filter is able to enhance the pose estimate based on the new information, as depicted in Figure 19.

We also evaluated the performance of the proposed systematic search. For this, we repeatedly simulated the differential drive robot, sampling the particles as proposed in Equation (16). We then followed the boundary line until the standard deviation of the orientation dropped below a fixed threshold. Again, we evaluated the difference between the estimated and the true position and orientation. In 3 of these runs, the particle filter did not find a sufficiently accurate pose estimate; if this happens, we can simply restart the procedure by finding a new pose estimate using land navigation. In Figure 14, a histogram depicting the results is shown. The systematic search for the pose of the robot using a particle filter improves the orientation estimate compared to the initial estimate presented in Section 3.2.
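The resampling and stopping test used in the systematic search can be sketched with a standard low-variance resampler [28] and a circular standard deviation of the particle orientations. The latter is an assumption; the paper only requires the particle variance to fall below a threshold:

```python
import math
import random

def low_variance_resample(particles, weights, rng=random):
    """Low-variance (systematic) resampling as in standard particle
    filters [28]; returns a new, equally weighted particle set."""
    n = len(particles)
    r = rng.random() / n           # single random offset
    c, i, out = weights[0], 0, []
    for m in range(n):
        u = r + m / n              # evenly spaced sampling positions
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out

def orientation_std(particles):
    """Circular standard deviation of the particle orientations,
    used here as a simple stopping criterion (an assumption)."""
    s = sum(math.sin(p[2]) for p in particles) / len(particles)
    c = sum(math.cos(p[2]) for p in particles) / len(particles)
    r = math.hypot(s, c)
    return math.sqrt(max(0.0, -2.0 * math.log(max(r, 1e-12))))
```

When all weight concentrates on one particle, resampling collapses the set onto it; when all particles agree on the orientation, the circular standard deviation is zero and the wall following can stop.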

## 4 Conclusion

We have presented a method for global localization based only on odometry data and one binary sensor. This method, inspired by insect localization strategies, is essential for efficient path planning methods as presented in [10] or [9], which require an accurate estimate of the robot's pose. Our approach combines ideas of path integration, land navigation and systematic search to find a pose estimate for the robot.

Our method relies on a stable and accurate wall following algorithm. We showed that the proposed algorithm is robust even under sensor noise and, if convenient parameters are chosen, leads to an accurate wall following behavior.

The proposed approach for finding a first pose estimate is robust and accurate. Given this pose estimate, we are able to sample particles around this position to start a systematic search with a particle filter algorithm. We maintain the wall following behavior until the uncertainty of the pose estimate of the particle filter is below a certain threshold. The proposed search method improves the orientation estimate substantially while maintaining the accuracy of the position estimate. As mentioned in the Introduction, it is not possible to compare our approach directly with other approaches, since different types of sensors are used. However, comparing our localization results with the results presented in [25] for a particle filter with a single short-range sensor, we reach a similar accuracy for the position estimate while only using a binary sensor.

Using the generated pose estimate, we are now able to execute given tasks by constantly relocating the robot at the boundary lines, for example using bioinspired neural networks for complete coverage path planning [31]. Open research questions with regard to the proposed approach are which methods can be used to learn the algorithm parameters on the fly. Also, the algorithm has to be evaluated on a real robotic system in a realistic garden setup.

## References

- [1] Lawn-mower with sensor. Google Patents, 2009. US Patent 7,613,552.
- [2] Cataglyphis nodus. 2013. CC BY-SA 3.0, https://de.wikipedia.org/wiki/Cataglyphis.
- [3] Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval. Nature Neuroscience 14 (2), pp. 147, 2011.
- [4] Monte Carlo localization for mobile robots. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation, Vol. 2, pp. 1322–1328, 1999.
- [5] A goal-directed spatial navigation model using forward trajectory planning based on grid cells. European Journal of Neuroscience 35 (6), pp. 916–931, 2012.
- [6] Probabilistic localization with a blind robot. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation (ICRA), pp. 1821–1827, 2008.
- [7] A global localization system for mobile robot using LiDAR sensor. In 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 478–483, 2017.
- [8] Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature 440 (7084), pp. 680, 2006.
- [9] A survey on coverage path planning for robotics. Robotics and Autonomous Systems 61 (12), pp. 1258–1276, 2013.
- [10] A probabilistic approach to high-confidence cleaning guarantees for low-cost cleaning robots. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 5600–5605, 2014.
- [11] Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79 (8), pp. 2554–2558, 1982.
- [12] W-RGB-D: floor-plan-based indoor global localization using a depth camera and WiFi. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 417–422, 2014.
- [13] Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. Journal of Neuroscience 27 (45), pp. 12176–12189, 2007.
- [14] A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems 30 (1-2), pp. 39–64, 2000.
- [15] Balancing and navigation control of a mobile inverted pendulum robot using sensor fusion of low cost sensors. Mechatronics 22 (1), pp. 95–105, 2012.
- [16] Path integration and the neural basis of the 'cognitive map'. Nature Reviews Neuroscience 7 (8), pp. 663, 2006.
- [17] Design and testing of a low-cost robotic wheelchair prototype. Autonomous Robots 2 (1), pp. 77–88, 1995.
- [18] Initial calibration and alignment of low-cost inertial navigation units for land vehicle applications. Journal of Robotic Systems 16 (2), pp. 81–92, 1999.
- [19] Localization with limited sensing. IEEE Transactions on Robotics 23 (4), pp. 704–716, 2007.
- [20] The hippocampus as a cognitive map. Oxford: Clarendon Press, 1978.
- [21] Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497 (7447), pp. 74, 2013.
- [22] Beyond the cognitive map: from place cells to episodic memory. MIT Press, 1999.
- [23] Recurrent spiking networks solve planning tasks. Scientific Reports 6 (21142), 2016.
- [24] Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience 17 (15), pp. 5900–5920, 1997.
- [25] Localization of a simple robot with low computational-power using a single short range sensor. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 729–734, 2012.
- [26] Global localization of a mobile robot using LiDAR and visual features. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 2377–2383, 2017.
- [27] Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks. Neural Networks 109, pp. 67–80, 2019.
- [28] Probabilistic robotics. MIT Press, 2005.
- [29] Evaluating the Roomba: a low-cost, ubiquitous platform for robotics research and education. In ICRA, pp. 1393–1399, 2007.
- [30] Spatial organization of foraging behavior in individually searching desert ants, Cataglyphis (Sahara Desert) and Ocymyrmex (Namib Desert). In From Individual to Collective Behavior in Social Insects: les Treilles Workshop, 1987.
- [31] Discrete bioinspired neural network for complete coverage path planning. International Journal of Robotics and Automation 32 (2), 2017.
