Cataglyphis ant navigation strategies solve the global localization problem in robots with binary sensors

08/13/2019
by   Nils Rottmann, et al.
Universität Lübeck

Low-cost robots, such as vacuum cleaners or lawn mowers, employ simplistic and often random navigation policies. Although a large number of sophisticated localization and planning approaches exist, they require additional sensors like LIDAR sensors, cameras or time-of-flight sensors. In this work, we propose a global localization method biologically inspired by simple insects such as the ant Cataglyphis, which is able to return from distant locations to its nest in the desert with limited or no perceptual cues. As in Cataglyphis, the underlying idea of our localization approach is to first compute a pose estimate from proprioceptive sensors only, using land navigation, and thereafter refine the estimate through a systematic search in a particle filter that integrates the rare visual feedback. In simulation experiments in multiple environments, we demonstrate that this bioinspired principle can be used to compute accurate pose estimates from binary visual cues only. Such intelligent localization strategies can improve the performance of any robot with limited sensing capabilities, such as household robots or toys.


1 Introduction

Precise navigation and localization abilities are crucial for animals and robots. These elementary skills are highly developed in complex animals like mammals [20], [22]. Experimental studies have shown that transient neural firing patterns in place cells in the hippocampus can predict the animal's location or even complete future routes [21], [8], [13], [3]. These neural findings have inspired many computational models based on attractor networks [11], [24], [16], [5] and have been successfully applied to humanoid robot motion planning tasks [23], [27].

(a) "Cataglyphis nodus" by [2].
(b) Create 2
Figure 1: The Cataglyphis ant and the robot Create 2 from the company iRobot.

In this work we took inspiration from simpler insects which are able to navigate precisely using sparse perceptual and proprioceptive sensory information. For example, the desert ant Cataglyphis (Figure 1a) employs basic path integration, visual piloting and systematic search strategies to navigate back to its nest from distant locations several hundred meters away [30]. While computational models of such navigation skills lead to a better understanding of the neural implementation in the insect [14], they also have a strong impact on a large number of practical robotic applications. Especially non-industrial autonomous systems like household robots (Figure 1b) or robotic toys require precise navigation features utilizing low-cost sensory hardware [15], [17], [18].

Despite this obvious need for precise localization in low-cost systems, most related work in mobile robot navigation falls into two extreme cases. Either computationally and monetarily expensive sensors like cameras or laser range finders are installed [26], [7], [12], or simplistic navigation strategies such as random walks are used, as in currently purchasable household robots [29]. For limited sensing, only few navigation strategies have been proposed. O'Kane et al. investigated how complex the sensor system of a robot really has to be in order to localize itself. To this end, they used a minimalist approach with contact sensors, a compass, and angular and linear odometers in three different sensor configurations [19].

Erickson et al. addressed the localization problem for a blind robot with only a clock and a contact sensor. They used probabilistic techniques, discretizing the boundary of the environment into small cells. A probability of the robot being in a given cell at the current time step is allocated to each cell and updated in every iteration. For active localization, they proposed an entropy-based approach in order to determine uncertainty-reducing motions [6]. Stavrou and Panayiotou, on the other hand, proposed a localization method based on Monte Carlo Localization [4] using only a single short-range sensor in a fixed position [25]. The open challenge that remains, however, is how to efficiently localize a robot equipped only with a single binary sensor and an odometer. Such tasks arise, for example, with autonomous lawn mowing robots. They usually use a wire signal to detect whether they are on the area assigned to them or not. There are also sensors that detect the moisture on the surface and can thus detect grass [1]. All these sensors usually return a binary signal indicating whether the sensor is in the field or outside. The aforementioned approaches either cannot be applied to such localization tasks because they require short-range sensors [25], or they try to solve the problem with even less sensor information [6]. A direct comparison between these approaches and our method is not possible since different sensor types are used. However, to be able to compare the estimation accuracy of our method with the aforementioned algorithms, we created our test environments similar to the test environment used in [25].

We propose an efficient and simple method for global localization based only on odometry data and binary measurements, which can be used in real time on a low-cost robot with only limited computational resources. This work demonstrates for the first time how basic navigation principles of the Cataglyphis ant can be implemented in an autonomous outdoor robot. Our approach has the potential to replace the naive random walk behavior currently installed in most low-cost robots with an intelligent localization and planning strategy inspired by insects. The methods we use are discussed in Section 2. We first present a wall following scheme which is required to steer the robot along the boundary line of the map based on the binary measurement data. Inspired by path integration, we determine dominant points (DPs) to represent the path estimated by the odometry. We then generate a first estimate of the pose of the robot using land navigation; to this end, we compare the shape of the estimated path with the given boundary line. The generated pose estimate allows us to systematically search for the true pose in the given area using a particle filter, which leads to a more accurate localization of the robot. A particle filter is required here since other localization techniques, such as the Kalman filter, cannot handle binary measurement data. In Section 3, we evaluate our approach in two simulated environments and show that it leads to accurate pose estimates of the robot. We conclude in Section 4.

2 Methods

Figure 2: Boundary Map: An example map of an environment which defines the boundary by connecting vertices to a polygon shape.

For our proposed localization method, we assume that a map of the environment is given as a boundary map, defined as a polygon connecting a set of vertices. Such a boundary map is shown in Figure 2. As sensor signal we only receive binary information, where one value means that the sensor is outside of the map and the other means the sensor is within the boundary defined by the polygon. We use a simple differential drive robot controlled by the desired linear and angular velocities. Furthermore, odometry information is assumed to be given. We assume the transform between the sensor position and the odometry frame, which is also the main frame of the robot and has a known position in world coordinates, is given as

(1)

Here, the two-dimensional rotation matrix depends on the orientation angle of the robot, and the remaining terms are the distances between the odometry frame and the sensor position in robot coordinates. In Figure 3, the setup for the differential drive robot is depicted. For our proposed localization method, it is required that there is a lever arm between the robot's frame and the sensor, such that if only the orientation of the robot changes, the position of the sensor changes too.

Figure 3: A differential drive robot within the world frame. The robot's main frame is determined by the frame for the odometry; the figure shows the robot's position and the sensor position in this frame.
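The transform in Equation (1) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the lever arm value d is a placeholder assumption:

```python
import numpy as np

def sensor_position(x, y, theta, d=(0.3, 0.0)):
    """World position of the binary sensor for a robot pose (x, y, theta).

    d is the lever arm, i.e. the sensor offset in the robot frame
    (a placeholder value); following Equation (1), it is rotated into
    world coordinates and added to the robot position."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # 2D rotation matrix
    return np.array([x, y]) + R @ np.array(d)
```

Because of the lever arm, a pure rotation of the robot already moves the sensor, which is exactly the property the method requires.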

2.1 Wall Follower

Since we are able to generate sensor data useful for localization only at the boundary, a wall follower is required. The proposed control algorithm allows us to navigate the robot along the boundary line even if the binary measurements are corrupted by noise. To increase the scanning area, the robot moves in wiggly lines along the boundary such that the mean of the sensor detections matches a desired value. For the update step of the sensor mean s̄ we use exponential smoothing

s̄_t = (1 − α) s̄_{t−1} + α s_t, (2)

starting with an initial mean corresponding to the assumption that we start within the field. Here, α defines the update rate of the mean s̄. We then calculate the difference d between the desired mean s_des and the actual mean,

d = s_des − s̄_t. (3)

The difference is scaled onto a fixed range. We now use relative linear and angular velocities for the control algorithm; they have to be multiplied by the maximum allowed velocities of the given robotic system. Using the difference from Equation (3), we update the relative forward velocity for the differential drive robot, again using exponential smoothing

(4)

Here, a second update rate and the absolute value of the difference are used. The relative forward velocity increases the longer and the better the desired mean is tracked, and it always stays within a bounded range. The relative angular velocity of the robot is determined using the deviation from the desired mean and a stabilizing term to ensure robustness against noise-corrupted measurements

(5)

Here, a counter variable and a counter divider are used; the divider should be chosen according to the frequency of the operating system. A diagram of the controller as well as pseudo code can be found in Figure 4 and Algorithm 1, respectively.

Figure 4: Wall Follower: Control method blocks representing the wall following algorithm.
  • Parameters
    , , , ,

  • Inputs

  • Outputs
    ,

1:
2: start within the field
3:while true do
4:     if Mode == 0 then search for boundary
5:         
6:         if  then
7:              
8:              
9:         else
10:              Mode
11:         end if
12:     else if Mode == 1 then wall following
13:         
14:         
15:         
16:         
17:         
18:     end if
19:     
20:     return ,
21:end while
Algorithm 1 Wall Follower
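A single control update of Algorithm 1 can be sketched as follows. Since the paper's exact symbols and gains did not survive extraction, the parameter names alpha, beta and s_des, as well as the omission of the stabilizing wiggle term, are assumptions:

```python
def wall_follower_step(state, s, alpha=0.1, beta=0.2, s_des=0.5):
    """One wall-follower control update (sketch under assumed names).

    state: dict with the smoothed sensor mean 's_bar' and the relative
    forward velocity 'v_rel'; s: current binary sensor reading (0 or 1).
    Returns the relative linear and angular velocity commands."""
    # exponential smoothing of the binary sensor signal, cf. Eq. (2)
    state['s_bar'] = (1 - alpha) * state['s_bar'] + alpha * s
    # deviation from the desired mean, cf. Eq. (3)
    d = s_des - state['s_bar']
    # forward velocity grows while the desired mean is tracked, cf. Eq. (4)
    state['v_rel'] = (1 - beta) * state['v_rel'] + beta * (1 - abs(d))
    # angular velocity steers against the deviation, cf. Eq. (5);
    # the stabilizing wiggle term is omitted here for brevity
    w_rel = d
    return state['v_rel'], w_rel
```

Multiplying the returned relative velocities by the platform's maximum velocities yields the actual commands, as described above.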

2.2 Path Integration / Land Navigation

In order to get a first pose estimate for the robot, we use land navigation, comparing the given map with the path driven along the boundary line.

First, we generate a piecewise linear function of the orientation of the boundary line with respect to the accumulated path length. This function represents the shape of the boundary and is defined as

(6)

with

(7)

Here, the Euclidean norm defines the length of each boundary segment. We start with a path length of zero at the first vertex; the orientation function then accumulates, vertex by vertex, the difference in orientation between consecutive segments of the map polygon.

Second, in order to represent the path driven by the robot as vertices connected by straight lines (similar to the map representation), we generate dominant points (DPs). To this end, we simply use the position estimated from the odometry of our robot, starting at the first contact with the boundary line. We also initialize a temporary set which holds all current points that can be combined into a straight line. In every iteration step we add the new position estimate from the odometry data to our temporary set and then check if

(8)

and

(9)

are both true. If this is the case, we add the point to the set of DPs and reset the temporary set. Here, the first condition involves the shortest distance between a point and the line segment spanned by the temporary set. Pseudo code for the algorithm can be found in Algorithm 2. The parameters are problem specific and have to be tuned accordingly.

  • Parameters
    ,

  • Inputs

  • Outputs

1:
2:
3:if new available then
4:     
5:     if  then
6:         
7:     else
8:         
9:         
10:         if  then
11:              
12:         else
13:              
14:              
15:         end if
16:     end if
17:end if
Algorithm 2 DP Generation
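One plausible realization of the DP generation is sketched below; the deviation test and the threshold eps stand in for the unrecovered conditions (8) and (9):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def generate_dps(odometry, eps=0.05):
    """Reduce an odometry trace to dominant points (DPs).

    A new DP is emitted whenever some intermediate point deviates by
    more than eps from the straight line between the last DP and the
    newest point (sketch; the exact conditions are assumptions)."""
    dps = [odometry[0]]
    temp = []  # points accumulated since the last DP
    for p in odometry[1:]:
        temp.append(p)
        if len(temp) < 2:
            continue
        worst = max(point_line_distance(q, dps[-1], temp[-1])
                    for q in temp[:-1])
        if worst > eps:
            dps.append(temp[-2])  # last point still on a straight line
            temp = [temp[-1]]
    dps.append(odometry[-1])
    return dps
```

A noisy trace along two perpendicular walls would thus collapse into three DPs: start, corner, and end.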

Third, every time a new DP is accumulated, we generate a piecewise orientation function as presented above using the current set of DPs, as long as the accumulated length between the DPs is larger than a minimal length. This ensures that the algorithm does not start the comparison too early. This parameter represents the minimal portion of the boundary line required to identify unique positions and has to be chosen or trained according to the given map. We then compare the path's orientation function with the boundary's orientation function for all vertices of the boundary line in order to find a suitable vertex at which the robot could be; to this end, the boundary function is shifted accordingly. We evaluate both functions at linearly distributed points over the path length driven by the robot along the boundary line, which results in vectors of orientation samples. We then calculate the correlation error

(10)

where the second term is the evaluation of the boundary's orientation function over the corresponding range at linearly distributed points. We now have one correlation error between the path actually driven by the robot along the boundary line and every vertex of the polygon map. In Figure 5, these correlation errors are displayed graphically.

Fourth, we search for the vertex with the minimal correlation error and check whether the error is below a certain threshold. This threshold has to be trained according to the given map. If we do not find a suitable vertex, we simply drive further along the wall until finding a vertex which satisfies the condition above. Thereby we limit the stored path of the robot to the length of the circumference of the map. If a suitable vertex is found (from now on called the matching vertex), the first guess of the robot's position can be determined from it. Moreover, we can also define an orientation estimate as

(11)

where .
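The shape comparison of Equations (6) to (10) boils down to comparing orientation-over-length profiles. A condensed sketch (starting the comparison at the first map vertex only; the full method shifts over all vertices, and all names are illustrative):

```python
import numpy as np

def orientation_profile(vertices):
    """Piecewise-constant orientation of a polyline as a function of
    accumulated path length: returns cumulative lengths and headings."""
    v = np.asarray(vertices, dtype=float)
    d = np.diff(v, axis=0)
    lengths = np.hypot(d[:, 0], d[:, 1])
    headings = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    return np.cumsum(lengths), headings

def correlation_error(map_vertices, path_dps, n=100):
    """Mean squared difference between the orientation profiles of the
    driven path (DPs) and the boundary, evaluated at n linearly
    distributed points over the driven length, cf. Eq. (10)."""
    Lm, Hm = orientation_profile(map_vertices)
    Lp, Hp = orientation_profile(path_dps)
    s = np.linspace(0.0, Lp[-1], n, endpoint=False)
    hm = Hm[np.searchsorted(Lm, s, side='right')]  # boundary heading at s
    hp = Hp[np.searchsorted(Lp, s, side='right')]  # path heading at s
    return float(np.mean((hm - hp) ** 2))
```

For a path that follows the boundary exactly, the correlation error vanishes, while a mismatched path yields a clearly larger error.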

Figure 5: Correlation errors between the actual DP and the vertices of the map are shown. Here the vertex with the index fulfills the required conditions and is chosen for the first position estimate.

2.3 Systematic Search

Using the land navigation from above, we now have a first estimate of where the robot is located. Based on this estimate, we are able to start a systematic search in this region to determine a more accurate and certain pose estimate. To this end, we use a particle filter, since other localization techniques, such as the Kalman filter, cannot handle binary measurement data. The general idea of the particle filter [28] is to represent the probability distribution of the posterior by a set of samples, called particles, instead of using a parametric form as the Kalman filter does. Here each particle

(12)

represents a concrete instantiation of the state at the current time step. The belief is then approximated by the set of particles. The Bayes filter posterior is used to include the likelihood of a state hypothesis

(13)

Here and represent the measurement history and the input signal history respectively.

The idea of our method is to drastically reduce the number of particles required for global localization with a particle filter, by sampling particles only in an area around the position estimate retrieved by land navigation as shown above. This is nothing other than the transformation of a global localization problem into a local one. Therefore, we simply use a Gaussian distribution

with the mean given by the estimate and a parametrized covariance matrix. The parameters can be chosen according to the mean and uncertainty of the error of the initial pose estimate, as we will see in Section 3.

Figure 6: Sampled Particles: Particles have been sampled around the first pose estimate generated using land navigation.

In Figure 6, an example distribution of the particles around the estimated pose is shown. After sampling the particles, we are able to further improve the localization by executing a systematic search along the boundary line using the particle filter. The weighting of the particles happens as follows: if a particle would see the same as the sensor has measured, then this particle gets a high weight; if the particle would not see the same, it gets a low weight. Here the high weight has to be larger than the low weight.
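Sampling and reweighting can be sketched as follows; the spread sigma and the weights w_hit and w_miss are assumptions in place of the paper's trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_particles(pose_estimate, n=500, sigma=(0.5, 0.5, 0.2)):
    """Sample n particles (x, y, theta) around the initial pose estimate
    from land navigation; the Gaussian spread sigma is an assumption."""
    mu = np.asarray(pose_estimate, dtype=float)
    return mu + rng.normal(0.0, sigma, size=(n, 3))

def reweight(particles, measurement, inside_fn, w_hit=0.9, w_miss=0.1):
    """Binary-measurement update: particles whose predicted sensor
    reading matches the actual reading get weight w_hit, the others
    w_miss (w_hit > w_miss; both values are assumptions).
    inside_fn maps a particle pose to the predicted binary reading."""
    predicted = np.array([inside_fn(p) for p in particles])
    w = np.where(predicted == measurement, w_hit, w_miss)
    return w / w.sum()  # normalized weights
```

The binary weighting replaces the Gaussian measurement likelihood of a standard particle filter, which is exactly why a Kalman filter cannot be used here.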

3 Results

In this section, we evaluate the proposed algorithms with regard to stability and performance. Stability is defined as the fraction of localization experiments where the robot could localize itself from an initially lost pose in the presence of sensor noise; for that, the Euclidean distance between the true and the estimated pose had to be below a threshold. Performance is defined as the mean accuracy of the pose estimation and the time required to generate the pose estimate. To this end, we created a simulation environment using MATLAB with a velocity and an odometry motion model for a differential drive robot as presented in [28]. For realistic conditions, we calibrated the noise parameters in experiments to match the true motion model of a real Viking MI 422P, a purchasable autonomous lawn mower. For this calibration, we tracked the lawn mower movements using a visual tracking system (OptiTrack) and computed the transition model parameters through maximum likelihood estimation. The parameters can be found in Table 1. We used a fixed sampling frequency for the simulated system together with maximum linear and angular velocities that correspond approximately to the standard for purchasable low-cost robots. The counter divider was set such that the resulting period of the wiggly lines is a trade-off between the enlargement of the scanning surface and the movement that can be performed on a real robot. The relative position of the binary sensor with regard to the robot frame is given by Equation (1).

Vel. Motion Model Odom. Motion Model
0.0346 0.0849
0.0316 0.0412
0.0755 0.0316
0.0566 0.0173
0.0592 -
0.0678 -
Table 1: Measured parameters for the velocity and odometry motion model from [28].

3.1 Wall Follower

Figure 7: Example of a path along the boundary line generated using the presented wall following algorithm with , , and a sensor noise of .

As mentioned before, we are able to generate data useful for localization only at the boundary, since only there can we detect differences in the signal generated by the given binary sensor. Hence, we first evaluate the performance of the presented wall following algorithm. To this end, we use the mean squared error (MSE) to measure the deviation of the path of the sensor from the given boundary line of the map. An example of such a path along the map is given in Figure 7. Let p_i be the i-th position of the sensor and b_i the closest point on the boundary line to p_i; then

MSE = (1/N) Σ_{i=1..N} ||p_i − b_i||², (14)

where N represents the number of time steps required to complete one round along the boundary line. We also evaluate the mean velocity of the robot driving along the boundary,

v̄ = U / T, (15)
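Both metrics are straightforward to compute from a logged sensor path; a minimal sketch, with the boundary discretized into points for the closest-point lookup:

```python
import numpy as np

def wall_following_mse(sensor_path, boundary_pts):
    """MSE of Eq. (14): mean squared distance from each sensor position
    to its closest point on the (discretized) boundary line."""
    P = np.asarray(sensor_path, dtype=float)
    B = np.asarray(boundary_pts, dtype=float)
    # pairwise squared distances, then the closest boundary point per sample
    d2 = ((P[:, None, :] - B[None, :, :]) ** 2).sum(-1).min(axis=1)
    return float(d2.mean())

def mean_velocity(circumference, lap_time):
    """Mean velocity of Eq. (15): circumference over lap time."""
    return circumference / lap_time
```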

where U is the circumference of the map and T the time required by the robot to complete one round. In Figure 8, Figure 9 and Figure 10, the simulation results averaged over 10 simulations for different noise factors and for the parameters in Equation (2) and Equation (4) are presented. The map used is shown in Figure 7. The noise factor determines the randomness of the binary signals: a factor of zero leads to always accurate measurements, whereas a factor of one implies totally random measurement results. Hence, the noise factor signals which fraction of the output signals of the binary sensor is random.

The presented wall follower leads to a stable and accurate wall following behavior, as exemplarily shown in Figure 7. With increasing noise factor, the MSE becomes larger. However, the algorithm is stable enough to steer the robot along the boundary line even at considerable noise factors. The algorithm seems to work best, i.e., leading to a small MSE, for small values of the update rate. However, a small value leads to large changes in the linear and angular velocities, which may not be feasible on a real robot or may cause large odometry errors. We discuss this problem more intensively in Section 4. The second parameter seems to have no strong influence on either the MSE or the mean velocity.

Figure 8: MSE and mean velocity for a noise factor of and varying parameters and . The results shown are the average values over 10 iterations.
Figure 9: MSE and mean velocity for a noise factor of and varying parameters and . The results shown are the average values over 10 iterations.
Figure 10: MSE and mean velocity for a noise factor of and varying parameters and . The results shown are the average values over 10 iterations.

3.2 Land Navigation

Figure 11: Maps used for the simulation. Map 1 is at the top and map 2 at the bottom. The circumferences for the maps are and .

In order to evaluate the performance and stability of the proposed shape comparison method, we simulated the differential drive robot repeatedly in two different maps (Figure 11) with randomly chosen starting positions. We trained the parameters specifically for the given maps, using the proposed wall following algorithm with simulated sensor noise. The resulting parameters are presented in Table 2.

Map 1
Map 2
Table 2: Trained parameters for the presented maps from Figure 11.

We then calculated the difference between the estimated position and the true position, as well as between the estimated orientation and the true orientation. Histograms of these errors are depicted in Figure 12 and Figure 13. The mean time for finding this first pose estimate is around for map 1 and for map 2. The results show that the proposed algorithm is able to compute accurate estimates of the robot's pose. Those estimates can then be used to sample particles for a particle filter, which systematically searches the region of interest for the correct pose.

Figure 12: Evaluation results for map 1: Histogram of the difference between position and orientation estimate and true position and orientation after matching through shape comparison with , , , .
Figure 13: Evaluation results for map 2: Histogram of the difference between position and orientation estimate and true position and orientation after matching through shape comparison with , , , .

3.3 Systematic Search

We now demonstrate how the position estimate from above can be further used to systematically search for a better estimate and for executing planning tasks. As proposed, we first sample particles around the position estimate using Gaussian distributions

(16)

The standard deviation, chosen depending on the measurement results from Section 3.2, ensures that the particles are sampled in an area large enough around the initial pose estimate. In Figure 15 and Figure 16, the particles and the pose estimate are shown as well as the true position of the robot. In the following step we systematically search for the true pose of the robot by following the boundary line. The particle filter algorithm weights the particles and resamples if required. We stop the wall following once we are certain enough about the pose of the robot, i.e., once the variance of the particles is less than a certain threshold. In Figure 17, the robot after the systematic search for a better pose estimate is shown. Now we are able to execute certain tasks with the robot, for example mowing the lawn. Thereby, the robot has to move away from the boundary and the pose estimate becomes more uncertain, as shown in Figure 18. After reaching the boundary line again, the particle filter is able to enhance the pose estimate based on the new information, as depicted in Figure 19.

We also evaluated the performance of the proposed systematic search. To this end, we simulated the differential drive robot repeatedly, sampling the particles as proposed in Equation (16). We then followed the boundary line until the standard deviation of the orientation fell below a threshold. Again, we evaluated the difference between the position and orientation estimate and the true position and orientation. In 3 cases, the particle filter did not find a sufficiently accurate pose estimate; if this happens, we can simply restart the procedure by finding a pose estimate using land navigation. In Figure 14, a histogram depicting the results is shown. The systematic search for the pose of the robot using a particle filter improves the orientation estimate compared to the initial estimate presented in Section 3.2.
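The resampling step and the stop criterion referred to above can be sketched as follows; the paper does not specify the resampling scheme, so low-variance (systematic) resampling and the threshold value are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def systematic_resample(particles, weights):
    """Low-variance (systematic) resampling, a common choice for
    particle filters; draws one random offset and n evenly spaced
    positions through the cumulative weight distribution."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]

def converged(particles, theta_std_max=0.1):
    """Stop criterion: standard deviation of the particle orientations
    (third state component) below a threshold (value assumed)."""
    return float(np.std(particles[:, 2])) < theta_std_max
```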

Figure 14: Histogram of the difference between position and orientation estimate and true position and orientation after the systematic search using the particle filter with , , , .
Figure 15: Randomly sampled particles around the position estimate.
Figure 16: Randomly sampled particles around the position estimate in close-up view.
Figure 17: Pose estimate of the robot after the systematic search.
Figure 18: Moving within the map increases the uncertainty of the pose estimate.
Figure 19: New information collected at the boundary leads to an improvement in pose estimation.

4 Conclusion

We have presented a method for global localization based only on odometry data and one binary sensor. This method, inspired by insect localization strategies, is essential for efficient path planning methods as presented in [10] or [9], which require an accurate estimate of the robot's pose. Our approach combines ideas of path integration, land navigation and systematic search to find a pose estimate for the robot.
Our method relies on a stable and accurate wall following algorithm. We showed that the proposed algorithm is robust even under sensor noise and, if suitable parameters are chosen, leads to an accurate wall following behavior.
The proposed approach for finding a first pose estimate is robust and accurate. Given this pose estimate, we are able to sample particles around this position for starting a systematic search with a particle filter algorithm. To this end, we maintain the wall following behavior until the uncertainty of the pose estimate of the particle filter is below a certain threshold. The proposed search method improves the orientation estimate substantially while maintaining the accuracy of the position estimate. As mentioned in the Introduction, it is not possible to compare our approach directly with other approaches since different types of sensors are used. However, comparing our localization results with the results presented in [25] for a particle filter with a single short-range sensor, we reach a similar accuracy for the position estimate using only a binary sensor.

Using the generated pose estimate, we are now able to execute given tasks by constantly relocating the robot at the boundary lines, for example using bioinspired neural networks for complete coverage path planning [31]. An open research question with regard to the proposed approach is which methods can be used to learn the algorithm parameters on the fly. Also, the algorithm has to be evaluated on a real robotic system in a realistic garden setup.

References

  • [1] F. Bernini (2009-November 3) Lawn-mower with sensor. Google Patents. Note: US Patent 7,613,552 Cited by: §1.
  • [2] C. Bobzin (2013-09) Cataglyphis nodus. Note: CC BY-SA 3.0, https://de.wikipedia.org/wiki/Cataglyphis External Links: Link Cited by: 1(a).
  • [3] M. F. Carr, S. P. Jadhav, and L. M. Frank (2011) Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval. Nature neuroscience 14 (2), pp. 147. Cited by: §1.
  • [4] F. Dellaert, D. Fox, W. Burgard, and S. Thrun (1999) Monte Carlo localization for mobile robots. In Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on, Vol. 2, pp. 1322–1328. Cited by: §1.
  • [5] U. M. Erdem and M. Hasselmo (2012) A goal-directed spatial navigation model using forward trajectory planning based on grid cells. European Journal of Neuroscience 35 (6), pp. 916–931. Cited by: §1.
  • [6] L. H. Erickson, J. Knuth, J. M. O’Kane, and S. M. LaValle (2008) Probabilistic localization with a blind robot. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pp. 1821–1827. Cited by: §1.
  • [7] L. Feng, S. Bi, M. Dong, F. Hong, Y. Liang, Q. Lin, and Y. Liu (2017) A global localization system for mobile robot using lidar sensor. In 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 478–483. Cited by: §1.
  • [8] D. J. Foster and M. A. Wilson (2006) Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature 440 (7084), pp. 680. Cited by: §1.
  • [9] E. Galceran and M. Carreras (2013) A survey on coverage path planning for robotics. Robotics and Autonomous systems 61 (12), pp. 1258–1276. Cited by: §4.
  • [10] J. Hess, M. Beinhofer, and W. Burgard (2014) A probabilistic approach to high-confidence cleaning guarantees for low-cost cleaning robots. In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pp. 5600–5605. Cited by: §4.
  • [11] J. J. Hopfield (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences 79 (8), pp. 2554–2558. Cited by: §1.
  • [12] S. Ito, F. Endres, M. Kuderer, G. D. Tipaldi, C. Stachniss, and W. Burgard (2014) W-rgb-d: floor-plan-based indoor global localization using a depth camera and wifi. In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pp. 417–422. Cited by: §1.
  • [13] A. Johnson and A. D. Redish (2007) Neural ensembles in ca3 transiently encode paths forward of the animal at a decision point. Journal of Neuroscience 27 (45), pp. 12176–12189. Cited by: §1.
  • [14] D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, and R. Wehner (2000) A mobile robot employing insect strategies for navigation. Robotics and Autonomous systems 30 (1-2), pp. 39–64. Cited by: §1.
  • [15] H. Lee and S. Jung (2012) Balancing and navigation control of a mobile inverted pendulum robot using sensor fusion of low cost sensors. Mechatronics 22 (1), pp. 95–105. Cited by: §1.
  • [16] B. L. McNaughton, F. P. Battaglia, O. Jensen, E. I. Moser, and M. Moser (2006) Path integration and the neural basis of the 'cognitive map'. Nature Reviews Neuroscience 7 (8), pp. 663. Cited by: §1.
  • [17] D. P. Miller and M. G. Slack (1995) Design and testing of a low-cost robotic wheelchair prototype. Autonomous robots 2 (1), pp. 77–88. Cited by: §1.
  • [18] E. Nebot and H. Durrant-Whyte (1999) Initial calibration and alignment of low-cost inertial navigation units for land vehicle applications. Journal of Robotic Systems 16 (2), pp. 81–92. Cited by: §1.
  • [19] J. M. O’Kane and S. M. LaValle (2007) Localization with limited sensing. IEEE Transactions on Robotics 23 (4), pp. 704–716. Cited by: §1.
  • [20] J. O'Keefe and L. Nadel (1978) The hippocampus as a cognitive map. Oxford: Clarendon Press. Cited by: §1.
  • [21] B. E. Pfeiffer and D. J. Foster (2013) Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497 (7447), pp. 74. Cited by: §1.
  • [22] A. D. Redish et al. (1999) Beyond the cognitive map: from place cells to episodic memory. MIT press. Cited by: §1.
  • [23] E. Rueckert, D. Kappel, D. Tanneberg, D. Pecevski, and J. Peters (2016-01-15) Recurrent spiking networks solve planning tasks. Nature Publishing Group: Scientific Reports 6 (21142). External Links: Document, Link Cited by: §1.
  • [24] A. Samsonovich and B. L. McNaughton (1997) Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience 17 (15), pp. 5900–5920. Cited by: §1.
  • [25] D. Stavrou and C. Panayiotou (2012) Localization of a simple robot with low computational-power using a single short range sensor. In Robotics and Biomimetics (ROBIO), 2012 IEEE International Conference on, pp. 729–734. Cited by: §1, §4.
  • [26] Z. Su, X. Zhou, T. Cheng, H. Zhang, B. Xu, and W. Chen (2017) Global localization of a mobile robot using lidar and visual features. In Robotics and Biomimetics (ROBIO), 2017 IEEE International Conference on, pp. 2377–2383. Cited by: §1.
  • [27] D. Tanneberg, J. Peters, and E. Rueckert (2019-01-01) Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks. Neural Networks - Elsevier 109, pp. 67–80. Note: Impact Factor of 7.197 (2017) External Links: Document, ISBN 0893-6080, Link Cited by: §1.
  • [28] S. Thrun, W. Burgard, and D. Fox (2005) Probabilistic robotics. MIT press. Cited by: §2.3, Table 1, §3.
  • [29] B. Tribelhorn and Z. Dodds (2007) Evaluating the roomba: a low-cost, ubiquitous platform for robotics research and education.. In ICRA, pp. 1393–1399. Cited by: §1.
  • [30] R. Wehner (1987) Spatial organization of foraging behavior in individually searching desert ants, cataglyphis (sahara desert) and ocymyrmex (namib desert). In From individual to collective behavior in social insects: les Treilles Workshop/edited by Jacques M. Pasteels, Jean-Louis Deneubourg, Cited by: §1.
  • [31] J. Zhang, H. Lv, D. He, L. Huang, Y. Dai, and Z. Zhang (2017) Discrete bioinspired neural network for complete coverage path planning. International Journal of Robotics and Automation 32 (2). Cited by: §4.