Topological Navigation Graph

15 October 2019, by Povilas Daniusis et al.

In this article, we focus on the utilisation of reactive trajectory imitation controllers for goal-directed mobile robot navigation. We propose a topological navigation graph (TNG) - an imitation-learning-based framework for navigating through environments with intersecting trajectories. The TNG framework represents the environment as a directed graph composed of deep neural networks. Each vertex of the graph corresponds to a trajectory and is represented by a trajectory identification classifier and a trajectory imitation controller. For trajectory following, we propose the novel use of neural object detection architectures. The edges of the TNG correspond to intersections between trajectories and are each represented by a classifier. We provide an empirical evaluation of the proposed navigation framework and its components in simulated and real-world environments, demonstrating that TNG allows us to utilise non-goal-directed, imitation-learning methods for goal-directed autonomous navigation.







1 Introduction

Mobile robot navigation combines perception and action in a single algorithm, which makes it an area of both fundamental and practical importance and an attractive topic for artificial intelligence research. Despite significant efforts, both from academia and industry, this area still poses challenges that need to be resolved in order to create autonomous systems capable of operating efficiently in real-world environments. For many years, the mainstream direction of robot navigation research has been largely focused on methods relying on accurate sensors and direct (e.g. probabilistic-geometric) models [1, 2, 3, 4]. Despite certain advantages (e.g. the interpretability of geometric maps), this approach has serious limitations, such as a lack of flexibility and drastically increasing complexity when large numbers of situations have to be identified and handled by the model.

On the other hand, when humans learn various complex sensorimotor behaviours, including the skill to navigate, adaptivity and imitation play crucial roles [5]. Once children learn several basic behaviours, they soon figure out how to connect them in order to reach different, increasingly complex goals [6]. Biological neural networks are excellent learners from examples. For more than three decades, artificial neural networks have been utilised to build autonomous mobility systems or their components [7, 8, 9, 10, 11, 12, 13]. The recent progress of neural network methods, fuelled by deep learning techniques, has been successfully adopted by mobile robot autonomy researchers, resulting in several important achievements in this field. For example, [10] applies behaviour cloning and modern convolutional neural networks (CNNs), achieving vehicle autonomy on real roads, [14] describes a transfer learning architecture for learning to drive a real vehicle solely in a simulator, [15] contributes a neural architecture for conditional imitation learning, [16] introduces a method to learn navigation policies from self-supervision, and [17] proposes a recurrent neural network-based solution with a demonstration using real cars. These works, in various aspects, also rely on imitation learning, which highlights a tendency: benefiting from the current renaissance of connectionist systems, imitation learning is becoming an increasingly popular robot autonomy research direction.


One of the most attractive properties of the imitation learning approach is the possibility of learning complex sensorimotor behaviours from demonstration data instead of direct programming. However, many imitation learning algorithms (e.g. [10, 14, 19]) are not goal-directed by design, which prevents practitioners from using them in applications that require goal-directed behaviour. In this article, we address this problem by suggesting a topological navigation graph (TNG) framework, which allows one to create goal-directed navigation systems from non-goal-directed, imitation-learning components.

The main contributions of this study include the TNG framework, the application of neural object detection architectures to obtain more robust trajectory imitation, and the experimental evaluation of the suggested algorithms both in a simulator and the real world by using Neurotechnology’s SentiBotics mobile robot development kit [20].

Figure 1: The architecture of the regression controller (1), represented as the composition of a pre-trained CNN (MobileNet V2) and a trainable MLP.

The rest of the paper is organised as follows. Section 2 describes the core component of TNG - trajectory imitation controllers. Therein, we describe the regression controller and propose a new, object-detection-based trajectory imitation controller. Section 3 is devoted to TNG and its implementation, and Section 4 provides an empirical evaluation of the proposed algorithms. Finally, Section 5 concludes the article.

2 Trajectory Imitation Controllers

In this section, we focus on learning the reactive skill of visuomotor trajectory following independently from the overall navigation method. As becomes apparent in Section 3, this independence adds modularity: every reactive task serves as a building block for composing complex behaviours as a whole.

The trajectory following skill is represented by the trajectory imitation controller u = f(x; θ), where x, u, and θ respectively denote the camera image input, the motor command output, and the learnable parameters of the parametric function f (e.g. a deep neural network). In order to obtain the trajectory imitation controller, the robot is first driven by an expert while the corresponding camera image and motor command data pairs (x_i, u_i), i = 1, …, N, are collected. The data pairs are utilised to estimate the model parameters θ. Once the parameters have been estimated, the robot is expected to be able to follow the learned trajectory.

The problem of learning imitation controllers has a long history [7] and is currently being actively researched [10, 21, 15, 16, 22, 14]. Although various solutions have been proposed, in general, it remains unsolved both in the fundamental and technical senses. Important challenges associated with the trajectory following problem are related to natural environmental variations, such as lighting conditions, changes in certain environmental elements (e.g. objects like furniture), as well as the appearance of obstacles, pedestrians, and other dynamics. In most cases, it is not technically feasible to directly encode all the environmental variation into training sets of acceptable size. Therefore, we seek models that can generalise and distil the desired behaviour from realistic data sets.

We compare two models for this task. Our first model is derived from [10] and regresses motor commands with a CNN; our second model is based on an object detection neural network.

2.1 Regression Controller

Trained according to behaviour cloning [18], the regression controller is a neural network model:

f(x; θ) = g(h(x); θ),     (1)

where g is a multilayer perceptron (MLP) [23] with trainable parameters θ, and h is a CNN encoder (e.g. ResNet [24] or MobileNetV2 [25]) whose weights are pre-trained on the ImageNet [26] classification task and kept fixed. Having a training set (x_i, u_i), i = 1, …, N, we obtain an estimate of θ by minimising the ridge regression loss:

L(θ) = Σ_{i=1}^{N} ||u_i − f(x_i; θ)||² + λ||θ||².     (2)

We optimise (2) with the Adam optimisation algorithm [28]. Compared to similar work [10], where the entire parameter set is optimised, our approach helps to reduce the training duration without affecting performance. This may be because pre-trained CNN encoders extract rather general features, which efficiently represent points on the trajectory. Following the insights of [15], we also use dataset aggregation (DAgger [29]) and augmentation (input/output shifting, random lighting, and regional dropout), which improve the model's stability and robustness.
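As a minimal sketch of the training step above, the following fits a ridge-regularised head on top of frozen encoder features. For brevity it uses a linear head with a closed-form solution rather than the MLP trained with Adam; all shapes, the noise level, and the penalty value are illustrative assumptions, not values from our experiments.

```python
import numpy as np

# Ridge-regularised behaviour cloning on frozen features:
#   min_W ||U - Phi W||^2 + lam ||W||^2
# Phi stands in for the frozen-encoder features h(x_i),
# U for the expert motor commands u_i.

rng = np.random.default_rng(0)
N, d_feat, d_cmd = 200, 64, 2              # samples, feature dim, command dim (assumed)
Phi = rng.normal(size=(N, d_feat))         # stand-in for h(x_i)
W_true = rng.normal(size=(d_feat, d_cmd))
U = Phi @ W_true + 0.01 * rng.normal(size=(N, d_cmd))  # noisy expert commands

lam = 1e-2                                 # ridge penalty lambda
# Closed-form ridge solution: (Phi^T Phi + lam I) W = Phi^T U
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d_feat), Phi.T @ U)

print("train MSE:", float(np.mean((Phi @ W - U) ** 2)))
```

With a frozen encoder, only the head's parameters are solved for, which mirrors why training is fast in our setup.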

We empirically identified several limitations of the aforementioned approach. Firstly, controllers trained with this approach are highly sensitive to rotational errors; that is, slight deviations from the trajectory may cause cascading errors [18] and eventually drive the robot off the trajectory with no possibility of self-recovery. We used input/output shifting data augmentation to address this problem, but it helps only to some extent. The second limitation is the model's sensitivity to changes in the environment, which occur due to illumination variations, moved furniture, pedestrians, and so forth. To tackle this problem, additional iterations of DAgger can be performed, but with further unavoidable changes in the environment the problem reoccurs. Thirdly, the trajectory following performance during execution does not necessarily correlate with the offline evaluation of the loss function over the test set [30], which complicates model evaluation.

2.2 Object Detection-Based Controller

In order to avoid the aforementioned problems, we utilise the RetinaNet object detection architecture [31]. Conceptually, our approach is similar to that of [22]. However, instead of detecting specific objects, which may not be present in the navigation scene, we train the object detector to detect the robot's direction of movement, represented as a single-class bounding box on the input image.

The training data consist of camera images paired with bounding box coordinates derived from the robot's rotation axis in the image. The coordinates are centred if the robot is driven straight, and horizontally shifted in the direction of rotation if the robot is turning (see Figure 2).

(a) steering: LEFT
(b) steering: NONE
(c) steering: RIGHT
Figure 2: Samples of a training set as per rotation axis.
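The label-generation rule above can be sketched as follows. The box size, vertical placement, and shift magnitude are hypothetical values chosen for illustration, not the ones used in our training sets.

```python
def direction_box(img_w, img_h, steering, box_w=80, box_h=80, shift=100):
    """Create a single-class 'direction of movement' box label.

    steering: -1 = LEFT, 0 = NONE (straight), +1 = RIGHT, as in Figure 2.
    The box is centred horizontally for straight driving and shifted towards
    the rotation direction for turns. Returns (x_min, y_min, x_max, y_max).
    All sizes are assumed values for this sketch.
    """
    cx = img_w // 2 + steering * shift   # horizontal shift in the turn direction
    cy = img_h // 2                      # vertical placement is an assumption
    return (cx - box_w // 2, cy - box_h // 2, cx + box_w // 2, cy + box_h // 2)

print(direction_box(640, 480, 0))    # centred box for straight driving
print(direction_box(640, 480, -1))   # box shifted left for a left turn
```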

We observed shifting to be necessary for successful trajectory following at points where the robot has to make sharp turns (e.g. when steering around a corner). The model is pre-trained on the COCO dataset [32] for the object detection task. We optimise all parameters using stochastic gradient descent with momentum.

During controller execution, the output of the object detector (i.e. the coordinates of the bounding box showing the direction of movement) is coupled with a PID controller [33] that centres the detected bounding box, resulting in trajectory imitation. The error input for the PID controller is defined as e = x_b − x_c, where x_b and x_c are the horizontal centre coordinate of the detected bounding box and the horizontal centre coordinate of the input image, respectively (see Figure 3).
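A minimal sketch of this execution step, with hypothetical PID gains and image coordinates: only the horizontal offset between the detected box centre and the image centre drives the turn command.

```python
class PID:
    """Textbook PID controller used here to centre the detected box.

    The gains and the time step are illustrative assumptions, not tuned values.
    """
    def __init__(self, kp=0.004, ki=0.0, kd=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, None

    def step(self, error, dt=0.1):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Error is the horizontal offset between the detected box centre
# and the image centre (hypothetical coordinates).
img_centre_x = 320
box_centre_x = 420                       # detection says the path is to the right
pid = PID()
turn_cmd = pid.step(box_centre_x - img_centre_x)
print(round(turn_cmd, 3))                # positive -> steer right to re-centre
```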

Figure 3: Visualisation of the detected direction of movement (bounding box) and the robot's drift relative to the trajectory (arrow) during execution.

Compared to the approach of Section 2.1, an object-detection-based controller by design allows recovery from larger rotational errors, as the learnt bounding boxes can be detected under various geometric transformations of the training examples, without any recovery-aiding data augmentations. We observed that this controller tends to produce low-confidence detections for images far outside the training distribution, which makes it possible to avoid potentially incorrect decisions.

3 Topological Navigation Graph (TNG)

In this section, we review similar work of other authors and formally describe the proposed TNG framework for creating a goal-driven navigation system from separately learnt reactive skills.

In recent years, deep-learning-based autonomous robot navigation has been approached from different perspectives, resulting in a variety of proposed methods. The work of [11], which includes a mapper and a planner, advances previous approaches. The researchers propose a joint, end-to-end architecture that is differentiable and can plan with a belief map in environments where only an incomplete set of world observations is provided. A noteworthy feature of [11] is that the method is able to generalise to new environments. Another approach [13] proposes a semi-parametric topological memory (SPTM) architecture for navigation, where the environment is mapped onto a non-parametric graph in which every position is a node; a parametric deep neural network is then used to retrieve nodes in order to reach a specific goal node. [34] attempts to tackle the challenge of learning goal-directed policies from data representing a single traversal of an environment spanning over 2 kilometres. It does so with a topological approach in which points in trajectories at a particular distance are represented as nodes, and path planning is learnt with random episodic training.

Figure 4: Example of an environment of four intersecting trajectories (directed colored lines) and corresponding TNG.

The mentioned approaches also have certain limitations. Whilst end-to-end learning can simultaneously tune the entire architecture's parameter set, this advantage often comes at the cost of requiring large datasets and longer training durations [35]. Reinforcement-learning-based methods such as [11] depend on large amounts of training resources for their joint architecture, since the parameters are tuned by interacting with a well-mapped, photo-realistic environment. Moreover, the method also relies on hard-to-satisfy assumptions, such as the existence of perfect odometry. While [34] is also a reinforcement-learning-based method, it reduces the resource burden by leveraging one-shot reinforcement learning [36]. Although it demonstrates planning, the low-level commands for traversing from node to node are issued by a human operator rather than learnt. [13] relies on self-supervised learning, but no real-world demonstrations are shown in that work.

Keeping in mind the above limitations, we propose a structure of perception and action modules, which allows one to plan and perform goal-driven navigation.

3.1 Proposed framework

Let us assume that the environment is covered by N trajectories, and let π_1, …, π_N be trained trajectory imitation controllers (see Section 2), u = π_i(x), where x is the input image and u is the control signal for behaviour on the i-th trajectory. We also assume that the trajectories intersect in such a way that it is possible to reach any given trajectory starting from any other, possibly by switching controllers at intersections. An example of such an environment is depicted in Figure 4. Let us assign to every i-th trajectory a binary classifier c_i(x; θ_i) with parameters θ_i, which allows the system to detect that the robot is on this trajectory. The situations where the robot is at an intersection of the i-th and j-th trajectories can be recognised by another binary classifier c_{i,j}(x; θ_{i,j}), where θ_{i,j} are the corresponding parameters.

Utilising the aforementioned components, we define the TNG as a directed graph G = (V, E), with nodes corresponding to pairs (π_i, c_i), and edges to the intersection classifiers c_{i,j} (see Figure 4).

Having the TNG and the current sensory input x, we can topologically localise the robot by evaluating c_i(x) and c_{i,j}(x), which indicate the trajectory the robot is currently on, as well as the trajectories to which it can immediately navigate by executing the corresponding imitation controllers. Similarly, given an externally provided goal image x_g, we can identify the target trajectory by evaluating c_i(x_g).

After the shortest path in the TNG between the current and target vertices is computed (e.g. by Dijkstra's algorithm [37]), we execute the controllers and classifiers contained in its vertices and edges, switching controllers at every intersection leading towards the goal, as recognised by the corresponding intersection classifiers, until the goal trajectory is reached. Afterwards, we continue to execute the goal trajectory controller until the exact location is recognised by the goal-reaching function r(x, x_g; θ_r), where θ_r are the corresponding parameters.
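The plan-then-switch procedure can be sketched as follows. The four-trajectory graph below is a hypothetical TNG (it mirrors no specific figure), edges have unit cost, and controller execution is reduced to comments.

```python
import heapq

# Hypothetical TNG: vertices are trajectories, directed edges are
# recognisable intersections where the robot can switch controllers.
tng = {
    "T1": ["T2", "T4"],
    "T2": ["T3"],
    "T3": ["T1"],
    "T4": ["T3"],
}

def shortest_path(graph, start, goal):
    """Dijkstra with unit edge costs; returns the vertex sequence to follow."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == goal:
            break
        for w in graph.get(v, []):
            if d + 1 < dist.get(w, float("inf")):
                dist[w], prev[w] = d + 1, v
                heapq.heappush(heap, (d + 1, w))
    path, v = [goal], goal
    while v != start:
        v = prev[v]
        path.append(v)
    return path[::-1]

# Execution loop (schematic): run the controller of the current trajectory
# until the intersection classifier for the next edge fires, then switch.
print(shortest_path(tng, "T2", "T4"))  # ['T2', 'T3', 'T1', 'T4']
```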

3.2 Notes on Implementation

3.2.1 Classifiers and goal-reaching function

Trajectory classifiers are constructed using the form of (1), with a SoftMax activation in the output layer and nonlinear activations in the hidden layers; the number of neurones per layer is determined by N, the number of trajectories in the TNG. The outputs of the trajectory classifiers are the SoftMax probabilities c_i(x), i = 1, …, N, over the trajectories. The parameters of the classification network are optimised by minimising the cross-entropy loss [23], while the parameters of the encoder network are pre-trained on the ImageNet [26] classification task and kept fixed, as for the trajectory imitation controllers. We select MobileNetV2 [25] because it allows us to achieve an acceptable execution speed using low-cost embedded neural network inference hardware.
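As an illustration of how the classifier outputs are used for topological localisation, the SoftMax scores can be read as probabilities over trajectories and the robot assigned to the most probable one. The logits below are illustrative stand-ins, not real classifier outputs.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Topological localisation sketch: the classifiers score the current image,
# and softmax turns the scores into P(trajectory | image).
logits = np.array([2.1, -0.3, 0.4, 0.1])   # illustrative scores for T1..T4
probs = softmax(logits)
print("current trajectory: T%d" % (int(np.argmax(probs)) + 1))
```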

Since trajectory intersection classifiers require individual training for each intersection, they would also require individually labelled data points for each one. In order to minimise manual data labelling in our implementation, we learn intersections with a proprietary object recognition engine derived from [38], using FAST corners [39] and BRIEF features [40] as descriptors. This approach allows for the efficient memorisation and recognition of intersections, while each is represented by only a few images. We experimentally found the same principle to be quite efficient for modelling the goal-reaching function as well.

3.2.2 Trajectory Imitation Controllers

We implement TNG with both variants of trajectory imitation controllers described in Section 2.

Regression Controller.

In the case of the regression controller (1), we use an MLP architecture with a fixed number of neurones and a common activation function in every layer. The output of the network is clipped to the interval corresponding to the range of the SentiBotics control pad commands.

Object Detection-Based Controller.

In the case of the object-detection-based controller, we use the architecture described in Section 2.2. The controller is trained using bounding boxes of a fixed size in pixels. The output of the PID controller is also clipped to the range of the SentiBotics control pad commands.

4 Empirical Evaluation

Our implementation of the trajectory imitation controllers and TNG uses Neurotechnology's SentiBotics Navigation SDK 3.0 [20]. We conduct the evaluation in real environments using the mobile robot platform prototype, and for simulations we use the Gazebo simulation modules from the aforementioned SDK.

We start by describing the evaluation environments and the data collection process, then compare the reactive trajectory imitation controllers described in Section 2.1 and Section 2.2, and finally evaluate the navigation capability of TNG.

4.1 Environments and Data

To evaluate trajectory imitation controllers, we utilise three real-world environments and one simulated environment. These are closed-loop trajectories: two are situated inside an office (referred to as L1 and L2), one in a shopping centre (L3), and the simulated environment (L4) consists of a large room with objects such as couches and tables. The trajectories are depicted in Figure 6. In turn, for the evaluation of the navigation capabilities of TNG, we utilised an office environment with multiple connecting corridors, whose map can be seen in Figure 7. The real-world environments are dynamic in terms of lighting, since they are open to natural light and the data are collected during the day. In addition, there are pedestrians walking around during data collection and experimentation, while the simulated environment is completely static.

Figure 5: Number of data points used to achieve reported scores
(a) L1
(b) L2
(c) L3
(d) L4
Figure 6: Environments used to train the trajectory imitation controllers. (a) & (b) are situated inside an office, (c) is situated in a shopping mall, and (d) is a simulated environment.

To collect the data for each of the trajectories to be learnt, we drive the robot through each trajectory three times and record the images seen from the robot's front-facing RGB camera, as well as the corresponding motor commands.

4.2 Experiments with Trajectory Imitation Controllers

We train both controllers as described earlier. The base data remain the same, but the regression controller additionally uses data aggregated over a few iterations of DAgger; the number of iterations depends on the complexity of the trajectory to be learnt.

Once both controllers have been trained for a trajectory, we run each controller on the closed trajectory with a target of completing 10 laps. We observe the execution actively and intervene only when we notice that the controller has made a mistake. Each time there is an intervention, we record the time required to correct the robot and bring it back onto the trajectory. The evaluation experiments are performed twice: first immediately after data collection, and again 4 weeks later. This checks whether the performance holds up after changes in the environment, so we also compare the differences between the two evaluations. We do not repeat the experiments in the simulated environment, since it is static and does not change with time.

To compare the performance of trajectory imitation controllers, we use percentage autonomy [10] as a controller quality measure:

PA = 100 · (1 − t_h / t),

where t_h is the duration of time during which the robot is controlled by a human (i.e. when correction is needed), and t denotes the total duration of execution.
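The percentage autonomy measure reduces to a one-liner; the intervention and run durations below are made-up example values.

```python
def percentage_autonomy(t_human, t_total):
    """PA = 100 * (1 - t_human / t_total), the measure from [10].

    t_human: seconds of corrective human control; t_total: seconds of execution.
    """
    return 100.0 * (1.0 - t_human / t_total)

# e.g. 12 s of corrective teleoperation over a 10-minute run:
print(round(percentage_autonomy(12.0, 600.0), 1))  # -> 98.0
```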

Trajectory Controller PA (%) PA (%) Difference
name type before after (%)
L1 Reg. 99.9 97.0 2.9
L2 Reg. 99.8 98.7 1.1
L3 Reg. 99.9 99.3 0.6
L1 Obj. det. 98.0 97.1 0.9
L2 Obj. det. 99.7 99.5 0.2
L3 Obj. det. 99.6 99.3 0.3
L4 Reg. 99.9 - -
L4 Obj. det. 100.0 - -
Table 1: Percentage autonomy scores calculated before and after an interval of 4 weeks.

Discussion. Table 1 displays the percentage autonomy score obtained for each trajectory before and after the interval, along with the difference in scores. Both controllers perform nearly perfectly in the simulated environment, possibly due to its static nature, while in the real-world environments the performance is comparatively lower. Across the two evaluations, the regression controller shows greater performance degradation than the object-detection-based controller.

We speculate that the regression controller's higher score in the first evaluation could be due to the several DAgger iterations performed on every trajectory's controller, improving it to near perfection but causing it to over-fit to the appearance of the environment at the time of the last DAgger iteration. By the second evaluation, in contrast, the appearance had changed due to environmental conditions, as the tests were carried out at a different time of day.

The object-detection-based setting poses trajectory imitation as the problem of detecting the direction of the path rather than regressing commands directly from an image, making it simple enough to produce commands heuristically. This setting makes better use of the initial dataset and does not need aggregated data to reach high autonomy, whereas the regression controller takes several DAgger iterations to reach similar autonomy. With smaller differences in percentage autonomy between the evaluations before and after the interval, the object-detection-based controller shows better tolerance to environmental changes, while creating fewer opportunities for cascading errors to occur.

We observed that the regression controllers' mistakes were often unpredictable, whereas the object-detection-based controllers made mistakes at specific locations in the trajectories. Hence, the performance could be further enhanced by applying DAgger to the object-detection-based controller. We also noticed that object-detection-based controllers with better loss over the test sets show better real-world performance, which is not the case for the regression controller [30]; this can be investigated further in future work.

4.3 Experiments with TNG

We cover a subset of an office environment with five trajectories (denoted T1-T5) and construct the corresponding TNG, as depicted in Figure 7. For each ordered trajectory pair, we randomly selected initial and goal positions and conducted autonomous navigation episodes between them, recording the average percentage autonomy and the travelled distance in metres (Table 2), as well as the corresponding initial, goal, and final images, which indicate the accuracy of each navigation experiment. These images are depicted in Figures 8, 9, 10, 11 and 12, in the first, second and third columns respectively. Blue rectangles denote the visually specified and recognised navigation goals. We conducted all the experiments in a natural office environment during working hours.

The obtained empirical results indicate that the TNG framework indeed allows one to utilise pre-trained reactive trajectory imitation controllers to achieve goal-directed navigation in real-world environments.

Percentage autonomy
T1 T2 T3 T4 T5
T1 - 99.1 96.4 93.4 92.9
T2 96.0 - 96.3 98.2 97.9
T3 94.9 92.1 - 94.6 98.3
T4 99.1 97.0 97.2 - 97.3
T5 95.8 95.9 98.7 94.6 -
Distance (m.)
T1 T2 T3 T4 T5
T1 - 18.5 65.6 75.2 8.5
T2 85.3 - 47.7 62.2 82.2
T3 80.1 71.6 - 59.9 73.8
T4 14.8 38.2 81.1 - 39.2
T5 27.2 31.5 61.0 96.1 -
Table 2: Statistics of the TNG navigation experiments. Rows and columns correspond to the source and destination trajectories, respectively. On the left we report averaged percentage autonomy values, and on the right the travelled distance in metres.
Figure 7: Schematic of experiment trajectories T1-T5 depicted in different colors, and corresponding TNG model.
Figure 8: Results of autonomous navigation from trajectory T1 to the goals from the remaining trajectories (T2,T3,T4,T5).
Figure 9: Results of autonomous navigation from trajectory T2 to the goals from the remaining trajectories (T1,T3,T4,T5).
Figure 10: Results of autonomous navigation from trajectory T3 to the goals from the remaining trajectories (T1,T2,T4,T5).
Figure 11: Results of autonomous navigation from trajectory T4 to the goals from the remaining trajectories (T1,T2,T3,T5).
Figure 12: Results of autonomous navigation from trajectory T5 to the goals from the remaining trajectories (T1,T2,T3,T4).

Discussion. In contrast to metric models (e.g. [2, 11, 3]), where localisation is performed by geometrically estimating a robot’s pose on a map, or to other topological (e.g. [13, 41, 34]) or hybrid (e.g. [42, 43]) approaches, the trajectory classifiers and trajectory intersection classifiers of TNG provide the minimal localisation information, required to keep or switch reactive behaviour in a way that eventually leads to reaching the given goal. The modularity of the proposed framework allows one to augment it incrementally by adding the required imitation controllers and classifiers to the graph. Although the framework is composed of "black box" modules, its operation is transparent and easily interpretable at the system level, which can be regarded as a practical advantage, compared to entirely "black box" models (e.g. [11]). These properties may help to apply TNG to larger areas. However, TNG assumes that the navigation environment should consist of separable, intersecting trajectories. Therefore, this framework may not cope well with arbitrary environment coverings (e.g., two parallel nearby trajectories would be hard to discriminate). Moreover, TNG is geometrically sub-optimal, and requires significant human effort - both when collecting training data and when designing the navigation graph. However, there are many realistic scenarios (e.g. navigation in buildings and roads), where the aforementioned intersecting trajectory environment assumption is naturally satisfied.

5 Conclusion

In this article, we studied how to utilise reactive trajectory imitation controllers for goal-directed autonomous robot navigation. As a solution, we suggested a novel TNG framework. We also proposed the application of neural object detection architectures for visuomotor trajectory imitation. We performed an empirical evaluation of the suggested algorithms both in a simulator and in reality. The experiments conducted reveal that neural object detection architectures can be efficiently applied to visuomotor trajectory imitation, and that the proposed TNG framework allows one to compose reactive trajectory imitation modules into a goal-directed navigation system capable of achieving visually specified goal states.

Our future work will include research on more efficient and reliable imitation learning approaches able to handle more dynamic environments, optimising TNG by adding a fully automatic environment graph construction capability and extending it with a hierarchical component.


Funding

This research has been funded by Neurotechnology.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Acknowledgements

We are also grateful to Neurotechnology for providing resources and support for this research.


  • [1] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.
  • [2] Raul Mur-Artal, J. Montiel, and Juan Tardos. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot., 31:1147–1163, 2015.
  • [3] Jakob Engel and Daniel Cremers. LSD-SLAM: Large-scale direct monocular SLAM. In ECCV, pages 834–849, 2014.
  • [4] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J.J. Leonard. Past, present, and future of simultaneous localization and mapping: Towards the robust-perception age. IEEE Trans. Robot., 32(6):1309–1332, 2016.
  • [5] Samuel J Gershman, Eric J Horvitz, and Joshua B Tenenbaum. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245):273–278, 2015.
  • [6] Cecilia I Calero, Diego E Shalom, Elizabeth S Spelke, and Mariano Sigman. Language, gesture, and judgment: Children’s paths to abstract geometry. J. Exp. Child Psychol., 177:70–85, 2019.
  • [7] Dean Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In NIPS, 1988.
  • [8] Philippe Gaussier and Stéphane Zrehen. Perac: A neural architecture to control artificial animals. Robot. Auton. Syst., 16:291–320, 12 1995.
  • [9] Y. LeCun, E. Cosatto, J. Ben, U. Muller, and B. Flepp. DAVE: Autonomous off-road vehicle control using end-to-end learning. Technical Report DARPA-IPTO Final Report, Courant Institute/CBLL, 2004.
  • [10] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars. CoRR, abs/1604.07316, 2016.
  • [11] S. Gupta, J. Davidson, S. Levine, R. Sukthankar, and J. Malik. Cognitive mapping and planning for visual navigation. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pages 7272–7281, July 2017.
  • [12] Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J Chadwick, Thomas Degris, Joseph Modayil, et al. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429–433, 2018.
  • [13] Nikolay Savinov, Alexey Dosovitskiy, and Vladlen Koltun. Semi-parametric topological memory for navigation. In ICLR, 2018.
  • [14] Alex Bewley, Jessica Rigley, Yuxuan Liu, Jeffrey Hawke, Richard Shen, Vinh-Dieu Lam, and Alex Kendall. Learning to drive from simulation without real world labels. In ICRA, 2019.
  • [15] Felipe Codevilla, Matthias Miiller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In ICRA, pages 1–9. IEEE, 2018.
  • [16] Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, and Trevor Darrell. Zero-shot visual imitation. In ICLR, 2018.
  • [17] Mayank Bansal, Alex Krizhevsky, and Abhijit S. Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. ArXiv, abs/1812.03079, 2018.
  • [18] T. Osa, J. Pajarinen, G. Neumann, J.A. Bagnell, P. Abbeel, and J. Peters. An algorithmic perspective on imitation learning. Found. Trends Robot., 7(1-2):1–179, March 2018.
  • [19] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In NIPS, pages 4565–4573. 2016.
  • [20] Neurotechnology. SentiBotics Navigation SDK 3.0, 2018. [Online; accessed 28-September-2019].
  • [21] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer. Imitating driver behavior with generative adversarial networks. In IV, pages 204–211, June 2017.
  • [22] Dequan Wang, Coline Devin, Qi-Zhi Cai, Fisher Yu, and Trevor Darrell. Deep object centric policies for autonomous driving. CoRR, abs/1811.05432, 2018.
  • [23] Kevin P. Murphy. Machine learning: a probabilistic perspective. MIT Press, Cambridge, Mass., 2013.
  • [24] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pages 770–778, June 2016.
  • [25] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pages 4510–4520, June 2018.
  • [26] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pages 248–255. IEEE, 2009.
  • [27] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning, volume 1. Springer Series in Statistics. Springer, New York, 2001.
  • [28] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • [29] Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, pages 627–635, 2011.
  • [30] Felipe Codevilla, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. On offline evaluation of vision-based driving models. In ECCV, volume 15, pages 246–262, 2018.
  • [31] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In ICCV, Oct 2017.
  • [32] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Comput. Vis. ECCV, pages 740–755. Springer, 2014.
  • [33] Karl Johan Åström and Richard M. Murray. Feedback systems: an introduction for scientists and engineers. Princeton University Press, 2010.
  • [34] Jake Bruce, Niko Sünderhauf, Piotr W. Mirowski, Raia Hadsell, and Michael Milford. Learning deployable navigation policies at kilometer scale from a single traversal. In CoRL, pages 346–361, 2018.
  • [35] Tobias Glasmachers. Limits of end-to-end learning. In ACML, volume 77 of PMLR, pages 17–32, 2017.
  • [36] Jake Bruce, Niko Sünderhauf, Piotr Mirowski, Raia Hadsell, and Michael Milford. One-shot reinforcement learning for robot navigation with interactive replay. arXiv preprint arXiv:1711.10137, 2017.
  • [37] Steven M. LaValle. Planning Algorithms. Cambridge University Press, New York, NY, USA, 2006.
  • [38] M. Labbé. Find-Object, 2011. [Online; accessed 29-September-2019].
  • [39] Edward Rosten and Tom Drummond. Machine learning for high-speed corner detection. In Comput. Vis. ECCV, pages 430–443, Berlin, Heidelberg, 2006. Springer-Verlag.
  • [40] Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua. Brief: Binary robust independent elementary features. In Comput. Vis. ECCV, pages 778–792. Springer, 2010.
  • [41] Friedrich Fraundorfer, Chris Engels, and David Nistér. Topological mapping, localization and navigation using image collections. In IROS, pages 3872–3877, 2007.
  • [42] Sebastian Thrun. Learning metric-topological maps for indoor mobile robot navigation. Artif. Intell., 99:21–71, 1998.
  • [43] M. J. Milford and G. F. Wyeth. Mapping a suburb with a single camera using a biologically inspired slam system. IEEE Trans. Robot., 24(5):1038–1053, Oct 2008.