A Hierarchical Network for Diverse Trajectory Proposals

06/09/2019
by   Sriram N. N., et al.

Autonomous explorative robots frequently encounter scenarios where multiple future trajectories can be pursued. Often these are cases with multiple paths around an obstacle or trajectory options towards various frontiers. Humans in such situations can inherently perceive and reason about the surrounding environment to identify several possibilities of either manoeuvring around the obstacles or moving towards various frontiers. In this work, we propose a two-stage Convolutional Neural Network architecture which mimics such an ability to map the perceived surroundings to multiple trajectories that a robot can choose to traverse. The first stage is a Trajectory Proposal Network which suggests diverse regions in the environment which can be occupied in the future. The second stage is a Trajectory Sampler Network which provides a fine-grained trajectory over the regions proposed by the Trajectory Proposal Network. We evaluate our framework in diverse and complicated real-life settings. For the outdoor case, we use the KITTI dataset and our own outdoor driving dataset. In the indoor setting, we use an autonomous drone to navigate various scenarios and also a ground robot which can explore the environment using the trajectories proposed by our framework. Our experiments suggest that the framework develops a semantic understanding of obstacles and open regions and identifies diverse trajectories that a robot can traverse. Our comparisons show the performance gains of the proposed architecture over a diverse set of baseline methods.


I Introduction

Autonomous navigation requires the ability to explore by perceiving the environment and identifying diverse paths to multiple goal points. A simple instance is the case of an autonomous drone using SLAM [13] or any other sensor based reconstruction [1, 14], which can manoeuvre around an obstacle by going towards its right or left side. Similarly, at an indoor corridor intersection, it can proceed straight, right or left. The autonomous robot has to identify the various possible paths and goal points that it can pursue in any given setup. Such a task is easy for humans, who can map the scene configuration to identify multiple traversable areas, goal points and also a fine-grained path. One such use case in an autonomous vehicle setting is when the rough direction of travel is known but not the actual goal point; diverse path prediction is required in this case to determine the best plan of action. Some of the key applications include, but are not limited to:

Fig. 1: Trajectory Proposal: (a) Bird's-eye view of the point cloud data generated at a road bifurcation. This point cloud data is converted into a 2D occupancy grid and passed as an input to TPNet. (b) Proposed trajectories in the form of probability values for each pixel, which are passed to TSNet. (c) Sampled trajectory output. (d) Third-person view of the driving scenario illustrating the proposed trajectories with respect to the car.
  • In case of rerouting, alternate trajectories might be required due to sudden, unexpected changes in the environment.

  • It also opens up the possibility of reaching the destination in multiple ways.

  • It can be useful in overcoming GPS errors. The network's output can help in guiding the vehicle precisely to take turns, thus overcoming the state estimation errors that result in early or late turns near an intersection, as shown in figure 2.

In this work, we present a Convolutional Neural Network framework which precludes the need for explicit determination of candidate waypoints, as it learns a direct mapping from an intermediate semantic representation to candidate trajectories, as shown in fig. 1. Our framework consists of two stages. The first is the Trajectory Proposal Network (TPNet), an encoder-decoder style network, which uses the occupancy map and the robot's past trajectory as input and produces multiple traversable areas by inherently discovering the waypoints possible in the scene. This is achieved through our novel supervision based on multiple choice learning, which encourages TPNet to identify various waypoints and propose diverse traversable areas for a given scenario. The second stage of our framework, the Trajectory Sampler Network (TSNet), is a Long Short-Term Memory (LSTM) based network which uses the proposals from TPNet and samples precise trajectories through those proposals, leading the robot to various goal points. This alleviates the need to explicitly sample various goal points and compute trajectories towards them. Further, our framework generalizes well when trained on outdoor data and evaluated on indoor data and vice-versa, because the input is occupancy based instead of appearance based.

Fig. 2: Planning failures due to GPS-based localization. The green dot represents the true location of the ego vehicle, while the blue dot represents the erroneous location given by GPS. Hence, a trajectory planned from the GPS location (yellow line) might not be feasible when executed from the true location (blue line). Our proposed method gives a trajectory based on the scene structure (red line), which can further help in correcting these errors.

To evaluate the efficacy of our network, we perform extensive experimentation in outdoor scenes using the KITTI [4] scenes and our own outdoor driving scenes. Additionally, we demonstrate results in an indoor laboratory environment with varied complexity in scene structure. To quantitatively evaluate our network, we compare our results to traditional path planners like RRT* [8] and BIT* [2] in terms of the similarity between our paths and theirs for a particular goal point and the time taken to estimate the paths. To this end, our contributions are the following:

  • We present a novel CNN framework which is able to map the perceived surroundings to a diverse set of trajectories towards inherently inferred goal points. Such abilities are very relevant in autonomous navigation settings where alternate feasible trajectories are needed due to sudden changes in the motions of surrounding vehicles.

  • Our proposed supervision strategy to train the TPNet based on Multiple Choice Learning enables the network to identify diverse traversable areas which are further used to predict trajectories towards various goal points.

  • Our framework generalizes easily to new scenarios due to the versatility of the intermediate representation using occupancy maps. For instance, a network trained on outdoor LIDAR data works on indoor depth maps obtained from drones.

  • We demonstrate results on the KITTI dataset, which uses a Velodyne-64, on our campus dataset with a Velodyne-16, and also in scenarios like an indoor drone which uses an RGBD camera.

We demonstrate that our network performs better than classical trajectory prediction methods in terms of time taken, as it predicts trajectories in constant time despite an increasing number of possible choices. To the best of our knowledge, there is no prior literature that addresses the problem in such an end-to-end fashion.

II Related Work

The problem of identifying multiple driving/navigation options for a given perceptual input has not been widely studied in the literature. While not exactly a trajectory planning problem, classical planners such as [9], [8] would pose this problem as one of hypothesizing/sampling multiple goal locations followed by computation of trajectories to such candidate goal locations. Evidently, as the diversity of navigation options increases, the computational time for generating all possible candidate trajectories increases. In contrast, the end-to-end learning algorithm in this paper provides constant time outputs despite an increasing number of navigation options or driver intents.

The task of trajectory estimation and planning has been attempted before using neural networks. Glasius et al. [5] presented an approach based on Hopfield Neural Networks for generating paths in dynamic environments. Yang et al. [12] presented a computationally efficient neural architecture for real time navigation of robots in dynamic environments. The closest formulations in the literature are those that predict driver intents around an intersection [18], [17], [19]; however, such methods are restrictive in that they rely on explicit knowledge of the intersection and do not scale to situations beyond intersections. There are a number of methods that map perceptual inputs to continuous space control actions [10], [11] in an end-to-end framework. However, extensions of such formulations to predict multiple trajectory proposals have not appeared in the literature, placing the current algorithm as distinct in the context of existing works.

On the other hand, robots sometimes need to explore the possibilities of where they could traverse in a given scene. The current method works on intermediate representations obtained by a variety of robotic agents from diverse sensors such as stereo cameras, RGBD sensors and LIDAR to generate candidate navigation options (trajectories). We present an approach which generalizes to various scenarios, such as autonomous drones navigating in indoor and outdoor environments, ground robots exploring indoor scenes, and autonomous cars in outdoor scenes.

III Method

In this section, we describe the details of our architecture, formally define the problem, and present our approach. The overall pipeline of the proposed method is illustrated in figure 3.

III-A Inputs

We use two types of inputs to the Trajectory Proposal Network. The first input is an occupancy representation of the area perceivable by the robot, encoding the occupied space, the free space and the unknown/unexplored space. These are computed from the LIDAR point cloud by identifying all potential obstacles within a certain range, the free space and the unexplored area. The representation is obtained by projecting the registered point cloud onto a 2D bird's-eye-view grid and consists of a stack of mutually exclusive binary masks, one per class, each with the same grid dimensions; every cell belongs to exactly one of the three classes. The second input encodes the track history of the ego vehicle as a grid whose cells indicate whether they were occupied by the ego vehicle in the past. The vehicle's track history is appropriately transformed to the ego vehicle's coordinate frame and then discretized onto the grid. Effectively, the input to the network is the channel-wise stack of the occupancy masks and the ego-centric track history grid.
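For concreteness, the following minimal NumPy sketch shows one way such an input stack could be assembled. The grid size, cell resolution and the split of returns into obstacle and observed points are illustrative assumptions, not the exact values or processing used in our experiments.

```python
import numpy as np

def build_network_input(obstacle_xy, observed_xy, past_xy, grid=256, cell=0.25):
    """Assemble the stacked input grids described above.
    obstacle_xy: (N, 2) bird's-eye-view coordinates of points classified as obstacles.
    observed_xy: (M, 2) coordinates of all observed returns (marks known cells).
    past_xy    : (P, 2) past ego positions, already transformed to the ego frame.
    Grid size, resolution and point classification are illustrative assumptions."""
    occupied = np.zeros((grid, grid), np.float32)
    known = np.zeros((grid, grid), np.float32)
    history = np.zeros((grid, grid), np.float32)

    def cells(xy):
        # Metric coordinates -> grid indices, with the ego vehicle at the grid centre.
        idx = np.floor(np.asarray(xy) / cell).astype(int) + grid // 2
        keep = np.all((idx >= 0) & (idx < grid), axis=1)
        return idx[keep]

    for r, c in cells(obstacle_xy):
        occupied[r, c] = 1.0
    for r, c in cells(observed_xy):
        known[r, c] = 1.0
    for r, c in cells(past_xy):
        history[r, c] = 1.0

    free = known * (1.0 - occupied)      # observed but not occupied
    unknown = 1.0 - known                # never observed
    # Mutually exclusive occupancy masks stacked with the ego track-history channel.
    return np.stack([occupied, free, unknown, history], axis=-1)
```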

III-B Training Data

To obtain training ground truth for our framework, we use an RRT* [8] based planner from the Open Motion Planning Library (OMPL) [16]. For each training scenario, multiple goal points are chosen in a manner that ensures coverage of all dominant choices of motion. We then obtain trajectories to each of these goal points using RRT* and discretise them over the grid. Such an approach enables us to create large amounts of training data from LIDAR data and an RRT* based planner without any need for manual annotation. To this end, each scene is associated with a set of diverse ground-truth trajectories, one for each of its dominant choices of motion.
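A minimal sketch of the discretisation step follows: it rasterises an already planned path (e.g. an RRT* solution returned by OMPL) onto the training grid. The dense sampling step, grid parameters and dilation thickness are assumptions made for illustration.

```python
import numpy as np

def rasterize_path(path_xy, grid=256, cell=0.25, thickness=1):
    """Discretise a planned path (a list of (x, y) waypoints) onto the training grid.
    The polyline is densely sampled so the resulting mask has no gaps; `thickness`
    is an illustrative dilation of the trajectory region, not a value from the paper."""
    mask = np.zeros((grid, grid), np.float32)
    for (x0, y0), (x1, y1) in zip(path_xy[:-1], path_xy[1:]):
        n = int(np.hypot(x1 - x0, y1 - y0) / cell) + 1
        for t in np.linspace(0.0, 1.0, n):
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            r, c = int(x / cell) + grid // 2, int(y / cell) + grid // 2
            if 0 <= r < grid and 0 <= c < grid:
                mask[max(r - thickness, 0):r + thickness + 1,
                     max(c - thickness, 0):c + thickness + 1] = 1.0
    return mask
```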

Fig. 3: Pipeline (forward pass): Given the stacked binary masks corresponding to free, occluded and unknown regions, together with the past trajectory points, as inputs, TPNet predicts four Trajectory Proposals as a set of probability values for each grid pixel. Each of these Trajectory Proposals is then fed as input to TSNet, which produces Trajectory Samples as a set of waypoints.

III-C Trajectory Proposal Network

The Trajectory Proposal Network is the first module in our framework which proposes diverse multi-modal areas from which trajectories can further be sampled by the Trajectory Sampling Network. These are regions in the open space which could be traversed by the robot.

III-C1 Architecture

We use a CNN with an encoder-decoder style architecture with skip connections. All the convolutions in the network are dilated. The output of the network is a fixed number of trajectory proposals, corresponding to the number of diverse regions we want the network to predict. Each proposal has two channels which indicate the probability of each pixel belonging to a traversable region or not, so the network effectively outputs twice as many channels as proposals. Through various experiments we chose four proposals in our setup, as this was able to capture all possible trajectories in most situations.
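The following tf.keras sketch illustrates the overall shape of such a network: an encoder-decoder with skip connections, dilated convolutions, and one two-channel softmax head per proposal. Layer widths, depths, dilation rates and the input size are assumptions for illustration, not the exact configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_tpnet(grid=256, in_ch=4, num_proposals=4):
    """Illustrative TPNet-style encoder-decoder producing `num_proposals`
    two-channel (traversable / non-traversable) probability maps."""
    inp = tf.keras.Input((grid, grid, in_ch))

    def block(x, ch, dilation):
        x = layers.Conv2D(ch, 3, padding="same", dilation_rate=dilation, activation="relu")(x)
        return layers.Conv2D(ch, 3, padding="same", dilation_rate=dilation, activation="relu")(x)

    e1 = block(inp, 32, 1)                          # encoder
    e2 = block(layers.MaxPool2D()(e1), 64, 2)
    e3 = block(layers.MaxPool2D()(e2), 128, 4)

    d2 = layers.UpSampling2D()(e3)                  # decoder with skip connections
    d2 = block(layers.Concatenate()([d2, e2]), 64, 2)
    d1 = layers.UpSampling2D()(d2)
    d1 = block(layers.Concatenate()([d1, e1]), 32, 1)

    # One softmax head per proposal: channel 0 = traversable, channel 1 = not.
    heads = [layers.Conv2D(2, 1, activation="softmax", name=f"proposal_{k}")(d1)
             for k in range(num_proposals)]
    return tf.keras.Model(inp, heads)
```

Calling `build_tpnet()(batch_of_inputs)` returns a list of four per-pixel probability maps, one per proposal.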

III-C2 Training

We train TPNet in a supervised fashion using pairs of input grids and ground-truth trajectories. For each training iteration, we randomly pick a scene and one of the ground-truth trajectories from its pool. Our first loss, which we call the Trajectory Diversity Loss, defined in eq. 1, encourages the network to predict diverse proposals through its outputs. To compute it, we evaluate the weighted cross entropy loss between each of the trajectory proposals and the ground-truth trajectory and choose the minimum of these losses, i.e., the best proposal generated. This is inspired by the Multiple Choice Learning framework presented in [6]. Such a loss encourages the network to spread its bets on various proposals across its multiple output layers.

\mathcal{L}_{div} = \min_{k \in \{1,\dots,K\}} \; -\sum_{x} \Big[ w\, T(x)\,\log P_k^{+}(x) + \big(1 - T(x)\big)\,\log P_k^{-}(x) \Big] \qquad (1)

where w is the weight parameter used in the weighted cross entropy, T is the rasterized ground-truth trajectory, P_k is the k-th of the K proposals, and the superscripts + and − indicate the channels corresponding to the traversable and non-traversable regions of the TPNet outputs. We train the network through deep supervision by computing the Trajectory Diversity Loss at two different levels. The first level, as described above, is at the last layer of TPNet, and the second level is immediately after the last decoder layer of the network. To achieve this, we take the output of the last decoder layer and apply deconvolutions on those features to produce outputs similar to the final layer of our network. The loss at this intermediate supervision is the same as eq. 1 but is applied to these intermediate outputs.
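A minimal sketch of the multiple-choice term of eq. 1 is shown below. It covers only the final-layer loss (the deep-supervision copy applied to the intermediate outputs is omitted), and the positive-pixel weight is an assumed value rather than the one used in training.

```python
import tensorflow as tf

def trajectory_diversity_loss(proposals, gt_mask, pos_weight=5.0):
    """Minimum over proposals of a weighted cross-entropy against the ground truth.
    proposals: list of K (B, H, W, 2) softmax maps (channel 0 = traversable).
    gt_mask  : (B, H, W) binary raster of one ground-truth trajectory.
    pos_weight is an assumed weighting of trajectory pixels, not the paper's value."""
    eps = 1e-7
    per_proposal = []
    for p in proposals:
        ce = -(pos_weight * gt_mask * tf.math.log(p[..., 0] + eps)
               + (1.0 - gt_mask) * tf.math.log(p[..., 1] + eps))
        per_proposal.append(tf.reduce_mean(ce, axis=[1, 2]))   # (B,) loss per proposal
    # Only the best-matching proposal for each sample contributes (multiple choice learning).
    return tf.reduce_mean(tf.reduce_min(tf.stack(per_proposal, axis=1), axis=1))
```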

Additionally, we use an Obstacle Avoidance Loss which penalizes the network when it predicts proposals that intersect with the obstacles in the scene. To achieve this, we minimize the negative log likelihood of the non-traversable channel at obstacle locations. The loss is defined as,

\mathcal{L}_{obs} = -\sum_{k=1}^{K} \sum_{x \in \mathcal{X}_{obs}} \log P_k^{-}(x) \qquad (2)

where \mathcal{X}_{obs} denotes the set of grid cells occupied by obstacles.

The effective loss to train TPNet is given by

\mathcal{L}_{TP} = \mathcal{L}_{div} + \lambda\, \mathcal{L}_{obs} \qquad (3)

where \lambda is the weight of the Obstacle Avoidance Loss.
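The sketch below combines the two terms. It reuses `trajectory_diversity_loss` from the previous sketch, and the weight `lam` and the per-proposal normalisation are assumptions made for illustration.

```python
import tensorflow as tf

def obstacle_avoidance_loss(proposals, obstacle_mask):
    """Negative log-likelihood of the non-traversable channel at obstacle cells.
    proposals: list of (B, H, W, 2) softmax outputs; obstacle_mask: (B, H, W) binary."""
    eps = 1e-7
    per_proposal = [
        tf.reduce_sum(-obstacle_mask * tf.math.log(p[..., 1] + eps), axis=[1, 2])
        / (tf.reduce_sum(obstacle_mask, axis=[1, 2]) + eps)
        for p in proposals
    ]
    return tf.reduce_mean(tf.add_n(per_proposal) / float(len(per_proposal)))

def tpnet_loss(proposals, gt_mask, obstacle_mask, lam=0.1):
    # Eq. 3: diversity term plus weighted obstacle term; `lam` is an assumed weight.
    return (trajectory_diversity_loss(proposals, gt_mask)
            + lam * obstacle_avoidance_loss(proposals, obstacle_mask))
```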

During the test phase, TPNet provides us with diverse trajectory regions through its multiple output heads. Each predicted trajectory region contains the per-pixel probabilities of belonging to a trajectory, which we further use in our next module.

III-D Trajectory Sampler Network

The second module of our pipeline is the Trajectory Sampler Network which is used to predict a set of future waypoints using a particular proposal from TPNet.

III-D1 Architecture

The input to TSNet is one of the proposals from TPNet. The probability map is encoded into a feature space using a convolutional layer with a kernel size of 7, followed by 3 convolutional layers of stride 2. This is fed into a series of 3 LSTM [7] layers along with the previously predicted coordinates from the last LSTM layer. The output of the network is a sequence of predicted trajectory coordinates, expressed in the discretized 2-D grid representing the scene.
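The sketch below illustrates this structure: a small convolutional encoder over a single proposal map followed by three stacked LSTM cells that emit waypoints one step at a time, feeding the previous prediction back in. The hidden sizes, number of waypoints and the global pooling of the encoded map are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

class TSNetSketch(tf.keras.Model):
    """Illustrative TSNet-style decoder: conv encoder + 3 stacked LSTM layers."""

    def __init__(self, num_waypoints=20, hidden=128):
        super().__init__()
        self.num_waypoints = num_waypoints
        self.hidden = hidden
        self.encoder = tf.keras.Sequential([
            layers.Conv2D(16, 7, padding="same", activation="relu"),
            layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
            layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
            layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
            layers.GlobalAveragePooling2D(),
        ])
        self.cells = [layers.LSTMCell(hidden) for _ in range(3)]
        self.to_xy = layers.Dense(2)          # grid coordinates of the next waypoint

    def call(self, proposal):
        batch = tf.shape(proposal)[0]
        feat = self.encoder(proposal)          # (B, 64) encoded proposal map
        states = [[tf.zeros((batch, self.hidden)), tf.zeros((batch, self.hidden))]
                  for _ in self.cells]
        prev_xy = tf.zeros((batch, 2))         # robot starts at the grid origin
        waypoints = []
        for _ in range(self.num_waypoints):
            x = tf.concat([feat, prev_xy], axis=-1)     # map features + previous waypoint
            for i, cell in enumerate(self.cells):
                x, states[i] = cell(x, states[i])
            prev_xy = self.to_xy(x)
            waypoints.append(prev_xy)
        return tf.stack(waypoints, axis=1)              # (B, T, 2) waypoint sequence
```

Calling the model on a (batch, H, W, 2) proposal tensor yields a fixed-length waypoint sequence per sample.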

III-D2 Training

The ground-truth coordinates are generated, for each proposal, using RRT* with a B-spline fitted on top, from the current location of the robot to a goal point chosen as the farthest point whose probability exceeds a threshold. It is ensured that the resulting RRT* trajectory lies within the trajectory proposal region output by TPNet. The discretized waypoints of this trajectory serve as the ground-truth coordinates over which the network is supervised. We train the network in a supervised fashion using pairs of proposals and ground-truth waypoints. The loss function is defined as the L2 distance between the predicted and ground-truth waypoint at each LSTM prediction step.

\mathcal{L}_{TS} = \sum_{t} \big\lVert \hat{y}_t - y_t \big\rVert_2 \qquad (4)

where \hat{y}_t and y_t are the predicted and ground-truth waypoints at LSTM step t.
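A one-line sketch of this loss, averaging the per-step L2 distances over the batch (the reduction is an assumption):

```python
import tensorflow as tf

def waypoint_loss(pred, gt):
    # pred, gt: (B, T, 2) predicted and ground-truth waypoints in grid coordinates.
    # Mean L2 distance over all LSTM prediction steps, in the spirit of Eq. 4.
    return tf.reduce_mean(tf.norm(pred - gt, axis=-1))
```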

During the test phase, each proposal from TPNet is used by the Trajectory Sampler Network to generate diverse future trajectories for the robot.

IV Experimental Setup

We extensively evaluated the proposed approach on standard datasets as well as on our own University dataset. The efficacy and robustness of our approach are demonstrated by the fact that it is agnostic to the type of sensors, environments or hardware platform chosen for the experiments. This holds as long as the sensing hardware and the point cloud processing provide sufficiently accurate measurements that can be converted to 2D occupancy maps. We also conducted experiments to demonstrate navigation in different scenarios based on the trajectories predicted by our framework.

IV-A Evaluation Scenarios

IV-A1 Standard dataset

We demonstrate the performance of our pipeline on the KITTI dataset [4]. Sequences 5, 6, 7, 8, 9 and 10 were used for generating the training data, and the evaluation was performed on sequences 0, 1, 2, 3 and 4.

IV-A2 Our University dataset

We use a dataset collected on our own University campus as an additional dataset for evaluation. The data was collected with a Mahindra e2o vehicle mounted with a Velodyne-16 LIDAR, GPS and an IMU.

IV-B Tests on our own setups

Tests were conducted on our platforms in two completely different scenarios. For real-world outdoor experiments, we show the deployment of our pipeline on a ground robot, while an aerial vehicle is used for testing in constrained practical indoor conditions.

IV-B1 Outdoor Tests

The outdoor tests were carried out in constrained alleys as well as on roads inside our campus. The runs in alleys were performed on a ClearPath Husky robot with a Velodyne-16 mounted on top. On the University roads, a Mahindra e2o mounted with a Velodyne-16, a UBlox RTK GPS and an Xsens MTi-30 IMU was used as the hardware platform. In both cases, a ROG GL552VX laptop with a Core i7 CPU, an Nvidia GTX 960M GPU and 16 GB of RAM was used for running the network as well as for generating the control commands for trajectory tracking and navigation. The steering, throttle and brake controls of the car can be switched between manual mode and fully autonomous mode; in autonomous mode, a network of sensors implements closed-loop feedback control to track the predicted trajectory.

IV-B2 Indoor Tests

The indoor tests were conducted on a custom-made drone (quadrotor) platform, as shown in figure 4. The drone is mounted with a low-powered, lightweight Intel RealSense depth sensor connected to an Intel NUC i5 processor for processing the obtained depth data. As the RealSense has a limited horizontal field of view, the occupancy map is registered to obtain a static local map around the current drone position, which is then passed to the ground station laptop for trajectory prediction. A monocular visual-inertial odometry system runs onboard for accurate state estimation. TPNet and TSNet run on the commodity laptop mentioned above. The 2D occupancy map and the predicted trajectory are communicated between the onboard processor of the drone and the laptop over Wi-Fi. As our proposed network works with 2D occupancy maps, the trajectory is generated for the fixed height at which the drone is flying.

Implementation details:

TPNet was trained using the SGD optimizer with an initial learning rate of 0.005 and a batch size of 32. The learning rate was decreased every 10K steps by a factor of 0.8, and the network was trained for 400 epochs using 6 KITTI sequences. TSNet was also trained using the SGD optimizer, with an initial learning rate of 0.01 and a learning rate decay of 0.7 every 10K steps, for 300 epochs. A batch size of 32 was used during training. Both networks were implemented in TensorFlow [15].
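One plausible way to express these schedules in TensorFlow 2 is a staircase exponential decay, as sketched below; the paper states the initial rates and decay factors but not the exact schedule implementation.

```python
import tensorflow as tf

# "Decrease the learning rate every 10K steps by a fixed factor" expressed as a
# staircase ExponentialDecay schedule; this is an assumed implementation detail.
tpnet_lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.005, decay_steps=10_000, decay_rate=0.8, staircase=True)
tsnet_lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=10_000, decay_rate=0.7, staircase=True)

tpnet_optimizer = tf.keras.optimizers.SGD(learning_rate=tpnet_lr)
tsnet_optimizer = tf.keras.optimizers.SGD(learning_rate=tsnet_lr)
```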

Fig. 4: Left: KITTI results for situations with a varying number of dominant choices. The columns show, in order: the occupancy input, the prediction from TPNet, the waypoints from TSNet, and the projected path in the point cloud. Right: the adaptability of the network to various sensor modalities in varying situations, such as indoors and outdoors.
Fig. 5: Ablative results for our architecture. The images shown represent the following, in order (from left to right): without multiple outputs, without skip connections, without dilation, without deep supervision, and the final architecture. Points with a trajectory probability greater than 0.5 are shown.
Datasets | TPNet+TSNet | TPNet+RRT* | RRT* (T-PL) | Informed-RRT* (T-PL) | BIT* (T-PL)
---------|-------------|------------|-------------|----------------------|------------
KITTI    | 0.079       | 0.238      | 0.297       | 0.306                | 0.294
IIIT-H   | 0.082       | 0.112      | 0.192       | 0.202                | 0.190
Drone    | 0.079       | 0.090      | 0.147       | 0.141                | 0.139

TABLE I: Compute time (in secs) for some state-of-the-art planning algorithms for calculating 4 trajectories over various datasets
Datasets | TPNet+TSNet | RRT* T-PL | RRT* PL (2 sec) | RRT* PL (5 sec) | Informed-RRT* T-PL | Informed-RRT* PL (2 sec) | Informed-RRT* PL (5 sec) | BIT* T-PL | BIT* PL (2 sec) | BIT* PL (5 sec)
---------|-------------|-----------|-----------------|-----------------|--------------------|--------------------------|--------------------------|-----------|-----------------|----------------
KITTI    | 26.079      | 26.760    | 26.121          | 26.095          | 26.727             | 26.102                   | 26.075                   | 26.520    | 26.177          | 26.160
IIIT-H   | 24.756      | 27.427    | 24.896          | 24.757          | 27.375             | 24.803                   | 24.727                   | 26.239    | 24.941          | 24.813
Drone    | 2.549       | 2.540     | 2.539           | 2.539           | 2.542              | 2.539                    | 2.536                    | 2.539     | 2.539           | 2.539

TABLE II: Path length comparison (in meters) between various planning algorithms for the same scenes and goal points

V Results

We evaluated the proposed approach on a wide variety of standard datasets as well as in real world scenarios. The qualitative results as well as quantitative comparisons with other works are discussed below.

V-A Qualitative Results

In figure 4 we show scenarios where multiple choices of trajectories are possible. The network has an implicit understanding of the scene and predicts diverse trajectories towards various implicitly identified goal points. These are particularly the scenarios where several dominant choices of motion are possible (fig. 4, left). On the other hand, when there is only one dominant choice, the proposals are aligned in the same direction (fig. 4, left). It can be seen that the network predicts trajectories in different directions in complex situations such as a bifurcation or an intersection. Although the network has been trained only on the KITTI outdoor dataset, the pipeline generalizes very well to different outdoor scenes, including our University driving dataset, as well as to constrained indoor environments. The network was evaluated on a ground robot mounted with a laser scanner as well as on a drone mounted with a depth sensor, and was found to perform equally well in all cases (fig. 4, right). The tests are primarily focused on demonstrating short-term navigation based on trajectory tracking in different types of critical scenarios.

Ablative study: In this study, we show the importance of various components of the framework by analyzing the performance when each component is discarded. Firstly, we discard the diverse output of TPNet and use only a single output layer. Networks with a single prediction output cannot reason about multimodal possibilities of traversable areas and tend to average multiple possibilities into a single output. This can be seen in figure 5, first column, where the network averages all possibilities into one. In figure 5, second column, we show the output of TPNet without skip connections, where the trajectories are spread across a larger area with many discontinuities. This indicates the need for skip connections in the encoder-decoder architecture of TPNet to constrain its output to precise areas. In figure 5, third column, we show the output of TPNet without dilated convolutions; the predicted proposals contain significant discontinuities. This is because dilated convolutions capture the spatial context of the scene much better and produce a smooth transition of probabilities over the predicted proposal. In figure 5, fourth column, we show the output of TPNet trained without deep supervision, where the proposals are not as precise and compact as in our final pipeline. The computational cost increases as we supervise over many intermediate layers; hence, we apply deep supervision only just after the encoder-decoder. The results show that the predicted trajectories from our framework are precise and diverse compared to the previous cases.

V-B Quantitative Results

We compare the proposed approach with other state-of-the-art planning algorithms such as RRT* [8], Informed-RRT* [3] and BIT* [2] based on the total time taken to predict the same number of paths, the path length for the same goal points, and the closest distance of the predicted path to any obstacle in the map. Moreover, we consider different optimization objectives for these planners, namely Threshold-Path Length (T-PL) and Path Length (PL), and exhibit the trade-off between the time to predict a path and the path length. These results are presented in Table I and Table II.

Table I shows the comparison of computation time for various algorithms, namely RRT*, Informed-RRT*, BIT* and TPNet+RRT*. In order to have a fair comparison, we compare the time taken by the above-mentioned planners to compute four different trajectories on the same scene as input. The time for detecting goal points is also included in the total time. By default, the PL objective runs for the maximum specified time limit looking for the shortest path, while T-PL is satisfied as soon as a path within a given threshold is found. Hence, we set the threshold path length to be much higher than the maximum path length, so that the planner exits as soon as it finds its first path to the goal. The values shown in the tables are averages over 1000 samples from each dataset. It is evident that the proposed method takes comparatively less time, which remains almost constant across scenes.

Table II shows the trade-off between computing the optimal trajectory and the time taken to compute it. It reports the average path length to the goal in each of the evaluated datasets. It can be seen that the average path length decreases as the maximum time limit for the classical frameworks increases. While our framework takes 79 milliseconds to compute a near-optimal path, Informed-RRT* takes about 5 seconds to compute a path of similar length. Also, our path length is very close to that of Informed-RRT* on all datasets. This depicts the efficiency of our framework in terms of the time consumed for optimal trajectory proposal, irrespective of the number of outputs.

VI Conclusion

We proposed an end-to-end framework that maps an intermediate representation in the form of an occupancy grid to multiple candidate trajectory options. These options are generated through a cascade of a Trajectory Proposal Network and a Trajectory Sampler Network. The efficacy of the network was established through its ability to discern dominant choices of motion and output non-colliding trajectories with lengths similar to those of state-of-the-art planners. Moreover, the network's ability to scale to multiple candidate options and output multiple candidate trajectories in near constant time makes it unique and distinct from other methods. To the best of our knowledge, this is the first such architecture proposed in the literature.

References

  • [1] B. Bhowmick, A. Mallik, and A. Saha (2014) MobiScan3D: A low cost framework for real time dense 3D reconstruction on mobile devices. In International Conference on Ubiquitous Intelligence and Computing.
  • [2] J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot (2015) Batch Informed Trees (BIT*): Sampling-based optimal planning via the heuristically guided search of implicit random geometric graphs. In 2015 IEEE International Conference on Robotics and Automation (ICRA).
  • [3] J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot (2014) Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.
  • [4] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • [5] R. Glasius, A. Komoda, and S. C. A. M. Gielen (1995) Neural network dynamics for path planning and obstacle avoidance. Neural Networks.
  • [6] A. Guzmán-Rivera, D. Batra, and P. Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In Advances in Neural Information Processing Systems 25.
  • [7] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation.
  • [8] S. Karaman and E. Frazzoli (2011) Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research.
  • [9] J. J. Kuffner and S. M. LaValle (2000) RRT-Connect: An efficient approach to single-query path planning. In Proceedings 2000 ICRA, IEEE International Conference on Robotics and Automation.
  • [10] S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-end training of deep visuomotor policies.
  • [11] A. Loquercio, A. I. Maqueda, C. R. del-Blanco, and D. Scaramuzza (2018) DroNet: Learning to fly by driving. IEEE Robotics and Automation Letters.
  • [12] C. Luo and S. X. Yang (2008) A bioinspired neural network for real-time concurrent map building and complete coverage robot navigation in unknown environments. IEEE Transactions on Neural Networks.
  • [13] S. Maity, A. Saha, and B. Bhowmick (2017) Edge SLAM: Edge points based monocular visual SLAM. In International Conference on Computer Vision Workshops.
  • [14] A. Mallik, B. Bhowmick, and S. Alam (2015) A multi-sensor information fusion approach for efficient 3D reconstruction in smart phone. In International Conference on Image Processing, Computer Vision, and Pattern Recognition.
  • [15] M. Abadi et al. (2015) TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
  • [16] M. Moll, I. A. Şucan, and L. E. Kavraki (2015) Benchmarking motion planning algorithms: An extensible infrastructure for analysis and visualization. IEEE Robotics & Automation Magazine.
  • [17] T. Streubel and K. H. Hoffmann (2014) Prediction of driver intended path at intersections. In 2014 IEEE Intelligent Vehicles Symposium.
  • [18] A. Zyner, S. Worrall, and E. Nebot (2018) Naturalistic driver intention and path prediction using recurrent neural networks. arXiv:1807.09995.
  • [19] A. Zyner, S. Worrall, and E. Nebot (2018) A recurrent neural network solution for predicting driver intention at unsignalized intersections. IEEE Robotics and Automation Letters.