Autonomous navigation is a long-standing field of robotics research that provides an essential capability for mobile robots to execute everyday tasks in the same environments as humans. In general, the task of autonomous navigation is to control a robot to move around an environment without colliding with obstacles. Navigation is thus an elementary skill for intelligent agents, requiring decision-making across a diverse range of scales in time and space. In practice, autonomous navigation is not trivial, since the robot must close the perception-control loop under uncertainty in order to achieve autonomy.
Recently, learning-based approaches (e.g., deep learning models) have demonstrated the ability to directly derive end-to-end policies that map raw sensor data to control commands [1, 2]. This end-to-end approach reduces implementation complexity and effectively utilizes input data from different sensors (e.g., depth camera, laser), thereby reducing cost, power, and computational time. A further advantage is that the end-to-end relationship between input data and control outputs can result in an arbitrarily non-linear complex model (i.e., sensor to actuation), which has yielded surprisingly encouraging results in different control problems such as lane following, autonomous driving, and Unmanned Aerial Vehicle (UAV) control. However, most previous works are only tested in man-made environments, while navigating in complex environments such as houses or cities collapsed by a disaster (e.g., an earthquake) or natural caves still remains an open problem.
Unlike normal environments (e.g., man-made roads), which have clear visual cues under normal conditions, complex environments such as collapsed cities or natural caves pose significant challenges for autonomous navigation. The main reason is that complex environments usually have very challenging visual and physical properties. For example, collapsed cities may have constrained narrow passages, vertical shafts, and unstable pathways with debris and broken objects (Fig. 1), while natural caves often have irregular geological structures, narrow passages, and poor lighting conditions. Autonomous navigation with intelligent robots in complex environments is nevertheless a crucial task, especially in time-sensitive scenarios such as disaster response and search and rescue. Recently, the DARPA Subterranean Challenge was organized to explore novel methods for quickly mapping, navigating, and searching in complex underground environments such as human-made tunnel systems, urban underground, and natural cave networks.
Inspired by the DARPA Subterranean Challenge, in this work we propose a learning-based system for end-to-end mobile robot navigation in complex environments such as collapsed cities, collapsed houses, and natural caves. To overcome the difficulty of data collection, we build a large-scale simulation environment that allows us to collect training data and deploy the learned control policy. We then propose a new Navigation Multimodal Fusion Network (NMFNet) that effectively learns visual perception from sensor fusion and allows the robot to autonomously navigate in complex environments. To summarize, our main contributions are as follows:
We introduce new simulation models that can be used to record large-scale datasets for autonomous navigation in complex environments.
We present a new deep learning method that fuses laser data, 2D images, and 3D point clouds to improve the navigation ability of the robot in complex environments.
We show that using multiple visual modalities is essential to learning a robust robot control policy for autonomous navigation in complex environments and for deployment in real-world scenarios.
The remainder of this paper is organized as follows: Section II discusses the related background. Section III presents the visual multimodal input used in our method. Section IV introduces our new multimodal fusion network and its architecture. Section V presents our extensive experimental results. Section VI concludes the paper and discusses future work.
II Related Work
Multiple sensor fusion for autonomous robot navigation is a popular research topic in robotics. Traditional methods tackle this problem with algorithms based on the Kalman Filter. The advantage of this approach is the ability to fuse data from different sensors and sensor types such as visual, inertial, GPS, or pressure sensors. Lynen et al. proposed a method based on the Extended Kalman Filter (EKF) for Micro Aerial Vehicle (MAV) navigation. Other authors developed an EKF-based algorithm to estimate the state of a UAV in multiple environments in real time. Mascaro et al. proposed a graph-optimization method to fuse data from multiple sensors for UAV pose estimation. Apart from the traditional localization and navigation tasks, multimodal fusion is also used in other applications such as object detection and semantic segmentation [15, 16] in changing environments. In both [14, 16], multimodal data from visual sensors are combined and learned in a deep learning framework to deal with challenging lighting conditions.
More recently, many methods have been proposed to directly learn control policies from raw sensory data. These methods can be divided into two main categories: reinforcement learning [18, 19, 20] and supervised learning. With the rise of deep learning, Convolutional Neural Networks (CNN) have been widely used to train end-to-end perception systems [21, 22, 23, 24, 25, 26]. Bojarski et al. proposed the first end-to-end navigation system for autonomous cars using 2D images. Smolyanskiy et al. extended this idea to flying robots using three cameras as input. Similarly, the authors of DroNet used a CNN to learn the steering angle and predict the collision probability given an RGB image as input. Gandhi et al. introduced a navigation method for UAVs that learns from negative and positive crashing trials. In other work, a CNN and a Variational Autoencoder were combined to estimate the steering control signal, and Monajjemi et al. proposed a new method for agile UAV control. More recently, the navigation map has been combined with the visual input to learn a deterministic control signal.
Reinforcement learning algorithms have been widely used to learn general policies from robot experiences [17, 34, 35]. A continuous control framework using deep reinforcement learning was introduced in prior work. Zhu et al. addressed the target-driven navigation problem given an input picture of a target object. Wortsman et al. introduced a self-adaptive visual navigation system using meta-learning. Other authors used semantic information and spatial relationships to let a robot navigate to target objects, and an end-to-end regression system was introduced for UAV racing in simulation. Further studies proposed to train the reinforcement policy in simulation environments and then transfer the learned policy to the real world, or combined deep reinforcement learning with a CNN in an effort to leverage the advantages of both techniques. Mirowski et al. proposed a method with augmented memory to train autonomous agents to navigate within large and visually rich environments (complicated 3D mazes).
While reinforcement learning methods learn general control policies with a nice mathematical formulation, they require many trial-and-error experiments, which are dangerous and not realistic on real safety-critical robotic platforms. On the other hand, supervised learning methods use pre-collected data to learn the control policies. The supervision data can be obtained from real human expert trajectories [29, 30] or from traditional controllers. This is time-consuming and costly but feasible with real robots. Therefore, the supervised learning approach is usually preferred over reinforcement learning when working with real robot platforms. However, it is not trivial to handle the domain shift between expert guidance and the real robot trajectories in supervised learning methods.
In this work, we choose the end-to-end supervised learning approach for ease of deployment and testing on real robot systems. We first simulate the complex environments in a physics-based simulation engine and collect a large-scale dataset for supervised learning. We then propose NMFNet, an effective deep learning framework that fuses the visual inputs and allows the robots to navigate autonomously in complex environments.
III Multimodal Input
Complex environments such as natural cave networks or collapsed cities pose significant challenges for autonomous navigation due to their irregular structures, unexpected obstacles, and poor lighting conditions. To overcome these natural difficulties, we use three visual inputs in our method: RGB images, point clouds, and a distance map obtained from the laser sensor. Intuitively, the use of all three visual modalities ensures that the robot’s perception system has meaningful information from at least one modality during navigation under challenging conditions such as lighting changes, sensor noise in depth channels due to reflective materials, or motion blur.
In practice, the RGB images and point clouds are captured using a depth camera mounted at the front of the robot, while the distance map is reconstructed from the laser data. In complex environments, the RGB images and point clouds provide visual semantic information, but the robot may need additional information such as the distance map due to the presence of various obstacles. The distance map is reconstructed from the laser data as follows:
\[
x_i = x_r + d_i \cos(i\,\Delta\theta), \qquad
y_i = y_r + d_i \sin(i\,\Delta\theta)
\]

where $(x_i, y_i)$ is the coordinate of point $i$ on the 2D distance map, $(x_r, y_r)$ is the coordinate of the robot, $d_i$ is the distance from the laser sensor to the obstacle, and $\Delta\theta$ is the incremental angle of the laser sensor.
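As an illustrative sketch (not the authors' implementation), this projection of a laser scan into 2D map points can be written in a few lines of Python; the scan parameters below are hypothetical:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, robot_x=0.0, robot_y=0.0):
    """Project each laser range reading into a 2D point in the map frame."""
    points = []
    for i, d in enumerate(ranges):
        theta = angle_min + i * angle_increment  # beam angle for reading i
        x = robot_x + d * math.cos(theta)
        y = robot_y + d * math.sin(theta)
        points.append((x, y))
    return points

# Example: three beams at -90, 0, and +90 degrees, each hitting an obstacle 1 m away.
pts = scan_to_points([1.0, 1.0, 1.0], -math.pi / 2, math.pi / 2)
```

The middle beam (angle 0) maps to a point one meter directly in front of the robot, while the side beams map to its far left and right.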
To keep the latency between the three visual modalities low, we use only one laser scan to reconstruct the distance map. The scanning angle of the laser sensor is set to cover the front view of the robot. This helps the robot stay aware of obstacles on its far left and right, since these obstacles may not be captured by the front camera, which provides the RGB images and point cloud data. All three modalities are synchronized at each timestamp to ensure that the robot observes the same viewpoint at each control state. Fig. 2 shows a visualization of the three visual modalities used in our method.
Motivated by the recent trend in autonomous driving [28, 29, 33], our goal is to build a framework that can directly map the input sensory data to the output steering commands. To this end, we design NMFNet with three branches to handle the three visual modalities. The architecture of our network is illustrated in Fig. 4.
IV-A 2D Features
Learning meaningful features from 2D images is key to success in many vision tasks. In this work, we use ResNet8 to extract deep features from the input RGB image and the laser distance map. We choose ResNet8 since it is a lightweight architecture that achieves competitive performance while being robust against the vanishing/exploding gradient problems during training.
IV-B 3D Point Cloud
While the robot is navigating in complex environments, relying on 2D visual information alone may not be enough. For example, RGB images from the robot's front camera are widely used in many end-to-end visual navigation systems [29, 33]; however, in environments such as natural caves, the lighting conditions make it challenging to obtain clear visual images. Therefore, we propose to use the point cloud as another visual input for autonomous navigation in complex environments.
Specifically, we use the point cloud associated with the RGB images from the front camera of the robot. Although the point cloud from the depth camera is ordered, it contains a large number of points per cloud, including many missing points. In practice, due to memory constraints, it is impractical to learn geometric information from such huge point clouds. Therefore, we remove all the missing points and randomly sample a fixed number of points to represent the cloud; the point cloud hence becomes unordered. The point cloud is expected to provide more geometric information about the environment for the network. However, extracting features from an unordered cloud is not a trivial task, since the network needs to be invariant to all permutations of the input set.
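A minimal sketch of this preprocessing step, assuming invalid readings are marked as NaN (the actual pipeline and sample count are not specified here), could look like the following:

```python
import math
import random

def sample_cloud(points, n_samples):
    """Drop invalid (NaN) points, then randomly sample a fixed-size subset.

    The random sampling discards the original ordering, so downstream
    feature extraction must be permutation-invariant.
    """
    valid = [p for p in points if not any(math.isnan(c) for c in p)]
    if len(valid) < n_samples:
        raise ValueError("not enough valid points to sample")
    return random.sample(valid, n_samples)

nan = float("nan")
cloud = [(1.0, 2.0, 3.0), (nan, 0.0, 0.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)]
subset = sample_cloud(cloud, 2)  # two valid points, chosen at random
```

Any point containing a NaN coordinate is filtered out before sampling, so the subset always consists of valid measurements.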
To extract the point cloud feature vector, our network has to learn a model that is invariant to input permutations. Motivated by prior work, we extract features from the unordered point cloud by learning a symmetric function on transformed elements. Given an unordered point set $\{x_1, \dots, x_n\}$ with $x_i \in \mathbb{R}^3$, we can define a symmetric function $f$ that maps the set of unordered points to a feature vector as follows:

\[
f(\{x_1, \dots, x_n\}) = \gamma\Big(\underset{i=1,\dots,n}{\mathrm{MAX}}\; h(x_i)\Big)
\]

where $\mathrm{MAX}$ is a vector max operator that takes $n$ input vectors and returns a new vector of the element-wise maximum; $h$ and $\gamma$ are usually represented by neural networks.
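The permutation invariance of a max-based symmetric function can be illustrated with a toy per-point feature map (a stand-in for the real per-point network; the final network applied after the max is omitted here):

```python
def h(point):
    """Toy per-point feature map; in practice this is a learned MLP."""
    x, y, z = point
    return [x + y + z, x * y * z, max(x, y, z)]

def symmetric_f(points):
    """Element-wise max over per-point features: invariant to input order."""
    feats = [h(p) for p in points]
    return [max(col) for col in zip(*feats)]

cloud = [(1.0, 0.0, 2.0), (0.0, 3.0, 1.0), (2.0, 2.0, 2.0)]
shuffled = [cloud[2], cloud[0], cloud[1]]
```

Because the element-wise max ignores ordering, `symmetric_f(cloud)` and `symmetric_f(shuffled)` produce identical feature vectors.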
In practice, both functions are approximated by an affine transformation matrix predicted by a mini multi-layer perceptron network (i.e., T-net) and a matrix multiplication operation. Given the unordered input points, we apply this transformation twice to learn the geometric features from the point cloud: an input transformation and a feature transformation. The input transformation takes the raw point cloud as input and regresses to a transformation matrix. It consists of a three-layer multi-layer perceptron with output sizes of 64, 128, and 1024, respectively. The output matrix is initialized as an identity matrix, and all layers have ReLU and batch normalization (except the last layer). We then feed the output of the first transformation to the second transformation network, which has the same architecture and generates a transformation matrix as output. This matrix is also initialized as an identity matrix and represents the learned features from the point cloud.
IV-C Multimodal Fusion
Given the features from the point cloud branch and the RGB image branch, we first perform an early fusion by concatenating the features extracted from the input cloud with the deep features extracted from the RGB image. The intuition is that since both the RGB image and the point cloud are captured using a camera with the same viewpoint, fusing their features makes the robot aware of both the visual information from the RGB image and the geometric cues from the point cloud data. This concatenated feature is then fed through two convolutional layers. Finally, we combine the features from the 2D-3D fusion with the features extracted from the distance map branch. The steering angle is predicted by a final fully connected layer that aggregates all the features from the multimodal fusion network.
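As a hedged sketch, the fusion order described above (early 2D-3D concatenation, then the laser branch) can be viewed as simple feature concatenation; the feature dimensions here are made up for illustration:

```python
def fuse(rgb_feat, cloud_feat, laser_feat):
    """Early-fuse RGB and point-cloud features, then append the laser branch.

    Real branches would produce high-dimensional tensors and the fusion
    would be followed by convolutional and fully connected layers; plain
    list concatenation only illustrates the ordering of the merge.
    """
    fused_2d3d = rgb_feat + cloud_feat   # early 2D-3D fusion
    return fused_2d3d + laser_feat       # final multimodal feature vector

feat = fuse([0.1, 0.2], [0.3], [0.4, 0.5])
```

The resulting vector keeps the RGB features first, then the point cloud features, then the distance map features.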
Methods using a classification loss first bin the ground-truth steering angles into small, discrete groups, then learn the possible controls as a classification problem with a softmax loss function. In practice, we observed training instability due to the highly imbalanced statistics of the dataset. Therefore, we employ a regression loss and train the network end-to-end using the mean squared error (MSE) loss function between the ground-truth human-actuated control $u_i$ and the predicted control from the network $\hat{u}_i$:

\[
\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}\left(u_i - \hat{u}_i\right)^2
\]
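The MSE objective amounts to the following computation (a plain-Python sketch, not the training code; the example values are hypothetical steering angles):

```python
def mse(targets, predictions):
    """Mean squared error between ground-truth and predicted controls."""
    assert len(targets) == len(predictions)
    return sum((t - p) ** 2 for t, p in zip(targets, predictions)) / len(targets)

# Three hypothetical steering angles (radians) and their predictions.
loss = mse([0.1, -0.2, 0.0], [0.1, 0.0, 0.1])
```

During training, the gradient of this quantity with respect to the network weights drives the predicted controls toward the human-actuated ones.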
Data Collection: Unlike traditional autonomous navigation for cars or UAVs, where data can be collected in real-world settings [28, 29, 30], it is not trivial to build complex environments such as collapsed cities or collapsed houses in real life. Therefore, we create simulation models of these environments in Gazebo and collect the visual data from simulation. In particular, we collect data from three types of complex environments:
Collapsed house: A house that has suffered an accident or a disaster (e.g., an earthquake), with random objects on the ground.
Collapsed city: Similar to the collapsed house, but outdoors. In this scenario, the road contains much debris from collapsed houses and walls.
Natural cave: A long natural tunnel with poor lighting conditions and irregular geological structures.
To build the simulation environments, we first create the 3D model of normal daily objects in indoor and outdoor environments (e.g. beds, tables, lamps, computers, tools, trees, cars, rocks, etc.), including broken objects (e.g. broken vases, broken dishes, and debris). These objects are then manually chosen and placed in each environment to create the entire simulated environment.
For each environment, we use a mobile robot model equipped with a laser sensor and a depth camera mounted on top of the robot to collect the visual data. The robot is controlled manually to navigate around each environment, and we collect the visual data while the robot is moving. All the visual data are synchronized with the current steering signal of the robot at each timestamp.
Data Statistics: We create a set of 3D object models and use them to build multiple instances of each complex environment type. The collapsed house environments are relatively small in area but contain many objects, while the collapsed city and the natural cave environments spread over larger areas. We manually control the robot for several hours to collect the data.
In total, we collect a large-scale dataset of visual data triples, split evenly across the three environment types, with each record consisting of a synchronized RGB image, point cloud, laser distance map, and ground-truth steering angle. A portion of the dataset is collected using domain randomisation by applying random textures to the environments (Fig. 5). For each environment, we split the data into a training set and a testing set. All the 3D environments and our dataset will be made publicly available to encourage further research.
We implement our network using the Tensorflow framework. The network is optimized using stochastic gradient descent with a fixed learning rate and momentum. The input RGB images and distance maps are resized to fixed resolutions, while the point cloud data are sampled to a fixed number of points. We train the network in mini-batches, and the training time is on the order of hours on an NVIDIA 2080 GPU.
We compare our method with the following recent state-of-the-art methods in autonomous navigation: DroNet and VariationNet. We also present results for Inception-V3 to serve as a deep architecture baseline. We note that DroNet uses ResNet8 as the backbone to predict the steering angle and collision probability from the RGB input. Since our dataset does not have collision ground truth, we disable the collision probability branch of DroNet and only use the regression branch to predict the result. Intuitively, the DroNet architecture is similar to our RGB branch. All the baselines are trained with the data from domain randomisation. We show the results of our NMFNet under two settings: with domain randomisation (NMFNet with DR) and without training data from domain randomisation (NMFNet without DR).
Table I summarizes the regression results, measured by Root Mean Square Error (RMSE), of our NMFNet and other state-of-the-art methods. From the table, we can see that our NMFNet outperforms the other methods by a significant margin. In particular, our NMFNet trained with domain randomisation data achieves the lowest RMSE, a clear improvement over methods using only RGB images such as DroNet. This also confirms that using multiple visual modalities as input in our fusion network is the key to successful navigation in complex environments.
| Method | Input | House | City | Cave | Average |
|---|---|---|---|---|---|
| NMFNet without DR | Fusion | 0.482 | 0.365 | 0.367 | 0.405 |
| NMFNet with DR | Fusion | 0.480 | 0.365 | 0.321 | 0.389 |
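For reference, the RMSE metric reported in these tables can be computed as follows (a sketch, not the authors' evaluation script; the example values are hypothetical):

```python
import math

def rmse(targets, predictions):
    """Root mean square error between ground-truth and predicted steering angles."""
    n = len(targets)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(targets, predictions)) / n)

# Two hypothetical test samples with 0.5 rad error each.
score = rmse([0.5, -0.5], [0.0, 0.0])
```

Lower RMSE indicates that the predicted steering angles stay closer to the human-actuated ground truth.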
Among the three complex environment types, we observe that the RMSE for the collapsed house is higher than for the collapsed city and the natural cave. A possible reason is that the collapsed house environment is much smaller than the others while containing more objects, so it is more challenging for the robot to navigate the collapsed house without colliding with objects. From Table I, we also notice that by employing domain randomisation, our NMFNet with DR shows a clear improvement over the setting without domain randomisation (NMFNet without DR). On the other hand, the VariationNet approach has the highest error in all three complex environments, while Inception-V3 shows reasonable results.
V-E Contribution of Visual Modalities
To understand the contribution of each modality to the results, we perform the following experiment: We first train a network that uses only a single modality (either RGB, distance map, or point cloud) as the input. Technically, each network in this experiment is a branch of our NMFNet. We then perform similar experiments using networks with two branches (i.e., RGB + distance map, RGB + point cloud, and distance map + point cloud) as the input. All the networks use the training set with extra data from domain randomisation.
Table II shows the RMSE scores when different modalities are used to train the system. We first notice that the network that uses only point cloud data as input does not converge. This confirms that learning meaningful features from point cloud data is challenging, especially in complex environments. On the other hand, we achieve a surprisingly good result when the distance map modality is used as input. The other two-modality combinations show reasonable accuracy; however, we achieve the best result when the network is trained end-to-end using the fusion of all three modalities: the RGB image, the distance map from the laser sensor, and the point cloud from the depth camera. To further verify the contribution of each modality, we employ Grad-CAM to visualize the activation map of the network when different modalities are used. Fig. 6 shows the qualitative visualization under three input settings: RGB, RGB + point cloud, and fusion. From Fig. 6, we can see that from the same viewpoint, the network that uses the fused data makes the most logical decision, since its attention lies on feasible regions for navigation, while the networks trained with only the RGB image or RGB + point cloud show noisier attention.
We also note that the inference time of our NMFNet is approximately 100ms on an NVIDIA 2080 GPU. This allows our method to be used in a wide range of robotic applications. More qualitative results including the deployment of NMFNet on BeetleBot  can be found at https://sites.google.com/site/multimodalnavigation/.
| Modalities | House | City | Cave | Average |
|---|---|---|---|---|
| RGB + Point Cloud | 0.718 | 0.499 | 0.783 | 0.667 |
| RGB + Distance Map | 0.568 | 0.452 | 0.503 | 0.508 |
| Distance Map + Point Cloud | 0.631 | 0.474 | 0.592 | 0.566 |
VI Conclusions and Future Work
We propose NMFNet, an end-to-end, real-time deep learning framework for autonomous navigation in complex environments. Our network has three branches and effectively learns to fuse the visual input data. Furthermore, we show that the use of multimodal sensor data is essential for autonomous navigation in complex environments. Extensive experimental results show that our NMFNet outperforms recent state-of-the-art methods by a fair margin while achieving real-time performance and generalizing well to unseen environments.
Currently, our NMFNet shows limitations in scenarios where the robot has to cross small debris or obstacles. In the future, we would like to quantitatively evaluate and address this problem. Another interesting direction is to combine our method with uncertainty estimation or a goal-driven navigation task for a wider range of applications.
-  M. Pfeiffer, M. Schaeuble, J. Nieto, R. Siegwart, and C. Cadena, “From perception to decision: A data-driven approach to end-to-end motion planning for autonomous ground robots,” in ICRA, 2017.
-  A. Nguyen, T.-T. Do, I. Reid, D. G. Caldwell, and N. G. Tsagarakis, “V2cnet: A deep learning framework to translate videos to commands for robotic manipulation,” arXiv preprint arXiv:1903.10869, 2019.
-  A. Gurghian, T. Koduri, S. V. Bailur, K. J. Carey, and V. N. Murali, “Deeplanes: End-to-end lane position estimation using deep neural networks,” in CVPR, 2016.
-  M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars.” CoRR, 2016.
-  M. Monajjemi, S. Mohaimenianpour, and R. Vaughan, “Uav, come to me: End-to-end, multi-scale situated hri with an uninstrumented human and a distant uav,” in IROS, 2016.
-  T. Howard, C. Green, D. Ferguson, and A. Kelly, “State space sampling of feasible motions for high-performance mobile robot navigation in complex environments,” Journal of Field Robotics, 2008.
-  P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. J. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, et al., “Learning to navigate in complex environments,” arXiv:1611.03673, 2017.
-  “Darpa subterranean challenge,” https://www.subtchallenge.com/.
-  M. Kam, X. Zhu, and P. Kalata, “Sensor fusion for mobile robot navigation,” Proceedings of the IEEE, 1997.
-  Y. Dobrev, S. Flores, and M. Vossiek, “Multi-modal sensor fusion for indoor mobile robot pose estimation,” in 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS), 2016.
-  S. Lynen, M. W. Achtelik, S. Weiss, M. Chli, and R. Siegwart, “A robust and modular multi-sensor fusion approach applied to mav navigation,” in IROS, 2013.
-  H. Du, W. Wang, C. Xu, R. Xiao, and C. Sun, “Real-time onboard 3d state estimation of an unmanned aerial vehicle in multi-environments using multi-sensor data fusion,” Sensors, 2020.
-  R. Mascaro, L. Teixeira, T. Hinzmann, R. Siegwart, and M. Chli, “Gomsf: Graph-optimization based multi-sensor fusion for robust uav pose estimation,” in ICRA, 2018.
-  O. Mees, A. Eitel, and W. Burgard, “Choosing smartly: Adaptive multimodal fusion for object detection in changing environments,” in IROS, 2016.
-  D. Feng, C. Haase-Schütz, L. Rosenbaum, H. Hertlein, C. Glaeser, F. Timm, W. Wiesbeck, and K. Dietmayer, “Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges,” IEEE Transactions on Intelligent Transportation Systems, 2020.
-  A. Valada, J. Vertens, A. Dhall, and W. Burgard, “Adapnet: Adaptive semantic segmentation in adverse environmental conditions,” in ICRA, 2017.
-  Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, “Benchmarking deep reinforcement learning for continuous control,” in ICML, 2016.
-  S. Ross, N. Melik-Barkhudarov, K. S. Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert, “Learning monocular reactive uav control in cluttered natural environments,” in ICRA, 2013.
-  H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” CoRR, 2016.
-  A. Giusti, J. Guzzi, D. C. Cireşan, F.-L. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al., “A machine learning approach to visual perception of forest trails for mobile robots,” RA-L, 2015.
-  H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in CVPR, 2017.
-  A. Nguyen, D. Kundrat, G. Dagnino, W. Chi, M. E. Abdelaziz, Y. Guo, Y. Ma, T. M. Kwok, C. Riga, and G.-Z. Yang, “End-to-end real-time catheter segmentation with optical flow-guided warping during endovascular intervention,” arXiv preprint arXiv:2006.09117, 2020.
-  J. Kim and J. Canny, “Interpretable learning for self-driving cars by visualizing causal attention,” in ICCV, 2017.
-  A. Nguyen, Q. D. Tran, T.-T. Do, I. Reid, D. G. Caldwell, and N. G. Tsagarakis, “Object captioning and retrieval with natural language,” in CVPRW, 2019.
-  C. Richter and N. Roy, “Safe visual navigation via deep learning and novelty detection,” in RSS, 2017.
-  W. Gao, D. Hsu, W. S. Lee, S. Shen, and K. Subramanian, “Intention-net: Integrating planning and deep learning for goal-directed autonomous navigation,” arXiv:1710.05627, 2017.
-  M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al., “End to end learning for self-driving cars,” arXiv:1604.07316, 2016.
-  N. Smolyanskiy, A. Kamenev, J. Smith, and S. Birchfield, “Toward low-flying autonomous mav trail navigation using deep neural networks for environmental awareness,” in IROS, 2017.
-  A. Loquercio, A. I. Maqueda, C. R. del Blanco, and D. Scaramuzza, “Dronet: Learning to fly by driving,” RA-L, 2018.
-  D. Gandhi, L. Pinto, and A. Gupta, “Learning to fly by crashing,” in IROS, 2017.
-  A. Amini, W. Schwarting, G. Rosman, B. Araki, S. Karaman, and D. Rus, “Variational autoencoder for end-to-end control of autonomous driving with novelty detection and training de-biasing,” in IROS, 2018.
-  A. Amini, L. Paull, T. Balch, S. Karaman, and D. Rus, “Learning steering bounds for parallel autonomous systems,” in ICRA, 2018.
-  S. K. Alexander Amini, Guy Rosman and D. Rus, “Variational end-to-end navigation and localization,” arXiv:1811.10119v2, 2019.
-  L. Tai, G. Paolo, and M. Liu, “Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation,” in IROS, 2017.
-  J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in ICML, 2015.
-  T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv:1509.02971, 2015.
-  Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi, “Target-driven visual navigation in indoor scenes using deep reinforcement learning,” in ICRA, 2017.
-  M. Wortsman, K. Ehsani, M. Rastegari, A. Farhadi, and R. Mottaghi, “Learning to learn how to learn: Self-adaptive visual navigation using meta-learning,” arXiv:1812.00971, 2018.
-  J.-B. Delbrouck and S. Dupont, “Object-oriented targets for visual navigation using rich semantic representations,” arXiv:1811.09178, 2018.
-  M. Muller, V. Casser, N. Smith, D. L. Michels, and B. Ghanem, “Teaching uavs to race: End-to-end regression of agile controls in simulation,” in ECCV, 2018.
-  F. Sadeghi and S. Levine, “Cad2rl: Real single-image flight without a single real image,” arXiv:1611.04201, 2016.
-  M. Mancini, G. Costante, P. Valigi, T. A. Ciarfuglia, J. Delmerico, and D. Scaramuzza, “Toward domain independence for learning-based monocular depth estimation,” RA-L, 2017.
-  O. Andersson, M. Wzorek, and P. Doherty, “Deep learning quadcopter control via risk-aware active learning,” in AAAI, 2017.
-  A. Dosovitskiy and V. Koltun, “Learning to act by predicting the future,” arXiv:1611.01779, 2016.
-  G. Kahn, T. Zhang, S. Levine, and P. Abbeel, “Plato: Policy learning using adaptive trajectory optimization,” in ICRA, 2017.
-  C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in CVPR, 2017.
-  S. James, A. J. Davison, and E. Johns, “Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task,” arXiv:1707.02267, 2017.
-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: A system for large-scale machine learning,” in Symposium on Operating Systems Design and Implementation, 2016.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in CVPR, 2015.
-  R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in ICCV, 2017.
-  A. Nguyen, E. Tjiputra, and Q. D. Tran, “Beetlebot: A multi-purpose ai-driven mobile robot for realistic environments,” in Proceedings of UKRAS20 Conference: “Robots into the real world”, 2020.