Visual Representation Learning for Preference-Aware Path Planning

09/18/2021 ∙ by Kavan Singh Sikand, et al.

Autonomous mobile robots deployed in outdoor environments must reason about different types of terrain for both safety (e.g., prefer dirt over mud) and deployer preferences (e.g., prefer dirt path over flower beds). Most existing solutions to this preference-aware path planning problem use semantic segmentation to classify terrain types from camera images, and then ascribe costs to each type. Unfortunately, there are three key limitations of such approaches – they 1) require pre-enumeration of the discrete terrain types, 2) are unable to handle hybrid terrain types (e.g., grassy dirt), and 3) require expensive labelled data to train visual semantic segmentation. We introduce Visual Representation Learning for Preference-Aware Path Planning (VRL-PAP), an alternative approach that overcomes all three limitations: VRL-PAP leverages unlabeled human demonstrations of navigation to autonomously generate triplets for learning visual representations of terrain that are viewpoint invariant and encode terrain types in a continuous representation space. The learned representations are then used along with the same unlabeled human navigation demonstrations to learn a mapping from the representation space to terrain costs. At run time, VRL-PAP maps from images to representations and then representations to costs to perform preference-aware path planning. We present empirical results from challenging outdoor settings that demonstrate VRL-PAP 1) is successfully able to pick paths that reflect demonstrated preferences, 2) is comparable in execution to geometric navigation with a highly detailed manually annotated map (without requiring such annotations), and 3) is able to generalize to novel terrain types with minimal additional unlabeled demonstrations.







I Introduction and Related Work

Autonomous navigation through unstructured human environments is a well-studied problem in robotics, and has seen a number of different approaches. A common class of autonomous navigation is geometric navigation, which plans obstacle-free paths purely via geometric collision-checking. Geometric navigation has been shown to be successful over long-term deployments in indoor settings [1, 7, 11].

However, geometric navigation is unable to reason about paths over different terrain that may appear to be equally valid geometrically (e.g., sidewalk vs. dirt vs. gravel), but have different costs due to reliability of navigation or social norms; or terrain that may seem geometrically impassable but may actually be navigable (e.g., tall grass). This shortcoming of geometric navigation has motivated a field of research in the space of visual navigation, which uses image data from the mobile robot to reason about the environment while navigating.

End-to-end learning solutions to the visual navigation problem, which use deep neural networks to learn a policy that predicts control commands given raw sensory inputs, have recently become a field of great interest. The supervised approach to this learning problem is to use a reference policy (usually provided by a human) as the training signal indicating the desired behaviour for a given sensory input [6]. To avoid the need to provide training labels for every input, Reinforcement Learning (RL) has gained popularity as a method for learning end-to-end policies in a variety of simulation domains [12] and more recently on real robots [5]. BADGR [10] leverages the available sensing redundancy on a mobile robot to learn affordances of different types of terrain in a self-supervised manner. By exploring off-policy paths, it is able to learn a planner that ignores geometric obstacles that the robot can safely traverse (e.g., tall grass). In this work, we handle a different case: when geometric information indicates there are no obstacles in a given region, but visual information indicates it would be preferable to avoid that region anyway.

While end-to-end approaches are attractive due to their ability to be learned from high-level navigation demonstrations, they have been shown to have significant difficulty generalizing to new environments [19]. To resolve this generalizability issue, a number of approaches start by processing the input to produce some intermediate representation of the environment, such as cost maps, segmentation maps [18, 13], or traversability estimates [9], and then perform planning using that data as an input. For example, GoNet [9] uses Generative Adversarial Networks (GANs) to predict the traversability of an environment given nominal examples of navigation for a mobile robot. Because there are a variety of ways of pre-processing visual information that can be useful for different specific downstream navigation tasks, there has also been work focused on choosing between various intermediate representations and fusing their outputs together before selecting an appropriate action [16].


Although intermediate data representations such as semantic segmentation and traversability estimation provide helpful generalizability properties, they often require dense manual labelling to train. This time-intensive process is required for handling any new object or terrain types in the environment. To ameliorate this shortcoming, inverse reinforcement learning (IRL) with visual semantic segmentation [18] learns the navigation cost associated with each semantic class autonomously from human demonstrations. A similar approach to learning visual navigation is to frame it as a reinforcement learning problem given semantic segmentation of input images [2]. While these approaches do not use any explicit labelling when training the navigation planner, they still rely on the outputs of a pre-trained semantic segmentation network, and require manual annotation to extend the semantic segmentation to novel terrain types.

Our approach retains the generalizability benefits of using an intermediate representation while removing the dependence on explicit labelling of visual information. In our approach, both the visual representations and the navigation planner can be adapted to a new environment using only unlabeled human-provided demonstrations.

II Preference-Aware Path Planning

We consider the path planning problem in the context of a state space $\mathcal{S}$, action space $\mathcal{U}$, and deterministic transition function $T: \mathcal{S} \times \mathcal{U} \to \mathcal{S}$. Our state space is comprised of states $s = (p, a)$, where $p$ denotes the robot's position and $a \in \mathcal{A}$ denotes the visual appearance of the ground at this location. We define $\mathcal{A}$ as the space of visual appearances relevant for preference-aware planning. The action space and transition function are defined by the kinodynamic constraints of the robot. Given a start state $s_0$ and goal $G$, the local path planning problem is the search for a finite receding horizon sequence of actions $u_{1:n}$, such that the resulting trajectory $\Gamma = \{s_0, \ldots, s_n\}$, which is defined by $s_{i+1} = T(s_i, u_{i+1})$, exhibits minimal cost $J(\Gamma)$. Since this is a receding horizon local planning problem, the final state of the optimal solution may not reach $G$, but the trajectory must be optimal with respect to the cost, such that $\Gamma^* = \arg\min_\Gamma J(\Gamma)$.

In purely geometric approaches to local planning (i.e., those that consider only geometric obstacles and treat all free space as equal), a common choice for $J$ is

$$J(\Gamma) = J_g(\Gamma) + J_f(\Gamma) + J_c(\Gamma), \quad (1)$$

where $J_g$ is the cost based on progress towards $G$ (e.g., $J_g = \|s_n - G\|$), $J_f$ is the cost based on the free path length of the trajectory, and $J_c$ the cost based on obstacle clearance [15] along $\Gamma$. $J_f$ and $J_c$ are computed by comparing $\Gamma$ to the position of observed geometric obstacles in the environment.

Unlike purely geometric approaches, the path planning method we propose seeks to make preference-aware planning decisions also based on the appearance of the states of the robot's trajectory. For a preference-aware planner that reasons only about distinct semantic classes, $\mathcal{A}$ would be the set of discrete known semantic classes. In contrast, in our approach $\mathcal{A}$ is a continuous space of $k$-dimensional learned visual representations relevant for preference-aware planning. To incorporate this visual information into the path planning problem, we add an additional term to Eq. 1, redefining $J$ as

$$J(\Gamma) = J_g(\Gamma) + J_f(\Gamma) + J_c(\Gamma) + J_v(\Gamma), \quad (2)$$

where $J_v$ computes a cost based on the appearance of the terrain over which the trajectory traverses. Intuitively, this cost should be large for trajectories that cause the robot to traverse undesirable terrain, and small otherwise.

Instead of specifying $J_v$ manually, we learn it from human demonstrations that implicitly provide information about terrain desirability, using a representation learning approach. In the next section, we discuss this learning problem.

III Visual Representation-Based Preference Learning

While each robot state $s_i$ has some true visual appearance $a_i$, the robot does not have a-priori information about it. Instead, the robot observes image patches of the ground, which are then used to infer $a_i$ as follows. First, we use an image projection operator $P$ to identify image patches of the ground in one state as seen by another robot state: $P(s_i, s_j)$ returns the image patch corresponding to the state $s_j$ while observing it from the state $s_i$. Note that $P$ needs access to the full image observation of the robot while at pose $s_i$ – we assume this image to be available, and omit it from the notation for simplicity. This projection operator can be derived from the camera's extrinsic and intrinsic calibration, and the relative positions of the states $s_i$ and $s_j$. We then apply a visual representation function $\phi$ to infer the visual appearance information from this image patch. Thus, the appearance $a_j$ of state $s_j$ is inferred via the visual observations from a different state $s_i$ as $a_j = \phi(P(s_i, s_j))$.

Given an initial state $s_0$ from which the robot can observe future states $s_{1:n}$ along trajectory $\Gamma$, we formulate the preference-aware cost as

$$J_v(\Gamma) = \sum_{i=1}^{n} \gamma^i \, c(\phi(P(s_0, s_i))), \quad (3)$$

where $\phi$ is a visual representation function that maps image patch observations to the visual appearance of the ground, $c$ is a cost function that uses these embeddings to produce a real-valued cost, and $\gamma \in (0, 1)$ is a discount factor to ensure states far in the future don't have an overbearing impact on the current cost calculation. We propose to learn $\phi$ via representation learning, which has recently shown great success at closing the gap between supervised and unsupervised learning for visual tasks such as image recognition [4] and video representation learning [14]. We leverage unlabeled human demonstrations to learn the functions $\phi$ and $c$, as described below.
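As a concrete sketch, the discounted sum in Eq. 3 can be written as follows (the names here are ours, not the paper's implementation; `patch_embeddings` stands in for the sequence $\phi(P(s_0, s_i))$ and `cost_fn` for the learned cost $c$):

```python
def visual_cost(patch_embeddings, cost_fn, gamma=0.9):
    """Discounted preference cost J_v over a trajectory's future states.

    patch_embeddings: embeddings of the terrain patches under states s_1..s_n,
    cost_fn: the learned per-patch cost function c,
    gamma: discount factor in (0, 1) down-weighting far-future states.
    """
    return sum(gamma ** i * cost_fn(e)
               for i, e in enumerate(patch_embeddings, start=1))
```

Because each term depends only on one patch, the per-patch costs can be computed independently and reused across candidate trajectories.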

The training data for this learning problem consists of a set of human-provided demonstrations $\mathcal{D} = \{D_1, \ldots, D_m\}$, where each demonstration $D_i$ consists of a sequence of robot locations and image observations. Each $D_i$ is collected while manually driving a robot from an arbitrary start to a goal location, following paths that encode the human demonstrator's preferences.

III-A Visual Representation Learning

The goal of the visual representation function $\phi$ is to map an image patch $o$ to a low-dimensional representation vector $r \in \mathbb{R}^k$ that captures only the salient visual information from the patch relevant to preference-aware planning. Ideally, we would like this representation to exhibit two properties: 1) separability of patches of terrain with different preference values, and 2) invariance to viewpoint changes (that is, the visual appearance of a given patch of terrain is the same, no matter where it is observed from). We learn this function using a triplet loss function, a form of contrastive loss [8], which requires identification of training triplets that simultaneously encourage the desired separability and invariance properties. We next discuss the method for triplet identification as well as the loss function used to learn $\phi$.

Loss Function.    We define a loss function for $\phi$ such that the learned result exhibits the above properties when trained over training triplets collected in a self-supervised procedure. In this loss function, we require a triplet of image patches $(o_a, o_s, o_d)$, which are referred to as the anchor, similar, and dissimilar patches respectively. Our loss function

$$L(o_a, o_s, o_d) = \max\left(0,\ \|\phi(o_a) - \phi(o_s)\|^2 - \|\phi(o_a) - \phi(o_d)\|^2 + m\right)$$

enforces that in the embedding space $\phi(o_a)$ is closer to $\phi(o_s)$ than it is to $\phi(o_d)$ by at least a fixed margin $m$. Given a training dataset of triplets $\mathcal{T}$, the visual representation learning problem finds $\phi^*$ which satisfies

$$\phi^* = \arg\min_\phi \sum_{(o_a, o_s, o_d) \in \mathcal{T}} L(o_a, o_s, o_d).$$
Next, we explain the process of obtaining our training dataset from the demonstrated trajectories such that this loss function will encourage learning representations which satisfy the desired properties given above.
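A minimal NumPy sketch of this hinge-form triplet loss (our own illustrative code, assuming squared Euclidean distances between already-computed embeddings):

```python
import numpy as np

def triplet_loss(phi_a, phi_s, phi_d, margin=1.0):
    """Penalize triplets where the anchor embedding phi_a is not closer to
    the similar embedding phi_s than to the dissimilar phi_d by `margin`."""
    d_sim = np.sum((np.asarray(phi_a) - np.asarray(phi_s)) ** 2)
    d_dis = np.sum((np.asarray(phi_a) - np.asarray(phi_d)) ** 2)
    return max(0.0, d_sim - d_dis + margin)
```

When the dissimilar embedding is already far enough away, the loss is zero; otherwise minimizing it pulls the similar pair together and pushes the dissimilar pair apart.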

Similar Patch Extraction.    In order to enforce viewpoint invariance, we choose triplets such that $o_a$ and $o_s$ are different views of the same location in the real world. Viewpoint invariance requires that for all arbitrary states $s_i, s_j$, the visual representation of the image patch of the observation of state $s_k$ should be the same as seen from $s_i$ and $s_j$:

$$\phi(P(s_i, s_k)) = \phi(P(s_j, s_k)).$$

To extract pairs which correctly enforce this property from the human demonstrations, we use the following procedure. For three arbitrary time-steps $i < j < k$ in a human demonstration trajectory $D$, the anchor image patch $o_a$ and similar image patch $o_s$ are selected as the image projections of $s_k$ from time-steps $i$ and $j$ respectively:

$$o_a = P(s_i, s_k), \quad o_s = P(s_j, s_k).$$
Fig. 1(a) illustrates this procedure for similar patch extraction.

Dissimilar Patch Extraction.    When selecting patches to identify as dissimilar ($o_d$), we seek to ensure that regions the human demonstrator chose to avoid are distant in the embedding space from regions the demonstration traversed. Rather than ask for human annotation, our approach infers this preference based on the sequence of future demonstration states in each $D$. For each demonstration trajectory $\Gamma_D$, the robot generates a hypothesized trajectory $\Gamma_H$ such that they both start and end at the same points and the length of the hypothesized path is shorter than that of the demonstration: $\ell(\Gamma_H) < \ell(\Gamma_D)$. (We assume the demonstrations are non-trivial, such that $\Gamma_H$ always exists and has no geometric obstacles; otherwise the corresponding demonstration trajectory is discarded.) The dissimilar patch is then selected as the image projection of a randomly chosen state $s_h \in \Gamma_H$ that is distant from the demonstration trajectory:

$$o_d = P(s_0, s_h), \quad \min_{s \in \Gamma_D} \|s_h - s\| > \tau,$$

where $\tau$ is a tunable threshold. By selecting patches along this hypothesized path which are also far from the demonstrated trajectory, we can infer that this region was explicitly avoided by the human demonstrator. Fig. 1(b) provides a visualization of this patch selection procedure.
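The selection rule above can be sketched as follows (a simplification with hypothetical names; states are 2-D positions and the projection step P is omitted):

```python
import numpy as np

def candidate_dissimilar_states(hypothesized, demonstrated, tau):
    """Return states on the shorter hypothesized path whose distance to
    every state of the demonstrated trajectory exceeds the threshold tau;
    one of these is then chosen at random to generate the dissimilar patch."""
    demo = np.asarray(demonstrated, dtype=float)
    return [s for s in hypothesized
            if np.min(np.linalg.norm(demo - np.asarray(s, dtype=float),
                                     axis=1)) > tau]
```

States on the shortcut that are far from everything the human drove over are exactly the regions the demonstrator implicitly avoided.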

(a) Similar Patch Extraction. The visual representations of patches at location $s_k$, as observed from the robot at states $s_i$ and $s_j$, are enforced to be similar.
(b) Dissimilar Patch Extraction
Fig. 1: Patch Extraction Procedure

Triplet Selection.    The complete training dataset is generated in a self-supervised manner by repeating the above procedure over exhaustively chosen time-steps for all demonstration trajectories – since there exists a large number of such time-steps in each demonstration trajectory, we are able to construct a sizable training set with a small number of human demonstrations.

III-B Visual Preference Cost Function

The cost function $c$ is responsible for taking the visual appearance of a single patch of terrain, as obtained from $\phi$, and outputting a real-valued traversal cost, reflecting the cost incurred by travelling over this terrain. These individual patch costs are then combined together in Eq. 3 to contribute to the overall cost of a trajectory $\Gamma$.

Loss Function.    To train our cost function, we use the same training set $\mathcal{T}$ of triplets that was extracted in Section III-A. Our loss function has a margin $m_c$, and can be defined as:

$$L_c(o_s, o_d) = \max\left(0,\ c(\phi(o_s)) - c(\phi(o_d)) + m_c\right).$$

This loss function enforces that $c(\phi(o_d))$ is at least $m_c$ greater than $c(\phi(o_s))$. We therefore want to choose $o_d$ such that it is a patch of terrain that should have a high cost relative to $o_s$. To do this, we find the $c^*$ which optimizes:

$$c^* = \arg\min_c \sum_{(o_a, o_s, o_d) \in \mathcal{T}} L_c(o_s, o_d).$$

Here, we use the similar patches $o_s$, which represent image patches over which the robot traversed during demonstration, as the patches which generate $c(\phi(o_s))$, and we use the dissimilar patches $o_d$, which represent image patches that were explicitly avoided during demonstration, as the patches which generate $c(\phi(o_d))$, thereby extracting the appropriate cost preferences from the demonstration. By comparing the produced costs in a pairwise fashion, we can enforce a strict ordering among the terrain types that are present. From the demonstrations, we are unable to directly label the correct cost for a region, but rather are given relative preference information, and therefore we do not use a direct regression-based cost function.
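This pairwise margin objective can be sketched as (illustrative names; `cost_traversed` stands in for $c(\phi(o_s))$ and `cost_avoided` for $c(\phi(o_d))$):

```python
def cost_margin_loss(cost_traversed, cost_avoided, margin=1.0):
    """Zero when avoided terrain already costs at least `margin` more than
    traversed terrain; positive (and therefore trainable) otherwise."""
    return max(0.0, cost_traversed - cost_avoided + margin)
```

Note that only the difference of the two costs matters, which matches the paper's observation that demonstrations carry relative preference information rather than absolute cost labels.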

IV Implementation Details

In this section we discuss details of our implementation that allowed the method described above to be deployed in real-time during our experimental evaluation.

Ground-Plane Homography.    When working with the robot's visual data, we first undistort the images and apply a homography to transform the images to overhead views, determined by the intrinsics and extrinsics of the robot's camera. Fig. 1(b) demonstrates the result of this homography. Since the transformed image has a constant scale with respect to distances in the world, rectangular image patches of constant size correspond to constant-size rectangular regions on the ground plane. In our implementation, we chose a fixed patch size in pixels, corresponding to a ground-plane region comparable in size to our robot.

Local Cost-Map.    We persist the costs for each observed patch (the output of ) in a local costmap centered on the robot’s current position, using the robot’s odometry estimate to transform the existing costmap between time-steps. This affords the robot a short-term memory of visual information it has observed, but which is no longer in its view, which helps our implementation handle sharp turns and narrow field-of-view cameras. We recompute for any patches which can be observed by the robot at the current time-step, and we recompute from this costmap for each trajectory at every time step.
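The persistence step can be sketched as a toy grid shift (our own simplification, not the paper's implementation: a robot-centered grid is shifted by the odometry delta in whole cells, and newly exposed cells are marked unknown with NaN):

```python
import numpy as np

def shift_costmap(costmap, dy, dx, fill=np.nan):
    """Shift a robot-centered costmap by (dy, dx) grid cells of robot motion,
    keeping previously observed costs and invalidating newly exposed cells."""
    shifted = np.roll(costmap, shift=(-dy, -dx), axis=(0, 1))
    if dy > 0:
        shifted[-dy:, :] = fill   # cells that wrapped from the top
    elif dy < 0:
        shifted[:-dy, :] = fill
    if dx > 0:
        shifted[:, -dx:] = fill
    elif dx < 0:
        shifted[:, :-dx] = fill
    return shifted
```

Cells still marked unknown after the shift are exactly those the robot must re-observe; everything else retains the short-term memory described above.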

Network Structure.    In our implementation, the visual representation function $\phi$ takes the form of a neural network with 2 convolutional layers followed by 3 densely connected layers, and our representation is a 6-dimensional vector. The cost function $c$ is a small 3-layer Multi-Layer Perceptron (MLP) with a ReLU activation function to prevent negative outputs.

Batched Cost Computation.    Because our formulation computes costs for each image patch independently, we are able to parallelize the computation of patch costs for each image. The small patch size combined with the compact network structure described above allows our algorithm to process hundreds of patches per time-step on our robot's modest GPU (Nvidia GeForce GTX 1050TI), which is enough to process an entire image observation during the planning process; significantly faster than FCHarDNet [3], a segmentation network designed for efficiency in compute-constrained environments, which achieved a substantially lower processing rate when running on the same GPU as part of an autonomy stack.
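The batched computation can be sketched with plain matrix operations (a stand-in for the learned networks: fixed weight matrices replace the trained $\phi$ and $c$, and patches are flattened rather than convolved):

```python
import numpy as np

def batch_patch_costs(patches, embed_w, cost_w):
    """Embed and cost every patch of an image in one vectorized pass.

    patches: (N, H, W) array of image patches,
    embed_w: (H*W, k) stand-in for the representation function phi,
    cost_w:  (k,) stand-in for the cost function c.
    ReLU keeps the resulting costs non-negative, as in the paper's MLP.
    """
    flat = patches.reshape(len(patches), -1)
    embeddings = flat @ embed_w                      # phi on the whole batch
    return np.maximum(embeddings, 0.0) @ np.maximum(cost_w, 0.0)
```

Running all patches through one matrix multiply, instead of looping per patch, is what makes whole-image processing feasible on a modest GPU.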

Fig. 2: (a) Primary Evaluation Environment Trajectories. (b) Secondary Evaluation Environment Trajectories.
Method               |   Trajectory 1    |   Trajectory 2    |   Trajectory 3    |   Trajectory 4
                     | Dist.  Dur.  Int. | Dist.  Dur.  Int. | Dist.  Dur.  Int. | Dist.  Dur.  Int.
Preference Learning  | 0.95   0     0    | 0.93   0     0    | 1.37   0     0    | 1.49   0     0
Segmentation         | 2.40   13    3.5  | 1.56   2.5   2    | 2.23   8.5   4    | 2.66   5.5   3
Annotated Geometric  | 1.05   1.25  1    | 1.15   0.25  0.5  | 1.32   0     1    | 2.17   4     0
Pure Geometric       | –      24    4    | –      18    5    | –      62    7    | –      15    5
TABLE I: Mean Metrics in Primary Evaluation Environment. Dist.: Hausdorff distance to the reference trajectory; Dur.: time spent on undesirable terrain; Int.: number of operator interventions.

V Experimental Results

We evaluate VRL-PAP in a variety of real-world environments by measuring its 1) accuracy at following desired paths compared to other visual and geometric navigation planners; 2) adaptability to novel terrain types from limited unlabeled demonstration; and 3) scalability to long trajectories in the real world. All experiments were performed on a Clearpath Jackal Unmanned Ground Vehicle equipped with a VLP-16 LiDAR, a Microsoft Azure Kinect RGB-D camera, and an Nvidia GeForce GTX 1050TI.

V-A Accuracy In Following Desired Paths

We compare VRL-PAP to a reference and three baselines:
Reference: A reference trajectory of the correct preference-aware path provided via joystick by a human operator.
Annotated Geometric: A geometric planner using a detailed hand-annotated navigation graph of the evaluation environment including desirable paths.
Pure Geometric: A purely geometric planner without a detailed hand-annotated navigation graph of desirable paths.
Segmentation: A state-of-the-art preference-aware planner using semantic segmentation to build a local cost map for planning: the Army Research Laboratory's Autonomy Stack, which uses FCHarDNet [3] for semantic segmentation, trained on the RUGD dataset [17].

We ran repeated trials with each of these navigation methods on four evaluation trajectories. Fig. 2(a) shows these trajectories, which traverse a real-world environment that includes multiple types of valid sidewalk, shadows cast by trees and buildings, and multiple types of undesired terrain including dirt, grass, and shrubs. This environment was also used to collect training data for the preference learning model, though the demonstration trajectories were distinct from the evaluation trajectories.

We use an undirected Hausdorff distance, which measures the distance from each point in one trajectory to the closest point in the other, to quantitatively evaluate the accuracy of each autonomously executed trajectory:

$$d_H(\Gamma, \Gamma_{\text{ref}}) = \max\left(\max_{s \in \Gamma} \min_{s' \in \Gamma_{\text{ref}}} \|s - s'\|,\ \max_{s' \in \Gamma_{\text{ref}}} \min_{s \in \Gamma} \|s - s'\|\right).$$
Additionally, we evaluate the duration of time for which the robot was on undesirable terrain type, and the number of operator interventions necessary to prevent the robot from taking unsafe actions (e.g., driving into dense shrubbery).
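For concreteness, the undirected Hausdorff distance between two sampled 2-D trajectories can be computed as follows (our own sketch, using a dense pairwise-distance matrix):

```python
import numpy as np

def hausdorff_distance(traj, ref):
    """Symmetric (undirected) Hausdorff distance between two point sequences."""
    a = np.asarray(traj, dtype=float)
    b = np.asarray(ref, dtype=float)
    # pairwise Euclidean distances between every point of a and every point of b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Taking the maximum over both directions makes the metric symmetric, so neither trajectory can hide a large deviation from the other.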

The results of these experiments are presented in Table I. From these results, we see that VRL-PAP performs comparably to the Annotated Geometric baseline, without access to the hand-made navigation graph for this environment. Further, VRL-PAP never needed human intervention to stay on the desired path, while all of the baseline approaches required human intervention. The pre-trained segmentation-based approach required interventions due to instances of terrain in the environment that were not in its training dataset (short shrubbery and smooth dirt), which motivates the adaptability experiment presented in Section V-C.

Fig. 3: Cumulative distribution functions measuring distance to the Reference trajectory: (a) Park Traj. 1, (b) Park Traj. 2, (c) Adaptability Experiment, (d) Extended Deployment.

V-B Accuracy In Secondary Environment

To investigate the accuracy of VRL-PAP in a more complex scenario, we performed evaluation in an unstructured park environment, which included paths that were less clearly delineated than those in the primary evaluation environment. Fig. 4 shows a sample image from this environment, along with a top-down view of the local costmap generated by VRL-PAP after providing a handful of demonstrations in this environment. Fig. 2(b) shows the two trajectories over which we evaluated VRL-PAP and the Pure Geometric baseline, performing two trials of each. Fig. 3(a) and Fig. 3(b) show the cumulative distribution function (CDF) of the distance from the reference trajectory when executing VRL-PAP and the Pure Geometric baseline – in both cases VRL-PAP more closely follows the reference trajectory, staying close to the reference at all times.

Fig. 4: Path and top-down costmap in the secondary evaluation environment. Lower cost regions are darker. Red regions are out of the robot’s field of view.

V-C Adaptability to Novel Terrain Types

In principle, both visual segmentation and VRL-PAP should be customizable to novel terrain types with sufficient training data. However, a key feature of VRL-PAP is that it can adapt to novel terrain types given only unlabeled human demonstrations. In contrast, for a visual semantic segmentation-based approach, adapting to new types of terrain requires collecting labelled segmentation images, which is significantly more onerous.

To evaluate the adaptability of VRL-PAP, we first deployed it in an environment with a new class of undesirable terrain. The initial model VRL-PAP-initial failed to avoid the terrain type, as did the segmentation-based approach. However, after providing 3 unlabeled human demonstrations of a different trajectory in the new environment, totalling just 47 seconds of driving, our updated model VRL-PAP-augmented was able to successfully avoid the undesirable terrain. Fig. 3(c) and Fig. 5 show the results of this experiment.

Fig. 5: Comparison of the behavior of path planners in the presence of a novel undesirable type of terrain (gravel). (a) Visualization of the trajectories traversed by each planner. (b) Top-down view of the predicted terrain cost by VRL-PAP-initial before being provided any training data including gravel, and (c) by VRL-PAP-augmented after re-training given a few short human demonstrations. Darker regions indicate lower navigation cost. Red regions are out of the robot’s field of view. (d) Camera-frame view of the predicted cost map by the semantic segmentation approach.

V-D Scalability To Extended Deployments

Finally, we provide an example of VRL-PAP navigating a long trajectory, demonstrating its ability to stay on desirable paths over a long period of time without human intervention. Fig. 6 shows the evaluation trajectory – it circumnavigates the park environment used in Section V-B. This trajectory covers an extended stretch of autonomous navigation, during which the robot was provided sequential navigation goals and required no manual interventions. Fig. 3(d) shows a CDF of the distance between VRL-PAP and the human-provided reference trajectory, showing that it stayed close to the reference for the vast majority of the trajectory.

Fig. 6: Extended Evaluation Trajectory

VI Conclusion

In this work we presented VRL-PAP, a method for preference-aware path planning based on visual representations that are learned from unlabelled human demonstrations. We provided a formulation for this approach which enforces the desired properties of viewpoint invariance and separability on the learned visual representations. Finally, we demonstrated this approach's capacity to successfully navigate in a variety of environments, as well as to transfer to novel terrain types with no manual annotation of training data.

VII Acknowledgements

This work has taken place in the Autonomous Mobile Robotics Laboratory (AMRL) at UT Austin. AMRL research is supported in part by NSF (CAREER-2046955, IIS-1954778, SHF-2006404), ARO (W911NF-19-2-0333), DARPA (HR001120C0031), Amazon, JP Morgan, and Northrop Grumman Mission Systems. The views and conclusions contained in this document are those of the authors alone.


  • [1] J. Biswas and M. Veloso (2016) The 1,000-km challenge: insights and quantitative and qualitative results. IEEE Intelligent Systems 31 (3), pp. 86–96. External Links: Document Cited by: §I.
  • [2] J. Bruce, N. Sunderhauf, P. Mirowski, R. Hadsell, and M. Milford (2018) Learning deployable navigation policies at kilometer scale from a single traversal. In Proceedings of The 2nd Conference on Robot Learning (CORL), A. Billard, A. Dragan, J. Peters, and J. Morimoto (Eds.), Vol. 87, pp. 346–361. Cited by: §I.
  • [3] P. Chao, C. Kao, Y. Ruan, C. Huang, and Y. Lin (2019) HarDNet: a low memory traffic network. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3551–3560. Cited by: §IV, §V-A.
  • [4] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (ICML), pp. 1597–1607. Cited by: §III.
  • [5] H. L. Chiang, A. Faust, M. Fiser, and A. Francis (2019-02) Learning navigation behaviors end-to-end with autorl. IEEE Robotics and Automation Letters (RA-L) PP, pp. 1–1. External Links: Document Cited by: §I.
  • [6] A. Giusti, J. Guzzi, D. C. Cireşan, F. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al. (2015) A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters (RA-L) 1 (2), pp. 661–667. Cited by: §I.
  • [7] N. Hawes, C. Burbridge, F. Jovan, L. Kunze, B. Lacerda, L. Mudrova, J. Young, J. Wyatt, D. Hebesberger, T. Kortner, et al. (2017) The strands project: long-term autonomy in everyday environments. IEEE Robotics & Automation Magazine 24 (3), pp. 146–156. Cited by: §I.
  • [8] A. Hermans, L. Beyer, and B. Leibe (2017) In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737. Cited by: §III-A.
  • [9] N. Hirose, A. Sadeghian, M. Vázquez, P. Goebel, and S. Savarese (2018) GONet: a semi-supervised deep learning approach for traversability estimation. In International Conference on Intelligent Robots and Systems (IROS), pp. 3044–3051. External Links: Link, Document Cited by: §I.
  • [10] G. Kahn, P. Abbeel, and S. Levine (2021) BADGR: an autonomous self-supervised learning-based navigation system. IEEE Robotics and Automation Letters (RA-L) 6 (2), pp. 1312–1319. External Links: Document Cited by: §I.
  • [11] P. Khandelwal, S. Zhang, J. Sinapov, M. Leonetti, J. Thomason, F. Yang, I. Gori, M. Svetlik, P. Khante, V. Lifschitz, et al. (2017) Bwibots: a platform for bridging the gap between ai and human–robot interaction research. The International Journal of Robotics Research (IJRR) 36 (5-7), pp. 635–659. Cited by: §I.
  • [12] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013) Playing atari with deep reinforcement learning. Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS). Cited by: §I.
  • [13] A. Mousavian, A. Toshev, M. Fišer, J. Košecká, A. Wahid, and J. Davidson (2019) Visual representations for semantic target driven navigation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8846–8852. Cited by: §I.
  • [14] R. Qian, T. Meng, B. Gong, M. Yang, H. Wang, S. Belongie, and Y. Cui (2021) Spatiotemporal contrastive video representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6964–6974. Cited by: §III.
  • [15] A. Richardson and E. Olson (2011-09) Iterative path optimization for practical robot planning. pp. 3881–3886. External Links: Document Cited by: §II.
  • [16] W. Shen, D. Xu, Y. Zhu, L. Fei-Fei, L. Guibas, and S. Savarese (2019) Situational fusion of visual representation for visual navigation. In IEEE/CVF International Conference on Computer Vision (ICCV), Vol. , pp. 2881–2890. External Links: Document Cited by: §I.
  • [17] M. Wigness, S. Eum, J. G. Rogers, D. Han, and H. Kwon (2019) A rugd dataset for autonomous navigation and visual perception in unstructured outdoor environments. In International Conference on Intelligent Robots and Systems (IROS), Cited by: §V-A.
  • [18] M. Wigness, J. G. Rogers, and L. E. Navarro-Serment (2018) Robot navigation from human demonstration: learning control behaviors. In 2018 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 1150–1157. External Links: Document Cited by: §I, §I.
  • [19] X. Xiao, B. Liu, G. Warnell, and P. Stone (2020) Motion control for mobile robot navigation using machine learning: a survey. The International Journal of Robotics Research (IJRR). Cited by: §I.