Robotic Vision for Space Mining

09/27/2021
by Ragav Sachdeva, et al.

Future Moon bases will likely be constructed using resources mined from the surface of the Moon. The difficulty of maintaining a human workforce on the Moon and the communications lag with Earth mean that mining will need to be conducted using collaborative robots with a high degree of autonomy. In this paper, we explore the utility of robotic vision towards addressing several major challenges in autonomous mining in the lunar environment: lack of satellite positioning systems, navigation in hazardous terrain, and delicate robot interactions. Specifically, we describe and report the results of robotic vision algorithms that we developed for Phase 2 of the NASA Space Robotics Challenge, which was framed in the context of autonomous collaborative robots for mining on the Moon. The competition provided a simulated lunar environment that exhibits the complexities alluded to above. We show how machine learning-enabled vision could help alleviate the challenges posed by the lunar environment. A robust multi-robot coordinator was also developed to achieve long-term operation and effective collaboration between robots.


I Introduction

The need to transport resources from Earth is a serious obstacle to space exploration that must be addressed as a precursor to sustainable deep space missions. In-Situ Resource Utilisation (ISRU), where resources are extracted on other astronomical objects and exploited to support longer and deeper space missions, has been proposed as a way to mitigate the need to carry resources from Earth [32].

The difficulties of building a large-scale human presence in space and the lack of real-time interplanetary communication mean that mining on planetary bodies (primarily the Moon and Mars) will have to depend on robots with a high level of autonomy [40, 28]. Although semi-automated systems for mining on Earth exist [12], they are supported by mature infrastructure such as global navigation satellite systems (GNSS), well-maintained roads, ready access to fuel, and maintenance. These facilities will not be available at the onset of space mining missions, where robots will need to contend with hazardous terrain, a lack of accurate positioning systems, limited power supply, and many other difficulties [22, 13, 35, 33, 5, 43]. Indeed, space robotics has been identified by NASA as a Centennial Challenge.

For risk and economic reasons, space mining will likely utilise a fleet of heterogeneous robots that must collaborate to accomplish the goal. This accentuates the difficulties alluded to above; apart from being able to navigate in an unstructured environment and avoid obstacles without accurate satellite positioning, a robot must also manoeuvre and interact with other robots without causing damage. This argues for a high degree of intelligence on each agent and a robust multi-robot coordination system to ensure long-term operation.

Fig. 1: Gazebo simulated lunar environment with rovers in SRCP2.

In this systems paper, we explore robotic vision to address some of the key challenges towards autonomous robots for collaborative space mining: lack of satellite positioning systems, navigation in hazardous terrain, and the need for delicate robot interactions. Specifically, we describe the main components of our solution for the NASA Space Robotics Challenge Phase 2 (SRCP2) [37], wherein a simulated lunar environment that contained a heterogeneous fleet of rovers was provided; see Fig. 1. The goal was to develop software to enable the rovers to autonomously and collaboratively find and extract resources on the Moon. Our solution, which won 3rd place and an innovation award, extensively employed machine learning-based robotic perception to accomplish accurate localisation, semantic mapping of the lunar terrain, and object detection that facilitates accurate close-range manoeuvring between rovers.

In the rest of the paper, we further introduce SRCP2, and briefly describe our overall solution, before detailing our robotic vision algorithms and their results on the problems above.

Fig. 2: Rover classes in SRCP2 (excavator, hauler, and scout).

II NASA Space Robotics Challenge

In SRCP2 [37], a Gazebo simulated lunar environment that contained several rovers and two lunar landers (“base stations”) was provided; see Fig. 1. Competitors were tasked with developing software that enables the rovers to autonomously find, excavate, and retrieve resources (volatiles) in the lunar regolith. The main features of the challenge are:


  • Resources are scattered across the 200 m × 200 m map with no prior information about their locations. Hence, the resources must be found by exploring the environment.

  • Hazardous terrain comprising mounds, craters, and hills, which can cause a rover to slip, become disoriented, or flip over. Therefore, obstacles must be avoided during navigation.

  • Absence of a global positioning system. Each rover is allowed to query its global position only once from the simulator (e.g., for initialisation); thus, the rovers need to self-localise. In addition, the positions of the base stations are neither supplied nor retrievable from the simulator.

  • The base stations comprise a processing plant, where all the excavated resources must be deposited, and a recharge station, to rapidly restore rover batteries.

  • There are three types of rovers—scout, excavator and hauler (see Fig. 2)—that have complementary specialisations. The scout has a volatile sensor to locate resources. The excavator has an arm that can perform digging. The hauler has a bin to haul the resources back to the processing plant. In addition, each rover is equipped with an IMU, stereo cameras, and a 2D LiDAR.

  • The challenge allows fleets to comprise any combination of the rover types, up to a maximum of six units.

  • The final score is the number of volatiles deposited in the processing plant during a 2-hour simulation run.

III Overview of our solution

Our paper focuses on the role of robotic perception and multi-robot coordination for SRCP2. It is nevertheless useful to first provide an overview of our solution to help conceptually connect the major components to be described later.

Our solution utilises two scouts, two excavators, and two haulers separated into two largely independent teams. Each team consists of one instance of each rover type. At initialisation the poses of all rovers and base stations are established on a common world coordinate frame (Sec. IV). Upon successful initialisation, the on-board localisation algorithm of each rover is invoked.

The scouts then follow a spiral search pattern centred at the base stations to discover volatiles, which prioritises the discovery of deposits closer to base. Meanwhile, the excavator and the hauler of each team follow their respective scout, ready to extract volatiles as soon as they are found. Throughout the journey, the rovers continuously generate semantic understanding of their surroundings through the camera to conduct real-time obstacle avoidance (Sec. V).

During exploration, the scout continuously monitors its volatile sensor (located at the front of the chassis), which returns a noisy measurement of distance to volatiles within a 2m radius. When a deposit is detected, the scout attempts to precisely pinpoint its location. It does this by first rotating on the spot to align its orientation with the direction of the detected volatile (via gradient descent in conjunction with a Savitzky–Golay filter [34]), and then repeating the process while driving forward. Upon successfully pinpointing the volatile, the scout pauses and waits for the excavator and hauler to rendezvous with it (Sec. VI). Once a safe parking configuration is reached, the scout continues with exploration while the excavator and hauler begin mining.
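
To make the pinpointing step concrete, the following is a minimal sketch (not our exact implementation) of aligning the scout heading with the volatile: the noisy range readings are smoothed with a Savitzky–Golay filter and the scout keeps turning while the smoothed range decreases, reversing and shrinking the step when it increases. The read_volatile_range and turn_in_place helpers are hypothetical rover APIs, and the step sizes are illustrative.

```python
# Crude 1-D gradient descent on heading using Savitzky-Golay smoothed ranges.
# read_volatile_range() and turn_in_place() are hypothetical rover APIs.
from scipy.signal import savgol_filter

def align_heading_to_volatile(read_volatile_range, turn_in_place,
                              step_rad=0.05, window=11, poly=2, max_steps=200):
    ranges = []
    direction = 1.0                          # start turning counter-clockwise
    for _ in range(max_steps):
        turn_in_place(direction * step_rad)
        ranges.append(read_volatile_range())
        if len(ranges) < window:
            continue                         # need enough samples to smooth
        smoothed = savgol_filter(ranges, window_length=window, polyorder=poly)
        gradient = smoothed[-1] - smoothed[-2]   # change in smoothed range per step
        if gradient > 0:                     # range increasing: overshot the bearing
            direction *= -1.0                # reverse the turn direction
            step_rad *= 0.5                  # and shrink the step
        if step_rad < 1e-3:                  # converged on the volatile bearing
            break
```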

The excavator repeatedly digs for volatiles and dumps them into the hauler’s bin using object detection on the camera feed to locate the hauler (Sec. V-A), and LiDAR to accurately infer distance from the hauler’s bin to the excavator chassis (Sec. VI). The excavator then returns to following the scout, and the hauler may return to the processing plant if its bin is full. The above is repeated until both teams exhaust all resources in their respective domain.

When a rover’s battery level is low, it pauses its current task and returns to the repair station. While approaching the base stations, accumulated error in on-board localisation is zeroed by estimating the rover pose with respect to the base stations (Sec. IV) when the latter are in view.

Our solution was able to consistently and continuously operate in 2-hour simulation runs of SRCP2; see [27] for a video recording. In the following, we will explain in more detail how we accomplished localisation, navigation, robot interaction, and coordination, particularly the robotic vision algorithms that underpin the former three components.

IV Localisation

Fig. 3: Pose estimation pipeline to calculate the relative pose between base stations and rover.

Accurate localisation—estimating position and orientation within the operating environment—is fundamentally important to autonomous robots [41]. Localisation techniques can be broadly classified into active and passive methods. Active methods generally involve direct communication of signals that facilitate localisation. Examples include RF beacons, WiFi positioning, RFID positioning, and GNSS.

Passive methods utilise onboard sensors to generate relative measurements between the robot and the environment to estimate position. A basic technique is to conduct dead reckoning using interoceptive sensors such as wheel encoders and an IMU to incrementally track the robot motion using Bayesian filtering. However, dead reckoning is subject to drift, hence the filter must be periodically reset using extra information such as celestial positioning [45, 47], fiducial markers [1, 46], or image matching [19, 15].

Simultaneous Localisation and Mapping (SLAM) is regarded as a state-of-the-art (SOTA) passive localisation approach. In addition to tracking robot motion, SLAM techniques incrementally build a map of the environment using the sensor percepts. This allows the robot to relocalise itself in the environment (so-called “loop closing”) and remove drift by redistributing accumulated error through all variables in the system. A notable instance of SLAM is visual SLAM (VSLAM), whereby the primary sensor is a camera [7]. SOTA VSLAM algorithms [26, 38] detect and map visually salient features or keypoints in the environment.

A practical robot localisation scheme will likely use a combination of active and passive methods. It is worthwhile to point out that existing Earth-centric GNSS will unlikely be sufficient for accurate localisation on the Moon [20, 22].

IV-A Our localisation technique for SRCP2

Active localisation functionalities and positioning markers are not provided in SRCP2, and realistic star fields are not rendered in the simulator. Our initial investigation also showed that VSLAM (specifically [26]) is brittle in the simulated lunar environment [42], i.e., it cannot operate over long durations without recurrent failures, possibly due to the feature-poor textures used to render the lunar terrain and the significant brightness contrast and strong shadowing, which reduce the ability to repeatably detect and match keypoints; see Fig. 4.

Fig. 4: ORB keypoints detected in the first person view (FPV) of a rover. Notice the relatively texture-poor terrain and strong contrast between bright and shadowed regions. Also, few keypoints are detected in the shadow.

Given the above findings, we developed a localisation solution that relies on extended Kalman filtering (EKF) [25, 17] of linear and angular velocity estimates from the wheel odometry and onboard IMU of a rover. To remove drift, we perform visual pose estimation of the base stations. Specifically, when a base station appears in the FOV of the rover camera (during the normal return-to-base runs; see Sec. III), the 6DoF relative pose between the base station and rover is estimated; see Fig. 5. Given the absolute pose of the base station (initialised according to Sec. IV-C), the absolute pose of the rover is inferred and used to reset the EKF. We describe the pose estimation pipeline and initialisation next.
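
As a simplified illustration of the reset step, the sketch below composes a known absolute base-station pose with a visually estimated relative pose to recover an absolute rover pose, which can then be used to re-seed the EKF state. It is shown in 2D for brevity (the actual system works with 6DoF poses); the se2 helper and example numbers are purely illustrative.

```python
# Drift reset by pose composition (2-D homogeneous transforms for brevity).
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def absolute_rover_pose(T_world_base, T_base_rover):
    """Compose the known base-station pose with the visually estimated relative pose."""
    T_world_rover = T_world_base @ T_base_rover
    x, y = T_world_rover[0, 2], T_world_rover[1, 2]
    theta = np.arctan2(T_world_rover[1, 0], T_world_rover[0, 0])
    return np.array([x, y, theta])     # state used to re-seed (reset) the EKF

# Example: base station anchored at (10 m, 5 m, 0 rad); the rover is observed
# 3 m in front of the base station, facing it.
print(absolute_rover_pose(se2(10.0, 5.0, 0.0), se2(3.0, 0.0, np.pi)))
```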

Fig. 5: Relative pose estimation between rover and observed base stations. See [2] for a video recording of our pose estimation results.

Iv-B Visual pose estimation

To relocalise the rovers we adopted the satellite pose estimation pipeline of Chen et al. [4]; see Fig. 3. A deep neural network (DNN), specifically a YOLOv5 [16] object detector, detects the bounding box of base stations in the input image. Given a crop of a base station, a second DNN (a combination of HRNet [39] and DSNT [29]) predicts the coordinates of predetermined landmarks in the image. This yields 2D-3D correspondences between the image and the 3D model of the base station, which are fed to a robust perspective-n-point (PnP) solver [11] to compute the relative pose. The overall pipeline runs at 10 FPS on an RTX 2080. To train the object detector, we collected 30,000 images from the FPV of rovers and labelled them with ground-truth bounding boxes containing base stations. To train the landmark predictor, we manually chose, based on visual saliency, 35 points on the CAD model of the base stations provided by NASA, and labelled the pixel locations of these points in the training images. DNN training was done using PyTorch and the YOLOv5 framework [16].
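
The following is a hedged sketch of the pipeline in Fig. 3 using OpenCV's robust PnP solver. The detect_base_station and predict_landmarks callables stand in for the trained YOLOv5 and HRNet+DSNT networks, landmarks_3d represents the 35 hand-picked CAD points, and K is the camera intrinsic matrix; all of these names are assumptions for illustration only.

```python
# Detection -> landmark regression -> robust PnP, as in Fig. 3 (illustrative sketch).
import cv2
import numpy as np

def estimate_relative_pose(image, detect_base_station, predict_landmarks,
                           landmarks_3d, K):
    box = detect_base_station(image)                  # (x1, y1, x2, y2) or None
    if box is None:
        return None
    x1, y1, x2, y2 = box
    crop = image[y1:y2, x1:x2]
    landmarks_2d = predict_landmarks(crop) + np.array([x1, y1])  # back to full-image pixels
    ok, rvec, tvec, _inliers = cv2.solvePnPRansac(
        landmarks_3d.astype(np.float32), landmarks_2d.astype(np.float32),
        K, distCoeffs=None, reprojectionError=4.0)    # robust PnP [11]
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                        # rotation of the base station in the camera frame
    return R, tvec                                    # relative pose (rotation, translation)
```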

Fig. 6: Navigation framework that combines the path planner and waypoint driver. In order to reach the global goal waypoint while avoiding obstacles, the path planner creates a series of local goals. The motion controller drives the rover’s actuators to steer it to the next local goal.

Iv-C Initialisation

As mentioned in Sec. II, while the ground-truth absolute pose of a rover can be accessed once via an API call to the simulator, no such facility exists for the base stations. To localise the base stations, we developed a routine whereby, upon spawning of the environment and objects, a scout rotates on the spot about the azimuth (achievable due to its differential drive) to find the base stations (note that all rovers are spawned close to the base stations; see Fig. 1). As soon as a base station is in view, its relative pose is estimated. The ground-truth absolute pose of the scout is queried and composed with this relative pose to estimate the absolute pose of the base station.
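
A minimal sketch of this initialisation routine follows, assuming hypothetical query_true_pose_once, spin_step, and try_estimate_pose helpers that wrap the simulator API, the rover controller, and the pose estimation pipeline of Sec. IV-B, with poses represented as 4x4 homogeneous matrices (numpy arrays).

```python
# Spin-and-anchor initialisation sketch (helpers and pose format are assumptions).
def initialise_base_station(query_true_pose_once, spin_step, try_estimate_pose,
                            max_steps=360):
    """Spin the scout until a base station is seen, then anchor its absolute pose."""
    T_world_scout = query_true_pose_once()       # ground-truth pose, allowed exactly once
    for _ in range(max_steps):
        T_scout_base = try_estimate_pose()       # relative pose from vision, or None if not in view
        if T_scout_base is not None:
            return T_world_scout @ T_scout_base  # absolute pose of the base station
        spin_step()                              # rotate a small azimuth increment
    return None                                  # base station never came into view
```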

V Navigation

The uneven lunar terrain is hazardous for rovers due to the presence of mounds, craters, and hills. To accomplish autonomous space mining, rovers need to be able to avoid obstacles while automatically navigating the environment.

There has been recent interest in solving the navigation problem using end-to-end deep learning approaches [23, 36, 3]. However, these methods typically train their models in complex, feature-rich environments and either use simplistic motion models or assume that the agents are equipped with accurate satellite positioning. In addition, they suffer from intrinsic problems of learning methods, including a high demand for training data [3], overfitting to the environments, and a lack of explainability [44].

Keeping reliability and robustness in mind, we developed a navigation approach for SRCP2 based on classical methods [44, 18] that is informed by robotic vision. Similar to classical navigation, we use a hierarchical approach consisting of a path planner and a motion controller that work in tandem; see Fig. 6. To achieve real-time obstacle avoidance, a key component of our navigation framework is semantic understanding of the local environment of the rover. The map inset in Fig. 7 illustrates our semantic scene understanding, while [8] is a video recording that highlights our navigation system for SRCP2. We provide more details of our navigation approach in the following.

Fig. 7: Real-time semantic scene understanding for obstacle avoidance in navigation in our SRCP2 solution. In the map inset, pink indicates craters while orange indicates mounds. See the full video at [8].

V-A Semantic scene understanding

We used object detection and depth estimation to generate a semantic local scene understanding for each rover, i.e., the identity and positions of select objects close to the rover. A YOLOv5 [16] object detector was trained to detect base stations (processing plant and repair station), other rovers (scouts, excavators, haulers), mounds, and craters in the FOV of the rover camera; see Fig. 8. For each detected object, the distance from the rover is determined using stereo-depth estimation [9], resulting in a local semantic map as shown in Fig. 7. To train the object detector, approximately 10,000 images labelled with ground-truth bounding boxes were used. Training and implementation were done using PyTorch and the YOLOv5 training pipeline.
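
As an illustration of how detections and stereo depth combine into a local semantic map, the sketch below back-projects each detected box centre into the camera frame using the median depth inside the box. The detection tuple layout, the dense depth image, and the intrinsics (fx, fy, cx, cy) are assumptions for illustration, not the exact data structures of our system.

```python
# Detections + stereo depth -> local semantic map entries (illustrative sketch).
import numpy as np

def detections_to_local_map(detections, depth_image, fx, fy, cx, cy):
    """detections: list of (label, x1, y1, x2, y2) boxes in the left camera image."""
    local_map = []
    for label, x1, y1, x2, y2 in detections:
        u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0          # box centre pixel
        z = float(np.nanmedian(depth_image[int(y1):int(y2), int(x1):int(x2)]))
        x = (u - cx) * z / fx                            # back-project to the camera frame
        y = (v - cy) * z / fy
        local_map.append({"class": label, "position": (x, y, z)})
    return local_map
```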

As EKF localisation accumulates error, the relative positions of the detected objects become inaccurate. To address this, we introduced a time-to-live (TTL) value for each detected object, dictating how long the object should persist in the map before being removed. To avoid situations where rovers are stationary for long periods of time and all the objects in their periphery expire, we continuously extend the TTL value of objects if the rover is not moving. Additionally, we maintain a 7m radius about each rover in which any objects have their TTL value indefinitely extended, which was particularly useful when manoeuvring around obstacles outside the rover’s FOV.
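
A minimal sketch of this TTL bookkeeping is given below. The 7m refresh radius follows the text; the default TTL value and map entry layout are illustrative assumptions.

```python
# TTL maintenance for the persistent local map (illustrative sketch).
import math

def update_ttl(local_map, rover_xy, rover_is_moving, dt,
               default_ttl=10.0, refresh_radius=7.0):
    kept = []
    for obj in local_map:                    # obj: {"position": (x, y, ...), "ttl": float}
        x, y = obj["position"][:2]
        dist = math.hypot(x - rover_xy[0], y - rover_xy[1])
        if dist <= refresh_radius or not rover_is_moving:
            obj["ttl"] = default_ttl         # refresh nearby objects, or everything while stationary
        else:
            obj["ttl"] -= dt                 # age out detections we have not re-observed
        if obj["ttl"] > 0.0:
            kept.append(obj)                 # expired objects are dropped from the map
    return kept
```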

Fig. 8: Detection of several object classes (processing plant, repair station, hauler, excavator, crater, mound) in the FOV of a scout.

V-B Path planning and motion control

While our paper focuses on robotic vision, we also briefly outline our navigation framework, which relies extensively on the ability to generate real-time semantic understanding of the local surroundings of each rover. Akin to classical methods [44, 18], our system has two levels of planning—a path planner and a motion controller; see Fig. 6. Given a final destination and the local (obstacle) map, the path planner generates a series of unobstructed waypoints that the rover can follow to reach the destination. This path is computed using the A* shortest-path algorithm [14] on a fully connected graph composed of points on the boundaries of the obstacles in the local map, from which all edges that intersect an obstacle are removed. Given an unobstructed waypoint, the motion controller is then responsible for generating control signals to efficiently move the rover from its current position to the goal waypoint.
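
A minimal sketch of this planner is given below, assuming obstacles are represented as circles in the local map and using networkx for A*; the node sampling density and inflation margin are illustrative choices, not the values used in our system.

```python
# Visibility-graph-style A* planning over obstacle boundary points (illustrative sketch).
import math
import itertools
import networkx as nx

def segment_hits_circle(p, q, centre, radius):
    """True if segment p-q passes within `radius` of `centre`."""
    px, py = p; qx, qy = q; cx, cy = centre
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        return math.hypot(px - cx, py - cy) < radius
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px + t * dx - cx, py + t * dy - cy) < radius

def plan_path(start, goal, obstacles, samples=8, inflation=1.0):
    """obstacles: list of (cx, cy, r) circles from the local semantic map."""
    nodes = [start, goal]
    for (cx, cy, r) in obstacles:            # sample points on each inflated boundary
        for k in range(samples):
            a = 2 * math.pi * k / samples
            nodes.append((cx + (r + inflation) * math.cos(a),
                          cy + (r + inflation) * math.sin(a)))
    g = nx.Graph()
    for p, q in itertools.combinations(nodes, 2):
        if not any(segment_hits_circle(p, q, (cx, cy), r)
                   for (cx, cy, r) in obstacles):
            g.add_edge(p, q, weight=math.dist(p, q))   # keep only obstacle-free edges
    return nx.astar_path(g, start, goal,
                         heuristic=lambda a, b: math.dist(a, b), weight="weight")
```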

VI Robot interactions

Collaboration between heterogeneous robots is essential in SRCP2. Here, we describe the two main aspects of robot interactions (rover rendezvous and excavation/dumping) that employ robotic vision extensively in our solution.

VI-A Rover rendezvous

Rover rendezvous is the activity whereby a scout, an excavator, and a hauler come into close proximity (less than 0.5m) to allow volatiles in the regolith to be extracted by the excavator and deposited into the hauler’s bin. Rendezvous is extremely delicate since any error in the process may cause a collision, resulting in increased EKF drift or damage. Fig. 9 depicts rover rendezvous in our solution.

Fig. 9: Three rovers (scout, excavator and hauler) at the onset of rendezvous. The targeted configuration is shown by the purple triangle pair in the local semantic map of the scout. See [31] for a video of the process.

A major obstacle to rendezvous is potential localisation inaccuracy of the rovers (by up to 5m), which we mitigate using visual guidance at close range. First, once a scout finds a volatile, it pauses on the spot to function as a “marker” for the resource. The scout then broadcasts the (estimated) location to the other rovers along with an obstacle-free parking configuration, defined as a triangle pair; see Fig. 9. The excavator approaches the scout based on the broadcast position estimate. When the excavator is within 10m of the scout, it engages the camera and object detector (see Fig. 8) to visually locate the scout. The predicted bounding box of the target rover, in conjunction with stereo depth, is used to estimate the precise location of the volatile relative to the excavator. The scout then safely departs the dig site, allowing the excavator to park itself in front of the deposit, as per the safe “triangle”, ready for extraction. Subsequently and in a similar fashion, the hauler approaches the excavator to complete the rendezvous.
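
To illustrate the close-range approach, the sketch below converts the scout's position in the excavator frame (obtained from the detector and stereo depth as in Sec. V-A) into a standoff waypoint just short of the dig site. The standoff helper is hypothetical, and reusing the 0.5m proximity figure from the text as the standoff distance is an assumption.

```python
# Convert a detected scout position into an approach waypoint (illustrative sketch).
import math

def standoff_waypoint(scout_xy_in_excavator_frame, standoff=0.5):
    x, y = scout_xy_in_excavator_frame
    rng = math.hypot(x, y)
    bearing = math.atan2(y, x)
    stop_range = max(rng - standoff, 0.0)    # stop `standoff` metres short of the scout
    return (stop_range * math.cos(bearing),  # waypoint x in the excavator frame
            stop_range * math.sin(bearing),  # waypoint y in the excavator frame
            bearing)                         # face the dig site on arrival
```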

VI-B Excavation and dumping

With the hauler parked close to the excavator, the digging process begins, as illustrated in Fig. 10. The excavator uses vision to estimate the hauler’s location and orientation relative to itself, which is used to set the scoop angle for depositing resources. This process begins with the excavator panning its camera towards the hauler until the bin is identified using YOLOv5; LiDAR is then used to measure the closest point between the hauler’s bin and the excavator. All LiDAR measurements are projected into a 2D plane, and only the returns that fall within the bounding box of the hauler’s bin are considered. If the closest-point measurement reports that the distance between the two rovers is undesirable, the hauler readjusts its parking position accordingly. The excavator then digs volatiles from the ground and deposits them into the back of the hauler; this continues until the resource patch is depleted.
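
A sketch of the bin-distance measurement follows: LiDAR returns expressed in the camera frame are projected into the image, only those falling inside the detected bin bounding box are kept, and the minimum range among them gives the excavator-to-bin distance. The camera intrinsics K and the assumption that the LiDAR points have already been transformed into the camera frame are illustrative.

```python
# Closest bin point from LiDAR returns gated by the detected bounding box (illustrative sketch).
import numpy as np

def closest_bin_point(lidar_points_cam, K, bin_box):
    """lidar_points_cam: Nx3 LiDAR returns already expressed in the camera frame."""
    x1, y1, x2, y2 = bin_box
    pts = lidar_points_cam[lidar_points_cam[:, 2] > 0]   # keep points in front of the camera
    if pts.shape[0] == 0:
        return None
    uvw = (K @ pts.T).T                                  # pinhole projection into the image
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    if not np.any(inside):
        return None
    ranges = np.linalg.norm(pts[inside], axis=1)
    return float(ranges.min())                           # distance to the closest bin point
```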

Fig. 10: In the excavation and dumping activity, the object detector on the excavator’s camera locates the bin on the hauler to contribute to relative pose estimation. See [10] for a video of the process.

VII Robot Coordination

To facilitate task coordination in multi-robot systems, potential architectures [24] include: a centralised system where all the robots are connected to a central control unit [21], a distributed system where there is no central control and all the robots are equal and autonomous in decision making [30], and a decentralised system which is an intermediate between the centralised and distributed architectures [6]. We opted for a decentralised approach that offers more scalability and higher risk tolerance than a centralised system, whilst being easier to develop and deploy than a distributed system.

Concretely, each rover in a given team is able to autonomously accomplish generic tasks such as localisation, scene understanding, and locomotion, as well as specialised ones such as exploration, volatile detection, digging, dumping, parking, etc. However, transitioning from one task to another is done via a centralised coordinator service to facilitate task synchronisation across multiple rovers (e.g., the excavator should not dig and deposit resources until the hauler has finished parking). Owing to the decentralised nature of our system, should one of the teams break down, the other can continue functioning without any repercussions.
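
The sketch below illustrates this coordination pattern (not our actual implementation): each rover runs its own tasks autonomously, but transitions that must be synchronised across a team are gated through a per-team coordinator service. The task names and flag-based API are assumptions.

```python
# Per-team coordinator gating synchronised task transitions (illustrative sketch).
from threading import Lock

class TeamCoordinator:
    """Records task completions for a team and answers gating queries."""
    def __init__(self):
        self._done = set()
        self._lock = Lock()

    def report_done(self, rover, task):
        with self._lock:
            self._done.add((rover, task))        # e.g. ("hauler_1", "parked")

    def may_start(self, prerequisites):
        with self._lock:
            return all(p in self._done for p in prerequisites)

# Usage: the excavator only starts dumping once the hauler reports it has parked.
coordinator = TeamCoordinator()
coordinator.report_done("hauler_1", "parked")
assert coordinator.may_start([("hauler_1", "parked")])
```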

VIII Results

Qualitative results of the vision-based modules have been provided above. Here, we present some quantitative results.

Localisation

Fig. 11 displays the localisation error of a hauler over a 2-hour simulation run, where the error is computed as the difference between the EKF estimate and the ground-truth position. As shown in the plot, the localisation error accumulates as the rover moves across the lunar environment. During this run, the error for the hauler reached a maximum of 2.67m. Regular PnP resets ensure that the error stays within acceptable bounds; after resetting, the localisation error was reduced to around 0.2m or less in most cases.

Fig. 11: Localisation error of a hauler during a 2-hour simulation run. Localisation resets via visual pose estimation occur frequently throughout the run (highlighted in orange).

Navigation

Semantic scene understanding is a major part of our navigation system. Fig. 12 shows the confusion matrix of our YOLOv5 object detector trained on our dataset. The mean testing accuracy of the model is 88%. Notably, the model generalised successfully to detect small, distant mounds not in the training or testing set.

Fig. 12: Confusion matrix of our YOLOv5 object detector on testing set.

Obstacle detections are added to the persistent local map of the rover at 5 FPS, which was sufficient for all the rovers to achieve fine-grained motor control. Throughout 44 hours of simulation testing, the rovers travelled an accumulated distance of approximately 120km. During these runs, no serious navigational failures due to collisions occurred.

Rover interactions

As mentioned in Sec. VI, visual object detection plays a significant role in rover rendezvous and excavation. The same detector model used for navigation (i.e., with quantitative results in Fig. 12) was used for rover interactions. A more direct measure of success of rover interactions is the amount of volatiles extracted by the excavator and deposited into the hauler. Across all resource extraction events attempted in the 44 hours of simulation testing, 84.8% of volatiles were successfully transferred to the hopper. Resource losses were due to rendezvous or deposit inaccuracies, which were more common in challenging terrain (many hills or obstacles at the location of the resource). In overly challenging cases, resource extraction was simply not attempted for the sake of safety; this occurred for about 20% of the resources discovered by the scouts.

Overall results

Twenty-two simulation runs were performed to evaluate the overall performance of the final system. Each run consisted of 2 hours of simulation time under the competition configuration (see Sec. II). The average, minimum, and maximum numbers of volatiles extracted during these runs were 266, 163, and 339, respectively. The scores accumulated during a specific 2-hour run are plotted in Fig. 13. See also the qualitative results in the form of a video recording in [27].

Fig. 13: The total score is the accumulated score throughout a 2-hour run; the hauler bin score is the number of volatiles present in the bins of the two haulers.

IX Conclusions

Our system represents a robust implementation of autonomous space mining in the context of the NASA SRCP2. Guided by robotic vision, our rovers are able to reliably navigate and extract resources from the simulated lunar environment for extended periods. The vision system periodically alleviates localisation drift and builds a persistent map that provides semantic scene understanding for obstacle avoidance and rover interaction. An interesting direction for future research in robotic vision is to perform VSLAM under the guidance of semantic scene understanding, to help alleviate the issues caused by texture-poor terrain and to build a semantically meaningful map of the lunar environment.

Acknowledgements

We gratefully acknowledge funding from the Andy Thomas Centre for Space Resources. We thank the following team members who contributed in various ways towards our solution: John Culton, Hans C. Culton, Alvaro Parra Bustos, Rijul Ramkumar, Shivam Savani, Amirsalar Aryakia, Sam Bahrami, Aditya Pujara and Matthew Michael. James Bockman acknowledges support from the Australian Government Research Training Program (RTP) Scholarship in conjunction with the Lockheed Martin Australia supplementary scholarship.

References

  • [1] A. Bou Said Yssa et al. (2019) Geometry model for marker-based localisation. Ph.D. Thesis, University of Salford. Cited by: §IV.
  • [2] (Website) External Links: Link Cited by: Fig. 5.
  • [3] D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov (2020) Learning to explore using active neural slam. arXiv preprint arXiv:2004.05155. Cited by: §V.
  • [4] B. Chen, J. Cao, A. Parra, and T. Chin (2019) Satellite pose estimation with deep landmark regression and nonlinear pose refinement. In ICCV Workshop on Recovering 6D Object Pose, Cited by: §IV-B.
  • [5] K. Cheung and C. Lee (2017) In-situ navigation and timing services for the human mars landing site part 1: system concept. Cited by: §I.
  • [6] X. Dai, L. Jiang, and Y. Zhao (2016) Cooperative exploration based on supervisory control of multi-robot systems. Applied Intelligence 45 (1), pp. 18–29. Cited by: §VII.
  • [7] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse (2007) MonoSLAM: real-time single camera slam. IEEE transactions on pattern analysis and machine intelligence 29 (6), pp. 1052–1067. Cited by: §IV.
  • [8] (Website) External Links: Link Cited by: Fig. 7, §V.
  • [9] O. Faugeras, B. Hotz, H. Mathieu, T. Viéville, Z. Zhang, P. Fua, E. Théron, L. Moll, G. Berry, J. Vuillemin, et al. (1993) Real time correlation-based stereo: algorithm, implementations and applications. Technical report Inria. Cited by: §V-A.
  • [10] (Website) External Links: Link Cited by: Fig. 10.
  • [11] M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm. ACM. (24), pp. 381–395. Cited by: §IV-B.
  • [12] B. Ghodrati, S. Hadi Hoseinie, and A. Garmabaki (2015) Reliability considerations in automated mining systems. International Journal of Mining, Reclamation and Environment 29 (5), pp. 404–418. Cited by: §I.
  • [13] R. Gonzalez and K. Iagnemma (2018) Slippage estimation and compensation for planetary exploration rovers. state of the art and future challenges. Journal of Field Robotics 35 (4), pp. 564–577. Cited by: §I.
  • [14] P. E. Hart, N. J. Nilsson, and B. Raphael (1968) A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4 (2), pp. 100–107. Cited by: §V-B.
  • [15] S. Hu, M. Feng, R. M. Nguyen, and G. H. Lee (2018) CVM-Net: cross-view matching network for image-based ground-to-aerial geo-localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7258–7267. Cited by: §IV.
  • [16] ultralytics/yolov5: v5.0 - YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations External Links: Document, Link Cited by: §IV-B, §V-A.
  • [17] C. Kilic, C. A. Tatsch, B. M. R Jr, J. J. Beard, D. W. Ross, and J. N. Gross (2020) Team mountaineers space robotic challenge phase-2 qualification round preparation report. arXiv preprint arXiv:2003.09968. Cited by: §IV-A.
  • [18] S. M. LaValle (2006) Planning algorithms. Cambridge university press. Cited by: §V-B, §V.
  • [19] R. Li, K. Di, A. B. Howard, L. Matthies, J. Wang, and S. Agarwal (2007) Rock modeling and matching for autonomous long-range mars rover localization. Journal of Field Robotics 24 (3), pp. 187–203. Cited by: §IV.
  • [20] M. Manzano-Jurado, J. Alegre-Rubio, A. Pellacani, G. Seco-Granados, J. A. López-Salcedo, E. Guerrero, and A. García-Rodríguez (2014) Use of weak gnss signals in a mission to the moon. In 2014 7th ESA Workshop on Satellite Navigation Technologies and European Workshop on GNSS Signals and Signal Processing (NAVITEC), pp. 1–8. Cited by: §IV.
  • [21] F. Matoui, B. Boussaid, B. Metoui, and M. N. Abdelkrim (2020) Contribution to the path planning of a multi-robot system: centralized architecture. Intelligent Service Robotics 13 (1), pp. 147–158. Cited by: §VII.
  • [22] E. Mikrin, M. Mikhailov, I. Orlovskii, S. Rozhkov, and I. Krasnopol’skii (2019) Satellite navigation of lunar orbiting spacecraft and objects on the lunar surface. Gyroscopy and Navigation 10 (2), pp. 54–61. Cited by: §I, §IV.
  • [23] P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. J. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, D. Kumaran, and R. Hadsell (2016) Learning to navigate in complex environments. CoRR abs/1611.03673. External Links: Link, 1611.03673 Cited by: §V.
  • [24] K. Mohamed, E. Ayman, A. Elshenawy Elsefy, M. Hany, and H. Harb (2018-12) A hybrid decentralized coordinated approach for multi-robot exploration task. The Computer Journal, pp. . Cited by: §VII.
  • [25] T. Moore and D. Stouch (2016) A generalized extended kalman filter implementation for the robot operating system. In Intelligent autonomous systems 13, pp. 335–348. Cited by: §IV-A.
  • [26] R. Mur-Artal and J. D. Tardós (2017) ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics 33 (5), pp. 1255–1262. Cited by: §IV-A, §IV.
  • [27] (Website) External Links: Link Cited by: §III, §VIII, footnote 1.
  • [28] A. Neale (2011) Space mining application for south african mining robotics. In Presented at the 4th Robotics and Mechatronics Conference of South Africa (ROBMECH 2011), Vol. 23, pp. 25. Cited by: §I.
  • [29] A. Nibali, Z. He, S. Morgan, and L. Prendergast (2018) Numerical coordinate regression with convolutional neural networks. arXiv:1801.07372. Cited by: §IV-B.
  • [30] J. Renoux, A. Mouaddib, and S. Le Gloannec (2015) A decision-theoretic planning approach for multi-robot exploration and event search. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5287–5293. Cited by: §VII.
  • [31] (Website) External Links: Link Cited by: Fig. 9.
  • [32] K. Sacksteder and G. Sanders (2007-01) In-situ resource utilization for lunar and mars exploration. pp. . External Links: ISBN 978-1-62410-012-3, Document Cited by: §I.
  • [33] J. Z. Sasiadek (2014) Space robotics—present and past challenges. In 2014 19th International Conference on Methods and Models in Automation and Robotics (MMAR), pp. 926–929. Cited by: §I.
  • [34] A. Savitzky and M. J. E. Golay (1964-01) Smoothing and differentiation of data by simplified least squares procedures. Analytical Chemistry 36, pp. 1627–1639. Cited by: §III.
  • [35] J. Schwendner and F. Kirchner (2014) Space robotics: an overview of challenges, applications and technologies. KI-Künstliche Intelligenz 28 (2), pp. 71–76. Cited by: §I.
  • [36] Z. Seymour, K. Thopalli, N. Mithun, H. Chiu, S. Samarasekera, and R. Kumar (2021) MaAST: map attention with semantic transformersfor efficient visual navigation. arXiv preprint arXiv:2103.11374. Cited by: §V.
  • [37] Space Robotics Challenge Phase 2. http://www.spaceroboticschallenge.com/. Accessed: 2021-09-09. Cited by: §I, §II.
  • [38] S. Sumikura, M. Shibuya, and K. Sakurada (2019) Openvslam: a versatile visual slam framework. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 2292–2295. Cited by: §IV.
  • [39] K. Sun, B. Xiao, D. Liu, and J. Wang (2019) Deep high-resolution representation learning for human pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §IV-B.
  • [40] J. Thangavelautham (2020) Autonomous robot swarms for off-world construction and resource mining. In AIAA Scitech 2020 Forum, pp. 0795. Cited by: §I.
  • [41] S. Thrun (2002) Probabilistic robotics. Communications of the ACM 45 (3), pp. 52–57. Cited by: §IV.
  • [42] Y. Wang, W. Zhang, and P. An (2017) A survey of simultaneous localization and mapping on unstructured lunar complex environment. In AIP Conference Proceedings, Vol. 1890, pp. 030010. Cited by: §IV-A.
  • [43] C. Wong, E. Yang, X. Yan, and D. Gu (2017) Adaptive and intelligent navigation of autonomous planetary rovers—a survey. In 2017 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), pp. 237–244. Cited by: §I.
  • [44] X. Xiao, B. Liu, G. Warnell, and P. Stone (2020) Motion control for mobile robot navigation using machine learning: a survey. arXiv preprint arXiv:2011.13112. Cited by: §V-B, §V, §V.
  • [45] P. Yang, L. Xie, and J. Liu (2014) Simultaneous celestial positioning and orientation for the lunar rover. Aerospace Science and Technology 34, pp. 45–54. Cited by: §IV.
  • [46] F. Zenatti, D. Fontanelli, L. Palopoli, D. Macii, and P. Nazemzadeh (2016) Optimal placement of passive sensors for robot localisation. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4586–4593. Cited by: §IV.
  • [47] Y. Zhan, S. Chen, and X. Zhang (2021) Adaptive celestial positioning for the stationary mars rover based on a self-calibration model for the star sensor. The Journal of Navigation, pp. 1–16. Cited by: §IV.