The need to transport resources from Earth is a serious obstacle to space exploration that must be addressed as a precursor to sustainable deep space missions. In-Situ Resource Utilisation (ISRU), where resources are extracted on other astronomical objects and exploited to support longer and deeper space missions, has been proposed as a way to mitigate the need to carry resources from Earth.
The difficulties of building a large-scale human presence in space and the lack of real-time interplanetary communication mean that mining on planetary bodies (primarily the Moon and Mars) will have to depend on robots with a high level of autonomy [40, 28]. Although semi-automated systems for mining on Earth exist, they are supported by mature infrastructure such as global navigation satellite systems (GNSS), well-maintained roads, ready access to fuel, and maintenance. These facilities will not be available at the onset of space mining missions, where robots will need to contend with hazardous terrain, the lack of accurate positioning systems, limited power supply, and many other difficulties [22, 13, 35, 33, 5, 43]. Indeed, space robotics has been identified by NASA as a Centennial Challenge.
For risk and economic reasons, space mining will likely utilise a fleet of heterogeneous robots that must collaborate to accomplish the goal. This accentuates the difficulties alluded to above; apart from being able to navigate in an unstructured environment and avoid obstacles without accurate satellite positioning, a robot must also manoeuvre and interact with other robots without causing damage. This argues for a high degree of intelligence on each agent and a robust multi-robot coordination system to ensure long-term operation.
In this systems paper, we explore robotic vision to address some of the key challenges towards autonomous robots for collaborative space mining: the lack of satellite positioning systems, navigation in hazardous terrain, and the need for delicate robot interactions. Specifically, we describe the main components of our solution for the NASA Space Robotics Challenge Phase 2 (SRCP2), wherein a simulated lunar environment containing a heterogeneous fleet of rovers was provided; see Fig. 1. The goal was to develop software to enable the rovers to autonomously and collaboratively find and extract resources on the Moon. Our 3rd-place and innovation-award-winning solution extensively employed machine-learning-based robotic perception to accomplish accurate localisation, semantic mapping of the lunar terrain, and object detection to facilitate accurate close-range manoeuvring between rovers.
In the rest of the paper, we further introduce SRCP2, and briefly describe our overall solution, before detailing our robotic vision algorithms and their results on the problems above.
II NASA Space Robotics Challenge
In SRCP2, a Gazebo-simulated lunar environment that contained several rovers and two lunar landers (“base stations”) was provided; see Fig. 1. Competitors were tasked with developing software that enables the rovers to autonomously find, excavate, and retrieve resources (volatiles) in the lunar regolith. The main features of the challenge are:
Resources are scattered across the 200 m × 200 m map with no prior information about their locations. Hence, the resources must be found by exploring the environment.
Hazardous terrain comprising mounds, craters, and hills, which can cause a rover to slip, become disoriented, or flip. Therefore, obstacles must be avoided during navigation.
Absence of a global positioning system. Each rover is allowed to query its global position only once from the simulator (e.g., for initialisation); thus, the rovers need to self-localise. In addition, the positions of the base stations are neither supplied nor retrievable from the simulator.
The base stations comprise a processing plant, where all the excavated resources must be deposited, and a recharge station, to rapidly restore rover batteries.
There are three types of rovers—scout, excavator and hauler (see Fig. 2)—that have complementary specialisations. The scout has a volatile sensor to locate resources. The excavator has an arm that can perform digging. The hauler has a bin to haul the resources back to the processing plant. In addition, each rover is equipped with an IMU, stereo cameras, and a 2D LiDAR.
The challenge allows fleets to comprise any combination of the rover types up to a maximum of six units.
The final score is the number of volatiles deposited in the processing plant during a 2-hour simulation run.
III Overview of our solution
Our paper focuses on the role of robotic perception and multi-robot coordination for SRCP2. It is nevertheless useful to first provide an overview of our solution to help conceptually connect the major components to be described later.
Our solution utilises two scouts, two excavators, and two haulers separated into two largely independent teams. Each team consists of one instance of each rover type. At initialisation the poses of all rovers and base stations are established on a common world coordinate frame (Sec. IV). Upon successful initialisation, the on-board localisation algorithm of each rover is invoked.
The scouts then follow a spiral search pattern centred at the base stations to discover volatiles, which prioritises the discovery of deposits closer to base. Meanwhile, the excavator and the hauler of each team follow their respective scout, ready to extract volatiles as soon as they are found. Throughout the journey, the rovers continuously generate semantic understanding of their surroundings through the camera to conduct real-time obstacle avoidance (Sec. V).
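As an illustration, such a search pattern can be sketched as an Archimedean spiral, whose radius grows linearly with angle so that deposits closer to base are visited first. The growth and step parameters below are illustrative placeholders, not the values used in our solution.

```python
import math

def spiral_waypoints(n_points, growth=0.8, step=0.6):
    """Waypoints on an Archimedean spiral (r = growth * theta) centred at
    the origin (the base stations). Nearby points come first, so deposits
    close to base are discovered before distant ones."""
    waypoints = []
    for k in range(n_points):
        theta = k * step
        r = growth * theta
        waypoints.append((r * math.cos(theta), r * math.sin(theta)))
    return waypoints
```

The scout would then feed these waypoints one by one to the navigation stack described in Sec. V.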
During exploration, the scout continuously monitors its volatile sensor (located at the front of the chassis), which returns a noisy measurement of distance to volatiles within a 2m radius. When a deposit is detected, the scout attempts to precisely pinpoint its location. It does this by first rotating on the spot to align its orientation with the direction of the detected volatile (via gradient descent in conjunction with a Savitzky–Golay filter), and then repeating the process while driving forward. Upon successful volatile detection, the scout pauses and waits for the excavator and hauler to rendezvous with it (Sec. VI). Once a safe parking configuration is reached, the scout continues with exploration while the excavator and hauler begin mining.
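A minimal sketch of the pinpointing signal processing, assuming a buffer of recent sensor readings: the noisy distance signal is smoothed with a Savitzky–Golay-style local polynomial fit, and the sign of its gradient decides whether to keep or reverse the current rotation (or drive) direction. The window size and polynomial order are illustrative choices, not the tuned values from our solution.

```python
import numpy as np

def savgol_smooth(y, window=9, order=2):
    """Savitzky-Golay smoothing: fit a low-order polynomial to each sliding
    window of the signal and take the fitted value at the window centre."""
    half = window // 2
    x = np.arange(-half, half + 1)
    out = np.copy(y).astype(float)
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half:i + half + 1], order)
        out[i] = np.polyval(coeffs, 0.0)  # value of the fit at the centre
    return out

def turn_direction(readings):
    """Gradient-descent step on the smoothed distance-to-volatile signal:
    if the distance is shrinking, keep the current direction of motion."""
    smoothed = savgol_smooth(np.asarray(readings, dtype=float))
    grad = np.gradient(smoothed)
    return "keep" if grad[-2] < 0 else "reverse"
```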
The excavator repeatedly digs for volatiles and dumps them into the hauler’s bin using object detection on the camera feed to locate the hauler (Sec. V-A), and LiDAR to accurately infer distance from the hauler’s bin to the excavator chassis (Sec. VI). The excavator then returns to following the scout, and the hauler may return to the processing plant if its bin is full. The above is repeated until both teams exhaust all resources in their respective domain.
When a rover’s battery level is low, it pauses its current task and returns to the repair station. While approaching the base stations, accumulated error in on-board localisation is zeroed by estimating the rover pose with respect to the base stations (Sec. IV) when the latter are in view.
Our solution was able to consistently and continuously operate in 2-hour simulation runs of SRCP2; see the accompanying video recording. In the following, we explain in more detail how we accomplished localisation, navigation, robot interaction, and coordination, particularly the robotic vision algorithms that underpin the former three components.
IV Localisation

Accurate localisation—estimating position and orientation within the operating environment—is fundamentally important to autonomous robots. Localisation techniques can be broadly classified into active and passive methods. Active methods generally involve direct communication of signals that facilitate localisation. Examples include RF beacons, WiFi positioning, RFID positioning, and GNSS.
Passive methods utilise onboard sensors to generate relative measurements between the robot and the environment to estimate position. A basic technique is to conduct dead reckoning using interoceptive sensors such as wheel encoders and an IMU to incrementally track the robot motion using Bayesian filtering. However, dead reckoning is subject to drift, hence the filter must be periodically reset using extra information such as celestial positioning [45, 47], fiducial markers [1, 46], or image matching [19, 15].
Simultaneous Localisation and Mapping (SLAM) is regarded as a state-of-the-art (SOTA) passive localisation approach. In addition to tracking robot motion, SLAM techniques incrementally build a map of the environment using the sensor percepts. This allows the robot to relocalise itself in the environment (so-called “loop closing”) and remove drift by redistributing accumulated error through all variables in the system. A notable instance of SLAM is visual SLAM (VSLAM), whereby the primary sensor is a camera. SOTA VSLAM algorithms [26, 38] detect and map visually salient features or keypoints in the environment.
A practical robot localisation scheme will likely use a combination of active and passive methods. It is worthwhile to point out that existing Earth-centric GNSS will unlikely be sufficient for accurate localisation on the Moon [20, 22].
IV-A Our localisation technique for SRCP2
Active localisation functionalities and positioning markers are not provided in SRCP2. Realistic star fields are also not rendered in the simulator. Moreover, our initial investigation showed that VSLAM is brittle in the simulated lunar environment (it cannot operate over long durations without recurrent failures), possibly due to the feature-poor textures used to render the lunar terrain, significant brightness contrast, and strong shadowing, which reduce the ability to repeatably detect and match keypoints; see Fig. 4.
Given the above findings, we developed a localisation solution that relies on extended Kalman filtering (EKF) [25, 17] of linear and angular velocity estimates from the wheel odometry and onboard IMU of a rover. To remove drift, we perform visual pose estimation of the base stations. Specifically, when a base station appears in the FOV of the rover camera (per normal return-to-base runs; see Sec. III), the 6DoF relative pose between the base station and the rover is estimated; see Fig. 5. Given the absolute pose of the base station (initialised according to Sec. IV-C), the absolute pose of the rover is inferred and used to reset the EKF. We describe the pose estimation pipeline and initialisation next.
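The drift-removal step reduces to a pose composition: given the known absolute pose of a base station and the visually estimated pose of that base station in the rover frame, the rover's absolute pose follows. Below is a 2D (SE(2)) sketch for clarity; the actual system works with 6DoF poses.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid transform (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def reset_rover_pose(T_world_base, T_rover_base):
    """Recover the rover's absolute pose from the known absolute pose of a
    base station and the visually estimated pose of that base station in
    the rover frame:  T_world_rover = T_world_base @ inv(T_rover_base)."""
    return T_world_base @ np.linalg.inv(T_rover_base)
```

The returned pose would then be used as the measurement that resets the EKF state.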
IV-B Visual pose estimation
A deep neural network (DNN) (specifically a YOLOv5 object detector) detects the bounding box of base stations in the input image. Given a crop of a base station, a second DNN (a combination of HRNet and DSNT) predicts the coordinates of predetermined landmarks in the image. This gives rise to 2D-3D correspondences between the image and the 3D model of the base station, which are fed to a robust perspective-n-point (PnP) solver to compute the relative pose. The overall speed of the pipeline was 10 FPS on an RTX 2080. To train the object detector, we collected 30,000 images from the FPV of rovers and labelled them with ground truth bounding boxes containing base stations. To train the landmark predictor, we manually chose, based on visual saliency, 35 points on the CAD model of the base stations provided by NASA, and labelled the pixel locations of the points in the training images. DNN training was done using PyTorch and the YOLOv5 framework.
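A robust PnP solver scores candidate poses by counting the correspondences with small reprojection error. The sketch below shows that inlier test, assuming pinhole intrinsics K; the numbers are placeholders, and the surrounding RANSAC loop and pose refinement are omitted.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3D landmarks X (N,3) under pose (R, t)."""
    Xc = X @ R.T + t                 # transform landmarks into camera frame
    uv = Xc @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective division

def count_inliers(K, R, t, X, x, thresh_px=4.0):
    """Inlier test at the heart of a RANSAC-style PnP solver: a 2D-3D
    correspondence is an inlier if its reprojection error is below a
    pixel threshold."""
    err = np.linalg.norm(project(K, R, t, X) - x, axis=1)
    return int(np.sum(err < thresh_px))
```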
IV-C Base station initialisation

As mentioned in Sec. II, while the ground truth absolute pose of a rover can be accessed once via an API call to the simulator, no such facility exists for the base stations. To localise the base stations, we developed a routine whereby, upon spawning of the environment and objects, a scout rotates about the azimuth (achievable due to its differential drive) to attempt to find the base stations (note that all rovers are spawned close to the base stations; see Fig. 1). As soon as a base station is in view, its relative pose is estimated. The ground truth absolute pose of the scout is queried and propagated using this relative pose to estimate the absolute pose of the base station.
V Navigation

The uneven lunar terrain is hazardous for rovers due to the presence of mounds, craters, and hills. To accomplish autonomous space mining, rovers must be able to avoid obstacles while autonomously navigating the environment.
There has been recent interest in solving the navigation problem using end-to-end deep learning approaches [23, 36, 3]. However, these methods typically train their models on complex, feature-rich environments and either use simplistic motion models or assume that the agents are equipped with accurate satellite positioning. In addition, they suffer from intrinsic problems of learning methods, including a high demand for training data, overfitting to the environment, and a lack of explainability.
Keeping reliability and robustness in mind, we developed a navigation approach for SRCP2 based on classical methods [44, 18] that is informed by robotic vision. Similar to classical navigation, we use a hierarchical approach consisting of a path planner and a motion controller that work in tandem; see Fig. 6. To achieve real-time obstacle avoidance, a key component of our navigation framework is semantic understanding of the local environment of the rover. The map inset in Fig. 7 illustrates our semantic scene understanding, and a video recording highlighting our navigation system for SRCP2 is also available. We provide more details of our navigation approach in the following.
V-A Semantic scene understanding
We used object detection and depth estimation to generate a semantic local scene understanding for each rover, i.e., the identity and positions of select objects close to the rover. A YOLOv5 object detector was trained to detect base stations (processing plant and repair station), other rovers (scouts, excavators, haulers), mounds, and craters in the FOV of the rover camera; see Fig. 8. For each detected object, the distance from the rover is determined using stereo-depth estimation, resulting in a local semantic map as shown in Fig. 7. To train the object detector, approximately 10,000 images labelled with ground truth bounding boxes were used. Training and implementation were done using PyTorch and the YOLOv5 training pipeline.
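Placing a detection on the local map reduces to the standard stereo relations: depth from disparity (z = f·B/d) and back-projection of the bounding-box centre through the pinhole model. A sketch with placeholder intrinsics (not our calibrated values):

```python
import numpy as np

def bbox_to_local_position(bbox, disparity_px, fx, fy, cx, cy, baseline_m):
    """Back-project the centre of a detected bounding box into the camera
    frame: depth from stereo disparity (z = f * B / d), lateral and
    vertical offsets from the pinhole camera model."""
    u = 0.5 * (bbox[0] + bbox[2])        # bbox = (x1, y1, x2, y2)
    v = 0.5 * (bbox[1] + bbox[3])
    z = fx * baseline_m / disparity_px   # depth from disparity
    x = (u - cx) * z / fx                # lateral offset
    y = (v - cy) * z / fy                # vertical offset
    return np.array([x, y, z])
```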
As EKF localisation accumulates error, the relative positions of detected objects become inaccurate. To address this, we introduced a time-to-live (TTL) value for each detected object, dictating how long the object should persist in the map before being removed. To avoid situations where rovers are stationary for long periods of time and all the objects in their periphery expire, we continuously extend the TTL value of objects while a rover is not moving. Additionally, we maintain a 7m radius about each rover in which any objects have their TTL value indefinitely extended, which was particularly useful when manoeuvring around obstacles outside the rover’s FOV.
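The TTL bookkeeping can be sketched as follows. The helper class, method names, and the 10-second TTL are our own illustrative choices; the 7m keep radius is from the text.

```python
import time

class LocalSemanticMap:
    """Objects detected around a rover persist for a time-to-live (TTL).
    Objects within the keep radius, or any object while the rover is
    stationary, have their TTL extended so they never silently expire."""

    def __init__(self, ttl_s=10.0, keep_radius_m=7.0):
        self.ttl_s = ttl_s
        self.keep_radius_m = keep_radius_m
        self.objects = {}  # object id -> (label, (x, y), expiry time)

    def add(self, obj_id, label, position, now=None):
        now = time.monotonic() if now is None else now
        self.objects[obj_id] = (label, position, now + self.ttl_s)

    def prune(self, rover_xy, rover_moving, now=None):
        now = time.monotonic() if now is None else now
        kept = {}
        for oid, (label, pos, expiry) in self.objects.items():
            dist = ((pos[0] - rover_xy[0]) ** 2 + (pos[1] - rover_xy[1]) ** 2) ** 0.5
            if dist <= self.keep_radius_m or not rover_moving:
                expiry = now + self.ttl_s  # extend the TTL
            if expiry > now:
                kept[oid] = (label, pos, expiry)
        self.objects = kept
```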
V-B Path planning and motion control
While our paper focuses on robotic vision, we also briefly outline our navigation framework, which relies extensively on the ability to generate real-time semantic understanding of the local surroundings of each rover. Akin to classical methods [44, 18], our system has two levels of planning—a path planner and a motion controller; see Fig. 6. The path planner, given a final destination and the local (obstacle) map, generates a series of unobstructed waypoints that the rover can follow to reach the desired destination. This path is computed using the A* shortest-path algorithm on a fully connected graph composed of points on the boundaries of the obstacles in the local map, where all edges that intersect the obstacles are removed. The motion controller, given an unobstructed waypoint, is then responsible for generating control signals to efficiently move the rover from its current position to the goal waypoint.
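A minimal A* sketch over such a graph, assuming the node positions and the obstacle-free edge set have already been computed (obstacle-crossing edges removed); the Euclidean distance to the goal serves as the admissible heuristic.

```python
import heapq
import math

def astar(nodes, edges, start, goal):
    """A* shortest path on a sparse graph. `nodes` maps id -> (x, y);
    `edges` maps id -> iterable of neighbour ids (edges intersecting
    obstacles are assumed to have been removed). Returns the node path
    or None if the goal is unreachable."""
    def h(n):
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x2 - x1, y2 - y1)   # Euclidean heuristic

    open_set = [(h(start), 0.0, start, [start])]
    best_g = {}
    while open_set:
        _, g, n, path = heapq.heappop(open_set)
        if n == goal:
            return path
        if best_g.get(n, float("inf")) <= g:
            continue                          # already expanded more cheaply
        best_g[n] = g
        for m in edges.get(n, ()):
            (x1, y1), (x2, y2) = nodes[n], nodes[m]
            g2 = g + math.hypot(x2 - x1, y2 - y1)
            heapq.heappush(open_set, (g2 + h(m), g2, m, path + [m]))
    return None
```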
VI Robot interactions
Collaboration between heterogeneous robots is essential in SRCP2. Here, we describe two main aspects of robot interactions (rover rendezvous and excavation/dumping) that employ robotic vision extensively in our solution.
VI-A Rover rendezvous
Rover rendezvous is the activity whereby a scout, an excavator, and a hauler come into close proximity (less than 0.5m) to allow volatiles in the regolith to be extracted and deposited by the excavator into the hauler’s bin. Rendezvous is extremely delicate since any error in the process may cause a collision, resulting in increased EKF drift or damage. Fig. 9 depicts rover rendezvous in our solution.
A major obstacle to rendezvous is potential localisation inaccuracy of the rovers (by up to 5m), which we mitigate using visual guidance at close range. First, once a scout finds a volatile, it pauses on the spot to function as a “marker” for the resource. The scout then broadcasts the (estimated) location to the other rovers along with an obstacle-free parking configuration, defined as a triangle pair; see Fig. 9. The excavator then approaches the scout based on the broadcast position estimate. When the excavator is within 10m of the scout, it engages the camera and object detector (see Fig. 8) to visually locate the scout. The predicted bounding box of the target rover, in conjunction with stereo-depth, is used to estimate the precise location of the volatile relative to the excavator. The scout then safely departs the dig site, allowing the excavator to park itself in front of the deposit, as per the safe “triangle”, ready for extraction. Subsequently, and in a similar fashion, the hauler approaches the excavator to complete the rendezvous.
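Whether a candidate parking position lies inside one of the broadcast safe triangles can be checked with a standard same-side (cross-product sign) test. This helper is our own illustration, not code from the solution.

```python
def point_in_triangle(p, a, b, c):
    """True if 2D point p lies inside (or on the boundary of) triangle
    (a, b, c): the cross products along the three edges must not have
    mixed signs."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```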
VI-B Excavation and dumping
With the hauler parked close to the excavator, the digging process begins, as illustrated in Fig. 10. The excavator uses vision to estimate the hauler’s relative location and orientation, which are used to set the scoop angle for resource depositing. This process begins with the excavator panning its camera towards the hauler until the bin is identified using YOLOv5; LiDAR is then used to measure the closest point between the hauler’s bin and the excavator chassis. All LiDAR measurements are projected onto a 2D plane, and we only consider returns that fall within the bounding box of the hauler’s bin. If the closest-point measurement reports that the distance between the two rovers is undesirable, the hauler readjusts its parking position accordingly. The excavator then digs volatiles from the ground and deposits them into the back of the hauler; this continues until the resource patch is depleted.
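The LiDAR/bounding-box fusion can be sketched as follows: each beam is projected into the image plane, beams whose projection falls outside the bin's detected horizontal extent are discarded, and the closest remaining range is returned. The camera parameters and frame conventions here are illustrative assumptions.

```python
import numpy as np

def closest_point_in_bbox(ranges, bearings, bbox_u, fx=500.0, cx=320.0):
    """Fuse a planar LiDAR scan with a detection: keep only beams whose
    image-plane projection falls inside the detected bounding box
    (horizontal extent bbox_u = (u_min, u_max)), and return the closest
    such range, or None if no beam qualifies."""
    ranges = np.asarray(ranges, dtype=float)
    bearings = np.asarray(bearings, dtype=float)
    x = ranges * np.cos(bearings)            # forward
    y = ranges * np.sin(bearings)            # left
    valid = x > 0.1                          # only beams in front of the camera
    u = cx - fx * (y[valid] / x[valid])      # pinhole projection of each beam
    in_box = (u >= bbox_u[0]) & (u <= bbox_u[1])
    hits = ranges[valid][in_box]
    return float(hits.min()) if hits.size else None
```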
VII Robot coordination
To facilitate task coordination in multi-robot systems, potential architectures include: a centralised system, where all robots are connected to a central control unit; a distributed system, where there is no central control and all robots are equal and autonomous in decision making; and a decentralised system, which is an intermediate between centralised and distributed architectures. We opted for a decentralised approach, which offers more scalability and higher risk tolerance than a centralised system, whilst being easier to develop and deploy than a distributed system.
Concretely, each rover in a given team is able to autonomously accomplish generic tasks such as localisation, scene understanding, and locomotion, as well as specialised ones such as exploration, volatile detection, digging, dumping, and parking. However, transitioning from one task to another is done via a centralised coordinator service to facilitate task synchronisation across multiple rovers (e.g., the excavator should not dig and deposit resources until the hauler has finished parking). Owing to the decentralised nature of our system, should one of the teams break down, the other can continue functioning without any repercussions.
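A minimal sketch of the coordinator's gating logic, with hypothetical task names; the real service handles many more transitions and runs per team.

```python
class Coordinator:
    """Per-team coordinator: rovers report their current task, and task
    transitions that depend on a teammate's state are gated here."""

    def __init__(self):
        self.state = {}  # rover name -> last reported task

    def report(self, rover, task):
        self.state[rover] = task

    def may_start(self, rover, task):
        # Example rule from the text: the excavator must not dig and
        # deposit until the hauler has finished parking.
        if rover == "excavator" and task == "dig":
            return self.state.get("hauler") == "parked"
        return True
```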
VIII Results

Qualitative results of the vision-based modules have been provided above. Here, we present some quantitative results.
Fig. 11 displays the localisation error of a hauler over a 2-hour simulation run, where the error is computed as the difference between the EKF estimate and the ground truth position. As shown in the plot, localisation error accumulates as the rover moves across the lunar environment. During this run, the error for the hauler reached a maximum of 2.67m. Regular PnP resets ensure that the error stays within acceptable bounds; after resetting, the localisation error was reduced to around 0.2m or less in most cases.
Semantic scene understanding is a major part of our navigation system. Fig. 12 shows the confusion matrix of our YOLOv5 object detector trained on our dataset. The mean testing accuracy of the model is 88%. Notably, the model generalised successfully to detect small distant mounds not present in the training or testing sets.
Obstacle detections are added to the persistent local map of the rover at 5 FPS, which was sufficient for all the rovers to achieve fine-grained motor control. Throughout 44 hours of simulation testing, the rovers travelled an accumulated distance of approximately 120km. During these runs, no serious navigational failures due to collisions occurred.
As mentioned in Sec. VI, visual object detection plays a significant role in rover rendezvous and excavation. The same detector model used for navigation (i.e., with quantitative results in Fig. 12) was used for rover interactions. A more direct measure of success of rover interactions is the amount of volatiles extracted by the excavator and deposited into the hauler. Across all resource extraction events attempted in the 44 hours of simulation testing, 84.8% of volatiles were successfully transferred to the hopper. Resource losses were due to rendezvous or deposit inaccuracies, which were more common in challenging terrain (many hills or obstacles at the location of the resource). In overly challenging cases, resource extraction was simply not attempted for the sake of safety; this occurred for about 20% of the resources discovered by the scout.
Twenty-two simulation runs were performed to evaluate the overall performance of the final system. Each run consisted of 2 hours of simulation time under the competition configuration (see Sec. II). The average, minimum, and maximum number of volatiles extracted during these runs were 266, 163, and 339, respectively. The scores accumulated during a specific 2-hour run are plotted in Fig. 13. A video recording provides additional qualitative results.
Our system represents a robust implementation of autonomous space mining in the context of the NASA SRCP2. Guided by robotic vision, our rovers are able to reliably navigate and extract resources from the simulated lunar environment for extended periods. The vision system periodically alleviates localisation drift and builds a persistent map that provides semantic scene understanding for obstacle avoidance and rover interaction. An interesting direction for future research in robotic vision is to perform VSLAM under the guidance of semantic scene understanding, to help alleviate issues due to texture-poor terrain and to build a semantically meaningful map of the lunar environment.
We gratefully acknowledge funding from the Andy Thomas Centre for Space Resources. We thank the following team members who contributed in various ways towards our solution: John Culton, Hans C. Culton, Alvaro Parra Bustos, Rijul Ramkumar, Shivam Savani, Amirsalar Aryakia, Sam Bahrami, Aditya Pujara and Matthew Michael. James Bockman acknowledges support from the Australian Government Research Training Program (RTP) Scholarship in conjunction with the Lockheed Martin Australia supplementary scholarship.
References

- (2019) Geometry model for marker-based localisation. Ph.D. thesis, University of Salford.
- (Website).
- (2020) Learning to explore using Active Neural SLAM. arXiv preprint arXiv:2004.05155.
- (2019) Satellite pose estimation with deep landmark regression and nonlinear pose refinement. In ICCV Workshop on Recovering 6D Object Pose.
- (2017) In-situ navigation and timing services for the human Mars landing site part 1: system concept.
- (2016) Cooperative exploration based on supervisory control of multi-robot systems. Applied Intelligence 45(1), pp. 18–29.
- (2007) MonoSLAM: real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(6), pp. 1052–1067.
- (Website).
- (1993) Real time correlation-based stereo: algorithm, implementations and applications. Technical report, Inria.
- (Website).
- (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), pp. 381–395.
- (2015) Reliability considerations in automated mining systems. International Journal of Mining, Reclamation and Environment 29(5), pp. 404–418.
- (2018) Slippage estimation and compensation for planetary exploration rovers: state of the art and future challenges. Journal of Field Robotics 35(4), pp. 564–577.
- (1968) A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4(2), pp. 100–107.
- (2018) CVM-Net: cross-view matching network for image-based ground-to-aerial geo-localization. pp. 7258–7267.
- ultralytics/yolov5: v5.0 - YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations.
- (2020) Team Mountaineers Space Robotic Challenge Phase-2 qualification round preparation report. arXiv preprint arXiv:2003.09968.
- (2006) Planning algorithms. Cambridge University Press.
- (2007) Rock modeling and matching for autonomous long-range Mars rover localization. Journal of Field Robotics 24(3), pp. 187–203.
- (2014) Use of weak GNSS signals in a mission to the Moon. In 7th ESA Workshop on Satellite Navigation Technologies and European Workshop on GNSS Signals and Signal Processing (NAVITEC), pp. 1–8.
- (2020) Contribution to the path planning of a multi-robot system: centralized architecture. Intelligent Service Robotics 13(1), pp. 147–158.
- (2019) Satellite navigation of lunar orbiting spacecraft and objects on the lunar surface. Gyroscopy and Navigation 10(2), pp. 54–61.
- (2016) Learning to navigate in complex environments. CoRR abs/1611.03673.
- (2018) A hybrid decentralized coordinated approach for multi-robot exploration task. The Computer Journal.
- (2016) A generalized extended Kalman filter implementation for the Robot Operating System. In Intelligent Autonomous Systems 13, pp. 335–348.
- (2017) ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics 33(5), pp. 1255–1262.
- (Website).
- (2011) Space mining application for South African mining robotics. In 4th Robotics and Mechatronics Conference of South Africa (ROBMECH 2011), Vol. 23, pp. 25.
- (2018) Numerical coordinate regression with convolutional neural networks. arXiv preprint arXiv:1801.07372.
- (2015) A decision-theoretic planning approach for multi-robot exploration and event search. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5287–5293.
- (Website).
- (2007) In-situ resource utilization for lunar and Mars exploration.
- (2014) Space robotics—present and past challenges. In 19th International Conference on Methods and Models in Automation and Robotics (MMAR), pp. 926–929.
- (1964) Smoothing and differentiation of data by simplified least squares procedures. Analytical Chemistry 36, pp. 1627–1639.
- (2014) Space robotics: an overview of challenges, applications and technologies. KI-Künstliche Intelligenz 28(2), pp. 71–76.
- (2021) MaAST: map attention with semantic transformers for efficient visual navigation. arXiv preprint arXiv:2103.11374.
- Space Robotics Challenge Phase 2. http://www.spaceroboticschallenge.com/ (accessed 2021-09-09).
- (2019) OpenVSLAM: a versatile visual SLAM framework. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 2292–2295.
- (2019) Deep high-resolution representation learning for human pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2020) Autonomous robot swarms for off-world construction and resource mining. In AIAA SciTech 2020 Forum, pp. 0795.
- (2002) Probabilistic robotics. Communications of the ACM 45(3), pp. 52–57.
- (2017) A survey of simultaneous localization and mapping on unstructured lunar complex environment. In AIP Conference Proceedings, Vol. 1890, pp. 030010.
- (2017) Adaptive and intelligent navigation of autonomous planetary rovers—a survey. In NASA/ESA Conference on Adaptive Hardware and Systems (AHS), pp. 237–244.
- (2020) Motion control for mobile robot navigation using machine learning: a survey. arXiv preprint arXiv:2011.13112.
- (2014) Simultaneous celestial positioning and orientation for the lunar rover. Aerospace Science and Technology 34, pp. 45–54.
- (2016) Optimal placement of passive sensors for robot localisation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4586–4593.
- (2021) Adaptive celestial positioning for the stationary Mars rover based on a self-calibration model for the star sensor. The Journal of Navigation, pp. 1–16.