Outside of the Mars robotics community, it is commonly presumed that the robotic rovers on Mars are controlled in a time-delayed joystick manner, with commands sent to the rovers several if not many times per day as new information is acquired from the rovers’ sensors. Within the Mars robotics community, however, such brute-force joystick control has proven rather cumbersome, and much more elegant methods for controlling the rovers on Mars have been developed, with highly significant degrees of robotic autonomy.
Particularly, the Mars Exploration Rover (MER) team has demonstrated autonomy for the two robotic rovers, Spirit and Opportunity, to the level that practically all commands for a given Martian day (1 ‘sol’ ≈ 24.6 hours) are delivered to each rover from Earth before the robot wakens from its power-conserving nighttime resting mode [Crisp et al., 2003, Squyres et al., 2004]. Each rover then follows the commanded sequence of moves for the entire sol: moving to desired locations, articulating its instrument arm to desired points in the robot’s workspace, and acquiring data from the cameras and chemical sensors. From an outsider’s point of view, these capabilities may not seem significantly autonomous, in that all the commands are sent from Earth and the MER rovers merely execute them. But the following feats deserve emphasis before the quality of the MER autonomy is judged: this robot is on another planet with a complex surface to navigate and study, and the entire complex command sequence is sent to the robot the previous night for autonomous operation the next day. Sophisticated software and control systems are also part of the system, including the MER autonomous obstacle-avoidance system and the MER visual odometry & localization software (this visual odometry and localization software was added to the systems after the rovers had been on Mars for several months [Squyres, 2004]). One should remember that there is a large team of human roboticists and geologists working here on the Earth in support of the MER missions, to determine science targets and robotic command sequences on a daily basis; after the sun sets for an MER rover, the rover mission team can determine the science priorities and the command sequence for the next sol in less than 4-5 hours (right after landing, this command sequencing took about 17 hours [Squyres, 2004]).
One future mission deserves special discussion in the context of the technology developments described in this paper: the Mars Science Laboratory, planned for launch in 2009 (MSL’2009). A particular capability desired for the MSL’2009 mission is to traverse rapidly to up to three geologically different scientific points-of-interest within the landing ellipse. These three geologically different sites will be chosen from Earth by analysis of relevant satellite imagery. The desired maximal traversal rates could range from 300 to 2000 meters/sol in order to reach each of the three points-of-interest in the landing ellipse in minimum time.
Given these substantial expected traversal rates for the MSL’2009 rover, autonomous obstacle avoidance [Goldberg et al., 2002] and autonomous visual odometry & localization [Olson et al., 2003] will be essential to achieve these rates; otherwise, rover damage and slow science-target approach would result. Given such autonomy in the rapid traverses, it behooves us to endow the autonomous rover with sufficient scientific responsibility. Otherwise, the robotic exploration system might drive right past an important scientific target-of-opportunity on the way to the human-chosen scientific point-of-interest. Crawford & Tamppari [Crawford and Tamppari, 2002] and their NASA/Ames team summarize possible ‘autonomous traverse science’, in which science pancam and Mini-TES (Thermal Emission Spectrometer) image mosaics are autonomously obtained every 20-30 meters during a 300-meter traverse (in their example). They state that “there may be onboard analysis of the science data from the pancam and the mini-TES, which compares this data to predefined signatures of carbonates or other targets of interest. If detected, traverse may be halted and information relayed back to Earth.” This onboard analysis of the science data is precisely the technology issue that we have been working towards solving. This paper is the first report to the general robotics community describing our progress towards giving a robotic astrobiologist some aspects of autonomous recognition of scientific targets-of-opportunity. This technology development may be neither sufficiently mature nor sufficiently necessary for deployment on the MSL’2009 mission, but it should find utility in missions beyond MSL’2009.
Before proceeding, we note two related efforts in the development of autonomous recognition of scientific targets-of-opportunity for astrobiological exploration: firstly, the work on developing a Nomad robot to search for meteorites in Antarctica, led by the Carnegie Mellon University Robotics Institute [Apostolopoulos et al., 2000, Pedersen, 2001]; and secondly, the work by a group at NASA/Ames on developing a Geological Field Assistant (GFA) [Gulick et al., 2001, Gulick et al., 2002, Gulick et al., 2004]. From an algorithmic point-of-view, the uncommon-mapping technique presented in this paper attempts to identify interest points in a context-free, unbiased manner. In related work, [Heidemann, 2004] has studied the use of spatial symmetry of color pixel values to identify focus points in a context-free, unbiased manner.
2 The Cyborg Geologist & Astrobiologist System
Our ongoing effort in this area of autonomous recognition of scientific targets-of-opportunity for field geology and field astrobiology is beginning to mature as well. To date, we have developed and field-tested a GFA-like “Cyborg Astrobiologist” system [McGuire et al., 2004a, McGuire et al., 2004b, McGuire et al., 2005a, McGuire et al., 2005b] that now can:
Use human mobility to maneuver to and within a geological site and to follow suggestions from the computer as to how to approach a geological outcrop;
Use a portable robotic camera system to obtain a mosaic of color images;
Use a ‘wearable’ computer to search in real-time for the most uncommon regions of these mosaic images;
Use the robotic camera system to re-point at several of the most uncommon areas of the mosaic images, in order to obtain much more detailed information about these ‘interesting’ uncommon areas;
Use human intelligence to choose between the wearable computer’s different options for interesting areas in the panorama for closer approach; and
Repeat the process as often as desired, sometimes retracing a step of geological approach.
In the Mars Exploration Workshop in Madrid in November 2003, we demonstrated some of the early capabilities of our ‘Cyborg’ Geologist/Astrobiologist System [McGuire et al., 2004b]. We have been using this Cyborg system as a platform to develop computer-vision algorithms for recognizing interesting geological and astrobiological features, and for testing these algorithms in the field here on the Earth.
The half-human/half-machine ‘Cyborg’ approach (Fig. 1) uses human locomotion and human-geologist intuition/intelligence to take the computer-vision algorithms to the field, via a wearable computer, for teaching and testing. This is advantageous because we can concentrate on developing the ‘scientific’ aspects of autonomous discovery of features in computer imagery, as opposed to the more ‘engineering’ aspects of using computer vision to guide the locomotion of a robot through treacherous terrain. The development of the scientific vision system for the robot is thus effectively decoupled from the development of the locomotion system for the robot.
After the maturation and optimization of the computer-vision algorithms, we hope to transplant these algorithms from the Cyborg computer to the on-board computer of a semi-autonomous robot that will be bound for Mars or one of the interesting moons in our solar system. Field tests of such a robot have already begun with the Cyborg Astrobiologist’s software for scientific autonomy: our software has been delivered to the robotic borehole-inspection system of the MARTE project (MARTE is a practice mission in the summer of 2005 for tele-operated robotic drilling and tele-operated scientific studies in a Mars-like environment near the Río Tinto, in Andalucía in southern Spain).
Both of the field geologists on our team, Díaz Martínez and Ormö, have independently stressed the importance to field geologists of geological ‘contacts’ and of the differences between the geological units that are separated by a geological contact. For this reason, in March 2003 we decided that the most important tool with which to begin our computer-vision algorithm development was ‘image segmentation’. Such image-segmentation algorithms allow the computer to break down a panoramic image into different regions (see Fig. 2 for an example), based upon similarity, and to find the boundaries or contacts between the different regions in the image, based upon difference. Much of the remainder of this paper discusses the first geological field trials, with the wearable computer, of the segmentation algorithm and the associated uncommon-map algorithm that we have implemented and developed. In the near future, we hope to use the Cyborg Astrobiologist system to test more advanced image-segmentation algorithms, capable of simultaneous color and texture image segmentation [Freixenet et al., 2004], as well as novelty-detection algorithms [Bogacz et al., 1999].
2.1 Image Segmentation, Uncommon Maps, Interest Maps, and Interest Points
With human vision, a geologist:
Firstly, tends to pay attention to those areas of a scene which are most unlike the other areas of the scene; and then,
Secondly, attempts to find the relation between the different areas of the scene, in order to understand the geological history of the outcrop.
The first step in this prototypical thought process of a geologist was our motivation for inventing the concept of uncommon maps; see Fig. 3 for a simple illustration of the concept. We have not yet attempted to solve the second step in this prototypical thought process, but it is evident from its formulation that human geologists do not immediately ignore the common areas of the scene. Instead, human geologists catalog the common areas and put them in the back of their minds for “higher-level analysis of the scene”, in other words, for determining explanations for the relations of the uncommon areas of the scene with the common areas of the scene.
Prior to implementing the ‘uncommon map’ (the first step of the prototypical geologist’s thought process), we needed a segmentation algorithm in order to produce pixel-class maps to serve as input to the uncommon-map algorithm. We have implemented the classic co-occurrence histogram algorithm [Haralick et al., 1973, Haddon and Boyce, 1990]. For this work, we have not included texture information in either the segmentation algorithm or the uncommon-map algorithm. Currently, each of the three bands of color information is segmented separately, and the results are later merged in the interest map by summing three independent uncommon maps. In future work, advanced image-segmentation algorithms that simultaneously use color & texture could be developed for and tested on the Cyborg Astrobiologist system (e.g., the algorithms of [Freixenet et al., 2004]).
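As a rough illustration of this per-band processing, the co-occurrence histogram and a naive class assignment can be sketched in Python/NumPy. This is a minimal toy version under our own assumptions (the function name, the quantization into 32 levels, and the seed-based class assignment are all illustrative), not the fielded Haddon & Boyce implementation:

```python
import numpy as np

def segment_band(band, levels=32, n_classes=8):
    """Toy per-band segmentation, loosely in the spirit of co-occurrence
    histogram methods: quantize the band, accumulate a Haralick-style
    co-occurrence histogram of horizontally adjacent pixel pairs, take the
    n_classes most-populated diagonal bins as class 'seeds' (evidence of
    uniform regions), and assign each pixel to the seed nearest its own
    quantized value."""
    q = np.clip((band.astype(float) * levels / (band.max() + 1)).astype(int),
                0, levels - 1)
    hist = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(hist, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # neighbour pairs
    seeds = np.sort(np.argsort(np.diag(hist))[-n_classes:])    # dominant levels
    classes = np.argmin(np.abs(q[..., None] - seeds[None, None, :]), axis=-1)
    return classes, hist
```

In a full pipeline this function would be applied once per (H, S, I) band, yielding three independent pixel-class maps.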
The concept of an ‘uncommon map’ is our invention, though, being somewhat useful, it has probably been invented independently by other authors. In our implementation, the uncommon-map algorithm takes the top 8 pixel classes determined by the image-segmentation algorithm and ranks each pixel class according to how many pixels it contains. The pixels in the class with the greatest number of members are numerically labelled as ‘common’, and the pixels in the class with the least number of members are numerically labelled as ‘uncommon’. The ‘uncommonness’ hence ranges from 1 for a common pixel to 8 for an uncommon pixel, and we can therefore construct an uncommon map given any image-segmentation map. In our work, we construct several uncommon maps from the color image mosaic, and then sum these uncommon maps together to arrive at a final interest map.
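The uncommon-map construction just described can be sketched compactly; this is an illustrative NumPy version under the paper’s conventions (eight classes, weights 1 for common through 8 for uncommon), not our exact fielded code:

```python
import numpy as np

def uncommon_map(class_map, n_classes=8):
    """Rank the pixel classes of a segmentation map by population and paint
    each pixel with its class's 'uncommonness': 1 for the most common class,
    up to n_classes for the least common."""
    counts = np.bincount(class_map.ravel(), minlength=n_classes)[:n_classes]
    order = np.argsort(-counts)                  # most to least populated
    weight = np.empty(n_classes, dtype=int)
    weight[order] = np.arange(1, n_classes + 1)  # common -> 1, uncommon -> 8
    return weight[class_map]

def interest_map(class_maps):
    """Sum the per-band (e.g., H, S, I) uncommon maps into one interest map."""
    return np.sum([uncommon_map(c) for c in class_maps], axis=0)
```

Summing the three per-band maps lets a region that is uncommon in any one of hue, saturation, or intensity raise the overall interest score.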
In this paper, we develop and test a simple, high-level concept of interest points of an image, which is based upon finding the centroids of the smallest (most uncommon) regions of the image. Such a ‘global’ high-level concept of interest points differs from the lower-level ‘local’ concept of Förstner interest points based upon corners and centers of circular features. However, this latter technique with local interest points is used by the MER team for their stereo-vision image matching and for their visual-odometry and visual-localization image matching [Goldberg et al., 2002, Olson et al., 2003, Nesnas et al., 1999]. Our interest point method bears somewhat more relation to the higher-level wavelet-based salient points technique [Sebe et al., 2003], in that they search first at coarse resolution for the image regions with the largest gradient, and then they use wavelets in order to zoom in towards the salient point within that region that has the highest gradient. Their salient point technique is edge-based, whereas our interest point is currently region-based. Since in the long-term, we have an interest in geological contacts, this edge-based & wavelet-based salient point technique could be a reasonable future interest-point algorithm to incorporate into our Cyborg Astrobiologist system for testing.
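A minimal sketch of this global, centroid-based notion of interest points (illustrative NumPy code, not our exact implementation) is:

```python
import numpy as np

def smallest_region_centroids(class_map, n_points=3):
    """Global, region-based interest points: rank the pixel classes by area
    and return the (row, column) centroids of the n_points smallest
    (most uncommon) classes that occur in the map."""
    labels, counts = np.unique(class_map, return_counts=True)
    centroids = []
    for lab in labels[np.argsort(counts)][:n_points]:  # smallest areas first
        ys, xs = np.nonzero(class_map == lab)
        centroids.append((ys.mean(), xs.mean()))
    return centroids
```

Because the centroid is computed over a whole segmented region, this is a ‘global’ criterion, in contrast to corner-like local operators.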
2.2 Hardware & Software for the Cyborg Astrobiologist
The non-human hardware of the Cyborg Astrobiologist system consists of:
a 667 MHz wearable computer (from ViA Computer Systems) with a ‘power-saving’ Transmeta ‘Crusoe’ CPU and 112 MB of physical memory,
an SV-6 Head Mounted VGA Display (from Tekgear, via the Spanish supplier Decom) that works well in bright sunlight,
a SONY ‘Handycam’ color video camera (model DCR-TRV620E-PAL), with a Firewire/IEEE1394 cable to the computer,
a thumb-operated USB finger trackball from 3G Green Green Globe Co., resupplied by ViA Computer Systems, and by Decom,
a small keyboard attached to the human’s arm,
a tripod for the camera, and
a Pan-Tilt Unit (model PTU-46-70W) from Directed Perception with a bag of associated power and signal converters.
The wearable computer processes the images acquired by the color digital video camera to compute a map of interesting areas. The computations include simple mosaicking by image-butting, as well as two-dimensional histogramming for image segmentation [Haralick et al., 1973, Haddon and Boyce, 1990]. This image segmentation is computed independently for each of the Hue, Saturation, and Intensity (H,S,I) image planes, resulting in three different image-segmentation maps. These image-segmentation maps are used to compute ‘uncommon’ maps (one for each of the three (H,S,I) image-segmentation maps): each of the three resulting uncommon maps gives highest weight to those regions of smallest area in the respective (H,S,I) image plane. Finally, the three (H,S,I) uncommon maps are added together into an interest map, which is used by the Cyborg system for subsequent interest-guided pointing of the camera.
After segmenting the mosaic image (Fig. 7), it becomes obvious that a very simple method to find interesting regions in an image is to look for those regions that have a significant number of uncommon pixels. We accomplish this by (Fig. 5): first, creating an uncommon map based upon a linear reversal of the segment-area ranking; second, adding the three uncommon maps (for H, S, & I) together to form an interest map; and third, blurring this interest map with a Gaussian smoothing kernel.
Based upon the three largest peaks in the blurred/smoothed interest map, the Cyborg system then guides the Pan-Tilt Unit to point the camera at each of these three positions to acquire high-resolution color images of the three interest points (Fig. 4). By extending a simple image-acquisition and image-processing system to include robotic and mosaicking elements, we were able to conclusively demonstrate that the system can make reasonable decisions by itself in the field for robotic pointing of the camera.
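The blur-and-repoint step can be sketched as follows, assuming SciPy’s ndimage filters; the smoothing width `sigma` and the 5×5 peak neighbourhood are illustrative placeholders, since the fielded kernel width is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def top_interest_points(interest, n_points=3, sigma=2.0):
    """Smooth the interest map with a Gaussian kernel, then return the image
    coordinates of the n_points strongest local maxima, i.e., the candidate
    camera re-pointing targets."""
    smooth = gaussian_filter(interest.astype(float), sigma)
    # a pixel is a peak if it equals the maximum over its 5x5 neighbourhood
    peaks = (smooth == maximum_filter(smooth, size=5)) & (smooth > 0)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(-smooth[ys, xs])[:n_points]
    return [(int(y), int(x)) for y, x in zip(ys[order], xs[order])]
```

The returned pixel coordinates would then be converted into pan-tilt angles for the camera mount.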
3 Descriptive Summaries of the Field Site and of the Expeditions
On March 3rd and June 11th, 2004, three of the authors (McGuire, Díaz Martínez & Ormö) tested the “Cyborg Astrobiologist” system for the first time at a geological site: the gypsum-bearing, southward-facing stratified cliffs near the “El Campillo” lake of Madrid’s Southeast Regional Park, outside the suburb of Rivas Vaciamadrid. Due to the significant storms in the 3 months between the two missions, there were 2 dark & wet areas in the gypsum cliffs that were visible only during the second mission. In Fig. 2, we show the segmentation of the outcrop during the first mission, according to the human geologist, Díaz Martínez, for reference.
The computer was worn on McGuire’s belt, and typically took 3-5 minutes to acquire and compose a mosaic image from several subimages. The sub-images were downsampled in both directions by a factor of 4-8 during these tests.
Several mosaics were acquired of the cliff face from a distance of about 300 meters, and the computer automatically determined the three most interesting points in each mosaic. Then, the wearable computer automatically repointed the camera towards each of the three interest points, in order to acquire non-downsampled color images of the region around each interest point in the image. All the original mosaics, all the derived mosaics and all the interest-point subimages were then saved to hard disk for post-mission study.
Two other tripod positions were chosen for acquiring mosaics and interest-point image-chip sets. At each of the three tripod positions, 2-3 mosaic images and interest-point image-chip sets were acquired. One of the chosen tripod locations was about 60 meters from the cliff’s face; the other was about 10 meters (Fig. 1) from the cliff face.
During the 2nd mission, at distances of 300 meters and 60 meters, the system most often determined the wet spots (Fig. 4) to be the most interesting regions on the cliff face. This was encouraging to us, because we also found these wet spots to be the most interesting regions. (These dark & wet regions were interesting to us partly because they give information about the development of the outcrop. Even if the relatively small spots were only dark, and not wet (i.e., dark dolerite blocks, or a brecciated basalt), their uniqueness in the otherwise white & tan outcrop would have drawn our immediate attention. Additionally, even if this had been our first trip to the site, and the dark spots had been present during that first trip, these dark regions would have captured our attention for the same reasons. The fact that these dark spots had appeared after our first trip and before the second trip was not of paramount importance in grabbing our interest, though the ‘sudden’ appearance of the dark spots between the two missions did arouse our higher-order curiosity.)
After the tripod position at 60 meters distance, we chose the next tripod position to be about 10 meters from the cliff face (Fig. 1). During this ‘close-up’ study of the cliff face, we intended to focus the Cyborg Astrobiologist exploration system upon the two points that it found most interesting when it was in the more distant tree grove, namely the two wet and dark regions of the lower part of the cliff face. By moving from 60 meters distance to 10 meters distance and by focusing at the closer distance on the interest points determined at the larger distance, we wished to simulate how a truly autonomous robotic system would approach the cliff face (see the map in Fig. 6). Unfortunately, due to a combination of a lack of human foresight in the choice of tripod position and a lack of more advanced software algorithms to mask out the surrounding & less interesting region (see discussion in Section 4), for one of the two dark spots, the Cyborg system only found interesting points on the undarkened periphery of the dark & wet stains. Furthermore, for the other dark spot, the dark spot was spatially complex, being subdivided into several regions, with some green and brown foliage covering part of the mosaic. Therefore, in both close-up cases the value of the interest mapping is debatable. This interest mapping could be improved in the future, as we discuss in Section 4.2.
4.1 Results from the First Geological Field Test
As first observed during the first mission to Rivas on March 3rd, the southward-facing cliffs at Rivas Vaciamadrid consist of mostly tan-colored surfaces, with some white veins or layers, and with significant shadow-causing three-dimensional structure. The computer-vision algorithms performed adequately for a first visit to a geological site, but they need to be improved in the future. As decided at the end of the first mission by the mission team, the improvements include: shadow-detection and shadow-interpretation algorithms, and segmentation of the images based upon microtexture.
In the latter case, we concluded that, due to the very monochromatic & slightly shadowy nature of the imagery, the Cortical Interest Map algorithm had non-intuitively concentrated its interest on differences in intensity, tending to ignore hue and saturation.
After the first geological field test, we spent several months studying the imagery obtained during this mission, and fixing various further problems that were only discovered after the first mission. Though we had hoped that the first mission to Rivas would have been more like a science mission, in reality it was more of an engineering mission.
4.2 Results from the Second Geological Field Test
In Fig. 4, from the tree grove at a distance of 60 meters, the Cyborg Astrobiologist system found the dark & wet spot on the right side to be the most interesting, the dark & wet spot on the left side to be the second most interesting, and the small dark shadow in the upper left hand corner to be the 3rd most interesting. For the first two interest points (the dark & wet spots), it is apparent from the uncommon map for intensity pixels in Fig. 5 that these points are interesting due to their relatively remarkable intensity values. By inspection of Fig. 7, we see that these pixels which reside in the white segment of the intensity segmentation mosaic are unusual because they are a cluster of very dim pixels (relative to the brighter red, blue and green segments). Within the dark wet spots, we observe that these particular points in the white segment of the intensity segmentation in Fig. 7 are interesting because they reside in the shadowy areas of the dark & wet spots. We interpret the interest in the 3rd interest point to be due to the juxtaposition of the small green plant with the shadowing in this region; the interest in this point is significantly smaller than for the 2 other interest points.
More advanced software could be developed to handle better the close-up real-time interest-map analysis of the imagery acquired at the close-up tripod position (10 meter distance from the cliff; not shown here). Here are some options to be included in such software development:
Add hardware & software to the Cyborg Astrobiologist so that it can make intelligent use of its zoom lens. We plan to use the camera’s LANC communication interface to control the zoom lens with the wearable computer. With such software for intelligent zooming, the system could have corrected the human’s mistake in tripod placement and decided to zoom further in, to focus only on the shadowy part of the dark & wet spot (which was determined to be the most interesting point at a distance of 60 meters), rather than the periphery of the entire dark & wet spot.
Enhance the Cyborg Astrobiologist system so that it has a memory of the image segmentations performed at a greater distance or at a lower magnification of the zoom lens. Then, when moving to a closer tripod position or a higher level of zoom-magnification, register the new imagery or the new segmentation maps with the coarser resolution imagery and segmentation maps. Finally, tell the system to mask out or ignore or deemphasize those parts of the higher resolution imagery which were part of the low-interest segments of the coarser, more distant segmentation maps, so that it concentrates on those features that it determined to be interesting at coarse resolution and higher distance.
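The masking idea in the second option could be prototyped along the following lines; the nearest-neighbour upsampling and the fixed threshold are stand-ins for the registration and calibration a real system would need (the function and its parameters are hypothetical):

```python
import numpy as np

def mask_low_interest(fine_interest, coarse_interest, threshold):
    """Sketch of the proposed 'segmentation memory': upsample the coarse
    interest map from the distant viewpoint (assumed already registered to
    the close-up view) and suppress fine-scale interest wherever the coarse
    map scored below threshold."""
    fy, fx = fine_interest.shape
    cy, cx = coarse_interest.shape
    # nearest-neighbour upsampling of the coarse map onto the fine grid
    yi = np.arange(fy) * cy // fy
    xi = np.arange(fx) * cx // fx
    up = coarse_interest[np.ix_(yi, xi)]
    return np.where(up >= threshold, fine_interest, 0)
```

With such a mask in place, the close-up interest analysis would be confined to the features already judged interesting at coarse resolution and greater distance.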
5 Discussion & Conclusions
Both the human geologists on our team concur with the judgement of the Cyborg Astrobiologist software system, that the two dark & wet spots on the cliff wall were the most interesting spots during the second mission. However, the two geologists also state that this largely depends on the aims of study for the geological field trip; if the aim of the study is to search for hydrological features, then these two dark & wet spots are certainly interesting. One question which we have thus far left unstudied is “What would the Cyborg Astrobiologist system have found interesting during the second mission if the two dark & wet spots had not been present during the second mission?” It is possible that it would again have found some dark shadow particularly interesting, but with the improvements made to the system between the first and second mission, it is also possible that it could have found a different feature of the cliff wall more interesting.
The NEO programming for this Cyborg Geologist project was initiated with the SONY Handycam in April 2002. The wearable computer arrived in June 2003, and the head-mounted display arrived in November 2003. We now have a reliably functioning Cyborg Geologist system of human, hardware, and software, which is partly robotic with its Pan-Tilt camera mount. This robotic extension allows the camera to be pointed repeatedly, precisely & automatically in different directions.
Based upon the significantly improved performance of the Cyborg Astrobiologist system during the 2nd mission to Rivas in June 2004, we conclude that the system is now sufficiently debugged to produce studies of the utility of particular computer-vision algorithms for geological deployment in the field. (Note in proofs: after this paper was originally written, we did some tests at a second field site, in Guadalajara, Spain, with the same algorithm and the same parameter settings. Despite the change in character of the geological imagery from the first field site, in Rivas Vaciamadrid, to the second field site, the uncommon-mapping technique again performed rather well, agreeing with post-mission human-geologist assessment 68% of the time (with a 32% false-positive rate and a 32% false-negative rate); see [McGuire et al., 2005b] for more detail. This success rate is qualitatively comparable to the results from the first mission in Rivas, and is evidence that the system performs in a context-free, unbiased manner.) We have outlined some possibilities for improvement of the system based upon the second field trip, particularly in the systems-level algorithms needed in order to more intelligently drive the approach of the Cyborg or robotic system towards a complex geological outcrop. These possible systems-level improvements include hardware & software for intelligent use of the camera’s zoom lens, and a memory of the image segmentations performed at greater distance or lower magnification of the zoom lens.
P. McGuire, J. Ormö and E. Díaz Martínez would all like to thank the Ramon y Cajal Fellowship program of the Spanish Ministry of Education and Science. Many colleagues have made this project possible through their technical assistance, administrative assistance, or scientific conversations. We give special thanks to Kai Neuffer, Antonino Giaquinta, Fernando Camps Martínez, and Alain Lepinette Malvitte for their technical support. We are indebted to Gloria Gallego, Carmen González, Ramon Fernández, Coronel Angel Santamaria, and Juan Pérez Mercader for their administrative support. We acknowledge conversations with Virginia Souza-Egipsy, María Paz Zorzano Mier, Carmen Córdoba Jabonero, Josefina Torres Redondo, Víctor R. Ruiz, Irene Schneider, Carol Stoker, Paula Grunthaner, Maxwell D. Walter, Fernando Ayllón Quevedo, Javier Martín Soler, Jörg Walter, Claudia Noelker, Gunther Heidemann, Robert Rae, and Jonathan Lunine. The field work by J. Ormö was partially supported by grants from the Spanish Ministry of Education and Science (AYA2003-01203 and CGL2004-03215). The equipment used in this work was purchased by grants to our Center for Astrobiology from its sponsoring research organizations, CSIC and INTA.
- [Apostolopoulos et al., 2000] Apostolopoulos, D., Wagner, M., Shamah, B., Pedersen, L., Shillcutt, K., and Whittaker, W. (2000). Technology and field demonstration of robotic search for Antarctic meteorites. International Journal of Robotics Research, 19(11):1015–1032.
- [Bogacz et al., 1999] Bogacz, R., Brown, M. W., and Giraud-Carrier, C. (1999). High capacity neural networks for familiarity discrimination. In Proceedings of the Ninth International Conference on Artificial Neural Networks (ICANN99), pages 773–776.
- [Crawford and Tamppari, 2002] Crawford, J. and Tamppari, L. (2002). Mars Science Laboratory – autonomy requirements analysis. Intelligent Data Understanding Seminar, available online at: http://is.arc.nasa.gov/IDU/slides/reports02/Crawford_Aut02c.pdf.
- [Crisp et al., 2003] Crisp, J., Adler, M., et al. (2003). Mars Exploration Rover mission. Journal of Geophysical Research (Planets), 108(2):1.
- [Freixenet et al., 2004] Freixenet, J., Muñoz, X., Martí, J., and Lladó, X. (2004). Color texture segmentation by region-boundary cooperation. In Computer Vision – ECCV 2004, Eighth European Conference on Computer Vision, Proceedings, Part II, Lecture Notes in Computer Science. Prague, Czech Republic, volume 3022, pages 250–261. Springer. Ed.: T. Pajdla and J. Matas., (Also available in the CVonline archive).
- [Goldberg et al., 2002] Goldberg, S., Maimone, M., and Matthies, L. (2002). Stereo vision and rover navigation software for planetary exploration. In 2002 IEEE Aerospace Conference Proceedings, pages 2025–2036.
- [Gulick et al., 2004] Gulick, V. C., Hart, S. D., Shi, X., and Siegel, V. L. (2004). Developing an automated science analysis system for Mars surface exploration for MSL and beyond. In Lunar and Planetary Science Conference Abstracts, volume 35, page 2121.
- [Gulick et al., 2002] Gulick, V. C., Morris, R. L., Bishop, J., Gazis, P., Alena, R., and Sierhuis, M. (2002). Geologist’s Field Assistant: developing image and spectral analyses algorithms for remote science exploration. In Lunar and Planetary Science Conference Abstracts, volume 33, page 1961.
- [Gulick et al., 2001] Gulick, V. C., Morris, R. L., Ruzon, M. A., and Roush, T. L. (2001). Autonomous image analyses during the 1999 Marsokhod rover field test. Journal of Geophysical Research, 106:7745–7764.
- [Haddon and Boyce, 1990] Haddon, J. and Boyce, J. (1990). Image segmentation by unifying region and boundary information. IEEE Trans. Pattern Anal. Mach. Intell., 12(10):929–948.
- [Haralick et al., 1973] Haralick, R., Shanmugam, K., and Dinstein, I. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6):610–621.
- [Heidemann, 2004] Heidemann, G. (2004). Focus of attention from local color symmetries. IEEE Trans. Pattern Anal. Mach. Intell., 26(7):817–830.
- [McGuire et al., 2005a] McGuire, P., Ormö, J., Gómez-Elvira, J., Rodríguez-Manfredi, J., Sebastián-Martínez, E., Ritter, H., Oesker, M., Haschke, R., Ontrup, J., and Díaz-Martínez, E. (2005a). The Cyborg Astrobiologist: Algorithm development for autonomous planetary (sub)surface exploration. Astrobiology, 5(2):230, oral presentations. Special Issue: Abstracts of NAI’2005: Biennial Meeting of the NASA Astrobiology Institute, April 10-14, Boulder, Colorado.
- [McGuire et al., 2005b] McGuire, P. C., Díaz-Martínez, E., Ormö, J., Gómez-Elvira, J., Rodríguez-Manfredi, J., Sebastián-Martínez, E., Ritter, H., Haschke, R., Oesker, M., and Ontrup, J. (2005b). The Cyborg Astrobiologist: Scouting red beds for uncommon features with geological significance. International Journal of Astrobiology, 4:(in press) http://arxiv.org/abs/cs.CV/0505058.
- [McGuire et al., 2004a] McGuire, P. C., Ormö, J., Diaz-Martinez, E., Rodríguez-Manfredi, J., Gómez-Elvira, J., Ritter, H., Oesker, M., and Ontrup, J. (2004a). The Cyborg Astrobiologist: first field experience. International Journal of Astrobiology, 3(3):189–207, http://arxiv.org/abs/cs.CV/0410071.
- [McGuire et al., 2004b] McGuire, P. C., Rodríguez-Manfredi, J. A., et al. (2004b). Cyborg systems as platforms for computer-vision algorithm-development for astrobiology. In Proc. of the Third European Workshop on Exo-Astrobiology, 18 - 20 November 2003, Madrid, Spain, volume ESA SP-545, pages 141–144, http://arxiv.org/abs/cs.CV/0401004. Ed.: R. A. Harris and L. Ouwehand. Noordwijk, Netherlands: ESA Publications Division, ISBN 92-9092-856-5.
- [Nesnas et al., 1999] Nesnas, I., Maimone, M., and Das, H. (1999). Autonomous vision-based manipulation from a rover platform. In Proceedings of the CIRA Conference, Monterey, California.
- [Olson et al., 2003] Olson, C., Matthies, L., Schoppers, M., and Maimone, M. (2003). Rover navigation using stereo ego-motion. Robotics and Autonomous Systems, 43(4):215–229.
- [Pedersen, 2001] Pedersen, L. (2001). Autonomous characterization of unknown environments. In 2001 IEEE International Conference on Robotics and Automation, volume 1, pages 277–284.
- [Sebe et al., 2003] Sebe, N., Tian, Q., Loupias, E., Lew, M., and Huang, T. (2003). Evaluation of salient points techniques. Image and Vision Computing, Special Issue on Machine Vision, 21:1087–1095.
- [Squyres, 2004] Squyres, S. (2004). private communication.
- [Squyres et al., 2004] Squyres, S., Arvidson, R., et al. (2004). The Spirit rover’s Athena science investigation at Gusev Crater, Mars. Science, 305:794–800.