Field geology with a wearable computer: 1st results of the Cyborg Astrobiologist System

24 June 2005 ∙ Patrick C. McGuire, et al.

We present results from the first geological field tests of the `Cyborg Astrobiologist', a wearable computer and video camcorder system that we are using to test and train a computer-vision system towards having some of the autonomous decision-making capabilities of a field geologist. The Cyborg Astrobiologist platform has thus far been used for testing and development of these algorithms and systems: robotic acquisition of quasi-mosaics of images, real-time image segmentation, and real-time determination of interesting points in the image mosaics. This work is a test of the system as a whole, rather than of any one part of it. However, beyond the concept of the system itself, the uncommon map (despite its simplicity) is the main innovative part of the system; it helps to determine interest points in a context-free manner. Overall, the hardware and software systems function reliably, and the computer-vision algorithms are adequate for the first field tests. In addition to the proof-of-concept aspect of these field tests, their main result is the enumeration of issues that we can improve in the future, including: dealing with structural shadow and microtexture, and controlling the camera's zoom lens in an intelligent manner. Nonetheless, despite these and other technical inadequacies, this Cyborg Astrobiologist system, consisting of a camera-equipped wearable computer and its computer-vision algorithms, has demonstrated its ability to find genuinely interesting points in the geological scenery in real-time, and then to gather more information about these interest points in an automated manner. We use these capabilities for autonomous guidance towards geological points-of-interest.


1 Introduction

Outside of the Mars robotics community, it is commonly presumed that the robotic rovers on Mars are controlled in a time-delayed joystick manner, wherein commands are sent to the rovers several if not many times per day, as new information is acquired from the rovers' sensors. However, the Mars robotics community has learned that such a brute-force joystick-control process is rather cumbersome, and it has developed much more elegant methods for robotic control of the rovers on Mars, with highly significant degrees of robotic autonomy.

Particularly, the Mars Exploration Rover (MER) team has demonstrated autonomy for the two robotic rovers Spirit & Opportunity to the level that practically all commands for a given Martian day (1 'sol' ≈ 24.6 hours) are delivered to each rover from Earth before the robot wakens from its power-conserving nighttime resting mode [Crisp et al., 2003, Squyres et al., 2004]. Each rover then follows the commanded sequence of moves for the entire sol, moving to desired locations, articulating its arm with its sensors to desired points in the workspace of the robot, and acquiring data from the cameras and chemical sensors. From an outsider's point of view, these capabilities may not seem to be significantly autonomous, in that all the commands are being sent from Earth, and the MER rovers are merely executing those commands. But the following feats deserve emphasis before judgement is made of the quality of the MER autonomy: this robot is on another planet with a complex surface to navigate and study; and all of the complex command sequence is sent to the robot the previous night for autonomous operation the next day. Sophisticated software and control systems are also part of the system, including the MER autonomous obstacle-avoidance system and the MER visual odometry & localization software (added to the systems after the rovers had been on Mars for several months [Squyres, 2004]). One should remember that there is a large team of human roboticists and geologists working here on the Earth in support of the MER missions, to determine science targets and robotic command sequences on a daily basis; after the sun sets for an MER rover, the rover mission team can determine the science priorities and the command sequence for the next sol in less than 4-5 hours (right after landing, this command sequencing took about 17 hours [Squyres, 2004]).

One future mission deserves special discussion for the technology developments described in this paper: the Mars Science Laboratory, planned for launch in 2009 (MSL’2009). A particular capability desired for this MSL’2009 mission will be to rapidly traverse to up to three geologically-different scientific points-of-interest within the landing ellipse. These three geologically-different sites will be chosen from Earth by analysis of relevant satellite imagery. Possible desired maximal traversal rates could range from 300-2000 meters/sol in order to reach each of the three points-of-interest in the landing ellipse in minimum time.

Given these substantial expected traversal rates of the MSL'2009 rover, autonomous obstacle avoidance [Goldberg et al., 2002] and autonomous visual odometry & localization [Olson et al., 2003] will be essential to achieve these rates, since otherwise the results would be rover damage and slow approaches to the science targets. Given such autonomy in the rapid traverses, it behooves us to endow the autonomous rover with sufficient scientific responsibility. Otherwise, the robotic rover exploration system might drive right past an important scientific target-of-opportunity on the way to the human-chosen scientific point-of-interest. Crawford & Tamppari [Crawford and Tamppari, 2002] and their NASA/Ames team summarize possible 'autonomous traverse science', in which science pancam and Mini-TES (Thermal Emission Spectrometer) image mosaics are autonomously obtained every 20-30 meters during a 300-meter traverse (in their example). They state that "there may be onboard analysis of the science data from the pancam and the mini-TES, which compares this data to predefined signatures of carbonates or other targets of interest. If detected, traverse may be halted and information relayed back to Earth." This onboard analysis of the science data is precisely the technology issue that we have been working towards solving. This paper is the first report to the general robotics community describing our progress towards giving a robotic astrobiologist some aspects of autonomous recognition of scientific targets-of-opportunity. This technology development may be neither sufficiently mature nor sufficiently necessary for deployment on the MSL'2009 mission, but it should find utility in missions beyond MSL'2009.

Before proceeding, we first note two related efforts in the development of autonomous recognition of scientific targets-of-opportunity for astrobiological exploration: firstly, the work on developing a Nomad robot to search for meteorites in Antarctica, led by the Carnegie Mellon University Robotics Institute [Apostolopoulos et al., 2000, Pedersen, 2001]; and secondly, the work by a group at NASA/Ames on developing a Geological Field Assistant (GFA) [Gulick et al., 2001, Gulick et al., 2002, Gulick et al., 2004]. From an algorithmic point-of-view, the uncommon-mapping technique presented in this paper attempts to identify interest points in a context-free, unbiased manner. In related work, [Heidemann, 2004] has studied the use of spatial symmetry of color pixel values to identify focus points in a context-free, unbiased manner.

Figure 1: Díaz Martínez & McGuire with the Cyborg Astrobiologist System on 3 March 2004, 10 meters from the outcrop cliff being studied during the first geological field mission near Rivas Vaciamadrid. We are taking notes prior to acquiring one of our last-of-the-day mosaics and its set of interest-point image chips. This is tripod position #2 shown in Fig. 6, nearest the cliffs.

2 The Cyborg Geologist & Astrobiologist System

Our ongoing effort in this area of autonomous recognition of scientific targets-of-opportunity for field geology and field astrobiology is beginning to mature as well. To date, we have developed and field-tested a GFA-like “Cyborg Astrobiologist” system [McGuire et al., 2004a, McGuire et al., 2004b, McGuire et al., 2005a, McGuire et al., 2005b] that now can:

  • Use human mobility to maneuver to and within a geological site and to follow suggestions from the computer as to how to approach a geological outcrop;

  • Use a portable robotic camera system to obtain a mosaic of color images;

  • Use a ‘wearable’ computer to search in real-time for the most uncommon regions of these mosaic images;

  • Use the robotic camera system to re-point at several of the most uncommon areas of the mosaic images, in order to obtain much more detailed information about these ‘interesting’ uncommon areas;

  • Use human intelligence to choose between the wearable computer’s different options for interesting areas in the panorama for closer approach; and

  • Repeat the process as often as desired, sometimes retracing a step of geological approach.

In the Mars Exploration Workshop in Madrid in November 2003, we demonstrated some of the early capabilities of our ‘Cyborg’ Geologist/Astrobiologist System [McGuire et al., 2004b]. We have been using this Cyborg system as a platform to develop computer-vision algorithms for recognizing interesting geological and astrobiological features, and for testing these algorithms in the field here on the Earth.

The half-human/half-machine 'Cyborg' approach (Fig. 1) uses human locomotion and human-geologist intuition/intelligence for taking the computer-vision algorithms to the field for teaching and testing, using a wearable computer. This is advantageous because we can concentrate on developing the 'scientific' aspects of autonomous discovery of features in computer imagery, as opposed to the more 'engineering' aspects of using computer vision to guide the locomotion of a robot through treacherous terrain. This means the development of the scientific vision system for the robot is effectively decoupled from the development of the locomotion system for the robot.

After the maturation and optimization of the computer-vision algorithms, we hope to transplant these algorithms from the Cyborg computer to the on-board computer of a semi-autonomous robot that will be bound for Mars or one of the interesting moons in our solar system. Field tests of such a robot have already begun with the Cyborg Astrobiologist's software for scientific autonomy: our software has been delivered to the robotic borehole-inspection system of the MARTE project (a practice mission in the summer of 2005 for tele-operated robotic drilling and tele-operated scientific studies in a Mars-like environment near the Rio Tinto, in Andalucia in southern Spain).

Figure 2: An image segmentation made by human geologist Díaz Martínez of the outcrop during the first mission to Rivas Vaciamadrid. Region 1 has a tan color and a blocky texture; Region 2 is subdivided by a vertical fault and has more red color and a more layered texture than Region 1; Region 3 is dominated by white and tan layering; and Region 4 is covered by vegetation. The dark & wet spots in Region 3 were only observed during the second mission, 3 months later. The Cyborg Geologist/Astrobiologist made its own image segmentations for portions of the cliff face that included the area of the white layering at the bottom of the cliff (Fig. 7).

Both of the field geologists on our team, Díaz Martínez and Ormö, have independently stressed the importance to field geologists of geological 'contacts' and the differences between the geological units that are separated by the geological contact. For this reason, in March 2003, we decided that the most important tool to develop at the beginning of our computer-vision algorithm development was 'image segmentation'. Such image-segmentation algorithms would allow the computer to break down a panoramic image into different regions (see Fig. 2 for an example), based upon similarity, and to find the boundaries or contacts between the different regions in the image, based upon difference. Much of the remainder of this paper discusses the first geological field trials with the wearable computer of the segmentation algorithm and the associated uncommon-map algorithm that we have implemented and developed. In the near future, we hope to use the Cyborg Astrobiologist system to test more advanced image-segmentation algorithms, capable of simultaneous color and texture image segmentation [Freixenet et al., 2004], as well as novelty-detection algorithms [Bogacz et al., 1999].

2.1 Image Segmentation, Uncommon Maps, Interest Maps, and Interest Points

With human vision, a geologist:

  • Firstly, tends to pay attention to those areas of a scene which are most unlike the other areas of the scene; and then,

  • Secondly, attempts to find the relation between the different areas of the scene, in order to understand the geological history of the outcrop.

The first step in this prototypical thought process of a geologist was our motivation for inventing the concept of uncommon maps. See Fig. 3 for a simple illustration of the concept of an uncommon map. We have not yet attempted to solve the second step in this prototypical thought process of a geologist, but it is evident from the formulation of the second step that human geologists do not immediately ignore the common areas of the scene. Instead, human geologists catalog the common areas and put them in the back of their minds for "higher-level analysis of the scene", or in other words, for determining explanations for the relations of the uncommon areas of the scene with the common areas of the scene.

Figure 3: For the simple, idealized image on the left, we show the corresponding uncommon map on the right. The whiter areas in the uncommon map are more uncommon than the darker areas in this map.

Prior to implementing the 'uncommon map' (the first step of the prototypical geologist's thought process), we needed a segmentation algorithm, in order to produce pixel-class maps to serve as input to the uncommon-map algorithm. We have implemented the classic co-occurrence histogram algorithm [Haralick et al., 1973, Haddon and Boyce, 1990]. For this work, we have not included texture information in either the segmentation algorithm or the uncommon-map algorithm. Currently, each of the three bands of color information is segmented separately, and the results are later merged in the interest map by summing three independent uncommon maps. In future work, advanced image-segmentation algorithms that simultaneously use color & texture could be developed for and tested on the Cyborg Astrobiologist System (e.g., the algorithms of Freixenet et al., 2004).
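To make the segmentation step concrete, here is a minimal Python sketch (with NumPy and SciPy) of co-occurrence-histogram segmentation of a single image band, in the spirit of [Haralick et al., 1973, Haddon and Boyce, 1990]. It is not the authors' field code: the quantization depth, the histogram smoothing, and the peak-picking window are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def cooccurrence_histogram(band, levels=64):
    """2-D histogram of quantized pixel values paired with their
    right-hand and downward neighbors (values assumed in 0..255)."""
    q = np.clip((band.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    hist = np.zeros((levels, levels))
    for a, b in ((q[:, :-1], q[:, 1:]), (q[:-1, :], q[1:, :])):
        np.add.at(hist, (a.ravel(), b.ravel()), 1.0)
    return hist

def segment_band(band, levels=64, n_classes=8):
    """Label each pixel with the nearest of the n_classes strongest
    histogram peaks; peaks near the diagonal mark homogeneous regions."""
    hist = gaussian_filter(cooccurrence_histogram(band, levels), sigma=1.5)
    is_peak = (hist == maximum_filter(hist, size=5)) & (hist > 0)
    rows, cols = np.nonzero(is_peak)
    strongest = np.argsort(hist[rows, cols])[::-1][:n_classes]
    centers = (rows[strongest] + cols[strongest]) / 2.0  # 1-D intensity centers
    q = np.clip(band.astype(float) / 256.0 * levels, 0, levels - 1)
    return np.argmin(np.abs(q[..., None] - centers), axis=-1)
```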

The concept of an 'uncommon map' is our invention, though it has probably been independently invented by other authors, since it is somewhat useful. In our implementation, the uncommon-map algorithm takes the top 8 pixel classes determined by the image-segmentation algorithm, and ranks each pixel class according to how many pixels there are in each class. The pixels in the class with the greatest number of members are numerically labelled as 'common', and the pixels in the class with the least number of members are numerically labelled as 'uncommon'. The 'uncommonness' hence ranges from 1 for a common pixel to 8 for an uncommon pixel, and we can therefore construct an uncommon map given any image-segmentation map. In our work, we construct several uncommon maps from the color image mosaic, and then we sum these uncommon maps together, in order to arrive at a final interest map.
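Given such a pixel-class map, the uncommon-map ranking described above reduces to a few lines. This sketch assumes the label conventions of the segmentation sketch above (classes numbered 0 to 7):

```python
import numpy as np

def uncommon_map(labels, n_classes=8):
    """Rank the pixel classes by area: the most populous class maps to 1
    ('common') and the least populous to n_classes ('uncommon')."""
    counts = np.bincount(labels.ravel(), minlength=n_classes)[:n_classes]
    ranks = np.empty(n_classes, dtype=int)
    ranks[np.argsort(counts)[::-1]] = np.arange(1, n_classes + 1)
    return ranks[labels]  # per-pixel 'uncommonness', 1..n_classes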

In this paper, we develop and test a simple, high-level concept of interest points of an image, which is based upon finding the centroids of the smallest (most uncommon) regions of the image. Such a ‘global’ high-level concept of interest points differs from the lower-level ‘local’ concept of Förstner interest points based upon corners and centers of circular features. However, this latter technique with local interest points is used by the MER team for their stereo-vision image matching and for their visual-odometry and visual-localization image matching [Goldberg et al., 2002, Olson et al., 2003, Nesnas et al., 1999]. Our interest point method bears somewhat more relation to the higher-level wavelet-based salient points technique [Sebe et al., 2003], in that they search first at coarse resolution for the image regions with the largest gradient, and then they use wavelets in order to zoom in towards the salient point within that region that has the highest gradient. Their salient point technique is edge-based, whereas our interest point is currently region-based. Since in the long-term, we have an interest in geological contacts, this edge-based & wavelet-based salient point technique could be a reasonable future interest-point algorithm to incorporate into our Cyborg Astrobiologist system for testing.

2.2 Hardware & Software for the Cyborg Astrobiologist

The non-human hardware of the Cyborg Astrobiologist system consists of:

  • a 667 MHz wearable computer (from ViA Computer Systems) with a ‘power-saving’ Transmeta ‘Crusoe’ CPU and 112 MB of physical memory,

  • an SV-6 Head Mounted VGA Display (from Tekgear , via the Spanish supplier Decom) that works well in bright sunlight,

  • a SONY ‘Handycam’ color video camera (model DCR-TRV620E-PAL), with a Firewire/IEEE1394 cable to the computer,

  • a thumb-operated USB finger trackball from 3G Green Green Globe Co., resupplied by ViA Computer Systems, and by Decom,

  • a small keyboard attached to the human’s arm,

  • a tripod for the camera, and

  • a Pan-Tilt Unit (model PTU-46-70W) from Directed Perception with a bag of associated power and signal converters.

The wearable computer processes the images acquired by the color digital video camera, to compute a map of interesting areas. The computations include: simple mosaicking by image-butting, as well as two-dimensional histogramming for image segmentation [Haralick et al., 1973, Haddon and Boyce, 1990]. This image segmentation is independently computed for each of the Hue, Saturation, and Intensity (H,S,I) image planes, resulting in three different image-segmentation maps. These image-segmentation maps were used to compute ‘uncommon’ maps (one for each of the three (H,S,I) image-segmentation maps): each of the three resulting uncommon maps gives highest weight to those regions of smallest area for the respective (H,S,I) image planes. Finally, the three (H,S,I) uncommon maps are added together into an interest map, which is used by the Cyborg system for subsequent interest-guided pointing of the camera.
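As an illustration of the 'image-butting' mosaic step, a minimal sketch under the assumption that the sub-images arrive as a raster-ordered list of equal-sized arrays (the system's actual interface to the camera is not reproduced here):

```python
import numpy as np

def butt_mosaic(subimages, n_cols):
    """Abut equal-sized sub-images edge-to-edge in raster order, with no
    blending or overlap correction -- the simple mosaicking described above."""
    rows = [np.hstack(subimages[i:i + n_cols])
            for i in range(0, len(subimages), n_cols)]
    return np.vstack(rows)
```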

After segmenting the mosaic image (Fig. 7), it becomes obvious that a very simple method to find interesting regions in an image is to look for those regions that have a significant number of uncommon pixels. We accomplish this by (Fig. 5): first, creating an uncommon map based upon a linear reversal of the segment-area ranking; second, adding the three uncommon maps (for H, S, & I) together to form an interest map; and third, blurring this interest map with a Gaussian smoothing kernel.

Based upon the three largest peaks in the blurred/smoothed interest map, the Cyborg system then guides the Pan-Tilt Unit to point the camera at each of these three positions to acquire high-resolution color images of the three interest points (Fig. 4). By extending a simple image-acquisition and image-processing system to include robotic and mosaicking elements, we were able to conclusively demonstrate that the system can make reasonable decisions by itself in the field for robotic pointing of the camera.
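Putting the pieces together, the interest-point selection can be sketched as follows, reusing segment_band and uncommon_map from the sketches above; the smoothing width and the peak-separation window are illustrative stand-ins for the field system's actual parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def interest_points(h, s, i, sigma=5.0, n_points=3):
    """Sum the three (H,S,I) uncommon maps into an interest map, blur it,
    and return the n_points strongest peaks as (row, col) pointing targets."""
    interest = sum(uncommon_map(segment_band(band))
                   for band in (h, s, i)).astype(float)
    smooth = gaussian_filter(interest, sigma=sigma)        # illustrative width
    is_peak = (smooth == maximum_filter(smooth, size=15))  # local maxima
    ys, xs = np.nonzero(is_peak)
    best = np.argsort(smooth[ys, xs])[::-1][:n_points]
    return list(zip(ys[best], xs[best]))
```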

3 Descriptive Summaries of the Field Site and of the Expeditions

On March 3rd and June 11th, 2004, three of the authors (McGuire, Díaz Martínez & Ormö) tested the "Cyborg Astrobiologist" system for the first time at a geological site: the gypsum-bearing, southward-facing stratified cliffs near the "El Campillo" lake of Madrid's Southeast Regional Park, outside the suburb of Rivas Vaciamadrid. Due to the significant storms in the 3 months between the two missions, there were 2 dark & wet areas in the gypsum cliffs that were visible only during the second mission. In Fig. 2, we show for reference the segmentation of the outcrop (during the first mission) according to the human geologist, Díaz Martínez.

The computer was worn on McGuire's belt, and typically took 3-5 minutes to acquire and compose a mosaic image from a rectangular grid of sub-images, with the grid dimensions set for each run. The sub-images were downsampled in both directions by a factor of 4-8 during these tests, relative to the camera's original sub-image dimensions.

Several mosaics were acquired of the cliff face from a distance of about 300 meters, and the computer automatically determined the three most interesting points in each mosaic. Then, the wearable computer automatically repointed the camera towards each of the three interest points, in order to acquire non-downsampled color images of the region around each interest point in the image. All the original mosaics, all the derived mosaics and all the interest-point subimages were then saved to hard disk for post-mission study.

Two other tripod positions were chosen for acquiring mosaics and interest-point image-chip sets. At each of the three tripod positions, 2-3 mosaic images and interest-point image-chip sets were acquired. One of the chosen tripod locations was about 60 meters from the cliff’s face; the other was about 10 meters (Fig. 1) from the cliff face.

During the 2nd mission, at distances of 300 meters and 60 meters, the system most often determined the wet spots (Fig. 4) to be the most interesting regions on the cliff face. This was encouraging to us, because we also found these wet spots to be the most interesting regions. These dark & wet regions were interesting to us partly because they give information about the development of the outcrop. Even if the relatively small spots were only dark, and not wet (i.e., dark dolerite blocks, or a brecciated basalt), their uniqueness in the otherwise white & tan outcrop would have drawn our immediate attention. Additionally, even if this had been our first trip to the site, and if the dark spots had been present during this first trip, these dark regions would have captured our attention for the same reasons. The fact that these dark spots had appeared after our first trip and before the second trip was not of paramount importance in grabbing our interest (though the 'sudden' appearance of the dark spots between the two missions did arouse our higher-order curiosity).

After the tripod position at 60 meters distance, we chose the next tripod position to be about 10 meters from the cliff face (Fig. 1). During this 'close-up' study of the cliff face, we intended to focus the Cyborg Astrobiologist exploration system upon the two points that it had found most interesting from the more distant tree grove, namely the two wet and dark regions of the lower part of the cliff face. By moving from 60 meters distance to 10 meters distance, and by focusing at the closer distance on the interest points determined at the larger distance, we wished to simulate how a truly autonomous robotic system would approach the cliff face (see the map in Fig. 6). Unfortunately, due to a combination of a lack of human foresight in the choice of tripod position and a lack of more advanced software algorithms to mask out the surrounding & less interesting regions (see discussion in Section 4), for one of the two dark spots the Cyborg system only found interesting points on the undarkened periphery of the dark & wet stains. Furthermore, the other dark spot was spatially complex, being subdivided into several regions, with some green and brown foliage covering part of the mosaic. Therefore, in both close-up cases the value of the interest mapping is debatable. This interest mapping could be improved in the future, as we discuss in Section 4.2.

4 Results

4.1 Results from the First Geological Field Test

As first observed during the first mission to Rivas on March 3rd, the southward-facing cliffs at Rivas Vaciamadrid consist of mostly tan-colored surfaces, with some white veins or layers, and with significant shadow-causing three-dimensional structure. The computer-vision algorithms performed adequately for a first visit to a geological site, but they need to be improved in the future. As decided at the end of the first mission by the mission team, the improvements include: shadow-detection and shadow-interpretation algorithms, and segmentation of the images based upon microtexture.

Regarding the last of these improvements, we determined that, due to the very monochromatic & slightly-shadowy nature of the imagery, the Cortical Interest Map algorithm counterintuitively concentrated its interest on differences in intensity, and tended to ignore hue and saturation.

After the first geological field test, we spent several months studying the imagery obtained during this mission, and fixing various further problems that were only discovered after the first mission. Though we had hoped that the first mission to Rivas would have been more like a science mission, in reality it was more of an engineering mission.

4.2 Results from the Second Geological Field Test

In Fig. 4, from the tree grove at a distance of 60 meters, the Cyborg Astrobiologist system found the dark & wet spot on the right side to be the most interesting, the dark & wet spot on the left side to be the second most interesting, and the small dark shadow in the upper left-hand corner to be the third most interesting. For the first two interest points (the dark & wet spots), it is apparent from the uncommon map for intensity pixels in Fig. 5 that these points are interesting due to their relatively remarkable intensity values. By inspection of Fig. 7, we see that the pixels residing in the white segment of the intensity-segmentation mosaic are unusual because they are a cluster of very dim pixels (relative to the brighter red, blue and green segments). Within the dark & wet spots, we observe that these particular points in the white segment of the intensity segmentation in Fig. 7 are interesting because they reside in the shadowy areas of the dark & wet spots. We interpret the interest in the third interest point as being due to the juxtaposition of the small green plant with the shadowing in this region; the interest in this point is significantly smaller than for the two other interest points.

More advanced software could be developed to better handle the close-up, real-time interest-map analysis of the imagery acquired at the close-up tripod position (10 meters distance from the cliff; not shown here). Here are some options to be included in such software development:

  • Add hardware & software to the Cyborg Astrobiologist so that it can make intelligent use of its zoom lens. We plan to use the camera’s LANC communication interface to control the zoom lens with the wearable computer. With such software for intelligent zooming, the system could have corrected the human’s mistake in tripod placement and decided to zoom further in, to focus only on the shadowy part of the dark & wet spot (which was determined to be the most interesting point at a distance of 60 meters), rather than the periphery of the entire dark & wet spot.

  • Enhance the Cyborg Astrobiologist system so that it has a memory of the image segmentations performed at a greater distance or at a lower magnification of the zoom lens. Then, when moving to a closer tripod position or a higher level of zoom-magnification, register the new imagery or the new segmentation maps with the coarser-resolution imagery and segmentation maps. Finally, tell the system to mask out, ignore, or deemphasize those parts of the higher-resolution imagery which were part of the low-interest segments of the coarser, more distant segmentation maps, so that it concentrates on those features that it determined to be interesting at coarse resolution and greater distance (see the sketch after this list).
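The second option might look like the following hypothetical sketch, which assumes that the coarse-scale interest map has been stored and that the crop rectangle relating the two fields of view is known from registration or from the pan-tilt/zoom geometry (neither of which is implemented yet):

```python
from scipy.ndimage import zoom

def coarse_interest_mask(coarse_interest, crop, fine_shape, keep_fraction=0.5):
    """Upsample the stored coarse-scale interest map over the new close-up
    field of view, keeping only the pixels that were relatively interesting
    at the coarser scale (the threshold is an illustrative assumption)."""
    y0, y1, x0, x1 = crop  # coarse-map region now filling the close-up view
    patch = coarse_interest[y0:y1, x0:x1].astype(float)
    fine = zoom(patch, (fine_shape[0] / patch.shape[0],
                        fine_shape[1] / patch.shape[1]), order=1)
    return fine >= keep_fraction * fine.max()  # True where analysis proceeds
```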

Figure 4: Mosaic image of a three-by-four set of grayscale sub-images acquired by the Cyborg Astrobiologist at the beginning of its second expedition. The three most interesting points were subsequently revisited by the camera in order to acquire full-color, higher-resolution images of these points-of-interest. The colored points and rectangles represent the points that the Cyborg Astrobiologist determined (on location) to be most interesting; green is most interesting, blue is second most interesting, and red is third most interesting. The images were taken and processed in real-time between 1:25PM and 1:35PM local time on 11 June 2004, about 60 meters from some gypsum-bearing, southward-facing cliffs near the "El Campillo" lake of Madrid's Southeast Regional Park, outside of Rivas Vaciamadrid. See Figs. 5 & 7 for some details about the real-time image processing that was done in order to determine the location of the interest points in this figure.

5 Discussion & Conclusions

Both of the human geologists on our team concur with the judgement of the Cyborg Astrobiologist software system that the two dark & wet spots on the cliff wall were the most interesting spots during the second mission. However, the two geologists also state that this largely depends on the aims of study for the geological field trip; if the aim of the study is to search for hydrological features, then these two dark & wet spots are certainly interesting. One question which we have thus far left unstudied is: "What would the Cyborg Astrobiologist system have found interesting during the second mission if the two dark & wet spots had not been present?" It is possible that it would again have found some dark shadow particularly interesting, but with the improvements made to the system between the first and second missions, it is also possible that it would have found a different feature of the cliff wall more interesting.

5.1 Outlook

The NEO programming for this Cyborg Geologist project was initiated with the SONY Handycam in April 2002. The wearable computer arrived in June 2003, and the head-mounted display arrived in November 2003. We now have a reliably functioning Cyborg Geologist system of human, hardware, and software, which is partly robotic with its Pan-Tilt camera mount. This robotic extension allows the camera to be pointed repeatedly, precisely & automatically in different directions.

Based upon the significantly-improved performance of the Cyborg Astrobiologist system during the 2nd mission to Rivas in June 2004, we conclude that the system is now debugged sufficiently that it can produce studies of the utility of particular computer-vision algorithms for geological deployment in the field. (Note in proofs: after this paper was originally written, we did some tests at a second field site, in Guadalajara, Spain, with the same algorithm and the same parameter settings. Despite the change in character of the geological imagery from the first field site, in Rivas Vaciamadrid, to the second field site, the uncommon-mapping technique again performed rather well, agreeing with post-mission human-geologist assessment 68% of the time, with a 32% false-positive rate and a 32% false-negative rate; see [McGuire et al., 2005b] for more detail. This success rate is qualitatively comparable to the results from the first mission in Rivas, and is evidence that the system performs in a context-free, unbiased manner.) We have outlined some possibilities for improvement of the system based upon the second field trip, particularly in the systems-level algorithms needed to more intelligently drive the approach of the Cyborg or robotic system towards a complex geological outcrop. These possible systems-level improvements include: hardware & software for intelligent use of the camera's zoom lens, and a memory of the image segmentation performed at greater distance or lower magnification of the zoom lens.

6 Acknowledgements

P. McGuire, J. Ormö and E. Díaz Martínez would all like to thank the Ramon y Cajal Fellowship program of the Spanish Ministry of Education and Science. Many colleagues have made this project possible through their technical assistance, administrative assistance, or scientific conversations. We give special thanks to Kai Neuffer, Antonino Giaquinta, Fernando Camps Martínez, and Alain Lepinette Malvitte for their technical support. We are indebted to Gloria Gallego, Carmen González, Ramon Fernández, Coronel Angel Santamaria, and Juan Pérez Mercader for their administrative support. We acknowledge conversations with Virginia Souza-Egipsy, María Paz Zorzano Mier, Carmen Córdoba Jabonero, Josefina Torres Redondo, Víctor R. Ruiz, Irene Schneider, Carol Stoker, Paula Grunthaner, Maxwell D. Walter, Fernando Ayllón Quevedo, Javier Martín Soler, Jörg Walter, Claudia Noelker, Gunther Heidemann, Robert Rae, and Jonathan Lunine. The field work by J. Ormö was partially supported by grants from the Spanish Ministry of Education and Science (AYA2003-01203 and CGL2004-03215). The equipment used in this work was purchased by grants to our Center for Astrobiology from its sponsoring research organizations, CSIC and INTA.

References