GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger

03/01/2018 ∙ by Elliott Donlon, et al. ∙ MIT

This work describes the development of a high-resolution tactile-sensing finger for robot grasping. This finger, inspired by previous GelSight sensing techniques, features an integration that is slimmer, more robust, and with more homogeneous output than previous vision-based tactile sensors. To achieve a compact integration, we redesign the optical path from illumination source to camera by combining light guides and an arrangement of mirror reflections. The optical path can be parametrized with geometric design variables, and we describe the tradeoffs between the thickness of the finger, the depth of field of the camera, and the size of the tactile sensing pad. The sensor can sustain the wear from continuous use--and abuse--in grasping tasks by combining tougher materials for the compliant soft gel, a textured fabric skin, a structurally rigid body, and a calibration process that ensures homogeneous illumination and contrast of the tactile images during use. Finally, we evaluate the sensor's durability along four metrics that capture signal quality during more than 3000 grasping experiments.


I Introduction

Tight integration of sensing hardware and control is key to mastery of manipulation in cluttered, occluded, or dynamic environments. Artificial tactile sensors, however, are challenging to integrate and maintain: they are most useful when located at the distal end of the manipulation chain (where space is tight); they are subject to high forces and wear (which reduces their life span or requires tedious maintenance procedures); and they require instrumentation capable of routing and processing high-bandwidth data.

Among the many tactile sensing technologies developed in the last decades [1], vision-based tactile sensors are a promising variant. They provide high spatial resolution with compact instrumentation and are synergistic with recent image-based deep learning techniques. Current implementations of these sensors, however, are often bulky and/or fragile [2, 3, 4]. Robotic grasping benefits from sensors that are compactly integrated and rugged enough to sustain the shear and normal forces involved in grasping.

To address this need we present a tactile-sensing finger, GelSlim, designed for grasping in cluttered environments (Fig. 1). This finger, similar to other vision-based tactile sensors, uses a camera to measure tactile imprints (Fig. 2).

Fig. 1: GelSlim fingers picking a textured flashlight from clutter with the corresponding tactile image at right. The sensor is calibrated to normalize output over time and reduce the effects of wear on signal quality. The flashlight, though occluded, is shown for the reader’s clarity.
Fig. 2: Tactile imprints. From left to right: The MCube Lab’s logo, 80/20 aluminum extrusion, a PCB, a screw, a Lego brick, and a key.
Fig. 3: GelSlim finger. Pointed adaptation of the GelSight sensor featuring a larger 50 mm × 50 mm sensor pad and a strong, slim construction.

In this work we present:

  • Design of a vision-based high-resolution tactile-sensing finger with the form factor necessary to access objects in clutter and the toughness to sustain the forces involved in everyday grasping (Section IV). The sensor outputs raw images of the sensed surface, which encode the shape and texture of the object at contact.

  • Calibration framework to regularize the sensor output over time and across individual sensors (Section V). We suggest four metrics to track the quality of the tactile feedback.

  • Evaluation of the sensor’s durability by monitoring its image quality over more than 3000 grasps (Section V).

The long-term goal of this research is to enable reactive grasping and manipulation. The use of tactile feedback in the control loop of robotic manipulation is key for reliability. Our motivation stems from efforts in developing bin-picking systems to grasp novel objects in cluttered scenes and from the need to observe the geometry of contact to evaluate and control the quality of a grasp [5, 6].

In cluttered environments like a pantry or a warehouse storage cell, as in the Amazon Robotics Challenge [7], a robot faces the challenge of singulating target objects from a tightly-packed collection of items. Cramped spaces and clutter lead to frequent contact with non-target objects. Fingers must be compact and, when possible, pointed to squeeze between target and clutter (Fig. 3). To make use of learning approaches, tactile sensors must also be resilient to the wear and tear from long experimental sessions, which often involve unexpected collisions. Finally, sensor calibration and signal conditioning are key to the consistency of tactile feedback as the sensor's physical components decay.

II Related Work

The body of literature on tactile sensing technologies is large [1, 8]. Here we discuss relevant works related to the technologies used by the proposed sensor: vision-based tactile sensors and GelSight sensors.

II-A Vision-based tactile sensors

Cameras provide high-spatial-resolution 2D signals without the need for many wires. Their sensing field and working distance can also be tuned with an optical lens. For these reasons, cameras are an interesting alternative to several other sensing technologies, which tend to have higher temporal bandwidth but more limited spatial resolution.

Ohka et al. [9] designed an early vision-based tactile sensor. It consists of a flat rubber sheet, an acrylic plate, and a CCD camera to measure three-dimensional force. The prototyped sensor, however, was too large to be realistically integrated into a practical end-effector. GelForce [10], a tactile sensor shaped like a human finger, used a camera to track two layers of dots on the sensor surface to measure both the magnitude and direction of an applied force.

Instead of measuring force, some vision-based tactile sensors focus on measuring geometry, such as edges, texture, or the 3D shape of the contact surface. Ferrier and Brockett [11] proposed an algorithm to reconstruct the 3D surface by analyzing the distribution of the deformation of a set of markers on a tactile sensor. This principle has inspired several other contributions. The TacTip sensor [2] uses a similar principle to detect edges and estimate the rough 3D geometry of the contact surface. Mounted on a GR2 gripper, the sensor gave helpful feedback when reorienting a cylindrical object in hand [12]. Yamaguchi [13] built a tactile sensor with a clear silicone gel that can be mounted on a Baxter hand. Unlike the previous sensors, Yamaguchi's also captures local color and shape information, since the sensing region is transparent. The sensor was used to detect slip and estimate contact force.

II-B GelSight sensors

The GelSight sensor is a vision-based tactile sensor that measures the 2D texture and 3D topography of the contact surface. It utilizes a piece of elastomeric gel with an opaque coating as the sensing surface, and a webcam above the gel to capture contact deformation from changes in lighting contrast as reflected by the opaque coating. The gel is illuminated by color LEDs at inclined angles and from different directions. The resulting colored shading can be used to reconstruct the 3D geometry of the gel deformation. The original, larger GelSight sensor [14, 15] was designed to measure the 3D topography of the contact surface with micrometer-level spatial resolution. Li et al. [3] designed a cuboid fingertip version that could be integrated into a robot finger. Li's sensor has a sensing area a few centimeters on a side, and can measure fine 2D texture and coarse 3D information. A newer version of the GelSight sensor was more recently proposed by Dong et al. [4] to improve 3D geometry measurements and standardize the fabrication process. A detailed review of different versions of GelSight sensors can be found in [16].

GelSight-like sensors with rich 2D and 3D information have been successfully applied in robotic manipulation. Li et al. [3] used GelSight's localization capabilities to insert a USB connector, where the sensor used the texture of the characteristic USB logo to guide the insertion. Izatt et al. [17] explored the use of the 3D point cloud measured by a GelSight sensor in a state-estimation filter to find the pose of a grasped object in a peg-in-hole task. Dong et al. [4] used the GelSight sensor to detect slip from variations in the 2D texture of the contact surface in a robot picking task. The 2D image structure of the output from a GelSight sensor makes it a good fit for deep-learning architectures. GelSight sensors have also been used to estimate grasp quality [18].

II-C Durability of tactile sensors

Frictional wear is an issue intrinsic to tactile sensors. Contact forces and torques during manipulation are significant and can be harmful to both the sensor surface and its inner structure. Vision-based tactile sensors are especially sensitive to frictional wear, since they rely on the deformation of a soft surface for their sensitivity. These sensors commonly use some form of soft silicone gel, rubber or other soft material as a sensing surface [10, 19, 13, 4].

To enhance the durability of the sensor surface, researchers have investigated using protective skins such as plastic [13], or making the sensing layer easier to replace by using 3D printing techniques with soft materials [19].

Another mechanical weakness of vision-based tactile sensors is the adhesion between the soft sensing layer and its stronger supporting layer. Most sensors discussed above either use silicone tape or rely on the adhesive properties of the silicone rubber itself, which can be insufficient under the practical shear forces involved in picking and lifting objects. The wear effects on these sensors are especially relevant if one attempts to use them in a data-driven/learning context [13, 18].

Durability is key to the practicality of a tactile sensor; however, none of the works above provides a quantitative analysis of sensor durability over usage.

III Design Goals

Fig. 4: The construction of a GelSight sensor. A general integration of a GelSight sensor in a robot finger requires three components: camera, light, and gel, in a particular arrangement. Li's original fingertip schematic [3] is shown at left with our GelSlim at right.

In a typical GelSight-like sensor, a clear gel with an opaque outer membrane is illuminated by a light source and captured by a camera (Fig. 4). The position of each of these elements depends on the specific requirements of the sensor. Typically, for ease of manufacturing and optical simplicity, the camera's optical axis is normal to the gel (left of Fig. 4). To reconstruct 3D geometry using photometric stereo techniques [15], at least three colors of light must be directed across the gel from different directions.
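For intuition on why at least three independent illumination directions are needed, the following minimal sketch runs classic photometric stereo on three images taken under known light directions; the directions, array shapes, and function names are illustrative assumptions, not the GelSlim pipeline:

    import numpy as np

    # Three assumed unit lighting directions (one per row). With fewer than
    # three independent directions, the per-pixel system below is
    # underdetermined and surface normals cannot be recovered.
    L = np.array([[ 1.0,  0.0,  1.0],
                  [-0.5,  0.87, 1.0],
                  [-0.5, -0.87, 1.0]])
    L /= np.linalg.norm(L, axis=1, keepdims=True)

    def normals_from_photometric_stereo(images):
        """images: (3, H, W) float array, one image per light direction."""
        h, w = images.shape[1:]
        I = images.reshape(3, -1)                  # stack pixels: (3, H*W)
        G, *_ = np.linalg.lstsq(L, I, rcond=None)  # solve L @ G = I per pixel
        albedo = np.linalg.norm(G, axis=0) + 1e-8  # reflectance magnitude
        normals = (G / albedo).T.reshape(h, w, 3)  # unit surface normals
        return normals, albedo.reshape(h, w)

Solving the same per-pixel system with only one or two rows of L leaves the normal direction ambiguous, which is why GelSight-style 3D reconstruction needs light from at least three directions.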

Both of these geometric constraints, the camera placement and the illumination path, are counterproductive to slim robot-finger integration, and existing sensor implementations are accordingly cuboid. In most manipulation applications, successful grasping requires fingers with the following qualities:

  • Compactness allows fingers to singulate objects from clutter by squeezing between them or separating them from the environment.

  • Uniform Illumination makes sensor output consistent across as much of the gel pad as possible.

  • Large Sensor Area extends the area of the tactile cues, both where there is contact and where there is not. This can provide better knowledge of the state of the grasped object and, ultimately, enhanced controllability.

  • Durability affords signal stability over the sensor's life span. This is especially important for data-driven techniques that build models from experience.

In this paper we propose a redesign of the form, materials, and processing of the GelSight sensor to turn it into a GelSight finger, yielding a more useful finger shape with a more consistent and calibrated output (right of Fig. 4). The following sections describe the geometric and optical tradeoffs in its design (Section IV), as well as the process to calibrate and evaluate it (Section V).

IV Design and Fabrication

Fig. 5: Texture in the sensor's fabric skin improves signal strength. When an object with no texture is grasped against the gel with no fabric, the signal is very low (a-b). The signal improves with the textured fabric skin (c-d). The difference stands out clearly when processed with Canny edge detection.

To realize the design goals in Section III, we propose the following changes to a standard design of a vision-based GelSight-like sensor: 1) Photometric stereo for 3D reconstruction requires precise illumination. Instead, we focus on recovering texture and contact surface, which will allow more compact light-camera arrangements. 2) The softness of the gel plays a role in the resolution of the sensor, but is also damaging to its life span. We will achieve higher durability by protecting the softest component of the finger, the gel, with textured fabric. 3) Finally, we will improve the finger’s compactness, illumination uniformity, and sensor pad size with a complete redesign of the sensor optics.

IV-A Gel Materials Selection

A GelSight sensor’s gel must be elastomeric, optically clear, soft, and resilient. Gel hardness represents a tradeoff between spatial resolution and strength. Maximum sensitivity and resolution is only possible when gels are very soft, but their softness yields two major drawbacks: low tensile strength and greater viscoelasticity. Given our application’s lesser need for spatial resolution, we make use of slightly harder, more resilient gels compared to other Gelsight sensors [3, 4]. Our final gel formulation is a two-part silicone (XP-565 from Silicones Inc.) mixed in a 15:1 ratio of parts A to B. The outer surface of our gel is coated with a specular silicone paint using a process developed by Yuan et al[16].

The surface is covered with a stretchy, loose-weave fabric to prevent damage to the gel while increasing signal strength. Signal strength is proportional to the deformation caused by pressure on the gel surface. Because the patterned texture of the fabric lowers the contact area between object and gel, pressure is increased to the point where the sensor can detect the contact patch of flat objects pressed against the flat gel (Fig. 5).

IV-B Sensor Geometry Design Space

Fig. 6: The design space of a single-reflection GelSight sensor. Based on the camera's depth of field and viewing angle, the camera must lie at some distance from the gel. These parameters, along with the mirror and camera angles, determine the thickness of the finger and the size of the gel pad. The virtual camera created by the mirror is drawn for visualization purposes.

We change the sensor’s form factor by using a mirror to reflect the gel image back to the camera. This allows us to have a larger sensor pad by placing the camera farther away while also keeping the finger comparatively thin. A major component of finger thickness is the optical region with thickness shown in Fig. 6, which is given by the trigonometric relation:

(1)

where is the camera’s field of view, is mirror angle, is the camera angle relative to the base, and is the length of the gel. is given by the following equation and also relies on the disparity between the shortest and longest light path from the camera (depth of field):

(2)

Together, the design requirements and , vary with the design variables, and , and are constrained by the camera’s depth of field: and viewing angle: . These design constraints ensure that both near and far edges given by (2) are in focus and that the gel is maximally sized and the finger is minimally thick.
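As a design sanity check, the two constraints can be evaluated directly; in the sketch below all numeric values are hypothetical placeholders, and the flat-gel approximation ignores the oblique reflection geometry of Fig. 6:

    import math

    # Hypothetical design values (not from the paper), measured from the
    # virtual camera created by the mirror.
    d_near, d_far = 60.0, 110.0     # mm, distance to near/far gel edges
    dof_min, dof_max = 50.0, 120.0  # mm, camera's in-focus range
    fov = math.radians(60)          # camera field of view
    gel_length = 50.0               # mm, desired gel pad size

    # Depth-of-field constraint: both gel edges must be in focus.
    assert dof_min <= d_near and d_far <= dof_max, "gel not fully in focus"

    # Viewing-angle constraint: the gel must fit inside the field of view.
    subtended = 2 * math.atan((gel_length / 2) / d_near)
    assert subtended <= fov, "gel exceeds camera field of view"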

IV-C Optical Path: Photons From Source, to Gel, to Camera

Fig. 7: The journey of a light ray through the finger. The red line denoting the light ray is: 1) Emitted by two compact, high-powered LEDs on each side. 2) Routed inside acrylic guides via total internal reflection. 3) Redistributed to be parallel and bounced toward the gel pad by a parabolic mirror. 4) Reflected off a mirror surface to graze across the gel. 5) Reflected up by an object touching the gel. 6) Reflected to the camera by a flat mirror (not shown in the figure).

Our method of illuminating the gel makes three major improvements relative to previous sensors: a slimmer fingertip, more even illumination, and a larger gel pad. Much like Li's original GelSight finger design [3], we use acrylic wave guides to move light throughout the sensor with fine control over the angle of incidence across the gel (Fig. 7). However, our design moves the LEDs used for illumination farther back in the finger by using an additional reflection, thus allowing our finger to be slimmer at the tip.

The light cast on the gel originates from a pair of high-powered, neutral-white, surface-mount LEDs (OSLON SSL 80) on each side of the finger. Light rays stay inside the acrylic wave guide due to total internal reflection, a consequence of the difference in refractive index between acrylic and air. Optimally, light rays would be emitted parallel so as not to lose intensity as light is cast across the gel. However, light emitters are usually point sources. A line of LEDs, as in Li's sensor, helps to distribute illumination evenly across one dimension, but intensity still decays along the length of the sensor.
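As a rough check on the wave-guide principle, the critical angle for total internal reflection follows from Snell's law; the refractive index below is a typical handbook value for acrylic, not a measured property of this sensor:

    import math

    n_acrylic = 1.49  # typical refractive index of cast acrylic (assumed)
    n_air = 1.0

    # Rays striking the acrylic-air interface beyond this angle from the
    # surface normal are totally internally reflected and stay in the guide.
    theta_critical = math.degrees(math.asin(n_air / n_acrylic))
    print(f"critical angle ~ {theta_critical:.1f} degrees")  # ~42 degrees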

Our approach uses a parabolic reflection (Step 3 in Fig. 7) to ensure that light rays entering the gel pad are close to parallel. The two small LEDs are approximated as a single point source and placed at the parabola’s focus. Parallel light rays bounce across the gel via a hard reflection. Hard reflections through acrylic wave guides are accomplished by painting those surfaces with mirror finish paint.

When an object makes contact with the fabric over the gel pad, it creates a pattern of light and dark spots as the specular gel interacts with the grazing light. This image of light and dark spots is transmitted back to the camera off a front-surface glass mirror. The camera (a Raspberry Pi Spy Camera) was chosen for its small size, low price, high frame rate and resolution, and good depth of field.

IV-D Lessons Learned

For robotic system integrators, or those interested in designing their own GelSight sensors, the following is a collection of small but important lessons we learned:

  1. Mirror: Back-surface mirrors create a “double image” from reflections off the front and back surfaces, especially at the reflection angles we use in our sensor. Glass front-surface mirrors give a sharper image.

  2. Clean acrylic: Even finger oils on the surface of a wave guide can interrupt total internal reflection. Keeping the acrylic clean preserves maximum illumination efficiency.

  3. Laser-cut acrylic: Acrylic pieces cut by laser exhibit stress cracking at edges after contacting solvents from glue or mirror paint. Cracks break the optical continuity in the substrate and ruin the guide. Stresses can be relieved by annealing the parts first.

  4. LED choice: This LED was chosen for its high luminous efficacy (103 lm/W), compactness (3 mm × 3 mm), and small viewing angle (80°). The small viewing angle directs more light into the thin wave guide.

  5. Gel paint type: From our experience in this configuration, a semi-specular gel coating provides a higher-contrast signal than Lambertian gel coatings. Yuan et al. [16] describe the different types of coatings and how to manufacture them.

  6. Affixing silicone gel: When affixing the silicone gel to the substrate, most adhesives we tried made the images hazy or did not acceptably adhere to either the silicone or the substrate. We found that Adhesives Research ARclear 93495 works well. Our gel-substrate bond is also stronger than those of other gel-based sensors because of its comparatively large contact area.

Some integration lessons revolve around the use of a Raspberry Pi spy camera. It enables a very high data rate but requires a 15-pin Camera Serial Interface (CSI) connection with the Raspberry Pi. Since the GelSlim sensor was designed for use on a robotic system where movement and contact are part of normal operation, the processor (Raspberry Pi) is placed away from the robot manipulator. We extended the camera's fragile ribbon cable by first adapting it to an HDMI cable inside the finger, then passing that HDMI cable along the kinematic chain of the robot. Extending the connection this way allows it to be several meters long, mechanically and electrically protects the contacts, and routes power to the LEDs through the same cable.

The final integration of the sensor in our robot finger also features a rotating joint to change the angle of the finger tip relative to the rest of the finger body. This movement does not affect the optical system and allows us to more effectively grasp a variety of objects in clutter.

There are numerous ways to continue improving the sensor’s durability and simplify the sensor’s fabrication process. For example, while the finger is slimmer, it is not smaller. It will be a challenge to make integrations sized for smaller robots due to camera field of view and depth of field constraints. Additionally, our finger has an un-sensed, rigid tip that is less than ideal for two reasons: it is the part of the finger with the richest contact information, and its rigidity negatively impacts the sensor’s durability. To decrease contact forces applied due to this rigidity, we will add compliance to the finger-sensor system.

IV-E Gel Durability Failures

We experimented with several ways to protect the gel surface before selecting a fabric skin. Most non-silicone coatings will not stick to the silicone bulk, so we tested various types of filled and non-filled silicones. Because this skin coats the outside, using filled (tougher, non-transparent) silicones is an option. Note, however, that any thickness added outside the specular paint increases the mechanical impedance of the gel and thus decreases resolution. To deposit a thin layer onto the bulk, we diluted filled, flowable silicone adhesive with NOVOCS silicone thinner from Smooth-On Inc. We found that using solvent in proportions greater than 2:1 (solvent:silicone) caused the gel to wrinkle (possibly because solvent diffused into the gel and caused expansion).

Using a non-solvent approach to deposit thin layers, such as spin coating, is promising, but we did not explore this path. Furthermore, thin silicone coatings often rubbed off after a few hundred grasps, signaling that they did not adhere to the gel surface effectively. Plasma pre-treatment of the silicone surface could more effectively bond substrate and coating, but we were unable to explore this route.

V Sensor Calibration

The consistency of sensor output is key for sensor usability. The raw image from a GelSlim sensor right after fabrication has two intrinsic issues: non-uniform illumination and a strong perspective distortion. In addition, the sensor image stream may change during use due to small deformations of the hardware, compression of the gel, or camera shutter speed fluctuations. To improve the consistency of the signal we introduce a two-step calibration process, illustrated in Fig. 8 and Fig. 9.

Fig. 8: Calibration Step 1 (manufacture correction): capture a raw image (a1) against a calibration pattern with four rectangles and a non-contact image (b1); calculate the transform matrix T from image (a1); apply operation 1, image warping and cropping, to images (a1) and (b1) to get (a2) and (b2); apply a Gaussian filter to (b2) to get the background illumination B; apply operation 2 to (a2) to get the calibrated image (a3); record the mean value M of image (b2) as the brightness reference.

Calibration Step 1. Manufacture correction. After fabrication, the sensor signal can vary with differences in camera perspective and illumination intensity. To correct for camera perspective, we capture a tactile imprint (Fig. 8 (a1)) against a calibration pattern with four flat squares (Fig. 10, left). From the distance between the outer edges of the four squares, we estimate the perspective transformation matrix that allows us to warp the image to a normal perspective and crop the boundaries. The contact surface information in the warped image (Fig. 8 (a2)) is easier to interpret. We assume the camera's perspective matrix remains constant, so the manufacture calibration is done only once.

We correct for non-homogeneous illumination by estimating the illumination distribution of the background with a strong Gaussian filter on a non-contact warped image (Fig. 8 (b2)). The resulting image, after subtracting the non-uniform illumination background (Fig. 8 (a3)), is visually more homogeneous. In addition, we record the mean value of the warped non-contact image as a brightness reference for future use.
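A minimal sketch of this step, assuming OpenCV, grayscale uint8 images, and the four pattern corners already detected (ordered top-left, top-right, bottom-right, bottom-left); the output size and blur kernel are illustrative choices:

    import cv2
    import numpy as np

    def calibrate_step1(raw_contact, raw_noncontact, corners_px, out_size=(400, 300)):
        """One-time manufacture correction: perspective warp, background
        illumination estimate B, and brightness reference M."""
        w, h = out_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        # Perspective transform T from the four detected pattern corners.
        T = cv2.getPerspectiveTransform(np.float32(corners_px), dst)
        warped_contact = cv2.warpPerspective(raw_contact, T, out_size)
        warped_bg = cv2.warpPerspective(raw_noncontact, T, out_size)
        # Background illumination B: a strong Gaussian blur of the
        # warped non-contact image.
        B = cv2.GaussianBlur(warped_bg.astype(np.float32), (101, 101), 0)
        # Subtract the non-uniform illumination; record the mean value M
        # of the warped non-contact image as the brightness reference.
        calibrated = warped_contact.astype(np.float32) - B
        M = float(warped_bg.mean())
        return T, B, M, calibrated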

Fig. 9: Calibration Step 2 (on-line sensor maintenance): apply transformation T to start from warped and cropped images (a1) and (b1); operation 2 uses the non-contact image from Step 1 and adds the constant M to calibrate image brightness (a2) and (b2); operation 3 performs a local contrast adjustment (a3) and (b3). All images show the imprint of the calibration “dome” after fabrication and after 3300 grasps. The red circles in (b3) highlight the region where the gel wears out after 3300 grasps.
Fig. 10: Three tactile profiles to calibrate the sensor. From left to right: A rectangle with sharp corners, a ball-bearing array, and a 3D printed dome.

Calibration Step 2. On-line Sensor Maintenance. The aim of the second calibration step is to keep the sensor output consistent over time. We define four metrics to evaluate the temporal consistency of the 2D signal: light intensity and distribution, signal strength, signal strength distribution, and gel condition. In the following subsections, we describe and evaluate these metrics in detail.

We make use of the calibration targets in Fig. 10 to track the signal quality, including a ball-bearing array and a 3D-printed dome. We conduct over 3300 aggressive grasp-lift-vibrate experiments on several everyday objects with two GelSlim fingers on a WSG-50 gripper attached to an ABB IRB 1600ID robotic arm. We take a tactile imprint of the two calibration targets every 100 grasps. The data presented in the following sections were gathered with a single prototype and are intended to evaluate sensor durability trends.

V-A Metric I: Light Intensity and Distribution

The light intensity and distribution are the mean and standard deviation of the light intensity in a non-contact image. The light intensity distribution in the gel is influenced by the condition of the light source, the consistency of the optical path, and the homogeneity of the gel's paint. All three factors can change due to wear. Fig. 11 shows their evolution over grasps before (blue) and after (red) background illumination correction. The standard deviations are shown as error bars. The blue curve (raw output from the sensor) shows that the mean brightness of the image drops slowly over time, especially after around 1750 grasps. This is likely due to slight damage to the optical path. The spatial variation of the image brightness decreases slightly, likely because the two bright sides of the image get darker and more similar to the center region. Fig. 9 shows an example of the decrease in illumination before (a1) and after (b1) 3300 grasps.

We compensate for the changes in light intensity by subtracting the background and adding a constant (the brightness reference from Step 1) to the whole image. The background illumination is obtained from the Gaussian-filtered non-contact image at that point. The mean and variance of the corrected images, shown in red in Fig. 11, are more consistent. Fig. 9 shows an example of the improvement after 3300 grasps.
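In code, the metric and the compensation reduce to a few lines; this sketch assumes grayscale uint8 images and reuses the Gaussian background estimate from Step 1 (the kernel size is an illustrative choice):

    import cv2
    import numpy as np

    def light_intensity_metric(non_contact):
        """Metric I: mean and spatial std. dev. of a non-contact image."""
        img = non_contact.astype(np.float32)
        return float(img.mean()), float(img.std())

    def compensate_brightness(img, non_contact, M):
        """Subtract the current background illumination and restore the
        brightness reference M recorded at fabrication (Step 1)."""
        bg = cv2.GaussianBlur(non_contact.astype(np.float32), (101, 101), 0)
        corrected = img.astype(np.float32) - bg + M
        return np.clip(corrected, 0, 255).astype(np.uint8)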

Fig. 11: Evolution of the light intensity distribution.

V-B Metric II: Signal Strength

The signal strength is a measure of the dynamic range of the tactile image under contact. It is intuitively the brightness and contrast of a contact patch, and we define it as:

S = H(σ − ε) · μ · (σ / σ₀)     (3)

where μ is the mean and σ the standard deviation of the image intensity in the contact region, and H is the Heaviside step function. The factor H(σ − ε) means that if the standard deviation is smaller than ε, the signal strength is 0. Experimentally, we set ε to 5, and σ₀, the standard deviation normalizer, to 30.
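A direct transcription of this definition, assuming a grayscale image patch covering the contact region:

    import numpy as np

    EPSILON = 5.0   # minimum std. dev. for a valid contact signal
    SIGMA_0 = 30.0  # standard deviation normalizer

    def signal_strength(contact_region):
        """Signal strength per Eq. (3): brightness (mean) scaled by
        normalized contrast (std. dev.), zeroed below the threshold."""
        pixels = contact_region.astype(np.float32)
        mu, sigma = pixels.mean(), pixels.std()
        heaviside = 1.0 if sigma >= EPSILON else 0.0
        return heaviside * mu * (sigma / SIGMA_0)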

Maintaining a consistent signal strength during use is one of the most important factors determining the type of contact information we can extract from a vision-based tactile sensor. In a GelSlim sensor, signal strength is affected by the elasticity of the gel, which degrades with use.

We track the signal strength during grasps by using the “dome” calibration pattern designed to yield a single contact patch. Fig. 12 shows its evolution. The blue curve (from raw output) shows a distinct drop of the signal strength after 1750 grasps. The brightness decrease described in the previous subsection is one of the key reasons.

The signal strength can be enhanced by increasing both the contrast and brightness of the image. The brightness adjustment done after fabrication improves the signal strength, shown in green in Fig. 12. However, the image with brightness correction after 3300 grasps shown in Fig. 9 (b2) still has decreased contrast.

Fig. 12: The change of signal strength across the number of grasps performed.

To enhance the image contrast according to the illumination, we perform adaptive histogram equalization on the image, which increases the contrast of the whole image, and then fuse the images with and without histogram equalization according to the local background illumination. The two images after the whole calibration are shown in Fig. 9 (a3) and (b3). The signal strength after calibrating illumination and contrast (red in Fig. 12) shows better consistency during usage.
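One plausible realization of this step uses OpenCV's CLAHE for the adaptive histogram equalization; the fusion weighting below (more of the equalized image where the background illumination is darker) is our assumption, as the paper does not spell out the exact rule:

    import cv2
    import numpy as np

    def enhance_contrast(img, background):
        """Fuse a CLAHE-equalized image with the original according to
        local background illumination (both grayscale uint8)."""
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        equalized = clahe.apply(img).astype(np.float32)
        # Weight toward the equalized image where the background is dark.
        w = 1.0 - background.astype(np.float32) / 255.0
        fused = w * equalized + (1.0 - w) * img.astype(np.float32)
        return np.clip(fused, 0, 255).astype(np.uint8)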

V-C Metric III: Signal Strength Distribution

The force distribution after grasping an object is non-uniform across the gel. During regular use, the center and distal regions of the gel are more likely to be contacted during grasping, which puts more wear on the gel in those areas. This phenomenon results in a non-uniform degradation of the signal strength. To quantify it, we extract the signal strength of each pressed region from the “ball array” calibration images taken every 100 grasps (see Fig. 13 (b) before and (c) after calibration). We use the standard deviation of the array of signal strengths to represent the signal strength distribution, and compensate for variations by increasing the contrast non-uniformly in the degraded regions.
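A sketch of the metric, assuming the ball imprint locations are known in warped image coordinates (the region list is a sensor-specific assumption; the constants echo Eq. (3)):

    import numpy as np

    EPSILON, SIGMA_0 = 5.0, 30.0  # constants from Eq. (3)

    def region_strength(patch):
        p = patch.astype(np.float32)
        mu, sigma = p.mean(), p.std()
        return mu * sigma / SIGMA_0 if sigma >= EPSILON else 0.0

    def signal_strength_distribution(img, ball_regions):
        """Metric III: std. dev. of per-region signal strengths, where
        ball_regions is a list of (row, col, size) patches, one per ball."""
        strengths = [region_strength(img[r:r + s, c:c + s])
                     for r, c, s in ball_regions]
        return float(np.std(strengths))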

Fig. 13: (a) The evolution of the signal strength distribution. (b) The raw output of the “ball array” calibration image. (c) The calibrated “ball array” calibration image.

Fig. 13 (a) shows the signal strength distribution before and after calibration in blue and red, respectively. The red curve shows some marginal improvement in consistency over usage. The sudden increase of the curve after 2500 grasps is caused by a change in light distribution, likely due to damage of the optical path by an especially aggressive grasp.

V-D Metric IV: Gel Condition

The sensor’s soft gel is covered by a textured fabric skin for protection. Experimentally, this significantly increases the resilience to wear. However, the reflective paint layer of the gel may still wear out after use. Since the specular paint acts as a reflection surface, the regions of the gel with damaged paint do not respond to contact signal and are seen as black pixels, which we call dead pixels.

We define the gel condition as the percentage of dead pixels in the image. Fig. 14 shows the evolution of the number of dead pixels over the course of 3000 grasps. Only a small fraction of pixels (less than 0.06%, around 170 pixels) are damaged, highlighted with red circles in Fig. 9 (b3). Sparse dead pixels can be ignored or fixed with interpolation, but a large number of clustered dead pixels can only be solved by replacing the gel.
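The metric itself is a simple count over a non-contact image; the intensity threshold below is an illustrative value, not one specified in the paper:

    import numpy as np

    DEAD_PIXEL_THRESHOLD = 10  # assumed intensity below which a pixel
                               # is considered dead (near-black)

    def gel_condition(non_contact):
        """Metric IV: percentage of dead (unresponsive) pixels."""
        dead = non_contact < DEAD_PIXEL_THRESHOLD
        return 100.0 * float(dead.sum()) / dead.size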

Fig. 14: Evolution of the gel condition.

VI Conclusions and Future Work

In this paper, we present a compact integration of a visual-tactile sensor in a robotic phalange. Our design features: a gel covered with a textured fabric skin that improves durability and contact signal strength; a compact integration of the GelSight sensor optics; and improved illumination over a larger tactile area. Despite the improved wear resistance, the sensor still ages with use. We propose four metrics to track this aging process and create a calibration framework to regularize sensor output over time. We show that, while the sensor degrades minimally over the course of several thousand grasps, the digital calibration procedure is able to condition the sensor output and extend its usable lifespan.

Sensor Functionality. The sensor outputs images of tactile imprints that encode the shape and texture of the object at contact. For example, contact geometry in pixel space could be used in combination with knowledge of grasping force and gel material properties to infer local 3D object geometry. If markers are placed on the gel surface, marker flow can be used to estimate object hardness [20] or shear forces [21]. These quantities, as well as the sensor's calibrated image output, can be used directly in model-based or learning-based approaches to robot grasping and manipulation. This information could be used to track object pose, inform a data-driven classifier to predict grasp stability, or serve as real-time observations in a closed-loop regrasp policy [22].

Applications in robotic dexterity. We anticipate that GelSlim's unique form factor will facilitate the use of these sensing modalities in a wide variety of applications, especially in cluttered scenarios where visual feedback is lacking, where access is limited, or where contact forces that are difficult to observe play a key role. We are especially interested in using real-time contact geometry and force information to monitor and control tasks that require in-hand dexterity and reactivity, such as picking up a tool in a functional grasp and then using it, or grasping a nut and screwing it onto a bolt. Ultimately, these contact-rich tasks can only be robustly tackled with tight integration of sensing and control. While the presented solution is just one path forward, we believe that high-resolution tactile sensors hold particular promise.

References

  • [1] Z. Kappassov, J.-A. Corrales, and V. Perdereau, “Tactile sensing in dexterous robot hands,” Robotics and Autonomous Systems, vol. 74, pp. 195–220, 2015.
  • [2] C. Chorley, C. Melhuish, T. Pipe, and J. Rossiter, “Development of a tactile sensor based on biologically inspired edge encoding,” in ICAR.    IEEE, 2009, pp. 1–6.
  • [3] R. Li, R. Platt, W. Yuan, A. ten Pas, N. Roscup, M. A. Srinivasan, and E. Adelson, “Localization and manipulation of small parts using GelSight tactile sensing,” in IROS.    IEEE, 2014, pp. 3988–3993.
  • [4] S. Dong, W. Yuan, and E. Adelson, “Improved GelSight tactile sensor for measuring geometry and slip,” IROS, 2017.
  • [5] A. Zeng, K.-T. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez, and J. Xiao, “Multi-view Self-Supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge,” in ICRA.    IEEE, 2017, pp. 1386–1393.
  • [6] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, N. Fazeli, F. Alet, N. Chavan-Dafle, R. Holladay, I. Morona, P. Q. Nair, D. Green, I. Taylor, W. Liu, T. Funkhouser, and A. Rodriguez, “Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching,” in ICRA.    IEEE, 2018.
  • [7] N. Correll, K. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. Romano, and P. Wurman, “Analysis and observations from the first amazon picking challenge,” T-ASE, vol. 15, no. 1, pp. 172–188, 2018.
  • [8] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, “Tactile sensing from humans to humanoids,” IEEE T-RO, vol. 26, no. 1, pp. 1–20, 2010.
  • [9] M. Ohka, Y. Mitsuya, K. Hattori, and I. Higashioka, “Data conversion capability of optical tactile sensor featuring an array of pyramidal projections,” in MFI.    IEEE, 1996, pp. 573–580.
  • [10] K. Kamiyama, K. Vlack, T. Mizota, H. Kajimoto, K. Kawakami, and S. Tachi, “Vision-based sensor for real-time measuring of surface traction fields,” CG&A, vol. 25, no. 1, pp. 68–75, 2005.
  • [11] N. J. Ferrier and R. W. Brockett, “Reconstructing the shape of a deformable membrane from image data,” IJRR, vol. 19, no. 9, pp. 795–816, 2000.
  • [12] B. Ward-Cherrier, N. Rojas, and N. F. Lepora, “Model-free precise in-hand manipulation with a 3d-printed tactile gripper,” RA-L, vol. 2, no. 4, pp. 2056–2063, 2017.
  • [13] A. Yamaguchi and C. G. Atkeson, “Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables,” in Humanoids.    IEEE, 2016, pp. 1045–1051.
  • [14] M. K. Johnson and E. Adelson, “Retrographic sensing for the measurement of surface texture and shape,” in CVPR.    IEEE, 2009, pp. 1070–1077.
  • [15] M. K. Johnson, F. Cole, A. Raj, and E. H. Adelson, “Microgeometry capture using an elastomeric sensor,” in TOG, vol. 30, no. 4.    ACM, 2011, p. 46.
  • [16] W. Yuan, S. Dong, and E. H. Adelson, “GelSight: High-resolution robot tactile sensors for estimating geometry and force,” Sensors, vol. 17, no. 12, p. 2762, 2017.
  • [17] G. Izatt, G. Mirano, E. Adelson, and R. Tedrake, “Tracking objects with point clouds from vision and touch,” in ICRA.    IEEE, 2017.
  • [18] R. Calandra, A. Owens, M. Upadhyaya, W. Yuan, J. Lin, E. H. Adelson, and S. Levine, “The feeling of success: Does touch sensing help predict grasp outcomes?” arXiv preprint arXiv:1710.05512, 2017.
  • [19] B. Ward-Cherrier, N. Pestell, L. Cramphorn, B. Winstone, M. E. Giannaccini, J. Rossiter, and N. F. Lepora, “The tactip family: Soft optical tactile sensors with 3D-printed biomimetic morphologies,” Soft Robotics, 2018.
  • [20] W. Yuan, M. A. Srinivasan, and E. Adelson, “Estimating object hardness with a GelSight touch sensor,” in IROS.    IEEE, 2016.
  • [21] W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson, “Measurement of shear and slip with a GelSight tactile sensor,” in ICRA.    IEEE, 2015, pp. 304–311.
  • [22] F. R. Hogan, M. Bauza, O. Canal, E. Donlon, and A. Rodriguez, “Tactile regrasp: Grasp adjustments via simulated tactile transformations,” arXiv preprint arXiv:1803.01940, 2018.