Tight integration of sensing hardware and control is key to mastering manipulation in cluttered, occluded, or dynamic environments. Artificial tactile sensors, however, are challenging to integrate and maintain: they are most useful at the distal end of the manipulation chain (where space is tight); they are subject to high forces and wear (which shortens their life span or requires tedious maintenance); and they require instrumentation capable of routing and processing high-bandwidth data.
Among the many tactile sensing technologies developed in recent decades, vision-based tactile sensors are a promising variant. They provide high spatial resolution with compact instrumentation and pair naturally with recent image-based deep learning techniques. Current implementations of these sensors, however, are often bulky and/or fragile [2, 3, 4]. Robotic grasping benefits from sensors that are compactly integrated and rugged enough to sustain the shear and normal forces involved in grasping.
To address this need, we present GelSlim, a tactile-sensing finger designed for grasping in cluttered environments (Fig. 1). Like other vision-based tactile sensors, it uses a camera to measure tactile imprints (Fig. 2).
In this work we present:
Design of a vision-based high-resolution tactile-sensing finger with the form factor necessary to gain access to cluttered objects, and toughness to sustain the forces involved in everyday grasping (Section IV). The sensor outputs raw images of the sensed surface, which encode shape and texture of the object at contact.
Calibration framework to regularize the sensor output over time and across sensor individuals (Section V). We suggest four metrics to track the quality of the tactile feedback.
Evaluation of the sensor’s durability by monitoring its image quality over more than 3000 grasps (Section V).
The long term goal of this research is to enable reactive grasping and manipulation. The use of tactile feedback in the control loop of robotic manipulation is key for reliability. Our motivation stems from efforts in developing bin-picking systems to grasp novel objects in cluttered scenes and from the need to observe the geometry of contact to evaluate and control the quality of a grasp [5, 6].
In cluttered environments like a pantry or a warehouse storage cell, as in the Amazon Robotics Challenge, a robot faces the challenge of singulating target objects from a tightly-packed collection of items. Cramped spaces and clutter lead to frequent contact with non-target objects. Fingers must be compact and, when possible, pointed, to squeeze between target and clutter (Fig. 3). To make use of learning approaches, tactile sensors must also be resilient to the wear and tear of long experimental sessions, which often involve unexpected collisions. Finally, sensor calibration and signal conditioning are key to the consistency of tactile feedback as the sensor's physical components decay.
II Related Work
The body of literature on tactile sensing technologies is large [1, 8]. Here we discuss relevant works related to the technologies used by the proposed sensor: vision-based tactile sensors and GelSight sensors.
II-A Vision-based tactile sensors
Cameras provide high-spatial-resolution 2D signals without the need for many wires. Their sensing field and working distance can also be tuned with an optical lens. For these reasons, cameras are an interesting alternative to several other sensing technologies, which tend to have higher temporal bandwidth but more limited spatial resolution.
Ohka et al. designed an early vision-based tactile sensor comprising a flat rubber sheet, an acrylic plate, and a CCD camera to measure three-dimensional force. The prototyped sensor, however, was too large to be realistically integrated into a practical end-effector. GelForce, a tactile sensor shaped like a human finger, used a camera to track two layers of dots on the sensor surface to measure both the magnitude and direction of an applied force.
Instead of measuring force, some vision-based tactile sensors focus on measuring geometry, such as edges, texture, or the 3D shape of the contact surface. Ferrier and Brockett proposed an algorithm to reconstruct the 3D surface by analyzing the distribution of the deformation of a set of markers on a tactile sensor. This principle has inspired several other contributions. The TacTip sensor uses a similar principle to detect edges and estimate the rough 3D geometry of the contact surface. Mounted on a GR2 gripper, the sensor gave helpful feedback when reorienting a cylindrical object in hand. Yamaguchi built a tactile sensor with a clear silicone gel that can be mounted on a Baxter hand. Unlike the previous sensors, Yamaguchi's also captures local color and shape information, since the sensing region is transparent. The sensor was used to detect slip and estimate contact force.
II-B GelSight sensors
The GelSight sensor is a vision-based tactile sensor that measures the 2D texture and 3D topography of the contact surface. It uses a piece of elastomeric gel with an opaque coating as the sensing surface, and a webcam above the gel to capture contact deformation from changes in lighting contrast as reflected by the opaque coating. The gel is illuminated by color LEDs at inclined angles and from different directions; the resulting colored shading can be used to reconstruct the 3D geometry of the gel deformation. The original, larger GelSight sensor [14, 15] was designed to measure the 3D topography of the contact surface with micrometer-level spatial resolution. Li et al. designed a cuboid fingertip version that could be integrated into a robot finger. Li's sensor has a centimeter-scale sensing area and can measure fine 2D texture and coarse 3D information. A new version of the GelSight sensor was more recently proposed by Dong et al. to improve 3D geometry measurements and standardize the fabrication process; a detailed review of the different versions of GelSight sensors is available in the literature.
GelSight-like sensors with rich 2D and 3D information have been successfully applied in robotic manipulation. Li et al. used GelSight's localization capabilities to insert a USB connector, where the sensor used the texture of the characteristic USB logo to guide the insertion. Izatt et al. explored the use of the 3D point cloud measured by a GelSight sensor in a state estimation filter to find the pose of a grasped object in a peg-in-hole task. Dong et al. used the GelSight sensor to detect slip from variations in the 2D texture of the contact surface in a robot picking task. The 2D image structure of the output from a GelSight sensor makes it a good fit for deep learning architectures. GelSight sensors have also been used to estimate grasp quality.
II-C Durability of tactile sensors
Frictional wear is an issue intrinsic to tactile sensors. Contact forces and torques during manipulation are significant and can be harmful to both the sensor surface and its inner structure. Vision-based tactile sensors are especially sensitive to frictional wear, since they rely on the deformation of a soft surface for their sensitivity. These sensors commonly use some form of soft silicone gel, rubber or other soft material as a sensing surface [10, 19, 13, 4].
To enhance the durability of the sensor surface, researchers have investigated protective skins such as plastic, or made the sensing layer easier to replace by using 3D printing techniques with soft materials.
Another mechanical weakness of vision-based tactile sensors is the adhesion between the soft sensing layer and its stronger supporting layer. Most sensors discussed above use either silicone tape or rely on the adhesive property of the silicone rubber, which can be insufficient under practical shear forces involved in picking and lifting objects. The wear effects on these sensors are especially relevant if one attempts to use them in a data-driven/learning context [13, 18].
Durability is key to the practicality of a tactile sensor; however, none of the works above provides a quantitative analysis of sensor durability over usage.
III Design Goals
In a typical GelSight-like sensor, a clear gel with an opaque outer membrane is illuminated by a light source and imaged by a camera (Fig. 4). The position of each of these elements depends on the specific requirements of the sensor. Typically, for ease of manufacturing and optical simplicity, the camera's optical axis is normal to the gel (left of Fig. 4). To reconstruct 3D geometry using photometric techniques, at least three colors of light must be directed across the gel from different directions.
Both of these geometric constraints, the camera placement and the illumination path, are counterproductive to slim robot finger integrations, and existing sensor implementations are cuboid. In most manipulation applications, successful grasping requires fingers with the following qualities:
Compactness allows fingers to singulate objects from clutter by squeezing between them or separating them from the environment.
Uniform Illumination makes sensor output consistent across as much of the gel pad as possible.
Large Sensor Area extends the reach of the tactile cues, both where there is contact and where there is not. This provides better knowledge of the state of the grasped object and, ultimately, enhanced controllability.
Durability affords signal stability, necessary for the time-span of the sensor. This is especially important for data-driven techniques that build models from experience.
In this paper we propose a redesign of the form, materials, and processing of the GelSight sensor to turn it into a GelSight finger, yielding a more useful finger shape with a more consistent and calibrated output (right of Fig. 4). The following sections describe the geometric and optical tradeoffs in its design (Section IV), as well as the process to calibrate and evaluate it (Section V).
IV Design and Fabrication
To realize the design goals in Section III, we propose the following changes to a standard design of a vision-based GelSight-like sensor: 1) Photometric stereo for 3D reconstruction requires precise illumination. Instead, we focus on recovering texture and contact surface, which will allow more compact light-camera arrangements. 2) The softness of the gel plays a role in the resolution of the sensor, but is also damaging to its life span. We will achieve higher durability by protecting the softest component of the finger, the gel, with textured fabric. 3) Finally, we will improve the finger’s compactness, illumination uniformity, and sensor pad size with a complete redesign of the sensor optics.
IV-A Gel Materials Selection
A GelSight sensor's gel must be elastomeric, optically clear, soft, and resilient. Gel hardness represents a tradeoff between spatial resolution and strength: maximum sensitivity and resolution are only possible with very soft gels, but their softness brings two major drawbacks, low tensile strength and greater viscoelasticity. Given our application's lesser need for spatial resolution, we use slightly harder, more resilient gels than other GelSight sensors [3, 4]. Our final gel formulation is a two-part silicone (XP-565 from Silicones Inc.) mixed in a 15:1 ratio of parts A to B. The outer surface of the gel is coated with a specular silicone paint using a process developed by Yuan et al.
The surface is covered with a stretchy, loose-weave fabric to prevent damage to the gel while increasing signal strength. Signal strength is proportional to the deformation caused by pressure on the gel surface. Because the patterned texture of the fabric reduces the contact area between object and gel, local pressure increases to the point where the sensor can detect the contact patch of flat objects pressed against the flat gel (Fig. 5).
IV-B Sensor Geometry Design Space
We change the sensor's form factor by using a mirror to reflect the gel image back to the camera. This allows a larger sensor pad, since the camera can be placed farther away while the finger stays comparatively thin. A major component of finger thickness is the optical region shown in Fig. 6, whose thickness follows from a trigonometric relation between the camera's field of view, the mirror angle, the camera angle relative to the base, and the length of the gel. The gel length is given by a second relation, (2), that also depends on the disparity between the shortest and longest light paths from the camera (depth of field). Together, the design requirements (finger thickness and gel length) vary with the design variables (mirror and camera angles) and are constrained by the camera's depth of field and viewing angle. These constraints ensure that both the near and far gel edges given by (2) are in focus, that the gel is maximally sized, and that the finger is minimally thick.
IV-C Optical Path: Photons From Source, to Gel, to Camera
Our method of illuminating the gel makes three major improvements over previous sensors: a slimmer finger tip, more even illumination, and a larger gel pad. Much like Li's original GelSight finger design, we use acrylic wave guides to carry light through the sensor with fine control over the angle of incidence across the gel (Fig. 7). However, our design moves the illumination LEDs farther back in the finger by adding a reflection, allowing the finger to be slimmer at the tip.
The light cast on the gel originates from a pair of high-powered, neutral-white, surface-mount LEDs (OSLON SSL 80) on each side of the finger. Light rays stay inside the acrylic wave guide through total internal reflection, caused by the difference in refractive index between acrylic and air. Optimally, light rays would be emitted parallel so as not to lose intensity as light is cast across the gel; however, light emitters are usually point sources. A line of LEDs, as in Li's sensor, helps to evenly distribute illumination across one dimension, while intensity still decays along the length of the sensor.
Our approach uses a parabolic reflection (Step 3 in Fig. 7) to ensure that light rays entering the gel pad are close to parallel. The two small LEDs are approximated as a single point source and placed at the parabola’s focus. Parallel light rays bounce across the gel via a hard reflection. Hard reflections through acrylic wave guides are accomplished by painting those surfaces with mirror finish paint.
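The focusing argument above can be checked numerically: a ray emitted from the focus of a parabola reflects parallel to its axis. The sketch below is our own illustration (not the authors' code), using the parabola y = x²/(4f) with focus at (0, f):

```python
import numpy as np

def reflect_from_focus(x0, f=1.0):
    """Reflect a ray emitted from the focus (0, f) off the parabola
    y = x^2 / (4 f) at the point with abscissa x0; returns the unit
    direction of the reflected ray."""
    p = np.array([x0, x0**2 / (4 * f)])      # point of incidence on the parabola
    d = p - np.array([0.0, f])               # incoming ray direction from the focus
    n = np.array([-x0 / (2 * f), 1.0])       # surface normal (unnormalized)
    n = n / np.linalg.norm(n)
    r = d - 2 * np.dot(d, n) * n             # mirror reflection of the ray
    return r / np.linalg.norm(r)
```

For any incidence point and focal length, the returned direction is (0, 1), i.e. parallel to the parabola's axis, which is why placing the LED pair at the focus yields near-parallel rays across the gel.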
When an object makes contact with the fabric over the gel pad, it creates a pattern of light and dark spots as the specular gel interacts with the grazing light. This image of light and dark spots is transmitted back to the camera off a front-surface glass mirror. The camera (Raspberry Pi Spy Camera) was chosen for its small size, low price, high framerate/resolution, and good depth of field.
IV-D Lessons Learned
For robotic system integrators, or those interested in designing their own GelSight sensors, the following is a collection of small but important lessons we learned:
Mirror: Back-surface mirrors create a “double image” from reflections off the front and back surfaces, especially at the reflection angles we use in our sensor. Glass front-surface mirrors give a sharper image.
Clean acrylic: Even finger oils on the surface of a wave guide can interrupt total internal reflection. Keeping the acrylic clean maintains maximum illumination efficiency.
Laser cut acrylic: Acrylic pieces cut by laser exhibit stress cracking at edges after contacting solvents from glue or mirror paint. Cracks break the optical continuity in the substrate and ruin the guide. Stresses can be relieved by annealing first.
LED choice: This LED was chosen for its high luminous efficacy (103 lm/W), compactness (3 mm × 3 mm), and small viewing angle (80°). A small viewing angle directs more light into the thin wave guide.
Gel paint type: In our experience with this configuration, a semi-specular gel coating provides a higher-contrast signal than Lambertian gel coatings. Yuan et al. describe the different types of coatings and how to manufacture them.
Affixing silicone gel: When affixing the silicone gel to the substrate, most adhesives we tried made the images hazy or did not acceptably adhere to either the silicone or substrate. We found that Adhesives Research ARclear 93495 works well. Our gel-substrate bond is also stronger than other gel-based sensors because of its comparatively large contact area.
Some integration lessons revolve around the use of a Raspberry Pi spy camera. It enables a very high data rate but requires a 15-pin Camera Serial Interface (CSI) connection to the Raspberry Pi. Since the GelSlim sensor was designed for use on a robotic system where movement and contact are part of normal operation, the processor (Raspberry Pi) is placed away from the robot manipulator. We extended the camera's fragile ribbon cable by first adapting it to an HDMI cable inside the finger, then passing that HDMI cable along the kinematic chain of the robot. Extending the camera connection this way allows us to make it up to several meters long, mechanically and electrically protect the contacts, and route power to the LEDs through the same cable.
The final integration of the sensor in our robot finger also features a rotating joint to change the angle of the finger tip relative to the rest of the finger body. This movement does not affect the optical system and allows us to more effectively grasp a variety of objects in clutter.
There are numerous ways to continue improving the sensor’s durability and simplify the sensor’s fabrication process. For example, while the finger is slimmer, it is not smaller. It will be a challenge to make integrations sized for smaller robots due to camera field of view and depth of field constraints. Additionally, our finger has an un-sensed, rigid tip that is less than ideal for two reasons: it is the part of the finger with the richest contact information, and its rigidity negatively impacts the sensor’s durability. To decrease contact forces applied due to this rigidity, we will add compliance to the finger-sensor system.
IV-E Gel Durability Failures
We experimented with several ways to protect the gel surface before selecting a fabric skin. Most non-silicone coatings will not stick to the silicone bulk, so we tested various filled and non-filled silicones. Because this skin coats the outside, using filled (tougher, non-transparent) silicones is an option. Note, however, that thickness added outside the specular paint increases the mechanical impedance of the gel and thus decreases resolution. To deposit a thin layer onto the bulk, we diluted filled, flowable silicone adhesive with NOVOCS silicone thinner from Smooth-On Inc. We found that using solvent in proportions greater than 2:1 (solvent:silicone) caused the gel to wrinkle, possibly because solvent diffused into the gel and caused expansion.
A non-solvent approach to depositing thin layers, such as spin coating, is promising, but we did not explore this path. Furthermore, thin silicone coatings often rubbed off after a few hundred grasps, signaling that they did not adhere effectively to the gel surface. Plasma pre-treatment of the silicone surface could bond substrate and coating more effectively, but we were unable to explore this route.
V Sensor Calibration
The consistency of sensor output is key for sensor usability. The raw image from a GelSlim sensor right after fabrication has two intrinsic issues: non-uniform illumination and a strong perspective distortion. In addition, the sensor image stream may change during use due to small deformations of the hardware, compression of the gel, or camera shutter speed fluctuations. To improve the consistency of the signal we introduce a two-step calibration process, illustrated in Fig. 8 and Fig. 9.
Calibration Step 1. Manufacture correction. After fabrication, the sensor signal can vary with differences in camera perspective and illumination intensity. To correct for camera perspective, we capture a tactile imprint (Fig. 8 (a1)) of a calibration pattern with four flat squares (Fig. 10 left). From the distances between the outer edges of the four squares, we estimate the perspective transformation matrix that allows us to warp the image to a normal perspective and crop the boundaries. The contact surface information in the warped image (Fig. 8 (a2)) is more user-friendly. We assume the camera perspective matrix remains constant, so the manufacture calibration is done only once.
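As an illustration of this one-time perspective correction (our own sketch, not the authors' code; in practice OpenCV's getPerspectiveTransform and warpPerspective do the same job), the homography can be estimated from the four outer corners with a direct linear transform:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform from four point correspondences.
    src, dst: (4, 2) arrays of matching pixel coordinates."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The 9 homography entries form the null space of A
    # (smallest right singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_image(img, H, out_shape):
    """Warp img to a normalized view by mapping each output pixel
    through H^-1 (nearest-neighbor sampling)."""
    h, w = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx, sy, sw = Hinv @ pts
    sx = np.clip(np.round(sx / sw).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(sy / sw).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(h, w)
```

The matrix is estimated once from the calibration imprint and then reused on every subsequent frame.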
We correct for non-homogeneous illumination by estimating the illumination distribution of the background, applying a strong Gaussian filter to a non-contact warped image (Fig. 8 (b2)). The resulting image, after subtracting the non-uniform illumination background (Fig. 8 (a3)), is visually more homogeneous. In addition, we record the mean value of the warped non-contact image as a brightness reference for future use.
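A minimal sketch of this background estimation and subtraction, assuming an 8-bit grayscale image and an illustrative filter width:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_correct(img, background, reference_mean):
    """Subtract the slowly varying illumination background and re-center
    the image at the stored brightness reference."""
    corrected = img.astype(float) - background + reference_mean
    return np.clip(corrected, 0, 255)

# The background is a strongly blurred non-contact frame; the sigma here
# is a tuning choice of ours, not a value from the paper.
noncontact = np.full((120, 160), 128.0)
background = gaussian_filter(noncontact, sigma=25)
reference_mean = noncontact.mean()
```

Because the blur removes everything but the low-frequency illumination field, contact details survive the subtraction while the uneven lighting is flattened.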
Calibration Step 2. On-line Sensor Maintenance. The aim of the second calibration step is to keep the sensor output consistent over time. We define four metrics to evaluate the temporal consistency of the 2D signal: Light intensity and distribution, Signal strength, Signal strength distribution, and Gel condition. In the following subsections, we describe and evaluate these metrics in detail.
We make use of the calibration targets in Fig. 10, a ball-bearing array and a 3D-printed dome, to track signal quality. We conduct over 3300 aggressive grasp-lift-vibrate experiments on everyday objects with two GelSlim fingers on a WSG-50 gripper attached to an ABB IRB 1600ID robotic arm, taking a tactile imprint of the two calibration targets every 100 grasps. The data presented in the following sections were gathered with a single prototype and serve to evaluate sensor durability trends.
V-A Metric I: Light Intensity and Distribution
The light intensity and distribution are the mean and standard deviation of the light intensity in a non-contact image. The light intensity distribution in the gel is influenced by the condition of the light source, the consistency of the optical path, and the homogeneity of the gel's paint; all three factors can change with wear. Fig. 11 shows their evolution over grasps before (blue) and after (red) background illumination correction, with standard deviations shown as error bars. The blue curve (raw sensor output) shows that the mean brightness of the image drops slowly over time, especially after around 1750 grasps, likely due to slight damage to the optical path. The spatial variation of image brightness decreases slightly, likely because the two bright sides of the image get darker and more similar to the center region. Fig. 9 shows an example of the decrease in illumination before (a1) and after (b1) 3300 grasps.
We compensate for changes in light intensity by subtracting the background and adding a constant (the brightness reference from step one) to the whole image. The background illumination is obtained from the Gaussian-filtered non-contact image at that point. The mean and variance of the corrected images, shown in red in Fig. 11, are more consistent. Fig. 9 shows an example of the improvement after 3300 grasps.
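Metric I reduces to two statistics of a non-contact frame; a minimal sketch with our own naming:

```python
import numpy as np

def light_intensity_metrics(noncontact_img):
    """Metric I: mean light intensity and its spatial distribution
    (standard deviation) over a non-contact frame."""
    img = np.asarray(noncontact_img, dtype=float)
    return img.mean(), img.std()

def track_over_grasps(frames):
    """Evaluate the metric on a frame taken every N grasps,
    yielding curves like those plotted against grasp count."""
    return np.array([light_intensity_metrics(f) for f in frames])
```

Plotting the first column against grasp count gives the brightness curve, and the second column the spatial-variation error bars.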
V-B Metric II: Signal Strength
The signal strength is a measure of the dynamic range of the tactile image under contact. Intuitively, it captures the brightness and contrast of a contact patch. Let μ be the mean and σ the standard deviation of the image intensity in the contact region, and let H be the Heaviside step function, which gates the metric: if σ is smaller than a threshold ε, the signal strength is 0. Experimentally, we set ε to 5 and the standard deviation normalizer N_σ to 30.
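A sketch of this metric, assuming the strength is the contrast-gated product of the mean and the normalized standard deviation (the exact expression may differ from the paper's):

```python
import numpy as np

EPS = 5.0    # minimum standard deviation for a valid contact signal
NORM = 30.0  # standard deviation normalizer

def signal_strength(contact_region):
    """Contrast-gated signal strength of the pixels in a contact region.
    Assumed form: H(sigma - EPS) * mu * sigma / NORM, where H is the
    Heaviside step function."""
    region = np.asarray(contact_region, dtype=float)
    mu = region.mean()
    sigma = region.std()
    heaviside = 1.0 if sigma >= EPS else 0.0
    return heaviside * mu * sigma / NORM
```

A featureless patch (low contrast) scores zero regardless of brightness, while a bright, high-contrast imprint scores high.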
Maintaining a consistent signal strength during use is one of the most important factors for the type of contact information we can extract in a vision-based tactile sensor. In a GelSlim sensor, signal strength is affected by the elasticity of the gel, which degrades after use.
We track the signal strength during grasps by using the “dome” calibration pattern designed to yield a single contact patch. Fig. 12 shows its evolution. The blue curve (from raw output) shows a distinct drop of the signal strength after 1750 grasps. The brightness decrease described in the previous subsection is one of the key reasons.
The signal strength can be enhanced by increasing both the contrast and brightness of the image. The brightness adjustment done after fabrication improves the signal strength, shown in green in Fig. 12. However, the image with brightness correction after 3300 grasps shown in Fig. 9 (b2) still has decreased contrast.
To enhance image contrast according to the illumination, we perform adaptive histogram equalization, which increases contrast over the whole image, and then fuse the images with and without histogram equalization according to the local background illumination. The two images after the full calibration are shown in Fig. 9 (a3) and (b3). The signal strength after calibrating illumination and contrast (red in Fig. 12) shows better consistency during usage.
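A simplified sketch of this contrast correction, substituting global histogram equalization for the adaptive variant and using an assumed linear fusion weight derived from the background illumination:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization (a simpler stand-in for the
    adaptive equalization used on the sensor)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img.astype(np.uint8)]

def fuse_by_illumination(img, background):
    """Blend equalized and raw images: dimly lit regions lean on the
    contrast-enhanced image, brightly lit regions keep the original.
    The linear weight is our assumption."""
    w = 1.0 - background.astype(float) / 255.0  # weight of the equalized image
    eq = hist_equalize(img).astype(float)
    return w * eq + (1.0 - w) * img.astype(float)
```

Regions where the illumination has degraded thus receive a stronger contrast boost than regions that are still well lit.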
V-C Metric III: Signal Strength Distribution
The force distribution after grasping an object is non-uniform across the gel. During regular use, the center and distal regions of the gel are more likely to be contacted, which puts more wear on those areas and results in a non-uniform degradation of the signal strength. To quantify this phenomenon, we extract the signal strength of each pressed region from the “ball array” calibration images taken every 100 grasps (see Fig. 13 (b) before and (c) after calibration). We use the standard deviation of the array of signal strengths to represent the signal strength distribution, and compensate for variations by increasing the contrast non-uniformly in the degraded regions.
Fig. 13 shows the signal strength distribution before (blue) and after (red) calibration. The red curve shows a marginal improvement in consistency over usage. The sudden increase of the curve after 2500 grasps is caused by a change in light distribution, likely due to damage to the optical path from an especially aggressive grasp.
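Metric III and the per-region compensation can be sketched as follows; the gain rule and its clipping bounds are our assumptions, not the paper's:

```python
import numpy as np

def signal_strength_distribution(strengths):
    """Metric III: spread of per-ball signal strengths from the ball-array
    imprint; larger values mean less uniform degradation across the pad."""
    s = np.asarray(strengths, dtype=float)
    return s.std()

def contrast_gains(strengths):
    """Hypothetical per-region correction: scale weak regions up toward
    the strongest region, with the gain clipped to avoid amplifying noise."""
    s = np.asarray(strengths, dtype=float)
    return np.clip(s.max() / np.maximum(s, 1e-6), 1.0, 3.0)
```

Applying the gains to the corresponding image regions flattens the strength map, which is the intent of the non-uniform contrast increase described above.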
V-D Metric IV: Gel Condition
The sensor's soft gel is covered by a textured fabric skin for protection, which experimentally provides a significant increase in resilience to wear. However, the reflective paint layer of the gel may still wear out with use. Since the specular paint acts as the reflection surface, regions of the gel with damaged paint do not respond to contact and appear as black pixels, which we call dead pixels.
We define the gel condition as the percentage of dead pixels in the image. Fig. 14 shows the evolution of the number of dead pixels over the course of 3000 grasps. Only a small fraction of pixels (less than 0.06%, around 170 pixels) is damaged, highlighted with red circles in Fig. 9 (b3). Sparse dead pixels can be ignored or fixed with interpolation, but a large cluster of dead pixels can only be addressed by replacing the gel.
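Metric IV is a simple thresholded pixel count; the darkness threshold below is an assumed value:

```python
import numpy as np

DEAD_THRESHOLD = 10  # intensity below which a pixel counts as dead (assumed)

def gel_condition(noncontact_img):
    """Metric IV: fraction of dead (near-black) pixels in a
    non-contact frame."""
    img = np.asarray(noncontact_img)
    dead = img < DEAD_THRESHOLD
    return dead.mean()
```

Tracking this fraction over grasps gives a direct indicator of when the paint layer, and hence the gel, needs replacement.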
VI Conclusions and Future Work
In this paper, we present a compact integration of a visual-tactile sensor in a robotic phalange. Our design features: a gel covered with a textured fabric skin that improves durability and contact signal strength; a compact integration of the GelSight sensor optics; and improved illumination over a larger tactile area. Despite the improved wear resistance, the sensor still ages with use. We propose four metrics to track this aging process and create a calibration framework to regularize sensor output over time. We show that, while the sensor degrades minimally over the course of several thousand grasps, the digital calibration procedure is able to condition the sensor output to improve its usable life span.
Sensor Functionality. The sensor outputs images of tactile imprints that encode the shape and texture of the object at contact. For example, contact geometry in pixel space could be combined with knowledge of grasping force and gel material properties to infer local 3D object geometry. If markers are placed on the gel surface, marker flow can be used to estimate object hardness or shear forces. These quantities, as well as the sensor's calibrated image output, can be used directly in model-based or learning-based approaches to robot grasping and manipulation. This information could be used to track object pose, inform a data-driven classifier to predict grasp stability, or serve as real-time observations in a closed-loop regrasp policy.
Applications in robotic dexterity. We anticipate that GelSlim's unique form factor will facilitate the use of these sensing modalities in a wide variety of applications, especially in cluttered scenarios where visual feedback is lacking, where access is limited, or where difficult-to-observe contact forces play a key role. We are especially interested in using real-time contact geometry and force information to monitor and control tasks that require in-hand dexterity and reactivity, such as picking up a tool in a functional grasp and then using it, or grasping a nut and screwing it onto a bolt. Ultimately, these contact-rich tasks can only be robustly tackled with a tight integration of sensing and control. While the presented solution is just one path forward, we believe that high-resolution tactile sensors hold particular promise.
-  Z. Kappassov, J.-A. Corrales, and V. Perdereau, “Tactile sensing in dexterous robot hands,” Robotics and Autonomous Systems, vol. 74, pp. 195–220, 2015.
-  C. Chorley, C. Melhuish, T. Pipe, and J. Rossiter, “Development of a tactile sensor based on biologically inspired edge encoding,” in ICAR. IEEE, 2009, pp. 1–6.
-  R. Li, R. Platt, W. Yuan, A. ten Pas, N. Roscup, M. A. Srinivasan, and E. Adelson, “Localization and manipulation of small parts using GelSight tactile sensing,” in IROS. IEEE, 2014, pp. 3988–3993.
-  S. Dong, W. Yuan, and E. Adelson, “Improved GelSight tactile sensor for measuring geometry and slip,” IROS, 2017.
-  A. Zeng, K.-T. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez, and J. Xiao, “Multi-view Self-Supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge,” in ICRA. IEEE, 2017, pp. 1386–1383.
-  A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, N. Fazeli, F. Alet, N. Chavan-Dafle, R. Holladay, I. Morona, P. Q. Nair, D. Green, I. Taylor, W. Liu, T. Funkhouser, and A. Rodriguez, “Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching,” in ICRA. IEEE, 2018.
-  N. Correll, K. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. Romano, and P. Wurman, “Analysis and observations from the first amazon picking challenge,” T-ASE, vol. 15, no. 1, pp. 172–188, 2018.
-  R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, “Tactile sensing from humans to humanoids,” IEEE T-RO, vol. 26, no. 1, pp. 1–20, 2010.
-  M. Ohka, Y. Mitsuya, K. Hattori, and I. Higashioka, “Data conversion capability of optical tactile sensor featuring an array of pyramidal projections,” in MFI. IEEE, 1996, pp. 573–580.
-  K. Kamiyama, K. Vlack, T. Mizota, H. Kajimoto, K. Kawakami, and S. Tachi, “Vision-based sensor for real-time measuring of surface traction fields,” CG&A, vol. 25, no. 1, pp. 68–75, 2005.
-  N. J. Ferrier and R. W. Brockett, “Reconstructing the shape of a deformable membrane from image data,” IJRR, vol. 19, no. 9, pp. 795–816, 2000.
-  B. Ward-Cherrier, N. Rojas, and N. F. Lepora, “Model-free precise in-hand manipulation with a 3d-printed tactile gripper,” RA-L, vol. 2, no. 4, pp. 2056–2063, 2017.
-  A. Yamaguchi and C. G. Atkeson, “Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables,” in Humanoids. IEEE, 2016, pp. 1045–1051.
-  M. K. Johnson and E. Adelson, “Retrographic sensing for the measurement of surface texture and shape,” in CVPR. IEEE, 2009, pp. 1070–1077.
-  M. K. Johnson, F. Cole, A. Raj, and E. H. Adelson, “Microgeometry capture using an elastomeric sensor,” in TOG, vol. 30, no. 4. ACM, 2011, p. 46.
-  W. Yuan, S. Dong, and E. H. Adelson, “GelSight: High-resolution robot tactile sensors for estimating geometry and force,” Sensors, vol. 17, no. 12, p. 2762, 2017.
-  G. Izatt, G. Mirano, E. Adelson, and R. Tedrake, “Tracking objects with point clouds from vision and touch,” in ICRA. IEEE, 2017.
-  R. Calandra, A. Owens, M. Upadhyaya, W. Yuan, J. Lin, E. H. Adelson, and S. Levine, “The feeling of success: Does touch sensing help predict grasp outcomes?” arXiv preprint arXiv:1710.05512, 2017.
-  B. Ward-Cherrier, N. Pestell, L. Cramphorn, B. Winstone, M. E. Giannaccini, J. Rossiter, and N. F. Lepora, “The tactip family: Soft optical tactile sensors with 3D-printed biomimetic morphologies,” Soft Robotics, 2018.
-  W. Yuan, M. A. Srinivasan, and E. Adelson, “Estimating object hardness with a GelSight touch sensor,” in IROS. IEEE, 2016.
-  W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson, “Measurement of shear and slip with a GelSight tactile sensor,” in ICRA. IEEE, 2015, pp. 304–311.
-  F. R. Hogan, M. Bauza, O. Canal, E. Donlon, and A. Rodriguez, “Tactile regrasp: Grasp adjustments via simulated tactile transformations,” arXiv preprint arXiv:1803.01940, 2018.