Effective Estimation of Contact Force and Torque for Vision-based Tactile Sensor with Helmholtz-Hodge Decomposition

Retrieving rich contact information from robotic tactile sensing has been a challenging yet significant task for effective perception of the properties of objects that a robot interacts with. This work develops an algorithm to estimate contact force and torque for vision-based tactile sensors. We first introduce observations of the contact deformation patterns of hyperelastic materials under ideal single-axial loads in simulation. Based on these observations, we propose a method for estimating surface forces and torque from the contact deformation vector field with the Helmholtz-Hodge Decomposition (HHD) algorithm. Extensive calibration and baseline-comparison experiments follow to verify the effectiveness of the proposed method in terms of prediction error and variance. The proposed algorithm is further integrated into a contact force visualization module as well as a closed-loop adaptive grasp force control framework, and is shown to be useful both in visualizing contact stability and in a minimum-force grasping task.


I Introduction

Tactile sensing has been investigated and proven to play critical roles in human interaction with the environment. For a robotic system, a tactile sensor is likewise a key component of its perception system, especially in contact-rich manipulation tasks. However, tactile sensing technologies remain relatively unexplored compared with the great attention paid to visual perception principles and algorithms over recent decades, despite their complementary role to visual sensing in robotic scene perception.

The past few decades have seen the emergence of various types of tactile sensors with different transducing principles, including capacitive, piezoelectric, piezoresistive, magneto-electric, etc. [1]. Recently, vision-based tactile sensors have been thriving and appearing in various robotic systems, with the advantages of easy fabrication, high resolution, and multi-axial deformation sensing capability, e.g. Gelforce [2], FingerVision [3], Gelsight [4] and the more compact Gelslim [5]. In our previous work, we developed a vision-based tactile sensor also called FingerVision [6] (the name FingerVision was first introduced in [3]), which proved effective in a slip detection task. In this work, we aim at further exploiting its capability of recovering contact force and torque from the displacement field.

Fig. 1: FingerVision tactile sensor. (a). Rendered 3D model (in cutaway view). (b). Sensor prototype (details are referred to [6]). (c-e). Raw image obtained from the sensor, image with tracked displacement vectors, and image with grid-interpolated displacement vectors overlaid on top, respectively.

There are various ways to encode tactile signals, among which contact force and torque estimation from raw tactile information is of special interest, for it directly relates to the statics or dynamics of the object during the interaction. For instance, humans' intuitive feeling of finger skin traction and pressure and their estimation of objects' centers of mass greatly enhance the success rate of dexterous, dynamic manipulation. Similarly, in a robotic system, accurate force feedback helps the robot capture the motion of the object and state transitions, including contact making, slipping and contact breaking. It thereby endows robots with the capability of assessing grasp stability, which is essential for the successful execution of complex manipulation tasks.

For the FingerVision sensor we developed in [6], reprinted in Fig. 1, the sensing body is a clear elastomer with embedded black markers used as vision tracking features. The marker displacement vectors are seen as a grid sampling of the deformation in the elastomer layer. When external force and torque are applied, deformation occurs in the hyperelastic body of FingerVision following continuum mechanics, and the deformation fields show characteristic patterns under specific single-axial surface forces and torques. Given this hint, decomposing the raw displacement vector field into multiple separated vector fields with specific patterns would potentially help decouple the deformation under multi-axial loads. However, the displacement field patterns are also correlated with the shape of the contact area and affected by nonlinear deformation induced by large contact forces and torques, as well as force and torque interference between axes. Thus, evaluation of the method's generalization capability and a proper selection of the working range are necessary.

In this paper, our goal is to effectively recover the contact surface force and torque from vision-based tactile sensors. Toward this target, our work makes the following contributions:

  • Introduction of the displacement field patterns of the elastomer of a vision-based tactile sensor under single-axial forces, together with their quantitative properties.

  • Proposal of a method, based on the Helmholtz-Hodge Decomposition algorithm, to decompose the displacement field of vision-based sensors into components that can be further used to estimate contact force and torque. The proposed method is data-efficient and requires only low model complexity for regression.

The rest of this paper is arranged in the following structure: Section II introduces previous works related to force estimation methods for tactile sensors. In section III, we explain the patterns observed in simulation and formulate mapping functions from vector fields of specific patterns to the corresponding contact forces. Afterwards, we propose that the HHD algorithm can be used to decompose the displacement vector field into components with similar patterns, leading to the estimation of contact force and torque. In section IV, extensive characterization experiments and comparisons to state-of-the-art methods show the effectiveness of the proposed method. In section V, we integrate the proposed method into a contact stability visualization and grasping force feedback control framework. Finally, the discussion and conclusion are drawn in section VI.

II Related Works

II-A Tactile Sensors and Force Measurement

Vision-based tactile sensors attract increasing attention for their capability of sensing multi-modal contact information in addition to their superior sensing resolution, including deformation [2][3], object texture [7, 8, 9], contact area estimation [5], geometry reconstruction [7][10] and force estimation [2][3][10][11]. Besides, vision-based tactile sensors have been shown to perform well in high-level tasks like object recognition [9], localization of dynamic objects [12], and slip detection [4][6][8][13]. Surface deformation serves as a basic signal modality underlying the higher-level information in these sensing systems.

Fig. 2: Applied contact force configurations in simulation. (a). Normal force distributed uniformly. (b). Unidirectional tangential force distributed uniformly. (c). Torsional force along normal axis. (d). Combination of tangential, normal, and torsional forces.

Since the contact deformation is only one of the intermediate states in the robotic manipulation feedback loop, researchers have been putting effort into developing methods for recovering contact forces from tactile sensors. Generally, the contact pressure distribution is relatively easy to extract for traditional capacitive or piezoelectric tactile arrays [1] or for sensors utilizing the total internal reflection (TIR) principle as presented in [14]. However, multi-axial force estimation is much more challenging by comparison. Ohka et al. [15] presented a tactile sensor made of a rubber layer and a pyramid-shaped indenter on an acrylic plate that was able to capture, with a camera, changes in the indentations of the pyramid array into the rubber skin. From the changes in the indentation areas, they successfully predicted three-axial contact forces. Sato et al. [2] fabricated a vision-based sensor called Gelforce with double-layer markers in different colors as tracking targets, which enables the measurement of motion along the surface normal via tracking the movement differences between markers in the two layers. Based on an observational method and calibration, multi-axis force could be extracted from this complex fingertip-shaped sensor. However, the calibration procedures were specifically designed for the sensor making contact with probe-shaped objects, and generalization tests with different contact objects were not performed. Vogt et al. [16] built a microfluid-based flexible skin that can detect and differentiate normal and shear force, whereas the system suffered from a slow response time unsuitable for robotic scenarios. In addition, the microfluid-based sensor could only estimate force and was inferior in multi-modality sensing compared with vision-based tactile sensors.

Neural networks have shown their usefulness in recovering contact forces for tactile sensors. Maria et al. [17] designed a tactile sensor using an array of paired light emitters and receivers that was able to capture deformation in a local region and infer contact forces with a trained neural network. In [18], a multi-layer neural network was utilized to map the marker displacement field to three-axial contact forces with a relatively low error on a Gelsight-like sensor. However, neural networks are usually not data-efficient and suffer from overfitting when only a small amount of data is available. Additionally, the above works did not discuss generalization performance on different contact objects. In our work, we start by observing the response patterns of the displacement field to different force and torque configurations and, based on these observations, decompose the vector field into components containing individual patterns to infer decoupled contact forces. This method significantly reduces the dimension of the deformation vector field and is shown to retain good invariance to different contact objects.

II-B Helmholtz-Hodge Decomposition

Helmholtz-Hodge Decomposition is commonly used in motion analysis, e.g. target tracking in computer vision and computational fluid motion analysis [19], acting as feature extraction to capture divergence sources, sinks and vortices of rotational motion in vector fields. HHD describes a vector field as the summation of a divergence-free, a curl-free, and a harmonic flow, with manually set boundary conditions imposed to obtain a unique solution. In [20], Bhatia et al. proposed a natural Helmholtz-Hodge Decomposition (nHHD) method enabling defect-free analysis for various boundary conditions with a data-driven method. In this work, we adopt nHHD to decompose our displacement field into separated components corresponding to the responses to specific external contact forces. We show by quantitative analysis that this tool is effective in recovering contact forces for the deformable medium used in most vision-based tactile sensors, although theoretical relations between the decomposed component patterns and the patterns observed in simulation have not been established yet.

III Method Description

Fig. 3: Displacement fields of elastomer body under three load configurations shown in Fig. 2(a-c).

Vision-based tactile sensors, such as Gelforce, FingerVision and Gelsight, make use of the deformation captured by the camera to infer contact forces by following hyperelastic continuum mechanics [2], by data fitting with calibration [17][18], or both combined [2]. For the analysis of hyperelastic deformation, the finite element method (FEM) is commonly used. FEM approximates the stress and strain response under external force, governed by continuum mechanics, with a finite number of nodes. To obtain an accurate result for the surface motion, it is common practice to increase the number of nodes with a proper meshing method, which increases the dimension of the stiffness matrix and might be too demanding for computation in real-time applications. In this work, we take advantage of the insight that the displacement fields of the elastomer show unique and consistent graphical patterns under different single-axial loads (normal, tangential, and torsional loads) in simulation. These patterns possess quantitative properties that can be leveraged to formulate the mapping from a vector field with patterns to contact forces.

III-A Behavior under Different Loads

For contact in reality, any surface traction comes in the form of contact friction, and thus tangential force would not exist without normal pressure being applied simultaneously. To explore the behavior of the displacement field under loads along different axes separately, we simulate a hyperelastic material in Abaqus. As shown in Fig. 2, within the circular region on the top (in red), a uniformly distributed normal force, tangential force and torsion along the surface normal (directions are shown with arrows) are applied, with the bottom faces fixed as boundary conditions. With these three configurations, typical simulation results are shown in Fig. 3. The displacement vector fields are obtained by interpolating on a fixed-spacing grid and rendered with colors coding the vectors' magnitudes.

Fig. 4: Decomposition result of simulated displacement field with the multi-axial loads. (a). Displacement vector field. (b). Curl-free component. (c). Divergence-free component. (d). Harmonic component.

Let $\mathbf{v}$ denote a displacement vector and $\mathbf{v}_i$ denote the displacement vector associated with position $\mathbf{x}_i$, $\mathbf{x}_i$ being the start of the vector. Assuming that the rotational centers of the configuration are known, let $\mathbf{a}^{rot}_{ij}$ be the arm of moment of vector $\mathbf{v}_i$ w.r.t. the $j$-th rotation center. Let $\mathbf{a}^{div}_i$ be the arm of moment w.r.t. the divergence center (the cross in Fig. 3(a)) and $\mathbf{a}^{c}_i$ be the arm of moment w.r.t. the contact center (the location of the vector with maximum magnitude, the cross in Fig. 3(b)). From the displacement vector fields in simulation, it is observed that three graphical patterns of divergence, unidirection and rotation are generated under normal, tangential and torsional forces, respectively. With these patterns in Fig. 3, we notice the following quantitative properties:

  • For pattern (a), the norm of the vector summation $\|\sum_{i=1}^{N} \mathbf{v}_i\|$ and the magnitude of the summation of moments w.r.t. the divergence center $\|\sum_{i=1}^{N} \mathbf{a}^{div}_i \times \mathbf{v}_i\|$ both yield small values, while the summation of vector norms $\sum_{i=1}^{N} \|\mathbf{v}_i\|$ gives a significantly larger value.

  • For pattern (b), the summation of moments w.r.t. the contact center $\|\sum_{i=1}^{N} \mathbf{a}^{c}_i \times \mathbf{v}_i\|$ yields a small magnitude, while the norm of the vector summation $\|\sum_{i=1}^{N} \mathbf{v}_i\|$ gives a larger value by comparison.

  • For pattern (c), the norm of the vector summation $\|\sum_{i=1}^{N} \mathbf{v}_i\|$ yields a small value, while the summation of moments w.r.t. the rotational centers $\sum_{j=1}^{M} \|\sum_{i=1}^{N} \mathbf{a}^{rot}_{ij} \times \mathbf{v}_i\|$ gives a much larger magnitude.

where $N$ is the number of vectors, and $M$ is the number of rotational centers of the vector field.

Assuming that an arbitrary vector field $V$ is composed of vector fields with these diverging ($V_d$), unidirectional ($V_u$), and rotational ($V_r$) patterns, and following the quantitative properties above, we have the formulations below

$S_n = \sum_{\mathbf{v}_i \in V_d} \|\mathbf{v}_i\|, \qquad S_t = \big\|\sum_{\mathbf{v}_i \in V_u} \mathbf{v}_i\big\|, \qquad S_r = \sum_{j=1}^{M} \big\|\sum_{\mathbf{v}_i \in V_r} \mathbf{a}^{rot}_{ij} \times \mathbf{v}_i\big\|$   (1)

where $S_n$, $S_t$ and $S_r$ are the summation of vector norms on $V_d$, the norm of the vector summation on $V_u$, and the total moments of vectors w.r.t. the rotational centers on $V_r$, respectively.

In reverse, the estimation of contact force and torque can follow the scheme of first computing $S_t$ from a given displacement field, then decomposing the field into $V_d$ and $V_r$ for the computation of $S_n$ and $S_r$ following Eq. (1). The problem then boils down to finding a suitable decomposition method.
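The three pattern statistics are straightforward to compute. The following NumPy sketch implements them for fields sampled as arrays of 2D vectors; the function and variable names are ours, not the paper's:

```python
import numpy as np

def cross2d(a, v):
    """Scalar z-component of the 2D cross product a x v, per row."""
    return a[:, 0] * v[:, 1] - a[:, 1] * v[:, 0]

def S_n(v_d):
    """Summation of vector norms over the diverging component V_d."""
    return float(np.linalg.norm(v_d, axis=1).sum())

def S_t(v_u):
    """Norm of the vector summation over the unidirectional component V_u."""
    return float(np.linalg.norm(v_u.sum(axis=0)))

def S_r(v_r, x, centers):
    """Total moment of the rotational component V_r w.r.t. its rotational
    centers; x holds the start positions of the vectors in v_r."""
    return float(sum(abs(cross2d(x - c, v_r).sum()) for c in centers))
```

On a symmetric diverging field the vector sum cancels, so S_t nearly vanishes while S_n stays large; on a pure rotational field S_t vanishes while S_r is large, matching the listed properties.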

III-B Decomposition Algorithm

The HHD method is widely used in flow physics analysis to gain insight into features such as critical points, divergence sources, sinks, rotational vortices and curl distributions [19]. H. Bhatia et al. [20] presented a natural HHD (nHHD) to tackle the data-dependent boundary condition selection problem. In our work, we adopt nHHD to compute the separated vector fields because, in 2D space, the displacements of the elastomer under torsional and normal loads in the simulation results have pattern representations similar to those of the divergence-free and curl-free fields decomposed by nHHD.

According to [20], considering a smooth displacement vector field $\mathbf{V}$ defined on a domain $\Omega \subset \mathbb{R}^n$ (e.g. $n = 2$ in the 2D case), we have

$\mathbf{V} = \mathbf{d} + \mathbf{r} + \mathbf{h}$   (2)

where $\mathbf{d}$ denotes the curl-free component ($\nabla \times \mathbf{d} = \mathbf{0}$), $\mathbf{r}$ is the divergence-free component ($\nabla \cdot \mathbf{r} = 0$) and $\mathbf{h}$ is harmonic ($\nabla \cdot \mathbf{h} = 0$, $\nabla \times \mathbf{h} = \mathbf{0}$). Eq. (2) is further transformed into Eq. (3) in the form of the gradients of two scalar potential functions $D$ and $R$:

$\mathbf{V} = \nabla D + \mathbf{J}\nabla R + \mathbf{h}$   (3)

where $\mathbf{d} = \nabla D$ and $\mathbf{r} = \mathbf{J}\nabla R$, with $\mathbf{J}$ being the $90^{\circ}$-rotation matrix. By applying the divergence and curl operations, we obtain the following Poisson equations

$\nabla^2 D = \nabla \cdot \mathbf{V}, \qquad \nabla^2 R = \nabla \times \mathbf{V}$   (4)

Therefore, Eq. (4) can be solved using Green's function in the domain $\Omega$ to obtain $D$ and $R$, and data-dependent boundary conditions are imposed to derive the harmonic component uniquely. For more implementation details of the solving process, the reader is referred to [20]. The rotational centers $\mathbf{x}^{rot}_j$ in $V_r$ are localized where the maxima and/or the minima of $R$ are achieved over the discrete domain of $\Omega$, if such extrema exist, and the arms of moment $\mathbf{a}^{rot}_{ij}$ of vector $\mathbf{v}_i$ can be obtained as Eq. (5) presents.

$\mathbf{a}^{rot}_{ij} = \mathbf{x}_i - \mathbf{x}^{rot}_j, \qquad \mathbf{x}^{rot}_j = \arg \operatorname{extremum}_{\mathbf{x} \in \Omega} R(\mathbf{x})$   (5)

where $R(\mathbf{x})$ is the value of the potential function $R$ at $\mathbf{x}$.
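For intuition, the two Poisson solves can be sketched with an FFT-based spectral method. This toy version assumes periodic boundaries and unit grid spacing, a strong simplification compared with the data-dependent natural boundary handling of nHHD [20], which is not reproduced here; all names are ours:

```python
import numpy as np

def hhd_2d(vx, vy):
    """Decompose a 2D vector field (H x W grids vx, vy) into curl-free (d),
    divergence-free (r), and harmonic (h) parts via FFT Poisson solves,
    assuming periodic boundaries and unit grid spacing."""
    H, W = vx.shape
    ky = 2 * np.pi * np.fft.fftfreq(H)
    kx = 2 * np.pi * np.fft.fftfreq(W)
    KX, KY = np.meshgrid(kx, ky)            # KX varies along axis 1, KY along axis 0
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                          # guard the zero (mean) mode

    Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
    div = 1j * (KX * Vx + KY * Vy)          # spectral divergence of V
    curl = 1j * (KX * Vy - KY * Vx)         # spectral scalar curl of V

    D = -div / k2                           # solve  lap D = div V
    R = -curl / k2                          # solve  lap R = curl V
    dx = np.fft.ifft2(1j * KX * D).real     # d = grad D        (curl-free)
    dy = np.fft.ifft2(1j * KY * D).real
    rx = np.fft.ifft2(-1j * KY * R).real    # r = J grad R      (divergence-free)
    ry = np.fft.ifft2(1j * KX * R).real
    hx, hy = vx - dx - rx, vy - dy - ry     # harmonic remainder
    return dx, dy, rx, ry, hx, hy
```

Feeding a pure gradient field returns it entirely in the curl-free part, and a pure rotational field in the divergence-free part, which is the behavior the pipeline relies on.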

Fig. 5: Contact force and torque computation pipeline.

The result of HHD for the simulated displacement field under multi-axial loads is given in Fig. 4. Although the patterns in the separated components are not identical to those shown in Fig. 3 in terms of the distribution of vector magnitudes, the calculation of $S_n$, $S_t$ and $S_r$ remains valid according to Eq. (1). Combining the procedure proposed in section III-A with the nHHD algorithm, we obtain the procedure for computing $S_n$, $S_t$ and $S_r$: $S_t$ is obtained from the raw displacement field following Eq. (1); in parallel, the raw displacement field is fed into the HHD module to generate the two fields of interest, the curl-free and divergence-free fields, from which $S_n$ and $S_r$ are calculated with Eq. (1) and Eq. (5). The calculation scheme is illustrated in Fig. 5.

Let the mappings from $S_n$, $S_t$ and $S_r$ to the contact normal force $F_n$, tangential force $F_t$ and torque along the surface normal $T_r$ be functions $g_n$, $g_t$ and $g_r$, respectively, which yield the estimations of contact force and torque in Eq. (6). With the significant dimensionality reduction from the tactile displacement vector field to $S_n$, $S_t$ and $S_r$, it can be expected that the complexity of the model needed to predict contact force and torque from the decomposed results is much lower than that needed for the raw displacement field.

$\hat{F}_n = g_n(S_n), \qquad \hat{F}_t = g_t(S_t), \qquad \hat{T}_r = g_r(S_r)$   (6)

IV Experiments and Evaluation

This section describes the characterization experiments for the proposed decomposition algorithm, including mapping function calibration and a baseline comparison evaluating the advantages of the decomposition method over methods that take the raw tactile displacement field as input.

IV-A Mapping Function Calibration

Calibration is performed to find the mapping functions $g_n$, $g_t$ and $g_r$. Here we choose regression with a small amount of data, taking advantage of the dimensionality reduction that our algorithm realizes. We collect force and torque data using the highly accurate ATI Nano17 Force/Torque sensor depicted in Fig. 6(a), which is installed on and driven by a UR10 robot arm. A simple 3D-printed gripper is mounted on the tool side of the Force/Torque sensor and used as a fixture for objects. To examine the consistency of the mappings and their generalization capability over a wide range of objects with different shapes, sizes, textures, hardness and elasticity, 6 objects with these variances are selected, as given in Fig. 6(b). When collecting the dataset, objects are tightly grasped by the gripper and pressed onto the sensing surface of the tactile sensor. Typical tactile deformation images are shown in Fig. 6(c), with the corresponding object labels.

Fig. 6: Experimental setup of data collection for calibration and baseline comparison. (a). Tactile image and contact force/torque collection with robot-arm-driven fixture fixing object to make contact on the tactile sensor. (b). 6 objects for contact making. (c). Examples of contact images for 6 objects.
Fig. 7: Calibration data and fitting results. Data collected using different objects are scattered in different colors. The data regression methods include the RANSAC linear model and MLP regression. From left to right, the charts are $F_n$ vs. $S_n$, $F_t$ vs. $S_t$ and $T_r$ vs. $S_r$.

As for the size of the calibration dataset, a total of 300 data points are collected, 50 for each object. For every object, the motions along the surface normal and tangent include pressing, surface dragging and twisting with randomized distances and angles in every data collection trial. The ranges of these randomized distances and angles are carefully adjusted to fit the working range of the sensor without damaging or excessively wearing the elastomer.

The calibration data is presented in Fig. 7; the calculations of $S_n$, $S_t$ and $S_r$ use the pipeline in Fig. 5. Qualitatively, the linearity of the data is strong in the selected working range, which validates the dimensionality reduction and guarantees low complexity for models approximating the distribution of the data. Besides, we notice that for the normal force, the data distribution is less concentrated than for the other two sets of data. This is ascribed to the inability of the monocular camera inside the sensor to capture the markers' motion along the sensor surface normal, the direction in which the deformation of the elastomer balances a large portion of the external normal force. As a result, only the divergence motion in the 2D plane is used for calculating the normal force, leading to a larger variance in the distribution of the normal force data subset.

Two models with low complexity are fitted to the three sets of data. First, a linear model with the RANSAC outlier-rejection algorithm is chosen, considering that some outliers exist in the data; for example, some of the measured normal forces have negative values, which is impossible in common cases. RANSAC iteratively chooses the group of inliers that leads to the lowest regression error. The second model is a three-layer multi-layer perceptron (MLP), as used in previous works [17][18]. Since the underlying distribution is of relatively low dimension, MLP regression can also achieve good performance when a small model is applied to a small amount of data. As shown in Fig. 7, the RANSAC linear model and the MLP model produce close predictions, except that RANSAC better captures the underlying distribution in the normal force case by rejecting outliers from object 1 and object 6.
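Both regressors are available in scikit-learn. The snippet below fits them to synthetic stand-in data (not the paper's calibration set) with a few injected outliers to show RANSAC's rejection behavior; all numeric values are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for one calibration chart (e.g. F_t vs. S_t):
# roughly linear in the working range, with a few injected outliers.
S = rng.uniform(0.0, 10.0, size=200)
F = 0.8 * S + rng.normal(0.0, 0.05, size=200)
F[:5] += 4.0                                    # outliers, e.g. biased readings

X = S.reshape(-1, 1)
ransac = RANSACRegressor(LinearRegression(),
                         residual_threshold=0.5, random_state=0).fit(X, F)
mlp = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, F)

slope = float(ransac.estimator_.coef_[0])       # near 0.8: outliers rejected
```

The RANSAC fit ignores the shifted points entirely, while the small MLP absorbs them into a smooth curve, mirroring the qualitative difference visible in Fig. 7.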

IV-B Baseline Comparison

A baseline comparison is given in this section. Regarding the prediction performance of contact force and torque based on $S_n$, $S_t$ and $S_r$ calculated from the decomposed components versus the raw displacement vector fields, three regression models are compared with the root-mean-square error (RMSE) metric. For the decomposed 1D data, a RANSAC linear model and an MLP with a three-layer structure and 10 hidden units are adopted. For input vectors without decomposition, a significantly larger MLP regressor with a five-layer structure is used.

Both MLP models are fully trained with the L-BFGS optimizer. 6-fold cross validation with splits of the data from different objects is used to evaluate the overall performance of the models as well as biases toward certain objects. The results are shown in Table I. The RANSAC linear model excels in terms of the mean RMSE of the prediction for the normal and tangential force cases, whereas the MLP regressor on the 1D data performs better in terms of prediction variance and slightly better in mean RMSE for the torsional case. This could be attributed to the outlier-rejection mechanism of the RANSAC linear model sustaining the disturbance of noise, which leads to lower average prediction errors. As expected, all three models give a larger RMSE in the normal force case when evaluated on data collected with object 6 after being trained on the other 5 objects during cross validation. However, the combination of the raw displacement vectors with the complex MLP gives a lower variance in this case, showing more consistent performance across different objects and under noise. In summary, linear models with decomposition capture the underlying distribution better given the small amount of available data, while the MLP without decomposition can be expected to improve greatly if a large dataset is collected.
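The object-wise cross-validation protocol can be sketched in plain NumPy; the data below is a synthetic stand-in, and a least-squares linear model replaces the RANSAC/MLP regressors for brevity:

```python
import numpy as np

def leave_one_object_out_rmse(X, y, obj_ids):
    """Cross validation where each fold holds out every sample of one object,
    mirroring the 6-fold object-wise protocol; a least-squares linear model
    stands in for the RANSAC/MLP regressors."""
    rmses = {}
    for obj in np.unique(obj_ids):
        test = obj_ids == obj
        A = np.c_[X[~test], np.ones((~test).sum())]   # design matrix with bias
        w, *_ = np.linalg.lstsq(A, y[~test], rcond=None)
        pred = np.c_[X[test], np.ones(test.sum())] @ w
        rmses[int(obj)] = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return rmses
```

Reporting one RMSE per held-out object, as in Table I's mean/standard-deviation summary, exposes biases toward particular contact objects rather than averaging them away.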

RMSE                            Decomposition                No decomposition
Method                  RANSAC Linear   MLP Regression       MLP Regression
Model complexity              2               10                    -
Normal (N)       Mean       2.952           3.286                 4.482
                 Stdv       2.584           2.497                 1.295
Tangential (N)   Mean       0.241           0.242                 1.544
                 Stdv       0.033           0.032                 0.813
Torsional (Nmm)  Mean       5.862           5.621                 6.769
                 Stdv       1.547           1.353                 3.621
TABLE I: RMSE of different methods for estimating the contact force and torque based on the decomposed deformation vector fields or the raw deformation field.

V Grasping Tasks

Fig. 8: Contact force signals under multiple sliding motion trials. The upper right chart is generated by measuring the ratios $F_t/F_n$ at the peaks of $F_t$ (as marked by blue circles). The lower right chart is the ratio inside the window delineated by the purple dashed rectangle.
Fig. 9: Adaptive grasping force control experiment. (a). Robotiq 2-finger 140 gripper with FingerVision as fingertips holding an object and then the object is pressed by hand till slip occurs. (b). Gripper with FingerVision holds an object and then the tangential load is increased/decreased by loading and unloading weights on top of the object.

In this section, the effectiveness of the proposed contact force and torque estimation method for vision-based sensors is tested in grasping tasks. Sensing and visualization of contact information, as well as adaptive control under external disturbances, have been challenging tasks in robotic manipulation. Situations become even more complex when soft contact is introduced, which brings nonlinearity into the deformation. Fig. 8 shows tangential and normal force signals during multiple surface sliding trials (data collected by the ATI Nano17 Force/Torque sensor). It is noticed from the chart on the upper right of Fig. 8 that the friction coefficient (equal to the ratio of tangential to normal force when the tangential force reaches each peak, as marked by blue circles) does not remain constant under different normal forces, which is one of the significant differences between hyperelastic contact and rigid contact. It also shows that within each trial of surface sliding, the ratio follows a similar evolution: the ratio first rises; once it reaches the maximum static friction coefficient, the ratio vibrates in a narrow band; the ratio drops afterwards, suggesting the occurrence of shear slip, as exhibited in the lower right chart of Fig. 8.

V-A Grasping Stability Visualization

Taking the behavior of the ratio $F_t/F_n$ during slip phases into consideration, we implement a visualization system for monitoring grasping force and slip, as shown in Fig. 10. Since the friction coefficient is not constant, we take the average of the friction coefficients across the working range of normal force as the nominal value for simplification and visually illustrate the friction cones [21] with this coefficient. The FingerVision sensors are installed on a Robotiq 2-finger 140 gripper, serving as fingertips and sensing units, mimicking human fingertips. With the force and torque estimation module, we illustrate transitions of contact phases by classifying the spaces where the contact force vectors reside w.r.t. the friction cones. As given in Fig. 10(a), contact statuses are classified into 4 phases: 1) Stable contact; 2) Incipient slip; 3) Slipping; 4) Recovery, when the force vector is regulated back into the yellow or green regions. In Fig. 10(b), contact forces are shown as arrows in green (when the vectors are within the friction cones) and red (outside the friction cones). The capability of indicating contact phases is beneficial to grasp reconfiguration for a stable grasp.
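A minimal classifier for the four contact phases can be written from an estimated tangential/normal force pair and the nominal friction coefficient; the band width and thresholds below are illustrative choices of ours, not the paper's calibrated values:

```python
def contact_phase(f_t, f_n, mu, band=0.1, prev_phase="stable"):
    """Classify the contact phase from estimated tangential/normal forces,
    mirroring the four regions around the friction cone boundary:
    stable -> incipient slip -> slipping -> recovery.
    `mu` is the nominal friction coefficient; `band` sets the width of the
    incipient-slip margin inside the cone (illustrative values)."""
    if f_n <= 1e-6:
        return "slipping"                     # contact (nearly) broken
    ratio = f_t / f_n
    if ratio < mu * (1.0 - band):
        # well inside the cone: stable, or recovery if we were just slipping
        return "recovery" if prev_phase == "slipping" else "stable"
    if ratio < mu:
        return "incipient_slip"               # near the cone boundary
    return "slipping"                         # outside the friction cone
```

Tracking the previous phase distinguishes the recovery region (force regulated back inside the cone after slip) from an ordinary stable contact, as in the color-coded visualization of Fig. 10.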

V-B Feedback Control of Grasping Force

In-hand manipulation of objects usually requires minimal grasping forces, because the contact condition keeps switching between unstable and stable statuses, e.g. a pen rolling in the human hand. When picking up fragile objects, power grasps also need to be avoided. Thus, adaptive control of the grasping force is critical in many scenarios. Here we implement a simple feedback controller that takes in the contact force estimation and maintains the ratios in a band on the peripheries of the friction cones (visualized with the nominal friction coefficients as described previously). Details of the controller are given in Algorithm 1; variables with subscripts $l$ and $r$ belong to the left and right contacts. A conservative control strategy is implemented in our work: to maintain the contact forces in the vicinity of the friction cone margins, the gripper decreases its opening if both the left and right forces exceed the upper limit of the band around the cones, and increases its opening while both are lower than the lower limit.

Input: Contact forces $(F_{t,l}, F_{n,l})$, $(F_{t,r}, F_{n,r})$; gripper opening $d$;
         band width $w$; friction coefficient $\mu$.
      Output: Gripper requested opening $d^{*}$.

1:Initialize $d^{*} \leftarrow d$, with step size $\Delta d$
2:while True do
3:     $\rho_l \leftarrow F_{t,l}/F_{n,l}$, $\rho_r \leftarrow F_{t,r}/F_{n,r}$
4:     if $\rho_l > \mu + w$ and $\rho_r > \mu + w$ then
5:         $d^{*} \leftarrow d^{*} - \Delta d$
6:     end if
7:     if $\rho_l < \mu - w$ and $\rho_r < \mu - w$ then
8:         $d^{*} \leftarrow d^{*} + \Delta d$
9:     end if
10:end while
Algorithm 1 Grasping force controller
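One iteration of the conservative strategy described above can be sketched as follows; the parameter names and numeric defaults are ours and purely illustrative, not the paper's calibrated values:

```python
def grasp_force_step(ft_l, fn_l, ft_r, fn_r, opening,
                     mu=0.8, band=0.05, step=0.2):
    """One step of the conservative grasp-force controller: tighten when both
    contacts push outside the friction-cone band, loosen when both sit below
    it; mu, band and step are illustrative, not calibrated values."""
    rho_l = ft_l / max(fn_l, 1e-6)        # tangential/normal ratio, left
    rho_r = ft_r / max(fn_r, 1e-6)        # tangential/normal ratio, right
    if rho_l > mu + band and rho_r > mu + band:
        opening -= step                   # close gripper -> raise normal force
    elif rho_l < mu - band and rho_r < mu - band:
        opening += step                   # open gripper -> minimal-force grasp
    return opening
```

Requiring both contacts to leave the band before acting is what makes the strategy conservative: a disturbance at a single fingertip does not trigger a gripper motion.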
Fig. 10: Schematic diagram of contact phases and visualization for experiment in Fig. 9(a). Force signals are plotted in Fig. 11(a-b).

The controller performs well in maintaining stable contact with minimal grasping forces in the object holding experiment during loading and unloading of weights, which increase and decrease the tangential forces. As demonstrated in Fig. 11(c-d), with the controller active, there are no, or much shorter, periods of crossover (indicated as periods when the ratio $F_t/F_n$ rises dramatically, leading to contact slip). The ratio at the right-side fingertip recovers quickly from the crossover region (rendered in yellow in Fig. 11) due to the regulation of the force controller. The regulation process can also be seen in the visualization system, which keeps the contact force vectors around the margins of the friction cones. Without force control, there is an extended period of crossover during the loading process, and the gripper fails to maintain the contact forces inside the friction cones; the grasp fails if at least one contact breaks. It is worth noting that after slip happens on the surface, another crossover occurs due to the fact that the dynamic friction coefficient is lower than the static friction coefficient. The signals of the left and right sensors do not have exactly the same form, which could stem from variances in sensor fabrication and calibration, object alignment differences between the two contact surfaces, and the gripper pose not being exactly upright, which leads to imbalanced loads on the two fingertips.

Fig. 11: Grasping contact force signals under loads. Force x, y and z are the projections of the contact force onto the sensor surface coordinate system in Fig. 6. (a-b) Changes of contact forces for a manual press on the grasped object, with constant opening distance between the two fingertips; four contact phases are illustrated in different colors. (c-d) Contact force signals during loading and unloading, with the grasp force controller active. (e-f) Contact force signals during loading, without the grasp force controller; the unloading process is not shown since the contacts break, leading to grasp failure.

VI Conclusion

In this work, we develop a contact force and torque estimation method for vision-based tactile sensors using the Helmholtz-Hodge Decomposition (HHD). Starting from observations of the relations between contact force and torque and the marker displacement patterns, we establish a mapping from the decomposed HHD components to contact force and torque estimates. In the characterization experiments, the force and torque estimates show high linearity, allowing low-complexity models to achieve accurate predictions with modest amounts of training data. The proposed method is further tested in both contact stability visualization and grasping with adaptive force control to verify its effectiveness, and it shows potential for facilitating studies of grasp stability metrics. Future work will focus mainly on integrating the sensor and algorithm into a grasping system to predict higher-level physical information, including the object center of mass, object dynamics, and grasp stability.
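For intuition on the decomposition at the core of the method, the split of a 2D displacement field into curl-free and divergence-free parts can be sketched with an FFT projection. This is a generic periodic-boundary HHD for illustration only; the marker fields of the sensor have open boundaries, for which a natural HHD variant (as cited in the paper) is more appropriate.

```python
# Illustrative FFT-based Helmholtz decomposition of a 2D vector field
# on a periodic grid; not the paper's open-boundary implementation.
import numpy as np

def helmholtz_decompose(u, v):
    """Split (u, v) into curl-free and divergence-free components by
    projecting each Fourier mode onto the wavevector direction."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx).reshape(1, nx)
    ky = np.fft.fftfreq(ny).reshape(ny, 1)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                            # avoid dividing the zero mode
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    k_dot_f = kx * u_hat + ky * v_hat         # component along the wavevector
    cu = np.fft.ifft2(kx * k_dot_f / k2).real # curl-free (gradient) part
    cv = np.fft.ifft2(ky * k_dot_f / k2).real
    return (cu, cv), (u - cu, v - cv)         # (curl-free, divergence-free)
```

For a pure gradient field such as u = cos(2*pi*x/n), v = 0, the divergence-free output vanishes and the curl-free output reproduces the input.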
