I Introduction
Tactile sensing has been investigated and proven to play critical roles in human interaction with the environment. For a robotic system, a tactile sensor is likewise a key component of the perception system, especially in contact-rich manipulation tasks. However, tactile sensing technologies remain relatively unexplored compared with the great attention drawn to visual perception principles and algorithms in recent decades, despite their complementary role to visual sensing in robotic scene perception.
The past few decades have seen the emergence of various types of tactile sensors with different transducing principles, including capacitive, piezoelectric, piezoresistive, and magnetoelectric [1]. Recently, vision-based tactile sensors have been thriving and appearing in various robotic systems, with the advantages of easy fabrication, high resolution, and multi-axial deformation sensing capability, e.g. GelForce [2], FingerVision [3], GelSight [4], and the more compact GelSlim [5]. In our previous work, we developed a vision-based tactile sensor also called FingerVision [6] (the name FingerVision was first introduced in [3]), and it was proven effective in the slip detection task. In this work, we aim to further exploit the capability of recovering contact force and torque from the displacement field of the vision-based tactile sensor.
There are various ways to encode tactile signals, among which contact force and torque estimation from raw tactile information is of special interest, for it directly relates to the statics or dynamics of the object during interaction. For instance, humans' intuitive sense of finger skin traction and pressure, and their estimation of an object's center of mass, greatly enhance the success rate of dexterous, dynamic manipulation. Similarly, in a robotic system, accurate force feedback helps the robot capture the motion of the object and state transitions including contact making, slipping, and contact breaking. It thus endows robots with the capability of assessing grasp stability, which is essential for the successful execution of complex manipulation tasks.
For the FingerVision sensor we developed in [6], reprinted in Fig. 1, the sensing body is a clear elastomer with embedded black markers used as visual tracking features. The marker displacement vectors can be seen as a grid sampling of the deformation of the elastomer layer. Under external force and torque, deformation occurs in the hyperelastic body of FingerVision following continuum mechanics, and the deformation fields show corresponding patterns under specific single-axial surface forces and torques. Motivated by this observation, decomposing the raw displacement vector field into multiple separated vector fields with specific patterns is potentially helpful for decoupling the deformation under multi-axial loads. However, the displacement field patterns are also correlated with the shape of the contact area and are affected by nonlinear deformation induced by large contact forces and torques, as well as by interference of force and torque between axes. Thus, evaluation of the method's generalization capability and proper selection of the working range are necessary.
In this paper, our goal is to effectively recover the contact surface force and torque from vision-based tactile sensors. Toward this target, our work makes the following contributions:

Introduction of the displacement field patterns of the elastomer of the vision-based tactile sensor under single-axial forces, together with their quantitative properties.

Proposal of a method, based on the Helmholtz-Hodge Decomposition (HHD) algorithm, to decompose the displacement field of vision-based sensors into components that can be further used to estimate contact force and torque. The proposed method is data-efficient and requires only low model complexity for regression.
The rest of this paper is organized as follows: Section II introduces previous work related to force estimation methods for tactile sensors. In Section III, we explain the patterns observed in simulation and formulate mapping functions from vector fields with specific patterns to the corresponding contact forces. We then propose that the HHD algorithm can be used to decompose the displacement vector field into components with similar patterns, leading to estimation of contact force and torque. In Section IV, extensive characteristic experiments and comparisons to state-of-the-art methods show the effectiveness of the proposed method. In Section V, we integrate the proposed method into a contact stability visualization and grasping force feedback control framework. Finally, discussion and conclusion are drawn in Section VI.
II Related Works
II-A Tactile Sensors and Force Measurement
Vision-based tactile sensors have attracted increasing attention for their capability of sensing multimodal contact information, in addition to their superior sensing resolution, including deformation [2, 3], object texture [7, 8, 9], contact area estimation [5], geometry reconstruction [7, 10], and force estimation [2, 3, 10, 11]. Vision-based tactile sensors have also been shown to perform well in high-level tasks such as object recognition [9], localization of dynamic objects [12], and slip detection [4, 6, 8, 13]. In these sensing systems, surface deformation serves as the basic signal modality underlying the higher-level information above.
Since contact deformation is only one of the intermediate states in the robotic manipulation feedback loop, researchers have put effort into developing methods for recovering contact forces from tactile sensors. Generally, contact pressure distribution is relatively easy to extract for traditional capacitive and piezoelectric tactile arrays [1], or for sensors utilizing the total internal reflection (TIR) principle as presented in [14]. Multi-axial force estimation, however, is much more challenging by comparison. Ohka et al. [15] presented a tactile sensor made of a rubber layer with pyramid-shaped indenters on an acrylic plate, able to capture with a camera the changes of the indentations of the pyramid array into the rubber skin. From the changes of the indentation areas, they successfully predicted three-axial contact forces. Sato et al. [2] fabricated a vision-based sensor called GelForce with double-layer markers in different colors as tracking targets, which enables measurement of motion along the surface normal by tracking the movement differences between the markers in the two layers. Based on an observational method and calibration, multi-axis force could be extracted from this complex fingertip-shaped sensor. However, the calibration procedures were specifically designed for contact with probe-shaped objects, and generalization testing on different contact objects was not performed. Vogt et al. [16] built a microfluid-based flexible skin that can detect and differentiate normal and shear force, but the system suffered from a slow response time unsuitable for robotic scenarios. In addition, the microfluid-based sensor could only estimate force and was inferior in multimodal sensing compared with vision-based tactile sensors.
Neural networks have also proven useful for recovering contact forces from tactile sensors. De Maria et al. [17] designed a tactile sensor using an array of paired light emitters and receivers, able to capture deformation in a local region and infer contact forces with a trained neural network. In [18], a multi-layer neural network was used to map the marker displacement field to three-axial contact force with relatively low error on a GelSight-like sensor. However, neural networks are usually not data-efficient and suffer from overfitting when only a small amount of data is available. Moreover, the above works did not discuss generalization performance across different contact objects. In our work, we start by observing the response patterns of the displacement field under different force and torque configurations; based on these observations, we decompose the vector field into components containing the individual patterns to infer decoupled contact forces. This method significantly reduces the dimensionality of the deformation vector field and is shown to retain good invariance to different contact objects.
II-B Helmholtz-Hodge Decomposition
Helmholtz-Hodge Decomposition is commonly used in motion analysis, e.g. target tracking in computer vision and computational analysis of fluid motion [19], acting as a feature extractor that captures divergence sources, sinks, and vortices of rotational motion in vector fields. HHD describes a vector field as the sum of a divergence-free, a curl-free, and a harmonic flow, with manually set boundary conditions imposed to obtain a unique solution. In [20], Bhatia et al. proposed a natural Helmholtz-Hodge Decomposition (nHHD) method enabling defect-free analysis under various boundary conditions with a data-driven method. In this work, we adopt nHHD to decompose our displacement field into separated components corresponding to the responses to specific external contact forces. We show by quantitative analysis that this tool is effective in recovering contact forces for the deformable medium used in most vision-based tactile sensors, although theoretical relations between the decomposed component patterns and the patterns observed in simulation have not been established yet.

III Method Description
Vision-based tactile sensors, such as GelForce, FingerVision, and GelSight, make use of the deformation captured by the camera to infer contact forces following hyperelastic continuum mechanics [2], data fitting with calibration [17, 18], or both combined [2]. For the analysis of hyperelastic deformation, the finite element method (FEM) is commonly used. FEM approximates the stress and strain response under external force, governed by continuum mechanics, with a finite number of nodes. To obtain an accurate result for surface motion, it is common practice to increase the number of nodes with a proper meshing method, which increases the dimension of the stiffness matrix and may be too demanding for real-time applications. In this work, we take advantage of the insight that, in simulation, the displacement fields of the elastomer show unique and consistent graphical patterns under different single-axial loads (normal, tangential, and torsional). These patterns possess quantitative properties that can be leveraged to formulate mappings from vector fields with such patterns to contact forces.
III-A Behavior under Different Loads
For contact in reality, any surface traction arises from contact friction, so a tangential force cannot exist without normal pressure being applied simultaneously. To explore the behavior of the displacement field under loads along different axes separately, we simulate a hyperelastic material in Abaqus. As shown in Fig. 2, within the circular region on the top (in red), a uniformly distributed normal force, tangential force, and torsion about the surface normal (directions shown with arrows) are applied, with the bottom faces fixed as boundary conditions. Typical simulation results for these three configurations are shown in Fig. 3. The displacement vector fields are obtained by interpolating onto a fixed-spacing grid and rendered with colors coding the vectors' magnitudes.

Let u denote a displacement vector, and let u_i denote the displacement vector whose start is associated with position x_i. Assuming that the rotational centers of the configuration are known, let r_i be the arm of moment of vector u_i w.r.t. the rotation center; the arm of moment is defined analogously w.r.t. the divergence center (the cross in Fig. 3(a)) and w.r.t. the contact center (the location of the vector with maximum magnitude, the cross in Fig. 3(b)). From the displacement vector fields in simulation, it is observed that three graphical patterns, divergence, unidirection, and rotation, are generated under normal, tangential, and torsional forces, respectively. For these patterns in Fig. 3, we notice the following quantitative properties:
For pattern (a), the norm of the vector summation and the magnitude of the summation of moments w.r.t. the divergence center both yield small values, while the summation of vector norms gives a significantly larger value.

For pattern (b), the summation of moments w.r.t. the contact center yields a small magnitude, while the norm of the vector summation gives a larger value by comparison.

For pattern (c), the norm of the vector summation yields a small value, while the summation of moments w.r.t. the rotational center gives a much larger magnitude.

Here N is the number of vectors and M is the number of rotational centers of the vector field.
Assuming that an arbitrary vector field V is composed of vector fields with these diverging (V_d), unidirectional (V_u), and rotational (V_r) patterns, and following the quantitative properties above, we have the formulations below:

S_d = \sum_{i=1}^{N} \| u_i \|,  S_u = \| \sum_{i=1}^{N} u_i \|,  S_r = \sum_{j=1}^{M} \sum_{i=1}^{N} \| r_{ij} \times u_i \|,   (1)

where S_d, S_u, and S_r are the summation of vector norms on V_d, the norm of the vector summation on V_u, and the total moment of the vectors w.r.t. the rotational centers on V_r, respectively.
In reverse, estimation of contact force and torque can follow the scheme of first computing S_u for a given displacement field V, then decomposing V into V_d and V_r for the computation of S_d and S_r following Eq. (1). The problem thus boils down to finding a suitable decomposition method.
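As an illustrative sketch (the function and variable names are ours, not the paper's), the three scalar summaries above, i.e. the summation of vector norms, the norm of the vector summation, and the total moment about a center, can be computed from a sampled 2D field in a few lines:

```python
import numpy as np

def pattern_scalars(points, vectors, center):
    """Scalar summaries of a sampled 2D displacement field:
    sum_of_norms (summation of vector norms; large for a diverging field),
    norm_of_sum  (norm of the vector summation; large for a unidirectional field),
    total_moment (magnitude of total moment w.r.t. `center`; large for rotation)."""
    sum_of_norms = np.linalg.norm(vectors, axis=1).sum()
    norm_of_sum = np.linalg.norm(vectors.sum(axis=0))
    arms = points - center                                  # arms of moment
    # 2D cross product arms x vectors gives one signed scalar per sample.
    moments = arms[:, 0] * vectors[:, 1] - arms[:, 1] * vectors[:, 0]
    total_moment = abs(moments.sum())
    return sum_of_norms, norm_of_sum, total_moment

# Three idealized fields sampled on an 11 x 11 grid centered at the origin.
xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
div_field = pts.copy()                                  # diverging: u(x) = x
rot_field = np.stack([-pts[:, 1], pts[:, 0]], axis=1)   # rotational
uni_field = np.tile([0.3, 0.0], (len(pts), 1))          # unidirectional
```

For each idealized field, exactly one of the three summaries dominates while the other two vanish by symmetry, matching the properties listed for patterns (a) to (c).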
III-B Decomposition Algorithm
The HHD method is widely used in flow physics analysis to gain insight into features such as critical points, divergence sources, sinks, rotational vortices, and curl distributions [19]. H. Bhatia et al. [20] presented the natural HHD (nHHD) to tackle the data-dependent boundary condition selection problem. In our work, we adopt nHHD to compute the separated vector fields because, in 2D, the displacements of the elastomer under torsional and normal loads in simulation have pattern representations similar to those of the divergence-free and curl-free fields decomposed by nHHD.
According to [20], for the above smooth displacement vector field u defined on a domain in R^n (e.g. n = 2 in the 2D case), we have

u = d + r + h,   (2)

where d denotes the curl-free component (\nabla \times d = 0), r the divergence-free component (\nabla \cdot r = 0), and h the harmonic component (\Delta h = 0). Eq. (2) can further be written in the form of the gradients of two scalar potential functions D and R:

u = \nabla D + J \nabla R + h,   (3)

where d = \nabla D and r = J \nabla R, with J being the 90-degree rotation matrix. By applying the divergence and curl operators, we obtain the following Poisson equations:

\Delta D = \nabla \cdot u,  \Delta R = \nabla \times u.   (4)

Eq. (4) can be solved using Green's function on the domain to obtain D and R, and data-dependent boundary conditions are imposed to derive the harmonic component uniquely; for implementation details of the solving process, we refer the reader to [20]. The rotational centers x^+ and/or x^- in V_r are localized where the maximum and/or minimum of R are attained over the discrete domain, if such extrema exist, and the arm of moment of vector u_i is then obtained as Eq. (5) presents:

r_i^{\pm} = x_i - x^{\pm},  x^{\pm} = \arg\max_x R(x) / \arg\min_x R(x),   (5)

where R(x) is the value of the potential function at x.
The result of HHD for the simulated displacement field under multi-axial loads is given in Fig. 4. Although the patterns of the separated components are noticeably not identical to those shown in Fig. 3 in terms of the distribution of vector magnitudes, the calculation of S_d, S_u, and S_r remains valid according to Eq. (1). By combining the procedure proposed in Section III-A and the nHHD algorithm, we obtain the procedure to compute S_d, S_u, and S_r. S_u is obtained from the raw displacement field following Eq. (1); in parallel, the raw displacement field is fed into the HHD module to generate the two fields of interest, the curl-free and divergence-free fields, from which S_d and S_r are calculated with Eq. (1) and Eq. (5). The calculation scheme is illustrated in Fig. 5.
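To make the decomposition step concrete, the following sketch solves the two Poisson equations spectrally on a periodic grid. This is only a simplified stand-in: the nHHD of [20] imposes data-driven boundary conditions on a bounded domain, which a periodic FFT solve does not reproduce; all names here are illustrative.

```python
import numpy as np

def hodge_decompose(u, v):
    """Split a 2D field (u, v) sampled on a periodic grid into a curl-free
    part (gradient of a potential D) and a divergence-free part (rotated
    gradient of a potential R) by solving lap(D) = div and lap(R) = curl
    with FFTs. Periodic boundaries only; a stand-in for the nHHD of [20]."""
    ny, nx = u.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)        # spectral d/dx
    ky = 2j * np.pi * np.fft.fftfreq(ny)        # spectral d/dy
    KX, KY = np.meshgrid(kx, ky)
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    lap = KX**2 + KY**2
    lap[0, 0] = 1.0                             # avoid 0/0; zero-mean potentials
    D = (KX * U + KY * V) / lap                 # lap(D) = divergence of (u, v)
    R = (KX * V - KY * U) / lap                 # lap(R) = scalar curl of (u, v)
    D[0, 0] = R[0, 0] = 0.0
    curl_free = (np.fft.ifft2(KX * D).real, np.fft.ifft2(KY * D).real)
    div_free = (np.fft.ifft2(-KY * R).real, np.fft.ifft2(KX * R).real)
    return curl_free, div_free

# Demo: a field built from known potentials is recovered exactly.
N = 32
X, Y = np.meshgrid(np.arange(N), np.arange(N))
a = 2 * np.pi / N
u = -a * np.sin(2 * np.pi * X / N) + a * np.sin(2 * np.pi * Y / N)
v = np.zeros_like(u)
(cfx, cfy), (dfx, dfy) = hodge_decompose(u, v)
```

On the sensor, u and v would be the interpolated marker displacement grid; the divergence pattern then lives in the curl-free part and the rotation pattern in the divergence-free part.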
Let the mappings from S_d, S_u, and S_r to the contact normal force F_n, tangential force F_t, and torque about the surface normal T_z be the functions g_n, g_t, and g_r, respectively, which connect to the estimation of contact force and torque in Eq. (6). With the significant dimensionality reduction from the tactile displacement vector field to S_d, S_u, and S_r, the complexity of the model used to predict contact force and torque from the decomposed results can be expected to be much lower than that of a model using the raw displacement field.
F_n = g_n(S_d),  F_t = g_t(S_u),  T_z = g_r(S_r).   (6)
IV Experiments and Evaluation
This section describes the characteristic experiments for the proposed decomposition algorithm, including mapping function calibration and a baseline comparison that evaluates the advantages of the decomposition method over a method taking the raw tactile displacement field as input.
IV-A Mapping Function Calibration
Calibration is performed to find the mapping functions g_n, g_t, and g_r. Here we choose regression with a small amount of data, considering the dimensionality reduction our algorithm realizes. We collect force and torque data using a highly accurate force/torque sensor, as depicted in Fig. 6a: an ATI Nano17 is installed on and driven by a UR10 robot arm, and a simple 3D-printed gripper mounted on the tool side of the force/torque sensor serves as a fixture for objects. To examine consistency and generalization capability across a wide range of objects with different shapes, sizes, textures, hardness, and elasticity, 6 objects spanning these variations are selected, as shown in Fig. 6b. When collecting the dataset, objects are tightly grasped by the gripper and pressed onto the sensing surface of the tactile sensor. Typical tactile deformation images are shown in Fig. 6c with the corresponding object labels.
For the size of the calibration dataset, a total of 300 data points are collected, 50 for each object. For every object, the motions along the surface normal and tangent include pressing, surface dragging, and twisting with randomized distances and angles in every data collection trial. The ranges of these randomized distances and angles are carefully adjusted to fit the working range of the sensor without damaging or excessively wearing the elastomer.
The calibration data are presented in Fig. 7; the calculation of S_d, S_u, and S_r follows the pipeline in Fig. 5. Qualitatively, the linearity of the data is strong in the selected working range, which, together with the dimensionality reduction, guarantees low complexity for models approximating the data distribution. We also notice that the normal force data are less concentrated than the other two sets. This is ascribed to the monocular camera inside the sensor being unable to capture the markers' motion along the sensor surface normal, the direction in which the deformation of the elastomer balances a large portion of the external normal force. As a result, only the divergence motion in the 2D plane is used for calculating normal force, leading to a larger variance in the normal force data subset.
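The kind of robust 1D fitting used for such calibration data can be sketched with scikit-learn; the data here are synthetic and the parameters illustrative, including outliers that mimic the spurious negative normal force readings:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic 1D calibration feature vs. measured force: roughly linear in
# the working range, with a few gross outliers (e.g. negative readings).
s = rng.uniform(0, 10, size=120)
f = 0.8 * s + rng.normal(0.0, 0.1, size=120)
f[:6] = -2.0                                   # injected outliers

# Robust linear fit: RANSAC keeps the inlier consensus set.
ransac = RANSACRegressor(LinearRegression(), random_state=0)
ransac.fit(s[:, None], f)

# Small MLP (one hidden layer of 10 units, L-BFGS), fit on the same data.
mlp = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=5000, random_state=0)
mlp.fit(s[:, None], f)
```

Because the decomposed features are one-dimensional and nearly linear, both a robust linear model and a very small MLP suffice; the RANSAC fit is largely unaffected by the injected outliers.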
Two models with low complexity are fitted to the three sets of data. First, a linear model with the RANSAC outlier rejection algorithm is chosen, considering that the data contain some outliers; for example, some of the measured normal forces are negative, which is impossible in common cases. RANSAC iteratively chooses the group of inliers that leads to the lowest regression error. The second model is a three-layer multilayer perceptron (MLP) as used in previous works [17, 18]. Since the underlying distribution is of relatively low dimension, MLP regression can also perform well with a small model and a small amount of data. As shown in Fig. 7, the RANSAC linear model and the MLP model produce close predictions, except that RANSAC better captures the underlying distribution in the normal force case by rejecting outliers from object 1 and object 6.

IV-B Baseline Comparison
A baseline comparison is given in this section. Regarding the prediction of contact force and torque from S_d, S_u, and S_r calculated from the decomposed components, versus prediction from the raw displacement vector fields, three regression models are compared with the root-mean-square error (RMSE) metric. For fitting the decomposed 1D data, the RANSAC linear model and an MLP regressor with a three-layer structure and 10 hidden units are adopted. For the input vectors without decomposition, a significantly larger MLP regressor with a five-layer structure is used.
Both MLP models are fully trained with the L-BFGS optimizer. 6-fold cross validation, with the data split by object, is used to evaluate the overall performance of the models as well as biases toward certain objects. The results are shown in Table I. The RANSAC linear model excels in terms of mean RMSE for the normal and tangential force cases, whereas the MLP regressor on the 1D data performs better in terms of prediction variance and slightly better in mean RMSE for the torsional case. This can be attributed to the outlier rejection mechanism of the RANSAC linear model, which withstands noise disturbances and leads to lower average prediction errors. As expected, all three models give a larger RMSE in the normal force case when evaluated on data collected with object 6 after being trained on the other 5 objects during cross validation. However, the combination of the raw displacement vectors with the complex MLP gives a lower variance in this case, showing more consistent performance across different objects and under noise. In summary, linear models with decomposition capture the underlying distribution better given the small amount of available data, while the MLP without decomposition can be expected to improve considerably if a large dataset is collected.
TABLE I: RMSE of contact force and torque prediction

                           Decomposition                     No decomposition
  RMSE                     RANSAC linear   MLP (10 hidden)   MLP
  Normal (N)       Mean    2.952           3.286             4.482
                   Stdv    2.584           2.497             1.295
  Tangential (N)   Mean    0.241           0.242             1.544
                   Stdv    0.033           0.032             0.813
  Torsional (Nmm)  Mean    5.862           5.621             6.769
                   Stdv    1.547           1.353             3.621
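The per-object 6-fold evaluation above can be reproduced with grouped cross-validation; a sketch on synthetic data (all names and values illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
# 6 "objects" x 50 samples each, mirroring the calibration set size.
groups = np.repeat(np.arange(6), 50)
s = rng.uniform(0, 10, size=300)
f = 0.8 * s + rng.normal(0.0, 0.2, size=300)

# Each fold holds out all samples of one object, exposing biases toward
# particular contact geometries.
scores = cross_val_score(LinearRegression(), s[:, None], f,
                         groups=groups, cv=LeaveOneGroupOut(),
                         scoring="neg_root_mean_squared_error")
rmse_per_object = -scores            # one RMSE per held-out object
```

A large spread among the six per-object RMSE values would indicate poor generalization to unseen contact objects, which is exactly the failure mode observed for object 6 in the normal force case.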
V Grasping Tasks
In this section, the effectiveness of the proposed contact force and torque estimation method for vision-based sensors is tested in grasping tasks. Sensing and visualization of contact information, as well as adaptive control under external disturbances, have long been challenging tasks in robotic manipulation, and the situation is even more complex with soft contact, which introduces nonlinearity in deformation. Fig. 8 shows the tangential and normal force signals during multiple surface sliding trials (data collected by the ATI Nano17 force/torque sensor). The chart on the upper right of Fig. 8 shows that the friction coefficient (equal to the ratio of tangential to normal force when the tangential force reaches each peak, as marked by the blue circles) does not remain constant under different normal forces, which is one of the significant differences between hyperelastic contact and rigid contact. It also shows that within each sliding trial the ratio follows a similar evolution: it first rises; once it reaches the maximum static friction coefficient, it vibrates in a narrow band; it then drops, indicating the occurrence of shear slip, as exhibited in the lower right chart of Fig. 8.
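The peak-ratio reading of the friction coefficient described above can be sketched in a few lines; the force traces here are synthetic and the shapes illustrative:

```python
import numpy as np

def static_friction_estimate(f_t, f_n, eps=1e-6):
    """Apparent static friction coefficient from one sliding trial: the
    tangential/normal force ratio climbs until incipient slip, so its
    peak approximates mu_s for that normal load."""
    ratio = f_t / np.maximum(f_n, eps)       # guard against f_n near zero
    return ratio.max(), ratio

# Synthetic trial: constant normal load, tangential force ramps up and
# then collapses once the surface slips.
t = np.linspace(0.0, 1.0, 200)
f_n = np.full_like(t, 4.0)                   # 4 N normal load
f_t = np.where(t < 0.6, 5.0 * t, 2.0)        # ramp, then slip
mu_s, ratio = static_friction_estimate(f_t, f_n)
```

Repeating this over trials at different normal loads would reproduce the load-dependent coefficients marked by the blue circles in Fig. 8.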
V-A Grasping Stability Visualization
Taking the behavior of the tangential-to-normal force ratio during slip phases into consideration, we implement a visualization system for monitoring grasping force and slip, as shown in Fig. 10. Since the friction coefficient is not constant, we take the average of the friction coefficients across the working range of normal force as the nominal value for simplification and visually illustrate the friction cones [21] with this coefficient. The FingerVision sensors are installed on a Robotiq 2-finger 140 gripper, serving as fingertips and sensing units, mimicking human fingertips. With the force and torque estimation module, we illustrate the transitions of contact phases by classifying the spaces in which the contact force vectors reside w.r.t. the friction cones. As given in Fig. 10(a), contact statuses are classified into 4 phases: 1) stable contact; 2) incipient slip; 3) slipping; 4) recovery, when the force vector is regulated back into the yellow or green regions. In Fig. 10(b), contact forces are shown as arrows in green (when the vectors are within the friction cones) and red (outside the friction cones). The capability of indicating contact phases is beneficial for grasp reconfiguration toward a stable grasp.

V-B Feedback Control of Grasping Force
In-hand manipulation of objects usually requires minimal grasping forces, because the contact condition keeps switching between unstable and stable states, e.g. a pen rolling in a human hand; and when picking up fragile objects, power grasps also need to be avoided. Thus, adaptive control of the grasping force is critical in many scenarios. Here we implement a simple feedback controller that takes in the contact force estimation and maintains the tangential-to-normal force ratios within a band on the peripheries of the friction cones (visualized with the nominal friction coefficients as described previously). Details of the controller are given in Algorithm 1, in which variables with subscripts l and r belong to the left and right contacts. A conservative control strategy is implemented: to maintain the contact forces in the vicinity of the friction cone margins, the gripper decreases its opening when both the left and right forces exceed the upper limit of the band and increases its opening when both are below the lower limit.
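Algorithm 1 itself is not reproduced in this text; the sketch below captures only the conservative band-keeping rule described above, with the function name, band width, and step size being our assumptions:

```python
def grip_step(width, ratio_l, ratio_r, mu, band=0.1, step=0.5):
    """One control step for the gripper opening. ratio_l / ratio_r are the
    tangential-to-normal force ratios at the left / right contacts, and mu
    is the nominal friction coefficient. Close (decrease the opening,
    raising normal force) only when BOTH ratios exceed the upper band
    limit; open only when BOTH fall below the lower limit."""
    upper, lower = mu * (1.0 + band), mu * (1.0 - band)
    if ratio_l > upper and ratio_r > upper:
        return width - step       # squeeze: more normal force lowers ratios
    if ratio_l < lower and ratio_r < lower:
        return width + step       # relax: avoid an unnecessary power grasp
    return width                  # inside the band, or sides disagree: hold
```

Requiring agreement of both contacts before acting is what makes the strategy conservative: a transient disturbance on one fingertip alone does not move the gripper.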
The controller performs well in maintaining stable contact with minimal grasping forces in an object holding experiment, during which weights are loaded and unloaded, increasing and decreasing the tangential forces. As demonstrated in Fig. 11(c-d), with the controller active there are no, or much shorter, periods of crossover (periods when the tangential-to-normal force ratio rises dramatically, leading to contact slip). The ratio at the right fingertip recovers quickly from the crossover region (rendered in yellow in Fig. 11) due to the regulation of the force controller. The regulation process can also be seen in the visualization system, which keeps the contact force vectors around the margins of the friction cones. Without force control, there is an extended period of crossover during the loading process; the gripper fails to maintain the contact forces inside the friction cones, and the grasp fails if at least one contact breaks. It is worth noting that after slip happens on the surface, another crossover occurs because the dynamic friction coefficient is lower than the static friction coefficient. The signals of the left and right sensors do not have exactly the same form, which could stem from variances in sensor fabrication and calibration, differences in object alignment on the two contact surfaces, and the gripper pose not being exactly upright, leading to imbalanced loads on the two fingertips.
VI Conclusion
In this work, we develop a contact force and torque estimation method for vision-based tactile sensors using the Helmholtz-Hodge Decomposition (HHD). Starting from observations of the relations between contact force and torque and the marker displacement patterns, we establish a mapping from the decomposed HHD components to contact force and torque estimates. In the characteristic experiments, the force and torque estimation results show high linearity, which lowers the demand on dataset size and yields better prediction accuracy with low-complexity models. The proposed method is further tested in contact stability visualization and in grasping with adaptive force control to verify its effectiveness, and it shows potential for facilitating studies of grasp stability metrics. Future work falls mainly on integrating the sensor and algorithm into a grasping system to predict high-level physical information, including the object's center of mass, object dynamics, and grasp stability.
References
 [1] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, “Tactile sensing - from humans to humanoids,” IEEE Trans. Robotics, vol. 26, no. 1, pp. 1–20, 2010.
 [2] K. Sato, K. Kamiyama, N. Kawakami, and S. Tachi, “Finger-shaped GelForce: sensor for measuring surface traction fields for robotic hand,” IEEE Transactions on Haptics, vol. 3, no. 1, pp. 37–47, 2010.
 [3] A. Yamaguchi and C. G. Atkeson, “Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables,” in 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids). IEEE, 2016, pp. 1045–1051.
 [4] W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson, “Measurement of shear and slip with a gelsight tactile sensor,” in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 304–311.
 [5] E. Donlon, S. Dong, M. Liu, J. Li, E. Adelson, and A. Rodriguez, “GelSlim: A high-resolution, compact, robust, and calibrated tactile-sensing finger,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 1927–1934.
 [6] Y. Zhang, Z. Kan, Y. A. Tse, Y. Yang, and M. Y. Wang, “Fingervision tactile sensor design and slip detection using convolutional lstm network,” arXiv preprint arXiv:1810.02653, 2018.

 [7] M. K. Johnson and E. H. Adelson, “Retrographic sensing for the measurement of surface texture and shape,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 1070–1077.
 [8] S. Dong, W. Yuan, and E. H. Adelson, “Improved GelSight tactile sensor for measuring geometry and slip,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 137–144.
 [9] W. Yuan, S. Wang, S. Dong, and E. Adelson, “Connecting look and feel: Associating the visual and tactile properties of physical materials,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5580–5588.
 [10] W. Yuan, S. Dong, and E. H. Adelson, “Gelsight: Highresolution robot tactile sensors for estimating geometry and force,” Sensors, vol. 17, no. 12, p. 2762, 2017.
 [11] B. W. McInroe, C. L. Chen, K. Y. Goldberg, R. Bajcsy, and R. S. Fearing, “Towards a soft fingertip with integrated sensing and actuation,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 6437–6444.
 [12] R. Li, R. Platt, W. Yuan, A. ten Pas, N. Roscup, M. A. Srinivasan, and E. Adelson, “Localization and manipulation of small parts using gelsight tactile sensing,” in Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on. IEEE, 2014, pp. 3988–3993.
 [13] K. Van Wyk and J. Falco, “Slip detection: Analysis and calibration of univariate tactile signals,” arXiv preprint arXiv:1806.10451, 2018.
 [14] S. Begej, “Planar and fingershaped optical tactile sensors for robotic applications,” IEEE Journal on Robotics and Automation, vol. 4, no. 5, pp. 472–484, 1988.
 [15] M. Ohka, Y. Mitsuya, K. Hattori, and I. Higashioka, “Data conversion capability of optical tactile sensor featuring an array of pyramidal projections,” in 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems. IEEE, 1996, pp. 573–580.
 [16] D. M. Vogt, Y.-L. Park, and R. J. Wood, “Design and characterization of a soft multi-axis force sensor using embedded microfluidic channels,” IEEE Sensors Journal, vol. 13, no. 10, pp. 4056–4064, 2013.
 [17] G. De Maria, C. Natale, and S. Pirozzi, “Force/tactile sensor for robotic applications,” Sensors and Actuators A: Physical, vol. 175, pp. 60–72, 2012.
 [18] B. Fang, F. Sun, C. Yang, H. Xue, W. Chen, C. Zhang, D. Guo, and H. Liu, “A dualmodal visionbased tactile sensor for robotic hand grasping,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1–9.
 [19] Q. Guo, M. K. Mandal, and M. Y. Li, “Efficient hodge–helmholtz decomposition of motion fields,” Pattern Recognition Letters, vol. 26, no. 4, pp. 493–501, 2005.
 [20] H. Bhatia, V. Pascucci, and P.-T. Bremer, “The natural Helmholtz-Hodge decomposition for open-boundary flow analysis,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 11, pp. 1566–1578, 2014.
 [21] M. T. Mason, Mechanics of robotic manipulation. MIT press, 2001.