Existing shape estimation methods for deformable object manipulation suffer from the drawbacks of being off-line, model dependent, noise-sensitive or occlusion-sensitive, and thus are not appropriate for manipulation tasks requiring high precision. In this paper, we present a real-time shape estimation approach for autonomous robotic manipulation of 3D deformable objects. Our method fulfills all the requirements necessary for high-quality deformable object manipulation in terms of being real-time, model-free and robust to noise and occlusion. These advantages are accomplished using a joint tracking and reconstruction framework, in which we track the object deformation by aligning a reference shape model with the stream input from the RGB-D camera, and simultaneously upgrade the reference shape model according to the newly captured RGB-D data. We have evaluated the quality and robustness of our real-time shape estimation pipeline on a set of deformable manipulation tasks implemented on physical robots. Videos are available at https://lifeisfantastic.github.io/DeformShapeEst/
Autonomous manipulation of deformable objects is an important and challenging topic in robotics, and it has recently attracted much interest due to its potential applications in robot-assisted surgery [1, 2, 3, 4] and service robots, including garment folding [5, 6, 7], ironing, and robot-assisted dressing. Despite differences in the technical details employed for specific tasks, most existing systems for deformable object manipulation can be described using the same shape control framework, as shown in the top row of fig:overview. Given a desired shape of the target object, the robot system iteratively estimates the object's shape state from either sensor measurements or simulation results, and uses the difference between the object's current shape state and the desired shape as an error signal to generate a control output that deforms the object to reduce or eliminate the error. The entire process repeats until the shape state converges to the desired value.
In this work, we focus on the shape estimation problem arising in the aforementioned control loop (as shown in the bottom row of fig:overview). In particular, representing the shape state of a deforming object requires two models: a shape model encoding the geometry and texture of the deformable object's surface, and a deformation model describing the deformation kinematics or dynamics of the object surface. When provided with sufficient prior knowledge about these two models, modern physically-based simulators such as [10, 11, 12] can provide a long-horizon prediction of the shape state of the underlying deformable object. As a result, recent work such as [13, 1, 14, 9] embedded a physically-based engine into the pipeline and designed or learned a manipulation controller according to the shape feedback provided by the simulator. The resulting model-based control can be robust to noise and occlusion, provided the simulator has been carefully calibrated to be consistent with real-world physics, which unfortunately is difficult in practice. In particular, the quality of the simulation is extremely sensitive to the model parameters, and this is considered one of the main bottlenecks of model-based control for tasks involving deformable objects. In addition, running a physically-based simulation is time-consuming and thus infeasible for estimating fast or large deformations in real time.
Instead of representing the object's shape via a dense mesh structure as in the model-based method, the shape servoing method approximates the shape state using sparse key features extracted from image data. Because the feature descriptor is usually low-dimensional, the shape servoing method can learn the control policy between the shape feature and the manipulator motion directly in a data-driven manner. Such an online policy-learning framework makes the shape servoing method independent of an explicit deformation model for achieving shape control. However, existing methods in this direction still suffer from several drawbacks:
Low-resolution shape modeling: Using sparse features as the shape feedback omits some geometric details of the object. In other words, even when the feature representation of the object's current configuration perfectly matches the target feature vector, there is no guarantee that the object's shape completely fits its target shape. This limitation can be problematic for manipulation tasks that require high-precision goal reaching. As a result, a richer representation of the object's deformation state is desirable.
Noise-sensitive feature extraction: Most existing shape servoing approaches extract shape features from a single image frame. The extracted vector can be unreliable for closed-loop control due to noise in the feedback image, which is ubiquitous in real robotic systems. As a result, we need a sophisticated method to obtain robust features from a sequence of feedback inputs.
Occlusion-sensitive feature correspondences: Existing shape servoing methods rely on a feature descriptor to determine inter-frame feature correspondences. However, many deformable objects lack visually significant feature points, and thus must have additional markers mounted on their surface to provide reliable feedback. Such a marker-based workaround is inconvenient for practical applications. Moreover, most shape servoing systems assume that the full state of the object's surface can be observed throughout the task. As a result, when some feature points or markers become invisible to the visual sensor due to occlusion, these systems may fail to capture enough feature vectors for servoing control.
The aforementioned problems in both the model-based method and the shape servoing method motivated us to propose a novel shape estimation method in this paper. Our method satisfies the requirements of being real-time, model-free and robust to noise and occlusion, and thus can be easily embedded into current robotic systems for autonomous manipulation of general 3D soft objects. In our work, we further divide the shape estimation problem into two subproblems, namely tracking and reconstruction (as shown in the bottom row of fig:overview). In the tracking phase, we estimate an inter-frame deformation model through non-rigid registration between a reference shape model and the depth images provided by an RGB-D camera. In the reconstruction phase, we integrate multiple RGB-D images into the reference shape model according to the estimated deformation model. One key contribution of our work is that our simultaneous tracking and reconstruction framework can capture the surface model of a deforming object while gradually completing and refining its details based on new RGB-D measurements. Because the generated surface model is of high precision and is robust to single-frame noise and occlusion, it serves as an excellent feedback signal for shape control.
- reference frame and live frame
- mesh model defined in the corresponding frame space
- voxel defined in the corresponding frame space
- vertex and normal elements of the mesh
- reference volume defined in the reference frame space
- TSDF, color and weight components of a voxel in the reference volume
- TSDF, color and weight components of a voxel contributed by the live frame
- deformation model of the target soft object
- rigid component separated from the deformation model
- non-rigid component of the deformation model, represented as a graph
- node of the graph defined in the reference frame space
- local deformation defined at a graph node
- effective radius of a graph node
- neighbor set of a graph node
- deformation function mapping the reference frame to the live frame, parameterized by the deformation model
- color map and depth map of the live frame
- vertex map and normal map extracted from the depth map
In this section, we first present the mathematical notation to define the shape and deformation models employed in our work. Then we outline the pipeline of our simultaneous shape tracking and reconstruction framework.
The main objective of deformable object manipulation is to deform the object's surface from an initial shape into a desired shape. The inner state of the target object is ignored for shape control in most current methods; therefore, in this work we are only interested in how to use an appropriate shape representation to model the surface state. To generate a high-quality shape model containing rich geometry and texture details, one commonly used solution, similar to the model-based method, is to represent the surface with a mesh structure extracted from the RGB-D image data. While such a mesh-based representation is suitable to serve as the reference model for shape tracking, its graph structure makes it difficult to fuse with multiple image frames for shape reconstruction. Instead, encoding the surface geometry and texture into a 3D volumetric grid structure is more practical, since the shape reconstruction process can be implemented efficiently via parallel operations on the grid.
To combine the advantages of both representations, we follow previous work [21, 22, 23] by projecting the data of multiple image frames back into the space of a reference frame (which is usually set as the initial frame) according to the estimated inter-frame deformations, and then integrating these frames into a reference mesh. For efficient image integration, the reference mesh is maintained via a discrete truncated signed distance function (TSDF) volume (as illustrated in fig:tsdf), which we denote as the reference volume. In this reference volume, the surface geometry is voxelized such that each voxel encodes a truncated signed distance value and an associated weight. The content of each voxel is updated independently during image integration. To obtain a high-quality mesh with texture, we also maintain the RGB information in each voxel. In summary, our reference volume stores a TSDF component, a color component and a weight component per voxel.
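As a concrete illustration of this layout, the reference volume can be stored as dense arrays over a voxel grid. The sketch below uses hypothetical field names and resolutions rather than the paper's actual implementation:

```python
import numpy as np

class TSDFVolume:
    """Minimal dense TSDF volume: each voxel stores a truncated signed
    distance, an integration weight, and an RGB color component."""

    def __init__(self, resolution=128):
        shape = (resolution,) * 3
        # Distances initialized to the truncation bound (free space).
        self.tsdf = np.ones(shape, dtype=np.float32)
        # Weights start at zero: no observations integrated yet.
        self.weight = np.zeros(shape, dtype=np.float32)
        # Per-voxel RGB for textured mesh extraction.
        self.color = np.zeros(shape + (3,), dtype=np.uint8)

vol = TSDFVolume(resolution=64)
```

Because every voxel's state is independent, updates over this structure map naturally onto one GPU thread per voxel, which is what makes the volumetric encoding attractive for parallel fusion.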
From the reference volume, we extract the reference mesh with the Marching Cubes algorithm. The reference mesh can be further deformed according to the estimated inter-frame deformation model to obtain the live mesh, which indicates the object shape in the live frame. For the convenience of discussion, we define the mesh vertices and corresponding normals in this work, indexed over the number of vertices in the mesh.
To achieve simultaneous shape tracking and reconstruction, we need a model that formulates the deformation from the reference frame to the live frame. In consideration of our model-free requirement, skeleton-based kinematic models for articulated objects are unsuitable for representing the deformation of general soft objects. One possible solution is to model the deformation with the Eulerian (grid-based) or Lagrangian (particle-based) methods used in fluid simulation. While both methods can provide a high-quality representation for general deformation thanks to their dense structures, such high-dimensional models are not feasible for real-time estimation.
As a trade-off between complexity and precision, we employ the sparse deformation graph model with reduced dimensions for real-time implementation. In this method, the graph nodes are uniformly sampled from the mesh model so that their layout roughly conforms to the object's shape. The whole deformation is then divided into a set of local transformations, which are assigned to the graph nodes one-to-one. Each graph node's domain of influence overlaps with those of its neighboring nodes. Thus, for any given point in the space near the graph nodes, a smooth deformation function can be computed by interpolating the local transformations of the point's nearest graph nodes.
Moreover, to avoid over-fitting of the deformation graph model during estimation, a regularization constraint is also needed. In our work, we regularize the deformation graph via the widely used as-rigid-as-possible (ARAP) constraint, which penalizes inconsistent local transformations between neighboring graph nodes. Such a penalty function is usually represented geometrically by the graph edges.
In addition to the graph model, we separate the global rigid component from the total deformation and model it independently. Overall, our deformation model consists of a separated global rigid transformation and a non-rigid component represented by the graph model. We further parameterize the graph model by its nodes: each node carries a local transformation, a position in the reference frame, and an effective radius. The neighbor set of a node contains the indices of the nodes connected to it by graph edges; these nodes are considered its closest neighbors. Note that in our method the node positions, radii and neighbor sets remain constant during estimation, and thus our deformation model can be fully parameterized by the global rigid transformation and the per-node local transformations.
To deform the reference mesh according to the above model, we first assign each mesh vertex to its nearest nodes on the graph model via a set of skinning weights, where a normalization factor ensures that the weights sum to one. Then we calculate the deformed vertex in the live frame based on the following blending function:
Similarly, the corresponding normal of the vertex can be deformed using the following blending function:
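A numerical sketch of this style of blending, following the embedded deformation graph literature (the Gaussian weight form and all function names here are our assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def skinning_weights(v, nodes, radii, k=4):
    """Gaussian skinning weights of vertex v w.r.t. its k nearest graph
    nodes, normalized to sum to one (assumed radial-basis form)."""
    d2 = np.sum((nodes - v) ** 2, axis=1)
    nn = np.argsort(d2)[:k]                       # indices of k nearest nodes
    w = np.exp(-d2[nn] / (2.0 * radii[nn] ** 2))  # per-node influence
    return nn, w / w.sum()

def blend_vertex(v, nodes, rotations, translations, nn, w, R_rig, t_rig):
    """Deform v by interpolating the local node transforms, then apply
    the separated global rigid component (R_rig, t_rig)."""
    out = np.zeros(3)
    for j, wj in zip(nn, w):
        out += wj * (rotations[j] @ (v - nodes[j]) + nodes[j] + translations[j])
    return R_rig @ out + t_rig

def blend_normal(n, rotations, nn, w, R_rig):
    """Normals are blended with the rotational parts only, then renormalized."""
    out = np.zeros(3)
    for j, wj in zip(nn, w):
        out += wj * (rotations[j] @ n)
    out = R_rig @ out
    return out / np.linalg.norm(out)
```

With identity local transforms and an identity rigid component, `blend_vertex` leaves a vertex unchanged, which is a useful sanity check on any implementation of the blending.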
As demonstrated in fig:overview, our system takes the image stream provided by an RGB-D camera as input. It is composed of two parallel threads: tracking and reconstruction. The tracking thread is responsible for the real-time estimation of the deformation model. It aligns the live RGB-D frame with the reference mesh model for geometric consistency. To capture the surface geometry for alignment, the tracking thread extracts dense features from the received depth map of the live frame, including a vertex map and a corresponding normal map. At the core of this thread is a highly efficient GPU solver that optimizes the deformation model for frame-to-model alignment under the regularization constraint. We implement the optimization solver based on a kernel-merged preconditioned conjugate gradient (PCG) algorithm using CUDA.
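For reference, a generic Jacobi-preconditioned conjugate gradient iteration for a symmetric positive-definite system looks as follows; this is a plain CPU sketch of the textbook algorithm, not the kernel-merged CUDA solver described above:

```python
import numpy as np

def pcg(A, b, x0=None, tol=1e-8, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for SPD systems A x = b."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse diagonal
    r = b - A @ x                    # initial residual
    z = Minv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # update search direction
        rz = rz_new
    return x
```

On a GPU, the matrix-vector products and dot products in this loop are the operations that get fused into merged kernels to minimize launch overhead and memory traffic.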
Given the estimated deformation model, the reconstruction thread computes the voxel-to-pixel correspondences between the reference volume and the live frame, and updates the content of each voxel accordingly. After the volume fusion operation, the reconstruction thread extracts a new reference mesh from the reference volume and obtains the associated live mesh based on the estimated deformation model.
The objective of the tracking thread is to provide accurate estimation of the deformation model to assist the reconstruction thread. In the tracking step, we optimize the deformation model to obtain the best frame-to-model alignment between the live frame and the reference mesh model .
To estimate the deformation model , we formulate the following energy function :
where is the data term which penalizes the misalignment between the reference mesh model and the dense features extracted from the live image frame . is the regularization term which penalizes the inconsistent local transformations between neighboring nodes in the graph model of . and denote the associated weights of these two terms.
Data Term To measure the misalignment between the reference mesh model and the live frame, we first calculate a vertex map and a normal map from the depth map to represent the geometric features of the live frame. Then we deform the reference mesh vertices and normals according to the estimated deformation model to obtain their predictions represented in the live frame. Finally, we quantify the misalignment between the predicted vertices and the geometric features with the point-to-plane error function widely used in the Iterative Closest Point (ICP) algorithm. As a result, we represent the geometric data term as
where denote the 3D-to-2D projected correspondences of the predicted vertex in the feature map .
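A minimal sketch of a point-to-plane data term of this kind, assuming the projective correspondences have already been established (here, matching row order encodes the correspondence):

```python
import numpy as np

def point_to_plane_error(pred_verts, target_verts, target_normals):
    """Sum of squared point-to-plane residuals n^T (v_pred - v_target),
    the standard ICP data term, with correspondences assumed given."""
    diffs = pred_verts - target_verts
    # Row-wise dot product of each target normal with the displacement.
    residuals = np.einsum('ij,ij->i', target_normals, diffs)
    return np.sum(residuals ** 2)
```

Note that displacements tangential to the target surface incur no penalty, which is what lets point-to-plane alignment slide along flat regions instead of locking onto individual samples.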
Regularization Term The deformation graph model can easily overfit if it is not well regularized during estimation. To solve this problem, one widely used template-free method for general soft objects is to introduce the ARAP constraint. In our work, we encode the neighboring relationships of the ARAP constraint into the deformation graph neighbor sets. Based on the neighbor sets, we define the regularization term as
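A common form of such an ARAP regularizer over deformation-graph edges, which the sketch below implements, penalizes the disagreement between a node's local transform applied to a neighbor and that neighbor's own deformed position; the exact per-edge weighting used in the paper may differ:

```python
import numpy as np

def arap_reg(nodes, rotations, translations, neighbors):
    """ARAP-style regularizer: for each graph edge (i, j), penalize the
    squared distance between node i's prediction of node j's deformed
    position and node j's own deformed position."""
    e = 0.0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            # Node i's local transform applied to node j's position.
            pred_i = rotations[i] @ (nodes[j] - nodes[i]) + nodes[i] + translations[i]
            # Node j's own deformed position.
            pred_j = nodes[j] + translations[j]
            e += np.sum((pred_i - pred_j) ** 2)
    return e
```

By construction this energy is zero for any globally consistent rigid motion of all nodes, so it penalizes only the non-rigid disagreement between neighbors.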
The reconstruction thread takes the live frame and the newly estimated deformation model as input. It updates the surface geometry and texture by integrating multiple image frames incrementally into the reference volume. Because the surface geometry and texture are encoded in the volumetric structure, we refer to this procedure as volume fusion.
We implement the volume fusion operation based on a non-rigid projective fusion approach. In this approach, we first scan the voxels of the reference volume and obtain their positions in the reference frame space. Then we calculate the corresponding deformed positions in the live frame based on the blending function in eq:blending func. The deformed voxels are projected into the live frame image map to find their corresponding pixels. In this way, we determine the voxel-to-pixel correspondence between the reference volume and the live frame image. For each voxel, we calculate its new TSDF component and color component contributed by the live frame as
respectively. Here the depth image and color image of the live frame are denoted accordingly, the position of a point along the Z-axis gives its depth, and a truncation threshold bounds the TSDF value. In addition, we assign a weight to the new components. Finally, we update the reference volume as
where an upper threshold clamps the weight. In eq:color_fusion, we do not update the color component via integration as in eq:tsdf_fusion. The main reason is that our current work does not model or track the lighting environment and material albedo. As a result, setting the color component directly to its new value in the live frame gives a better result than data fusion.
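The per-voxel update described above can be sketched as a weighted running average of the TSDF with weight clamping, while the color is simply overwritten (variable names here are illustrative):

```python
def fuse_voxel(tsdf, weight, color, d_new, w_new, c_new, w_max=64.0):
    """Weighted running-average TSDF update with weight clamping; the
    color is overwritten with the live-frame value rather than averaged."""
    tsdf_out = (tsdf * weight + d_new * w_new) / (weight + w_new)
    weight_out = min(weight + w_new, w_max)  # cap accumulated confidence
    return tsdf_out, weight_out, c_new
```

Clamping the weight keeps the volume responsive to new observations: without the cap, a voxel observed many times would average away any genuine change in the surface.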
We implement our shape estimation pipeline on a desktop PC with an Intel Core i7 3.4GHz CPU, 32GB of RAM and an NVIDIA GeForce GTX 1080 GPU. To set up the working environment of typical deformable object manipulation tasks, we employ a dual-arm robot (ABB YuMi, with seven degrees of freedom in each arm) to perform demonstrations with different materials. In addition, we take the RGB-D data provided by an Intel RealSense SR300 camera as input. The entire experimental setup is shown in the bottom left corner of fig:exp_setup.
Because we encode the reference surface model explicitly in the volumetric structure, the real-time performance of our pipeline largely depends on the parameters of the volumetric model, including the volume's dimension in voxels, the truncation threshold of the TSDF value, the weight of the newly captured TSDF component, and the upper-bound weight of the reference TSDF component. In our experiment, we pre-defined a cubic space as the reference volume and discretized it into voxels, fixing the actual resolution of the volumetric model in voxels per cubic meter. We also set the truncation threshold and the fusion weights as constants. We measured the runtime cost of each main computational component in our pipeline during the experiment, covering preprocessing (e.g., depth image filtering, vertex map and normal map extraction), deformation tracking, and volume fusion. On average, our pipeline tracks and reconstructs the surface of all deforming targets employed in our experiment at a per-frame rate that satisfies the real-time requirement of most robotic applications.
As previously mentioned, one main advantage of our method is its robustness to occlusion. Such robustness is crucial for applications involving human-machine cooperation, where the human body may occlude the target object from the camera. To demonstrate this advantage, we test our method with a plastic sheet bending task. In this task, we let the robotic arm deform the target sheet in front of an RGB-D camera. During the task, we introduce synthetic occlusions into the captured RGB-D stream, making the corresponding surface areas invisible to our system. The synthetic occlusion masks are illustrated as the red solid boxes in the top row of fig:exp_setup. To compare with most previous work [15, 16, 17, 18], which employed single-frame data for shape estimation, we show the corresponding surface mesh model extracted from each frame in the second row of fig:exp_setup. Note that since we encode all surface information captured by the single-frame image into this mesh model, it represents the input data adopted by the aforementioned work. In our experiment, we extracted this mesh model by projecting the most recently captured single-frame data into a new TSDF volume, and then locating the zero-level surface with the Marching Cubes algorithm without consideration of previously observed data. We refer to this method as the single-frame method. The mesh models generated by our method are illustrated in the third row of fig:exp_setup. As we can observe, the single-frame method cannot capture the geometry of the occluded part of the object. Our method, on the other hand, approximates the deformation behavior of the unobserved part based on the ARAP constraint and generates a complete shape estimation accordingly.
Because the occlusions in the aforementioned experiment are synthetic, we further evaluated the accuracy of our method, especially the estimate of the occluded surface part, by measuring the non-rigid alignment error between the reconstructed mesh model and the raw image data without synthetic occlusions, based on the data term in eq:geo_data_term. We plot the alignment errors measured at different time instants in the bottom right corner of fig:exp_setup. Considering that the occluded surface part is entirely driven by the ARAP constraint in our method, there is undoubtedly a gap between our deformation graph model and the object's real deformation behavior. When the occluded part undergoes small deformations, as in the second column of fig:exp_setup, the ARAP constraint holds well and the alignment error is quite close to the case without occlusion. The gap becomes obvious when the occluded part undergoes large deformations, as in the fourth column of fig:exp_setup. However, even in the latter case, the ARAP constraint still contributes to a compelling mesh reconstruction. Moreover, its independence from object prior knowledge is essential for our model-free implementation.
To demonstrate the robustness of our method to sensor noise, we design a towel folding task for testing. Again, we compare the reconstruction results of two different methods in fig:exp_noise: the single-frame method and our method. Because the RGB-D camera cannot provide stable depth measurements for the wrinkled areas (highlighted in the green rectangle) and the nearly parallel areas (highlighted in the red rectangle) of the folded towel, the single-frame reconstructions omit some important geometric details needed for shape feedback. This problem exists in most current shape servoing methods. In contrast, our method updates the geometry of the surface model via efficient image data fusion, and is capable of providing continuous and smooth mesh reconstruction.
We present a novel shape estimation method that provides reliable shape feedback for the deformable object manipulation problem. A series of experiments demonstrates the advantages of our method in terms of being real-time, model-free and robust to noise and occlusion. All these features make our method a promising component to embed into current robotic manipulation systems for challenging applications.
Our method still has some limitations. First, it relies on high-precision deformation estimation for consistent and accurate shape reconstruction. In other words, when the estimation step fails, the drift error accumulates in the reconstruction result and cannot be corrected. One possible solution to this problem is to add a drift-correction module to the pipeline that does not rely on the deformation estimates provided by the tracking thread.
Second, our deformation model, especially the ARAP regularization term, cannot always hold when the target object undergoes large deformations or complex topological changes. The reasons for this limitation are two-fold. On one hand, to stay within our computational budget for real-time application, we approximated the ARAP regularization term by penalizing inconsistent local deformations that are close in Euclidean space rather than on the mesh manifold, which unfortunately introduces a gap between the employed deformation model and real-world physics. On the other hand, the proposed system lacks the ability to perceive and infer the surface topology. Thus, even if the ARAP constraint were formulated strictly according to distances on the mesh manifold, it would still be difficult for the system to track and reconstruct a deforming surface undergoing fast or complex topological changes. For these reasons, we only present experiments based on deforming objects with simple topology and geometry. Note that improving the system's robustness to topological changes remains challenging for all model-free methods in related fields, and we believe that a topological segmentation front-end is essential for solving this problem.
Besides resolving the above limitations, in future work we will also present a complete shape control pipeline embedded with our shape estimation method.