
TransNet: Category-Level Transparent Object Pose Estimation

Transparent objects present multiple distinct challenges to visual perception systems. First, their lack of distinguishing visual features makes transparent objects harder to detect and localize than opaque objects. Even humans find certain transparent surfaces with little specular reflection or refraction, e.g. glass doors, difficult to perceive. A second challenge is that common depth sensors typically used for opaque object perception cannot obtain accurate depth measurements on transparent objects due to their unique reflective properties. Stemming from these challenges, we observe that transparent object instances within the same category (e.g. cups) look more similar to each other than to ordinary opaque objects of that same category. Given this observation, the present paper sets out to explore the possibility of category-level transparent object pose estimation rather than instance-level pose estimation. We propose TransNet, a two-stage pipeline that learns to estimate category-level transparent object pose using localized depth completion and surface normal estimation. TransNet is evaluated in terms of pose estimation accuracy on a recent, large-scale transparent object dataset and compared to a state-of-the-art category-level pose estimation approach. Results from this comparison demonstrate that TransNet achieves improved pose estimation accuracy on transparent objects and key findings from the included ablation studies suggest future directions for performance improvements.


1 Introduction

From glass doors and windows to kitchenware and all kinds of containers, transparent materials are prevalent throughout daily life. Thus, perceiving the pose (position and orientation) of transparent objects is a crucial capability for autonomous perception systems seeking to interact with their environment. However, transparent objects present unique perception challenges in both the RGB and depth domains. As shown in Figure 2, in RGB, the color appearance of transparent objects is highly dependent on the background, viewing angle, material, lighting condition, etc., due to light reflection and refraction effects. In depth, common commercially available depth sensors record mostly invalid or inaccurate depth values within the region of transparency. Such visual challenges, especially the missing measurements in the depth domain, pose severe problems for autonomous object manipulation and obstacle avoidance tasks. This paper sets out to address these problems by studying how category-level transparent object pose estimation may be achieved using end-to-end learning.

Recent works have shown promising results on grasping transparent objects by completing the missing depth values followed by the use of a geometry-based grasp engine [29, 12, 9], or by transfer learning from RGB-based grasping neural networks [36]. For more advanced manipulation tasks such as rigid-body pick-and-place or liquid pouring, geometry-based estimates, such as symmetry axes, edges [27] or object poses [26], are required to model the manipulation trajectories. Instance-level transparent object poses can be estimated from keypoints on stereo RGB images [24, 23] or directly from a single RGB-D image [38] with support-plane assumptions. Recently emerged large-scale transparent object datasets [29, 39, 23, 9, 6] pave the way for addressing the problem using deep learning.

In this work, we aim to extend the frontier of 3D transparent object perception with three primary contributions.

  • First, we explore the importance of depth completion and surface normal estimation in transparent object pose estimation. Results from these studies indicate the relative importance of each modality and their analysis suggests promising directions for follow-on studies.

  • Second, we introduce TransNet, a category-level pose estimation pipeline for transparent objects as illustrated in Figure 1. It utilizes surface normal estimation, depth completion, and a transformer-based architecture to estimate transparent objects’ 6D poses and scales.

  • Third, we demonstrate that TransNet outperforms a baseline that uses a state-of-the-art opaque object pose estimation approach [7] along with transparent object depth completion [9].

Figure 2: Challenges for transparent object perception. Images are from the ClearPose dataset [6]. The left is an RGB image. The top right is the raw depth image and the bottom right is the ground truth depth image.

2 Related Works

2.1 Transparent Object Visual Perception for Manipulation

Transparent objects need to be perceived before being manipulated. Lai et al. [18] and Khaing et al. [16] developed CNN models to detect transparent objects from RGB images. Xie et al. [37] proposed a deep segmentation model that achieved state-of-the-art segmentation accuracy. ClearGrasp [29] employed depth completion for use with pose estimation on robotic grasping tasks, where they trained three DeepLabv3+ [4] models to perform image segmentation, surface normal estimation, and boundary segmentation. Follow-on studies developed different approaches for depth completion, including implicit functions [47], NeRF features [12], combined point cloud and depth features [39], adversarial learning [30], multi-view geometry [1], and RGB image completion [9]. Without completing depth, Weng et al. [36] proposed a method to transfer a learned grasping policy from the RGB domain to the raw sensor depth domain. For instance-level pose estimation, Xu et al. [38] utilized segmentation, surface normal, and an image coordinate UV-map as input to a network similar to [32] that can estimate 6 DOF object pose. KeyPose [24] was proposed to estimate 2D keypoints and regress object poses from stereo images using triangulation. For other special sensors, Xu et al. [40] used light-field images to perform segmentation using a graph-cut-based approach. Kalra et al. [15] trained Mask R-CNN [11] using polarization images as input to outperform a baseline trained on only RGB images by a large margin. Zhou et al. [46, 45, 44] employed light-field images to learn features for robotic grasping and object pose estimation. Along with the proposed methods, massive datasets, across different sensors and both synthetic and real-world domains, have been collected and made public for various related tasks [37, 29, 24, 44, 15, 23, 47, 39, 9, 6]. Compared with these previous works, and to the best of our knowledge, we propose the first category-level pose estimation approach for transparent objects. Notably, the proposed approach provides reliable 6D pose and scale estimates across instances with similar shapes.

2.2 Opaque Object Category-level Pose Estimation

Category-level object pose estimation aims to estimate unseen objects’ 6D pose within seen categories, together with their scales or canonical shapes. To the best of our knowledge, there are currently no category-level pose estimation works focusing on transparent objects, and the works mentioned below mostly consider opaque objects; they do not transfer well to transparent objects due to their dependence on accurate depth. Wang et al. [35] introduced the Normalized Object Coordinate Space (NOCS) for dense 3D correspondence learning, and used the Umeyama algorithm [33] to solve for the object pose and scale. They also contributed both a synthetic and a real dataset used extensively by following works for benchmarking. Later, Li et al. [19] extended the idea to articulated objects. To simultaneously reconstruct the canonical point cloud and estimate the pose, Chen et al. [2] proposed a method based on canonical shape space (CASS). Tian et al. [31] learned category-specific shape priors from an autoencoder, and demonstrated their power for pose estimation and shape completion. 6D-ViT [48] and ACR-Pose [8] extended this idea by utilizing a pyramid visual transformer (PVT) and a generative adversarial network (GAN) [10] respectively. Structure-guided prior adaptation (SGPA) [3] utilized a transformer architecture for dynamic shape prior adaptation. Rather than learning a dense correspondence, FS-Net [5] regressed the pose parameters directly and proposed to learn two orthogonal axes for 3D orientation; it also contributed an efficient data augmentation process for depth-only approaches. GPV-Pose [7] further improved FS-Net by adding a geometric consistency loss between 3D bounding boxes, reconstruction, and pose. Also with depth as the only input, category-level point pair features (CPPF) [42] could reduce the sim-to-real gap by learning deep point pair features. DualPoseNet [20] benefited from a rotation-invariant embedding for category-level pose estimation. Differing from other works that use segmentation networks to crop image patches as a first stage, CenterSnap [13] presented a single-stage approach for the prediction of 3D shape, 6D pose, and size.

Compared with opaque objects, the main challenge in perceiving transparent objects is the poor quality of the input depth. Thus, the proposed TransNet takes inspiration from the above category-level pose estimation works in feature embedding and architecture design. More specifically, TransNet leverages both the Pointformer from PVT and the pose decoders from FS-Net and GPV-Pose. The following section describes the TransNet architecture, focusing on how to integrate the single-view depth completion module and utilize imperfect depth predictions to learn pose estimates of transparent objects.

3 TransNet

Figure 3: Architecture of TransNet. TransNet is a two-stage deep neural network for category-level transparent object pose estimation. The first stage uses object instance segmentation (from Mask R-CNN [11], not included in the diagram) to generate patches of RGB-D, which are then used as input to a depth completion network and a surface normal estimation network (RGB only). The second stage uses randomly sampled pixels within the objects’ segmentation masks to generate a generalized point cloud, formed as the per-pixel concatenation of ray direction, RGB, surface normal, and completed depth features. Pointformer [48], a transformer-based point cloud embedding architecture, transforms the generalized point cloud into high-dimensional features. A concatenation of embedding features, global features, and a one-hot category label (from Mask R-CNN) is provided to the pose estimation module, which is composed of four decoders: one each for translation, x-axis, z-axis, and scale regression. Finally, the estimated object pose is recovered and returned as output.

Given an input RGB-D pair $(\mathcal{I}, \mathcal{D})$, our goal is to predict each object’s 6D rigid body transformation $[R|t]$ and 3D scale $s$ in the camera coordinate frame, where $R \in SO(3)$, $t \in \mathbb{R}^3$ and $s \in \mathbb{R}^3$. In this problem, inaccurate/invalid depth readings exist within the image region corresponding to transparent objects (represented as a binary mask $\mathcal{M}$). To approach the category-level pose estimation problem along with inaccurate depth input, we propose a novel two-stage deep neural network pipeline, called TransNet.

3.1 Architecture Overview

Following recent work in object pose estimation [34, 5, 7], we first apply a pre-trained instance segmentation module (Mask R-CNN [11]) that has been fine-tuned on the pose estimation dataset to extract the objects’ bounding box patches, masks, and category labels to separate the objects of interest from the entire image.

The first stage of TransNet takes the patches as input and attempts to correct the inaccurate depth posed by transparent objects. Depth completion (TransCG [9]) and surface normal estimation (U-Net [28]) are applied on RGB-D patches to obtain estimated depth-normal pairs. The estimated depth-normal pairs, together with RGB and ray direction patches, are concatenated to feature patches, followed by a random sampling strategy within the instance masks to generate generalized point cloud features.

In the second stage of TransNet, the generalized point cloud is processed by Pointformer [48], a transformer-based point cloud embedding module, to produce concatenated feature vectors. The pose is then estimated by four separate decoder modules for object translation, x-axis, z-axis, and scale respectively. The estimated rotation matrix can be recovered from the two estimated axes. Each component is discussed in more detail in the following sections.
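As a concrete illustration of the last step, the rotation matrix can be recovered from the two estimated axes by re-orthogonalizing and completing the basis. The following numpy sketch assumes the column convention $R = [a_x\,|\,a_y\,|\,a_z]$ (our assumption; function names are illustrative):

```python
import numpy as np

def rotation_from_axes(a_x, a_z):
    """Build R from estimated x- and z-axes: normalize x, project the x
    component out of z, then complete the basis with y = z x x."""
    x = a_x / np.linalg.norm(a_x)
    z = a_z - (a_z @ x) * x          # make z exactly perpendicular to x
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)               # right-handed frame: x x y = z
    return np.stack([x, y, z], axis=1)   # columns are the body axes
```

Re-orthogonalization is needed because two independently regressed axes are generally not exactly perpendicular.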

3.2 Object Instance Segmentation

Similar to other categorical pose estimation works [7], we train a Mask R-CNN [11] model on the same dataset used for pose estimation to obtain the object’s bounding box $B$, mask $\mathcal{M}$ and category label $c$. Patches of ray direction $\mathcal{R}$, RGB $\mathcal{I}$ and raw depth $\mathcal{D}$ are extracted from the original data source following the bounding box $B$, before being input to the first stage of TransNet.

3.3 Transparent Object Depth Completion

Due to light reflection and refraction on transparent material, the raw depth of transparent objects is very noisy. Therefore, depth completion is necessary to reduce the sensor noise. Given the raw RGB-D patch pair $(\mathcal{I}, \mathcal{D})$ and the transparent mask $\mathcal{M}$ (the intersection of the transparent objects’ masks with the bounding box $B$), transparent object depth completion is applied to obtain the completed depth $\hat{\mathcal{D}}$ of the transparent region.

Inspired by one state-of-the-art depth completion method, TransCG [9], we incorporate a similar multi-scale depth completion architecture into TransNet.

$\hat{\mathcal{D}} = f_{dc}(\mathcal{I}, \mathcal{D})$   (1)

We use the same training loss as TransCG:

$\mathcal{L}_{dc} = \mathcal{L}_{d} + \lambda \mathcal{L}_{n}$   (2)

where $\mathcal{L}_{d} = \frac{1}{|\mathcal{M}|}\sum_{p \in \mathcal{M}} \| \hat{\mathcal{D}}_p - \mathcal{D}^*_p \|_2$ and $\mathcal{L}_{n} = \frac{1}{|\mathcal{M}|}\sum_{p \in \mathcal{M}} \big( 1 - g(\hat{\mathcal{D}})_p \cdot g(\mathcal{D}^*)_p \big)$. Here $\mathcal{D}^*$ is the ground truth depth image patch, $\mathcal{M}$ represents the transparent region in the patch, $\cdot$ denotes the dot product operator and $g(\cdot)$ denotes the operator that calculates surface normals from depth. $\mathcal{L}_d$ is the $L_2$ distance between the estimated and ground truth depth within the transparency mask, $\mathcal{L}_n$ is the cosine similarity loss between the surface normals calculated from the estimated and ground truth depth, and $\lambda$ is the weight between the two losses.
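To make the two loss terms concrete, here is a minimal numpy sketch of the objective in Equation (2), assuming a simple image-gradient operator in place of TransCG’s normal-from-depth computation (function names and the gradient-based $g(\cdot)$ are our simplifications):

```python
import numpy as np

def normals_from_depth(depth):
    """Approximate g(.): surface normals from depth via image-space gradients
    (a simplification; intrinsics-aware normals would be used in practice)."""
    dz_dv, dz_du = np.gradient(depth)
    n = np.stack([-dz_du, -dz_dv, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def depth_completion_loss(d_est, d_gt, mask, lam=0.5):
    """L_d: per-pixel depth error inside the mask; L_n: 1 - cosine similarity
    of the normals computed from estimated and ground truth depth."""
    m = mask.astype(bool)
    l_d = np.abs(d_est - d_gt)[m].mean()
    cos = (normals_from_depth(d_est) * normals_from_depth(d_gt)).sum(axis=-1)
    l_n = (1.0 - cos)[m].mean()
    return l_d + lam * l_n
```

The normal term penalizes surface orientation error even where the raw depth error is small, which encourages locally smooth, geometrically plausible completions.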

3.4 Transparent Object Surface Normal Estimation

The surface normal estimation module predicts surface normals $\hat{\mathcal{N}}$ from the RGB image $\mathcal{I}$. Although previous category-level pose estimation works [7, 5] show that depth alone is enough to obtain opaque objects’ poses, experiments in Section 4.3 demonstrate that the surface normal is not a redundant input for transparent object pose estimation. Here, we slightly modify U-Net [28] to perform the surface normal estimation.

$\hat{\mathcal{N}} = f_{sn}(\mathcal{I})$   (3)

We use the cosine similarity loss:

$\mathcal{L}_{sn} = \frac{1}{|B|}\sum_{p \in B} \big( 1 - \hat{\mathcal{N}}_p \cdot \mathcal{N}^*_p \big)$   (4)

where $\mathcal{N}^*$ is the ground truth surface normal and the sum over $p \in B$ means the loss is applied to all pixels in the bounding box $B$.

3.5 Generalized point cloud

As input to the second stage, the generalized point cloud $\mathcal{P} \in \mathbb{R}^{N \times d}$ is a stack of $d$-dimensional features from the first stage taken at $N$ sample points, inspired by [38]. To be more specific, $d = 10$ in our work (3 for ray direction, 3 for RGB, 3 for surface normal, and 1 for depth). Given the completed depth $\hat{\mathcal{D}}$ and predicted surface normal $\hat{\mathcal{N}}$ from Equations (1) and (3), together with the RGB patch $\mathcal{I}$ and ray direction patch $\mathcal{R}$, a concatenated feature patch is given as $[\mathcal{R}, \mathcal{I}, \hat{\mathcal{N}}, \hat{\mathcal{D}}]$. Here the ray direction represents the direction from the camera origin to each pixel in the camera frame. For each pixel $p$:

$r_p = \frac{K^{-1} \tilde{u}_p}{\| K^{-1} \tilde{u}_p \|_2}$   (5)

where $\tilde{u}_p$ is the homogeneous UV coordinate of pixel $p$ in the image plane and $K$ is the camera intrinsic matrix. The UV mapping itself is an important cue when estimating poses from patches [14], as it provides information about the relative position and size of the patches within the overall image. We use the ray direction instead of the UV mapping because it also encodes the camera intrinsics.

We randomly sample $N$ pixels within the transparent mask of the feature patch to obtain the generalized point cloud $\mathcal{P} \in \mathbb{R}^{N \times d}$. A more detailed experiment in Section 4.3 explores the best feature combination for the generalized point cloud.
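The assembly of the generalized point cloud — per-pixel ray direction from Equation (5), concatenated with RGB, surface normal and depth, then sampled inside the mask — can be sketched as follows (a minimal numpy version; the sampling count and helper names are illustrative):

```python
import numpy as np

def ray_directions(h, w, K):
    """Per-pixel unit ray directions r = K^{-1} [u, v, 1]^T / ||.||_2."""
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    uv1 = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = (np.linalg.inv(K) @ uv1).T
    rays = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    return rays.reshape(h, w, 3)

def generalized_point_cloud(rgb, depth, normals, mask, K, n_samples=1024, rng=None):
    """Stack [ray_dir | rgb | normal | depth] per pixel (d = 10), then
    randomly sample n_samples pixels inside the transparent mask."""
    h, w = depth.shape
    feats = np.concatenate(
        [ray_directions(h, w, K), rgb, normals, depth[..., None]], axis=-1)
    idx = np.flatnonzero(mask)
    rng = np.random.default_rng(rng)
    pick = rng.choice(idx, size=n_samples, replace=len(idx) < n_samples)
    return feats.reshape(-1, feats.shape[-1])[pick]   # (n_samples, 10)
```

Sampling with replacement only when the mask has fewer valid pixels than `n_samples` keeps the output shape fixed for batching.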

3.6 Transformer Feature Embedding

Given the generalized point cloud $\mathcal{P}$, we apply an encoder and multi-head decoder strategy to obtain objects’ poses and scales. We use Pointformer [48], a multi-stage transformer-based point cloud embedding method:

$F = \mathrm{Pointformer}(\mathcal{P})$   (6)

where $F$ is a high-dimensional feature embedding. During our experiments, we considered other common point cloud embedding methods such as 3D-GCN [21], which has demonstrated its power in many category-level pose estimation methods [5, 7]. During feature aggregation for each point, such methods use a nearest neighbor algorithm to search nearby points in coordinate space, then calculate new features as a weighted sum of the features of the surrounding points. Due to the noisy input from Equation (1), the nearest neighbors may become unreliable, producing noisy feature embeddings. Pointformer, on the other hand, aggregates features with a transformer-based method, and the gradient back-propagates through the whole point cloud. Comparisons and discussion in Section 4.2 demonstrate that transformer-based embedding methods are more stable than nearest-neighbor-based methods when both are trained on noisy depth data.

Then we use a Point Pooling layer (a multilayer perceptron (MLP) plus max-pooling) to extract the global feature $F_g$, and concatenate it with the local feature $F$ and the one-hot category label $c$ from instance segmentation for the decoders:

$F_{cat} = [F, F_g, c]$   (7)
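The concatenation in Equation (7) amounts to broadcasting a pooled global vector and a category one-hot onto every per-point feature. A minimal numpy sketch (the MLP inside the Point Pooling layer is omitted; dimensions and names are illustrative):

```python
import numpy as np

def concat_features(local, category_id, n_categories=6):
    """Max-pool per-point features into a global vector, then concatenate
    [local | global | one-hot category] for every point."""
    n = local.shape[0]
    global_feat = local.max(axis=0)                  # point pooling
    onehot = np.zeros(n_categories)
    onehot[category_id] = 1.0
    return np.concatenate(
        [local, np.tile(global_feat, (n, 1)), np.tile(onehot, (n, 1))], axis=1)
```

Attaching the same global vector to every point lets the per-point decoders reason about local geometry in the context of the whole object.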

3.7 Pose and Scale Estimation

After we extract the feature embeddings from the multi-modal input, we apply four separate decoders for translation, x-axis, z-axis, and scale estimation.

Translation Residual Estimation As demonstrated in [5], residual estimation achieves better performance than direct regression by learning the distribution of the residual between the prior and the actual value. The translation decoder learns a 3D translation residual $\Delta t$ from the object translation prior $t_{prior}$, calculated as the average of the predicted 3D coordinates over the sampled pixels in $\mathcal{P}$. To be more specific:

$t_{prior} = \frac{1}{N} \sum_{i=1}^{N} \hat{\mathcal{D}}_i K^{-1} [u_i, v_i, 1]^T$   (8)

where $K$ is the camera intrinsic matrix and $(u_i, v_i)$ is the 2D pixel coordinate of the $i$-th sampled pixel. We use the $L_1$ loss between the ground truth and estimated position:

$\mathcal{L}_t = \| t^* - (t_{prior} + \Delta t) \|_1$   (9)
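The prior in Equation (8) is just the mean of the back-projected sampled pixels; a minimal numpy sketch (the learned residual produced by the decoder is not modeled here):

```python
import numpy as np

def translation_prior(depth_vals, uv, K):
    """Back-project sampled pixels x_i = d_i * K^{-1} [u_i, v_i, 1]^T
    and average them to obtain the translation prior t_prior."""
    uv1 = np.concatenate([uv, np.ones((len(uv), 1))], axis=1)   # homogeneous
    pts = (np.linalg.inv(K) @ uv1.T).T * depth_vals[:, None]
    return pts.mean(axis=0)
```

Because the prior already lands near the visible surface, the decoder only has to learn a small, well-conditioned offset.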

Pose Estimation Similar to [5], rather than directly regressing the rotation matrix $R$, it is more effective to decouple it into two orthogonal axes and estimate them separately. As shown in Figure 3, we decouple $R$ into the x-axis $a_x$ (red axis) and z-axis $a_z$ (green axis). Following the strategy of confidence learning in [7], the network learns confidence values to deal with the problem that the two regressed axes are not orthogonal:

$\min_{\theta_x, \theta_z} \ c_x \theta_x^2 + c_z \theta_z^2 \quad \mathrm{s.t.} \quad \theta_x + \theta_z = \theta - \frac{\pi}{2}$   (10)

where $c_x, c_z$ denote the confidences for the learned axes and $\theta$ represents the angle between $a_x$ and $a_z$. The correction angles $\theta_x, \theta_z$ are obtained by solving this optimization problem and then used to rotate $a_x$ and $a_z$ within their common plane. More details can be found in [7]. For the training loss, first, we use the $L_2$ loss and cosine similarity loss for axis estimation:

$\mathcal{L}_{a} = \sum_{i \in \{x, z\}} \big( \| \hat{a}_i - a_i^* \|_2 + (1 - \hat{a}_i \cdot a_i^*) \big)$   (11)

Then to constrain the perpendicular relationship between two axes, we add the angular loss:

$\mathcal{L}_{ang} = | \hat{a}_x \cdot \hat{a}_z |$   (12)

To learn the axis confidences, we add the confidence loss, which is the distance between the estimated confidence and an exponential function of the distance between the ground truth and estimated axis:

$\mathcal{L}_{con} = \sum_{i \in \{x, z\}} \big| \hat{c}_i - \exp(-\alpha \| \hat{a}_i - a_i^* \|_2) \big|$   (13)

where $\alpha$ is a constant to scale the distance.
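The confidence-weighted correction described above can be sketched as follows: both axes are rotated within their common plane until they are exactly perpendicular, with the less confident axis moving more. This is a minimal numpy sketch of our reading of the scheme in [7]; the closed-form angle split is an assumption:

```python
import numpy as np

def rodrigues(v, axis, angle):
    """Rotate vector v about a unit axis by angle (Rodrigues' formula)."""
    return (v * np.cos(angle) + np.cross(axis, v) * np.sin(angle)
            + axis * (axis @ v) * (1 - np.cos(angle)))

def orthogonalize_axes(ax, az, cx, cz):
    """Make the two predicted axes exactly perpendicular, allocating the
    total correction inversely to each axis' confidence."""
    ax = ax / np.linalg.norm(ax)
    az = az / np.linalg.norm(az)
    n = np.cross(ax, az)
    n = n / np.linalg.norm(n)                        # common-plane normal
    delta = np.arccos(np.clip(ax @ az, -1.0, 1.0)) - np.pi / 2
    a_x = rodrigues(ax, n, cz / (cx + cz) * delta)   # rotate ax toward az
    a_z = rodrigues(az, n, -cx / (cx + cz) * delta)  # rotate az toward ax
    return a_x, a_z
```

With equal confidences the correction is split evenly; as one confidence grows, its axis stays nearly fixed while the other absorbs the adjustment.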

Thus the overall loss for the second stage is:

$\mathcal{L} = \lambda_1 \mathcal{L}_t + \lambda_2 \mathcal{L}_a + \lambda_3 \mathcal{L}_{ang} + \lambda_4 \mathcal{L}_{con} + \lambda_5 \mathcal{L}_s$   (14)

To deal with object symmetry, we apply specific treatments for different symmetry types. For axially symmetric objects (those that remain the same shape when rotated around one axis), we ignore the loss terms for the x-axis. For planar symmetric objects (those that remain the same shape when mirrored about one or more planes), we generate all candidate x-axis rotations. For example, for an object symmetric about the xz-plane and yz-plane, rotating the x-axis about the z-axis by $\pi$ radians will not affect the object’s shape. The new x-axis is denoted as $a_x' = R_z(\pi) a_x^*$ and the loss for the x-axis is defined as the minimum loss over both candidates:

$\mathcal{L}_{a_x} = \min \big( \mathcal{L}_{a}(\hat{a}_x, a_x^*), \ \mathcal{L}_{a}(\hat{a}_x, a_x') \big)$   (15)
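The planar-symmetry treatment in Equation (15) reduces to taking the minimum over the ground truth x-axis and its $\pi$-rotation about the z-axis. A minimal numpy sketch (using a plain $L_2$ axis loss for brevity):

```python
import numpy as np

def sym_x_axis_loss(x_est, x_gt):
    """Min loss over the GT x-axis and its pi-rotation about z,
    i.e. R_z(pi) @ x_gt = (-x, -y, z)."""
    x_flip = np.array([-x_gt[0], -x_gt[1], x_gt[2]])
    return min(np.linalg.norm(x_est - x_gt), np.linalg.norm(x_est - x_flip))
```

This prevents the network from being penalized for predicting a pose that is visually indistinguishable from the ground truth.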

Scale Residual Estimation Similar to the translation decoder, we define the scale prior $s_{prior}$ as the average of the scales of all object 3D CAD models within each category. The scale of a given instance is then calculated as:

$\hat{s} = s_{prior} + \Delta s$   (16)

The loss function is defined as the $L_1$ loss between the ground truth scale and the estimated scale:

$\mathcal{L}_s = \| s^* - (s_{prior} + \Delta s) \|_1$   (17)

4 Experiments

Dataset We evaluated TransNet and baseline models on the ClearPose dataset [6] for categorical transparent object pose estimation. The ClearPose dataset contains over 350K real-world labeled RGB-D frames in 51 scenes across 9 sets, and around 5M instance annotations covering 63 household objects. We selected 47 objects and categorized them into 6 categories: bottle, bowl, container, tableware, water cup, and wine cup.

We used all the scenes in set2, set4, set5, and set6 for training and scenes in set3 and set7 for validation and testing. The division guaranteed that there were some unseen objects for testing within each category. Overall, we used 190K images for training and 6K for testing. For training depth completion and surface normal estimation, we used the same dataset split.

Implementation Details Our model was trained in several stages. For all the experiments in this paper, we used the ground truth instance segmentation as input, which could also be obtained from Mask R-CNN [11]. The image patches were generated from object bounding boxes and re-scaled to a fixed resolution. For TransCG, we used the AdamW optimizer [25] to train the model until convergence. For U-Net, we used the Adam optimizer [17], also trained until convergence. For both surface normal estimation and depth completion, the batch size was set to 24 images. The surface normal estimation and depth completion models were frozen during the training of the second stage.

For the second stage, the training hyperparameters for Pointformer followed those used in [48]. We used data augmentation for the RGB features and for the instance mask used when sampling the generalized point cloud. A batch size of 18 was used. To balance the sampling distribution across categories, 3 instance samples were selected randomly for each of the 6 categories. We followed GPV-Pose [7] for the training hyperparameters, and the weights for all loss terms were kept the same during training. We used the Ranger optimizer [22, 41, 43] with a linear warm-up for the first 1000 iterations, followed by cosine annealing starting at the 0.72 anneal point. All the pose estimation experiments were trained on a 16G RTX 3080 GPU for 30 epochs with 6000 iterations each. All the categories were trained on the same model, instead of one model per category.

Evaluation metrics For category-level pose estimation, we followed [7, 5] using the 3D intersection over union (IoU) between the ground truth and estimated 3D bounding boxes (we used the estimated scale and pose to draw the estimated 3D bounding box) at 25%, 50% and 75% thresholds. Additionally, we used 5°2cm, 5°5cm, 10°2cm, and 10°5cm as metrics, where the numbers represent the percentage of estimations with rotation and translation errors under the given degree and distance thresholds. For Section 4.4, we also used separate translation and rotation metrics, 5°, 10°, 2cm, 5cm, and 10cm, that calculate the percentage with respect to one factor.
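The combined degree/distance metrics can be computed from per-pose errors as follows (a minimal numpy sketch; thresholds are in degrees and in the translation unit, and the function names are illustrative):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Geodesic rotation error in degrees and Euclidean translation error."""
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rot_deg = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return rot_deg, np.linalg.norm(t_est - t_gt)

def accuracy_at(errors, deg_thr, trans_thr):
    """Percentage of poses with rotation < deg_thr AND translation < trans_thr."""
    return 100.0 * np.mean([(r < deg_thr) and (t < trans_thr) for r, t in errors])
```

Separate rotation-only or translation-only metrics follow by dropping one of the two threshold checks.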

For depth completion evaluation, we calculated the root mean squared error (RMSE), absolute relative error (REL) and mean absolute error (MAE), and used $\delta_{1.05}$, $\delta_{1.10}$, $\delta_{1.25}$ as metrics, where $\delta_{thr}$ was calculated as:

$\delta_{thr} = \frac{1}{|\mathcal{M}|} \sum_{p \in \mathcal{M}} \mathbb{1} \Big( \max \Big( \frac{\hat{d}_p}{d^*_p}, \frac{d^*_p}{\hat{d}_p} \Big) < thr \Big)$   (18)

where $\mathbb{1}(\cdot)$ represents the indicator function, and $\hat{d}_p$ and $d^*_p$ denote the estimated and ground truth depth at each pixel $p$.
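Equation (18) can be implemented directly (a minimal numpy sketch; `thr` would be 1.05, 1.10 or 1.25):

```python
import numpy as np

def delta_metric(d_est, d_gt, mask, thr=1.05):
    """Fraction of masked pixels whose max(d_est/d_gt, d_gt/d_est) < thr."""
    m = mask.astype(bool)
    ratio = np.maximum(d_est[m] / d_gt[m], d_gt[m] / d_est[m])
    return (ratio < thr).mean()
```

Because the ratio is symmetric, over- and under-estimation are penalized equally regardless of absolute depth.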

For surface normal estimation, we calculated the RMSE and MAE errors and used 11.25°, 22.5°, and 30° as thresholds, where each threshold metric represents the percentage of estimates with an angular distance less than that threshold from the ground truth surface normal.

4.1 Comparison with Baseline

Method IoU25 IoU50 IoU75 5°2cm 5°5cm 10°2cm 10°5cm
GPV-Pose 93.7 58.3 10.5 0.4 1.5 7.4 9.1
TransNet 90.3 67.4 22.1 2.4 7.5 23.6 27.6
Table 1: Comparison with the baseline on the ClearPose dataset.

We chose a state-of-the-art categorical opaque object pose estimation model (GPV-Pose [7]) as the baseline, which was trained with estimated depth from TransCG [9] for a fair comparison. From Table 1, TransNet outperformed the baseline on most of the metrics on the ClearPose dataset. IoU25 is very easy to learn, so there is no large difference between the two methods. For the rest of the metrics, TransNet achieved around 2x the percentage of the baseline on IoU75, 3x on 10°5cm and 5x on 5°5cm. Qualitative results for TransNet are shown in Figure 4.

Figure 4: Qualitative results of category-level pose estimates from TransNet. The left column is the original RGB image within our test set and the right column is the pose estimation results. The white bounding box is the ground truth and the colored one is the estimation result. Different colors represent different categories. For axial symmetric objects, because we only care about the scale and z-axis, we use the ground truth x-axis and estimated z-axis to calculate the estimated x-axis, for better visualization. In the figure, there is a pitcher without either ground truth or estimated bounding box because it is not within any of the defined categories, so we ignore it for both training and testing.

4.2 Embedding Method Analysis

In Table 2, we compare the embedding methods 3D-GCN [21] and Pointformer [48] within TransNet. The modalities for the generalized point cloud were depth, RGB and ray direction (without surface normal) for all trials; the only differences between trials were the depth type and embedding method. With ground truth depth input, 3D-GCN and Pointformer achieved similar results; on some metrics 3D-GCN was even better. But when the ground truth depth was changed to estimated depth (modeling the change from the opaque to the transparent setting), Pointformer retained much more accuracy than 3D-GCN. Our explanation is as follows. Like many point cloud embedding methods, 3D-GCN propagates information between nearest neighbors, which is very efficient given a point cloud with low noise. But given completed depth, the high noise makes it unstable to pass data among neighbors. For Pointformer, in contrast, information is passed through the whole point cloud regardless of the noise level. Therefore, given depth information with large uncertainty, transformer-based embedding methods may be more powerful than embedding methods using nearest neighbors.

Depth type Embedding IoU25 IoU50 IoU75 5°2cm 5°5cm 10°2cm 10°5cm
Ground truth 3D-GCN 90.0 84.1 43.0 21.4 48.0 61.8 64.7
Ground truth Pointformer 90.0 81.8 56.5 24.1 39.3 59.0 60.7
Estimation 3D-GCN 88.8 59.8 10.4 0.9 3.4 12.3 15.4
Estimation Pointformer 88.5 62.2 17.6 1.6 5.0 17.4 20.9
Table 2: Comparison between different embedding methods.

4.3 Ablation Study of Generalized Point Cloud

We explored different combinations of feature inputs for the generalized point cloud to find the one most suitable for TransNet. Results are shown in Table 3. For trials 1 and 2, we compared the effect of adding estimated surface normal to the generalized point cloud. All the metrics demonstrated that the inclusion of surface normal does improve the resulting pose estimation accuracy.

Trial depth normal ray-direction IoU25 IoU50 IoU75 5°2cm 5°5cm 10°2cm 10°5cm
1 ✓ – ✓ 88.5 62.2 17.6 1.6 5.0 17.4 20.9
2 ✓ ✓ ✓ 90.3 67.4 22.1 2.4 7.5 23.6 27.6

Table 3: Ablation study for different combinations of the generalized point cloud. For both trials, RGB is also used as an input feature of the generalized point cloud.

4.4 Depth and Surface Normal Exploration on TransNet

Metric RMSE REL MAE δ1.05 δ1.10 δ1.25
Value 0.055 0.044 0.041 68.93 89.40 98.93
Table 4: Accuracy for depth completion on the ClearPose dataset. All metrics are calculated within the transparent mask.
Metric RMSE MAE 11.25° 22.5° 30°
Value 0.1915 0.1334 56.75 88.45 96.64
Table 5: Accuracy for surface normal estimation on the ClearPose dataset.

Trial Depth Normal IoU25 IoU50 IoU75 5°2cm 5°5cm 10°2cm 10°5cm 5° 10° 2cm 5cm 10cm
1 GT GT 95.1 87.7 66.7 31.8 48.4 66.5 66.7 47.3 66.3 63.3 97.9 99.9
2 GT EST 90.9 82.1 56.3 23.4 36.5 58.0 59.6 37.3 59.6 53.6 97.2 99.9
3 EST GT 94.0 83.8 34.3 8.1 29.9 47.8 60.3 37.3 61.8 22.2 77.1 97.4
4 EST EST 90.3 67.4 22.1 2.4 7.5 23.6 27.6 8.8 28.1 16.6 77.4 96.8

Table 6: Pose estimation accuracy of TransNet with ground truth (GT) and estimated (EST) depth and surface normal.

We explored combinations of depth and surface normal inputs with different accuracies. Table 4 and Table 5 show the performance of TransCG and U-Net separately, while Table 6 shows the resulting pose estimation accuracy, where “GT” and “EST” represent ground truth and estimated input for depth and surface normal respectively. From the comparison among trials 1 - 3, accurate depth is more essential than accurate surface normal for category-level transparent object pose estimation. For instance, as the ground truth depth changes to the estimated depth from trial 1 to trial 3, 5°2cm decreases by 23.7, while for surface normal estimation it only decreases by 8.4 between trial 1 and trial 2. More specifically, from the decoupled rotation and translation metrics, we can see that 2cm decreases by 41.1 between trial 1 and trial 3 compared to 9.7 between trial 1 and trial 2, meaning that depth accuracy is more important for translation estimation. Focusing on the translation metrics between trial 1 and trial 4, 2cm decreases by 46.7 but the other two lose much less (20.5 for 5cm and 3.1 for 10cm), which can be explained by the depth completion accuracy shown in Table 4 (MAE = 0.041 m). From the comparison of trials 1 - 4 on the rotation metrics 5° and 10°, we can see that either accurate surface normal or accurate depth can support good rotation performance (for either trial 2 or trial 3, 5° decreases by 10.0 and 10° decreases by around 7). Once we use the estimated version of both, 5° decreases by 38.5 and 10° decreases by 38.2.

5 Conclusions

In this paper, we proposed TransNet, a two-stage pipeline for category-level transparent object pose estimation. TransNet outperformed a baseline by taking advantage of both state-of-the-art depth completion and opaque object category pose estimation. Ablation studies about multi-modal input and feature embedding modules were performed to guide deeper explorations. In the future, we plan to explore how category information can be used earlier in the network for better accuracy, improve depth completion potentially using additional consistency losses, and extend the model to be category-level across both transparent and opaque instances.

References

  • [1] J. Chang, M. Kim, S. Kang, H. Han, S. Hong, K. Jang, and S. Kang (2021) GhostPose*: multi-view pose estimation of transparent objects for robot hand grasping. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5749–5755. Cited by: §2.1.
  • [2] D. Chen, J. Li, Z. Wang, and K. Xu (2020) Learning canonical shape space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11973–11982. Cited by: §2.2.
  • [3] K. Chen and Q. Dou (2021) Sgpa: structure-guided prior adaptation for category-level 6d object pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2773–2782. Cited by: §2.2.
  • [4] L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818. Cited by: §2.1.
  • [5] W. Chen, X. Jia, H. J. Chang, J. Duan, L. Shen, and A. Leonardis (2021) Fs-net: fast shape-based network for category-level 6d object pose estimation with decoupled rotation mechanism. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1581–1590. Cited by: §2.2, §3.1, §3.4, §3.6, §3.7, §3.7, §4.
  • [6] X. Chen, H. Zhang, Z. Yu, A. Opipari, and O. C. Jenkins (2022) ClearPose: large-scale transparent object dataset and benchmark. arXiv preprint arXiv:2203.03890. Cited by: Figure 2, §1, §2.1, §4.
  • [7] Y. Di, R. Zhang, Z. Lou, F. Manhardt, X. Ji, N. Navab, and F. Tombari (2022) GPV-pose: category-level object pose estimation via geometry-guided point-wise voting. arXiv preprint arXiv:2203.07918. Cited by: 3rd item, §2.2, §3.1, §3.2, §3.4, §3.6, §3.7, §4.1, §4, §4.
  • [8] Z. Fan, Z. Song, J. Xu, Z. Wang, K. Wu, H. Liu, and J. He (2021) ACR-pose: adversarial canonical representation reconstruction network for category level 6d object pose estimation. arXiv preprint arXiv:2111.10524. Cited by: §2.2.
  • [9] H. Fang, H. Fang, S. Xu, and C. Lu (2022) TransCG: a large-scale real-world dataset for transparent object depth completion and grasping. arXiv preprint arXiv:2202.08471. Cited by: 3rd item, §1, §2.1, §3.1, §3.3, §4.1.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. Advances in Neural Information Processing Systems 27. Cited by: §2.2.
  • [11] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969. Cited by: §2.1, Figure 3, §3.1, §3.2, §4.
  • [12] J. Ichnowski, Y. Avigal, J. Kerr, and K. Goldberg (2021) Dex-nerf: using a neural radiance field to grasp transparent objects. arXiv preprint arXiv:2110.14217. Cited by: §1, §2.1.
  • [13] M. Z. Irshad, T. Kollar, M. Laskey, K. Stone, and Z. Kira (2022) CenterSnap: single-shot multi-object 3d shape reconstruction and categorical 6d pose and size estimation. arXiv preprint arXiv:2203.01929. Cited by: §2.2.
  • [14] X. Jiang, D. Li, H. Chen, Y. Zheng, R. Zhao, and L. Wu (2022) Uni6D: a unified cnn framework without projection breakdown for 6d pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11174–11184. Cited by: §3.5.
  • [15] A. Kalra, V. Taamazyan, S. K. Rao, K. Venkataraman, R. Raskar, and A. Kadambi (2020) Deep polarization cues for transparent object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8602–8611. Cited by: §2.1.
  • [16] M. P. Khaing and M. Masayuki (2018)

    Transparent object detection using convolutional neural network

    .
    In International Conference on Big Data Analysis and Deep Learning Applications, pp. 86–93. Cited by: §2.1.
  • [17] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.
  • [18] P. Lai and C. Fuh (2015) Transparent object detection using regions with convolutional neural network. In IPPR Conference on Computer Vision, Graphics, and Image Processing, Vol. 2. Cited by: §2.1.
  • [19] X. Li, H. Wang, L. Yi, L. J. Guibas, A. L. Abbott, and S. Song (2020) Category-level articulated object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3706–3715. Cited by: §2.2.
  • [20] J. Lin, Z. Wei, Z. Li, S. Xu, K. Jia, and Y. Li (2021) Dualposenet: category-level 6d object pose and size estimation using dual pose network with refined learning of pose consistency. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3560–3569. Cited by: §2.2.
  • [21] Z. Lin, S. Huang, and Y. F. Wang (2020) Convolution in the cloud: learning deformable kernels in 3d graph convolution networks for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.6, §4.2.
  • [22] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han (2019)

    On the variance of the adaptive learning rate and beyond

    .
    arXiv preprint arXiv:1908.03265. Cited by: §4.
  • [23] X. Liu, S. Iwase, and K. M. Kitani (2021) Stereobj-1m: large-scale stereo image dataset for 6d object pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10870–10879. Cited by: §1, §2.1.
  • [24] X. Liu, R. Jonschkowski, A. Angelova, and K. Konolige (2020) Keypose: multi-view 3d labeling and keypoint estimation for transparent objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11602–11610. Cited by: §1, §2.1.
  • [25] I. Loshchilov and F. Hutter (2017) Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Cited by: §4.
  • [26] I. Lysenkov, V. Eruhimov, and G. Bradski (2013) Recognition and pose estimation of rigid transparent objects with a kinect sensor. Robotics 273 (273-280), pp. 2. Cited by: §1.
  • [27] C. J. Phillips, M. Lecce, and K. Daniilidis (2016) Seeing glassware: from edge detection to pose estimation and shape recovery.. In Robotics: Science and Systems, Vol. 3, pp. 3. Cited by: §1.
  • [28] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-assisted Intervention, pp. 234–241. Cited by: §3.1, §3.4.
  • [29] S. Sajjan, M. Moore, M. Pan, G. Nagaraja, J. Lee, A. Zeng, and S. Song (2020) Clear grasp: 3d shape estimation of transparent objects for manipulation. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 3634–3642. Cited by: §1, §2.1.
  • [30] Y. Tang, J. Chen, Z. Yang, Z. Lin, Q. Li, and W. Liu (2021) DepthGrasp: depth completion of transparent objects using self-attentive adversarial network with spectral residual for grasping. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5710–5716. Cited by: §2.1.
  • [31] M. Tian, M. H. Ang, and G. H. Lee (2020) Shape prior deformation for categorical 6d object pose and size estimation. In European Conference on Computer Vision, pp. 530–546. Cited by: §2.2.
  • [32] M. Tian, L. Pan, M. H. Ang, and G. H. Lee (2020) Robust 6d object pose estimation by learning rgb-d features. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 6218–6224. Cited by: §2.1.
  • [33] S. Umeyama (1991) Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis & Machine Intelligence 13 (04), pp. 376–380. Cited by: §2.2.
  • [34] C. Wang, D. Xu, Y. Zhu, R. Martín-Martín, C. Lu, L. Fei-Fei, and S. Savarese (2019) Densefusion: 6d object pose estimation by iterative dense fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3343–3352. Cited by: §3.1.
  • [35] H. Wang, S. Sridhar, J. Huang, J. Valentin, S. Song, and L. J. Guibas (2019) Normalized object coordinate space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2642–2651. Cited by: §2.2.
  • [36] T. Weng, A. Pallankize, Y. Tang, O. Kroemer, and D. Held (2020) Multi-modal transfer learning for grasping transparent and specular objects. IEEE Robotics and Automation Letters 5 (3), pp. 3791–3798. Cited by: §1, §2.1.
  • [37] E. Xie, W. Wang, W. Wang, M. Ding, C. Shen, and P. Luo (2020) Segmenting transparent objects in the wild. In European Conference on Computer Vision, pp. 696–711. Cited by: §2.1.
  • [38] C. Xu, J. Chen, M. Yao, J. Zhou, L. Zhang, and Y. Liu (2020) 6dof pose estimation of transparent object from a single rgb-d image. Sensors 20 (23), pp. 6790. Cited by: §1, §2.1, §3.5.
  • [39] H. Xu, Y. R. Wang, S. Eppel, A. Aspuru-Guzik, F. Shkurti, and A. Garg (2021) Seeing glass: joint point cloud and depth completion for transparent objects. arXiv preprint arXiv:2110.00087. Cited by: §1, §2.1.
  • [40] Y. Xu, H. Nagahara, A. Shimada, and R. Taniguchi (2015) Transcut: transparent object segmentation from a light-field image. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3442–3450. Cited by: §2.1.
  • [41] H. Yong, J. Huang, X. Hua, and L. Zhang (2020) Gradient centralization: a new optimization technique for deep neural networks. In European Conference on Computer Vision, pp. 635–652. Cited by: §4.
  • [42] Y. You, R. Shi, W. Wang, and C. Lu (2022) CPPF: towards robust category-level 9d pose estimation in the wild. arXiv preprint arXiv:2203.03089. Cited by: §2.2.
  • [43] M. Zhang, J. Lucas, J. Ba, and G. E. Hinton (2019) Lookahead optimizer: k steps forward, 1 step back. Advances in Neural Information Processing Systems 32. Cited by: §4.
  • [44] Z. Zhou, X. Chen, and O. C. Jenkins (2020) Lit: light-field inference of transparency for refractive object localization. IEEE Robotics and Automation Letters 5 (3), pp. 4548–4555. Cited by: §2.1.
  • [45] Z. Zhou, T. Pan, S. Wu, H. Chang, and O. C. Jenkins (2019) Glassloc: plenoptic grasp pose detection in transparent clutter. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4776–4783. Cited by: §2.1.
  • [46] Z. Zhou, Z. Sui, and O. C. Jenkins (2018) Plenoptic monte carlo object localization for robot grasping under layered translucency. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–8. Cited by: §2.1.
  • [47] L. Zhu, A. Mousavian, Y. Xiang, H. Mazhar, J. van Eenbergen, S. Debnath, and D. Fox (2021) RGB-d local implicit function for depth completion of transparent objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4649–4658. Cited by: §2.1.
  • [48] L. Zou, Z. Huang, N. Gu, and G. Wang (2021) 6D-vit: category-level 6d object pose estimation via transformer-based instance representation learning. arXiv preprint arXiv:2110.04792. Cited by: §2.2, Figure 3, §3.1, §3.6, §4.2, §4.