I. Introduction
Robots working in human environments often encounter a wide range of articulated objects, such as tools, cabinets, and other kinematically jointed objects. For example, the cabinet with three drawers shown in Fig. 6 functions as a storage container; a robot would need to open and close the drawers to accomplish storage and retrieval tasks. Accomplishing such tasks involves repeated sense-plan-act phases under uncertainty in the robot's observations, and demands pose estimation that accommodates this uncertainty to inform a planner of the current state of the world. The challenge lies in dealing with sensor noise and occlusions inherent to the environment. Partial observations due to self-occlusions and environmental occlusions make the inference problem multimodal, and the inference becomes high-dimensional as the number of object parts grows.
Pose estimation methods have been proposed that take a generative approach to the problem [22, 2, 26]. These methods aim to explain a scene as a collection of object and object-part poses, using a particle filter formulation to iteratively maintain belief over possible states in the form of particles. Though these approaches hold the power of modeling the world generatively, they have an inherent drawback: they become slow as the number of rigid bodies increases. In this paper, we overcome this drawback by factoring the state into individual object parts constrained by their articulations, creating an efficient inference framework for pose estimation.
Generative methods exploiting articulation constraints are widely used in human pose estimation [17, 21, 24], where human body parts have constrained articulation. We take a similar approach and factor the problem using a Markov Random Field (MRF) formulation in which each hidden node of the probabilistic graphical model represents an object-part's pose (a continuous variable), each observed node carries the information about that object-part from the observation, and the edges of the graph denote the articulation constraints between parts. Inference on the graph is performed using a message passing algorithm that shares information between the parts' pose variables to produce their pose beliefs, which collectively give the state of the articulated object.
Existing message passing approaches [10, 20] represent messages as mixtures of Gaussian components and provide Gibbs-sampling-based techniques to approximate the message product and update operations. Their message representation and product techniques limit the number of samples usable in inference and are not applicable to our application domain. In this paper we provide a more efficient "pull" message passing algorithm for nonparametric belief propagation (PMPNBP). The key idea of pull message updating is to evaluate samples taken from the belief of the receiving node with respect to the densities informing the sending node. The mixture product approximation can then be performed individually per sample and later normalized into a distribution. This pull updating avoids the computational pitfalls of the push updating of message distributions used in [10, 20].
Our system takes a 3D point cloud from the sensor and object geometry models in the form of a URDF (Unified Robot Description Format) as input, and outputs belief samples in the continuous pose domain. We use these belief samples to compute a maximum likely estimate that lets the robot act on the object. We evaluate the performance of the system quantitatively on an articulated object in compelling scenes. The contributions of this paper are: a) an efficient belief propagation algorithm to estimate articulated object poses, b) discussion and comparison with a traditional particle filter as baseline, and c) a belief representation from perception to inform a task planner. A simple task illustrates how the belief propagation informs a task planner to choose an information-gain action and overcome uncertainty in the perceptual estimation.
II. Related Work
Existing methods in the literature have set out to address the challenge of manipulating articulated objects by robots in complex human environments. Particular focus has been placed on estimating the kinematic models of novel articulated objects through interactive perception. Hausman et al. [7] propose a particle filtering approach to estimate articulation models and plan actions that reduce model uncertainty. In [12], Martin et al. suggest an online interactive perception technique for estimating kinematic models, incorporating low-level point tracking and mid-level rigid body tracking with high-level kinematic model estimation over time. Sturm et al. [19, 18] address the task of estimating articulation models in a probabilistic fashion from human demonstrations of manipulation examples.
All of these approaches discover an articulated object's kinematic model by alternating between action and sensing, and are important for enabling a robot to reliably interact with novel articulated objects. In this paper we assume that such kinematic models, once learned for an object, can be reused to localize its articulated pose under ambiguous real-world observations. The method proposed here could complement the existing body of work towards task completion in unstructured human environments.
Probabilistic graphical model representations such as the Markov random field (MRF) are widely used in computer vision problems where the variables take discrete labels, such as foreground/background. Many algorithms have been proposed to compute the joint probability of a graphical model. Belief propagation algorithms are guaranteed to converge on tree-structured graphs, and for graph structures with loops, Loopy Belief Propagation (LBP) [13] is empirically proven to perform well for discrete variables. The problem becomes nontrivial when the variables take continuous values. Nonparametric Belief Propagation (NBP) by Sudderth et al. [20] and Particle Message Passing (PAMPAS) by Isard et al. [10] provide sampling approaches to belief propagation with continuous variables. Both approximate a continuous function as a mixture of weighted Gaussians and use local Gibbs sampling to approximate the product of mixtures. NBP has been used effectively in applications such as human pose estimation [17] and hand tracking [21] by modeling the graph as a tree-structured particle network. Scene understanding problems, where a scene is composed of household objects with articulations, demand a large number of samples in the representation to handle the high-dimensional multimodal state space. The algorithm proposed in this paper produces promising results in handling such demands. We reported comparisons with the existing NBP algorithm [10] in [3] with 2D examples.

Model-based generative methods [14, 23, 25] are increasingly being used to solve scene estimation problems, with heuristics from discriminative approaches [16, 5] used to infer object poses. These approaches do not account for object-object interactions or articulations and rely significantly on the effectiveness of recognition. Our framework does not rely on any prior detections but can benefit from them, while inherently handling noisy priors [20, 10, 3]. Chua et al. [1] proposed a scene grammar representation and belief propagation over factor graphs, with an objective similar to ours of generating scenes with multiple objects satisfying the scene grammars. This approach is similar to ours; however, we specifically deal with 3D observations along with continuous variables.

III. Problem Statement
We consider an articulated object to be comprised of $N$ object-parts and $K$ points of articulation. Such an object description conforms with the Unified Robot Description Format (URDF) commonly used in the Robot Operating System (ROS) [15]. A URDF-compliant kinematic model can be represented as an undirected graph $G = (V, E)$ with nodes for object-part links and edges for points of articulation. As a Markov Random Field (MRF), $G$ has two types of variables, $X$ and $Z$, that are respectively hidden and observed. Let $Z = \{z_s \mid z_s \subseteq P\}$, with $P$ being the point cloud observed by the robot's 3D sensor. Each object-part has an observed node in the graph $G$; $z_s$ serves as a region of interest if a trained object detector is used to find the object in the scene, but is optional in our current approach. Each observed node is connected to a hidden node that represents the pose of the underlying object-part. Let $X = \{x_s\}$, where $x_s$ is a dual quaternion pose of an object-part. Dual quaternions [4, 11] are a quaternion equivalent to dual numbers, representing a 6D pose as $q = q_r + \epsilon q_d$, where $q_r$ is the real component and $q_d$ is the dual component. Alternatively, it can be written as $q = q_r + \frac{\epsilon}{2} t \otimes q_r$, where $t$ is the translation expressed as a pure quaternion. Constructing a dual quaternion is similar to composing rotation matrices, with a product of dual quaternions representing a combined translation and orientation as $q = q_t \otimes q_r$, where $\otimes$ is dual quaternion multiplication, $q_r$ is the dual quaternion representation of a pure rotation, and $q_t$ is the dual quaternion representation of a pure translation. This dual quaternion representation is widely used for rigid body kinematics due to its efficiency and elegance compared with matrix multiplication. In addition to representing the hidden variables $x_s$, dual quaternions can capture the constraints in the edges and effectively represent articulation types such as prismatic, revolute, and fixed joints. This is discussed in detail in Section IV-D2.
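To make the dual quaternion operations concrete, the sketch below implements the composition $q = q_t \otimes q_r$ with quaternions stored as (w, x, y, z) arrays. This is a minimal illustration, not the paper's implementation; the helper names (`qmul`, `dq_from_pose`, etc.) are ours.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_pose(q_rot, t):
    """Dual quaternion q = q_r + eps*q_d with q_d = 0.5 * t_quat * q_r,
    i.e. the product of a pure translation and a pure rotation."""
    t_quat = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(t_quat, q_rot)

def dq_mul(a, b):
    """(a_r + eps a_d)(b_r + eps b_d) = a_r b_r + eps(a_r b_d + a_d b_r);
    composition follows the same convention as matrix products."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_translation(dq):
    """Recover the translation: t_quat = 2 * q_d * conj(q_r)."""
    qr, qd = dq
    conj = qr * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(qd, conj)[1:]
```

For example, composing two pure translations (1, 0, 0) and (0, 2, 0) under identity rotation yields the translation (1, 2, 0), matching the rigid-body semantics described above.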
Pose estimation of the articulated object involves inferring the hidden variables $X$ that maximize the joint probability of the graph. Considering only second-order cliques, this joint probability is given as:

$p(X \mid Z) = \frac{1}{\eta} \prod_{(s,t) \in E} \psi_{s,t}(x_s, x_t) \prod_{s \in V} \phi_s(x_s, z_s)$  (1)

where $\psi_{s,t}(x_s, x_t)$ is the pairwise potential between nodes $x_s$ and $x_t$, $\phi_s(x_s, z_s)$ is the unary potential between the hidden node $x_s$ and observed node $z_s$, and $\eta$ is a normalizing factor. The problem is to infer the belief over the possible articulation poses assigned to the continuous hidden variables $X$ such that the joint probability is maximized. This inference is generally performed by passing messages between hidden variables until their belief distributions converge over several iterations. After convergence, a maximum likelihood estimate of the marginal belief $Bel(x_s)$ gives the pose estimate of the object-part corresponding to node $s$ in the graph $G$. The collection of all such object-part pose estimates forms the entire object's pose estimate.
IV. Nonparametric Belief Propagation
IV-A. Overview
A message is denoted $m_{t \to s}$, directed from node $t$ to node $s$, if there is an edge between the nodes in the graph $G$. The message represents the distribution of what node $t$ thinks node $s$ should take in terms of the hidden variable $x_s$. Typically, if $x_s$ is in the continuous domain, then $m_{t \to s}$ is represented as a Gaussian mixture to approximate the real distribution:

$m_{t \to s}(x_s) = \sum_{i=1}^{M} w_{ts}^{(i)} \mathcal{N}\left(x_s; \mu_{ts}^{(i)}, \Lambda_{ts}^{(i)}\right)$  (2)

where $\sum_{i=1}^{M} w_{ts}^{(i)} = 1$, $M$ is the number of Gaussian components, $w_{ts}^{(i)}$ is the weight associated with the $i$th component, and $\mu_{ts}^{(i)}$ and $\Lambda_{ts}^{(i)}$ are the mean and covariance of the $i$th component, respectively. We use the terms components, particles, and samples interchangeably in this paper. Hence, a message can be expressed as a set of triplets:

$m_{t \to s} = \left\{ \left( w_{ts}^{(i)}, \mu_{ts}^{(i)}, \Lambda_{ts}^{(i)} \right) \right\}_{i=1}^{M}$  (3)
Algorithm 1 — Message update

Given input messages $m^{n-1}_{u \to t} = \{(\mu^{(j)}_{ut}, w^{(j)}_{ut})\}_{j=1}^{M}$ for each $u \in \rho(t) \setminus s$, and methods to compute the functions $\psi_{t,s}(x_t, x_s)$ and $\phi_t(x_t, z_t)$ pointwise, the algorithm computes $m^{n}_{t \to s} = \{(\mu^{(i)}_{ts}, w^{(i)}_{ts})\}_{i=1}^{M}$.

1. Draw $M$ independent samples $\{\mu^{(i)}_{ts}\}_{i=1}^{M}$ from $Bel^{n-1}_s(x_s)$.
   (a) If $n = 1$, $Bel^{0}_s(x_s)$ is a uniform distribution or informed by a prior distribution.
   (b) If $n > 1$, $Bel^{n-1}_s(x_s)$ is the belief computed at iteration $n-1$ using importance sampling.
2. For each sample $\mu^{(i)}_{ts}$, compute its weight $w^{(i)}_{ts}$:
   (a) Sample $\hat{x}^{(i)}_t \sim P(x_t \mid x_s = \mu^{(i)}_{ts})$.
   (b) The unary weight $w^{(i)}_{unary}$ is computed using $\phi_t(\hat{x}^{(i)}_t, z_t)$.
   (c) The neighboring weight $w^{(i)}_{neigh}$ is computed using the incoming messages:
       i. For each $u \in \rho(t) \setminus s$, compute $W^{(i)}_u = \sum_{j=1}^{M} w^{(j)}_{ut} \, \mathcal{N}(\hat{x}^{(i)}_t; \mu^{(j)}_{ut}, \Lambda)$, where $\Lambda$ is a fixed kernel bandwidth.
       ii. Each neighboring weight is computed by $w^{(i)}_{neigh} = \prod_{u \in \rho(t) \setminus s} W^{(i)}_u$.
3. The final weights are computed as $w^{(i)}_{ts} = w^{(i)}_{unary} \times w^{(i)}_{neigh}$.
4. The weights $\{w^{(i)}_{ts}\}$ are associated with the samples $\{\mu^{(i)}_{ts}\}$ to represent $m^{n}_{t \to s}$.
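The message update algorithm above can be sketched in a few lines of Python. This is a one-dimensional toy version under our own naming, with the potentials passed in as callables; the real system operates on dual quaternion poses and rendered point clouds.

```python
import numpy as np

rng = np.random.default_rng(0)

def pull_message_update(belief_s, incoming_msgs, pairwise_sample, unary, kernel, M=100):
    """One 'pull' message update m_{t->s} (1-D sketch).

    belief_s        : samples of Bel^{n-1}_s (the receiving node's prior belief)
    incoming_msgs   : list of (samples, weights) for each u in rho(t)\\s
    pairwise_sample : draws x_t given x_s (the articulation constraint)
    unary           : phi_t(x_t) evaluated against the observation
    kernel          : density of an incoming-message component evaluated at x_t
    """
    # Step 1: draw M samples from the receiving node's belief.
    mu = rng.choice(belief_s, size=M)
    weights = np.empty(M)
    for i, x_s in enumerate(mu):
        # Step 2(a): sample a compatible pose of the sending node.
        x_t = pairwise_sample(x_s)
        # Step 2(b): unary weight from the observation model.
        w_unary = unary(x_t)
        # Step 2(c): neighboring weight, evaluating x_t against each
        # incoming message viewed as a kernel density estimate.
        w_neigh = 1.0
        for samples, w in incoming_msgs:
            w_neigh *= np.sum(w * kernel(samples, x_t))
        # Step 3: final weight is the product of unary and neighboring weights.
        weights[i] = w_unary * w_neigh
    weights /= weights.sum()  # normalize into a distribution
    return mu, weights
```

Note that the product over components is approximated per sample, which is what makes the update linear in the number of samples rather than requiring an inner Gibbs sampling loop.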
Whether the graph has a tree or loopy structure, computing these message updates is computationally nontrivial. A message update in a continuous domain at iteration $n$ from node $t$ to node $s$ is given by
$m^{n}_{t \to s}(x_s) = \int_{x_t} \psi_{s,t}(x_s, x_t) \, \phi_t(x_t, z_t) \prod_{u \in \rho(t) \setminus s} m^{n-1}_{u \to t}(x_t) \, dx_t$  (4)
where $\rho(t)$ is the set of neighbor nodes of $t$. The marginal belief over each hidden node $x_s$ at iteration $n$ is given by
$Bel^{n}_s(x_s) \propto \phi_s(x_s, z_s) \prod_{t \in \rho(s)} m^{n}_{t \to s}(x_s)$  (5)

where $T$ is the number of components used to represent the belief.
IV-B. "Push" Message Update
NBP [20] provides a Gibbs sampling approach to compute an approximation of the message product. Assuming that $\phi_t(x_t, z_t)$ is pointwise computable, a "pre-message" [9] is defined as

$M^{n-1}_{t \to s}(x_t) = \phi_t(x_t, z_t) \prod_{u \in \rho(t) \setminus s} m^{n-1}_{u \to t}(x_t)$  (6)

which can be computed in the Gibbs sampling procedure. This reduces Equation 4 to

$m^{n}_{t \to s}(x_s) = \int_{x_t} \psi_{s,t}(x_s, x_t) \, M^{n-1}_{t \to s}(x_t) \, dx_t$  (7)
Algorithm 2 — Belief update

Given incoming messages $m^{n}_{t \to s} = \{(\mu^{(i)}_{ts}, w^{(i)}_{ts})\}_{i=1}^{M}$ for each $t \in \rho(s)$, and a method to compute the function $\phi_s(x_s, z_s)$ pointwise, the algorithm computes $Bel^{n}_s(x_s) = \{x^{(i)}_s\}_{i=1}^{T}$.

1. For each $t \in \rho(s)$:
   (a) Update the weights $w^{(i)}_{ts} \leftarrow w^{(i)}_{ts} \times \phi_s(\mu^{(i)}_{ts}, z_s)$.
   (b) Normalize the weights such that $\sum_{i=1}^{M} w^{(i)}_{ts} = 1$.
2. Combine all the incoming messages to form a single set of samples and their weights $\{(\mu^{(i)}_s, w^{(i)}_s)\}_{i=1}^{T}$, where $T$ is the sum of the numbers of incoming samples.
3. Normalize the weights such that $\sum_{i=1}^{T} w^{(i)}_s = 1$.
4. Perform a resampling step followed by diffusion with Gaussian noise, to sample a new set $\{x^{(i)}_s\}_{i=1}^{T}$ that represents the marginal belief of $x_s$.
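The belief update algorithm above amounts to reweighting, pooling, resampling, and diffusing. A one-dimensional sketch (our own naming, with the unary potential passed as a callable):

```python
import numpy as np

rng = np.random.default_rng(1)

def belief_update(incoming_msgs, unary, noise=0.05):
    """Belief update at node s (1-D sketch): reweight each incoming message by
    the unary potential, pool the samples, then resample with diffusion."""
    pooled_samples, pooled_weights = [], []
    for mu, w in incoming_msgs:
        w = w * unary(mu)          # reweight by phi_s(mu, z_s)
        w = w / w.sum()            # normalize per message
        pooled_samples.append(mu)
        pooled_weights.append(w)
    samples = np.concatenate(pooled_samples)   # T = sum of incoming counts
    weights = np.concatenate(pooled_weights)
    weights = weights / weights.sum()          # normalize the pooled set
    # Resample proportionally to weight, then diffuse with Gaussian noise.
    idx = rng.choice(len(samples), size=len(samples), p=weights)
    return samples[idx] + rng.normal(0.0, noise, size=len(samples))
```

The diffusion step keeps the particle set from collapsing onto a few repeated samples after resampling.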
NBP [20] samples $x_t$ from the "pre-message", followed by a pairwise sampling in which $\psi_{s,t}$ acts as $P(x_s \mid x_t)$ to get a sample of $x_s$. The Gibbs sampling procedure is itself iterative, which makes the computation of the "pre-message" (analogous to the foundation function described for PAMPAS) expensive as the number of components increases.
Convergence of pose estimation on two different scenes: the first column shows the RGB image of each scene, and the second to fourth columns show the convergence results of PMPNBP. The second column shows randomly initialized belief particles, the third column shows the belief particles after 100 iterations, and the fourth column shows the maximum likely estimates of each part. The fifth column shows the estimation error (0.95 confidence interval) of PMPNBP with respect to the baseline particle filter method across 10 runs (400 particles and 100 iterations each). It can be seen that the baseline suffers from local minima while PMPNBP is able to recover from them effectively.
IV-C. "Pull" Message Update
Given the overview of nonparametric belief propagation in Section IV-A, we now describe our "pull" message passing algorithm. We represent a message as a set of pairs, instead of the triplets in Equation 3:

$m^{n}_{t \to s} = \left\{ \left( \mu^{(i)}_{ts}, w^{(i)}_{ts} \right) \right\}_{i=1}^{M}$  (8)

Similarly, the marginal belief is summarized as a sample set

$Bel^{n}_s(x_s) = \left\{ x^{(i)}_s \right\}_{i=1}^{T}$  (9)
where $T$ is the number of samples representing the marginal belief. We assume that a marginal belief $Bel^{n-1}_s(x_s)$ over $x_s$ is available from the previous iteration. To compute the message $m^{n}_{t \to s}$ at iteration $n$, we initially sample from the belief $Bel^{n-1}_s(x_s)$, pass these samples over to the neighboring nodes, and compute their weights. This step is described in the message update algorithm (Section IV-A). The computation of $Bel^{n}_s(x_s)$ is described in the belief update algorithm (Section IV-B). The key difference between the "push" approach of the earlier methods (NBP [20] and PAMPAS [10]) and our "pull" approach is the message generation. In the "push" approach, the incoming messages to node $t$ determine the outgoing message $m_{t \to s}$, whereas in the "pull" approach, samples representing $m_{t \to s}$ are drawn from the receiving node's belief from the previous iteration and weighted by the incoming messages to $t$. This weighting strategy is computationally efficient. Additionally, the product of incoming messages needed to compute $Bel^{n}_s(x_s)$ is approximated by a resampling step, as described in Section IV-B.
IV-D. Potential Functions
IV-D1. Unary Potential
The unary potential $\phi_s(x_s, z_s)$ models the likelihood by measuring how well a pose $x_s$ explains the point cloud observation $z_s$. The hypothesized object-part pose is used to position the given geometric object model and generate a synthetic point cloud $z^*_s$ that can be matched with the observation $z_s$. The synthetic point cloud is constructed using the object-part's geometric model, available a priori. The likelihood is calculated as
$\phi_s(x_s, z_s) = e^{-\lambda_r D(z_s, z^*_s)}$  (10)

where $\lambda_r$ is a scaling factor and $D(z_s, z^*_s)$ is the sum of the 3D Euclidean distances between the observed point and the rendered point at each pixel location in the region of interest.
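A minimal sketch of this likelihood, assuming the observed and rendered clouds are already associated pixel-by-pixel as (N, 3) arrays (the function name and default scaling factor are ours):

```python
import numpy as np

def unary_potential(observed, rendered, lam=10.0):
    """phi(x, z) = exp(-lam * D), where D sums the 3-D Euclidean distances
    between observed and rendered points at corresponding pixels."""
    d = np.linalg.norm(observed - rendered, axis=1).sum()
    return float(np.exp(-lam * d))
```

A perfect render scores 1.0, and the score decays exponentially as the rendered cloud drifts away from the observation.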
IV-D2. Pairwise Potential and Sampling
The pairwise potential $\psi_{s,t}(x_s, x_t)$ gives information about how compatible two object-part poses are, given the joint articulation constraints captured by the edge between them. As mentioned in Section III, these constraints are captured using dual quaternions. Most often, the joint articulation constraints have a minimum and maximum range, for either prismatic or revolute joint types. We capture this information from the URDF to get the limits of articulation $(q_{min}, q_{max})$. For a given pair $(x_s, x_t)$, we find the distances between the relative pose and the limits as $d_a$ and $d_b$, as well as the distance between the limits, $d_{ab}$. Using a joint limit kernel parameterized by $\sigma$, we evaluate the pairwise potential as:

$\psi_{s,t}(x_s, x_t) = \exp\left( -\frac{(d_a + d_b - d_{ab})^2}{2\sigma^2} \right)$  (11)
The pairwise sampling uses the same limits to sample $x_t$ for a given $x_s$. We uniformly sample a dual quaternion between $q_{min}$ and $q_{max}$ and transform it back to $x_s$'s current frame of reference via dual quaternion multiplication.
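For a prismatic joint, the limit-based compatibility check and sampling reduce to scalar displacements. The sketch below uses one plausible form of such a joint-limit kernel: it equals 1 whenever the displacement lies inside the limits (where the distances to the two limits sum to the limit range) and decays smoothly outside. The function names and the specific kernel form are our illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def joint_limit_potential(d, lo, hi, sigma=0.05):
    """Joint-limit kernel for a prismatic displacement d with limits [lo, hi]:
    d_a, d_b are distances to the limits, d_ab is the limit range. Inside the
    limits d_a + d_b == d_ab, so the potential is 1; outside it decays."""
    d_a, d_b, d_ab = abs(d - lo), abs(d - hi), hi - lo
    return float(np.exp(-((d_a + d_b - d_ab) ** 2) / (2 * sigma ** 2)))

def sample_within_limits(lo, hi):
    """Pairwise sampling: draw a displacement uniformly between the limits
    (to be transformed back into the parent part's frame)."""
    return float(rng.uniform(lo, hi))
```

For a drawer with a 0.3 m travel range, any hypothesis inside [0, 0.3] scores 1, while a hypothesis at 0.5 m is heavily penalized.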
V. Experiments and Results
V-A. Experimental Setup
We use a Fetch robot, a mobile manipulation platform, for our data collection and manipulation experiments. RGBD data is collected using an ASUS Xtion RGBD sensor mounted on the robot, along with the camera intrinsics and the camera-to-robot-base transform. We use CUDA-OpenGL interoperation to render synthetic scenes for a large set of poses in a single render buffer on a GPU. We render scenes as depth images, then back-project them to 3D point clouds via the camera intrinsic parameters.
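The back-projection step above follows the standard pinhole model. A minimal sketch (function name and intrinsics layout are ours; the real pipeline does this on the GPU):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3-D point cloud using
    pinhole intrinsics (fx, fy, cx, cy); returns an (H*W, 3) array with
    x = (u - cx) * z / fx and y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```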
V-B. Articulated Object Models
We used a cabinet with three drawers as the articulated object in our experiments. A CAD model of the object was obtained from the Internet, and its articulations were annotated in Blender to generate the URDF model. Obtaining geometric and articulation models can alternatively be crowdsourced [6] or learned through human or robot interactions [12].
V-C. Baseline
We implemented a Monte Carlo localization (particle filter) method with an object-specific state representation. For example, the cabinet with three drawers has a state representation of the form $(x, y, z, \phi, \theta, \psi, d_1, d_2, d_3)$, where the first six elements describe the 6D pose of the object in the world and $d_1, d_2, d_3$ represent the prismatic articulations. The measurement model in the implementation uses the unary potential described in Section IV-D1. Instead of rendering a point cloud for each object-part, the entire object in the hypothesized pose is rendered for measuring the likelihood. As the observations are static, the action model in the standard particle filter is replaced with a Gaussian diffusion over the object poses.
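The diffusion step that replaces the action model can be sketched as follows, assuming a hypothetical nine-dimensional state layout of 6-D object pose followed by three prismatic drawer displacements (the noise scales are illustrative, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(3)

def diffuse(particles, sigma_pose=0.01, sigma_joint=0.005):
    """Gaussian diffusion replacing the action model for a static scene.
    Each particle is assumed to be (x, y, z, roll, pitch, yaw, d1, d2, d3):
    the 6-D object pose followed by three prismatic displacements."""
    noise = np.concatenate([
        rng.normal(0.0, sigma_pose, (len(particles), 6)),   # pose noise
        rng.normal(0.0, sigma_joint, (len(particles), 3)),  # joint noise
    ], axis=1)
    return particles + noise
```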
Robot Experiment 1: The task for the robot is to open drawer 3 (bottom) while drawer 1 is open. The robot estimates the state of the object with certainty, as shown in (b), with drawer 1 open and drawer 3 closed. In addition to the estimate, covariance can be calculated (shown as ellipsoids in (c) with 75% confidence interval). This can be used to decide whether the estimation is certain, by thresholding the standard deviation in each dimension of the pose. In this case the standard deviation falls below the threshold of 0.25 cm, allowing the robot to perform the opening action. (d-e) show the robot opening drawer 3 using the estimate.

V-D. Convergence Results
In Figure 19, we show the convergence of the proposed method visually for two scenes containing different point cloud observations. We collected point cloud observations of the objects in arbitrary poses and performed inference using both the proposed PMPNBP and the baseline Monte Carlo localization. The entire point cloud observed by the sensor is used as the observation for all the object-parts. The first column shows the scene (RGB is not used in the inference). The second column shows the uniformly initialized poses of the object-parts over the entire point cloud. The third column shows the propagated belief particles for each object-part after 100 iterations. The fourth column shows the maximum likely estimate (MLE) of each object-part using the belief particles from the third column.
For the results shown in Figure 19, we ran our inference for 100 iterations with 400 particles representing the messages. Ten different runs are used to generate the convergence plot, which shows the mean and variance of the error across runs. We adopt the average distance metric (ADD) proposed in [8, 25] for the evaluation. The point cloud model of the object-part is transformed by its ground truth pose dual quaternion $q$ and by the estimated pose dual quaternion $\hat{q}$. The error is calculated as the pointwise distance between these transformed pairs, normalized by the number of points in the model point cloud:

$e = \frac{1}{|M|} \sum_{p \in M} \left\| (q \, p \, q^*) - (\hat{q} \, p \, \hat{q}^*) \right\|$  (12)

where $q^*$ and $\hat{q}^*$ are the conjugates of the dual quaternions [4, 11], and $|M|$ is the number of 3D points in the model set $M$.
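The ADD metric amounts to transforming the model points by both poses and averaging the pointwise distance. A sketch using rotation matrices and translation vectors for brevity (the paper applies the transforms as dual quaternions; the function name is ours):

```python
import numpy as np

def add_error(model_points, R_gt, t_gt, R_est, t_est):
    """Average distance (ADD) metric: transform the (N, 3) model points by the
    ground-truth pose (R_gt, t_gt) and the estimated pose (R_est, t_est),
    then average the pointwise Euclidean distance."""
    p_gt = model_points @ R_gt.T + t_gt
    p_est = model_points @ R_est.T + t_est
    return float(np.linalg.norm(p_gt - p_est, axis=1).mean())
```

For a pure 10 cm translation error, ADD evaluates to exactly 0.1 m regardless of the model's shape.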
V-E. Partial and Incomplete Observations
Articulated models suffer from self-occlusions and often from environmental occlusions. By exploiting the articulation constraints of an object, our inference method produces a physically plausible estimate that can explain partial or incomplete observations. In Figure 32 we show three compelling cases that indicate the strength of our inference method. In the first case, drawer 1 heavily occludes the bottom drawers, resulting in limited observations of drawers 2 and 3; PMPNBP is able to estimate a plausible pose given the constraints. In the second case, the cabinet is occluded by the robot's arm, while in the third case, a blanket from drawer 1 occludes half of the object. PMPNBP is able to recover from these occlusions and produce a plausible estimate along with a belief over possible poses.
The factored approach proposed in this paper scales to objects with larger numbers of links and joints, with combinations of articulation types. This is evaluated by estimating the pose of a Fetch robot, which has 12 nodes and 11 edges in its graphical model. The graphical model is constructed using the URDF model of the robot. This is shown in Figure 42(c), where the robot is observed using a depth camera. Figure 42(a & b) shows the original scene and its point cloud observation, with partial sensor data on the base, torso, and head of the robot. PMPNBP is able to estimate the pose of the robot by iteratively passing messages for 1000 iterations. Figures 42(d-f) and 42(g-i) show the belief samples of the robot links at iterations 1 and 1000, followed by the most likely estimate (MLE), from two different viewpoints for better visualization.
V-F. Benefits of Maintaining Belief Towards Planning Actions
We show how the belief propagation approach aids planning with a simple task illustration. Assume that the robot is performing a larger task of storing items in drawer 3; in a subtask, the goal is to open drawer 3. In this setting (see Figure 48), the robot perceives the current scene by estimating the pose of the cabinet, along with the covariance of the belief for each part. We set a maximum threshold of 0.25 cm on the standard deviation of the estimate's dimensions to decide whether the estimation is certain. In this case, the standard deviation of the belief falls within the threshold, and the robot is certain that drawer 1 is open and drawer 3 is closed; hence, the robot performs the action of opening drawer 3. For the same task with a different observation (see Figure 57), the robot again estimates the pose of the cabinet along with its covariance. In this case, however, the robot is not certain about the estimation, as the standard deviation exceeds the threshold. This prompts the robot to take an intermediate action (lowering its torso) that provides a new observation of the cabinet. With this new observation, the robot perceives with greater certainty that drawer 3 is closed and performs the open action. This illustrates how the belief can be used in planning actions. More rigorous experiments on the choice of thresholds for different objects and tasks will be detailed in future work.
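The certainty check described above can be sketched as a per-dimension threshold on the belief samples (the function name and array layout are ours; the 0.0025 m default mirrors the 0.25 cm threshold in the illustration):

```python
import numpy as np

def is_certain(belief_samples, threshold=0.0025):
    """Decide whether a pose estimate is certain: the standard deviation of
    every dimension of the (N, D) belief sample array must fall below the
    threshold (0.0025 m = 0.25 cm, as in the task illustration)."""
    return bool(np.all(belief_samples.std(axis=0) < threshold))
```

When this check fails, the planner can schedule an information-gain action (such as lowering the torso) instead of acting on an uncertain estimate.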
VI. Conclusion
We proposed the Pull Message Passing algorithm for Nonparametric Belief Propagation (PMPNBP), an efficient algorithm for estimating the poses of articulated objects. The problem was formulated as graph inference on a Markov Random Field (MRF). We showed that PMPNBP quantitatively outperforms the baseline Monte Carlo localization method, and provided qualitative results showing the pose estimation accuracy of PMPNBP under a variety of occlusions. We also showed the scalability of the algorithm to articulated objects with larger numbers of nodes and edges in their probabilistic graphical models. In addition, we illustrated how belief propagation can benefit robot manipulation tasks. Uncertainty is inevitable in robotic perception; our proposed PMPNBP algorithm accurately estimates the poses of articulated objects and maintains a belief over possible poses that can benefit a robot in performing a task.
References
 [1] J. Chua and P. F. Felzenszwalb. Scene grammars, factor graphs, and belief propagation. arXiv preprint arXiv:1606.01307, 2016.
 [2] K. Desingh, O. C. Jenkins, L. Reveret, and Z. Sui. Physically plausible scene estimation for manipulation in clutter. In IEEERAS 16th International Conference on Humanoid Robots (Humanoids), pages 1073–1080, 2016.
 [3] K. Desingh, A. Opipari, and O. C. Jenkins. Pull message passing for nonparametric belief propagation. CoRR, abs/1807.10487, 2018.

 [4] I. Gilitschenski, G. Kurz, S. J. Julier, and U. D. Hanebeck. A new probability distribution for simultaneous representation of uncertain position and orientation. In Information Fusion (FUSION), 2014 17th International Conference on, pages 1–7. IEEE, 2014.
 [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
 [6] S. R. Gouravajhala, J. Yim, K. Desingh, Y. Huang, O. C. Jenkins, and W. S. Lasecki. EURECA: Enhanced understanding of real environments via crowd assistance. 2018.
 [7] K. Hausman, S. Niekum, S. Osentoski, and G. S. Sukhatme. Active articulation model estimation through interactive perception. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 3305–3312, May 2015.
 [8] S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab. Model based training, detection and pose estimation of textureless 3d objects in heavily cluttered scenes. In Asian conference on computer vision, pages 548–562. Springer, 2012.
 [9] A. Ihler and D. McAllester. Particle belief propagation. In Artificial Intelligence and Statistics, pages 256–263, 2009.
 [10] M. Isard. PAMPAS: Real-valued graphical models for computer vision. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 613–620, 2003.
 [11] B. Kenwright. A beginner's guide to dual-quaternions: what they are, how they work, and how to use them for 3D character hierarchies. 2012.
 [12] R. M. Martin and O. Brock. Online interactive perception of articulated objects with multi-level recursive estimation based on task-specific priors. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, September 14-18, 2014, pages 2494–2501, 2014.
 [13] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pages 467–475, 1999.
 [14] V. Narayanan and M. Likhachev. Discriminatively-guided deliberative perception for pose estimation of multiple 3D object instances. In Robotics: Science and Systems, 2016.
 [15] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng. ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software, 2009.
 [16] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91–99. Curran Associates, Inc., 2015.
 [17] L. Sigal, S. Bhatia, S. Roth, M. J. Black, and M. Isard. Tracking looselimbed people. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 421–428, 2004.
 [18] J. Sturm. Approaches to Probabilistic Model Learning for Mobile Manipulation Robots. Springer Tracts in Advanced Robotics (STAR). Springer, 2013.
 [19] J. Sturm, C. Stachniss, and W. Burgard. A probabilistic framework for learning kinematic models of articulated objects. J. Artif. Intell. Res., 41:477–526, 2011.
 [20] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), page 605, 2003.
 [21] E. B. Sudderth, M. I. Mandel, W. T. Freeman, and A. S. Willsky. Visual hand tracking using nonparametric belief propagation. In IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’04), pages 189–189, 2004.
 [22] Z. Sui, L. Xiang, O. C. Jenkins, and K. Desingh. Goaldirected robot manipulation through axiomatic scene estimation. The International Journal of Robotics Research, 36(1):86–104, 2017.
 [23] Z. Sui, Z. Zhou, Z. Zeng, and O. C. Jenkins. Sum: Sequential scene understanding and manipulation. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on. IEEE, 2017.
 [24] M. Vondrak, L. Sigal, and O. C. Jenkins. Dynamical simulation priors for human motion tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):52–65, 2013.
 [25] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox. PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017.
 [26] Z. Zeng, Z. Zhou, Z. Sui, and O. C. Jenkins. Semantic robot programming for goaldirected manipulation in cluttered scenes. In IEEE/RSJ International Conference on Robotics and Automation (ICRA), 2018.