I. Introduction
From frosted windows to plastic containers to refractive fluids, translucent materials are prevalent in our daily lives and households, yet they remain an open challenge for autonomous mobile manipulators. Previous methods, such as the work of Foster et al. [7], have enabled robots to navigate autonomously in the presence of glass and transparent surfaces. When handling objects, however, robot perception systems must contend with a much wider diversity of objects and materials.
Translucent objects, in particular, break many of the assumptions about opacity and transparency made in robot sensing and perception. For example, existing six-DoF pose estimation methods [25, 19] often rely heavily on RGB-D sensors to reconstruct 3D point clouds. Such sensors are typically ill-equipped to handle the uncertainty caused by the reflection and refraction properties of translucent materials. As a result, translucent objects are often invisible to robots for the purposes of dexterous manipulation.

An important topic related to this problem is multi-layer stereo depth estimation, as studied by Borga and Knutsson [3]. Their findings establish that even transparent surfaces emit their own distinguishable patterns. When the pattern from a translucent surface interacts with patterns from Lambertian surfaces, the result is multi-orientation lines in the epipolar plane images of multi-view stereo. Light field imagery can record these patterns and equip a robot with the ability to identify surfaces with transparent properties.
Light field photography offers considerable potential for robot perception in scenes with translucency. For example, Oberlin and Tellex [21] found that a high-resolution camera on the wrist of a robot manipulator can capture light fields for a static scene. By moving the robot end-effector along a designed trajectory, this time-lapse approach to light field capture was demonstrated to enable manipulation of transparent and reflective objects. We now aim to extend similar ideas to a larger class of translucent materials, along with explicit pose estimation for more purposeful object manipulation.
In this paper, we propose Plenoptic Monte Carlo Localization (PMCL) as a method for six-DoF object pose estimation and manipulation under uncertainty due to translucency. Our PMCL method uses observations from light field imagery collected by a Lytro camera mounted on the wrist of a mobile manipulator. These observations are used to form a new plenoptic descriptor, the Depth Likelihood Volume (DLV), which we introduce to describe a scene with multiple layers of depth due to translucency. The DLV is then used as a likelihood function within a Monte Carlo localization method, yielding our PMCL algorithm for estimating object poses. We demonstrate the efficacy of PMCL with the DLV for manipulation in translucency through an implementation on a Michigan Progress Fetch robot. We present results of object localization and grasping in two situations: transparent objects in transparent media (Figure 1) and opaque objects diffusely occluded by translucent media.
II. Related Work
II-A. Perception for Manipulation
The problem of perception for manipulation remains challenging for robots working in human environments and the natural world. The concepts presented for PMCL build on a substantial body of work in this area, which we summarize briefly. Ciocarlie et al. [4] proposed a robust pick-and-place pipeline for the Willow Garage PR2 robot. This pipeline segments and clusters points comprising isolated opaque tabletop objects observed from an RGB-D sensor. For more cluttered environments, Collet et al. [5] proposed the MOPED perception framework for localizing objects by discriminatively clustering multi-view features in color images. Narayanan et al. [18] take a deliberative approach to infer the pose of objects in clutter from RGB-D observations, performing A* search over possible scene states using a discriminative algorithm for 3D pose estimation. Similar in its aims, Sui et al. [24, 25] have proposed generative models for scene inference and estimation. These generative models combine object detection from neural networks with Monte Carlo localization in the scenario of object sorting on highly cluttered tabletops.
For transparent object perception, McHenry et al. [16, 17] have used reflective features of transparent objects for segmentation in a single RGB image. Lei et al. [12] segment transparent objects by searching for failed detections from laser rangefinding (LIDAR) combined with RGB image features. Methods by Phillips et al. [22] describe detection and estimation of rotationally symmetric transparent objects using edge features. Lysenkov et al. [14] perform six-DoF pose estimation of transparent objects based on a silhouette model corresponding to invalid RGB-D depth measurements. Partial opacity from translucent materials can be problematic for such methods, where clear edge features become blurred due to diffuse reflection.
II-B. Light Field Photography
The contributions of this paper are founded upon the models described by Levoy and Hanrahan [13] for understanding light fields and plenoptic functions. Their seminal paper covers the foundations of capturing light fields from digital imagery and using them to synthesize new viewpoints from arbitrary camera positions. Building on this work, microlens-based light field photography [8, 20] has seen significant advances in depth estimation, image refocusing, transparent object recognition, and surface reconstruction.
In computer vision, Maeno et al. [15] proposed “light field distortion features” to capture distortions and recognize transparent objects. Sulc et al. [26] separate diffuse color components from 4D light field imagery to suppress reflections from non-Lambertian surfaces. Wang et al. [28] introduced a light field occlusion model for accurate recovery of depth information around occlusion edges. Jeon et al. [10] overcome the narrow-baseline problem of light field cameras with a subpixel shift method, generating accurate depth images even when the displacement between two adjacent subaperture images is less than one pixel. Our presented methods for PMCL build directly upon ideas in recent work by Goldluecke et al. [11, 29] for 3D reconstruction in multi-translucent environments. This work proposes generating multi-orientation features observed in epipolar plane images derived from light field imagery, with impressive results for 3D reconstruction under high translucency.

In robotics, Oberlin and Tellex [21] introduced a time-lapse approach to capturing light fields for pick-and-place localization with a Rethink Baxter robot. This work demonstrated compelling results for localizing grasp and placement points in scenes with transparency and reflection, which have been problematic for current sensors. Our PMCL method shares similar aims with more general models of translucency in mind. Further, six-DoF object pose estimation by PMCL allows for greater flexibility in planning and executing manipulation actions. We posit that PMCL is readily capable of object tracking from plenoptic observations, although such experiments are left for future work.
III. Problem Formulation
Given an input light field image observation $Z$, the purpose of six-DoF pose estimation is to infer the rigid transformation $q$ from an object's local coordinate frame to the camera's coordinate frame. We assume as given the geometry of the target object. Formally, we aim to find the maximum likelihood estimate $\hat{q}$ of the object's pose given $Z$ and a map representation $M$ in 3D world coordinates:

$\hat{q} = \arg\max_{q} P(q \mid Z, M)$   (1)
The map $M$ is often computed as a metric representation, such as a 3D reconstruction or point cloud. In the case of common RGB-D cameras, the map representation is a one-to-one mapping from locations in 3D space to the depth value at each pixel index of a depth image. Such a one-to-one mapping assumes opacity, in that the sensed depth at a particular pixel is due to light from only one object.
We propose the Depth Likelihood Volume (DLV) as an alternative one-to-many mapping that considers the likelihood of a pixel over multiple levels of depth. As is the case for translucent objects, the DLV representation is advantageous in environments where multiple objects at more than one depth are responsible for the light sensed at a pixel. The DLV representation expresses the mapping:

$D : (x, y, d) \mapsto \mathcal{L}(x, y, d)$   (2)

where $(x, y, d)$ represents a 3D point along a light ray taken as input. The output $\mathcal{L}(x, y, d)$ is the likelihood that light along the ray emitted from depth $d$ is received by pixel $(x, y)$ in the image plane. For our light field cameras, we assume the image plane is determined by the center view image of the subaperture images extracted from light field observation $Z$. The depth $d$ ranges over discretized possible depths along the light ray. An overview of our approach to this problem is shown in Figure 2.
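To make this one-to-many mapping concrete, the following minimal sketch stores a DLV as a dense array over center-view pixels and discretized depths. The class name, parameter names, and discretization granularity are illustrative choices of this sketch, not those of the implementation described in Section VI, which runs in MATLAB and CUDA:

```python
import numpy as np

class DepthLikelihoodVolume:
    """One-to-many map from center-view pixels to likelihoods over depth.

    Entry (y, x, j) of `volume` stores the likelihood that light received
    at pixel (x, y) was emitted from discretized depth depths[j] (Eq. 2).
    """

    def __init__(self, height, width, d_min, d_max, n_depths):
        self.depths = np.linspace(d_min, d_max, n_depths)  # discretized depths along each ray
        self.volume = np.zeros((height, width, n_depths))  # likelihood per (pixel, depth)

    def likelihood(self, x, y, j):
        # Unlike an RGB-D depth map, which commits to a single depth per
        # pixel, a pixel here maps to a full profile of depth likelihoods.
        return self.volume[y, x, j]

# An opaque scene is the degenerate case: all likelihood mass at one depth.
dlv = DepthLikelihoodVolume(height=360, width=540, d_min=0.3, d_max=1.5, n_depths=50)
```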
IV. Depth Likelihood Volumes
Before presenting our PMCL method for pose estimation, we first define the Depth Likelihood Volume. We describe the properties of the DLV for distinguishing multiple depths at a given point in an image due to translucency. The construction of the DLV and its use for pose localization are described in the following sections.
IV-A. Formulation
Given a known 3D workspace and its corresponding center view subaperture image plane, a Depth Likelihood Volume is defined as in Equation 2. The DLV makes the following basic assumptions and uses the following notation for the scene:

1) Each surface point $p$ emits light rays whose intensity in each color channel $c$ is a Gaussian with mean $\mu_{p,c}$ and variance $\sigma_{p,c}^2$, i.e., $r_{p,c} \sim \mathcal{N}(\mu_{p,c}, \sigma_{p,c}^2)$, as similarly assumed by Oberlin and Tellex [21]. Under constant lighting conditions, we assume every point in the scene shares the same variance for the same color channel, which means $\sigma_{p,c} = \sigma_c$ for all points $p$ in the scene.
2) An observed bundle of rays $R_x$ located at pixel $x$ of the image plane is a linear combination of all light rays $r_i$ emitted by surface points along the corresponding ray in 3D, $R_x = \sum_i \alpha_i r_i$, with normalization scalars $\alpha_i$. Each $\alpha_i$ indicates the fraction of the observed rays emitted by surface $i$, which measures the transparency of that surface, and we have $\sum_i \alpha_i = 1$ (see the sketch following this list).
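As a sanity check of these assumptions, the following sketch simulates an observed ray bundle as a convex combination of Gaussian emitters. All intensities, variances, and mixing weights are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def emitted_ray(mu, sigma, n=1000):
    # Assumption 1: a surface point emits per-channel intensities that are
    # Gaussian with point-specific mean mu and shared per-channel variance.
    return rng.normal(mu, sigma, size=n)

# Assumption 2: an observed ray bundle is a convex combination of the rays
# emitted by all surfaces it passes through (the alphas sum to one).
alpha_glass, alpha_ball = 0.2, 0.8          # glass is mostly transparent
r_glass = emitted_ray(mu=0.35, sigma=0.02)  # red-channel intensity, glass point
r_ball  = emitted_ray(mu=0.90, sigma=0.02)  # red-channel intensity, ball point
R = alpha_glass * r_glass + alpha_ball * r_ball

print(R.mean())  # ~ alpha_glass * 0.35 + alpha_ball * 0.90 = 0.79
```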
Consider the example in Figure 3 (left) of two light rays imaged by the central view subaperture image. The index $(0,0)$ indicates the center view, and $x_1$ and $x_2$ are pixel coordinates in the center view. The two rays travel through 3D space and hit the center view plane at $x_1$ and $x_2$, respectively. Along $R_{x_1}$, there are two surfaces emitting light sensed by the central view: one is a ping-pong ball and the other is a glass jar. In contrast, along $R_{x_2}$, only light emitted by the table is sensed in the central view. Then $R_{x_1}$ and $R_{x_2}$ can be expressed respectively as:
$R_{x_1} = \alpha_g r_g + \alpha_p r_p, \qquad R_{x_2} = \alpha_t r_t$   (3)

where $r_g$, $r_p$, and $r_t$ represent the light rays emitted by the glass, ping-pong, and table surfaces, respectively. According to our second assumption, we also have $\alpha_g + \alpha_p = 1$ and $\alpha_t = 1$.
Then the depth likelihood is defined as:
$\mathcal{L}(x, y, d) = \sum_{s \in S} \exp\!\big(-\mathcal{E}\big(R_{(x,y)},\, R_{\phi(x,y,s,d)}\big)\big)$   (4)
where $\phi(x, y, s, d)$ is the transformation function that finds the light ray corresponding to $(x, y)$ in the stereo pair image with index $s$, under the hypothesis that the light originates at depth $d$. For a light field camera with known baseline $b$ and focal length $f$, $\phi$ can be expressed through the disparity $u = bf/d$, which is a function of $b$, $f$, and $d$. $\mathcal{E}$ is the squared similarity distance between two light rays over color space, defined as the distance between two Gaussian mixture models according to assumptions (1) and (2); it can be expressed as in Equation 9.

IV-B. Validity
We claim that for a given pixel $(x, y)$ in the DLV, the following lemma holds:

Lemma 1: For two surfaces with true depths $d_i^*$ and $d_j^*$ along the ray at $(x, y)$, $\mathcal{L}(x, y, d_i^*) > \mathcal{L}(x, y, d_j^*)$ whenever $\alpha_i > \alpha_j$,

where $d_i^*$ indicates the true depth of surface $i$ viewed from the center view with transparency indicator $\alpha_i$. This means that the more transparent a surface, the lower the likelihood assigned to that surface's depth in the DLV.
To show Lemma 1, we consider the scene shown in Figure 3 (right). In the center view (where the DLV will be built), the observed ray $R_{(0,0)}(x, y)$ (notation simplified to $R$) contains rays from a glass surface point and a ping-pong surface point, which lie at depths $d_g$ and $d_p$, respectively. We then evaluate three possible depths in this scene: $d_p$, $d_g$, and an invalid depth $d_v$. The corresponding transparency scalars of the two surface points are $\alpha_g$ and $\alpha_p$. Notice that $\alpha_g < \alpha_p$, since glass is a transparent surface while the ping-pong ball is not. Using the function $\phi$, we can find three rays ($R_1$, $R_2$, $R_3$) in a stereo image corresponding to the three depths $d_p$, $d_g$, and $d_v$, respectively. Then, we can write ray $R$ as:
$R^c = \alpha_g r_g^c + \alpha_p r_p^c$   (5)
where $c \in \{r, g, b\}$ indexes the three color channels. Without loss of generality, we investigate the red channel and write $R_1$, $R_2$, and $R_3$ in the same fashion:
$R_1^r = \alpha_g r_{g_1}^r + \alpha_p r_p^r$   (6)

$R_2^r = \alpha_g r_g^r + \alpha_p r_{p_2}^r$   (7)

$R_3^r = \alpha_g r_{g_3}^r + \alpha_p r_{p_3}^r$   (8)

Because $d_p$ is the true depth of the ping-pong point, $R_1$ shares the same ping-pong component as $R$ but sees a nearby glass point $g_1$; symmetrically, $R_2$ shares the glass component, while $R_3$ matches neither surface.
Here, we assume that transparent surfaces emit an equal amount of light toward any two stereo views, because the disparity range between adjacent subaperture views of the Lytro camera is smaller than one pixel [30] (a very small view-angle difference in our experiment setting). The squared similarity ($L_2$) distance between $R$ and any other ray $R_i$ can then be expressed as:

$\mathcal{E}(R, R_i) = \big(\alpha_g \Delta\mu_{g_i} + \alpha_p \Delta\mu_{p_i}\big)^2 + C$   (9)

where $\Delta\mu_{g_i} = \mu_g - \mu_{g_i}$ and $\Delta\mu_{p_i} = \mu_p - \mu_{p_i}$ are the differences of the red-channel means.
Given this general expression of the distance, we can now provide explicit expressions for the example shown in Figure 3 (right):
$\mathcal{E}(R, R_1) = \alpha_g^2 \Delta\mu_{g_1}^2 + C$   (10)

$\mathcal{E}(R, R_2) = \alpha_p^2 \Delta\mu_{p_2}^2 + C$   (11)

$\mathcal{E}(R, R_3) = \big(\alpha_g \Delta\mu_{g_3} + \alpha_p \Delta\mu_{p_3}\big)^2 + C$   (12)
where the constant term $C$ collects the variance contributions, given the following relation:

$C = 2\sigma_r^2\big(\alpha_g^2 + \alpha_p^2\big)$   (13)
For the same object, under such a small view-angle difference, we assume the color difference between two nearby surface points has the same scale. This assumption implies, for some small value $\epsilon$, that $|\Delta\mu_{g_1}| \approx |\Delta\mu_{p_2}| \approx \epsilon$.
Disregarding the constant scale $C$, Equations 10, 11, and 12 can be simplified as Equations 14, 15, and 16:
$\mathcal{E}(R, R_1) \approx \alpha_g^2 \epsilon^2$   (14)

$\mathcal{E}(R, R_2) \approx \alpha_p^2 \epsilon^2$   (15)

$\mathcal{E}(R, R_3) \approx \big(\alpha_g \Delta\mu_{g_3} + \alpha_p \Delta\mu_{p_3}\big)^2$   (16)
Considering an individual stereo pair and applying Equation 4, we can now express the DLV values for the possible depths of the ping-pong ball surface, $d_p$, the glass surface, $d_g$, and the invalid depth, $d_v$, as:
$\mathcal{L}(x, y, d_p) = \exp\!\big(-\alpha_g^2 \epsilon^2\big)$   (17)

$\mathcal{L}(x, y, d_g) = \exp\!\big(-\alpha_p^2 \epsilon^2\big)$   (18)

$\mathcal{L}(x, y, d_v) = \exp\!\big(-(\alpha_g \Delta\mu_{g_3} + \alpha_p \Delta\mu_{p_3})^2\big)$   (19)
Since the ping-pong surface is opaque while the glass surface is transparent, the ping-pong surface must return more light than the glass surface, i.e., $\alpha_p > \alpha_g$, which implies:

$\mathcal{L}(x, y, d_p) > \mathcal{L}(x, y, d_g) > \mathcal{L}(x, y, d_v)$   (20)
Therefore, Lemma 1 holds.
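The ordering claimed by Lemma 1 can also be checked numerically. The sketch below instantiates Equations 9 and 17-19 with illustrative transparency weights and color differences; all numeric values are assumptions of the example, not measured quantities:

```python
import numpy as np

def sq_distance(alphas, dmus, sigma=0.02):
    # Equation 9: squared L2 distance between two ray mixtures, a
    # mean-difference term plus the constant variance term C (Eq. 13).
    alphas, dmus = np.asarray(alphas), np.asarray(dmus)
    return float(np.dot(alphas, dmus)) ** 2 + 2 * sigma**2 * np.sum(alphas**2)

alpha_g, alpha_p = 0.2, 0.8   # glass transparent, ping-pong ball opaque
eps = 0.01                    # same-surface color shift under a tiny view change
dmu_g3, dmu_p3 = 0.4, 0.5     # cross-surface color differences at the invalid depth

# Equations 17-19 (here up to the constant C, which preserves the ordering):
L_dp = np.exp(-sq_distance([alpha_g, alpha_p], [eps, 0.0]))        # true ball depth
L_dg = np.exp(-sq_distance([alpha_g, alpha_p], [0.0, eps]))        # true glass depth
L_dv = np.exp(-sq_distance([alpha_g, alpha_p], [dmu_g3, dmu_p3]))  # invalid depth

assert L_dp > L_dg > L_dv     # the ordering of Equation 20; Lemma 1 holds
```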
IV-C. Computation
Our implementation uses the distance between pixel colors to approximate the similarity of rays in stereo pairs, as photosensors cannot capture the full distribution over wavelengths of light. Given this limitation, we implemented a cost-volume stereo comparison method based on subpixel shifts [9, 10]. Two cost volumes were implemented: the sum of distances in color space ($C_A$) and the sum of gradient differences ($C_G$). The combined cost volume is then defined as:
$C(x, d_l) = \lambda\, C_A(x, d_l) + (1 - \lambda)\, C_G(x, d_l)$   (21)
where $x$ describes the image coordinate of ray $R_x$, $d_l$ indexes the depth labels, and $\lambda$ is a scalar weighting the two parts. The terms $C_A$ and $C_G$ are defined as:

$C_A(x, d_l) = \sum_{s \in S} \sum_{k \in W_x} \min\big( \| I(0, k) - I(s, k + \Delta u(s, d_l)) \|,\, \tau_1 \big)$

$C_G(x, d_l) = \sum_{s \in S} \sum_{k \in W_x} \beta(s)\, \min\big( | \nabla I(0, k) - \nabla I(s, k + \Delta u(s, d_l)) |,\, \tau_2 \big)$   (22)

where $I$ is the image, $\nabla I$ is the image gradient in the $x$ or $y$ direction, $W_x$ is a rectangular region centered at $x$, $\tau_1$ and $\tau_2$ are truncation values of a robust function, $\Delta u(s, d_l)$ is the subpixel displacement, and $\beta(s)$ weights each subaperture's gradient contribution to the center view image. The index $s$ denotes a pixel's subaperture image coordinate, while $0$ denotes the center view.
For a certain depth label $d_l$, the depth likelihood can then be expressed, based on Equation 4, as:

$\mathcal{L}(x, y, d_l) = \exp\!\big(-C(x, d_l)\big)$   (23)
Optionally, to further distinguish possible depths, the DLV can be truncated by keeping a fixed number of local maxima, along with a fixed number of their neighboring depth labels, and setting all other depth likelihoods to zero.
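The sketch below outlines one way to realize Equations 21-23 for grayscale images, including the depth-to-disparity transform $\phi$ and the truncation step. The parameter names and values (lam, tau1, tau2, win) are placeholders of this sketch; the implementation evaluated in Section VI instead follows the MATLAB cost-volume pipeline of [9, 10]:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift, uniform_filter, maximum_filter

def build_dlv(center, subviews, offsets, depths, baseline, focal,
              lam=0.5, tau1=0.1, tau2=0.1, win=5):
    """Sketch of DLV construction (Equations 21-23) for grayscale images.
    `offsets` holds each subaperture's (du, dv) grid position relative to
    the center view; all parameter values here are placeholders."""
    grad_c = np.gradient(center, axis=1)
    volume = np.zeros(center.shape + (len(depths),))
    for j, d in enumerate(depths):
        disparity = baseline * focal / d            # phi: depth -> disparity
        cost = np.zeros(center.shape)
        for img, (du, dv) in zip(subviews, offsets):
            # Sub-pixel shift of the subaperture view toward the center view.
            shifted = subpixel_shift(img, (dv * disparity, du * disparity), order=1)
            c_a = np.minimum(np.abs(center - shifted), tau1)  # truncated color cost
            c_g = np.minimum(np.abs(grad_c - np.gradient(shifted, axis=1)), tau2)
            cost += uniform_filter(lam * c_a + (1.0 - lam) * c_g, size=win)
        volume[:, :, j] = np.exp(-cost / len(subviews))  # cost -> likelihood (Eq. 23)
    return volume

def truncate_dlv(volume):
    # Optional truncation: keep local maxima along the depth axis, zero the rest.
    peaks = volume == maximum_filter(volume, size=(1, 1, 3))
    return np.where(peaks, volume, 0.0)
```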
V. Plenoptic Monte Carlo Object Localization
Building on the DLV, we now describe our method of object pose estimation, Plenoptic Monte Carlo Localization. PMCL employs particle filtering to estimate the poses of target objects from the computed DLV. PMCL takes direct inspiration from the work of Dellaert et al. [6] for approximate inference in the form of a sequential Bayesian filter,

$p(q_t \mid Z_{1:t}) \propto p(Z_t \mid q_t) \int p(q_t \mid q_{t-1})\, p(q_{t-1} \mid Z_{1:t-1})\, dq_{t-1}$   (24)

where a collection of weighted particles $\{(q_t^{(i)}, w_t^{(i)})\}_{i=1}^{N}$ is used to represent the pose belief $p(q_t \mid Z_{1:t})$.
Each particle $q^{(i)}$ is a hypothesized six-DoF pose of the object and is associated with a weight $w^{(i)}$ indicating how likely the sample is to be close to the actual pose. The initial samples are generated by uniformly sampling the six-DoF pose space with identical weights. The weight of each sample is then calculated using the observation likelihood function described below. With the computed weights, importance sampling with resampling is performed to concentrate the hypothesized particles in higher-weighted regions. For the state transition, each particle is perturbed by a zero-mean Gaussian distribution over the six-DoF pose space in the action model. This inference extends naturally to the case of tracking with an explicit action model and observations over time. In our implementation, the process repeats iteratively until the average weight exceeds a chosen threshold, at which point an estimate is taken.
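The loop can be summarized as follows. This is a minimal sketch: render_depth and score stand in for the z-buffer rendering and DLV scoring described next, poses are simplified to 6-vectors, and the noise scale and stopping threshold are illustrative defaults rather than tuned values:

```python
import numpy as np

rng = np.random.default_rng(1)

def pmcl(render_depth, score, bounds, n_particles=100, n_iters=500, w_stop=0.9):
    """Minimal PMCL sketch. Poses are 6-vectors (translation + Euler angles);
    render_depth(q) -> depth image, score(depth_img) -> Equation 26 weight."""
    lo, hi = bounds                                  # 6-DoF region-of-interest bounds
    particles = rng.uniform(lo, hi, size=(n_particles, 6))
    sigma = 0.02 * (hi - lo)                         # zero-mean perturbation scale
    for _ in range(n_iters):
        # Weight each pose hypothesis by its rendered-depth DLV score.
        weights = np.array([score(render_depth(q)) for q in particles])
        if weights.mean() > w_stop:                  # average-weight stopping rule
            break
        p = weights / max(weights.sum(), 1e-12)
        # Importance resampling concentrates particles in high-weight regions;
        # a zero-mean Gaussian perturbation serves as the state transition.
        idx = rng.choice(n_particles, size=n_particles, p=p)
        particles = particles[idx] + rng.normal(0.0, sigma, size=(n_particles, 6))
    return particles[int(np.argmax(weights))]
```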
Our likelihood function measures the score of a sample's rendered depth image against the scene DLV. The z-buffer of a 3D graphics engine is used to render each sample into a depth image for comparison with the observation. Each rendered depth value $\hat{d}(x, y)$ is mapped back into the DLV to find the corresponding depth likelihood interval $[d_j, d_{j+1}]$, with $d_j \le \hat{d}(x, y) < d_{j+1}$. Here, we use an interval because the rendered depth value at a given pixel may not exactly match one of the discretized depth values. After finding the corresponding interval, the depth likelihood is calculated using linear interpolation:
$\mathcal{L}\big(x, y, \hat{d}\big) = \mathcal{L}(x, y, d_j) + \dfrac{\hat{d} - d_j}{d_{j+1} - d_j}\big(\mathcal{L}(x, y, d_{j+1}) - \mathcal{L}(x, y, d_j)\big)$   (25)
For the rendered image, with every rendered pixel having a nonzero (valid) depth value $\hat{d}(x, y)$, the score for this depth image can be expressed as:

$w = \dfrac{1}{N_v} \sum_{(x, y):\, \hat{d}(x, y) > 0} \mathcal{L}\big(x, y, \hat{d}(x, y)\big)$   (26)

where $N_v$ is the number of valid depths in the rendered image.
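Under the DLV array layout sketched in Section III, Equations 25 and 26 can be realized as follows (an illustrative sketch, not the CUDA kernel used in our implementation):

```python
import numpy as np

def score_rendered_depth(depth_img, dlv, depths):
    """Score a particle's rendered depth image against the DLV (Eqs. 25-26).
    `dlv` is an (H, W, D) array; `depths` its sorted discretized depths."""
    ys, xs = np.nonzero(depth_img > 0)           # valid (nonzero) rendered depths
    if len(xs) == 0:
        return 0.0
    d = depth_img[ys, xs]
    # Locate the enclosing interval [d_j, d_{j+1}] for each rendered depth.
    j = np.clip(np.searchsorted(depths, d) - 1, 0, len(depths) - 2)
    t = (d - depths[j]) / (depths[j + 1] - depths[j])   # interpolation weight
    lo = dlv[ys, xs, j]
    hi = dlv[ys, xs, j + 1]
    return float(np.mean(lo + t * (hi - lo)))    # average likelihood, Eq. 26
```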
VI. Results
We now present results for our implementation of PMCL for object localization and grasping in environments with different forms of translucency. We implemented PMCL using observations from a Lytro light field camera mounted on the wrist of a Michigan Progress Fetch robot (Figure 4). These results consider pick-and-place grasping in two types of scenes: 1) a single transparent object among opaque but possibly reflective background objects (Figure 5(a), (b)), and 2) opaque objects behind translucent, non-transparent surfaces (Figure 5(c), (d)).
Our implementation uses the Lytro's on-chip WiFi to trigger the shutter remotely and to receive raw image data. We are currently unable to capture video with this triggering system. Calibration and subaperture images are generated using the methods described by Bok et al. [2]. This toolbox produces a grid of subaperture images, where the image at the central index is deemed the center view image; all subaperture images share the same resolution. During DLV construction, we disregard the edge subaperture images due to strong color distortion and pixel-shifting artifacts.
Our PMCL algorithm is implemented in CUDA and OpenGL. The implementation ran on Ubuntu 14.04 with a Titan X graphics card and CUDA 8.0. The light field camera calibration, subaperture image extraction, and DLV construction ran in MATLAB. The DLV was built with fixed values for the cost-weighting scalar, the truncation thresholds, the matching window size, and the depth discretization. The Monte Carlo localization process ran on the GPU with 100 particle samples over 500 iterations. Given an assumed object geometry, our implementation renders all particle hypotheses on the GPU; these renderings are then accessed by CUDA kernels to compute the corresponding weights. Our implementation additionally assumes a given 3D region of interest in the workspace that bounds the object pose.
For robot control, we use the custom manipulation pipeline developed by the Laboratory for Progress. This pipeline uses our implementation of handle grasp localization as proposed by ten Pas and Platt [27]. Grasp localization returns an end-effector pose for grasping, computed from an estimated object pose and a given geometric model. Grasping is then executed for this end-effector pose using TRAC-IK [1] and MoveIt! [23] for inverse kinematics and motion planning.
To evaluate the pose estimation accuracy of our algorithm, we used two methods to collect ground-truth object poses. For objects behind a window covered by stained-glass film, we captured point clouds by removing the glass and using the Asus Xtion Pro RGB-D sensor on the robot. Object models were then fit manually to determine ground-truth pose values. For transparent objects, the surfaces were covered with opaque tape to generate point clouds for ground-truth annotation.
VI-A. Pose Estimation Results
We evaluate our proposed algorithm on six scenes, running ten trials for each. Two error metrics are used to evaluate pose estimation accuracy (both are made concrete in the sketch following this list):

1) Translation error: defined as the Euclidean distance between the estimated object position and the ground-truth position.

2) Rotation error: defined as the dot product between the ground-truth pose z-axis and the estimated pose z-axis. We assume the objects are rotationally symmetric about the z-axis.
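Concretely, assuming poses are given as rotation matrices and translation vectors, the two metrics can be computed as in the following sketch:

```python
import numpy as np

def translation_error(t_est, t_gt):
    # Euclidean distance between estimated and ground-truth positions.
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))

def rotation_error(R_est, R_gt):
    # Dot product between the two z-axes (third columns of the rotation
    # matrices); 1 means aligned. Meaningful under z-axis rotational symmetry.
    return float(np.dot(R_est[:, 2], R_gt[:, 2]))
```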
We consider an object correctly localized when both the translation and rotation errors fall within set thresholds. Figure 6 reports our estimation accuracy for the two scene types. For the single transparent object, all rotation errors in dot-product space lie in [0, 1], which causes the yellow and purple curves to overlap in both plots. For objects behind stained-glass panels, the estimated poses occasionally flip by 180 degrees, a negligible form of error given the symmetry assumption.
VI-B. Manipulation Results
We successfully demonstrate our method in two challenging manipulation scenarios (video available at https://youtu.be/Fu_SVRXsdU8):

1) picking and placing a glass cup from a sink with running water, and

2) picking and placing a bleach bottle from an aquatic tank covered with privacy window film.

The scenarios are shown in Figures 1 and 7. We attach the Lytro camera to the wrist of the robot and add an extra link for it in the robot model. For both scenarios, the robot moves its arm to the appropriate area to capture light field images, from which the DLV is calculated. PMCL then performs inference to estimate the pose of the object, and the final estimate is used to transform pre-calculated grasp poses into the robot base link frame. With accurate pose estimates, the robot is able to pick up objects from both the aquatic tank and the sink and place them at the desired locations.
VII. Conclusion
In this paper, we present Plenoptic Monte Carlo Localization for localizing object poses in the presence of translucency from plenoptic (light field) observations. We propose a new depth descriptor, the Depth Likelihood Volume, which addresses the uncertainty introduced by translucency by maintaining likelihoods over possible depths for each pixel. We show that, by using the Depth Likelihood Volume within a Monte Carlo object localization algorithm, our method is able to accurately localize objects with translucent materials as well as objects occluded by layered translucency, and to perform manipulation with these estimates.
References
 [1] P. Beeson and B. Ames. TRAC-IK: An open-source library for improved solving of generic inverse kinematics. In IEEE-RAS International Conference on Humanoid Robots, 2015.
 [2] Y. Bok, H.-G. Jeon, and I. S. Kweon. Geometric calibration of micro-lens-based light field cameras using line features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(2):287–300, 2017.
 [3] M. Borga and H. Knutsson. Estimating multiple depths in semi-transparent stereo images, 1999.
 [4] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Şucan. Towards reliable grasping and manipulation in household environments. In Experimental Robotics, pages 241–252. Springer Berlin Heidelberg, 2014.
 [5] A. Collet, M. Martinez, and S. S. Srinivasa. The MOPED framework: Object recognition and pose estimation for manipulation. The International Journal of Robotics Research, 30(10):1284–1306, Sept. 2011.
 [6] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo localization for mobile robots. In IEEE International Conference on Robotics and Automation (ICRA), May 1999.
 [7] P. Foster, Z. Sun, J. J. Park, and B. Kuipers. VisAGGE: Visible angle grid for glass environments. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 2213–2220. IEEE, 2013.
 [8] T. Georgiev, Z. Yu, A. Lumsdaine, and S. Goma. Lytro camera technology: theory, algorithms, performance analysis. In Multimedia Content and Mobile Devices, volume 8667, page 86671J. International Society for Optics and Photonics, 2013.
 [9] A. Hosni, C. Rhemann, M. Bleyer, C. Rother, and M. Gelautz. Fast cost-volume filtering for visual correspondence and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2):504–511, 2013.

 [10] H.-G. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y.-W. Tai, and I. S. Kweon. Accurate depth map estimation from a lenslet light field camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1547–1555, 2015.
 [11] O. Johannsen, A. Sulc, N. Marniok, and B. Goldluecke. Layered scene reconstruction from multiple light field camera views. In S.-H. Lai, V. Lepetit, K. Nishino, and Y. Sato, editors, Computer Vision – ACCV 2016, pages 3–18, Cham, 2017. Springer International Publishing.
 [12] Z. Lei, K. Ohno, M. Tsubota, E. Takeuchi, and S. Tadokoro. Transparent object detection using color image and laser reflectance image for mobile manipulator. In Robotics and Biomimetics (ROBIO), 2011 IEEE International Conference on, pages 1–7. IEEE, 2011.
 [13] M. Levoy and P. Hanrahan. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 31–42. ACM, 1996.
 [14] I. Lysenkov. Recognition and pose estimation of rigid transparent objects with a Kinect sensor. Robotics, 273, 2013.
 [15] K. Maeno, H. Nagahara, A. Shimada, and R.-i. Taniguchi. Light field distortion feature for transparent object recognition. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 2786–2793. IEEE, 2013.
 [16] K. McHenry and J. Ponce. A geodesic active contour framework for finding glass. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 1, pages 1038–1044. IEEE, 2006.
 [17] K. McHenry, J. Ponce, and D. Forsyth. Finding glass. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2, pages 973–979. IEEE, 2005.
 [18] V. Narayanan and M. Likhachev. Discriminatively-guided deliberative perception for pose estimation of multiple 3D object instances. In Proceedings of Robotics: Science and Systems, Ann Arbor, Michigan, June 2016.
 [19] V. Narayanan and M. Likhachev. PERCH: Perception via search for multi-object recognition and localization. In Robotics and Automation (ICRA), 2016 IEEE International Conference on, pages 5052–5059. IEEE, 2016.
 [20] R. Ng. Digital light field photography. PhD thesis, Stanford University, 2006.
 [21] J. Oberlin and S. Tellex. Time-lapse light field photography for perceiving transparent and reflective objects, 2017.
 [22] C. J. Phillips, M. Lecce, and K. Daniilidis. Seeing glassware: from edge detection to pose estimation and shape recovery. In Proceedings of Robotics: Science and Systems, 2016.
 [23] I. A. Sucan and S. Chitta. MoveIt! Online. Available: http://moveit.ros.org, 2013.
 [24] Z. Sui, L. Xiang, O. C. Jenkins, and K. Desingh. Goaldirected robot manipulation through axiomatic scene estimation. The International Journal of Robotics Research, 36(1):86–104, 2017.

 [25] Z. Sui, Z. Zhou, Z. Zeng, and O. C. Jenkins. SUM: Sequential scene understanding and manipulation. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3281–3288, Sept. 2017.
 [26] A. Sulc, A. Alperovich, N. Marniok, and B. Goldluecke. Reflection separation in light fields based on sparse coding and specular flow. In Proceedings of the Conference on Vision, Modeling and Visualization, pages 137–144. Eurographics Association, 2016.
 [27] A. ten Pas and R. Platt. Using geometry to detect grasp poses in 3D point clouds. In International Symposium on Robotics Research, 2015.
 [28] T.-C. Wang, A. A. Efros, and R. Ramamoorthi. Occlusion-aware depth estimation using light-field cameras. In Computer Vision (ICCV), 2015 IEEE International Conference on, pages 3487–3495. IEEE, 2015.
 [29] S. Wanner and B. Goldluecke. Reconstructing reflective and transparent surfaces from epipolar plane images. In German Conference on Pattern Recognition, pages 1–10. Springer, 2013.
 [30] Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu. Line assisted light field triangulation and stereo matching. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 2792–2799. IEEE, 2013.