For robots to effectively operate and interact with objects, they need to understand not only the metric geometry of their surroundings but also its semantic aspects. When asked to organize a room or search for an object, robots must be able to reason about object locations and plan goal-directed mobile manipulation accordingly. We aim to enable robots to semantically map the world at the object level, where the representation of the world is a belief over object classes and poses. With recent advances in object detection via neural networks, we have stronger building blocks for semantic mapping. Yet such object detections are often noisy in the wild, due to biases and insufficient diversity in training datasets. In our work, we aim to be robust to false detections from such networks: we model the object class as part of our hidden state for generative inference, rather than making hard decisions on the class labels given by the detector.
Given streaming RGB-D observations, our goal is to infer object classes and poses that explain observations, while accounting for contextual relations between objects and temporal consistency of object poses. Instead of assuming that every object is independent in the environment, we aim to explicitly model the object-object contextual relations during semantic mapping. More specifically, objects from the same category (e.g., food category) are expected to co-occur more often than objects that belong to different categories. Additionally, physical plausibility should be enforced to prevent objects from intersecting with each other, as well as floating in the air.
Temporal consistency of object poses also plays an important role in semantic mapping. Objects could stay where they were observed in the past, or gradually change their semantic locations over time. Under cases of occlusion, modeling temporal consistency can potentially help the localization of partially observed objects. Through temporal consistency modeling, the robot could gain a notion of object permanence, i.e., believing that objects continue to exist even when they are not being directly observed.
Considering both contextual and temporal factors in semantic mapping, we propose the Contextual Temporal Mapping (CT-Map) method to simultaneously infer object classes and poses. Examples of semantic maps generated by CT-Map are shown in Figure 1. To avoid deterministically representing the world as a collection of recognized objects with poses, we maintain a belief over object classes and poses across observations.
For generative inference, CT-Map probabilistically formalizes the semantic mapping problem in the form of a Conditional Random Field (CRF). Dependencies in the CRF model capture the following aspects: 1) compatibility between the latent semantic mapping variables and observations, 2) contextual relations between objects, and 3) temporal consistency of object poses. We propose a particle filtering based algorithm to perform generative inference in CT-Map, inspired by Limketkai et al.
We evaluate the proposed semantic mapping method CT-Map with the Michigan Progress Fetch robot. The performance of CT-Map is quantitatively evaluated in terms of object detection and pose estimation accuracy. We show that CT-Map is effective in simultaneously detecting and localizing objects in cluttered scenes. We demonstrate object detection performance superior to Faster R-CNN, and more accurate 6 DOF object pose estimation compared to 3D registration methods such as ICP and FPFH. We also highlight examples in which our method benefits from modeling temporal consistency of object poses and object contextual relations.
II Related Work
Our work semantically maps the world through simultaneous object detection and 6 DOF object pose estimation. Contextual relations between objects and temporal consistency of object poses are modeled for better scene understanding. Here we discuss related work in a) semantic mapping, b) object detection and pose estimation, c) object contextual relations, and d) object temporal dynamics modeling.
Considering the plethora of work in the field of semantic mapping, which varies in semantic representations, we limit our focus to works that provide object-level semantics. Works in semantic SLAM [3, 30, 5] demonstrated SLAM at the object level. Similarly, we aim at providing a semantic map of the world at the object level, and we focus on mapping while making use of an existing metric SLAM method (e.g., ORB-SLAM) to stay localized.
A widely used approach to semantic mapping is to augment a 3D reconstructed map with objects. Civera et al. ran an object detection thread in parallel with a monocular SLAM thread, registering objects to the map by aligning object faces using SURF features. Ekvall et al. actively recognized objects based on SIFT features, and integrated object recognition with SLAM for triangulation of object locations. However, neither Civera et al. nor Ekvall et al. dealt with false detections, and their experiments were carried out in environments without clutter.
To be robust to false detections, Pillai et al. proposed aggregating object evidence over multiple frames, yielding better detection than single-frame object detection. However, their method relied on 3D geometric segmentation to singulate objects from the background, which is brittle in clutter. Sünderhauf et al. combined object detection over multiple frames with 3D geometric segmentation to obtain reasonable object boundaries, producing a 3D reconstructed map with object instance segments as the central semantic entities. Their method, however, did not provide object pose information, which is critical for robotic manipulation tasks.
Other works have focused on scene labeling of the 3D map while a parallel SLAM thread runs in the background. Different methods have been proposed for single-frame scene labeling [44, 33, 21, 41], with labels fused across multiple frames to generate a dense 3D semantic map. Our work focuses on detecting and localizing object entities in the environment, instead of densely labeling every surfel or voxel in the reconstructed 3D map.
Object Detection and Pose Estimation
Deep neural network based object detectors [26, 20, 27] are nowadays widely adopted for focusing attention on regions of interest in an image. Works in object pose estimation adopt these detectors to obtain priors on object locations. Zeng et al. generated scene hypotheses based on object detections returned by R-CNN, and used a Bayesian bootstrap filter to estimate object poses. Similarly, Sui et al. and Narayanan et al. proposed generative approaches for object pose estimation given RGB-D observations. Discriminative object pose estimation methods use local [14, 28] or global [29, 1] descriptors to estimate object poses via feature matching; however, feature-based methods are sensitive to clutter in the environment. Our work takes the generative approach and builds on Zeng et al. for object pose estimation through Bayesian filtering. While that work modeled objects independently and took a single image as input, we model the contextual dependencies between objects and the temporal consistency of each object instance given streaming data.
Works that simultaneously detect and localize objects are closely related to ours. Xiang et al. proposed PoseCNN, a network for object detection and 6 DOF object pose estimation given an RGB image. Tremblay et al. and Tekin et al. converted the problem of simultaneous object detection and pose estimation into detecting the vertices of the object's bounding cuboid. Unlike these works, which take a single image as input and output deterministic estimates of object poses, our work maintains a belief over object classes and poses across observations.
Given streaming data, Salas-Moreno et al. assumed repeated object instances in the environment to effectively recognize and localize objects, but their model lacks inter-object dependencies. Tateno et al. incrementally segmented the 3D surface reconstructed by an underlying SLAM thread; the 3D segments were then recognized as objects and object poses were estimated via 3D descriptor matching. Their work is similar to ours in terms of output, but it depends on 3D geometric segmentation, which is not guaranteed to segment objects out of clutter. In addition, it requires dense SLAM with a small voxel size, which is hard to scale.
Object Contextual Relations
Contextual relations play a key role in modeling spatial relations between objects for scene understanding. Koppula et al. showed semantic labeling of point clouds using co-occurrence and geometric relations between objects. Jiang et al. explored indirectly modeling object contextual relations by hallucinating human interactions with the environment. Similarly, [9, 11, 15, 8, 2] have shown that modeling object-object and object-place contextual relations is useful in place recognition, object detection, and object search tasks. In our work, we mainly utilize object-object contextual relations in terms of co-occurrence and geometric relations.
Object Temporal Dynamics Modeling
We need to maintain the belief over object poses even when objects are not being observed. Different types of objects exhibit different dynamics. For example, structural objects such as furniture tend to stay at approximately the same location, while small objects such as food items are often moved from one place to another. Bore et al. proposed to learn long-term object dynamics over multiple visits to the same environment. Toris et al. proposed a temporal persistence model to predict the probability of an object staying at the location where it was last observed after a certain period of time. We are inspired by this temporal persistence model, and we reason about the possible locations of an object observed in the past based on the contextual relations between objects.
III Problem Formulation
We focus on semantic mapping at the object level. Our proposed CT-Map method maintains a belief over object classes and poses across observations. We assume that the robot stays localized in the environment through an external localization routine (e.g., ORB-SLAM). The semantic map is composed of a set of objects O = {o_1, ..., o_N}. Each object o_i contains the object class c_i, object geometry g_i, and 6 DOF object pose x_i, where c_i belongs to the set of object classes C.
At time t, the robot is localized at pose r_t. The robot observes z_t, which consists of the observed RGB-D image and semantic measurements. The semantic measurements are returned by an object detector (as explained in section V-A), and contain: 1) an object detection score vector s_t, with each element denoting the detection confidence of one object class, and 2) a 2D bounding box b_t.
We probabilistically formalize the semantic mapping problem in the form of a CRF, as shown in Figure 2. The robot pose r_t and observation z_t are known, and the set of objects O are the unknown variables. We model the contextual dependencies between objects and the temporal consistency of each individual object over time. The posterior probability of the semantic map is expressed as a product of the potentials below, up to normalization.
Here Z is a normalization constant, and the action applied to object o_i at time t is denoted by u_i^t. The prediction potential psi_p models the temporal consistency of the object poses. The measurement potential psi_m accounts for the observation model given the 3D mesh of objects. The context potential psi_c captures the contextual relations between objects.
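Under assumed notation (Z the normalization constant, u_i^t the action applied to object i at time t, and psi_p, psi_m, psi_c the prediction, measurement, and context potentials, respectively), one way to write this factorization is:

```latex
p\left(O^{1:T} \mid z^{1:T}, r^{1:T}, u^{1:T}\right)
  = \frac{1}{Z} \prod_{t=1}^{T} \prod_{i}
      \psi_{p}\!\left(o_i^{t-1}, o_i^{t}, u_i^{t}\right)\,
      \psi_{m}\!\left(o_i^{t}, z^{t}, r^{t}\right)
      \prod_{j \neq i} \psi_{c}\!\left(o_i^{t}, o_j^{t}\right)
```

The exact grouping of terms is an illustrative reconstruction; the three potential types match the descriptions above.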
III-A Prediction Potential
We use two different prediction models for object pose, depending on whether the object is in the field of view. If the object is being observed, we model the action u_i^t as a small motion. This assumption leads the prediction of small object movements in 3D to be modeled as a Gaussian around the previous pose:

x_i^t ~ N(x_i^{t-1} + u_i^t, Sigma)    (2)

which allows us to express the prediction potential psi_p as the corresponding Gaussian density.
When object o_i is not in the field of view for a significant period of time, it can either remain at the same location or be moved to a different location by the actions of other agents. As stated by Toris et al., the probability of the object still being at the location where it was last seen is a function of time. To account for the possibility that the object has been moved, we model the temporal action with a discrete random variable a. Specifically, a = stay denotes that no action is applied and the object remains at the same location, and a = move denotes that a move action is applied and the object is moved elsewhere. These high-level actions follow the distribution:

p(a = stay) = beta + (1 - beta) exp(-lambda_i * Delta_t)    (3)

where beta and lambda_i are constants, and Delta_t is the duration for which object o_i has not been observed. As Delta_t increases, the probability of a = stay decays, eventually approaching beta as Delta_t goes to infinity. The coefficients lambda_i that control the speed of the decay differ across objects. We provide heuristic values for different objects in our experiments, although these coefficients can also be learned as introduced by Toris et al.
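As a rough sketch of this persistence model, the following assumes the stay probability decays exponentially from 1 toward 0.5, matching the equal-probability limit adopted in the experiments; the exact parametrization and the function names are illustrative assumptions, not the paper's exact equation.

```python
import math

def p_no_move(dt, lam):
    # Probability that no move action was applied to an object that has
    # been unobserved for duration dt: starts at 1.0 and decays toward
    # 0.5, so after an arbitrarily long gap staying and moving become
    # equally likely (assumed asymptote).
    return 0.5 + 0.5 * math.exp(-lam * dt)

def p_move(dt, lam):
    # Complementary probability that a move action was applied.
    return 1.0 - p_no_move(dt, lam)
```

Per-object rate constants `lam` encode how quickly each object type is expected to be disturbed (e.g., small for furniture, large for food items).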
III-B Measurement Potential
The measurement potential psi_m of object o_i combines a semantic term and a geometric term. We use a non-zero constant epsilon to account for cases where objects are not in the field of view. The semantic term measures the compatibility between the observation z_t and o_i^t as

s_t(c_i) * IoM(b_t, b_hat_i)

where s_t(c_i) is the confidence score of class c_i from the detection confidence vector s_t. The function IoM evaluates the intersection over the minimum area of two bounding boxes, and b_hat_i is the minimum enclosing bounding box of object o_i projected into image space based on the robot pose r_t.
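For concreteness, intersection over minimum area can be sketched as follows, assuming boxes are given as corner coordinates (x1, y1, x2, y2):

```python
def intersection_over_min(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) with x2 > x1 and y2 > y1.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    smaller = min(area(box_a), area(box_b))
    return inter / smaller if smaller > 0 else 0.0
```

Unlike IoU, this score reaches 1.0 whenever one box is fully contained in the other, which is forgiving when the detection crops only part of the projected object.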
We assume known 3D mesh models of objects. The geometric term computes the similarity between the depth image rendered from o_i^t and the observation inside bounding box b_t, as explained in detail in section V-B. Consider the case where the robot has observed object o_i in the past, and the belief over its pose indicates that it is in the current field of view. If the robot cannot detect object o_i, the object could be occluded; in that case we use the constant epsilon so that the object can still be potentially localized.
III-C Context Potential
There exist common contextual relations between object categories across all environments. For example, a cup appears on a table much more often than on the floor, and a mouse appears beside a keyboard much more often than beside a coffee machine. We refer to these common contextual relations as category-level contextual relations. In a specific environment, there also exist contextual relations between particular object instances. For example, a TV always stays on a certain table, and a cereal box is usually stored in a particular cabinet. We refer to these contextual relations in a specific environment as instance-level contextual relations.
We manually encode category-level contextual relations as prior knowledge in our model; they could also be learned from public scene datasets (e.g., McCormac et al.). Because instance-level contextual relations vary across environments, these relations must be learned over time for a specific environment. The context potential psi_c is composed of a category-level potential and an instance-level potential. We model psi_c as a mixture of Gaussians, with the category-level and instance-level potentials each contributing a Gaussian component (Equation 5).
In our experiments, we manually designed the category-level potential as prior knowledge, and the instance-level potential is updated via Bayesian updates. The design of the category-level potential follows two constraints: 1) simple physical constraints, i.e., no object intersection is allowed and objects should not float in the air, and 2) object pairs that belong to the same category co-occur more often than objects from different categories.
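A minimal sketch of the two-component mixture, assuming the potential is evaluated over a scalar pairwise feature such as the distance between two objects (the feature choice and function names are assumptions):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Univariate normal density.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def context_potential(d, w, cat_mu, cat_sigma, inst_mu, inst_sigma):
    # Two-component Gaussian mixture over a scalar pairwise feature d:
    # the category-level and instance-level potentials each contribute
    # one component, mixed with weight w.
    return w * gaussian_pdf(d, cat_mu, cat_sigma) + (1.0 - w) * gaussian_pdf(d, inst_mu, inst_sigma)
```

The category-level component is hand-designed once, while the instance-level component's parameters would be refined by Bayesian updates as a specific environment is observed.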
Standard CRF inference is not directly applicable to our problem because we are dealing with high-dimensional data. Sener et al. proposed a recursive CRF that handles a discrete hidden state with the forward-backward algorithm, while our hidden state is mixed, i.e., the object class label lives in a discrete space and the object pose in a continuous space.
Instead of estimating the posterior over the complete history of objects as expressed in Section III, CT-Map recursively estimates the posterior of each object o_i. This approach to inference is similar to the CRF-filter proposed by Limketkai et al. We represent the posterior of object o_i with a set of weighted particles, where each particle contains object class and pose information as introduced in Section III, along with an associated weight. In each particle filtering iteration, particles are first resampled based on their associated weights, then propagated forward in time through the object temporal consistency model, and finally re-weighted according to the measurement and context potentials.
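The iteration above can be sketched as follows; `propagate`, `measurement_potential`, and `context_potential` are problem-specific callables and stand in for the potentials defined in Section III (names assumed):

```python
import random

def ct_map_step(particles, weights, propagate, measurement_potential, context_potential):
    # One filtering iteration over a single object's particle set:
    # 1) resample particles in proportion to their weights,
    # 2) propagate each particle through the temporal prediction model,
    # 3) reweight by the measurement and context potentials, normalized.
    resampled = random.choices(particles, weights=weights, k=len(particles))
    propagated = [propagate(p) for p in resampled]
    new_weights = [measurement_potential(p) * context_potential(p) for p in propagated]
    total = sum(new_weights)
    if total == 0.0:
        # Degenerate case: fall back to uniform weights.
        return propagated, [1.0 / len(propagated)] * len(propagated)
    return propagated, [w / total for w in new_weights]
```

In practice each particle would carry a class label and a 6 DOF pose; here particles are opaque values to keep the control flow visible.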
We associate bounding boxes across consecutive frames based on their overlap. Only when a bounding box has been consistently associated for a certain number of frames do we initiate object class and pose estimation for it. The initial set of particles for a detected bounding box is drawn as follows: 1) we first sample the object class based on the corresponding detection confidence score vector s_t; 2) we then sample the 6 DOF object pose inside the bounding box, by placing the object center around the 3D points at the center region of the box, with orientation sampled uniformly.
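A sketch of this initialization, with assumed inputs (a class-to-confidence mapping and the 3D point at the box center) and orientation simplified to a single uniformly sampled yaw angle rather than a full rotation:

```python
import math
import random

def init_particles(class_scores, center_3d, n=100, pos_sigma=0.02):
    # class_scores: {class_name: detector confidence}; center_3d: 3D
    # point at the center region of the detected bounding box (meters).
    classes = list(class_scores.keys())
    scores = list(class_scores.values())
    particles = []
    for _ in range(n):
        # 1) Sample the class from the detection confidence vector.
        cls = random.choices(classes, weights=scores, k=1)[0]
        # 2) Sample a position jittered around the box-center 3D point.
        pos = tuple(c + random.gauss(0.0, pos_sigma) for c in center_3d)
        # Orientation sampled uniformly (yaw only, for brevity).
        yaw = random.uniform(-math.pi, math.pi)
        particles.append((cls, pos, yaw))
    return particles
```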
To sample the pose of object o_i from the prediction potential (Step 4 in Algorithm 1), there are two cases:

If o_i is within the field of view of the robot, we sample the pose according to Equation 2.

If o_i is not within the field of view of the robot, we first sample the high-level action a according to Equation 3.

If a = stay is sampled, then the pose is sampled based on Equation 2.

If a = move is sampled, then another object o_j is uniformly sampled from the map, indicating the place that o_i has been moved to. The pose of o_i is then sampled from the region that o_j can physically support.
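The two cases above can be sketched as follows, with supporting regions simplified to axis-aligned bounds and all parameter names assumed:

```python
import random

def sample_next_pose(pose, in_view, p_stay, motion_sigma, support_regions):
    # pose: (x, y, z). If the object is in view, or a stay action is
    # sampled, perturb the pose with small Gaussian noise (Equation 2).
    # Otherwise relocate it to a uniformly chosen supporting region,
    # given as axis-aligned bounds [(lo, hi), (lo, hi), (lo, hi)].
    if in_view or random.random() < p_stay:
        return tuple(x + random.gauss(0.0, motion_sigma) for x in pose)
    region = random.choice(support_regions)
    return tuple(random.uniform(lo, hi) for lo, hi in region)
```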
In step 5 of Algorithm 1, we use N(i) to denote the indices of objects in the neighborhood of object o_i. Because each neighbor object is itself represented by a set of particles, it is computationally expensive to evaluate the context potential against every particle of every neighbor. Thus, we evaluate the context potential only against the most likely particle of each neighbor.
V-A Faster R-CNN Object Detector
We deploy Faster R-CNN as our object detector. Given the RGB channel of our RGB-D observation, we apply the object detector and obtain bounding boxes from the region proposal network, along with the corresponding class score vectors. We then apply non-maximum suppression to these boxes and merge boxes whose Intersection over Union (IoU) is larger than 0.5. For training, our dataset has 970 ground-truth images for 13 object classes, each with several labeled objects. We fine-tuned the object detector based on VGG16 pretrained on COCO. To avoid overfitting, we fine-tuned the network for a limited number of iterations with a small learning rate.
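A standard greedy non-maximum suppression sketch is shown below; the paper's merge step may differ in details (e.g., averaging merged boxes rather than discarding them):

```python
def iou(a, b):
    # Intersection over Union of boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: keep the highest-scoring box and
    # drop any remaining box whose IoU with a kept box exceeds the
    # threshold. Returns indices of kept boxes.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```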
V-B Similarity Function
We assume the 3D mesh models of objects are given. Thus, we can render the depth image of object o_i based on its object class and 6 DOF pose in the camera frame. Given the rendered depth image, the similarity function decays exponentially with the sum of squared differences between the depth values in the observed and rendered depth images, scaled by a constant factor.
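A minimal sketch of this similarity, assuming flattened depth crops of equal size and an exponential form so that a perfect render scores 1.0 (the exact functional form in the paper may differ):

```python
import math

def depth_similarity(observed, rendered, scale):
    # observed/rendered: flat lists of depth values over the
    # bounding-box crop. Scores 1.0 for a perfect render and decays
    # with the sum of squared depth differences, scaled by the
    # constant factor `scale`.
    ssd = sum((o - r) ** 2 for o, r in zip(observed, rendered))
    return math.exp(-scale * ssd)
```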
We collected an indoor scene dataset with a Michigan Progress Fetch robot to evaluate our proposed CT-Map method. The dataset contains 20 RGB-D sequences of various indoor scenes. We measure the quality of inference in terms of 1) object detection and 2) pose estimation; thus, we report the mean average precision (mAP) metric and 6 DOF pose estimation accuracy for benchmarking our method. We also show qualitative examples of our semantic maps in Figure 1, with more qualitative examples provided in the video (https://youtu.be/W-6ViSlrrZg).
Across all experiments, we set the mixture weight in Equation 5 to 0.5 to treat category-level and instance-level potentials equally. If an object has not been observed for an infinitely long period of time, we assume that the object has equal probabilities of staying at the same location or not. Thus, the decay in Equation 3 asymptotes to 0.5.
VI-A Object Detection
Noisy object detections come from the baseline Faster R-CNN object detector, and CT-Map can correct some false detections by modeling the object class as part of the hidden state. To evaluate the object detection performance of CT-Map, we take the estimated 6 DOF poses of all objects in the scene at the end of each RGB-D sequence in our dataset, and project them back onto each camera frame in that sequence to generate bounding boxes with class labels. We run two semantic mapping processes that consider different sets of potentials: 1) Temporal Mapping (T-Map), which considers the prediction potential in the CRF model; and 2) Contextual Temporal Mapping (CT-Map), the proposed method, which considers both prediction and context potentials. For both T-Map and CT-Map, we include the measurement potential on the observation.
We use mAP as our object detection metric. As shown in Table I, T-Map improves upon the baseline Faster R-CNN by incorporating the prediction and observation potentials, and CT-Map improves performance further by additionally incorporating the context potential. Faster R-CNN did not perform well on the test scenarios because the training data do not necessarily cover the variation encountered at test time. Though the performance of Faster R-CNN could be further improved with more training data, CT-Map provides more robust object detection when training data remain limited.
In some cases, objects are not reliably detected by Faster R-CNN due to occlusion. If an object has been observed in the environment in the past, our method predicts the locations the object can move to by modeling its temporal consistency. Thus, even if no detection fires on the object due to occlusion, our method can still localize the object and claim a detection. However, when an object is severely occluded and the depth observation lacks sufficient geometric information, our method cannot localize it. Example detection results highlighting the benefits of the proposed method compared to the baseline Faster R-CNN are shown in Figure 4.
VI-B Pose Estimation
For each RGB-D sequence in our dataset, we locate the frames in which each object is last seen, and project the depth frames back into 3D point clouds using the known camera matrix. We then manually label the ground truth 6 DOF poses of the objects, and compare the estimated object poses at the end of each RGB-D sequence against this ground truth.
Pose estimation accuracy is measured as the fraction of objects that are correctly localized out of the total number of objects present in the dataset. We claim that an object is correctly localized if its pose estimation error falls under both a position error threshold and a rotation error threshold, where the position error is the translation error in Euclidean distance and the rotation error is the absolute angle difference in orientation. For symmetric objects, the rotation error with respect to the symmetry axis is ignored.
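The metric above can be sketched as follows; the symmetry handling is simplified here to skipping the rotation check entirely (an assumption, since the paper ignores rotation error only about the symmetry axis):

```python
def pose_correct(trans_err, rot_err, trans_thresh, rot_thresh, symmetric=False):
    # An estimate counts as correct when the Euclidean translation
    # error and the absolute rotation error both fall under their
    # thresholds; symmetric objects skip the rotation check.
    if trans_err > trans_thresh:
        return False
    return symmetric or abs(rot_err) <= rot_thresh

def pose_accuracy(results, trans_thresh, rot_thresh):
    # results: list of (trans_err, rot_err, symmetric) per object.
    correct = sum(
        pose_correct(t, r, trans_thresh, rot_thresh, s) for t, r, s in results
    )
    return correct / len(results)
```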
We apply the Iterative Closest Point (ICP) and Fast Point Feature Histogram (FPFH) algorithms as our baselines for 6 DOF object pose estimation. For each RGB-D sequence in our dataset, we take the 3D point clouds of the labeled frames and crop them based on the ground truth bounding boxes. These cropped point clouds are given to the baselines as observations, along with the object 3D mesh models. ICP and FPFH are applied to register the object model to the cropped observed point cloud, with a maximum of 50,000 iterations.
Our proposed method CT-Map outperforms ICP and FPFH by a large margin. As our generative inference iteratively samples object pose hypotheses and evaluates them against the observations, CT-Map does not suffer from local minima as much as registration methods such as ICP and FPFH.
We propose a semantic mapping method, CT-Map, that simultaneously detects objects and localizes their 6 DOF poses given streaming RGB-D observations. CT-Map represents the semantic map with a belief over object classes and poses. We probabilistically formalize the semantic mapping problem in the form of a CRF, which accounts for contextual relations between objects and temporal consistency of object poses, as well as a measurement potential on the observation. We demonstrate that CT-Map outperforms Faster R-CNN in object detection, and ICP and FPFH in object pose estimation. In the future, we would like to investigate the inference problem of object semantic locations given partial observations of an environment, e.g., inferring that a query object is on a dining table or in a kitchen cabinet. Ideally, maintaining a belief over object semantic locations can serve as a notion of generalized object permanence and facilitate object search tasks.
This work was supported in part by NSF award IIS-1638060.
-  A. Aldoma, F. Tombari, R. B. Rusu, and M. Vincze. OUR-CVFH: oriented, unique and repeatable clustered viewpoint feature histogram for object recognition and 6DOF pose estimation. In Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium, pages 113–122. Springer, 2012.
-  A. Aydemir, A. Pronobis, M. Göbelbecker, and P. Jensfelt. Active visual object search in unknown environments using uncertain semantics. IEEE Transactions on Robotics, 29(4):986–1002, 2013.
-  S. Y. Bao, M. Bagra, Y.-W. Chao, and S. Savarese. Semantic structure from motion with points, regions, and objects. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2703–2710. IEEE, 2012.
-  N. Bore, P. Jensfelt, and J. Folkesson. Multiple object detection, tracking and long-term dynamics learning in large 3d maps. arXiv preprint arXiv:1801.09292, 2018.
-  S. L. Bowman, N. Atanasov, K. Daniilidis, and G. J. Pappas. Probabilistic data association for semantic slam. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 1722–1729. IEEE, 2017.
-  J. Civera, D. Gálvez-López, L. Riazuelo, J. D. Tardós, and J. Montiel. Towards semantic slam using a monocular camera. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 1277–1284. IEEE, 2011.
-  S. Ekvall, P. Jensfelt, and D. Kragic. Integrating active mobile robot object recognition and slam in natural environments. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages 5792–5797. IEEE, 2006.
-  P. Espinace, T. Kollar, N. Roy, and A. Soto. Indoor scene recognition by a mobile robot through adaptive object detection. Robotics and Autonomous Systems, 61(9):932–947, 2013.
-  C. Galindo, A. Saffiotti, S. Coradeschi, P. Buschka, J.-A. Fernandez-Madrigal, and J. González. Multi-hierarchical semantic maps for mobile robotics. In Intelligent Robots and Systems, 2005.(IROS 2005). 2005 IEEE/RSJ International Conference on, pages 2278–2283. IEEE, 2005.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
-  G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In European conference on computer vision, pages 30–43. Springer, 2008.
-  M. Isard. Pampas: Real-valued graphical models for computer vision. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, volume 1, pages I–I. IEEE, 2003.
-  Y. Jiang, M. Lim, and A. Saxena. Learning object arrangements in 3d scenes using human context. arXiv preprint arXiv:1206.6462, 2012.
-  A. E. Johnson and M. Hebert. Using spin images for efficient object recognition in cluttered 3d scenes. IEEE Transactions on pattern analysis and machine intelligence, 21(5):433–449, 1999.
-  T. Kollar and N. Roy. Utilizing object-object and object-scene context when planning to find things. In Robotics and Automation, 2009. ICRA’09. IEEE International Conference on, pages 2168–2173. IEEE, 2009.
-  H. S. Koppula, A. Anand, T. Joachims, and A. Saxena. Semantic labeling of 3d point clouds for indoor scenes. In Advances in neural information processing systems, pages 244–252, 2011.
-  I. Kostavelis and A. Gasteratos. Semantic mapping for mobile robotics tasks: A survey. Robotics and Autonomous Systems, 66:86–103, 2015.
-  B. Limketkai, D. Fox, and L. Liao. Crf-filters: Discriminative particle filters for sequential state estimation. In Robotics and Automation, 2007 IEEE International Conference on, pages 3142–3147. IEEE, 2007.
-  T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. CoRR, abs/1405.0312, 2014.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
-  J. McCormac, A. Handa, A. Davison, and S. Leutenegger. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 4628–4635. IEEE, 2017.
-  J. McCormac, A. Handa, S. Leutenegger, and A. J. Davison. Scenenet rgb-d: 5m photorealistic images of synthetic indoor trajectories with ground truth. arXiv preprint arXiv:1612.05079, 2016.
-  R. Mur-Artal and J. D. Tardós. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Transactions on Robotics, 33(5):1255–1262, 2017.
-  V. Narayanan and M. Likhachev. Discriminatively-guided deliberative perception for pose estimation of multiple 3d object instances. In Robotics: Science and Systems, June 2016.
-  S. Pillai and J. Leonard. Monocular slam supported object recognition. arXiv preprint arXiv:1506.01732, 2015.
-  J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6):1137–1149, 2017.
-  R. B. Rusu, N. Blodow, and M. Beetz. Fast point feature histograms (fpfh) for 3d registration. In Robotics and Automation, 2009. ICRA’09. IEEE International Conference on, pages 3212–3217. IEEE, 2009.
-  R. B. Rusu, G. Bradski, R. Thibaux, and J. Hsu. Fast 3d recognition and pose using the viewpoint feature histogram. In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages 2155–2162. IEEE, 2010.
-  R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison. Slam++: Simultaneous localisation and mapping at the level of objects. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 1352–1359. IEEE, 2013.
-  O. Sener and A. Saxena. rcrf: Recursive belief estimation over crfs in rgb-d activity videos. In Proceedings of Robotics: Science and Systems, July 2015.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  J. Stückler, B. Waldvogel, H. Schulz, and S. Behnke. Dense real-time mapping of object-class semantics from rgb-d video. Journal of Real-Time Image Processing, 10(4):599–609, 2015.
-  E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Computer Vision and Pattern Recognition, 2003. CVPR 2003. IEEE Conference on. IEEE, 2003.
-  Z. Sui, L. Xiang, O. C. Jenkins, and K. Desingh. Goal-directed robot manipulation through axiomatic scene estimation. The International Journal of Robotics Research, 36(1):86–104, 2017.
-  N. Sünderhauf, T. T. Pham, Y. Latif, M. Milford, and I. Reid. Meaningful maps with object-oriented semantic mapping. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pages 5079–5085. IEEE, 2017.
-  K. Tateno, F. Tombari, and N. Navab. When 2.5 d is not enough: Simultaneous reconstruction, segmentation and recognition on dense slam. In Robotics and Automation (ICRA), 2016 IEEE International Conference on, pages 2295–2302. IEEE, 2016.
-  B. Tekin, S. N. Sinha, and P. Fua. Real-time seamless single shot 6d object pose prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 292–301, 2018.
-  R. Toris and S. Chernova. Temporal persistence modeling for object search. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 3215–3222. IEEE, 2017.
-  J. Tremblay, T. To, A. Molchanov, S. Tyree, J. Kautz, and S. Birchfield. Synthetically trained neural networks for learning human-readable plans from real-world demonstrations. arXiv preprint arXiv:1805.07054, 2018.
-  Y. Xiang and D. Fox. Da-rnn: Semantic mapping with data associated recurrent neural networks. arXiv preprint arXiv:1703.03098, 2017.
-  Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017.
-  Z. Zeng, Z. Zhou, Z. Sui, and O. C. Jenkins. Semantic robot programming for goal-directed manipulation in cluttered scenes. In Robotics and Automation (ICRA), 2018 IEEE International Conference on. IEEE, 2018.
-  Z. Zhao and X. Chen. Semantic mapping for object category and structural class. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages 724–729. IEEE, 2014.