Point cloud analysis has been widely studied due to its important applications in autonomous driving [1, 2, 3], augmented reality, and medicine. However, most point cloud based work has focused on applications with only one depth sensor, or multiple sensors facing outwards and covering non-overlapping regions. Less work has been reported that tackles the 3D object detection problem in indoor multi-camera settings, which is a very common and important scenario in applications such as operating room monitoring, indoor social interaction studies, and indoor surveillance.
Using multiple cameras reduces the amount of occlusion, but fusing information from multiple sensors is very challenging and still an open problem in the community. Traditionally, fusing 2D and/or 3D images coming from multiple sensors is handled by complex models that either process each sensor separately and fuse the decisions at a later stage [8, 9, 10], or fuse the information earlier at the feature level. Such algorithms tend to suffer from poor generalization due to the complexity of the model and heavy assumptions. In comparison, we argue that merging point clouds from multiple sources is a more straightforward and natural alternative, which provides better generalization under various challenging real-world scenarios.
In this work we study the multi-person 3D pose estimation problem for indoor multi-camera settings using point clouds and propose an end-to-end multi-person 3D pose estimation network, “Point R-CNN”. We test our method by simulating scenarios such as camera failures and camera view changes on the CMU Panoptic dataset, which was collected in the CMU Panoptic studio to capture various social interaction events with 10 depth cameras. We further test our method on the challenging real-world MVOR dataset, which was captured in hospital operating rooms with 3 depth cameras. The experiments demonstrate the robustness of the algorithm in challenging scenes and show good generalization on multi-sensor point clouds. Furthermore, we show that the proposed end-to-end network outperforms the baseline cascaded model by a large margin.
The contributions of our work are as follows:
We propose to use point clouds as the only source for fusing data from multiple cameras and show that our proposed method is efficient and generalizes well to various challenging scenarios.
We propose an end-to-end multi-person 3D pose estimation network, Point R-CNN, based solely on point clouds. Through extensive experiments, we show that the proposed network outperforms a cascade of state-of-the-art models.
We present extensive experimental results simulating challenging indoor multi-camera application problems, such as repeated camera failures and view changes.
2 Related work
2.1 Point cloud based approaches
Processing point cloud data is challenging due to its unstructured nature. In order to use Convolutional Neural Network (CNN) like methods, point clouds are usually pre-processed and projected into some ordered space, as in [13, 14, 15]. Projecting point clouds is also helpful for more complex localization tasks such as 3D object detection. For example, Zhou et al. proposed to detect 3D objects from voxelized point clouds using a Region Proposal Network, which simultaneously produces voxel class labels and regression results for bounding box “anchors”.
Qi et al. proposed PointNet and PointNet++ to classify and segment point clouds in their native space. These networks are the building blocks for many later point cloud based algorithms. Recently, Ge et al. proposed to directly use point clouds for hand pose estimation. We further discuss this approach in Section 2.3.
Besides the above mentioned trends, on-the-fly point cloud transformations have also been explored. Su et al. proposed using Bilateral Convolutional Layers to filter the point cloud and further process the data without explicitly pre-processing the input point cloud. Li et al. proposed the X-transform to gradually transform the point clouds into a higher-order representation without pre-processing steps.
Our method is inspired by the first two approaches: we detect people in ordered 3D space using voxelized input, and detect human body joints on the segmented point cloud.
2.2 Multi-sensor applications
As discussed earlier, combining the information of multiple sensors is challenging. Conventionally the information is fused at a later stage, either by fusing the decisions or by fusing the feature spaces. Hegde et al. proposed to use one network per sensor and fuse the results at a later stage to perform object recognition. Xu et al. proposed PointFusion to fuse point cloud features and 2D images to detect 3D bounding boxes.
However, complex fusion models present weaknesses in terms of generalization. For example, in indoor surveillance systems, the number of cameras and their positions vary widely, making it challenging to generalize. Hence we argue that combining multiple point cloud sources is a more natural and effective alternative, where the effort of fusing information is very low in comparison. Furthermore, in case of camera failures, the structure of the input does not change; only the density, i.e. the number of points, does. This can easily be accounted for by training the point cloud network on input clouds with variable density.
2.3 3D human pose estimation
3D human pose estimation is a challenging problem. Most recent works focus on using depth images or combining 2D and 3D information to detect landmarks for a single person or for multiple persons [22, 23, 24, 25].
Rhodin et al. proposed a weakly-supervised training method to circumvent the annotation problem. This is achieved by assuming consistency of the pose across different views; during testing the pose is estimated from a single camera input. More recently, Moon et al. proposed to use voxelized depth images to detect 3D hands and estimate single human poses from depth images. Haque et al. describe a viewpoint-invariant 3D human pose estimation network, where the input depth image is either a top view or a front view. They refine the landmark positions using a Recurrent Neural Network and tested view transfer by training on the front view and testing on the side view. While this relates to our problem and presents encouraging results, the assumption of having fixed viewpoints does not apply to our application.
3 Method
3.1 Overview
The work we are presenting is built upon the VoxelNet paradigm. Our end-to-end framework can be split into two parts: (1) instance detection and (2) instance processing.
Our framework is outlined in Figure 2. The architecture can be split into several modules: (1) per-voxel feature extraction, (2) voxel feature aggregation, (3) instance detection, and finally (4) instance-wise processing, i.e. in our case point-to-point regression. In the following sections we describe these modules in detail.
3.2 Input preprocessing
The input to our algorithm is the unstructured point cloud $P = \bigcup_s P_s$, where $P_s$ denotes the point cloud acquired from sensor $s$. The point clouds acquired from all the sensors are assumed to be time-synchronized and registered within the same world coordinate system. We further assume our world coordinate system to be axis-aligned with the ground plane, with the $z$-axis being the ground normal.
As a first step we define an axis-aligned cuboid working space resting on top of the ground plane. All points outside this volume are discarded at this time.
In order to reduce the impact of variable point cloud densities across the scene and to speed up processing we down-sample using a voxel grid filter to our working space. This filter merges all points that fall within one voxel into their centroid such that after filtering each voxel contains at most one point.
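The voxel grid filter described above can be sketched as follows (a minimal numpy sketch; the function name and interface are illustrative, not the paper's implementation):

```python
import numpy as np

def voxel_grid_filter(points, leaf_size):
    """Downsample a point cloud so each voxel keeps at most one point:
    the centroid of all points that fell into that voxel."""
    # Integer voxel coordinates for every point.
    coords = np.floor(points / leaf_size).astype(np.int64)
    # Group points by voxel via the inverse mapping of the unique rows.
    _, inverse = np.unique(coords, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]      # centroid of each occupied voxel
```

After this filter, each occupied leaf-size voxel contributes exactly one point, which evens out the density differences between close and distant sensors.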
We then subdivide the working space into a different regular axis-aligned grid of (larger) voxels as follows. The origin of our working space is denoted by $o$, the dimensions of each voxel by $v_x$, $v_y$ and $v_z$, and the number of voxels along each axis by $N_x$, $N_y$ and $N_z$. We also fix the maximum number of points to be considered per voxel, denoted $T$.
Each point $p$ of the point cloud is now assigned to the voxel it falls in, denoted by the directional voxel indices $i$, $j$ and $k$, where $i = \lfloor (p_x - o_x) / v_x \rfloor$, etc.
The voxel grid is then flattened by assigning a linear index to each voxel via $id = i + j \, N_x + k \, N_x N_y$. After this grouping, each point is assigned the $id$ of the corresponding voxel.
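The voxel-index assignment above can be sketched as follows (a numpy sketch; the function name, and the convention that the linear index runs fastest along $x$, are assumptions consistent with the formula used here):

```python
import numpy as np

def voxel_ids(points, origin, voxel_size, grid_dims):
    """Compute directional voxel indices (i, j, k) for every point and
    flatten them into a linear id = i + j*Nx + k*Nx*Ny."""
    ijk = np.floor((points - origin) / voxel_size).astype(np.int64)
    Nx, Ny, _ = grid_dims
    return ijk[:, 0] + ijk[:, 1] * Nx + ijk[:, 2] * Nx * Ny
```

For example, with a unit voxel size and a 4x4x4 grid, a point at (1.5, 1.5, 0.5) falls into voxel (1, 1, 0) and gets the linear id 1 + 1*4 + 0*16 = 5.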
Using the $id$ we previously computed for each point, we can find the list of unique $id$s of every voxel containing at least one point. Since we already sampled and shuffled the whole point cloud, for each voxel we just have to take the first $T$ points with this $id$ and put those points in a tensor of size $K \times T \times 3$, 3 being our input dimension and $K$ being the total number of voxels in the scene.
Instead of using the world coordinates of each point we use its relative position within the corresponding voxel, i.e. $p' = p - (o + (i\,v_x,\; j\,v_y,\; k\,v_z))$. This prevents the network from learning global aspects of the scene as opposed to the desired local structures. We zero-pad the voxels which do not contain enough points.
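The grouping, relative-coordinate and zero-padding steps above can be sketched as follows (a numpy sketch that builds only the occupied voxels; the function name, the corner-relative offset convention, and the default `T` are illustrative assumptions):

```python
import numpy as np

def build_voxel_tensor(points, ids, origin, voxel_size, grid_dims, T=35):
    """Group (already shuffled) points by voxel id, keep at most T per
    voxel, store coordinates relative to the voxel corner, zero-pad."""
    uids = np.unique(ids)
    out = np.zeros((len(uids), T, 3))
    Nx, Ny, _ = grid_dims
    for n, uid in enumerate(uids):
        pts = points[ids == uid][:T]          # first T points of this voxel
        # Recover (i, j, k) from the linear id, then the voxel corner.
        i = uid % Nx
        j = (uid // Nx) % Ny
        k = uid // (Nx * Ny)
        corner = origin + np.array([i, j, k]) * voxel_size
        out[n, :len(pts)] = pts - corner      # relative position, rest stays 0
    return uids, out
```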
3.3 Instance detection
Now that our scene is defined, we can regress the bounding cylinder of each instance (i.e. person) in the scene. Our approach is inspired by previous work on 2D instance detection and segmentation [28, 29].
Per-voxel feature extraction.
The first part of our architecture uses 2 stacked voxel feature encoding (VFE) layers in order to learn a voxel-wise set of features which is invariant to point permutation. These VFE layers are efficiently implemented as 2D convolutions with kernels of size 1. We use a similar notation as in VoxelNet for the VFE layers: the $i$-th VFE layer is denoted VFE-$i(c_{in}, c_{out})$, with $c_{in}$ and $c_{out}$ being respectively the dimension of the input features and the dimension of the output features of the layer. As in VoxelNet, the VFE layer transforms its input into a feature vector of dimension $c_{out}/2$ before doing the point-wise concatenation, which then yields the output of dimension $c_{out}$.
The VFE layers are VFE-1(3, 32) and VFE-2(32, 64), followed by a fully connected layer with input and output size 64 right before the element-wise max pooling. Having the VFE layers instead of solely using PointNet helps add neighborhood information to the points (the neighborhood being every point in the same voxel). After the max pooling of the last layer, the output size is $N_x N_y N_z \times 64$. We can thus reshape our output in order to retrieve a 3D image of size $N_x \times N_y \times N_z \times 64$.
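A single VFE layer can be sketched as follows (a numpy sketch for illustration; the shared fully connected map stands in for the 1x1 convolution, and the weight shapes are assumptions):

```python
import numpy as np

def vfe_layer(voxel_pts, W, b):
    """One voxel feature encoding (VFE) layer sketch, cf. VoxelNet:
    a shared linear map + ReLU per point, then concatenation of each
    point feature with the voxel-wise max-pooled feature.
    voxel_pts: (K, T, C_in); W: (C_in, C_out/2); b: (C_out/2,)."""
    point_feat = np.maximum(voxel_pts @ W + b, 0.0)   # shared FC + ReLU
    pooled = point_feat.max(axis=1, keepdims=True)    # per-voxel max pool
    pooled = np.broadcast_to(pooled, point_feat.shape)
    # Point-wise concatenation doubles the feature dimension to C_out.
    return np.concatenate([point_feat, pooled], axis=-1)
```

Because the max pooling is over the points of a voxel, the output is invariant to the order of the points within each voxel.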
In this work we process every voxel. However, this could be sped up by only processing non-empty voxels, as shown in prior work.
In order to aggregate information from the scene and learn multi-scale features, we use a DenseUNet. The first step makes our network invariant to point permutation. Then, by working on voxels instead of raw point clouds, we go from an unstructured representation to a structured one, to which classic deep learning methods are easier to apply. By doing these two steps we provide additional neighborhood information to each point at voxel and scene level. This is more efficient than processing the whole point cloud and looking for k-nearest neighbors, and it gives better results than just processing the whole point cloud as a set of patches.
After the feature aggregation, we feed the output to two parallel heads: one for the per-voxel classification (doing the detection) and one for the per-voxel bounding cylinder regression.
With the classification branch we want to find the voxels which should be used for the bounding cylinder regression. This is done by classifying each voxel into one of two classes: containing the top of a cylinder or not. At the moment we assume that each voxel can contain at most one bounding cylinder. This could be extended by having several bounding cylinders per voxel, or a second network for refinement.
Since there are not many point cloud datasets where the instance segmentation and/or detection are provided together with the 3D joints of each instance we decided to work on datasets that provide at least the point cloud and the joint positions. The cylinders are then defined based on the joint positions for each person.
We define each cylinder axis as being vertical and aligned with the person’s neck. The top of the cylinder is at the same height as the joint which is furthest from the ground (for this person). The radius of the cylinder is the distance between the neck axis and the joint furthest from it.
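The cylinder construction described above can be sketched as follows (a sketch; the neck-joint index and the choice of $z$ as the up axis are illustrative assumptions):

```python
import numpy as np

def bounding_cylinder(joints, neck_idx=0, up_axis=2):
    """Build a bounding cylinder from a person's (J, 3) joint array:
    vertical axis through the neck, top at the highest joint, radius
    as the largest horizontal joint distance from the neck axis."""
    neck = joints[neck_idx]
    top = joints[:, up_axis].max()                     # highest joint
    horiz = np.delete(joints - neck, up_axis, axis=1)  # drop the up axis
    radius = np.linalg.norm(horiz, axis=1).max()       # furthest joint
    center_xy = np.delete(neck, up_axis)               # axis position
    return center_xy, top, radius
```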
During training we use the classification ground truth to mask the voxels that should be used for back-propagation, while during testing and inference we use the output of the classification branch to retrieve the voxels which contain the desired bounding cylinders, as in the R-CNN family of frameworks.
The loss for this part is defined as:
$$L_{det} = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i L_{reg}(t_i, t_i^*)$$
Here $p$ represents the output of the classification head of our network and $t$ represents the output of the bounding cylinder regression head. Their respective ground truths are represented by $p^*$ and $t^*$. $L_{cls}$ is a cross-entropy loss doing the classification of voxels. For the regression loss, we use $L_{reg}(t, t^*) = \mathrm{smooth}_{L1}(t - t^*)$, where $\mathrm{smooth}_{L1}$ is the robust loss function (smooth L1) defined in Fast R-CNN.
In the case of human detection only a relatively small number of voxels contain a cylinder to regress, so we need to re-sample our data. Instead of working with the entire outputs $p$ and $t$ of our networks, we use every voxel containing an object across the batch, as many non-empty voxels randomly chosen across the batch, and the same number of voxels drawn from all voxels in the batch. In the case where there is no human in our whole batch, we just pick 32 random voxels. That way, we maintain a bias towards “empty” voxels, we are sure to use the voxels with data in them, and we do not have to compute the whole loss on every empty voxel in our batch.
The two terms of the loss are normalized by $N_{cls}$ and $N_{reg}$ and weighted by a balancing parameter $\lambda$. In our current implementation, $N_{cls}$ and $N_{reg}$ are both set to the number of voxels used after the sampling, while $\lambda$ is set to 1.
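The voxel re-sampling scheme described above can be sketched as follows (a numpy sketch; the function name and boolean-mask interface are assumptions):

```python
import numpy as np

def sample_loss_voxels(is_positive, is_nonempty, rng, fallback=32):
    """Pick the voxels used for the detection loss: every positive
    voxel, as many random non-empty (but negative) voxels, and as many
    voxels drawn from the whole batch; if there is no person at all,
    fall back to `fallback` random voxels."""
    pos = np.flatnonzero(is_positive)
    if len(pos) == 0:
        return rng.choice(len(is_positive), size=fallback, replace=False)
    nonempty = np.flatnonzero(is_nonempty & ~is_positive)
    neg = rng.choice(nonempty, size=min(len(pos), len(nonempty)), replace=False)
    anywhere = rng.choice(len(is_positive), size=len(pos), replace=False)
    return np.concatenate([pos, neg, anywhere])
```

The `anywhere` draw is what keeps the bias towards empty voxels, since most voxels in a scene contain no person.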
We use a normalization analogous to the one presented in [1, Equation 1], i.e. the cylinder coordinates are normalized relative to the voxel size. By transforming the cylinders back into our world coordinate system we can extract, for each cylinder, a point cloud consisting of all points within it.
3.4 Instance wise processing
Using the extracted point cloud of each cylinder, we can set up a batch of instances to be processed by the last stage of our framework. During training we take at most 32 persons across the frames in the minibatch, and for each person we sample at most 1024 points. If a person does not have enough points (32 in our case), we discard this instance.
If there are no extracted point clouds at all, i.e. there was no detection, we skip this whole part and just compute the loss previously mentioned. In our use case, we perform joint regression from the point cloud using PointNet. This framework could of course be modified to change what is regressed and how. The only normalization we apply to those point clouds is to put them in a sphere located at the middle of the cylinder, with a radius of half the height of the cylinder. The final loss of our whole framework is:
$$L = L_{det} + \mu \frac{1}{N_{joints}} \sum_j \| J_j - J_j^* \|^2$$
Here, we take the loss $L_{det}$ of the first part and add the loss of the regression part. The loss we use for the joint regression is the Mean Square Error between the predicted joints $J$ and the ground-truth joints $J^*$. As stated before, we sample persons across our batch for the regression part, 32 during training.
If there are too many persons, we pick the persons to regress randomly across the batch. If there are not enough persons, we pad this batch with zeros. During testing, we use every detected bounding cylinder instead. We add another balancing term $\mu$ that we set to 1, $\lambda$ being set to 1 too.
$N_{joints}$ is the total number of joints to regress across our instances. We show that training this whole framework end-to-end improves the results compared to a two-stage solution trained separately.
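The per-instance normalization described above (a sphere centred at the middle of the cylinder, radius half the cylinder height) can be sketched as follows (a sketch; representing the cylinder by its bottom and top axis points is an assumption):

```python
import numpy as np

def normalize_instance(points, cyl_bottom, cyl_top):
    """Map an extracted per-person point cloud into a sphere centred at
    the middle of its cylinder, with radius half the cylinder height."""
    center = (cyl_bottom + cyl_top) / 2.0
    radius = np.linalg.norm(cyl_top - cyl_bottom) / 2.0
    return (points - center) / radius
```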
4 Experiments
4.1 Experimental setup
To simulate real-world scenarios we designed four challenging experiments and compared our results with a baseline method. In particular, as baseline we train, stage-wise, the state-of-the-art 3D point cloud based object detector VoxelNet and the state-of-the-art point cloud based regression network PointNet.
To the best of our knowledge no end-to-end solution is available for purely point cloud based 3D multi-person pose estimation. Thus for comparison we cascade the above mentioned state-of-the-art algorithms to perform each part, i.e. 3D person detection and per-person joint detection. Each part is trained separately and the best model is chosen for final evaluation.
In order to simulate camera failures in real-life scenarios, the first three experiments are conducted with a random number of camera inputs, i.e. we randomly drop one or more of the camera inputs. Details are discussed in the respective experiment sections.
The first three experiments are conducted on the CMU Panoptic dataset, which was created to capture social interaction using several modalities. Here several actors are interacting in a closed environment and captured by a multitude of RGB and RGBD cameras. For our experiments we use only the depth images from the ten Kinect 2 RGBD sensors and show that point clouds alone are sufficient to accurately detect people and identify their pose.
The cameras in the CMU Panoptic dataset are placed in various positions on a dome over the room, all pointing towards the center. The dataset contains several different social scenes with 3D joint annotations of the actors. For our experiments we randomly choose four scenes to conduct our evaluation, namely “160224_haggling1”, “160226_haggling1”, “160422_haggling1” and “160906_pizza1”. For more information about the dataset, readers are referred to the original publication.
The last experiment is conducted on the MVOR dataset, which was acquired over 4 days in a hospital operating room with a varying number of medical personnel visible in the scene, recorded by three RGBD cameras. The dataset captures real-world operating room scenarios, including cluttered backgrounds, various medical devices, a varying number of people at any given time, and ubiquitous occlusions caused by medical devices. The point clouds were cropped to a 4×2×3 m volume and sampled using a voxel grid filter with a 2.5×2.5×2.5 cm grid size.
To evaluate the overall performance of the algorithm we use both a 3D object detection metric, Average Precision (AP) at Intersection over Union (IoU) > 0.5, as used in the KITTI Vision Benchmark Suite, to evaluate the instance detection part of the algorithm, and per-joint mean distance metrics, denoted DIST for the per-joint distance and ACC for the accuracy under a 10 cm threshold in the following sections. When calculating the distances, we only count the joints in true positive detections, and if there are duplicated detections, we only compute the joint distances of the highest-scoring detection.
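The per-joint metrics above can be sketched as follows (a minimal sketch; it assumes `pred` and `gt` are already-matched (J, 3) joint arrays in metres for one true positive detection):

```python
import numpy as np

def joint_metrics(pred, gt, thresh=0.10):
    """Per-joint mean distance (DIST) and fraction of joints within a
    10 cm threshold (ACC) for one matched detection."""
    d = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per joint
    return d.mean(), (d < thresh).mean()
```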
4.2 View Generalization
In this experiment, we simulate cameras being placed in different locations in the room to demonstrate the view generalization of the algorithm. We achieve this by using different cameras at training and test time. We show that our algorithm is robust when changing the view or location of the camera.
For training we randomly choose 8 camera inputs, namely cameras 0, 1, 2, 3, 4, 6, 7 and 9. The remaining cameras 5 and 8 are used for testing. Note that both during training and testing, we choose a random number of cameras at any given time to simulate camera failures.
The first experiment is conducted on scenes “160224_haggling1”, “160226_haggling1” and “160906_pizza1”. The training dataset has 6858 frames, uniformly down-sampled by a factor of 3. The testing dataset has 897 frames, down-sampled by a factor of 30.
As we can see from Table 1, even with severe camera view changes and camera failures, the proposed method performs well. Meanwhile, our end-to-end solution outperforms the baseline by a large margin, both from detection perspective and joint regression perspective.
4.3 Actor generalization
In this experiment, we explore the generalization problem in terms of actors and scenes. In particular, we train the network on scenes “160224_haggling1”, “160226_haggling1” and “160906_pizza1” and test on scene “160422_haggling1” with the same camera views, i.e. both training and testing data use cameras 0, 1, 2, 3, 4, 6, 7 and 9 with random camera failures. The training dataset has 6858 frames, uniformly down-sampled by a factor of 3. The testing dataset has 430 frames, down-sampled by a factor of 30. Table 2 shows that the proposed method outperforms the baseline method by a large margin, especially for the shoulders, hips and knees.
4.4 View and Actor generalization
In this study, we evaluate under both view changes and actor changes. We trained the network on scenes “160224_haggling1”, “160226_haggling1” and “160906_pizza1” using cameras 0, 1, 2, 3, 4, 6, 7 and 9, and tested on scene “160422_haggling1” with cameras 5 and 8, both with random camera failures. The training dataset has 6858 frames, uniformly down-sampled by a factor of 3. The testing dataset has 432 frames, down-sampled by a factor of 30. From Table 3 we can see that the proposed method outperforms the baseline on all joints, some with large margins, for example in the accuracy of the head top and the left and right shoulder joints.
4.5 Handling background and cluttered scenes
In this experiment, we use the MVOR dataset, which shows a very cluttered room and walls in the background. Unlike the previous datasets, this is a real-world operating room scenario, which is naturally more complex. Furthermore, due to the size of the dataset, it is not feasible to train the network from scratch, as mentioned in the original paper. Instead, we fine-tuned our previous model on the first 3 days of data and tested on the 4th day of data with all 3 cameras used. Since the number of joints differs from the previous dataset, we only fine-tuned the joints common to both datasets’ annotations. The training dataset has 513 frames and the testing dataset 113 frames. Table 4 shows that our algorithm outperforms the baseline.
4.6 Qualitative Evaluation and Discussion
In addition to the experiments mentioned above, we evaluated the mean per-joint distance accuracy under different thresholds. As shown in Figure 6, the proposed method consistently outperforms the baseline method under various thresholds in all four experiments. Sample testing results from all experiments are shown in Figure 7, where the ground truth is color-coded green and the prediction red.
5 Conclusion
In this work we have demonstrated through extensive experiments that point clouds are a natural and straightforward alternative for a multi-sensor indoor system, where the fusion of multi-sensor information is efficient. Unlike conventional methods that use complex fusion models to combine information, which tend to generalize poorly, we show through various challenging real-world scenarios that the proposed algorithm generalizes well. Furthermore, we propose an end-to-end multi-person 3D pose estimation network, “Point R-CNN”, and show that it outperforms the simple cascaded model by large margins in various experiments. The study shows that using an end-to-end network greatly improves both object detection and joint regression performance.
References
-  Yin Zhou and Oncel Tuzel. VoxelNet: End-to-end learning for point cloud based 3d object detection. arXiv preprint arXiv:1711.06396, 2017.
-  Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. arXiv preprint arXiv:1711.08488, 2017.
-  Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Waslander. Joint 3d proposal generation and object detection from view aggregation. arXiv preprint arXiv:1712.02294, 2017.
-  Youngmin Park, Vincent Lepetit, and Woontack Woo. Multiple 3d object tracking for augmented reality. In Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 117–120. IEEE Computer Society, 2008.
-  Vivek Singh, Kai Ma, Birgi Tamersoy, Yao-Jen Chang, Andreas Wimmer, Thomas O’Donnell, and Terrence Chen. Darwin: Deformable patient avatar representation with deep image network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 497–504. Springer, 2017.
-  Abdolrahim Kadkhodamohammadi, Afshin Gangi, Michel de Mathelin, and Nicolas Padoy. A multi-view rgb-d approach for human pose estimation in operating rooms. In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on, pages 363–372. IEEE, 2017.
-  Hanbyul Joo, Tomas Simon, and Yaser Sheikh. Total capture: A 3d deformation model for tracking faces, hands, and bodies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8320–8329, 2018.
-  Hyunggi Cho, Young-Woo Seo, BVK Vijaya Kumar, and Ragunathan Raj Rajkumar. A multi-sensor fusion system for moving object detection and tracking in urban driving environments. In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pages 1836–1843. IEEE, 2014.
-  Saurabh Gupta, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Aligning 3d models to rgb-d images of cluttered scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4731–4740, 2015.
-  Shubham Tulsiani and Jitendra Malik. Viewpoints and keypoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1510–1519, 2015.
-  Danfei Xu, Dragomir Anguelov, and Ashesh Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. arXiv preprint arXiv:1711.10871, 2017.
-  Vinkle Srivastav, Thibaut Issenhuth, Abdolrahim Kadkhodamohammadi, Michel de Mathelin, Afshin Gangi, and Nicolas Padoy. Mvor: A multi-view rgb-d operating room dataset for 2d and 3d human pose estimation. arXiv preprint arXiv:1808.08180, 2018.
-  Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912–1920, 2015.
-  Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 3, 2017.
-  Varun Jampani, Martin Kiefel, and Peter V Gehler. Learning sparse high dimensional filters: Image filtering, dense crfs and bilateral neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4452–4461, 2016.
-  Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 1(2):4, 2017.
-  Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pages 5099–5108, 2017.
-  Liuhao Ge, Yujun Cai, Junwu Weng, and Junsong Yuan. Hand pointnet: 3d hand pose estimation using point sets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8417–8426, 2018.
-  Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. Splatnet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2530–2539, 2018.
-  Yangyan Li, Rui Bu, Mingchao Sun, and Baoquan Chen. Pointcnn. arXiv preprint arXiv:1801.07791, 2018.
-  Vishakh Hegde and Reza Zadeh. Fusionnet: 3d object classification using multiple data representations. arXiv preprint arXiv:1607.05695, 2016.
-  Nikolaos Sarafianos, Bogdan Boteanu, Bogdan Ionescu, and Ioannis A Kakadiaris. 3d human pose estimation: A review of the literature and analysis of covariates. Computer Vision and Image Understanding, 152:1–20, 2016.
-  Wei Yang, Wanli Ouyang, Xiaolong Wang, Jimmy Ren, Hongsheng Li, and Xiaogang Wang. 3d human pose estimation in the wild by adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, 2018.
-  Helge Rhodin, Jörg Spörri, Isinsu Katircioglu, Victor Constantin, Frédéric Meyer, Erich Müller, Mathieu Salzmann, and Pascal Fua. Learning monocular 3d human pose estimation from multi-view images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
-  Abdolrahim Kadkhodamohammadi and Nicolas Padoy. A generalizable approach for multi-view 3d human pose regression. arXiv preprint arXiv:1804.10462, 2018.
-  Gyeongsik Moon, Ju Yong Chang, and Kyoung Mu Lee. V2v-posenet: Voxel-to-voxel prediction network for accurate 3d hand and human pose estimation from a single depth map. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, 2018.
-  Albert Haque, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, and Fei-Fei Li. Viewpoint invariant 3d human pose estimation with recurrent error feedback. CoRR, abs/1603.07076, 2016.
-  Ross B. Girshick. Fast R-CNN. CoRR, abs/1504.08083, 2015.
-  Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask R-CNN. CoRR, abs/1703.06870, 2017.
-  Xiaomeng Li, Hao Chen, Xiaojuan Qi, Qi Dou, Chi-Wing Fu, and Pheng-Ann Heng. H-denseunet: Hybrid densely connected unet for liver and liver tumor segmentation from CT volumes. CoRR, abs/1709.07330, 2017.
-  Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Scott Godisart, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh. Panoptic studio: A massively multiview system for social interaction capture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012.