Detecting objects in point clouds obtained from rapidly developing 3D scanners is an important first step for 3D scene understanding, and benefits various real-world applications such as autonomous navigation, housekeeping robots, and augmented/virtual reality. However, unlike RGB images, point cloud data have their own inherent properties: they are sparse and irregular, with non-uniform density and no visual appearance information, which makes object localization and recognition difficult. How to effectively extract local patterns from point data to assist detection remains challenging.
Many approaches [39, 33, 12, 27, 34, 23] have been developed for learning local geometric structures in point clouds, which is crucial to detecting objects. One line of work [39, 17, 33, 12] voxelizes the irregular point clouds into a regular 3D grid and applies 3D CNNs to extract features from neighboring voxels progressively, as shown in Fig. 1(a). The advantage of such solutions lies in utilizing the prominent power of convolutional kernels for learning spatial patterns to detect geometric structures. However, these approaches require transforming the entire point cloud into a volumetric representation, in which the sparsity of point data is not exploited, and thus suffer high computational cost from applying 3D convolutions over dense grids mainly occupied by empty voxels.
Another line of work operates directly on raw points, as shown in Fig. 1(b). Such point-based models abstract local patterns from small neighborhoods of subsampled points, either through point-wise Multi-Layer Perceptrons (MLPs) followed by a symmetric function (e.g. max pooling), or based on graphs with MLPs applied on each graph edge [34, 15]. In contrast to voxelization-based models, point-based models respect the inherent sparsity of point data and enable efficient computation by operating only on sensed regions. However, using discrete MLPs loses relations among points, and thus tends to miss fine-grained geometry.
In this work, we aim to improve the performance of point-based models by strengthening their pattern-learning ability with 3D CNNs. The key challenge is that point-based models operate directly on sparse and irregular points and hence are not compatible with standard convolutions. Our key idea is to develop a new local reshaping approach for point clouds that makes them compatible with regular convolutions without incurring much computational overhead. To this end, we operate on points subsampled from the input to leverage its sparsity, and introduce a novel and principled Local Grid Rendering (LGR) operation that independently renders a small neighborhood of each sampled point into a low-resolution regular grid. In this way, a small-size CNN needs to compute only on a set of small grids from sensed regions to abstract fine-grained patterns efficiently. See the illustration in Fig. 1(c).
Concretely, we implement the above idea with three steps. 1) Data structuring: a subset of input points is sampled as centroids of sensed regions, and points neighboring each region centroid are queried to construct local point sets. 2) LGR operation: each local point set is independently converted into a small regular grid through an interpolation function. 3) Perceptual feature extraction: with the LGR operation, an efficient mini-CNN perceives and abstracts the spatial patterns in each rendered grid into a feature vector.
We wrap these separate steps into an integrated Set Perception (SP) module, which abstracts a set of input points to produce a new set with fewer points, and enriches them with local context perceived by CNNs in a certain neighborhood. By applying the SP module repeatedly, fewer points with context from larger neighboring regions can be obtained progressively. With the newly designed SP module, we build a simple yet efficient backbone network, named LGR-Net, for effective feature extraction for point clouds. Our LGR-Net is superior in learning geometric structures by applying the LGR operation, which makes irregular point data compatible with 3D CNNs locally. Meanwhile, it also enables efficient computation in that only sensed regions are projected to small grids, which avoids redundant convolutions in empty space.
Our proposed LGR-Net is generic and applicable to various point-based models. In this work, we apply LGR-Net to build a 3D detection model on top of the recent successful deep Hough voting framework VoteNet. We perform evaluations on two challenging 3D indoor object detection datasets, ScanNet and SUN RGB-D. Our model achieves state-of-the-art results on both of them, with significant mAP improvements over the prior VoteNet while bringing only slightly increased computation overhead. This evidences that incorporating CNNs through LGR enables more effective abstraction of local patterns than conventional point-based models, and thus benefits the detection of 3D objects in point clouds.
In summary, this work makes the following contributions:
We propose a novel LGR operation that projects local point sets to small 3D grids independently. It allows using 3D CNNs to abstract local geometric patterns explicitly and efficiently.
Based on the LGR, we introduce a new backbone network (LGR-Net) to the community, which is simple and efficient.
Our model establishes new state-of-the-art on the ScanNet and SUN RGB-D datasets.
2 Related Work
2.0.1 Point-based Models for Point Clouds
Early works develop hand-crafted feature descriptors to capture geometric patterns. Recently, point-based models have been proposed to learn deep features on raw point data. PointNet and PointNet++ use point-wise MLPs followed by max pooling to aggregate point features. Some other works [34, 15] construct a local neighborhood graph from the neighbors of each point and perform MLPs on the edges to learn their relations. Though enjoying high processing speed, these point-based models suffer from implicit abstraction of local geometry. Toward explicit modeling of geometric structures, InterpConv directly convolves on irregular point clouds by interpolating point features to the neighboring weights of discrete convolutional kernels. Our approach differs from InterpConv in that we first convert the point set of a local region into a small regular grid, which allows flexibly using a mini-CNN with an arbitrary number of successive convolutional layers and arbitrary kernel sizes to fully exploit local patterns.
2.0.2 Point-based Models in 3D Object Detection
The recent development of real-time applications such as autonomous driving and robotics has drawn increasing attention to 3D object detection in point clouds. Some early works [20, 18, 32, 1] rely on template matching to localize objects by aligning a set of CAD models to 3D scans. MV3D projects 3D data to a bird's-eye-view representation to generate candidate boxes. With the demand for high processing speed, point-based models suited to raw point data have worked well for 3D object detection [16, 30, 23] as well as for semantic and instance segmentation [38, 4, 10]. Among these models, PointRCNN develops a two-stage detector which generates 3D proposals and refines them directly from the input points. Notably, VoteNet constructs an end-to-end detection framework by combining PointNet++ with a Hough voting process, achieving new state-of-the-art results on indoor scenes [5, 31] with only point data as input.
2.0.3 Voxelization-based Models for Point Clouds
Voxelization-based models [39, 26, 35, 6] convert input point clouds into a regular 3D grid upon which 3D CNNs are utilized for feature extraction. The limitation of such models is that a sparse grid suffers quantization artifacts while a dense grid leads to exponentially growing computational complexity. Indexing techniques have been used to operate on non-uniform grids [13, 29], but they focus more on subdivisions of point clouds than on modeling geometric structures. Voxelization-based models have also been applied to 3D object detection. VoxelNet , DSS  and 3D-SIS  divide input point clouds into equally spaced 3D grids for unified feature extraction and bounding box prediction using 3D CNNs. However, such methods are computationally expensive due to redundant convolutions over dense grids occupied with many empty voxels.
We propose a simple yet efficient backbone, LGR-Net, for point cloud feature extraction based on a novel Local Grid Rendering (LGR) operation, and apply LGR-Net to 3D object detection. In the following, we first elaborate on our LGR operation, then on LGR-Net, and finally explain its application to 3D object detection.
3.1 Set Perception with LGR
We progressively abstract local patterns from the input points with three steps, namely data structuring, Local Grid Rendering (LGR) and perceptual feature extraction, which are integrated into a Set Perception (SP) module. See the illustration in Fig. 2. The SP module perceives and abstracts a set of input points to produce a new set with fewer elements. It takes as input N points, each with xyz-coordinates and C-dim features, forming input data of size N × (3 + C). It outputs N′ subsampled points of size N′ × (3 + C′), comprising xyz-coordinates and new C′-dim features enriched with local context. The three steps are explained one by one below.
3.1.1 Data Structuring
Given input points of size N × (3 + C), we sample a subset of N′ points as centroids of sensed regions. We use Farthest Point Sampling (FPS) to achieve better coverage of the entire input than random sampling. We then select K neighboring points for each region centroid, forming N′ groups of local point sets. Each point set represents the geometric structures of a local region. The output data size is N′ × K × (3 + C).
Concretely, we adopt cube query to construct a cube region with a preset half-edge length centered at each region centroid, and randomly pick K points within the cube to form a local point set. We translate the coordinates of the points in each set into a local frame relative to the region centroid, by first subtracting the centroid's coordinates and then dividing by the half-edge length, so that the translated point coordinates fall in the interval [−1, 1].
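The data-structuring step can be sketched in a few lines of numpy. `farthest_point_sampling` and `cube_query` below are our own illustrative implementations, not the authors' code:

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedy FPS: iteratively pick the point farthest from the chosen set."""
    n = points.shape[0]
    chosen = np.zeros(m, dtype=np.int64)
    dist = np.full(n, np.inf)
    chosen[0] = 0  # start from an arbitrary point
    for i in range(1, m):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)          # distance to nearest chosen point
        chosen[i] = int(np.argmax(dist))    # farthest remaining point
    return chosen

def cube_query(points, centroid, half_edge, k):
    """Pick up to k points inside the axis-aligned cube around `centroid`,
    then translate them into a local frame scaled to [-1, 1]."""
    inside = np.all(np.abs(points - centroid) <= half_edge, axis=1)
    idx = np.flatnonzero(inside)
    if idx.size > k:
        idx = np.random.choice(idx, k, replace=False)
    return (points[idx] - centroid) / half_edge  # coordinates in [-1, 1]
```

A production implementation would batch these operations on GPU; the greedy FPS loop above is O(N·m) but suffices to show the logic.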
3.1.2 Local Grid Rendering
Suppose the points in a local point set are parameterized as {(f_i, x_i)}, where f_i is the feature vector and x_i the coordinates of the i-th point. The novel LGR operation rasterizes each local point set into a regular 3D grid independently through an interpolation function. Since each grid only represents the geometric structures of a local neighborhood, the grid resolution can be very small. The rasterization is performed as follows.
Each point in the set can be rendered into its own grayscale 3D grid of preset resolution (width × height × length in voxels), whose voxel coordinates are uniformly spaced in the interval [−1, 1] so as to be compatible with the coordinate range of the points.
For a single point x_i, we implement an interpolation kernel for its rasterization. The interpolated response of x_i on a voxel v in the grayscale grid is a function of d(v, x_i), the distance between the voxel v and the point x_i; we use Euclidean distance in our experiments. The kernel assigns a non-zero response only to voxels within a preset radius r of the point, and the response decays as the voxel-point distance grows, at a rate controlled by a decay parameter. We set r as half the diagonal of a voxel cell (discussed in Sec. 4.3.2).
The rendering output is a multi-channel 3D grid G whose channel dimension equals the number of point feature channels, each channel corresponding to one channel of the point features. The activation of G at voxel v is a feature vector computed as the response-weighted average of the point features, G_c(v) = Σ_i w(v, x_i) f_{i,c} / Σ_i w(v, x_i), where w(v, x_i) is the interpolated response of point x_i on voxel v and f_{i,c} is the c-th channel feature of the point x_i. As a result, although the points in a local point set are distributed non-uniformly across space, this weighted-average aggregation ensures that the scale of activation on different voxels in the grid is invariant to varying point density.
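Putting the kernel weighting and the weighted-average aggregation together, a minimal numpy sketch of the LGR operation might look as follows. The truncated exponential decay kernel `exp(-tau * d)` is an assumption consistent with the description above, not necessarily the paper's exact kernel:

```python
import numpy as np

def render_local_grid(coords, feats, res=5, radius=None, tau=1.0):
    """Rasterize one local point set (coords in [-1, 1]) into a res^3 grid.

    Each voxel's activation is the kernel-weighted average of the features
    of points within `radius` of the voxel center.
    """
    # voxel centers uniformly spaced in [-1, 1]
    ticks = np.linspace(-1.0, 1.0, res)
    vx, vy, vz = np.meshgrid(ticks, ticks, ticks, indexing="ij")
    voxels = np.stack([vx, vy, vz], axis=-1).reshape(-1, 3)   # (res^3, 3)
    if radius is None:
        cell = 2.0 / (res - 1)
        radius = 0.5 * np.sqrt(3.0) * cell   # half the diagonal of a voxel cell
    d = np.linalg.norm(voxels[:, None, :] - coords[None, :, :], axis=-1)
    w = np.where(d <= radius, np.exp(-tau * d), 0.0)          # (res^3, n_pts)
    denom = w.sum(axis=1, keepdims=True)
    denom[denom == 0.0] = 1.0                                 # empty voxels stay 0
    grid = (w @ feats) / denom                                # weighted average
    return grid.reshape(res, res, res, feats.shape[1])
```

Note how the normalization by the summed responses realizes the density-invariance property: doubling the points near a voxel doubles both numerator and denominator.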
3.1.3 Perceptual Feature Extraction
This step perceives and abstracts the spatial patterns in a rendered grid into a C′-dim feature vector by applying a composed function g ∘ f to the grid, where f and g are functions for feature embedding and spatial pooling, respectively.
Concretely, we apply two 3D convolutional layers followed by a global max-pooling layer, forming a shared mini-CNN, to abstract the patterns in each grid. Since the mini-CNN consists of only a handful of convolutional layers performed on a set of low-resolution grids, massive convolutions over dense grids are avoided and efficiency is maintained.
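As an illustration of this step, here is a naive numpy sketch of a two-layer mini-CNN with global max pooling. Real implementations would use an optimized 3D convolution from a deep learning framework; the weight shapes are placeholders:

```python
import numpy as np

def conv3d_valid(x, kernels):
    """Naive 'valid' 3D convolution with ReLU.
    x: (D, H, W, C_in); kernels: (k, k, k, C_in, C_out)."""
    k = kernels.shape[0]
    D, H, W, _ = x.shape
    out = np.empty((D - k + 1, H - k + 1, W - k + 1, kernels.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                patch = x[i:i + k, j:j + k, l:l + k, :]
                out[i, j, l] = np.tensordot(patch, kernels,
                                            axes=([0, 1, 2, 3], [0, 1, 2, 3]))
    return np.maximum(out, 0.0)  # ReLU

def mini_cnn(grid, w1, w2):
    """Two 3x3x3 conv layers + global max pool -> one feature vector."""
    h = conv3d_valid(grid, w1)   # e.g. 5^3 -> 3^3
    h = conv3d_valid(h, w2)      # 3^3 -> 1^3
    return h.max(axis=(0, 1, 2)) # global max pool over spatial dims
```

On a 5×5×5 grid the two 'valid' 3×3×3 convolutions collapse the spatial extent to a single cell, so the global max pool reduces it to one C′-dim vector per region.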
3.2 LGR-Net Architecture
The SP module samples a subset of points from the input points and enriches each sampled point with local context from a neighboring cube region with a preset half-edge length. By stacking SP modules with a decreasing number of sampled points and an increasing half-edge length, fewer points with context from larger regions can be obtained progressively. We thus build a new backbone, LGR-Net, by forming a hierarchy of SP modules for point feature extraction.
Concretely, LGR-Net comprises four SP modules with decreasing numbers of sampled points (2048, 1024, 512, 256) and increasing half-edge lengths (0.15, 0.3, 0.6, 1.0 in meters), respectively. In addition, two Feature Propagation (FP) layers upsample the output of the 4th SP module back to a larger point set by interpolating the features of input points onto output points: each output point's feature is the inverse-distance weighted average of its three nearest input points' features. Details of the LGR-Net architecture are specified in Table 1.
|SP module|SP1|SP2|SP3|SP4|
|Sampling (N′, half-edge, K)|(2048, 0.15, 64)|(1024, 0.3, 32)|(512, 0.6, 16)|(256, 1.0, 16)|
|Pooling|max pool|max pool|max pool|max pool|
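The FP layers' inverse-distance interpolation can be sketched as follows; the function name and the `eps` guard are our illustrative choices, not the authors' implementation:

```python
import numpy as np

def propagate_features(src_xyz, src_feat, dst_xyz, k=3, eps=1e-8):
    """Feature Propagation: each output point gets the inverse-distance
    weighted average of its k nearest input points' features."""
    out = np.empty((dst_xyz.shape[0], src_feat.shape[1]))
    for i, q in enumerate(dst_xyz):
        d = np.linalg.norm(src_xyz - q, axis=1)   # distances to all inputs
        nn = np.argsort(d)[:k]                    # k nearest input points
        w = 1.0 / (d[nn] + eps)                   # inverse-distance weights
        out[i] = (w[:, None] * src_feat[nn]).sum(0) / w.sum()
    return out
```

When an output point coincides with an input point, the huge weight 1/eps makes its feature dominate, so features are reproduced almost exactly at known locations.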
3.3 Application for Detection
Our LGR-Net is generic. We apply it to object detection considering its ability to extract fine-grained local geometric structures, which are crucial to detection performance. We build a point-based detection model based on the recent successful VoteNet framework, which works directly on raw point clouds and outputs object proposals in one forward pass using a backbone network, a voting module and a proposal module.
Concretely, we apply LGR-Net as the backbone network to output a subset of the input points featured by local patterns, which are taken as seed points. The voting module, implemented as an MLP, generates votes from each seed independently. Each vote is a 3D point whose coordinates are regressed to approach an object center, together with a refined feature vector. The proposal module, implemented as a set abstraction layer, groups the votes into clusters and aggregates their features to generate object proposals. These proposals are further classified and NMSed to output the final 3D bounding boxes.
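A hedged sketch of the voting step, assuming a simple two-layer shared MLP with residual feature refinement (the actual VoteNet voting module may differ in depth and normalization; all weight names are placeholders):

```python
import numpy as np

def vote(seed_xyz, seed_feat, w1, b1, w2, b2):
    """Voting: a shared MLP regresses, for every seed point, a 3D offset
    toward an object center plus a feature residual."""
    h = np.maximum(seed_feat @ w1 + b1, 0.0)   # shared hidden layer (ReLU)
    out = h @ w2 + b2                          # (n_seeds, 3 + C)
    vote_xyz = seed_xyz + out[:, :3]           # regressed center offsets
    vote_feat = seed_feat + out[:, 3:]         # refined (residual) features
    return vote_xyz, vote_feat
```

Because the MLP is shared across seeds, each vote is generated independently, as described above.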
In this section, we first compare our method with previous state-of-the-art methods on two popular 3D object detection benchmarks of indoor scenes. We then provide experimental analysis to validate our design choices. Finally, qualitative results are given along with discussions.
4.1 Implementation Details
Our detection model is based on the VoteNet framework, so we adopt the same input and data augmentation scheme as VoteNet for a fair comparison. Specifically, the network input is a set of points randomly subsampled on-the-fly from a popped-up depth image or from the mesh vertices of a 3D scan. Each point is represented by a 4-dim vector of xyz-coordinates plus an additional height feature indicating the distance to the floor. Several augmentations are applied to the points, including random flipping in both horizontal directions, random rotation around the upright axis, and random scaling.
The entire network is trained end-to-end from scratch. We adopt the Adam optimizer with an initial learning rate that is decayed in steps during training. For inference, we perform 3D non-maximum suppression on the generated object proposals with a fixed IoU threshold. The evaluation protocol is Average Precision (AP), following Song et al.
|Method|Input|bathtub|bed|bookshelf|chair|desk|dresser|nightstand|sofa|table|toilet|mAP|
|VoteNet|Geo only|74.4|83.0|28.8|75.3|22.0|29.8|62.2|64.0|47.3|90.1|57.7|
|LGR-Net (ours)|Geo only|80.2|85.1|39.7|76.7|29.4|35.1|66.5|67.6|52.7|89.1|62.2|
|Method|Input|mAP@0.25|mAP@0.5|
|DSS [33, 12]|Geo + RGB|15.2|6.8|
|MRCNN 2D-3D [11, 12]|Geo + RGB|17.3|10.5|
|F-PointNet [24, 12]|Geo + RGB|19.8|10.8|
|GSPN|Geo + RGB|30.6|17.7|
|3D-SIS|Geo + 1 view|35.1|18.7|
|3D-SIS|Geo + 3 views|36.6|19.0|
|3D-SIS|Geo + 5 views|40.2|22.5|
|3D-SIS|Geo only|25.4|14.6|
|VoteNet|Geo only|58.6|33.5|
|LGR-Net (ours)|Geo only|64.1|42.0|
4.2 Comparing with State-of-the-Arts
4.2.1 Benchmark Datasets.
SUN RGB-D contains single-view RGB-D images (5,000 for training) with dense annotations of oriented 3D bounding boxes. We reconstruct point clouds from the depth images using the provided camera parameters, as in VoteNet. Following the standard evaluation protocol, we train and report results on only the 10 most common object categories. ScanNetV2 is a large-scale RGB-D dataset containing scans of unique indoor scenes, annotated with surface reconstructions, textured meshes, and semantic and instance segmentation for 18 object categories. We reconstruct input point clouds by sampling vertices from the reconstructed meshes and predict axis-aligned bounding boxes. We use the standard split of scenes for training and testing.
4.2.2 Methods in Comparison.
2D-driven, PointFusion and F-PointNet are 2D-driven 3D detection methods, which benefit from detection techniques for 2D images and use 2D information to reduce the search space in 3D. Cloud of Gradients (COG) designs new 3D HoG-like features to detect objects in a sliding-shape manner. MRCNN 2D-3D [11, 12] estimates 3D bounding boxes by directly projecting instance segmentation results from Mask-RCNN into 3D. GSPN utilizes a generative model to generate object proposals for instance segmentation in point clouds.
Notably, Deep Sliding Shapes (DSS) and 3D-SIS are voxelization-based detectors, which convert the entire input point cloud into dense grids and employ 3D CNNs to learn from both geometry and RGB cues for object proposal generation and classification. VoteNet is a point-based detector, which adopts PointNet++ as its backbone to abstract and aggregate point features to generate and classify proposals.
4.2.3 Detection Performance.
Tables 2 and 3 show detection results on SUN RGB-D and ScanNet, respectively. Our model outperforms all prior methods by large margins on both benchmarks, i.e., by at least 4.5 and 5.5 mAP, respectively. Note that our model takes only point clouds as input, while most previous methods use both geometry and RGB, or even multi-view RGB images. In particular, Table 2 shows that our model achieves better results on nearly all categories than both the best-performing point-based detector (VoteNet) and the voxelization-based detector (DSS). Similar conclusions can be drawn from Table 3: LGR-Net significantly outperforms both VoteNet and the voxelization-based methods (DSS and 3D-SIS), evidencing the superiority of LGR-Net in local pattern abstraction and, in turn, detection. In addition, Table 3 shows that the stricter mAP@0.5 criterion is boosted by 8.5 points, suggesting our model improves object localization by benefiting from the learned fine-grained geometry.
Our model is computationally efficient since, thanks to the LGR operation, it only runs small-size CNNs on a set of low-resolution grids. Table 4 shows that our model is more than 8 times faster than the voxelization-based 3D-SIS, which applies CNNs on a dense grid derived from the entire point cloud. In addition, our model is comparable to the point-based VoteNet in speed, while achieving significant improvement in detection performance.
4.3 Ablation Studies
4.3.1 How does LGR help?
We argue that leveraging 3D CNNs via the LGR operation enables better abstraction of fine-grained structures than conventional point-based models. However, directly analyzing the learned features to support this argument is non-trivial. Fortunately, the backbone network integrates the learned local patterns into the features of a set of output points used for predicting object proposals. Since fine-grained structures are crucial to correctly localizing and recognizing objects, an output point enriched with such features (called an informative point here) is more likely to generate a good vote, and in turn a good object proposal (with the correct class and sufficient IoU with the ground truth). Therefore, a backbone better at capturing fine-grained structures should produce more informative points for a fixed number of output points.
Fig. 3 demonstrates this phenomenon. We analyze points output by the backbone, and trace those points that can generate an accurate object proposal. One can see that VoteNet (left) fails to produce informative points on some objects (especially small ones) to support detection. In comparison, our model (right) offers more informative points with a denser coverage of the scene, especially on objects that VoteNet has missed, evidencing its superiority in exploiting fine-grained geometry.
4.3.2 Effect of Feature Aggregation
The radius in Eqn. 2 controls how many features from neighboring points are aggregated onto each voxel. We here investigate how it affects feature aggregation and, in turn, detection. We take half the diagonal of a voxel cell as one unit of radius. Fig. 4 (left) shows that as the radius increases, the mAP reaches its peak at one unit. Embracing more points from a larger neighborhood, though introducing more context, can excessively dilute the representative features of a voxel's nearby points, thus blurring the voxelized representation and hurting detection performance.
We proceed with a second analysis of the influence of different feature aggregation methods. We test two alternatives, average pooling and nearest neighbor, which compute the activation on each voxel by averaging the features of its neighboring points and by directly taking its nearest point's features, respectively (tested with the best-performing radius). Fig. 4 (right) shows that aggregation via interpolation outperforms both alternatives. One possible explanation is that it not only accumulates appropriate context but also respects the positional correspondence between voxels and points.
4.3.3 Effect of Grid Resolution
The resolution of the rendered grid influences the degree of quantization of local geometric structures, as well as computation overhead. To determine its value, we compare detection performance and processing time (per scan) for different grid resolutions. Fig. 5 (left) shows that a moderate resolution of 5×5×5 (voxels) gives the best detection performance. One explanation is that a lower resolution such as 3×3×3 introduces quantization artifacts during rendering, which obscure fine-grained geometry. On the contrary, projecting a point set with limited points to a grid of higher resolution such as 7×7×7 causes sparsity in voxel activations, which makes pattern learning difficult and increases processing time. We thus adopt 5×5×5 as the grid resolution throughout our experiments.
4.3.4 Effect of Mini-CNN Architecture
We are interested in how different mini-CNN architectures influence local pattern abstraction. Fig. 5 (right) reports detection performance for different numbers and kernel sizes of convolutional layers. Using two 1×1×1 convolutional layers (similar in spirit to the point-wise MLPs used by VoteNet) leads to a largely decreased mAP, close to the VoteNet performance. This indicates that leveraging larger convolutional kernels to model spatial relations among voxels is the key to effectively capturing fine-grained geometric structures. In addition, two 3×3×3 convolutional layers (followed by a global max pooling) are sufficient for abstracting the patterns in a grid. Since no noticeable improvement is observed when further increasing the number or kernel size of the convolutional layers, we use the above settings throughout our experiments.
4.3.5 Effect of Query Strategy
We adopt cube query to construct local point sets for sensed regions. To validate this design choice, we further test the ball query used by PointNet++. For a fair comparison, we set the cube and ball to have the same volume in each SP module. Experiments show that either query strategy obtains satisfying results, with cube query performing slightly better than ball query in mAP. We argue that cube query's local neighborhood is more compatible in shape with the rendered grid than ball query's. This guarantees a more complete coverage of voxels when interpolating the queried points onto the grid, which yields a more informative grid representation and thus facilitates pattern abstraction.
4.4 Qualitative Results and Discussion
In Fig. 6, we provide qualitative results of our model and VoteNet on SUN RGB-D to show how extracting point features through LGR-Net benefits detection in various ways. The first two examples show that a cluttered dresser/bookshelf is missed by VoteNet, while our model is able to collect sufficient geometric cues at the corresponding positions and thus gains the confidence to recognize the object. The second example also reveals the strength of our model in avoiding false positives, such as the spurious chair in the VoteNet output, indicating that our model is superior in learning informative region features to resolve local ambiguities. These results evidence that our model effectively improves the features extracted from point clouds, and thus helps localize and recognize objects. The last example shows a less successful prediction, where only an extremely partial observation of the chairs is given. Similar advantages of our approach are also revealed by qualitative results on ScanNetV2, as shown in Fig. 7.
In this work, we propose a novel Local Grid Rendering (LGR) operation which allows using small-size CNNs to effectively abstract fine-grained geometric structures while preserving computational efficiency. A simple yet efficient backbone LGR-Net for point cloud feature extraction is further introduced based on the LGR operation. We apply the LGR-Net to object detection. With only point input, our model shows significant improvements over prior arts. Our proposed LGR-Net is generic. In future work we intend to utilize it in downstream applications such as 3D instance segmentation in point clouds.
-  (2019) Scan2CAD: learning cad model alignment in rgb-d scans. In CVPR, pp. 2614–2623. Cited by: §2.0.2.
-  (2010) Scale-invariant heat kernel signatures for non-rigid shape recognition. In CVPR, pp. 1704–1711. Cited by: §2.0.1.
-  (2017) Multi-view 3d object detection network for autonomous driving. In CVPR, pp. 1907–1915. Cited by: §2.0.2.
-  (2019) 4D spatio-temporal convnets: minkowski convolutional neural networks. In CVPR, pp. 3075–3084. Cited by: §2.0.2.
-  (2017) Scannet: richly-annotated 3d reconstructions of indoor scenes. In CVPR, pp. 5828–5839. Cited by: §1, §2.0.2, §4.2.1.
-  (2017) Vote3deep: fast object detection in 3d point clouds using efficient convolutional neural networks. In ICRA, pp. 1355–1361. Cited by: §2.0.3.
-  (2009) Shape-based recognition of 3d point clouds in urban environments. In ICCV, pp. 2154–2161. Cited by: §2.0.1.
-  (2013) Efficient 3d object recognition using foveated point clouds. Computers & Graphics 37 (5), pp. 496–508. Cited by: §2.0.1.
-  (2016) Pl-svo: semi-direct monocular visual odometry by combining points and line segments. In IROS, pp. 4211–4216. Cited by: §1.
-  (2018) 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, pp. 9224–9232. Cited by: §2.0.2.
-  (2017) Mask r-cnn. In ICCV, pp. 2961–2969. Cited by: §4.2.2, Table 3.
-  (2019) 3D-sis: 3d semantic instance segmentation of rgb-d scans. In CVPR, pp. 4421–4430. Cited by: §1, §2.0.3, §4.2.2, §4.2.2, Table 3, Table 4.
-  (2017) Escape from cells: deep kd-networks for the recognition of 3d point cloud models. In ICCV, pp. 863–872. Cited by: §2.0.3.
-  (2017) 2D-driven 3d object detection in rgb-d images. In CVPR, pp. 4622–4630. Cited by: §4.2.2, Table 2.
-  (2019) Modeling local geometric structure of 3d point clouds using geo-cnn. In CVPR, pp. 998–1008. Cited by: §1, §2.0.1.
-  (2019) PointPillars: fast encoders for object detection from point clouds. In CVPR, pp. 12697–12705. Cited by: §2.0.2.
-  (2018) Pointgrid: a deep network for 3d shape understanding. In CVPR, pp. 9204–9214. Cited by: §1.
-  (2017) ASIST: automatic semantically invariant scene transformation. Computer Vision and Image Understanding 157, pp. 284–299. Cited by: §2.0.2.
-  (2019) Interpolated convolutional networks for 3d point cloud understanding. In ICCV, pp. 1578–1587. Cited by: §2.0.1.
-  (2012) A search-classify approach for cluttered indoor scene understanding. ACM TOG 31 (6), pp. 1–10. Cited by: §2.0.2.
-  (2002) Development of small robot for home floor cleaning. In SICE, Vol. 5, pp. 3222–3223. Cited by: §1.
-  (2008) Multiple 3d object tracking for augmented reality. In ISMAR, pp. 117–120. Cited by: §1.
-  (2019) Deep hough voting for 3d object detection in point clouds. In ICCV, pp. 9277–9286. Cited by: §1, §1, §1, §2.0.2, §3.3, §3.3, §4.1, §4.2.1, §4.2.2, Table 2, Table 3, Table 4.
-  (2018) Frustum pointnets for 3d object detection from rgb-d data. In CVPR, pp. 918–927. Cited by: §4.2.2, Table 2, Table 3.
-  (2017) PointNet: deep learning on point sets for 3d classification and segmentation. In CVPR, pp. 652–660. Cited by: §2.0.1.
-  (2016) Volumetric and multi-view cnns for object classification on 3d data. In CVPR, pp. 5648–5656. Cited by: §2.0.3.
-  (2017) Pointnet++: deep hierarchical feature learning on point sets in a metric space. In NeurIPS, pp. 5099–5108. Cited by: §1, §1, §2.0.1, §2.0.2, §3.1.1, §3.2, §3.3, §4.2.2, §4.3.5.
-  (2016) Three-dimensional object detection and layout prediction using clouds of oriented gradients. In CVPR, pp. 1525–1533. Cited by: §4.2.2, Table 2.
-  (2017) Octnet: learning deep 3d representations at high resolutions. In CVPR, pp. 3577–3586. Cited by: §2.0.3.
-  (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In CVPR, pp. 770–779. Cited by: §2.0.2.
-  (2015) SUN rgb-d: a rgb-d scene understanding benchmark suite. In CVPR, pp. 567–576. Cited by: §1, §2.0.2, §4.2.1, Table 2.
-  (2014) Sliding shapes for 3d object detection in depth images. In ECCV, pp. 634–651. Cited by: §2.0.2.
-  (2016) Deep sliding shapes for amodal 3d object detection in rgb-d images. In CVPR, pp. 808–816. Cited by: §1, §2.0.3, §4.1, §4.2.1, §4.2.2, Table 2, Table 3.
-  (2019) Dynamic graph cnn for learning on point clouds. ACM TOG 38 (5), pp. 1–12. Cited by: §1, §1, §2.0.1.
-  (2015) 3d shapenets: a deep representation for volumetric shapes. In CVPR, pp. 1912–1920. Cited by: §2.0.3.
-  (2018) Pointfusion: deep sensor fusion for 3d bounding box estimation. In CVPR, pp. 244–253. Cited by: §4.2.2, Table 2.
-  (2019) Grid-gcn for fast and scalable point cloud learning. arXiv preprint arXiv:1912.02984. Cited by: §3.1.1.
-  (2019) Gspn: generative shape proposal network for 3d instance segmentation in point cloud. In CVPR, pp. 3947–3956. Cited by: §2.0.2, §4.2.2, Table 3.
-  (2018) Voxelnet: end-to-end learning for point cloud based 3d object detection. In CVPR, pp. 4490–4499. Cited by: §1, §2.0.3.