Local Grid Rendering Networks for 3D Object Detection in Point Clouds

07/04/2020 · by Jianan Li, et al. · National University of Singapore

The performance of 3D object detection models on point clouds highly depends on their capability of modeling local geometric patterns. Conventional point-based models exploit local patterns through a symmetric function (e.g. max pooling) or based on graphs, which easily leads to loss of fine-grained geometric structures. CNNs are powerful at capturing spatial patterns, but directly applying convolutions after voxelizing the entire point cloud into a dense regular 3D grid is computationally costly. In this work, we aim to improve the performance of point-based models by enhancing their pattern learning ability through leveraging CNNs while preserving computational efficiency. We propose a novel and principled Local Grid Rendering (LGR) operation that renders the small neighborhood of each point in a subset of the input into a low-resolution 3D grid independently, which allows small-size CNNs to accurately model local patterns while avoiding convolutions over a dense grid and thus saving computation. With the LGR operation, we introduce a new generic backbone, LGR-Net, for point cloud feature extraction with a simple design and high efficiency. We validate LGR-Net for 3D object detection on the challenging ScanNet and SUN RGB-D datasets. It advances state-of-the-art results significantly, by 5.5 and 4.5 mAP respectively, with only slightly increased computation overhead.


1 Introduction

Detecting objects in point clouds obtained from rapidly developing 3D scanners is an important first step for 3D scene understanding, and benefits various real-world applications such as autonomous navigation [9], housekeeping robots [21], and augmented/virtual reality [22]. However, unlike RGB images, point cloud data have their own inherent properties: they are sparse and irregular, have non-uniform density, and lack visual appearance information, all of which complicate object localization and recognition. How to effectively extract local patterns from point data to assist detection remains challenging.

Many approaches [39, 33, 12, 27, 34, 23] have been developed for learning local geometric structures in point clouds, which is crucial to detecting objects. One line of work [39, 17, 33, 12] voxelizes the irregular point clouds into a regular 3D grid and applies 3D CNNs to extract features from neighboring voxels progressively, as shown in Fig. 1(a). The advantage of such solutions lies in utilizing the prominent power of convolutional kernels for learning spatial patterns to detect geometric structures. However, these approaches require transforming the entire point cloud into a volumetric representation in which the sparsity of point data is not exploited, and thus suffer a high computational cost from applying 3D convolutions over dense grids mostly occupied by empty voxels.

On the other hand, some works [27, 34, 15, 23] accept raw points without voxelization, as illustrated in Fig. 1(b). Such point-based models abstract local patterns from small neighborhoods of subsampled points through either point-wise Multi-Layer Perceptrons (MLPs) followed by a symmetric function (e.g. max pooling [27]), or graphs with MLPs applied on each graph edge [34, 15]. In contrast to voxelization-based models, point-based models respect the inherent sparsity of point data and enable efficient computation by operating only on sensed regions. However, using discrete MLPs discards relations among many points and thus tends to miss fine-grained geometry.

Figure 1: Approaches for learning geometric patterns in point clouds. (a) Voxelization-based model. The entire input points are directly voxelized into a dense grid. Geometric patterns are then learned through CNNs, which is accurate but computationally expensive. (b) Point-based model. A subset of input points is sampled and enriched with local context through a symmetric function (e.g. max pooling) or based on graphs, which is fast but tends to miss fine-grained geometry. (c) Our LGR-Net. It benefits from both (a) and (b) by sampling local point sets and converting each of them to a small grid independently, which allows using powerful small-size CNNs to effectively capture fine-grained geometry yet preserves computational efficiency by only convolving on small grids derived from sensed regions

In this work, we aim to improve the performance of point-based models by strengthening their pattern learning ability with 3D CNNs. The key challenge is that point-based models directly operate on sparse and irregular points and hence are not compatible with standard convolutions. Our key idea is to develop a new local reshaping of point clouds that makes them compatible with regular convolutions without incurring much computation overhead. To this end, we operate on points subsampled from the input to leverage its sparsity, and introduce a novel and principled Local Grid Rendering (LGR) operation that renders a small neighborhood of each sampled point into a low-resolution regular grid independently. In this way, a small-size CNN only computes on a set of small grids from sensed regions to abstract fine-grained patterns efficiently. See the illustration in Fig. 1(c).

Concretely, we implement the above idea with three steps. 1) Data structuring. A subset of input points is sampled as centroids of sensed regions, and points neighboring each region centroid are queried to construct local point sets. 2) LGR operation. Each local point set is converted to a small regular grid independently through an interpolation function. 3) Perceptual feature extraction. With the LGR operation, an efficient mini-CNN perceives and abstracts the spatial patterns in each rendered grid into a feature vector.

We wrap these separate steps into an integrated Set Perception (SP) module, which abstracts a set of input points to produce a new set with fewer points, and enriches them with local context perceived by CNNs in a certain neighborhood. By applying the SP module repeatedly, fewer points with context from larger neighboring regions can be obtained progressively. With the newly designed SP module, we build a simple yet efficient backbone network, named LGR-Net, for effective feature extraction for point clouds. Our LGR-Net is superior in learning geometric structures by applying the LGR operation, which makes irregular point data compatible with 3D CNNs locally. Meanwhile, it also enables efficient computation in that only sensed regions are projected to small grids, which avoids redundant convolutions in empty space.

Our proposed LGR-Net is generic and applicable to various point-based models. In this work, we apply LGR-Net to build a 3D detection model on top of the recent successful deep Hough voting framework VoteNet [23]. We perform evaluations on two challenging 3D indoor object detection datasets, ScanNet [5] and SUN RGB-D [31]. Our model achieves state-of-the-art results on both of them, with significant improvements (5.5 and 4.5 mAP, respectively) over the prior VoteNet, while bringing only slightly increased computation overhead. This evidences that incorporating CNNs through LGR enables more effective abstraction of local patterns than conventional point-based models and thus benefits the detection of 3D objects in point clouds.

In summary, this work makes the following contributions:

  • We propose a novel LGR operation that projects local point sets to small 3D grids independently. It allows using 3D CNNs to abstract local geometric patterns explicitly and efficiently.

  • Based on the LGR, we introduce a new backbone network (LGR-Net) to the community, which is simple and efficient.

  • Our model establishes new state-of-the-art on the ScanNet and SUN RGB-D datasets.

2 Related Work

2.0.1 Point-based Models for Point Clouds

Modeling geometric relations among points is a fundamental step for analyzing point clouds. Some earlier works [2, 7, 8] develop hand-crafted feature descriptors to capture geometric patterns. Recently, point-based models have been proposed to learn deep features on raw point data. PointNet [25] and PointNet++ [27] use point-wise MLPs followed by max pooling to aggregate point features. Other works [34, 15] construct a local neighborhood graph from the neighbors of each point and apply MLPs on the edges to learn their relations. Though enjoying high processing speed, these point-based models suffer from implicit abstraction of local geometry. For explicit modeling of geometric structures, InterpConv [19] directly convolves on irregular point clouds by interpolating point features to the neighboring weights of discrete convolutional kernels. Our approach differs from InterpConv in that we first convert the point set in a local region into a small regular grid, which allows using a mini-CNN flexibly, with an arbitrary number of successive convolutional layers and arbitrary kernel sizes, to fully exploit local patterns.

2.0.2 Point-based Models in 3D Object Detection

The recent development of real-time applications such as autonomous driving and robotics has drawn increasing attention to 3D object detection in point clouds. Some early works [20, 18, 32, 1] rely on template matching to localize objects by aligning a set of CAD models to 3D scans. MV3D [3] projects 3D data to a bird's-eye-view representation to generate candidate boxes. Driven by the demand for high processing speed, point-based models suited for raw point data have proven effective for 3D object detection [16, 30, 23] as well as semantic and instance segmentation [38, 4, 10]. Among these models, PointRCNN [30] develops a two-stage detector which generates 3D proposals and refines them directly from input points. Notably, VoteNet [23] constructs an end-to-end detection framework by combining PointNet++ [27] with a Hough voting process, setting a new state of the art on indoor scenes [5, 31] with point data alone as input.

2.0.3 Voxelization-based Models for Point Clouds

Voxelization-based models [39, 26, 35, 6] convert input point clouds into a regular 3D grid upon which 3D CNNs are utilized for feature extraction. The limitation of such models is that a coarse grid suffers from quantization artifacts while a dense grid leads to rapidly growing computational complexity. Indexing techniques have been used to operate on non-uniform grids [13, 29], but they focus more on subdividing point clouds than on modeling geometric structures. Voxelization-based models have also been applied to 3D object detection. VoxelNet [39], DSS [33] and 3D-SIS [12] divide input point clouds into equally spaced 3D grids for unified feature extraction and bounding box prediction using 3D CNNs. However, such methods are computationally expensive due to redundant convolutions over dense grids largely occupied by empty voxels.

3 Method

We propose a simple yet efficient backbone, LGR-Net, for point cloud feature extraction based on a novel Local Grid Rendering (LGR) operation, and apply LGR-Net to 3D object detection. In the following, we first elaborate on the LGR operation, then LGR-Net, and finally its application to 3D object detection.

Figure 2: Feature abstraction from point clouds with LGR. Input points are abstracted through a hierarchy of set perception processes, each of which enriches a subsampled set of input points with local context. The staged procedures for set perception are presented in 2D for clarity. (a) Sampling a subset of input points as region centroids and querying their neighboring points to construct local point sets. (b) Projecting each local point set to a small grid independently. The activation on each voxel is computed as a weighted average of the features of its neighboring points within a preset radius to deal with non-uniform point density. (c) The rendered grid. Voxels with a non-zero response are highlighted. (d) Applying a mini-CNN on the rendered grid to perceive and abstract local patterns into a feature vector

3.1 Set Perception with LGR

We progressively abstract local patterns from the input points in three steps, namely data structuring, Local Grid Rendering (LGR) and perceptual feature extraction, which are integrated into a Set Perception (SP) module; see the illustration in Fig. 2. The SP module perceives and abstracts a set of input points to produce a new set with fewer elements. It takes as input N points, each with xyz-coordinates and C-dim features, forming input data of size N × (3 + C). It outputs N' subsampled points of size N' × (3 + C'), comprising xyz-coordinates and new C'-dim features enriched with local context. The three steps are explained one by one below.

3.1.1 Data Structuring

Given input points of size N × (3 + C), we sample a subset of N' points as centroids of sensed regions. We use Farthest Point Sampling (FPS) [27] to obtain a better coverage of the input points than random sampling. We then select K neighboring points for each region centroid, forming N' groups of local point sets. Each point set represents the geometric structure of a local region. The output data size is N' × K × (3 + C).

Concretely, we adopt cube query [37] to construct a cube region with a preset half-edge length L centered at each region centroid, and randomly pick K points within the cube to form a local point set. We translate the coordinates of the points in each set into a local frame relative to the region centroid by first subtracting the centroid's coordinates and then dividing by the half-edge length L, so that the translated point coordinates fall in the interval [-1, 1].
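To make the data structuring step concrete, the following NumPy sketch implements greedy farthest point sampling and cube query with the local normalization to [-1, 1] described above. The function names, the choice of the first FPS seed, and the resampling fallback for under-populated cubes are illustrative assumptions rather than the paper's exact implementation.

import numpy as np

def farthest_point_sample(xyz, n_samples):
    """Greedy FPS over xyz of shape (N, 3); returns indices of n_samples points."""
    n = xyz.shape[0]
    chosen = np.zeros(n_samples, dtype=np.int64)
    min_dist = np.full(n, np.inf)
    chosen[0] = 0                              # arbitrary starting point
    for i in range(1, n_samples):
        d = np.linalg.norm(xyz - xyz[chosen[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)     # distance to the chosen set so far
        chosen[i] = int(np.argmax(min_dist))   # farthest remaining point
    return chosen

def cube_query(xyz, centroids, half_edge, k):
    """For each centroid, pick k points inside the axis-aligned cube of the given
    half-edge length and return their coordinates normalized to [-1, 1]."""
    groups = []
    for c in centroids:
        local = (xyz - c) / half_edge                          # translate to local frame
        inside = np.flatnonzero(np.all(np.abs(local) <= 1.0, axis=1))
        pad = len(inside) < k                                  # resample if too few points
        pick = np.random.choice(inside, k, replace=pad)
        groups.append(local[pick])
    return np.stack(groups)                                    # (N', k, 3), all in [-1, 1]

# Example: sample 2048 region centroids from 20k points, 64 neighbors each
points = np.random.rand(20000, 3).astype(np.float32)
centroid_idx = farthest_point_sample(points, 2048)
local_sets = cube_query(points, points[centroid_idx], half_edge=0.15, k=64)
print(local_sets.shape)  # (2048, 64, 3)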

3.1.2 Local Grid Rendering

Suppose the points in a local point set are parameterized as {p_i = (x_i, f_i)}, i = 1, ..., K, where f_i ∈ R^C is the feature vector and x_i ∈ [-1, 1]^3 the coordinates of point p_i. The novel LGR operation rasterizes each local point set into a regular 3D grid independently through an interpolation function. Since each grid only represents the geometric structure of a local neighborhood, the grid resolution can be kept very small. The rasterization is performed as follows.

Each point p_i in the set can be rendered into its own grayscale 3D grid G_i of preset resolution w × h × l (width × height × length in voxels), whose voxel coordinates are uniformly spaced in the interval [-1, 1] so as to be compatible with the coordinate range of the points.

For a single point p_i, we implement an interpolation kernel K_σ for its rasterization. Its interpolated response on voxel v in the grayscale grid G_i is computed as

G_i(v) = K_σ( d(v, x_i) ),   (1)

where d(·, ·) computes the distance between the voxel v and the point p_i; we use the Euclidean distance in our experiments. The interpolation kernel is defined as

K_σ(d) = κ_σ(d) if d ≤ r, and K_σ(d) = 0 otherwise,   (2)

where r is a preset radius within which voxels can receive a non-zero interpolated response from the point p_i, and κ_σ(d) decreases monotonically with the distance d. We set r to half the diagonal of a voxel cell (discussed in Sec. 4.3.2). The parameter σ adjusts the rate at which the interpolated response decreases as the distance between the voxel and the point grows, and is fixed to a default value in all experiments.

The rendering output is a multi-channel 3D grid G of dimension w × h × l × C, where each channel corresponds to one of the C channels of the point features. The value G(v) is a feature activation vector for voxel v, whose c-th channel is computed as

G(v)_c = ( Σ_{i=1}^{K} K_σ(d(v, x_i)) · f_{i,c} ) / ( Σ_{i=1}^{K} K_σ(d(v, x_i)) ),   (3)

where f_{i,c} is the c-th channel feature of the point p_i; voxels that receive no response from any point keep a zero activation. As a result, although the points in a local point set are distributed non-uniformly across space, this weighted-average aggregation of point features keeps the scale of activation on different voxels invariant to varying point density.
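The LGR operation of Eqs. (1)-(3) can be sketched in NumPy as below for a single local point set. The truncated Gaussian used as the decay term of the kernel, the grid resolution and the value of σ are illustrative assumptions; the text above only requires that the response vanish beyond the radius r and decay with distance inside it.

import numpy as np

def render_local_grid(xyz, feats, res=5, sigma=0.5):
    """xyz: (K, 3) local coordinates in [-1, 1]; feats: (K, C) point features.
    Returns a (res, res, res, C) grid rendered by weighted-average interpolation."""
    c = feats.shape[1]
    ticks = np.linspace(-1.0, 1.0, res)                       # voxel centers per axis
    vx, vy, vz = np.meshgrid(ticks, ticks, ticks, indexing="ij")
    voxels = np.stack([vx, vy, vz], axis=-1).reshape(-1, 3)   # (res^3, 3)
    cell = 2.0 / (res - 1)
    r = 0.5 * np.sqrt(3.0) * cell                             # half the voxel-cell diagonal
    # Pairwise Euclidean distances between voxels and points: (res^3, K)
    d = np.linalg.norm(voxels[:, None, :] - xyz[None, :, :], axis=-1)
    w = np.exp(-(d ** 2) / sigma ** 2) * (d <= r)             # truncated kernel responses
    denom = w.sum(axis=1, keepdims=True)
    denom[denom == 0.0] = 1.0                                 # voxels with no response stay zero
    grid = (w @ feats) / denom                                # Eq. (3): weighted average
    return grid.reshape(res, res, res, c)

# Example: render one local set of 64 points carrying a single height feature
local_xyz = np.random.uniform(-1.0, 1.0, size=(64, 3))
local_feat = np.random.rand(64, 1)
grid = render_local_grid(local_xyz, local_feat, res=5)
print(grid.shape)  # (5, 5, 5, 1)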

3.1.3 Perceptual Feature Extraction

This step perceives and abstracts the spatial patterns in a rendered grid G into a C'-dim feature vector g by performing a function F:

g = F(G) = P( E(G) ),   (4)

where E and P are functions for feature embedding and spatial pooling, respectively.

Concretely, we apply two 3D convolutional layers followed by a global max pooling layer, forming a shared mini-CNN, to abstract the patterns in each grid. Since the mini-CNN consists of only a handful of convolutional layers applied to a set of low-resolution grids, massive convolutions over dense grids are avoided and efficiency is retained.
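A minimal PyTorch sketch of such a mini-CNN is given below, assuming 3×3×3 kernels as in the ablation of Sec. 4.3.4 and channels-first grid layout; the padding, batch normalization, ReLU activations and channel widths are illustrative choices not specified above.

import torch
import torch.nn as nn

class MiniCNN(nn.Module):
    """Two 3D convolutional layers (E) followed by global max pooling (P)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels), nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels), nn.ReLU(inplace=True),
        )

    def forward(self, grids):
        # grids: (M, C_in, w, h, l) rendered local grids -> (M, C_out) features
        x = self.embed(grids)
        return torch.amax(x, dim=(2, 3, 4))   # global max pooling over the grid

# Example: 2048 rendered 5x5x5 grids with 4 input channels -> 128-dim features
net = MiniCNN(in_channels=4, out_channels=128)
features = net(torch.rand(2048, 4, 5, 5, 5))
print(features.shape)  # torch.Size([2048, 128])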

3.2 LGR-Net Architecture

The SP module samples a subset of N' points from the N input points and enriches each sampled point with local context from a neighboring cube region with a preset half-edge length L. By stacking SP modules with decreasing N' and increasing L, fewer points with context from larger regions can be obtained progressively. We thus build a new backbone, LGR-Net, by forming a hierarchy of SP modules for point feature extraction.

Concretely, LGR-Net comprises four SP modules with N' decreasing as 2048, 1024, 512, 256 and L increasing as 0.15, 0.3, 0.6, 1.0 meters, respectively. In addition, two Feature Propagation (FP) layers [27] upsample the output of the 4th SP module back to 1024 points with 256-dim features, by interpolating the features of input points onto output points (each output point's feature is the inverse-distance weighted average of the features of its three nearest input points). Details of the LGR-Net architecture are specified in Table 1.

Module        SP1               SP2               SP3              SP4
Input size    N × (3+1)         2048 × 128        1024 × 256       512 × 256
Sampling      (2048, 0.15, 64)  (1024, 0.3, 32)   (512, 0.6, 16)   (256, 1.0, 16)
Mini-CNN      2 × conv 3×3×3    2 × conv 3×3×3    2 × conv 3×3×3   2 × conv 3×3×3
              max pool          max pool          max pool         max pool
Output size   2048 × 128        1024 × 256        512 × 256        256 × 256
Table 1: Details of the LGR-Net architecture. Feature propagation layers are not shown here. Each module is specified by its input data size; its sampling parameters (number of subsampled points, half-edge length for cube query in meters, number of queried neighboring points); its mini-CNN (3D convolutional layers followed by global max pooling); and its output data size
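For reference, the inverse-distance interpolation used by the FP layers above can be sketched as follows; the brute-force neighbor search and the small epsilon guarding against division by zero are simplifications for illustration.

import numpy as np

def propagate_features(src_xyz, src_feat, dst_xyz, k=3, eps=1e-8):
    """Interpolate features from source points (Ns, 3)/(Ns, C) onto destination
    points (Nd, 3): each output feature is the inverse-distance weighted average
    of the k nearest source features."""
    d = np.linalg.norm(dst_xyz[:, None, :] - src_xyz[None, :, :], axis=-1)  # (Nd, Ns)
    nn_idx = np.argsort(d, axis=1)[:, :k]                   # indices of k nearest sources
    nn_d = np.take_along_axis(d, nn_idx, axis=1)            # (Nd, k) distances
    w = 1.0 / (nn_d + eps)                                  # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum("nk,nkc->nc", w, src_feat[nn_idx])     # weighted average of features

# Example: upsample 256 points with 256-dim features back onto 1024 points
upsampled = propagate_features(np.random.rand(256, 3), np.random.rand(256, 256),
                               np.random.rand(1024, 3))
print(upsampled.shape)  # (1024, 256)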

3.3 Application for Detection

Our LGR-Net is generic. We apply it to object detection considering its ability to extract fine-grained local geometric structures that are crucial to detection performance. We build a point-based detection model based on the recent successful VoteNet [23] framework, which directly works on raw point clouds and outputs object proposals in one forward pass by using a backbone network, a voting module and a proposal module.

Concretely, we apply LGR-Net as the backbone network to output a subset of input points featured with local patterns, which serve as seed points. The voting module [23], implemented as an MLP, generates votes from each seed independently. Every vote is a 3D point whose coordinates are regressed to approach an object center, together with a refined feature vector. The proposal module, implemented as a set abstraction layer [27], groups the votes into clusters and aggregates their features to generate object proposals. These proposals are finally classified and filtered by non-maximum suppression (NMS) to output the 3D bounding boxes.
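To make the voting step concrete, below is a hedged PyTorch sketch in the spirit of VoteNet: a shared MLP maps each seed feature to a coordinate offset toward an object center plus a feature residual. The layer widths and the residual formulation are illustrative assumptions, not necessarily the exact VoteNet module.

import torch
import torch.nn as nn

class VotingModule(nn.Module):
    """Shared MLP turning each seed point into a vote (offset xyz + refined feature)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 3 + feat_dim),      # xyz offset + feature residual
        )

    def forward(self, seed_xyz, seed_feat):
        # seed_xyz: (B, M, 3), seed_feat: (B, M, C)
        out = self.mlp(seed_feat)
        vote_xyz = seed_xyz + out[..., :3]          # votes regressed toward object centers
        vote_feat = seed_feat + out[..., 3:]        # refined vote features
        return vote_xyz, vote_feat

# Example: 1024 seed points with 256-dim features for a batch of 2 scenes
voter = VotingModule(feat_dim=256)
vote_xyz, vote_feat = voter(torch.rand(2, 1024, 3), torch.rand(2, 1024, 256))
print(vote_xyz.shape, vote_feat.shape)  # (2, 1024, 3) and (2, 1024, 256)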

4 Experiments

In this section, we first compare our method with previous state-of-the-art methods on two popular 3D object detection benchmarks of indoor scenes. We then provide experimental analysis to validate our design choices. Finally, qualitative results along with discussions are given.

4.1 Implementation Details

Our detection model is based on the VoteNet [23] framework, so we adopt the same input and data augmentation scheme as VoteNet for a fair comparison. Specifically, the network input is a set of points randomly subsampled on the fly from a popped-up depth image (20k points) or a 3D scan (mesh vertices, 40k points). Each point is represented by a 4-dim vector of xyz-coordinates plus a height feature indicating the distance to the floor [23]. Several augmentations are applied to the points, including random flipping in both horizontal directions, random rotation around the upright axis, and random scaling.

The entire network is trained end-to-end from scratch. We adopt the Adam optimizer with a step-wise schedule in which the initial learning rate is decayed twice during training. For inference, we perform 3D non-maximum suppression on the generated object proposals with a fixed IoU threshold. The evaluation protocol is Average Precision (AP), following Song et al. [33].

Methods Input bathtub bed bkshf chair desk dresser ntstd sofa table toilet mAP
DSS [33] Geo+RGB 44.2 78.8 11.9 61.2 20.5 6.4 15.4 53.5 50.3 78.9 42.1
COG [28] Geo+RGB 58.3 63.7 31.8 62.2 45.2 15.5 27.4 51.0 51.3 70.1 47.6
2D-driven [14] Geo+RGB 43.5 64.5 31.4 48.3 27.9 25.9 41.9 50.4 37.0 80.4 45.1
PointFusion [36] Geo+RGB 37.3 68.6 37.7 55.1 17.2 23.9 32.3 53.8 31.0 83.8 45.4
F-PointNet [24] Geo+RGB 43.3 81.1 33.3 64.2 24.7 32.0 58.1 61.1 51.1 90.9 54.0
VoteNet [23] Geo only 74.4 83.0 28.8 75.3 22.0 29.8 62.2 64.0 47.3 90.1 57.7
LGR-Net (ours) Geo only 80.2 85.1 39.7 76.7 29.4 35.1 66.5 67.6 52.7 89.1 62.2
Table 2: 3D object detection results on SUN RGB-D val set. Evaluation is measured by average precision with 3D IoU threshold of 0.25 [31], and conducted on SUN RGB-D V1 data for fair comparisons with previous methods
Methods Input mAP@0.25 mAP@0.5
DSS [33, 12] Geo + RGB 15.2 6.8
MRCNN 2D-3D [11, 12] Geo + RGB 17.3 10.5
F-PointNet [24, 12] Geo + RGB 19.8 10.8
GSPN [38] Geo + RGB 30.6 17.7
3D-SIS [12] Geo + 1 view 35.1 18.7
3D-SIS [12] Geo + 3 views 36.6 19.0
3D-SIS [12] Geo + 5 views 40.2 22.5
3D-SIS [12] Geo only 25.4 14.6
VoteNet [23] Geo only 58.6 33.5
LGR-Net (ours) Geo only 64.1 42.0
Table 3: 3D object detection results on ScanNetV2 val set. Evaluation metric is mean average precision (mAP) with 3D IoU thresholds of 0.25 and 0.5

4.2 Comparing with State-of-the-Arts

4.2.1 Benchmark Datasets.

SUN RGB-D [31] contains about 10K single-view RGB-D images (around 5,000 for training) with dense annotations of oriented 3D bounding boxes for 37 object categories. We reconstruct point clouds from the depth images using the provided camera parameters, as in VoteNet. Following the standard evaluation protocol [33], we only train and report results on the 10 most common categories. ScanNetV2 [5] is a large-scale RGB-D dataset containing 1,513 scans of over 700 unique indoor scenes. The scans are annotated with surface reconstructions, textured meshes, and semantic and instance segmentation for 18 object categories. We reconstruct input point clouds by sampling vertices from the reconstructed meshes and predict axis-aligned bounding boxes [23]. We use 1,201 scenes for training and 312 scenes for testing [5].

4.2.2 Methods in Comparison.

2D-driven [14], PointFusion [36] and F-PointNet [24] are 2D-driven 3D detection methods, which benefit from detection techniques for 2D images and use 2D information to reduce the search space in 3D. Cloud of Gradients (COG) [28] designs new 3D HoG-like features to detect objects in a sliding-shape manner. MRCNN 2D-3D [11, 12] estimates 3D bounding boxes by directly projecting instance segmentation results from Mask-RCNN [11] into 3D. GSPN [38] utilizes a generative model to generate object proposals for instance segmentation in point clouds.

Notably, Deep Sliding Shapes (DSS) [33] and 3D-SIS [12] are voxelization-based detectors, which convert the entire input point clouds to dense grids and employ 3D CNNs to learn from both geometry and RGB cues for object proposal generation and classification. VoteNet [23] is a point-based detector, which adopts PointNet++ [27] as backbone to abstract and aggregate point features to generate proposals and classify them.

4.2.3 Detection Performance.

Tables 2 and 3 show detection results on SUN RGB-D and ScanNet, respectively. Our model outperforms all prior arts by large margins on both benchmarks, i.e., by at least 4.5 and 5.5 mAP, respectively. Note that our model takes only point clouds as input, while most previous methods use both geometry and RGB or even multi-view RGB images. In particular, Table 2 shows our model achieves better results on nearly all categories than both the best-performing point-based detector (VoteNet) and the voxelization-based detector (DSS). Similar conclusions can be drawn from Table 3: LGR-Net significantly outperforms both VoteNet and the voxelization-based methods (DSS and 3D-SIS), evidencing the superiority of LGR-Net in local pattern abstraction and, in turn, detection. In addition, Table 3 shows that the stricter mAP@0.5 criterion is boosted by a large 8.5 points, suggesting our model improves object localization thanks to the learned fine-grained geometry.

4.2.4 Speed.

Our model is computationally efficient since, thanks to the LGR operation, it only applies small-size CNNs to a set of low-resolution grids. Table 4 shows that our model is more than 8 times faster than the voxelization-based 3D-SIS, which applies CNNs to a dense grid derived from the entire point cloud. In addition, our model is comparable to the point-based VoteNet in terms of speed, while achieving significantly better detection performance.

Methods 3D-SIS [12] VoteNet [23] LGR-Net (ours)
Time per scan (s)
Table 4: Processing time per scan on ScanNetV2. Our method is more than 8 times faster than voxelization-based 3D-SIS [12] and comparable to point-based VoteNet [23]

4.3 Ablation Studies

4.3.1 How does LGR help?

We argue that leveraging 3D CNNs through the LGR operation enables better abstraction of fine-grained structures than conventional point-based models. However, directly analyzing the learned features to support this argument is not trivial. Fortunately, the backbone network integrates the learned local patterns into the features of a set of output points used for predicting object proposals. Since fine-grained structures are crucial to correctly localizing and recognizing objects, an output point enriched with such features (called an informative point here) is more likely to generate good votes, and in turn a good object proposal (with the correct class and sufficient IoU with the ground truth). Therefore, a backbone better at capturing fine-grained structures should produce more informative points given a fixed number of output points.

Fig. 3 demonstrates this phenomenon. We analyze the points output by the backbone and trace those that can generate an accurate object proposal. One can see that VoteNet (left) fails to produce informative points on some objects (especially small ones) to support detection. In comparison, our model (right) offers more informative points with a denser coverage of the scene, especially on objects that VoteNet misses, evidencing its superiority in exploiting fine-grained geometry.

Figure 3: LGR helps abstract fine-grained geometry. We trace points (output by the backbone) that generate good votes which in turn generate good object proposals, and overlay such points (in blue) on top of input ScanNet scene points. Our model shows a broader coverage of the scene, especially on some small objects (in orange box) missed by VoteNet, proving LGR-Net better preserves fine-grained geometry and thus benefits detection

4.3.2 Effect of Feature Aggregation

The radius r in Eq. (2) controls how much feature from neighboring points is aggregated to each voxel. We here investigate how it affects feature aggregation and, in turn, detection. We take half the diagonal of a voxel cell as one unit of radius. Fig. 4 (left) shows that as the radius increases, the mAP peaks at a radius of 1 unit. Embracing more points from a larger neighborhood, though introducing more context, excessively dilutes the features contributed by a voxel's nearby points, which blurs the voxelized representation and hurts detection performance.

We proceed with a second analysis on the influence of different feature aggregation methods. We test two alternatives, average pooling and nearest neighbor, which compute the activation on each voxel by averaging the features of its neighboring points and by directly taking its nearest point's features, respectively (both tested with a radius of 1 unit). Fig. 4 (right) shows that aggregation via interpolation outperforms both alternatives. One possible explanation is that it not only accumulates appropriate context but also respects the positional correspondence between voxels and points.

Aggregation method   mAP
Avg. pooling         62.6
Nearest neighbor     62.4
Interpolation        64.1
Figure 4: Feature aggregation analysis. Left: mAP@0.25 on ScanNetV2 for varying aggregation radii (aggregation via interpolation). Right: comparison of different aggregation methods (radius = 1 unit)
Mini-CNN             mAP
1 × conv 3×3×3       61.2
2 × conv 1×1×1       58.1
2 × conv 3×3×3       64.1
2 × conv 5×5×5       63.0
Figure 5: Analysis of grid resolution and mini-CNN architecture. Left: mAP@0.25 on ScanNetV2 for varying grid resolutions (two 3×3×3 convolutional layers). Right: comparison of different mini-CNN architectures (grid resolution = 5×5×5)

4.3.3 Effect of Grid Resolution

The resolution of the rendered grid influences the degree of quantization of local geometric structures and also the computation overhead. To determine its value, we compare detection performance and processing time (per scan) for different grid resolutions. Fig. 5 (left) shows that a moderate resolution of 5×5×5 (voxels) gives the best detection performance. One explanation is that a lower resolution such as 3×3×3 introduces quantization artifacts during rendering, which obscure fine-grained geometry. On the contrary, projecting a point set with a limited number of points onto a grid of higher resolution such as 7×7×7 causes sparsity in voxel activation, which makes pattern learning difficult and increases processing time. We thus adopt 5×5×5 as the grid resolution throughout our experiments.

4.3.4 Effect of Mini-CNN Architecture

We are interested in how different mini-CNN architectures influence local pattern abstraction. Fig. 5 (right) reports detection performance for different numbers and kernel sizes of convolutional layers. One can see that using two 1×1×1 convolutional layers (akin to the point-wise MLPs used by VoteNet) leads to a largely decreased mAP (58.1), similar to the VoteNet performance (58.6). This indicates that leveraging larger convolutional kernels to model spatial relations among voxels is the key to effectively capturing fine-grained geometric structures. In addition, two 3×3×3 convolutional layers (followed by a global max pooling) are sufficient for abstracting patterns in the grid. Since no noticeable improvement is observed from further increasing the number or kernel size of convolutional layers, we use this setting throughout our experiments.

4.3.5 Effect of Query Strategy

We adopt cube query to construct local point sets for sensed regions. To validate this design choice, we further test ball query as used by PointNet++ [27]. For a fair comparison, we set the cube and the ball to have the same volume in each SP module. Experiments show that either query strategy obtains satisfying results, with cube query performing slightly better than ball query in mAP. We argue that the cube query's local neighborhood is more compatible in shape with the rendered grid than the ball query's. This guarantees a more complete coverage of voxels when interpolating queried points onto the grid, which yields a more informative grid representation and thus facilitates pattern abstraction.

Figure 6: Qualitative results of 3D object detection on SUN RGB-D. Each row shows (from left to right): an image of the scene (not used by our network), 3D object detection by our model and by VoteNet, and ground-truth annotations
Figure 7: Qualitative results of 3D object detection on ScanNetV2. Each row shows (from left to right): 3D object detection by our model and by VoteNet, and ground-truth annotations

4.4 Qualitative Results and Discussion

In Fig. 6, we provide qualitative results of our model and VoteNet on SUN RGB-D to show how extracting point features through LGR-Net benefits detection in various ways. The first two examples show that a cluttered dresser/bookshelf is missed by VoteNet, while our model is able to collect sufficient geometric cues at the corresponding positions and thus gains the confidence to recognize the object. The second example also reveals the strength of our model in avoiding false positives, such as the spurious chair in the VoteNet output, indicating that our model is superior in learning informative region features to resolve local ambiguities. These results evidence that our model effectively improves the features extracted from point clouds and thus helps localize and recognize objects. The last example shows a less successful prediction, where only an extremely partial observation of the chairs is given. Similar advantages of our approach are also revealed by the qualitative results on ScanNetV2, shown in Fig. 7.

5 Conclusions

In this work, we propose a novel Local Grid Rendering (LGR) operation that allows using small-size CNNs to effectively abstract fine-grained geometric structures while preserving computational efficiency. Based on the LGR operation, we further introduce LGR-Net, a simple yet efficient backbone for point cloud feature extraction, and apply it to object detection. With only point cloud input, our model shows significant improvements over prior arts. The proposed LGR-Net is generic; in future work we intend to apply it to downstream tasks such as 3D instance segmentation in point clouds.

References

  • [1] A. Avetisyan, M. Dahnert, A. Dai, M. Savva, A. X. Chang, and M. Nießner (2019) Scan2CAD: learning cad model alignment in rgb-d scans. In CVPR, pp. 2614–2623. Cited by: §2.0.2.
  • [2] M. M. Bronstein and I. Kokkinos (2010) Scale-invariant heat kernel signatures for non-rigid shape recognition. In CVPR, pp. 1704–1711. Cited by: §2.0.1.
  • [3] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia (2017) Multi-view 3d object detection network for autonomous driving. In CVPR, pp. 1907–1915. Cited by: §2.0.2.
  • [4] C. Choy, J. Gwak, and S. Savarese (2019) 4d spatio-temporal convnets: minkowski convolutional neural networks. In CVPR, pp. 3075–3084. Cited by: §2.0.2.
  • [5] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner (2017) Scannet: richly-annotated 3d reconstructions of indoor scenes. In CVPR, pp. 5828–5839. Cited by: §1, §2.0.2, §4.2.1.
  • [6] M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner (2017) Vote3deep: fast object detection in 3d point clouds using efficient convolutional neural networks. In ICRA, pp. 1355–1361. Cited by: §2.0.3.
  • [7] A. Golovinskiy, V. G. Kim, and T. Funkhouser (2009) Shape-based recognition of 3d point clouds in urban environments. In ICCV, pp. 2154–2161. Cited by: §2.0.1.
  • [8] R. B. Gomes, B. M. F. da Silva, L. K. de Medeiros Rocha, R. V. Aroca, L. C. P. R. Velho, and L. M. G. Gonçalves (2013) Efficient 3d object recognition using foveated point clouds. Computers & Graphics 37 (5), pp. 496–508. Cited by: §2.0.1.
  • [9] R. Gomez-Ojeda, J. Briales, and J. Gonzalez-Jimenez (2016) Pl-svo: semi-direct monocular visual odometry by combining points and line segments. In IROS, pp. 4211–4216. Cited by: §1.
  • [10] B. Graham, M. Engelcke, and L. van der Maaten (2018) 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, pp. 9224–9232. Cited by: §2.0.2.
  • [11] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In ICCV, pp. 2961–2969. Cited by: §4.2.2, Table 3.
  • [12] J. Hou, A. Dai, and M. Nießner (2019) 3D-sis: 3d semantic instance segmentation of rgb-d scans. In CVPR, pp. 4421–4430. Cited by: §1, §2.0.3, §4.2.2, §4.2.2, Table 3, Table 4.
  • [13] R. Klokov and V. Lempitsky (2017) Escape from cells: deep kd-networks for the recognition of 3d point cloud models. In ICCV, pp. 863–872. Cited by: §2.0.3.
  • [14] J. Lahoud and B. Ghanem (2017) 2D-driven 3d object detection in rgb-d images. In CVPR, pp. 4622–4630. Cited by: §4.2.2, Table 2.
  • [15] S. Lan, R. Yu, G. Yu, and L. S. Davis (2019) Modeling local geometric structure of 3d point clouds using geo-cnn. In CVPR, pp. 998–1008. Cited by: §1, §2.0.1.
  • [16] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019) PointPillars: fast encoders for object detection from point clouds. In CVPR, pp. 12697–12705. Cited by: §2.0.2.
  • [17] T. Le and Y. Duan (2018) Pointgrid: a deep network for 3d shape understanding. In CVPR, pp. 9204–9214. Cited by: §1.
  • [18] O. Litany, T. Remez, D. Freedman, L. Shapira, A. Bronstein, and R. Gal (2017) ASIST: automatic semantically invariant scene transformation. Computer Vision and Image Understanding 157, pp. 284–299. Cited by: §2.0.2.
  • [19] J. Mao, X. Wang, and H. Li (2019) Interpolated convolutional networks for 3d point cloud understanding. In ICCV, pp. 1578–1587. Cited by: §2.0.1.
  • [20] L. Nan, K. Xie, and A. Sharf (2012) A search-classify approach for cluttered indoor scene understanding. ACM TOG 31 (6), pp. 1–10. Cited by: §2.0.2.
  • [21] Y. Oh and Y. Watanabe (2002) Development of small robot for home floor cleaning. In SICE, Vol. 5, pp. 3222–3223. Cited by: §1.
  • [22] Y. Park, V. Lepetit, and W. Woo (2008) Multiple 3d object tracking for augmented reality. In ISMAR, pp. 117–120. Cited by: §1.
  • [23] C. R. Qi, O. Litany, K. He, and L. J. Guibas (2019) Deep hough voting for 3d object detection in point clouds. In ICCV, pp. 9277–9286. Cited by: §1, §1, §1, §2.0.2, §3.3, §3.3, §4.1, §4.2.1, §4.2.2, Table 2, Table 3, Table 4.
  • [24] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas (2018) Frustum pointnets for 3d object detection from rgb-d data. In CVPR, pp. 918–927. Cited by: §4.2.2, Table 2, Table 3.
  • [25] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3d classification and segmentation. In CVPR, pp. 652–660. Cited by: §2.0.1.
  • [26] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas (2016) Volumetric and multi-view cnns for object classification on 3d data. In CVPR, pp. 5648–5656. Cited by: §2.0.3.
  • [27] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) Pointnet++: deep hierarchical feature learning on point sets in a metric space. In NeurIPS, pp. 5099–5108. Cited by: §1, §1, §2.0.1, §2.0.2, §3.1.1, §3.2, §3.3, §4.2.2, §4.3.5.
  • [28] Z. Ren and E. B. Sudderth (2016) Three-dimensional object detection and layout prediction using clouds of oriented gradients. In CVPR, pp. 1525–1533. Cited by: §4.2.2, Table 2.
  • [29] G. Riegler, A. Osman Ulusoy, and A. Geiger (2017) Octnet: learning deep 3d representations at high resolutions. In CVPR, pp. 3577–3586. Cited by: §2.0.3.
  • [30] S. Shi, X. Wang, and H. Li (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In CVPR, pp. 770–779. Cited by: §2.0.2.
  • [31] S. Song, S. P. Lichtenberg, and J. Xiao (2015) SUN rgb-d: a rgb-d scene understanding benchmark suite. In CVPR, pp. 567–576. Cited by: §1, §2.0.2, §4.2.1, Table 2.
  • [32] S. Song and J. Xiao (2014) Sliding shapes for 3d object detection in depth images. In ECCV, pp. 634–651. Cited by: §2.0.2.
  • [33] S. Song and J. Xiao (2016) Deep sliding shapes for amodal 3d object detection in rgb-d images. In CVPR, pp. 808–816. Cited by: §1, §2.0.3, §4.1, §4.2.1, §4.2.2, Table 2, Table 3.
  • [34] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon (2019) Dynamic graph cnn for learning on point clouds. ACM TOG 38 (5), pp. 1–12. Cited by: §1, §1, §2.0.1.
  • [35] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao (2015) 3d shapenets: a deep representation for volumetric shapes. In CVPR, pp. 1912–1920. Cited by: §2.0.3.
  • [36] D. Xu, D. Anguelov, and A. Jain (2018) Pointfusion: deep sensor fusion for 3d bounding box estimation. In CVPR, pp. 244–253. Cited by: §4.2.2, Table 2.
  • [37] Q. Xu (2019) Grid-gcn for fast and scalable point cloud learning. arXiv preprint arXiv:1912.02984. Cited by: §3.1.1.
  • [38] L. Yi, W. Zhao, H. Wang, M. Sung, and L. J. Guibas (2019) Gspn: generative shape proposal network for 3d instance segmentation in point cloud. In CVPR, pp. 3947–3956. Cited by: §2.0.2, §4.2.2, Table 3.
  • [39] Y. Zhou and O. Tuzel (2018) Voxelnet: end-to-end learning for point cloud based 3d object detection. In CVPR, pp. 4490–4499. Cited by: §1, §2.0.3.