KPConv: Flexible and Deformable Convolution for Point Clouds
We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e., one that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPConv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv.
The dawn of deep learning has boosted modern computer vision with discrete convolution as its fundamental building block. This operation combines the data of local neighborhoods on a 2D grid. Thanks to this regular structure, it can be computed with high efficiency on modern hardware. When deprived of this regular structure, however, the convolution operation has yet to be defined properly, with the same efficiency as on 2D grids.
Many applications relying on such irregular data have grown with the rise of 3D scanning technologies. For example, 3D point cloud segmentation or 3D simultaneous localization and mapping rely on non-grid structured data: point clouds. A point cloud is a set of points in 3D (or higher-dimensional) space. In many applications, the points are coupled with corresponding features like colors. In this work, we will always consider a point cloud as those two elements: the points $\mathcal{P} \in \mathbb{R}^{N \times 3}$ and the features $\mathcal{F} \in \mathbb{R}^{N \times D}$. Such a point cloud is a sparse structure that has the property of being unordered, which makes it very different from a grid. However, it shares a common property with a grid which is essential to the definition of convolutions: it is spatially localized. In a grid, the features are localized by their index in a matrix, while in a point cloud, they are localized by their corresponding point coordinates. Thus, the points are to be considered as structural elements, and the features as the real data.
Various approaches have been proposed to handle such data, and can be grouped into different categories that we develop in the related work section. Several methods fall into the grid-based category, whose principle is to project the sparse 3D data on a regular structure where a convolution operation can be defined more easily [24, 29, 34]. Other approaches use multilayer perceptrons (MLPs) to process point clouds directly, following the idea proposed by [49, 26]. More recently, some attempts have been made to design a convolution that operates directly on points [2, 45, 20, 14, 13]. These methods use the spatial localization property of a point cloud to define point convolutions with spatial kernels. They share the idea that a convolution should define a set of customizable spatial filters applied locally in the point cloud.
This paper introduces a new point convolution operator named Kernel Point Convolution (KPConv). KPConv also consists of a set of local 3D filters, but overcomes previous point convolution limitations, as shown in the related work. KPConv is inspired by image-based convolution, but in place of kernel pixels, we use a set of kernel points to define the area where each kernel weight is applied, as shown in Figure 1. The kernel weights are thus carried by points, like the input features, and their area of influence is defined by a correlation function [32]. The number of kernel points is not constrained, making our design very flexible.
Furthermore, we propose a deformable version of our convolution [7], which consists of learning local shifts applied to the kernel points (see Figure 3). Our network generates different shifts at each convolution location, meaning that it can adapt the shape of its kernels for different regions of the input cloud. Our deformable convolution is not designed the same way as its image counterpart. Due to the different nature of the data, it needs a regularization to help the deformed kernels fit the point cloud geometry and avoid empty space. We use Effective Receptive Field (ERF) [22] and ablation studies to compare rigid KPConv with deformable KPConv.
Unlike previous point convolution works, we favor radius neighborhoods instead of k-nearest-neighbors (KNN). As shown by [13], KNN is not robust in non-uniform sampling settings. The robustness of our convolution to varying densities is ensured by the combination of radius neighborhoods and regular subsampling of the input cloud [38]. Compared to normalization strategies [13, 14], our approach also alleviates the computational cost of our convolution.

In our experiments section, we show that KPConv can be used to build very deep architectures for classification and segmentation, while keeping fast training and inference times. Overall, rigid and deformable KPConv both perform very well, topping competing algorithms on several datasets. We find that rigid KPConv achieves better performances on simpler tasks, like object classification, or small segmentation datasets. Deformable KPConv thrives on more difficult tasks, like large segmentation datasets offering many object instances and greater diversity. We also show that deformable KPConv is more robust to a lower number of kernel points, which implies a greater descriptive power. Last but not least, a qualitative study of the KPConv ERF shows that deformable kernels improve the network's ability to adapt to the geometry of the scene objects.
Figure 2: Comparison between an image convolution (left) and a KPConv (right) on 2D points for a simpler illustration. In the image, each pixel feature vector is multiplied by a weight matrix assigned by the alignment of the kernel with the image. In KPConv, input points are not aligned with kernel points, and their number can vary. Therefore, each point feature is multiplied by all the kernel weight matrices, with a correlation coefficient depending on its relative position to kernel points.

In this section, we briefly review previous deep learning methods to analyze point clouds, paying particular attention to the methods closest to our definition of point convolutions.
Projection networks. Several methods project points to an intermediate grid structure. Image-based networks are often multi-view, using a set of 2D images rendered from the point cloud at different viewpoints [35, 4, 18]. For scene segmentation, these methods suffer from occluded surfaces and density variations. Instead of choosing a global projection viewpoint, [36] proposed projecting local neighborhoods to local tangent planes and processing them with 2D convolutions. However, this method relies heavily on tangent estimation.
In the case of voxel-based methods, the points are projected on 3D grids in Euclidean space [24, 30, 3]. Using sparse structures like octrees or hash-maps allows larger grids and enhanced performances [29, 9], but these networks still lack flexibility, as their kernels are constrained to a fixed number of voxels. Using a permutohedral lattice instead of a Euclidean grid reduces the number of kernel elements [34], but this number is still constrained, while KPConv allows any number of kernel points. Moreover, avoiding intermediate structures should make the design of more complex architectures, like instance mask detectors or generative models, more straightforward in future works.
Graph convolution networks. The definition of a convolution operator on a graph has been addressed in different ways. A convolution on a graph can be computed as a multiplication on its spectral representation [8, 48], or it can focus on the surface represented by the graph [23, 5, 33, 25]. Despite the similarity between point convolutions and the most recent graph convolutions [39, 43], the latter learn filters on edge relationships instead of points' relative positions. In other words, a graph convolution combines features on local surface patches, while being invariant to the deformations of those patches in Euclidean space. In contrast, KPConv combines features locally according to the 3D geometry, thus capturing the deformations of the surfaces.
Pointwise MLP networks. PointNet [26] is considered a milestone in point cloud deep learning. This network uses a shared MLP on every point individually, followed by a global max-pooling. The shared MLP acts as a set of learned spatial encodings, and the global signature of the input point cloud is computed as the maximal response among all the points for each of these encodings. The network's performances are limited because it does not consider local spatial relationships in the data. Following PointNet, some hierarchical architectures have been developed to aggregate local neighborhood information with MLPs [27, 19, 21].

As shown by [41, 20, 13], the kernel of a point convolution can be implemented with an MLP, because of its ability to approximate any continuous function. However, using such a representation makes the convolution operator more complex and the convergence of the network harder. In our case, we define an explicit convolution kernel, like image convolutions, whose weights are directly learned, without the intermediate representation of an MLP. Our design also offers a straightforward deformable version, as offsets can directly be applied to kernel points.
Point convolution networks. Some very recent works also defined explicit convolution kernels for points, but KPConv stands out with unique design choices.
Pointwise CNN [14] locates the kernel weights with voxel bins, and thus lacks flexibility like grid networks. Furthermore, their normalization strategy burdens their network with unnecessary computations, while the KPConv subsampling strategy alleviates both varying densities and computational cost.
SpiderCNN [45] defines its kernel as a family of polynomial functions applied with a different weight for each neighbor. The weight applied to a neighbor depends on the neighbor’s distance-wise order, making the filters spatially inconsistent. By contrast, KPConv weights are located in space and its result is invariant to point order.
Flex-convolution [10] uses linear functions to model its kernel, which could limit its representative power. It also uses KNN, which is not robust to varying densities as discussed above.
The design of PCNN [2] is the closest to KPConv. Its definition also uses points to carry kernel weights, and a correlation function. However, this design is not scalable because it does not use any form of neighborhood, making the convolution computations quadratic in the number of points. In addition, it uses a Gaussian correlation where KPConv uses a simpler linear correlation, which helps gradient backpropagation when learning deformations [7].

We show that KPConv networks outperform all comparable networks in the experiments section. Furthermore, to the best of our knowledge, none of the previous works experimented with a spatially deformable point convolution.
Like previous works, KPConv can be formulated with the general definition of a point convolution (Eq. 1), inspired by image convolutions. For the sake of clarity, we call $x_i \in \mathbb{R}^3$ the points from $\mathcal{P} \in \mathbb{R}^{N \times 3}$ and $f_i \in \mathbb{R}^D$ their corresponding features from $\mathcal{F} \in \mathbb{R}^{N \times D}$. The general point convolution of $\mathcal{F}$ by a kernel $g$ at a point $x \in \mathbb{R}^3$ is defined as:

$$(\mathcal{F} * g)(x) = \sum_{x_i \in \mathcal{N}_x} g(x_i - x) \, f_i \qquad (1)$$
We stand with [13] advising radius neighborhoods to ensure robustness to varying densities; therefore, $\mathcal{N}_x = \{ x_i \in \mathcal{P} \mid \lVert x_i - x \rVert \leq r \}$, with $r \in \mathbb{R}$ being the chosen radius. In addition, [38] showed that hand-crafted 3D point features offer a better representation when computed with radius neighborhoods than with KNN, and we believe that having a consistent domain for $g$ helps the network to learn meaningful representations.
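As an illustration, such a radius neighborhood can be computed with a brute-force search; the following NumPy sketch uses illustrative names and shapes (in practice a KD-tree or grid structure would be used for efficiency):

```python
import numpy as np

def radius_neighbors(query_points, support_points, radius):
    """Return, for each query point x, the indices of the support points x_i
    lying within `radius` of x (brute force, O(M*N))."""
    diff = query_points[:, None, :] - support_points[None, :, :]   # (M, N, 3)
    sq_dist = np.sum(diff ** 2, axis=-1)                           # (M, N)
    return [np.flatnonzero(row <= radius ** 2) for row in sq_dist]
```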
The crucial part in Eq. 1 is the definition of the kernel function $g$, which is where the singularity of KPConv lies. $g$ takes as input the neighbor positions centered on $x$. We call them $y_i = x_i - x$ in the following. As our neighborhoods are defined by a radius $r$, the domain of definition of $g$ is the ball $\mathcal{B}^3_r = \{ y \in \mathbb{R}^3 \mid \lVert y \rVert \leq r \}$. Like image convolution kernels (see Figure 2 for a detailed comparison between image convolution and KPConv), we want $g$ to apply different weights to different areas inside this domain. There are many ways to define areas in 3D space, and points are the most intuitive, as features are also localized by them. Let $\{\tilde{x}_k \mid k < K\} \subset \mathcal{B}^3_r$ be the kernel points and $\{W_k \mid k < K\} \subset \mathbb{R}^{D_{in} \times D_{out}}$ be the associated weight matrices that map features from dimension $D_{in}$ to $D_{out}$. We define the kernel function $g$ for any point $y_i \in \mathcal{B}^3_r$ as:

$$g(y_i) = \sum_{k < K} h(y_i, \tilde{x}_k) \, W_k \qquad (2)$$
where $h$ is the correlation between $y_i$ and $\tilde{x}_k$, which should be higher when $y_i$ is closer to $\tilde{x}_k$. Inspired by the bilinear interpolation in [7], we use the linear correlation:

$$h(y_i, \tilde{x}_k) = \max\left(0, \; 1 - \frac{\lVert y_i - \tilde{x}_k \rVert}{\sigma}\right) \qquad (3)$$
where $\sigma$ is the influence distance of the kernel points, chosen according to the input density (see Section 3.3). Compared to a Gaussian correlation, which is used by [2], linear correlation is a simpler representation. We advocate this simpler correlation to ease gradient backpropagation when learning kernel deformations. A parallel can be drawn with the rectified linear unit, which is the most popular activation function for deep neural networks, thanks to its efficiency for gradient backpropagation.
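For illustration, the rigid KPConv response of Eqs. 1-3 at a single location can be sketched in NumPy as follows; names and shapes are illustrative and this is not the released implementation:

```python
import numpy as np

def kpconv_at_point(x, neighbor_points, neighbor_feats, kernel_points, weights, sigma):
    """Rigid KPConv at one location x.

    neighbor_points: (n, 3) points x_i in the radius neighborhood of x
    neighbor_feats:  (n, D_in) features f_i
    kernel_points:   (K, 3) kernel points
    weights:         (K, D_in, D_out) weight matrices W_k
    sigma:           influence distance of the kernel points
    """
    y = neighbor_points - x                                         # centered neighbors y_i
    dists = np.linalg.norm(y[:, None, :] - kernel_points[None, :, :], axis=-1)  # (n, K)
    h = np.maximum(0.0, 1.0 - dists / sigma)                        # linear correlation (Eq. 3)
    # sum_i sum_k h(y_i, x_k) * (f_i W_k), computed with two contractions
    weighted = np.einsum('nk,nd->kd', h, neighbor_feats)            # (K, D_in)
    return np.einsum('kd,kde->e', weighted, weights)                # (D_out,)
```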
Kernel point positions are critical to the convolution operator. Our rigid kernels in particular need to be arranged regularly to be efficient. As we claimed that one of the strengths of KPConv is its flexibility, we need to find a regular disposition for any $K$. We chose to place the kernel points by solving an optimization problem where each point applies a repulsive force on the others. The points are constrained to stay in the sphere with an attractive force, and one of them is constrained to be at the center. We detail this process and show some regular dispositions in the supplementary material. Eventually, the surrounding points are rescaled to an average radius of $1.5\sigma$, ensuring a small overlap between the areas of influence of the kernel points and a good coverage of the space.
With properly initialized kernels, the rigid version of KPConv is extremely efficient, in particular when given a large enough $K$ to cover the spherical domain of $g$. However, it is possible to increase its capacity by learning the kernel point positions. The kernel function $g$ is indeed differentiable with respect to $\tilde{x}_k$, which means the kernel points are learnable parameters. We could consider learning one global set of $\{\tilde{x}_k\}$ for each convolution layer, but it would not bring more descriptive power than a fixed regular disposition. Instead, the network generates a set of $K$ shifts $\Delta(x)$ for every convolution location $x \in \mathbb{R}^3$, like [7], and we define deformable KPConv as:

$$(\mathcal{F} * g)(x) = \sum_{x_i \in \mathcal{N}_x} g_{deform}(x_i - x, \Delta(x)) \, f_i \qquad (4)$$

$$g_{deform}(y_i, \Delta(x)) = \sum_{k < K} h\big(y_i, \tilde{x}_k + \Delta_k(x)\big) \, W_k \qquad (5)$$
We define the offsets $\Delta_k(x)$ as the output of a rigid KPConv mapping $D_{in}$ input features to $3K$ values, as shown in Figure 3. During training, the network learns the rigid kernel generating the shifts and the deformable kernel generating the output features simultaneously, but the learning rate of the first one is set to $0.1$ times the global network learning rate.
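Following the same illustrative conventions as the rigid sketch above, the deformable case of Eqs. 4-5 only shifts the kernel points before computing the correlations; here the shifts are passed as an argument instead of being predicted by the rigid KPConv that generates them in our networks:

```python
import numpy as np

def deformable_kpconv_at_point(x, neighbor_points, neighbor_feats,
                               kernel_points, weights, shifts, sigma):
    """Deformable KPConv at one location x; `shifts` is the (K, 3) array of
    offsets Delta_k(x) predicted for this location."""
    y = neighbor_points - x
    shifted = kernel_points + shifts                                # x_k + Delta_k(x)
    dists = np.linalg.norm(y[:, None, :] - shifted[None, :, :], axis=-1)
    h = np.maximum(0.0, 1.0 - dists / sigma)
    weighted = np.einsum('nk,nd->kd', h, neighbor_feats)
    return np.einsum('kd,kde->e', weighted, weights)
```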
Given the different nature of point clouds and images, we found it useful to add a regularization loss. Compared to an image, an input point cloud is sparse, and a kernel point has no effect if no input point lies in its area of influence. In the rigid setup, such empty kernel points cannot be avoided, because the same kernel point still has an effect at other locations, but with deformable KPConv, all the kernel points can have an effect at any location: the network just needs to generate shifts that fit the local geometry of the input point cloud. We propose a "fitting" regularization loss which penalizes the distance between a kernel point and its closest neighbor among the input neighbors. To prevent the kernel points from collapsing together, we also add a "repulsive" regularization loss between all pairs of kernel points whose influence areas overlap. As a whole, our regularization loss over all convolution locations $x$ is:
$$\mathcal{L}_{reg} = \sum_{x} \mathcal{L}_{fit}(x) + \mathcal{L}_{rep}(x) \qquad (6)$$

$$\mathcal{L}_{fit}(x) = \sum_{k < K} \min_{y_i} \left( \frac{\lVert y_i - (\tilde{x}_k + \Delta_k(x)) \rVert}{\sigma} \right)^2 \qquad (7)$$

$$\mathcal{L}_{rep}(x) = \sum_{k < K} \sum_{l \neq k} h\big(\tilde{x}_k + \Delta_k(x), \, \tilde{x}_l + \Delta_l(x)\big)^2 \qquad (8)$$
We show the effect of our regularization strategy in the supplementary material.
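As an illustration, the two regularization terms at a single convolution location can be sketched as follows, assuming the squared penalties of Eqs. 7-8 (illustrative names, not the released implementation):

```python
import numpy as np

def fitting_and_repulsive_losses(deformed_kernel, neighbors, sigma):
    """Regularization terms at one convolution location.

    deformed_kernel: (K, 3) shifted kernel points x_k + Delta_k(x)
    neighbors:       (n, 3) centered input neighbors y_i
    """
    # Fitting loss: squared (distance / sigma) to the closest input neighbor (Eq. 7)
    d_kn = np.linalg.norm(deformed_kernel[:, None, :] - neighbors[None, :, :], axis=-1)
    l_fit = np.sum((d_kn.min(axis=1) / sigma) ** 2)

    # Repulsive loss: penalize kernel point pairs whose influence areas overlap (Eq. 8)
    d_kk = np.linalg.norm(deformed_kernel[:, None, :] - deformed_kernel[None, :, :], axis=-1)
    overlap = np.maximum(0.0, 1.0 - d_kk / sigma)
    np.fill_diagonal(overlap, 0.0)          # ignore self-pairs
    l_rep = np.sum(overlap ** 2)
    return l_fit, l_rep
```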
This section elucidates how we effectively put the KPConv theory into practice. For further details, we have released our code, which uses the TensorFlow library.
Subsampling to deal with varying densities. As explained in the introduction, we use a subsampling strategy to control the density of input points at each layer. To ensure spatial consistency of the point sampling locations, we favor grid subsampling. Thus, the support points of each layer, carrying the feature locations, are chosen as the barycenters of the original input points contained in the non-empty grid cells.
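A minimal NumPy sketch of this grid subsampling, assuming features are averaged per cell in the same way as the point coordinates (illustrative names):

```python
import numpy as np

def grid_subsample(points, features, cell_size):
    """One support point per non-empty cell: the barycenter of the points
    (and of their features) falling into that cell."""
    cells = np.floor(points / cell_size).astype(np.int64)           # integer cell coordinates
    _, inverse = np.unique(cells, axis=0, return_inverse=True)      # cell id of every point
    counts = np.bincount(inverse).astype(np.float64)[:, None]
    sub_points = np.zeros((len(counts), 3))
    sub_feats = np.zeros((len(counts), features.shape[1]))
    np.add.at(sub_points, inverse, points)
    np.add.at(sub_feats, inverse, features)
    return sub_points / counts, sub_feats / counts
```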
Pooling layer. To create architectures with multiple layer scales, we need to reduce the number of points progressively. As we already have a grid subsampling, we double the cell size at every pooling layer, along with the other related parameters, incrementally increasing the receptive field of KPConv. The features pooled at each new location can be obtained either by a max-pooling or by a KPConv. We use the latter in our architectures and call it "strided KPConv", by analogy to the image strided convolution.
KPConv layer. Our convolution layer takes as input the points $\mathcal{P} \in \mathbb{R}^{N \times 3}$, their corresponding features $\mathcal{F} \in \mathbb{R}^{N \times D_{in}}$, and the matrix of neighborhood indices $n \in \{1, \dots, N\}^{N' \times n_{max}}$. $N'$ is the number of locations where the neighborhoods are computed, which can be different from $N$ (in the case of "strided" KPConv). The neighborhood matrix is forced to have the size of the biggest neighborhood $n_{max}$. The other neighborhoods thus contain "shadow" neighbors, which are ignored during the convolution computations.
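One common way to implement such shadow neighbors, sketched below, is to append a row of zeros to the feature matrix and let shadow indices point to it, so they contribute nothing to the convolution sums (an illustrative sketch, not the released implementation):

```python
import numpy as np

def gather_with_shadow(features, neighbor_indices):
    """features: (N, D); neighbor_indices: (N', n_max) with the value N marking
    a shadow neighbor. Returns the gathered neighbor features, shape (N', n_max, D)."""
    padded = np.concatenate([features, np.zeros((1, features.shape[1]))], axis=0)
    return padded[neighbor_indices]   # shadow rows are all zeros, i.e. empty space
```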
Network parameters. Each layer $j$ has a cell size $dl_j$ from which we infer the other parameters. The kernel point influence distance is set to $\sigma_j = \Sigma \times dl_j$. For rigid KPConv, the convolution radius is automatically set to $2.5\,\sigma_j$, given that the average kernel point radius is $1.5\,\sigma_j$. For deformable KPConv, the convolution radius can be chosen as $r_j = \rho \times dl_j$. $\Sigma$ and $\rho$ are proportional coefficients set for the whole network. Unless stated otherwise, we use the following set of parameters, chosen by cross validation, for all experiments: $K = 15$, $\Sigma = 1.0$ and $\rho = 5.0$. The first subsampling cell size $dl_0$ depends on the dataset and, as stated above, $dl_{j+1} = 2 \times dl_j$.
Combining analogy with successful image networks and empirical studies, we designed two network architectures for the classification and the segmentation tasks. Diagrams detailing both architectures are available in the supplementary material.
KP-CNN is a 5-layer classification convolutional network. Each layer contains two convolutional blocks, the first one being strided except in the first layer. Our convolutional blocks are designed like bottleneck ResNet blocks [12], with a KPConv replacing the image convolution, batch normalization and leaky ReLU activation. After the last layer, the features are aggregated by a global average pooling and processed by fully connected and softmax layers, as in an image CNN. For the results with deformable KPConv, we only use deformable kernels in the last 5 KPConv blocks (see architecture details in the supplementary material).
KP-FCNN is a fully convolutional network for segmentation. The encoder part is the same as in KP-CNN, and the decoder part uses nearest upsampling to get the final pointwise features. Skip links are used to pass features between intermediate layers of the encoder and the decoder. Those features are concatenated to the upsampled ones and processed by a unary convolution, which is the equivalent of a 1x1 convolution in images or a shared MLP in PointNet. It is possible to replace the nearest upsampling operation by a KPConv, in the same way as the strided KPConv, but it does not lead to a significant improvement of the performances.
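The decoder operations can be sketched as follows; the names are illustrative, and the unary convolution is simply a shared linear map applied to the concatenated features:

```python
import numpy as np

def nearest_upsample(coarse_points, coarse_feats, fine_points):
    """Copy each fine point's feature from its nearest coarse point."""
    d = np.linalg.norm(fine_points[:, None, :] - coarse_points[None, :, :], axis=-1)
    return coarse_feats[np.argmin(d, axis=1)]

def unary_block(upsampled_feats, skip_feats, weight):
    """Skip link + unary (1x1) convolution: concatenate, then apply a shared
    linear map `weight` of shape (D_up + D_skip, D_out)."""
    return np.concatenate([upsampled_feats, skip_feats], axis=1) @ weight
```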
Data. First, we evaluate our networks on two common model datasets. We use ModelNet40 [44] for classification and ShapenetPart [47] for part segmentation. ModelNet40 contains 12,311 meshed CAD models from 40 categories. ShapenetPart is a collection of 16,681 point clouds from 16 categories, each with 2-6 part labels. For benchmarking purposes, we use the data provided by [27]. In both cases, we follow the standard train/test splits and rescale objects to fit them into a unit sphere (and consider units to be meters for the rest of this experiment). We ignore normals because they are only available for artificial data.
Classification task. We set the first subsampling grid size to $dl_0 = 2$ cm. We do not add any feature as input; each input point is assigned a constant feature equal to 1, as opposed to empty space, which can be considered as 0. This constant feature encodes the geometry of the input points. Like [2], our augmentation procedure consists of scaling, flipping and perturbing the points. In this setup, we are able to process batches of 16 clouds per second on an Nvidia Titan Xp. Because of our subsampling strategy, the input point clouds do not all have the same number of points, which is not a problem as our networks accept variable input point cloud sizes. On average, a ModelNet40 object point cloud comprises 6,800 points in our framework. The other training parameters are detailed in the supplementary material, along with the architecture details. We also include the number of parameters and the training/inference speeds for both rigid and deformable KPConv.
As shown in Table 1, our networks outperform other state-of-the-art methods using only points (we do not take into account methods using normals as additional input). We also notice that rigid KPConv performances are slightly better. We suspect that this can be explained by the simplicity of the task: if deformable kernels add more descriptive power, they also increase the overall network complexity, which can disturb convergence or lead to overfitting on simpler tasks like this shape classification.
Segmentation task. For this task, we use the KP-FCNN architecture with the same parameters as in the classification task, adding the positions as additional features to the constant 1, and using the same augmentation procedure. We train a single network with multiple heads to segment the parts of each object class. For this task, the clouds are smaller (2,300 points on average), and we can process batches of 16 shapes per second. Table 1 shows the instance average and the class average mIoU. We detail each class mIoU in the supplementary material. KP-FCNN outperforms all other algorithms, including those using additional inputs like images or normals. Shape segmentation is a more difficult task than shape classification, and we see that KPConv has better performances with deformable kernels.
Table 1: 3D shape classification and segmentation results. ModelNet40 is evaluated with overall accuracy (OA), ShapeNetPart with class average (mcIoU) and instance average (mIoU).

| Methods | ModelNet40 OA | ShapeNetPart mcIoU | ShapeNetPart mIoU |
|---|---|---|---|
| SPLATNet [34] | | | |
| SGPN [42] | | | |
| 3DmFV-Net [9] | | | |
| SynSpecCNN [48] | | | |
| RSNet [15] | | | |
| SpecGCN [40] | | | |
| PointNet++ [27] | | | |
| SO-Net [19] | | | |
| PCNN by Ext [2] | | | |
| SpiderCNN [45] | | | |
| MCConv [13] | | | |
| FlexConv [10] | | | |
| PointCNN [20] | | | |
| DGCNN [43] | | | |
| SubSparseCNN [9] | | | |
| KPConv rigid | | | |
| KPConv deform | | | |
Data. Our second experiment shows how our segmentation architecture generalizes to real indoor and outdoor data. To this end, we chose to test our network on 4 datasets of different natures. Scannet [6], for indoor cluttered scenes, S3DIS [1], for indoor large spaces, Semantic3D [11], for outdoor fixed scans, and NPM3D [31], for outdoor mobile scans. Scannet contains 1,513 small training scenes and 100 test scenes for online benchmarking, all annotated with 20 semantic classes. S3DIS covers six large-scale indoor areas from three different buildings for a total of 273 million points annotated with 13 classes. Like [37], we advocate the use of Area-5 as test scene to better measure the generalization ability of our method. Semantic3D is an online benchmark comprising several fixed lidar scans of different outdoor scenes. More than 4 billion points are annotated with 8 classes in this dataset, but they mostly cover ground, building or vegetation and there are fewer object instances than in the other datasets. We favor the reduced-8 challenge because it is less biased by the objects close to the scanner. NPM3D contains more than 2km of streets in 4 different cities and is also an online benchmark. The 160 million points of this dataset are annotated with 10 semantic classes.
Pipeline for real scene segmentation. The 3D scenes in these datasets are too big to be segmented as a whole. Our KP-FCNN architecture is used to segment small subclouds contained in spheres. At training, the spheres are picked randomly in the scenes. At testing, we pick spheres regularly in the point clouds, but ensure that each point is tested multiple times by different sphere locations. As in a voting scheme on the model datasets, the predicted probabilities for each point are averaged. When datasets are colorized, we use the three color channels as features. We still keep the constant 1 feature to ensure that black/dark points are not ignored: to our convolution, a point with all features equal to zero is equivalent to empty space. The input sphere radius is chosen as $50 \times dl_0$ (in accordance with the ModelNet40 experiment).
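The test-time voting over overlapping spheres can be sketched as follows (illustrative names; each point's class probabilities are averaged over all the spheres that contain it):

```python
import numpy as np

def average_sphere_predictions(n_points, sphere_indices, sphere_probs, n_classes):
    """sphere_indices: list of (m_s,) point index arrays, one per test sphere
    sphere_probs:    list of (m_s, n_classes) softmax outputs of KP-FCNN"""
    summed = np.zeros((n_points, n_classes))
    counts = np.zeros(n_points)
    for idx, probs in zip(sphere_indices, sphere_probs):
        np.add.at(summed, idx, probs)
        np.add.at(counts, idx, 1.0)
    counts = np.maximum(counts, 1.0)          # guard points never covered by a sphere
    return summed / counts[:, None]
```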
Table 2: 3D scene segmentation scores (mIoU) on Scannet, Semantic3D (reduced-8), S3DIS and NPM3D.

| Methods | Scannet | Sem3D | S3DIS | NPM3D |
|---|---|---|---|---|
| Pointnet [26] | | | | |
| Pointnet++ [27] | | | | |
| SnapNet [4] | | | | |
| SPLATNet [34] | | | | |
| SegCloud [37] | | | | |
| RF_MSSF [38] | | | | |
| Eff3DConv [50] | | | | |
| TangentConv [36] | | | | |
| MSDVN [30] | | | | |
| RSNet [15] | | | | |
| FCPN [28] | | | | |
| PointCNN [20] | | | | |
| PCNN [2] | | | | |
| SPGraph [17] | | | | |
| ParamConv [41] | | | | |
| SubSparseCNN [9] | | | | |
| KPConv rigid | | | | |
| KPConv deform | | | | |
Results. Because outdoor objects are larger than indoor objects, we use $dl_0 = 6$ cm on Semantic3D and NPM3D, and $dl_0 = 4$ cm on Scannet and S3DIS. As shown in Table 2, our architecture ranks second on Scannet and outperforms all other segmentation architectures on the other datasets. Compared to other point convolution architectures [2, 20, 41], KPConv performances exceed previous scores by 19 mIoU points on Scannet and 9 mIoU points on S3DIS. The SubSparseCNN score on Scannet was not reported in the original paper [9], so it is hard to compare without knowing their experimental setup. We can notice that, in the same experimental setup on ShapeNetPart segmentation, KPConv outperformed SubSparseCNN by nearly 2 mIoU points.
Among these 4 datasets, KPConv deformable kernels improved the results on NPM3D and S3DIS, while the rigid version was better on Scannet and Semantic3D. If we follow our assumption, we can explain the lower scores on Semantic3D by the lack of diversity in this dataset. Indeed, despite comprising 15 scenes and 4 billion points, it contains a majority of ground, building and vegetation points and few real objects like cars or pedestrians. Although this is not the case for Scannet, which comprises more than 1,500 scenes with various objects and shapes, our validation studies are not reflected by the test scores on this benchmark. We found that deformable KPConv outperformed its rigid counterpart on several different validation sets (see Section 4.3). In conclusion, these results show that the descriptive power of deformable KPConv is useful to the network on large and diverse datasets. An illustration of segmented scenes on Semantic3D and S3DIS is shown in Figure 4. More result visualizations are provided in the supplementary material.
In this experiment, we conduct an ablation study to support our claim that deformable KPConv has a stronger descriptive power than rigid KPConv. The idea is to impede the capabilities of the network, in order to reveal the real potential of deformable kernels. We use the Scannet dataset (same parameters as before) and the official validation set, because the test set cannot be used for such evaluations. As depicted in Figure 5, the deformable KPConv loses only a small amount of mIoU when restricted to 4 kernel points; in the same configuration, the rigid KPConv loses considerably more.
As stated in Section 4.2, we can also see that deformable KPConv performs better than rigid KPConv with 15 kernel points. Although this is not the case on the test set, we tried different validation sets that confirmed the superior performances of deformable KPConv. This is not surprising, as we obtained the same results on S3DIS. Deformable KPConv seems to thrive on indoor datasets, which offer more diversity than outdoor datasets. To understand why, we need to go beyond numbers and see what is effectively learned by the two versions of KPConv.
Figure 4: Outdoor and indoor scenes, respectively from Semantic3D and S3DIS, classified by KP-FCNN with deformable kernels.
To achieve a deeper understanding of KPConv, we offer two insights into its learning mechanisms.
Learned features. Our first idea was to visualize the features learned by our network. In this experiment, we trained KP-CNN on ModelNet40 with rigid KPConv. We added random rotation augmentations around the vertical axis to increase the input shape diversity. Then we visualize each learned feature by coloring the points according to their level of activation for this feature. In Figure 6, we chose input point clouds maximizing the activation for different features at the first and third layer. For a cleaner display, we projected the activations from the layer's subsampled points to the original input points. We observe that, in its first layer, the network is able to learn low-level features like vertical/horizontal planes (a/b), linear structures (c), or corners (d). In the later layers, the network detects more complex shapes like small buttresses (e), balls (f), cones (g), or stairs (h). However, it is difficult to see a difference between rigid and deformable KPConv. This tool is very useful to understand what KPConv can learn in general, but we need another one to compare the two versions.
Effective Receptive Field. To apprehend the differences between the representations learned by rigid and deformable KPConv, we can compute their Effective Receptive Field (ERF) [22] at different locations. The ERF is a measure of the influence that each input point has on the result of a KPConv layer at a particular location. It is computed as the gradient of the KPConv responses at this particular location with respect to the input point features. As we can see in Figure 7, the ERF varies depending on the object it is centered on. We see that the rigid KPConv ERF has a relatively consistent range on every type of object, whereas the deformable KPConv ERF seems to adapt to the object size. Indeed, it covers the whole bed, and concentrates more on the chair than on the surrounding ground. When centered on a flat surface, it also seems to ignore most of it and reach for further details in the scene. This adaptive behavior shows that deformable KPConv improves the network's ability to adapt to the geometry of the scene objects, and explains the better performances on indoor datasets.
In this work, we propose KPConv, a convolution that operates on point clouds. KPConv takes radius neighborhoods as input and processes them with weights spatially located by a small set of kernel points. We define a deformable version of this convolution operator that learns local shifts, effectively deforming the convolution kernels to make them fit the point cloud geometry. Depending on the diversity of the datasets, or the chosen network configuration, deformable and rigid KPConv are both valuable, and our networks achieved new state-of-the-art performances on nearly every tested dataset. We release our source code, hoping to help further research on point cloud convolutional architectures. Beyond the proposed classification and segmentation networks, KPConv can be used in any other application addressed by CNNs. We believe that deformable convolutions can thrive on larger datasets or challenging tasks such as object detection, lidar flow computation, or point cloud completion.
As explained in the main paper, our architectures are built with convolutional blocks, designed like bottleneck ResNet blocks [12]. This is the case whether we use a normal or a strided KPConv, with rigid or deformable kernels. Figure 9 describes these blocks, and Figure 8 our two network architectures made from them. In Figure 8, we show an example of a point cloud from the ModelNet40 dataset, subsampled at every layer. It illustrates how the convolution radius (red sphere) grows proportionally to the subsampling grid size. In all our experiments with deformable KPConv, we use deformable kernels in the last 5 convolutional blocks (2 blocks from layer 3, and both blocks from layers 4 and 5). The green numbers above the layers in Figure 8 are the feature dimensions used in our blocks.
Our layers process point clouds of variable sizes, so we cannot stack them along a new "batch" dimension. We thus stack our point and feature tensors along their first dimension (the number of points). As the neighbor and pooling indices do not point from one input cloud to another, each batch element is processed independently without any implementation trick. We only need to keep track of the batch element indices in order to define the global pooling of KP-CNN. Since the number of points can vary a lot, we use a variable batch size by selecting as many elements as possible until a certain total number of batch points is reached. This limit is chosen so that the average batch size corresponds to the target batch size. A very similar batch strategy has already been described by [13].
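A sketch of this variable batch construction; `max_batch_points` plays the role of the point limit mentioned above, and the returned batch indices are what the global pooling of KP-CNN relies on (illustrative names):

```python
import numpy as np

def build_batch(clouds, max_batch_points):
    """Stack variable-size clouds along the point dimension until the point budget
    is reached; returns the stacked points and the batch index of every point."""
    selected, lengths, total = [], [], 0
    for cloud in clouds:
        if selected and total + len(cloud) > max_batch_points:
            break
        selected.append(cloud)
        lengths.append(len(cloud))
        total += len(cloud)
    stacked = np.concatenate(selected, axis=0)                     # (sum n_i, 3)
    batch_index = np.repeat(np.arange(len(selected)), lengths)     # (sum n_i,)
    return stacked, batch_index
```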
KP-CNN training. We use a Momentum gradient descent optimizer to minimize a cross-entropy loss, with a batch size of 16, a momentum of 0.98 and an initial learning rate of $10^{-3}$. Our learning rate is scheduled to decrease exponentially, and we choose the exponential decay to ensure it is divided by 10 every 100 epochs. A 0.5 probability dropout is used in the final fully connected layers. The network converges in 200 epochs. In the case of deformable kernels, the regularization loss is added to the output loss with a fixed multiplicative factor.

KP-FCNN training. We also use a Momentum gradient descent optimizer to minimize a point-wise cross-entropy loss, with a batch size of 10, a momentum of 0.98 and an initial learning rate of $10^{-2}$. The same learning rate schedule is used and no dropout is used. Among all experiments, the network needs at most 400 epochs to converge. For real scene segmentation, we can generate any number of input spheres, so we define an epoch as 500 optimizer steps, which is equivalent to 5,000 spheres seen by the network. The same deformable regularization loss is used.
Table 3: Model sizes and speeds on the different datasets.

| | | MN40 | SNP | Scannet | Sem3D |
|---|---|---|---|---|---|
| Avg pts/elem | | | | | |
| Avg pts/batch | | | | | |
| Params | rigid | | | | |
| | deform | | | | |
| Training (batch/s) | rigid | | | | |
| | deform | | | | |
| Inference (batch/s) | rigid | | | | |
| | deform | | | | |
Model sizes and speeds. Table 3 shows the statistics of our models on the different datasets. First, we notice that KP-FCNN and KP-CNN have a similar number of parameters, because the decoder part of KP-FCNN only involves light 1x1 convolutions. We see that the running speeds differ from one dataset to another, which is not surprising: the number of operations performed during a forward pass of our network depends on the number of points in the current batch and on the maximum number of neighbors of these points. Our models were prototyped with an RTX 2080Ti in this experiment, which explains the slight difference with the Titan Xp used in the main paper.
Our KPConv operates in a ball and requires kernel points regularly placed in this domain. There is no obvious regular disposition of points in a sphere, so we chose to solve this issue by translating it into an optimization problem. The problem is simple: we want the points to be as far from each other as possible inside a given sphere. We thus assign a repulsive potential to each point:
$$E^{rep}_k(x) = \sum_{l \neq k} \frac{1}{\lVert x - \tilde{x}_l \rVert} \qquad (9)$$

And add an attractive potential to the sphere center to avoid the points diverging indefinitely:

$$E^{att}_k(x) = \lVert x \rVert^2 \qquad (10)$$

The problem then consists of minimizing the global energy:

$$E^{tot} = \sum_{k < K} \left( E^{att}_k(\tilde{x}_k) + E^{rep}_k(\tilde{x}_k) \right) \qquad (11)$$
The solution is found by gradient descent, with the points initialized randomly and some optional constraints. In our case, we fix one of the points at the center of the sphere. For some values of $K$ (listed in Table 4), the points converge to a unique stable disposition. Those stable dispositions are in fact regular polyhedra. Each polyhedron can be described by grouping points that share a plane perpendicular to the polyhedron's symmetrical axis. For a better understanding, some of these dispositions are shown in Figure 10.
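A minimal sketch of this optimization by explicit gradient descent on the energy of Eqs. 9-11; the hyperparameters are illustrative, and the resulting disposition is then rescaled as described below:

```python
import numpy as np

def optimize_kernel_points(K, n_iter=5000, lr=1e-2, seed=0):
    """Place K points by minimizing mutual repulsion plus attraction to the center,
    with the first point kept fixed at the origin."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, (K, 3))
    pts[0] = 0.0
    for _ in range(n_iter):
        diff = pts[:, None, :] - pts[None, :, :]                   # (K, K, 3)
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                             # no self-interaction
        grad_rep = -np.sum(diff / dist[..., None] ** 3, axis=1)    # gradient of the 1/||.|| terms
        grad_att = 2.0 * pts                                       # gradient of ||x||^2
        pts -= lr * (grad_rep + grad_att)
        pts[0] = 0.0                                               # keep the center point fixed
    return pts
```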
In every layer of KP-CNN and KP-FCNN, the kernel point locations are rescaled from the chosen stable disposition to the appropriate radius and randomly rotated. Note that $E^{tot}$ can also be used as a regularization loss in KP-CNN when the kernel point positions are trained.
In this section, we provide more details on our segmentation experiments, for benchmarking purposes with future works. We give class scores for our experiments on the ShapeNetPart (Table 5) and S3DIS (Tables 6 and 7) datasets. Scannet [6], Semantic3D [11] and NPM3D [31] are online benchmarks; the class scores can be found on their respective websites.
Table 4: Stable dispositions of the kernel points.

| K | Disposition name | Groups along symmetrical axis |
|---|---|---|
| | Tetrahedron | |
| | Octahedron | 1-4-1 |
| | Icosahedron | 1-5-5-1 |
| | - | 1-6-6-1 |
| | - | 1-5-5-5-1 |
| | - | 1-4-4-4-4-1 |
| | - | 1-6-6-6-1 |
| | - | 4-4-4-4-4-4 |
Table 5: ShapeNetPart per-class scores.

| Method | class avg. | inst. avg. | aero | bag | cap | car | chair | ear | guit | knif | lamp | lapt | moto | mug | pist | rock | skate | table |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kd-Net [16] | | | | | | | | | | | | | | | | | | |
| SO-Net [19] | | | | | | | | | | | | | | | | | | |
| PCNN by Ext [2] | | | | | | | | | | | | | | | | | | |
| PointNet++ [27] | | | | | | | | | | | | | | | | | | |
| SynSpecCNN [48] | | | | | | | | | | | | | | | | | | |
| DGCNN [43] | | | | | | | | | | | | | | | | | | |
| SpiderCNN [45] | | | | | | | | | | | | | | | | | | |
| SubSparseCNN [9] | | | | | | | | | | | | | | | | | | |
| SPLATNet [34] | | | | | | | | | | | | | | | | | | |
| PointCNN [20] | | | | | | | | | | | | | | | | | | |
| FlexConv [10] | | | | | | | | | | | | | | | | | | |
| KPConv rigid | | | | | | | | | | | | | | | | | | |
| KPConv deform | | | | | | | | | | | | | | | | | | |
Table 6: Per-class scores on S3DIS.

| Method | mIoU | mRec | ceil. | floor | wall | beam | col. | wind. | door | chair | table | book. | sofa | board | clut. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pointnet [26] | | | | | | | | | | | | | | | |
| SegCloud [37] | | | | | | | | | | | | | | | |
| Eff 3D Conv [50] | | | | | | | | | | | | | | | |
| TangentConv [36] | | | | | | | | | | | | | | | |
| RNN Fusion [46] | | | | | | | | | | | | | | | |
| SPGraph [17] | | | | | | | | | | | | | | | |
| ParamConv [41] | | | | | | | | | | | | | | | |
| KPConv rigid | | | | | | | | | | | | | | | |
| KPConv deform | | | | | | | | | | | | | | | |
Table 7: Per-class scores on S3DIS.

| Method | mIoU | mRec | ceil. | floor | wall | beam | col. | wind. | door | chair | table | book. | sofa | board | clut. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pointnet [26] | | | | | | | | | | | | | | | |
| RSNet [15] | | | | | | | | | | | | | | | |
| SPGraph [17] | | | | | | | | | | | | | | | |
| PointCNN [20] | | | | | | | | | | | | | | | |
| KPConv rigid | | | | | | | | | | | | | | | |
| KPConv deform | | | | | | | | | | | | | | | |