Deep Learning for 3D Point Clouds: A Survey

12/27/2019 ∙ by Yulan Guo, et al. ∙ 156

Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominant technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.




1 Introduction

With the rapid development of 3D acquisition technologies, 3D sensors are becoming increasingly available and affordable, including various types of 3D scanners, LiDARs, and RGB-D cameras (such as Kinect, RealSense and Apple depth cameras) [91]. 3D data acquired by these sensors can provide rich geometric, shape and scale information [51, 50]. Complemented with 2D images, 3D data provides an opportunity for a better understanding of the surrounding environment for machines. 3D data has numerous applications in different areas, including autonomous driving, robotics, remote sensing, medical treatment, and the design industry [19].

3D data can usually be represented with different formats, including depth images, point clouds, meshes, and volumetric grids. As a commonly used format, the point cloud representation preserves the original geometric information in 3D space without any discretization. Therefore, it is the preferred representation for many scene understanding related applications such as autonomous driving and robotics. Recently, deep learning techniques have dominated many research areas, such as computer vision, speech recognition, Natural Language Processing (NLP), and bioinformatics. However, deep learning on 3D point clouds still faces several significant challenges [129], such as the small scale of datasets, the high dimensionality, and the unstructured nature of 3D point clouds. On this basis, this paper focuses on the analysis of deep learning methods that have been used to process 3D point clouds.

Deep learning on point clouds has been attracting more and more attention, especially in the last five years. Several publicly available datasets have also been released, such as ModelNet [176], ShapeNet [139], ScanNet [26], Semantic3D [52], and the KITTI Vision Benchmark Suite [44]. These datasets have further boosted the research of deep learning on 3D point clouds, with an increasing number of methods being proposed to address various problems related to point cloud processing, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. A few surveys of deep learning on 3D data are also available, such as [63, 2, 178, 133]. However, our paper is the first to specifically focus on deep learning methods for point clouds. Besides, our paper comprehensively covers different applications including classification, detection, tracking, and segmentation. A taxonomy of existing deep learning methods for 3D point clouds is shown in Fig. 1.

Compared to the existing literature, the major contributions of this work can be summarized as follows:

  1. To the best of our knowledge, this is the first survey paper to comprehensively cover deep learning methods for several important point cloud related tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation.

  2. As opposed to existing reviews [2, 63], we specifically focus on deep learning methods for 3D point clouds rather than all types of 3D data.

  3. This paper covers the most recent and advanced progress of deep learning on point clouds. Therefore, it provides the reader with the state-of-the-art methods.

  4. Comprehensive comparisons of existing methods on several publicly available datasets are provided (e.g., in Tables I, II, III, IV), with brief summaries and insightful discussions being presented.

Fig. 1: A taxonomy of deep learning methods for 3D point clouds.

The structure of this paper is as follows. Section 2 reviews the methods for 3D shape classification. Section 3 provides a survey of existing methods for 3D object detection and tracking. Section 4 presents a review of methods for point cloud segmentation, including semantic segmentation, instance segmentation, and part segmentation. Finally, Section 5 concludes the paper. We also provide a regularly updated project page on:

2 3D Shape Classification

Methods for this task usually learn the embedding of each point first and then extract a global shape embedding from the whole point cloud using an aggregation method. Classification is finally achieved by several fully connected layers. Based on the way that feature learning is performed on each point, existing 3D shape classification methods can be divided into projection-based networks and point-based networks. Several milestone methods are illustrated in Fig. 2.

Projection-based methods first project an unstructured point cloud into an intermediate regular representation, and then leverage well-established 2D or 3D convolution to achieve shape classification. In contrast, point-based methods work directly on raw point clouds without any voxelization or projection. Point-based methods do not introduce explicit information loss and are becoming increasingly popular. In this paper, we mainly focus on point-based networks, but also include a few projection-based networks for completeness.

Fig. 2: Chronological overview of 3D shape classification networks.

2.1 Projection-based Networks

These methods project 3D point clouds into different representation modalities (e.g., multi-view, volumetric representations) for feature learning and shape classification.

2.1.1 Multi-view representation

These methods first project a 3D object into multiple views and extract the corresponding view-wise features, and then fuse these features for accurate object recognition. How to aggregate multiple view-wise features into a discriminative global representation is a key challenge. MVCNN [151] is a pioneering work, which simply max-pools multi-view features into a global descriptor. However, max-pooling only retains the maximum elements from a specific view, resulting in information loss. MHBN [196] integrates local convolutional features by harmonized bilinear pooling to produce a compact global descriptor. Yang et al. [188] first leveraged a relation network to exploit the inter-relationships (e.g., region-region relationship and view-view relationship) over a group of views, and then aggregated these views to obtain a discriminative 3D object representation. In addition, several other methods [130, 43, 160, 109] have also been proposed to improve the recognition accuracy.
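The information-loss issue of view pooling can be seen in a tiny numpy sketch (the view descriptors below are random placeholders standing in for per-view CNN features, not outputs of an actual MVCNN):

```python
import numpy as np

rng = np.random.default_rng(6)

# 12 rendered views of one object, each summarized by a 256-D CNN descriptor
# (random stand-ins here; a real pipeline would render views and run a 2D CNN).
view_feats = rng.normal(size=(12, 256))

# MVCNN-style aggregation: element-wise max over the view axis.
global_desc = view_feats.max(axis=0)

# Each element of the global descriptor comes from a single "winning" view;
# the values of all other views at that element are discarded.
assert global_desc.shape == (256,)
assert (global_desc >= view_feats).all()
```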

2.1.2 Volumetric representation

Early methods usually apply a 3D Convolutional Neural Network (CNN) built upon the volumetric representation of 3D point clouds. Maturana et al. [112] introduced a volumetric occupancy network called VoxNet to achieve robust 3D object recognition. Wu et al. [176] proposed a convolutional deep belief network, 3D ShapeNets, to learn the distribution of points from various 3D shapes, where a 3D shape is represented by a probability distribution of binary variables on voxel grids. Although encouraging performance has been achieved, these methods are unable to scale well to dense 3D data, since the computation and memory footprint grow cubically with the resolution. To this end, hierarchical and compact graph structures (such as octrees) are introduced to reduce the computational and memory costs of these methods. OctNet [136] first hierarchically partitions a point cloud using a hybrid grid-octree structure, which represents the scene with several shallow octrees along a regular grid. The structure of each octree is encoded efficiently using a bit string representation, and the feature vector of each voxel is indexed by simple arithmetic. Compared to a baseline network based on dense input grids, OctNet requires much less memory and runtime for high-resolution point clouds. Wang et al. [163] proposed an Octree-based CNN for 3D shape classification, where the average normal vectors of a 3D model sampled in the finest leaf octants are fed into the network, and 3D convolutions are applied on the octants occupied by the 3D shape surface. Le et al. [81] proposed a hybrid network called PointGrid, which integrates the point and grid representations for efficient point cloud processing. A constant number of points is sampled within each embedding volumetric grid cell, which allows the network to extract geometric details using 3D convolutions.

2.2 Point-based Networks

According to the network architecture used for the feature learning of each point, methods in this category can be divided into pointwise MLP networks, convolution-based networks, graph-based networks, data indexing-based networks, and other typical networks.

2.2.1 Pointwise MLP Networks

These methods model each point independently using several Multi-Layer Perceptrons (MLPs) and then aggregate a global feature using a symmetric function, as shown in Fig. 3. These networks can achieve permutation invariance for unordered 3D point clouds. However, the geometric relationships among 3D points are not fully considered.

Fig. 3: The architecture of PointNet. n denotes the number of input points and M denotes the dimension of the learned features for each point. After max pooling, the dimension of the global feature of the whole point cloud is also M.

As a pioneering work, PointNet [129] learns pointwise features with several MLP layers and extracts global shape features with a max-pooling layer. The classification score is obtained using several MLP layers. Zaheer et al. [197] also theoretically demonstrated that the key to achieving permutation invariance is to sum up all representations and apply nonlinear transformations. They also designed a fundamental architecture, DeepSets, for various applications including shape classification [197].
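The shared-MLP-plus-symmetric-function recipe can be illustrated with a minimal numpy sketch (a single untrained layer with random weights stands in for the MLP stack; this is an illustration of the principle, not PointNet itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random placeholder weights for one shared layer mapping xyz (3) -> 8 features.
W, b = rng.normal(size=(3, 8)), rng.normal(size=8)

def global_feature(points):
    """Pointwise shared MLP followed by a symmetric max-pooling aggregation."""
    per_point = np.maximum(points @ W + b, 0.0)  # same weights applied to every point
    return per_point.max(axis=0)                 # symmetric function -> order invariant

cloud = rng.normal(size=(128, 3))
f1 = global_feature(cloud)
f2 = global_feature(cloud[rng.permutation(128)])  # same points, shuffled order
assert np.allclose(f1, f2)  # permutation invariance of the global feature
```

Replacing the max with a sum gives the DeepSets-style aggregation; any symmetric function over the per-point features preserves the invariance.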

Since features are learned independently for each point in PointNet [129], the local structural information between points cannot be captured. Therefore, Qi et al. [131] proposed a hierarchical network, PointNet++, to capture fine geometric structures from the neighborhood of each point. As the core of the PointNet++ hierarchy, its set abstraction level is composed of three layers: the sampling layer, the grouping layer, and the PointNet layer. By stacking several set abstraction levels, PointNet++ can learn features from a local geometric structure and abstract the local features layer by layer.
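The sampling-grouping-PointNet pipeline of a set abstraction level can be sketched in numpy (farthest point sampling plus ball-query grouping, with a bare max-pool standing in for the local PointNet; radii and sizes are arbitrary choices for the demo):

```python
import numpy as np

def farthest_point_sampling(xyz, m):
    """Greedy FPS: repeatedly pick the point farthest from the chosen set."""
    chosen = [0]
    dist = np.linalg.norm(xyz - xyz[0], axis=1)
    for _ in range(m - 1):
        idx = int(dist.argmax())
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[idx], axis=1))
    return np.array(chosen)

def set_abstraction(xyz, feats, m, radius):
    """Sampling -> ball-query grouping -> per-group max-pool (local PointNet stand-in)."""
    centers = farthest_point_sampling(xyz, m)
    out = np.zeros((m, feats.shape[1]))
    for i, c in enumerate(centers):
        mask = np.linalg.norm(xyz - xyz[c], axis=1) <= radius
        out[i] = feats[mask].max(axis=0)  # each center is always inside its own ball
    return xyz[centers], out

rng = np.random.default_rng(1)
xyz = rng.uniform(size=(256, 3))
feats = rng.normal(size=(256, 16))
new_xyz, new_feats = set_abstraction(xyz, feats, m=32, radius=0.2)
assert new_xyz.shape == (32, 3) and new_feats.shape == (32, 16)
```

Stacking such levels coarsens the cloud while enlarging the receptive field, which is the hierarchical abstraction the paragraph above describes.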

Because of its simplicity and strong representation ability, many networks have been developed based on PointNet [129]. Achlioptas et al. [1] introduced a deep autoencoder network to learn point cloud representations. Its encoder follows the design of PointNet and learns point features independently using five 1D convolutional layers, ReLU non-linear activations, batch normalization, and max-pooling operations. In Point Attention Transformers (PATs) [186], each point is represented by its own absolute position and its relative positions with respect to its neighbors. Then, Group Shuffle Attention (GSA) is used to capture relations between points, and a permutation-invariant, differentiable, and trainable end-to-end Gumbel Subset Sampling (GSS) layer is developed to learn hierarchical features. The architecture of Mo-Net [69] is similar to PointNet [129], but it takes a finite set of moments as its input.

PointWeb [208] is also built upon PointNet++ and uses the context of the local neighborhood to improve point features using Adaptive Feature Adjustment (AFA). Duan et al. [33] proposed a Structural Relational Network (SRN) to learn structural relational features between different local structures using MLPs. Lin et al. [94] accelerated the inference process by constructing a lookup table for both the input and function spaces learned by PointNet. The inference time on the ModelNet and ShapeNet datasets is sped up by 1.5 ms and 32 times over PointNet on a moderate machine. SRINet [152] first projects a point cloud to obtain rotation-invariant representations, and then utilizes a PointNet-based backbone to extract a global feature and graph-based aggregation to extract local features.

2.2.2 Convolution-based Networks

Compared to kernels defined on 2D grid structures (e.g., images), convolutional kernels for 3D point clouds are hard to design due to the irregularity of point clouds. According to the type of convolutional kernels, current 3D convolution networks can be divided into continuous convolution networks and discrete convolution networks, as shown in Fig. 4.

Fig. 4: An illustration of a continuous and discrete convolution for local neighbors of a point. (a) represents a local neighborhood; (b) and (c) represent 3D continuous and discrete convolution, respectively.

3D Continuous Convolution Networks. These methods define convolutional kernels on a continuous space, where the weights for neighboring points are related to the spatial distribution with respect to the center point.

3D convolution can be interpreted as a weighted sum over a given subset, and an MLP is a simple way to learn the weights. As the core layer of RS-CNN [104], RS-Conv takes a local subset of points around a certain point as its input, and the convolution is implemented using an MLP that learns the mapping from low-level relations (such as Euclidean distance and relative position) to high-level relations between points in the local subset. In [14], kernel elements are selected randomly in a unit sphere, and an MLP-based continuous function is then used to establish a relation between the locations of the kernel elements and the point cloud. In DensePoint [103], convolution is defined as a Single-Layer Perceptron (SLP) with a nonlinear activation. Features are learned by concatenating features from all previous layers to sufficiently exploit the contextual information.
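The core idea, generating convolution weights from continuous relative positions with a small MLP, can be sketched as follows (the two-layer weight-generating network uses random placeholder parameters; channel sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
C_in, C_out, H = 8, 16, 32

# A tiny weight-generating MLP: relative position (3,) -> kernel matrix entries.
W1, b1 = rng.normal(size=(3, H)) * 0.5, np.zeros(H)
W2 = rng.normal(size=(H, C_in * C_out)) * 0.1

def continuous_conv(center, neighbors, feats):
    """Weighted sum over neighbors; per-neighbor kernels come from an MLP
    applied to each neighbor's spatial offset from the center point."""
    rel = neighbors - center                       # continuous spatial relation
    hidden = np.maximum(rel @ W1 + b1, 0.0)
    kernels = (hidden @ W2).reshape(-1, C_in, C_out)
    return sum(f @ k for f, k in zip(feats, kernels)) / len(neighbors)

center = np.zeros(3)
neighbors = rng.normal(scale=0.1, size=(9, 3))
feats = rng.normal(size=(9, C_in))
out = continuous_conv(center, neighbors, feats)
assert out.shape == (C_out,)
```

Because the kernel depends only on geometry, the operator is linear in the input features, just like an ordinary convolution.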

Some methods also use existing algorithms to perform convolution. In PointConv [175], convolution is defined as a Monte Carlo estimate of the continuous 3D convolution with respect to an importance sampling. The convolutional kernel consists of a weighting function (learned with MLP layers) and a density function (learned by kernelized density estimation and an MLP layer). To improve memory and computational efficiency, the 3D convolution is further reduced to two operations: matrix multiplication and 2D convolution. With the same parameter settings, its memory consumption can be reduced by about 64 times. In MCCNN [54], convolution is considered as a Monte Carlo estimation process relying on a sample's density function (implemented with an MLP), and Poisson disk sampling is then used to construct a point cloud hierarchy. This convolution operator can be used to perform convolution between two or more sampling methods and can handle varying sampling densities. In SpiderCNN [180], SpiderConv is proposed to define convolution as the product of a step function and a Taylor expansion defined on the k-nearest neighbors. The step function captures the coarse geometry by encoding the local geodesic distance, and the Taylor expansion captures the intrinsic local geometric variations by interpolating arbitrary values at the vertices of a cube. Besides, a convolution network, PCNN [111], is also proposed for 3D point clouds based on radial basis functions. Thomas et al. [157] proposed both rigid and deformable Kernel Point Convolution (KPConv) operators for 3D point clouds using a set of learnable kernel points.

Several methods have been proposed to address the rotation equivariance problem faced by 3D convolution networks. Esteves et al. [40] proposed 3D Spherical Convolutional Neural Networks (Spherical CNNs) to learn rotation-equivariant representations for 3D shapes, taking multi-valued spherical functions as input. Localized convolutional filters are obtained by parameterizing the spectrum with anchor points in the spherical harmonic domain. Tensor field networks [158] define the point convolution operation as the product of a learnable radial function and spherical harmonics, which is locally equivariant to 3D rotations, translations, and permutations of points. The convolution in [24] is defined based on the spherical cross-correlation and implemented using a generalized Fast Fourier Transform (FFT) algorithm. Based on PCNN, SPHNet [125] achieves rotation invariance by incorporating spherical harmonic kernels during convolution on volumetric functions. ConvPoint [13] separates the convolution kernel into spatial and feature parts. The locations of the spatial part are randomly selected from a unit sphere, and the weighting function is learned through a simple MLP.

To accelerate computation, Flex-Convolution [48] defines the weights of the convolution kernel as a standard scalar product over the nearest neighbors, which can be accelerated using CUDA. Experimental results have demonstrated its competitive performance on a small dataset with fewer parameters and lower memory consumption.

3D Discrete Convolution Networks. These methods define convolutional kernels on regular grids, where the weights for neighboring points are related to the offsets with respect to the center point.

Hua et al. [59] transformed non-uniform 3D point clouds into uniform grids and defined convolutional kernels on each grid. Unlike 2D convolutions (which assign a weight to each pixel), the proposed 3D kernel assigns the same weights to all points falling into the same grid cell. For a given point, the mean features of all neighboring points located in the same grid cell are computed from the previous layer; the mean features of all grid cells are then weighted and summed to produce the output of the current layer. Lei et al. [83] defined a spherical convolutional kernel by partitioning a 3D spherical neighboring region into multiple volumetric bins and associating each bin with a learnable weighting matrix. The output of the spherical convolutional kernel for a point is determined by the non-linear activation of the mean of the weighted activation values of its neighboring points. In GeoConv [76], the geometric relationship between a point and its neighboring points is explicitly modeled based on six bases. Edge features along each direction of the basis are weighted independently by a learnable matrix according to the basis of the neighboring point. These direction-associated features are then aggregated according to the angle formed by the given point and its neighboring points. For a given point, its feature at the current layer is defined as the sum of its own feature and its neighboring edge features at the previous layer. PointCNN [88] achieves permutation invariance through an X-Conv transformation (implemented through an MLP). By interpolating point features to neighboring discrete convolutional kernel-weight coordinates, Mao et al. [110] proposed an interpolated convolution operator, InterpConv, to measure the geometric relations between input point clouds and kernel-weight coordinates. Zhang et al. [205] proposed a RIConv operator to achieve rotation invariance, which takes low-level rotation-invariant geometric features as input and then turns the convolution into a 1D convolution through a simple binning approach.
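The shared-weight-per-grid-cell idea behind discrete kernels of this kind can be sketched in a few lines of numpy (a 3x3x3 cell layout with random placeholder weight matrices; the cell size is an arbitrary demo choice, and this is an illustration of the principle rather than any one published operator):

```python
import numpy as np

rng = np.random.default_rng(3)
C_in, C_out = 4, 8
weights = rng.normal(size=(27, C_in, C_out)) * 0.1  # one matrix per 3x3x3 cell

def grid_conv(center, neighbors, feats, cell=0.1):
    """Discrete point convolution: neighbors falling into the same grid cell
    share one weight matrix, and their features are averaged first."""
    offs = np.clip(np.floor((neighbors - center) / cell) + 1, 0, 2).astype(int)
    cell_ids = offs[:, 0] * 9 + offs[:, 1] * 3 + offs[:, 2]  # flatten 3D cell index
    out = np.zeros(C_out)
    for cid in np.unique(cell_ids):
        mean_feat = feats[cell_ids == cid].mean(axis=0)      # average within the cell
        out += mean_feat @ weights[cid]                      # weight and accumulate
    return out

center = np.zeros(3)
neighbors = rng.uniform(-0.15, 0.15, size=(20, 3))
feats = rng.normal(size=(20, C_in))
assert grid_conv(center, neighbors, feats).shape == (C_out,)
```

Because the weights depend only on the quantized offset, the operator is translation invariant: shifting the center and its neighbors together leaves the output unchanged.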

A-CNN [72] defines an annular convolution by looping over the array of neighbors with respect to the kernel size on each ring of the query point. A-CNN learns the relationship between neighboring points in a local subset. To reduce the computational and memory cost of 3D CNNs, Kumawat et al. [74] proposed a Rectified Local Phase Volume (ReLPV) block to extract the phase in a 3D local neighborhood based on the 3D Short-Term Fourier Transform (STFT), which significantly reduces the number of parameters. In SFCNN [134], a point cloud is projected onto regular icosahedral lattices with aligned spherical coordinates. Convolutions are then conducted on the features concatenated from the vertices of the spherical lattices and their neighbors through convolution-maxpooling-convolution structures. SFCNN is resistant to rotations and perturbations.

2.2.3 Graph-based Networks

Graph-based networks consider each point in a point cloud as a vertex of a graph, and generate directed edges for the graph based on the neighbors of each point. Feature learning is then performed in spatial or spectral domains [146]. A typical graph-based network is shown in Fig. 5.

Fig. 5: An illustration of a graph-based network.

Graph-based Methods in Spatial Domain. These methods define operations (e.g., convolution and pooling) in the spatial domain. Specifically, convolution is usually implemented through an MLP over spatial neighbors, while pooling produces a new coarsened graph by aggregating information from each point's neighbors. Features at each vertex are usually assigned coordinates, laser intensities, or colors, while features at each edge are usually assigned geometric attributes between the two connected points.

As a pioneering work, Simonovsky et al. [146] considered each point as a vertex of the graph and connected each vertex to all its neighbors by a directed edge. Then, Edge-Conditioned Convolution (ECC) is proposed using a filter-generating network (e.g., an MLP). Max pooling is adopted to aggregate neighborhood information, and graph coarsening is implemented based on the VoxelGrid [138] algorithm. For shape classification, convolutions and pooling are first interlaced, and then global average pooling and fully connected layers are applied to produce classification scores. In DGCNN [168], a graph is constructed in the feature space and dynamically updated after each layer of the network. As the core layer of EdgeConv, an MLP is used as the feature learning function for each edge, and channel-wise symmetric aggregation is applied to the edge features associated with the neighbors of each point. Further, LDGCNN [203] removed the transformation network and linked the hierarchical features from different layers of DGCNN [168] to improve its performance and reduce the model size. An end-to-end unsupervised deep autoencoder network (namely FoldingNet [187]) is also proposed, which uses the concatenation of a vectorized local covariance matrix and point coordinates as its input.
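An EdgeConv-style layer can be sketched in numpy as follows (a single random-weight linear map stands in for the edge MLP, and sizes are arbitrary; this illustrates the kNN-graph-plus-edge-function pattern, not the full DGCNN):

```python
import numpy as np

rng = np.random.default_rng(4)
C_in, C_out, k = 6, 12, 4
W = rng.normal(size=(2 * C_in, C_out)) * 0.2  # placeholder for the edge MLP

def edge_conv(feats):
    """Build a kNN graph in feature space, apply an edge function to
    (x_i, x_j - x_i) pairs, and max-aggregate over each vertex's edges."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    nbrs = d.argsort(axis=1)[:, 1:k + 1]          # k nearest neighbors, excluding self
    out = np.empty((len(feats), C_out))
    for i, nn in enumerate(nbrs):
        edges = np.concatenate(
            [np.repeat(feats[i][None], k, 0), feats[nn] - feats[i]], axis=1)
        out[i] = np.maximum(edges @ W, 0.0).max(axis=0)  # max over incident edges
    return out

feats = rng.normal(size=(32, C_in))
assert edge_conv(feats).shape == (32, C_out)
```

Re-running `edge_conv` on its own output rebuilds the kNN graph from the new features, which is the "dynamically updated" graph described above.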

Inspired by Inception [153] and DGCNN [168], Hassani et al. [53] proposed an unsupervised multi-task autoencoder to learn point and shape features. The encoder is constructed based on multi-scale graphs. The decoder is constructed using three unsupervised tasks, including clustering, self-supervised classification, and reconstruction, which are trained jointly with a multi-task loss. Liu et al. [98] proposed a Dynamic Points Agglomeration Module (DPAM) based on graph convolution to simplify the process of points agglomeration (sampling, grouping and pooling) into a single step, which is implemented through multiplication of the agglomeration matrix and the points feature matrix. Based on the PointNet architecture, a hierarchical learning architecture is constructed by stacking multiple DPAMs. Compared to the hierarchy strategy of PointNet++, DPAM dynamically exploits the relation of points and agglomerates points in a semantic space.

To exploit the local geometric structures, KCNet [140] learns features based on kernel correlation. Specifically, a set of learnable points characterizing the geometric types of local structures is defined as kernels; then, the affinity between the kernels and the neighborhood of a given point is calculated. In G3D [32], convolution is defined as a variant of a polynomial of the adjacency matrix, and pooling is defined as multiplying the Laplacian matrix and the vertex matrix by a coarsening matrix. ClusterNet [16] utilizes a Rigorously Rotation-Invariant (RRI) module to extract rotation-invariant features for each point, and constructs hierarchical structures of a point cloud based on unsupervised agglomerative hierarchical clustering with the ward-linkage criterion [120]. The features in each sub-cluster are first learned through an EdgeConv block and then aggregated through max pooling.

Graph-based Methods in Spectral Domain. These methods define convolutions as spectral filtering, which is implemented as the multiplication of signals on a graph with the eigenvectors of the graph Laplacian matrix. To handle the challenges of high computational cost and non-localized filters, Defferrard et al. [30] proposed truncated Chebyshev polynomials to approximate the spectral filtering. The learned feature maps are located within the K-hop neighborhood of each point. Note that the eigenvectors are calculated from a fixed graph Laplacian matrix in [15, 30]. In contrast, RGCNN [156] constructs a graph by connecting each point with all other points in the point cloud and updates the graph Laplacian matrix in each layer. To make the features of adjacent vertices more similar, a graph-signal smoothness prior is added to the loss function. To address the challenges caused by the diverse graph topologies of data, the SGC-LL layer in AGCN [87] utilizes a learnable distance metric to parameterize the similarity between two vertices on the graph. The adjacency matrix obtained from the graph is normalized using Gaussian kernels and learned distances. Feng et al. [42] proposed a Hypergraph Neural Network (HGNN) and built a hyperedge convolutional layer by applying spectral convolution on the hypergraph.
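The Chebyshev approximation avoids the eigendecomposition entirely: the filter is evaluated through a three-term recurrence in powers of the (rescaled) Laplacian. A minimal numpy sketch on a toy path graph (the coefficients `theta` are arbitrary stand-ins for learned parameters):

```python
import numpy as np

def chebyshev_filter(A, x, theta):
    """Spectral filtering approximated by Chebyshev polynomials of the
    rescaled graph Laplacian; no eigenvector computation is needed."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                        # combinatorial graph Laplacian
    lmax = np.linalg.eigvalsh(L).max()          # exact here; bounded/estimated in practice
    L_hat = 2.0 * L / lmax - np.eye(len(A))     # rescale spectrum into [-1, 1]
    t_prev, t_cur = x, L_hat @ x                # T_0(L)x and T_1(L)x
    out = theta[0] * t_prev + theta[1] * t_cur
    for k in range(2, len(theta)):
        t_prev, t_cur = t_cur, 2.0 * L_hat @ t_cur - t_prev  # Chebyshev recurrence
        out += theta[k] * t_cur
    return out

# Toy 4-node path graph 0-1-2-3 with an impulse signal at node 0.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([1.0, 0.0, 0.0, 0.0])
y = chebyshev_filter(A, x, theta=np.array([0.5, 0.3, 0.2]))

# A degree-2 polynomial filter is 2-hop localized: node 3 (3 hops away) is untouched.
assert np.isclose(y[3], 0.0)
```

The final assertion demonstrates the K-hop locality property mentioned above: a degree-K polynomial of the Laplacian cannot propagate a signal farther than K edges.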

The aforementioned methods operate on full graphs. To exploit the local structural information, Wang et al. [161] proposed an end-to-end spectral convolution network, LocalSpecGCN, to work on a local graph (which is constructed from the k-nearest neighbors). This method does not require any offline computation of the graph Laplacian matrix or the graph coarsening hierarchy. In PointGCN [204], a graph is constructed based on the k-nearest neighbors of each point in a point cloud, and each edge is weighted using a Gaussian kernel. Convolutional filters are defined as Chebyshev polynomials in the graph spectral domain. Global pooling and multi-resolution pooling are used to capture the global and local features of the point cloud. Pan et al. [122] proposed 3DTI-Net by applying convolution on the k-nearest neighbor graph in the spectral domain. Invariance to geometric transformations is achieved by learning from relative Euclidean and direction distances.

2.2.4 Data Indexing-based Networks

These networks are constructed based on different data indexing structures (e.g., octrees and kd-trees). In these methods, point features are learned hierarchically from the leaf nodes to the root node along a tree. Lei et al. [83] proposed an octree-guided CNN using spherical convolutional kernels (as described in Section 2.2.2). Each layer of the network corresponds to one layer of the octree, and a spherical convolutional kernel is applied at each layer. The values of the neurons in the current layer are determined as the mean values of all relevant children nodes in the previous layer. Unlike OctNet [136] (which is based on octrees), Kd-Net [71] is built using multiple k-d trees with different splitting directions at each iteration. Following a bottom-up approach, the representation of a non-leaf node is computed from the representations of its children using an MLP. The feature of the root node (which describes the whole point cloud) is finally fed to fully connected layers to predict classification scores. Note that Kd-Net shares parameters at each level according to the splitting type of the nodes. 3DContextNet [199] uses a standard balanced k-d tree to achieve feature learning and aggregation. At each level, point features are first learned through MLPs based on local cues (which model the inter-dependencies between points in a local region) and global contextual cues (which model the relationship of one position with respect to all other positions). Then, the feature of a non-leaf node is computed from its child nodes using an MLP and aggregated by max pooling. For classification, this process is repeated until the root node is reached.
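The bottom-up, tree-guided feature computation can be sketched in numpy (single random-weight linear maps stand in for the leaf and non-leaf MLPs, and, unlike Kd-Net, weights here are shared across all levels for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)
D = 8
W_leaf = rng.normal(size=(3, D)) * 0.3    # embeds a single point's coordinates
W_node = rng.normal(size=(2 * D, D)) * 0.3  # merges two child features into a parent

def kdtree_feature(points, axis=0):
    """Recursively split points by the median along cycling axes (a balanced
    k-d tree) and compute features bottom-up from leaves to the root."""
    if len(points) == 1:
        return np.maximum(points[0] @ W_leaf, 0.0)
    order = points[:, axis].argsort()
    half = len(points) // 2
    left = kdtree_feature(points[order[:half]], (axis + 1) % 3)
    right = kdtree_feature(points[order[half:]], (axis + 1) % 3)
    return np.maximum(np.concatenate([left, right]) @ W_node, 0.0)

cloud = rng.normal(size=(16, 3))   # power-of-two size keeps the tree perfectly balanced
root = kdtree_feature(cloud)       # root feature describes the whole cloud
assert root.shape == (D,)
```

The root feature would then be fed to fully connected layers for classification, as in the methods above.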

The hierarchy of the SO-Net network is constructed by performing point-to-node nearest neighbor search [86]. Specifically, a modified permutation-invariant Self-Organizing Map (SOM) is used to model the spatial distribution of a point cloud. Individual point features are learned from normalized point-to-node coordinates through a series of fully connected layers. The feature of each node in the SOM is extracted from the point features associated with this node using channel-wise max pooling. The final feature is then learned from the node features using an approach similar to PointNet [129]. Compared to PointNet++ [131], the hierarchy of the SOM is more efficient, and the spatial distribution of the point cloud is fully explored.

Methods Input #params (M) ModelNet40 (OA) ModelNet40 (mAcc) ModelNet10 (OA) ModelNet10 (mAcc)
Pointwise MLP
PointNet [129] Coordinates 3.48 89.2% 86.2% - -
PointNet++ [131] Coordinates 1.48 90.7% - - -
MO-Net [69] Coordinates 3.1 89.3% 86.1% - -
Deep Sets [197] Coordinates - 87.1% - - -
PAT [186] Coordinates - 91.7% - - -
PointWeb [208] Coordinates - 92.3% 89.4% - -
SRN-PointNet++ [33] Coordinates - 91.5% - - -
JUSTLOOKUP [94] Coordinates - 89.5% 86.4% 92.9% 92.1%
Convolution-based
Pointwise-CNN [59] Coordinates - 86.1% 81.4% - -
PointConv [175] Coordinates+Normals - 92.5% - - -
MC Convolution [54] Coordinates - 90.9% - - -
SpiderCNN [180] Coordinates+Normals - 92.4% - - -
PointCNN [88] Coordinates 0.45 92.2% 88.1% - -
Flex-Convolution [48] Coordinates - 90.2% - - -
PCNN [111] Coordinates 1.4 92.3% - 94.9% -
Boulch [14] Coordinates - 91.6% 88.1% - -
RS-CNN [104] Coordinates - 92.6% - - -
Spherical CNNs [40] Coordinates 0.5 88.9% - - -
GeoCNN [76] Coordinates - 93.4% 91.1% - -
Ψ-CNN [83] Coordinates - 92.0% 88.7% 94.6% 94.4%
A-CNN [72] Coordinates - 92.6% 90.3% 95.5% 95.3%
SFCNN [134] Coordinates - 91.4% - - -
SFCNN [134] Coordinates+Normals - 92.3% - - -
DensePoint [103] Coordinates 0.53 93.2% - 96.6% -
KPConv rigid [157] Coordinates - 92.9% - - -
KPConv deform [157] Coordinates - 92.7% - - -
InterpCNN [110] Coordinates 12.8 93.0% - - -
ConvPoint [13] Coordinates - 91.8% 88.5% - -
Graph-based
ECC [146] Coordinates - 87.4% 83.2% 90.8% 90.0%
KCNet [140] Coordinates 0.9 91.0% - 94.4% -
DGCNN [168] Coordinates 1.84 92.2% 90.2% - -
LocalSpecGCN [161] Coordinates+Normals - 92.1% - - -
RGCNN [156] Coordinates+Normals 2.24 90.5% 87.3% - -
LDGCNN [203] Coordinates - 92.9% 90.3% - -
3DTI-Net [122] Coordinates 2.6 91.7% - - -
PointGCN [204] Coordinates - 89.5% 86.1% 91.9% 91.6%
ClusterNet [16] Coordinates - 87.1% - - -
Hassani et al. [53] Coordinates - 89.1% - - -
DPAM [98] Coordinates - 91.9% 89.9% 94.6% 94.3%
Data Indexing-based
KD-Net [71] Coordinates 2.0 91.8% 88.5% 94.0% 93.5%
SO-Net [86] Coordinates - 90.9% 87.3% 94.1% 93.9%
SCN [177] Coordinates - 90.0% 87.6% - -
A-SCN [177] Coordinates - 89.8% 87.4% - -
3DContextNet [199] Coordinates - 90.2% - - -
3DContextNet [199] Coordinates+Normals - 91.1% - - -
Other Networks
3DmFV-Net [9] Coordinates 4.6 91.6% - 95.2% -
PVNet [194] Coordinates+Views - 93.2% - - -
PVRNet [195] Coordinates+Views - 93.6% - - -
3DPointCapsNet [210] Coordinates - 89.3% - - -
DeepRBFNet [18] Coordinates 3.2 90.2% 87.8% - -
DeepRBFNet [18] Coordinates+Normals 3.2 92.1% 88.8% - -
Point2Sequences [102] Coordinates - 92.6% 90.4% 95.3% 95.1%
RCNet [174] Coordinates - 91.6% - 94.7% -
RCNet-E [174] Coordinates - 92.3% - 95.6% -
TABLE I: Comparative 3D shape classification results on the ModelNet10/40 benchmarks. Here, we only focus on point-based networks; ‘#params’ means the number of parameters of the corresponding model. ‘OA’ represents overall accuracy and ‘mAcc’ represents mean accuracy. The symbol ‘-’ means the results are unavailable.

2.2.5 Other Networks

In addition to the above methods, many other schemes have also been proposed. In 3DmFV [9], a point cloud is voxelized into uniform 3D grids and Fisher vectors are extracted based on the likelihood of a set of Gaussian mixture models defined on these grids. Since the components of a Fisher vector are summed over all points, the resulting representation is invariant to the order, structure and size of point clouds. RBFNet [18] explicitly models the spatial distribution of points by aggregating features from sparsely distributed Radial Basis Function (RBF) kernels. The RBF feature extraction layer computes responses from all kernels for each point, and the kernel positions and sizes are then optimized during training to capture the spatial distribution of points. Compared to fully connected layers, the RBF feature extraction layer produces more discriminative features while reducing the number of parameters by orders of magnitude. Zhao et al. [210] proposed an unsupervised auto-encoder, 3DPointCapsNet, for generic representation learning of 3D point clouds. In the encoder stage, a pointwise MLP is first applied to the point cloud to extract point-independent features, which are further fed into multiple independent convolutional layers. A global latent representation is then extracted by concatenating multiple max-pooled learned feature maps. Based on unsupervised dynamic routing, powerful representative latent capsules are learned. Inspired by the construction of the shape context descriptor [7], Xie et al. [177] proposed a novel ShapeContextNet architecture by combining affinity point selection and compact feature aggregation into a soft alignment operation using dot-product self-attention [159]. To address noise and occlusion in 3D point clouds, Bobkov et al. [11] fed handcrafted point pair function based 4D rotation invariant descriptors into a 4D convolutional neural network. Prokudin et al. [126] first randomly sampled a basis point set with a uniform distribution from a unit ball, and then encoded the point cloud as its minimal distances to the basis point set, which converts the point cloud to a vector with a relatively small fixed length. The encoded representation can then be processed with existing machine learning methods. RCNet [174] utilizes a standard RNN and a 2D CNN to construct a permutation-invariant network for 3D point cloud processing. The point cloud is first partitioned into parallel beams and sorted along a specific dimension, and each beam is then fed into a shared RNN. The learned features are further fed into an efficient 2D CNN for hierarchical feature aggregation. To enhance its description ability, RCNet-E is proposed to ensemble multiple RCNets along different partition and sorting directions. Point2Sequences [102] is another RNN-based model that captures correlations between different areas in local regions of point clouds. It considers features learned from a local region at multiple scales as sequences and feeds these sequences from all local regions into an RNN-based encoder-decoder structure to aggregate local region features. Qin et al. [132] proposed PointDAN, an end-to-end unsupervised domain adaptation-based network for 3D point cloud representation. To capture the semantic properties of a point cloud, a self-supervised method is proposed to reconstruct a point cloud whose parts have been randomly rearranged [144].
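The basis point set encoding of Prokudin et al. can be sketched in a few lines; the sampling scheme, basis size and function interface below are illustrative rather than the paper's exact configuration:

```python
import numpy as np

def bps_encode(points, n_basis=512, seed=0):
    """Encode a point cloud (N, 3) as its minimal distances to a fixed
    basis point set sampled uniformly from the unit ball. The resulting
    vector has a fixed length independent of N and of point order."""
    rng = np.random.default_rng(seed)
    basis = np.empty((0, 3))
    while len(basis) < n_basis:  # rejection-sample the unit ball
        cand = rng.uniform(-1.0, 1.0, size=(2 * n_basis, 3))
        basis = np.vstack([basis, cand[np.linalg.norm(cand, axis=1) <= 1.0]])
    basis = basis[:n_basis]
    # pairwise distances (n_basis, N); keep the minimum per basis point
    d = np.linalg.norm(basis[:, None, :] - points[None, :, :], axis=2)
    return d.min(axis=1)
```

Because the minimum over points ignores their ordering, the encoding is permutation invariant, and the fixed output length lets standard classifiers consume it directly.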

Several methods have also been proposed to learn from both 3D point clouds and 2D images. In PVNet [194], high-level global features extracted from multi-view images are projected into the subspace of point clouds through an embedding network, and fused with point cloud features through a soft attention mask. Finally, a residual connection between the fused features and the multi-view features is employed to perform shape recognition. Later, PVRNet [195] was proposed to exploit the relation between a 3D point cloud and its multiple views, learned by a relation score module. Based on the relation scores, the original 2D global view features are enhanced for point-single-view fusion and point-multi-view fusion.

The ModelNet10/40 datasets are the most frequently used datasets for shape classification. Table I shows the results achieved by different point-based networks. Several observations can be drawn:

  • Pointwise MLP networks usually serve as basic building blocks for other types of networks to learn pointwise features.

  • As a standard deep learning architecture, convolution-based networks can achieve superior performance on irregular 3D point clouds. More attention should be paid to both discrete and continuous convolution networks for irregular data.

  • Due to their inherent capability to handle irregular data, graph-based networks have attracted increasing attention in recent years. However, it is still challenging to extend graph-based networks in the spectral domain to various graph structures.

  • Most networks need to down-sample a point cloud to a fixed small size. This sampling process discards the details of the shape. Developing networks that can deal with large-scale point clouds is still in its infancy [57].
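As an illustration of the down-sampling step mentioned in the last observation, a common choice in point-based networks (e.g., PointNet++) is farthest point sampling; a minimal sketch, with the interface being ours:

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedy farthest point sampling: starting from a random point,
    iteratively pick the point farthest from the set already chosen.
    `points` is (N, 3); returns (n_samples, 3)."""
    rng = np.random.default_rng(seed)
    chosen = [rng.integers(len(points))]
    # distance of every point to the nearest chosen point so far
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(n_samples - 1):
        idx = int(dist.argmax())          # farthest remaining point
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]
```

Farthest point sampling preserves coverage of the shape better than uniform random sampling, but as noted above any fixed-size sampling still discards fine details.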

3 3D Object Detection and Tracking

In this section, we will review existing methods for 3D object detection, 3D object tracking and 3D scene flow estimation.

3.1 3D Object Detection

The task of 3D object detection is to accurately locate all objects of interest in a given scene. Similar to object detection in images [99], 3D object detection methods can be divided into two categories: region proposal-based methods and single shot methods. Several milestone methods are presented in Fig. 6.

Fig. 6: Chronological overview of the most relevant deep learning-based 3D object detection methods.

3.1.1 Region Proposal-based Methods

These methods first propose several possible regions (also called proposals) containing objects, and then extract region-wise features to determine the category label of each proposal. According to their object proposal generation approach, these methods can further be divided into three categories: multi-view based, segmentation-based and frustum-based methods.

Multi-view Methods. These methods fuse proposal-wise features from different view maps (e.g., LiDAR front view, bird’s eye view (BEV) and image) to obtain 3D rotated boxes, as shown in Fig. 7(a). The computational cost of these methods is usually high.

Fig. 7: Typical networks for three categories of 3D object detection methods. From top to bottom: (a) multi-view based, (b) segmentation-based and (c) frustum-based methods.

Chen et al. [19] generated a group of highly accurate 3D candidate boxes from the BEV map and projected them to the feature maps of multiple views (e.g., LiDAR front view image, RGB image). They then combined these region-wise features obtained from different views to predict oriented 3D bounding boxes, as shown in Fig. 7(a). Although this method achieves a recall of 99.1% at an Intersection-over-Union (IoU) of 0.25 with only 300 proposals, its speed is too slow for practical applications. Subsequently, several approaches have been developed to improve multi-view 3D object detection methods from two aspects.

First, several methods have been proposed to efficiently fuse the information of different modalities. To generate 3D proposals with a high recall for small objects, Ku et al. [73] proposed a multi-modal fusion-based region proposal network. They first extracted equal-sized features from both BEV and image views using cropping and resizing operations, and then fused these features using element-wise mean pooling. Liang et al. [90] exploited continuous convolutions to enable effective fusion of image and 3D LiDAR feature maps at different resolutions. Specifically, they extracted nearest corresponding image features for each point in the BEV space and then used bilinear interpolation to obtain a dense BEV feature map by projecting image features into the BEV plane. Experimental results show that dense BEV feature maps are more suitable for 3D object detection than discrete image feature maps and sparse LiDAR feature maps. Liang et al. [89] presented a multi-task multi-sensor 3D object detection network for end-to-end training. Specifically, multiple tasks (e.g., 2D object detection, ground estimation and depth completion) are exploited to help the network learn better feature representations. The learned cross-modality representation is further exploited to produce highly accurate object detection results. Experimental results show that this method achieves a significant improvement on 2D, 3D and BEV detection tasks, and outperforms previous state-of-the-art methods on the TOR4D benchmark [184, 108].
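The bilinear interpolation step used to lift discrete image features onto the dense BEV plane can be sketched as follows; the feature map and coordinates are illustrative, and the camera projection that produces the pixel coordinates is assumed given:

```python
import numpy as np

def bilinear_sample(feat, u, v):
    """Bilinearly interpolate an image feature map `feat` (H, W, C) at
    continuous pixel coordinates (u, v), with u indexing columns and v
    indexing rows. Coordinates are assumed to lie inside the map."""
    h, w, _ = feat.shape
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    du, dv = u - u0, v - v0
    # weighted sum of the four surrounding feature vectors
    return ((1 - dv) * (1 - du) * feat[v0, u0]
            + (1 - dv) * du * feat[v0, u1]
            + dv * (1 - du) * feat[v1, u0]
            + dv * du * feat[v1, u1])
```

Sampling at the continuous projection of each BEV point (rather than snapping to the nearest pixel) is what yields the dense, smooth BEV feature maps reported to work better than discrete ones.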

Second, different methods have been investigated to extract robust representations of the input data. Lu et al. [107] explored multi-scale contextual information by introducing a Spatial Channel Attention (SCA) module, which captures the global and multi-scale context of a scene and highlights useful features. They also proposed an Extension Spatial Upsample (ESU) module to obtain high-level features with rich spatial information by combining multi-scale low-level features, thus generating reliable 3D object proposals. Although better detection performance can be achieved, the aforementioned multi-view methods have long runtimes since they perform feature pooling for each proposal. Subsequently, Zeng et al. [200] used a pre-RoI pooling convolution to improve the efficiency of [19]. Specifically, they moved the majority of convolution operations ahead of the RoI pooling module, so that RoI convolutions are performed once for all object proposals. Experimental results show that this method can run at a speed of 11.1 fps, which is 5 times faster than MV3D [19].

Segmentation-based Methods. These methods first leverage existing semantic segmentation techniques to remove most background points, and then generate a large amount of high-quality proposals on foreground points to save computation, as shown in Fig. 7(b). Compared to multi-view methods [19, 73, 200], these methods achieve higher object recall rates and are more suitable for complicated scenes with highly occluded and crowded objects.

Yang et al. [189] used a 2D segmentation network to predict foreground pixels and projected them into point clouds to remove most background points. They then generated proposals on the predicted foreground points and designed a new criterion named PointsIoU to reduce the redundancy and ambiguity of proposals. Following [189], Shi et al. [141] proposed the PointRCNN framework. Specifically, they directly segmented 3D point clouds to obtain foreground points and then fused semantic features and local spatial features to produce high-quality 3D boxes. Following the RPN stage of [141], Jesus et al. [65] proposed a pioneering work that leverages a Graph Convolution Network (GCN) for 3D object detection. Specifically, two modules are introduced to refine object proposals using graph convolution. The first module, R-GCN, utilizes all points contained in a proposal to achieve per-proposal feature aggregation. The second module, C-GCN, fuses per-frame information from all proposals to regress accurate object boxes by exploiting contexts. Sourabh et al. [149] projected a point cloud into the output of an image-based segmentation network and appended the semantic prediction scores to the points. The painted points are fed into existing detectors [141, 213, 79] to achieve significant performance improvements. Yang et al. [190] associated each point with a spherical anchor. The semantic score of each point is then used to remove redundant anchors. Consequently, this method achieves a higher recall with lower computational cost than previous methods [189, 141]. In addition, a PointsPool layer is proposed to learn compact features for interior points in proposals, and a parallel IoU branch is introduced to improve localization accuracy and detection performance. Experimental results show that this method significantly outperforms other methods [141, 169, 89] on the hard set (Car class) of the KITTI dataset [44] and runs at a speed of 12.5 fps.

Frustum-based Methods. These methods first leverage existing 2D object detectors to generate 2D candidate regions of objects and then extract a 3D frustum proposal for each 2D candidate region, as shown in Fig. 7 (c). Although these methods can efficiently propose possible locations of 3D objects, the step-by-step pipeline makes their performance limited by 2D image detectors.

F-PointNets [128] is a pioneering work in this direction. It generates a frustum proposal for each 2D region and applies PointNet [129] (or PointNet++ [131]) to learn point cloud features of each 3D frustum for amodal 3D box estimation. In a follow-up work, Zhao et al. [209] proposed a Point-SENet module to predict a set of scaling factors, which were further used to adaptively highlight useful features and suppress less informative features. They also integrated the PointSIFT [67] module into the network to capture orientation information of point clouds, which achieved strong robustness to shape scaling. This method achieves significant improvements on both indoor and outdoor datasets [44, 148] compared to F-PointNets [128].
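The frustum extraction step shared by these methods can be sketched as follows, assuming points already transformed into the frame of a pinhole camera with intrinsic matrix K and no distortion (the helper name and interface are ours):

```python
import numpy as np

def frustum_points(points_cam, box2d, K):
    """Select points (in camera coordinates, z pointing forward) whose
    image projections fall inside a 2D detection box given as
    (xmin, ymin, xmax, ymax)."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    # pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    xmin, ymin, xmax, ymax = box2d
    mask = (z > 0) & (u >= xmin) & (u <= xmax) & (v >= ymin) & (v <= ymax)
    return points_cam[mask]
```

The resulting point subset forms the frustum proposal that a PointNet-style network then consumes for amodal 3D box estimation.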

Xu et al. [179] leveraged both the 2D image region and its corresponding frustum points to accurately regress 3D boxes. To fuse image features and global features of point clouds, they presented a global fusion network for direct regression of box corner locations. They also proposed a dense fusion network for the prediction of point-wise offsets to each corner. Shin et al. [143] first estimated the 2D bounding boxes and 3D poses of objects from a 2D image, and then extracted multiple geometrically feasible object candidates. These 3D candidates were fed into a box regression network to predict accurate 3D object boxes. Wang et al. [169] generated a sequence of frustums along the frustum axis for each 2D region and applied PointNet [129] to extract features for each frustum. The frustum-level features were reformed to generate a 2D feature map, which was then fed into a fully convolutional network for 3D box estimation. This method achieves the state-of-the-art performance among 2D image-based methods and was ranked in the top position of the official KITTI leaderboard. Lehner et al. [68] first obtained preliminary detection results on the BEV map, and then extracted small point subsets (also called patches) based on the BEV predictions. A local refinement network is applied to learn the local features of the patches to predict highly accurate 3D bounding boxes.

Other Methods. Motivated by the success of axis-aligned IoU in object detection in images, Zhou et al. [31] integrated the IoU of two 3D rotated bounding boxes into several state-of-the-art detectors [182, 79, 141] to achieve consistent performance improvement. Chen et al. [20] proposed a two-stage network architecture to use both point cloud and voxel representations. First, point clouds are voxelized and fed to a 3D backbone network to produce initial detection results. Second, the interior point features of initial predictions are further exploited for box refinements. Although this design is conceptually simple, it achieves comparable performance to PointRCNN [141] while maintaining a speed of 16.7 fps.
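For intuition, the axis-aligned special case of 3D box IoU can be computed as below; the rotated-box IoU used by [31] additionally requires polygon clipping to handle yaw, which this sketch omits:

```python
import numpy as np

def iou_3d_axis_aligned(a, b):
    """IoU of two axis-aligned 3D boxes, each given as a NumPy array
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(a[:3], b[:3])          # intersection lower corner
    hi = np.minimum(a[3:], b[3:])          # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)
```

Using IoU directly as a loss (rather than only as an evaluation metric) is what gives the consistent gains reported when it is integrated into existing detectors.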

Inspired by Hough voting-based 2D object detectors, Qi et al. [127] proposed VoteNet to directly vote for the virtual center points of objects from point clouds and to generate a group of high-quality 3D object proposals by aggregating vote features. VoteNet significantly outperforms previous approaches using only geometric information, and achieves the state-of-the-art performance on two large indoor benchmarks (i.e., ScanNet [26] and SUN RGB-D [148]). However, the prediction of virtual center points is unstable for partially occluded objects. Further, Feng et al. [117] added an auxiliary branch of direction vectors to improve the prediction accuracy of virtual center points and 3D candidate boxes. In addition, a 3D object-object relationship graph between proposals is built to emphasize useful features for accurate object detection. Inspired by the observation that the ground truth boxes of 3D objects provide accurate locations of intra-object parts, Shi et al. [142] proposed the Part-A² Net, which is composed of a part-aware stage and a part-aggregation stage. The part-aware stage applies a UNet-like network with sparse convolution and sparse deconvolution to learn point-wise features for the prediction and coarse generation of intra-object part locations. The part-aggregation stage adopts RoI-aware pooling to aggregate predicted part locations for box scoring and location refinement.
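The voting idea behind VoteNet can be illustrated with a simplified sketch: each seed point votes for an object center, and nearby votes are grouped into proposals. VoteNet learns this aggregation end-to-end; the greedy radius-based grouping below is only an illustration, not the paper's method:

```python
import numpy as np

def cluster_votes(votes, radius=0.3):
    """Greedily group center votes: repeatedly take an unassigned vote,
    collect all votes within `radius`, and average them into one
    object-center proposal. `votes` is (N, 3)."""
    remaining = np.ones(len(votes), dtype=bool)
    centers = []
    while remaining.any():
        seed = votes[np.flatnonzero(remaining)[0]]
        near = remaining & (np.linalg.norm(votes - seed, axis=1) <= radius)
        centers.append(votes[near].mean(axis=0))
        remaining &= ~near
    return np.array(centers)
```

Averaging votes is what makes the scheme robust to individual noisy predictions, though, as noted above, votes from partially occluded objects can still be unstable.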

3.1.2 Single Shot Methods

These methods directly predict class probabilities and regress 3D bounding boxes of objects using a single-stage network. These methods do not need region proposal generation and post-processing. As a result, they can run at a high speed and are highly suitable for real-time applications. According to the type of input data, single shot methods can be divided into two categories: BEV-based and point cloud-based methods.

BEV-based Methods. These methods mainly take the BEV representation as their input. Yang et al. [184] discretized the point cloud of a scene with equally spaced cells and encoded the reflectance in a similar way, resulting in a regular representation. A Fully Convolutional Network (FCN) was then applied to estimate the locations and heading angles of objects. This method outperforms most single shot methods (including VeloFCN [84], 3D-FCN [85] and Vote3Deep [35]) while running at 28.6 fps. Later, Yang et al. [183] exploited the geometric and semantic prior information provided by High-Definition (HD) maps to improve the robustness and detection performance of [184]. Specifically, they obtained the coordinates of ground points from the HD map and then replaced the absolute distance in the BEV representation with the distance relative to the ground, to remedy the translation variance caused by the slope of the road. In addition, they concatenated a binary road mask with the BEV representation along the channel dimension to focus on moving objects. Since HD maps are not available everywhere, they also proposed an online map prediction module to estimate the map priors from a single LiDAR point cloud. This map-aware method significantly outperforms its baseline on the TOR4D [184, 108] and KITTI [44] datasets. However, its generalization performance to point clouds with different densities is poor. To solve this problem, Beltrán et al. [8] proposed a normalization map to account for the differences among LiDAR sensors. The normalization map is a 2D grid with the same resolution as the BEV map, and it encodes the maximum number of points contained in each cell. It is shown that this normalization map significantly improves the generalization ability of BEV-based detectors.
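The BEV discretization these methods start from can be sketched as follows, assuming (x, y, z, reflectance) input; real detectors such as PIXOR additionally use multiple height slices, and the ranges and cell size here are illustrative:

```python
import numpy as np

def points_to_bev(points, x_range=(0, 70), y_range=(-35, 35), cell=0.1):
    """Discretize a LiDAR cloud (N, 4) of (x, y, z, reflectance) into a
    BEV grid with a max-height channel and a mean-reflectance channel."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    height = np.full((nx, ny), -np.inf)
    refl_sum = np.zeros((nx, ny))
    count = np.zeros((nx, ny))
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z, r in zip(ix[ok], iy[ok], points[ok, 2], points[ok, 3]):
        height[i, j] = max(height[i, j], z)  # keep the tallest point
        refl_sum[i, j] += r
        count[i, j] += 1
    height[count == 0] = 0.0                 # empty cells get zero height
    refl = np.divide(refl_sum, np.maximum(count, 1))
    return np.stack([height, refl], axis=-1)
```

The output is a regular 2D tensor, which is what allows standard image CNNs to be applied directly to LiDAR data.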

Method Modality Speed (fps) Cars (E / M / H) Pedestrians (E / M / H) Cyclists (E / M / H)
MV3D [19] L & I 2.8 74.97 63.63 54.00 - - - - - -
AVOD [73] L & I 12.5 76.39 66.47 60.23 36.10 27.86 25.76 57.19 42.08 38.29
ContFuse [90] L & I 16.7 83.68 68.78 61.67 - - - - - -
MMF [89] L & I 12.5 88.40 77.43 70.22 - - - - - -
SCANet [107] L & I 11.1 79.22 67.13 60.65 - - - - - -
RT3D [200] L & I 11.1 23.74 19.14 18.86 - - - - - -
IPOD [189] L & I 5.0 80.30 73.04 68.73 55.07 44.37 40.05 71.99 52.23 46.50
PointRCNN [141] L 10.0 86.96 75.64 70.70 47.98 39.37 36.01 74.96 58.82 52.53
PointRGCN [65] L 3.8 85.97 75.73 70.60 - - - - - -
PointPainting [149] L & I 2.5 82.11 71.70 67.08 50.32 40.97 37.87 77.63 63.78 55.89
STD [190] L 12.5 87.95 79.71 75.09 53.29 42.47 38.35 78.69 61.59 55.30
F-PointNets [128] L & I 5.9 82.19 69.79 60.59 50.53 42.15 38.08 72.27 56.12 49.01
SIFRNet [209] L & I - - - - - - - - - -
PointFusion [179] L & I - 77.92 63.00 53.27 33.36 28.04 23.38 49.34 29.42 26.98
RoarNet [143] L & I 10.0 83.71 73.04 59.16 - - - - - -
F-ConvNet [169] L & I 2.1 87.36 76.39 66.69 52.16 43.38 38.80 81.98 65.07 56.54
Refinement [68] L 6.7 88.67 77.20 71.82 - - - - - -
3D IoU loss [31] L 12.5 86.16 76.50 71.39 - - - - - -
Fast Point R-CNN [20] L 16.7 84.80 74.59 67.27 - - - - - -
VoteNet [127] L - - - - - - - - - -
Feng et al. [117] L - - - - - - - - - -
Part-A^2 [142] L 12.5 87.81 78.49 73.51 - - - - - -
PIXOR [184] L 28.6 - - - - - - - - -
HDNET [183] L 20.0 - - - - - - - - -
BirdNet [8] L 9.1 13.53 9.47 8.49 12.25 8.99 8.06 16.63 10.46 9.53
VeloFCN [84] L 1.0 - - - - - - - - -
3D FCN [85] L <0.2 - - - - - - - - -
Vote3Deep [35] L - - - - - - - - - -
3DBN [181] L 7.7 83.77 73.53 66.23 - - - - - -
VoxelNet [213] L 2.0 77.47 65.11 57.73 39.48 33.69 31.51 61.22 48.36 44.37
SECOND [182] L 26.3 83.34 72.55 65.82 48.96 38.78 34.91 71.33 52.08 45.83
MVX-Net [147] L & I 16.7 84.99 71.95 64.88 - - - - - -
PointPillars [79] L 62.0 82.58 74.31 68.99 51.45 41.92 38.89 77.10 58.65 51.92
LaserNet [115] L 83.3 - - - - - - - - -
LaserNet++ [114] L & I 26.3 - - - - - - - - -
TABLE II: Comparative 3D object detection results on the KITTI test 3D detection benchmark. 3D bounding box IoU threshold is 0.7 for cars and 0.5 for pedestrians and cyclists. The modalities are LiDAR (L) and image (I). ‘E’, ‘M’ and ‘H’ represent easy, moderate and hard classes of objects, respectively. For simplicity, we omit the ‘%’ after the value. The symbol ‘-’ means the results are unavailable.
Method Modality Speed (fps) Cars (E / M / H) Pedestrians (E / M / H) Cyclists (E / M / H)
MV3D [19] L & I 2.8 86.62 78.93 69.80 - - - - - -
AVOD [73] L & I 12.5 89.75 84.95 78.32 42.58 33.57 30.14 64.11 48.15 42.37
ContFuse [90] L & I 16.7 94.07 85.35 75.88 - - - - - -
MMF [89] L & I 12.5 93.67 88.21 81.99 - - - - - -
SCANet [107] L & I 11.1 90.33 82.85 76.06 - - - - - -
RT3D [200] L & I 11.1 56.44 44.00 42.34 - - - - - -
IPOD [189] L & I 5.0 89.64 84.62 79.96 60.88 49.79 45.43 78.19 59.40 51.38
PointRCNN [141] L 10.0 92.13 87.39 82.72 54.77 46.13 42.84 82.56 67.24 60.28
PointRGCN [65] L 3.8 91.63 87.49 80.73 - - - - - -
PointPainting [149] L & I 2.5 92.45 88.11 83.36 58.70 49.93 46.29 83.91 71.54 62.97
STD [190] L 12.5 94.74 89.19 86.42 60.02 48.72 44.55 81.36 67.23 59.35
F-PointNets [128] L & I 5.9 91.17 84.67 74.77 57.13 49.57 45.48 77.26 61.37 53.78
SIFRNet [209] L & I - - - - - - - - - -
PointFusion [179] L & I - - - - - - - - - -
RoarNet [143] L & I 10.0 88.20 79.41 70.02 - - - - - -
F-ConvNet [169] L & I 2.1 91.51 85.84 76.11 57.04 48.96 44.33 84.16 68.88 60.05
Refinement [68] L 6.7 92.72 88.39 83.19 - - - - - -
Other Methods
3D IoU loss [31] L 12.5 91.36 86.22 81.20 - - - - - -
Fast Point R-CNN [20] L 16.7 90.76 85.61 79.99 - - - - - -
VoteNet [127] L - - - - - - - - - -
Feng et al. [117] L - - - - - - - - - -
Part-A^2 [142] L 12.5 91.70 87.79 84.61 - - - 81.91 68.12 61.92
PIXOR [184] L 28.6 83.97 80.01 74.31 - - - - - -
HDNET [183] L 20.0 89.14 86.57 78.32 - - - - - -
BirdNet [8] L 9.1 76.88 51.51 50.27 20.73 15.80 14.59 36.01 23.78 21.09
VeloFCN [84] L 1.0 0.02 0.14 0.21 - - - - - -
3D FCN [85] L <0.2 70.62 61.67 55.61 - - - - - -
Vote3Deep [35] L - - - - - - - - - -
3DBN [181] L 7.7 89.66 83.94 76.50 - - - - - -
VoxelNet [213] L 2.0 89.35 79.26 77.39 46.13 40.74 38.11 66.70 54.76 50.55
SECOND [182] L 26.3 89.39 83.77 78.59 55.99 45.02 40.93 76.50 56.05 49.45
MVX-Net [147] L & I 16.7 92.13 86.05 78.68 - - - - - -
PointPillars [79] L 62.0 90.07 86.56 82.81 57.60 48.64 45.78 79.90 62.73 55.58
LaserNet [115] L 83.3 79.19 74.52 68.45 - - - - - -
LaserNet++ [114] L & I 26.3 - - - - - - - - -
TABLE III: Comparative 3D object detection results on the KITTI test BEV detection benchmark. 3D bounding box IoU threshold is 0.7 for cars and 0.5 for pedestrians and cyclists. The modalities are LiDAR (L) and image (I). ‘E’, ‘M’ and ‘H’ represent easy, moderate and hard classes of objects, respectively. For simplicity, we omit the ‘%’ after the value. The symbol ‘-’ means the results are unavailable.

Point Cloud-based Methods. These methods convert a point cloud into a regular representation (e.g., a 2D map), and then apply a CNN to predict both the categories and 3D boxes of objects.

Li et al. [84] proposed the first method to use an FCN for 3D object detection. They converted a point cloud into a 2D point map and used a 2D FCN to predict the bounding boxes and confidences of objects. Later, they [85] discretized the point cloud into a 4D tensor with dimensions of length, width, height and channels, and extended the 2D FCN-based detection technique to the 3D domain for 3D object detection. Compared to [84], the 3D FCN-based method [85] obtains a gain of over 20% in accuracy, but inevitably costs more computing resources due to 3D convolutions and the sparsity of the data. To address the sparsity problem of voxels, Engelcke et al. [35] leveraged a feature-centric voting scheme to generate a set of votes for each non-empty voxel and to obtain the convolutional results by accumulating the votes. Its computational complexity is proportional to the number of occupied voxels. Li et al. [181] constructed a 3D backbone network by stacking multiple sparse 3D CNNs. This method is designed to save memory and accelerate computation by fully exploiting the sparsity of voxels. The 3D backbone network extracts rich 3D features for object detection without introducing a heavy computational burden.

Zhou et al. [213] presented VoxelNet, a voxel-based end-to-end trainable framework. They partitioned a point cloud into equally spaced voxels and encoded the features within each voxel into a 4D tensor. A region proposal network is then connected to produce detection results. Although its performance is strong, this method is very slow due to the sparsity of voxels and 3D convolutions. Later, Yan et al. [182] used the sparse convolutional network [47] to improve the inference efficiency of [213]. They also proposed a sine-error angle loss to solve the ambiguity between orientations of 0 and π. Sindagi et al. [147] extended VoxelNet by fusing image and point cloud features at early stages. Specifically, they projected the non-empty voxels generated by [213] into the image and used a pre-trained network to extract image features for each projected voxel. These image features were then concatenated with voxel features to produce accurate 3D boxes. Compared to [213, 182], this method can effectively exploit multi-modal information to reduce false positives and negatives. Lang et al. [79] proposed a 3D object detector named PointPillars. This method leverages PointNet [129] to learn features of point clouds organized in vertical columns (pillars) and encodes the learned features as a pseudo image. A 2D object detection pipeline is then applied to predict 3D bounding boxes. PointPillars outperforms most fusion approaches (including MV3D [19], RoarNet [143] and AVOD [73]) in terms of Average Precision (AP). Moreover, PointPillars can run at a speed of 62 fps on both the 3D and BEV KITTI [44] benchmarks, making it highly suitable for practical applications.
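The scatter step of PointPillars, which places per-pillar feature vectors back onto the 2D grid to form the pseudo image consumed by the 2D detection head, can be sketched as follows; the PointNet-style pillar encoder that produces the features is omitted and the interface is illustrative:

```python
import numpy as np

def scatter_pillars(pillar_features, pillar_coords, grid_shape):
    """Scatter pillar feature vectors onto an empty 2D canvas.
    `pillar_features` is (P, C), `pillar_coords` is (P, 2) integer
    (row, col) grid indices, `grid_shape` is (H, W)."""
    h, w = grid_shape
    c = pillar_features.shape[1]
    canvas = np.zeros((h, w, c))       # empty cells stay zero
    for f, (i, j) in zip(pillar_features, pillar_coords):
        canvas[i, j] = f
    return canvas
```

Because only non-empty pillars are encoded and then scattered back, the expensive point-level computation scales with the number of occupied cells rather than the full grid, which is one reason PointPillars is so fast.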

Other Methods. Meyer et al. [115] proposed an efficient 3D object detector called LaserNet. This method predicts a probability distribution over bounding boxes for each point and then combines these per-point distributions to generate final 3D object boxes. Further, the dense range view (RV) representation of the point cloud is used as input, and a fast mean-shift algorithm is proposed to reduce the noise produced by per-point prediction. LaserNet achieves the state-of-the-art performance at ranges of 0 to 50 meters, and its runtime is significantly lower than that of existing methods. Meyer et al. [114] then extended LaserNet to exploit the dense texture provided by RGB images at long ranges (e.g., 50 to 70 meters). Specifically, they associated LiDAR points with image pixels by projecting 3D point clouds onto 2D images and exploited this association to fuse RGB information into 3D points. They also considered 3D semantic segmentation as an auxiliary task to learn better representations. This method achieves a significant improvement in long-range object detection and semantic segmentation while maintaining the high efficiency of LaserNet [115].

3.2 3D Object Tracking

Given the location of an object in the first frame, the task of object tracking is to estimate its state in subsequent frames [56, 97]. Since 3D object tracking can use the rich geometric information in point clouds, it is expected to overcome several drawbacks faced by 2D image-based tracking, including occlusion, illumination and scale variation.

Inspired by the success of the Siamese network [10] for image-based object tracking, Giancola et al. [45] proposed a 3D Siamese network with shape completion regularization. Specifically, they first generated candidates using a Kalman filter, and encoded the model and candidates into a compact representation using shape regularization. The cosine similarity is then used to search for the location of the tracked object in the next frame. This method can be used as an alternative for object tracking, and significantly outperforms most 2D object tracking methods, including [119] and SiamFC [10]. To efficiently search for the target object, Zarzar et al. [198] leveraged a 2D Siamese network to generate a large number of coarse object candidates on the BEV representation. They then refined the candidates by exploiting the cosine similarity in a 3D Siamese network. This method significantly outperforms [45] in terms of both precision (i.e., by 18%) and success rate (i.e., by 12%). Simon et al. [145] proposed a 3D object detection and tracking architecture for semantic point clouds. They first generated voxelized semantic point clouds by fusing 2D visual semantic information, and then utilized temporal information to improve the accuracy and robustness of multi-target tracking. In addition, they introduced a powerful and simplified evaluation metric (i.e., the Scale-Rotation-Translation score (SRTs)) to speed up training and inference. Their proposed Complexer-YOLO achieves promising tracking performance and can still run in real time.
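The cosine-similarity matching used by the Siamese trackers above can be sketched as follows; the embedding dimension and function interface are illustrative:

```python
import numpy as np

def best_candidate(model_emb, cand_embs):
    """Rank candidate embeddings against the tracked model's embedding
    by cosine similarity and return the index of the best match.
    `model_emb` is (D,), `cand_embs` is (K, D)."""
    m = model_emb / np.linalg.norm(model_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    return int(np.argmax(c @ m))
```

Normalizing the embeddings makes the score depend only on direction, so the tracker compares shapes rather than feature magnitudes.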

3.3 3D Scene Flow Estimation

Analogous to optical flow estimation in 2D vision, several methods have started to learn useful information (e.g., 3D scene flow, spatio-temporal information) from a sequence of point clouds.

Liu et al. [100] proposed FlowNet3D to directly learn scene flow from a pair of consecutive point clouds. FlowNet3D learns both point-level features and motion features through a flow embedding layer. However, there are two problems with FlowNet3D. First, some predicted motion vectors differ significantly from the ground truth in their directions. Second, it is difficult to apply FlowNet3D to non-static scenes, especially scenes dominated by deformable objects. To address these problems, Wang et al. [170] introduced a cosine distance loss to minimize the angle between the predictions and the ground truth. They also proposed a point-to-plane distance loss to improve the accuracy for both rigid and dynamic scenes. Experimental results show that these two loss terms improve the accuracy of FlowNet3D from 57.85% to 63.43%, and speed up and stabilize the training process. Gu et al. [49] proposed a Hierarchical Permutohedral Lattice FlowNet (HPLFlowNet) to directly estimate scene flow from large-scale point clouds. Several bilateral convolution layers are proposed to restore structural information from the raw point clouds while reducing the computational cost.
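The two loss terms of [170] can be sketched as follows; the exact weighting and normalization in the paper may differ from this illustration:

```python
import numpy as np

def flow_losses(pred_flow, gt_flow, gt_normals):
    """Two auxiliary scene-flow losses: a cosine distance term that
    penalizes the angle between predicted and ground-truth flow vectors
    (N, 3), and a point-to-plane term measuring the flow residual along
    the target surface normals (N, 3, unit length)."""
    eps = 1e-8
    cos = np.sum(pred_flow * gt_flow, axis=1) / (
        np.linalg.norm(pred_flow, axis=1) * np.linalg.norm(gt_flow, axis=1) + eps)
    cosine_loss = np.mean(1.0 - cos)               # 0 when directions agree
    point_to_plane = np.mean(
        np.abs(np.sum((pred_flow - gt_flow) * gt_normals, axis=1)))
    return cosine_loss, point_to_plane
```

The cosine term directly targets the direction errors noted above, while the point-to-plane term tolerates sliding along a surface and therefore suits both rigid and deformable scenes.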

To effectively process sequential point clouds, Fan and Yang [41] proposed PointRNN, PointGRU and PointLSTM networks and a sequence-to-sequence model to track moving points. PointRNN, PointGRU, and PointLSTM are able to capture spatio-temporal information and model dynamic point clouds. Similarly, Liu et al. [101] proposed MeteorNet to directly learn a representation from dynamic point clouds. This method learns to aggregate information from spatiotemporal neighboring points. Direct grouping and chained-flow grouping are further introduced to determine the temporal neighbors. However, the performance of the aforementioned methods is limited by the scale of datasets. Mittal et al. [118] proposed two self-supervised losses to train their network on large unlabeled datasets. Their main idea is that a robust scene flow estimation method should be effective in both forward and backward predictions. Due to the unavailability of scene flow annotations, the nearest neighbor of the predicted transformed point is taken as the pseudo ground truth. However, the true ground truth may differ from the nearest point. To avoid this problem, they computed the scene flow in the reverse direction and proposed a cycle consistency loss to translate the point back to its original position. Experimental results show that this self-supervised method exceeds the state-of-the-art performance of supervised learning-based methods.
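The forward-backward idea can be pictured in a few lines. This toy NumPy sketch (our own simplification, not the paper's implementation) uses the nearest neighbor as pseudo ground truth and measures how far a point drifts after a forward-then-backward warp:

```python
import numpy as np

def nearest_neighbor_idx(src, dst):
    """For every point in src, index of the closest point in dst (pseudo ground truth)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def cycle_consistency_loss(points, flow_fwd, backward_flow_fn):
    """Warp forward, estimate the reverse flow, and penalize failure to return."""
    warped = points + flow_fwd
    returned = warped + backward_flow_fn(warped)   # reverse-direction flow estimate
    return np.mean(np.linalg.norm(returned - points, axis=1))
```

A perfect backward estimate cancels the forward flow exactly, driving this loss to zero without any annotation.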

3.4 Summary

The KITTI [44] benchmark is one of the most influential datasets in autonomous driving and has been commonly used in both academia and industry. Tables II and III present the results achieved by different detectors on the KITTI test 3D and BEV benchmarks, respectively. The following observations can be made:

  • Region proposal-based methods are the most frequently investigated methods among these two categories, and outperform single shot methods by a large margin on both KITTI test 3D and BEV benchmarks.

  • There are two limitations for existing 3D object detectors. First, the long-range detection capability of existing methods is relatively poor. Second, how to fully exploit the texture information in images is still an open problem.

  • Multi-task learning is a future direction in 3D object detection. For example, MMF [89] learns a cross-modality representation to achieve state-of-the-art detection performance by incorporating multiple tasks.

  • 3D object tracking and scene flow estimation are emerging research topics, and have gradually attracted increasing attention since 2019.

4 3D Point Cloud Segmentation

3D point cloud segmentation requires the understanding of both the global geometric structure and the fine-grained details of each point. According to the segmentation granularity, 3D point cloud segmentation methods can be classified into three categories: semantic segmentation (scene level), instance segmentation (object level) and part segmentation (part level).

Fig. 8: Chronological overview of some of the most relevant deep learning-based point cloud semantic segmentation methods.

4.1 3D Semantic Segmentation

Given a point cloud, the goal of semantic segmentation is to separate a point cloud into several subsets according to their semantic meanings. Similar to the taxonomy for 3D shape classification (see Section 2), there are two paradigms for semantic segmentation, i.e., projection-based and point-based methods. We show several representative methods in Fig. 8.

4.1.1 Projection-based Networks

Intermediate regular representations can be organized into multi-view representation [80, 12], spherical representation [172, 173, 116], volumetric representation [113, 135, 46], permutohedral lattice representation [150, 137], and hybrid representation [27, 64], as shown in Fig. 9.

Multi-view Representation. Felix et al. [80] first projected a 3D point cloud onto 2D planes from multiple virtual camera views. Then, a multi-stream FCN is used to predict pixel-wise scores on synthetic images. The final semantic label of each point is obtained by fusing the re-projected scores over different views. Similarly, Boulch et al. [12] first generated several RGB and depth snapshots of a point cloud using multiple camera positions. They then performed pixel-wise labeling on these snapshots using 2D segmentation networks. The scores predicted from RGB and depth images are further fused using residual correction [5]. Based on the assumption that point clouds are sampled from locally Euclidean surfaces, Tatarchenko et al. [154] introduced tangent convolutions for dense point cloud segmentation. This method first projects the local surface geometry around each point to a virtual tangent plane. Tangent convolutions are then directly operated on the surface geometry. This method shows great scalability and is able to process large-scale point clouds with millions of points. Overall, the performance of multi-view segmentation methods is sensitive to viewpoint selection and occlusions. Besides, these methods have not fully exploited the underlying geometric and structural information, as the projection step inevitably introduces information loss.
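The fusion step common to these multi-view pipelines can be sketched simply; in the snippet below (shapes and the visibility-weighted average are our own assumptions, since the actual methods fuse learned scores in more elaborate ways), per-view class scores re-projected onto the points are averaged over the views that actually see each point:

```python
import numpy as np

def fuse_view_scores(view_scores, visibility):
    """Fuse re-projected per-view class scores into per-point labels.

    view_scores: (V, N, C) scores re-projected from V views onto N points.
    visibility:  (V, N) True where the point is visible in that view.
    """
    w = visibility[..., None].astype(float)                      # (V, N, 1)
    fused = (view_scores * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-8)
    return fused.argmax(axis=1)                                  # (N,) labels
```

The visibility mask is exactly where the occlusion sensitivity noted above enters: a point occluded in most views gets its label from very little evidence.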

Fig. 9: An illustration of the intermediate representation of projection-based methods.

Spherical Representation. To achieve fast and accurate segmentation of 3D point clouds, Wu et al. [172] proposed an end-to-end network based on SqueezeNet [62] and Conditional Random Field (CRF). To further improve segmentation accuracy, SqueezeSegV2 [173] is introduced to address domain shift by utilizing an unsupervised domain adaptation pipeline. Milioto et al. [116] proposed RangeNet++ for real-time semantic segmentation of LiDAR point clouds. The semantic labels of 2D range images are first transferred to 3D point clouds, and an efficient GPU-enabled KNN-based post-processing step is then used to alleviate the problems of discretization errors and blurry inference outputs. Compared to single view projection, spherical projection retains more information and is suitable for the labeling of LiDAR point clouds. However, this intermediate representation inevitably brings several problems such as discretization errors and occlusions.
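The spherical projection itself is straightforward: each point is mapped to a pixel of an H x W range image via its yaw and pitch angles. The sketch below is our own illustration (the FOV values follow common 64-beam LiDAR settings and are only examples):

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Map 3D LiDAR points (N, 3) to (row, col) pixels of an H x W range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                        # horizontal angle
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))  # vertical angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * W             # columns span the full 360 degrees
    v = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * H
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)
    return v, u, r
```

The discretization errors mentioned above arise exactly at the floor-and-clip step: several points can land in the same pixel, and only one range value survives.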

Volumetric Representation. Huang et al. [60] first divided a point cloud into a set of occupancy voxels. They then fed these intermediate data to a fully-3D convolutional neural network for voxel-wise segmentation. Finally, all points within a voxel are assigned the same semantic label as the voxel. The performance of this method is severely limited by the granularity of the voxels and the boundary artifacts caused by the point cloud partition. Further, Tchapmi et al. [155] proposed SEGCloud to achieve fine-grained and globally consistent semantic segmentation. This method introduces a deterministic trilinear interpolation to map the coarse voxel predictions generated by 3D-FCNN [106] back to the point cloud, and then uses a Fully Connected CRF (FC-CRF) to enforce spatial consistency of these inferred per-point labels. Meng et al. [113] introduced a kernel-based interpolated variational autoencoder architecture to encode the local geometrical structures within each voxel. Instead of a binary occupancy representation, RBFs are employed for each voxel to obtain a continuous representation and capture the distribution of points in each voxel. A VAE is further used to map the point distribution within each voxel to a compact latent space. Then, both symmetry groups and an equivalence CNN are used to achieve robust feature learning.
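The voxel-to-point label transfer shared by these pipelines can be sketched as a nearest-voxel assignment (a deliberate simplification: SEGCloud's trilinear interpolation and CRF are richer than this, and the function names are ours):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Integer voxel coordinate of every point: (N, 3) -> (N, 3)."""
    return np.floor(points / voxel_size).astype(int)

def point_labels_from_voxels(points, voxel_labels, voxel_size):
    """Give each point the class predicted for its containing voxel.

    voxel_labels: dict mapping voxel-coordinate tuples to class ids.
    """
    coords = voxelize(points, voxel_size)
    return np.array([voxel_labels[tuple(c)] for c in coords])
```

All points in one voxel necessarily share a label here, which is precisely the granularity limitation discussed above.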

Good scalability is one of the remarkable advantages of the volumetric representation. Specifically, volumetric-based networks can be trained and tested on point clouds with different spatial sizes. In the Fully-Convolutional Point Network (FCPN) [135], different levels of geometric relations are first hierarchically abstracted from point clouds, and 3D convolutions and weighted average pooling are then used to extract features and incorporate long-range dependencies. This method can process large-scale point clouds and has good scalability during inference. Angela et al. [28] proposed ScanComplete to achieve 3D scan completion and per-voxel semantic labeling. This method leverages the scalability of fully-convolutional neural networks and can adapt to different input data sizes during training and testing. A coarse-to-fine strategy is used to hierarchically improve the resolution of the predicted results.

Volumetric representation is naturally sparse, since the number of non-zero values only accounts for a small percentage. It is therefore inefficient to apply dense convolutional neural networks to such spatially-sparse data. To this end, Graham et al. [46] proposed submanifold sparse convolutional networks. This method significantly reduces memory and computational costs by restricting the output of a convolution to be only related to occupied voxels. Meanwhile, its sparse convolution can also control the sparsity of the extracted features. This submanifold sparse convolution is suitable for the efficient processing of high-dimensional and spatially-sparse data. Further, Choy et al. [23] proposed a 4D spatio-temporal convolutional neural network called MinkowskiNet for 3D video perception. A generalized sparse convolution is proposed to effectively process high-dimensional data, and a trilateral-stationary conditional random field is further applied to enforce consistency.

Overall, the volumetric representation naturally preserves the neighborhood structure of 3D point clouds. Its regular data format also allows the direct application of standard 3D convolutions. These factors have led to a steady performance improvement in this area. However, the voxelization step inherently introduces discretization artifacts and information loss. Usually, a high resolution leads to high memory and computational costs, while a low resolution introduces a loss of detail. It is non-trivial to select an appropriate grid resolution in practice.

Permutohedral Lattice Representation. Su et al. [150] proposed the Sparse Lattice Networks (SPLATNet) based on Bilateral Convolution Layers (BCLs). This method first interpolates a raw point cloud onto a permutohedral sparse lattice; BCLs are then applied to convolve on the occupied parts of the sparsely populated lattice. The filtered output is then interpolated back to the raw point cloud. In addition, this method allows flexible joint processing of multi-view images and point clouds. Further, Rosu et al. [137] proposed LatticeNet to achieve efficient processing of large point clouds. A data-dependent interpolation module called DeformSlice is also introduced to back project the lattice features to point clouds.

Hybrid Representation. To further leverage all available information, several methods have been proposed to learn multi-modal features from 3D scans. Angela and Matthias [27] presented a joint 3D-multi-view network to combine RGB features and geometric features. A 3D CNN stream and several 2D streams are used to extract features, and a differentiable back-projection layer is proposed to jointly fuse the learned 2D embeddings and 3D geometric features. Further, Hung et al. [22] proposed a unified point-based framework to learn 2D textural appearance, 3D structures and global context features from point clouds. This method directly applies point-based networks to extract local geometric features and global context from sparsely sampled point sets without any voxelization. Jaritz et al. [64] proposed Multi-view PointNet (MVPNet) to aggregate appearance features from 2D multi-view images and spatial geometric features in the canonical point cloud space.

4.1.2 Point-based Networks

Point-based networks directly work on irregular point clouds. However, point clouds are orderless and unstructured, making it infeasible to directly apply standard CNNs. To this end, the pioneering work PointNet [129] is proposed to learn per-point features using shared MLPs and global features using symmetrical pooling functions. Based on PointNet, a series of point-based networks have been proposed recently. Overall, these methods can be roughly divided into pointwise MLP methods, point convolution methods, RNN-based methods, and graph-based methods.
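The core PointNet recipe, a shared per-point MLP followed by a symmetric (order-invariant) pooling, fits in a few lines. The NumPy sketch below is our own illustration of the idea, not the released implementation:

```python
import numpy as np

def shared_mlp(points, weights, bias):
    """Apply the same fully-connected layer + ReLU to every point independently."""
    return np.maximum(points @ weights + bias, 0.0)   # (N, d_in) -> (N, d_out)

def global_feature(point_feats):
    """Symmetric aggregation: max-pool over points, so point order cannot matter."""
    return point_feats.max(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))
W, b = rng.normal(size=(3, 16)), np.zeros(16)
g_original = global_feature(shared_mlp(pts, W, b))
g_shuffled = global_feature(shared_mlp(pts[rng.permutation(128)], W, b))
```

Because max-pooling is symmetric, `g_original` and `g_shuffled` are identical; this permutation invariance is exactly why the architecture can consume unordered point sets.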

Pointwise MLP Methods. These methods usually use shared MLP as the basic unit in their network for its high efficiency. However, point-wise features extracted by shared MLP cannot capture the local geometry in point clouds and the mutual interactions between points [129]. To capture wider context for each point and learn richer local structures, several dedicated networks have been introduced, including methods based on neighboring feature pooling, attention-based aggregation, and local-global feature concatenation.

Fig. 10: An illustration of the PointNet++ [131] framework.

Neighboring feature pooling: To capture local geometric patterns, these methods learn a feature for each point by aggregating the information from local neighboring points. In particular, PointNet++ [131] groups points hierarchically and progressively learns from larger local regions, as illustrated in Fig. 10. Multi-scale grouping and multi-resolution grouping are also proposed to overcome the problems caused by the non-uniformity and varying density of point clouds. Later, Jiang et al. [67] proposed a PointSIFT module to achieve orientation encoding and scale awareness. This module stacks and encodes the information from eight spatial orientations through a three-stage ordered convolution operation. Multi-scale features are extracted and concatenated to achieve adaptivity to different scales. Different from the grouping techniques used in PointNet++ (i.e., ball query), Francis et al. [38] utilized K-means clustering and KNN to separately define two neighborhoods in the world space and the learned feature space. Based on the assumption that points from the same class are expected to be closer in feature space, a pairwise distance loss and a centroid loss are introduced to further regularize feature learning. To model the mutual interactions between different points, Zhao et al. [208] proposed PointWeb to explore the relations between all pairs of points in a local region by densely constructing a locally fully-linked web. An Adaptive Feature Adjustment (AFA) module is proposed to achieve information interchange and feature refinement. This aggregation operation helps the network to learn a discriminative feature representation. Zhang et al. [206] proposed a permutation invariant convolution called ShellConv based on the statistics of concentric spherical shells.
This method first queries a set of multi-scale concentric spheres; the max-pooling operation is then used within different shells to summarize the statistics, and MLPs and 1D convolutions are applied to obtain the final convolution output. Hu et al. [57] proposed an efficient and lightweight network called RandLA-Net for large-scale point cloud processing. This network utilizes random point sampling to achieve remarkable efficiency in terms of memory and computation. A local feature aggregation module is further proposed to capture and preserve geometric features.
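Ball query, the grouping primitive used by PointNet++, can be sketched directly (a brute-force version for illustration; real implementations rely on spatial indexing and GPU kernels):

```python
import numpy as np

def ball_query(points, centroids, radius, max_samples):
    """For each centroid, indices of up to max_samples points within radius."""
    groups = []
    for c in centroids:
        dist = np.linalg.norm(points - c, axis=1)
        groups.append(np.flatnonzero(dist <= radius)[:max_samples])
    return groups
```

Unlike KNN, the fixed radius guarantees a consistent metric scale for every group, which is what makes the learned local patterns comparable across regions of varying density.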

Attention-based aggregation: To further improve segmentation accuracy, an attention mechanism [159] has been introduced to point cloud segmentation. Yang et al. [186] proposed a group shuffle attention to model the relations between points, and presented a permutation-invariant, task-agnostic and differentiable Gumbel Subset Sampling (GSS) module to replace the widely used Furthest Point Sampling (FPS) approach. This module is less sensitive to outliers and can select a representative subset of points. To better capture the spatial distribution of a point cloud, Chen et al. [17] proposed a Local Spatial Aware (LSA) layer to learn spatial awareness weights based on the spatial layouts and local structures of point clouds. Similar to CRF, Zhao et al. [207] proposed an Attention-based Score Refinement (ASR) module to post-process the segmentation results produced by the network. The initial segmentation result is refined by pooling the scores of neighbouring points with learned attention weights. This module can be easily integrated into existing deep networks to improve the final segmentation performance.
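The refinement step can be pictured as attention-weighted pooling of neighbour scores. The following sketch illustrates the mechanism only (the shapes and the use of externally supplied logits are our assumptions; ASR learns its attention weights end-to-end):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine_scores(scores, neighbor_idx, attn_logits):
    """Replace each point's class scores with an attention-weighted
    pool over its K neighbours' scores.

    scores:       (N, C) initial per-point scores
    neighbor_idx: (N, K) indices of each point's neighbours
    attn_logits:  (N, K) per-neighbour attention logits
    """
    w = softmax(attn_logits, axis=1)                        # (N, K)
    return (scores[neighbor_idx] * w[..., None]).sum(axis=1)
```

With uniform logits this degenerates to plain neighbourhood averaging; the learned weights are what let the module smooth labels without blurring object boundaries.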

Local-global concatenation: Zhao et al. [210] proposed a permutation-invariant PS-Net to incorporate local structures and global context from point clouds. Edgeconv [168] and NetVLAD [3] are repeatedly stacked to capture the local information and scene-level global features.

Point Convolution Methods. These methods tend to propose effective convolution operations for point clouds. Hua et al. [59] proposed a point-wise convolution operator, where the neighboring points are binned into kernel cells and then convolved with kernel weights. Wang et al. [165] proposed a network called PCCN based on parametric continuous convolution layers. The kernel function of this layer is parameterized by MLPs and spans the continuous vector space. Hugues et al. [157] proposed a Kernel Point Fully Convolutional Network (KP-FCNN) based on Kernel Point Convolution (KPConv). Specifically, the convolution weights of KPConv are determined by the Euclidean distances to kernel points, and the number of kernel points is not fixed. The positions of the kernel points are formulated as an optimization problem of best coverage in a sphere space. Note that the radius neighbourhood is used to keep a consistent receptive field, while grid subsampling is used in each layer to achieve high robustness under varying densities of point clouds. In [37], Francis et al. provided rich ablation experiments and visualization results to show the impact of the receptive field on the performance of aggregation-based methods. They also proposed a Dilated Point Convolution (DPC) operation to aggregate dilated neighbouring features, instead of the K nearest neighbours. This operation is demonstrated to be very effective in increasing the receptive field and can be easily integrated into existing aggregation-based networks.
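KPConv's distance-based weighting can be written compactly. In this minimal sketch (our own form, computing one output for a single centre point with a linear correlation function), each kernel point's weight matrix contributes in proportion to its proximity to the neighbours:

```python
import numpy as np

def kpconv_point(neigh_offsets, neigh_feats, kernel_pts, kernel_weights, sigma):
    """Single-point KPConv-style convolution.

    neigh_offsets:  (K, 3)           neighbours relative to the centre point
    neigh_feats:    (K, d_in)        neighbour features
    kernel_pts:     (M, 3)           kernel point positions inside the sphere
    kernel_weights: (M, d_in, d_out) one weight matrix per kernel point
    """
    d = np.linalg.norm(neigh_offsets[:, None, :] - kernel_pts[None, :, :], axis=2)
    h = np.maximum(1.0 - d / sigma, 0.0)            # (K, M) linear influence
    out = np.zeros(kernel_weights.shape[2])
    for m in range(kernel_pts.shape[0]):
        out += (h[:, m:m + 1] * neigh_feats).sum(axis=0) @ kernel_weights[m]
    return out
```

A neighbour sitting exactly on a kernel point receives full influence from that kernel point's weights; influence decays linearly to zero at distance `sigma`, mirroring the continuity argument in the paper.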

| Type | Method | S3DIS Area5 OA | S3DIS Area5 mIoU | S3DIS 6-fold OA | S3DIS 6-fold mIoU | Semantic3D semantic-8 OA | semantic-8 mIoU | Semantic3D reduced-8 OA | reduced-8 mIoU | ScanNet (v2) OA | ScanNet (v2) mIoU | SemanticKITTI mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multi-view | DeePr3SS [80] | - | - | - | - | - | - | 88.9 | 58.5 | - | - | - |
| | SnapNet [12] | - | - | - | - | 91.0 | 67.4 | 88.6 | 59.1 | - | - | - |
| | TangentConv [154] | 82.5 | 52.8 | - | - | - | - | - | - | 80.1 | 40.9 | 40.9 |
| Spherical | SqueezeSeg [172] | - | - | - | - | - | - | - | - | - | - | 29.5 |
| | SqueezeSegV2 [173] | - | - | - | - | - | - | - | - | - | - | 39.7 |
| | RangeNet++ [116] | - | - | - | - | - | - | - | - | - | - | 52.2 |
| Volumetric | SegCloud [155] | - | 48.9 | - | - | - | - | 88.1 | 61.3 | - | - | - |
| | SparseConvNet [46] | - | - | - | - | - | - | - | - | - | 72.5 | - |
| | MinkowskiNet [23] | - | - | - | - | - | - | - | - | - | 73.6 | - |
| | VV-Net [113] | - | - | 87.8 | 78.2 | - | - | - | - | - | - | - |
| Lattice | SPLATNet [150] | - | - | - | - | - | - | - | - | - | 39.3 | 18.4 |
| | LatticeNet [137] | - | - | - | - | - | - | - | - | - | 64.0 | 52.2 |
| Hybrid | 3DMV [27] | - | - | - | - | - | - | - | - | - | 48.4 | - |
| | UPB [22] | - | - | - | - | - | - | - | - | - | 63.4 | - |
| | MVPNet [64] | - | - | - | - | - | - | - | - | - | 64.1 | - |
| Pointwise MLP | PointNet [129] | - | 41.1 | 78.6 | 47.6 | - | - | - | - | - | - | 14.6 |
| | PointNet++ [131] | - | - | 81.0 | 54.5 | 85.7 | 63.1 | - | - | 84.5 | 33.9 | 20.1 |
| | PointSIFT [67] | - | - | 88.7 | 70.2 | - | - | - | - | 86.2 | 41.5 | - |
| | Engelmann [39] | 84.2 | 52.2 | 84.0 | 58.3 | - | - | - | - | - | - | - |
| | 3DContextNet [199] | - | - | 84.9 | 55.6 | - | - | - | - | - | - | - |
| | A-SCN [177] | - | - | 81.6 | 52.7 | - | - | - | - | - | - | - |
| | PointWeb [208] | 87.0 | 60.3 | 87.3 | 66.7 | - | - | - | - | 85.9 | - | - |
| | PAT [186] | - | 60.1 | - | 64.3 | - | - | - | - | - | - | - |
| | LSANet [17] | - | - | 86.8 | 62.2 | - | - | - | - | 85.1 | - | - |
| | ShellNet [206] | - | - | 87.1 | 66.8 | - | - | 93.2 | 69.3 | 85.2 | - | - |
| | RandLA-Net [57] | - | - | 87.2 | 68.5 | - | - | 94.4 | 76.0 | - | - | 50.3 |
| Point Convolution | PointCNN [88] | 85.9 | 57.3 | 88.1 | 65.4 | - | - | - | - | 85.1 | 45.8 | - |
| | PCCN [165] | - | 58.3 | - | - | - | - | - | - | - | - | - |
| | A-CNN [72] | - | - | 87.3 | - | - | - | - | - | 85.4 | - | - |
| | ConvPoint [13] | - | - | 88.8 | 68.2 | 93.4 | 76.5 | - | - | - | - | - |
| | KPConv [157] | - | 67.1 | - | 70.6 | - | - | 92.9 | 74.6 | - | 68.4 | - |
| | DPC [37] | 86.8 | 61.3 | - | - | - | - | - | - | - | 59.2 | - |
| | InterpCNN [110] | - | - | 88.7 | 66.7 | - | - | - | - | - | - | - |
| RNN-based | RSNet [61] | - | 51.9 | - | 56.5 | - | - | - | - | 84.9 | 39.4 | - |
| | G+RCU [36] | - | 45.1 | 81.1 | 49.7 | - | - | - | - | - | - | - |
| | 3P-RNN [191] | 85.7 | 53.4 | 86.9 | 56.3 | - | - | - | - | - | - | - |
| Graph-based | DGCNN [168] | - | - | 84.1 | 56.1 | - | - | - | - | - | - | - |
| | SPG [78] | 86.4 | 58.0 | 85.5 | 62.1 | 92.9 | 76.2 | 94.0 | 73.2 | - | - | 17.4 |
| | SSP+SPG [77] | 87.9 | 61.7 | 87.9 | 68.4 | - | - | - | - | - | - | - |
| | GACNet [162] | 87.8 | 62.9 | - | - | - | - | 91.9 | 70.8 | - | - | - |
| | PAG [123] | 86.8 | 59.3 | 88.1 | 65.9 | - | - | - | - | - | - | - |
| | HDGCN [92] | - | 59.3 | - | 66.9 | - | - | - | - | - | - | - |
| | HPEIN [66] | 87.2 | 61.9 | 88.2 | 67.8 | - | - | - | - | - | 61.8 | - |
| | SPH3D-GCN [82] | 87.7 | 59.5 | 88.6 | 68.9 | - | - | - | - | - | 61.0 | - |
| | DPAM [98] | 86.1 | 60.0 | 87.6 | 64.5 | - | - | - | - | - | - | - |

TABLE IV: Comparative semantic segmentation results on the S3DIS (including both Area5 and 6-fold cross validation) [4], Semantic3D (including both semantic-8 and reduced-8 subsets) [52], ScanNet [26], and SemanticKITTI [6] datasets. Overall Accuracy (OA) and mean Intersection-over-Union (mIoU) are the main evaluation metrics. For simplicity, we omit the '%' after the values. The symbol '-' means the results are unavailable.

RNN-based Methods. To capture inherent context features from point clouds, Recurrent Neural Networks (RNNs) have also been used for the semantic segmentation of point clouds. Based on PointNet [129], Francis et al. [36] first transformed a block of points into multi-scale blocks and grid blocks to obtain input-level context. Then, the block-wise features extracted by PointNet are sequentially fed into Consolidation Units (CU) or Recurrent Consolidation Units (RCU) to obtain output-level context. Experimental results show that incorporating spatial context is important for improving segmentation performance. Huang et al. [61] proposed a lightweight local dependency modeling module, and utilized a slice pooling layer to convert unordered point feature sets into an ordered sequence of feature vectors. Ye et al. [191] first proposed a Pointwise Pyramid Pooling (3P) module to capture the coarse-to-fine local structure, and then utilized two-direction hierarchical RNNs to further obtain long-range spatial dependencies; the network is trained end-to-end. However, these methods lose rich geometric features and density distributions from point clouds when aggregating local neighbourhood features with global structure features [211]. To alleviate the problems caused by rigid and static pooling operations, Zhao et al. [211] proposed a Dynamic Aggregation Network (DAR-Net) to consider both the global scene complexity and local geometric features. The intermediate features are dynamically aggregated using a self-adapted receptive field and node weights. Liu et al. [96] proposed 3DCNN-DQN-RNN for efficient semantic parsing of large-scale point clouds. This network first learns the spatial distribution and color features using a 3D CNN; a DQN is then used to localize the class objects. The final concatenated feature vector is fed into a residual RNN to obtain the final segmentation results.

Graph-based Methods. To capture the underlying shapes and geometric structures of 3D point clouds, several methods resort to graph networks. Loic et al. [78] represented a point cloud as a set of interconnected simple shapes and superpoints, and used an attributed directed graph (i.e., a superpoint graph) to capture the structure and context information. The large-scale point cloud segmentation problem is then split into three sub-problems, i.e., geometrically homogeneous partition, superpoint embedding, and contextual segmentation. To further improve the partition step, Loic and Mohamed [77] proposed a supervised framework to oversegment a point cloud into pure superpoints. This problem is formulated as a deep metric learning problem structured by an adjacency graph. In addition, a graph-structured contrastive loss is also proposed to help the recognition of borders between objects.

To better capture the local geometric relationships in high-dimensional space, Kang et al. [212] proposed PyramNet, which is based on a Graph Embedding Module (GEM) and a Pyramid Attention Network (PAN). The GEM module formulates a point cloud as a directed acyclic graph and utilizes a covariance matrix, instead of the Euclidean distance, to construct the adjacency similarity matrix. Convolution kernels of four different sizes are used in the PAN module to extract features with different semantic intensities. In [162], Graph Attention Convolution (GAC) is proposed to selectively learn relevant features from a local neighbouring set. This operation is achieved by dynamically assigning attention weights to different neighbouring points and feature channels based on their spatial positions and feature differences. GAC can learn to capture discriminative features for segmentation, and has characteristics similar to the commonly used CRF model.

4.2 Instance Segmentation

Compared to semantic segmentation, instance segmentation is more challenging as it requires more accurate and fine-grained reasoning about points. In particular, it not only needs to distinguish points with different semantic meanings, but also to separate instances with the same semantic meaning. Overall, existing methods can be divided into two groups: proposal-based methods and proposal-free methods. Several milestone methods are illustrated in Fig. 11.

Fig. 11: Chronological overview of representative 3D point cloud instance segmentation methods.

4.2.1 Proposal-based Methods

These methods convert the instance segmentation problem into two sub-tasks: 3D object detection and instance mask prediction. Hou et al. [55] proposed a 3D fully-convolutional Semantic Instance Segmentation (3D-SIS) network to achieve semantic instance segmentation on RGB-D scans. This network learns from both color and geometry features. Similar to 3D object detection, a 3D Region Proposal Network (3D-RPN) and a 3D Region of Interest (3D-RoI) layer are used to predict bounding box locations, object class labels and instance masks. Following the analysis-by-synthesis strategy, Yi et al. [193] proposed a Generative Shape Proposal Network (GSPN) to generate high-objectness 3D proposals. These proposals are further refined by a Region-based PointNet (R-PointNet). The final label is obtained by predicting a per-point binary mask for each class label. Different from the direct regression of 3D bounding boxes from point clouds, this method removes a large amount of meaningless proposals by enforcing geometric understanding. By extending 2D panoptic segmentation to 3D mapping, Gaku et al. [121] proposed an online volumetric 3D mapping system to jointly achieve large-scale 3D reconstruction, semantic labeling, and instance segmentation. They first utilized 2D semantic and instance segmentation networks to obtain pixel-wise panoptic labels and then integrated these labels into the volumetric map. A fully-connected CRF is further used to achieve accurate segmentation. This semantic mapping system can achieve high-quality semantic mapping and discriminative object recognition. Yang et al. [185] proposed a single-stage, anchor-free and end-to-end trainable network called 3D-BoNet to achieve instance segmentation on point clouds. This method directly regresses rough 3D bounding boxes for all potential instances, and then utilizes a point-level binary classifier to obtain instance labels. In particular, the bounding box generation task is formulated as an optimal assignment problem.
In addition, a multi-criteria loss function is also proposed to regularize the generated bounding boxes. This method does not need any post-processing and is computationally efficient. Zhang et al. [202] proposed a network for the instance segmentation of large-scale outdoor LiDAR point clouds. This method learns a feature representation on the bird's-eye view of point clouds using self-attention blocks. The final instance labels are obtained based on the predicted horizontal center and the height limits.

Overall, proposal-based methods are intuitive and straightforward, and the instance segmentation results usually have good objectness. However, these methods require multi-stage training and pruning of redundant proposals. Therefore, they are usually time-consuming and computationally expensive.

4.2.2 Proposal-free Methods

Proposal-free methods [166, 167, 124, 34, 95, 93] do not have an object detection module. Instead, they usually consider instance segmentation as a subsequent clustering step after semantic segmentation. In particular, most existing methods are based on the assumption that points belonging to the same instance should have very similar features. Therefore, these methods mainly focus on discriminative feature learning and point grouping.

In a pioneering work, Wang et al. [166] first introduced a Similarity Group Proposal Network (SGPN). This method first learns a feature and semantic map for each point, and then introduces a similarity matrix to represent the similarity between each pair of features. To learn more discriminative features, a double-hinge loss is used to mutually adjust the similarity matrix and the semantic segmentation results. Finally, a heuristic non-maximal suppression method is adopted to merge similar points into instances. Since the construction of the similarity matrix requires a large amount of memory, the scalability of this method is limited. Similarly, Liu et al. [95] first leveraged submanifold sparse convolution [46] to predict the semantic scores of each voxel and the affinity between neighboring voxels. They then introduced a clustering algorithm to group points into instances based on the predicted affinity and the mesh topology. Further, Liang et al. [93] proposed a structure-aware loss for the learning of discriminative embeddings. This loss considers both the similarity of features and the geometric relations among points. An attention-based graph CNN is further used to adaptively refine the learned features by aggregating information from neighbors.
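The similarity-matrix idea at the heart of SGPN-style grouping is easy to sketch; the greedy thresholding below is a toy stand-in for the paper's heuristic merging (function names and the grouping rule are ours):

```python
import numpy as np

def similarity_matrix(feats):
    """Pairwise squared Euclidean distances between per-point features (N, d)."""
    sq = (feats ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T

def group_instances(feats, threshold):
    """Greedy grouping: points closer than threshold in feature space share an instance."""
    S = similarity_matrix(feats)
    labels = -np.ones(len(feats), dtype=int)
    current = 0
    for i in range(len(feats)):
        if labels[i] == -1:
            labels[S[i] < threshold] = current
            current += 1
    return labels
```

Note that `similarity_matrix` is N x N: this quadratic memory footprint is exactly the scalability limitation noted above.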

Since the semantic category and instance label of a point are usually dependent on each other, several methods have been proposed to couple these two tasks into one. Wang et al. [167] integrated the two tasks by introducing an end-to-end and learnable Associatively Segmenting Instances and Semantics (ASIS) module. Experiments show that semantic features and instance features can mutually support each other to achieve improved performance through this ASIS module. Similarly, Pham et al. [124] first introduced a Multi-Task Point-wise Network (MT-PNet) to assign a label to each point and regularize the embeddings in the feature space by introducing a discriminative loss [29]. They then fused the predicted semantic labels and embeddings into a Multi-Value Conditional Random Field (MV-CRF) model for joint optimization. Finally, mean-field variational inference is used to produce semantic labels and instance labels. Hu et al. [58] first proposed a Dynamic Region Growing (DRG) method to dynamically separate a point cloud into a set of disjoint patches, and then used an unsupervised K-means++ algorithm to group all these patches. Multi-scale patch segmentation is then performed with the guidance of contextual information between patches. Finally, these labeled patches are merged at the object level to obtain the final semantic and instance labels.

To achieve instance segmentation on full 3D scenes, Cathrin et al. [34] presented a hybrid 2D-3D network to jointly learn globally consistent instance features from a BEV representation and local geometric features of point clouds. The learned features are then combined to achieve semantic and instance segmentation. Note that, rather than the heuristic GroupMerging algorithm [166], a more flexible Meanshift [25] algorithm is used to group these points into instances. Alternatively, multi-task learning has also been introduced for instance segmentation. Jean et al. [75] learned both a unique feature embedding of each instance and directional information towards the object's center. A feature embedding loss and a directional loss are proposed to adjust the learned feature embeddings in the latent feature space. Mean-shift clustering and non-maximum suppression are adopted to group voxels into instances. This method achieves state-of-the-art performance on the ScanNet [26] benchmark. Besides, the predicted directional information is particularly useful for determining the boundaries of instances. Zhang et al. [201] introduced probabilistic embeddings for the instance segmentation of point clouds. This method also incorporates uncertainty estimation and proposes a new loss function for the clustering step.

In summary, proposal-free methods do not require computationally expensive region-proposal components. However, the objectness of instance segments grouped by these methods is usually low since these methods do not explicitly detect object boundaries.

4.3 Part Segmentation

The difficulty of part segmentation for 3D shapes is twofold. First, shape parts with the same semantic label exhibit large geometric variation and ambiguity. Second, methods should be robust to noise and sampling variations.

VoxSegNet [171] was proposed to achieve fine-grained part segmentation on 3D voxelized data at limited resolution. A Spatial Dense Extraction (SDE) module (which consists of stacked atrous residual blocks) is proposed to extract multi-scale discriminative features from sparse volumetric data. The learned features are further re-weighted and fused by progressively applying an Attention Feature Aggregation (AFA) module. Kalogerakis et al. [70] combined FCNs and surface-based CRFs to achieve end-to-end 3D part segmentation. They first rendered images from multiple views to achieve optimal surface coverage and fed these images into a 2D network to produce confidence maps. These confidence maps are then aggregated by a surface-based CRF, which is responsible for a consistent labeling of the entire surface. Yi et al. [192] introduced a Synchronized Spectral CNN (SyncSpecCNN) to perform convolution on irregular and non-isomorphic shape graphs. A spectral parameterization of dilated convolutional kernels and a spectral transformer network are introduced to address multi-scale analysis within parts and information sharing across shapes.
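The motivation for stacking atrous (dilated) convolutions, as in the SDE module, is that the receptive field grows rapidly while spatial resolution is preserved. The standard stride-1 receptive-field formula makes this concrete:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer widens the field by (kernel_size - 1) * dilation, so
    doubling dilations (1, 2, 4, ...) yields exponential growth in
    context without any downsampling of the voxel grid.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

Three 3x3x3 layers with dilations 1, 2, 4 already cover a 15-voxel-wide context, which is why atrous stacks suit fine-grained part labeling on sparse volumes.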

Wang et al. [164] first performed shape segmentation on 3D meshes by introducing Shape Fully Convolutional Networks (SFCN), taking three low-level geometric features as input. They then utilized voting-based multi-label graph cuts to further refine the segmentation results. Zhu et al. [214] proposed a weakly-supervised CoSegNet for 3D shape co-segmentation. This network takes a collection of unsegmented 3D point cloud shapes as input, and produces shape part labels by iteratively minimizing a group consistency loss. A CRF-like pre-trained part-refinement network is proposed to further refine and denoise the part proposals. Chen et al. [21] proposed a Branched AutoEncoder network (BAE-NET) for unsupervised, one-shot and weakly supervised 3D shape co-segmentation. This method formulates shape co-segmentation as a representation learning problem and aims at finding the simplest part representations by minimizing a shape reconstruction loss. Based on an encoder-decoder architecture, each branch of the network learns a compact representation for a specific part shape. The features learned by each branch and the point coordinates are then fed to the decoder to produce a binary value indicating whether the point belongs to that part. This method has good generalization ability and can process large 3D shape collections (up to 5000+ shapes). However, it is sensitive to initial parameters and does not incorporate shape semantics into the network, which prevents it from obtaining robust and stable estimates in each iteration.
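The branched-decoder idea behind BAE-NET can be sketched schematically: each branch is an implicit function of a point and a shape code, the maximum over branches reconstructs shape occupancy, and the argmax yields an emergent part label. The linear branches below are hypothetical placeholders for the paper's MLP branches:

```python
import numpy as np

def branched_decoder(points, shape_code, branch_weights):
    """BAE-NET-style branched implicit decoder (schematic, not the paper's).

    points:         (N, 3) query coordinates
    shape_code:     (D,) per-shape latent vector from the encoder
    branch_weights: list of hypothetical (3 + D,) linear weights, one per part
    """
    feats = np.hstack([points, np.tile(shape_code, (len(points), 1))])
    # Per-branch sigmoid outputs: (N, num_branches).
    logits = feats @ np.stack(branch_weights, axis=1)
    values = 1.0 / (1.0 + np.exp(-logits))
    occupancy = values.max(axis=1)       # drives the reconstruction loss
    part_labels = values.argmax(axis=1)  # emergent unsupervised part labels
    return occupancy, part_labels
```

Because only the max enters the reconstruction loss, each branch is pushed to specialize on one simple region, which is where the unsupervised part decomposition comes from.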

4.4 Summary

Table IV shows the results achieved by existing methods on public benchmarks, including S3DIS [4], Semantic3D [52], ScanNet [26], and SemanticKITTI [6]. The following issues need to be further investigated:

  1. Point-based networks are the most frequently investigated methods. However, point representations naturally lack explicit neighboring information, so most existing point-based methods have to resort to expensive neighbor searching mechanisms (e.g., KNN [88] or ball query [131]). This inherently limits the efficiency of these methods, since neighbor searching incurs both high computational cost and irregular memory access [105].

  2. Learning from imbalanced data is still a challenging problem in point cloud segmentation. Although several approaches [206, 157, 78] have achieved remarkable overall performance, their performance on minority classes remains limited. For example, RandLA-Net [57] achieves an overall IoU of 76.0% on the reduced-8 subset of Semantic3D, but a very low IoU of 41.1% on the hardscape class.

  3. The majority of existing approaches [129, 131, 88, 17, 206] work on small point clouds (e.g., 1m×1m blocks with 4096 points). In practice, the point clouds acquired by depth sensors are usually immense. Therefore, it is desirable to further investigate the efficient segmentation of large-scale point clouds.

  4. A handful of works [41, 101, 23] have started to learn spatio-temporal information from dynamic point clouds. It is expected that the spatio-temporal information can help to improve the performance of subsequent tasks such as 3D object recognition, segmentation, and completion.
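The neighbor-search bottleneck noted in issue 1 can be made concrete with a brute-force ball query: the pairwise-distance step is O(N·M) in both compute and memory. This is a minimal illustration, not an optimized implementation:

```python
import numpy as np

def ball_query(points, centers, radius, max_samples):
    """Brute-force ball query: for each center, indices of points within radius.

    points:  (N, 3) point cloud; centers: (M, 3) query positions.
    The (M, N) pairwise-distance matrix below is the cost point-based
    networks pay for lacking explicit neighbor structure.
    """
    d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    neighbors = []
    for row in d2:
        idx = np.flatnonzero(row <= radius ** 2)[:max_samples]
        neighbors.append(idx)
    return neighbors
```

Real implementations amortize this with spatial grids or KD-trees, but the irregular, per-query memory access pattern remains.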
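For the class-imbalance problem in issue 2, a common mitigation is inverse-frequency loss weighting. The sketch below uses square-root damping of the weights, which is one illustrative choice among several:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_counts):
    """Inverse-frequency weighted cross-entropy (one common rebalancing choice).

    probs:  (N, C) predicted class probabilities
    labels: (N,) ground-truth class indices
    Rare classes receive larger weights so minority-class errors are not
    drowned out by dominant classes such as floors and walls.
    """
    freq = np.asarray(class_counts, dtype=float)
    weights = 1.0 / np.sqrt(freq / freq.sum())
    w = weights[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float((w * nll).sum() / w.sum())
```

With equal class counts this reduces to the plain mean cross-entropy; with a skewed distribution, errors on the rare class dominate the loss.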
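The small-block setting mentioned in issue 3 typically comes from a block-partitioning preprocessing step: the scene is split into fixed-size columns in the xy-plane and each block is resampled to a fixed point count. A minimal sketch follows; the resampling strategy here is our assumption, not taken from any particular method:

```python
import numpy as np

def split_into_blocks(points, block_size=1.0, num_points=4096, seed=0):
    """Partition a cloud into block_size x block_size xy columns and sample
    a fixed number of points per block (the preprocessing behind the common
    1m x 1m / 4096-point setting).
    """
    rng = np.random.default_rng(seed)
    keys = np.floor(points[:, :2] / block_size).astype(int)
    blocks = {}
    for key in {tuple(k) for k in keys}:
        idx = np.flatnonzero(np.all(keys == key, axis=1))
        # Sample with replacement when a block has too few points.
        sel = rng.choice(idx, num_points, replace=len(idx) < num_points)
        blocks[key] = points[sel]
    return blocks
```

The fixed-size blocks make batching easy but discard cross-block context, which is precisely why direct large-scale segmentation remains an open problem.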

5 Conclusion

This paper has presented a contemporary survey of the state-of-the-art methods for 3D point cloud understanding, including 3D shape classification, 3D object detection and tracking, and 3D scene and object segmentation. A comprehensive taxonomy and performance comparison of these methods have been presented. The merits and demerits of various methods are also covered, and potential research directions are listed.


This work was partially supported by the National Natural Science Foundation of China (No. 61972435, 61602499), the Natural Science Foundation of Guangdong Province (2019A1515011271), the Shenzhen Technology and Innovation Committee, the Australian Research Council (Grants DP150100294 and DP150104251), and a China Scholarship Council (CSC) scholarship.


  • [1] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas (2017) Learning representations and generative models for 3D point clouds. arXiv preprint arXiv:1707.02392. Cited by: §2.2.1.
  • [2] E. Ahmed, A. Saint, A. E. R. Shabayek, K. Cherenkova, R. Das, G. Gusev, D. Aouada, and B. Ottersten (2018) Deep learning advances on different 3D data representations: a survey. arXiv preprint arXiv:1808.01462. Cited by: item 2, §1.
  • [3] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic (2016) NetVLAD: CNN architecture for weakly supervised place recognition. In CVPR, pp. 5297–5307. Cited by: §4.1.2.
  • [4] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis, M. Fischer, and S. Savarese (2016) 3D semantic parsing of large-scale indoor spaces. In CVPR, pp. 1534–1543. Cited by: §4.4, TABLE IV.
  • [5] N. Audebert, B. Le Saux, and S. Lefèvre (2016) Semantic segmentation of earth observation data using multimodal and multi-scale deep networks. In ACCV, pp. 180–196. Cited by: §4.1.1.
  • [6] J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall (2019) SemanticKITTI: a dataset for semantic scene understanding of lidar sequences. In ICCV, Cited by: §4.4, TABLE IV.
  • [7] S. Belongie, J. Malik, and J. Puzicha (2002) Shape matching and object recognition using shape contexts. IEEE TPAMI (4), pp. 509–522. Cited by: §2.2.5.
  • [8] J. Beltrán, C. Guindel, F. M. Moreno, D. Cruzado, F. García, and A. De La Escalera (2018) BirdNet: a 3D object detection framework from lidar information. In Proceedings of International Conference on Intelligent Transportation Systems, pp. 3517–3523. Cited by: §3.1.2, TABLE II, TABLE III.
  • [9] Y. Ben-Shabat, M. Lindenbaum, and A. Fischer (2017) 3D point cloud classification and segmentation using 3D modified fisher vector representation for convolutional neural networks. arXiv preprint arXiv:1711.08241. Cited by: §2.2.5, TABLE I.
  • [10] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr (2016) Fully-convolutional siamese networks for object tracking. In ECCV, pp. 850–865. Cited by: §3.2.
  • [11] D. Bobkov, S. Chen, R. Jian, Z. Iqbal, and E. Steinbach (2018) Noise-resistant deep learning for object classification in 3D point clouds using a point pair descriptor. IEEE Robotics and Automation Letters. Cited by: §2.2.5.
  • [12] A. Boulch, B. Le Saux, and N. Audebert (2017) Unstructured point cloud semantic labeling using deep segmentation networks.. In Proceedings of the Eurographics Workshop on 3D Object Retrieval, Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [13] A. Boulch (2019) ConvPoint: continuous convolutions for point cloud processing. arXiv preprint arXiv:1904.02375. Cited by: §2.2.2, TABLE I, TABLE IV.
  • [14] A. Boulch (2019) Generalizing discrete convolutions for unstructured point clouds. arXiv preprint arXiv:1904.02375. Cited by: §2.2.2, TABLE I.
  • [15] J. Bruna, W. Zaremba, A. Szlam, and Y. Lecun (2013) Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203. Cited by: §2.2.3, §2.2.3.
  • [16] C. Chen, G. Li, R. Xu, T. Chen, M. Wang, and L. Lin (2019) ClusterNet: deep hierarchical cluster network with rigorously rotation-invariant representation for point cloud analysis. In CVPR, pp. 4994–5002. Cited by: §2.2.3, TABLE I.
  • [17] L. Chen, X. Li, D. Fan, M. Cheng, K. Wang, and S. Lu (2019) LSANet: feature learning on point sets by local spatial attention. arXiv preprint arXiv:1905.05442. Cited by: item , §4.1.2, TABLE IV.
  • [18] W. Chen, X. Han, G. Li, C. Chen, J. Xing, Y. Zhao, and H. Li (2018) Deep rbfnet: point cloud feature learning using radial basis functions. arXiv preprint arXiv:1812.04302. Cited by: §2.2.5, TABLE I.
  • [19] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia (2017) Multi-view 3D object detection network for autonomous driving. In CVPR, pp. 1907–1915. Cited by: §1, §3.1.1, §3.1.1, §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [20] Y. Chen, S. Liu, X. Shen, and J. Jia (2019) Fast point r-cnn. In ICCV, Cited by: §3.1.1, TABLE II, TABLE III.
  • [21] Z. Chen, K. Yin, M. Fisher, S. Chaudhuri, and H. Zhang (2019) BAE-NET: branched autoencoder for shape co-segmentation. arXiv preprint arXiv:1903.11228. Cited by: §4.3.
  • [22] H. Chiang, Y. Lin, Y. Liu, and W. H. Hsu (2019) A unified point-based framework for 3D segmentation. arXiv preprint arXiv:1908.00478. Cited by: §4.1.1, TABLE IV.
  • [23] C. Choy, J. Gwak, and S. Savarese (2019) 4D spatio-temporal convnets: minkowski convolutional neural networks. arXiv preprint arXiv:1904.08755. Cited by: item , §4.1.1, TABLE IV.
  • [24] T. S. Cohen, M. Geiger, J. Koehler, and M. Welling (2018) Spherical CNNs. arXiv preprint arXiv:1801.10130. Cited by: §2.2.2.
  • [25] D. Comaniciu and P. Meer (2002) Mean shift: a robust approach toward feature space analysis. IEEE TPAMI (5), pp. 603–619. Cited by: §4.2.2.
  • [26] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner (2017) ScanNet: richly-annotated 3D reconstructions of indoor scenes. In CVPR, pp. 5828–5839. Cited by: §1, §3.1.1, §4.2.2, TABLE IV.
  • [27] A. Dai and M. Nießner (2018) 3DMV: joint 3d-multi-view prediction for 3D semantic scene segmentation. In ECCV, pp. 452–468. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [28] A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Nießner (2018) ScanComplete: large-scale scene completion and semantic segmentation for 3D scans. In CVPR, pp. 4578–4587. Cited by: §4.1.1.
  • [29] B. De Brabandere, D. Neven, and L. Van Gool (2017) Semantic instance segmentation with a discriminative loss function. arXiv preprint arXiv:1708.02551. Cited by: §4.2.2.
  • [30] M. Defferrard, X. Bresson, and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, pp. 3844–3852. Cited by: §2.2.3.
  • [31] Z. Dingfu, F. Jin, S. Xibin, G. Chenye, Y. Junbo, D. Yuchao, and Y. Ruigang (2019) IoU loss for 2D/3D object detection. arXiv preprint arXiv:1908.03851. Cited by: §3.1.1, TABLE II, TABLE III.
  • [32] M. Dominguez, R. Dhamdhere, A. Petkar, S. Jain, S. Sah, and R. Ptucha (2018) General-purpose deep point cloud feature extractor. In WACV, pp. 1972–1981. Cited by: §2.2.3.
  • [33] Y. Duan, Y. Zheng, J. Lu, J. Zhou, and Q. Tian (2019) Structural relational reasoning of point clouds. In CVPR, pp. 949–958. Cited by: §2.2.1, TABLE I.
  • [34] C. Elich, F. Engelmann, J. Schult, T. Kontogianni, and B. Leibe (2019) 3D-BEVIS: birds-eye-view instance segmentation. arXiv preprint arXiv:1904.02199. Cited by: §4.2.2, §4.2.2.
  • [35] M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner (2017) Vote3Deep: fast object detection in 3D point clouds using efficient convolutional neural networks. In ICRA, pp. 1355–1361. Cited by: §3.1.2, §3.1.2, TABLE II, TABLE III.
  • [36] F. Engelmann, T. Kontogianni, A. Hermans, and B. Leibe (2017) Exploring spatial context for 3D semantic segmentation of point clouds. In ICCV, pp. 716–724. Cited by: §4.1.2, TABLE IV.
  • [37] F. Engelmann, T. Kontogianni, and B. Leibe (2019) Dilated point convolutions: on the receptive field of point convolutions. arXiv preprint arXiv:1907.12046. Cited by: §4.1.2, TABLE IV.
  • [38] F. Engelmann, T. Kontogianni, J. Schult, and B. Leibe (2018) Know what your neighbors do: 3D semantic segmentation of point clouds. In ECCVW, Cited by: §4.1.2.
  • [39] F. Engelmann, T. Kontogianni, J. Schult, and B. Leibe (2018) Know what your neighbors do: 3D semantic segmentation of point clouds. In ECCV, Cited by: TABLE IV.
  • [40] C. Esteves, C. Allen-Blanchette, A. Makadia, and K. Daniilidis (2017) Learning so(3) equivariant representations with spherical CNNs. In ECCV, pp. 52–68. Cited by: §2.2.2, TABLE I.
  • [41] H. Fan and Y. Yang (2019) PointRNN: point recurrent neural network for moving point cloud processing. arXiv preprint arXiv:1910.08287. Cited by: §3.3, item .
  • [42] Y. Feng, H. You, Z. Zhang, R. Ji, and Y. Gao (2019) Hypergraph neural networks. In AAAI, Vol. 33, pp. 3558–3565. Cited by: §2.2.3.
  • [43] Y. Feng, Z. Zhang, X. Zhao, R. Ji, and Y. Gao (2018) GVCNN: group-view convolutional neural networks for 3D shape recognition. In CVPR, pp. 264–272. Cited by: §2.1.1.
  • [44] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the KITTI vision benchmark suite. In CVPR, pp. 3354–3361. Cited by: §1, §3.1.1, §3.1.1, §3.1.2, §3.1.2, §3.4.
  • [45] S. Giancola, J. Zarzar, and B. Ghanem (2019) Leveraging shape completion for 3D siamese tracking. arXiv preprint arXiv:1903.01784. Cited by: §3.2.
  • [46] B. Graham, M. Engelcke, and L. van der Maaten (2018) 3D semantic segmentation with submanifold sparse convolutional networks. In CVPR, pp. 9224–9232. Cited by: §4.1.1, §4.1.1, §4.2.2, TABLE IV.
  • [47] B. Graham (2014) Spatially-sparse convolutional neural networks. arXiv preprint arXiv:1409.6070. Cited by: §3.1.2.
  • [48] F. Groh, P. Wieschollek, and H. P. Lensch (2018) Flex-Convolution. In ACCV, pp. 105–122. Cited by: §2.2.2, TABLE I.
  • [49] X. Gu, Y. Wang, C. Wu, Y. J. Lee, and P. Wang (2019) HPLFlowNet: hierarchical permutohedral lattice flownet for scene flow estimation on large-scale point clouds. In CVPR, pp. 3254–3263. Cited by: §3.3.
  • [50] Y. Guo, M. Bennamoun, F. Sohel, M. Lu, and J. Wan (2014) 3D object recognition in cluttered scenes with local surface features: a survey. IEEE TPAMI 36 (11), pp. 2270–2287. Cited by: §1.
  • [51] Y. Guo, F. Sohel, M. Bennamoun, M. Lu, and J. Wan (2013) Rotational projection statistics for 3D local surface description and object recognition. IJCV 105 (1), pp. 63–86. Cited by: §1.
  • [52] T. Hackel, N. Savinov, L. Ladicky, J. D. Wegner, K. Schindler, and M. Pollefeys (2017) Semantic3D.net: a new large-scale point cloud classification benchmark. arXiv preprint arXiv:1704.03847. Cited by: §1, §4.4, TABLE IV.
  • [53] K. Hassani and M. Haley (2019) Unsupervised multi-task feature learning on point clouds. In ICCV, pp. 8160–8171. Cited by: §2.2.3, TABLE I.
  • [54] P. Hermosilla, T. Ritschel, P. Vázquez, À. Vinacua, and T. Ropinski (2018) Monte carlo convolution for learning on non-uniformly sampled point clouds. ACM TOG 37 (6). Cited by: §2.2.2, TABLE I.
  • [55] J. Hou, A. Dai, and M. Nießner (2019) 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In CVPR, pp. 4421–4430. Cited by: §4.2.1.
  • [56] Q. Hu, Y. Guo, Y. Chen, J. Xiao, and W. An (2017) Correlation filter tracking: beyond an open-loop system. In BMVC, Cited by: §3.2.
  • [57] Q. Hu, B. Yang, L. Xie, S. Rosa, Y. Guo, Z. Wang, N. Trigoni, and A. Markham (2019) RandLA-Net: efficient semantic segmentation of large-scale point clouds. arXiv preprint arXiv:1911.11236. Cited by: item , item , §4.1.2, TABLE IV.
  • [58] S. Hu, J. Cai, and Y. Lai (2018) Semantic labeling and instance segmentation of 3D point clouds using patch context analysis and multiscale processing. IEEE transactions on visualization and computer graphics. Cited by: §4.2.2.
  • [59] B. Hua, M. Tran, and S. Yeung (2018) Pointwise convolutional neural networks. In CVPR, Cited by: §2.2.2, TABLE I, §4.1.2.
  • [60] J. Huang and S. You (2016) Point cloud labeling using 3D convolutional neural network. In ICPR, pp. 2670–2675. Cited by: §4.1.1.
  • [61] Q. Huang, W. Wang, and U. Neumann (2018) Recurrent slice networks for 3D segmentation of point clouds. In CVPR, pp. 2626–2635. Cited by: §4.1.2, TABLE IV.
  • [62] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer (2016) SqueezeNet: alexnet-level accuracy with 50x fewer parameters and 0.5 MB model size. arXiv preprint arXiv:1602.07360. Cited by: §4.1.1.
  • [63] A. Ioannidou, E. Chatzilari, S. Nikolopoulos, and I. Kompatsiaris (2017) Deep learning advances in computer vision with 3D data: a survey. ACM Computing Surveys 50 (2), pp. 20. Cited by: item 2, §1.
  • [64] M. Jaritz, J. Gu, and H. Su (2019) Multi-view pointnet for 3D scene understanding. In ICCVW, Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [65] Z. Jesus, G. Silvio, and G. Bernard (2019) PointRGCN: graph convolution networks for 3D vehicles detection refinement. arXiv preprint arXiv:1911.12236. Cited by: §3.1.1, TABLE II, TABLE III.
  • [66] L. Jiang, H. Zhao, S. Liu, X. Shen, C. Fu, and J. Jia (2019) Hierarchical point-edge interaction network for point cloud semantic segmentation. In ICCV, pp. 10433–10441. Cited by: TABLE IV.
  • [67] M. Jiang, Y. Wu, and C. Lu (2018) PointSIFT: a sift-like network module for 3D point cloud semantic segmentation. arXiv preprint arXiv:1807.00652. Cited by: §3.1.1, §4.1.2, TABLE IV.
  • [68] L. Johannes, M. Andreas, A. Thomas, H. Markus, N. Bernhard, and H. Sepp (2019) Patch refinement - localized 3D object detection. arXiv preprint arXiv:1910.04093. Cited by: §3.1.1, TABLE II, TABLE III.
  • [69] M. Joseph-Rivlin, A. Zvirin, and R. Kimmel (2018) Mo-Net: flavor the moments in learning to classify shapes. In ICCVW, Cited by: §2.2.1, TABLE I.
  • [70] E. Kalogerakis, M. Averkiou, S. Maji, and S. Chaudhuri (2017) 3D shape segmentation with projective convolutional networks. In CVPR, pp. 3779–3788. Cited by: §4.3.
  • [71] R. Klokov and V. Lempitsky (2017) Escape from cells: deep kd-networks for the recognition of 3D point cloud models. In ICCV, pp. 863–872. Cited by: §2.2.4, TABLE I.
  • [72] A. Komarichev, Z. Zhong, and J. Hua (2019) A-CNN: annularly convolutional neural networks on point clouds. In CVPR, pp. 7421–7430. Cited by: §2.2.2, TABLE I, TABLE IV.
  • [73] J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. L. Waslander (2018) Joint 3D proposal generation and object detection from view aggregation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1–8. Cited by: §3.1.1, §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [74] S. Kumawat and S. Raman (2019) LP-3DCNN: unveiling local phase in 3D convolutional neural networks. In CVPR, pp. 4903–4912. Cited by: §2.2.2.
  • [75] J. Lahoud, B. Ghanem, M. Pollefeys, and M. R. Oswald (2019) 3D instance segmentation via multi-task metric learning. arXiv preprint arXiv:1906.08650. Cited by: §4.2.2.
  • [76] S. Lan, R. Yu, G. Yu, and L. S. Davis (2018) Modeling local geometric structure of 3D point clouds using Geo-CNN. arXiv preprint arXiv:1811.07782. Cited by: §2.2.2, TABLE I.
  • [77] L. Landrieu and M. Boussaha (2019) Point cloud oversegmentation with graph-structured deep metric learning. arXiv preprint arXiv:1904.02113. Cited by: §4.1.2, TABLE IV.
  • [78] L. Landrieu and M. Simonovsky (2018) Large-scale point cloud semantic segmentation with superpoint graphs. In CVPR, pp. 4558–4567. Cited by: item , §4.1.2, TABLE IV.
  • [79] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2018) PointPillars: fast encoders for object detection from point clouds. arXiv preprint arXiv:1812.05784. Cited by: §3.1.1, §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [80] F. J. Lawin, M. Danelljan, P. Tosteberg, G. Bhat, F. S. Khan, and M. Felsberg (2017) Deep projective 3D semantic segmentation. In Proceedings of International Conference on Computer Analysis of Images and Patterns, pp. 95–107. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [81] T. Le and Y. Duan (2018) PointGrid: a deep network for 3D shape understanding. In CVPR, pp. 9204–9214. Cited by: §2.1.2.
  • [82] H. Lei, N. Akhtar, and A. Mian (2018) Spherical convolutional neural network for 3D point clouds. Cited by: TABLE IV.
  • [83] H. Lei, N. Akhtar, and A. Mian (2019) Octree guided CNN with spherical kernels for 3D point clouds. arXiv preprint arXiv:1903.00343. Cited by: §2.2.2, §2.2.4, TABLE I.
  • [84] B. Li, T. Zhang, and T. Xia (2016) Vehicle detection from 3D lidar using fully convolutional network. arXiv preprint arXiv:1608.07916. Cited by: §3.1.2, §3.1.2, TABLE II, TABLE III.
  • [85] B. Li (2017) 3D fully convolutional network for vehicle detection in point cloud. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1513–1518. Cited by: §3.1.2, §3.1.2, TABLE II, TABLE III.
  • [86] J. Li, B. M. Chen, and G. Hee Lee (2018) SO-Net: self-organizing network for point cloud analysis. In CVPR, pp. 9397–9406. Cited by: §2.2.4, TABLE I.
  • [87] R. Li, S. Wang, F. Zhu, and J. Huang (2018) Adaptive graph convolutional neural networks. In AAAI, Cited by: §2.2.3.
  • [88] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen (2018) PointCNN: convolution on x-transformed points. In NeurIPS, pp. 820–830. Cited by: §2.2.2, TABLE I, item , item , TABLE IV.
  • [89] M. Liang, B. Yang, Y. Chen, R. Hu, and R. Urtasun (2019) Multi-task multi-sensor fusion for 3D object detection. In CVPR, pp. 7345–7353. Cited by: item , §3.1.1, §3.1.1, TABLE II, TABLE III.
  • [90] M. Liang, B. Yang, S. Wang, and R. Urtasun (2018) Deep continuous fusion for multi-sensor 3D object detection. In ECCV, pp. 641–656. Cited by: §3.1.1, TABLE II, TABLE III.
  • [91] Z. Liang, Y. Guo, Y. Feng, W. Chen, L. Qiao, L. Zhou, J. Zhang, and H. Liu (2019) Stereo matching using multi-level cost volume and multi-scale feature constancy. IEEE TPAMI. Cited by: §1.
  • [92] Z. Liang, M. Yang, L. Deng, C. Wang, and B. Wang (2019) Hierarchical depthwise graph convolutional neural network for 3d semantic segmentation of point clouds. In ICRA, pp. 8152–8158. Cited by: TABLE IV.
  • [93] Z. Liang, M. Yang, and C. Wang (2019) 3D graph embedding learning with a structure-aware loss function for point cloud semantic instance segmentation. arXiv preprint arXiv:1902.05247. Cited by: §4.2.2, §4.2.2.
  • [94] H. Lin, Z. Xiao, Y. Tan, H. Chao, and S. Ding (2019) Justlookup: one millisecond deep feature extraction for point clouds by lookup tables. In ICME, pp. 326–331. Cited by: §2.2.1, TABLE I.
  • [95] C. Liu and Y. Furukawa (2019) MASC: multi-scale affinity with sparse convolution for 3D instance segmentation. arXiv preprint arXiv:1902.04478. Cited by: §4.2.2, §4.2.2.
  • [96] F. Liu, S. Li, L. Zhang, C. Zhou, R. Ye, Y. Wang, and J. Lu (2017) 3DCNN-DQN-RNN: a deep reinforcement learning framework for semantic parsing of large-scale 3D point clouds. In ICCV, pp. 5678–5687. Cited by: §4.1.2.
  • [97] H. Liu, Q. Hu, B. Li, and Y. Guo (2019) Robust long-term tracking via instance specific proposals. IEEE TIM. Cited by: §3.2.
  • [98] J. Liu, B. Ni, C. Li, J. Yang, and Q. Tian (2019) Dynamic points agglomeration for hierarchical point sets learning. In ICCV, pp. 7546–7555. Cited by: §2.2.3, TABLE I, TABLE IV.
  • [99] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen (2018) Deep learning for generic object detection: a survey. arXiv preprint arXiv:1809.02165. Cited by: §3.1.
  • [100] X. Liu, C. R. Qi, and L. J. Guibas (2019) Flownet3D: learning scene flow in 3D point clouds. In CVPR, pp. 529–537. Cited by: §3.3.
  • [101] X. Liu, M. Yan, and J. Bohg (2019) MeteorNet: deep learning on dynamic 3D point cloud sequences. In ICCV, Cited by: §3.3, item .
  • [102] X. Liu, Z. Han, Y. Liu, and M. Zwicker (2019) Point2Sequence: learning the shape representation of 3D point clouds with an attention-based sequence to sequence network. In AAAI, Vol. 33, pp. 8778–8785. Cited by: §2.2.5, TABLE I.
  • [103] Y. Liu, B. Fan, G. Meng, J. Lu, S. Xiang, and C. Pan (2019) DensePoint: learning densely contextual representation for efficient point cloud processing. In ICCV, pp. 5239–5248. Cited by: §2.2.2, TABLE I.
  • [104] Y. Liu, B. Fan, S. Xiang, and C. Pan (2019) Relation-shape convolutional neural network for point cloud analysis. In CVPR, pp. 1–10. Cited by: §2.2.2, TABLE I.
  • [105] Z. Liu, H. Tang, Y. Lin, and S. Han (2019) Point-Voxel CNN for efficient 3D deep learning. In NeurIPS, pp. 963–973. Cited by: item .
  • [106] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431–3440. Cited by: §4.1.1.
  • [107] H. Lu, X. Chen, G. Zhang, Q. Zhou, Y. Ma, and Y. Zhao (2019) SCANet: spatial-channel attention network for 3D object detection. In ICASSP, pp. 1992–1996. Cited by: §3.1.1, TABLE II, TABLE III, §4.4.
  • [108] W. Luo, B. Yang, and R. Urtasun (2018) Fast and furious: real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net. In CVPR, pp. 3569–3577. Cited by: §3.1.1, §3.1.2.
  • [109] C. Ma, Y. Guo, J. Yang, and W. An (2018) Learning multi-view representation with LSTM for 3D shape recognition and retrieval. IEEE TMM. Cited by: §2.1.1.
  • [110] J. Mao, X. Wang, and H. Li (2019) Interpolated convolutional networks for 3d point cloud understanding. In ICCV, pp. 1578–1587. Cited by: §2.2.2, TABLE I, TABLE IV.
  • [111] A. Matan, M. Haggai, and L. Yaron (2018) Point convolutional neural networks by extension operators. ACM TOG 37 (4), pp. 1–12. Cited by: §2.2.2, TABLE I.
  • [112] D. Maturana and S. Scherer (2015) VoxNet: a 3D convolutional neural network for real-time object recognition. In IROS, pp. 922–928. Cited by: §2.1.2.
  • [113] H. Meng, L. Gao, Y. Lai, and D. Manocha (2018) VV-Net: voxel vae net with group convolutions for point cloud segmentation. arXiv preprint arXiv:1811.04337. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [114] G. P. Meyer, J. Charland, D. Hegde, A. Laddha, and C. Vallespi-Gonzalez (2019) Sensor fusion for joint 3D object detection and semantic segmentation. arXiv preprint arXiv:1904.11466. Cited by: §3.1.2, TABLE II, TABLE III.
  • [115] G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, and C. K. Wellington (2019) LaserNet: an efficient probabilistic 3D object detector for autonomous driving. arXiv preprint arXiv:1903.08701. Cited by: §3.1.2, TABLE II, TABLE III.
  • [116] A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) RangeNet++: fast and accurate lidar semantic segmentation. In IROS, Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [117] F. Mingtao, G. Syed Zulqarnain, W. Yaonan, Z. Liang, and M. Ajmal (2019) Relation graph network for 3D object detection in point clouds. arXiv preprint arXiv:1912.00202. Cited by: §3.1.1, TABLE II, TABLE III.
  • [118] H. Mittal, B. Okorn, and D. Held (2019) Just go with the flow: self-supervised scene flow estimation. arXiv preprint arXiv:1912.00497. Cited by: §3.3.
  • [119] M. Mueller, N. Smith, and B. Ghanem (2017) Context-aware correlation filter tracking. In CVPR, pp. 1396–1404. Cited by: §3.2.
  • [120] D. Müllner (2011) Modern hierarchical, agglomerative clustering algorithms. arXiv preprint arXiv:1109.2378. Cited by: §2.2.3.
  • [121] G. Narita, T. Seno, T. Ishikawa, and Y. Kaji (2019) PanopticFusion: online volumetric semantic mapping at the level of stuff and things. arXiv preprint arXiv:1903.01177. Cited by: §4.2.1.
  • [122] G. Pan, J. Wang, R. Ying, and P. Liu (2018) 3DTI-Net: learn inner transform invariant 3D geometry features using dynamic GCN. arXiv preprint arXiv:1812.06254. Cited by: §2.2.3, TABLE I.
  • [123] L. Pan, C. Chew, and G. H. Lee (2019) PointAtrousGraph: deep hierarchical encoder-decoder with atrous convolution for point clouds. arXiv preprint arXiv:1907.09798. Cited by: TABLE IV.
  • [124] Q. Pham, T. Nguyen, B. Hua, G. Roig, and S. Yeung (2019) JSIS3D: joint semantic-instance segmentation of 3D point clouds with multi-task pointwise networks and multi-value conditional random fields. In CVPR, pp. 8827–8836. Cited by: §4.2.2, §4.2.2.
  • [125] A. Poulenard, M. Rakotosaona, Y. Ponty, and M. Ovsjanikov (2019) Effective rotation-invariant point CNN with spherical harmonics kernels. In 3DV, pp. 47–56. Cited by: §2.2.2.
  • [126] S. Prokudin, C. Lassner, and J. Romero (2019) Efficient learning on point clouds with basis point sets. In ICCV, Cited by: §2.2.5.
  • [127] C. R. Qi, O. Litany, K. He, and L. J. Guibas (2019) Deep hough voting for 3D object detection in point clouds. arXiv preprint arXiv:1904.09664. Cited by: §3.1.1, TABLE II, TABLE III.
  • [128] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas (2018) Frustum pointnets for 3D object detection from RGB-D data. In CVPR, pp. 918–927. Cited by: §3.1.1, TABLE II, TABLE III.
  • [129] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In CVPR, pp. 652–660. Cited by: §1, §2.2.1, §2.2.1, §2.2.1, §2.2.4, TABLE I, §3.1.1, §3.1.1, §3.1.2, item , §4.1.2, §4.1.2, §4.1.2, TABLE IV.
  • [130] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas (2016) Volumetric and multi-view CNNs for object classification on 3D data. In CVPR, pp. 5648–5656. Cited by: §2.1.1.
  • [131] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In NeurIPS, pp. 5099–5108. Cited by: §2.2.1, §2.2.4, TABLE I, §3.1.1, Fig. 10, item , item , §4.1.2, TABLE IV.
  • [132] C. Qin, H. You, L. Wang, C. J. Kuo, and Y. Fu (2019) PointDAN: a multi-scale 3D domain adaption network for point cloud representation. In NIPS, pp. 7190–7201. Cited by: §2.2.5.
  • [133] M. M. Rahman, Y. Tan, J. Xue, and K. Lu (2019) Recent advances in 3D object detection in the era of deep neural networks: a survey. IEEE TIP. Cited by: §1.
  • [134] Y. Rao, J. Lu, and J. Zhou (2019) Spherical fractal convolutional neural networks for point cloud recognition. In CVPR, pp. 452–460. Cited by: §2.2.2, TABLE I.
  • [135] D. Rethage, J. Wald, J. Sturm, N. Navab, and F. Tombari (2018) Fully-convolutional point networks for large-scale point clouds. In ECCV, pp. 596–611. Cited by: §4.1.1, §4.1.1.
  • [136] G. Riegler, A. Osman Ulusoy, and A. Geiger (2017) OctNet: learning deep 3D representations at high resolutions. In CVPR, pp. 3577–3586. Cited by: §2.1.2, §2.2.4.
  • [137] R. A. Rosu, P. Schütt, J. Quenzel, and S. Behnke (2019) LatticeNet: fast point cloud segmentation using permutohedral lattices. arXiv preprint arXiv:1912.05905. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [138] R. B. Rusu and S. Cousins (2011) 3D is here: point cloud library (pcl). In ICRA, pp. 1–4. Cited by: §2.2.3.
  • [139] M. Savva, F. Yu, H. Su, M. Aono, B. Chen, D. Cohen-Or, W. Deng, H. Su, S. Bai, X. Bai, et al. (2016) Shrec16 track: large-scale 3D shape retrieval from shapenet core55. In Proceedings of the Eurographics Workshop on 3D Object Retrieval, pp. 89–98. Cited by: §1.
  • [140] Y. Shen, C. Feng, Y. Yang, and D. Tian (2018) Mining point cloud local structures by kernel correlation and graph pooling. In CVPR, pp. 4548–4557. Cited by: §2.2.3, TABLE I.
  • [141] S. Shi, X. Wang, and H. Li (2018) PointRCNN: 3D object proposal generation and detection from point cloud. arXiv preprint arXiv:1812.04244. Cited by: §3.1.1, §3.1.1, TABLE II, TABLE III.
  • [142] S. Shi, Z. Wang, X. Wang, and H. Li (2019) Part-A^2 Net: 3D part-aware and aggregation neural network for object detection from point cloud. arXiv preprint arXiv:1907.03670. Cited by: §3.1.1, TABLE II, TABLE III.
  • [143] K. Shin, Y. P. Kwon, and M. Tomizuka (2018) RoarNet: a robust 3D object detection based on region approximation refinement. arXiv preprint arXiv:1811.03818. Cited by: §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [144] J. Sauder and B. Sievers (2019) Self-supervised deep learning on point clouds by reconstructing space. In NeurIPS, pp. 12942–12952. Cited by: §2.2.5.
  • [145] M. Simon, K. Amende, A. Kraus, J. Honer, T. Sämann, H. Kaulbersch, S. Milz, and H. M. Gross (2019) Complexer-YOLO: real-time 3D object detection and tracking on semantic point clouds. arXiv preprint arXiv:1904.07537. Cited by: §3.2.
  • [146] M. Simonovsky and N. Komodakis (2017) Dynamic edge-conditioned filters in convolutional neural networks on graphs. In CVPR, pp. 29–38. Cited by: §2.2.3, §2.2.3, TABLE I.
  • [147] V. A. Sindagi, Y. Zhou, and O. Tuzel (2019) MVX-Net: multimodal VoxelNet for 3D object detection. arXiv preprint arXiv:1904.01649. Cited by: §3.1.2, TABLE II, TABLE III.
  • [148] S. Song, S. P. Lichtenberg, and J. Xiao (2015) Sun RGB-D: a RGB-D scene understanding benchmark suite. In CVPR, pp. 567–576. Cited by: §3.1.1, §3.1.1.
  • [149] S. Vora, A. H. Lang, B. Helou, and O. Beijbom (2019) PointPainting: sequential fusion for 3D object detection. arXiv preprint arXiv:1911.10150. Cited by: §3.1.1, TABLE II, TABLE III.
  • [150] H. Su, V. Jampani, D. Sun, S. Maji, E. Kalogerakis, M. Yang, and J. Kautz (2018) SPLATNet: sparse lattice networks for point cloud processing. In CVPR, pp. 2530–2539. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [151] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller (2015) Multi-view convolutional neural networks for 3D shape recognition. In ICCV, pp. 945–953. Cited by: §2.1.1.
  • [152] X. Sun, Z. Lian, and J. Xiao (2019) SRINet: learning strictly rotation-invariant representations for point cloud classification and segmentation. In ACM MM, pp. 980–988. Cited by: §2.2.1.
  • [153] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In CVPR, pp. 1–9. Cited by: §2.2.3.
  • [154] M. Tatarchenko, J. Park, V. Koltun, and Q. Zhou (2018) Tangent convolutions for dense prediction in 3D. In CVPR, pp. 3887–3896. Cited by: §4.1.1, TABLE IV.
  • [155] L. Tchapmi, C. Choy, I. Armeni, J. Gwak, and S. Savarese (2017) SEGCloud: semantic segmentation of 3D point clouds. In 3DV, pp. 537–547. Cited by: §4.1.1, TABLE IV.
  • [156] G. Te, W. Hu, A. Zheng, and Z. Guo (2018) RGCNN: regularized graph CNN for point cloud segmentation. In ACM MM, pp. 746–754. Cited by: §2.2.3, TABLE I.
  • [157] H. Thomas, C. R. Qi, J. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas (2019) KPConv: flexible and deformable convolution for point clouds. arXiv preprint arXiv:1904.08889. Cited by: §2.2.2, TABLE I, item , §4.1.2, TABLE IV.
  • [158] N. Thomas, T. Smidt, S. Kearnes, L. Yang, L. Li, K. Kohlhoff, and P. Riley (2018) Tensor field networks: rotation- and translation-equivariant neural networks for 3D point clouds. arXiv preprint arXiv:1802.08219. Cited by: §2.2.2.
  • [159] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In NeurIPS, pp. 5998–6008. Cited by: §2.2.5, §4.1.2.
  • [160] C. Wang, M. Pelillo, and K. Siddiqi (2019) Dominant set clustering and pooling for multi-view 3D object recognition. arXiv preprint arXiv:1906.01592. Cited by: §2.1.1.
  • [161] C. Wang, B. Samari, and K. Siddiqi (2018) Local spectral graph convolution for point set feature learning. In ECCV, pp. 52–66. Cited by: §2.2.3, TABLE I.
  • [162] L. Wang, Y. Huang, Y. Hou, S. Zhang, and J. Shan (2019) Graph attention convolution for point cloud semantic segmentation. In CVPR, pp. 10296–10305. Cited by: §4.1.2, TABLE IV.
  • [163] P. Wang, Y. Liu, Y. Guo, C. Sun, and X. Tong (2017) O-CNN: octree-based convolutional neural networks for 3D shape analysis. ACM TOG 36 (4), pp. 72. Cited by: §2.1.2.
  • [164] P. Wang, Y. Gan, P. Shui, F. Yu, Y. Zhang, S. Chen, and Z. Sun (2018) 3D shape segmentation via shape fully convolutional networks. Computers & Graphics 70, pp. 128–139. Cited by: §4.3.
  • [165] S. Wang, S. Suo, W. Ma, A. Pokrovsky, and R. Urtasun (2018) Deep parametric continuous convolutional neural networks. In CVPR, pp. 2589–2597. Cited by: §4.1.2, TABLE IV.
  • [166] W. Wang, R. Yu, Q. Huang, and U. Neumann (2018) SGPN: similarity group proposal network for 3D point cloud instance segmentation. In CVPR, pp. 2569–2578. Cited by: §4.2.2, §4.2.2, §4.2.2.
  • [167] X. Wang, S. Liu, X. Shen, C. Shen, and J. Jia (2019) Associatively segmenting instances and semantics in point clouds. In CVPR, pp. 4096–4105. Cited by: §4.2.2, §4.2.2.
  • [168] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon (2019) Dynamic graph CNN for learning on point clouds. ACM TOG. Cited by: §2.2.3, §2.2.3, TABLE I, §4.1.2, TABLE IV.
  • [169] Z. Wang and K. Jia (2019) Frustum convnet: sliding frustums to aggregate local point-wise features for amodal 3D object detection. arXiv preprint arXiv:1903.01864. Cited by: §3.1.1, §3.1.1, TABLE II, TABLE III.
  • [170] Z. Wang, S. Li, H. Howard-Jenkins, V. A. Prisacariu, and M. Chen (2019) FlowNet3D++: geometric losses for deep scene flow estimation. arXiv preprint arXiv:1912.01438. Cited by: §3.3.
  • [171] Z. Wang and F. Lu (2019) VoxSegNet: volumetric CNNs for semantic part segmentation of 3D shapes. IEEE TVCG. Cited by: §4.3.
  • [172] B. Wu, A. Wan, X. Yue, and K. Keutzer (2018) SqueezeSeg: convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud. In ICRA, pp. 1887–1893. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [173] B. Wu, X. Zhou, S. Zhao, X. Yue, and K. Keutzer (2019) SqueezeSegV2: improved model structure and unsupervised domain adaptation for road-object segmentation from a LiDAR point cloud. In ICRA, pp. 4376–4382. Cited by: §4.1.1, §4.1.1, TABLE IV.
  • [174] P. Wu, C. Chen, J. Yi, and D. Metaxas (2019) Point cloud processing via recurrent set encoding. In AAAI, Vol. 33, pp. 5441–5449. Cited by: §2.2.5, TABLE I.
  • [175] W. Wu, Z. Qi, and L. Fuxin (2018) PointConv: deep convolutional networks on 3D point clouds. arXiv preprint arXiv:1811.07246. Cited by: §2.2.2, TABLE I.
  • [176] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao (2015) 3D ShapeNets: a deep representation for volumetric shapes. In CVPR, pp. 1912–1920. Cited by: §1, §2.1.2.
  • [177] S. Xie, S. Liu, Z. Chen, and Z. Tu (2018) Attentional ShapeContextNet for point cloud recognition. In CVPR, pp. 4606–4615. Cited by: §2.2.5, TABLE I, TABLE IV.
  • [178] Y. Xie, J. Tian, and X. X. Zhu (2019) A review of point cloud semantic segmentation. arXiv preprint arXiv:1908.08854. Cited by: §1.
  • [179] D. Xu, D. Anguelov, and A. Jain (2018) PointFusion: deep sensor fusion for 3D bounding box estimation. In CVPR, pp. 244–253. Cited by: §3.1.1, TABLE II, TABLE III.
  • [180] Y. Xu, T. Fan, M. Xu, L. Zeng, and Y. Qiao (2018) SpiderCNN: deep learning on point sets with parameterized convolutional filters. In ECCV, pp. 87–102. Cited by: §2.2.2, TABLE I.
  • [181] X. Li, J. E. Guivant, N. Kwok, and Y. Xu (2019) 3D backbone network for 3D object detection. arXiv preprint arXiv:1901.08373. Cited by: §3.1.2, TABLE II, TABLE III.
  • [182] Y. Yan, Y. Mao, and B. Li (2018) SECOND: sparsely embedded convolutional detection. Sensors 18. Cited by: §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [183] B. Yang, M. Liang, and R. Urtasun (2018) HDNET: exploiting HD maps for 3D object detection. In CoRL, pp. 146–155. Cited by: §3.1.2, TABLE II, TABLE III.
  • [184] B. Yang, W. Luo, and R. Urtasun (2018) PIXOR: real-time 3D object detection from point clouds. In CVPR, pp. 7652–7660. Cited by: §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [185] B. Yang, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham, and N. Trigoni (2019) Learning object bounding boxes for 3D instance segmentation on point clouds. arXiv preprint arXiv:1906.01140. Cited by: §4.2.1.
  • [186] J. Yang, Q. Zhang, B. Ni, L. Li, J. Liu, M. Zhou, and Q. Tian (2019) Modeling point clouds with self-attention and Gumbel subset sampling. arXiv preprint arXiv:1904.03375. Cited by: §2.2.1, TABLE I, §4.1.2, TABLE IV.
  • [187] Y. Yang, C. Feng, Y. Shen, and D. Tian (2018) FoldingNet: point cloud auto-encoder via deep grid deformation. In CVPR, pp. 206–215. Cited by: §2.2.3.
  • [188] Z. Yang and L. Wang (2019) Learning relationships for multi-view 3D object recognition. In ICCV, pp. 7505–7514. Cited by: §2.1.1.
  • [189] Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia (2018) IPOD: intensive point-based object detector for point cloud. arXiv preprint arXiv:1812.05276. Cited by: §3.1.1, TABLE II, TABLE III.
  • [190] Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia (2019) STD: sparse-to-dense 3D object detector for point cloud. arXiv preprint arXiv:1907.10471. Cited by: §3.1.1, TABLE II, TABLE III.
  • [191] X. Ye, J. Li, H. Huang, L. Du, and X. Zhang (2018) 3D recurrent neural networks with context fusion for point cloud semantic segmentation. In ECCV, pp. 403–417. Cited by: §4.1.2, TABLE IV.
  • [192] L. Yi, H. Su, X. Guo, and L. J. Guibas (2017) SyncSpecCNN: synchronized spectral CNN for 3D shape segmentation. In CVPR, pp. 2282–2290. Cited by: §4.3.
  • [193] L. Yi, W. Zhao, H. Wang, M. Sung, and L. J. Guibas (2019) GSPN: generative shape proposal network for 3D instance segmentation in point cloud. In CVPR, pp. 3947–3956. Cited by: §4.2.1.
  • [194] H. You, Y. Feng, R. Ji, and Y. Gao (2018) PVNet: a joint convolutional network of point cloud and multi-view for 3D shape recognition. In ACM MM, pp. 1310–1318. Cited by: §2.2.5, TABLE I.
  • [195] H. You, Y. Feng, X. Zhao, C. Zou, R. Ji, and Y. Gao (2018) PVRNet: point-view relation neural network for 3D shape recognition. arXiv preprint arXiv:1812.00333. Cited by: §2.2.5, TABLE I.
  • [196] T. Yu, J. Meng, and J. Yuan (2018) Multi-view harmonized bilinear network for 3D object recognition. In CVPR, pp. 186–194. Cited by: §2.1.1.
  • [197] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola (2017) Deep sets. In NeurIPS, pp. 3391–3401. Cited by: §2.2.1, TABLE I.
  • [198] J. Zarzar, S. Giancola, and B. Ghanem (2019) Efficient tracking proposals using 2D-3D Siamese networks on LiDAR. arXiv preprint arXiv:1903.10168. Cited by: §3.2.
  • [199] W. Zeng and T. Gevers (2018) 3DContextNet: k-d tree guided hierarchical learning of point clouds using local and global contextual cues. In ECCV, Cited by: §2.2.4, TABLE I, TABLE IV.
  • [200] Y. Zeng, Y. Hu, S. Liu, J. Ye, Y. Han, X. Li, and N. Sun (2018) RT3D: real-time 3D vehicle detection in LiDAR point cloud for autonomous driving. IEEE Robotics and Automation Letters 3 (4), pp. 3434–3440. Cited by: §3.1.1, §3.1.1, TABLE II, TABLE III.
  • [201] B. Zhang and P. Wonka (2019) Point cloud instance segmentation using probabilistic embeddings. arXiv preprint arXiv:1912.00145. Cited by: §4.2.2.
  • [202] F. Zhang, C. Guan, J. Fang, R. Yang, P. Torr, and V. Prisacariu (2020) Instance segmentation of LiDAR point clouds. In ICRA, Cited by: §4.2.1.
  • [203] K. Zhang, M. Hao, J. Wang, C. W. de Silva, and C. Fu (2019) Linked dynamic graph CNN: learning on point cloud via linking hierarchical features. arXiv preprint arXiv:1904.10014. Cited by: §2.2.3, TABLE I.
  • [204] Y. Zhang and M. Rabbat (2018) A Graph-CNN for 3D point cloud classification. In ICASSP, pp. 6279–6283. Cited by: §2.2.3, TABLE I.
  • [205] Z. Zhang, B. Hua, D. W. Rosen, and S. Yeung (2019) Rotation invariant convolutions for 3D point clouds deep learning. In 3DV, pp. 204–213. Cited by: §2.2.2.
  • [206] Z. Zhang, B. Hua, and S. Yeung (2019) ShellNet: efficient point cloud convolutional neural networks using concentric shells statistics. arXiv preprint arXiv:1908.06295. Cited by: item , item , §4.1.2, TABLE IV.
  • [207] C. Zhao, W. Zhou, L. Lu, and Q. Zhao (2019) Pooling scores of neighboring points for improved 3D point cloud segmentation. In ICIP, pp. 1475–1479. Cited by: §4.1.2.
  • [208] H. Zhao, L. Jiang, C. Fu, and J. Jia (2019) PointWeb: enhancing local neighborhood features for point cloud processing. In CVPR, pp. 5565–5573. Cited by: §2.2.1, TABLE I, §4.1.2, TABLE IV.
  • [209] X. Zhao, Z. Liu, R. Hu, and K. Huang (2019) 3D object detection using scale invariant and feature reweighting networks. arXiv preprint arXiv:1901.02237. Cited by: §3.1.1, TABLE II, TABLE III.
  • [210] Y. Zhao, T. Birdal, H. Deng, and F. Tombari (2019) 3D point capsule networks. In CVPR, Cited by: §2.2.5, TABLE I, §4.1.2.
  • [211] Z. Zhao, M. Liu, and K. Ramani (2019) DAR-Net: dynamic aggregation network for semantic scene segmentation. arXiv preprint arXiv:1907.12022. Cited by: §4.1.2.
  • [212] Z. Kang and N. Li (2019) PyramNet: point cloud pyramid attention network and graph embedding module for classification and segmentation. arXiv preprint arXiv:1906.03299. Cited by: §4.1.2.
  • [213] Y. Zhou and O. Tuzel (2018) VoxelNet: end-to-end learning for point cloud based 3D object detection. In CVPR, pp. 4490–4499. Cited by: §3.1.1, §3.1.2, TABLE II, TABLE III.
  • [214] C. Zhu, K. Xu, S. Chaudhuri, L. Yi, L. Guibas, and H. Zhang (2019) CoSegNet: deep co-segmentation of 3D shapes with group consistency loss. arXiv preprint arXiv:1903.10297. Cited by: §4.3.