With the widespread availability of 3D scanning devices, depth sensors and light field cameras [10, 11], 3D point cloud data is being increasingly used in many application domains such as robotics, autonomous driving, city planning and infrastructure maintenance. Accurate detection of 3D objects in point clouds is a central problem for mobile agents that must automatically avoid obstacles, plan routes and interact with objects. Converting point clouds to canonical forms such as depth images, multiple views or voxels has been a popular way to subsequently process the 3D data with Convolutional Neural Networks (CNNs). However, applying CNNs directly to the raw coordinates of the point cloud for 3D object detection has not been widely studied. Progress in 3D object detection lags far behind its 2D counterpart due to the irregular and sparse nature of 3D point clouds. Recently, PointNet and PointNet++ were proposed to directly process raw point clouds without converting them to a canonical form. These methods split the input scene into overlapping blocks to avoid the expensive computation and memory cost associated with the huge amount of data. Unfortunately, this step adversely affects the detection of 3D objects when the global scene context must be considered.
A straightforward idea is to take inspiration from 2D object detection frameworks to guide the design of 3D methods. For example, Generative Shape Proposal Network  extends the classic 2D-based detector Mask R-CNN  to 3D. It provides an analysis-by-synthesis strategy to generate a large number of 3D object proposals by reconstructing the shapes followed by proposal refinement and instance identification in point clouds. Nonetheless, GSPN  is a dense proposal-based method and relies on two-stage training, which is computationally expensive. More recently, simpler and more efficient 2D object detectors have been proposed [56, 7, 43, 19]. Zhou et al.  represent a 2D object as a single pixel located at the center of its bounding box, and regress the parameters (e.g., dimension, orientation, object size) of each bounding box directly from features of the center pixel. 2D object detection is thus transformed to a standard keypoint detection problem.
Inspired by the above approach, we extend the 2D object detection scheme and propose an algorithm for accurate geometric center estimation and 3D bounding box regression over 3D point clouds. Specifically, we first introduce a strategy that jointly predicts pseudo geometric centers and direction vectors, leading to a win-win solution for 3D bounding box candidate regression. A challenge in this case is that, unlike 2D images where the object's center pixel is surrounded by other pixels, the geometric centers of 3D objects generally lie in empty space, far from points on the object's surface (for example, the center of a sphere is far from the surface of the sphere). Moreover, 3D objects in cluttered scenes are usually scanned partially and are noisy. Given the semantic features of points, it is difficult to directly regress offset values that measure the distance from the object surface points to their geometric centers. Inaccurate prediction of pseudo centers induces errors in the downstream 3D bounding box generator. Therefore, learning more discriminative features from the object surface points and computing the center position more accurately are the keys to regressing 3D bounding box candidates. Different from prior work, we predict pseudo centers that are close to the geometric center of the object and assign each surface point a direction vector that points to the geometric center. The magnitude and direction of these vectors collaborate to further boost the accuracy of the 3D bounding box candidates.
Regressing 3D bounding boxes often results in duplicate candidates. A straightforward but naive approach to removing the duplicates is to use 3D Non-Maximum Suppression (NMS) with an Intersection over Union (IoU) threshold. A more powerful idea is to also exploit the relationships between different objects in the scene. For example, a chair is often close to a desk and a computer is often placed on a desk. This relationship has been exploited in many 2D object recognition algorithms [17, 50, 46, 22, 3] and is easy to model since related objects are close to each other in images. However, 3D object-object relationships are difficult to model since objects belonging to different categories in cluttered 3D scenes lie at arbitrary distances from each other, and their number and sizes also vary.
We propose an effective 3D relation module that builds a 3D object-object relation graph between 3D bounding box candidates for feature enhancement to achieve accurate object detection. Inspired by , we introduce a point attention pooling method to extract uniform appearance features for each 3D proposal, which are used together with the position features to define the nodes of the relation graphs. Since objects in cluttered scenes are randomly placed and densely connected (e.g., the seat of a chair may be under a table), one 3D proposal often contains parts from different objects. Our proposed point attention pooling method exploits the information within 3D proposals by modelling the semantic, spatial and direction relationships of the interior points simultaneously. This plays an important role in specifying the intra-object pull forces and the inter-object push forces. The relation graphs are inserted into the main framework and are learned in an unsupervised manner by minimizing the task-specific losses, such as the 3D bounding box regression loss, cross-entropy loss and direction feature loss.
To sum up, our contributions include: (1) A framework for 3D object detection that directly exploits the raw point cloud, is single stage and is end-to-end trainable. (2) An optimization method which jointly uses pseudo geometric centers and direction vectors for 3D bounding box candidate estimation. (3) A point attention pooling method to extract uniform appearance features for each 3D proposal using semantic features, pseudo geometric centers and direction vectors. (4) A relation graph that exploits the 3D object-object relationships to represent the appearance and position relationships between 3D objects; this enhances the appearance features of each 3D proposal and boosts the performance of 3D bounding box regression. We explore the effects of the supervised 3D relation graph and multi-graph patterns on 3D relationship reasoning. Experiments are performed on the benchmark SunRGB-D  and ScanNet  datasets and achieve state-of-the-art results. We also conduct a series of ablation studies to demonstrate the effectiveness of various modules of our proposed method.
II Related Work
2D Object Detection Methods:
2D object detection in images is a fundamental problem in computer vision and has been an active area of research for several decades. Numerous methods have been proposed covering different approaches to the problem of generic 2D object detection. Some of these methods can also provide inspiration for 3D object detection in point clouds. Region proposal driven detectors such as RCNN
enumerate object locations from region proposals in the first stage, then classify the proposals and refine them in the second stage. Hence, such methods are slow and require a huge amount of storage. This motivated a series of innovations in this area, leading to a number of improved detection methods such as Fast-RCNN, SPPNet , Faster RCNN , RFCN , Mask RCNN , Light Head RFCN  etc.
One-stage detection strategies were introduced which skip the region proposal generation step and directly predict class scores and 2D bounding box offsets from the input images with a single network. Several attempts were made to improve the performance of one-stage detectors, e.g., DetectorNet , OverFeat , YOLO  and SSD .
Recently, some interesting methods building on robust keypoint estimation networks have been proposed for 2D object detection [56, 7, 43, 19, 57]. These methods are the inspiration for our proposed 3D bounding box candidate generation method. Specifically, Zhou et al.  represent objects by the center pixel of their 2D bounding box. The object centers are obtained by selecting peaks in the heat map generated by feeding the input image to a fully convolutional network. The center keypoint/pixel based 2D object detection methods rely heavily on heat map and peak pixel estimation, whereas it is difficult to generate such heat maps for 3D point clouds. We address this problem by designing a strategy that uses pseudo geometric centers and direction vectors to represent 3D objects, and then regress the 3D bounding box candidates.
3D Object Detection Methods: The most common approach for 3D object detection in a point cloud is to project the point cloud to 2D images for 3D bounding box regression [1, 49, 53]. Point clouds are also sometimes represented by voxel grids for 3D object detection. Zhou et al.  divide a full LiDAR point cloud scene into equally spaced 3D voxels and propose a voxel feature encoding layer to learn features for each voxel. Yang et al.  encode each voxel as occupancy and predict oriented 2D bounding boxes in the bird's eye view of LiDAR data. PointPillars  organizes LiDAR point clouds in vertical columns and then detects 3D objects using a standard 2D convolutional detection framework.
To avoid voxels and the associated computational cost, Qi et al.  proposed a framework to directly process raw point clouds and then predict 3D bounding boxes based on points within the frustum proposals. However, their algorithm heavily relies on 2D object detection. Moreover, PointRCNN  generates 3D bounding box proposals via foreground point segmentation in the first stage, and then learns better local spatial features for box refinement. These methods are designed for 3D object detection in point cloud data obtained from LiDAR sensors. However, LiDAR data is very sparse and there are no cross-connections between different objects that are naturally separate in the 3D space.
VoteNet  detects 3D objects in cluttered scenes via a combination of deep point set networks and Hough voting. However, VoteNet is unstable when voting for the geometric center of a partially scanned 3D object. Yang et al.  extract a global feature vector through an existing backbone to regress the 3D bounding boxes, which ignores small objects and relies heavily on instance segmentation labels. Our proposed 3D bounding box candidate prediction branch is completely different from these methods as we associate the direction vectors and pseudo geometric centers for 3D proposal regression.
Networks for Direct Point Cloud Processing. Learning geometric features directly from point clouds becomes even more essential when color information is unavailable, e.g., in LiDAR data. Qi et al.
proposed PointNet that learns point level features directly from sparse and unordered 3D points. All 3D points are passed through a set of Multi-Layer Perceptrons (MLP) independently and then aggregated to form global features using max-pooling. PointNet achieves promising performance on point cloud classification and segmentation tasks. The basic PointNet framework has since been extended by many researchers [9, 31, 48, 45, 38, 23, 18]. Recently, Duan et al.  proposed a structural network architecture for point clouds that takes the contextual information into account. Similarly, Wang et al.  designed a graph convolution kernel that selectively focuses on the most related parts of point clouds and captures the structural features for semantic segmentation. Among these methods, PointNet++
is the most commonly used hierarchical framework and is often chosen as the base feature extraction unit for different point cloud related tasks. PointNet++ extracts global features from neighborhood points within a ball query radius, where each local point is processed separately by an MLP. In this work, we use PointNet++ as the backbone architecture for point-level feature learning.
III Proposed Approach
Figure 1 shows our framework comprising two parts: one part directly processes the raw points to generate 3D bounding box candidates, while the other part builds the 3D object-object relation graphs to enhance the appearance features of the proposals for more accurate 3D bounding box regression. Finally, 3D non-maximum suppression (NMS) is used to remove the duplicate candidates and obtain the final 3D bounding boxes.
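For concreteness, the following is a minimal sketch of the 3D NMS post-processing step, assuming axis-aligned boxes parametrized as (center, size) and per-candidate scores; oriented boxes would require a rotated-IoU computation, so this routine is illustrative rather than our exact implementation.

```python
import numpy as np

def box3d_iou_axis_aligned(a, b):
    """IoU of two axis-aligned 3D boxes given as (cx, cy, cz, dx, dy, dz)."""
    a_min, a_max = a[:3] - a[3:] / 2.0, a[:3] + a[3:] / 2.0
    b_min, b_max = b[:3] - b[3:] / 2.0, b[:3] + b[3:] / 2.0
    inter = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
    inter_vol = inter.prod()
    union = a[3:].prod() + b[3:].prod() - inter_vol
    return inter_vol / max(union, 1e-8)

def nms_3d(boxes, scores, iou_thresh=0.25):
    """Greedy NMS: keep the highest-scoring candidate, drop overlapping duplicates."""
    order = np.argsort(-scores)
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        ious = np.array([box3d_iou_axis_aligned(boxes[best], boxes[j]) for j in rest])
        order = rest[ious < iou_thresh]
    return keep
```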
Given an input point cloud, we first subsample it and learn deep point features using PointNet++. The output is a subset of the input points, each carrying a learned feature vector of dimension $C$. Each subsampled point passes through an MLP with fully connected layers, ReLU and batch normalization, and independently generates a pseudo center, a semantic feature and a direction vector. This process enables each sampled point on an object surface to have a direction vector pointing to the geometric center and produces a pseudo center point that is close to the geometric center of the object. To accomplish this task, we propose a direction loss and a cross-entropy loss over the semantic classes to supervise the network. The pseudo centers, semantic features and direction vectors are then processed to generate 3D bounding box candidates. Based on the direction and semantic features, we extract uniform appearance features from the interior points of each positive proposal using our proposed point attention pooling method. Graph convolution networks are then used to perform relational reasoning on graphs built from the appearance and position features. Multiple-graph and supervised-graph strategies are used to enhance the performance of the graph network. Next, the output of the graph network is used to enhance the appearance features of the proposals and regress accurate 3D bounding boxes. Finally, 3D NMS picks the highest quality 3D bounding boxes to output the detected objects. In the following sections, we give details of the individual modules of our method.
III-B 3D Object Proposal Generation
Our proposal generation differs from prior voting-based approaches in that we propose a new direction loss function to supervise the learning of the MLP network, obtain pseudo centers that are close to the geometric center of the object, and assign each sampled point a direction vector that points to the geometric center of the object.
III-B1 Direction Feature Learning
Given an unordered point cloud $\{p_i\}_{i=1}^{N}$ with $p_i \in \mathbb{R}^{d}$, where $N$ is the total number of points and $d$ is the feature dimension of each point, we use only the XYZ values of the point cloud, i.e., $d = 3$.
An entire 3D scene often contains millions of 3D points, which are densely sampled on objects that are close to the sensor and sparsely sampled on distant objects. Processing all of these points simultaneously is computationally expensive. Therefore, we subsample the scene to $M$ points ($M \ll N$) to represent the entire scene. Instead of randomly subsampling the point cloud, we leverage the recently proposed PointNet++  for point feature learning due to its efficiency and demonstrated success on tasks ranging from point cloud classification and semantic segmentation to point cloud generation [30, 48, 23]. The backbone feature learning network has several Set Abstraction (SA) and Feature Propagation (FP) layers with skip connections, which output a subset of the input points with 3D coordinates and an enriched $C$-dimensional feature vector. The backbone network extracts local point features and selects the most discriminative points within a spherical region. The output points are denoted by $\{s_i\}_{i=1}^{M}$.
Next, for the sampled points $\{s_i\}_{i=1}^{M}$, we learn direction vectors pointing to the ground truth geometric center and generate pseudo centers that are close to the ground truth geometric center. Inspired by the concept of center pixel estimation in 2D object detection , we regress a 3D bounding box using the predicted pseudo centers and direction vectors jointly. For each sampled point $s_i = [x_i; f_i]$, with $x_i \in \mathbb{R}^{3}$ and $f_i \in \mathbb{R}^{C}$, we train a shared MLP network with fully connected layers, ReLU and batch normalization. The network takes $x_i$ and $f_i$ as inputs and outputs Euclidean space coordinates and corresponding features such that the pseudo center generated from the point $s_i$ is denoted as $c_i = [y_i; g_i]$, with $y_i \in \mathbb{R}^{3}$ and $g_i \in \mathbb{R}^{C}$. The MLP network also outputs a normalized direction feature $d_i$ for each object surface point. We define $d_i$ to be the vector pointing towards the ground truth geometric center of each object. The direction feature can describe the inter-object relationship accurately without being affected by other objects. To learn the pseudo center and direction feature, we define the direction loss as follows:
where $c_i^{*}$ is the ground truth geometric center of the 3D bounding box of the object that the point lies on, $x_i$ is the point on the object surface, $\mathbb{1}(s_i)$ indicates whether or not a seed point is on an object surface, $M_{pos}$ is the total number of points on object surfaces, and $d_i^{*}$ is the ground truth normalized direction feature which points towards the geometric center, i.e., $d_i^{*} = (c_i^{*} - x_i)/\lVert c_i^{*} - x_i \rVert_2$.
Compared to regressing the pseudo centers directly from the point semantic features, optimizing the direction features and the pseudo centers jointly distributes the estimated pseudo centers around the geometric center more uniformly. Moreover, the proposed direction loss function generates more discriminative semantic features for points on the object surface in the MLP network and provides more accurate regional information for the subsequent proposal region feature extraction. For illustration, Figure 2 shows the direction vectors (green arrows) and pseudo centers (blue points) generated on the surface points of two adjacent chairs in one point cloud scene. We can see that the direction vectors belonging to the same chair are oriented towards its geometric center, so that different objects mutually repel each other while different regions belonging to the same object are attracted to each other. Moreover, the pseudo centers cluster at the geometric center of the object, providing a basis for the regression of the 3D bounding box candidates together with the direction vectors.
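To make the joint supervision of the pseudo centers and direction vectors concrete, the sketch below shows one plausible form of such a loss in PyTorch. The tensor shapes, the Euclidean distance used for the center term, the cosine-style direction term and the weight `lam` are assumptions for illustration, not the exact formulation of Equation (1).

```python
import torch
import torch.nn.functional as F

def direction_loss(points, pred_center, pred_dir, gt_center, on_object_mask, lam=1.0):
    """Illustrative joint supervision of pseudo centers and direction vectors.

    points:         (B, M, 3) sampled surface points
    pred_center:    (B, M, 3) predicted pseudo centers
    pred_dir:       (B, M, 3) predicted (normalized) direction vectors
    gt_center:      (B, M, 3) ground-truth geometric center of the object each point lies on
    on_object_mask: (B, M) 1 for points on an object surface, 0 otherwise
    lam:            relative weight of the direction term (hypothetical value)
    """
    mask = on_object_mask.float()
    n_pos = mask.sum().clamp(min=1.0)

    # Pseudo-center term: distance from the predicted pseudo center to the true center.
    center_err = torch.norm(pred_center - gt_center, dim=-1)           # (B, M)

    # Direction term: the ground-truth unit vector points from the surface point
    # towards the geometric center; penalize misalignment with the prediction.
    gt_dir = F.normalize(gt_center - points, dim=-1)                   # (B, M, 3)
    dir_err = 1.0 - (pred_dir * gt_dir).sum(dim=-1)                    # (B, M)

    # Average over the points that actually lie on object surfaces.
    return ((center_err + lam * dir_err) * mask).sum() / n_pos
```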
III-B2 Proposal Candidate Aggregation
For a 3D point cloud scene, the set of pseudo centers creates canonical "meeting points" for context aggregation from the different parts of each object. Similar to VoteNet , we sample and cluster these pseudo centers, then aggregate the semantic features together with the direction vectors of their corresponding surface points to predict 3D bounding box candidates for all objects and classify them with objectness scores and semantic scores. Each proposal is represented by a fixed-size vector containing an objectness score, bounding box parameters (center, heading and scale, parametrized as in ) and semantic classification scores. Refer to the loss function in Section III-D for more details on these parameters.
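The sampling-and-clustering step can be illustrated with the following minimal sketch, which groups pseudo centers with farthest point sampling and a ball query written in plain PyTorch; the number of proposals, the query radius and the max-pooling aggregation are placeholder choices rather than the exact configuration of our pipeline.

```python
import torch

def cluster_pseudo_centers(centers, feats, num_proposals=256, radius=0.3):
    """Group pseudo centers into proposal clusters (illustrative sketch).

    centers: (M, 3) pseudo centers, feats: (M, C) their features.
    num_proposals and radius are hypothetical values.
    """
    # Farthest point sampling to pick well-spread cluster seeds.
    seeds = [0]
    dist = torch.full((centers.shape[0],), float('inf'))
    for _ in range(num_proposals - 1):
        dist = torch.minimum(dist, torch.norm(centers - centers[seeds[-1]], dim=-1))
        seeds.append(int(dist.argmax()))
    seed_xyz = centers[seeds]                                  # (P, 3)

    # Ball query: max-pool the features of pseudo centers that fall within the
    # radius of each seed (a shared MLP would precede this step in practice).
    member = torch.cdist(seed_xyz, centers) < radius           # (P, M)
    pooled = []
    for p in range(seed_xyz.shape[0]):
        idx = member[p].nonzero(as_tuple=True)[0]
        idx = idx if idx.numel() > 0 else torch.tensor([seeds[p]])
        pooled.append(feats[idx].max(dim=0).values)
    return seed_xyz, torch.stack(pooled)                       # (P, 3), (P, C)
```

Each cluster is subsequently decoded by a small prediction head into an objectness score, box parameters and semantic scores.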
III-C 3D Object-Object Relation Graphs
There is always some relationship between the proposals in a 3D scene. We exploit these relationships to enrich the representation of the proposals. However, modelling the correlation between the proposals faces three primary challenges. First, the points in each region are sparse, vary in number and are non-uniformly distributed in space, yet we need to extract fixed-dimension appearance features to represent each region; these features serve as nodes of the relation graph and play an important role in the graph convolution operations. Second, apart from the appearance features, we also need to explore the 3D spatial interactions between different proposals in order to have sufficient representational capability when forming the graph nodes. Third, the relationship among proposals is not well defined. Hence, we learn the relationships using multiple graphs or supervised graphs, where a center of mass loss function is used to guide the relationship learning process.
III-C1 Point Attention Pooling
In a typical 2D object detection pipeline, Region Of Interest (ROI) pooling  or ROI Align  is used to extract uniform features for each region proposal. However, since the points within a 3D bounding box candidate are usually unordered and sparse, a straightforward extension of 2D ROI pooling to point clouds is not possible. We propose a new method named point attention pooling to extract compact features for each 3D bounding box candidate, as shown in Figure 3.
For each proposal, a naive way would be to apply PointNet++  to the interior points without considering their inner interactions and output a uniform feature. However, such an approach does not exploit the semantic information. Instead, our point attention pooling method exploits proposal information by modelling the semantic, spatial and direction relationships of the interior points simultaneously, which plays an important role in indicating intra-object pull forces and inter-object push forces. Our point attention pooling follows two steps. First, we randomly choose a fixed number of interior points for each proposal, with their semantic features, spatial coordinates and direction vectors as initial features. When the number of points in a 3D bounding box is smaller than this number, we repeat the interior points until we reach the predefined number of points. To make the model robust under geometric transformations, we move the 3D points of each proposal to their mean spatial location. The canonical locations are concatenated with the direction vectors to represent the spatial features of the points within the proposal. In the second step, we explore the semantic and spatial interactions between pairs of interior points. Both the semantic features and the spatial features of the interior points play critical roles in interaction learning. For example, repetitive object patterns are captured by semantic features while linkage relationships are captured by spatial features. Therefore, we define the point attention between one point and the others by jointly learning semantic and spatial interactions:
where the indices run over the interior points; one pairwise function exploits the semantic relationship between two points, a second exploits their spatial relationship, and a third fuses the two relationships, followed by an element-wise sum over all other points to produce the learned appearance feature of the proposal. Figure 3 shows an illustration of the proposed point attention pooling layer, which learns appearance features for the proposal region. The number of parameters of point attention pooling is determined by the feature dimensions after the semantic and spatial pairwise functions.
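A minimal sketch of this pooling scheme is given below. The embedding dimensions, the multiplicative form of the semantic interaction, the concatenation-based spatial interaction and the final max pooling over points are assumptions made for illustration, not the exact operators of Figure 3.

```python
import torch
import torch.nn as nn

class PointAttentionPooling(nn.Module):
    """Pool the interior points of one proposal into a fixed-size appearance feature."""

    def __init__(self, sem_dim=128, embed_dim=64, out_dim=128):
        super().__init__()
        self.sem_embed = nn.Linear(sem_dim, embed_dim)   # semantic pairwise branch
        self.spa_embed = nn.Linear(12, embed_dim)        # spatial pairwise branch (2 x 6-dim)
        self.fuse = nn.Sequential(nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
                                  nn.Linear(embed_dim, out_dim))

    def forward(self, sem, xyz, direction):
        """sem: (K, sem_dim) semantic features of K interior points,
        xyz: (K, 3) coordinates, direction: (K, 3) direction vectors."""
        K = sem.shape[0]
        # Canonical coordinates concatenated with direction vectors as spatial features.
        spa = torch.cat([xyz - xyz.mean(dim=0, keepdim=True), direction], dim=-1)    # (K, 6)

        # Pairwise semantic interaction between every pair of interior points.
        sem_emb = self.sem_embed(sem)                                                # (K, E)
        sem_rel = sem_emb[:, None, :] * sem_emb[None, :, :]                          # (K, K, E)

        # Pairwise spatial interaction from the concatenated spatial features.
        spa_pair = torch.cat([spa[:, None, :].expand(K, K, -1),
                              spa[None, :, :].expand(K, K, -1)], dim=-1)             # (K, K, 12)
        spa_rel = self.spa_embed(spa_pair)                                           # (K, K, E)

        # Fuse the two relations, sum over the second point index, then pool over
        # the first index to obtain one appearance feature for the whole proposal.
        rel = self.fuse(torch.cat([sem_rel, spa_rel], dim=-1))                       # (K, K, out)
        return rel.sum(dim=1).max(dim=0).values                                      # (out,)
```

Regardless of how many interior points a proposal contains, the output is a single fixed-length vector, which is what the relation graph expects as a node feature.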
III-C2 Appearance and Position Relationship
As described above, a series of 3D bounding box candidates is regressed from the pseudo centers and direction vectors. Since 3D scenes are composed of point clouds which are sparse, unordered and usually represent partial objects, the estimated pseudo centers carry considerable uncertainty and can introduce relatively large errors into the regressed 3D bounding box candidates. Once we obtain the uniform appearance features for each 3D bounding box candidate, we explore a method to enhance the features within each proposal region. Inspired by the recent success of relation reasoning and graph neural networks for videos , 2D object detection  and Natural Language Processing (NLP) tasks, we use the 3D object-object relation graph structure to explicitly model pairwise relationship information for enhancing the 3D proposal features. To obtain sufficient representational power that captures the underlying relationships between different proposals, both appearance features and position features are considered. Moreover, we note that the appearance and position relationships have different effects on the relation graphs. We investigate this empirically in the ablation study (Section IV-C).
Formally, a relation graph is defined as $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges. The nodes in our graph correspond to the 3D bounding box candidates, each described by the appearance features and position features of the corresponding proposal. We construct the graph to represent the pairwise relationships among the proposals, where the relationship value indicates the relative importance of one proposal to another.
Given an input set of proposals, the relation feature of the $i$-th proposal is computed as a weighted sum of the appearance features of the other proposals, linearly transformed by a learned weight matrix, where each weight is the relationship value between the $i$-th and $j$-th proposals. The relation value combines an appearance relationship and a position relationship between the two proposals, and we normalize each relation graph node using the softmax function so that the sum of all relationship values for one node is equal to 1. For the appearance relationship, we use a dot-product between the linearly transformed appearance features of the two proposals, where the square root of the transformed feature dimension works as a normalization factor.
For the position relationship, the features represent both the spatial location and the geometric structure of each 3D bounding box candidate. The spatial location is represented by the center point of each bounding box, while the geometric structure is represented by the parameters of each bounding box. Inspired by [50, 2], we investigate two methods to exploit position features when computing the position relationship between proposals: (a) 3D position mask. Similar to the image convolution operation, where pixels within a local range contribute more to the reference pixel, we assume that proposals from local entities are more important than proposals from distant entities. Based on the spatial distance between proposals, we define a threshold to filter out distant proposals. Therefore, we set the position relationship to zero for two proposals whose distance is above the threshold. Mathematically,
where the position relationship is masked according to the Euclidean distance between the center points of the two proposals and a distance threshold, which is a hyper-parameter. The position features are embedded into a high-dimensional representation, and the embedded features are then transformed into a scalar by a learned weight vector, followed by a ReLU activation. (b) 3D position encoding. Alternatively, we can use all the proposals to compute their position relationship with the reference proposal. Similar to Equation (7), the distance threshold is ignored and the rest is retained, as shown below.
Each relationship function in Equation (4) is parametrized by learned weight matrices. Recall that the dimension of the input appearance feature is fixed by the point attention pooling step; the number of parameters of one relationship module is therefore determined by these weight matrices and the chosen embedding dimensions.
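The sketch below illustrates one relation graph of this kind in the spirit of the 2D relation module of Hu et al. The feature dimensions, the use of the box-parameter difference as the pairwise geometric feature, and the distance threshold are assumptions for illustration rather than our exact design.

```python
import torch
import torch.nn as nn

class RelationGraphModule(nn.Module):
    """One 3D object-object relation graph over P proposals (illustrative sketch)."""

    def __init__(self, app_dim=128, pos_dim=7, key_dim=64, pos_embed_dim=64, dist_thresh=1.0):
        super().__init__()
        self.wq = nn.Linear(app_dim, key_dim)            # query transform of appearance features
        self.wk = nn.Linear(app_dim, key_dim)            # key transform of appearance features
        self.wv = nn.Linear(app_dim, app_dim)            # value transform for aggregation
        self.pos_embed = nn.Linear(pos_dim, pos_embed_dim)
        self.pos_weight = nn.Linear(pos_embed_dim, 1)
        self.dist_thresh = dist_thresh                   # hypothetical 3D position mask threshold

    def forward(self, app, pos, centers):
        """app: (P, app_dim) appearance features, pos: (P, pos_dim) box parameters,
        centers: (P, 3) box centers; returns relation-enhanced appearance features."""
        # Appearance relationship: scaled dot-product between embedded proposal features.
        d = float(self.wq.out_features)
        app_rel = self.wq(app) @ self.wk(app).t() / d ** 0.5                        # (P, P)

        # Position relationship: embed a pairwise geometric feature (here simply the
        # difference of box parameters, an assumption) and map it to a positive scalar.
        rel_pos = pos[:, None, :] - pos[None, :, :]                                 # (P, P, pos_dim)
        pos_rel = torch.relu(self.pos_weight(self.pos_embed(rel_pos))).squeeze(-1)  # (P, P)

        # 3D position mask: suppress pairs whose centers are farther apart than the threshold.
        pos_rel = pos_rel.masked_fill(torch.cdist(centers, centers) > self.dist_thresh, 0.0)

        # Combine both terms, normalize each node with softmax, aggregate transformed features.
        omega = torch.softmax(torch.log(pos_rel.clamp(min=1e-6)) + app_rel, dim=-1)  # (P, P)
        return app + omega @ self.wv(app)
```

Dropping the masking line turns the 3D position mask variant into the 3D position encoding variant, since all proposal pairs then contribute to the relation value.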
III-C3 Multiple Graphs vs Graph Supervision
Since the 3D object-object relationships among proposals are not well defined and a single relation graph typically focuses on one specific type of interaction between proposals, we extend the single relation graph to multiple graphs in order to capture more complex relationship information. That is, we build several graphs on the same proposal set, where the number of graphs is a hyper-parameter. Every graph is computed as in Equation (4), but with unshared weights. Building multiple relation graphs allows the model to jointly attend to different types of relationships between proposals. Finally, the multi-relation graph module aggregates the relationship features from all graphs and adds them to the input appearance features.
We also supervise each graph with pseudo ground truth graph weights to learn more accurate relationships. The unsupervised graph weights are learned by minimizing the task-specific total loss, which contains the 3D bounding box regression loss, cross-entropy loss, direction feature loss, etc. To supervise the learning of the relation weights without requiring relationship annotations in the raw point cloud, we construct ground truth labels in matrix form. Our approach is inspired by . We want the attention weights to focus on relationships between different objects. Hence, for each entry of the ground truth relationship label matrix, we assign 1 only when: (1) the two 3D proposals overlap the ground truth 3D bounding boxes of two different objects with sufficient IoU, and (2) the category labels of those objects are different.
where the center of mass of the relation weights appears in the loss. When minimizing this loss, we would like to have high relation weights at those entries where the ground truth label is 1, and low relation weights elsewhere.
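As an illustration of how such pseudo ground truth relation labels could be constructed, the sketch below assumes that each proposal has already been matched to its best-overlapping ground truth box; the IoU threshold is a placeholder value.

```python
import numpy as np

def relation_label_matrix(prop_gt_idx, prop_gt_iou, gt_labels, iou_thresh=0.25):
    """Pseudo ground-truth relation labels for graph supervision (illustrative sketch).

    prop_gt_idx: (P,) index of the ground-truth box each proposal best overlaps
    prop_gt_iou: (P,) the corresponding IoU values
    gt_labels:   (G,) category label of each ground-truth box
    """
    P = len(prop_gt_idx)
    labels = np.zeros((P, P), dtype=np.float32)
    valid = prop_gt_iou >= iou_thresh
    for i in range(P):
        for j in range(P):
            if not (valid[i] and valid[j]):
                continue
            gi, gj = prop_gt_idx[i], prop_gt_idx[j]
            # Assign 1 only when the two proposals cover different ground-truth objects
            # AND those objects belong to different categories.
            if gi != gj and gt_labels[gi] != gt_labels[gj]:
                labels[i, j] = 1.0
    return labels
```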
As shown in Figure 4, the appearance features are first extracted from the 3D bounding box candidates and then used together with the position features to build the relation graphs. Graph convolution is then used to perform relational reasoning. The outputs of all graphs are then fused with the appearance features to regress more accurate 3D bounding boxes. We use the multiple graphs and graph supervision methods to explore which one is more beneficial to establishing relationships between different proposals. Their performance is discussed in the ablation study (Section IV-C).
III-D Loss Function
Our complete network can be trained in an end-to-end manner with a multi-task loss comprising a direction loss, an objectness loss, a 3D bounding box estimation loss and a semantic classification loss. We weight the losses such that they are on similar scales; when the supervised graph model is used, the center of mass loss is included as an additional term.
The direction regression loss is defined in Equation (1) and discussed in detail in Section III-B. Note that the SUN RGB-D  dataset does not provide instance segmentation annotations. Therefore, we compute the ground truth object centers as the centers of the 3D bounding boxes and consider any point inside a ground truth bounding box as an object point. Similar to , we keep a set of up to three ground truth votes and, during vote regression for a point, consider the minimum distance between the predicted vote and any ground truth vote in the set. For ScanNet , we consider any point sampled from the instance mesh vertices as an object point and compute the ground truth object center as the 3D bounding box center.
The objectness loss is a cross-entropy loss for two classes (positive and negative proposals), while the semantic classification loss is a cross-entropy loss over the object classes. We follow [28, 29] in defining the box loss, which comprises center regression, heading estimation and size estimation sub-losses. Specifically, the box loss is the sum of the box center regression loss, the heading angle classification and regression losses, and the bounding box size classification and regression losses. The output of the last layer is split into channel groups: the first group is for objectness classification, the next is for pseudo center and direction vector regression, and the remaining groups cover the heading bins, the size templates and the semantic classes. We use the robust smooth-$L_1$ loss in all regressions of the box loss. Both the box and semantic losses are only computed on positive vote clusters and normalized by the number of positive clusters.
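As a compact illustration of how these terms could be combined, the snippet below uses placeholder weights; in our experiments the weights are chosen so that the individual terms have similar scales, as described above.

```python
def combine_losses(parts, use_graph_supervision=False):
    """Weighted sum of the multi-task losses (the weights below are placeholders).

    parts: dict with keys 'direction', 'objectness', 'box', 'semantic' and,
           for the supervised graph model, 'center_of_mass'.
    """
    weights = {'direction': 1.0, 'objectness': 0.5, 'box': 1.0, 'semantic': 0.1}
    total = sum(weights[k] * parts[k] for k in weights)
    if use_graph_supervision:
        total = total + 1.0 * parts['center_of_mass']   # placeholder weight
    return total
```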
III-E Comparison to 2D Visual Relationships
The classic 2D object detection approaches, e.g., RCNN , Fast-RCNN  and Faster RCNN , only use features within the proposals to refine the bounding boxes. The surrounding and long-range information, which is also important for 2D object detection, is not considered. Santoro et al.  introduced a relation network augmented method for visual question answering about relations between different objects and achieved performance superior to that of human annotators. Hu et al.  proposed an object relation module that learns the relationships between different proposals, capturing 2D appearance and location relations simultaneously, and evaluated the effectiveness of inserting the modelled relations into an RCNN based detection framework. Wu et al.  used actor relation graphs to learn the relation information between multiple persons for recognizing group activity, which achieved a significant gain in group activity recognition accuracy. Moreover, many works have shown that modelling relation information is useful for action recognition [47, 41, 27]. These image based relation models mostly rely on the extraction of region of interest (RoI) features, where regular pooling methods can be used, and their definition of the location relation generally uses the center point of each 2D bounding box since the 2D objects in an image are interlaced and occluded. Although these methods are instructive for our 3D relationship learning for object detection in point clouds, they cannot be used directly. Therefore, we design a completely new approach to establish an interaction model between 3D objects.
In 3D point cloud scenes, objects of various sizes are randomly placed in space, often occluding each other and with dense object-to-object connections. Qi et al.  proposed VoteNet, which detects 3D objects from raw point clouds without splitting the scene into overlapping cubes. VoteNet regresses the 3D bounding boxes for all 3D objects using voting, sampling and clustering. However, calculating the pseudo centers directly from the sparse and unordered points on an object surface is unstable and affects the regression of 3D bounding boxes from the pseudo centers. Yi et al.  introduced a generative shape proposal network (GSPN) for 3D instance segmentation which takes an analysis-by-synthesis strategy to generate 3D proposals for all instances, where shape proposal generation is just an intermediate step. Similar to GSPN , Yang et al.  segment the instances in a 3D point cloud scene by regressing 3D bounding boxes for all instances. Their 3D-BoNet extracts a global feature vector through an existing backbone to regress the 3D bounding boxes, which ignores small objects. Moreover, GSPN  and 3D-BoNet  rely heavily on point-level mask labels. Note that none of these methods consider the relationships between surrounding objects and semantic information in the global 3D space. Unlike the above frameworks, we propose a relation graph network to detect 3D objects in point cloud scenes. We regress the 3D bounding box candidates from the predicted pseudo centers and direction vectors jointly, where these two features take advantage of each other to further boost the accuracy of the 3D bounding box candidates. We also build a 3D object-object relation graph module using appearance features and position features to learn the interactions between different 3D proposals for 3D bounding box refinement. Furthermore, we explore multiple-graph and supervised-graph strategies to drive relation modules that learn stronger relationships.
IV Experiments and Discussion
We first introduce two widely used 3D object detection benchmarks and the implementation details of our method, and then present a series of ablation studies to analyse the efficacy of the proposed units in our model. We also compare the performance of our method with the state of the art. Finally, we show visualizations of our learned 3D object-object relation graph and present the 3D object detection results.
All experiments are performed on the publicly available SunRGB-D  and ScanNet  datasets. The SunRGB-D  dataset contains 10,335 RGB-D images with dense annotations in both 2D and 3D for all object classes. We split it into a train set of 7,000 scenes and a validation set of 3,335 scenes. For our purpose, we reconstruct point cloud scenes from the depth images using the camera calibration parameters, where each object is annotated by a 3D bounding box represented by center coordinates, orientation and dimensions.
The ScanNet  dataset contains 1,513 scans of about 707 unique real-world environments. A ground truth instance-level semantic label is assigned to each reconstructed 3D surface mesh. We split the data into a train set of 1,200 scans and a validation set containing the remaining 313 scans. We sample points from the vertices of the 3D surface meshes and compute the 3D bounding box of each instance following the method proposed by .
Following [54, 28], we augment the training data by randomly flipping each point cloud scene along the horizontal axes in camera coordinates, randomly rotating it around the upright axis by a uniformly sampled angle, and globally scaling it within a fixed range. We follow the standard protocols for performance evaluation.
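A minimal sketch of this augmentation is given below; which axes are flipped and rotated, as well as the rotation and scaling ranges, are placeholders, and the 3D box labels must be transformed in exactly the same way.

```python
import numpy as np

def augment_scene(points, max_rot=np.pi / 6, scale_range=(0.85, 1.15)):
    """Training-time augmentation of one point cloud scene (illustrative sketch).

    points: (N, 3) XYZ coordinates.
    """
    pts = points.copy()
    # Random flips along the two horizontal axes.
    if np.random.rand() < 0.5:
        pts[:, 0] = -pts[:, 0]
    if np.random.rand() < 0.5:
        pts[:, 1] = -pts[:, 1]
    # Random rotation around the upright axis.
    theta = np.random.uniform(-max_rot, max_rot)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = pts @ rot.T
    # Random global scaling of the whole scene.
    return pts * np.random.uniform(*scale_range)
```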
The SunRGB-D  and ScanNet  datasets are mainly indoor scenes where the objects are densely interlaced and randomly placed. A 3D bounding box may contain one object and partial areas from other objects in some cases. Limited by the RGB-D sensors, the SunRGB-D  has partial scans and the reconstructed point cloud scenes are noisy. These conditions make it challenging to detect 3D objects directly from point clouds. Compared to the SunRGB-D , ScanNet  contains more complete objects.
IV-B Implementation Details
Our 3D object detection architecture contains a 3D proposal generation module and a 3D relationship module, followed by 3D NMS post-processing. In the first stage, we randomly sample a fixed number of points from each reconstructed point cloud scene of SunRGB-D  and from each 3D scan of ScanNet . The 3D object proposal generation module is based on PointNet++  with four Set Abstraction (SA) layers for learning local features and two Feature Propagation (FP) layers for upsampling. Each SA layer has its own receptive radius (in meters) and number of subsampled points. The output of PointNet++ is a set of subsampled points, each with a feature vector whose last three channels are the 3D coordinates. The data is then fed to an MLP whose last output channels encode the 3D coordinates of the pseudo centers and the direction vectors of the sampled points. After sampling and clustering, a fixed number of 3D bounding box candidates is generated.
In the second stage, we extract a fixed-dimensional appearance feature for constructing the 3D object-object relation graphs. The distance threshold is set to a fixed fraction of the spatial extent of each 3D scene along its axes. The remaining hyper-parameters, such as the feature and embedding dimensions, the number of interior points per proposal, the number of graphs and the number of semantic classes, are set separately for SUN RGB-D and ScanNet.
We train the entire network end-to-end from scratch with the Adam optimizer and a batch size of 6; the learning rate is decreased by a fixed factor at two points during training. We train the network on two NVIDIA GTX TitanX GPUs in the PyTorch framework. Our network takes point clouds of entire scenes and generates proposals in one forward pass.
TABLE I: Effect of direction features. The evaluation metric is mean Average Precision (mAP) with a 3D bounding box IoU threshold of 0.25.
Base model + Direction features (ours): 57.9
SA layer: 58.6
Point attention pooling w/o direction features: 58.9
Point attention pooling (ours): 59.2
IV-C Ablation Studies
We conduct four groups of ablation experiments on the SunRGB-D  dataset. We select this dataset as it contains partial and noisy scans, thereby making our task more challenging.
IV-C1 Effect of Direction Features
To simplify the experiments and obtain more intuitive results, we build a concise base framework that only contains the 3D proposal generation module and 3D NMS post-processing. The total loss function of the base model is defined as the sum of the objectness, box and semantic losses together with a simplified direction loss that only contains the distance norm between the sampled points on an object surface and their corresponding object geometric center. In contrast, we use the proposed direction loss function, as defined in Equation (1), which also generates a direction vector for each object surface point while regressing a pseudo center for it. The settings of PointNet++ remain the same as in the original network.
As shown in Table I, using the proposed loss function results in improved performance, which means that the direction features improve pseudo center estimation and 3D bounding box regression.
IV-C2 Point Attention Pooling
To extract appearance features from each 3D bounding box candidate for relation graph construction, we propose a point attention pooling method that makes use of the interactions of the interior points. In this section, we explore the efficacy of point attention pooling compared to a set abstraction (SA) layer , feature averaging and feature max-pooling. The feature average and feature maximum methods are connected to the MLP and their outputs are one-dimensional features of the same size as those of our proposed method. We use these four methods to extract proposal features and then use them as appearance features to build the relation graph. The other settings of our proposed framework remain the same as the original network.
1 graph w/o supervision: 58.3
1 graph w/ supervision: 58.5
3 graphs w/o supervision: 59.2
w/o appearance features: 57.5
w/o position features: 58.4
w/ appearance features (w/ 3D position mask): 58.9
w/ appearance features (w/ 3D position encoding): 59.2
From Table II we can see that the point attention pooling method achieves the best performance. We observe that the feature average and feature maximum methods do not perform as well because they extract proposal region features by simply computing the mean and extreme values respectively. Since objects in the scenes of the SunRGB-D  dataset are randomly placed and densely connected, e.g., the seat of a chair may be under a table, a 3D bounding box candidate often contains parts from different objects. Therefore, it is necessary to fully gather the points of the same object and learn the semantic and geometric information associated with them when extracting proposal region features. From the last two rows of Table II, we can see that using direction features in point attention pooling does provide some improvement. Our interpretation is that the direction vectors make points of the same object attract each other, and points of different objects repel each other.
VoteNet  (Points only): 57.7 | 74.4 | 83.0 | 28.8 | 75.3 | 22.0 | 29.8 | 62.2 | 64.0 | 47.3 | 90.1
MRCNN [16, 14] (RGB+Points): 17.3 | 10.5
3D-SIS  (5 views+Points): 40.2 | 22.5
3D-SIS  (Points only): 25.4 | 14.6
VoteNet  (Points only): 46.8 | 24.7
IV-C3 3D Object-Object Relation Graph
We now perform ablation studies on the following three key parameters:
(a) Number of relation graphs: As shown in Figure 5, using more relation graphs (while keeping everything else constant) steadily improves the accuracy of 3D object detection up to three graphs, after which there is a gradual drop in accuracy. Therefore, we use three graphs in the remaining experiments unless otherwise mentioned. One might intuitively expect the 3D object detection accuracy to keep improving with an increasing number of graphs, but in practice this is not the case. The relationships between different objects are learned by building graphs on fixed-size proposal region features, which enhances the features of each region to regress more accurate 3D bounding boxes. The relationships between different objects in most scenes of the SunRGB-D  dataset are not particularly complicated, and although augmentation is performed, the training data is limited to only 7,000 scenes. When the number of graphs increases, the number of model parameters also increases, leading to over-fitting.
(b) Supervised relation graph: To examine the supervised graph strategy for 3D object detection, we choose a baseline model with a single unsupervised graph and an improved model with a single graph supervised by the center of mass loss. All other settings of the framework were kept fixed during this experiment. The results are given in Table III and indicate that, for a single graph, the supervised method is indeed better than the unsupervised case. In addition, we observe that the multiple-graphs model without supervision is better than the single graph model with supervision. The center of mass loss function allows the network to directly learn the relationships between different objects that are predefined by us. In the absence of supervision, the relationship coefficients between different objects are learned indirectly in conjunction with the task-specific loss function, and the detection results then depend more on the size of the training data and the design of the network structure. The detection result of the supervised graph model is worse than that of the multiple-graphs model because we only provide one fixed prior, which can reflect only one type of relationship between different objects. Although the multiple-graphs model is supervised only by the task-specific loss function, it has enough parameter space to explore the relationships between different objects and hence works the best. Therefore, in the remaining experiments, we use the multiple-graphs model without supervision unless stated otherwise.
(c) Usage of appearance and position features: We first study the effect of the appearance features, extracted by our proposed point attention pooling method, on modelling the 3D object-object relation graphs. We build a framework that does not use appearance features to build the relation graphs. The results are listed in the first and third rows of Table IV; it is obvious that explicitly modelling the relation graphs between different objects using appearance features improves performance. Next, we study the effect of position features on modelling the relation graphs, which are defined by the distance mask and the distance encoding. From the last three rows of Table IV, we observe that the position features yield an improvement in 3D object detection accuracy, and the distance mask performs better than the distance encoding. The appearance features extracted from each proposal region play an important role in the process of building the relation graphs, as they represent the 3D bounding box candidates discriminatively; whether the relation graphs can learn the relationships between different objects and enhance the features of each proposal region depends on the use of appearance features. Due to the complexity of the spatial distribution within the scenes, introducing position features also better establishes the spatial relationships between different objects. Thus, in all our experiments, we apply the appearance and position features together to build the relation graphs unless stated otherwise.
Using the parameter values in Equations (3) and (9), our proposed point attention pooling method has about 0.09 million parameters and the relation graph module has about 0.9 million parameters. Table VIII shows that the complexity of our proposed network is much lower than that of F-PointNet  and 3D-SIS , but slightly higher than that of VoteNet . The base model refers to the simplest pipeline that does not include the point attention pooling and the 3D object-object relation graph, and is similar to VoteNet . We can see that the increase in model complexity brought by our complete framework is relatively small compared to the entire detection architecture.
IV-E Comparison with State-of-the-Art Methods
We first evaluate our method on the SunRGB-D dataset over its ten common 3D object categories. Note that we do not use the color information of the point clouds in our model. We report the average precision (AP) with an IoU threshold of 0.25 as the evaluation metric. As shown in Table V, our method performs better than the state-of-the-art approaches. The results of the baseline methods are taken from the original papers for a fair comparison. In particular, Deep Sliding Shapes (DSS)  and COG  are both voxel based detectors which combine RGB and 3D coordinate information for detection and classification. 2D-driven  and F-PointNet  rely on 2D object detectors applied to the projected images with RGB and 3D coordinate information. VoteNet  only uses the raw point cloud as input with 3D coordinate information. Note that our method outperforms DSS , COG , 2D-driven  and F-PointNet  by a clear margin in mAP@0.25 even though they use dual modalities. Our method also outperforms VoteNet . Furthermore, our method provides the best results on several classes, even on objects with partially missing data (e.g., chair and nightstand), and achieves higher accuracy than VoteNet  on categories such as bathtub, bookshelf, desk and sofa, mainly because our 3D bounding box candidate regression strategy is more stable.
Table VI shows results on the ScanNet dataset with individual accuracies for the 18 categories, and Table VII shows the overall accuracy. In particular, 3D-SIS  uses a 3D CNN to detect objects and combines 3D coordinates with multiple views to improve performance. We choose two cases (5 views plus 3D coordinates, and 3D coordinates only) as the inputs of 3D-SIS for comparison. MRCNN [16, 14] directly projects the 2D proposals from Mask-RCNN  onto the 3D point cloud to estimate the 3D bounding boxes. GSPN  uses a Mask-RCNN based framework and a PointNet++  backbone to generate 3D object proposals and is supervised by the instance segmentation labels. As summarized in Table VI, our method performs better than 3D-SIS  and VoteNet  on many of the classes, and achieves higher accuracy on categories such as chair, table and desk, where the interactions between objects are complex. For categories with small geometric variations such as door and sink, our method does not achieve the best scores. Overall, our method outperforms all the previous state-of-the-art methods even though it uses only the 3D information.
IV-F Model Visualization and Qualitative Results
We show visualizations of a group of 3D bounding box candidates, the relation graph and the final output generated by our model in Figure 6. We show only one of the three graphs. In Figure 6 (b) and (c), we show only the four most accurate 3D bounding box candidates per object. For example, the candidates of the oven are denoted oven1, oven2, oven3 and oven4. Figure 6(c) is one of the three learned graphs corresponding to the candidates in Figure 6(b). In order to highlight objects with strong relationships in Figure 6(c), we only show the relationships between the candidates numbered 1 of different objects for the long-distance relationships, and the relationships among all four candidates of an object for the local relationships. The darker the color of a square in Figure 6(c), the stronger the relationship. We can see that there are usually strong interactions between the four candidates of one object, but the long-distance relationships between different objects only occasionally exist. For example, there is a weak relationship between the microwave and the cabinet, but the relationships between the table and other objects have not been learned. One explanation is that a table is not necessarily found in kitchen scenes; hence, its relationship with other objects is difficult to establish.
We show examples of our detection results on the SunRGB-D  and ScanNet  datasets in Figure 7. We select five challenging scenes which contain partially scanned objects, size changes, occlusions, contact connections, dense placement, and a wide variety of relationships that are difficult to establish. In the first scene of the SunRGB-D dataset, our method successfully detects most of the objects, although some objects, such as the curtain and chairs, deviate slightly from the ground truth. Our method misses the whiteboard because it does not have complex geometric information and requires color information to be recognized; the whiteboard also does not have a strong relationship with the surrounding objects. Moreover, the desk is successfully detected because its appearance features can be reinforced by the information of the surrounding chairs. In the second scene, when only a small part of a chair is scanned, our method cannot detect it because too few points do not provide enough information to regress the 3D bounding box. Our method does detect the computer, which is not in the ground truth. In the third scene, our approach correctly detects more chairs than annotated in the ground truth since the partially scanned objects use the information of the surrounding chairs to predict their 3D bounding boxes.
Detecting thin objects seems to be a limitation of our method. In the ScanNet  scenes, our method has large errors on very thin objects like windows, laptops and small boxes, and misses the wardrobe embedded in the wall. Most of these errors occur because we do not use color information.
We proposed a relation graph network for 3D object detection in point clouds. Our network jointly learns pseudo centers and direction vectors from the sampled points on object surfaces, which are used to regress 3D bounding box candidates. We introduced a point attention pooling method to adaptively extract uniform and accurate appearance features for each 3D proposal, which benefit from the direction and semantic interactions of the interior points. Equipped with the uniform appearance and position features, we built a 3D object-object relation graph to consider the relationships between all 3D proposals. Finally, we exploited the multiple-graph and supervised-graph strategies to improve the performance of the relation graph. Experiments on two challenging benchmark datasets show that our method quantitatively and qualitatively performs better than the existing state of the art in 3D object detection.
This work was supported in part by the National Natural Science Foundation of China under Grants 61573134 and 61973106, and in part by the Australian Research Council under Grant DP190102443. We thank Yifeng Zhang and Tingting Yang from Hunan University for helping with the baseline experiments setup.
References
(2017) Multi-view 3D object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1907–1915.
(2019) Graph-based global reasoning networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 433–442.
(2019) Attention-based dropout layer for weakly supervised object localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2219–2228.
(2019) FAN: focused attention networks. arXiv preprint arXiv:1905.11498.
(2017) ScanNet: richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5828–5839.
(2016) R-FCN: object detection via region-based fully convolutional networks. In Advances in Neural Information Processing Systems, pp. 379–387.
(2019) CenterNet: keypoint triplets for object detection. arXiv preprint arXiv:1904.08189.
(2019) Structural relational reasoning of point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 949–958.
(2017) Exploring spatial context for 3D semantic segmentation of point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pp. 716–724.
(2018) Benchmark data set and method for depth estimation from light field images. IEEE Transactions on Image Processing 27 (7), pp. 3586–3598.
(2018) 3D face reconstruction from light field images: a model-free approach. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 501–518.
(2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587.
(2015) Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448.
(2017) Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969.
(2015) Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (9), pp. 1904–1916.
(2019) 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4421–4430.
(2018) Relation networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3588–3597.
(2018) Recurrent slice networks for 3D segmentation of point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2626–2635.
(2019) FoveaBox: beyond anchor-based object detector. arXiv preprint arXiv:1904.03797.
(2017) 2D-driven 3D object detection in RGB-D images. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4622–4630.
(2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12697–12705.
(2019) Learning to learn relation for important people detection in still images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
(2018) PointCNN: convolution on X-transformed points. In Advances in Neural Information Processing Systems, pp. 820–830.
(2018) Light-head R-CNN: in defense of two-stage object detector.
(2018) Deep learning for generic object detection: a survey. arXiv preprint arXiv:1809.02165.
(2016) SSD: single shot multibox detector. In European Conference on Computer Vision, pp. 21–37.
(2018) Attend and interact: higher-order object interactions for video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6790–6800.
(2019) Deep Hough voting for 3D object detection in point clouds. In Proceedings of the IEEE International Conference on Computer Vision.
(2018) Frustum PointNets for 3D object detection from RGB-D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
(2017) PointNet: deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
(2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pp. 5099–5108.
(2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
(2016) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), pp. 1137–1149.
(2016) Three-dimensional object detection and layout prediction using clouds of oriented gradients. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1525–1533.
(2017) A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems, pp. 4967–4976.
(2014) OverFeat: integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations.
(2019) PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–779.
(2017) Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3693–3702.
(2015) SUN RGB-D: a RGB-D scene understanding benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 567–576.
(2016) Deep sliding shapes for amodal 3D object detection in RGB-D images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 808–816.
(2018) Actor-centric relation network. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 318–334.
(2013) Deep neural networks for object detection. In Advances in Neural Information Processing Systems, pp. 2553–2561.
(2019) FCOS: fully convolutional one-stage object detection. arXiv preprint arXiv:1904.01355.
(2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
(2018) Local spectral graph convolution for point set feature learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 52–66.
(2019) Adaptively connected neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
(2018) Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803.
(2019) Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics.
(2018) SqueezeSeg: convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1887–1893.
(2019) Learning actor relation graphs for group activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9964–9974.
(2018) PIXOR: real-time 3D object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7652–7660.
(2019) Learning object bounding boxes for 3D instance segmentation on point clouds. arXiv preprint arXiv:1906.01140.
(2018) Learning single-view 3D reconstruction with limited pose supervision. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 86–101.
(2019) STD: sparse-to-dense 3D object detector for point cloud. In Proceedings of the IEEE International Conference on Computer Vision.
(2019) GSPN: generative shape proposal network for 3D instance segmentation in point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3947–3956.
(2019) Objects as points. arXiv preprint arXiv:1904.07850.
(2019) Bottom-up object detection by grouping extreme and center points. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 850–859.
(2018) VoxelNet: end-to-end learning for point cloud based 3D object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4490–4499.