
GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation

The inherent ambiguity in ground-truth annotations of 3D bounding boxes, caused by occlusions, missing signals, or manual annotation errors, can confuse deep 3D object detectors during training and thus deteriorate detection accuracy. Existing methods, however, largely overlook such issues and treat the labels as deterministic. In this paper, we propose GLENet, a generative label uncertainty estimation framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables. The label uncertainty generated by GLENet is a plug-and-play component and can be conveniently integrated into existing deep 3D detectors to build probabilistic detectors and to supervise the learning of localization uncertainty. Besides, we propose an uncertainty-aware quality estimator architecture for probabilistic detectors to guide the training of the IoU branch with the predicted localization uncertainty. We incorporate the proposed methods into various popular base 3D detectors and observe that their performance is significantly boosted to the current state-of-the-art on the Waymo Open Dataset and the KITTI dataset.


1 Introduction

With the rise of autonomous driving application scenarios and the emergence of large-scale annotated datasets (e.g., KITTI [6] and Waymo [32]), 3D object detection has attracted much attention in both industry and academia.

In recent years, deep learning frameworks have been richly investigated for 3D object detection. However, inaccuracy and ambiguity inevitably exist in many object-level annotations, which can confuse the learning process of the detection model. For example, in the data collection phase, the captured point clouds are typically incomplete due to environmental occlusion and the intrinsic properties of LiDAR sensors. In the data labeling phase, ambiguity occurs when human annotators subjectively estimate object locations and shapes from 2D images or partial 3D points. Concretely, as illustrated in Fig. 1, an incomplete LiDAR observation can correspond to multiple potentially plausible labels, and similar point cloud objects can be annotated with significantly varying bounding boxes. This phenomenon motivates us to carefully consider and exploit label uncertainty throughout the whole dataset.

Currently, the dominant 3D object detectors are designed as deterministic learning frameworks, in which label ambiguity is totally ignored in the bounding box regression branch. To address this issue, another family of probabilistic detectors [9, 19, 4, 5] introduces uncertainty estimation by modeling the predictions as Gaussian distributions. However, they simply model all ground-truth bounding boxes as Dirac delta functions with zero uncertainty and ignore the noise in labels. Inheriting the probabilistic detection paradigm, a limited number of studies have focused on quantifying label uncertainty based on simple heuristics [18] or Bayes estimation [36]. However, [18] can be less reliable due to its insufficient modeling capacity, and the assumption of conditional probabilistic independence between point clouds involved in [36] is often untenable in practice. In addition, both [18] and [36] tend to predict the uncertainty of a bounding box as a whole rather than per dimension, ignoring the fact that the variance in each dimension is typically different.

In this paper, we aim to explicitly model the one-to-many relationship between a typical 3D object and its potentially plausible bounding boxes under a learning-based framework. Technically, we present GLENet, a novel deep generative network adapted from conditional variational auto-encoders (CVAE), which introduces a latent variable to capture the distribution over potentially plausible bounding boxes of point cloud objects. During testing, we sample latent variables multiple times to generate diverse bounding boxes, whose variance is taken as the label uncertainty to guide the learning of localization uncertainty estimation in the downstream detection task. Besides, motivated by the observation that in probabilistic detectors the predicted uncertainty is relevant to the localization quality, as illustrated in Fig. 3, we further propose the Uncertainty-aware Quality Estimator (UAQE), which facilitates the training of the IoU-branch with the predicted uncertainty information. To demonstrate the effectiveness and universality of our method, we integrate GLENet into several popular 3D object detection frameworks to build powerful probabilistic detectors. Experiments on the KITTI [6] and Waymo [32] datasets demonstrate that our method brings stable performance gains and achieves the current state-of-the-art.

Fig. 1: (a) Given an object with an incomplete LiDAR observation, there exist multiple potentially plausible ground-truth bounding boxes with varying sizes and shapes. (b) It is easy to introduce errors in the labeling process based only on partial point clouds and 2D images. For example, similar point clouds capturing only the rear part of a car might be annotated as ground-truth boxes of different lengths.

We summarize the main contributions of this paper as follows.

  • We introduce a general and unified deep learning-based paradigm to generate reliable label uncertainty, and further extend it as an auxiliary regression target to improve 3D object detection.

  • We present a deep generative model adapted from CVAE to capture the one-to-many relationship between incomplete point cloud objects and the potentially plausible ground-truth bounding boxes.

  • Inspired by the strong correlation between the localization quality and the predicted variance in probabilistic detectors, we propose UAQE to facilitate the training of the IoU-branch.

The remainder of the paper is organized as follows. Section 2 reviews related work on LiDAR-based detectors and existing label uncertainty estimation methods. Section 3 describes our architecture and the strategy for estimating label uncertainty. In Section 4, we conduct experiments on the KITTI dataset and the Waymo Open dataset to demonstrate the effectiveness of our method in enhancing existing 3D detectors, together with ablation studies analyzing the effect of different components. Finally, Sect. 5 concludes the paper.

2 Related Work

2.1 LiDAR Point Cloud-based 3D Object Detection

Existing 3D object detectors can be categorized into single-stage and two-stage frameworks. For single-stage detectors, Zhou et al. [51] proposed to convert raw point clouds to regular volumetric representations and adopted voxel-based feature encoding. Yan et al. [40] presented a more efficient sparse convolution. Lang et al. [12] converted point clouds to sparse fake images using pillars. Shi et al. [29] aggregated point information via a graph structure. He et al. [8] introduced point segmentation and center estimation as auxiliary tasks in the training phase to enhance model capacity. Zheng et al. [48] constructed an SSFA module for robust feature extraction and a multi-task head for confidence rectification, and proposed DI-NMS for post-processing. For two-stage detectors, Shi et al. [28] exploited a voxel-based network to learn the additional spatial relationship between intra-object parts under the supervision of 3D box annotations. Shi et al. [27] proposed to directly generate 3D proposals from raw point clouds in a bottom-up manner, using semantic segmentation to select foreground points from which detection boxes are regressed. The follow-up work [42] further proposed PointsPool to convert sparse proposal features to compact representations and used spherical anchors to generate accurate proposals. Shi et al. [26] utilized both point-based and voxel-based methods to fuse multi-scale voxel and point features. Deng et al. [3] proposed voxel RoI pooling to extract RoI features from coarse voxels.

Compared with 2D object detection, more serious boundary ambiguity problems arise in 3D object detection due to occlusion and signal loss. Studies such as SPG [37] try to use point cloud completion methods to restore the full shapes of objects and improve detection performance [39, 21]. However, it is non-trivial to generate complete and precise shapes from incomplete point clouds alone.

2.2 Probabilistic 3D Object Detector

Most existing state-of-the-art 2D [15, 33, 2] and 3D [28] object detectors produce a deterministic box with a confidence score for each detection. While the confidence score represents the existence and semantic confidence, it cannot reflect the localization uncertainty. By contrast, probabilistic object detectors [9, 19, 14, 34] estimate the probabilistic distribution of predicted bounding boxes rather than taking them as deterministic results. For example, [9] models the predicted boxes as Gaussian distributions, whose variance can indicate the localization uncertainty; with the KL loss, the regression branch is encouraged to output a larger variance, and thus incurs a smaller loss, for inaccurate localization estimates. However, most probabilistic detectors take the ground-truth bounding box as a deterministic Dirac delta distribution and ignore the ambiguity in labels. The localization variance is thus learned in an unsupervised manner, which may result in sub-optimal localization precision and erratic training.

2.3 Label Uncertainty Estimation

Label uncertainty estimation and uncertainty estimation in detectors are two different tasks. The former focuses on estimating the uncertainty of annotated labels with an independent framework, while the latter aims to predict the uncertainty of detection results with a dedicated branch inside the detector.

Only a limited number of previous works focus on quantifying the uncertainty statistics of ground-truth bounding boxes in the 3D object detection task. Meyer et al. [18] proposed to model label uncertainty by the IoU between the label bounding box and the corresponding convex hull of the aggregated LiDAR observations. Wang et al. [36] proposed a Bayes method to estimate label noise by quantifying the matching degree between the point cloud and the given bounding box with a Gaussian Mixture Model. However, its assumption of conditional probabilistic independence between point clouds is often untenable in practice. Besides, [18, 36] only produce the uncertainty of a box as a whole instead of each dimension.

Conditional variational autoencoders (CVAE) [31] are a powerful tool for controllable generative tasks and have been applied to a wide range of language [47, 45, 35, 13] and vision [38, 24, 22, 46] processing scenarios. Inspired by [47], which applies generative models to handle ambiguities in conversations, i.e., a question may have multiple suitable responses, we propose GLENet based on CVAE to capture the one-to-many relationship between incomplete point cloud objects and potentially plausible bounding box labels. Compared with previous methods, our method can estimate the label uncertainty in all seven box dimensions. Despite the existence of previous studies that incorporate VAEs for point cloud applications [20, 43], we are the first to employ CVAE in 3D object detection for label uncertainty modeling.

3 Proposed Method

Fig. 2: The overall workflow of GLENet. In the training phase, we learn the parameters $\mu$ and $\sigma$ (resp. $\mu'$ and $\sigma'$) of the latent variable through the prior network (resp. recognition network), after which a sample of the latent variable drawn from the recognition network and the corresponding geometrical embedding produced by the context encoder are jointly exploited to estimate the bounding box distribution. In the uncertainty estimation phase, we encode the distribution of the latent variable with the prior network and then draw multiple samples to generate diverse bounding boxes, from which we perform label uncertainty estimation.

As aforementioned, label ambiguity widely exists in 3D object detection scenarios and has adverse effects on the deep model learning process, which is not well addressed or even completely ignored by previous works. To this end, we propose GLENet, a generic and unified deep learning framework that generates label uncertainty by modeling the one-to-many relationship between point cloud objects and potentially plausible bounding box labels, and extend it as an auxiliary regression objective to enhance 3D object detection performance.

In what follows, we will explicitly formulate the label uncertainty estimation problem from the probabilistic distribution perspective, followed by the technical implementation of GLENet in Sec. 3.1. After that, we introduce a unified way of integrating the label uncertainty statistics predicted by GLENet into the existing 3D object detection frameworks to build more powerful probabilistic detectors in Sec. 3.2.

3.1 Estimating Label Uncertainty with GLENet

3.1.1 Problem Formulation

We denote by $C = \{p_i\}_{i=1}^{N}$ a set of observed LiDAR points belonging to an object, where $p_i \in \mathbb{R}^3$ is a 3D point represented by its spatial coordinates. Let $B$ be the annotated ground-truth bounding box of $C$, parameterized by the center location $(c_x, c_y, c_z)$, the size $(l, w, h)$ (length, width, and height), and the orientation $r$, i.e., $B = (c_x, c_y, c_z, l, w, h, r)$.

To quantify label uncertainty, we propose to model $p(B|C)$, i.e., the distribution of potentially plausible bounding boxes conditioned on the point cloud $C$, and take its variance as the uncertainty. Considering that directly modeling $p(B|C)$ can be intractable and result in inaccurate distribution estimation [31], we resort to the Bayes theorem and introduce an intermediate latent variable $z$ to reformulate the conditional distribution as $p_\theta(B|C) = \int_z p_\theta(B|z, C)\, p_\theta(z|C)\, dz$, in which $p_\theta(B|z, C)$ and $p_\theta(z|C)$ are deduced through neural networks parameterized by $\theta$. Meanwhile, inspired by the optimization process of CVAE, we regularize the latent variable $z$ by maximizing the variational lower bound of the conditional log-likelihood $\log p_\theta(B|C)$:

$\log p_\theta(B|C) \ge \mathbb{E}_{q_\phi(z|C,B)}\left[\log p_\theta(B|z,C)\right] - D_{KL}\left(q_\phi(z|C,B)\,\|\,p_\theta(z|C)\right),$  (1)

where $\mathbb{E}_{q_\phi(z|C,B)}[\cdot]$ denotes the expectation over the distribution $q_\phi(z|C,B)$, and $D_{KL}(\cdot\|\cdot)$ denotes the KL-divergence. The first (task) term enforces the prediction network $p_\theta(B|z,C)$ to learn bounding box knowledge. The second term regularizes the distribution of $z$ by minimizing the KL-divergence between $q_\phi(z|C,B)$ and $p_\theta(z|C)$, in which the auxiliary distribution $q_\phi(z|C,B)$ is introduced to approximate the true posterior.

The overall workflow of GLENet is illustrated in Fig. 2. We assume the prior distribution $p_\theta(z|C)$ and the auxiliary posterior distribution $q_\phi(z|C,B)$ to be multivariate Gaussians $\mathcal{N}(\mu, \sigma^2)$ and $\mathcal{N}(\mu', \sigma'^2)$, respectively. Here, $(\mu, \sigma)$ and $(\mu', \sigma')$ denote the vectorized parameters of the Gaussian distributions learned by the prior network and the recognition network. When modeling $p_\theta(B|z,C)$, we employ a context encoder to embed the input points into a geometric feature representation $f$, which is combined with a sample of $z$ and jointly fed into a prediction network to learn the bounding box distribution.

3.1.2 Prior Network and Recognition Network

For the prior network, we adopt PointNet [23] for point feature embedding and add additional MLP layers to map the input points to the distribution $\mathcal{N}(\mu, \sigma^2)$. For the recognition network, which models $q_\phi(z|C,B)$, we adopt the same learning architecture as the prior network to generate point cloud embeddings, which are concatenated with the ground-truth bounding box information and jointly fed into subsequent MLP layers to learn the distribution $\mathcal{N}(\mu', \sigma'^2)$. To facilitate the learning process, we encode the ground-truth bounding box information into offsets relative to a predefined anchor, and then perform normalization as:

$t_x = \dfrac{c_x - x_a}{d_a},\; t_y = \dfrac{c_y - y_a}{d_a},\; t_z = \dfrac{c_z - z_a}{h_a},\; t_l = \log\dfrac{l}{l_a},\; t_w = \log\dfrac{w}{w_a},\; t_h = \log\dfrac{h}{h_a},\; t_r = \sin(r),$  (2)

where $(x_a, y_a, z_a)$ and $(l_a, w_a, h_a)$ denote the center and size of the predefined anchor located at the center of the point cloud, and $d_a = \sqrt{l_a^2 + w_a^2}$ is the diagonal of the anchor box. We also take $\cos(r)$ as an additional input of the recognition network to handle the issue of angle periodicity.
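To make the encoding concrete, below is a minimal NumPy sketch of Eq. (2) as reconstructed above; the default anchor size (a typical KITTI car anchor) and the sine/cosine handling of the orientation are assumptions rather than settings confirmed by the text.

```python
import numpy as np

def encode_box(box, anchor_center=(0.0, 0.0, 0.0), anchor_size=(3.9, 1.6, 1.56)):
    """Encode a ground-truth box (cx, cy, cz, l, w, h, r) as normalized offsets
    relative to a predefined anchor placed at the center of the decentralized
    point cloud. The default anchor size is an assumption."""
    cx, cy, cz, l, w, h, r = box
    xa, ya, za = anchor_center
    la, wa, ha = anchor_size
    da = np.sqrt(la ** 2 + wa ** 2)                      # diagonal of the anchor box
    targets = np.array([
        (cx - xa) / da, (cy - ya) / da, (cz - za) / ha,  # normalized center offsets
        np.log(l / la), np.log(w / wa), np.log(h / ha),  # log size ratios
        np.sin(r),                                       # sine-encoded orientation
    ])
    return targets, np.cos(r)   # cos(r) additionally fed to the recognition network

# example usage
targets, cos_r = encode_box((0.1, -0.2, -0.8, 4.2, 1.7, 1.5, 0.3))
```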

3.1.3 Context Encoder

We design a context encoder to generate a deterministic geometric feature vector from the given points $C$. As empirically observed in various related domains [7], it can be difficult to make use of latent variables when the decoder is sufficiently expressive to generate a plausible output using the condition $C$ alone. Therefore, we deploy a simplified PointNet architecture as the backbone of the context encoder to avoid posterior collapse.

3.1.4 Prediction Network

Given a condition $C$ and its bounding box $B$, we assume there is a true posterior distribution $p(z|C,B)$, and train the prediction network to restore $B$ from the latent variable sampled from it and the context features. In order to approximate the true posterior with the recognition network $q_\phi(z|C,B)$, we feed samples drawn from $q_\phi(z|C,B)$ instead of $p_\theta(z|C)$ to the prediction network during training. In the inference phase, we instead feed the prediction network with samples drawn from $p_\theta(z|C)$ to prevent the generated bounding boxes from overfitting the given annotation.
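The following PyTorch sketch illustrates this train/inference switch for the latent variable; the module interfaces (prior_net, recog_net, context_encoder, prediction_net) are hypothetical placeholders, and the Gaussian reparameterization with log-variances is an assumption.

```python
import torch

def sample_latent(mu, log_var):
    # reparameterization trick: z = mu + sigma * eps
    return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

def glenet_forward(points, gt_box, prior_net, recog_net, context_encoder,
                   prediction_net, training=True):
    """Decode a bounding-box distribution from the latent variable and the
    geometric context. During training the latent variable is drawn from the
    recognition posterior q(z|C,B); at inference it is drawn from the prior
    p(z|C), so the ground-truth box is not needed."""
    mu_p, log_var_p = prior_net(points)              # parameters of p(z|C)
    feat = context_encoder(points)                   # geometric context feature f
    if training:
        mu_q, log_var_q = recog_net(points, gt_box)  # parameters of q(z|C,B)
        z = sample_latent(mu_q, log_var_q)
    else:
        mu_q = log_var_q = None
        z = sample_latent(mu_p, log_var_p)
    box_pred = prediction_net(torch.cat([z, feat], dim=-1))
    return box_pred, (mu_p, log_var_p), (mu_q, log_var_q)
```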

3.1.5 Objective Function

As formulated in Eq. (1), the whole optimization objective of GLENet consists of a task term and a regularization term.

The task term enforces GLENet to learn ground-truth bounding box information. Following [40, 3], we define the bounding box reconstruction loss as:

$\mathcal{L}_{rec} = \mathcal{L}_{huber}\big(\hat{t}, t\big) + \mathcal{L}_{dir},$  (3)

where $\mathcal{L}_{huber}$ denotes the Huber loss imposed on the encoded regression targets $t$ described in Eq. (2), with $\hat{t}$ being the predicted offsets, and $\mathcal{L}_{dir}$ denotes the binary cross-entropy loss used for direction classification.

For the regularization term, since $p_\theta(z|C)$ and $q_\phi(z|C,B)$ are re-parameterized as $\mathcal{N}(\mu, \sigma^2)$ and $\mathcal{N}(\mu', \sigma'^2)$ through the prior network and the recognition network, we can define the regularization loss as:

$\mathcal{L}_{reg} = D_{KL}\big(\mathcal{N}(\mu', \sigma'^2)\,\|\,\mathcal{N}(\mu, \sigma^2)\big) = \sum_i \left( \log\dfrac{\sigma_i}{\sigma'_i} + \dfrac{\sigma'^2_i + (\mu'_i - \mu_i)^2}{2\sigma_i^2} - \dfrac{1}{2} \right),$  (4)

Thus, the overall objective function can be formulated as $\mathcal{L} = \mathcal{L}_{rec} + \lambda\,\mathcal{L}_{reg}$, where the weighting hyperparameter $\lambda$ is set empirically and kept fixed in all experiments.
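A sketch of the overall objective under the reconstruction above, assuming a log-variance parameterization of both Gaussians and an illustrative default weight lam (the value used in the paper is not stated here):

```python
import torch
import torch.nn.functional as F

def glenet_loss(pred_offsets, pred_dir_logit, target_offsets, target_dir,
                mu_q, log_var_q, mu_p, log_var_p, lam=1.0):
    """Total GLENet objective: Huber regression + direction BCE (Eq. 3)
    plus the KL between the recognition and prior Gaussians (Eq. 4)."""
    l_huber = F.smooth_l1_loss(pred_offsets, target_offsets)
    l_dir = F.binary_cross_entropy_with_logits(pred_dir_logit, target_dir)
    # closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians
    kl = 0.5 * (log_var_p - log_var_q
                + (log_var_q.exp() + (mu_q - mu_p) ** 2) / log_var_p.exp()
                - 1.0)
    return l_huber + l_dir + lam * kl.sum(dim=-1).mean()
```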

3.1.6 Label Uncertainty Estimation and Evaluation Metric of GLENet

Inspired by previous works [9], we assume the ground-truth bounding box follows a Gaussian distribution $\mathcal{N}(\mu_t, \sigma_t^2)$, whose expectation $\mu_t$ is exactly the annotated value and whose variance $\sigma_t^2$ indicates the uncertainty. This uncertainty can be approximated by the spread of the distribution of potential bounding boxes $p_\theta(B|C)$. Although the intermediate variable $z$ is explicitly modeled by a learnable Gaussian distribution, it is still intractable to directly deduce the variance of $p_\theta(B|C)$ from the prediction network. We therefore adopt a Monte Carlo method: we sample $z$ multiple times and take the variance of the resulting predictions as the label uncertainty in each of the seven box dimensions, i.e., $(c_x, c_y, c_z, l, w, h, r)$.
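A minimal sketch of this Monte Carlo procedure, reusing the hypothetical module interfaces from Sec. 3.1.4 and assuming a single object for clarity:

```python
import torch

@torch.no_grad()
def estimate_label_uncertainty(points, prior_net, context_encoder,
                               prediction_net, num_samples=30):
    """Monte Carlo estimate of the 7-dim label uncertainty: sample the latent
    variable from the prior several times and take the variance of the
    decoded boxes (module interfaces are assumptions)."""
    mu, log_var = prior_net(points)
    feat = context_encoder(points)
    boxes = []
    for _ in range(num_samples):
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        boxes.append(prediction_net(torch.cat([z, feat], dim=-1)))
    boxes = torch.stack(boxes, dim=0)          # (K, 7)
    sigma_t_sq = boxes.var(dim=0)              # per-dimension label uncertainty
    return sigma_t_sq
```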

Since the true distribution of the ground-truth bounding box is unavailable, we evaluate GLENet in a no-reference manner. To this end, we propose to compute the negative log-likelihood (NLL) between the estimated ground-truth distribution $\mathcal{N}(\mu_t, \sigma_t^2)$ and the prediction distribution:

$\mathrm{NLL} = -\dfrac{1}{K}\sum_{k=1}^{K} \log \mathcal{N}\big(\hat{t}_k;\, t,\, \sigma_t^2\big),$  (5)

where $K$ denotes the number of inference passes, and $t$ and $\hat{t}_k$ represent the regression targets and the predicted offsets, respectively. We estimate the underlying integral by randomly sampling multiple prediction results, i.e., the Monte Carlo method. The NLL metric rewards the network for predicting diverse plausible boxes with high variance for incomplete point clouds, and precise boxes with low variance for high-quality point clouds.

3.2 Building Probabilistic 3D Detectors with Label Uncertainty

Fig. 3: Motivation for utilizing the learned uncertainty of bounding box distributions to facilitate the training of the IoU estimation branch. (a) shows the relationship between the localization precision (i.e., the IoU between the predicted and ground-truth bounding boxes) and the variance predicted by a probabilistic detector. Here, we reduce the dimension of the variance with PCA to facilitate visualization. (b) shows two specific examples: the prediction with high uncertainty corresponds to low localization quality, whereas for the dense sample the prediction has high localization quality and low estimated uncertainty.

Most existing probabilistic detectors model the prediction and the ground-truth as a Gaussian distribution and a Dirac delta function, respectively. The probabilistic regression branch is trained with the KL loss:

$\mathcal{L}_{reg} = D_{KL}\big(\delta(x - \mu_t)\,\|\,\mathcal{N}(\mu_e, \sigma_e^2)\big) \propto \dfrac{(\mu_t - \mu_e)^2}{2\sigma_e^2} + \dfrac{1}{2}\log\sigma_e^2,$  (6)

where $\mu_t$ denotes the regression target of the detector, $\mu_e$ the predicted offset, and $\sigma_e^2$ the predicted localization variance. Intuitively, the regression branch should output a larger variance, and thus incur a smaller loss, for inaccurate localization estimates. However, due to label noise, the ground-truth distribution cannot be sufficiently described by a Dirac delta function with zero uncertainty. In contrast, GLENet is designed to boost existing 3D object detectors with probabilistic bounding box regression by learning label uncertainty, which serves as auxiliary supervision to facilitate learning the variance of the predicted bounding boxes in probabilistic 3D detectors.

3.2.1 Incorporating Label Uncertainty into KL-Loss

We model the ground-truth bounding box as a Gaussian distribution $\mathcal{N}(\mu_t, \sigma_t^2)$, whose variance $\sigma_t^2$ (the label noise) is approximated by GLENet. Then, we incorporate the generated label uncertainty into the KL loss between the distributions of the prediction and the ground-truth in the detection head:

$\mathcal{L}_{reg} = D_{KL}\big(\mathcal{N}(\mu_t, \sigma_t^2)\,\|\,\mathcal{N}(\mu_e, \sigma_e^2)\big) = \log\dfrac{\sigma_e}{\sigma_t} + \dfrac{\sigma_t^2 + (\mu_t - \mu_e)^2}{2\sigma_e^2} - \dfrac{1}{2},$  (7)

where $\mu_t$ denotes the regression target of the detector, $\mu_e$ the predicted offset, $\sigma_e^2$ the predicted variance, and $\sigma_t^2$ the label uncertainty estimated by GLENet. Intuitively, for samples with high label uncertainty, the model is encouraged to predict a larger variance under the supervision of $\sigma_t^2$.
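A sketch of Eq. (7) as a per-dimension regression loss, assuming the detection head predicts the log-variance of its Gaussian and GLENet supplies $\sigma_t^2$; when $\sigma_t^2$ approaches zero the loss reduces, up to a constant, to the Dirac-delta case of Eq. (6).

```python
import torch

def kl_regression_loss(mu_e, log_var_e, mu_t, sigma_t_sq, eps=1e-6):
    """KL divergence between the GLENet-estimated ground-truth Gaussian
    N(mu_t, sigma_t^2) and the predicted Gaussian N(mu_e, sigma_e^2),
    used as the per-dimension regression loss."""
    var_e = log_var_e.exp()
    sigma_t_sq = sigma_t_sq.clamp(min=eps)
    kl = (0.5 * (log_var_e - sigma_t_sq.log())
          + (sigma_t_sq + (mu_t - mu_e) ** 2) / (2.0 * var_e)
          - 0.5)
    return kl.mean()
```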

3.2.2 Uncertainty-aware Quality Estimator

Motivated by the strong correlation between the uncertainty and localization quality for each bounding box (see Fig. 3), we propose Uncertainty-aware Quality Estimator (UAQE) to facilitate the training of the IoU-branch and improve the IoU estimation accuracy.

Given the predicted uncertainty as input, we build a lightweight sub-module that generates a coefficient, which is multiplied by the original output of the IoU-branch to obtain the final estimation. The UAQE consists of two fully-connected (FC) layers with a Sigmoid activation at the output end.

Fig. 4: Illustration of the proposed Uncertainty-aware Quality Estimator module in the detection head using the learned localization variance to assist the training of localization quality (IoU) estimation branch.
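A sketch of UAQE, assuming the predicted per-dimension log-variance is the input and an assumed hidden width of 256 (the hidden size is not specified above):

```python
import torch
import torch.nn as nn

class UAQE(nn.Module):
    """Uncertainty-aware Quality Estimator sketch: two FC layers with a
    Sigmoid output produce a coefficient from the predicted variance,
    which rescales the raw IoU-branch output."""
    def __init__(self, box_dim=7, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(box_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, iou_pred, log_var):
        # iou_pred: (N, 1) raw IoU-branch output; log_var: (N, 7) predicted variance
        return iou_pred * self.mlp(log_var)
```

The learned coefficient modulates the raw IoU prediction according to the estimated localization uncertainty.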

3.2.3 3D Variance Voting

In probabilistic object detectors, the localization variance learned via the KL loss is interpretable: it reflects the uncertainty of the predicted bounding boxes. Following [9], we propose 3D variance voting, which combines neighboring boxes to obtain a more precise box representative. Specifically, during the merging process, neighboring boxes that are closer to the selected box and have lower variance are assigned higher weights. One detail is that, for a detected box $b$ with the maximum score, neighboring boxes with a large angle difference from $b$ do not participate in the ensembling of the angle.

Data: $B$ is an $N \times 7$ matrix of predicted bounding boxes, each parameterized as $(x, y, z, l, w, h, r)$. $C$ is the corresponding $N \times 7$ matrix of predicted variances. $S$ is the set of $N$ corresponding confidence values. $\sigma_t$ is a tunable temperature hyperparameter.
Result: The final voting results $D$ of the selected candidate boxes.
1  $D \leftarrow \{\}$;
2  while $B \neq \emptyset$ do
3      $i \leftarrow \arg\max S$;  $b \leftarrow B_i$;
4      $L \leftarrow \{j \mid \mathrm{IoU}(B_j, b) > 0\}$;
5      foreach $j \in L$ do
6          $p_j \leftarrow \exp\!\big(-(1 - \mathrm{IoU}(B_j, b))^2 / \sigma_t\big)$;
7          $w_j \leftarrow p_j / C_j$ (element-wise, so lower variance yields a higher weight);
8          if the angle difference between $B_j$ and $b$ is large then set the angle entry of $w_j$ to 0;
9      end foreach
10     $b' \leftarrow \sum_{j \in L} w_j \odot B_j \,/\, \sum_{j \in L} w_j$ (dimension-wise weighted average);
11     $D \leftarrow D \cup \{b'\}$;  remove $b$ and the boxes it suppresses from $B$, $C$ and $S$;
12 end while
Algorithm 1: 3D variance voting
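For reference, a PyTorch sketch of the voting step above; iou_fn (a pairwise 3D IoU routine), the temperature sigma_t, and the yaw-difference threshold are assumptions, and the NMS-style selection loop is omitted for brevity (kept_boxes are assumed to be the boxes surviving NMS).

```python
import torch

def variance_voting_3d(kept_boxes, all_boxes, all_vars, iou_fn,
                       sigma_t=0.05, angle_thresh=0.3):
    """Refine each box kept by NMS with a weighted average of overlapping
    candidates: higher IoU and lower predicted variance give larger weights,
    and candidates with a large yaw difference do not vote for the angle."""
    refined = []
    for b in kept_boxes:                                      # b: (7,)
        ious = iou_fn(b.unsqueeze(0), all_boxes).squeeze(0)   # (N,)
        mask = ious > 0
        w = torch.exp(-(1.0 - ious[mask]) ** 2 / sigma_t).unsqueeze(-1)
        w = w / all_vars[mask]                                # (M, 7), per-dimension weights
        yaw_diff = (all_boxes[mask][:, 6] - b[6]).abs()
        w[:, 6] = torch.where(yaw_diff > angle_thresh,
                              torch.zeros_like(w[:, 6]), w[:, 6])
        new_b = (w * all_boxes[mask]).sum(dim=0) / w.sum(dim=0).clamp_min(1e-6)
        refined.append(new_b)
    return torch.stack(refined, dim=0)
```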

4 Experiments

To reveal the effectiveness and universality of our method, we integrated GLENet into several popular types of 3D object detection frameworks to form probabilistic detectors, which were evaluated on two commonly used benchmark datasets, i.e., the Waymo Open dataset [32] and the KITTI dataset [6]. Specifically, we start by introducing specific experiment settings and implementation details in Sec. 4.1. After that, we report detection performance of the resulting probabilistic detectors and make comparisons with previous state-of-the-art approaches in Sec. 4.2 and 4.3. In the end, we conduct a series of ablation studies to verify the necessity of different key components and configurations in Sec. 4.4.

Fig. 5: Illustration of the occlusion data augmentation. (a) Point cloud of the original object. (b) Sample a dense object from the ground-truth database and place it between the LiDAR sensor and the original object. (c) Project the original and sampled objects onto the range image and compute the convex hull of the sampled object. The convex hull is further jittered to increase the diversity of occluded samples. The points of the original object falling inside the occluded area are removed. (d) Final augmented object.

4.1 Experiment Settings

4.1.1 Benchmark Datasets

KITTI dataset. The KITTI dataset contains 7481 training samples with annotations in the camera field of view and 7518 testing samples. According to the occlusion level, visibility, and bounding box size, the samples are further divided into three difficulty levels: easy, moderate, and hard. Following common practice, when performing experiments on the val set, we further split all training samples into a subset with 3712 samples for training and the remaining 3769 samples for validation. We report the performance on both the val set and the online test leaderboard for comparison, and we use all training data for the test server submission.

Waymo Open Dataset. The Waymo Open Dataset (WOD) is a large-scale autonomous driving dataset with more diverse scenes and object annotations covering the full 360° field of view. It contains 798 sequences (158361 LiDAR frames) for training and 202 sequences (40077 LiDAR frames) for validation. These frames are further divided into two difficulty levels: LEVEL_1 for boxes with more than five points and LEVEL_2 for boxes with at least one point. We report performance on both LEVEL_1 and LEVEL_2 difficulty objects using the recommended metrics, mean Average Precision (mAP) and mean Average Precision weighted by heading accuracy (mAPH).

4.1.2 Implementation Details

We trained GLENet on all annotated objects in the training set. As the initial input of GLENet, each point cloud object was uniformly pre-processed into 512 points via random subsampling or upsampling. Then we decentralized the point cloud by subtracting the coordinates of the center point to eliminate the local impact of translation.
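A small sketch of this preprocessing step; whether the subtracted center is the point centroid or the annotated box center is an assumption here (the centroid is used below).

```python
import numpy as np

def preprocess_object(points, num_points=512):
    """Resample an object's LiDAR points to a fixed size by random
    subsampling/upsampling and decentralize them by subtracting the centroid."""
    n = points.shape[0]
    idx = np.random.choice(n, num_points, replace=(n < num_points))
    sampled = points[idx]
    return sampled - sampled.mean(axis=0, keepdims=True)
```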

Architecturally, we realized the prior network and recognition network with an identical PointNet structure consisting of three fully-connected layers with output dimensions (64, 128, 512), followed by another fully-connected layer to generate an 8-dim latent variable. To avoid posterior collapse, we particularly chose a lightweight PointNet structure with channel dimensions (8, 8, 8) in the context encoder. The prediction network concatenates the generated latent variable and context features and feeds them into subsequent fully-connected layers with channels (64, 64) before predicting offsets and directions.
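Putting the stated layer sizes together, a hedged PyTorch sketch of the network components (the exact wiring, the activation functions, and the extra angle input to the recognition head are assumptions; the forward pass follows the sketch in Sec. 3.1.4):

```python
import torch
import torch.nn as nn

def mlp(dims, out_dim=None):
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU(inplace=True)]
    if out_dim is not None:
        layers.append(nn.Linear(dims[-1], out_dim))
    return nn.Sequential(*layers)

class PointNetEncoder(nn.Module):
    """Simplified PointNet: shared per-point MLP followed by max pooling."""
    def __init__(self, dims, in_dim=3):
        super().__init__()
        self.net = mlp([in_dim] + list(dims))

    def forward(self, pts):                      # pts: (B, N, in_dim)
        return self.net(pts).max(dim=1).values   # (B, dims[-1])

class GLENetSketch(nn.Module):
    """Prior/recognition PointNets with channels (64, 128, 512) and an 8-dim
    latent, a lightweight (8, 8, 8) context encoder, and a (64, 64) prediction
    head outputting box offsets plus a direction logit."""
    def __init__(self, latent_dim=8, box_dim=7):
        super().__init__()
        self.prior_pts = PointNetEncoder((64, 128, 512))
        self.prior_head = nn.Linear(512, 2 * latent_dim)                 # mu, log-variance
        self.recog_pts = PointNetEncoder((64, 128, 512))
        self.recog_head = nn.Linear(512 + box_dim + 1, 2 * latent_dim)   # + extra angle input (assumed)
        self.context = PointNetEncoder((8, 8, 8))
        self.predict = mlp([latent_dim + 8, 64, 64], out_dim=box_dim + 1)
```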

4.1.3 Training and Inference Strategies

We adopted Adam [11] ($\beta_1$ = 0.9, $\beta_2$ = 0.99) for the optimization of GLENet, which was trained for 400 epochs in total on KITTI and 40 epochs on Waymo with a batch size of 64 on 2 GPUs. We initialized the learning rate as 0.003 and updated it with the one-cycle policy [30].

In the training process, we applied common data augmentation strategies including random flipping, scaling, and rotation, in which the scaling factor and the rotation angle were uniformly drawn from [0.95, 1.05] and a predefined angle range, respectively. It is important to expose the model to multiple plausible ground-truth boxes during training, especially for incomplete point clouds, so we further propose an occlusion-driven augmentation approach, as illustrated in Fig. 5, after which a complete point cloud may look similar to another incomplete point cloud while their ground-truth boxes are completely different. To overcome posterior collapse, we also adopted KL annealing [1] to gradually increase the weight of the KL loss from 0 to 1. We followed k-fold cross-sampling to divide all training objects into 10 mutually exclusive subsets. To avoid overfitting, each time we trained GLENet on 9 subsets and then made predictions on the remaining subset, thereby generating label uncertainty estimates for the whole training set. During inference, we sampled the latent variable from the predicted prior distribution 30 times to form multiple predictions, whose variance was used as the label uncertainty.
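Two small sketches of these training-time strategies: a linear KL-annealing weight and the k-fold cross-sampling loop used to produce label uncertainty for every training object. Here train_fn and infer_variance_fn are hypothetical helpers standing in for GLENet training and the Monte Carlo variance estimation of Sec. 3.1.6, and objects are assumed to be dicts with an "id" key.

```python
def kl_weight(step, annealing_steps):
    """Linear KL annealing: ramp the KL-loss weight from 0 to 1
    (the exact schedule shape and duration are assumptions)."""
    return min(1.0, step / float(annealing_steps))

def generate_label_uncertainty(all_objects, train_fn, infer_variance_fn, k=10):
    """k-fold cross-sampling: train GLENet on k-1 folds and estimate the
    label uncertainty of the held-out fold, so every annotated object
    receives an estimate from a model it was not trained on."""
    folds = [all_objects[i::k] for i in range(k)]
    uncertainties = {}
    for i in range(k):
        train_set = [obj for j, fold in enumerate(folds) if j != i for obj in fold]
        model = train_fn(train_set)                          # train on 9 subsets
        for obj in folds[i]:                                 # predict on the remaining one
            uncertainties[obj["id"]] = infer_variance_fn(model, obj, num_samples=30)
    return uncertainties
```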

Method | Reference | Modality | 3D AP Easy | 3D AP Mod. | 3D AP Hard | mAP
EPNet[10] ECCV2020 RGB+LiDAR 89.81 79.28 74.59 81.23
3D-CVF [44] ECCV2020 RGB+LiDAR 89.2 80.05 73.11 80.79
STD [42] ICCV 2019 LiDAR 87.95 79.71 75.09 80.92
Part-A2 [28] TPAMI 2020 LiDAR 87.81 78.49 73.51 79.94
3DSSD [41] CVPR 2020 LiDAR 88.36 79.57 74.55 80.83
SA-SSD [8] CVPR 2020 LiDAR 88.8 79.52 72.3 80.21
PV-RCNN [26] CVPR 2020 LiDAR 90.25 81.43 76.82 82.83
SE-SSD [49] CVPR 2021 LiDAR 91.49 82.54 77.15 83.73
VoTR [17] ICCV 2021 LiDAR 89.9 82.09 79.14 83.71
Pyramid-PV [16] ICCV 2021 LiDAR 88.39 82.08 77.49 82.65
CT3D [25] ICCV 2021 LiDAR 87.83 81.77 77.16 82.25
GLENet-VR (Ours) - LiDAR 91.67 83.23 78.43 84.44
TABLE I:

Comparison with the state-of-the-art methods on the KITTI test set for vehicle detection, under the evaluation metric of 3D Average Precision (AP) with 40 sampling recall points. The best and second best results are highlighted in bold and underlined, respectively.

4.1.4 Base Detectors

We integrated GLENet into three popular deep 3D object detection frameworks, i.e., SECOND [40], CIA-SSD [48], and Voxel R-CNN [3], to construct probabilistic detectors, which are dubbed GLENet-S, GLENet-C, and GLENet-VR, respectively. Specifically, we introduced an extra fully-connected layer on top of the detection head to estimate the standard deviations along with the box locations. Meanwhile, we applied the proposed UAQE to GLENet-VR to facilitate the training of the IoU-branch. Note that we kept all the other network configurations in these base detectors unchanged for fair comparisons.

4.2 Evaluation on the KITTI Dataset

As shown in Table I, we compare GLENet-VR with the state-of-the-art detectors on the KITTI test set. We report the AP for each difficulty level and the mAP averaged over the APs of easy, moderate, and hard objects. As of March 29th, 2022, our method surpasses all published single-modal detection methods by a large margin.

Table II lists the validation results of different detection frameworks on the KITTI dataset, from which we can observe that GLENet-S, GLENet-C, and GLENet-VR consistently outperform their corresponding baseline methods, i.e., SECOND, CIA-SSD, and Voxel R-CNN, by 4.79%, 4.78%, and 1.84% in terms of 3D R11 AP on the category of moderate car. Particularly, GLENet-VR achieves 86.36% AP on the moderate car class, which surpasses all other state-of-the-art methods. Besides, as a single-stage method, GLENet-C achieves 84.59% AP for the moderate vehicle class, which is comparable to the existing two-stage approaches while achieving relatively low inference cost. It is worth noting that our method is compatible with mainstream detectors and can be expected to achieve better performance when combined with stronger baselines.

Method | Reference | 3D AP_R11 Easy | Moderate | Hard | 3D AP_R40 Easy | Moderate | Hard
Part-A2 [28] TPAMI 2020 89.47 79.47 78.54 - - -
3DSSD [41] CVPR 2020 89.71 79.45 78.67 - - -
SA-SSD [8] CVPR 2020 90.15 79.91 78.78 92.23 84.30 81.36
PV-RCNN [26] CVPR 2020 89.35 83.69 78.70 92.57 84.83 83.31
SE-SSD [49] CVPR 2021 90.21 85.71 79.22 93.19 86.12 83.31
VoTR [17] ICCV 2021 89.04 84.04 78.68 - - -
Pyramid-PV [16] ICCV 2021 89.37 84.38 78.84 - - -
CT3D [25] ICCV 2021 89.54 86.06 78.99 92.85 85.82 83.46
SECOND [40] Sensors 2018 88.61 78.62 77.22 91.16 81.99 78.82
GLENet-S (Ours) - 88.68 82.95 78.19 91.73 84.11 81.35
CIA-SSD [48] AAAI 2021 90.04 79.81 78.80 93.59 84.16 81.20
GLENet-C (Ours) - 89.82 84.59 78.78 93.20 85.16 81.94
Voxel R-CNN [3] AAAI 2021 89.41 84.52 78.93 92.38 85.29 82.86
GLENet-VR (Ours) - 89.93 86.46 79.19 93.51 86.10 83.60
TABLE II: Performances of different methods on the KITTI validation set for vehicle detection, under the evaluation metric of 3D Average Precision (AP) calculated with 11 sampling recall positions. The 3D APs calculated with 40 sampling recall points are also reported.
Method | LEVEL_1 3D mAP: Overall | 0-30m | 30-50m | 50m-Inf | LEVEL_1 mAPH: Overall | LEVEL_2 3D mAP: Overall | 0-30m | 30-50m | 50m-Inf | LEVEL_2 mAPH: Overall
PointPillar [12] 56.62 81.01 51.75 27.94 - - - - - -
MVF [50] 62.93 86.30 60.02 36.02 - - - - - -
PV-RCNN [26] 70.30 91.92 69.21 42.17 69.69 65.36 91.58 65.13 36.46 64.79
VoTr-TSD [17] 74.95 92.28 73.36 51.09 74.25 65.91 - - - 65.29
Pyramid-PV [16] 76.30 92.67 74.91 54.54 75.68 67.23 - - - 66.68
CT3D [25] 76.30 92.51 75.07 55.36 - 69.04 91.76 68.93 42.60 -
SECOND* [40] 69.85 90.71 68.93 41.17 69.40 62.76 86.92 62.57 35.89 62.30
GLENet-S (Ours) 72.29 91.02 71.86 45.43 71.85 64.78 87.56 65.11 38.60 64.25
Voxel R-CNN [3] 76.08 92.44 74.67 54.69 75.67 68.06 91.56 69.62 42.80 67.64
GLENet-VR (Ours) 77.32 92.97 76.28 55.98 76.85 69.68 92.09 71.21 44.36 68.97
TABLE III: Performance of different methods on the Waymo validation set for vehicle detection. *: re-implemented by ourselves with the official code.

4.3 Evaluation on the Waymo Open Dataset

The evaluation results of different approaches on both LEVEL_1 and LEVEL_2 of the Waymo Open Dataset are reported in Table III, which shows that our method contributes 2.44% and 1.24% improvements in terms of LEVEL_1 mAP for SECOND and Voxel R-CNN, respectively. The performance boost brought by our method becomes much more obvious in the ranges of 30-50m and 50m-Inf. Intuitively, this is because distant point cloud objects tend to be sparser and thus suffer from more severe bounding box ambiguity. GLENet-VR achieves better performance than the existing methods, with 77.32% mAP and 69.68% mAP for the LEVEL_1 and LEVEL_2 difficulty levels, respectively.

4.4 Ablation Study

We conducted ablative analyses to verify the effectiveness and characteristics of our processing pipeline. In this section, all the involved model variants are built upon the Voxel R-CNN baseline and evaluated on the KITTI dataset, under the evaluation metric of average precision calculated with 40 recall positions.

Fig. 6: Qualitative comparison results between GLENet-VR and Voxel R-CNN on the KITTI dataset. We show the ground-truth, true positive and false positive bounding boxes in red, green and yellow, respectively, on both the point cloud and image. Best viewed in color.
Fig. 7: Visual results of GLENet. The point cloud, ground-truth and predictions of GLENet are colored in black, red and green respectively. GLENet produces diverse predictions for sparse and incomplete point clouds, and consistent bounding boxes for high-quality point clouds.

4.4.1 Different Methods for Label Uncertainty Estimation

We compared our approach with two other ways of label uncertainty estimation: 1) treating the label distribution as a deterministic Dirac delta distribution with zero uncertainty; 2) estimating the label uncertainty with simple heuristics, i.e., the number of points in the ground-truth bounding box, or the IoU between the label bounding box and the convex hull of the aggregated LiDAR observations [18].

As shown in Table IV, our method consistently outperforms existing label uncertainty estimation paradigms. Compared with heuristic strategies, our deep generative learning paradigm can adaptively estimate label uncertainty statistics in all seven dimensions instead of treating the uncertainty of a bounding box as a whole, considering that the variance in each dimension can be very different.

Method | 3D AP Easy | Moderate | Hard
Voxel R-CNN 92.38 85.29 82.86
GLENet-VR w/ $\sigma_t$ = 0 92.48 85.37 83.05
GLENet-VR w/ $\sigma_t$ (points num) 92.46 85.58 83.16
GLENet-VR w/ $\sigma_t$ (convex hull [18]) 92.33 85.45 82.81
GLENet-VR w/ $\sigma_t$ (Ours) 93.49 86.10 83.56
TABLE IV: Comparison of different label uncertainty estimation approaches.

4.4.2 Key Components of Probabilistic Detectors

We analyzed the contribution of different key components in our constructed probabilistic detectors and report the results in Table V. According to the second row, training with the KL loss alone brings only marginal performance gains. The label uncertainty generated by GLENet and used in the KL loss contributes 0.75%, 0.51%, and 0.3% improvements to the APs of the easy, moderate, and hard classes, respectively, which demonstrates its regularization effect on the KL loss (Eq. 7) and its ability to estimate more reliable uncertainty statistics of bounding box labels. Our UAQE module in the probabilistic detection head further boosts the easy, moderate, and hard APs by 0.25%, 0.19%, and 0.15%, respectively, which demonstrates its effectiveness in estimating the localization quality.

KL loss | LU | var voting | UAQE | Easy | Moderate | Hard
92.38 85.29 82.86
92.45 85.25 82.99
92.48 85.37 83.05
93.20 85.76 83.29
93.24 85.91 83.41
93.49 86.10 83.56
TABLE V: Contribution of each component in our constructed GLENet-VR pipeline. “LU” denotes the label uncertainty.

4.4.3 Influence of Data Augmentation

To generate similar point cloud shapes with diverse ground-truth bounding boxes during the training of GLENet, we proposed an occlusion data augmentation strategy that generates more incomplete point clouds while keeping the bounding boxes unchanged (see Fig. 5). As listed in Table VI, the occlusion data augmentation effectively enhances the performance of GLENet and the downstream detection task. Besides, the effectiveness of the NLL metric, which is proposed to evaluate GLENet and select optimal configurations for generating reliable label uncertainty, is also validated.

Occlusion Aug. | NLL | Easy | Mod. | Hard
w/o | 230.1 | 93.21 | 85.86 | 83.35
w/ | 91.5 | 93.49 | 86.10 | 83.56
TABLE VI: Ablation study of the occlusion augmentation technique in GLENet, in which we report the NLL for the evaluation of GLENet and the 3D Average Precision with 40 sampling recall points for the evaluation of the downstream detector.

4.4.4 Conditional Analysis

To figure out in what cases our method improves the base detector most, we evaluated GLENet-VR on different occlusion levels and distance ranges. As shown in Table VII, compared with the baseline, our method mainly improves on the heavily occluded and distant samples, which suffer from more serious boundary ambiguities of ground-truth bounding boxes.

Method | Occlusion level 0 | 1 | 2 | Distance 0-20m | 20-40m | 40m-Inf
Voxel R-CNN [3] | 92.35 | 76.91 | 54.32 | 96.42 | 83.82 | 38.86
GLENet-VR (Ours) | 93.51 | 78.64 | 56.93 | 96.69 | 86.87 | 39.82
Improvement | +1.16 | +1.73 | +2.61 | +0.27 | +3.05 | +0.96
TABLE VII: Comparison on different occlusion levels and distance ranges, evaluated by the 3D Average Precision (AP) calculated with 40 sampling recall positions on the KITTI val set.

Note: The results include separate APs for objects belonging to different occlusion levels and APs for the moderate vehicle class in different distance ranges. Definition of occlusion levels: levels 0, 1, and 2 correspond to fully visible samples, partly occluded samples, and samples that are difficult to see, respectively.

Method SECOND [40] GLENet-S (Ours) CIA-SSD [48] GLENet-C (Ours) Voxel R-CNN [3] GLENet-VR (Ours)
FPS(Hz) 23.36 22.80 27.18 28.76 21.08 20.82
TABLE VIII: Inference speed of different detection frameworks on the KITTI dataset.

4.4.5 Inference Latency

We evaluated the inference speed of different baselines with a batch size of 1 on a desktop with an Intel E5-2560 CPU @ 2.10 GHz and an NVIDIA GeForce RTX 2080Ti GPU. As shown in Table VIII, our approach does not significantly increase the computational overhead. In particular, GLENet-VR takes only 0.6 ms more than the base Voxel R-CNN, since the number of candidates fed into variance voting is relatively small in two-stage detectors.

4.5 Qualitative Results

Fig. 6 visualizes the detection results using the proposed GLENet-VR and the baseline Voxel R-CNN. We observe that GLENet-VR can obtain better detection results with fewer false-positive bounding boxes and fewer missed heavily occluded and distant objects compared with the base detector on the KITTI dataset.

We also include some visualization of the results of GLENet. As shown in Fig. 7, given a point cloud object, we can acquire potentially plausible bounding boxes with GLENet by sampling latent variables multiple times. In general, GLENet tends to predict diverse bounding boxes for objects represented by sparse point clouds with incomplete outlines, and consistently accurate bounding boxes for high-quality point cloud objects. Therefore, the variance of GLENet's multiple predictions can represent the label uncertainty in ground-truth bounding boxes.

5 Conclusion

We presented a general and unified deep learning-based paradigm for generative modeling of 3D object-level label uncertainty. Technically, we proposed GLENet, adapted from the learning framework of CVAE, to capture one-to-many relationships between incomplete point cloud objects and potentially plausible bounding boxes. As a plug-and-play component, GLENet can generate reliable label uncertainty statistics that can be conveniently integrated into various types of 3D detection pipelines to build powerful probabilistic detectors. We verified the effectiveness and universality of our method by incorporating the proposed GLENet into several existing deep 3D object detectors, which demonstrated stable improvement and produced state-of-the-art performance on both KITTI and Waymo datasets.

References

  • [1] S. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio (2016) Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10–21. Cited by: §4.1.3.
  • [2] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko (2020) End-to-end object detection with transformers. In European Conference on Computer Vision, pp. 213–229. Cited by: §2.2.
  • [3] J. Deng, S. Shi, P. Li, W. Zhou, Y. Zhang, and H. Li (2021) Voxel r-cnn: towards high performance voxel-based 3d object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 1201–1209. Cited by: §2.1, §3.1.5, §4.1.4, TABLE II, TABLE III, TABLE VII, TABLE VIII.
  • [4] D. Feng, L. Rosenbaum, and K. Dietmayer (2018) Towards safe autonomous driving: capture uncertainty in the deep neural network for lidar 3d vehicle detection. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 3266–3273. Cited by: §1.
  • [5] D. Feng, L. Rosenbaum, F. Timm, and K. Dietmayer (2019) Leveraging heteroscedastic aleatoric uncertainties for robust real-time lidar 3d object detection. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 1280–1287. Cited by: §1.
  • [6] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361. Cited by: §1, §1, §4.
  • [7] A. Goyal, A. Sordoni, M. Côté, N. R. Ke, and Y. Bengio (2017) Z-forcing: training stochastic recurrent networks. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 6716–6726. Cited by: §3.1.3.
  • [8] C. He, H. Zeng, J. Huang, X. Hua, and L. Zhang (2020-06) Structure aware single-stage 3d object detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1, TABLE I, TABLE II.
  • [9] Y. He, C. Zhu, J. Wang, M. Savvides, and X. Zhang (2019-06) Bounding box regression with uncertainty for accurate object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.2, §3.1.6, §3.2.3.
  • [10] T. Huang, Z. Liu, X. Chen, and X. Bai (2020) Epnet: enhancing point features with image semantics for 3d object detection. In European Conference on Computer Vision, pp. 35–52. Cited by: TABLE I.
  • [11] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §4.1.3.
  • [12] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom (2019-06) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1, TABLE III.
  • [13] J. Li, Y. Song, H. Zhang, D. Chen, S. Shi, D. Zhao, and R. Yan (2018) Generating classical chinese poems via conditional variational autoencoder and adversarial training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3890–3900. Cited by: §2.3.
  • [14] X. Li, W. Wang, L. Wu, S. Chen, X. Hu, J. Li, J. Tang, and J. Yang (2020) Generalized focal loss: learning qualified and distributed bounding boxes for dense object detection. Advances in Neural Information Processing Systems 33, pp. 21002–21012. Cited by: §2.2.
  • [15] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §2.2.
  • [16] J. Mao, M. Niu, H. Bai, X. Liang, H. Xu, and C. Xu (2021-10) Pyramid r-cnn: towards better performance and adaptability for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2723–2732. Cited by: TABLE I, TABLE II, TABLE III.
  • [17] J. Mao, Y. Xue, M. Niu, H. Bai, J. Feng, X. Liang, H. Xu, and C. Xu (2021) Voxel transformer for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3164–3173. Cited by: TABLE I, TABLE II, TABLE III.
  • [18] G. P. Meyer and N. Thakurdesai (2020) Learning an uncertainty-aware object detector for autonomous driving. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10521–10527. Cited by: §1, §2.3, §4.4.1, TABLE IV.
  • [19] G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, and C. K. Wellington (2019-06) LaserNet: an efficient probabilistic 3d object detector for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.2.
  • [20] A. Mousavian, C. Eppner, and D. Fox (2019) 6-dof graspnet: variational grasp generation for object manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2901–2910. Cited by: §2.3.
  • [21] M. Najibi, G. Lai, A. Kundu, Z. Lu, V. Rathod, T. Funkhouser, C. Pantofaru, D. Ross, L. S. Davis, and A. Fathi (2020-06) DOPS: learning to detect 3d objects and predict their 3d shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [22] N. Painchaud, Y. Skandarani, T. Judge, O. Bernard, A. Lalande, and P. Jodoin (2020) Cardiac segmentation with strong anatomical guarantees. IEEE transactions on medical imaging 39 (11), pp. 3703–3713. Cited by: §2.3.
  • [23] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660. Cited by: §3.1.2.
  • [24] S. Sharma, P. T. Varigonda, P. Bindal, A. Sharma, and A. Jain (2019-10) Monocular 3d human pose estimation by generation and ordinal ranking. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Cited by: §2.3.
  • [25] H. Sheng, S. Cai, Y. Liu, B. Deng, J. Huang, X. Hua, and M. Zhao (2021) Improving 3d object detection with channel-wise transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2743–2752. Cited by: TABLE I, TABLE II, TABLE III.
  • [26] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li (2020) Pv-rcnn: point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10529–10538. Cited by: §2.1, TABLE I, TABLE II, TABLE III.
  • [27] S. Shi, X. Wang, and H. Li (2019) Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 770–779. Cited by: §2.1.
  • [28] S. Shi, Z. Wang, J. Shi, X. Wang, and H. Li (2020) From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network. IEEE transactions on pattern analysis and machine intelligence 43 (8), pp. 2647–2664. Cited by: §2.1, §2.2, TABLE I, TABLE II.
  • [29] W. Shi and R. Rajkumar (2020-06) Point-gnn: graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [30] L. N. Smith (2017) Cyclical learning rates for training neural networks. In 2017 IEEE winter conference on applications of computer vision (WACV), pp. 464–472. Cited by: §4.1.3.
  • [31] K. Sohn, H. Lee, and X. Yan (2015) Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28, pp. 3483–3491. Cited by: §2.3, §3.1.1.
  • [32] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, V. Vasudevan, W. Han, J. Ngiam, H. Zhao, A. Timofeev, S. Ettinger, M. Krivokon, A. Gao, A. Joshi, Y. Zhang, J. Shlens, Z. Chen, and D. Anguelov (2020-06) Scalability in perception for autonomous driving: waymo open dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §1, §4.
  • [33] M. Tan, R. Pang, and Q. V. Le (2020) Efficientdet: scalable and efficient object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10781–10790. Cited by: §2.2.
  • [34] A. Varamesh and T. Tuytelaars (2020) Mixture dense regression for object detection and human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13086–13095. Cited by: §2.2.
  • [35] T. Wang and X. Wan (2019) T-cvae: transformer-based conditioned variational autoencoder for story completion.. In IJCAI, pp. 5233–5239. Cited by: §2.3.
  • [36] Z. Wang, D. Feng, Y. Zhou, L. Rosenbaum, F. Timm, K. Dietmayer, M. Tomizuka, and W. Zhan (2020) Inferring spatial uncertainty in object detection. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5792–5799. Cited by: §1, §2.3.
  • [37] Q. Xu, Y. Zhou, W. Wang, C. R. Qi, and D. Anguelov (2021-10) SPG: unsupervised domain adaptation for 3d object detection via semantic point generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 15446–15456. Cited by: §2.1.
  • [38] X. Yan, J. Yang, K. Sohn, and H. Lee (2016) Attribute2image: conditional image generation from visual attributes. In European conference on computer vision, pp. 776–791. Cited by: §2.3.
  • [39] X. Yan, J. Gao, J. Li, R. Zhang, Z. Li, R. Huang, and S. Cui (2021) Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 3101–3109. Cited by: §2.1.
  • [40] Y. Yan, Y. Mao, and B. Li (2018) Second: sparsely embedded convolutional detection. Sensors 18 (10), pp. 3337. Cited by: §2.1, §3.1.5, §4.1.4, TABLE II, TABLE III, TABLE VIII.
  • [41] Z. Yang, Y. Sun, S. Liu, and J. Jia (2020) 3dssd: point-based 3d single stage object detector. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11040–11048. Cited by: TABLE I, TABLE II.
  • [42] Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia (2019) Std: sparse-to-dense 3d object detector for point cloud. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1951–1960. Cited by: §2.1, TABLE I.
  • [43] L. Yi, W. Zhao, H. Wang, M. Sung, and L. J. Guibas (2019-06) GSPN: generative shape proposal network for 3d instance segmentation in point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.3.
  • [44] J. H. Yoo, Y. Kim, J. Kim, and J. W. Choi (2020) 3d-cvf: generating joint camera and lidar features using cross-view spatial feature fusion for 3d object detection. In European Conference on Computer Vision, pp. 720–736. Cited by: TABLE I.
  • [45] B. Zhang, D. Xiong, J. Su, H. Duan, and M. Zhang (2016) Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 521–530. Cited by: §2.3.
  • [46] J. Zhang, D. Fan, Y. Dai, S. Anwar, F. S. Saleh, T. Zhang, and N. Barnes (2020) UC-net: uncertainty inspired rgb-d saliency detection via conditional variational autoencoders. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8582–8591. Cited by: §2.3.
  • [47] T. Zhao, R. Zhao, and M. Eskenazi (2017) Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 654–664. Cited by: §2.3.
  • [48] W. Zheng, W. Tang, S. Chen, L. Jiang, and C. Fu (2021) CIA-ssd: confident iou-aware single-stage object detector from point cloud. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 3555–3562. Cited by: §2.1, §4.1.4, TABLE II, TABLE VIII.
  • [49] W. Zheng, W. Tang, L. Jiang, and C. Fu (2021) SE-ssd: self-ensembling single-stage object detector from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14494–14503. Cited by: TABLE I, TABLE II.
  • [50] Y. Zhou, P. Sun, Y. Zhang, D. Anguelov, J. Gao, T. Ouyang, J. Guo, J. Ngiam, and V. Vasudevan (2020) End-to-end multi-view fusion for 3d object detection in lidar point clouds. In Conference on Robot Learning, pp. 923–932. Cited by: TABLE III.
  • [51] Y. Zhou and O. Tuzel (2018-06) VoxelNet: end-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.