
Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection

by   Di Feng, et al.

To assure that an autonomous car is driving safely on public roads, its deep learning-based object detector should not only predict correctly, but show its prediction confidence as well. In this work, we present practical methods to capture uncertainties in object detection for autonomous driving. We propose a probabilistic 3D vehicle detector for Lidar point clouds that can model both classification and spatial uncertainty. Experimental results show that our method captures reliable uncertainties related to the detection accuracy, vehicle distance and occlusion. The results also show that we can improve the detection performance by 1





I Introduction

Knowing what an object detection model is unsure about is of paramount importance for safe autonomous driving. For example, if an autonomous car recognizes a front object as a pedestrian but is uncertain about its location, the system may warn the driver to take over the car at an early stage or slow down to avoid fatal accidents.

Deep learning has been introduced to object detection in autonomous driving settings that use cameras [1, 2, 3], Lidar [4, 5, 6, 7, 8, 9, 10], or both [11, 12, 13, 14, 15, 16, 17], and has set the benchmark on many popular datasets (e.g. KITTI [18], Cityscapes [19]). However, to the best of our knowledge, none of these methods allows for the estimation of uncertainty in bounding box regression. Moreover, these methods often use a softmax layer to normalize the score vector for the purpose of classifying objects, which does not necessarily represent the classification uncertainty in the model. As a result, these object detectors can only tell the human drivers what they have seen, but not how certain they are about it.

Fig. 1:

Our proposed probabilistic Lidar 3D vehicle detection network takes the Lidar point clouds as input. It not only predicts object classes and 3D bounding boxes, but also the model uncertainty and the sensor observation uncertainty. Shannon Entropy (SE) and Mutual Information (MI) quantify the classification uncertainty, and Total Variance (TV) the localization uncertainty. These scores are described in Sec. IV.


There are two types of uncertainty that we can quantify in the object detection network. Epistemic uncertainty, or model uncertainty, indicates how uncertain an object detector is to explain the observed dataset. Aleatoric uncertainty, conversely, captures the observation noises that are inherent in sensors. For instance, detecting an abnormal object that is different from the training dataset may result in high epistemic uncertainty, while detecting a distant object may result in high aleatoric uncertainty. Capturing both uncertainties in the object detection network is indispensable for safe autonomous driving, as the epistemic uncertainty displays the limitation of detection models, while the aleatoric uncertainty can provide the sensor observation noises for object tracking.

In this work, we develop practical methods to capture epistemic and aleatoric uncertainties in a 3D vehicle detector for Lidar point clouds. Our contributions are three-fold:

  • We extract model uncertainty and observation uncertainty for the vehicle recognition and 3D bounding box regression tasks.

  • We show an improvement of vehicle detection performance by modeling the aleatoric uncertainty.

  • We study the difference between the epistemic and aleatoric uncertainty. The former is associated with the vehicle detection accuracy, while the latter is influenced by the vehicle distance and occlusion.

The remainder of the paper is structured as follows: Sec. II summarizes related works. Sec. III illustrates the architecture of our probabilistic Lidar 3D vehicle detection neural network. Sec. IV presents the proposed methods to capture epistemic and aleatoric uncertainties. Sec. V illustrates the experimental results, followed by a conclusion and a discussion of future research in Sec. VI.

II Related Works

In this section, we first summarize methods for 3D object detection from Lidar point clouds in autonomous driving. We then summarize work related to Bayesian neural networks, which we use to extract uncertainty in our Lidar vehicle detection network.

II-A 3D Object Detection in Autonomous Driving

II-A1 3D Object Detection via Lidar Point Clouds

Many works represent 3D point clouds using voxel grids. Li [4] discretizes point clouds into square grids, and then employs a 3D convolutional neural network that outputs an objectness map and a bounding box map for each grid. Zhou et al. [6] introduce a voxel feature encoding (VFE) layer that can learn unified features directly from the Lidar point clouds. They feed these features to a Region Proposal Network (RPN) that predicts the detections. Engelcke et al. [7] use a voting scheme to search possible object locations, and introduce special convolutional layers to tackle the sparsity of the point clouds. In addition, several works encode 3D point clouds as 2D feature maps. Li et al. [5] project range scans into a 2D front-view depth map and use a 2D fully convolutional network (FCN) to detect vehicles. Caltagirone et al. [8] and Yang et al. [10] project the Lidar point clouds into bird's eye view images and propose a road detector and a car detector, respectively.

II-A2 3D Object Detection by Combining Lidar and Camera

Lidar point clouds usually give us accurate spatial information. Camera images provide object appearances, which are beneficial for object classification. Therefore, it is natural to combine these two sensors [11, 12, 13, 14, 15, 16, 17]. For example, Chen et al. [11] propose MV3D, a network that generates region proposals from the bird's eye view Lidar features and then combines the proposals with the regional features from the front-view Lidar feature maps and RGB camera images for accurate 3D vehicle detection. Qi et al. [12] use RGB camera images to draw 2D object bounding boxes, which build frustums in the Lidar point cloud. They then use these frustums for 3D bounding box regression.

II-B Bayesian Neural Networks

Bayesian neural networks (BNNs) provide a probabilistic interpretation of neural networks [20]. Instead of placing deterministic weights in the neural network, BNNs assume a prior distribution over them. By inferring the posterior distribution over the model weights, we can extract how uncertain a neural network is about its predictions. Uncertainty estimation methods include variational inference [21], sampling techniques [22], and ensembles [23]. Recently, Gal [24] proposed a method that captures the uncertainty in BNNs at test time by sampling the network multiple times with dropout. This dropout sampling method has been applied to active learning for cancer diagnosis [25], semantic segmentation [26, 27], and image object detection for the open-set problem [28].

III Network Architecture

Fig. 2: Network architecture for our proposed probabilistic Lidar 3D vehicle detector.

Our proposed probabilistic Lidar 3D object detection network is shown in Fig. 2. The network takes the features from the Lidar bird's eye view (BEV) as input and feeds them into a pre-processing network. Then, a region proposal network is employed to generate candidates of regions of interest (ROIs), which are processed through the intermediate layers. These layers are built to capture the uncertainties in our object detection network. After the intermediate layers, the features are fed into two fully connected layers for 3D bounding box regression and the softmax objectness score.

III-A Input and Output Encoding

To extract the Lidar BEV features, the 3D point clouds are first projected onto a 2D grid with a fixed resolution and then encoded by height, intensity, and density maps [11]. The height maps are generated by dividing the point clouds into several slices; together with the intensity and density maps, this yields a multi-channel input. Fig. 2 shows exemplary input features.

The network outputs the softmax objectness score and the oriented 3D bounding boxes. The bounding boxes are encoded by their corner offsets and are normalized by the diagonal length of the proposal anchors. Concretely, for a bounding box corner vector in the Lidar coordinate frame and its corresponding region proposal with diagonal length d, each regression output is the corner offset between the box and the proposal, divided by d.
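This diagonal-normalized offset encoding can be sketched as follows. The corner values, anchor values, and diagonal length below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def encode_box(corner, anchor_corner, anchor_diag):
    """Encode a box corner as its offset from the proposal anchor,
    normalized by the anchor's diagonal length."""
    return (np.asarray(corner) - np.asarray(anchor_corner)) / anchor_diag

def decode_box(offset, anchor_corner, anchor_diag):
    """Invert the encoding to recover absolute corner coordinates."""
    return np.asarray(offset) * anchor_diag + np.asarray(anchor_corner)

# Hypothetical (x, y, z) corner, anchor corner, and anchor diagonal length.
corner = np.array([12.4, 3.1, 0.8])
anchor_corner = np.array([12.0, 3.0, 0.9])
anchor_diag = 5.0

offset = encode_box(corner, anchor_corner, anchor_diag)
assert np.allclose(decode_box(offset, anchor_corner, anchor_diag), corner)
```

Normalizing by the anchor diagonal makes the regression targets scale-invariant, so anchors of different sizes produce targets in a comparable numeric range.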

III-B Feature Pre-processing Network

To process the Lidar feature images, we use the ResNet-8 [29] architecture, which contains residual blocks (one block architecture is shown in Fig. 3) with the number of kernels increasing from block to block. The last convolution layer is up-sampled by a factor of two for generating region proposals and detecting objects. In this way, the network can detect small vehicles that occupy only a few grid cells [11].

Fig. 3: ResNet block architecture.

III-C Region Proposal Networks

We follow the Faster-RCNN pipeline [30] to generate BEV 3D region proposals. First, for each feature map pixel we generate nine 2D anchors, combining three box areas with three aspect ratios. Then, the 2D anchors are projected into 3D space by adding a fixed height value, which is selected to be large enough to enclose all common vehicles.
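The nine anchors per feature-map pixel can be generated as in the following sketch; the concrete areas and aspect ratios here are illustrative assumptions, not the paper's values:

```python
import itertools
import math

def make_anchors(areas, ratios):
    """Generate (width, length) anchor sizes such that
    width * length = area and length / width = ratio."""
    anchors = []
    for area, ratio in itertools.product(areas, ratios):
        width = math.sqrt(area / ratio)
        anchors.append((width, width * ratio))
    return anchors

# Illustrative areas (in m^2) and aspect ratios only.
anchors = make_anchors(areas=[4.0, 9.0, 16.0], ratios=[0.5, 1.0, 2.0])
assert len(anchors) == 9  # 3 areas x 3 aspect ratios = nine anchors per pixel
```

Each (area, ratio) pair fixes one anchor shape, so three areas and three ratios yield the nine anchors mentioned above.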

III-D Intermediate Layers

To further process the Lidar features and to extract uncertainties, we design fully connected hidden layers, each of which is followed by a dropout layer.

IV Capturing Uncertainty in Lidar 3D Vehicle Detection

As introduced in Sec. I, there are two different types of uncertainty that can be captured by our proposed probabilistic Lidar 3D vehicle detector: the epistemic uncertainty that describes the uncertainty in the model, and the aleatoric uncertainty that describes the observation noises.

We employ the intermediate layers (Fig. 2) as a Bayesian neural network to extract uncertainties. Let us denote by x a ROI candidate generated by the RPN at test time, and by y the prediction of object labels or bounding boxes. We want to estimate the posterior distribution of the prediction p(y|x, X, Y), where X denotes the training dataset and Y its ground truth. To do this, we marginalize the prediction over the weights W of the fully connected layers through p(y|x, X, Y) = ∫ p(y|x, W) p(W|X, Y) dW. By estimating the weight posterior p(W|X, Y) and performing Bayesian inference, we can extract the epistemic uncertainty; by estimating the observation likelihood p(y|x, W), we can obtain the aleatoric uncertainty.

IV-A Capturing Epistemic Uncertainty

In order to extract the epistemic uncertainty (or model uncertainty) in our vehicle detection network, we need to calculate the weight posterior p(W|X, Y). However, performing this inference analytically is in general intractable, so approximation techniques are required. Recently, Gal [24] showed that dropout can be used as approximate inference in a Bayesian neural network: during the training process, performing dropout with stochastic gradient descent is equivalent to optimizing an approximate posterior distribution with Bernoulli distributions over the weights. Gal [24] also shows that by performing the network forward passes several times with dropout at test time, we obtain samples from the model's approximate posterior distribution, and can thus estimate the epistemic uncertainty.

We employ this dropout method to capture the model uncertainty in the object detection network. Our network is trained using dropout. For each region proposal x at test time, it performs N forward passes with dropout. The output of the network contains the softmax scores s_1, ..., s_N and the bounding box regressions b_1, ..., b_N, with s_i and b_i being the softmax score and the regression output of the i-th forward pass, respectively. In the following, we illustrate how to use these outputs to measure the epistemic uncertainty.
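Test-time dropout sampling can be sketched with a toy numpy network; the weights, layer sizes, and dropout rate below are illustrative stand-ins for the trained intermediate layers, not the paper's configuration:

```python
import numpy as np

def forward_with_dropout(x, W1, W2, drop_rate, rng):
    """One stochastic forward pass: a Bernoulli dropout mask on the hidden layer."""
    h = np.maximum(x @ W1, 0.0)                 # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_rate     # keep units with prob. 1 - drop_rate
    h = h * mask / (1.0 - drop_rate)            # inverted dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())           # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
# Toy weights standing in for trained intermediate layers (illustrative only).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 2))
x = rng.normal(size=4)

N = 50  # number of stochastic forward passes
scores = np.stack([forward_with_dropout(x, W1, W2, 0.5, rng) for _ in range(N)])
p_vehicle = scores[:, 1].mean()  # averaged softmax score for the "vehicle" class
```

Because the dropout mask is redrawn on every pass, the N softmax scores differ, and their spread carries the epistemic uncertainty signal used in the following subsections.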

IV-A1 Extracting Vehicle Probability and Epistemic Classification Uncertainty

The vehicle probability p(v|x) - the probability that a sample x is recognized as a vehicle - is approximated by the mean softmax score of the N forward passes:

p(v|x) ≈ (1/N) Σ_{i=1}^{N} s_i.

Note that if we perform a single forward pass without dropout (N = 1), p(v|x) is approximated by a single softmax score - the point-wise prediction done by most object detection networks. However, as Gal [24] mentioned, such a point estimate can underestimate the uncertainty of samples that are different from the training dataset. In order to extract a more accurate model uncertainty, the network needs to perform multiple forward passes with dropout (i.e. N > 1).

We further use the Shannon entropy (SE) and mutual information (MI) to measure the classification uncertainty [24]. For a region proposal x with vehicle probability p = p(v|x), its SE score is calculated via

SE(p) = -p log p - (1 - p) log(1 - p).

SE captures the uncertainty in the prediction output. In this work, we use the natural logarithm, and the SE score ranges in [0, log 2]. When p = 0 or p = 1, the network is certain about its prediction, resulting in SE = 0. The SE score reaches its peak when the network is most uncertain about its prediction, namely p = 0.5.

Different from SE, MI measures the model's confidence in the output. It captures the information difference between the prediction probability and the posterior of the model parameters [24], and is approximated via

MI(p) ≈ SE(p) - (1/N) Σ_{i=1}^{N} SE(s_i),

where p is the averaged vehicle probability from above. MI ranges in [0, log 2], with a large value indicating high epistemic classification uncertainty.

IV-A2 Extracting 3D Bounding Box and Epistemic Spatial Uncertainty

We estimate the bounding box position of a region proposal in the Lidar coordinate frame by averaging the regression outputs of the N forward passes. To do this, we first transform each bounding box prediction b_i into the Lidar coordinate frame, denoted v_i; the mean is then calculated as

v_mean = (1/N) Σ_{i=1}^{N} v_i.
To estimate the epistemic spatial (regression) uncertainty in the bounding box prediction, we use the total variance of the covariance matrix of the N forward-pass regressions. The covariance matrix is calculated via C = (1/N) Σ_{i=1}^{N} (v_i - v_mean)(v_i - v_mean)^T. The total variance is the trace of this covariance matrix:

TV = tr(C).

This score ranges in [0, ∞), with a large value indicating high epistemic spatial uncertainty.

IV-B Capturing Aleatoric Uncertainty

So far, we have explained how to use the dropout sampling technique to approximate the posterior distribution and to extract the epistemic uncertainty. To capture the aleatoric uncertainty, we need to model the observation likelihood p(y|x, W). In the classification task, we can model it by the softmax function, with y referring to the object labels. In the 3D bounding box regression task, we model the observation likelihood as a multi-variate Gaussian distribution with diagonal covariance matrix:

p(y|x, W) = N(y; f(x), diag(σ²)).

Here, f(x) is the bounding box prediction. The parameter σ² is a vector, with each element representing the observation noise of one prediction in the bounding box regression. We obtain these observation noises by adding an output regression layer to our vehicle detection network: in addition to the softmax scores and bounding box regressions, the network now predicts the observation noises, where we regress s = log(σ²) for numerical stability. In the training phase, we modify the cost function for the bounding box regression to

L_reg = (1/2) Σ_d exp(-s_d) (y_d - f_d(x))² + (1/2) Σ_d s_d.
As mentioned in [26], this loss function can increase the network's robustness when learning from a noisy dataset. When the training data has high aleatoric uncertainty, the model is penalized by the (1/2) Σ_d s_d term, while the residual term is down-weighted since exp(-s_d) becomes small. Consequently, data with high uncertainty contributes less to the loss. Note that instead of performing approximate inference as when extracting epistemic uncertainty, here we perform MAP inference.

IV-C Implementation

Training the neural network is achieved in a multi-loss, end-to-end learning fashion. We use the smooth L1 loss and the cross-entropy loss for the oriented 3D box regression and classification, respectively, and the same loss functions for the region proposal and object category outputs of the Region Proposal Network. The final loss function is a weighted sum of these four terms, with two fixed weighting factors balancing the classification and regression losses. We employ regularization and dropout to prevent over-fitting and to perform the posterior distribution approximation. The network is trained with the Adam optimizer; after the initial training steps, we reduce the learning rate and train further steps for fine-tuning.

To solely extract the epistemic uncertainty, the network performs N forward passes with dropout at test time. We fix the dropout rate, though it can be tuned by grid search to maximize the validation log likelihood [24] or by gradient methods [31]. A dropout rate of 0.5 introduces the highest epistemic uncertainty in the model. We also fix N. To solely extract the aleatoric uncertainty, the network is trained with the loss function modified according to Eq. 7; it then performs the feed-forward pass only once, without dropout, at test time. To extract both epistemic and aleatoric uncertainty, the network trained with Eq. 7 performs multiple feed-forward passes with dropout.

V Experimental Results

In this section, we experimentally evaluate our proposed probabilistic Lidar 3D object detection network. We show that modeling the aleatoric uncertainty improves the vehicle detection performance. We also show that the network captures model uncertainty and observation uncertainty, which behave differently from each other: The model (epistemic) uncertainty is influenced by the detection accuracy, but is unaffected by the vehicle distance. Conversely, the observation (aleatoric) uncertainty is associated with the vehicle distance and occlusion, and has little relationship to detection accuracy.

V-A Experimental Setup

Our proposed method was evaluated on the KITTI raw dataset [18]. From the dataset, we randomly chose a set of drives for training and another set of drives for testing. We considered the Lidar point clouds within fixed ranges along the x, y, and z axes (the Lidar coordinate system is illustrated in Fig. 1). With a fixed discretization resolution, each bird's eye view input channel was a 2D image. We only evaluated the detections that were visible in the front view of the camera.

The 3D vehicle proposals were considered detected when their probability scores were larger than a fixed threshold. Their performance was evaluated using Intersection over Union (IoU) thresholds ranging from 0.1 to 0.8. The IoU score rates the similarity between a predicted vehicle and its ground truth; a higher IoU value indicates a more accurate prediction.
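For reference, the IoU criterion can be sketched for the simplified axis-aligned case; the paper evaluates oriented 3D boxes, so this is an illustrative reduction, not the exact evaluation code:

```python
def iou_3d_axis_aligned(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as (x0, x1, y0, y1, z0, z1).

    Note: the paper uses oriented 3D boxes; the axis-aligned case keeps
    the sketch short."""
    inter = 1.0
    for i in range(3):
        lo = max(box_a[2 * i], box_b[2 * i])
        hi = min(box_a[2 * i + 1], box_b[2 * i + 1])
        inter *= max(0.0, hi - lo)  # overlap length along each axis
    def volume(b):
        return (b[1] - b[0]) * (b[3] - b[2]) * (b[5] - b[4])
    union = volume(box_a) + volume(box_b) - inter
    return inter / union if union > 0 else 0.0

assert iou_3d_axis_aligned((0, 2, 0, 2, 0, 2), (0, 2, 0, 2, 0, 2)) == 1.0
assert iou_3d_axis_aligned((0, 2, 0, 2, 0, 2), (2, 4, 0, 2, 0, 2)) == 0.0
```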

V-B 3D Vehicle Detection Performance

We first evaluated the 3D vehicle detection performance of our proposed network. As a baseline method, we trained a vehicle detector called Non-Bayesian, which did not explicitly model the observation noise or the weight posterior distribution. We then compared this baseline with the vehicle detectors that captured epistemic uncertainty (Epistemic), aleatoric uncertainty (Aleatoric), or both (Epistemic+Aleatoric). We used F1 scores at different IoU thresholds to evaluate the detection performance, combining the 3D bounding box precision and the recall. Results are illustrated in Tab. I. The vehicle detectors that captured aleatoric uncertainty (Aleatoric and Epistemic+Aleatoric) consistently outperformed the baseline method. This is because modeling the aleatoric uncertainty increases the model's robustness when dealing with noisy input data. The network Epistemic, conversely, slightly underperformed the baseline method, as the network capacity was reduced by dropping out some of its hidden units.

                          IoU threshold
Network                   0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8
Non-Bayesian (baseline)
Aleatoric                 0.750  0.736  0.716  0.688  0.651  0.533  0.253  0.044
TABLE I: F1 score comparison for different vehicle detectors

V-C Understanding the Epistemic Uncertainty in 3D Object Detection

Fig. 4: Averaged epistemic uncertainty of predicted samples at different IoU intervals. The horizontal axis represents the increasing IoU values, and the vertical axis represents the uncertainty measurement. (a), (b): Shannon Entropy and Mutual Information estimates for epistemic classification uncertainty. (c): Total Variance for epistemic spatial uncertainty, calculated in the x, y, and z axes.
Fig. 5: The epistemic uncertainty estimates for each detection at high and low IoU values. (a), (b): classification uncertainty; (c), (d): 3D bounding box spatial uncertainty. The total variances in the x, y, and z axes were calculated.

We then evaluated how our proposed network captured the epistemic uncertainty in the 3D object detection task. Fig. 4(a) and Fig. 4(b) show the mean values of Shannon Entropy (SE) and Mutual Information (MI), averaged over all predicted samples that lay within different IoU intervals. Predictions with higher IoU scores are more accurate. The results show that SE behaved similarly to the MI score. Both scores were well associated with the prediction accuracy, as they became smaller with increasing IoU. We also evaluated the epistemic uncertainty in the bounding box regression. Following Eq. 5, we calculated the mean values of the total variance in the x, y, and z axes. Fig. 4(c) shows that the total variance was largest along one of the horizontal axes, indicating the direction in which the network was most uncertain about the vehicles' positions. The total variance in the z axis was the smallest, as there is little variance in vehicles' height. Fig. 4(c) also shows that the regression uncertainty was affected by the prediction accuracy: the total variance decreased for larger IoU values.

To better understand the epistemic uncertainty, we plot the SE and MI values for each predicted vehicle in the highest IoU interval, referring to the most accurate detections (Fig. 5(a)), and in the lowest IoU interval, the worst detections (Fig. 5(b)). The SE and MI values for the most accurate detections mostly lay at the bottom left of the figure, showing low epistemic classification uncertainty. In contrast, the worst detections were widely spread in Fig. 5(b), demonstrating higher uncertainty. We also analyzed the distribution of the total variance for both groups. Fig. 5(c) and Fig. 5(d) show large epistemic regression uncertainties when the network made inaccurate detections. In conclusion, our proposed Lidar 3D object detector captured reliable epistemic classification and regression uncertainty: objects with low classification and regression uncertainty were more likely to be detected accurately.

Qualitative Observations. We observed that big vehicles such as vans and trucks, as well as "ghost" objects (i.e. false positives), often showed high epistemic classification uncertainty (e.g. obj 5 and obj 0 in Fig. 6). Besides, detections with abnormal bounding boxes, e.g. boxes with unusually small length and large height (e.g. obj 6 in Fig. 6), often showed high epistemic spatial uncertainty. This is because these detections differed from our training dataset, which contained no ghost objects, no objects with abnormal shapes, and only a few big vehicles. Therefore, our Lidar 3D object detector was uncertain about them and produced high epistemic uncertainty scores. Such information can be used to efficiently improve the vehicle detector in an active learning paradigm: the detector actively queries the unseen samples with high epistemic uncertainty. More detection results can be found in the supplementary video.

Fig. 6: Illustration of detections with high epistemic uncertainty: (a) big vehicles such as trucks; (b) ghost objects. A larger value indicates higher uncertainty. The camera images are used for visualization purposes only.

V-D Understanding the Aleatoric Uncertainty in 3D Object Detection

Finally, we evaluated the aleatoric uncertainty, i.e. the uncertainty that captures the observation noises in the Lidar point clouds. Here, we focused on the 3D bounding box regression, whose uncertainty is quantified by the predicted observation noises (Sec. IV-B). In this work, we did not explicitly model the parameters to quantify the aleatoric classification uncertainty; we leave this as an interesting topic for future research.

We computed the mean total variance of the aleatoric spatial uncertainty with regard to different IoU intervals, similar to the experiment in Sec. V-C. To do this, we summed up the variances of the observation noises of an object in the x, y, and z axes, respectively. Fig. 7 shows that the aleatoric spatial uncertainty was little related to the detection accuracy, in contrast to the epistemic uncertainty shown in Fig. 4(c).

Fig. 7: Averaged total variance for the aleatoric spatial uncertainty of predicted samples at different IoU intervals. The total variance in the x, y, and z axes was calculated.

We further analyzed the behavior of the aleatoric uncertainty with regard to the distance between the ego-vehicle and the detected vehicles. We used the Pearson correlation coefficient (PCC) to quantify the linear correlation between the distance and the total variance in the x, y, and z axes, as well as the total variance of the whole covariance matrix, for the detections in the test dataset. Tab. II shows the results. The aleatoric uncertainty was positively correlated with distance, indicating that a more distant vehicle is more difficult to localize. This is due to the fact that the point cloud of an object becomes increasingly sparse at a large distance. Contrarily, the epistemic uncertainty showed little relationship with the distance. This conclusion was also supported by an exemplary sequence of vehicle detections shown in Fig. 8: as the car was moving away from the ego-vehicle, the aleatoric uncertainties in the x, y, and z axes continuously increased, whereas the epistemic uncertainty showed no such tendency.

Network      x axis   y axis   z axis   All
Aleatoric    0.569    0.412    0.497    0.537
TABLE II: Pearson correlation coefficient between distance and aleatoric spatial uncertainty
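The PCC analysis can be reproduced in a few lines of numpy; the data below is a synthetic stand-in (noise growing linearly with distance plus scatter), not the paper's test-set measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in: observation noise variance that grows with distance.
distance = rng.uniform(5.0, 60.0, size=200)   # ego-to-vehicle distance in meters
aleatoric_tv = 0.02 * distance + rng.normal(scale=0.2, size=200)

pcc = np.corrcoef(distance, aleatoric_tv)[0, 1]
assert pcc > 0.5  # clearly positive, matching the tendency reported in Tab. II
```

PCC values near +1 indicate a strong linear increase of the uncertainty with distance; values near 0 would indicate no linear relationship, as observed for the epistemic uncertainty.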
Fig. 8: An exemplary evolution of aleatoric and epistemic spatial uncertainty in sequential detections. PCC: Pearson correlation coefficient.

The observation noises were not only affected by distance, but by occlusion as well. We found that the corners of the bounding boxes facing directly towards the Lidar sensor consistently displayed smaller aleatoric spatial uncertainty than the occluded corners. For instance, the red scores in Fig. 9 represent the summed spatial uncertainty of the vehicle corners that face towards the Lidar sensor, and the blue scores the uncertainty of the occluded corners. These values were calculated by summing the observation noises of the front corners and back corners separately. For all detections in Fig. 9, the aleatoric spatial uncertainties of the occluded corners were consistently higher than those of the corners facing the ego-vehicle.

Fig. 9: Illustration of the aleatoric spatial uncertainty for the corners of the bounding boxes that face towards the ego-vehicle (marked in red) and the occluded corners (marked in blue). A larger value indicates higher uncertainty.

VI Conclusions and Discussions

We have presented a probabilistic Lidar 3D vehicle detection network that captures reliable uncertainties in the vehicle recognition and 3D bounding box regression tasks. Our proposed network models the epistemic uncertainty, i.e. the model's uncertainty in describing the data, by performing predictions several times with dropout. It also models the aleatoric uncertainty, i.e. the observation noises inherent in the Lidar sensor, by adding an auxiliary output layer.

Epistemic and aleatoric uncertainties behave differently from each other. Experimental results showed that the epistemic uncertainty is associated with the detection accuracy. The network showed high epistemic uncertainty for samples that were different from the training dataset, such as ghost objects, big vehicles, or vehicles with abnormal bounding box regressions. Since the epistemic uncertainty displays the limitations of the vehicle detection network, it can be applied to efficiently query unseen samples to improve the model during the offline training phase (active learning). For example, when a vehicle detector trained on highways is deployed to urban areas, it is necessary to adapt the object detector to the new environment. By employing the epistemic uncertainty, the vehicle detector can be efficiently improved by actively querying objects such as pedestrians and cyclists, which do not exist on highways.

Conversely, experiments showed that the aleatoric uncertainty is influenced by the detection distance and occlusion rather than by the detection accuracy, as distant vehicles or the occluded parts of vehicles exhibit high observation noises. In this way, the aleatoric uncertainty reflects the sensor limitations and can be applied to improve the tracking of a vehicle's position. Furthermore, we have shown that modeling the aleatoric uncertainty improved the detection performance, indicating that it increased the model's robustness to noisy data. Finally, computing the aleatoric uncertainty of a sample requires only a single inference pass; thus, modeling aleatoric uncertainty is useful for online deployment.

One limitation of our method is the computational cost of extracting the epistemic uncertainty, as the network needs to perform multiple feed-forward passes with dropout, which reduces the achievable frame rate on a Titan X GPU. This makes the epistemic uncertainty infeasible for online autonomous driving. Finding the trade-off between the quality of the epistemic uncertainty estimates and the number of feed-forward passes is an open question for future research.

In the future, we intend to explore uncertainties in different Lidar based object detection architectures such as one-stage detection pipeline [32]. Furthermore, we plan to investigate more factors that may influence aleatoric uncertainties, such as Lidar reflection rates and different bounding box encodings. Finally, we plan to apply our uncertainty estimation to active learning and object tracking to improve bounding box predictions, or to incorporate these uncertainty estimations as additional knowledge to the training phase (e.g. [33]).


We thank Zhongyu Lou and Florian Faion for their suggestions and inspiring discussions.