I Introduction
Knowing what an object detection model is unsure about is of paramount importance for safe autonomous driving. For example, if an autonomous car recognizes a front object as a pedestrian but is uncertain about its location, the system may warn the driver to take over the car at an early stage or slow down to avoid fatal accidents.
Deep learning has been introduced to object detection in the autonomous driving settings that use cameras [1, 2, 3], Lidar [4, 5, 6, 7, 8, 9, 10], or both [11, 12, 13, 14, 15, 16, 17], and has set the benchmark on many popular datasets (e.g. KITTI [18], Cityscapes [19]
). However, to the best of our knowledge, none of these methods allow for the estimation of uncertainty in bounding box regression. Moreover, these methods often use a softmax layer to normalize the score vector for the purpose of classifying objects, which does not necessarily represent the classification uncertainty in the model. As a result, these object detectors can only tell the human drivers
what they have seen, but not how certain they are about it.

There are two types of uncertainty that we can quantify in the object detection network. Epistemic uncertainty, or model uncertainty, indicates how uncertain an object detector is to explain the observed dataset. Aleatoric uncertainty, conversely, captures the observation noises that are inherent in sensors. For instance, detecting an abnormal object that is different from the training dataset may result in high epistemic uncertainty, while detecting a distant object may result in high aleatoric uncertainty. Capturing both uncertainties in the object detection network is indispensable for safe autonomous driving, as the epistemic uncertainty displays the limitation of detection models, while the aleatoric uncertainty can provide the sensor observation noises for object tracking.
In this work, we develop practical methods to capture epistemic and aleatoric uncertainties in a 3D vehicle detector for Lidar point clouds. Our contributions are threefold:

We extract model uncertainty and observation uncertainty for the vehicle recognition and 3D bounding box regression tasks.

We show an improvement of vehicle detection performance by modeling the aleatoric uncertainty.

We study the difference between the epistemic and aleatoric uncertainty. The former is associated with the vehicle detection accuracy, while the latter is influenced by the vehicle distance and occlusion.
The remainder of the paper is structured as follows: Sec. II summarizes related works. Sec. III illustrates the architecture of our probabilistic Lidar 3D vehicle detection neural network. Sec. IV presents the proposed methods to capture epistemic and aleatoric uncertainties. Sec. V illustrates the experimental results, followed by a conclusion and a discussion of future research in Sec. VI.
II Related Works
In this section, we first summarize methods for 3D object detection in autonomous driving for Lidar point clouds. We then summarize work related to Bayesian neural networks, which we use to extract uncertainty in our Lidar vehicle detection network.
II-A 3D Object Detection in Autonomous Driving
II-A1 3D Object Detection via Lidar Point Clouds
Many works represent 3D point clouds using voxel grids. Li [4]
discretizes point clouds into square grids, and then employs a 3D convolution neural network that outputs an objectness map and a bounding box map for each grid. Zhou
et al. [6] introduce a voxel feature encoding (VFE) layer that can learn unified features directly from the Lidar point clouds. They feed these features to a Region Proposal Network (RPN) that predicts the detections. Engelcke et al. [7] use a voting scheme to search possible object locations, and introduce special convolutional layers to tackle the sparsity of the point clouds. In addition, several works encode 3D point clouds as 2D feature maps. Li et al. [5] project range scans into a 2D front-view depth map and use a 2D fully convolutional network (FCN) to detect vehicles. Caltagirone et al. [8] and Yang et al. [10] project the Lidar point clouds into bird's eye view images and propose a road detector and a car detector, respectively.
II-A2 3D Object Detection by Combining Lidar and Camera
Lidar point clouds usually give us accurate spatial information. Camera images provide us with object appearances, which is beneficial for object classification. Therefore, it is natural to combine these two sensors [11, 12, 13, 14, 15, 16, 17]. For example, Chen et al. [11] propose MV3D, a network that generates region proposals from the bird's eye view Lidar features and then combines the proposals with the regional features from the front-view Lidar feature maps and RGB camera images for accurate 3D vehicle detection. Qi et al. [12] use RGB camera images to draw 2D object bounding boxes, which build frustums in the Lidar point cloud. Then, they use these frustums for 3D bounding box regression.
II-B Bayesian Neural Networks
Bayesian neural networks (BNNs) provide a probabilistic interpretation of neural networks [20]. Instead of placing deterministic weights in the neural network, BNNs assume a prior distribution over them. By inferring the posterior distribution over the model weights, we can extract how uncertain a neural network is about its predictions. Uncertainty estimation methods include variational inference [21], sampling techniques [22], or ensembles [23]. Recently, Gal [24] proposed a method that captures the uncertainty in BNNs at test time by sampling the network multiple times with dropout. This dropout sampling method has been applied to active learning for cancer diagnosis [25], semantic segmentation [26, 27], and image object detection in open-set conditions [28].
III Network Architecture
Our proposed probabilistic Lidar 3D object detection network is shown in Fig. 2. The network takes features from the Lidar bird's eye view (BEV) as input and feeds them into a preprocessing network. Then, a region proposal network is employed to generate candidates of regions of interest (ROIs), which are processed through the intermediate layers. These layers are built to capture the uncertainties in our object detection network. After the intermediate layers, the features are fed into two fully connected layers for 3D bounding box regression and the softmax objectness score.
III-A Input and Output Encoding
To extract the Lidar BEV features, the 3D point clouds are first projected onto a 2D grid with a resolution of m and then encoded by height, intensity and density maps [11]. The height maps are generated by dividing the point clouds into slices. As a result we obtain channel features. Fig. 2 shows exemplary input features.
The network outputs the softmax objectness score and the oriented 3D bounding boxes. The bounding boxes are encoded by their corner offsets and normalized by the diagonal length of the proposal anchors. Concretely, a bounding box in the Lidar coordinate frame is regressed as the offsets of its corners from the corners of its corresponding region proposal, divided by the proposal's diagonal length.
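To illustrate this encoding, here is a minimal numpy sketch; the corner ordering, the helper names, and the example boxes are our own assumptions rather than the paper's exact formulation:

```python
import numpy as np

def encode_box(corners, anchor_corners):
    """Corner-offset encoding: offsets of the box corners from the anchor's
    corners, normalized by the anchor's diagonal length."""
    diag = np.linalg.norm(anchor_corners.max(axis=0) - anchor_corners.min(axis=0))
    return ((corners - anchor_corners) / diag).ravel()

def decode_box(encoding, anchor_corners):
    """Inverse of encode_box: recover absolute corner coordinates."""
    diag = np.linalg.norm(anchor_corners.max(axis=0) - anchor_corners.min(axis=0))
    return encoding.reshape(anchor_corners.shape) * diag + anchor_corners

# Hypothetical 8-corner 3D boxes in the Lidar frame.
anchor = np.array([[x, y, z] for x in (0, 4) for y in (0, 2) for z in (0, 1.5)], float)
box = anchor + 0.3  # a ground-truth box shifted slightly off the anchor
encoding = encode_box(box, anchor)
```

A round trip `decode_box(encode_box(...))` recovers the original corners, which is a convenient sanity check for any such encoding.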
III-B Feature Preprocessing Network
To process the Lidar feature images, we use the ResNet8 [29] architecture, which contains several residual blocks (one block architecture is shown in Fig. 3) with an increasing number of kernels. The last convolution layer is upsampled by a factor of two for generating region proposals and detecting objects. In this way, the network can detect small vehicles that occupy only a few grid cells [11].
III-C Region Proposal Networks
We follow the Faster R-CNN pipeline [30] to generate BEV 3D region proposals. First, for each feature map pixel we generate nine 2D anchors from three box areas and three aspect ratios. Then, the 2D anchors are projected into 3D space by adding a fixed height value, which is selected large enough to enclose all daily vehicles.
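Such an anchor generator can be sketched as follows; the concrete area and aspect-ratio values below are placeholders, since the paper's values are not reproduced here:

```python
import numpy as np

def make_anchors(areas, ratios):
    """Generate one (w, h) anchor per (area, ratio) pair, such that
    w * h == area and w / h == ratio."""
    anchors = []
    for area in areas:
        for ratio in ratios:
            h = np.sqrt(area / ratio)  # solve w*h=area with w=ratio*h
            w = ratio * h
            anchors.append((w, h))
    return np.array(anchors)

# Three placeholder areas x three placeholder ratios -> nine anchors per pixel.
anchors = make_anchors(areas=[16.0, 32.0, 64.0], ratios=[0.5, 1.0, 2.0])
```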
III-D Intermediate Layers
To further process the Lidar features and to extract uncertainties, we design several fully connected hidden layers, each of which is followed by a dropout layer.
IV Capturing Uncertainty in Lidar 3D Vehicle Detection
As introduced in Sec. I, there are two different types of uncertainty that can be captured by our proposed probabilistic Lidar 3D vehicle detector: the epistemic uncertainty that describes the uncertainty in the model, and the aleatoric uncertainty that describes the observation noises.
We employ the intermediate layers (Fig. 2) as a Bayesian neural network to extract uncertainties. Let us denote by $x$ a ROI candidate generated by the RPN at test time, and by $y$ the prediction of object labels or bounding boxes. We want to estimate the posterior distribution of the prediction $p(y|x, \mathbf{X}, \mathbf{Y})$, where $\mathbf{X}$ denotes the training dataset and $\mathbf{Y}$ its ground truth. To do this, we marginalize the prediction over the weights $\mathbf{W}$ of the fully connected layers through $p(y|x, \mathbf{X}, \mathbf{Y}) = \int p(y|x, \mathbf{W})\, p(\mathbf{W}|\mathbf{X}, \mathbf{Y})\, d\mathbf{W}$, where $p(y|x, \mathbf{W})$ denotes the network output. By estimating the weight posterior $p(\mathbf{W}|\mathbf{X}, \mathbf{Y})$ and performing Bayesian inference, we can extract the epistemic uncertainty; by estimating the observation likelihood $p(y|x, \mathbf{W})$, we can obtain the aleatoric uncertainty.
IV-A Capturing Epistemic Uncertainty
In order to extract the epistemic uncertainty (or model uncertainty) in our vehicle detection network, we need to calculate the weight posterior $p(\mathbf{W}|\mathbf{X}, \mathbf{Y})$. However, performing this inference analytically is in general intractable, so approximation techniques are required. Recently, Gal [24] showed that dropout can be used as approximate inference in a Bayesian neural network: during the training process, performing dropout with stochastic gradient descent is equivalent to optimizing an approximate posterior distribution with Bernoulli distributions over the weights. Gal [24] also showed that by performing the network forward passes several times with dropout during test time, we can draw samples from the model's posterior distribution, and thus obtain the epistemic uncertainty.

We employ this dropout method to capture the model uncertainty in the object detection network. Our network is trained using dropout. For each region proposal at test time, it performs $T$ forward passes with dropout. The output of the network contains the softmax scores $\{s_t\}_{t=1}^{T}$ and the bounding box regressions $\{v_t\}_{t=1}^{T}$, with $s_t$ and $v_t$ being the softmax score and the regression output of the $t$-th forward pass, respectively. In the following, we illustrate how to use these outputs to measure the epistemic uncertainty.
IV-A1 Extracting Vehicle Probability and Epistemic Classification Uncertainty
The vehicle probability $p(y=\text{vehicle}|x)$, i.e. the probability that a sample is recognized as a vehicle, is approximated by the mean softmax score of the $T$ forward passes according to

$p(y=\text{vehicle}|x) \approx \frac{1}{T}\sum_{t=1}^{T} s_t$.   (1)

Note that if we set $T=1$ and do not drop units at test time, $p(y=\text{vehicle}|x)$ is approximated by a single softmax score, i.e. the point-wise prediction made by most object detection networks. However, as Gal [24] mentioned, passing the point estimate can underestimate the uncertainty of samples that are different from the training dataset. In order to extract more accurate model uncertainty, the network needs to perform multiple forward passes with dropout (i.e. $T>1$).
We further use the Shannon entropy (SE) and the mutual information (MI) to measure the classification uncertainty [24]. For a region proposal $x$ with vehicle probability $p = p(y=\text{vehicle}|x)$, its SE score is calculated via

$SE = -p \log p - (1-p) \log(1-p)$.   (2)

SE captures the uncertainty in the prediction output. In this work, we use the natural logarithm, so the SE score ranges in $[0, \log 2]$. When $p=0$ or $p=1$, the network is certain about its prediction, resulting in $SE=0$. The SE score reaches its peak when the network is most uncertain about its prediction, namely, $p=0.5$.
Different from SE, MI calculates the model's confidence in the output. It measures the information difference between the prediction probability and the posterior over the model parameters [24] according to

$MI = SE + \frac{1}{T}\sum_{t=1}^{T} \big( s_t \log s_t + (1-s_t) \log(1-s_t) \big)$.   (3)

MI ranges in $[0, \log 2]$, with a large value indicating high epistemic classification uncertainty.
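Both scores can be computed directly from the sampled softmax outputs; a minimal numpy sketch, where the two example sample sets are made-up illustrations:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy of a probability vector, using the natural logarithm."""
    p = np.clip(p, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p))

def mutual_information(samples):
    """MI between prediction and model weights from T dropout samples.
    samples: (T, C) array of per-pass softmax outputs."""
    p_mean = samples.mean(axis=0)
    entropy_of_mean = shannon_entropy(p_mean)            # SE of the averaged prediction
    mean_of_entropies = np.mean([shannon_entropy(s) for s in samples])
    return entropy_of_mean - mean_of_entropies

# All passes agree -> low SE, (near-)zero MI.
agree = np.array([[0.9, 0.1]] * 10)
# Passes disagree -> the averaged prediction is uncertain and MI is large.
disagree = np.array([[0.99, 0.01], [0.01, 0.99]] * 5)
```

Note the distinction this makes concrete: SE is large whenever the averaged prediction is uncertain, while MI is large only when the individual passes disagree with each other.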
IV-A2 Extracting 3D Bounding Box and Epistemic Spatial Uncertainty
We estimate the bounding box position of a region proposal in the Lidar coordinate frame, denoted as $\bar{v}$. For this purpose, we calculate the mean value of the regression outputs of the $T$ forward passes. To do this, we first transform each bounding box prediction $v_t$ into the Lidar coordinate frame; the mean is then calculated as

$\bar{v} = \frac{1}{T}\sum_{t=1}^{T} v_t$.   (4)
To estimate the epistemic spatial (regression) uncertainty in the bounding box prediction, we use the total variance of the covariance matrix of the $T$ forward pass regressions. The covariance matrix is calculated via $C = \frac{1}{T}\sum_{t=1}^{T} (v_t - \bar{v})(v_t - \bar{v})^{\top}$. The total variance is the trace of the covariance matrix according to

$TV = \operatorname{tr}(C)$.   (5)

This score ranges in $[0, \infty)$, with a large value indicating high epistemic spatial uncertainty.
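The total variance is straightforward to compute from the stacked regression samples; the tight and scattered example predictions below are hypothetical:

```python
import numpy as np

def total_variance(regressions):
    """Trace of the sample covariance of T bounding-box regressions.
    regressions: (T, D) array, one D-dimensional box vector per forward pass."""
    centered = regressions - regressions.mean(axis=0)
    cov = centered.T @ centered / len(regressions)
    return np.trace(cov)

rng = np.random.default_rng(1)
# Hypothetical example: tight vs. scattered box regressions over T=40 passes
# of a 24-dimensional corner encoding.
tight = rng.normal(0.0, 0.01, size=(40, 24))
scattered = rng.normal(0.0, 1.0, size=(40, 24))
```

Because only the trace is needed, one can equivalently sum the per-dimension variances and skip forming the full covariance matrix.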
IV-B Capturing Aleatoric Uncertainty
So far, we have explained how to use the dropout sampling technique to approximate the posterior distribution $p(\mathbf{W}|\mathbf{X}, \mathbf{Y})$ and to extract the epistemic uncertainty. To capture the aleatoric uncertainty, we need to model the observation likelihood $p(y|x, \mathbf{W})$. In the classification task we can model it by the softmax function, with $y$ referring to the object labels. In the 3D bounding box regression task, we model the observation likelihood as a multivariate Gaussian distribution with diagonal covariance matrix via

$p(y|x, \mathbf{W}) = \mathcal{N}\big(f^{\mathbf{W}}(x), \Sigma(x)\big)$.   (6)

Here, $f^{\mathbf{W}}(x)$ is the bounding box prediction. The covariance $\Sigma(x) = \operatorname{diag}(\sigma_1^2, \dots, \sigma_D^2)$ is a $D$-dimensional set of parameters, with each element $\sigma_d^2$ representing the observation noise of one prediction in the bounding box regression. We can obtain the observation noises by adding an output regression layer to our vehicle detection network. In addition to the softmax scores and bounding box regressions, the network now predicts the observation noises, where we use $\log \sigma_d^2$ for numerical stability. In the training phase, we modify the cost function for the bounding box regression through

$L_{reg} = \frac{1}{2}\sum_{d=1}^{D} \frac{1}{\sigma_d^2} \big( y_d - f^{\mathbf{W}}(x)_d \big)^2 + \frac{1}{2}\sum_{d=1}^{D} \log \sigma_d^2$.   (7)
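A minimal numpy sketch of this attenuated regression loss, following Kendall and Gal [26]; the target and prediction values are hypothetical:

```python
import numpy as np

def attenuated_l2_loss(y_true, y_pred, log_sigma2):
    """Regression loss with a learned aleatoric noise term. The network
    predicts log(sigma^2) per output dimension for numerical stability."""
    residual = (y_true - y_pred) ** 2
    # exp(-log_sigma2) = 1/sigma^2 down-weights residuals under high noise;
    # the log_sigma2 term penalizes predicting large noise everywhere.
    return np.mean(0.5 * np.exp(-log_sigma2) * residual + 0.5 * log_sigma2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.5, 2.0])
low_noise = attenuated_l2_loss(y_true, y_pred, np.zeros(3))      # sigma^2 = 1
high_noise = attenuated_l2_loss(y_true, y_pred, np.full(3, 2.0))  # sigma^2 = e^2
```

For a large residual, predicting a high noise level reduces the loss, which is exactly the attenuation effect described in the following paragraph.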
As [26] mentioned, this loss function can increase the network's robustness when learning from a noisy dataset. When the training data has high aleatoric uncertainty, the model is penalized by the $\log \sigma_d^2$ term, while the residual term is down-weighted since $1/\sigma_d^2$ becomes small. Consequently, data with high uncertainty contributes less to the loss. Note that instead of performing approximate inference as when extracting the epistemic uncertainty, here we perform MAP inference.
IV-C Implementation
Training the neural network is achieved in a multi-loss end-to-end learning fashion. We use a smooth L1 loss and a cross-entropy loss for the oriented 3D box regression ($L_{reg}$) and classification ($L_{cls}$), respectively. We also use the same loss functions for the region proposal and object category outputs in the Region Proposal Network ($L_{reg}^{rpn}$ and $L_{cls}^{rpn}$). The final loss function is formulated as:

$L = \lambda_1 (L_{cls} + L_{reg}) + \lambda_2 (L_{cls}^{rpn} + L_{reg}^{rpn})$,   (8)

where $\lambda_1$ and $\lambda_2$ are fixed weighting factors. We employ regularization and dropout to prevent overfitting and to perform the posterior distribution approximation. The network is trained using the Adam optimizer; we then reduce the learning rate and train further steps for fine-tuning.
To solely extract the epistemic uncertainty, the network performs $T$ forward passes with dropout during test time. We fix the dropout rate, though it could be tuned by grid search to maximize the validation log likelihood [24] or by gradient methods [31]. A dropout rate of 0.5 introduces the highest epistemic uncertainty in the model. We also fix $T$. To solely extract the aleatoric uncertainty, the network is trained with the loss function modified according to Eq. 7; it then performs the feed-forward pass only once, without dropout, during test time. To extract both epistemic and aleatoric uncertainty, the network trained with Eq. 7 needs to perform multiple feed-forward passes with dropout.
V Experimental Results
In this section, we experimentally evaluate our proposed probabilistic Lidar 3D object detection network. We show that modeling the aleatoric uncertainty improves the vehicle detection performance. We also show that the network captures model uncertainty and observation uncertainty, which behave differently from each other: The model (epistemic) uncertainty is influenced by the detection accuracy, but is unaffected by the vehicle distance. Conversely, the observation (aleatoric) uncertainty is associated with the vehicle distance and occlusion, and has little relationship to detection accuracy.
V-A Experimental Setup
Our proposed method was evaluated on the KITTI raw dataset [18]. From the dataset, we randomly chose a set of drives for training and six other drives for testing. We considered the Lidar point clouds within fixed ranges along the $x$, $y$, and $z$ axes (the Lidar coordinate system is illustrated in Fig. 1). After discretization, each bird's eye view input channel was a 2D image. We only evaluated the detections that were visible in the front view of the camera.
The 3D vehicle proposals were considered to be detected when their probability scores exceeded a fixed threshold. Their performance was evaluated using Intersection over Union (IoU) thresholds ranging from 0.1 to 0.8. The IoU score rates the similarity between a predicted vehicle and its ground truth: a higher IoU value indicates a more accurate prediction.
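For reference, IoU for two axis-aligned boxes can be computed as below; this is a BEV simplification, since the paper's oriented 3D boxes additionally require handling rotation and height overlap:

```python
def iou_axis_aligned(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extent along each axis (zero if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```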
V-B 3D Vehicle Detection Performance
We first evaluated the 3D vehicle detection performance of our proposed network that captures epistemic and aleatoric uncertainty. As a baseline method, we trained a vehicle detector called Non-Bayesian, which did not explicitly model the observation noise or the weight posterior distribution $p(\mathbf{W}|\mathbf{X}, \mathbf{Y})$. We then compared the baseline method with the vehicle detectors that captured epistemic uncertainty (Epistemic), aleatoric uncertainty (Aleatoric), or both (Epistemic+Aleatoric). We used scores combining the 3D bounding box precision and the recall value at different IoU thresholds to evaluate the detection performance. Results are illustrated in Tab. I. The vehicle detectors that captured aleatoric uncertainty (Aleatoric and Epistemic+Aleatoric) consistently outperformed the baseline method. This is because modeling the aleatoric uncertainty increased the model's robustness when dealing with noisy input data. The network Epistemic, conversely, slightly underperformed the baseline method, as the network capability was reduced by dropping out some of its hidden units.
IoU threshold  
Network  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8 
Non-Bayesian (baseline)  
Epistemic  
Aleatoric  0.750  0.736  0.716  0.688  0.651  0.533  0.253  0.044 
Epistemic+Aleatoric 
V-C Understanding the Epistemic Uncertainty in 3D Object Detection
We then evaluated how our proposed network captured the epistemic uncertainty in the 3D object detection task. Fig. 4(a) and Fig. 4(b) demonstrate the mean values of Shannon entropy (SE) and mutual information (MI) averaged over all predicted samples that lay within different IoU intervals. Predictions with higher IoU scores are more accurate. The results show that SE behaved similarly to the MI score. Both scores were well associated with the prediction accuracy, as they became smaller with increasing IoU. We also evaluated the epistemic uncertainty in the bounding box regression. Following Eq. 5, we calculated the mean values of the total variance along the $x$, $y$, and $z$ axes. Fig. 4(c) shows that the total variance was largest along one of the horizontal axes, indicating that the network was most uncertain when estimating the vehicles' positions in that direction, and smallest along the $z$ axis, as there is little variance in the vehicles' height. Fig. 4(c) also shows that the regression uncertainty was affected by the prediction accuracy: the total variance decreased for larger IoU values.
To better understand the epistemic uncertainty, we plot the SE and MI values for each predicted vehicle with high IoU, referring to the most accurate detections (Fig. 5(a)), and with low IoU, the worst detections (Fig. 5(b)). The SE and MI values for the accurate detections mostly lay at the bottom left of the figure, showing low epistemic classification uncertainty. In contrast, the inaccurate detections were widely spread in Fig. 5(b), demonstrating higher uncertainty. We also analyzed the distribution of the total variance for both groups. Fig. 5(c) and Fig. 5(d) show large epistemic regression uncertainties when the network made inaccurate detections. In conclusion, our proposed Lidar 3D object detector captured reliable epistemic classification and regression uncertainty: objects with low classification and regression uncertainty were more likely to be detected accurately.
Qualitative Observations. We observed that big vehicles such as vans and trucks, as well as “ghost” objects (i.e. false positives), often showed high levels of epistemic classification uncertainty (e.g. obj 5 and obj 0 in Fig. 6). Besides, detections with abnormal bounding boxes, e.g. boxes with unusually small length and large height (e.g. obj 6 in Fig. 6), often showed high epistemic spatial uncertainty. This is because these detections differed from our training dataset, which contained no ghost objects, no objects with abnormal shapes, and only a few big vehicles. Therefore, our Lidar 3D object detector was uncertain about them and produced high epistemic uncertainty scores. Such information can be used to efficiently improve the vehicle detector in an active learning paradigm: the detector actively queries the unseen samples with high epistemic uncertainty. More detection results can be found in the supplementary video.
V-D Understanding the Aleatoric Uncertainty in 3D Object Detection
Finally, we evaluated the aleatoric uncertainty, i.e. the uncertainty that captures the observation noises in the Lidar point clouds. Here, we focused on the 3D bounding box regression, whose uncertainty is quantified by the predicted variances $\sigma_d^2$ (Sec. IV-B). In this work, we did not explicitly model the parameters to quantify the aleatoric classification uncertainty; we leave this as an interesting topic for future research.
We computed the mean value of the total variance of the aleatoric spatial uncertainty with regard to different IoU intervals, similar to the experiment in Sec. V-C. To do this, we summed up the variances of the observation noises of an object along the $x$, $y$, and $z$ axes, respectively. Fig. 7 shows that the aleatoric spatial uncertainty was little related to the detection accuracy, in contrast to the epistemic uncertainty shown in Fig. 4(c).
We further analyzed the behavior of the aleatoric uncertainty with regard to the distance between the ego-vehicle and the detected vehicles. We used the Pearson correlation coefficient (PCC) to quantify the linear correlation between this distance and the total variance along the $x$, $y$, and $z$ axes, as well as the total variance of the whole covariance matrix, for all detections in the test dataset. Tab. II shows the results. The aleatoric uncertainty was positively correlated with distance, indicating that a more distant vehicle is more difficult to localize. This is due to the fact that the point cloud of an object becomes increasingly sparse at a large distance. Contrarily, the epistemic uncertainty showed little relationship with the distance. This conclusion was also supported by an exemplary sequence of vehicle detections shown in Fig. 8: as the car moved away from the ego-vehicle, the aleatoric uncertainties along the $x$, $y$, and $z$ axes continuously increased, whereas the epistemic uncertainty showed no clear tendency.
Network  $x$ axis  $y$ axis  $z$ axis  All 

Epistemic  
Aleatoric  0.569  0.412  0.497  0.537 
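The PCC used in Tab. II can be computed directly; the per-detection distance and variance values below are made-up illustrations, not the paper's measurements:

```python
import numpy as np

def pearson_correlation(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical values: per-detection distance vs. total aleatoric variance.
distance = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
variance = np.array([0.02, 0.03, 0.06, 0.08, 0.15, 0.20])
pcc = pearson_correlation(distance, variance)
```

A value near +1 indicates the roughly linear increase of the aleatoric variance with distance reported above; `numpy.corrcoef` gives the same quantity.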
The observation noises were affected not only by distance, but by occlusion as well. We found that the corners of the bounding boxes which face directly towards the Lidar sensor consistently displayed smaller aleatoric spatial uncertainty than the occluded corners. For instance, the red scores in Fig. 9 represent the sum of the spatial uncertainties of the vehicles' corners that face towards the Lidar sensor, and the blue scores represent the uncertainties of the occluded corners. These values were calculated by summing the observation noises of the front corners and the back corners separately. For all detections in Fig. 9, the aleatoric spatial uncertainties of the occluded corners were consistently higher than those of the corners facing the ego-vehicle.
VI Conclusions and Discussions
We have presented a probabilistic Lidar 3D vehicle detection network that captures reliable uncertainties in the vehicle recognition and 3D bounding box regression tasks. Our proposed network models the epistemic uncertainty, i.e. the model's uncertainty in describing the data, by performing predictions several times with dropout. Our network also models the aleatoric uncertainty, i.e. the observation noises inherent in Lidar, by adding an auxiliary output layer.
Epistemic and aleatoric uncertainties behave differently from each other. Experimental results showed that the epistemic uncertainty is associated with the detection accuracy. The network showed high epistemic uncertainty for samples that were different from the training dataset, such as ghost objects, big vehicles, or vehicles with abnormal bounding box regressions. Since the epistemic uncertainty displays the limitations of the vehicle detection network, it is highly valuable for efficiently querying unseen samples to improve the model during the offline training phase (active learning). For example, when a vehicle detector trained on highways is deployed in urban areas, it is necessary to adapt the object detector to this new environment. By employing the epistemic uncertainty, the vehicle detector can be efficiently improved by actively querying objects such as pedestrians and cyclists, which do not exist on highways.
Conversely, the experiments showed that the aleatoric uncertainty is influenced by the detection distance and occlusion rather than by the detection accuracy, as distant vehicles or the occluded parts of vehicles contain high observation noises. In this way, the aleatoric uncertainty reflects the sensor limitations and can be applied to improve the tracking of a vehicle's position. Furthermore, we have shown that modeling the aleatoric uncertainty improved the detection performance, indicating that it increased the model's robustness to noisy data. Finally, computing the aleatoric uncertainty of a sample requires only a single inference, which makes modeling aleatoric uncertainty useful for online deployment.
One limitation of our method is the computational cost of extracting the epistemic uncertainty, as the network needs to perform multiple feed-forward passes with dropout, which runs at a reduced frame rate on a Titan X GPU. This makes the epistemic uncertainty infeasible for online autonomous driving. Finding the trade-off between the quality of the epistemic uncertainty and the number of feed-forward passes is an open question for further research.
In the future, we intend to explore uncertainties in different Lidar-based object detection architectures, such as one-stage detection pipelines [32]. Furthermore, we plan to investigate more factors that may influence the aleatoric uncertainty, such as Lidar reflection rates and different bounding box encodings. Finally, we plan to apply our uncertainty estimation to active learning and object tracking to improve bounding box predictions, or to incorporate these uncertainty estimates as additional knowledge in the training phase (e.g. [33]).
Acknowledgment
We thank Zhongyu Lou and Florian Faion for their suggestions and inspiring discussions.
References

[1] X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler, and R. Urtasun, “Monocular 3d object detection for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2147–2156.
 [2] X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun, “3d object proposals for accurate object class detection,” in Advances in Neural Information Processing Systems, 2015, pp. 424–432.
 [3] A. Mousavian, D. Anguelov, J. Flynn, and J. Košecká, “3d bounding box estimation using deep learning and geometry,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2017, pp. 5632–5640.
 [4] B. Li, “3d fully convolutional network for vehicle detection in point cloud,” in Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on. IEEE, 2017, pp. 1513–1518.
 [5] B. Li, T. Zhang, and T. Xia, “Vehicle detection from 3d lidar using fully convolutional network,” Robotics: Science and Systems, 2016.
 [6] Y. Zhou and O. Tuzel, “Voxelnet: End-to-end learning for point cloud based 3d object detection,” arXiv preprint arXiv:1711.06396, 2017.
 [7] M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner, “Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks,” in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 1355–1361.
 [8] L. Caltagirone, S. Scheidegger, L. Svensson, and M. Wahde, “Fast lidar-based road detection using fully convolutional neural networks,” in IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017, pp. 1019–1024.
 [9] A. Asvadi, L. Garrote, C. Premebida, P. Peixoto, and U. J. Nunes, “Depthcn: Vehicle detection using 3d-lidar and convnet,” in IEEE International Conference on Intelligent Transportation Systems, 2017.
 [10] B. Yang, W. Luo, and R. Urtasun, “Pixor: Real-time 3d object detection from point clouds,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7652–7660.
 [11] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3d object detection network for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
 [12] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, “Frustum pointnets for 3d object detection from rgb-d data,” arXiv preprint arXiv:1711.08488, 2017.
 [13] D. Xu, D. Anguelov, and A. Jain, “Pointfusion: Deep sensor fusion for 3d bounding box estimation,” arXiv preprint arXiv:1711.10871, 2017.
 [14] J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. Waslander, “Joint 3d proposal generation and object detection from view aggregation,” arXiv preprint arXiv:1712.02294, 2017.
 [15] Z. Wang, W. Zhan, and M. Tomizuka, “Fusing bird view lidar point cloud and front view camera image for deep object detection,” arXiv preprint arXiv:1711.06703, 2017.
 [16] X. Du, M. H. Ang Jr, S. Karaman, and D. Rus, “A general pipeline for 3d detection of vehicles,” arXiv preprint arXiv:1803.00387, 2018.
 [17] A. Pfeuffer and K. Dietmayer, “Optimal sensor data fusion architecture for object detection in adverse weather conditions,” in Proceedings of International Conference on Information Fusion, 2018, pp. 2592–2599.
 [18] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
 [19] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
 [20] D. J. MacKay, “A practical bayesian framework for backpropagation networks,” Neural Computation, vol. 4, no. 3, pp. 448–472, 1992.
 [21] G. E. Hinton and D. Van Camp, “Keeping the neural networks simple by minimizing the description length of the weights,” in Proceedings of the Sixth Annual Conference on Computational Learning Theory. ACM, 1993, pp. 5–13.
 [22] A. Graves, “Practical variational inference for neural networks,” in Advances in Neural Information Processing Systems, 2011, pp. 2348–2356.
 [23] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy, “Deep exploration via bootstrapped dqn,” in Advances in neural information processing systems, 2016, pp. 4026–4034.
 [24] Y. Gal, “Uncertainty in deep learning,” Ph.D. dissertation, University of Cambridge, 2016.
 [25] Y. Gal, R. Islam, and Z. Ghahramani, “Deep bayesian active learning with image data,” in International Conference on Machine Learning, 2017, pp. 1183–1192.
 [26] A. Kendall and Y. Gal, “What uncertainties do we need in bayesian deep learning for computer vision?” in Advances in Neural Information Processing Systems, 2017, pp. 5580–5590.
 [27] A. Kendall, V. Badrinarayanan, and R. Cipolla, “Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding,” arXiv preprint arXiv:1511.02680, 2015.
 [28] D. Miller, L. Nicholson, F. Dayoub, and N. Sünderhauf, “Dropout sampling for robust object detection in open-set conditions,” arXiv preprint arXiv:1710.06677, 2017.
 [29] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision. Springer, 2016, pp. 630–645.
 [30] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99.
 [31] Y. Gal, J. Hron, and A. Kendall, “Concrete dropout,” in Advances in Neural Information Processing Systems, 2017, pp. 3584–3593.
 [32] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in European Conference on Computer Vision. Springer, 2016, pp. 21–37.
 [33] H. Hu, J. Gu, Z. Zhang, J. Dai, and Y. Wei, “Relation networks for object detection,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.