Uncertainty-Aware Learning from Demonstration using Mixture Density Networks with Sampling-Free Variance Modeling

09/03/2017, by Sungjoon Choi et al., Korea University

In this paper, we propose an uncertainty-aware learning from demonstration method built on a novel uncertainty estimation technique that utilizes a mixture density network, which is appropriate for modeling complex and noisy human behaviors. The proposed uncertainty acquisition can be done with a single forward path without Monte Carlo sampling and is thus suitable for real-time robotics applications. The properties of the proposed uncertainty measure are analyzed through three different synthetic examples: absence of data, heavy measurement noise, and composition of functions scenarios. We show that each case can be distinguished using the proposed uncertainty measure and present an uncertainty-aware learning from demonstration method for autonomous driving that exploits this property. The proposed uncertainty-aware learning from demonstration method outperforms the compared methods in terms of safety on a complex real-world driving dataset.


I Introduction

Recently, deep learning has been successfully applied to a diverse range of research areas including computer vision [1], natural language processing [2], and robotics [3]. As long as deep networks remain in a cyber environment without interacting with an actual physical system, mis-predictions or malfunctions may not cause catastrophic disasters. However, when deep learning methods are used in a real physical system involving human beings, such as an autonomous car, safety issues must be considered appropriately [4].

In fact, in May 2016, a fatal car accident occurred when a low-level image processing component of an advanced driver assistance system (ADAS) failed to discriminate the white side of a trailer from a bright sky [5]. In this regard, Kendall and Gal [6] proposed an uncertainty modeling method for deep learning that estimates both aleatoric uncertainty, the noise inherent in the data generating process, and epistemic uncertainty, which captures our ignorance about the predictive model. However, this method requires computationally heavy Monte Carlo sampling, which makes it unsuitable for real-time applications.

In this paper, we present a novel uncertainty estimation method for regression tasks using a deep neural network and its application to learning from demonstration (LfD). Specifically, a mixture density network (MDN) [7] is used to model the underlying process, as it is more appropriate for describing complex distributions [8], e.g., human demonstrations. We first present an uncertainty modeling method for predictions made with an MDN, which can be computed with a single MDN forward path without Monte Carlo sampling. This sampling-free property makes it suitable for real-time robotic applications compared to existing uncertainty modeling methods that require multiple models [9] or sampling [10, 11, 6]. Furthermore, as an MDN is appropriate for modeling complex distributions [12] compared to the density network used in [6, 12] or a standard neural network for regression, the experimental results on autonomous driving show that it can better represent the underlying policy of a driver given complex and noisy demonstrations.

The main contributions of this paper are twofold. First, we present a sampling-free uncertainty estimation method utilizing an MDN and show that the resulting variance can be decomposed into two parts, explained and unexplained variance, which indicate our ignorance about the model and the measurement noise, respectively. The properties of the proposed uncertainty modeling method are analyzed through three different cases: absence of data, heavy measurement noise, and composition of functions scenarios. Second, building on this analysis, we propose an uncertainty-aware learning from demonstration (LfD) method. We first train an aggressive controller in a simulated environment with an MDN and use the explained variance of the MDN to switch its mode to a rule-based conservative controller. When applied to a complex real-world driving dataset from US Highway 101 [13], the proposed uncertainty-aware LfD outperforms the compared methods in terms of driving safety, as out-of-distribution inputs, often referred to as covariate shift [14], are successfully captured by the proposed explained variance.

The remainder of this paper is organized as follows. Related work and preliminaries regarding uncertainty modeling in deep learning are introduced in Sections II and III. The proposed uncertainty modeling method with an MDN is presented in Section IV and analyzed in Section V. Finally, in Section VI, we present an uncertainty-aware learning from demonstration method and successfully apply it to an autonomous driving task on a real-world driving dataset by deploying and controlling a virtual car on the road.

II Related Work

Despite the great success of deep learning across research areas, practical methods for estimating the uncertainty of deep network predictions have only recently become actively studied. In the seminal study of Gal and Ghahramani [10], a practical method for estimating the predictive variance of a deep neural network is proposed, computed from the sample mean and variance of stochastic forward paths, i.e., dropout [15]. The main contribution of [10] is to present a connection between an approximate Bayesian neural network and a sparse Gaussian process. This method is often referred to as Monte Carlo (MC) dropout and has been successfully applied to modeling model uncertainty in regression tasks, classification tasks, and reinforcement learning with Thompson sampling. Interested readers are referred to [11] for more comprehensive information about uncertainty in deep learning and Bayesian neural networks.

Whereas [10] uses a standard neural network, [9] uses a density network whose output consists of both the mean and the variance of a prediction, trained with a negative log likelihood criterion. Adversarial training is also applied by incorporating artificially generated adversarial examples. Furthermore, multiple models are trained on different training sets to form an ensemble, where the sample variance of the resulting mixture distribution is used to estimate the uncertainty of a prediction. Interestingly, the usage of a mixture density network is encouraged for modeling more complex distributions. Guillaumes [8] compared existing uncertainty acquisition methods, including [9, 10].

Kendall and Gal [6] decomposed the predictive uncertainty into two major types: aleatoric uncertainty and epistemic uncertainty. First, epistemic uncertainty captures our ignorance about the predictive model. It is often referred to as a reducible uncertainty, as this type of uncertainty can be reduced as we collect more training data from diverse scenarios. On the other hand, aleatoric uncertainty captures irreducible aspects of the predictive variance, such as the randomness inherent in coin flipping. To this end, Kendall and Gal utilized a density network similar to [9] but used a slightly different cost function for numerical stability. The variance output directly from the density network indicates heteroscedastic aleatoric uncertainty, and the overall predictive uncertainty of the output $y$ given an input $\mathbf{x}$ is approximated using

$$\mathrm{Var}(y) \approx \frac{1}{T}\sum_{t=1}^{T}\hat{\mu}_t^2 - \Big(\frac{1}{T}\sum_{t=1}^{T}\hat{\mu}_t\Big)^2 + \frac{1}{T}\sum_{t=1}^{T}\hat{\sigma}_t^2 \qquad (1)$$

where $\{(\hat{\mu}_t, \hat{\sigma}_t^2)\}_{t=1}^{T}$ are samples of the mean and variance functions of a density network obtained with $T$ stochastic forward paths.
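To make the computation in (1) concrete, here is a minimal NumPy sketch (the function and variable names are ours, not from [6]) that estimates the predictive variance from $T$ stochastic forward paths, each of which returns a mean and a variance:

```python
import numpy as np

def mc_predictive_variance(mu_samples, var_samples):
    """Approximate Var(y) as in (1): E[mu^2] - E[mu]^2 + E[sigma^2]."""
    mu = np.asarray(mu_samples)    # shape (T,), one mean per forward path
    var = np.asarray(var_samples)  # shape (T,), one variance per path
    epistemic = np.mean(mu ** 2) - np.mean(mu) ** 2  # spread of the means
    aleatoric = np.mean(var)                         # average predicted noise
    return epistemic + aleatoric
```

Note that every call to this estimator requires $T$ forward passes of the network, which is exactly the cost the method proposed in this paper avoids.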

Modeling and incorporating uncertainty in predictions has been widely used in robotics, mostly to ensure safety in the training phase of reinforcement learning [16] or to avoid false classifications of a learned cost function [17]. In [16], an uncertainty-aware collision prediction method is proposed by training multiple deep neural networks using bootstrapping and dropout. Once the networks are trained, the sample mean and variance of multiple stochastic forward paths through the different networks are used to compute the predictive variance. When the predictive variance is higher than a certain threshold, a risk-averse cost function is used instead of the learned cost function, leading to low-speed control. This approach can be seen as extending [10] by adding additional randomness through bootstrapping. However, as multiple networks are required, the computational complexity of both the training and test phases increases.

In [17], safe visual navigation is achieved by training a deep network to model the probability of collision for a receding horizon control problem. To handle out-of-distribution cases, a novelty detection algorithm is presented in which the reconstruction loss of an autoencoder is used as a measure of novelty. Once the current visual input is detected to be novel (high reconstruction loss), a rule-based collision estimate is used instead of the learning-based estimate. This switching between learning-based and rule-based approaches is similar to our approach. However, we focus on modeling the uncertainty of a policy function, which involves both input and output pairs, whereas the novelty detection in [17] can only consider input data.

The proposed method can be regarded as extending previous uncertainty estimation approaches by using a mixture density network (MDN) and presenting a novel variance estimation method for an MDN. We further show that a single MDN without MC sampling is sufficient to model both the reducible and irreducible parts of the uncertainty. In fact, we show that it can better model both types of uncertainty in synthetic examples.

III Preliminaries

III-A Uncertainty Acquisition in Deep Learning

In [6], Kendall and Gal proposed two types of uncertainties: aleatoric and epistemic. These two types capture different aspects of predictive uncertainty, i.e., a measure of uncertainty when we make a prediction using an approximation method such as a deep network. First, aleatoric uncertainty captures the uncertainty in the data generating process, e.g., the inherent randomness of coin flipping or measurement noise. This type of uncertainty cannot be reduced even if we collect more training data. On the other hand, epistemic uncertainty models the ignorance of the predictive model and can be explained away given enough training data. Readers are referred to [11] for further details.

Let $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ be a dataset of $N$ samples. For notational simplicity, we assume that an input $\mathbf{x}$ and an output $y$ are a $d$-dimensional vector, i.e., $\mathbf{x} \in \mathbb{R}^d$, and a scalar, i.e., $y \in \mathbb{R}$, respectively. Suppose that

$$y = f(\mathbf{x}) + \epsilon$$

where $f$ is a target function and the measurement error $\epsilon$ follows a zero-mean Gaussian distribution with a variance $\sigma_w^2$, i.e., $\epsilon \sim \mathcal{N}(0, \sigma_w^2)$. Note that, in this case, the variance of $\epsilon$ corresponds to the aleatoric uncertainty, i.e., $\sigma_a^2 = \sigma_w^2$, which we will denote as aleatoric uncertainty. Similarly, epistemic uncertainty will be denoted as $\sigma_e^2$.

Suppose that we train $\hat{f}$ to approximate $f$ from $\mathcal{D}$ and get the prediction $\hat{y} = \hat{f}(\mathbf{x})$. Then, we can see that

$$\mathrm{Var}(y - \hat{y}) = \mathbb{E}\big[(f(\mathbf{x}) - \hat{f}(\mathbf{x}))^2\big] + \sigma_w^2 = \sigma_e^2 + \sigma_a^2$$

which indicates that the total predictive variance is the sum of the aleatoric and epistemic uncertainties.

Correctly acquiring and distinguishing each type of uncertainty is important in many practical problems. Suppose that we are training a model to predict the steering angle of an autonomous car (which is exactly the case in our experiments), where the training data are collected from human drivers with an accurate measurement device. In this case, high aleatoric uncertainty and small epistemic uncertainty indicate that there exist multiple possible steering angles. For example, a driver can reasonably steer the car to either the left or the right when another car is in front and both sides are open. However, when the prediction has low aleatoric uncertainty and high epistemic uncertainty, the model is uncertain about the current prediction, possibly due to a lack of training data. In this case, it is reasonable to switch to a risk-averse controller or to alert the driver, if possible.

III-B Mixture Density Network

Fig. 1: A mixture density network (MDN) with two hidden layers, where the output of the network is decomposed into the mixture weights $\pi$, means $\mu$, and variances $\Sigma$.

A mixture density network (MDN) was first proposed in [7], where the output of a neural network is composed of the parameters of a Gaussian mixture model (GMM):

$$p(y \mid \mathbf{x}) = \sum_{k=1}^{K} \pi_k(\mathbf{x})\, \mathcal{N}\big(y;\, \mu_k(\mathbf{x}),\, \Sigma_k(\mathbf{x})\big)$$

where $\{\pi_k, \mu_k, \Sigma_k\}_{k=1}^{K}$ is the set of parameters of a GMM: mixture probabilities, mixture means, and mixture variances, respectively. In other words, an MDN can be seen as a mapping from an input $\mathbf{x}$ to the parameters of a GMM over the output, i.e., $\mathbf{x} \mapsto \{\pi_k(\mathbf{x}), \mu_k(\mathbf{x}), \Sigma_k(\mathbf{x})\}_{k=1}^{K}$, as shown in Figure 1.

However, as the mixture weights should lie on a $(K-1)$-dimensional simplex and each mixture variance should be positive definite, the raw output of an MDN is handled accordingly. We assume that each output dimension is independent, so each mixture variance becomes a diagonal matrix. Suppose that the dimension of the output is $D$ and let $(\mathbf{p}, \mathbf{m}, \mathbf{s})$ be the raw output of an MDN, where $\mathbf{p} \in \mathbb{R}^{K}$, $\mathbf{m} \in \mathbb{R}^{K \times D}$, and $\mathbf{s} \in \mathbb{R}^{K \times D}$. Then the parameters of the GMM are computed by

$$\pi_k = \frac{\exp(p_k - p_{\max})}{\sum_{j=1}^{K}\exp(p_j - p_{\max})}, \qquad \mu_k = \mathbf{m}_k \qquad (2)$$

$$\Sigma_k = \mathrm{diag}\big(\sigma_{\max}\,\mathrm{sig}(\mathbf{s}_k)\big) \qquad (3)$$

where $p_{\max}$ indicates the maximum mixture weight logit among all $K$ weights, $\mathrm{diag}(\cdot)$ is an operation converting a $D$-dimensional vector to a $D \times D$ diagonal matrix, and $\mathrm{sig}(\cdot)$ is an element-wise sigmoid function.

Two heuristics are applied in (2) and (3). First, we found that exponential operations often cause numerical instabilities, and thus we subtract the maximum mixture logit in (2). In (3), similarly, we use a sigmoid function multiplied by a constant $\sigma_{\max}$ instead of an exponential function to satisfy the positiveness constraint of the variance. $\sigma_{\max}$ is selected manually and set to five throughout the experiments.
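As an illustration of (2) and (3), the following NumPy sketch (the naming is ours; sigma_max plays the role of the manually selected constant) converts the raw MDN outputs into valid GMM parameters:

```python
import numpy as np

def mdn_params(p, m, s, sigma_max=5.0):
    """Map raw MDN outputs to GMM parameters as in (2) and (3).
    p: (K,) mixture logits, m: (K, D) means, s: (K, D) raw variance outputs."""
    # (2): softmax with the maximum logit subtracted for numerical stability
    shifted = p - np.max(p)
    pi = np.exp(shifted) / np.sum(np.exp(shifted))
    # (3): scaled sigmoid keeps each diagonal variance entry in (0, sigma_max)
    sigma_sq = sigma_max / (1.0 + np.exp(-s))
    Sigma = [np.diag(row) for row in sigma_sq]  # K diagonal D x D matrices
    return pi, m, Sigma
```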

For training the MDN, we use the negative log likelihood as a cost function:

$$\mathcal{L}(\mathcal{D}) = -\frac{1}{N}\sum_{i=1}^{N}\log\Big(\sum_{k=1}^{K}\pi_k(\mathbf{x}_i)\,\mathcal{N}\big(y_i;\,\mu_k(\mathbf{x}_i),\,\Sigma_k(\mathbf{x}_i)\big) + \epsilon\Big) \qquad (4)$$

where $\mathcal{D}$ is the set of training data and $\epsilon$ is a small positive constant for the numerical stability of the logarithm function. We would like to note that an MDN can be implemented on top of any deep neural network architecture, e.g., a multi-layer perceptron, a convolutional neural network, or a recurrent neural network.
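A minimal sketch of the per-sample cost in (4) for diagonal mixtures, assuming the variances are passed as their (K, D) diagonal entries as in the previous snippet (eps is the stability constant):

```python
import numpy as np

def mdn_nll(pi, mu, sigma_sq, y, eps=1e-8):
    """Negative log likelihood of a target y under a diagonal GMM, as in (4).
    pi: (K,) weights, mu: (K, D) means, sigma_sq: (K, D) variances, y: (D,)."""
    # log-density of y under each diagonal Gaussian component
    log_norm = -0.5 * np.sum(
        np.log(2.0 * np.pi * sigma_sq) + (y - mu) ** 2 / sigma_sq, axis=1)
    likelihood = np.sum(pi * np.exp(log_norm))  # mixture likelihood
    return -np.log(likelihood + eps)            # eps guards the logarithm

# The training cost averages mdn_nll over all (x_i, y_i) in the dataset.
```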

Once an MDN is trained, the predictive mean and variance can be computed by selecting the mean and variance of the mixture with the highest mixture weight (MAP estimation). This can be seen as a mixture of experts [18], where the mixture weights form a gating network for selecting local experts.
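A tiny sketch of this MAP-style prediction (the helper name is ours):

```python
import numpy as np

def mdn_map_prediction(pi, mu, sigma_sq):
    """Return the mean and variance of the most probable mixture component."""
    k = int(np.argmax(pi))  # gating: pick the mixture with the largest weight
    return mu[k], sigma_sq[k]
```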

IV Proposed Uncertainty Estimation Method

In this section, we propose a novel uncertainty acquisition method for regression tasks using a mixture density network (MDN), based on the law of total variance [19]. As described in Section III-B, an MDN constitutes a Gaussian mixture model (GMM) of an output $y$ given a test input $\mathbf{x}$:

$$p(y \mid \mathbf{x}) = \sum_{k=1}^{K} \pi_k(\mathbf{x})\, \mathcal{N}\big(y;\, \mu_k(\mathbf{x}),\, \sigma_k^2(\mathbf{x})\big) \qquad (5)$$

where $\pi_k(\mathbf{x})$, $\mu_k(\mathbf{x})$, and $\sigma_k^2(\mathbf{x})$ are the $k$-th mixture weight function, mean function, and variance function, respectively. Note that a GMM can approximate any given density function to arbitrary accuracy [12]. While the number of mixtures required to achieve this may become prohibitively large, it is worth noting that an MDN is more suitable for fitting complex and noisy distributions than a density network.

IV-A Uncertainty Acquisition for a Mixture Density Network

Let us first define the total expectation of a GMM:

$$\mathbb{E}[y \mid \mathbf{x}] = \sum_{k=1}^{K} \pi_k(\mathbf{x})\, \mu_k(\mathbf{x}). \qquad (6)$$

The total variance of a GMM is computed as follows (we omit $\mathbf{x}$ in each function):

$$\mathrm{Var}(y \mid \mathbf{x}) = \sum_{k=1}^{K} \pi_k \int \big(y - \mathbb{E}[y \mid \mathbf{x}]\big)^2\, \mathcal{N}(y;\, \mu_k,\, \sigma_k^2)\, dy$$

where the integral for the $k$-th component becomes

$$\int \big(y - \mathbb{E}[y \mid \mathbf{x}]\big)^2\, \mathcal{N}(y;\, \mu_k,\, \sigma_k^2)\, dy = \sigma_k^2 + \big\|\mu_k - \mathbb{E}[y \mid \mathbf{x}]\big\|^2.$$

Therefore the total variance becomes (also see (47) in [7]):

$$\mathrm{Var}(y \mid \mathbf{x}) = \sum_{k=1}^{K} \pi_k\, \sigma_k^2 + \sum_{k=1}^{K} \pi_k \Big\|\mu_k - \sum_{j=1}^{K} \pi_j \mu_j\Big\|^2. \qquad (7)$$

Now let us present the connection between the two terms in (7) that constitute the total variance of a GMM and the epistemic and aleatoric uncertainties in [6].

Fig. 2: Proposed uncertainty measures of the (a) heavy noise and (b) absence of data scenarios, and aleatoric and epistemic uncertainties of the (c) heavy noise and (d) absence of data scenarios.

IV-B Connection to Aleatoric and Epistemic Uncertainties

Let $\sigma_a^2$ and $\sigma_e^2$ be the aleatoric and epistemic uncertainties, respectively. These two uncertainties constitute the total predictive variance, as shown in Section III-A.

On the other hand, we can rewrite (7) as

$$\mathrm{Var}(y \mid \mathbf{x}) = \sigma_{\mathrm{ex}}^2(\mathbf{x}) + \sigma_{\mathrm{un}}^2(\mathbf{x}).$$

We remark that $\sigma_{\mathrm{ex}}^2$ indicates the explained variance whereas $\sigma_{\mathrm{un}}^2$ represents the unexplained variance [19]. Observe that

$$\sigma_{\mathrm{ex}}^2(\mathbf{x}) = \sum_{k=1}^{K}\pi_k(\mathbf{x})\Big\|\mu_k(\mathbf{x}) - \sum_{j=1}^{K}\pi_j(\mathbf{x})\,\mu_j(\mathbf{x})\Big\|^2 \qquad (8)$$

$$\sigma_{\mathrm{un}}^2(\mathbf{x}) = \sum_{k=1}^{K}\pi_k(\mathbf{x})\,\sigma_k^2(\mathbf{x}). \qquad (9)$$

This implies that (7) can be decomposed into an uncertainty quantity for each mixture, i.e.,

$$\mathrm{Var}(y \mid \mathbf{x}) = \sum_{k=1}^{K}\pi_k(\mathbf{x})\Big(\sigma_k^2(\mathbf{x}) + \big\|\mu_k(\mathbf{x}) - \mathbb{E}[y \mid \mathbf{x}]\big\|^2\Big). \qquad (10)$$

$\sigma_k^2(\mathbf{x})$ on the right-hand side of (10) is the predicted variance of the $k$-th mixture, and it can be interpreted as aleatoric uncertainty since the variance of a density network captures the noise inherent in the data [16]. Consequently, $\|\mu_k(\mathbf{x}) - \mathbb{E}[y \mid \mathbf{x}]\|^2$ corresponds to the epistemic uncertainty, estimating our ignorance about the model prediction. We validate these connections with both synthetic examples and track driving demonstrations in Sections V and VI.

We would like to emphasize that Monte Carlo (MC) sampling is not required to compute the total variance of a GMM, as the randomness is introduced through the mixture distribution $\pi(\mathbf{x})$. The predictive variance (1) proposed in [6] requires MC sampling with random weight masking; this additional sampling is necessary because a density network can only model a single mean and variance. When using an MDN, however, one can not only model the measurement noise, i.e., aleatoric uncertainty, with (9), but also model our ignorance about the model through (8). Intuitively speaking, (8) becomes high when the mean functions of the mixtures do not match the total expectation, and it will be used to model the model uncertainty in the uncertainty-aware learning from demonstration method of Section VI.
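The whole sampling-free acquisition thus reduces to a few vectorized operations on the GMM parameters from a single forward path. A NumPy sketch for a scalar output (the naming is ours):

```python
import numpy as np

def mdn_uncertainty(pi, mu, sigma_sq):
    """Sampling-free variance decomposition of a GMM, following (6)-(9).
    pi: (K,) weights, mu: (K,) means, sigma_sq: (K,) variances."""
    mean = np.sum(pi * mu)                     # total expectation (6)
    explained = np.sum(pi * (mu - mean) ** 2)  # (8): model uncertainty
    unexplained = np.sum(pi * sigma_sq)        # (9): measurement noise
    return mean, explained, unexplained        # their sum gives (7)
```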

V Analysis of the Proposed Uncertainty Modeling with Synthetic Examples

In this section, we analyze the properties of the proposed uncertainty modeling method of Section IV with three carefully designed synthetic examples: absence of data, heavy noise, and composition of functions scenarios. In all experiments, fully connected networks with two hidden layers and the tanh activation function are used,¹ and the number of mixtures is kept fixed. We also implemented the algorithm in [6] with the same network topology and used Monte Carlo (MC) sampling to compute the predictive variance, as in the original paper. A fixed dropout keep probability is used, and all codes are implemented with TensorFlow [20].

¹ Other activation functions such as ReLU or softplus were tested as well but are omitted as they showed poor regression performance on our problem. We also observed that further increasing the number of mixtures does not affect the overall prediction and uncertainty estimation performance.

In the absence of data scenario, we removed the training data in the first quadrant and show how each type of uncertainty measure behaves in input regions with and without training data. For the heavy noise scenario, we added heavy uniform noise to the outputs of the training data whose inputs lie in the first quadrant. The same input space and underlying function are used for both the absence of data and heavy noise scenarios. These two scenarios are designed to validate whether the proposed method can distinguish unfamiliar inputs from inputs with high measurement errors. We believe this ability to know its own ignorance is especially important when deploying a learning-based system on an actual physical system involving humans.

Fig. 3: Proposed uncertainty measures of the heavy noise scenario while varying the weight decay levels (a-c) and the composition of functions scenario (d). (e) Aleatoric and epistemic uncertainties of the composition of functions scenario.

In the composition of functions scenario, the training data are collected from two different functions, a base function and its flipped counterpart. This scenario is designed to reflect cases where we have a few distinct choices to select from. For example, when there is a car in front, there are roughly two choices, turn left or turn right, where both choices are perfectly fine. If the learned model can distinguish this situation from noisy measurements, it can give additional information to the overall system.

As explained in Section IV, the proposed uncertainty measure is composed of two terms, $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$, whose sum is the total predictive variance $\mathrm{Var}(y \mid \mathbf{x})$. Before starting the analysis, let us restate the connections between $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$ of the proposed method and the aleatoric and epistemic uncertainties proposed in [6]. $\sigma_{\mathrm{ex}}^2$ indicates the explained variance, which corresponds to epistemic uncertainty and indicates our ignorance about the model. On the other hand, $\sigma_{\mathrm{un}}^2$ models the unexplained variance and corresponds to aleatoric uncertainty, indicating measurement noise or randomness inherent in the data generating process.

In Figures 2(a) and 2(b), the proposed uncertainty measures, $\mathrm{Var}(y \mid \mathbf{x})$, $\sigma_{\mathrm{ex}}^2$, and $\sigma_{\mathrm{un}}^2$, of the heavy noise and absence of data scenarios are illustrated. The uncertainty measures in [6], epistemic uncertainty and aleatoric uncertainty, are shown in Figures 2(c) and 2(d).

First, in the heavy noise scenario, both the proposed method and the method in [6] capture the noisy regions, as illustrated in Figures 2(a) and 2(c). In particular, $\sigma_{\mathrm{un}}^2$ in Figure 2(a) and the aleatoric uncertainty in Figure 2(c) correctly depict the region with heavy noise. This is mainly because measurement noise is related to the aleatoric uncertainty, which can easily be captured with density networks.

Scenario                    σ²_un    σ²_ex
Heavy noise                 High     High or Low
Absence of data             High     High
Composition of functions    Low      High
TABLE I: Summary of $\sigma_{\mathrm{un}}^2$ and $\sigma_{\mathrm{ex}}^2$ on the heavy noise, absence of data, and composition of functions scenarios.

Computation Time [ms]
Proposed method
[6] (with MC sampling)
TABLE II: Summary of the computation time for estimating the proposed $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$ vs. the aleatoric and epistemic uncertainties in [6], where the MC sampling in [6] is run over multiple forward paths.

However, when it comes to the absence of data scenario, the proposed and compared methods show a clear difference. While both the aleatoric and epistemic uncertainties in Figure 2(d) can hardly capture the regions with no training data, the proposed measures shown in Figure 2(b) effectively capture such regions by assigning high uncertainties to unseen regions. In particular, $\sigma_{\mathrm{ex}}^2$ captures this region more suitably compared to $\sigma_{\mathrm{un}}^2$. This is a reasonable result in that $\sigma_{\mathrm{ex}}^2$ is related to the epistemic uncertainty, which represents the model uncertainty, i.e., our ignorance about the model.

This difference between $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$ becomes clearer when we compare $\sigma_{\mathrm{ex}}^2$ across the absence of data and heavy noise scenarios. Unlike the absence of data scenario, where $\sigma_{\mathrm{ex}}^2$ is high in the first quadrant (the data-absence region), $\sigma_{\mathrm{ex}}^2$ in this region of the heavy noise scenario contains both high and low variances. This is mainly due to the fact that $\sigma_{\mathrm{ex}}^2$ is related to the epistemic uncertainty, which can be explained away with more training data (even with high measurement noise).

It is also interesting to see the effect of prior information on the weight matrices. In Bayesian deep learning, a prior distribution is placed over the weight matrices to assign a posterior probability distribution over the outputs [11]. In this perspective, weight decay on the weight matrices can be seen as assigning a Gaussian prior over the weight matrices. Figures 3(a), 3(b), and 3(c) show $\mathrm{Var}(y \mid \mathbf{x})$, $\sigma_{\mathrm{ex}}^2$, and $\sigma_{\mathrm{un}}^2$ of the absence of data scenario with different weight decay levels. One can clearly see that the regions with no training data are more accurately captured as we increase the weight decay level. Specifically, $\sigma_{\mathrm{un}}^2$ is more affected by the weight decay level, as it corresponds to aleatoric uncertainty, which is related to the weight decay level [11]. Readers are referred to Section 6.7 in [11] for more information about the effects of weight decay and dropout on a Bayesian neural network.

Fig. 4: A snapshot of the NGSIM track environment.

Figures 3(d) and 3(e) show the experimental results in the composition of functions scenario. As a single density network cannot model the composite of two functions, the reconstruction results shown with red circles in Figure 3(e) are poor, and so are the aleatoric and epistemic uncertainties. On the other hand, the reconstruction results from the MDN with multiple mixtures accurately model the composition of the two functions.² Furthermore, $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$ show clear differences. In particular, $\sigma_{\mathrm{un}}^2$ is low almost everywhere, whereas $\sigma_{\mathrm{ex}}^2$ has both high and low variances, proportional to the difference between the two composed functions. As the training data itself does not contain any measurement noise, $\sigma_{\mathrm{un}}^2$ has low values over the whole input domain. However, $\sigma_{\mathrm{ex}}^2$ becomes high where the difference between the two possible outputs is large, as it becomes harder to fit the training data in such regions.

² A composite of two functions is not a proper function, as there exist two different outputs for a single input. However, an MDN can model this as it consists of multiple mean functions.

Table I summarizes how $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$ behave in the three scenarios. The computation times of the proposed method and the compared method [6] are shown in Table II, where the proposed method is considerably faster as it does not require MC sampling.

VI Uncertainty-Aware Learning from Demonstration to Drive

We propose uncertainty-aware LfD (UALfD), which combines the learning-based approach with a rule-based approach by switching the mode of the controller using the uncertainty measure analyzed in Section V. In particular, the explained variance (8) is used as the measure of uncertainty, as it estimates the model uncertainty. The proposed method makes the best of both approaches by using the model uncertainty as the switching criterion. The proposed UALfD is applied to an aggressive driving task using a real-world driving dataset [13], where incorporating the uncertainty information significantly improves driving performance in terms of both safety and efficiency.

For evaluating the proposed uncertainty-aware learning from demonstration method, we use the Next-Generation Simulation (NGSIM) dataset collected from US Highway 101 [13], which provides 45 minutes of real-world vehicle trajectories at 10 Hz as well as CAD files for road descriptions. Figure 4 illustrates the road configuration of US Highway 101, which consists of three segments and six lanes. For testing, we only use the second segment, where the initial position of the ego car is at the start of the third lane of the second segment and the goal is to reach the third segment. Once the ego car moves outside the track or collides with another car, it is counted as a collision.

Fig. 5: Different uncertainty measures on tracks.
Fig. 6: Feature descriptions for the track driving scenarios.

To efficiently collect a sufficient number of driving demonstrations, we use density matching reward learning [21] to automatically generate driving demonstrations in diverse environments by randomly changing the initial position of the ego car and the configurations of other cars. Demonstrations with collisions are excluded from the dataset, leaving only collision-free trajectories. Alternatively, it is also possible to collect a sufficient number of demonstrations manually using human-in-the-loop simulations.

We define a learning-based driving policy as a mapping from input features to the trigonometrically encoded desired heading angle of the car. Figure 6 illustrates the obtainable features of the track driving simulator. A seven-dimensional feature representation, consisting of the three frontal distances to the closest cars in the left, center, and right lanes, the three rearward distances to the closest cars in the left, center, and right lanes, and the lane deviation distance, is used as the input representation.

Fig. 7: Snapshots of driving results of different LfD methods.
Collision Ratio [%] Min. Dist. to Cars Lane Dev. Dist. [mm] Lane Dev. Deg. Elapsed Time [s] Num. Lane Change
UALfD (K=10)
UALfD2 (K=10)
MDN (K=10)
MDN (K=1)
RegNet
Safe Mode
TABLE III: Quantitative driving results in the NGSIM dataset.
Collision Ratio [%] Min. Dist. to Cars Lane Dev. Dist. [mm] Lane Dev. Deg. Elapsed Time [s] Num. Lane Change
UALfD (K=10)
UALfD2 (K=10)
MDN (K=10)
MDN (K=1)
RegNet
Safe Mode
TABLE IV: Quantitative driving results in the NGSIM dataset with a subset of the cars.

We trained three different network configurations: an MDN with ten mixtures, MDN (K=10); an MDN with one mixture, MDN (K=1)³; and a baseline fully connected network, RegNet, trained with a squared loss. All networks have two hidden layers. The proposed uncertainty-aware learning from demonstration method, UALfD, switches its mode to the safe policy when the explained variance (8) is higher than a threshold. As the variance is not scaled, we tune the threshold manually. However, one could also choose the threshold based on a percentile of an empirical cumulative probability distribution, similar to [17]. We also implemented a variant, UALfD2, which utilizes $\sigma_{\mathrm{un}}^2$ instead of $\sigma_{\mathrm{ex}}^2$, in order to justify the usage of $\sigma_{\mathrm{ex}}^2$ as the uncertainty measure for LfD. To avoid an immediate collision, UALfD and the other methods also switch to the safe mode when the distance to the frontal car drops below a threshold.⁴

³ This is identical to the density network used in [6, 9].
⁴ We also tested other distance thresholds, but the results were worse in terms of collision ratio.
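The switching rule itself is simple. Below is a hedged sketch of the control decision (the threshold names, the feature index of the frontal distance, and the policy interfaces are our assumptions, not the paper's exact implementation), reusing mdn_uncertainty from Section IV:

```python
import numpy as np

def ualfd_control(features, mdn_policy, safe_policy,
                  var_threshold, dist_threshold):
    """Switch between the learned MDN policy and the rule-based safe policy."""
    pi, mu, sigma_sq = mdn_policy(features)  # a single forward path
    _, explained, _ = mdn_uncertainty(pi, mu, sigma_sq)
    frontal_dist = features[1]  # assumed: center-lane frontal distance feature
    # Fall back to the conservative controller on high model uncertainty
    # or an imminent frontal collision.
    if explained > var_threshold or frontal_dist < dist_threshold:
        return safe_policy(features)
    return mu[int(np.argmax(pi))]  # MAP desired heading from the learned policy
```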

Figure 5 illustrates the total variance $\mathrm{Var}(y \mid \mathbf{x})$, explained variance $\sigma_{\mathrm{ex}}^2$, and unexplained variance $\sigma_{\mathrm{un}}^2$ estimated using an MDN with ten mixtures at each location. Here, we can see clear differences between $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{un}}^2$ in the two regions marked with black dotted squares, where $\sigma_{\mathrm{ex}}^2$ better captures the model uncertainty, possibly due to the lack of training data. Furthermore, we can see that the regions where the desired headings depicted with gray arrows are inaccurate, e.g., leading to a collision, have higher variance, and that this does not necessarily depend on the distance to the frontal car. This supports our claim that $\sigma_{\mathrm{ex}}^2$ is more suitable for estimating modeling error.

Once a desired heading is computed, the ego car is controlled using a simple feedback controller for the angular velocity based on $\Delta\theta$, the difference between the current heading and the desired heading from the learned controller, normalized between $-\pi$ and $\pi$. The directional velocity and the control frequency are held fixed. While we use a simple unicycle dynamics model, more complex dynamics models, e.g., vehicle and bicycle dynamics models, can also be used.
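For completeness, a minimal sketch of such a heading feedback loop under a unicycle model (the gain, speed, and timestep values below are placeholders, not the paper's settings):

```python
import numpy as np

def unicycle_step(x, y, heading, desired_heading, v=15.0, k_p=1.0, dt=0.05):
    """One control step: proportional control on the heading error."""
    # wrap the heading difference to [-pi, pi]
    err = np.arctan2(np.sin(desired_heading - heading),
                     np.cos(desired_heading - heading))
    w = k_p * err                   # angular velocity command
    x += v * np.cos(heading) * dt   # unicycle kinematics
    y += v * np.sin(heading) * dt
    heading += w * dt
    return x, y, heading
```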

We also carefully designed a rule-based safe controller that keeps its lane without collision, where the directional and angular velocities are computed by simple rules involving $v_f$ and $v_r$, the directional velocities of the frontal and rearward cars, respectively.

We conducted two sets of experiments, one using all of the cars in the dataset and the other using a subset of the cars. The quantitative results are shown in Tables III and IV. The results show that the driving policy incorporating the proposed uncertainty measure clearly improves both the safety and the stability of driving. We would like to emphasize that the average elapsed time of the proposed UALfD is the shortest among the compared methods. Figure 7 shows trajectories and snapshots of the driving results of the different methods. Among the compared methods, the proposed UALfD, UALfD2, MDN (K=10), and Safe Mode navigate safely without colliding with other moving cars. On the other hand, the cars controlled by MDN (K=1) and RegNet collide with another car and move outside the track, which clearly shows the advantage of using a mixture density network for modeling human demonstrations. Furthermore, while both UALfD and UALfD2 navigate without a collision, their average elapsed times and average numbers of lane changes vary greatly. This is mainly due to the fact that $\sigma_{\mathrm{un}}^2$ captures the measurement noise rather than the model uncertainty, which makes the control conservative, similar to that of the Safe Mode.

VII Conclusion

In this paper, we proposed a novel uncertainty estimation method using a mixture density network. Unlike existing approaches that rely on ensembles of multiple models or Monte Carlo sampling with stochastic forward paths, the proposed uncertainty acquisition method runs with a single forward path of a single model without computationally heavy sampling. We showed that the proposed uncertainty measure can be decomposed into explained and unexplained variances, analyzed its properties in three different cases (absence of data, heavy measurement noise, and composition of functions), and showed that the three cases can be effectively distinguished using the two types of variances. Furthermore, we proposed an uncertainty-aware learning from demonstration method based on the proposed uncertainty estimation and successfully applied it to a real-world driving dataset.

References