In May 2016, a fatal car accident occurred because a low-level image processing component of an advanced driver assistance system (ADAS) failed to discriminate the white side of a trailer from a bright sky. In this regard, Kendall and Gal proposed an uncertainty modeling method for deep learning that estimates both aleatoric and epistemic uncertainties, which indicate the noise inherent in the data generating process and the uncertainty of the predictive model capturing our ignorance about the model, respectively. However, this method requires computationally heavy Monte Carlo sampling, which makes it unsuitable for real-time applications.
In this paper, we present a novel uncertainty estimation method for a regression task using a deep neural network and its application to learning from demonstration (LfD). Specifically, a mixture density network (MDN) is used to model the underlying process, as it is more appropriate for describing complex distributions, e.g., human demonstrations. We first present an uncertainty modeling method for predictions made with an MDN, where the uncertainty can be acquired with a single MDN forward pass without Monte Carlo sampling. This sampling-free property makes it suitable for real-time robotic applications compared to existing uncertainty modeling methods that require multiple models or sampling [10, 11, 6]. Furthermore, as an MDN is more appropriate for modeling complex distributions than the density network used in [6, 12] or a standard neural network for regression, the experimental results on autonomous driving show that it can better represent the underlying policy of a driver given complex and noisy demonstrations.
The main contributions of this paper are twofold. We first present a sampling-free uncertainty estimation method utilizing an MDN and show that the resulting uncertainty can be decomposed into two parts, the explained and unexplained variances, which indicate our ignorance about the model and the measurement noise, respectively. The properties of the proposed uncertainty modeling method are analyzed through three different cases: absence of data, heavy measurement noise, and composition of functions scenarios. Using this analysis, we further propose an uncertainty-aware learning from demonstration (LfD) method. We first train an aggressive controller in a simulated environment with an MDN and use the explained variance of the MDN to switch its mode to a rule-based conservative controller. When applied to a complex real-world driving dataset from US Highway 101, the proposed uncertainty-aware LfD outperforms compared methods in terms of driving safety, as out-of-distribution inputs, often referred to as covariate shift, are successfully captured by the proposed explained variance.
The remainder of this paper is organized as follows: Related work and preliminaries regarding modeling uncertainty in deep learning are introduced in Section II and Section III. The proposed uncertainty modeling method with an MDN is presented in Section IV and analyzed in Section V. Finally, in Section VI, we present an uncertainty-aware learning from demonstration method and successfully apply it to an autonomous driving task using a real-world driving dataset by deploying and controlling a virtual car inside the road.
II Related Work
Despite the remarkable successes in deep learning research areas, practical methods for estimating uncertainties in the predictions of deep networks have only recently become actively studied. In the seminal study of Gal and Ghahramani, a practical method of estimating the predictive variance of a deep neural network is proposed by computing the sample mean and variance of stochastic forward passes, i.e., dropout. Its main contribution is to present a connection between an approximate Bayesian network and a sparse Gaussian process. This method is often referred to as Monte Carlo (MC) dropout and has been successfully applied to modeling model uncertainty in regression tasks, classification tasks, and reinforcement learning with Thompson sampling. Interested readers are referred to Gal's thesis for more comprehensive information about uncertainty in deep learning and Bayesian neural networks.
Whereas MC dropout uses a standard neural network, Lakshminarayanan et al. use a density network whose output consists of both the mean and variance of a prediction, trained with a negative log likelihood criterion. Adversarial training is also applied by incorporating artificially generated adversarial examples. Furthermore, multiple models are trained on different training sets to form an ensemble, where the sample variance of the resulting mixture distribution is used to estimate the uncertainty of a prediction. Interestingly, the usage of a mixture density network is encouraged for modeling more complex distributions. Guillaumes compared existing uncertainty acquisition methods including [9, 10].
Kendall and Gal decomposed the predictive uncertainty into two major types, aleatoric uncertainty and epistemic uncertainty. First, epistemic uncertainty captures our ignorance about the predictive model. It is often referred to as a reducible uncertainty, as it can be reduced as we collect more training data from diverse scenarios. On the other hand, aleatoric uncertainty captures irreducible aspects of the predictive variance, such as the randomness inherent in coin flipping. To this end, Kendall and Gal utilized a density network but with a slightly different cost function for numerical stability. The variance output of the density network indicates heteroscedastic aleatoric uncertainty, and the overall predictive uncertainty of the output $y$ given an input $\mathbf{x}$ is approximated using
$$\mathbb{V}(y \,|\, \mathbf{x}) \approx \frac{1}{T}\sum_{t=1}^{T} \hat{\sigma}_t^2 + \frac{1}{T}\sum_{t=1}^{T} \hat{\mu}_t^2 - \left( \frac{1}{T}\sum_{t=1}^{T} \hat{\mu}_t \right)^2, \quad (1)$$
where $\hat{\mu}_t$ and $\hat{\sigma}_t^2$ are samples of the mean and variance functions of the density network obtained with $T$ stochastic forward passes.
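For concreteness, a minimal NumPy sketch of this sampling-based estimator (the function name is ours), taking the means and variances produced by $T$ stochastic forward passes:

```python
import numpy as np

def mc_dropout_variance(mu_samples, sigma2_samples):
    """Predictive variance from T stochastic forward passes of a
    density network, as in (1): the sample mean of the predicted
    variances (aleatoric part) plus the sample variance of the
    predicted means (epistemic part)."""
    mu = np.asarray(mu_samples, dtype=float)
    s2 = np.asarray(sigma2_samples, dtype=float)
    aleatoric = s2.mean()
    epistemic = np.mean(mu ** 2) - np.mean(mu) ** 2
    return aleatoric + epistemic

# identical means -> the estimate is purely aleatoric
print(mc_dropout_variance([1.0, 1.0, 1.0], [0.5, 0.5, 0.5]))  # 0.5
```

Note that every evaluation requires $T$ forward passes, which is exactly the cost the proposed sampling-free method avoids.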
Modeling and incorporating uncertainty in predictions has been widely used in robotics, mostly to ensure safety in the training phase of reinforcement learning or to avoid false classifications of a learned cost function. Kahn et al. propose an uncertainty-aware collision prediction method that trains multiple deep neural networks using bootstrapping and dropout. Once multiple networks are trained, the sample mean and variance of multiple stochastic forward passes of the different networks are used to compute the predictive variance. When the predictive variance is higher than a certain threshold, a risk-averse cost function is used instead of the learned cost function, leading to low-speed control. This approach can be seen as extending MC dropout by adding additional randomness from bootstrapping. However, as multiple networks are required, the computational complexities of both training and test phases are increased.
Richter and Roy present safe visual navigation by training a deep network to model the probability of collision for a receding horizon control problem. To handle out-of-distribution cases, a novelty detection algorithm is presented where the reconstruction loss of an autoencoder is used as a measure of novelty. Once the current visual input is detected to be novel (high reconstruction loss), a rule-based collision estimation is used instead of the learning-based estimation. This switching between learning-based and rule-based approaches is similar to our approach. However, we focus on modeling the uncertainty of a policy function, which consists of input and output pairs, whereas their novelty detection can only consider input data.
The proposed method can be regarded as extending previous uncertainty estimation approaches by using a mixture density network (MDN) and presenting a novel variance estimation method for an MDN. We further show that a single MDN without MC sampling is sufficient to model both the reducible and irreducible parts of uncertainty. In fact, we show that it can better model both types of uncertainties on synthetic examples.
III-A Uncertainty Acquisition in Deep Learning
Kendall and Gal proposed two types of uncertainties, aleatoric and epistemic. These two types capture different aspects of predictive uncertainty, i.e., a measure of uncertainty when we make a prediction using an approximation method such as a deep network. First, aleatoric uncertainty captures the uncertainty in the data generating process, e.g., the inherent randomness of coin flipping or measurement noise. This type of uncertainty cannot be reduced even if we collect more training data. On the other hand, epistemic uncertainty models the ignorance of the predictive model, which can be explained away given enough training data. Readers are referred to Kendall and Gal for further details.
Let $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ be a dataset of $N$ samples. For notational simplicity, we assume that an input $\mathbf{x}$ and an output $y$ are a $d$-dimensional vector, i.e., $\mathbf{x} \in \mathbb{R}^d$, and a scalar, i.e., $y \in \mathbb{R}$, respectively. Suppose that
$$y = f(\mathbf{x}) + \epsilon,$$
where $f$ is a target function and a measurement error $\epsilon$ follows a zero-mean Gaussian distribution with variance $\sigma_w^2$, i.e., $\epsilon \sim \mathcal{N}(0, \sigma_w^2)$. Note that, in this case, the variance of $\epsilon$ corresponds to the aleatoric uncertainty, i.e., $\sigma_a^2 = \sigma_w^2$, where we will denote $\sigma_a^2$ as aleatoric uncertainty. Similarly, epistemic uncertainty will be denoted as $\sigma_e^2$. Suppose that we train $\hat{f}$ to approximate $f$ from $\mathcal{D}$ and get $\hat{y} = \hat{f}(\mathbf{x})$. Then, we can see that
$$\mathbb{V}(y \,|\, \mathbf{x}) = \mathbb{E}\left[ (y - \hat{f}(\mathbf{x}))^2 \right] = \underbrace{(f(\mathbf{x}) - \hat{f}(\mathbf{x}))^2}_{\sigma_e^2} + \underbrace{\sigma_w^2}_{\sigma_a^2},$$
which indicates that the total predictive variance is the sum of aleatoric uncertainty and epistemic uncertainty.
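This decomposition can be checked numerically. The following NumPy experiment uses an illustrative target function and a deliberately biased approximation (both are our own choices, not the paper's): the expected squared error at a fixed input splits into the squared model error plus the noise variance.

```python
import numpy as np

# Illustrative target f and a deliberately biased approximation f_hat.
f = lambda x: np.sin(x)
f_hat = lambda x: np.sin(x) + 0.3          # constant model error of 0.3
sigma_w = 0.1                              # measurement-noise std

rng = np.random.default_rng(0)
x = 1.0
y = f(x) + rng.normal(0.0, sigma_w, size=200_000)   # noisy observations

empirical = np.mean((y - f_hat(x)) ** 2)   # E[(y - f_hat(x))^2]
epistemic = (f(x) - f_hat(x)) ** 2         # 0.09, reducible part
aleatoric = sigma_w ** 2                   # 0.01, irreducible part
print(empirical, epistemic + aleatoric)    # both close to 0.10
```

Collecting more data shrinks neither term here; only improving $\hat{f}$ reduces the epistemic part, which is the sense in which it is "reducible."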
Correctly acquiring and distinguishing each type of uncertainty is important in many practical problems. Suppose that we are building a model to predict the steering angle of an autonomous car (which is exactly the case in our experiments), where the training data are collected from human drivers with an accurate measurement device. In this case, high aleatoric uncertainty and small epistemic uncertainty indicate that there exist multiple possible steering angles. For example, a driver can reasonably steer the car both left and right when another car is in front and both sides are open. However, when the prediction has low aleatoric uncertainty and high epistemic uncertainty, our model is uncertain about the current prediction, possibly due to the lack of training data. In this case, it is reasonable to switch to a risk-averse controller or alert the driver, if possible.
III-B Mixture Density Network
A mixture density network (MDN) was first proposed by Bishop, where the output of a neural network is composed of the parameters of a Gaussian mixture model (GMM):
$$p(\mathbf{y} \,|\, \mathbf{x}) = \sum_{k=1}^{K} \pi_k(\mathbf{x}) \, \mathcal{N}\left( \mathbf{y}; \boldsymbol{\mu}_k(\mathbf{x}), \Sigma_k(\mathbf{x}) \right),$$
where $\{\pi_k, \boldsymbol{\mu}_k, \Sigma_k\}_{k=1}^{K}$ is the set of parameters of a GMM: mixture probabilities, mixture means, and mixture variances, respectively. In other words, an MDN can be seen as a mapping from an input to the parameters of a GMM over an output, as shown in Figure 1.
However, as the mixture weights should lie on a $(K-1)$-dimensional simplex and each mixture variance should be positive definite, the raw output of an MDN is handled accordingly. We assume that each output dimension is independent, so that each mixture variance becomes a diagonal matrix. Let the dimension of the output be $D$ and let $(p_k, \mathbf{m}_k, \mathbf{s}_k)_{k=1}^{K}$ be the raw outputs of an MDN corresponding to the $k$-th mixture weight, mean, and variance, where $p_k \in \mathbb{R}$, $\mathbf{m}_k \in \mathbb{R}^D$, and $\mathbf{s}_k \in \mathbb{R}^D$. Then the parameters of the GMM are computed by
$$\pi_k = \frac{\exp(p_k - p_{\max})}{\sum_{j=1}^{K} \exp(p_j - p_{\max})}, \quad (2)$$
$$\boldsymbol{\mu}_k = \mathbf{m}_k, \qquad \Sigma_k = \mathrm{Diag}\left( \sigma_{\max} \cdot \mathrm{sig}(\mathbf{s}_k) \right), \quad (3)$$
where $p_{\max}$ indicates the maximum among all raw mixture weights, $\mathrm{Diag}(\cdot)$ is an operation that converts a $D$-dimensional vector to a $D \times D$ diagonal matrix, and $\mathrm{sig}(\cdot)$ is an element-wise sigmoid function.
Two heuristics are applied in (2) and (3). First, we found that exponential operations often cause numerical instabilities, and thus we subtract the maximum raw mixture weight in (2). In (3), similarly, we use a sigmoid function multiplied by a constant $\sigma_{\max}$ instead of an exponential function to satisfy the positiveness constraint of the variance. $\sigma_{\max}$ is selected manually and set to five throughout the experiments.
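A minimal NumPy sketch of these two heuristics for a scalar output (function and variable names are ours; the constant five follows the value stated above):

```python
import numpy as np

SIGMA_MAX = 5.0  # the manually chosen constant multiplying the sigmoid

def raw_to_gmm_params(pi_raw, mu_raw, sigma_raw):
    """Convert raw MDN outputs to valid GMM parameters:
    softmax with max subtraction for numerically stable mixture
    weights, and a scaled sigmoid for strictly positive variances."""
    z = pi_raw - np.max(pi_raw)            # subtract max before exp
    pi = np.exp(z) / np.sum(np.exp(z))     # weights on the simplex
    sigma2 = SIGMA_MAX / (1.0 + np.exp(-sigma_raw))  # in (0, SIGMA_MAX)
    return pi, mu_raw, sigma2

pi, mu, s2 = raw_to_gmm_params(np.array([10.0, 10.0]),
                               np.array([-1.0, 1.0]),
                               np.array([0.0, 0.0]))
print(pi)   # [0.5 0.5]
print(s2)   # [2.5 2.5]
```

Subtracting the maximum leaves the softmax unchanged mathematically while keeping the exponentials bounded, and the scaled sigmoid caps the variance instead of letting an exponential blow it up.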
For training the MDN, we use the negative log likelihood as a cost function:
$$\mathcal{L}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \log\left( \sum_{k=1}^{K} \pi_k(\mathbf{x}_i) \, \mathcal{N}\left( \mathbf{y}_i; \boldsymbol{\mu}_k(\mathbf{x}_i), \Sigma_k(\mathbf{x}_i) \right) + \epsilon \right),$$
where $\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{N}$ is the set of training data and $\epsilon$ is a small constant for the numerical stability of the logarithm. We would like to note that an MDN can be implemented on top of any deep neural network architecture, e.g., a multi-layer perceptron, convolutional neural network, or recurrent neural network.
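As an illustration, the cost above can be sketched in NumPy for a scalar output (helper name and the value of $\epsilon$ are our own choices):

```python
import numpy as np

EPS = 1e-8  # small constant for numerical stability of the logarithm

def gmm_nll(y, pi, mu, sigma2):
    """Negative log likelihood of scalar targets y under a GMM whose
    parameters come from an MDN; pi, mu, sigma2 have one row of K
    mixture parameters per sample."""
    y = np.asarray(y, dtype=float)[:, None]
    comp = np.exp(-0.5 * (y - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    return -np.mean(np.log(np.sum(pi * comp, axis=1) + EPS))

# single standard-Gaussian mixture at the mode: NLL = 0.5*log(2*pi)
print(gmm_nll([0.0], np.array([[1.0]]),
              np.array([[0.0]]), np.array([[1.0]])))  # ≈ 0.919
```

In practice this loss would be written with the framework's ops (e.g., TensorFlow) so that gradients flow back through (2) and (3) into the network weights.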
Once an MDN is trained, the predictive mean and variance can be computed by selecting the mean and variance of the mixture component with the highest mixture weight (MAP estimation). This can be seen as a mixture of experts, where the mixture weights form a gating network for selecting local experts.
IV Proposed Uncertainty Estimation Method
In this section, we propose a novel uncertainty acquisition method for a regression task using a mixture density network (MDN), based on the law of total variance. As described in Section III-B, an MDN constitutes a Gaussian mixture model (GMM) of an output $\mathbf{y}$ given a test input $\mathbf{x}$:
$$p(\mathbf{y} \,|\, \mathbf{x}) = \sum_{k=1}^{K} \pi_k(\mathbf{x}) \, \mathcal{N}\left( \mathbf{y}; \boldsymbol{\mu}_k(\mathbf{x}), \Sigma_k(\mathbf{x}) \right),$$
where $\pi_k(\mathbf{x})$, $\boldsymbol{\mu}_k(\mathbf{x})$, and $\Sigma_k(\mathbf{x})$ are the $k$-th mixture weight function, mean function, and variance function, respectively. Note that a GMM can approximate any given density function to arbitrary accuracy. While the number of mixtures required to achieve this may become prohibitively large, it is worth noting that an MDN is more suitable for fitting complex and noisy distributions than a density network.
IV-A Uncertainty Acquisition for a Mixture Density Network
Let us first define the total expectation of a GMM:
$$\mathbb{E}[\mathbf{y} \,|\, \mathbf{x}] = \sum_{k=1}^{K} \pi_k \boldsymbol{\mu}_k.$$
The total variance of a GMM is computed as follows (we omit $\mathbf{x}$ in each function):
$$\mathbb{V}(\mathbf{y} \,|\, \mathbf{x}) = \int \left\| \mathbf{y} - \mathbb{E}[\mathbf{y} \,|\, \mathbf{x}] \right\|^2 p(\mathbf{y} \,|\, \mathbf{x}) \, d\mathbf{y},$$
where the term inside the integral becomes
$$\left\| \mathbf{y} \right\|^2 - 2\,\mathbf{y}^\top \mathbb{E}[\mathbf{y} \,|\, \mathbf{x}] + \left\| \mathbb{E}[\mathbf{y} \,|\, \mathbf{x}] \right\|^2.$$
Therefore the total variance becomes
$$\mathbb{V}(\mathbf{y} \,|\, \mathbf{x}) = \sum_{k=1}^{K} \pi_k \left( \mathrm{tr}(\Sigma_k) + \left\| \boldsymbol{\mu}_k \right\|^2 \right) - \left\| \sum_{k=1}^{K} \pi_k \boldsymbol{\mu}_k \right\|^2. \quad (7)$$
IV-B Connection to Aleatoric and Epistemic Uncertainties
Let $\sigma_a^2$ and $\sigma_e^2$ be aleatoric uncertainty and epistemic uncertainty, respectively. Then, these two uncertainties constitute the total predictive variance as shown in Section III-A. On the other hand, we can rewrite (7) as
$$\mathbb{V}(\mathbf{y} \,|\, \mathbf{x}) = \sigma_E^2(\mathbf{x}) + \sigma_U^2(\mathbf{x}),$$
where
$$\sigma_E^2(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \left\| \boldsymbol{\mu}_k - \sum_{j=1}^{K} \pi_j \boldsymbol{\mu}_j \right\|^2 \quad (8)$$
and
$$\sigma_U^2(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, \mathrm{tr}(\Sigma_k). \quad (9)$$
We remark that $\sigma_E^2$ indicates the explained variance whereas $\sigma_U^2$ represents the unexplained variance. Observe that (7) can thus be decomposed into the uncertainty quantity of each mixture, i.e.,
$$\mathbb{V}(\mathbf{y} \,|\, \mathbf{x}) = \sum_{k=1}^{K} \pi_k \left( \mathrm{tr}(\Sigma_k) + \left\| \boldsymbol{\mu}_k - \mathbb{E}[\mathbf{y} \,|\, \mathbf{x}] \right\|^2 \right). \quad (10)$$
$\mathrm{tr}(\Sigma_k)$ in the right-hand side of (10) is the predicted variance of the $k$-th mixture, which can be interpreted as aleatoric uncertainty, as the variance of a density network captures the noise inherent in data. Consequently, $\left\| \boldsymbol{\mu}_k - \mathbb{E}[\mathbf{y} \,|\, \mathbf{x}] \right\|^2$ corresponds to the epistemic uncertainty, estimating our ignorance about the model prediction. We validate these connections with both synthetic examples and track driving demonstrations in Sections V and VI.
We would like to emphasize that Monte Carlo (MC) sampling is not required to compute the total variance of a GMM, as the randomness is introduced by the mixture distribution. The predictive variance (1) requires MC sampling with random weight masking. This additional sampling is needed because a density network can only model a single mean and variance. In contrast, an MDN can not only model the measurement noise, or aleatoric uncertainty, with (9) but also our ignorance about the model through (8). Intuitively speaking, (8) becomes high when the mean functions of the mixtures do not match the total expectation, and it will be used as the uncertainty measure in uncertainty-aware learning from demonstration in Section VI.
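The sampling-free computation can be sketched in a few lines of NumPy for a scalar output (the function name is ours; the explained and unexplained terms follow (8) and (9)):

```python
import numpy as np

def mdn_uncertainties(pi, mu, sigma2):
    """Sampling-free uncertainties of a scalar-output GMM obtained
    from a single MDN forward pass.
    Explained (epistemic) part: sum_k pi_k * (mu_k - mu_bar)^2.
    Unexplained (aleatoric) part: sum_k pi_k * sigma2_k."""
    mu_bar = np.sum(pi * mu)                       # total expectation
    explained = np.sum(pi * (mu - mu_bar) ** 2)    # spread of the means
    unexplained = np.sum(pi * sigma2)              # average mixture noise
    return explained, unexplained                  # sum = total variance

# two equally weighted modes at -1 and +1 with small per-mixture noise
e, u = mdn_uncertainties(np.array([0.5, 0.5]),
                         np.array([-1.0, 1.0]),
                         np.array([0.01, 0.01]))
print(e, u)  # 1.0 0.01
```

In this toy case the mixture means disagree strongly, so almost all of the total variance is explained variance; confident, noise-free data would flip the balance toward the unexplained term.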
V Analysis of the Proposed Uncertainty Modeling with Synthetic Examples
In this section, we analyze the properties of the proposed uncertainty modeling method in Section IV with three carefully designed synthetic examples: absence of data, heavy noise, and composition of functions scenarios. In all experiments, a fully connected network with two hidden layers and tanh activation functions is used.¹ We also implemented the compared algorithm with the same network topology and used Monte Carlo (MC) sampling for computing the predictive variance as shown in the original paper. All code is implemented with TensorFlow.

¹ Other activation functions such as relu or softplus were tested as well but omitted as they showed poor regression performance on our problem. We also observed that further increasing the number of mixtures does not affect the overall prediction and uncertainty estimation performance.
In the absence of data scenario, we remove the training data in the first quadrant and show how each uncertainty measure behaves on input regions with and without training data. For the heavy noise scenario, we add heavy uniform noise to the outputs of the training data whose inputs are in the first quadrant. The input space and underlying function are the same for both the absence of data and heavy noise scenarios. These two scenarios are designed to validate whether the proposed method can distinguish unfamiliar inputs from inputs with high measurement errors. We believe this ability of knowing one's own ignorance is especially important when deploying a learning-based system on an actual physical system involving humans.
In the composition of functions scenario, the training data are collected from two different functions, a function and its flipped version. This scenario is designed to reflect cases where we have multiple but few choices to select. For example, when there is a car in front, there could be roughly two choices, turn left or turn right, where both choices are totally fine. If the learned model can distinguish this case from noisy measurements, it can give additional information to the overall system.
As explained in Section IV, the proposed uncertainty measure is composed of two terms, the explained variance and the unexplained variance, whose sum is the total predictive variance. Before starting the analysis, let us restate the connections between these two terms and the aleatoric and epistemic uncertainties. The explained variance corresponds to epistemic uncertainty and indicates our ignorance about the model. On the other hand, the unexplained variance corresponds to aleatoric uncertainty, indicating measurement noise or randomness inherent in the data generating process.
Figure 2 illustrates the proposed uncertainty measures, the total, explained, and unexplained variances, for both the heavy noise and absence of data scenarios, along with the epistemic and aleatoric uncertainties of the compared method. First, in the heavy noise scenario, both the proposed method and the compared method capture the noisy regions, as illustrated in Figure 2; in particular, the corresponding uncertainty maps correctly depict the region with heavy noise. This is mainly because measurement noise is related to the aleatoric uncertainty, which can easily be captured with density networks.
| Scenario | Unexplained variance | Explained variance |
|---|---|---|
| Heavy noise | High | High or Low |
| Absence of data | High | High |
| Composition of functions | Low | High |
However, when it comes to the absence of data scenario, the proposed and compared methods show a clear difference. While both the aleatoric and epistemic uncertainties in Figure 2 can hardly capture the regions with no training data, the proposed method effectively captures such regions by assigning high uncertainties to unseen regions. In particular, the explained variance captures this region more suitably than the unexplained variance. This is a reasonable result in that the explained variance is related to the epistemic uncertainty, which represents the model uncertainty, i.e., our ignorance about the model.
This difference between the explained and unexplained variances becomes clearer when we compare the explained variance of the absence of data and heavy noise scenarios. Unlike the absence of data scenario, where the explained variance is high in the first quadrant (the data-absence region), the explained variance in this region of the heavy noise scenario contains both high and low values. This is mainly due to the fact that the explained variance is related to the epistemic uncertainty, which can be explained away with more training data (even with high measurement noise).
It is also interesting to see the effect of prior information on the weight matrices. In Bayesian deep learning, a prior distribution is placed on the weight matrices to assign a posterior probability distribution over the outputs. In this perspective, weight decay on the weight matrices can be seen as assigning a Gaussian prior over the weight matrices. Figure 3 shows the proposed uncertainty measures of the absence of data scenario with different weight decay levels. One can clearly see that the regions with no training data are more accurately captured as we increase the weight decay level. Specifically, the unexplained variance is more affected by the weight decay level, as it corresponds to the aleatoric uncertainty, which is related to the prior on the weights. Readers are referred to Section 6.7 of Gal's thesis for more information about the effects of weight decay and dropout on a Bayesian neural network.
Figure 3 also shows the experimental results of the composition of functions scenario. As a single density network cannot model the composition of two functions, the reconstructed results shown with red circles in Figure 3 are poor, as are the aleatoric and epistemic uncertainties. On the other hand, the reconstructed results from the MDN with multiple mixtures accurately model the composition of the two functions.² Furthermore, the explained and unexplained variances show clear differences. In particular, the unexplained variance is low almost everywhere, whereas the explained variance has both high and low values, proportional to the difference between the two composed functions. As the training data itself does not contain any measurement noise, the unexplained variance has low values over the input domain. However, the explained variance becomes high where the differences between the two possible outputs are high, as it becomes harder to fit the training data in such regions.

² A composition of two functions is not a proper function, as there exist two different outputs for a single input. However, an MDN can model this as it consists of multiple mean functions.
VI Uncertainty-Aware Learning from Demonstration to Drive
We propose uncertainty-aware LfD (UALfD), which combines a learning-based approach with a rule-based approach by switching the mode of the controller using the uncertainty measure analyzed in Section V. In particular, the explained variance (8) is used as the measure of uncertainty, as it estimates the model uncertainty. The proposed method makes the best of both approaches by using the model uncertainty as the switching criterion. The proposed UALfD is applied to an aggressive driving task using a real-world driving dataset, where it significantly improves the driving performance in terms of both safety and efficiency by incorporating the uncertainty information.
For evaluating the proposed uncertainty-aware learning from demonstration method, we use the Next-Generation Simulation (NGSIM) dataset collected from US Highway 101, which provides real-world vehicle trajectories as well as CAD files for road descriptions. Figure 4 illustrates the road configuration of US Highway 101, which consists of three segments and six lanes. For testing, we only use the second segment, where the initial position of the ego car is at the start of the third lane in the second segment and the goal is to reach the third segment. Once the ego car moves outside the track or collides with other cars, it is counted as a collision.
To efficiently collect a sufficient number of driving demonstrations, we use density matching reward learning to automatically generate driving demonstrations in diverse environments by randomly changing the initial position of the ego car and the configurations of other cars. Demonstrations with collisions are excluded from the dataset to form collision-free trajectories. However, it is also possible to manually collect a sufficient number of demonstrations using human-in-the-loop simulations.
We define a learning-based driving policy as a mapping from input features to the trigonometrically encoded desired heading angle of a car. Figure 6 illustrates the obtainable features of the track driving simulator. A seven-dimensional feature representation is used as the input, which consists of three frontal distances to the closest cars in the left, center, and right lanes in front, three rearward distances to the closest cars in the left, center, and right lanes behind, and the lane deviation distance.
| Collision Ratio [%] | Min. Dist. to Cars | Lane Dev. Dist. [mm] | Lane Dev. Deg. | Elapsed Time [s] | Num. Lane Change |
We trained three different network configurations: an MDN with ten mixtures, MDN (K=10); an MDN with one mixture, MDN (K=1), which is identical to the density network used in [6, 9]; and a baseline fully connected network, RegNet, trained with a squared loss. All networks have two hidden layers with the same number of nodes and activation function. The proposed uncertainty-aware learning from demonstration method, UALfD, switches its mode to the safe policy when the explained variance is higher than a threshold. As the variance is not scaled, we manually tune the threshold. However, one could also choose the threshold based on the percentile of an empirical cumulative probability distribution. We also implemented a variant, UALfD2, which utilizes the unexplained variance instead of the explained variance, to justify the usage of the explained variance as the uncertainty measure for LfD. To avoid an immediate collision, UALfD switches to the safe mode when the frontal distance is below a threshold, and the other methods switch in the same manner.³

³ We also tested other distance thresholds, but the results were worse in terms of collision ratio.
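The switching rule can be sketched as follows (a minimal sketch; the function name and threshold values are illustrative, not the paper's tuned values):

```python
def ualfd_action(explained_var, frontal_dist, learned_action, safe_action,
                 var_threshold=1.0, dist_threshold=5.0):
    """Mode switching of uncertainty-aware LfD: fall back to the
    rule-based safe policy when the explained variance of the MDN
    exceeds a threshold (high model uncertainty) or the frontal
    distance signals an immediate collision risk."""
    if explained_var > var_threshold or frontal_dist < dist_threshold:
        return safe_action
    return learned_action

print(ualfd_action(2.0, 10.0, "learned", "safe"))  # safe (uncertain model)
print(ualfd_action(0.1, 10.0, "learned", "safe"))  # learned
```

Using the explained variance here, rather than the unexplained variance, is exactly the choice that UALfD2 is designed to ablate.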
Figure 5 illustrates the total, explained, and unexplained variances estimated at each location using an MDN with ten mixtures. Here, we can see clear differences between the explained and unexplained variances in the two squares with black dotted lines, where the explained variance better captures the model uncertainty, possibly due to the lack of training data. Furthermore, we can see that the regions where the desired headings depicted with gray arrows are not accurate, e.g., leading to a collision, have higher explained variance, and this does not necessarily depend on the distance to the frontal car. This supports our claim that the explained variance is more suitable for estimating modeling error.
Once a desired heading is predicted, the ego car is controlled using a simple feedback controller whose angular velocity is proportional to the difference between the current heading and the desired heading from the learned controller, normalized between $-\pi$ and $\pi$. The directional velocity and the control frequency are fixed. While we use a simple unicycle dynamics model, more complex dynamics models, e.g., vehicle and bicycle dynamics models, can also be used.
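A minimal sketch of such a proportional heading controller on unicycle dynamics (the gain, fixed velocity, and control period below are illustrative, not the paper's values):

```python
import math

K_P = 1.0        # hypothetical proportional gain
V = 50 / 3.6     # fixed directional velocity [m/s], illustrative
DT = 0.1         # control period [s], illustrative

def wrap_to_pi(a):
    """Normalize an angle difference into (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def unicycle_step(x, y, theta, theta_des):
    """One control step: proportional heading feedback applied to
    simple unicycle dynamics with a fixed directional velocity."""
    w = K_P * wrap_to_pi(theta_des - theta)   # angular velocity command
    x += V * math.cos(theta) * DT
    y += V * math.sin(theta) * DT
    theta = wrap_to_pi(theta + w * DT)
    return x, y, theta
```

The angle wrapping is what implements the normalization between $-\pi$ and $\pi$; without it the controller could take the long way around after a heading discontinuity.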
We also carefully designed a rule-based safe controller that keeps its lane without collision, where the directional and angular velocities are computed by rules based on the directional velocities of the frontal and rearward cars.
We conducted two sets of experiments, one using all of the cars on the road and the other using a subset of the cars. The quantitative results are shown in Table IV. The results show that the driving policy incorporating the proposed uncertainty measure clearly improves both the safety and stability of driving. We would like to emphasize that the average elapsed time of the proposed UALfD is the shortest among the compared methods. Figure 7 shows trajectories and snapshots of the driving results of different methods. Among the compared methods, the proposed UALfD, UALfD2, MDN (K=10), and Safe Mode safely navigate without colliding with other moving cars. On the other hand, the cars controlled by MDN (K=1) and RegNet collide with another car and move outside the track, which clearly shows the advantage of using a mixture density network for modeling human demonstrations. Furthermore, while both UALfD and UALfD2 navigate without a collision, their average elapsed times and average numbers of lane changes differ greatly. This is mainly due to the fact that the unexplained variance captures the measurement noise rather than the model uncertainty, which makes the UALfD2 control conservative, similar to that of the Safe Mode.
VII Conclusion

In this paper, we proposed a novel uncertainty estimation method using a mixture density network. Unlike existing approaches that rely on an ensemble of multiple models or Monte Carlo sampling with stochastic forward passes, the proposed uncertainty acquisition method runs with a single feedforward model without computationally heavy sampling. We showed that the proposed uncertainty measure can be decomposed into explained and unexplained variances, analyzed its properties in three different cases, absence of data, heavy measurement noise, and composition of functions scenarios, and showed that the two types of variances can effectively distinguish the three cases. Furthermore, we proposed an uncertainty-aware learning from demonstration method using the proposed uncertainty estimation and successfully applied it to a real-world driving dataset.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
-  R. Collobert and J. Weston, “A unified architecture for natural language processing: Deep neural networks with multitask learning,” in Proc. of the International Conference on Machine Learning, 2008, pp. 160–167.
-  J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz, “Trust region policy optimization,” in Proc. of the International Conference on Machine Learning, 2015, pp. 1889–1897.
-  D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete problems in ai safety,” arXiv preprint arXiv:1606.06565, 2016.
-  AP and REUTERS, “Tesla working on ’improvements’ to its autopilot radar changes after model s owner became the first self-driving fatality.” June 2016. [Online]. Available: https://goo.gl/XkzzQd
-  A. Kendall and Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?” arXiv preprint arXiv:1703.04977, 2017.
-  C. M. Bishop, “Mixture density networks,” 1994.
-  A. Brando Guillaumes, “Mixture density networks for distribution and uncertainty estimation,” Master’s thesis, Universitat Politècnica de Catalunya, 2017.
-  B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” arXiv preprint arXiv:1612.01474, 2016.
-  Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” in Proc. of the International Conference on Machine Learning, 2016, pp. 1050–1059.
-  Y. Gal, “Uncertainty in deep learning,” Ph.D. dissertation, University of Cambridge, 2016.
-  G. J. McLachlan and K. E. Basford, Mixture models: Inference and applications to clustering. Marcel Dekker, 1988, vol. 84.
-  J. Colyar and J. Halkias, “Us highway 101 dataset,” Federal Highway Administration (FHWA), Tech. Rep., 2007.
-  S. Ross, “Interactive learning for sequential decisions and predictions,” Ph.D. dissertation, Carnegie Mellon University, 2013.
-  N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting.” Journal of machine learning research, vol. 15, no. 1, pp. 1929–1958, 2014.
-  G. Kahn, A. Villaflor, V. Pong, P. Abbeel, and S. Levine, “Uncertainty-aware reinforcement learning for collision avoidance,” arXiv preprint arXiv:1702.01182, 2017.
-  C. Richter and N. Roy, “Safe visual navigation via deep learning and novelty detection,” in Proc. of the Robotics: Science and Systems Conference, 2017.
-  N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,” arXiv preprint arXiv:1701.06538, 2017.
-  R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification. Wiley, New York, 1973.
-  M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
-  S. Choi, K. Lee, A. Park, and S. Oh, “Density matching reward learning,” arXiv preprint arXiv:1608.03694, 2016.