1 Introduction
In state-of-the-art research, the majority of CNN-based (convolutional neural network) classifiers are trained to output normalized prediction scores for an observation given the set of classes, that is, scores in the interval [0, 1] [15]. Normalized outputs aim to guarantee a "probabilistic" interpretation. However, how reliable are these predictions in terms of probabilistic interpretation? And, given an example of a non-trained class, how confident is the model? These are the key questions addressed in this work.

Currently, possible answers to these open issues involve calibration techniques and penalizing overconfident output distributions [2, 12]. Regularization is often used to reduce overconfidence, and consequently overfitting, e.g., the confidence penalty [12], which is added directly to the cost function. Examples of transformations of the network weights include L1 and L2 regularization [10], Dropout [14], Multi-sample Dropout [4],
and Batch Normalization [5]. Alternatively, highly confident predictions can often be mitigated by calibration techniques such as Isotonic Regression [16], which combines binary probability estimates of multiple classes, jointly optimizing the bin boundaries and bin predictions; Platt Scaling [13], which uses the classifier predictions as features for a logistic regression model; Beta Calibration [7], a parametric formulation based on the Beta probability density function (pdf); and temperature scaling (TS) [3], which multiplies all values of the Logit vector by a scalar parameter 1/T, for all classes. The value of T is obtained by minimizing the negative log-likelihood on the validation set.

Typically, post-calibration predictions are analysed via reliability diagrams [6, 2], which illustrate the relationship between the model's prediction scores and the true correctness likelihood [11]. Reliability diagrams show the expected accuracy of the examples as a function of confidence, i.e., the maximum SoftMax value. A perfectly calibrated model traces the identity function, while any deviation from the diagonal represents a calibration error [6, 2], as shown in Fig. 1(a) and 1(b) with the uncalibrated and the temperature-scaled predictions on the testing set. Fig. 1(c), in turn, shows the distribution (histogram) of the scores, which is, even after calibration, still overconfident. Consequently, calibration does not guarantee a well-balanced distribution of the prediction scores and may jeopardize an adequate probabilistic interpretation.
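To make the temperature-scaling step concrete, the sketch below (not from the original paper; the function name and the search bounds are our own assumptions) fits the scalar T by minimizing the negative log-likelihood of the softened SoftMax on held-out validation logits:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(val_logits, val_labels):
    """Fit the temperature T on validation data.

    val_logits: (N, K) array of Logit-layer scores.
    val_labels: (N,) array of integer class labels.
    Returns the T that minimizes the negative log-likelihood of
    softmax(logits / T); T > 1 softens overconfident predictions.
    """
    def nll(T):
        z = val_logits / T
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(val_labels)), val_labels].mean()

    # Bounded scalar search; the (0.05, 10) range is an illustrative choice.
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
```

Note that TS rescales all logits by the same factor, so the argmax (and hence the accuracy) is unchanged; only the confidence of the predictions is adjusted.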
Complex networks such as multilayer perceptrons (MLPs) and CNNs are generally overconfident in the prediction phase, particularly when using the baseline SoftMax as the prediction function, generating ill-distributed outputs, i.e., values very close to zero or one [2]. Taking into account the importance of models grounded on proper probability assumptions, which enable an adequate interpretation of the outputs and hence reliable decisions, this paper aims to contribute to the advances of multi-sensor (camera and LiDAR) perception for autonomous vehicle systems [8, 1, 9] by using pdfs (calculated on the training data) to model the Logit-layer scores. The SoftMax is then replaced by a Maximum Likelihood (ML) or a Maximum A Posteriori (MAP) prediction layer, which provides a smoother distribution of predictive values. Note that it is not necessary to retrain the CNN, i.e., the proposed technique is practical.

2 Effects of Replacing the SoftMax Layer by a Probability Layer
The key contribution of this work is to replace the SoftMax layer (a "hard" normalization function) by a probabilistic layer (an ML or a MAP layer) during the testing phase. The new layers make inferences based on pdfs calculated over the Logit prediction scores using the training set. The SoftMax scores are known to be overconfident (very close to zero or one); the distribution of the scores at the Logit layer, on the other hand, is far more appropriate for representing a pdf (as shown in Fig. 2). Therefore, replacement by an ML or a MAP layer is more adequate for probabilistic inference, permitting decision-making under uncertainty, which is particularly relevant in autonomous driving and robotic perception systems.
Let z be the output score vector^1 of the CNN in the Logit layer for an example x, let C_i be the target class, and let P(z|C_i) be the class-conditional probability to be modelled in order to make probabilistic predictions. In this paper, a non-parametric pdf estimation, using histograms computed over the predicted Logit-layer scores on the training set (with the number of bins chosen separately for the ML and the MAP models), was applied to estimate P(z|C_i). Assuming the priors P(C_i) are uniform and identically distributed over the set of classes, an ML prediction is straightforwardly obtained by normalizing P(z|C_i) by the evidence during the prediction phase. Additionally, to avoid 'zero' probabilities and to incorporate some level of uncertainty into the final prediction, additive smoothing is applied before the calculation of the posteriors. Alternatively, a MAP layer can be used by considering, for instance, the a-priori P(C_i) as modelled by a Gaussian distribution; the posterior then becomes P(C_i|z) ∝ P(z|C_i) P(C_i), where P(C_i) = N(μ_i, σ_i²), with the mean μ_i and variance σ_i² calculated, per class, from the training set. The rationale for using Normally-distributed priors is that, contrary to histograms or more tailored distributions, the Normal pdf fits the data more smoothly.

^1 The dimensionality of z is proportional to the number of classes.
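A minimal sketch of such a histogram-based ML layer follows. It is illustrative only: the bin count, the smoothing factor, and the assumption of independence across Logit dimensions are our own simplifying choices, not the paper's exact settings.

```python
import numpy as np

class MLLayer:
    """Test-time replacement for SoftMax: per-class histograms of the
    Logit-layer scores, estimated on the training set, approximate
    P(z | C_i); prediction is argmax_i P(z | C_i)."""

    def __init__(self, n_bins=30, alpha=1e-3):
        self.n_bins = n_bins    # illustrative bin count
        self.alpha = alpha      # additive smoothing to avoid zero probabilities

    def fit(self, train_logits, train_labels):
        self.classes_ = np.unique(train_labels)
        # Shared bin edges per Logit dimension, spanning the training range.
        self.edges_ = [np.linspace(train_logits[:, d].min(),
                                   train_logits[:, d].max(),
                                   self.n_bins + 1)
                       for d in range(train_logits.shape[1])]
        self.hist_ = {}
        for c in self.classes_:
            Z = train_logits[train_labels == c]
            per_dim = []
            for d, edges in enumerate(self.edges_):
                counts, _ = np.histogram(Z[:, d], bins=edges)
                # Smoothed bin probabilities ~ P(z_d | C_c)
                probs = (counts + self.alpha) / (counts.sum() + self.alpha * self.n_bins)
                per_dim.append(probs)
            self.hist_[c] = per_dim
        return self

    def log_likelihood(self, logits):
        out = np.zeros((len(logits), len(self.classes_)))
        for i, c in enumerate(self.classes_):
            for d, edges in enumerate(self.edges_):
                # Map each score to its bin (clipped to the training range).
                idx = np.clip(np.searchsorted(edges, logits[:, d]) - 1,
                              0, self.n_bins - 1)
                out[:, i] += np.log(self.hist_[c][d][idx])
        return out

    def predict(self, logits):
        return self.classes_[np.argmax(self.log_likelihood(logits), axis=1)]
```

Since the CNN itself is untouched, the layer is fitted once on the training-set logits and then simply swapped in for the SoftMax at test time.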
3 Evaluation and Discussion
Table 1: F-score and average FPR prediction scores, per modality.
In this work, the CNN is an Inception model. The classes of interest are pedestrians ('ped'), cars ('car'), and cyclists ('cyc'); the classification dataset is based on the KITTI object benchmark [1], with dedicated sets of training and testing examples for each class.
The output scores of the CNN indicate a degree of certainty of the given prediction. This "certainty level" can be defined as the confidence of the model and, in a classification problem, corresponds to the maximum value within the SoftMax layer, i.e., ideally equal to one for the target class. However, the output scores may not always represent a reliable indication of certainty with regard to the target class, especially when unseen or non-trained examples/objects occur in the prediction stage; this is particularly relevant for real-world applications involving autonomous robots and vehicles, where unpredictable objects are highly likely to be encountered. With this in mind, in addition to the trained classes ('ped', 'car', 'cyc'), a set of untrained objects is introduced: 'person_sit.', 'tram', 'truck', 'vans', and 'trees/trunks'. All of these classes are taken directly from the aforementioned KITTI dataset, with the exception of 'trees/trunks', which is additionally introduced by this study. The rationale is to evaluate the prediction confidence of the networks on objects that do not belong to any of the trained classes, and consequently to assess the consistency of the models. Ideally, if the classifiers were perfectly consistent in terms of probability interpretation, the prediction scores would be identical (equal to 1/3) for all the examples of the unseen set, on a per-class basis.
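The consistency check on the unseen set can be summarised by the mean and sample variance of the per-example maximum score, as reported later for Table 1. The helper below is our own illustration (function and variable names are assumptions, not the paper's code):

```python
import numpy as np

def softmax(z):
    """Row-wise SoftMax with a shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_stats(scores):
    """Mean and sample variance of the per-example maximum score.

    For a perfectly consistent 3-class model on unseen objects, the
    mean would approach 1/3 with near-zero variance; values near 1
    indicate overconfidence on objects of untrained classes.
    """
    conf = scores.max(axis=1)
    return conf.mean(), conf.var(ddof=1)
```

Applied to the SoftMax outputs and to the ML/MAP outputs on the same unseen examples, lower mean confidence indicates the desired behaviour on untrained classes.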
Results on the testing set are shown in Table 1 in terms of the F-score metric and the average of the FPR prediction scores (classification errors). The average (μ) and the sample variance (σ²) of the predicted scores are also shown for the unseen testing set.
To summarise, the proposed probabilistic approach shows promising results, since the ML and MAP layers reduce classifier overconfidence, as can be observed in Figures 2(c), 2(d), 2(e) and 2(f). With reference to Table 1, the FPR values are considerably lower than the results obtained with the SoftMax (baseline) function. Finally, to assess classifier robustness, i.e., the uncertainty of the model when predicting examples of classes the network was not trained on, we consider a testing set comprised of 'new' objects. Overall, the results are encouraging, since the distributions of the predictions are not concentrated at the extremes, as can be observed in Fig. 4. Quantitatively, the average scores of the network using ML and MAP layers are significantly lower than with the SoftMax approach, and the models are thus less confident on new/unseen negative objects.
References
[1] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research (IJRR) 32(11).
[2] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger (2017) On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 1321–1330.
[3] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. In NIPS.
[4] H. Inoue (2019) Multi-sample dropout for accelerated training and better generalization. arXiv:1905.09788.
[5] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In 32nd ICML, F. Bach and D. Blei (Eds.), Vol. 37, pp. 448–456.
[6] M. Kull, M. Perello-Nieto, M. Kängsepp, T. Silva Filho, H. Song, and P. Flach (2019) Beyond temperature scaling: obtaining well-calibrated multi-class probabilities with Dirichlet calibration. In Advances in Neural Information Processing Systems 32, pp. 12316–12326.
[7] M. Kull, T. Silva Filho, and P. Flach (2017) Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In 20th AISTATS, pp. 623–631.
[8] M. Martin, A. Roitberg, M. Haurilet, M. Horne, S. Reiß, M. Voit, and R. Stiefelhagen (2019) Drive&Act: a multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. In ICCV.
[9] G. Melotti, C. Premebida, and N. Gonçalves (2020) Multimodal deep-learning for object recognition combining camera and LIDAR data. In IEEE ICARSC.
[10] A. Y. Ng (2004) Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML '04.
[11] A. Niculescu-Mizil and R. Caruana (2005) Predicting good probabilities with supervised learning. In ICML, pp. 625–632.
[12] G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, and G. Hinton (2017) Regularizing neural networks by penalizing confident output distributions. arXiv:1701.06548.
[13] J. Platt (2000) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers 10.
[14] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(56), pp. 1929–1958.
[15] D. Su, H. Zhang, H. Chen, J. Yi, P.-Y. Chen, and Y. Gao (2018) Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models. In ECCV.
[16] B. Zadrozny and C. Elkan (2002) Transforming classifier scores into accurate multiclass probability estimates. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.