Probabilistic Object Classification using CNN ML-MAP layers

05/29/2020
by G. Melotti et al.
Aston University
University of Coimbra

Deep networks are currently the state of the art for sensory perception in autonomous driving and robotics. However, deep models often generate overconfident predictions that preclude a proper probabilistic interpretation, which we argue is due to the nature of the SoftMax layer. To reduce this overconfidence without compromising classification performance, we introduce a CNN probabilistic approach based on distributions calculated in the network's Logit layer. The approach enables Bayesian inference by means of Maximum Likelihood (ML) and Maximum A Posteriori (MAP) layers. Experiments with calibrated and with the proposed prediction layers are carried out on object classification using data from the KITTI database. Results are reported for camera (RGB) and LiDAR (range-view) modalities, where the new approach shows promising performance compared to SoftMax.


1 Introduction

In state-of-the-art research, the majority of CNN-based (convolutional neural network) classifiers are trained to provide normalized prediction scores of the observations given the set of classes, that is, scores in the interval [0, 1] [15]. Normalized outputs aim to guarantee a “probabilistic” interpretation. However, how reliable are these predictions in terms of probabilistic interpretation? Also, given an example of a non-trained class, how confident is the model? These are the key questions addressed in this work.

Currently, possible answers to these open issues are related to calibration techniques and to penalizing overconfident output distributions [2, 12]. Regularization is often used to reduce overconfidence, and consequently overfitting, such as the confidence penalty [12], which is added directly to the cost function. Examples of transformations of the network weights include L1 and L2 regularization [10], Dropout [14], Multi-sample Dropout [4], and Batch Normalization [5]. Alternatively, highly confident predictions can often be mitigated by calibration techniques such as Isotonic Regression [16], which combines binary probability estimates of multiple classes, thus jointly optimizing the bin boundaries and bin predictions; Platt Scaling [13], which uses classifier predictions as features for a logistic regression model; Beta Calibration [7], which uses a parametric formulation based on the Beta probability density function (pdf); and temperature scaling (TS) [3], which scales all values of the Logit vector by a scalar parameter T, for all classes. The value of T is obtained by minimizing the negative log-likelihood on the validation set.
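As a rough illustration of the last point, the sketch below fits a single scalar parameter by minimizing the negative log-likelihood (NLL) on held-out data and then applies it to the test Logits; the arrays `val_logits`, `val_labels`, and `test_logits` are assumed inputs, and the optimization bounds are illustrative choices rather than values from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)              # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(t, logits, labels):
    p = softmax(logits * t)                           # scale every Logit vector by t
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

# t is chosen by minimizing the NLL on the validation set (bounds are illustrative).
res = minimize_scalar(lambda t: nll(t, val_logits, val_labels),
                      bounds=(0.05, 10.0), method="bounded")
calibrated_test_scores = softmax(test_logits * res.x)  # TS-calibrated predictions
```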

Typically, post-calibration predictions are analysed via reliability diagram representations [6, 2], which illustrate the relationship between the model’s prediction scores and the true correctness likelihood [11]. Reliability diagrams show the expected accuracy of the examples as a function of confidence, i.e., the maximum SoftMax value. A perfectly calibrated model traces the identity function, while any deviation from the diagonal represents a calibration error [6, 2], as shown in Fig. 1(a) and 1(b) with the uncalibrated (UC) and temperature-scaled (TS) predictions on the testing set. Fig. 1(c), in turn, shows the distribution of the prediction scores (histogram), which is, even after calibration, still overconfident. Consequently, calibration does not guarantee a good balance of the prediction scores and may jeopardize an adequate probabilistic interpretation.
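A minimal sketch of how such a reliability diagram can be computed is given below, assuming `probs` holds the test-set SoftMax outputs and `labels` the ground-truth class indices; the bin count and binning scheme are illustrative.

```python
import numpy as np

def reliability_bins(probs, labels, n_bins=10):
    conf = probs.max(axis=1)                           # confidence = max SoftMax score
    pred = probs.argmax(axis=1)                        # predicted class
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_conf, bin_acc = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            bin_conf.append(conf[mask].mean())                   # mean confidence in bin
            bin_acc.append((pred[mask] == labels[mask]).mean())  # empirical accuracy in bin
    # For a perfectly calibrated model, bin_acc tracks bin_conf (the diagonal).
    return np.array(bin_conf), np.array(bin_acc)
```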

Figure 1: Reliability diagrams, on the testing set, for the RGB modality (first row): uncalibrated (UC) in (a) and temperature scaling (TS) in (b). Subfigure (c) shows the distribution of the calibrated prediction scores using SoftMax (SM). The second row shows the LiDAR (range-view: RV) modality reliability diagrams in (d) and (e), while (f) shows the corresponding prediction-score distribution. Note that (c) and (f) are still overconfident post-calibration.

Complex networks such as multilayer perceptrons (MLPs) and CNNs are generally overconfident in the prediction phase, particularly when using the baseline SoftMax as the prediction function, generating ill-distributed outputs, i.e., values very close to zero or one [2]. Taking into account the importance of having models grounded on proper probability assumptions, which enable an adequate interpretation of the outputs and hence reliable decisions, this paper aims to contribute to the advances of multi-sensor (RGB and LiDAR) perception for autonomous vehicle systems [8, 1, 9] by using pdfs (calculated on the training data) to model the Logit-layer scores. The SoftMax is then replaced by a Maximum Likelihood (ML) or a Maximum A Posteriori (MAP) prediction layer, which provides a smoother distribution of predictive values. Note that it is not necessary to re-train the CNN, i.e., the proposed technique is practical.
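Because no re-training is required, the Logit scores can be read off an already-trained network. The sketch below shows one hypothetical way to do this for a Keras-style model whose pre-SoftMax layer is assumed to be named "logits"; the model path, layer name, and `x_train`/`x_test` arrays are placeholders, not the paper's code.

```python
import tensorflow as tf

# Load the already-trained classifier; the weights are left untouched.
model = tf.keras.models.load_model("trained_cnn.h5")

# Build a view of the network that stops at the Logit layer (the layer feeding SoftMax).
logit_model = tf.keras.Model(inputs=model.input,
                             outputs=model.get_layer("logits").output)

train_logits = logit_model.predict(x_train)   # used to fit the class-conditional pdfs
test_logits = logit_model.predict(x_test)     # fed to the ML/MAP layers at test time
```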

Figure 2: Probability density functions (pdfs), here modeled by histograms, calculated for the Logit-layer scores of the two modalities, shown in (a) and (c); the corresponding SoftMax-layer scores are shown in (b) and (d). The graphs in (a)-(d) are organized, left to right, according to the examples of the training set (positives in orange). The distributions of the SoftMax prediction scores in (b) and (d) are evidence of high confidence.

2 Effects of Replacing the SoftMax Layer by a Probability Layer

The key contribution of this work is to replace the SoftMax layer (which is a “hard” normalization function) by a probabilistic layer (an ML or a MAP layer) during the testing phase. The new layers make inference based on pdfs calculated on the Logit prediction scores using the training set. It is known that the SoftMax scores are overconfident (very close to zero or one); on the other hand, the distribution of the scores at the Logit-layer is far more appropriate to represent a pdf (as shown in Fig. 2). Therefore, a replacement by ML or MAP layers is more adequate to perform probabilistic inference, permitting decision-making under uncertainty, which is particularly relevant in autonomous driving and robotic perception systems.

Let $\mathbf{z}$ be the output score vector of the CNN in the Logit-layer for an example $\mathbf{x}$ (the dimensionality of $\mathbf{z}$ is proportional to the number of classes), let $C_i$ be the target class, and let $p(\mathbf{z}|C_i)$ be the class-conditional probability to be modelled in order to make probabilistic predictions. In this paper, a non-parametric pdf estimation, using histograms over the predicted Logit-layer scores of the training set (with one bin setting for the ML case and another for the MAP model), is applied to estimate $p(\mathbf{z}|C_i)$. Assuming the priors are uniform and identically distributed over the set of classes $\{C_1,\ldots,C_K\}$, a posterior is straightforwardly calculated by normalizing $p(\mathbf{z}|C_i)$ by the evidence $\sum_{j} p(\mathbf{z}|C_j)$ during the prediction phase. Additionally, to avoid ‘zero’ probabilities and to incorporate some level of uncertainty in the final prediction, we apply additive smoothing (with a small constant factor) before the calculation of the posteriors. Alternatively, a MAP layer can be used by considering, for instance, the a-priori $P(C_i)$ as modelled by a Gaussian distribution; the posterior then becomes $P(C_i|\mathbf{z}) \propto p(\mathbf{z}|C_i)\,P(C_i)$, where the Gaussian prior has mean $\mu_i$ and variance $\sigma_i^2$ calculated, per class, from the training set. To simplify, the rationale for using Normal-distributed priors is that, contrary to histograms or more tailored distributions, the Normal pdf fits the data more smoothly.
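The sketch below illustrates one possible implementation of the ML and MAP layers under simplifying assumptions: for each class $C_i$, the class-conditional pdf is approximated by a 1-D histogram of the $i$-th Logit score over the training examples of that class, and the MAP prior is a Gaussian fitted to the same scores. The bin count and smoothing factor are illustrative choices, not the paper's exact settings.

```python
import numpy as np

class ProbabilisticLayer:
    """Histogram-based ML and Gaussian-prior MAP prediction layers (sketch)."""

    def __init__(self, n_bins=50, smooth=1e-3):
        self.n_bins, self.smooth = n_bins, smooth

    def fit(self, train_logits, train_labels):
        # train_logits: (N, n_classes) Logit scores; train_labels: integer class indices.
        n_classes = train_logits.shape[1]
        self.hist, self.edges = [], []
        self.mu, self.sigma = np.zeros(n_classes), np.zeros(n_classes)
        for i in range(n_classes):
            scores = train_logits[train_labels == i, i]        # i-th logit, class-i examples
            h, e = np.histogram(scores, bins=self.n_bins, density=True)
            self.hist.append(h + self.smooth)                  # additive smoothing
            self.edges.append(e)
            self.mu[i], self.sigma[i] = scores.mean(), scores.std()
        return self

    def _likelihood(self, z):
        # Look up p(z_i | C_i) in the per-class histogram.
        lik = np.empty(len(self.hist))
        for i, (h, e) in enumerate(zip(self.hist, self.edges)):
            idx = np.clip(np.digitize(z[i], e) - 1, 0, self.n_bins - 1)
            lik[i] = h[idx]
        return lik

    def predict_ml(self, z):
        lik = self._likelihood(z)
        return lik / lik.sum()                                 # uniform priors: normalize likelihoods

    def predict_map(self, z):
        # Gaussian prior per class, evaluated at the class's own Logit score.
        prior = np.exp(-0.5 * ((z - self.mu) / self.sigma) ** 2) / (self.sigma * np.sqrt(2 * np.pi))
        post = self._likelihood(z) * prior                     # p(z|C_i) * P(C_i)
        return post / post.sum()                               # normalized posterior
```

A layer of this kind would be fitted once on the training Logits (`layer = ProbabilisticLayer().fit(train_logits, train_labels)`) and queried per test example with `layer.predict_ml(z)` or `layer.predict_map(z)`.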

3 Evaluation and Discussion

Figure 3: Prediction scores (i.e., the network outputs), on the testing set, using SoftMax (baseline solution), ML, and MAP layers, for the RGB and LiDAR (RV) modalities; panels (a)-(b) show SoftMax-layer scores, (c)-(d) ML-layer scores, and (e)-(f) MAP-layer scores, one panel per modality.
Table 1: Classification performance (%), per modality, in terms of average F-score and FPR for the baseline (SoftMax) models compared to the proposed approach of ML and MAP layers. The performance measures on the ‘unseen’ dataset are the average and the variance of the prediction scores.

In this work the CNN is an Inception-based model. The classes of interest are pedestrians (‘ped’), cars (‘car’), and cyclists (‘cyc’); the classification dataset is built from the KITTI object dataset [1], with dedicated training and testing examples for each of the three classes.

Figure 4: Prediction scores, on the unseen data (comprising the non-trained classes ‘person_sit.’, ‘tram’, ‘trees/trunks’, ‘truck’, ‘vans’), for the networks using the SoftMax (SM) layer (left), and the proposed ML (center) and MAP (right) layers; subfigures (a) and (b) each show SM, ML, and MAP scores on the unseen set.

The output scores of the CNN indicate a degree of certainty of the given prediction. This “certainty level” can be defined as the confidence of the model and, in a classification problem, corresponds to the maximum value within the SoftMax layer, ideally equal to one for the target class. However, the output scores may not always represent a reliable indication of certainty with regard to the target class, especially when unseen or non-trained examples/objects occur in the prediction stage; this is particularly relevant for real-world applications involving autonomous robots and vehicles, since unpredictable objects are highly likely to be encountered. With this in mind, in addition to the trained classes (‘ped’, ‘car’, ‘cyc’), a set of untrained object classes is introduced: ‘person_sit.’, ‘tram’, ‘truck’, ‘vans’, and ‘trees/trunks’. All of these classes except ‘trees/trunks’ come directly from the aforementioned KITTI dataset, while the latter is additionally introduced by this study. The rationale is to evaluate the prediction confidence of the networks on objects that do not belong to any of the trained classes, and consequently to assess the consistency of the models. Ideally, if the classifiers were perfectly consistent in terms of probabilistic interpretation, the prediction scores would be identical (equal to 1/3) for all the examples of the unseen set, on a per-class basis.
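As a rough illustration of this consistency check, the short sketch below computes the average and sample variance of the maximum prediction score over a hypothetical `unseen_scores` array (one prediction vector per unseen-class example, from the SoftMax, ML, or MAP layer); for a perfectly consistent three-class model these confidences would stay close to 1/3.

```python
import numpy as np

# unseen_scores: (n_examples, 3) prediction vectors obtained on objects
# from classes the network was never trained on.
conf = unseen_scores.max(axis=1)     # confidence assigned to each unseen object
print(f"mean confidence    : {conf.mean():.3f}  (ideal, fully consistent model: ~0.333)")
print(f"confidence variance: {conf.var():.3f}")
```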

Results on the testing set are shown in Table 1 in terms of the F-score metric and the average FPR of the prediction scores (classification errors). The average and the sample variance of the predicted scores are also shown for the unseen testing set.

To summarise, the proposed probabilistic approach shows promising results, since the ML and MAP layers reduce classifier overconfidence, as can be observed in Figures 3(c), 3(d), 3(e) and 3(f). In reference to Table 1, it can be observed that the FPR values are considerably lower than the results obtained with the SoftMax (baseline) function. Finally, to assess classifier robustness, i.e., the uncertainty of the model when predicting examples of classes the network was not trained on, we consider a testing set comprised of ‘new’ objects. Overall, the results are encouraging, since the distributions of the predictions are not concentrated at the extremes, as can be observed in Fig. 4. Quantitatively, the average scores of the network using ML and MAP layers are significantly lower than with the SoftMax approach, and these networks are thus less confident on new/unseen negative objects.

References

  • [1] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research (IJRR) 32 (11). Cited by: §1, §3.
  • [2] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger (2017) On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 1321–1330. Cited by: §1.
  • [3] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. In NIPS. Cited by: §1.
  • [4] H. Inoue (2019) Multi-sample dropout for accelerated training and better generalization. arXiv:1905.09788. Cited by: §1.
  • [5] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In 32nd ICML, F. Bach and D. Blei (Eds.), Vol. 37, pp. 448–456. Cited by: §1.
  • [6] M. Kull, M. Perello Nieto, M. Kängsepp, T. Silva Filho, H. Song, and P. Flach (2019) Beyond temperature scaling: obtaining well-calibrated multi-class probabilities with Dirichlet calibration. In Advances in Neural Information Processing Systems 32, pp. 12316–12326. Cited by: §1.
  • [7] M. Kull, T. Silva Filho, and P. Flach (2017) Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In 20th AISTATS, pp. 623–631. Cited by: §1.
  • [8] M. Martin, A. Roitberg, M. Haurilet, M. Horne, S. Reiss, M. Voit, and R. Stiefelhagen (2019) Drive&Act: a multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. In ICCV. Cited by: §1.
  • [9] G. Melotti, C. Premebida, and N. Gonçalves (2020) Multimodal deep-learning for object recognition combining camera and LIDAR data. In IEEE ICARSC. Cited by: §1.
  • [10] A. Y. Ng (2004) Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML '04. Cited by: §1.
  • [11] A. Niculescu-Mizil and R. Caruana (2005) Predicting good probabilities with supervised learning. In ICML, pp. 625–632. Cited by: §1.
  • [12] G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, and G. E. Hinton (2017) Regularizing neural networks by penalizing confident output distributions. arXiv:1701.06548. Cited by: §1.
  • [13] J. Platt (2000) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers 10. Cited by: §1.
  • [14] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (56), pp. 1929–1958. Cited by: §1.
  • [15] D. Su, H. Zhang, H. Chen, J. Yi, P. Chen, and Y. Gao (2018) Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models. In ECCV. Cited by: §1.
  • [16] B. Zadrozny and C. Elkan (2002) Transforming classifier scores into accurate multiclass probability estimates. In ACM SIGKDD International Conference on KDD. Cited by: §1.