Explaining Regression Based Neural Network Model

04/15/2020, by Mégane Millan, et al., Sorbonne Université UPMC

Several methods have been proposed to explain Deep Neural Networks (DNN). However, to our knowledge, only classification networks have been studied to try to determine which input dimensions motivated the decision. Furthermore, as there is no ground truth to this problem, results are only assessed qualitatively with regard to what would be meaningful for a human. In this work, we design an experimental setting where the ground truth can be established: we generate ideal signals and disrupted signals containing errors, and learn a neural network that determines the quality of the signals. This quality is simply a score based on the distance between a disrupted signal and the corresponding ideal signal. We then try to find out how the network estimated this score, hoping to recover the time-steps and dimensions of the signal where errors are present. This experimental setting enables us to compare several methods for network explanation and to propose a new method, named AGRA for Accurate GRAdient, based on several trainings that decrease the noise present in most state-of-the-art results. Comparative results show that the proposed method outperforms state-of-the-art methods for locating the time-steps where errors occur in the signal.


I Introduction

Machine learning is increasingly present in today’s life since the arrival of the first Convolutional Neural Networks (CNN) [6]. The performances achieved by such networks are impressive and have led to their deployment in many applications, such as smart vehicles. Despite these performances, errors still occur and can have dramatic consequences, especially in applications where lives are at stake. Furthermore, in medical fields, for example, it is desirable not only to have a final classification result but also to know the causes of the decision. For all these reasons, more and more research is being conducted on DNN explanation, as mentioned in recent literature reviews [2], [10], [18].

To our knowledge, all these methods try to explain DNNs trained for a classification task: the goal is to find out which elements of the input led to the decision of the network. Unfortunately, no ground truth exists. Therefore, network-explanation results are only evaluated by looking at the produced maps and comparing them to what a human operator believes to be correct. Without an objective tool that quantifies results, it is difficult to compare the results of different methods.

In this article, we propose to build an experimental setup, associated with a ground truth, to quantify network-explanation results. This setup aims at estimating signal quality: we created a database of ideal signals to which errors were added at random positions. A score is associated with each signal, depending on the distance between an example and its ideal version. A CNN is trained in regression to predict this score. Then, the network explanation aims at determining which parts of the input (temporal positions and dimensions) determined the score provided by the network. Such a setup with a ground truth enables us to compare different DNN-interpretability algorithms quantitatively.

In order to determine the time-steps and dimensions of the input signal where errors occurred, we perform a gradient descent that transforms the input into a signal with the best possible score. This gradient descent provides a gradient with respect to the input signal. Such a strategy is not new and it is known that these gradients are very noisy [14], [4]. During our experiments, we found that these gradients also vary a lot depending on the training and weight initialisation. Indeed, training the same model several times on the same database leads, for a given input example, to gradients that change a lot from one model to another: some model gradients find some errors but not others, some are very noisy while others are not, etc. We chose to take advantage of these variations to estimate an "accurate gradient" from all the models. The proposed method, named Accurate GRAdient (AGRA), consists in averaging the gradients generated, for the same input signal, by the different trainings and weight initializations.

Thanks to our experimental setup, we quantitatively compare AGRA with several gradient-based methods and show its efficiency. Moreover, AGRA can be combined with other gradient-based methods to improve their performance.

Thus, two main contributions are proposed in this article. First, we develop an experimental database that allows DNN explanation methods to be compared both qualitatively and quantitatively. Second, we introduce a new gradient-based DNN explanation technique, AGRA, that outperforms state-of-the-art methods.

II Related Work

II-A Methods for explaining Deep Neural Networks

Several methods exist in the literature to explain DNNs. Their goal is to find the contribution of each input feature to the output and thus to produce attribution maps. Methods can be grouped into three main categories: class-activation-based approaches, perturbation-based approaches and gradient-based approaches.

II-A1 Class-activation-based approaches

Methods such as Class Activation Map (CAM) [19], Gradient-weighted Class Activation Mapping (Grad-CAM) [11], or Uncertainty based Class Activation Maps (U-CAM) [8] propose to generate class activation maps that highlight the pixels of the image that the model used to make the classification decision. The goal is thus to produce maps similar to human attention regions. These maps are estimated in a multi-class classification context and are class-discriminative.

II-A2 Perturbation-based approaches

The idea of these approaches is to disturb some portions of the input image and look at their influence on the output. The work in [17] consists in systematically occluding different portions of the input image with a grey square and monitoring the output of the classifier. As the probability of the correct class drops significantly when the object is occluded, this technique localizes objects in the scene. Another perturbation-based approach, proposed by Ribeiro et al. [9], is the Local Interpretable Model-Agnostic Explanation (LIME). A model is explained by perturbing the input and constructing a local linear model that can be interpreted. Thus, LIME makes local approximations of the complex decision surface.

II-A3 Gradient-based approaches

Simonyan et al. [13] proposed to compute sensitivity maps as the gradient of the output according to the input pixels in a classification task. If $S_c(x)$ is the score function of the classification network for the class $c$ and input image $x$, then sensitivity maps are defined as:

$$M_c(x) = \frac{\partial S_c(x)}{\partial x} \quad (1)$$

By intuition, important gradient values correspond to locations in the image that have a strong influence on the output.
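
For illustration only, here is a minimal PyTorch sketch of such a sensitivity map; the paper provides no code, so the `model` interface and the assumption that it returns a single scalar score are ours.

```python
import torch

def sensitivity_map(model, x):
    """Gradient of the (assumed scalar) model output with respect to the input, as in Eq. 1."""
    x = x.clone().detach().requires_grad_(True)  # track gradients on the input
    score = model(x).squeeze()                   # assumed: a single scalar score
    score.backward()                             # back-propagate down to the input
    return x.grad.detach()                       # sensitivity map, same shape as x
```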

In practice, these sensitivity maps are very noisy. A first solution to improve them is to change the back-propagation algorithm. Thus, deconvolution networks [17] and Guided Backpropagation [17] propose to discard negative gradient values during the back-propagation step. The idea is to keep only the entries that have a positive influence on the score.

Another problem with gradient-based techniques is that the score function may saturate for important input characteristics [15]. The function may then be flat around these inputs (even though they are important) and thus have a small gradient. Some methods address this problem by computing the global importance of each pixel. Thus, DeepLIFT (Deep Learning Important FeaTures) [12] decomposes the output prediction by back-propagating the contributions of all neurons in the network to every feature of the input.

Layer-wise relevance propagation (LRP) [1] uses a pixel-wise decomposition to understand the contribution of each single pixel of the input image to the score function $S_c(x)$. A propagation rule, applied from the output back to the input, distributes the class relevance found at a given layer onto the previous layer. It leads to a heatmap that highlights the pixels responsible for the predicted class.

Three other methods based on the classical back-propagation algorithm exist to explain DNNs: Gradient×Input [12], Integrated Gradients [16] and SmoothGrad [14].

Gradient×Input [12], [1] was proposed to improve attribution maps. The maps are simply computed as the product between the gradient of the output with respect to the input and the input itself:

$$M_c(x) = x \odot \frac{\partial S_c(x)}{\partial x} \quad (2)$$

Instead of computing the gradient of the output according to the input pixels $x$, Sundararajan et al. [16] integrate the gradients along a path from a baseline $\bar{x}$ to the input $x$. The integrated gradient along the $i$-th dimension of the input is defined as:

$$IG_i(x) = (x_i - \bar{x}_i) \int_{0}^{1} \frac{\partial S_c\left(\bar{x} + \alpha (x - \bar{x})\right)}{\partial x_i} \, d\alpha \quad (3)$$

where $\frac{\partial S_c}{\partial x_i}$ is the gradient of $S_c$ according to $x$ along the $i$-th dimension.

During computation, the integral is approximated via a summation: the gradients at $N$ points lying on the straight line from the baseline $\bar{x}$ to the input $x$ are added. Integrated gradients add up to the difference between the output at $x$ and the output at the baseline $\bar{x}$. Thus, if the baseline has a near-zero score, integrated gradients form an attribution map of the prediction output $S_c(x)$.
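
A hedged sketch of this summation-based approximation, in the same PyTorch style as above; the number of `steps` and the zero baseline are illustrative choices, not values taken from [16].

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Riemann-sum approximation of Eq. 3 (illustrative helper, not the authors' code)."""
    if baseline is None:
        baseline = torch.zeros_like(x)                        # zero baseline, as used later in the paper
    total = torch.zeros_like(x)
    for k in range(1, steps + 1):
        point = (baseline + (k / steps) * (x - baseline)).detach().requires_grad_(True)
        model(point).squeeze().backward()                     # gradient at one point on the straight path
        total += point.grad
    return (x - baseline) * total / steps                     # (x - x_bar) times the averaged gradient
```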

Given its rapid fluctuations, the gradient at a single input $x$ is less meaningful than a local average of gradient values. Thus, SmoothGrad [14] proposes to create an improved sensitivity map based on a smoothing of $M_c(x)$ with a Gaussian kernel. As the direct computation of such a local average in a high-dimensional input space is intractable, Smilkov et al. compute a stochastic approximation by taking random samples in a neighborhood of the input $x$ and averaging the resulting sensitivity maps:

$$\hat{M}_c(x) = \frac{1}{N} \sum_{n=1}^{N} M_c\left(x + \mathcal{N}(0, \sigma^2)\right) \quad (4)$$

where $N$ is the number of noised inputs and $\mathcal{N}(0, \sigma^2)$ is Gaussian noise with zero mean and standard deviation $\sigma$.
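
A corresponding SmoothGrad sketch under the same assumptions; the number of samples and `sigma` are placeholders of ours, since the values used in the paper are not given in this passage.

```python
import torch

def smoothgrad(model, x, n_samples=25, sigma=0.1):
    """Average of sensitivity maps over noised copies of the input (Eq. 4)."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        model(noisy).squeeze().backward()
        grads += noisy.grad
    return grads / n_samples                                  # smoothed sensitivity map
```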

In this article, we also propose to use a gradient-based approach and to denoise the resulting gradient. The proposed approach, based on several trainings, can be combined with other gradient-based methods to improve their performance.

III AGRA method to obtain an accurate gradient

In this work, we first propose to design an experimental setup for explaining DNNs. Then, we introduce a new method that denoises the gradient of the output with respect to the input by using several trainings of the same DNN.

III-A Designing an experimental setup

A problem often encountered with DNN explanation algorithms is the lack of ground truth. It is therefore difficult to quantitatively estimate the performance of such algorithms. To address this issue, we design a setup where this ground truth is available. This setup is composed of 2D temporal signals. Both dimensions are generated using sinusoids of different lengths, to which a small Gaussian noise has been added. These signals represent the ideal signals in the database. We then artificially create perturbations in both dimensions by adding high-frequency Gaussians. The number of perturbations varies uniformly between 0 and 8, and their positions and the dimension in which they appear are also drawn according to a uniform law.

A score, re-scaled to a fixed range, is then given to each of these signals. This score is based on the Mean Square Error (MSE) between the signal without perturbation and the disrupted signal. The maximum score is attributed to ideal signals, while the score gets close to the minimum when many perturbations are present. 1000 signals are generated in this way, 750 of which are used for training and 250 for testing, drawn according to a uniform law.
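
As an illustration of this generation process, here is a NumPy sketch; the sinusoid frequencies, perturbation amplitude and width, and the exact score rescaling are our assumptions, since the paper does not specify them.

```python
import numpy as np

def make_example(length=500, max_perturb=8, rng=None):
    """Sketch of the synthetic 2D signals: ideal sinusoids plus narrow Gaussian perturbations."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0, 1, length)
    ideal = np.stack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 5 * t)])  # 2 x length
    ideal = ideal + 0.01 * rng.standard_normal(ideal.shape)   # small Gaussian noise
    disrupted = ideal.copy()
    for _ in range(rng.integers(0, max_perturb + 1)):         # 0 to 8 perturbations
        dim = rng.integers(0, 2)                              # dimension drawn uniformly
        pos = rng.integers(0, length)                         # position drawn uniformly
        bump = np.exp(-0.5 * ((np.arange(length) - pos) / 3.0) ** 2)
        disrupted[dim] += 0.5 * bump                          # high-frequency Gaussian error
    mse = np.mean((disrupted - ideal) ** 2)
    score = 1.0 / (1.0 + mse)                                 # assumed rescaling: maximal for ideal signals
    return ideal, disrupted, score
```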

The goal of the network is then to regress the score of each input signal, while the goal of the DNN explanation is to find the time-steps and dimensions of the errors. Three examples of signals extracted from the database are presented in Figure 1.

Fig. 1: Examples of one ideal signal (left) and two perturbed signals (middle, with 5 perturbations, and right, with 6 perturbations)

Even if we are working on synthetic examples with a ground truth regarding the DNN explanation, this setup corresponds to real applications that aim to determine the quality of gestures in sports [7] or in a surgical context [3], for instance. In addition to assigning a score, DNN explanation makes it possible to determine where gestures are poorly carried out.

III-B The AGRA method

First, a CNN is trained to regress the scores with an MSE loss between the predicted scores $\hat{y}$ and the ground-truth scores $y$: $\mathcal{L} = (y - \hat{y})^2$. Then, for DNN explanation, the gradient of the output with respect to an input example $x$, as proposed in [13], is computed, without changing the weights of the network. It is used to modify the input so that its score increases. As the goal is to find the differences between ideal signals and perturbed ones, the loss used for gradient back-propagation is the MSE between the predicted score and the optimal score $y^{*}$ (the maximum of the score range in our case): $\mathcal{L}_{grad} = (y^{*} - \hat{y})^2$. Several iterations are performed until the ideal score is reached, as explained in Algorithm 1, where $\eta$ is the learning rate and $\epsilon$ the tolerance: the loop stops when this loss falls below $\epsilon$.

Require: trained model $f$, input signal $x$, optimal score $y^{*}$, learning rate $\eta$, tolerance $\epsilon$
Ensure: GRAD(x)
  $x' \leftarrow x$
  while $\mathcal{L}_{grad}(f(x'), y^{*}) > \epsilon$ do
     $g \leftarrow \partial \mathcal{L}_{grad}(f(x'), y^{*}) / \partial x'$
     $x' \leftarrow x' - \eta \, g$
  end while
  GRAD(x) = x - x'
Algorithm 1 Compute Gradient
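
Under the same PyTorch assumptions as before, Algorithm 1 could be sketched as follows; the learning rate and tolerance are the values quoted for GRAD in Section IV, and the iteration cap is our own safeguard.

```python
import torch

def compute_grad(model, x, y_opt, lr=0.1, tol=0.015, max_iter=1000):
    """Sketch of Algorithm 1: gradient descent on the input until the predicted score is near y_opt."""
    x_prime = x.clone().detach().requires_grad_(True)
    for _ in range(max_iter):                                 # safeguard against non-convergence
        loss = (model(x_prime).squeeze() - y_opt) ** 2        # MSE to the optimal score
        if loss.item() < tol:
            break
        loss.backward()
        with torch.no_grad():
            x_prime -= lr * x_prime.grad                      # move the input towards a better score
        x_prime.grad.zero_()
    return (x - x_prime).detach()                             # GRAD(x) = x - x'
```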

Unfortunately, and as stated before, this gradient is very noisy [14], [4]. Moreover, during our experiments, we observed that it depends significantly on the weight initialisation and on the training of the network. Thus, even if two different trainings lead to similar regression scores, the gradients are highly variable. Two examples of gradients can be found in Figure 2.

We decided to take advantage of these variations and to average the gradients obtained from different models with different trainings, in order to obtain a noise-reduced, more accurate gradient. We therefore train the same network $K$ times to obtain $K$ models. Let $GRAD_k(x)$ be the gradient of the output according to the input obtained with model $k$, as described in Algorithm 1. AGRA is then obtained as described in Algorithm 2. The AGRA method needs several trainings of the same model, which is computationally expensive. However, as shown in Figure 2, the resulting gradients are more accurate. Moreover, they no longer depend on the training and initialisation, which was the case before, when either good or bad gradients could be obtained.

Fig. 2: Gradients obtained from two models with different initialization and with the AGRA method.
Require: input signal $x$, $K$ trained models
Ensure: AGRA(x)
  $AGRA(x) \leftarrow 0$
  for $k = 1$ to $K$ do
     $AGRA(x) \leftarrow AGRA(x) + GRAD_k(x)$
  end for
  $AGRA(x) \leftarrow AGRA(x) / K$
Algorithm 2 Compute Accurate Gradient: AGRA
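
Reusing the `compute_grad` sketch above, Algorithm 2 reduces to an average over the $K$ trained models (again an illustration, not the authors' implementation):

```python
def agra(models, x, y_opt):
    """Sketch of Algorithm 2: average the per-model gradients over K trainings."""
    grads = [compute_grad(model, x, y_opt) for model in models]  # one gradient per trained model
    return sum(grads) / len(grads)                               # accurate gradient AGRA(x)
```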

IV Experimental results

For all methods involved in this section, we use the previously defined loss function $\mathcal{L}_{grad}$ to compute gradients.

IV-A Training procedure

The regression network consists of four temporal convolutional layers without bias, each followed by a pooling layer. Two fully connected layers, separated by a dropout layer and also without bias, end the network. The network is trained with the Adam optimizer [5]. It regresses the quality score and is trained 50 times to obtain 50 models. The mean MSE across these models on the test set is 0.619, with a standard deviation of 0.089, so during prediction the models behave similarly.
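
For concreteness, a hedged PyTorch sketch of such a regression network is given below; the layer counts and the absence of bias follow the text, but the kernel sizes, channel widths, hidden size and dropout rate are placeholders of ours.

```python
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Illustrative architecture only: four bias-free temporal conv blocks, then two FC layers with dropout."""
    def __init__(self, in_channels=2, hidden=64):
        super().__init__()
        blocks, channels = [], in_channels
        for _ in range(4):                                    # four temporal convolutional layers, no bias
            blocks += [nn.Conv1d(channels, 16, kernel_size=5, padding=2, bias=False),
                       nn.ReLU(),
                       nn.MaxPool1d(2)]                       # each followed by a pooling layer
            channels = 16
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(hidden, bias=False),                # first fully connected layer
            nn.ReLU(),
            nn.Dropout(p=0.5),                                # dropout between the two FC layers
            nn.Linear(hidden, 1, bias=False))                 # regressed quality score

    def forward(self, x):
        return self.head(self.features(x))
```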

IV-B Qualitative results

First, we present qualitative results for the following five methods:

  • Classical gradient GRAD [13] computed with Algorithm 1, a learning rate of 0.1 and a tolerance of 0.015.

  • GRAD×Input, as defined in Equation 2 and proposed by [12], [1].

  • Smooth gradient [14], estimated as the mean of the gradients obtained with Algorithm 1 after adding zero-mean Gaussian noise to the input signal (Equation 4).

  • Integrated gradient [16]. As the proposed network has no bias, the baseline $\bar{x}$ is fixed to a zero signal with the same length as $x$. Under these conditions, the score of the baseline is zero and integrated gradients can be interpreted as an attribution map of the prediction output. Integrated gradients are already multiplied by the input, as explained in Equation 3.

  • The AGRA method with 50 trained models.

Fig. 3: Results obtained with all the methods on the same two examples.

As shown in Figure 3, classical gradients (GRAD) are noisy and do not lead to clear, easy-to-interpret results, since peaks at perturbation locations are sometimes too thin and small and can be mistaken for noise. Furthermore, multiplying these noisy gradients by the input only makes the results worse: interesting peaks are enhanced, but the overall result appears noisier than before. Moreover, the sign of the gradient, which gives information on the direction of the error, is lost due to this multiplication. Using the smooth gradient instead of the classical gradient gives better qualitative results, with considerably less noise than before. However, noise is still present and the results remain difficult to interpret. Moreover, the magnitude of the gradient is often under-estimated. Integrated gradients are very noisy and have peaks at undisturbed positions, making them very difficult to interpret. As they are multiplied by the input signal, the sign of the gradient is lost. As shown in Figure 3, less noisy and more accurate results are achieved with the AGRA method. The gradients actually highlight the locations corresponding to perturbations and have the correct direction to reconstruct the ideal signal.

IV-C Quantitative results

To compare methods more thoroughly, giving quantitative results is crucial. Since ground truth is available for each example, it is possible to compute ideal gradients (the difference between perturbed signals and ideal ones) and compare them with results obtained with the different methods. Two metrics are used to make this comparison:

  • Mean Squared Error (MSE) between the signal without errors and the reconstructed signal obtained thanks to the gradients. This metric cannot be used for methods such as GRAD×Input or Integrated Gradients, since their goal is only to highlight important time-steps and not to reconstruct a perfect signal.

  • Pearson correlation coefficient between the ideal gradient and the gradient obtained with the different methods. To avoid penalising methods that do not manage the signs (GRAD×Input and Integrated Gradients), this coefficient is computed between the norms of the ideal gradient and of the gradient obtained by each method.

These metrics are averaged over the training examples. Moreover, for GRAD, GRAD×Input, Smooth Grad and Integrated Gradients, the metrics have been computed for each of the 50 trained models and then averaged.
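
In NumPy terms, these two metrics could be computed as sketched below, where `grad` is the attribution returned by one of the methods and the variable names are ours, not the paper's.

```python
import numpy as np

def evaluate(ideal, disrupted, grad):
    """Reconstruction MSE and Pearson correlation on per-time-step gradient norms (illustrative)."""
    ideal_grad = disrupted - ideal                            # ground-truth gradient
    reconstructed = disrupted - grad                          # signal rebuilt from the estimated gradient
    mse = np.mean((reconstructed - ideal) ** 2)
    norm_true = np.linalg.norm(ideal_grad, axis=0)            # norms over the two signal dimensions
    norm_est = np.linalg.norm(grad, axis=0)
    pearson = np.corrcoef(norm_true, norm_est)[0, 1]
    return mse, pearson
```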

Methods Mean Squared Error
GRAD [13] 7.65
Smooth Grad [14] 7.85
AGRA 5.06
TABLE I: Mean Squared Error between ground truth gradients and estimated ones for different methods.

Table I presents the MSE obtained with the different methods. As a reminder, an estimated gradient fitting the ground-truth gradient perfectly would correspond to an MSE of 0. Both the GRAD and Smooth Grad methods are noisy. Moreover, Smooth Grad does not preserve the gradient magnitude. Thus, the AGRA method outperforms both of these methods according to the MSE. AGRA is therefore the most suitable method for signal reconstruction.

Methods Pearson Correlation
GRAD [13] 0.81
GRAD×Input [12], [1] 0.82
Smooth Grad [14] 0.79
Integrated Gradient [16] 0.55
AGRA 0.94
TABLE II: Pearson correlation coefficient for different methods, estimated between the norms of the gradients

As shown in Table II, the Pearson correlation coefficients vary between 0.55 and 0.94. As Pearson correlation coefficients are standardised (the covariance is divided by the standard deviations of both gradients), they can be estimated in a meaningful way for each method, even when the gradient is multiplied by the input. The best results are obtained with our proposed method, which confirms the previous qualitative study and shows that this method gives better results than the other state-of-the-art methods.

Table III gives the Pearson coefficients obtained by keeping the sign of the gradients when calculating the correlation: the correlation is estimated for each of the two dimensions and then averaged. Using this metric, only the GRAD and Smooth Grad methods can be evaluated, since for the other two, multiplying by the input changes the sign of the gradient and the results would not be exploitable. AGRA is again the most efficient method, even though the Pearson coefficient does not take the gradient magnitude into account, which does not penalize Smooth Grad as the MSE did.

Methods Pearson Correlation
GRAD [13] 0.68
Smooth Grad [14] 0.66
AGRA 0.84
TABLE III: Pearson correlation coefficient for different methods, estimated on all gradient dimensions

To study the behaviour of AGRA, it is interesting to show the evolution of both the MSE and the Pearson correlation according to the number of averaged models (Figure 4). As stated before, the gradients are model-dependent. So, the MSE, the Pearson coefficient and thus the explanation of the network change a lot from one model to another. More particularly, it can be seen in Figure 4 that the first two trainings lead to bad results, while the following ones, before the tenth, give good explanations. Let us remember that the different models differ only in the initialization of their weights. They all reach nearly the same regression scores, but their gradients change strongly. It is therefore impossible to define a priori which models lead to a good-quality gradient. So, in Figure 4, the MSE is high at the beginning and then decreases before stabilizing. Averaging the gradients obtained by 20 or more models produces good explanation results, independent of the training. The same reasoning can be applied to the Pearson correlation coefficient.

Fig. 4: Evolution of the MSE and of the Pearson correlation according to the number of averaged models.

IV-D AGRA combined with other methods

As stated before, it is possible to combine our approach with different state-of-the-art methods, such as GRAD×Input, Smooth Grad and Integrated Gradients, in order to improve both qualitative and quantitative results.

Fig. 5: Results obtained with all the methods, averaged over 50 models, on the same two examples.

As shown in Figure 5, averaging over 50 models greatly improves the performance of all methods and, in particular, denoises the results of every method. The quantitative results are also all improved when using AGRA, as shown in Table IV, for both the Pearson correlation and the MSE. This shows that even if the method is computationally intensive, the results are substantially improved compared with the state of the art.

Methods Pearson Correlation MSE
GRAD [13] 0.81 7.65
AGRA 0.94 5.06
GRAD×Input [12], [1] 0.82 NA
GRAD×Input with AGRA 0.89 NA
Smooth Grad [14] 0.79 7.85
Smooth Grad with AGRA 0.92 6.91
Integrated Gradient [16] 0.55 NA
Integrated Gradient with AGRA 0.82 NA
TABLE IV: Pearson correlation coefficient and MSE for different methods combined with the AGRA framework.
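
In the spirit of the earlier sketches, combining AGRA with any per-model attribution method amounts to averaging that method's maps over the 50 trained models; the helper below is purely illustrative.

```python
def with_agra(method, models, x, **kwargs):
    """Apply a per-model attribution method (e.g. the smoothgrad sketch above) and average over models."""
    maps = [method(model, x, **kwargs) for model in models]  # one attribution map per trained model
    return sum(maps) / len(maps)
```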

V Conclusion

In this paper, a new approach to explaining neural network decisions has been presented, together with a specific experimental setup dedicated to neural network explanation. Indeed, the lack of ground truth for network explanation often only allows a qualitative comparison of different approaches. The design of a synthetic setup devoted to this task enables quantitative comparisons.

In addition to this new database and experimental setup, a novel approach for explaining network decisions has been proposed. By observing that the explanation strongly depends on the training of the model, we proposed to carry out several trainings and then to average the explanations provided by each of them. It has been shown that this technique improves both qualitative results, since explanations are less noisy, and quantitative results, with better scores for both the Pearson correlation and the MSE of the reconstructed signals. However, the drawback of this method is its high computational cost, since many models need to be trained.

In the future, we plan to extend this approach to models learned in classification to see if the same conclusions can be drawn.

References

  • [1] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10 (7).
  • [2] F. K. Došilović, M. Brčić, and N. Hlupić (2018) Explainable artificial intelligence: a survey. In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 0210–0215.
  • [3] H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P. Muller (2018) Evaluating surgical skills from kinematic data using convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 214–221.
  • [4] B. Kim, J. Seo, S. Jeon, J. Koo, J. Choe, and T. Jeon (2019) Why are saliency maps noisy? Cause of and solution to noisy saliency maps. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 4149–4157.
  • [5] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. CoRR abs/1412.6980.
  • [6] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
  • [7] M. Millan and C. Achard (2020) Fine-tuning siamese networks to assess sport gestures quality. In Proceedings of the 15th International Conference on Computer Vision Theory and Applications.
  • [8] B. N. Patro, M. Lunayach, S. Patel, and V. P. Namboodiri (2019) U-CAM: visual explanation using uncertainty based class activation maps. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7444–7453.
  • [9] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
  • [10] W. Samek, T. Wiegand, and K. Müller (2017) Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
  • [11] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
  • [12] A. Shrikumar, P. Greenside, and A. Kundaje (2017) Not just a black box: learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 3145–3153.
  • [13] K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. CoRR abs/1312.6034.
  • [14] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg (2017) SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825.
  • [15] M. Sundararajan, A. Taly, and Q. Yan (2016) Gradients of counterfactuals. arXiv preprint arXiv:1611.02639.
  • [16] M. Sundararajan, A. Taly, and Q. Yan (2017) Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 3319–3328.
  • [17] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833.
  • [18] Q. Zhang and S. Zhu (2018) Visual interpretability for deep learning: a survey. Frontiers of Information Technology & Electronic Engineering 19 (1), pp. 27–39.
  • [19] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929.