When saliency goes off on a tangent: Interpreting Deep Neural Networks with nonlinear saliency maps

10/13/2021
by   Jan Rosenzweig, et al.

A fundamental bottleneck in utilising complex machine learning systems for critical applications has been not knowing why they do what they do, which prevents the development of crucial safety protocols. To date, no method exists that can provide full insight into the granularity of a neural network's decision process. Saliency maps were an early attempt at resolving this problem through sensitivity calculations, whereby dimensions of a data point are selected based on how sensitive the output of the system is to them. However, the success of saliency maps has been at best limited, mainly because they interpret the underlying learning system through a linear approximation. We present a novel class of methods for generating nonlinear saliency maps which fully account for the nonlinearity of the underlying learning system. While agreeing with linear saliency maps on simple problems where linear saliency maps are correct, they clearly identify more specific drivers of classification on complex examples where nonlinearities are more pronounced. This new class of methods significantly aids the interpretability of deep neural networks and related machine learning systems. Crucially, it provides a starting point for their broader use in serious applications, where 'why' is as important as 'what'.
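The baseline the abstract criticises can be illustrated concretely. A minimal sketch of a classic linear (gradient-based) saliency map follows, computed by finite differences on a toy two-layer network; the network, its weights, and the finite-difference scheme are illustrative assumptions here, not the authors' nonlinear method.

```python
import numpy as np

# Illustrative assumption: a tiny two-layer tanh network standing in for
# any differentiable learning system. This is NOT the paper's method; it is
# the linear-approximation baseline that nonlinear saliency maps improve on.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 1))

def f(x):
    """Scalar score for input x of shape (3,)."""
    return float(np.tanh(x @ W1) @ W2)

def linear_saliency(x, eps=1e-5):
    """Central finite-difference gradient of the score w.r.t. each input
    dimension; its magnitude is the classic (linear) saliency of that
    dimension."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return np.abs(g)

x = np.array([0.5, -1.0, 2.0])
s = linear_saliency(x)
print(s.argmax())  # the input dimension the score is most sensitive to
```

Because this map is just a first-order Taylor term around `x`, it can misrank dimensions whenever the score is strongly curved near the data point, which is exactly the failure mode the abstract attributes to linear saliency maps.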

Related research

- Opti-CAM: Optimizing saliency maps for interpretability (01/17/2023)
  Methods based on class activation maps (CAM) provide a simple mechanism ...

- Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps (02/13/2019)
  Saliency Map, the gradient of the score function with respect to the inp...

- Predicting Model Failure using Saliency Maps in Autonomous Driving Systems (05/19/2019)
  While machine learning systems show high success rate in many complex ta...

- A generalizable saliency map-based interpretation of model outcome (06/16/2020)
  One of the significant challenges of deep neural networks is that the co...

- Semiotic Aggregation in Deep Learning (04/22/2021)
  Convolutional neural networks utilize a hierarchy of neural network laye...

- Attributions Beyond Neural Networks: The Linear Program Case (06/14/2022)
  Linear Programs (LPs) have been one of the building blocks in machine le...

- Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods (06/07/2022)
  Saliency methods calculate how important each input feature is to a mach...
