A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models

09/19/2022
by Savvas Karatsiolis, et al.

The widespread use of black-box AI models has raised the need for algorithms and methods that explain the decisions made by these models. In recent years, the AI research community has become increasingly interested in models' explainability, since black-box models are taking over more and more complicated and challenging tasks. Explainability becomes critical given the dominance of deep learning techniques across a wide range of applications, including but not limited to computer vision. Toward understanding the inference process of deep learning models, many methods have been developed that provide human-comprehensible evidence for the decisions of AI models, with the vast majority relying on access to the internal architecture and parameters of these models (e.g., the weights of neural networks). We propose a model-agnostic method, DE-CAM, for generating saliency maps that has access only to the output of the model and does not require additional information such as gradients. We use Differential Evolution (DE) to identify which image pixels are the most influential in a model's decision-making process and produce class activation maps (CAMs) whose quality is comparable to that of CAMs created with model-specific algorithms. DE-CAM achieves good performance without requiring access to the internal details of the model's architecture, at the cost of higher computational complexity.
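The abstract does not disclose implementation details, but the general idea of optimizing a perturbation with DE against only the model's output can be sketched as follows. This is a minimal illustration, not the authors' DE-CAM: the coarse grid, the sparsity weight `0.1`, the toy `model_fn`, and the use of SciPy's `differential_evolution` are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

def black_box_saliency(model_fn, image, grid=4, maxiter=30, seed=0):
    """Estimate a coarse saliency map using only the model's scalar output.

    A grid of mask weights in [0, 1] is optimized with Differential
    Evolution so that removing (down-weighting) as few regions as
    possible causes the largest drop in the model's class score.
    Regions the optimizer chooses to remove are marked as salient.
    Hypothetical sketch of the perturbation idea, not the paper's method.
    """
    h, w = image.shape
    ch, cw = h // grid, w // grid

    def masked(x):
        # Upsample the grid of mask weights to full image resolution.
        mask = np.kron(x.reshape(grid, grid), np.ones((ch, cw)))
        return image * mask

    def objective(x):
        # Minimize the class score of the masked image while penalizing
        # the total amount of removal (sparsity term).
        return model_fn(masked(x)) + 0.1 * (1.0 - x).mean()

    bounds = [(0.0, 1.0)] * (grid * grid)
    result = differential_evolution(objective, bounds,
                                    maxiter=maxiter, seed=seed)
    # Saliency = how strongly each region had to be removed.
    saliency = 1.0 - result.x.reshape(grid, grid)
    return np.kron(saliency, np.ones((ch, cw)))

# Toy black-box "model": scores an 8x8 image by the brightness of its
# top-left quadrant, so only that quadrant should come out as salient.
toy_model = lambda img: img[:4, :4].mean()
sal = black_box_saliency(toy_model, np.ones((8, 8)))
```

Because only `model_fn` outputs are queried, the same routine applies unchanged to any classifier, which is the model-agnostic property the abstract emphasizes; the price is many forward passes per explanation.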



Related research

- 05/30/2020: RelEx: A Model-Agnostic Relational Model Explainer. In recent years, considerable progress has been made on improving the in...
- 03/30/2023: Model-agnostic explainable artificial intelligence for object detection in image data. Object detection is a fundamental task in computer vision, which has bee...
- 06/06/2020: A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI. With the growing complexity of deep learning methods adopted in practica...
- 10/21/2019: Contextual Prediction Difference Analysis. The interpretation of black-box models has been investigated in recent y...
- 11/20/2018: A Gray Box Interpretable Visual Debugging Approach for Deep Sequence Learning Model. Deep Learning algorithms are often used as black box type learning and t...
- 09/05/2022: Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures. The classification of internet traffic has become increasingly important...
- 06/01/2018: Producing radiologist-quality reports for interpretable artificial intelligence. Current approaches to explaining the decisions of deep learning systems ...
