Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features

07/29/2023
by Igor Cherepanov, et al.

Deep learning (DL) models achieve remarkable performance in classification tasks. However, highly complex models cannot be used in many risk-sensitive applications unless a comprehensible explanation is provided. Explainable artificial intelligence (xAI) research focuses on explaining the decision-making of AI systems such as DL models. We extend the recent method of Class Activation Maps (CAMs), which visualizes the importance of each feature of a data sample to its classification. In this paper, we aggregate CAMs from multiple samples to provide a global explanation of the classification for semantically structured data. The aggregation allows the analyst to form sophisticated hypotheses and investigate them with further drill-down visualizations. Our visual representation of the global CAM illustrates the impact of each feature with a square glyph containing two indicators: the color of the square encodes the feature's classification impact, and the size of the filled square encodes the variability of that impact across single samples. For interesting features that require further analysis, a detailed view providing the distribution of these values is necessary. We propose an interactive histogram to filter samples and refine the CAM so that only relevant samples are shown. Our approach allows an analyst to detect important features of high-dimensional data and to derive adjustments to the AI model based on our global explanation visualization.
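The aggregation described above can be sketched as computing, for each feature, a central impact value (driving the glyph color) and a spread across samples (driving the filled-square size). A minimal sketch, assuming sample-level CAMs are available as one importance score per feature; the array names, the mean as the aggregate, and the standard deviation as the variability measure are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

# Hypothetical per-sample CAMs: rows are samples, columns are features.
# In practice these would come from the classifier's CAM computation.
rng = np.random.default_rng(0)
cams = rng.normal(loc=0.2, scale=0.5, size=(100, 8))  # 100 samples, 8 features

# Global CAM: mean impact per feature (glyph color) and
# per-feature standard deviation (filled-square size).
mean_impact = cams.mean(axis=0)
variability = cams.std(axis=0)

# Drill-down refinement: keep only samples whose impact on a chosen
# feature lies inside an interval selected in the interactive histogram.
feature, lo, hi = 3, 0.0, 1.0  # hypothetical selection
mask = (cams[:, feature] >= lo) & (cams[:, feature] <= hi)
refined_mean = cams[mask].mean(axis=0)

for i, (m, s) in enumerate(zip(mean_impact, variability)):
    print(f"feature {i}: impact={m:+.3f}, variability={s:.3f}")
```

Recomputing the aggregate on the filtered subset mirrors the described refinement loop: the histogram selection narrows the sample set, and the global glyph view updates from `refined_mean`.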


