SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective

03/01/2023
by Xiwei Xuan, et al.

Researchers have proposed various methods for visually interpreting Convolutional Neural Networks (CNNs) via saliency maps, with Class-Activation-Map (CAM) based approaches as a leading family. However, in their internal design logic, existing CAM-based approaches often overlook the causal perspective that answers the core "why" question needed for humans to understand an explanation. Additionally, current CNN explanations lack consideration of both necessity and sufficiency, two complementary sides of a desirable explanation. This paper presents a causality-driven framework, SUNY, designed to rationalize explanations toward better human understanding. Taking the CNN model's input features or internal filters as hypothetical causes, SUNY generates explanations through bi-directional quantification of both the necessary and the sufficient perspectives. Extensive evaluations show that SUNY not only produces more informative and convincing explanations from the angles of necessity and sufficiency, but also achieves performance competitive with other approaches across different CNN architectures on large-scale datasets, including ILSVRC2012 and CUB-200-2011.
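The bi-directional quantification described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function and the toy model below are hypothetical, assuming that necessity is measured by how much the model's score drops when a candidate causal region is masked out, and sufficiency by how much score the region alone retains:

```python
import numpy as np

def necessity_sufficiency(model, image, mask, baseline=0.0):
    """Score a candidate cause (a boolean mask over the input) from
    both directions: necessity (score drop when the region is removed)
    and sufficiency (score retained when only the region is kept)."""
    full = model(image)
    # Necessity test: replace the region with the baseline value.
    without_region = model(np.where(mask, baseline, image))
    # Sufficiency test: keep only the region, baseline everywhere else.
    region_only = model(np.where(mask, image, baseline))
    necessity = full - without_region
    sufficiency = region_only
    return necessity, sufficiency

# Toy "model": the score is the mean intensity of the top-left quadrant.
def toy_model(img):
    return float(img[:2, :2].mean())

img = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True  # hypothesize the top-left quadrant as the cause

nec, suf = necessity_sufficiency(toy_model, img, mask)
# nec == 1.0: removing the quadrant drops the score from 1.0 to 0.0
# suf == 1.0: the quadrant alone recovers the full score
```

A region that is both necessary and sufficient, as in this toy case, is the kind of explanation the framework favors; a real implementation would apply such perturbations to input features or internal filters of a trained CNN.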


Related research:

10/30/2017
Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks
Over the last decade, Convolutional Neural Network (CNN) models have bee...

08/07/2022
Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Explaining deep convolutional neural networks has been recently drawing ...

05/20/2020
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
Recently, increasing attention has been drawn to the internal mechani...

12/31/2022
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
The success of neural networks builds to a large extent on their ability...

09/20/2023
Signature Activation: A Sparse Signal View for Holistic Saliency
The adoption of machine learning in healthcare calls for model transpare...

03/27/2021
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
Necessity and sufficiency are the building blocks of all successful expl...

03/26/2021
Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation
We present a novel method for reliably explaining the predictions of neu...
