The Weighting Game: Evaluating Quality of Explainability Methods

08/12/2022
by Lassi Raatikainen, et al.

The objective of this paper is to assess the quality of explanation heatmaps for image classification tasks. We approach the task through the lenses of accuracy and stability, and make the following contributions. First, we introduce the Weighting Game, which measures how much of a class-guided explanation is contained within the correct class's segmentation mask. Second, we introduce a metric for explanation stability, using zooming and panning transformations to measure differences between saliency maps with similar contents. Using these new metrics, we conduct quantitative experiments to evaluate the quality of explanations provided by commonly used CAM methods. We also contrast explanation quality across model architectures, with findings highlighting the need to consider model architecture when choosing an explainability method.
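The Weighting Game can be sketched as the fraction of total saliency "mass" that falls inside the correct class's segmentation mask. The function name, normalization, and handling of negative attributions below are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def weighting_game_score(saliency, mask):
    """Hypothetical sketch of a Weighting Game style score.

    saliency: 2D heatmap of shape (H, W)
    mask:     2D boolean array of shape (H, W), True inside the class region

    Returns the fraction of saliency mass inside the mask, in [0, 1].
    """
    saliency = np.clip(saliency, 0.0, None)  # assume non-negative attributions
    total = saliency.sum()
    if total == 0:
        return 0.0  # no saliency mass anywhere; score is defined as 0 here
    return float(saliency[mask].sum() / total)

# Toy example: all saliency concentrated inside the mask gives a score of 1.0
heat = np.zeros((4, 4))
heat[1, 1] = 1.0
m = np.zeros((4, 4), dtype=bool)
m[:2, :2] = True
print(weighting_game_score(heat, m))  # 1.0
```

A score near 1 means the explanation attributes almost all importance to pixels of the correct class; a score near the mask's area fraction suggests the heatmap is no better than uniform.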


Related research

03/01/2019
Aggregating explainability methods for neural networks stabilizes explanations
Despite a growing literature on explaining neural networks, no consensus...

01/06/2022
Topological Representations of Local Explanations
Local explainability methods – those which seek to generate an explanati...

03/07/2022
Robustness and Usefulness in AI Explanation Methods
Explainability in machine learning has become incredibly important as ma...

02/23/2023
Dermatological Diagnosis Explainability Benchmark for Convolutional Neural Networks
In recent years, large strides have been taken in developing machine lea...

03/21/2023
Explain To Me: Salience-Based Explainability for Synthetic Face Detection Models
The performance of convolutional neural networks has continued to improv...

10/07/2022
Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
The proliferation of DeepFake technology is a rising challenge in today'...

09/25/2020
A Diagnostic Study of Explainability Techniques for Text Classification
Recent developments in machine learning have introduced models that appr...
