Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models

05/05/2023
by Guoyang Liu, et al.

We examined whether embedding human attention knowledge into saliency-based explainable AI (XAI) methods for computer vision models could enhance their plausibility and faithfulness. We first developed new gradient-based XAI methods for object detection models that generate object-specific explanations, extending current methods for image classification models. Interestingly, while these gradient-based methods worked well for explaining image classification models, when used to explain object detection models the resulting saliency maps generally had lower faithfulness than attention maps from humans performing the same task. We then developed Human Attention-Guided XAI (HAG-XAI), which learns from human attention how to best combine explanatory information from the models, using trainable activation functions and smoothing kernels to maximize the similarity of the XAI saliency maps to human attention maps. For image classification models, HAG-XAI enhanced explanation plausibility at the expense of faithfulness; for object detection models, it enhanced plausibility and faithfulness simultaneously and outperformed existing methods. The learned functions were model-specific and generalized well to other databases.
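The abstract describes HAG-XAI as combining model-derived explanatory maps through trainable activation functions and a trainable smoothing kernel, fit to maximize similarity to human attention maps. Below is a minimal PyTorch sketch of that idea; the module name HAGXAI, the softplus-style parametric activation, the learnable Gaussian kernel, and the Pearson-correlation objective are all illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of the HAG-XAI idea from the abstract. All design details
# (parametric softplus activation, learnable Gaussian smoothing, correlation
# loss) are assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HAGXAI(nn.Module):
    """Combines a stack of XAI evidence maps (e.g., activation and gradient
    maps from the explained model) into one saliency map via trainable
    activation functions and a trainable smoothing kernel."""

    def __init__(self, n_maps: int, kernel_size: int = 31):
        super().__init__()
        # Trainable nonlinearity: one scale/shift pair per input map.
        self.alpha = nn.Parameter(torch.ones(n_maps))
        self.beta = nn.Parameter(torch.zeros(n_maps))
        # Trainable combination weights across maps.
        self.w = nn.Parameter(torch.ones(n_maps) / n_maps)
        # Trainable isotropic Gaussian smoothing: learn log-sigma.
        self.log_sigma = nn.Parameter(torch.tensor(2.0).log())
        self.kernel_size = kernel_size

    def gaussian_kernel(self) -> torch.Tensor:
        # Build a normalized 2-D Gaussian from the learned sigma.
        sigma = self.log_sigma.exp()
        half = self.kernel_size // 2
        x = torch.arange(-half, half + 1, dtype=torch.float32,
                         device=sigma.device)
        g1d = torch.exp(-(x ** 2) / (2 * sigma ** 2))
        g2d = torch.outer(g1d, g1d)
        return (g2d / g2d.sum()).view(1, 1, self.kernel_size, self.kernel_size)

    def forward(self, maps: torch.Tensor) -> torch.Tensor:
        # maps: (B, n_maps, H, W) stack of evidence maps.
        z = F.softplus(self.alpha.view(1, -1, 1, 1) * maps
                       + self.beta.view(1, -1, 1, 1))   # trainable activation
        s = (self.w.view(1, -1, 1, 1) * z).sum(dim=1, keepdim=True)
        # Smooth with the learned Gaussian kernel (same spatial size out).
        return F.conv2d(s, self.gaussian_kernel(),
                        padding=self.kernel_size // 2)

def similarity_loss(saliency: torch.Tensor, human: torch.Tensor) -> torch.Tensor:
    """Negative Pearson correlation between the predicted saliency map and a
    human attention map; minimizing it maximizes their similarity."""
    s = saliency.flatten(1) - saliency.flatten(1).mean(dim=1, keepdim=True)
    h = human.flatten(1) - human.flatten(1).mean(dim=1, keepdim=True)
    corr = (s * h).sum(dim=1) / (s.norm(dim=1) * h.norm(dim=1) + 1e-8)
    return -corr.mean()
```

Under these assumptions, training would loop over images, stack the activation and gradient maps produced by an existing gradient-based XAI method, and minimize similarity_loss against the corresponding human attention maps; at test time the learned module is applied to new evidence maps without any further human data.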


Related research

03/30/2023
Model-agnostic explainable artificial intelligence for object detection in image data
Object detection is a fundamental task in computer vision, which has bee...

08/26/2021
A Comparison of Deep Saliency Map Generators on Multispectral Data in Object Detection
Deep neural networks, especially convolutional deep neural networks, are...

07/09/2023
A Novel Explainable Artificial Intelligence Model in Image Classification problem
In recent years, artificial intelligence is increasingly being applied w...

09/02/2021
GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps
We present Gradient Activation Maps (GAM) - a machinery for explaining p...

12/30/2021
Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI
This paper quantifies the quality of heatmap-based eXplainable AI method...

05/26/2019
Why do These Match? Explaining the Behavior of Image Similarity Models
Explaining a deep learning model can help users understand its behavior ...

11/06/2022
ViT-CX: Causal Explanation of Vision Transformers
Despite the popularity of Vision Transformers (ViTs) and eXplainable AI ...
