Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

11/21/2022
by Maximilian Dreyer, et al.

Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or bounding box. In this work, we address the need for more informative explanations for these predictors by proposing the post-hoc eXplainable Artificial Intelligence method L-CRP to generate explanations that automatically identify and visualize relevant concepts learned, recognized and used by the model during inference as well as precisely locate them in input space. Our method therefore goes beyond singular input-level attribution maps and, as an approach based on the recently published Concept Relevance Propagation technique, is efficiently applicable to state-of-the-art black-box architectures in segmentation and object detection, such as DeepLabV3+ and YOLOv6, among others. We verify the faithfulness of our proposed technique by quantitatively comparing different concept attribution methods, and discuss the effect on explanation complexity on popular datasets such as CityScapes, Pascal VOC and MS COCO 2017. The ability to precisely locate and communicate concepts is used to reveal and verify the use of background features, thereby highlighting possible biases of the model.
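To make the idea more concrete, below is a minimal, heavily simplified sketch of concept-conditional attribution for a segmentation predictor. It is not the authors' L-CRP implementation: relevance is approximated here with gradient × activation instead of Concept Relevance Propagation, and the model (torchvision's DeepLabV3), the hooked layer, and the target class index are illustrative assumptions.

```python
# Simplified sketch of concept-conditional attribution for segmentation.
# NOT the authors' L-CRP: relevance is approximated with gradient x activation
# instead of (L-)CRP/LRP; model, layer and class index are assumptions.
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

# Treat the channels of an intermediate layer as candidate "concepts".
acts = {}
def save_activation(module, inputs, output):
    output.retain_grad()               # keep the gradient of this non-leaf tensor
    acts["out"] = output
handle = model.backbone.layer4.register_forward_hook(save_activation)

# Use a real image in practice; a random tensor only demonstrates the shapes.
x = torch.rand(1, 3, 520, 520)

logits = model(x)["out"]               # (1, num_classes, H, W)
pred = logits.argmax(dim=1)            # predicted segmentation mask

target_class = 15                      # assumed 'person' index in VOC-style labels
region = (pred == target_class).float()

# Initial relevance: the target logit, restricted to its predicted region,
# so the explanation refers to one object class rather than the whole output.
(logits[:, target_class] * region).sum().backward()
handle.remove()

a, g = acts["out"], acts["out"].grad   # activations and their gradients (1, C, h, w)
concept_relevance = (a * g).sum(dim=(2, 3)).squeeze(0)   # one score per channel
top = concept_relevance.topk(5)
print("most relevant channels ('concepts'):", top.indices.tolist())

# Localize the single most relevant concept in input space by upsampling
# its spatial relevance map to the input resolution.
c = top.indices[0].item()
heatmap = F.interpolate((a[:, c] * g[:, c]).detach().unsqueeze(1),
                        size=x.shape[-2:], mode="bilinear")
print("concept heatmap:", heatmap.shape)   # (1, 1, 520, 520)
```

In the paper itself, the gradient × activation proxy is replaced by Concept Relevance Propagation, which yields channel-wise relevances and per-concept heatmaps for both predicted segmentation masks and detected bounding boxes.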

