Expressive Explanations of DNNs by Combining Concept Analysis with ILP

05/16/2021
by Johannes Rabold, et al.

Explainable AI has emerged as a key component for black-box machine learning approaches in domains with a high demand for reliability or transparency. Examples are medical assistant systems and applications concerned with the General Data Protection Regulation of the European Union, which features transparency as a cornerstone. Such demands require the ability to audit the rationale behind a classifier's decision. While visualizations are the de facto standard of explanations, they fall short in terms of expressiveness in many ways: they cannot distinguish between different attribute manifestations of visual features (e.g. eye open vs. closed), and they cannot accurately describe the influence of the absence of features or of relations between features. An alternative would be more expressive symbolic surrogate models. However, these require symbolic inputs, which are not readily available in most computer vision tasks. In this paper we investigate how to overcome this: we use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN). The semantics of the features are mined by a concept analysis approach trained on a set of human-understandable visual concepts. The explanation is found by an Inductive Logic Programming (ILP) method and presented as first-order rules. We show that our explanation is faithful to the original black-box model. The code for our experiments is available at https://github.com/mc-lovin-mlem/concept-embeddings-and-ilp/tree/ki2020.
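
To make the two-stage pipeline from the abstract concrete, below is a minimal, hedged Python sketch (not the authors' released code): a linear probe mines a concept direction from intermediate-layer activations, and the detected concepts are then serialized as Prolog-style background facts that an ILP learner such as Aleph could generalize into first-order rules. The synthetic arrays, the concept name eye_open, the predicate has_part/2, and the example rule are illustrative assumptions, not the paper's actual data or predicates.

# Sketch of the abstract's pipeline under the assumptions stated above:
# (1) concept analysis via a linear probe on DNN activations,
# (2) emission of Prolog-style facts for an ILP learner (e.g. Aleph).

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for intermediate-layer activations of image patches
# (n_patches x n_channels) and binary labels indicating whether a
# human-understandable concept (here hypothetically "eye_open") is present.
activations = rng.normal(size=(200, 64))
concept_labels = (activations[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# (1) Concept analysis: the probe's weight vector acts as the concept's
# embedding direction in activation space.
probe = LogisticRegression(max_iter=1000).fit(activations, concept_labels)
concept_direction = probe.coef_[0]

def has_concept(patch_activation, threshold=0.5):
    """Score a single patch activation against the mined concept."""
    return probe.predict_proba(patch_activation.reshape(1, -1))[0, 1] > threshold

# (2) Background knowledge for an ILP system, as Prolog-style facts.
# A learned first-order rule could then take a form such as:
#   face(X) :- has_part(X, E), eye_open(E), has_part(X, M), mouth(M), above(E, M).
def to_prolog_facts(example_id, patches):
    facts = []
    for i, act in enumerate(patches):
        facts.append(f"has_part(ex_{example_id}, p_{example_id}_{i}).")
        if has_concept(act):
            facts.append(f"eye_open(p_{example_id}_{i}).")
    return facts

print("\n".join(to_prolog_facts(0, activations[:3])))

In this sketch the facts would be handed to the ILP learner together with positive and negative class examples; the learner then induces rules over the mined concepts and their relations, which is what yields the global, verbal explanation described above.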
