Towards Human-Compatible XAI: Explaining Data Differentials with Concept Induction over Background Knowledge

09/27/2022
by Cara Widmer, et al.

Concept induction, which is based on formal logical reasoning over description logics, has been used in ontology engineering to generate ontology (TBox) axioms from base data (ABox) graphs. In this paper, we show that it can also be used to explain data differentials, for example in the context of Explainable AI (XAI), and that this can be done in a way that is meaningful to a human observer. Our approach uses a large class hierarchy, curated from the Wikipedia category hierarchy, as background knowledge.
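To make the approach concrete, here is a minimal, self-contained sketch of concept induction restricted to named classes: given positive and negative example individuals (the data differential) and a background class hierarchy, it returns the most specific named classes that cover every positive and exclude every negative. The toy hierarchy, instance data, and function names are hypothetical illustrations, not the paper's implementation; full concept induction systems (e.g., DL-Learner, ECII) search over complex description-logic class expressions with a reasoner, and the paper's background knowledge is a far larger hierarchy curated from Wikipedia categories.

```python
# Sketch of concept induction over a named-class hierarchy.
# Everything here (hierarchy, instances, names) is a hypothetical
# toy example; real systems reason over complex DL class expressions.

HIERARCHY = {  # child class -> direct superclasses
    "Sparrow": {"Bird"},
    "Penguin": {"Bird"},
    "Cat": {"Mammal"},
    "Bird": {"Animal"},
    "Mammal": {"Animal"},
    "Animal": set(),
}

def ancestors(cls):
    """All superclasses of cls, including cls itself."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(HIERARCHY.get(c, ()))
    return seen

def classes_of(individual, types):
    """Every class an individual belongs to, via the hierarchy."""
    return set().union(*(ancestors(t) for t in types[individual]))

def explain_differential(positives, negatives, types):
    """Most specific named classes covering all positives and no negatives."""
    covering = set.intersection(*(classes_of(p, types) for p in positives))
    for n in negatives:
        covering -= classes_of(n, types)
    # Drop any class that has a strict subclass among the survivors.
    return {c for c in covering
            if not any(c != d and c in ancestors(d) for d in covering)}

types = {"tweety": {"Sparrow"}, "pingu": {"Penguin"}, "tom": {"Cat"}}
print(explain_differential({"tweety", "pingu"}, {"tom"}, types))
# {'Bird'} -- "Bird" separates the positive from the negative instances.
```

The restriction to named classes keeps this search trivial; the human-compatibility argument rests on the induced classes coming from a hierarchy (here, Wikipedia categories) whose labels are already meaningful to people.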

Related research

01/23/2023 · Explaining Deep Learning Hidden Neuron Activations using Concept Induction
One of the current key challenges in Explainable AI is in correctly inte...

08/08/2023 · Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning
A major challenge in Explainable AI is in correctly interpreting activat...

12/08/2018 · Efficient Concept Induction for Description Logics
Concept Induction refers to the problem of creating complex Description ...

08/14/2023 · Why Not? Explaining Missing Entailments with Evee (Technical Report)
Understanding logical entailments derived by a description logic reasone...

10/11/2021 · The CaLiGraph Ontology as a Challenge for OWL Reasoners
CaLiGraph is a large-scale cross-domain knowledge graph generated from W...

10/29/2014 · Reasoning for ALCQ extended with a flexible meta-modelling hierarchy
This work is motivated by a real-world case study where it is necessary...

03/02/2023 · Converting the Suggested Upper Merged Ontology to Typed First-order Form
We describe the translation of the Suggested Upper Merged Ontology (SUMO...
