Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach

06/15/2021
by   Johannes Rabold, et al.

In recent research, human-understandable explanations of machine learning models have received a lot of attention. Explanations are often given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by aligning a given instance of a concept with a similar counterexample. Contrasting a given instance with a structurally similar example that does not belong to the concept highlights which characteristics are necessary for concept membership. Such near misses were proposed by Winston (1970) as efficient guidance for learning in relational domains. We introduce an explanation generation algorithm for relational concepts learned with Inductive Logic Programming (GeNME). The algorithm identifies near miss examples from a given set of instances and ranks these examples by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate GeNME on the well-known family domain consisting of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences for rule-based, example-based, and near miss explanations in the family and arches domains.
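The core idea of near-miss selection can be illustrated with a minimal sketch. This is not the GeNME algorithm itself, which operates on relational ILP rules; here we assume a simplified propositional encoding where a concept is a set of required facts, instances are fact sets, and closeness is measured by the size of the symmetric difference between fact sets. All names and facts below are illustrative.

```python
def is_member(instance, concept):
    """An instance belongs to the concept if it contains every required fact."""
    return concept <= instance

def near_misses(positive, candidates, concept):
    """Rank non-members of the concept by closeness to a positive instance.

    Returns the candidates that fall outside the concept, ordered by the
    number of differing facts (fewer differences = nearer miss).
    """
    misses = [c for c in candidates if not is_member(c, concept)]
    return sorted(misses, key=lambda c: len(positive ^ c))

# Toy kinship example: 'grandfather' requires being male and being the
# parent of a parent (facts here are flattened to propositional labels).
concept = {"male", "parent_of_parent"}
positive = {"male", "parent_of_parent", "married"}
candidates = [
    {"female", "parent_of_parent", "married"},  # wrong sex: 2 facts differ
    {"male", "married"},                        # no grandchild: 1 fact differs
    {"female", "married"},                      # 3 facts differ
]

ranked = near_misses(positive, candidates, concept)
# The nearest miss is {"male", "married"}: contrasting it with the positive
# instance highlights that the parent-of-parent chain is necessary.
```

The nearest miss makes the missing characteristic explicit: a modified rule covering it but not the original instance points exactly at the condition required for membership.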


