Semantic Reasoning from Model-Agnostic Explanations

06/29/2021
by Timen Stepišnik Perdih, et al.

With the wide adoption of black-box models, instance-based post hoc explanation tools such as LIME and SHAP have become increasingly popular. These tools produce explanations that pinpoint the contributions of key features associated with a given prediction. However, the obtained explanations remain at the raw feature level and are not necessarily understandable by a human expert without extensive domain knowledge. We propose ReEx (Reasoning with Explanations), a method applicable to explanations generated by arbitrary instance-level explainers such as SHAP. Using background knowledge in the form of ontologies, ReEx generalizes instance explanations in a least-general-generalization-like manner. The resulting symbolic descriptions are specific to individual classes and offer generalizations based on the explainer's output. The derived semantic explanations are potentially more informative, as they describe the key attributes in the context of more general background knowledge, e.g., at the biological-process level. We showcase ReEx's performance on nine biological data sets, showing that compact semantic explanations can be obtained and are more informative than generic ontology mappings that link terms directly to feature names. ReEx is offered as a simple-to-use Python library and is compatible with tools such as SHAP. To our knowledge, this is one of the first methods to directly couple semantic reasoning with contemporary model explanation methods. This paper is a preprint; the full version's DOI is 10.1109/SAMI50585.2021.9378668.
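The generalization step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a toy ontology given as a child-to-parent map and a hypothetical mapping from feature names to ontology terms, and it generalizes the top-ranked features for one class to their deepest common ancestor, in the spirit of a least general generalization.

```python
# Hedged sketch of a ReEx-style generalization step (illustrative only).
# The ontology, term names, and feature-to-term mapping below are invented
# examples, not from the paper or the ReEx library.

def ancestors(term, parent):
    """Return the ordered list of `term` and its ancestors, most specific first."""
    chain = [term]
    while term in parent:
        term = parent[term]
        chain.append(term)
    return chain

def least_general_generalization(terms, parent):
    """Return the deepest ontology term subsuming all `terms` (toy version)."""
    common = set(ancestors(terms[0], parent))
    for t in terms[1:]:
        common &= set(ancestors(t, parent))
    # Walking up from the first term, the first common ancestor is the deepest one.
    for a in ancestors(terms[0], parent):
        if a in common:
            return a
    return None

# Toy ontology: gene-level terms roll up into biological processes.
parent = {
    "GENE_A": "dna_repair",
    "GENE_B": "dna_repair",
    "GENE_C": "apoptosis",
    "dna_repair": "cellular_process",
    "apoptosis": "cellular_process",
    "cellular_process": "biological_process",
}

# Suppose an explainer (e.g. SHAP) ranked these features highest for one class:
top_features = ["GENE_A", "GENE_B"]
print(least_general_generalization(top_features, parent))  # -> dna_repair
```

If the top features map to terms from different branches (e.g. `GENE_A` and `GENE_C`), the result is the more general `cellular_process`, trading specificity for coverage, which is the essence of the semantic generalization ReEx performs.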
