LEMON: Explainable Entity Matching

10/01/2021
by Nils Barlaug, et al.

State-of-the-art entity matching (EM) methods are hard to interpret, and there is significant value in bringing explainable AI to EM. Unfortunately, most popular explainability methods do not work well out of the box for EM and need adaptation. In this paper, we identify three challenges of applying local post hoc feature attribution methods to entity matching: cross-record interaction effects, non-match explanations, and variation in sensitivity. We propose our novel model-agnostic and schema-flexible method LEMON that addresses all three challenges by (i) producing dual explanations to avoid cross-record interaction effects, (ii) introducing the novel concept of attribution potential to explain how two records could have matched, and (iii) automatically choosing explanation granularity to match the sensitivity of the matcher and record pair in question. Experiments on public datasets demonstrate that the proposed method is more faithful to the matcher and does a better job of helping users understand the decision boundary of the matcher than previous work. Furthermore, user studies show that the rate at which human subjects can construct counterfactual examples after seeing an explanation from our proposed method increases substantially for both matches and non-matches compared to explanations from a standard adaptation of LIME.
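
To make the dual-explanation idea concrete, below is a minimal LIME-style sketch (not the authors' actual LEMON implementation) of attributing a black-box matcher's score to the tokens of one record while the other record is held fixed; running it once per record gives the two halves of a dual explanation and avoids cross-record interaction effects. The matcher interface match_proba and all helper names are illustrative assumptions.

# Illustrative sketch only; names and kernel are assumptions, not the paper's LEMON code.
import numpy as np
from sklearn.linear_model import Ridge

def explain_one_side(match_proba, fixed_record, perturbed_tokens,
                     n_samples=500, rng=None):
    """One half of a dual explanation: attribute the match score to the
    tokens of one record while the other record stays fixed."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = len(perturbed_tokens)
    # Binary masks: 1 keeps a token, 0 drops it from the perturbed record.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1  # always include the unperturbed record itself
    scores = np.array([
        match_proba(" ".join(t for t, keep in zip(perturbed_tokens, m) if keep),
                    fixed_record)
        for m in masks
    ])
    # Weight samples by closeness to the original record (simple kernel).
    weights = np.exp(-(d - masks.sum(axis=1)) / max(d, 1))
    # Fit a weighted linear surrogate; its coefficients are the attributions.
    surrogate = Ridge(alpha=1.0).fit(masks, scores, sample_weight=weights)
    return dict(zip(perturbed_tokens, surrogate.coef_))

# Dual explanation: explain each record separately against the other, e.g.
#   attributions_a = explain_one_side(matcher, record_b, tokens_of_a)
#   attributions_b = explain_one_side(lambda b_text, a_rec: matcher(a_rec, b_text),
#                                     record_a, tokens_of_b)

This sketch covers only the dual-explanation part; it does not model attribution potential (explaining how two records could have matched) or the automatic choice of explanation granularity described in the abstract.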


research
01/27/2022

Human Interpretation of Saliency-based Explanation Over Text

While a lot of research in explainable AI focuses on producing effective...
research
11/08/2022

Motif-guided Time Series Counterfactual Explanations

With the rising need of interpretable machine learning methods, there is...
research
10/05/2022

Explanation Uncertainty with Decision Boundary Awareness

Post-hoc explanation methods have become increasingly depended upon for ...
research
11/21/2022

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

Applying traditional post-hoc attribution methods to segmentation or obj...
research
06/18/2021

NoiseGrad: enhancing explanations by introducing stochasticity to model weights

Attribution methods remain a practical instrument that is used in real-w...
research
09/13/2021

Towards Better Model Understanding with Path-Sufficient Explanations

Feature based local attribution methods are amongst the most prevalent i...
research
03/20/2021

Boundary Attributions Provide Normal (Vector) Explanations

Recent work on explaining Deep Neural Networks (DNNs) focuses on attribu...
