Teaching the Machine to Explain Itself using Domain Knowledge

by Vladimir Balayan, et al.

Machine Learning (ML) has been increasingly used to help humans make better and faster decisions. However, non-technical humans-in-the-loop struggle to comprehend the rationale behind model predictions, hindering trust in algorithmic decision-making systems. Considerable research on AI explainability attempts to win back trust in AI systems by developing explanation methods, but there has been no major breakthrough. Moreover, popular explanation methods (e.g., LIME and SHAP) produce explanations that are very hard for non-data scientists to understand. To address this, we present JOEL, a neural network-based framework that jointly learns a decision-making task and associated explanations that convey domain knowledge. JOEL is tailored to human-in-the-loop domain experts who lack deep technical ML knowledge, providing high-level insights about the model's predictions that closely resemble the experts' own reasoning. Moreover, we collect domain feedback from a pool of certified experts and use it to improve the model (human teaching), promoting seamless and better-suited explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud detection dataset and show that it can generalize the explanations learned from the bootstrap dataset. Furthermore, the results indicate that human teaching can further improve explanation prediction quality by approximately 13.57%.
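The abstract describes jointly learning a decision-making task and concept-based explanations. A common way to realize such joint learning is a multi-task objective that sums a task loss and a multi-label concept (explanation) loss. The sketch below illustrates that idea only; the function names, the λ weighting, and the use of plain binary cross-entropy are assumptions for illustration, not the paper's exact formulation.

```python
import math

def bce(p, y, eps=1e-7):
    # Binary cross-entropy for one predicted probability p and label y in {0, 1}.
    # Probabilities are clipped to avoid log(0).
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def joint_loss(task_prob, task_label, concept_probs, concept_labels, lam=1.0):
    """Illustrative joint objective: decision-task loss plus a weighted
    multi-label concept (explanation) loss. `lam` (hypothetical) trades off
    predictive accuracy against explanation quality."""
    task_loss = bce(task_prob, task_label)
    concept_loss = sum(
        bce(p, y) for p, y in zip(concept_probs, concept_labels)
    ) / len(concept_probs)
    return task_loss + lam * concept_loss
```

Under this kind of objective, a gradient step that improves the concept predictions also shapes the shared representation used for the fraud decision, which is what lets the model "explain itself" in domain terms rather than via post-hoc attributions.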


AHMoSe: A Knowledge-Based Visual Support System for Selecting Regression Machine Learning Models

Decision support systems have become increasingly popular in the domain ...

Knowledge-intensive Language Understanding for Explainable AI

AI systems have seen significant adoption in various domains. At the sam...

Weakly Supervised Multi-task Learning for Concept-based Explainability

In ML-aided decision-making tasks, such as fraud detection or medical di...

Deceptive AI Explanations: Creation and Detection

Artificial intelligence comes with great opportunities but also grea...

Faithful and Plausible Explanations of Medical Code Predictions

Machine learning models that offer excellent predictive performance ofte...

Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction

We present a randomized controlled trial for a model-in-the-loop regress...

HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning

The use of machine learning (ML) models in decision-making contexts, par...