Teaching the Machine to Explain Itself using Domain Knowledge

11/27/2020
by Vladimir Balayan, et al.

Machine Learning (ML) has been increasingly used to help humans make better and faster decisions. However, non-technical humans-in-the-loop often struggle to comprehend the rationale behind model predictions, which hinders trust in algorithmic decision-making systems. Considerable research on AI explainability attempts to win back trust in AI systems by developing explanation methods, but there has been no major breakthrough yet. Moreover, popular explanation methods (e.g., LIME and SHAP) produce explanations that are very hard to understand for people without a data-science background. To address this, we present JOEL, a neural network-based framework that jointly learns a decision-making task and associated explanations conveying domain knowledge. JOEL is tailored to human-in-the-loop domain experts who lack deep technical ML knowledge, providing high-level insights about the model's predictions that closely resemble the experts' own reasoning. Moreover, we collect domain feedback from a pool of certified experts and use it to improve the model (human teaching), promoting seamless and better-suited explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud detection dataset, showing that it can generalize the explanations learned from the bootstrap dataset. Furthermore, the results indicate that human teaching can further improve explanation prediction quality by approximately 13.57%.
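
The central idea, jointly learning the decision and concept-level explanations, can be pictured with a small multi-task model: a shared encoder feeds a concept head that predicts human-readable, domain-taxonomy concepts (e.g., fraud patterns), and a decision head that makes the final call from those concepts. The sketch below is only illustrative and assumes a PyTorch-style setup; the class name, layer sizes, concept-to-decision wiring, and loss weighting are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class JointExplainerClassifier(nn.Module):
    """Shared encoder with a concept (explanation) head and a decision head."""

    def __init__(self, num_features: int, num_concepts: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_features, hidden), nn.ReLU())
        # Multi-label head over human-readable, domain-taxonomy concepts.
        self.concept_head = nn.Linear(hidden, num_concepts)
        # The decision is computed from the (sigmoid-activated) concepts,
        # so the final prediction is expressed in terms of the explanations.
        self.decision_head = nn.Linear(num_concepts, 1)

    def forward(self, x):
        h = self.encoder(x)
        concept_logits = self.concept_head(h)
        decision_logit = self.decision_head(torch.sigmoid(concept_logits))
        return concept_logits, decision_logit


def joint_loss(concept_logits, decision_logit, concept_targets, decision_targets,
               alpha: float = 0.5):
    """Weighted sum of the decision loss and the concept (explanation) loss."""
    bce = nn.functional.binary_cross_entropy_with_logits
    decision_loss = bce(decision_logit.squeeze(-1), decision_targets)
    concept_loss = bce(concept_logits, concept_targets)
    return alpha * decision_loss + (1.0 - alpha) * concept_loss


# Example: a batch of 32 transactions with 20 features and 8 taxonomy concepts.
if __name__ == "__main__":
    model = JointExplainerClassifier(num_features=20, num_concepts=8)
    x = torch.randn(32, 20)
    concept_targets = torch.randint(0, 2, (32, 8)).float()   # e.g., from semantic mappings
    decision_targets = torch.randint(0, 2, (32,)).float()    # fraud / not fraud
    concept_logits, decision_logit = model(x)
    loss = joint_loss(concept_logits, decision_logit, concept_targets, decision_targets)
    loss.backward()
```

In this reading, the bootstrap concept labels would come from mapping legacy expert-system rules to taxonomy concepts, and expert corrections gathered during human teaching would replace the concept targets for further fine-tuning; this is a plausible sketch of the workflow rather than the authors' exact procedure.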


Related research

01/28/2021 - AHMoSe: A Knowledge-Based Visual Support System for Selecting Regression Machine Learning Models
Decision support systems have become increasingly popular in the domain ...

08/02/2021 - Knowledge-intensive Language Understanding for Explainable AI
AI systems have seen significant adoption in various domains. At the sam...

04/26/2021 - Weakly Supervised Multi-task Learning for Concept-based Explainability
In ML-aided decision-making tasks, such as fraud detection or medical di...

01/21/2020 - Deceptive AI Explanations: Creation and Detection
Artificial intelligence comes with great opportunities but also grea...

04/16/2021 - Faithful and Plausible Explanations of Medical Code Predictions
Machine learning models that offer excellent predictive performance ofte...

07/23/2020 - Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
We present a randomized controlled trial for a model-in-the-loop regress...

06/02/2022 - HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning
The use of machine learning (ML) models in decision-making contexts, par...
