Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification

09/26/2022
by Adrien Bennetot, et al.

Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly used to make important predictions in critical environments, and the danger is that they make and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI) methods that separate explanations from machine learning models have emerged, but they have shortcomings in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement on the importance of endowing Deep Learning models with explanatory capabilities, so that they can themselves provide an answer to why a particular prediction was made. First, we address the problem of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present the Greybox XAI, a framework that composes a DNN and a transparent model through the use of a symbolic Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output similar to the KB used by the transparent model. Once the two models are trained independently, they are used compositionally to form an explainable predictive model. We show that this new architecture is both accurate and explainable on several datasets.
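The compositional idea in the abstract can be sketched in a few lines: a perception model maps an image to symbolic KB attributes (e.g., detected object parts), and a transparent logistic regression classifies from those attributes, so every prediction comes with the attributes that support it. The sketch below is illustrative only and is not the authors' code: the perception model is stubbed out, the attribute names and toy KB are invented for the example, and the logistic regression is trained by plain gradient descent.

```python
import numpy as np

# Hypothetical symbolic KB: each class is described by which object
# parts should be present (binary attributes). Names are illustrative.
ATTRS = ["whiskers", "tail", "fur", "wheels", "windshield"]
KB = {0: np.array([1., 1., 1., 0., 0.]),   # class 0: "cat"
      1: np.array([0., 0., 0., 1., 1.])}   # class 1: "car"

# Toy training set of noisy attribute vectors per class, standing in
# for the encoder-decoder's output on real RGB images.
rng = np.random.default_rng(0)
X = np.vstack([KB[c] + rng.normal(0, 0.1, 5)
               for c in (0, 1) for _ in range(50)])
y = np.array([c for c in (0, 1) for _ in range(50)], dtype=float)

# Transparent model: binary logistic regression via gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def predict_with_explanation(attrs):
    """Compose perception output with the transparent classifier and
    return the class plus the detected attributes supporting it."""
    pred = int(1.0 / (1.0 + np.exp(-(attrs @ w + b))) > 0.5)
    support = [a for a, v in zip(ATTRS, attrs) if v > 0.5]
    return pred, support

pred, support = predict_with_explanation(KB[1])
```

Because the classifier operates on named symbolic attributes rather than raw pixels, its weights and the detected attributes form the explanation, which is the transparency property the framework targets.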


