Explainable AI for Classification using Probabilistic Logic Inference

05/05/2020
by Xiuyi Fan et al.

The overarching goal of Explainable AI is to develop systems that not only exhibit intelligent behaviours but can also explain their rationale and reveal insights. In explainable machine learning, methods that deliver both high prediction accuracy and transparent explanations are valuable. In this work, we present an explainable classification method. Our method first constructs a symbolic Knowledge Base from the training data, and then performs probabilistic inference over that Knowledge Base with linear programming. Our approach achieves learning performance comparable to that of traditional classifiers such as random forests, support vector machines and neural networks. As explanations, it identifies the decisive features responsible for a classification, producing results similar to those found by SHAP, a state-of-the-art Shapley-value-based method. Our algorithms perform well on a range of synthetic and non-synthetic data sets.
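The core inference step the abstract describes, probabilistic inference over a symbolic Knowledge Base via linear programming, follows the classic Nilsson-style formulation: treat each truth assignment (possible world) as an LP variable, constrain the world probabilities to match the probabilities the Knowledge Base assigns to its formulas, and then minimise/maximise the probability of a query formula. The sketch below is illustrative only; the toy Knowledge Base, formula probabilities and variable names are assumptions, not the authors' code.

```python
# Nilsson-style probabilistic logic inference via linear programming
# (a minimal sketch of the general technique, not the paper's implementation).
from itertools import product
from scipy.optimize import linprog

# Two propositions A and B -> four possible worlds (truth assignments).
worlds = list(product([0, 1], repeat=2))  # each world w = (A, B)

# Toy Knowledge Base: P(A) = 0.7 and P(A -> B) = 0.9 (illustrative values).
def holds_A(w):       return w[0] == 1
def holds_A_imp_B(w): return (not w[0]) or bool(w[1])
kb = [(holds_A, 0.7), (holds_A_imp_B, 0.9)]

# Equality constraints: world probabilities sum to 1, and the total mass
# of worlds satisfying each KB formula equals that formula's probability.
A_eq = [[1.0] * len(worlds)] + [
    [1.0 if f(w) else 0.0 for w in worlds] for f, _ in kb
]
b_eq = [1.0] + [p for _, p in kb]

# Objective: probability of the query formula B (mass of worlds where B holds).
c = [1.0 if w[1] else 0.0 for w in worlds]

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(worlds))
hi = linprog([-x for x in c], A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(worlds))
print(f"P(B) lies in [{lo.fun:.2f}, {-hi.fun:.2f}]")  # here: [0.60, 0.90]
```

Because the Knowledge Base only bounds the query rather than pinning it down, the LP returns an interval for P(B); a classifier built on this idea can compare such intervals across class labels and report the constraining formulas (features) as the explanation.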

Related research

- 09/26/2022: Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
- 10/02/2021: Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR
- 12/01/2020: Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment
- 05/09/2018: Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation
- 08/09/2022: Explainable prediction of Qcodes for NOTAMs using column generation
- 12/23/2022: Rule Learning by Modularity
- 10/16/2020: Monitoring Trust in Human-Machine Interactions for Public Sector Applications
