
Abduction-Based Explanations for Machine Learning Models

11/26/2018
by Alexey Ignatiev, et al.
VMware
University of Lisbon

The growing range of applications of Machine Learning (ML) in a multitude of settings motivates the ability to compute small explanations for predictions made. Small explanations are generally accepted as easier for human decision makers to understand. Most earlier work on computing explanations is based on heuristic approaches, providing no guarantees of quality in terms of how close such solutions are to cardinality- or subset-minimal explanations. This paper develops a constraint-agnostic solution for computing explanations for any ML model. The proposed solution exploits abductive reasoning, and imposes the requirement that the ML model can be represented as sets of constraints using some target constraint reasoning system for which the decision problem can be answered with some oracle. The experimental results, obtained on well-known datasets, validate the scalability of the proposed approach as well as the quality of the computed solutions.
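To make the abstract's key step concrete, the sketch below illustrates one standard way of extracting a subset-minimal explanation once the ML model and an instance are encoded as constraints: a deletion-based linear search driven by an entailment oracle. This is a minimal sketch, not the authors' implementation; the names `subset_minimal_explanation` and `entails` are illustrative, and the oracle stands in for the target constraint reasoning system (e.g., an SMT or SAT solver).

```python
# Minimal sketch (not the authors' code) of a deletion-based search for a
# subset-minimal abductive explanation. The hypothetical oracle
# `entails(hypotheses)` answers whether the model's constraint encoding
# together with the hypothesised feature literals still forces the original
# prediction (i.e., encoding AND hypotheses AND NOT prediction is unsatisfiable).

def subset_minimal_explanation(literals, entails):
    """Drop feature literals one at a time, keeping only those needed for the
    prediction to still be entailed; the survivors form a subset-minimal
    explanation."""
    explanation = list(literals)
    for lit in list(literals):
        candidate = [l for l in explanation if l != lit]
        if entails(candidate):       # prediction still follows without `lit`,
            explanation = candidate  # so `lit` is redundant and can be dropped
    return explanation


if __name__ == "__main__":
    # Toy oracle: the "model" predicts positive exactly when x1 and x3 hold,
    # so a hypothesis set entails the prediction iff it contains both literals.
    def entails(hypotheses):
        return "x1" in hypotheses and "x3" in hypotheses

    print(subset_minimal_explanation(["x1", "x2", "x3", "x4"], entails))
    # prints ['x1', 'x3']
```

In practice the toy oracle would be replaced by a call to the chosen reasoning system, which is why the approach is constraint-agnostic: only the decision procedure changes, not the extraction loop.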

Related research:

07/04/2019 · On Validating, Repairing and Refining Heuristic ML Explanations
Recent years have witnessed a fast-growing interest in computing explana...

07/04/2021 · Efficient Explanations for Knowledge Compilation Languages
Knowledge compilation (KC) languages find a growing number of practical ...

10/15/2021 · Tree-based local explanations of machine learning model predictions, AraucanaXAI
Increasingly complex learning methods such as boosting, bagging and deep...

03/21/2022 · Optimizing Binary Decision Diagrams with MaxSAT for classification
The growing interest in explainable artificial intelligence (XAI) for cr...

03/28/2022 · User Driven Model Adjustment via Boolean Rule Explanations
AI solutions are heavily dependent on the quality and accuracy of the in...

09/13/2021 · ML Based Lineage in Databases
We track the lineage of tuples throughout their database lifetime. That ...

07/08/2020 · Just in Time: Personal Temporal Insights for Altering Model Decisions
The interpretability of complex Machine Learning models is coming to be ...