Local Rule-Based Explanations of Black Box Decision Systems

05/28/2018
by   Riccardo Guidotti, et al.

Recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes from their users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts; such contexts require explanations that reveal the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. It then derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that would lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy of mimicking the black box.
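The two-step recipe in the abstract (generate a synthetic neighborhood around the instance, fit a local interpretable predictor on the black box's labels, then read off a decision rule and a counterfactual rule) can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: the black box `black_box`, its features `income` and `debt`, and all thresholds are hypothetical, Gaussian perturbation stands in for LORE's genetic-algorithm neighborhood generation, and a depth-1 decision stump stands in for the local decision tree.

```python
import random

# Hypothetical black box (illustrative only): approve when income > 50 and debt < 30.
def black_box(x):
    return 1 if x["income"] > 50 and x["debt"] < 30 else 0

def neighborhood(instance, n=500, scale=20.0, seed=0):
    """Synthetic neighborhood around `instance`, labeled by the black box.
    Gaussian perturbation is a simplification of LORE's genetic algorithm."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        z = {k: v + rng.gauss(0, scale) for k, v in instance.items()}
        points.append((z, black_box(z)))
    return points

def best_stump(points):
    """Depth-1 decision stump as the local interpretable predictor:
    pick the (feature, threshold, left-label) with fewest misclassifications."""
    best = None
    for feat in points[0][0]:
        values = sorted(z[feat] for z, _ in points)
        for thr in values[::25]:            # coarse grid of candidate thresholds
            for left_label in (0, 1):       # label assigned to the "<= thr" side
                errs = sum(
                    (left_label if z[feat] <= thr else 1 - left_label) != y
                    for z, y in points
                )
                if best is None or errs < best[0]:
                    best = (errs, feat, thr, left_label)
    return best[1], best[2], best[3]

# Explain one instance: a decision rule plus a counterfactual rule.
x = {"income": 60.0, "debt": 10.0}
feat, thr, left_label = best_stump(neighborhood(x))
on_left = x[feat] <= thr
label = left_label if on_left else 1 - left_label
side, flip = ("<=", ">") if on_left else (">", "<=")
print(f"decision rule:       IF {feat} {side} {thr:.1f} THEN outcome = {label}")
print(f"counterfactual rule: IF {feat} {flip} {thr:.1f} THEN outcome = {1 - label}")
```

The stump's split doubles as the explanation: the instance's side of the threshold yields the decision rule, and the opposite side yields a counterfactual rule describing the minimal feature change that flips the outcome, mirroring the pair of rule types LORE produces.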

