
Contrastive Explanations with Local Foil Trees

06/19/2018
by Jasper van der Waa, et al.
TNO

Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become infeasible without restraining the set of important features. We propose to utilize the human tendency to ask questions like "Why this output (the fact) instead of that output (the foil)?" to reduce the number of features to those that play a main role in the contrast being asked about. Our proposed method uses locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact. In this study we illustrate this approach on three benchmark classification tasks.
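The following is a minimal sketch of the foil-tree idea described above, not the authors' implementation: it assumes a scikit-learn surrogate tree, Gaussian sampling around the instance, and a hypothetical blackbox_predict function and foil_class argument. The returned rules are those the instance violates on a path to a foil leaf, i.e. the local contrastive explanation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def contrastive_rules(blackbox_predict, x, foil_class,
                      n_samples=2000, scale=0.3, max_depth=3):
    """Rules the instance x would have to satisfy to reach a foil leaf."""
    rng = np.random.default_rng(0)
    # 1. Sample points locally around the instance x (illustrative Gaussian perturbation).
    X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. One-versus-all labels: does the black-box model predict the foil class?
    y_local = (blackbox_predict(X_local) == foil_class).astype(int)
    # 3. Fit a small, interpretable surrogate tree on the local sample.
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_local, y_local)
    t = tree.tree_
    # 4. Find a leaf predicted as foil and collect the threshold rules on its path.
    def path_to_foil(node, rules):
        if t.children_left[node] == -1:  # leaf node
            return rules if t.value[node][0].argmax() == 1 else None
        f, thr = t.feature[node], t.threshold[node]
        left = path_to_foil(t.children_left[node], rules + [(f, "<=", thr)])
        if left is not None:
            return left
        return path_to_foil(t.children_right[node], rules + [(f, ">", thr)])
    rules = path_to_foil(0, []) or []
    # 5. Keep only the rules the fact instance does not already satisfy: these are
    #    the feature changes that would flip the local prediction to the foil.
    return [(f, op, thr) for f, op, thr in rules
            if (x[f] > thr) if op == "<=" else (x[f] <= thr)]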

