ViCE: Visual Counterfactual Explanations for Machine Learning Models

03/05/2020
by Oscar Gomez, et al.

Continued improvements in the predictive accuracy of machine learning models have enabled their widespread practical application. Yet many decisions made with seemingly accurate models still require verification by domain experts, and end-users of a model also want to understand the reasons behind specific decisions. The need for interpretability is thus paramount. In this paper we present an interactive visual analytics tool, ViCE, that generates counterfactual explanations to contextualize and evaluate model decisions. Each sample is assessed to identify the minimal set of changes needed to flip the model's output. These explanations aim to provide end-users with personalized, actionable insights with which to understand, and possibly contest or improve, automated decisions. The results are displayed in a visual interface where counterfactual explanations are highlighted and interactive methods are provided for users to explore the data and model. The functionality of the tool is demonstrated by its application to a home equity line of credit dataset.
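The core idea of the abstract, finding a minimal set of feature changes that flips a model's output, can be illustrated with a simple greedy search. The sketch below is an assumption for illustration only (ViCE's actual algorithm may differ): it repeatedly takes the single feature step that moves the model's score furthest toward the decision boundary, using a toy logistic model as the scorer.

```python
import numpy as np

def find_counterfactual(score, x, threshold=0.5, step=0.05, max_steps=200):
    """Greedy counterfactual search (illustrative sketch, not ViCE's exact method).

    score: maps a feature vector to the positive-class probability.
    Returns (counterfactual, changed feature indices), or (None, None) if
    no flip is found within max_steps.
    """
    cf = np.array(x, dtype=float)
    start_positive = score(cf) >= threshold
    changed = set()
    for _ in range(max_steps):
        if (score(cf) >= threshold) != start_positive:
            return cf, sorted(changed)  # prediction flipped
        # Try one step up/down on each feature; keep the move that
        # pushes the score furthest toward the opposite decision.
        cur = score(cf)
        best_gain, best_move = 0.0, None
        for i in range(len(cf)):
            for delta in (step, -step):
                cand = cf.copy()
                cand[i] += delta
                gain = (cur - score(cand)) if start_positive else (score(cand) - cur)
                if gain > best_gain:
                    best_gain, best_move = gain, (i, delta)
        if best_move is None:
            return None, None  # no single step improves; search is stuck
        i, delta = best_move
        cf[i] += delta
        changed.add(i)
    return None, None

# Toy logistic model over three features (hypothetical weights).
w = np.array([1.0, -2.0, 0.5])
def toy_score(v):
    return 1.0 / (1.0 + np.exp(-(v @ w)))

x = np.array([0.2, 0.4, 0.1])           # scored below the threshold
cf, changed = find_counterfactual(toy_score, x)
```

Because the search greedily favors the most influential feature, the resulting explanation tends to touch few features, which matches the "minimal set of changes" framing in the abstract.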


Related research

08/19/2020  DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models
    With machine learning models being increasingly applied to various decis...

09/12/2021  AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation
    Rapid improvements in the performance of machine learning models have pu...

05/04/2017  A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations
    Human-in-the-loop data analysis applications necessitate greater transpa...

02/17/2021  Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
    Interpretability methods aim to help users build trust in and understand...

07/08/2022  TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues
    Machine Learning (ML) models are increasingly used to make critical deci...

06/09/2023  Consistent Explanations in the Face of Model Indeterminacy via Ensembling
    This work addresses the challenge of providing consistent explanations f...

02/11/2020  Decisions, Counterfactual Explanations and Strategic Behavior
    Data-driven predictive models are increasingly used to inform decisions ...
