Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

11/01/2017
by Sandra Wachter, et al.

There has been much discussion of the right to explanation in the EU General Data Protection Regulation (GDPR): whether it exists, and what its merits and disadvantages are. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions about their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act, rather than merely understand, the scope and content of an explanation can be gauged according to the specific goal or action it is intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
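Concretely, such a counterfactual can be generated by searching for the point closest to the original instance whose model output reaches the desired value, treating the model purely as a black box that can only be queried. The minimal sketch below illustrates this search; the loan-approval scorer, feature values, and penalty weight are illustrative assumptions for the example, not anything prescribed by the paper.

    import numpy as np
    from scipy.optimize import minimize

    def black_box_score(x):
        # Stand-in for an opaque model: returns a probability of loan approval.
        # In practice only query access to the deployed model is needed.
        w = np.array([0.6, 0.3])   # hypothetical weights over two normalized features
        return 1.0 / (1.0 + np.exp(-(x @ w - 0.5)))

    def counterfactual(x0, target=0.6, lam=1000.0):
        # Search for the smallest change to x0 whose score reaches `target`,
        # trading off closeness to the desired outcome (weighted by lam)
        # against L1 distance from the original instance x0.
        def loss(x):
            return lam * (black_box_score(x) - target) ** 2 + np.abs(x - x0).sum()
        return minimize(loss, x0, method="Nelder-Mead").x

    applicant = np.array([0.1, 0.2])   # features of a hypothetical rejected applicant
    cf = counterfactual(applicant)     # nearby point whose score clears the target
    print("original score:      ", round(black_box_score(applicant), 3))
    print("counterfactual:      ", np.round(cf, 3))
    print("counterfactual score:", round(black_box_score(cf), 3))

The difference between cf and the applicant's original features is the explanation itself ("had your income been X, the loan would have been approved"); nothing about the model's internal weights or structure needs to be disclosed.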

