The Conflict Between Explainable and Accountable Decision-Making Algorithms

05/11/2022
by Gabriel Lima, et al.

Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and who should be hired. Even though these systems are currently deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems providing post-hoc explanations could be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.


Related research

02/24/2023
Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
In this paper, we argue for a paradigm shift from the current model of e...

03/27/2023
Monetizing Explainable AI: A Double-edged Sword
Algorithms used by organizations increasingly wield power in society as ...

07/09/2020
Predicting Court Decisions for Alimony: Avoiding Extra-legal Factors in Decision made by Judges and Not Understandable AI Models
The advent of machine learning techniques has made it possible to obtain...

12/15/2020
Towards Grad-CAM Based Explainability in a Legal Text Processing Pipeline
Explainable AI (XAI) is a domain focused on providing interpretability and...

10/30/2019
Mathematical decisions and non-causal elements of explainable AI
Recent conceptual discussion on the nature of the explainability of Arti...

01/03/2019
Towards a Framework Combining Machine Ethics and Machine Explainability
We find ourselves surrounded by a rapidly increasing number of autonomou...

12/01/2020
The Hidden Inconsistencies Introduced by Predictive Algorithms in Judicial Decision Making
Algorithms, from simple automation to machine learning, have been introd...
