Generating Process-Centric Explanations to Enable Contestability in Algorithmic Decision-Making: Challenges and Opportunities

05/01/2023
by Mireia Yurrita, et al.

Human-AI decision making is becoming increasingly ubiquitous, and explanations have been proposed to facilitate better Human-AI interactions. Recent research has investigated the positive impact of explanations on decision subjects' fairness perceptions in algorithmic decision-making. Despite these advances, most studies have captured the effect of explanations in isolation, considering explanations as ends in themselves, and reducing them to technical solutions provided through XAI methodologies. In this vision paper, we argue that the effect of explanations on fairness perceptions should rather be captured in relation to decision subjects' right to contest such decisions. Since contestable AI systems are open to human intervention throughout their lifecycle, contestability requires explanations that go beyond outcomes and also capture the rationales that led to the development and deployment of the algorithmic system in the first place. We refer to such explanations as process-centric explanations. In this work, we introduce the notion of process-centric explanations and describe some of the main challenges and research opportunities for generating and evaluating such explanations.
