Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI

02/01/2023
by Upol Ehsan, et al.

Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss the conceptual implications of the framework, share practical considerations for operationalizing it, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.

Related research

11/12/2022
Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limit...

04/24/2023
Towards a Praxis for Intercultural Ethics in Explainable AI
Explainable AI (XAI) is often promoted with the idea of helping users un...

07/13/2023
Is Task-Agnostic Explainable AI a Myth?
Our work serves as a framework for unifying the challenges of contempora...

07/10/2023
Understanding Real-World AI Planning Domains: A Conceptual Framework
Planning is a pivotal ability of any intelligent system being developed ...

09/01/2020
Explainability Case Studies
Explainability is one of the key ethical concepts in the design of AI sy...

06/22/2022
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Recent years have seen a surge of interest in the field of explainable A...

12/21/2021
Empirically Improved Tokuda Gap Sequence in Shellsort
Experiments are conducted to improve Tokuda (1992) gap sequence in Shell...
