Can Requirements Engineering Support Explainable Artificial Intelligence? Towards a User-Centric Approach for Explainability Requirements

06/03/2022
by Umm-e-Habiba et al.

With the recent proliferation of artificial intelligence systems, there has been a surge in the demand for explainability of these systems. Explanations help to reduce system opacity, support transparency, and increase stakeholder trust. In this position paper, we discuss synergies between requirements engineering (RE) and Explainable AI (XAI). We highlight challenges in the field of XAI, and propose a framework and research directions on how RE practices can help to mitigate these challenges.


