Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work

by Sabine Theis, et al.

The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control requires systems that are not only practical and efficient but also, to some extent, explainable to humans in order to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. Results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information needed to accept an AI, and the representation and interaction methods that promote trust in an AI. Results indicate two main groups of users: developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users' information needs vary in specificity, complexity, and urgency and must account for context, domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Trusted interaction methods are human-like, including natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for the development of future human-centric AI systems and are thus suitable as input for further application-specific investigations of user needs.




Examining correlation between trust and transparency with explainable artificial intelligence

Trust between humans and artificial intelligence (AI) is an issue which h...

Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review

A state-of-the-art systematic review on XAI applied to Prognostic and He...

Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems

We consider the problem of providing users of deep Reinforcement Learnin...

Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies

Many ethical frameworks require artificial intelligence (AI) systems to ...

Explainability in Mechanism Design: Recent Advances and the Road Ahead

Designing and implementing explainable systems is seen as the next step ...

Argument Schemes and Dialogue for Explainable Planning

Artificial Intelligence (AI) is being increasingly deployed in practical...

Argument Schemes for Explainable Planning

Artificial Intelligence (AI) is being increasingly used to develop syste...
