Explainable AI does not provide the explanations end-users are asking for

01/25/2023
by Savio Rozario, et al.

Users of many AI systems frequently call for Explainable Artificial Intelligence (XAI) techniques, with the goal of understanding complex models and their predictions, and of building trust. While these techniques suit certain tasks during development, their adoption by organisations as a means of building trust in deployed machine learning systems has unintended consequences. In this paper we discuss XAI's limitations in deployment and conclude that transparency alongside rigorous validation is better suited to gaining trust in AI systems.
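As a concrete illustration of the kind of post-hoc XAI technique the abstract refers to, the minimal sketch below applies permutation feature importance (a standard model-inspection method from scikit-learn) to a classifier. The dataset, model, and parameter choices are illustrative assumptions, not drawn from the paper.

```python
# Illustrative post-hoc explanation workflow (assumed setup, not the
# paper's own experiment): train a model, then ask which features it
# appears to rely on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much held-out accuracy drops; larger drops suggest heavier reliance
# on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five most important features with mean +/- std over repeats.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Output of this kind, a ranked list of per-feature importance scores, is aimed at developers validating a model; as the paper argues, such artefacts rarely answer the questions end-users actually ask of a deployed system.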

