
Explainable AI: current status and future directions

by   Prashant Gohel, et al.

Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI). XAI can explain how an AI system obtained a particular solution (e.g., a classification or object detection) and can also answer other "wh" questions. Such explainability is not possible with traditional black-box AI. Explainability is essential for critical applications, such as defense, healthcare, law and order, and autonomous vehicles, where this know-how is required for trust and transparency. A number of XAI techniques have been proposed for such applications. This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view. The advantages and shortcomings of these techniques are discussed, and pointers to some future directions are provided.
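To make the idea of "explaining how AI obtained a particular solution" concrete, the sketch below shows one simple model-agnostic explanation technique: occlusion-based feature importance, where each input feature is replaced by a baseline value and the resulting change in the model's score is measured. The `toy_model` classifier and the choice of a zero baseline are illustrative assumptions, not from the paper.

```python
import numpy as np

def toy_model(x):
    # Hypothetical stand-in classifier: scores an input vector with a
    # fixed weight vector (feature index 2 dominates the decision).
    weights = np.array([0.1, 0.2, 3.0, 0.1])
    return float(weights @ x)

def occlusion_importance(model, x, baseline=0.0):
    """Model-agnostic explanation: replace each feature with a baseline
    value and record how much the model's score drops."""
    base_score = model(x)
    importances = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline  # occlude one feature at a time
        importances[i] = base_score - model(occluded)
    return importances

x = np.array([1.0, 1.0, 1.0, 1.0])
scores = occlusion_importance(toy_model, x)
print(int(np.argmax(scores)))  # index of the most influential feature
```

The same idea scales to images (occluding patches of pixels) or audio (masking time windows), which is how several of the multimedia XAI techniques surveyed in the paper produce saliency maps.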


