A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks
Recent advancements in the machine learning and signal processing domains have resulted in an extensive surge of interest in Deep Neural Networks (DNNs) due to their unprecedented performance and high accuracy on different and challenging problems of significant engineering importance. However, when such deep learning architectures are used to make critical decisions, such as those involving human lives (e.g., in control systems and medical applications), it is of paramount importance to understand, trust, and, in one word, "explain" the reasoning behind a deep model's decisions. In many applications, artificial neural networks (including DNNs) are treated as black-box systems that do not provide sufficient clues about their internal processing. Although some recent efforts have been made to explain the behaviors and decisions of deep networks, the explainable artificial intelligence (XAI) domain, which aims to reason about the behavior and decisions of DNNs, is still in its infancy. The aim of this paper is to provide a comprehensive overview of the understanding, visualization, and explanation of the internal and overall behavior of DNNs.