Network Analysis for Explanation

12/07/2017
by Hiroshi Kuwajima, et al.

Safety-critical systems strongly require quality aspects of artificial intelligence, including explainability. In this paper, we analyzed a trained network to extract the features that contribute most to its inference. Based on this analysis, we developed a simple solution that generates explanations of the inference process.
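
The abstract describes extracting the features that contribute most to a trained network's inference. Below is a minimal, hedged sketch of one way such an analysis could look, using gradient-times-input attribution on a stand-in PyTorch model; the model, feature count, and attribution method are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained network (assumption): a small MLP with random weights.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 8, requires_grad=True)   # one input sample with 8 features
logits = model(x)
pred = logits.argmax(dim=1).item()

# Gradient of the predicted class score with respect to the input features.
logits[0, pred].backward()
attribution = (x.grad * x).detach().squeeze(0)   # gradient * input attribution

# Features with the largest absolute attribution contribute most to the inference.
top = torch.topk(attribution.abs(), k=3)
print(f"predicted class: {pred}")
for idx, score in zip(top.indices.tolist(), top.values.tolist()):
    print(f"feature {idx}: attribution magnitude {score:.4f}")

Running the sketch prints the three features with the largest attribution magnitude; a simple explanation generator could then map those feature indices to human-readable statements about the inference.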
