Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges

03/20/2018
by   Gabrielle Ras, et al.

Issues regarding explainable AI involve four components: users, laws and regulations, explanations, and algorithms. Together these components provide a context in which explanation methods can be evaluated regarding their adequacy. The goal of this chapter is to bridge the gap between expert users and lay users. Different kinds of users are identified and their concerns revealed, relevant statements from the General Data Protection Regulation (GDPR) are analyzed in the context of Deep Neural Networks (DNNs), a taxonomy for classifying existing explanation methods is introduced, and finally, the various classes of explanation methods are analyzed to verify whether user concerns are justified. Overall, it is clear that (visual) explanations can be given about various aspects of the influence of the input on the output. However, explanation methods or interfaces for lay users are still missing, and we speculate on the criteria such methods and interfaces should satisfy. Finally, it is noted that two important concerns are difficult to address with explanation methods: the concern about bias in datasets that leads to biased DNNs, and the suspicion of unfair outcomes.

