Towards Benchmarking Explainable Artificial Intelligence Methods

08/25/2022
by   Lars Holmberg, et al.

The currently dominant artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning. Today's neural networks are information processing systems devoid of understanding and reasoning capabilities; consequently, they cannot explain promoted decisions in a humanly valid form. In this work, we revisit and use fundamental philosophy-of-science theories as an analytical lens, with the goal of revealing what can be expected, and more importantly not expected, from methods that aim to explain decisions promoted by a neural network. Through a case study, we investigate the performance of a selection of explainability methods over two mundane domains, animals and headgear. Our study lays bare that the usefulness of these methods relies on human domain knowledge and on our ability to understand, generalise and reason. The explainability methods can be useful when the goal is to gain further insight into a trained neural network's strengths and weaknesses. If the aim is instead to use these methods to promote actionable decisions or to build trust in ML models, they need to be less ambiguous than they are today. We conclude from our study that benchmarking explainability methods is a central quest towards trustworthy artificial intelligence and machine learning.


Related research

05/03/2023
Commentary on explainable artificial intelligence methods: SHAP and LIME
eXplainable artificial intelligence (XAI) methods have emerged to conver...

12/31/2022
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
The success of neural networks builds to a large extent on their ability...

11/16/2021
Deep Distilling: automated code generation using explainable deep learning
Human reasoning can distill principles from observed patterns and genera...

02/27/2018
Improved Explainability of Capsule Networks: Relevance Path by Agreement
Recent advancements in signal processing and machine learning domains ha...

06/20/2019
Unexplainability and Incomprehensibility of Artificial Intelligence
Explainability and comprehensibility of AI are important requirements fo...

08/06/2018
Machine Learning Promoting Extreme Simplification of Spectroscopy Equipment
The spectroscopy measurement is one of main pathways for exploring and u...

11/11/2019
Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine
As the 5th Generation (5G) mobile networks are bringing about global soc...
