Using Explainable Artificial Intelligence to Increase Trust in Computer Vision

02/04/2020
by   Christian Meske, et al.

Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the complexity of the underlying algorithms is a reason for their increased performance, it also leads to the "black box" problem and consequently decreases trust in AI. In this regard, "Explainable Artificial Intelligence" (XAI) makes it possible to open that black box and to improve the degree of AI transparency. In this paper, we first discuss the theoretical impact of explainability on trust in AI, and then showcase what the use of XAI in a health-related setting can look like. More specifically, we show how XAI can be applied to understand why Computer Vision, based on deep learning, did or did not detect a disease (malaria) in image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategies of two deep learning models often used for Computer Vision: the Convolutional Neural Network and the Multi-Layer Perceptron. Our empirical results show that i) the AI sometimes used questionable or irrelevant features of an image to detect malaria (even when the prediction was correct), and ii) that there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and in AI systems in general, especially through increased understandability and predictability.
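As one illustration of the kind of analysis described above, a minimal occlusion-sensitivity sketch shows how an XAI technique can reveal which image regions a model's prediction relies on. This is a generic probing method, not necessarily the one used in the paper; `toy_model` and all names here are hypothetical stand-ins.

```python
import numpy as np

def occlusion_map(model, image, patch=4):
    """Occlusion sensitivity: how much the model's score drops when
    each patch of the image is masked out. Large drops mark regions
    the model relied on for its prediction."""
    base_score = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Hypothetical stand-in "model": responds only to the top-left 8x8 region,
# mimicking a classifier that keys on one part of a blood smear image.
def toy_model(img):
    return float(img[:8, :8].sum())

img = np.ones((16, 16))
heat = occlusion_map(toy_model, img)
# Patches inside the top-left region produce large score drops;
# patches elsewhere produce none, exposing the model's strategy.
```

Applied to two different models (e.g., a CNN and an MLP) on the same image, such heat maps make it possible to compare detection strategies and to spot cases where a prediction, even a correct one, rests on irrelevant features.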

