An Explainable AI System for the Diagnosis of High Dimensional Biomedical Data

07/05/2021
by Alfred Ultsch, et al.

A typical state-of-the-art flow cytometry data sample consists of measurements of more than 100,000 cells in 10 or more features. AI systems can diagnose such data with almost the same accuracy as human experts. However, such systems face one central challenge: their decisions have far-reaching consequences for the health and life of people, and therefore the decisions of AI systems need to be understandable and justifiable by humans. In this work, we present a novel explainable AI method, called ALPODS, which is able to classify (diagnose) cases based on clusters, i.e., subpopulations, in the high-dimensional data. ALPODS explains its decisions in a form that is understandable for human experts. For the identified subpopulations, fuzzy reasoning rules expressed in the typical language of domain experts are generated. A visualization method based on these rules allows human experts to understand the reasoning used by the AI system. A comparison with a selection of state-of-the-art explainable AI systems shows that ALPODS operates efficiently on known benchmark data as well as on everyday routine case data.
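To make the idea of fuzzy reasoning rules over subpopulations more concrete, the following minimal Python sketch illustrates what such a rule could look like for flow cytometry data. It is not the ALPODS implementation: the marker names (CD19, CD5, CD20), the membership thresholds, and the rule itself are hypothetical examples chosen only for illustration.

# Minimal illustration (not the ALPODS algorithm) of a fuzzy diagnostic
# rule over flow cytometry markers. All marker names, thresholds, and the
# rule below are hypothetical examples, not taken from the paper.

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def rule_cll_like(cell):
    """
    Hypothetical rule in the spirit of "IF CD19 is high AND CD5 is high
    AND CD20 is dim THEN the cell belongs to the CLL-like subpopulation".
    The fuzzy AND is taken as the minimum of the memberships.
    """
    cd19_high = trapezoid(cell["CD19"], 0.4, 0.6, 1.0, 1.2)
    cd5_high  = trapezoid(cell["CD5"],  0.4, 0.6, 1.0, 1.2)
    cd20_dim  = trapezoid(cell["CD20"], 0.0, 0.1, 0.3, 0.5)
    return min(cd19_high, cd5_high, cd20_dim)

if __name__ == "__main__":
    # Toy cells with normalized marker intensities (hypothetical values).
    cells = [
        {"CD19": 0.9, "CD5": 0.8, "CD20": 0.2},   # matches the rule strongly
        {"CD19": 0.9, "CD5": 0.1, "CD20": 0.2},   # CD5 low -> no match
    ]
    memberships = [rule_cll_like(c) for c in cells]
    # Fraction of cells assigned to the subpopulation (0.5 cut-off assumed).
    fraction = sum(m > 0.5 for m in memberships) / len(cells)
    print("per-cell rule memberships:", memberships)
    print("fraction of CLL-like cells:", fraction)

The minimum operator is one common choice for a fuzzy AND; ALPODS may combine memberships differently, and the 0.5 cut-off for assigning a cell to a subpopulation is likewise an assumption of this sketch.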
