HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine

06/09/2023
by Rodrigo Agerri, et al.

Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well, it requires, among other factors: selecting a proper level of generality/specificity for the explanation; considering assumptions about how familiar the explanation's beneficiary is with the AI task under consideration; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g., expert evidence) that might not be part of the prediction process; and providing evidence that supports negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way. Given these considerations, ANTIDOTE fosters an integrated vision of explainable AI, in which low-level characteristics of the deep learning process are combined with the higher-level schemes characteristic of human argumentation. ANTIDOTE will exploit cross-disciplinary competences in deep learning and argumentation to support a broader and innovative view of explainable AI, where the need for high-quality explanations in the deliberation of clinical cases is critical. As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and on argumentation in the medical domain in particular.
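
Since the Antidote CasiMedicos dataset is released to facilitate research, a minimal loading sketch may help readers get started. The sketch below assumes the dataset is distributed as JSON Lines; the file name (casimedicos_train.jsonl) and the field names (case, options, explanation) are hypothetical and should be checked against the actual release.

```python
# Minimal sketch for loading an Antidote CasiMedicos-style release,
# assuming a JSON Lines (one JSON object per line) distribution.
# The file name and field names are hypothetical, not the official schema.
import json

def load_casimedicos(path: str) -> list[dict]:
    """Read one JSON record per non-empty line into a list of dicts."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

if __name__ == "__main__":
    examples = load_casimedicos("casimedicos_train.jsonl")  # hypothetical file name
    first = examples[0]
    # Hypothetical fields: a clinical case description, the multiple-choice
    # options, and the physician-written explanation of the correct answer.
    print(first.get("case"))
    print(first.get("options"))
    print(first.get("explanation"))
```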

Related research

07/07/2021 · Levels of explainable artificial intelligence for human-aligned conversational explanations
Over the last few years there has been rapid research growth into eXplai...

05/27/2020 · Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI
eXplainable AI focuses on generating explanations for the output of an A...

10/02/2021 · Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR
The European Union (EU) through the High-Level Expert Group on Artificia...

05/24/2021 · Argumentative XAI: A Survey
Explainable AI (XAI) has been investigated for decades and, together wit...

08/17/2022 · A Concept and Argumentation based Interpretable Model in High Risk Domains
Interpretability has become an essential topic for artificial intelligen...

12/30/2022 · Behave-XAI: Deep Explainable Learning of Behavioral Representational Data
According to the latest trend of artificial intelligence, AI-systems nee...

12/18/2017 · Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology
Digital pathology is not only one of the most promising fields of diagno...
