
Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement

by Antonio Rago, et al.
University of Brescia
Imperial College London

Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models' outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
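To make the idea concrete, here is a minimal sketch (not the paper's actual algorithm; all names are hypothetical) of how a bipolar argumentation framework might be forged from a toy causal model, reading bi-variate reinforcement crudely as: a variable supports the output if nudging it up raises the output, and attacks it if nudging it up lowers the output.

```python
# Illustrative sketch only: deriving a bipolar argumentation framework
# (arguments, attacks, supports) from a toy causal model by probing it.
# This is a hypothetical simplification of the "explanation mould" idea,
# not the construction defined in the paper.

def causal_model(x1, x2):
    # Toy structural equation: the output increases with x1, decreases with x2.
    return 2 * x1 - 3 * x2

def derive_baf(model, inputs, eps=1.0):
    """Classify each input variable as a supporter or attacker of the
    output: if increasing the variable increases the output, record a
    support; if it decreases the output, record an attack (a crude
    reading of bi-variate reinforcement)."""
    base = model(**inputs)
    supports, attacks = set(), set()
    for name in inputs:
        probed = dict(inputs)
        probed[name] += eps
        delta = model(**probed) - base
        if delta > 0:
            supports.add((name, "output"))
        elif delta < 0:
            attacks.add((name, "output"))
    arguments = set(inputs) | {"output"}
    return arguments, attacks, supports

arguments, attacks, supports = derive_baf(causal_model, {"x1": 1.0, "x2": 1.0})
print(supports)  # {('x1', 'output')}
print(attacks)   # {('x2', 'output')}
```

The resulting triple (arguments, attacks, supports) is the shape of a bipolar AF; the paper's contribution lies in characterising such relations via formal properties of AF semantics rather than simple probing.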



