Argumentative Explanations for Pattern-Based Text Classifiers

Recent works in Explainable AI mostly address the transparency issue of black-box models or create explanations for any kind of model (i.e., they are model-agnostic), while leaving explanations of interpretable models largely underexplored. In this paper, we fill this gap by focusing on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification. We do so because, albeit interpretable, PLR is challenging when it comes to explanations. In particular, we found that a standard way to extract explanations from this model does not consider relations among the features, making the explanations implausible to humans. Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features. Specifically, we use computational argumentation as follows: we see features (patterns) in PLR as arguments in a form of quantified bipolar argumentation frameworks (QBAFs) and extract attacks and supports between arguments based on the specificity of the arguments; we understand logistic regression as a gradual semantics for these QBAFs, used to determine the arguments' dialectical strength; and we study standard properties of gradual semantics for QBAFs in the context of our argumentative re-interpretation of PLR, sanctioning its suitability for explanatory purposes. We then show how to extract intuitive explanations (for outputs computed by PLR) from the constructed QBAFs. Finally, we conduct an empirical evaluation and two experiments in the context of human-AI collaboration to demonstrate the advantages of our resulting AXPLR method.
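To make the idea concrete, the sketch below illustrates, under simplifying assumptions and not as the authors' implementation, how pattern features of a PLR model might be recast as arguments of a QBAF: a more specific pattern supports a more general one when their logistic regression weights agree in sign and attacks it otherwise, and a sigmoid aggregation of weights plays the role of a gradual semantics, echoing the logistic regression view. All names (Argument, build_qbaf, strength), the toy notion of a pattern as a word set, and the sign-based edge rule are illustrative assumptions.

```python
# Illustrative sketch only: patterns from a pattern-based logistic regression
# (PLR) model recast as a quantified bipolar argumentation framework (QBAF).
import math
from dataclasses import dataclass, field

@dataclass
class Argument:
    pattern: frozenset          # toy pattern: the set of words it matches
    weight: float               # LR coefficient of this pattern feature (base score)
    attackers: list = field(default_factory=list)
    supporters: list = field(default_factory=list)

def more_specific(a: Argument, b: Argument) -> bool:
    """a is more specific than b if a's pattern strictly contains b's."""
    return b.pattern < a.pattern

def build_qbaf(arguments):
    """Link arguments by specificity: a more specific pattern supports a more
    general one when their weights agree in sign, and attacks it otherwise
    (an assumed edge rule, for illustration)."""
    for a in arguments:
        for b in arguments:
            if more_specific(a, b):
                (b.supporters if a.weight * b.weight >= 0 else b.attackers).append(a)
    return arguments

def strength(arg: Argument) -> float:
    """Toy gradual semantics: combine the base score with supporter and
    attacker weights, then squash with a sigmoid as in logistic regression."""
    score = (arg.weight
             + sum(s.weight for s in arg.supporters)
             - sum(abs(a.weight) for a in arg.attackers))
    return 1.0 / (1.0 + math.exp(-score))

if __name__ == "__main__":
    args = build_qbaf([
        Argument(frozenset({"not", "good"}), weight=-1.2),  # "not good": negative evidence
        Argument(frozenset({"good"}), weight=0.8),          # "good": positive evidence
    ])
    for a in args:
        print(sorted(a.pattern), round(strength(a), 3))
```

In this toy example, the more specific pattern "not good" attacks the more general pattern "good" because their weights disagree in sign, so the dialectical strength of "good" is pulled down, which is the kind of feature-level agreement and disagreement the explanations aim to surface.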
