Explaining black-box text classifiers for disease-treatment information extraction

10/21/2020
by Milad Moradi, et al.

Deep neural networks and other intricate Artificial Intelligence (AI) models have reached high levels of accuracy on many biomedical natural language processing tasks. However, their applicability in real-world use cases may be limited by their opaque inner workings and decision logic. A post-hoc explanation method can approximate the behavior of a black-box AI model by extracting relationships between feature values and outcomes. In this paper, we introduce a post-hoc explanation method that uses confident itemsets to approximate the behavior of black-box classifiers for medical information extraction. By incorporating medical concepts and semantics into the explanation process, our explanator finds semantic relations between inputs and outputs in different parts of the decision space of a black-box classifier. The experimental results show that our explanation method can outperform perturbation-based and decision-set-based explanators in terms of the fidelity and interpretability of explanations produced for predictions on a disease-treatment information extraction task.
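To make the core idea concrete: a confident itemset is a set of (feature, value) pairs that co-occurs with a particular predicted class at high confidence, so mining such itemsets over a classifier's predictions yields human-readable rules that approximate its decision logic. The sketch below is a minimal, hypothetical Python illustration under that reading of the abstract, not the authors' implementation; the function names (`mine_confident_itemsets`, `explain`), the thresholds, and the toy concept-indicator features are all illustrative assumptions.

```python
from itertools import combinations
from collections import defaultdict

def mine_confident_itemsets(samples, predict, min_conf=0.8, min_support=5, max_size=2):
    """Mine (feature, value) itemsets that co-occur with a predicted class
    at high confidence. `samples` is a list of dicts mapping discrete
    feature names to values; `predict` is the black-box classifier.
    (Hypothetical sketch; not the paper's algorithm.)"""
    labels = [predict(s) for s in samples]
    itemset_counts = defaultdict(int)        # how often each itemset occurs
    itemset_label_counts = defaultdict(int)  # how often it co-occurs with a label
    for sample, label in zip(samples, labels):
        items = sorted(sample.items())
        for size in range(1, max_size + 1):
            for itemset in combinations(items, size):
                itemset_counts[itemset] += 1
                itemset_label_counts[(itemset, label)] += 1
    confident = []
    for (itemset, label), count in itemset_label_counts.items():
        support = itemset_counts[itemset]
        confidence = count / support
        if support >= min_support and confidence >= min_conf:
            confident.append((itemset, label, confidence))
    return confident

def explain(sample, label, confident_itemsets):
    """Return the mined rules 'itemset -> label' that apply to a given
    sample and its black-box prediction, strongest first."""
    matches = [(itemset, conf)
               for itemset, rule_label, conf in confident_itemsets
               if rule_label == label
               and all(sample.get(f) == v for f, v in itemset)]
    return sorted(matches, key=lambda m: -m[1])

# Toy usage: features stand in for medical-concept indicators in a sentence.
samples = [{"has_disease_term": d, "has_drug_term": t}
           for d in (0, 1) for t in (0, 1) for _ in range(10)]
predict = lambda s: "treats" if s["has_disease_term"] and s["has_drug_term"] else "none"
rules = mine_confident_itemsets(samples, predict, min_support=5)
print(explain({"has_disease_term": 1, "has_drug_term": 1}, "treats", rules))
```

In this toy run, the rule {has_disease_term=1, has_drug_term=1} -> treats surfaces with confidence 1.0, which mirrors the kind of semantic input-output relation the abstract describes the explanator extracting from different parts of the decision space.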

Related research

05/05/2020 · Post-hoc explanation of black-box classifiers using confident itemsets
It is difficult to trust decisions made by Black-box Artificial Intellig...

12/20/2020 · Explaining Black-box Models for Biomedical Text Classification
In this paper, we propose a novel method named Biomedical Confident Item...

02/17/2022 · GRAPHSHAP: Motif-based Explanations for Black-box Graph Classifiers
Most methods for explaining black-box classifiers (e.g., on tabular data...

07/05/2021 · Improving a neural network model by explanation-guided training for glioma classification based on MRI data
In recent years, artificial intelligence (AI) systems have come to the f...

05/29/2022 · Unfooling Perturbation-Based Post Hoc Explainers
Monumental advancements in artificial intelligence (AI) have lured the i...

03/13/2023 · Revisiting model self-interpretability in a decision-theoretic way for binary medical image classification
Interpretability is highly desired for deep neural network-based classif...

06/01/2018 · Producing radiologist-quality reports for interpretable artificial intelligence
Current approaches to explaining the decisions of deep learning systems ...
