Ontology-based Interpretable Machine Learning for Textual Data

04/01/2020
by Phung Lai, et al.

In this paper, we introduce a novel interpretation framework that learns an interpretable model, based on an ontology-based sampling technique, to explain black-box (model-agnostic) prediction models. Unlike existing approaches, our algorithm considers contextual correlations among words, as described in domain-knowledge ontologies, to generate semantic explanations. To narrow the search space for explanations, a major challenge with long and complicated text data, we design a learnable anchor algorithm that better extracts explanations locally. A set of rules is further introduced for combining the learned interpretable representations with anchors into comprehensible semantic explanations. Extensive experiments on two real-world datasets show that our approach generates more precise and insightful explanations than baseline approaches.
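To make the idea concrete, here is a minimal, hypothetical sketch of ontology-guided sampling combined with an anchor-style precision check. The toy ontology, the dummy classifier, and all function names below are illustrative assumptions, not the authors' actual code; real anchors are searched over candidate word sets against a genuine black-box model.

```python
import random

# Toy "ontology": each word maps to semantically related words that a
# perturbation may substitute, keeping the context plausible.
ONTOLOGY = {
    "movie": ["film", "picture"],
    "great": ["excellent", "wonderful"],
    "boring": ["dull", "tedious"],
}

def ontology_perturb(tokens, rng):
    """Replace each word with an ontology neighbor with probability 0.5."""
    return [
        rng.choice(ONTOLOGY[t]) if t in ONTOLOGY and rng.random() < 0.5 else t
        for t in tokens
    ]

def anchor_precision(tokens, anchor, classify, n_samples=200, seed=0):
    """Estimate how often the prediction stays unchanged on perturbed
    samples in which the anchor words are held fixed."""
    rng = random.Random(seed)
    target = classify(tokens)
    hits = 0
    for _ in range(n_samples):
        sample = ontology_perturb(tokens, rng)
        # Words in the anchor are never perturbed.
        sample = [t if t in anchor else s for t, s in zip(tokens, sample)]
        hits += classify(sample) == target
    return hits / n_samples

# Dummy classifier standing in for the black-box model: it only
# recognizes the literal word "great" as positive.
classify = lambda toks: "great" in toks

text = "the movie was great".split()
print(anchor_precision(text, anchor={"great"}, classify=classify))  # 1.0
print(anchor_precision(text, anchor=set(), classify=classify) < 1.0)  # True
```

Fixing "great" as the anchor yields perfect precision, while leaving it free to be perturbed lets ontology substitutions flip the dummy prediction, which is exactly the signal an anchor search exploits.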


