A Concept and Argumentation based Interpretable Model in High Risk Domains

08/17/2022
by Haixiao Chi, et al.

Interpretability has become an essential topic for artificial intelligence in high-risk domains such as healthcare, banking, and security. For commonly used tabular data, traditional methods train end-to-end machine learning models on numerical and categorical data only and do not leverage human-understandable knowledge such as data descriptions. Yet mining human-level knowledge from tabular data and using it for prediction remains a challenge. Therefore, we propose a concept and argumentation based model (CAM) that includes two components: a novel concept mining method that obtains human-understandable concepts and their relations from both feature descriptions and the underlying data, and a quantitative argumentation-based method for knowledge representation and reasoning. As a result, CAM makes decisions based on human-level knowledge, and its reasoning process is intrinsically interpretable. Finally, to visualize the proposed interpretable model, we provide a dialogical explanation that contains the dominant reasoning path within CAM. Experimental results on both an open-source benchmark dataset and a real-world business dataset show that (1) CAM is transparent and interpretable, and the knowledge inside it is coherent with human understanding; and (2) our interpretable approach achieves results competitive with other state-of-the-art models.
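The quantitative argumentation component can be pictured with a small sketch. The following is not the authors' CAM implementation; it is a generic quantitative bipolar argumentation evaluation in the style of DF-QuAD semantics, where arguments carry base scores and attack or support each other. All argument names, base scores, and relations below are hypothetical, invented purely for illustration.

```python
# A minimal sketch (not the paper's code) of quantitative bipolar
# argumentation with DF-QuAD-style strength propagation.

from dataclasses import dataclass, field

@dataclass
class Argument:
    name: str
    base_score: float            # initial strength in [0, 1]
    attackers: list = field(default_factory=list)
    supporters: list = field(default_factory=list)

def aggregate(scores):
    """Combine child strengths via probabilistic sum: 1 - prod(1 - s)."""
    acc = 0.0
    for s in scores:
        acc = acc + s - acc * s
    return acc

def strength(arg):
    """Evaluate an argument's strength on an acyclic argumentation graph."""
    va = aggregate(strength(a) for a in arg.attackers)
    vs = aggregate(strength(s) for s in arg.supporters)
    b = arg.base_score
    if vs >= va:                 # net support raises strength toward 1
        return b + (1 - b) * (vs - va)
    return b - b * (va - vs)     # net attack lowers strength toward 0

# Hypothetical toy graph: a decision argument supported by one mined
# concept and attacked by another.
concept_pro = Argument("income_stable", base_score=0.7)
concept_con = Argument("high_debt_ratio", base_score=0.4)
decision = Argument("approve_loan", base_score=0.5,
                    supporters=[concept_pro], attackers=[concept_con])

print(f"decision strength: {strength(decision):.3f}")  # 0.650
```

In a setup like the one the abstract describes, concepts mined from feature descriptions would play the role of the leaf arguments here, and the propagated strength of the decision argument would yield both the prediction and, by tracing the strongest attack/support chain, the dominant reasoning path used in the dialogical explanation.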


