On Interpretability and Similarity in Concept-Based Machine Learning

02/25/2021
by Leonard Kwuida, et al.

Machine Learning (ML) provides important techniques for classification and prediction. Most of these are black-box models for users and do not provide decision-makers with an explanation. For the sake of transparency and the validity of decisions, the need to develop explainable/interpretable ML methods is becoming increasingly important. Several questions need to be addressed: How does an ML procedure derive the class for a particular entity? Why does a particular clustering emerge from a particular unsupervised ML procedure? What can we do if the number of attributes is very large? What are the possible reasons for mistakes in concrete cases and models? For binary attributes, Formal Concept Analysis (FCA) offers techniques in terms of intents of formal concepts, and thus provides plausible reasons for model predictions. However, from the interpretable machine learning viewpoint, we still need to provide decision-makers with the importance of individual attributes to the classification of a particular object, which may facilitate explanations by experts in domains with high error costs, such as medicine or finance. We discuss how notions from cooperative game theory can be used to assess the contribution of individual attributes to classification and clustering processes in concept-based machine learning. To address the third question, the large number of attributes, we present some ideas on how to reduce the number of attributes using similarities in large contexts.
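Two ingredients of the abstract can be made concrete in a few lines of Python. The sketch below builds a toy formal context, uses the FCA derivation operators to read off an intent (the shared attributes that serve as a plausible reason for grouping objects), and then computes exact Shapley values, the standard cooperative-game notion of an attribute's contribution, for a simple characteristic function. Everything here (the context CONTEXT, the labels LABELS, and the accuracy-based game v) is an illustrative assumption, not the paper's construction.

```python
from itertools import combinations
from math import factorial

# Toy formal context (hypothetical, not taken from the paper):
# each object is mapped to the set of binary attributes it has,
# plus a class label for the game-theoretic part.
ATTRS = ("a", "b", "c")
CONTEXT = {
    "g1": {"a", "c"},
    "g2": {"a", "b"},
    "g3": {"b", "c"},
    "g4": set(),
}
LABELS = {"g1": 1, "g2": 1, "g3": 0, "g4": 0}


def intent(objects):
    """Derivation operator A -> A': attributes shared by all objects in A."""
    sets = [CONTEXT[g] for g in objects]
    return set(ATTRS).intersection(*sets) if sets else set(ATTRS)


def extent(attributes):
    """Derivation operator B -> B': objects having all attributes in B."""
    return {g for g, has in CONTEXT.items() if set(attributes) <= has}


def v(subset):
    """Characteristic function of a cooperative game over attributes:
    accuracy of a majority-vote rule that only sees the attributes in
    `subset`. This particular v is an illustrative choice, not the
    paper's construction."""
    subset = tuple(subset)
    labels = list(LABELS.values())
    if not subset:  # no attributes visible: predict the majority class
        return max(labels.count(0), labels.count(1)) / len(labels)
    correct = 0
    for g in CONTEXT:
        key = tuple(a in CONTEXT[g] for a in subset)
        votes = [LABELS[h] for h in CONTEXT
                 if tuple(a in CONTEXT[h] for a in subset) == key]
        pred = 1 if votes.count(1) >= votes.count(0) else 0  # ties -> 1
        correct += pred == LABELS[g]
    return correct / len(CONTEXT)


def shapley(attr):
    """Exact Shapley value of `attr`: its marginal contribution to v,
    averaged over all orderings of the attributes."""
    others = [a for a in ATTRS if a != attr]
    n, total = len(ATTRS), 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v(S + (attr,)) - v(S))
    return total


# Intent of {g1, g2}: the shared attributes, a plausible "reason"
# why these objects fall in the same class.
print(intent({"g1", "g2"}))                 # {'a'}
print(extent({"a"}))                        # {'g1', 'g2'}
for a in ATTRS:
    print(f"Shapley value of {a}: {shapley(a):.3f}")
```

Note that exact Shapley computation enumerates all subsets of the remaining attributes, so its cost grows exponentially with the number of attributes; this is one practical motivation for the abstract's third question, reducing the number of attributes via similarity.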



Related Research

04/01/2021
Coalitional strategies for efficient individual prediction explanation
As Machine Learning (ML) is now widely applied in many domains, in both ...

03/01/2020
An Information-Theoretic Approach to Explainable Machine Learning
A key obstacle to the successful deployment of machine learning (ML) met...

12/18/2021
Does Explainable Machine Learning Uncover the Black Box in Vision Applications?
Machine learning (ML) in general and deep learning (DL) in particular ha...

05/30/2020
Explanations of Black-Box Model Predictions by Contextual Importance and Utility
The significant advances in autonomous systems together with an immensel...

11/27/2018
What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems
Recent efforts in Machine Learning (ML) interpretability have focused on...

01/24/2018
Intrinsic dimension of concept lattices
Geometric analysis is a very capable theory to understand the influence ...

11/27/2020
Discriminatory Expressions to Produce Interpretable Models in Microblogging Context
Social Networking Sites (SNS) are one of the most important ways of comm...