On Interpretability and Similarity in Concept-Based Machine Learning

by Leonard Kwuida, et al.

Machine Learning (ML) provides important techniques for classification and prediction. Most of these are black-box models from the user's perspective and do not provide decision-makers with an explanation. For the sake of transparency and greater validity of decisions, the development of explainable and interpretable ML methods is gaining importance. Certain questions need to be addressed: How does an ML procedure derive the class of a particular entity? Why does a particular clustering emerge from a particular unsupervised ML procedure? What can we do if the number of attributes is very large? What are the possible reasons for mistakes in concrete cases and models? For binary attributes, Formal Concept Analysis (FCA) offers techniques in terms of intents of formal concepts and thus provides plausible reasons for model predictions. However, from the viewpoint of interpretable machine learning, we still need to provide decision-makers with the importance of individual attributes for the classification of a particular object; this may facilitate explanations by experts in domains with high-cost errors, such as medicine or finance. We discuss how notions from cooperative game theory can be used to assess the contribution of individual attributes to classification and clustering in concept-based machine learning. To address the third question, we present some ideas on how to reduce the number of attributes in large contexts by means of similarity.
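The game-theoretic assessment of attribute contributions mentioned in the abstract can be sketched with an exact Shapley-value computation over a small formal context. Everything below (the toy context, the target class, and the precision-based characteristic function `v`) is an illustrative assumption, not the paper's actual construction:

```python
from itertools import combinations
from math import factorial

# Toy formal context: each object maps to its set of binary attributes.
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"b", "c"},
    "o4": {"a", "b", "c"},
}
positive = {"o1", "o4"}  # hypothetical target class

def extent(attrs):
    """Objects possessing every attribute in attrs (the extent of attrs)."""
    return {o for o, a in context.items() if attrs <= a}

def v(attrs):
    """Characteristic function: precision of the extent w.r.t. the class."""
    ext = extent(set(attrs))
    return len(ext & positive) / len(ext) if ext else 0.0

def shapley(attribute, all_attrs):
    """Exact Shapley value of one attribute, enumerating all coalitions."""
    others = all_attrs - {attribute}
    n = len(all_attrs)
    total = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v(set(coalition) | {attribute}) - v(coalition))
    return total

attrs = {"a", "b", "c"}
importances = {a: shapley(a, attrs) for a in sorted(attrs)}
```

By the efficiency axiom, the values in `importances` sum to `v(attrs) - v(set())`; here attribute `c` receives a negative contribution, since adding it dilutes the precision of the extent. Exact enumeration is exponential in the number of attributes, which is why attribute reduction (the third question above) matters in practice.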






