On Interpretability and Similarity in Concept-Based Machine Learning

02/25/2021
by Leonard Kwuida, et al.

Machine Learning (ML) provides important techniques for classification and prediction. Most of these act as black boxes for their users and offer decision-makers no explanation. For the sake of transparency and the validity of decisions, the development of explainable/interpretable ML methods is becoming increasingly important. Several questions need to be addressed: How does an ML procedure derive the class for a particular entity? Why does a particular clustering emerge from a particular unsupervised ML procedure? What can we do if the number of attributes is very large? What are the possible reasons for the mistakes made on concrete cases and by concrete models? For binary attributes, Formal Concept Analysis (FCA) offers techniques in terms of intents of formal concepts and thus provides plausible reasons for model predictions. From the interpretable machine learning viewpoint, however, we still need to quantify the importance of individual attributes for the classification of a particular object, which may facilitate explanations by experts in domains with high-cost errors such as medicine or finance. We discuss how notions from cooperative game theory can be used to assess the contribution of individual attributes to classification and clustering in concept-based machine learning. To address the third question, we present some ideas on how to reduce the number of attributes in large contexts by means of similarities.
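To make two of these ingredients concrete, here is a minimal Python sketch (illustrative only, not code from the paper) that applies the FCA derivation operators to a toy binary context and computes exact Shapley values, the classic cooperative-game measure of attribute contribution. The context, the positive class and the characteristic function v are hypothetical choices made for the example.

from itertools import combinations
from math import factorial

# Toy formal context: objects x binary attributes (hypothetical example data).
objects = ["o1", "o2", "o3", "o4"]
attributes = ["a", "b", "c"]
incidence = {
    "o1": {"a", "b"},
    "o2": {"a", "b", "c"},
    "o3": {"b", "c"},
    "o4": {"c"},
}

def intent(objs):
    # Derivation operator A': attributes shared by all objects in objs.
    common = set(attributes)
    for o in objs:
        common &= incidence[o]
    return common

def extent(attrs):
    # Derivation operator B': objects possessing every attribute in attrs.
    return {o for o in objects if attrs <= incidence[o]}

# Characteristic function v(S) of a cooperative game on attributes:
# precision of the attribute subset S for a hypothetical positive class
# {o1, o2}, i.e. the fraction of positives among objects matching all of S.
positives = {"o1", "o2"}

def v(attrs):
    if not attrs:
        return 0.0
    ext = extent(set(attrs))
    return len(ext & positives) / len(ext) if ext else 0.0

def shapley(attr):
    # Exact Shapley value of one attribute (exponential in |attributes|).
    others = [a for a in attributes if a != attr]
    n = len(attributes)
    value = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            value += weight * (v(set(coalition) | {attr}) - v(set(coalition)))
    return value

print("intent({o1, o2}) =", intent({"o1", "o2"}))
for a in attributes:
    print(f"Shapley({a}) = {shapley(a):.3f}")

On this toy context the intent of {o1, o2} is {a, b}, and attribute a receives the largest Shapley value because its extent coincides with the positive class. The same scheme works for any quality measure v, though the exact computation scales exponentially in the number of attributes and is usually approximated in practice.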


Related research

04/01/2021
Coalitional strategies for efficient individual prediction explanation
As Machine Learning (ML) is now widely applied in many domains, in both ...

03/01/2020
An Information-Theoretic Approach to Explainable Machine Learning
A key obstacle to the successful deployment of machine learning (ML) met...

05/30/2020
Explanations of Black-Box Model Predictions by Contextual Importance and Utility
The significant advances in autonomous systems together with an immensel...

04/19/2022
GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints
The number of information systems (IS) studies dealing with explainable ...

11/27/2018
What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems
Recent efforts in Machine Learning (ML) interpretability have focused on...

11/27/2020
Discriminatory Expressions to Produce Interpretable Models in Microblogging Context
Social Networking Sites (SNS) are one of the most important ways of comm...

11/03/2022
A k-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
Besides accuracy, recent studies on machine learning models have been ad...
