
Reconnoitering the class distinguishing abilities of the features, to know them better

11/23/2022
by Payel Sadhukhan, et al.

The relevance of machine learning (ML) in our daily lives is closely intertwined with its explainability. Explainability allows end-users to form a transparent, human-centred assessment of an ML scheme's capability and utility, and it fosters confidence in a system's automated decisions. Explaining a model's decisions in terms of its variables or features is therefore a pressing need. To the best of our knowledge, no existing work explains features on the basis of their class-distinguishing abilities, even though real-world data are mostly multi-class in nature. In any given dataset, a feature is not equally good at distinguishing every pair of classes. In this work, we explain features on the basis of their class-distinguishing (category-distinguishing) capabilities: specifically, we estimate a class-distinguishing score for each variable and each pairwise class combination. We validate the resulting explanations empirically on several real-world, multi-class datasets. We further use these class-distinguishing scores in a latent-feature setting and propose a novel decision-making protocol. A further novelty of this work is a refuse-to-render-decision option, invoked when the latent variable of the test point has a high class-distinguishing potential for the likely classes.
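The abstract does not specify how the pairwise class-distinguishing scores are computed, so the sketch below is only an illustration of the general idea, not the authors' method: for every pair of classes, it scores each feature by how well that single feature alone separates the two classes, using one-feature ROC AUC (rescaled to [0, 1]) as an assumed stand-in for the paper's score. The dataset, function name, and scoring choice are all placeholders.

```python
# Illustrative sketch only (not the paper's estimator): score each feature's
# ability to separate every pair of classes via single-feature ROC AUC.
from itertools import combinations

import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import roc_auc_score


def pairwise_feature_scores(X, y):
    """Return {(class_a, class_b): per-feature separability scores in [0, 1]}."""
    scores = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, (a, b))
        X_pair, y_pair = X[mask], (y[mask] == b).astype(int)
        # AUC near 0.5 means the feature cannot tell classes a and b apart;
        # values near 0 or 1 mean strong separation (direction is ignored).
        auc = np.array([roc_auc_score(y_pair, X_pair[:, j])
                        for j in range(X.shape[1])])
        scores[(a, b)] = np.abs(auc - 0.5) * 2  # rescale to [0, 1]
    return scores


if __name__ == "__main__":
    data = load_iris()
    for pair, s in pairwise_feature_scores(data.data, data.target).items():
        print(pair, np.round(s, 2))
```

In the same spirit, such pairwise scores could feed a reject option by withholding a prediction whenever the score pattern for a test point's likely classes meets some threshold; the paper's actual decision-making protocol and latent-feature construction are not reproduced here.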

