Algorithm-Agnostic Explainability for Unsupervised Clustering

05/17/2021
by Charles A. Ellis, et al.

Explainability for supervised machine learning has expanded greatly in recent years; explainability for unsupervised clustering, however, has lagged behind. Here we demonstrate, to the best of our knowledge for the first time, how model-agnostic explainability methods from supervised machine learning can be adapted to provide algorithm-agnostic explainability for unsupervised clustering. We present two novel algorithm-agnostic methods: global permutation percent change (G2PC) feature importance and local perturbation percent change (L2PC) feature importance. G2PC provides global insight by identifying the relative importance of features to a clustering algorithm as a whole, while L2PC provides local insight by identifying the relative importance of features to the clustering of individual samples. We demonstrate the utility of both methods for explaining five popular clustering algorithms, first on low-dimensional synthetic datasets with ground-truth cluster labels and then on high-dimensional functional network connectivity (FNC) data extracted from a resting-state functional magnetic resonance imaging (rs-fMRI) dataset of 151 subjects with schizophrenia (SZ) and 160 healthy controls (HC). The proposed methods robustly identify the relative importance of features across multiple clustering algorithms and could facilitate new insights in many applications. We hope this study will accelerate the development of the field of clustering explainability.
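The core idea behind both methods translates readily into code. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a fitted clustering model exposing a scikit-learn-style predict method (e.g., KMeans), and the names g2pc_importance and l2pc_importance are ours, introduced for illustration. G2PC permutes one feature at a time across all samples and reports the average percentage of samples whose cluster assignment changes; L2PC applies the analogous perturbation to a single sample, drawing replacement values for one feature from other samples.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs


def g2pc_importance(model, X, n_repeats=30, rng=None):
    # Global permutation percent change (G2PC), sketched: permute each
    # feature across all samples and measure the mean percentage of
    # samples whose cluster assignment changes.
    rng = np.random.default_rng(rng)
    baseline = model.predict(X)
    n_samples, n_features = X.shape
    importance = np.zeros(n_features)
    for j in range(n_features):
        changes = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            changes.append(np.mean(model.predict(Xp) != baseline))
        importance[j] = 100.0 * np.mean(changes)  # percent of samples reassigned
    return importance


def l2pc_importance(model, X, idx, n_repeats=30, rng=None):
    # Local perturbation percent change (L2PC), sketched: for sample idx,
    # repeatedly replace one feature's value with that feature's value from
    # a randomly chosen other sample and measure how often the sample's
    # cluster assignment changes.
    rng = np.random.default_rng(rng)
    base = model.predict(X[idx : idx + 1])[0]
    n_samples, n_features = X.shape
    importance = np.zeros(n_features)
    for j in range(n_features):
        changed = 0
        for _ in range(n_repeats):
            x = X[idx].copy()
            donor = rng.integers(n_samples)  # draw a replacement value
            x[j] = X[donor, j]
            changed += int(model.predict(x[None, :])[0] != base)
        importance[j] = 100.0 * changed / n_repeats
    return importance


# Toy usage: two informative features plus one pure-noise feature.
X, _ = make_blobs(n_samples=300, centers=3, n_features=2, random_state=0)
X = np.hstack([X, np.random.default_rng(0).normal(size=(300, 1))])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(g2pc_importance(km, X, rng=0))       # noise feature should score near 0
print(l2pc_importance(km, X, idx=0, rng=0))

Note that this sketch leans on the model's predict method; clustering algorithms without one (e.g., agglomerative clustering) would need an explicit reassignment rule for perturbed samples, a design choice the abstract does not specify and this sketch does not cover.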


