Algorithm-Agnostic Explainability for Unsupervised Clustering

05/17/2021
by Charles A. Ellis, et al.

Supervised machine learning explainability has greatly expanded in recent years. However, the field of unsupervised clustering explainability has lagged behind. Here, to the best of our knowledge, we demonstrate for the first time how model-agnostic methods for supervised machine learning explainability can be adapted to provide algorithm-agnostic unsupervised clustering explainability. We present two novel algorithm-agnostic explainability methods, global permutation percent change (G2PC) feature importance and local perturbation percent change (L2PC) feature importance, that can provide insight into many clustering methods: on a global level, by identifying the relative importance of features to a clustering algorithm, and on a local level, by identifying the relative importance of features to the clustering of individual samples. We demonstrate the utility of the methods for explaining five popular clustering algorithms on low-dimensional, ground-truth synthetic datasets and on high-dimensional functional network connectivity (FNC) data extracted from a resting-state functional magnetic resonance imaging (rs-fMRI) dataset of 151 subjects with schizophrenia (SZ) and 160 healthy controls (HC). Our proposed explainability methods robustly identify the relative importance of features across multiple clustering methods and could facilitate new insights into many applications. We hope that this study will greatly accelerate the development of the field of clustering explainability.
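To make the idea of permutation-based clustering explainability concrete, the following is a minimal sketch of a G2PC-style global feature importance measure, not the authors' implementation: a clustering model is fit once, each feature is permuted across samples in turn, and importance is scored as the percentage of samples whose cluster assignment changes. The dataset, k-means choice, and `n_repeats` value are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic ground-truth data (illustrative stand-in for the paper's datasets)
X, _ = make_blobs(n_samples=300, centers=3, n_features=4, random_state=0)

# Fit a clustering model once and record the baseline assignments
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
base = km.predict(X)

rng = np.random.default_rng(0)
n_repeats = 30  # number of permutations per feature (assumed value)
importance = np.zeros(X.shape[1])

for j in range(X.shape[1]):
    changed = []
    for _ in range(n_repeats):
        Xp = X.copy()
        # Permute feature j across samples, breaking its association
        # with the other features
        Xp[:, j] = rng.permutation(Xp[:, j])
        # Percent of samples whose cluster assignment changed
        changed.append(np.mean(km.predict(Xp) != base))
    importance[j] = 100.0 * np.mean(changed)

print(importance)  # higher percent change => more important feature
```

An L2PC-style local analog would perturb a single sample's feature value (e.g., by substituting values drawn from other samples) and track how often that one sample's assignment changes, yielding per-sample importance. Because both scores need only a way to assign samples to clusters, the same scheme applies to any clustering algorithm that can produce assignments for perturbed data.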
