Selecting the Number of Clusters K with a Stability Trade-off: an Internal Validation Criterion

06/15/2020
by Alex Mourer, et al.

Model selection is a major challenge in non-parametric clustering. There is no universally accepted way to evaluate clustering results, for the obvious reason that there is no ground truth against which results could be tested, as in supervised learning. The difficulty of finding a universal evaluation criterion is a direct consequence of the fundamentally ill-defined objective of clustering. From this perspective, clustering stability has emerged as a natural and model-agnostic principle: an algorithm should find stable structures in the data. If data sets are repeatedly sampled from the same underlying distribution, an algorithm should find similar partitions. However, it turns out that stability alone is not a well-suited tool for determining the number of clusters; for instance, it is unable to detect whether the number of clusters is too small. We propose a new principle for clustering validation: a good clustering should be stable, and within each cluster, there should exist no stable partition. This principle leads to a novel internal clustering validity criterion based on between-cluster and within-cluster stability, overcoming limitations of previous stability-based methods. We empirically show the superior ability of additive noise to discover structures, compared with sampling-based perturbation. We demonstrate the effectiveness of our method for selecting the number of clusters through a large number of experiments and compare it with existing evaluation methods.
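To make the stability principle concrete, here is a minimal sketch of sampling-based stability for a candidate number of clusters K — the kind of baseline perturbation scheme the abstract contrasts with additive noise. This is not the paper's criterion (which adds within-cluster stability); the function name `stability_score` and its parameters are illustrative choices, using scikit-learn's KMeans and the adjusted Rand index to compare partitions of overlapping subsamples.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score


def stability_score(X, k, n_pairs=10, subsample=0.8, rng=None):
    """Average agreement (adjusted Rand index) between k-clusterings
    of pairs of random subsamples of X.

    Higher values mean the k-cluster structure is more stable under
    sampling-based perturbation.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    m = int(subsample * n)
    scores = []
    for _ in range(n_pairs):
        # Draw two overlapping subsamples and cluster each independently.
        idx_a = rng.choice(n, size=m, replace=False)
        idx_b = rng.choice(n, size=m, replace=False)
        km_a = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx_a])
        km_b = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx_b])
        # Compare the two partitions on the points both subsamples share;
        # ARI is invariant to cluster-label permutations.
        common = np.intersect1d(idx_a, idx_b)
        labels_a = km_a.predict(X[common])
        labels_b = km_b.predict(X[common])
        scores.append(adjusted_rand_score(labels_a, labels_b))
    return float(np.mean(scores))
```

On well-separated data this score is close to 1 at the true K, but — as the abstract points out — it can also be high when K is too small, since merged clusters are rediscovered consistently; that failure mode is what the proposed within-cluster stability term is designed to catch.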
