Reclassification formula that provides to surpass K-means method

09/27/2012
by M. Kharinov, et al.

The paper presents a formula for the reclassification of multidimensional data points (columns of real numbers; "objects", "vectors", etc.). The formula describes the change in the total squared error caused by reclassifying data points from one cluster to another, and it suggests a way to compute a sequence of optimal partitions characterized by a minimum value of the total squared error E (the weighted sum of within-class variances, the within-cluster sum of squares WCSS, etc.), i.e., the sum of squared distances from each data point to its cluster center. Source data points are treated with repetitions allowed, and the resulting clusters from different partitions may, in general, overlap one another. The final partitions are characterized by "equilibrium" stability with respect to the reclassification of data points, where "stability" means that no prescribed reclassification of data points increases the total squared error E. Notably, the conventional K-means method, in the general case, produces unstable partitions with overstated values of the total squared error E. The proposed method, based on the reclassification formula, is more efficient than K-means because it converts any partition into a stable one and reclassifies whole sets of data points at once, in contrast to the one-point-at-a-time classification of the K-means method.
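As a rough illustration of the kind of quantity the abstract describes, the sketch below computes the total squared error E and the exact change in E caused by moving a single point between clusters, using the standard Hartigan-type delta that accounts for the shift of both cluster centers. This is an assumption for illustration only: the paper's own reclassification formula (which handles sets of points) is not reproduced here, and the function names are hypothetical.

```python
import numpy as np

def wcss(points, labels, centers):
    """Total squared error E: the sum of squared distances
    from each data point to its cluster center."""
    return sum(float(np.sum((points[labels == k] - c) ** 2))
               for k, c in enumerate(centers))

def reclassification_delta(x, c_a, n_a, c_b, n_b):
    """Exact change in E when moving point x from cluster A
    (center c_a, size n_a) to cluster B (center c_b, size n_b).
    The factors n/(n±1) account for both centers moving after
    the reclassification; a negative result means E decreases."""
    gain = n_b / (n_b + 1) * float(np.sum((x - c_b) ** 2))
    loss = n_a / (n_a - 1) * float(np.sum((x - c_a) ** 2))
    return gain - loss
```

In this single-point form, a partition is "stable" in the abstract's sense when `reclassification_delta` is non-negative for every point and every target cluster, so no move can lower E; plain K-means, which reassigns points to the nearest current center without the n/(n±1) correction, can terminate at partitions that fail this test.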

