Measuring inter-cluster similarities with Alpha Shape TRIangulation in loCal Subspaces (ASTRICS) facilitates visualization and clustering of high-dimensional data

07/15/2021
by   Joshua M. Scurll, et al.
Clustering and visualizing high-dimensional (HD) data are important tasks in a variety of fields. For example, in bioinformatics, they are crucial for analyses of single-cell data such as mass cytometry (CyTOF) data. Some of the most effective algorithms for clustering HD data are based on representing the data by nodes in a graph, with edges connecting neighbouring nodes according to some measure of similarity or distance. However, users of graph-based algorithms are typically faced with the critical but challenging task of choosing the value of an input parameter that sets the size of neighbourhoods in the graph, e.g. the number of nearest neighbours to which to connect each node or a threshold distance for connecting nodes. The burden on the user could be alleviated by a measure of inter-node similarity that can have value 0 for dissimilar nodes without requiring any user-defined parameters or thresholds. This would determine the neighbourhoods automatically while still yielding a sparse graph. To this end, I propose a new method called ASTRICS to measure similarity between clusters of HD data points based on local dimensionality reduction and triangulation of critical alpha shapes. I show that my ASTRICS similarity measure can facilitate both clustering and visualization of HD data by using it in Stage 2 of a three-stage pipeline: Stage 1 = perform an initial clustering of the data by any method; Stage 2 = let graph nodes represent initial clusters instead of individual data points and use ASTRICS to automatically define edges between nodes; Stage 3 = use the graph for further clustering and visualization. This trades the critical task of choosing a graph neighbourhood size for the easier task of essentially choosing a resolution at which to view the data. The graph and consequently downstream clustering and visualization are then automatically adapted to the chosen resolution.
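The three-stage pipeline can be sketched in ordinary Python. Note that the similarity function below is a deliberately simple stand-in, not ASTRICS itself (the paper's measure uses local dimensionality reduction and alpha-shape triangulation): it only illustrates the key property the abstract relies on, namely a similarity that can be exactly 0 for dissimilar clusters, which makes the Stage 2 graph sparse without any user-chosen neighbourhood size. The cluster data, the `stand_in_similarity` cutoff, and the component-merging step are all illustrative assumptions.

```python
# Hedged sketch of the three-stage pipeline from the abstract.
# stand_in_similarity is NOT ASTRICS; it is a placeholder that, like
# ASTRICS, can return exactly 0 for dissimilar clusters, so the graph
# over clusters is sparse with no user-defined neighbourhood parameter.

from itertools import combinations
from math import dist

def centroid(points):
    """Mean position of a cluster's points."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

def stand_in_similarity(a, b, scale=1.0):
    """Placeholder similarity: decays with centroid distance and is
    exactly 0 beyond a fixed multiple of `scale` (illustrative only)."""
    d = dist(centroid(a), centroid(b))
    return max(0.0, 1.0 - d / (4.0 * scale))

def build_cluster_graph(clusters, scale=1.0):
    """Stage 2: nodes are the initial clusters; an edge exists only
    where the similarity is strictly positive."""
    edges = {}
    for i, j in combinations(range(len(clusters)), 2):
        s = stand_in_similarity(clusters[i], clusters[j], scale)
        if s > 0.0:
            edges[(i, j)] = s
    return edges

def merge_components(n_clusters, edges):
    """Stage 3 (clustering side): merge clusters connected in the
    sparse graph, via a small union-find."""
    parent = list(range(n_clusters))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n_clusters)]

# Stage 1: assume some initial clustering (any method) produced these
# three groups of 2-D points; two are near each other, one is remote.
clusters = [
    [(0.0, 0.0), (1.0, 0.0)],    # cluster 0
    [(2.0, 0.0), (3.0, 0.0)],    # cluster 1, near cluster 0
    [(20.0, 0.0), (21.0, 0.0)],  # cluster 2, far from both
]
edges = build_cluster_graph(clusters, scale=1.0)
labels = merge_components(len(clusters), edges)
```

Because the remote cluster's similarity to the other two is exactly 0, it gets no edges, so the graph stays sparse and the final merge keeps it separate while fusing the two nearby clusters.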

