From Clustering to Cluster Explanations via Neural Networks

by Jacob Kauffmann et al.

A wealth of algorithms have been developed to extract natural cluster structure in data. Identifying this structure is desirable but not always sufficient: we may also want to understand why the data points have been assigned to a given cluster. Clustering algorithms do not offer a systematic answer to this simple question. Hence we propose a new framework that can, for the first time, explain cluster assignments in terms of input features in a comprehensive manner. It is based on the novel theoretical insight that clustering models can be rewritten as neural networks, or 'neuralized'. Predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
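To make the "neuralization" idea concrete, here is a minimal sketch for the k-means case. It rests on a standard observation: a point belongs to cluster c exactly when ||x - mu_c||^2 < ||x - mu_k||^2 for every other k, and each such pairwise comparison is linear in x, so the cluster evidence becomes a min-pooling over linear units — a small two-layer network. All function names below are illustrative assumptions, and the gradient-times-input attribution is a simple stand-in for the more refined relevance-propagation rules the paper develops.

```python
import numpy as np

def neuralize_kmeans(centroids):
    """Rewrite k-means assignment as linear units + min-pooling.

    For cluster c, the comparison against competitor k,
        ||x - mu_k||^2 - ||x - mu_c||^2 = w_ck^T x + b_ck,
    is linear in x with w_ck = 2 (mu_c - mu_k) and
    b_ck = ||mu_k||^2 - ||mu_c||^2.  Evidence for c is the min over k != c.
    """
    K, _ = centroids.shape
    layers = []
    for c in range(K):
        others = [k for k in range(K) if k != c]
        W = 2.0 * (centroids[c] - centroids[others])              # (K-1, d)
        b = (centroids[others] ** 2).sum(1) - (centroids[c] ** 2).sum()
        layers.append((W, b))
    return layers

def cluster_evidence(layers, x):
    """f_c(x) = min_k (w_ck^T x + b_ck); positive iff x lies in cluster c."""
    return np.array([(W @ x + b).min() for W, b in layers])

def attribute(layers, x, c):
    """Toy gradient-times-input attribution (an assumption, not the
    paper's rule): the active (minimal) linear unit sets the gradient."""
    W, b = layers[c]
    k = np.argmin(W @ x + b)      # the nearest competing cluster
    return W[k] * x               # per-feature contribution to the evidence

# Example: three centroids in 2-D
mu = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
net = neuralize_kmeans(mu)
x = np.array([3.5, 0.5])
f = cluster_evidence(net, x)
print(f.argmax())                 # index of the assigned cluster
print(attribute(net, x, f.argmax()))
```

Note that the forward pass reproduces the hard k-means assignment exactly (the evidence is positive only for the winning cluster), which is what makes the attribution of the assignment to individual input features well defined.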


Deep Amortized Clustering

We propose a deep amortized clustering (DAC), a neural architecture whic...

Clustering in Partially Labeled Stochastic Block Models via Total Variation Minimization

A main task in data analysis is to organize data points into coherent gr...

Simplifying Clustering with Graph Neural Networks

The objective functions used in spectral clustering are usually composed...

Information Elicitation Meets Clustering

In the setting where we want to aggregate people's subjective evaluation...

Agglomerative Bregman Clustering

This manuscript develops the theory of agglomerative clustering with Bre...

Algorithms for finding k in k-means

k-means clustering requires as input the exact value of k, the number of...

Deep Image Clustering with Tensor Kernels and Unsupervised Companion Objectives

In this paper we develop a new model for deep image clustering, using co...