Links: A High-Dimensional Online Clustering Method

01/30/2018 ∙ by Philip Andrew Mansfield, et al.

We present a novel algorithm, called Links, designed to perform online clustering on unit vectors in a high-dimensional Euclidean space. The algorithm is appropriate when it is necessary to cluster data efficiently as it streams in, and is to be contrasted with traditional batch clustering algorithms that have access to all data at once. For example, Links has been successfully applied to embedding vectors generated from face images or voice recordings for the purpose of recognizing people, thereby providing real-time identification during video or audio capture.


1 Introduction

Although a wide selection of clustering methods is available [1, 2], most of them assume concurrent access to all data being clustered. Our interest is in efficiently clustering each datum as it becomes available, for applications that require unsupervised learning in real time.

The Links approach is to estimate the probability distribution of each cluster based on its current constituent vectors, to use those estimates to assign new vectors to clusters, and to update the estimated distributions with each added vector. The update step includes revising past cluster assignments when the additional data warrants it, although this primarily serves to improve the internal model over time: in typical online usage, each cluster assignment is reported once, at the time a new vector becomes available.

Prior work [3] addressing online clustering of unit vectors employs a small-variance approximation and is applied to low-dimensional problems such as segmentation of surface normals in 3D. Our approach is complementary in that it uses a high-dimensional approximation, and has been applied to problems with relatively high variance.

Links has been used to cluster CNN-based FaceNet embeddings [4] and LSTM-based voice embeddings [5]. The results of the latter experiment are presented in a separate paper [6]. The current paper focuses on the technical details of the algorithm.

2 Model

2.1 Generative model for a cluster

Let \{x_i\} be a set of unit-length vectors in \mathbb{R}^D. They are confined to the submanifold S^{D-1}, and to determine proximity for the purpose of clustering these vectors, we will use the natural metric on this submanifold, which is simply the angle between vectors:

    d(x_i, x_j) = \arccos(x_i \cdot x_j)    (1)

We address the problem of cluster distributions within this submanifold with the following properties:

  1. Each cluster has a center vector c \in S^{D-1}, and its member vectors are generated by a probability density p(x) that is isotropic in the sense that it only depends on distance from the center: p(x) = f(d(x, c)).

  2. The function f is the same for every cluster, so that probability densities for different clusters are related by isometry.

  3. f decreases exponentially with d^2; for example, as a Gaussian suitably normalized on S^{D-1}:

    f(d) = \frac{1}{Z} \exp\left(-\frac{d^2}{2\sigma^2}\right)    (2)

    This ensures that the distribution is reasonably localized, since the exponential decrease compensates for a polynomial factor in the marginal distribution of d:

    p(d) = A_{D-2} \sin^{D-2}(d) \, f(d)    (3)

    where A_{D-2} is a constant equal to the hypersurface area of S^{D-2},

    A_{D-2} = \frac{2\pi^{(D-1)/2}}{\Gamma\left(\frac{D-1}{2}\right)}    (4)
  4. The prior distribution for the center of a cluster is constant on S^{D-1} (no unit vector is preferred). A minimal sampling sketch of this generative model is given below.
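To make the generative model above concrete, the following sketch (our own code, not from the paper) draws a cluster center uniformly from S^{D-1} and then draws members whose angular distance from the center follows the marginal density of equation 3 with the Gaussian f of equation 2, using simple rejection sampling. The dimension and the width sigma are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_center(D):
        # Property 4: the prior over centers is uniform on the unit hypersphere.
        c = rng.standard_normal(D)
        return c / np.linalg.norm(c)

    def sample_member(c, sigma):
        # Draw the angle d from the marginal density of equation 3,
        # p(d) proportional to sin(d)^(D-2) * exp(-d^2 / (2 sigma^2)),
        # by rejection sampling against a uniform proposal on [0, pi].
        D = c.shape[0]
        def g(d):
            return np.sin(d) ** (D - 2) * np.exp(-d ** 2 / (2 * sigma ** 2))
        grid = np.linspace(1e-6, np.pi - 1e-6, 10000)
        g_max = g(grid).max()
        while True:
            d = rng.uniform(0.0, np.pi)
            if rng.uniform() < g(d) / g_max:
                break
        # Pick a random tangent direction u perpendicular to c (isotropy, property 1)
        # and move along the geodesic from c by the angle d.
        v = rng.standard_normal(D)
        u = v - (v @ c) * c
        u /= np.linalg.norm(u)
        return np.cos(d) * c + np.sin(d) * u

    c = sample_center(128)
    x = sample_member(c, sigma=0.2)
    print(np.arccos(np.clip(x @ c, -1.0, 1.0)))   # angular distance from the center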

2.2 Estimated distribution

Given a set \{x_1, \ldots, x_n\} chosen randomly from the same cluster, but without knowledge of the center of the cluster, we would like to estimate the cluster's probability distribution. The likelihood of the center value c is

    L(c) = \prod_{i=1}^{n} f(d(x_i, c))    (5)

Since the prior is constant, the posterior is also proportional to the expression in equation 5. The maximum likelihood (and maximum a posteriori) center is therefore

    \hat{c} = \arg\max_{c} \prod_{i=1}^{n} f(d(x_i, c)) = \arg\min_{c} \sum_{i=1}^{n} d(x_i, c)^2    (6)

which is the same as the centroid of the vectors as defined for a hypersphere according to [7]. The estimated probability distribution for the cluster is

    \hat{p}(x) = f(d(x, \hat{c}))    (7)

The probability that a new vector x belongs to the same cluster can then be estimated as the cumulative amount

    P(x) = \int_{d(x, \hat{c})}^{\pi} A_{D-2} \sin^{D-2}(\delta) \, f(\delta) \, d\delta    (8)
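As an illustration of expression 8, the following sketch (our code, assuming the Gaussian f of equation 2) numerically computes the probability that a vector drawn from the estimated distribution lies at least as far from the estimated center as a given vector x; a small value indicates that x is unlikely to belong to the cluster.

    import numpy as np

    def membership_probability(x, c_hat, sigma, num=20000):
        # Tail mass of the angular marginal (equation 3) beyond the observed
        # distance d(x, c_hat); the uniform grid spacing cancels in the ratio.
        D = x.shape[0]
        d_x = np.arccos(np.clip(x @ c_hat, -1.0, 1.0))
        delta = np.linspace(1e-6, np.pi - 1e-6, num)
        p = np.sin(delta) ** (D - 2) * np.exp(-delta ** 2 / (2 * sigma ** 2))
        return p[delta >= d_x].sum() / p.sum()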

2.3 High-dimensional approximation

Our primary interest is in problems with relatively large D. For example, our typical embedding vectors have D of a hundred or more. For large enough D, the following are true:

Lemma 1

Two randomly chosen vectors x, x' \in S^{D-1} are almost always almost perpendicular, i.e.,

    P(|x \cdot x'| < \epsilon) > 1 - \delta    (9)

for some small positive numbers \epsilon and \delta.

Lemma 2

The angle between a cluster center c and a random vector x from that cluster is almost always almost equal to a global constant \bar{\theta}, i.e.,

    P(|d(x, c) - \bar{\theta}| < \epsilon) > 1 - \delta    (10)

for some small positive numbers \epsilon and \delta.

Lemma 3

Given two randomly chosen vectors x_i, x_j from a cluster with center c, their components perpendicular to c will almost always be almost perpendicular to each other, i.e.,

    P(|u_i \cdot u_j| < \epsilon) > 1 - \delta    (11)

for some small positive numbers \epsilon and \delta, where u_i is the unit vector along the component of x_i perpendicular to c.
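The following short numerical check (our code; the noise model and constants are illustrative assumptions, not the paper's) shows the behavior described by the lemmas for a moderately large D.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 128   # illustrative dimension

    def random_unit(d):
        v = rng.standard_normal(d)
        return v / np.linalg.norm(v)

    # Lemma 1: independent random unit vectors are nearly perpendicular.
    dots = [abs(random_unit(D) @ random_unit(D)) for _ in range(1000)]
    print(np.mean(dots))   # small, on the order of 1/sqrt(D)

    # Lemmas 2 and 3: simulate one cluster by perturbing a center with isotropic
    # Gaussian noise and renormalizing (a stand-in for the model of section 2.1).
    c = random_unit(D)
    members = [m / np.linalg.norm(m) for m in (c + 0.2 * rng.standard_normal((100, D)))]
    angles = [np.arccos(np.clip(m @ c, -1.0, 1.0)) for m in members]
    print(np.mean(angles), np.std(angles))   # angles concentrate near a constant

    perp = []
    for m in members:
        u = m - (m @ c) * c
        perp.append(u / np.linalg.norm(u))
    pair_dots = [abs(perp[i] @ perp[j]) for i in range(20) for j in range(i + 1, 20)]
    print(np.mean(pair_dots))   # perpendicular components are nearly perpendicular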

To assess whether to add a new vector x to an existing cluster known to include the vectors x_1, \ldots, x_n, we determine a threshold T(n) on the cosine similarity between the new vector and the centroid \hat{c}_n of the existing vectors. Using the approximations in lemmas 2 and 3, and assuming x is drawn from the same cluster, we can compute vector components in an orthonormal basis including c, u_1, \ldots, u_n, and u, where x_i \approx \rho c + \sqrt{1-\rho^2}\, u_i, x \approx \rho c + \sqrt{1-\rho^2}\, u, and \rho = \cos\bar{\theta}. This yields

    \hat{c}_n \approx \frac{n\rho\, c + \sqrt{1-\rho^2}\, \sum_{i=1}^{n} u_i}{\sqrt{n\,(1 + (n-1)\rho^2)}}    (12)

and a threshold of

    T(n) = x \cdot \hat{c}_n \approx \frac{\sqrt{n}\, \rho^2}{\sqrt{1 + (n-1)\rho^2}}    (13)

where \rho = \cos\bar{\theta}, which we call the cluster similarity threshold.

Note that

    T(1) = \rho^2    (14)

and

    \lim_{n \to \infty} T(n) = \rho    (15)

which confirms that as we accumulate more vectors in a given cluster, the center and cosine similarity threshold of the estimated distribution approach the center and cosine similarity threshold of the generative distribution (i.e., the estimate improves). Since T(n) is a strictly increasing function of n, the variance of the estimated distribution decreases with n.
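As a quick numerical check of equations 13 to 15 (our code, using the notation introduced above), the threshold T(n) starts at \rho^2 for a single vector, increases strictly with n, and approaches \rho:

    import numpy as np

    def cluster_threshold(n, rho):
        # Equation 13: cosine-similarity threshold between a new vector and the
        # centroid of a subcluster already containing n vectors.
        return np.sqrt(n) * rho ** 2 / np.sqrt(1.0 + (n - 1) * rho ** 2)

    rho = 0.6
    print(cluster_threshold(1, rho))       # rho**2 = 0.36   (equation 14)
    print(cluster_threshold(1000, rho))    # approaches rho  (equation 15)
    print(all(cluster_threshold(n + 1, rho) > cluster_threshold(n, rho)
              for n in range(1, 100)))     # strictly increasing in n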

Similarly, to assess whether two clusters are the same, we determine a threshold on the cosine similarity between their centroids where, for subclusters a and b containing n_a and n_b vectors respectively,

    T(n_a, n_b) = \hat{c}_a \cdot \hat{c}_b \approx \frac{\sqrt{n_a n_b}\, \rho^2}{\sqrt{(1 + (n_a-1)\rho^2)(1 + (n_b-1)\rho^2)}}    (16)

Note that equation 13 is the special case with n_b = 1,

    T(n, 1) = T(n)    (17)

and

    \lim_{n_a, n_b \to \infty} T(n_a, n_b) = 1    (18)

The latter confirms that the centers estimated from the two sets of cluster points converge.
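Similarly, a small sketch (ours) of the pairwise threshold of equation 16, confirming the special case of equation 17 and the limit of equation 18:

    import numpy as np

    def pair_threshold(n_a, n_b, rho):
        # Equation 16: cosine-similarity threshold between the centroids of two
        # subclusters containing n_a and n_b vectors drawn from the same cluster.
        return (np.sqrt(n_a * n_b) * rho ** 2 /
                np.sqrt((1.0 + (n_a - 1) * rho ** 2) * (1.0 + (n_b - 1) * rho ** 2)))

    rho = 0.6
    print(np.isclose(pair_threshold(5, 1, rho),
                     np.sqrt(5) * rho ** 2 / np.sqrt(1 + 4 * rho ** 2)))  # equation 17
    print(pair_threshold(10 ** 4, 10 ** 4, rho))                          # approaches 1 (equation 18)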

3 Algorithm

3.1 Online clustering

Each new input vector is assigned to a cluster as soon as it is produced, with no knowledge of future vectors and no backtracking. A unique ID for that cluster is returned. The clusterer keeps statistical information about the vectors received so far. Although it cannot change a previous answer, it can change the internal representation of cluster statistics, such as improvements to estimated distributions as well as cluster splits and merges when indicated by new information.

3.2 Internal representation

The Links algorithm’s internal representation is a two-level hierarchy: clusters are collections of subclusters, and subclusters are collections of input vectors. The subclusters are represented as nodes in a graph whose edges join ‘nearby’ nodes (meaning subclusters that likely belong to the same cluster given the data so far), and clusters are defined as connected components of the graph. Whereas subclusters are indivisible, clusters can become split along graph edges in response to changes in subcluster estimated probability distributions as new data is added. Alternatively, subclusters joined by an edge can become merged in response to changes.

The reasons for maintaining this two-level hierarchy (rather than, say, an arbitrary number of levels) are efficiency and practicality. It is efficient because the algorithm scales with the number of subclusters rather than the number of vectors. It is practical because the key cluster substructure that can affect future cluster IDs is the set of potential split points.
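The representation described above could be realized, for instance, as follows (a minimal sketch in our own notation; the class and function names are not from the paper): each subcluster keeps a running vector sum, so centroids are cheap to maintain, and clusters are read off as connected components of the edge graph.

    import numpy as np

    class Subcluster:
        # A node of the Links graph: an indivisible group of input vectors,
        # summarized by the sum of its vectors.
        def __init__(self, x):
            self.vector_sum = np.array(x, dtype=float)
            self.count = 1
            self.neighbors = set()   # edges to 'nearby' subclusters

        @property
        def centroid(self):
            return self.vector_sum / np.linalg.norm(self.vector_sum)

        def add(self, x):
            self.vector_sum += x
            self.count += 1

    def clusters(subclusters):
        # Clusters are the connected components of the subcluster graph.
        seen, components = set(), []
        for s in subclusters:
            if id(s) in seen:
                continue
            stack, component = [s], []
            seen.add(id(s))
            while stack:
                node = stack.pop()
                component.append(node)
                for nb in node.neighbors:
                    if id(nb) not in seen:
                        seen.add(id(nb))
                        stack.append(nb)
            components.append(component)
        return components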

3.3 Assessing cluster membership

When a new vector x is available, compute its cosine similarity to each subcluster centroid \hat{c}_k, and add it to the most-similar subcluster if the similarity is above a fixed threshold \tau. In other words, let

    k^* = \arg\max_k \; x \cdot \hat{c}_k    (19)

If

    x \cdot \hat{c}_{k^*} \geq \tau    (20)

then add x to subcluster k^*. The threshold \tau, called the subcluster similarity threshold, is a hyperparameter determining the granularity of cluster substructure appropriate for the data.

If inequality 20 does not hold, then start a new subcluster containing just x. Next, use the estimated probability distribution of subcluster k^* to determine whether to include the new subcluster in the same cluster as k^*, by thresholding the cumulative probability in expression 8. In the high-dimensional approximation, this means the new subcluster is included in the cluster whenever

    x \cdot \hat{c}_{k^*} \geq T(n_{k^*})    (21)

where n_{k^*} is the number of vectors in the subcluster k^*. To a first approximation, T is as given in equation 13. This will be further refined in section 3.5. If inequality 21 does hold, then add an edge to the graph joining the new subcluster to subcluster k^*.
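A compact sketch of this decision step (our code; cluster_threshold implements equation 13 with the cluster similarity threshold rho, and tau is the subcluster similarity threshold):

    import numpy as np

    def cluster_threshold(n, rho):
        # Equation 13.
        return np.sqrt(n) * rho ** 2 / np.sqrt(1.0 + (n - 1) * rho ** 2)

    def assess_membership(x, centroids, counts, tau, rho):
        # Decide how to place a new unit vector x, given the centroids and sizes
        # of the existing subclusters (equations 19 to 21).  Returns the index of
        # the most similar subcluster, whether x joins it, and whether a new
        # subcluster containing x should at least be linked to it by an edge.
        if len(centroids) == 0:
            return None, False, False
        sims = np.array([x @ c for c in centroids])
        k = int(np.argmax(sims))                                 # equation 19
        if sims[k] >= tau:                                       # inequality 20
            return k, True, True
        link = sims[k] >= cluster_threshold(counts[k], rho)      # inequality 21
        return k, False, link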

3.4 Updating clusters

When a new vector is added to an existing subcluster, the subcluster's centroid may change. If this brings it within the subcluster similarity threshold of the centroid of another subcluster currently joined to the first by an edge, then the two are merged. In other words, if \hat{c}_j \cdot \hat{c}_k \geq \tau for edge-joined subclusters j and k, then nodes j and k are replaced with a single node containing the vectors of both, and with the edge connections of both. Since the merging process also results in a new subcluster centroid, this check is continued recursively on affected subclusters.

Next, the edges joining affected nodes are checked for validity. The edge joining subclusters j and k is removed if the following does not continue to hold:

    \hat{c}_j \cdot \hat{c}_k \geq T(n_j, n_k)    (22)

where T is approximately as given in equation 16, but with improvements to follow in section 3.5. After severing a cluster in two by removing an edge, an attempt is made to re-join the two parts by adding an edge from the affected node to a new partner node that does satisfy inequality 22. If no such partner is found, then the cluster remains permanently split.
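The two checks used in this update step can be sketched as follows (our code; subclusters are summarized by their vector sums and sizes, as in the earlier sketch):

    import numpy as np

    def pair_threshold(n_a, n_b, rho):
        # Equation 16.
        return (np.sqrt(n_a * n_b) * rho ** 2 /
                np.sqrt((1.0 + (n_a - 1) * rho ** 2) * (1.0 + (n_b - 1) * rho ** 2)))

    def should_merge(sum_a, sum_b, tau):
        # Two edge-joined subclusters are merged when their centroids come within
        # the subcluster similarity threshold tau of each other.
        c_a = sum_a / np.linalg.norm(sum_a)
        c_b = sum_b / np.linalg.norm(sum_b)
        return c_a @ c_b >= tau

    def edge_still_valid(sum_a, n_a, sum_b, n_b, rho):
        # Inequality 22: an edge survives only while the centroid similarity stays
        # above the pairwise threshold of equation 16.
        c_a = sum_a / np.linalg.norm(sum_a)
        c_b = sum_b / np.linalg.norm(sum_b)
        return c_a @ c_b >= pair_threshold(n_a, n_b, rho)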

3.5 Anisotropy

Equations 13 and 16 were used to determine thresholds for membership in the same cluster as a given subcluster, effectively treating the subcluster's members as randomly chosen from the cluster and not correlated with each other. If one were to properly take intra-subcluster correlations into account, then one consequence is that the limit in equation 18 would be reduced to a positive number M_p < 1, which we call the pair similarity maximum,

    \lim_{n_a, n_b \to \infty} T(n_a, n_b) = M_p    (23)

whereas the value of T(1, 1), which is \rho^2, would remain unchanged. Any implicit anisotropy in the cluster distribution, such as an elongation along a preferred axis, will further reduce the value of M_p without changing T(1, 1). A simple though approximate way to incorporate these adjustments into the algorithm is to replace T(n) and T(n_a, n_b) by the following interpolated versions:

(24)
(25)
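One simple choice consistent with the constraints stated above (keeping the value \rho^2 at n_a = n_b = 1 while lowering the limiting value from 1 to the pair similarity maximum M_p) is the linear rescaling sketched below; this particular form is an illustrative assumption on our part, not necessarily the exact interpolation used by Links:

    import numpy as np

    def interpolated_pair_threshold(n_a, n_b, rho, m_pair):
        # Linearly rescale the ideal threshold of equation 16 so that its value
        # at n_a = n_b = 1 remains rho**2 while its limit for large n_a, n_b
        # becomes m_pair (the pair similarity maximum) instead of 1.
        t = (np.sqrt(n_a * n_b) * rho ** 2 /
             np.sqrt((1.0 + (n_a - 1) * rho ** 2) * (1.0 + (n_b - 1) * rho ** 2)))
        return rho ** 2 + (t - rho ** 2) * (m_pair - rho ** 2) / (1.0 - rho ** 2)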

3.6 Hyperparameter Tuning

The similarity thresholds \rho, \tau, and M_p need to be tuned to best represent the data source. This is done by manually labeling a dataset with cluster IDs, running the clusterer on the data, and adjusting the hyperparameters to improve the accuracy of the output cluster IDs. Accuracy is simply the fraction of correct IDs. Prior to evaluation, the Hungarian algorithm [8] is used to map a subset of output cluster IDs bijectively to a subset of ground-truth cluster IDs in such a way as to produce the best possible accuracy. For some applications an alternate objective has been used; for example, one that gives different weights to conflating IDs vs. fracturing IDs, to reflect the seriousness of each type of error in practice.
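For example, the accuracy computation with Hungarian matching can be sketched as follows (our code, using SciPy's linear_sum_assignment; the function name and example data are ours):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def clustering_accuracy(true_ids, predicted_ids):
        # Fraction of inputs whose predicted cluster ID is correct, after mapping
        # predicted IDs to ground-truth IDs with the Hungarian algorithm [8] so as
        # to maximize the number of matches.
        true_labels = {t: i for i, t in enumerate(sorted(set(true_ids)))}
        pred_labels = {p: j for j, p in enumerate(sorted(set(predicted_ids)))}
        counts = np.zeros((len(true_labels), len(pred_labels)), dtype=int)
        for t, p in zip(true_ids, predicted_ids):
            counts[true_labels[t], pred_labels[p]] += 1
        rows, cols = linear_sum_assignment(-counts)   # maximize matched pairs
        return counts[rows, cols].sum() / len(true_ids)

    # Example: three of four assignments are consistent with the best ID mapping.
    print(clustering_accuracy(["a", "a", "b", "b"], [1, 1, 2, 1]))  # 0.75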

4 Acknowledgements

The authors would like to thank Dr. Brian Budge and Dr. Navid Shiee for help with APIs and evaluation frameworks used in the implementation of the Links algorithm.

References