An Improved Cost Function for Hierarchical Cluster Trees
Hierarchical clustering is a popular method in many data analysis applications. It partitions a data set into a hierarchical collection of clusters, providing a global view of the cluster structure behind the data across different granularity levels. A hierarchical clustering (HC) of a data set is naturally represented by a tree, called an HC-tree, whose leaves correspond to input data points and whose subtrees rooted at internal nodes correspond to clusters. Many hierarchical clustering algorithms used in practice are developed in a procedural manner. Dasgupta proposed to study the hierarchical clustering problem from an optimization point of view, and introduced an intuitive cost function for similarity-based hierarchical clustering that has appealing properties and admits natural approximation algorithms. We observe that while Dasgupta's cost function is effective at differentiating a good HC-tree from a bad one for a fixed graph, its value does not reflect how consistent an input similarity graph is with a hierarchical structure. In this paper, we present a new cost function, developed from Dasgupta's cost function, to address this issue. The optimal tree under the new cost function remains the same as the one under Dasgupta's cost function, but the value of our cost function is more meaningful, reflecting how consistent the input graph is with a hierarchical structure. This new formulation also leads to a polynomial-time algorithm that computes the optimal cluster tree when the input graph has a perfect HC-structure, and to an approximation algorithm when the input graph 'almost' has a perfect HC-structure. Finally, we provide further insight into the new cost function by studying its behavior for random graphs sampled from an edge probability matrix.
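For context, below is a minimal Python sketch of Dasgupta's original cost function, cost(T) = sum over similarity edges (i, j) of w_ij * |leaves(T[i v j])|, where T[i v j] is the subtree rooted at the lowest common ancestor of i and j. The nested-tuple tree representation and the helper names are illustrative assumptions, not notation from the paper.

def leaves(tree):
    # Collect the set of leaf ids of a nested-tuple HC-tree.
    if not isinstance(tree, tuple):
        return {tree}
    out = set()
    for child in tree:
        out |= leaves(child)
    return out

def dasgupta_cost(tree, w):
    # cost(T) = sum over edges (i, j) of w_ij * |leaves(T[i v j])|.
    # Each edge is charged at the internal node where its two
    # endpoints first fall into different children.
    if not isinstance(tree, tuple):
        return 0.0
    n = len(leaves(tree))
    sets = [leaves(c) for c in tree]
    cost = 0.0
    for a in range(len(sets)):
        for b in range(a + 1, len(sets)):
            for i in sets[a]:
                for j in sets[b]:
                    cost += w.get((min(i, j), max(i, j)), 0.0) * n
    return cost + sum(dasgupta_cost(c, w) for c in tree)

# Toy similarity graph on 4 points: {0, 1} and {2, 3} are tightly linked.
w = {(0, 1): 1.0, (2, 3): 1.0, (1, 2): 0.1}
print(dasgupta_cost(((0, 1), (2, 3)), w))  # 4.4 -- splits the weak edge first
print(dasgupta_cost(((0, 2), (1, 3)), w))  # 8.4 -- splits strong edges at the root

For a fixed graph the lower-cost tree is preferred, as the toy example shows. The abstract's point is that the absolute value (4.4 here) by itself says little about whether the graph is hierarchical, which is the issue the new cost function addresses.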