Lexicographically Ordered Multi-Objective Clustering

03/02/2019 · Sainyam Galhotra, et al. · University of Massachusetts Amherst

We introduce a rich model for multi-objective clustering with a lexicographic ordering over objectives and a slack. The slack denotes the allowed multiplicative deviation from the optimal objective value of a higher-priority objective, to facilitate improvement in lower-priority objectives. We then propose an algorithm, called Zeus, to solve this class of problems; it is characterized by a makeshift function. The makeshift fine-tunes the clusters formed for the processed objectives so as to improve the clustering with respect to the unprocessed objectives, given the slack. We present makeshifts for solving three different classes of objectives and analyze their solution guarantees. Finally, we empirically demonstrate the effectiveness of our approach on three applications using real-world data.


1 Introduction

Identifying graph clusters, which are groups of similar or related entities [Jain et al.1999], is increasingly employed for data-driven decision making in high-impact applications such as health care [Haraty et al.2015] and urban mobility [Kumar et al.2016, Saisubramanian et al.2015]. Clustering with multiple objectives [Law et al.2004, Handl and Knowles2007] helps improve the robustness of the solution and has proven beneficial in many applications such as resource sharing [Chen et al.2011], fairness [Chierichetti et al.2017], and team formation [Farhadi et al.2012]. For example, consider a group of six friends who want to carpool to work in two cars (Figure 1). When clustering for carpooling, it is important to minimize the maximum distance traveled by a driver and to balance the cluster sizes.

Figure 1: An example of single- and multi-objective clustering for the carpooling problem: panel (a) shows the problem instance, panels (b)-(d) show the clusterings produced by different approaches, and panel (e) tabulates the objective values. Each edge weight denotes the pairwise distance between the nodes, and all pairs not connected by an edge have a distance of 67.

Existing techniques that support multi-objective clustering (MOC) either leverage a scalarization function, which combines the multiple objectives into a single objective, or find clusters in parallel for each objective and combine the results using approaches such as a fitness function [Jiamthapthaksin et al.2009, Veldt et al.2018, Handl and Knowles2007, Pizzuti2018, Saha et al.2018]. Finding a suitable scalarization is non-trivial due to the large space of Pareto optimal solutions that may need to be explored [Handl and Knowles2007, Wray et al.2015, Zhou et al.2011]. Many algorithms employ a heuristic approach to prune the space of Pareto optimal solutions, making it difficult to provide any theoretical guarantees on the results. When combining solutions obtained by solving multiple objectives in isolation, it is not clear how the solution to one objective affects another, since the clusters formed may be arbitrarily bad with respect to the other objectives. For the carpooling example, the clusters formed by optimizing independently for the driving distance (using the k-center 2-approximation) and for balanced cluster sizes (using a balanced k-center algorithm) are shown in Figures 1(b) and 1(c), and the corresponding objective values are tabulated in Figure 1(e). In each single-objective clustering, the solution is far from optimal for the other objective. The distance traveled is larger with MOC, since it optimizes both objectives together.

We address these concerns by considering a lexicographic ordering over objectives, which offers a natural way to describe optimization problems [Rangcheng et al.2001, Wray et al.2015]. This is motivated by the observation that many multi-objective problems are characterized by an inherent lexicographic ordering over objectives, which offers a principled approach to evaluating candidate solutions. In fact, for the carpooling scenario, the clusters that optimize both objectives in lexicographic order (Figure 1(d)) achieve the best trade-off between the two objectives.

We introduce Relaxed Lexicographic Multi-Objective Clustering (RLMOC), a generalized model that supports clustering with any finite number of objectives and is characterized by a slack variable. The lexicographic ordering enforces a preference over objectives that are satisfied by the solution. Strict lexicographic ordering often reduces the space for forming clusters that satisfy lower-priority objectives, which we alleviate by using a slack. The slack, for each objective, is a multiplicative approximation factor denoting the upper limit on the acceptable loss in the solution quality from the optimal, thus offering more flexibility for clustering. For example, in clustering for supply-demand matching, there is always a trade-off among optimizing for distance, load balance, and cost. When optimizing the distance is less critical, allowing for a slack helps improve load balancing and cost. RLMOC generalizes (lexicographic) multi-objective clustering and single-objective clustering, since their solutions can be achieved by appropriately modifying the slack.

We propose the Zeus algorithm that solves the (relaxed) LMOC problem by sequentially processing the different objectives to form clusters. This is facilitated by a makeshift subroutine that processes the clusters formed for the previous objective so as to improve the clustering with respect to the current objective, as long as the loss in solution quality does not violate the slack. By varying the list of objectives, their ordering, and the allowed slack, a wide range of problems can be efficiently represented by our model. In this paper, we discuss in detail the makeshifts for the resource sharing, fairness, team formation, and k-center objectives, and analyze their theoretical guarantees.

Our primary contributions are: (i) introducing lexicographic multi-objective clustering with slack (Section 2); (ii) presenting the Zeus algorithm, makeshift functions for solving various classes of problems, and an analysis of their theoretical guarantees (Section 3); and (iii) empirical results on three domains with different combinations of objectives (Section 4).

2 Problem Formulation

Consider a collection of points V = {v1, ..., vn}, along with a pairwise distance metric d. Let G = (V, E) be a graph, with E capturing pairwise relationships such as 'do vi and vj know each other?'. Let A denote a function that maps each node to its set of attributes; only the attributes optimized by the objectives are required to be known. Given a graph instance G and an integer k, the goal is to construct k clusters C = {C1, ..., Ck} that partition V into disjoint subsets by optimizing an objective function. Given a graph G and a set of clusters C, an objective function O returns an objective value as a real number, O(G, C), which helps compare different clustering techniques. C(v) denotes the cluster corresponding to node v.

Our work focuses on using a lexicographic collection of these objectives to optimize the set of clusters obtained. Given an ordered set of objectives O = ⟨O1, ..., Om⟩, the lexicographic preference enforces the priority O1 ≻ O2 ≻ ... ≻ Om. We now define a mechanism to compare two different sets of clusters to identify the lexicographically superior set, and use it to define Lexicographic Multi-Objective Clustering (LMOC).

[Lexicographically Superior] Given two sets of clusters C and C' that optimize a lexicographically ordered set of objectives O over a graph G, C is lexicographically superior to C' (C ≻ C') if there exists an index i such that Oj(G, C) = Oj(G, C') for all j < i, and Oi(G, C) > Oi(G, C') whenever Oi is a maximization objective (with the opposite inequality if Oi is a minimization objective).

[Lexicographic Multi-Objective Clustering: LMOC(G, O, k)] Given a graph G, an ordered set of objectives O, and an integer k, the goal is to find a set of k clusters C such that there does not exist any other set of k clusters that is lexicographically superior to C. LMOC generalizes the single-objective clustering problem and satisfies the following properties:

  • Optimizing the same objective multiple times is equivalent to optimizing for that objective once;

  • The objective value of the optimal clusters returned by LMOC for an objective Oi is no better than the objective value of the optimal clusters for the single-objective problem that optimizes Oi alone; and

  • The clusters returned by LMOC are sensitive to the order in which the objective functions are considered.
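As a concrete illustration, the lexicographic-superiority test from the definition above can be sketched in a few lines of Python. The function name, the value lists, and the 'max'/'min' sense labels are our own illustrative conventions, not notation from the paper.

```python
def lex_superior(obj_values_a, obj_values_b, senses):
    """Return True if clustering A is lexicographically superior to B.

    obj_values_a / obj_values_b hold the objective values O_1..O_m of the
    two clusterings, listed in priority order; senses gives 'max' or 'min'
    for each objective.
    """
    for a, b, sense in zip(obj_values_a, obj_values_b, senses):
        if a == b:
            continue  # tied on this objective; defer to the next one
        return a > b if sense == 'max' else a < b
    return False  # equal on every objective: neither is superior
```

For instance, with senses ['max', 'min'], the value profile [3, 1] is superior to [3, 2]: the first objective ties and the second (a minimization) is strictly smaller.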

Given the complexity of identifying a globally superior set of clusters, we define the Relaxed Lexicographic Multi-Objective Clustering problem (RLMOC), which is characterized by an ordered set of slack values ε1, ..., εm, with εi ≥ 1. The slack values denote the multiplicative approximation factor corresponding to each objective in the lexicographic order. Therefore, a solution C is considered to be valid iff Oi(G, C) ≥ Oi(G, C*)/εi when Oi is a maximization objective (and Oi(G, C) ≤ εi · Oi(G, C*) if Oi is a minimization objective), where C* is a globally lexicographically superior set of clusters. The goal is to return any single set of clusters from the space of possible solutions. Every RLMOC is an LMOC when every slack value equals 1, and the properties described for LMOC translate to the relaxed version as well. Given the NP-hardness of identifying optimal clusters (globally superior) with respect to classical objectives (k-center, k-median, k-means), higher slack values enable trading solution quality for faster computation. In the following section, we describe an approach to solve RLMOC.

3 Solution Approach

In this section, we present the Zeus algorithm to solve the RLMOC problem. Given a graph G, the algorithm initializes each node to be in its own separate cluster and sequentially processes the objective functions (Algorithm 1). This is achieved by employing a makeshift subroutine that processes the previously formed clusters to satisfy the current objective. In each application of the makeshift, the goal is to obtain a set of clusters that does not violate the slack values of any of the processed objectives. The slack_violated function calculates the objective value of the clustering and estimates whether the corresponding slack is violated. (Since calculating the optimal objective value can be NP-hard, theoretical guarantees are leveraged to estimate the optimal value.) When any of the slack values is violated by the makeshift, the clusters are post-processed using a local search algorithm that improves the violated objective function without degrading the quality with respect to the other objectives. The local_search function aims to improve the solution by moving one node at a time from its original cluster to another cluster, and terminates when a solution that does not violate the slack is found or when no movement of a node results in an improvement.
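A rough sketch of this local search, under the simplifying assumption that the violated objective is summarized by a single score function (higher is better); the names and signatures here are illustrative, not the paper's implementation.

```python
def local_search(clusters, score, slack_ok):
    """Move one node at a time between clusters, keeping only strictly
    improving moves; stop when the slack is satisfied or no move helps."""
    best = score(clusters)
    improved = True
    while improved and not slack_ok(clusters):
        improved = False
        for src in clusters:
            for v in list(src):                 # snapshot: src mutates below
                for dst in clusters:
                    if dst is src:
                        continue
                    src.remove(v)
                    dst.add(v)                  # tentative move of v
                    s = score(clusters)
                    if s > best:
                        best, improved = s, True
                        break                   # keep the move, try next node
                    dst.remove(v)
                    src.add(v)                  # revert
    return clusters
```

Because only strictly improving moves are kept, the loop cannot cycle and terminates once no single-node move helps.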

1: Initialize each node in a separate cluster C
2: for each objective Oi in O do
3:     C ← makeshift_Oi(G, C)
4:     if slack_violated(C, Oi) then
5:         C ← local_search(G, C, Oi)
6:     end if
7: end for
8: return C
Algorithm 1 Zeus(G, O, k)

Zeus supports any combination of objectives since the makeshift is independent of the sequence of objectives considered.
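The control flow of Algorithm 1 can be sketched as follows; the makeshift, slack check, and local search are passed in as callbacks, and all names are illustrative rather than the paper's actual API.

```python
def zeus(nodes, objectives, makeshifts, slack_violated, local_search):
    """Sketch of Algorithm 1 (Zeus): process objectives in lexicographic
    order, reshaping the clusters with each objective's makeshift and
    repairing with local search when a slack is violated."""
    clusters = [{v} for v in nodes]            # Line 1: every node alone
    for obj in objectives:                     # Lines 2-7
        clusters = makeshifts[obj](clusters)   # Line 3: apply the makeshift
        if slack_violated(clusters, obj):      # Line 4: estimated optimum check
            clusters = local_search(clusters, obj)  # Line 5: repair
    return clusters                            # Line 8
```

Plugging in a trivial makeshift that merges everything into one cluster shows the shape of the pipeline without committing to any particular objective.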

3.1 Makeshift

The makeshift is a critical component of Zeus in solving RLMOC. Since a makeshift modifies the clusters formed for the previously processed objectives to satisfy the current objective, makeshifts are naturally dependent on the objective function. It is relatively easy to design makeshifts for classical clustering objectives such as k-center, k-median, and k-means. When using a combination of classical and ancillary clustering objectives, the makeshift for the ancillary objective depends on the classical clustering objective as well. In the rest of the paper, we focus on three ancillary objectives that are widely used in real-world applications that benefit from multiple objectives [Zhou et al.2011], each in combination with the k-center objective. A brief description of how our makeshifts can be adapted to other classical clustering objectives is provided in the appendix.

3.1.1 k-Center

The k-center objective is one of the most widely studied objectives in the literature [Vazirani2013]; the goal is to identify k nodes as cluster centers (say c1, ..., ck) and assign each node to the closest cluster center such that the maximum distance of any node from its cluster center is minimized. The objective value is calculated as:

O_KC(G, C) = max_{i ∈ [k]} max_{v ∈ Ci} d(v, ci)

A simple greedy algorithm provides a 2-approximation for the k-center problem, and it is NP-hard to achieve a better approximation factor [Vazirani2013]. The greedy algorithm initializes each point in its own cluster and chooses the first center randomly. In each subsequent iteration, all nodes are assigned to the already identified centers, and the node farthest from its currently assigned center is selected as the new cluster center. The makeshift algorithm for k-center leverages this procedure to identify the cluster centers. Whenever the input to k-center is a collection of clusters formed by previously processed objectives, the makeshift post-processes these clusters by reassigning nodes such that any set of nodes clustered together before processing k-center remains in the same cluster.
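The greedy procedure described above (often attributed to Gonzalez) can be sketched as follows; `points` and `dist` are generic placeholders, not the paper's implementation.

```python
def greedy_k_center(points, k, dist):
    """Greedy 2-approximation for k-center: repeatedly pick the point
    farthest from the already-chosen centers, then assign each point to
    its closest center."""
    centers = [points[0]]                       # first center: arbitrary pick
    while len(centers) < k:
        # the point whose nearest chosen center is farthest away
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    assignment = {p: min(centers, key=lambda c: dist(p, c)) for p in points}
    radius = max(dist(p, assignment[p]) for p in points)  # objective value
    return centers, assignment, radius
```

On the points {0, 1, 10, 11} on a line with k = 2, the chosen centers are 0 and 11 and the resulting radius is 1, within a factor of 2 of any optimum.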

3.1.2 Resource Sharing (RS)

The objective in resource sharing, O_RS, is to maximize the number of nodes that have at least one of their neighbors in the same cluster. The objective value is calculated as:

O_RS(G, C) = |{v ∈ V : ∃ u ∈ N(v) with C(u) = C(v)}|,

where N(v) denotes the neighbors of v in G.

Clustering for RS is widely used in distributed computing and cache management where each compute node in the network is assigned to one of the caches [Chu et al.2007].

For the sake of clarity, we introduce the makeshift (Algorithm 2) assuming each node is in its own cluster, but it can be modified to handle situations where multiple nodes are already clustered together. The makeshift first iterates over V and constructs a subgraph E_c by considering the minimum-weight edge incident on every node (Lines 2-5). These edges form a valid edge cover of the graph (Lemma 2). The edges are then sorted in decreasing order of their weights, and redundant edges are removed while ensuring that E_c remains a valid edge cover. The sets of nodes that belong to the same connected component in E_c are considered to belong to the same cluster (Line 12). Hence the clusters generated by Algorithm 2 are star-shaped, with the star centers acting as cluster centers, and the maximum length of any path in E_c is two (Lemma 2). Algorithm 2 is highly efficient, with a run time polynomial in the number of edges.

1: E_c ← ∅
2: for each node v ∈ V do
3:     e_v ← minimum-weight edge incident on v
4:     E_c ← E_c ∪ {e_v}
5: end for
6: sort E_c in decreasing order of edge weight
7: for each edge e ∈ E_c do
8:     if E_c \ {e} is a valid edge cover then
9:         E_c ← E_c \ {e}
10:     end if
11: end for
12: return the connected components of (V, E_c) as clusters
Algorithm 2 makeshift_RS(G)
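A minimal Python sketch of Algorithm 2, assuming a complete weighted graph given as a dict of edge weights; all names are illustrative.

```python
def makeshift_rs(nodes, edges):
    """Sketch of Algorithm 2 (makeshift for resource sharing): keep each
    node's cheapest incident edge, drop redundant edges heaviest-first
    while a valid edge cover remains, and return the connected components
    of the cover as (star-shaped) clusters. `edges` maps (u, v) -> weight."""
    cover = set()
    for v in nodes:  # Lines 2-5: minimum-weight incident edge per node
        incident = [e for e in edges if v in e]
        cover.add(min(incident, key=edges.get))

    def is_cover(es):  # is every node touched by some edge?
        return {x for e in es for x in e} == set(nodes)

    for e in sorted(cover, key=edges.get, reverse=True):  # heaviest first
        if is_cover(cover - {e}):
            cover.discard(e)  # redundant: removal keeps a valid cover

    # connected components of (nodes, cover) are the clusters (Line 12)
    adj = {v: set() for v in nodes}
    for u, v in cover:
        adj[u].add(v)
        adj[v].add(u)
    clusters, seen = [], set()
    for v in nodes:
        if v in seen:
            continue
        component, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x not in component:
                component.add(x)
                seen.add(x)
                stack.extend(adj[x] - component)
        clusters.append(component)
    return clusters
```

On a four-node path with cheap outer edges and an expensive middle edge, the cover keeps the two cheap edges and yields two star-shaped clusters.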

Algorithm 2 produces an optimal edge cover, minimizing the maximum weight of any edge in E_c.

Proof.

To show that Algorithm 2 produces an optimal edge cover that minimizes the maximum edge weight, we first show that it produces a valid edge cover. The algorithm constructs E_c from the minimum-weight edge incident on each node (Lines 2-5), and a redundant edge is removed only when doing so does not violate the edge-cover condition (Lines 7-11). Thus, E_c is a valid edge cover. We now prove by contradiction that E_c is optimal. Suppose E' is an edge cover with a smaller maximum edge weight. Let (u, v) be a maximum-weight edge of E_c; by construction, (u, v) is the minimum-weight edge incident on some node, say u (Lines 2-5). Any edge cover must include an edge incident on u, and every such edge has weight at least w(u, v). Hence E' cannot have a smaller maximum edge weight, a contradiction, proving that Algorithm 2 produces an edge cover with the minimum possible maximum edge weight. ∎

The maximum length of any path in E_c, formed by Algorithm 2, is two.

Proof.

We prove by contradiction that the maximum path length in E_c is two. Upon termination of the algorithm, suppose there exists a path with three consecutive edges (u, v), (v, w), and (w, x). Removing the middle edge (v, w) from E_c would still leave an edge cover, since v and w remain covered by (u, v) and (w, x). However, Algorithm 2 evaluates every edge in E_c and only retains an edge if its removal would violate the edge cover (Lines 7-11). This is a contradiction; hence, the maximum length of any path in E_c is two. ∎

The following theorem shows that the set of clusters returned by Algorithm 2 is optimal with respect to O_RS and bounds the loss in solution quality for k-center: the clustering returned by Zeus for the order (O_RS, O_KC) achieves the optimal objective value for O_RS and a constant multiple of OPT for k-center in the worst case, where OPT is the optimal objective value for the k-center objective.

Proof.

Let C be the clusters returned by Zeus with the order (O_RS, O_KC), and let C' be the clusters generated on processing RS (with E_c the edges in the edge cover). Since E_c is an optimal edge cover (by Lemma 2) and every pair of nodes connected in the edge cover is present in the same final cluster, the objective value of C with respect to O_RS is the optimal objective value.

Lemma 2 shows that the weight of any edge in E_c is at most the maximum edge weight in the optimal solution, so the distance from any leaf to its star center is bounded; Lemma 2 also guarantees that the clusters generated by RS are star-shaped. When Zeus processes the k-center objective on the star centers, the distance between any pair of points within a cluster is bounded by the 2-approximation guarantee [Vazirani2013]. Using the second property of LMOC, the optimal k-center value on this instance is no better than OPT. Zeus then iterates over the leaves of the stars in C' and assigns each to the same cluster as the center of its star in C. Using the triangle inequality, the distance of a leaf from the center of its new cluster is at most the sum of (a) the distance between the leaf and the center of its star, and (b) the distance between the center of the star and its cluster center. Since (a) and (b) are each bounded by a constant multiple of OPT, the maximum distance of any point from its cluster center is bounded by a constant multiple of OPT. ∎

This shows that the slack is not violated for O_RS and O_KC whenever the slack values meet the bounds above. Additionally, a slack below 1 for O_RS is infeasible as the objective value can never be greater than the optimal value, and a k-center slack below 2 is infeasible due to the NP-hardness of approximating the k-center problem. Hence, the slack can be violated only for intermediate k-center slack values, in which case the local search technique helps improve the solution.

3.1.3 Fairness (F)

Minimizing bias to improve fairness is gaining increased attention as it is critical for many real-world settings; however, fairness in clustering remains under-explored. Let each node in the graph have a sensitive attribute, say a color which can be 'Blue' (B) or 'Purple' (P). Given such a characteristic, recent work has focused on forming clusters such that every cluster has an equal fraction of nodes with the 'Blue' attribute [Chierichetti et al.2017]. This is a form of group fairness studied in the literature [Galhotra et al.2017]. Another setting studied in the literature considers individual fairness, where two nodes possessing similar attributes but different colors should not be treated differently [Galhotra et al.2017]. We consider a fairness objective, O_F, which ensures that each node from the minority group (say 'Blue') is matched to at least one neighbor from the majority group ('Purple') and all the matched pairs (denoted by M) belong to the same cluster. The objective value counts the matched pairs that are placed in the same cluster:

O_F(G, C) = |{(u, v) ∈ M : C(u) = C(v)}|

1: for each candidate τ considered by binary search over [0, d_max] do
2:     E_b ← ∅
3:     for each pair (u, v) with u ∈ B and v ∈ P do
4:         if d(u, v) ≤ τ then
5:             E_b ← E_b ∪ {(u, v)}
6:         end if
7:     end for
8:     M ← maximum bipartite matching on (B ∪ P, E_b)
9: end for
10: return the smallest τ (and its matching M) for which every node in B is matched
Algorithm 3 makeshift_F(G)

Algorithm 3 describes the mechanism to match the nodes with the 'Blue' color to nodes with the 'Purple' color. Let τ* denote the optimal value of the maximum distance between any pair of matched vertices. The candidate distance τ is initialized to the maximum distance between any pair of vertices and is refined iteratively. The algorithm constructs an unweighted bipartite graph with the 'Blue' nodes on one side (B) and the 'Purple' nodes on the other side (P), and a pair is connected if the corresponding distance is at most τ. It then performs maximum bipartite matching by adding a source node s and a sink node t: the nodes in B are connected to s and the nodes in P are connected to t, each with an edge of unit capacity. Executing bipartite matching on this instance determines whether every node in B can be matched with some node in P. In each subsequent iteration, τ is updated by binary search, which quickly identifies the smallest τ that admits a matching of all 'Blue' nodes.
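A self-contained sketch of this procedure: binary search over the sorted candidate distances, testing at each threshold whether every 'Blue' node can be matched via simple augmenting paths (a stand-in for the max-flow construction described above). All names are illustrative.

```python
def fair_matching(blue, purple, dist):
    """Sketch of Algorithm 3 (makeshift for fairness): find the smallest
    threshold tau such that every Blue node can be matched to a distinct
    Purple node within distance tau, returning (tau, matching)."""
    def max_matching(tau):
        match = {}  # purple node -> blue node it is matched to

        def augment(b, seen):
            for p in purple:
                if dist(b, p) <= tau and p not in seen:
                    seen.add(p)
                    # p is free, or its current partner can be re-matched
                    if p not in match or augment(match[p], seen):
                        match[p] = b
                        return True
            return False

        size = sum(augment(b, set()) for b in blue)
        return size, match

    taus = sorted({dist(b, p) for b in blue for p in purple})
    lo, hi, best = 0, len(taus) - 1, None
    while lo <= hi:  # binary search: feasibility is monotone in tau
        mid = (lo + hi) // 2
        size, match = max_matching(taus[mid])
        if size == len(blue):  # every Blue node matched at this tau
            best, hi = (taus[mid], match), mid - 1
        else:
            lo = mid + 1
    return best
```

Binary search is valid here because the set of feasible pairs only grows as τ increases, so the matching size is monotone in τ.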

The clustering returned by Zeus for the order (O_F, O_KC) achieves the optimal objective value for O_F and a constant multiple of OPT for k-center in the worst case, where OPT is the optimal objective value for the k-center objective.

Proof.

Let C be the clusters returned by Zeus with the order (O_F, O_KC), and let C' be the clusters generated on processing F (with M the edges in the matching returned). Since M is an optimal maximum matching and every pair of nodes connected in the matching is present in the same final cluster, the objective value of C with respect to O_F is the optimal objective value.

Since we perform a binary search to identify the smallest value of τ that admits the maximum matching, the weight of any matched edge is not greater than the maximum matched-edge weight in the optimal solution. When Zeus processes the k-center objective, the distance between any pair of points within a cluster is bounded by the 2-approximation guarantee [Vazirani2013]. Using the second property of LMOC, the optimal k-center value on this instance is no better than OPT. Zeus then iterates over the matched pairs of C' and assigns both nodes of each pair to the same cluster in C. Using the triangle inequality, the distance of a matched node from the center of its new cluster is at most the sum of (a) the distance between the node and its matched partner, and (b) the distance between the partner and its cluster center. Since (a) and (b) are each bounded by a constant multiple of OPT, the maximum distance of any point from its cluster center is bounded by a constant multiple of OPT. ∎

This shows that the slack is not violated for O_F and O_KC whenever the slack values meet the bounds above. Additionally, a slack below 1 for O_F is infeasible as the objective value can never be greater than the optimal value, and a k-center slack below 2 is infeasible due to the NP-hardness of approximating the k-center problem. Hence, the slack can be violated only for intermediate k-center slack values, in which case the local search technique helps improve the solution.

3.1.4 Team Formation (TF)

This objective is motivated by applications that require forming teams (clusters) such that certain attributes (experts in different fields) are equally represented across all clusters, irrespective of their connectivity with other nodes. Consider a scenario where each node carries an attribute denoting whether it is an expert or a non-expert. We consider this attribute to be binary, but the objective can be extended to multiple attributes, each having multiple values. The team formation objective, O_TF, aims to form clusters with an equal fraction of experts: each cluster has (approximately) |X|/k nodes from X, with X denoting the set of experts. The objective value measures how closely each cluster matches this balanced allocation of experts.

In order to handle the team formation objective along with the k-center objective, Algorithm 4 first performs constrained k-center on the set of expert vertices X. This step ensures that the k clusters generated are of equal size. When X = V, this objective is equivalent to generating balanced clusters (example in Figure 1), for which the current best solution is a 4-approximation [Ding2018]. Every node in V \ X is then assigned to the cluster corresponding to the closest node in X. The run time of Algorithm 4 is polynomial in the number of nodes.

The clustering returned by Zeus for the order (O_TF, O_KC) achieves balanced clusters of experts and a k-center objective value of at most 10 · OPT in the worst case, where OPT is the optimal objective value for the k-center objective.

Proof.

First, we bound the distance of the points in X from their corresponding centers; then we bound the distance of the points in V \ X. Let C* be the optimal set of clusters for the O_TF and O_KC objectives, with k-center objective value OPT. Hence the pairwise distance between any pair of nodes of C* that belong to the same cluster is at most 2 · OPT. Restricting C* to the set of nodes in X is a valid solution with respect to the O_TF objective; hence, the set of optimal balanced clusters on the nodes of X has a k-center objective value of at most 2 · OPT. Using the latest result for the balanced k-center problem, a 4-approximation [Ding2018], we get d(x, c_x) ≤ 8 · OPT for every x ∈ X, where c_x is the center of the cluster corresponding to x.

Additionally, the argument above ensures that for every node v ∈ V \ X, there exists some node in X at a distance of at most 2 · OPT (an expert in v's cluster in C*). Since Algorithm 4 identifies the closest node in X (Line 3), v's matched expert is within 2 · OPT. Using the triangle inequality, the distance of any node from its corresponding center is at most 2 · OPT + 8 · OPT = 10 · OPT. ∎

1: C ← balanced k-center on X
2: for each node v ∈ V \ X do
3:     x ← the node in X closest to v
4:     assign v to the cluster of x in C
5: end for
6: return C
Algorithm 4 makeshift_TF(G, k)
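The assignment step of Algorithm 4 can be sketched as follows, assuming the balanced clustering of the experts is already available (e.g., from a balanced k-center routine, which is not shown); all names are illustrative.

```python
def makeshift_tf_assign(nodes, experts, expert_clusters, dist):
    """Sketch of Lines 2-5 of Algorithm 4: every non-expert joins the
    cluster of its closest expert; `expert_clusters` is the balanced
    partition of the experts produced beforehand."""
    clusters = [set(c) for c in expert_clusters]
    owner = {x: i for i, c in enumerate(expert_clusters) for x in c}
    for v in nodes:
        if v in experts:
            continue  # experts are already placed
        nearest = min(experts, key=lambda x: dist(v, x))  # Line 3
        clusters[owner[nearest]].add(v)                   # Line 4
    return clusters
```

For points on a line with experts at 0 and 10 in separate balanced clusters, the non-experts 1 and 11 join the clusters of 0 and 10 respectively.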

This shows that the slack is not violated for O_TF and O_KC whenever the team formation slack is at least 1 and the k-center slack is at least 10. Furthermore, a team formation slack below 1 is infeasible as the objective value cannot be better than the optimal value, and a k-center slack below 2 is infeasible as it is NP-hard to obtain a better approximation [Vazirani2013]. In general, if an algorithm with a better approximation guarantee for the balanced k-center problem can be devised, the approximation ratio of Algorithm 4 improves accordingly. Assigning a node to the closest cluster center, rather than the closest expert, improved the solution quality empirically, even though it does not alter the theoretical guarantee. We employ this optimization in our experiments.

Figure 2: Solution quality of various approaches corresponding to the fairness and k-center objectives.

4 Experimental Results

We conduct extensive experiments to evaluate Zeus and the proposed makeshifts on three real-world datasets with the objectives discussed in the earlier sections. Fairness (F), resource sharing (RS), and team formation (TF) are each considered as the higher-priority objective, with k-center (KC) as the lower-priority objective in the lexicographic order. The F and KC objectives are evaluated on the Pokec social network dataset (https://snap.stanford.edu/data/soc-Pokec.html); the goal is to form clusters such that for every female in a cluster there is at least one male neighbor in the same cluster, while optimizing for k-center. The RS and KC objectives are evaluated on the academic conference dataset (https://core.ac.uk/services#dataset) to identify conferences that can co-occur or be co-located. The TF and KC objectives are evaluated on the adult dataset (https://archive.ics.uci.edu/ml/machine-learning-databases/adult/); the goal is to form teams such that nodes with the "tech-support" attribute are equally distributed across clusters, while optimizing for k-center. For the resource sharing application, the distances are estimated using an embedding in Euclidean space; for fairness and team formation, Jaccard distances are used.

We compare the results produced by Zeus with three baselines: (i) a greedy algorithm that optimizes the higher-priority objective independently; (ii) optimizing the k-center objective alone, using the 2-approximation [Vazirani2013]; and (iii) MOC, a greedy approach that optimizes both of the considered objectives, with equal weight given to each. The results are compared across different values of k and different slack values. Unless otherwise specified, all algorithms were implemented by us in Python using the networkx library on an 8GB RAM laptop, and the reported results are on 1000 nodes.

Figure 3: Solution quality of various approaches corresponding to the team formation and k-center objectives.
Figure 4: Solution quality of various approaches corresponding to the resource sharing and k-center objectives.
Figure 5: Effect of slack on solution quality.

4.1 Discussion

Solution Quality. Figure 2 compares the performance of Zeus with the three baselines for the fairness and k-center objectives, with slack values as guaranteed by Theorem 3. OPT denotes the optimal fairness objective value on the Pokec dataset. It is evident that Zeus achieves the optimal value for fairness, and its performance with respect to k-center is close to that of the baseline that optimizes k-center alone. The MOC baseline did not find clusters even after 24 hours on this problem; therefore, we compare its results on a smaller subset of this dataset with 100 nodes (3c, 3d). MOC performs well for the k-center objective but significantly compromises the solution quality for fairness. Figure 3 shows results for the team formation and k-center objectives, with slack values as guaranteed by Theorem 3.1.4. OPT denotes the optimal team formation objective value on the adult dataset. Zeus performs similar to the baseline that optimizes team formation alone for all values of k, whereas the k-center-only baseline provides solutions that are far from optimal for team formation. For the KC objective, the k-center-only baseline performs better than Zeus, as expected, and Zeus is better than the team-formation-only baseline. Although the worst-case approximation guarantee of Zeus is 10 times worse than the optimal (Thrm. 3.1.4), it performs better in practice. Figure 4 shows the results for the resource sharing and k-center objectives, with slack values as guaranteed by Theorem 2. OPT denotes the optimal resource sharing objective value on the conference dataset. It is evident that Zeus performs consistently better than the baselines for all values of k.

MOC did not converge on the full dataset for the team formation and resource sharing objectives but the results on 100 nodes were similar to that of fairness. Overall, Zeus consistently performs better than all three baselines, on all data sets in our experiments and for all values of . These experiments demonstrate the effectiveness of Zeus in optimizing multiple objectives, given a lexicographic order.

Slack. Increasing the slack of the higher-priority objective improves the performance of Zeus on k-center: Zeus performs similar to the baselines on the higher-priority objective, while performing better on k-center (Figures 6a, 6b). For the sake of consistency, we consider only feasible slack values. This demonstrates that by increasing the slack corresponding to the higher-priority objective, clustering with respect to lower-priority objectives can be improved. Similar results were observed for the other objective combinations.

Runtime. In our experiments, the run time of Zeus is linear in k, and Zeus took at most 30 minutes to form clusters for all values of k across all datasets.

5 Conclusion and Future Work

We introduce relaxed lexicographic multi-objective clustering (RLMOC), a general model for clustering with multiple objectives, given a lexicographic order and a slack. By altering the slack and the lexicographic order, a wide range of real-world problems can be efficiently modeled using RLMOC. We also present Zeus, an efficient algorithm that processes the different objective functions sequentially and leverages a makeshift subroutine to modify the clusters for a particular objective. Theoretical properties are discussed for the three makeshifts described in the paper. Our empirical results show that Zeus effectively optimizes the objectives, in terms of both solution quality and run time. Identifying makeshifts for various other objectives is an interesting direction for future work.

References

  • [Chen et al.2011] Wen-Yen Chen, Yangqiu Song, Hongjie Bai, Chih-Jen Lin, and Edward Y. Chang. Parallel spectral clustering in distributed systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3):568–586, 2011.
  • [Chierichetti et al.2017] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In Advances in Neural Information Processing Systems, 2017.
  • [Chu et al.2007] Rui Chu, Nong Xiao, and Xicheng Lu. A clustering model for memory resource sharing in large scale distributed system. In Proceedings of the IEEE International Conference on Parallel and Distributed Systems, 2007.
  • [Ding2018] Hu Ding. Faster balanced clusterings in high dimension. CoRR, 2018.
  • [Farhadi et al.2012] Farnoush Farhadi, Elham Hoseini, Sattar Hashemi, and Ali Hamzeh. Teamfinder: A co-clustering based framework for finding an effective team of experts in social networks. In Proceedings of the 12th IEEE International Conference on Data Mining Workshops, 2012.
  • [Galhotra et al.2017] Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. Fairness testing: Testing software for discrimination. In Proceedings of the 11th Joint Meeting on Foundations of Software Engineering, 2017.
  • [Handl and Knowles2007] Julia Handl and Joshua Knowles. An evolutionary approach to multiobjective clustering. IEEE Transactions on Evolutionary Computation, 11(1):56–76, 2007.
  • [Haraty et al.2015] Ramzi A. Haraty, Mohamad Dimishkieh, and Mehedi Masud. An enhanced k-means clustering algorithm for pattern discovery in healthcare data. International Journal of Distributed Sensor Networks, 2015.
  • [Jain et al.1999] Anil K. Jain, M. Narasimha Murty, and Patrick J. Flynn. Data clustering: A review. ACM Computing Surveys, 31(3):264–323, 1999.
  • [Jiamthapthaksin et al.2009] Rachsuda Jiamthapthaksin, Christoph F. Eick, and Ricardo Vilalta. A framework for multi-objective clustering and its application to co-location mining. In Proceedings of the International Conference on Advanced Data Mining and Applications, 2009.
  • [Kumar et al.2016] Dheeraj Kumar, Huayu Wu, Yu Lu, Shonali Krishnaswamy, and Marimuthu Palaniswami. Understanding urban mobility via taxi trip clustering. In Proceedings of the 17th IEEE International Conference on Mobile Data Management, 2016.
  • [Law et al.2004] Martin H.C. Law, Alexander P. Topchy, and Anil K. Jain. Multiobjective data clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2004.
  • [Pizzuti2018] Clara Pizzuti. Evolutionary computation for community detection in networks: A review. IEEE Transactions on Evolutionary Computation, 22(3):464–483, 2018.
  • [Rangcheng et al.2001] Jia Rangcheng, Ding Yuanyao, and Tang Shaoxiang. The discounted multi-objective Markov decision model with incomplete state observations: lexicographically order criteria. Mathematical Methods of Operations Research, 2001.
  • [Saha et al.2018] Sriparna Saha, Sayantan Mitra, and Stefan Kramer. Exploring multiobjective optimization for multiview clustering. ACM Transactions on Knowledge Discovery from Data (TKDD), 12(4):44, 2018.
  • [Saisubramanian et al.2015] Sandhya Saisubramanian, Pradeep Varakantham, and Hoong Chuin Lau. Risk based optimization for improving emergency medical systems. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015.
  • [Vazirani2013] Vijay V. Vazirani. Approximation Algorithms. Springer Science & Business Media, 2013.
  • [Veldt et al.2018] Nate Veldt, David F. Gleich, and Anthony Wirth. A correlation clustering framework for community detection. In Proceedings of the 27th Conference on World Wide Web (Web Conference), 2018.
  • [Wray et al.2015] Kyle Hollins Wray, Shlomo Zilberstein, and Abdel-Illah Mouaddib. Multi-objective MDPs with conditional lexicographic reward preferences. In Proceedings of the 29th Conference on Artificial Intelligence, 2015.
  • [Zhou et al.2011] Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagaratnam Suganthan, and Qingfu Zhang. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation, 1(1):32–49, 2011.

Appendix A Modification of Objectives

In this section, we describe variations of the resource sharing and fairness objective functions, and how the proposed makeshifts can be modified to work for these new objectives.

Resource Sharing (RS). The RS objective function described in Sec 3.1 ensures that every node has at least one neighbor in the same cluster. A generalization of RS considers a scenario where every node must have at least r neighbors in the same cluster. Let d* denote the optimal value of the maximum distance between any pair of matched vertices. This threshold is initialized to the maximum distance between any pair of vertices and is refined iteratively. To account for the modified objective, we can modify Algorithm 2 so that edges with weight greater than d* are pruned. If the residual graph contains a valid edge cover in which each node has degree at least r, then d* is valid. In each subsequent iteration, d* is updated by binary search, which quickly identifies the smallest d* that guarantees a valid edge cover.
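The threshold search above can be sketched as follows. This is a minimal illustration, not the paper's Algorithm 2: the function name, the symbols r and d, and the brute-force feasibility check (every node needs at least r neighbors within distance d) are our assumptions.

```python
from itertools import combinations

def min_valid_threshold(points, r, dist):
    """Binary-search the smallest pruning threshold d such that, after
    removing all edges heavier than d, every node still has at least r
    neighbors -- a necessary condition for a degree->=r edge cover."""
    nodes = list(range(len(points)))
    # Candidate thresholds are the distinct pairwise distances.
    weights = sorted({dist(points[u], points[v])
                      for u, v in combinations(nodes, 2)})

    def feasible(d):
        # Every node needs at least r neighbors within distance d.
        return all(sum(1 for v in nodes
                       if v != u and dist(points[u], points[v]) <= d) >= r
                   for u in nodes)

    lo, hi = 0, len(weights) - 1
    if not feasible(weights[hi]):
        return None  # even the complete graph is too sparse (n - 1 < r)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(weights[mid]):
            hi = mid
        else:
            lo = mid + 1
    return weights[lo]
```

Binary search over the sorted distinct edge weights keeps the number of feasibility checks logarithmic in the number of distinct distances, matching the refinement strategy described above.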

Fairness. The fairness objective can be modified to handle applications where the goal is to match members of a minority group to members of a majority group. Another modification considers more than two groups of members, where the goal is to construct a matching between all pairs of such groups. These variations can be handled easily by the makeshift described in Section 3.1, which calculates a b-matching for the nodes in the dataset. Figure 6 demonstrates the behavior of Zeus on the modified fairness objective, where nodes from one group are matched with nodes of the other.
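A minimal sketch of the minority-to-majority variant is shown below. It uses a greedy nearest-pair heuristic in which each majority member can host at most b partners; this is an illustrative relaxation of a b-matching, not the makeshift from Section 3.1, and the function name and distance interface are our assumptions.

```python
def greedy_group_matching(minority, majority, b, dist):
    """Greedily match each minority member to a majority member,
    allowing each majority member at most b partners.
    Returns, for each minority index, the index of its majority match
    (or None if capacity ran out)."""
    # Consider all cross-group pairs in order of increasing distance.
    pairs = sorted((dist(m, M), i, j)
                   for i, m in enumerate(minority)
                   for j, M in enumerate(majority))
    load = [0] * len(majority)
    match = [None] * len(minority)
    for _, i, j in pairs:
        if match[i] is None and load[j] < b:
            match[i] = j
            load[j] += 1
    return match
```

The greedy pass is not optimal in general, but it respects the capacity constraint b and tends to pair nearby cross-group members, which is the behavior the modified objective rewards.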

Appendix B Makeshift for Other Classical Clustering Objectives

Section 3.1 in the main paper describes the makeshifts for different sets of objectives studied in the literature when they are employed along with the k-center objective. We now describe variations of these objectives for the k-median objective. The algorithm described by Vazirani (2013) is a popular approach for k-median (kM) clustering and can be used as the makeshift for kM. We show how to adapt the makeshift for the resource sharing, fairness, and team formation objectives when they are applied along with the k-median objective.
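For concreteness, a simple single-swap local search for k-median is sketched below. This is a commonly used constant-factor heuristic, not the LP-based algorithm from Vazirani's book; the function name and distance interface are our assumptions.

```python
def kmedian_local_search(points, k, dist):
    """Single-swap local search for k-median: repeatedly swap one
    center for one non-center whenever the swap lowers the total
    assignment cost."""
    def cost(centers):
        # k-median cost: sum of each point's distance to its nearest center.
        return sum(min(dist(p, c) for c in centers) for p in points)

    centers = list(points[:k])  # arbitrary initial centers
    improved = True
    while improved:
        improved = False
        for out in centers:
            for cand in points:
                if cand in centers:
                    continue
                trial = [c for c in centers if c != out] + [cand]
                if cost(trial) < cost(centers):
                    centers = trial
                    improved = True
                    break  # restart the scan after every accepted swap
            if improved:
                break
    return centers, cost(centers)
```

Restarting the scan after each accepted swap keeps the center set at exactly k elements and terminates because the cost strictly decreases with every swap.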

Resource Sharing. The makeshift proposed in Sec 3.1 for resource sharing and k-center works for the k-median objective as well. This is because the algorithm returns the optimal edge cover, which minimizes the maximum weight of any edge in the cover along with the sum of the weights of the edges in the cover.

Fairness. The makeshift proposed in Sec 3.1 for fairness and k-center can be modified to work with the k-median objective. The same algorithm works for k-median with a slight modification: all edges are considered while computing the matching, and the weight of each edge acts as its cost. With this construction of the bipartite graph, a minimum-cost matching is generated, which guarantees that the total distance between matched pairs of vertices is minimized.
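The minimum-cost matching step can be illustrated with a brute-force sketch over two equal-size groups. Enumerating permutations is only viable for tiny inputs; in practice one would use a polynomial-time method such as the Hungarian algorithm. The function name and distance interface are our assumptions.

```python
from itertools import permutations

def min_cost_matching(group_a, group_b, dist):
    """Brute-force minimum-cost perfect matching between two
    equal-size groups, minimizing the total matched distance."""
    n = len(group_a)
    assert len(group_b) == n, "groups must have equal size"
    best_cost, best = float("inf"), None
    for perm in permutations(range(n)):
        # Total cost of matching a_i to b_{perm[i]} for all i.
        c = sum(dist(group_a[i], group_b[perm[i]]) for i in range(n))
        if c < best_cost:
            best_cost, best = c, [(i, perm[i]) for i in range(n)]
    return best, best_cost
```

Minimizing the sum of matched distances is exactly the quantity the k-median variant of the fairness makeshift optimizes, as opposed to the maximum matched distance used with k-center.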

Team Formation. Instead of running the k-center algorithm on the set of nodes, we run the k-median algorithm; the makeshift then works the same way as described in Algorithm 4.

Figure 6: Results on modified Fairness objective.