1. Introduction
Clustering is one of the most fundamental and popular methods for data classification. In this paper we initiate a rigorous theoretical study of clustering with the help of a 'faulty oracle', a model that captures many application scenarios and has drawn significant attention in recent years. (A prior version of some of the results of this work appeared previously on arXiv [42, Sec. 6], see https://arxiv.org/abs/1604.01839. In this version we rewrote several proofs for clarity and included many new results.)
Suppose we are given a set of $n$ points that need to be clustered into $k$ clusters, where $k$ is unknown to us. Suppose there is an oracle that can answer pairwise queries of the form, "do $u$ and $v$ belong to the same cluster?". Repeating the same question to the oracle always returns the same answer, but the answer could be wrong with probability $p < \frac{1}{2}$ (that is, slightly better than a random answer). We are interested in finding the minimum number of queries needed to recover the true clusters with high probability. Understanding the query complexity of the faulty oracle model is a fundamental theoretical question [23], with many existing works on sorting and selection [5, 6] where queries are erroneous with probability $p$, and repeating the same question does not change the answer. Here we study the basic clustering problem under this setting, which also captures several fundamental applications. Throughout the paper, 'noisy oracle' and 'faulty oracle' have the same meaning.
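To make the query model concrete, here is a minimal simulation of such a faulty oracle (an illustrative sketch; the class and method names are ours, not from the paper). The key property is that each pair's noisy answer is drawn once and cached, so repeating a query never changes it:

```python
import random

class FaultyOracle:
    """Noisy same-cluster oracle: each pair's answer is drawn once with
    error probability p, and repeating a query returns the cached answer.
    (Illustrative sketch; names are our own, not from the paper.)"""

    def __init__(self, cluster_of, p, seed=0):
        self.cluster_of = cluster_of   # dict: point -> ground-truth cluster id
        self.p = p
        self.rng = random.Random(seed)
        self.cache = {}                # persistent answer per unordered pair

    def same_cluster(self, u, v):
        key = (min(u, v), max(u, v))
        if key not in self.cache:
            truth = self.cluster_of[u] == self.cluster_of[v]
            flip = self.rng.random() < self.p
            self.cache[key] = truth != flip    # XOR: flip the truth w.p. p
        return self.cache[key]

# A repeated query never changes its answer:
oracle = FaultyOracle({0: 'A', 1: 'A', 2: 'B'}, p=0.3)
first = oracle.same_cluster(0, 1)
assert all(oracle.same_cluster(0, 1) == first for _ in range(10))
```

The cache is exactly what distinguishes this model from settings where repetition and majority voting can reduce the error.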
Crowdsourced Entity Resolution. Entity resolution (ER) is an important data mining task that tries to identify all records in a database that refer to the same underlying entity. Starting with the seminal work of Fellegi and Sunter [24], numerous algorithms using a variety of techniques have been developed for ER [22, 26, 37, 17]. Still, due to ambiguity in representation and poor data quality, the accuracy of automated ER techniques has been unsatisfactory. To remedy this, a recent trend in ER has been to use humans in the loop. In this setting, humans are adaptively asked simple pairwise queries, "do $u$ and $v$ represent the same entity?", and these answers are used to improve the final accuracy [28, 50, 52, 25, 48, 19, 27, 35, 43]. The proliferation of crowdsourcing platforms like Amazon Mechanical Turk (AMT), CrowdFlower, etc., allows for easy implementation. However, data collected from non-expert workers on crowdsourcing platforms are inevitably noisy. A simple scheme to reduce errors could be to take a majority vote after asking the same question to multiple independent crowd workers. However, often that is not sufficient. Our experiments on several real datasets with answers collected from AMT [29, 48] show that majority voting could sometimes even increase the errors. Interestingly, such an observation has been made by a recent paper as well [47]. There are more complex querying models [47, 51, 49], and involved heuristics [29, 48], to handle errors in this scenario. Let $p$ be the probability of error (an approximation of $p$ can often be estimated manually from a small sample of crowd answers)
of a query answer, which might also be the aggregated answer after repeating the query several times. Therefore, once the answer has been aggregated, it cannot change. In all crowdsourcing works, the goal is to minimize the number of queries to reduce the cost and time of crowdsourcing, and to recover the entities (clusters). This is exactly clustering with a noisy oracle. While several heuristics have been developed [48, 28, 49], here we provide a rigorous theory with near-optimal algorithms and hardness bounds.

Signed Edge Prediction. The edge sign prediction problem can be defined as follows. Suppose we are given a social network with signs on all its edges, but the sign from node $u$ to $v$, denoted by $s(u, v)$, is hidden. The goal is to recover these signs as best as possible using the minimal amount of information. Social interactions or sentiments can be both positive ("like", "trust") and negative ("dislike", "distrust"). [38] provides several such examples; e.g., Wikipedia, where one can vote for or against the nomination of others to adminship [8], or Epinions and Slashdot, where users can express trust or distrust, or can declare others to be friends or foes [7, 36]. Initiated by [9, 32], many techniques and related models using convex optimization, low-rank approximation and learning theoretic approaches have been used for this problem [15, 10, 12]. Recently [14, 12, 44] proposed the following model for edge sign prediction: we can query a pair of nodes $u, v$ to test whether $s(u, v) = +1$, indicating $u$ and $v$ belong to the same cluster, or $s(u, v) = -1$, indicating they are not. However, the query fails to return the correct answer with probability $p$, and we want to query the minimal possible number of pairs. This is exactly the case of clustering with a noisy oracle. Our result significantly improves over, and generalizes, [14, 12, 44].
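The contrast between persistent and fresh noise can be seen in a small simulation (illustrative only; the aggregation model and function name are our assumptions). If each re-ask were answered with independent noise, majority voting would drive the error below $p$; with a persistent oracle, re-asking returns the same cached answer, so the error stays exactly $p$ no matter how often the question is repeated:

```python
import random

def fresh_majority_error(p, reps, trials=20000, seed=0):
    """Empirical error of a majority vote over `reps` *independent* noisy
    answers, each wrong with probability p (a hypothetical aggregation
    model, used only to contrast with the persistent oracle)."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        errs = sum(rng.random() < p for _ in range(reps))
        wrong += errs > reps // 2   # majority of answers is wrong
    return wrong / trials

# With independent re-asks, majority voting helps (error drops below p);
# with a persistent oracle, repetition changes nothing and the error is p.
p = 0.3
assert fresh_majority_error(p, 9) < p
```

This is why, once answers are aggregated (or workers are correlated), the model with a single frozen error probability $p$ per pair is the right abstraction.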
Correlation Clustering. In fact, when all pairwise queries are given, and the goal is to recover the maximum likelihood (ML) clustering, our problem is equivalent to noisy correlation clustering [4, 41]. Introduced by [4], correlation clustering is an extremely well-studied model of clustering. We are given a graph with each edge labeled either $+1$ or $-1$; the goal of correlation clustering is to either (a) minimize the number of disagreements, that is, the number of intra-cluster $-1$ edges and inter-cluster $+1$ edges, or (b) maximize the number of agreements, that is, the number of intra-cluster $+1$ edges and inter-cluster $-1$ edges. Correlation clustering is NP-hard, but can be approximated well with provable guarantees [4]. In the random noise model, also introduced by [4] and studied further by [41], we start with a ground truth clustering, and then each edge label is flipped with probability $p$. This is exactly the graph we observe if we make all possible pairwise queries, and the ML decoding coincides with correlation clustering. The proposed algorithm of [4] can in this case recover all sufficiently large clusters, and if "all" the clusters are sufficiently large, then they can be recovered by [41]. Using our proposed algorithms for clustering with a noisy oracle, we can also recover significantly smaller clusters provided the number of clusters is not too large. Such a result is possible to obtain using the repeated-peeling technique of [3]; however, our running time is significantly better, since the running time of [3] is dominated by the time to solve a convex optimization over an $n$-vertex graph.
Stochastic Block Model (SBM). Clustering with a faulty oracle is intimately connected with the planted partition model, also known as the stochastic block model [34, 21, 20, 2, 1, 30, 16, 45]. The stochastic block model is an extremely well-studied model of random graphs where two vertices within the same community share an edge with probability $p$, and two vertices in different communities share an edge with probability $q$. It is often assumed that $k$, the number of communities, is a constant (e.g., $k = 2$ is known as the planted bisection model and is studied extensively [1, 45, 21]) or a slowly growing function of $n$. There is an extensive literature on characterizing the threshold phenomenon in SBM in terms of the gap between $p$ and $q$ (e.g., see [2] and the references therein) for exact and approximate recovery of clusters of nearly equal size (most recent works consider the region of interest where $p$ and $q$ scale as $\frac{\log n}{n}$). If we allow different probabilities of error for pairs of elements based on whether they belong to the same cluster or not, then the resultant faulty oracle model is an intriguing generalization of SBM: consider the probability of error for a query on $(u, v)$ to be $p$ if $u$ and $v$ belong to the same cluster and $q$ otherwise; but now, we can only learn a subset of the entries of an SBM matrix by querying adaptively. Understanding how the threshold of recovery changes for such an "incomplete" or "space-efficient" SBM will be a fascinating direction to pursue. In fact, our lower bound results extend to asymmetric probability values, while designing efficient algorithms and sharp thresholds are ongoing works. In [13], a locality model where measurements can only be obtained for nearby nodes is studied for two clusters with non-adaptive querying and allowing repetitions. It would also be interesting to extend our work with such locality constraints.
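The connection to SBM can be made explicit with a small sketch (in our own notation): if every pair were queried, the observed "yes" graph is exactly a stochastic block model in which same-cluster pairs get an edge with probability $1-p$ and different-cluster pairs with probability $q$:

```python
import random

def sbm_from_oracle(clusters, p, q, seed=0):
    """Adjacency matrix observed if *all* pairs were queried: same-cluster
    pairs get an edge w.p. 1-p, different-cluster pairs w.p. q (the
    asymmetric faulty-oracle view of an SBM; sketch with our notation)."""
    rng = random.Random(seed)
    n = sum(len(c) for c in clusters)
    label = {}
    for cid, c in enumerate(clusters):
        for v in c:
            label[v] = cid
    A = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            prob = (1 - p) if label[u] == label[v] else q
            A[u][v] = A[v][u] = int(rng.random() < prob)
    return A
```

Adaptive querying corresponds to revealing only a chosen subset of the entries of this matrix, which is the "incomplete SBM" view discussed above.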
Contributions. Formally, clustering with a faulty oracle is defined as follows.
Problem (QueryCluster).
Consider a set $V$ of $n$ points containing $k$ latent clusters $V_1$, $V_2$, $\dots$, $V_k$, where $k$ and the subsets $V_i$ are unknown. There is an oracle with two error parameters $p, q$. The oracle takes as input a pair of vertices $u, v \in V$; if $u, v$ belong to the same cluster then it answers $+1$ with probability $1-p$ and $-1$ with probability $p$. On the other hand, if $u, v$ do not belong to the same cluster then it answers $-1$ with probability $1-q$ and $+1$ with probability $q$. Such an oracle is called a binary asymmetric channel. A special case is $p = q$, the binary symmetric channel, where the error rate is the same for all pairs. Except for the lower bound, we focus on the symmetric case in this paper. Note that the oracle returns the same answer on repetition.
Now, given $V$, find a set of queries $Q$ such that $|Q|$ is minimum, and from the oracle answers to $Q$ it is possible to recover $V_1, V_2, \dots, V_k$ with high probability (high probability means with probability $1 - o_n(1)$, where $o_n(1) \to 0$ as $n \to \infty$).
Our contributions are as follows.
Lower Bound (Section 2). We show that $\Omega\big(\frac{nk}{D_{JS}(p\|q)}\big)$ is the information-theoretic lower bound on the number of adaptive queries required to obtain the correct clustering with high probability, even when the clusters are of similar size (see Theorem 1). Here $D_{JS}(p\|q)$ is the Jensen-Shannon divergence between the Bernoulli($p$) and Bernoulli($q$) distributions. For the symmetric case, that is when the error probability is $p$ for every pair, the relevant divergence is $D_{JS}(p\|1-p) = \Theta((1-2p)^2)$. In particular, if $p = \frac{1}{2} - \lambda$, our lower bound on the query complexity is $\Omega\big(\frac{nk}{\lambda^2}\big)$.
Developing lower bounds in the interactive setting, especially with noisy answers, appears to be significantly challenging, as popular techniques based on Fano-type inequalities for multiple hypothesis testing [11, 39] do not apply, and we believe our technique will be useful in other noisy interactive learning settings.
Information-Theoretic Optimal Algorithm (Section 3). For the symmetric error case, we design an algorithm which asks at most $O\big(\frac{nk \log n}{(1-2p)^2}\big)$ queries (Theorem 2), matching the lower bound within an $O(\log n)$ factor.
Computationally Efficient Algorithm (Section 3.2). We next design an algorithm that is computationally efficient and asks at most a factor $\mathrm{poly}(k, \frac{1}{1-2p})$ more queries than the lower bound. Note that most prior works in SBM, or works on edge sign detection, only consider the case when $k$ is a constant [2, 30, 16], even just $k = 2$ [45, 1, 14, 12, 44]. As long as $k$ is not too large, we get a running time close to linear in $n$. We can use this algorithm to recover all sufficiently large clusters for correlation clustering on the noisy graph, improving upon the results of [4, 41]. The algorithm is also significantly faster than [3], whose running time is dominated by the time to solve a convex optimization over an $n$-vertex graph.
Non-adaptive Algorithm (Section 4). When the queries must be made upfront, for $k = 2$ we give a simple, time-efficient algorithm whose query complexity improves upon [44], where a polynomial time algorithm with a larger number of queries is shown, and over [14, 12], where more queries are required under certain conditions on the clusters. Our result generalizes to larger $k$, and we show interesting lower bounds in this setting. Further, we derive a new lower bound showing a trade-off between the number of queries and the threshold of recovery for incomplete SBM in Sec. 4.1.
2. Lower bound for the faulty-oracle model
Note that we are not allowed to ask the same question multiple times to get the correct answer. In this case, even for probabilistic recovery, a minimum bound on the cluster sizes is required. For example, consider two different clusterings that differ only in which of two clusters a single element belongs to. If one of these two clusterings is given to us uniformly at random, then no matter how many queries we make, we will fail to recover the correct clustering with positive probability. Therefore, the challenge in proving lower bounds is when the clusters all have size more than a minimum threshold, or when they are all nearly balanced. This removes the constraint on the algorithm designer on how many times a cluster can be queried with a vertex, and the algorithms have greater flexibility. We define a clustering to be balanced if either of the following two conditions holds: (1) the maximum size of a cluster is $O(\frac{n}{k})$, or (2) the minimum size of a cluster is $\Omega(\frac{n}{k})$. It is much harder to prove lower bounds when the clustering is balanced.
Our main lower bound in this section uses the Jensen-Shannon (JS) divergence. The well-known KL divergence between two probability mass functions $P$ and $Q$ is defined as
$$D(P \| Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}.$$
Further define the JS divergence as
$$D_{JS}(P \| Q) = \frac{1}{2} D\Big(P \,\Big\|\, \frac{P+Q}{2}\Big) + \frac{1}{2} D\Big(Q \,\Big\|\, \frac{P+Q}{2}\Big).$$
In particular, the KL and JS divergences between two Bernoulli random variables with parameters $p$ and $q$ are denoted by $D(p \| q)$ and $D_{JS}(p \| q)$ respectively.

Theorem 1 (QueryCluster Lower Bound).
Any (randomized) algorithm must make $\Omega\big(\frac{nk}{D_{JS}(p \| q)}\big)$ queries in expectation to recover the correct clustering with at least constant probability, even when the clustering is known to be balanced.
Note that the lower bound is most effective when $p$ and $q$ are close. Moreover, our actual lower bound is slightly tighter; the precise expression for the expected number of queries required appears in the proof.
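For concreteness, the Bernoulli KL and JS divergences appearing in the bound can be computed directly (a small sketch with our own helper names). The quadratic decay of $D_{JS}(p \| 1-p)$ as $p \to \frac{1}{2}$ is exactly what produces the $\frac{nk}{(1-2p)^2}$ form of the symmetric-case bound:

```python
from math import log

def kl_bern(a, b):
    """KL divergence D(Bern(a) || Bern(b)) in nats."""
    return a * log(a / b) + (1 - a) * log((1 - a) / (1 - b))

def js_bern(a, b):
    """Jensen-Shannon divergence between Bern(a) and Bern(b):
    the average KL to the midpoint distribution."""
    m = (a + b) / 2
    return 0.5 * kl_bern(a, m) + 0.5 * kl_bern(b, m)

# Halving the gap lambda in p = 1/2 - lambda shrinks the divergence
# by roughly a factor of 4, i.e., D_JS scales like (1 - 2p)^2.
assert 3.5 < js_bern(0.45, 0.55) / js_bern(0.475, 0.525) < 4.5
```

Plugging such values into $\frac{nk}{D_{JS}}$ gives a quick numeric feel for how the required number of queries blows up as the oracle approaches a random answerer.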
We take $V$ to be the element set to be clustered, $|V| = n$. To prove Theorem 1, we first show that if the number of queries is small, then there exist a large number of clusters that are not queried with sufficiently often. Then we show that, since the sizes of the clusters cannot be too large or too small, there exists a decent number of vertices in these clusters.
Lemma 1.
Suppose there are $k$ clusters. There exist at least $\frac{k}{2}$ clusters such that an element $v$ from any one of these clusters will be assigned to a wrong cluster by any randomized algorithm with positive probability, unless the total number of queries involving $v$ is $\Omega\big(\frac{k}{D(q \| 1-p)}\big)$.
Proof.
Our first task is to cast the problem as a hypothesis testing problem.
Step 1: Setting up the hypotheses. Let us assume that the clusters are already formed, and moreover that all elements except for one element $v$ have already been assigned to a cluster. Note that queries that do not involve $v$ play no role at this stage.
Now the problem reduces to a hypothesis testing problem where the $i$th hypothesis $H_i$, for $i = 1, \dots, k$, denotes that the true cluster for $v$ is $V_i$. We can also add a null hypothesis $H_0$ that stands for the vertex belonging to none of the $k$ clusters (since $k$ is unknown, this is a hypothetical possibility for any algorithm; the lower bound easily extends to the case when $k$ is known). Let $P_i$ denote the joint probability distribution of our observations (the answers to the queries involving vertex $v$) when $H_i$ is true, $i = 0, 1, \dots, k$. That is, for any event $A$ we have $P_i(A) = \Pr(A \mid H_i)$. Suppose $T$ denotes the total number of queries made by a (possibly randomized) algorithm at this stage before assigning a cluster. Also let $Y$ be the $T$-dimensional binary vector that is the result of the queries. The assignment is based on $Y$. Let the random variable $T_i$ denote the number of queries involving cluster $V_i$. In the second phase, we need to identify a set of clusters that are not queried enough by the algorithm.

Step 2: A set of "weak" clusters. We must have $\sum_{i=1}^{k} \mathbb{E}_0[T_i] \le \mathbb{E}_0[T]$. Let
$$J \equiv \Big\{ i \in \{1, \dots, k\} : \mathbb{E}_0[T_i] \le \frac{2\,\mathbb{E}_0[T]}{k} \Big\}.$$
Since $\sum_{i \notin J} \mathbb{E}_0[T_i] \le \mathbb{E}_0[T]$, we have $|\{1, \dots, k\} \setminus J| \le \frac{k}{2}$. That is, there exist at least $\frac{k}{2}$ clusters, in each of which fewer than $\frac{2\,\mathbb{E}_0[T]}{k}$ queries (on average under $P_0$) were made before assignment. These are the "weak" clusters, and their number is at least $\frac{k}{2}$.

Now let us assume that we are given an element $v \in V_i$ for some $i \in J$ to cluster ($H_i$ being the true hypothesis). The probability of correct clustering is $P_i(Y \text{ results in assigning } v \text{ to } V_i)$. In the last step, we give an upper bound on the probability of correct assignment for this element.
Step 3: Bounding the probability of correct assignment for weak cluster elements. We must have
$$P_i(\text{correct assignment}) \le P_0(\text{correct assignment}) + \|P_i - P_0\|_{TV} \le P_0(\text{correct assignment}) + \sqrt{\tfrac{1}{2} D(P_0 \| P_i)},$$
where we again used the definition of the total variation distance, and in the last step we have used Pinsker's inequality [18]. The task is now to bound the divergence $D(P_0 \| P_i)$. Recall that $P_0$ and $P_i$ are the joint distributions of the independent random variables (answers to queries), each of which is identical to one of two Bernoulli random variables: the answer to a query between $v$ and the cluster $V_i$, which is Bernoulli($1-p$) under $H_i$, or the answer to any other query, which is Bernoulli($q$). Let $Y_1, \dots, Y_T$ denote the outputs of the $T$ queries, all independent random variables. We must have, from the chain rule [18],
$$D(P_0 \| P_i) = \sum_{j=1}^{T} D(P_0(Y_j) \| P_i(Y_j)).$$
Note that, for the random variable $Y_j$, the term $D(P_0(Y_j) \| P_i(Y_j))$ contributes $D(q \| 1-p)$ only when the $j$th query involves the cluster $V_i$; otherwise the term contributes $0$. Hence, recalling that $T_i$ is the number of queries involving $V_i$,
$$D(P_0 \| P_i) = \mathbb{E}_0[T_i] \cdot D(q \| 1-p).$$
Now plugging this in, the probability of correct assignment remains bounded away from $1$ if $\mathbb{E}_0[T_i] \le \frac{c}{D(q \| 1-p)}$ for a small enough constant $c$ and large enough $k$. Had we bounded the total variation distance with $\sqrt{\tfrac{1}{2} D(P_i \| P_0)}$ in Pinsker's inequality, then we would have $D(1-p \| q)$ in the denominator. Obviously, the smaller of $D(q \| 1-p)$ and $D(1-p \| q)$ would give the stronger lower bound. ∎
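The Pinsker step above can be sanity-checked numerically (an illustrative sketch with our own helper names), confirming $\|P - Q\|_{TV} \le \sqrt{D(P\|Q)/2}$ on a grid of Bernoulli pairs:

```python
from math import log, sqrt

def kl_bern(a, b):
    """KL divergence D(Bern(a) || Bern(b)) in nats."""
    return a * log(a / b) + (1 - a) * log((1 - a) / (1 - b))

def tv_bern(a, b):
    """Total variation distance between Bern(a) and Bern(b)."""
    return abs(a - b)

# Pinsker's inequality: TV(P, Q) <= sqrt(KL(P || Q) / 2).
# A quick numeric check over a grid of Bernoulli pairs (illustrative).
for a in (0.1, 0.3, 0.49, 0.7):
    for b in (0.2, 0.5, 0.8):
        assert tv_bern(a, b) <= sqrt(kl_bern(a, b) / 2) + 1e-12
```

The pair $(a, b) = (0.49, 0.5)$ shows the inequality is nearly tight when the two distributions are close, which is precisely the hard regime for the lower bound.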
Now we are ready to prove Theorem 1.
Proof of Theorem 1.
We will show the claim by considering a balanced input. Recall that for a balanced input either the maximum size of a cluster is $O(\frac{n}{k})$ or the minimum size of a cluster is $\Omega(\frac{n}{k})$. We will consider the two cases separately for the proof.

Case 1: the maximum size of a cluster is $O(\frac{n}{k})$.

Suppose the total number of queries is $Q$; then the average number of queries a vertex is involved in is $\frac{2Q}{n}$. Note that there are $k$ clusters and $n$ elements. Let $B$ be the set of vertices that are involved in fewer than $\frac{8Q}{n}$ queries each. By Markov's inequality, $|B| \ge \frac{3n}{4}$.

Now we know from Lemma 1 that there exist $\Omega(k)$ clusters such that a vertex from any one of these clusters will be assigned to a wrong cluster by any randomized algorithm with positive probability, unless the expected number of queries involving this vertex is $\Omega\big(\frac{k}{D(q \| 1-p)}\big)$.

We claim that $B$ must have an intersection with at least one of these clusters. If not, then the at least $\frac{3n}{4}$ vertices of $B$ must belong to the remaining clusters, which would force the maximum size of a cluster above the bound permitted by our assumption.

Now each vertex in the intersection of $B$ and these clusters is assigned to an incorrect cluster with positive probability unless it is involved in $\Omega\big(\frac{k}{D(q \| 1-p)}\big)$ queries. Since every vertex of $B$ is involved in fewer than $\frac{8Q}{n}$ queries, we must have $\frac{8Q}{n} = \Omega\big(\frac{k}{D(q \| 1-p)}\big)$, that is, $Q = \Omega\big(\frac{nk}{D(q \| 1-p)}\big)$.
Case 2: the minimum size of a cluster is $\Omega(\frac{n}{k})$.

Let $\mathcal{C}$ be the set of clusters that are each involved in at most $\frac{8Q}{k}$ queries, where $Q$ is again the total number of queries. Since each query involves at most two clusters, at most $\frac{k}{4}$ clusters can be involved in more than $\frac{8Q}{k}$ queries each; this implies $|\mathcal{C}| \ge \frac{3k}{4}$. Now we know from Lemma 1 that there exist at least $\frac{k}{2}$ clusters (call this collection $\mathcal{C}'$) such that a vertex from any one of them will be assigned to a wrong cluster by any randomized algorithm with positive probability unless the expected number of queries involving this vertex is $\Omega\big(\frac{k}{D(q \| 1-p)}\big)$. Quite clearly $\mathcal{C} \cap \mathcal{C}' \neq \emptyset$.

Consider a cluster $C \in \mathcal{C} \cap \mathcal{C}'$, which is always possible because the intersection is non-empty. $C$ is involved in at most $\frac{8Q}{k}$ queries. Since the minimum size of any cluster is $\Omega(\frac{n}{k})$, at least half of the vertices of $C$ must each be involved in $O(\frac{Q}{n})$ queries. But each of these vertices must be involved in $\Omega\big(\frac{k}{D(q \| 1-p)}\big)$ queries (see Lemma 1) to avoid being assigned to a wrong cluster with positive probability. This means $\frac{Q}{n} = \Omega\big(\frac{k}{D(q \| 1-p)}\big)$, or $Q = \Omega\big(\frac{nk}{D(q \| 1-p)}\big)$. ∎
3. Algorithms
In this section, we first develop an information-theoretically optimal algorithm for clustering with a faulty oracle, within an $O(\log n)$ factor of the optimal query complexity. Next, we show how the ideas can be extended to make it computationally efficient. We consider both the adaptive and non-adaptive versions.
3.1. InformationTheoretic Optimal Algorithm
Let $V_1, \dots, V_k$ be the true clustering, and let $\hat{V}_1, \dots, \hat{V}_{\hat{k}}$ be the maximum likelihood (ML) estimate of the clustering that can be found when all $\binom{n}{2}$ queries have been made to the faulty oracle. Our first result obtains a query complexity upper bound within an $O(\log n)$ factor of the information-theoretic lower bound. The algorithm runs in quasi-polynomial time, and we show this is optimal assuming the famous planted clique hardness. We then show how the ideas can be extended to make the algorithm computationally efficient in Section 3.2.
In particular, we prove the following theorem.
Theorem 2.
There exists an algorithm with query complexity $O\big(\frac{nk \log n}{(1-2p)^2}\big)$ for QueryCluster that returns the ML estimate with high probability when query answers are incorrect with probability $p < \frac{1}{2}$. Moreover, the algorithm returns all true clusters of $V$ of size at least $c \log n$, for a suitable constant $c$, with high probability.
Remark 1.
As $p \to \frac{1}{2}$, $D_{JS}(p \| 1-p) = \Theta((1-2p)^2)$, so Theorem 2 matches the query complexity lower bound within an $O(\log n)$ factor. Thus our upper bound is within a logarithmic factor of the information-theoretic optimum in this range.
Finding the Maximum Likelihood Clustering of $V$ with a faulty oracle
We can view the clustering problem as follows. We have an undirected graph $G(V, E)$, such that $G$ is a union of $k$ disjoint cliques $G_i(V_i, E_i)$, $i = 1, \dots, k$. The subsets $V_i$ are unknown to us; they are called the clusters of $V$. The adjacency matrix of $G$ is a block-diagonal matrix; let us denote this matrix by $A$.

Now suppose each edge of $G$ is erased independently with probability $p$, and at the same time each non-edge is replaced with an edge with probability $p$. Let the adjacency matrix of the resultant graph be $Z$. The aim is to recover $A$ (equivalently, the clusters $V_1, \dots, V_k$) from $Z$.
Lemma 2.
The maximum likelihood recovery is given by the following:
$$(\hat{V}_1, \dots, \hat{V}_{\hat{k}}) = \arg\max \Pr(Z \mid V_1', \dots, V_{k'}'),$$
where the maximum is over all possible clusterings $V_1', \dots, V_{k'}'$ of $V$. Therefore, denoting by $A'$ the block-diagonal adjacency matrix corresponding to the clustering $V_1', \dots, V_{k'}'$, the ML recovery asks for
$$\arg\max \prod_{u \ne v} p^{\mathbb{1}[Z_{u,v} \ne A'_{u,v}]} (1-p)^{\mathbb{1}[Z_{u,v} = A'_{u,v}]}.$$
Note that, since $p < \frac{1}{2}$, this product is maximized by maximizing the number of agreements between $Z$ and $A'$; and since the total number of "no" answers does not depend on the clustering, this is in turn the same as maximizing the total intra-cluster weight. Hence the ML estimation is

(1) $\quad \hat{V}_1, \dots, \hat{V}_{\hat{k}} = \arg\max_{k';\, V_1', \dots, V_{k'}'} \sum_{i=1}^{k'} \; \sum_{u \ne v:\, u, v \in V_i'} \omega_{u,v},$

where $\omega_{u,v} = \pm 1$, i.e., $\omega_{u,v} = +1$ when $Z_{u,v} = 1$ and $\omega_{u,v} = -1$ when $Z_{u,v} = 0$. We will use this fact to prove Theorem 2 and Theorem 3 below.
Note that (1) is equivalent to finding a correlation clustering in $Z$ with the objective of maximizing consistency with the edge labels; that is, we want to maximize the total number of positive intra-cluster edges plus the total number of negative inter-cluster edges [4, 41, 40]. This can be seen as follows: the number of negative inter-cluster edges equals the total number of negative edges minus the number of negative intra-cluster edges, and the total number of negative edges does not depend on the clustering; hence the correlation clustering objective and the total intra-cluster weight of (1) differ by a constant. Therefore (1) is the same as correlation clustering. However, going forward we will view it as obtaining clusters with maximum intra-cluster weight; that will help us obtain the desired running time of our algorithm. Also, note that we have a random instance of correlation clustering here, not a worst-case instance.
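The constant-shift argument can be checked mechanically (a sketch; the helper names and the toy instance are ours): for any fixed set of $\pm 1$ labels, the difference in agreements between two clusterings equals the difference in their intra-cluster weights:

```python
from itertools import combinations

def intra_weight(clusters, w):
    """Total +/-1 weight inside clusters: the quantity maximized in (1)."""
    return sum(w[min(u, v), max(u, v)]
               for c in clusters for u, v in combinations(sorted(c), 2))

def agreements(clusters, w, points):
    """Correlation-clustering agreements: +1 intra-cluster pairs plus
    -1 inter-cluster pairs."""
    label = {v: i for i, c in enumerate(clusters) for v in c}
    agree = 0
    for u, v in combinations(sorted(points), 2):
        same = label[u] == label[v]
        agree += (same and w[u, v] == 1) or (not same and w[u, v] == -1)
    return agree

# The two objectives differ by the (clustering-independent) number of
# -1 labels, so maximizing intra-cluster weight maximizes agreements.
```

A quick way to convince oneself: compare any two clusterings of the same point set under any labeling and observe that the two objective differences coincide.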
Algorithm 1. The algorithm that we propose has several phases. The main idea is as follows. We start by selecting a small subset of vertices, and extract the heaviest weight subgraph in it, by suitably defining edge weights. If the subgraph extracted has size at least $c \log n$, we are confident that it is part of an original cluster. We then grow it completely, where the decision to add a new vertex happens by considering the query answers involving $c \log n$ of these vertices and the new vertex. Otherwise, if the subgraph extracted has size less than $c \log n$, we select more vertices. We note that we would never have to select more than $ck \log n$ vertices: by the pigeonhole principle, this ensures that we have selected at least $c \log n$ members from some cluster, and the subgraph detected will have size at least $c \log n$. This helps us bound the query complexity. We emphasize that our algorithm is completely deterministic.
Phase 1: Selecting a small subgraph. Let $c$ be a suitably large constant.

1. Select $c \log n$ vertices arbitrarily from $V$. Let $S$ be the set of selected vertices. Create a subgraph $G(S)$ by querying $(u, v)$ for every $u, v \in S$ and assigning a weight of $\omega(u, v) = +1$ if the query answer is "yes" and $\omega(u, v) = -1$ otherwise.

2. Extract the heaviest weight subgraph $\hat{S}$ in $G(S)$. If $|\hat{S}| \ge c \log n$, move to Phase 2.

3. Else we have $|\hat{S}| < c \log n$. Select a new vertex, add it to $S$, and query it with every vertex in $S$. Move to step (2).
Phase 2: Creating an Active List of Clusters. Initialize an empty list of clusters $\mathcal{L}$ when Phase 2 is executed for the first time.

1. Add $\hat{S}$ to the list $\mathcal{L}$.

2. Update $G(S)$ by removing $\hat{S}$ from $S$, along with every edge incident on $\hat{S}$. For every vertex $u \in S$, if the majority of the query answers between $u$ and the members of $\hat{S}$ is "yes", include $u$ in $\hat{S}$ and remove $u$ from $G(S)$ with all edges incident to it.

3. Extract the heaviest weight subgraph $\hat{S}$ in the updated $G(S)$. If $|\hat{S}| \ge c \log n$, move to step (1). Else move to Phase 3.
Phase 3: Growing the Active Clusters. We now have a set of clusters in $\mathcal{L}$.

1. Select an unassigned vertex $v$ not in $S$ (that is, previously unexplored), and for every cluster $C \in \mathcal{L}$, pick $c \log n$ distinct vertices in the cluster and query $v$ with them. If the majority of these answers is "yes", then include $v$ in $C$.

2. Else, we have that for every $C \in \mathcal{L}$ the majority answer is "no" for $v$. Include $v$ in $S$, query $v$ with every node in $S$, and update $G(S)$ accordingly. Extract the heaviest weight subgraph $\hat{S}$ from $G(S)$, and if its size is at least $c \log n$, move to Phase 2 step (1). Else move to Phase 3 step (1) by selecting another unexplored vertex.
Phase 4: Maximum Likelihood (ML) Estimate.

1. When there is no new vertex to query in Phase 3, extract the maximum likelihood clustering of $G(S)$ and return it along with the active clusters in $\mathcal{L}$, where the ML estimation is as defined in Equation (1).
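The flow of Phases 1-3 can be sketched in a few dozen lines (a simplification of Algorithm 1, not the paper's exact procedure: we collapse the phases into one loop, use a brute-force heaviest-subgraph search that is feasible only for toy sizes, query all members of an active cluster instead of $c \log n$ of them, and stand in for Phase 4 by leaving leftover pool vertices as singletons):

```python
from itertools import combinations

def heaviest_subset(S, w, min_size):
    """Brute-force heaviest-weight subset of S of size >= min_size
    (exponential in |S|; fine only for the tiny demo below)."""
    best, best_w = None, None
    items = sorted(S)
    for r in range(min_size, len(items) + 1):
        for sub in combinations(items, r):
            tot = sum(w[u, v] for u, v in combinations(sub, 2))
            if best_w is None or tot > best_w:
                best, best_w = set(sub), tot
    return best

def cluster_with_oracle(points, query, thr):
    """Simplified sketch of Algorithm 1 (our simplification, not the
    paper's exact phases): maintain a queried pool S; when the heaviest
    subgraph in S reaches size thr, promote it to an active cluster;
    assign each remaining vertex by majority vote against active clusters."""
    w, S, active = {}, [], []

    def ask(u, v):
        key = (min(u, v), max(u, v))
        if key not in w:
            w[key] = 1 if query(u, v) else -1   # persistent +/-1 answer
        return w[key]

    for v in points:
        placed = False
        for c in active:                        # Phase 3: majority vote
            if sum(ask(v, u) for u in c) > 0:
                c.add(v)
                placed = True
                break
        if placed:
            continue
        for u in S:                             # grow the queried pool
            ask(v, u)
        S.append(v)
        sub = heaviest_subset(S, w, thr) if len(S) >= thr else None
        if sub is not None and len(sub) >= thr:
            tot = sum(w[min(a, b), max(a, b)]
                      for a, b in combinations(sorted(sub), 2))
            if tot > 0:                         # confident: mostly "yes" inside
                active.append(set(sub))
                S = [x for x in S if x not in sub]
    # Phase 4 stand-in: leftover pool vertices form singletons
    return active + [{x} for x in S]
```

On a noiseless oracle this recovers planted clusters exactly; with noise, the correctness of the majority step is what Lemma 7's Chernoff argument guarantees.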
Analysis.
To establish the correctness of the algorithm, we show the following. Suppose all $\binom{n}{2}$ queries on $V$ have been made. If the ML estimate of the clustering with these answers is the same as the true clustering of $V$, then the algorithm for the faulty oracle finds the true clustering with high probability.
Assume without loss of generality that $|V_1| \ge |V_2| \ge \cdots \ge |V_k|$. We will show that Phases 1-3 recover every cluster of size at least $c \log n$ with high probability. The remaining clusters are recovered in Phase 4.
A subcluster is a subset of nodes in some cluster. Lemma 5 shows that any set that is included in $\mathcal{L}$ in Phase 2 of the algorithm is a subcluster of size at least $c \log n$ of some cluster. This establishes that all clusters in $\mathcal{L}$ at any time are subclusters of some original cluster in $V$. Next, Lemma 7 shows that elements that are added to a cluster in $\mathcal{L}$ are added correctly, and that no two clusters in $\mathcal{L}$ can be merged. Therefore, the clusters obtained from $\mathcal{L}$ are the true clusters. Finally, the remaining clusters can be retrieved from $G(S)$ by computing an ML estimate on it in Phase 4, leading to Theorem 3.
We will use the following version of Hoeffding's inequality heavily in our proofs; we state it here for the sake of completeness. Hoeffding's inequality for large deviations of sums of bounded independent random variables is well known [33, Thm. 2].
Lemma 3 (Hoeffding).
If $X_1, \dots, X_m$ are independent random variables with $a_i \le X_i \le b_i$ for all $i$, then for $X = \sum_{i=1}^m X_i$ and any $t > 0$,
$$\Pr\big(|X - \mathbb{E}[X]| \ge t\big) \le 2 \exp\Big(\frac{-2t^2}{\sum_{i=1}^m (b_i - a_i)^2}\Big).$$
This inequality can be used when the random variables are independently sampled with replacement from a finite sample space. However, due to a result in the same paper [33, Thm. 4], the inequality also holds when the random variables are sampled without replacement from a finite population.
Lemma 4 (Hoeffding).
If $X_1, \dots, X_m$ are random variables sampled without replacement from a finite set $\mathcal{X} \subset \mathbb{R}$ with $a \le x \le b$ for all $x \in \mathcal{X}$, then for $X = \sum_{i=1}^m X_i$ and any $t > 0$,
$$\Pr\big(|X - \mathbb{E}[X]| \ge t\big) \le 2 \exp\Big(\frac{-2t^2}{m(b-a)^2}\Big).$$
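Both forms can be probed empirically; the sketch below (our helper names) draws without replacement from a finite 0/1 population, the setting of Lemma 4, and checks that the observed tail never exceeds the bound:

```python
import random
from math import exp

def hoeffding_bound(m, t, span=1.0):
    """Two-sided Hoeffding tail bound for a sum of m variables in [0, span]."""
    return 2 * exp(-2 * t * t / (m * span * span))

def empirical_tail(population, m, t, trials=5000, seed=0):
    """Empirical P(|X - E[X]| >= t) for X = a sum of m draws *without*
    replacement, the setting of [33, Thm. 4]."""
    rng = random.Random(seed)
    mean_sum = m * (sum(population) / len(population))
    hits = 0
    for _ in range(trials):
        x = sum(rng.sample(population, m))
        hits += abs(x - mean_sum) >= t
    return hits / trials

pop = [0, 1] * 50          # a finite 0/1 population of size 100
# The empirical tail should never exceed the Hoeffding bound.
assert empirical_tail(pop, 20, 6) <= hoeffding_bound(20, 6)
```

Sampling without replacement is in fact more concentrated than with replacement, which is why the same bound applies.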
Lemma 5.
The algorithm, in Phases 1 and 2, returns a subcluster of size at least $c \log n$ of some cluster of $V$ with high probability if $S$ contains a subcluster of size at least $c \log n$. Moreover, it does not return any set of vertices of size at least $c \log n$ if $S$ does not contain a subcluster of size at least $c \log n$.
Proof.
Let $S_i = S \cap V_i$ for $i = 1, \dots, k$, and suppose without loss of generality $|S_1| \ge |S_2| \ge \cdots \ge |S_k|$. The lemma is proved via a series of claims; their proofs are delegated to Appendix A.
Claim 1.
If $|S_1| \ge c \log n$, then a set $\hat{S} \subseteq V_i$ for some $i$ will be returned with high probability when $S$ is processed.
Claim 2.
If $|S_1| \ge c \log n$, then a set $\hat{S} \subseteq V_i$ for some $i$ with size at least $c \log n$ will be returned with high probability when $S$ is processed.
Claim 3.
If $|S_1| < c \log n$, then no subset of size at least $c \log n$ will be returned by the algorithm for the faulty oracle when processing $S$, with high probability.
Since the algorithm attempts to extract a heaviest weight subgraph at most $O(n)$ times, and each time the probability of failure is polynomially small in $n$, by a union bound all the calls succeed with high probability. This establishes the lemma. ∎
We will need the following version of Chernoff bound as well.
Lemma 6 (Chernoff Bound).
Let $X_1, \dots, X_m$ be independent binary random variables, and let $X = \sum_{i=1}^m X_i$ with $\mu = \mathbb{E}[X]$. Then for any $0 < \delta < 1$,
$$\Pr\big(X \ge (1+\delta)\mu\big) \le \exp\Big(-\frac{\delta^2 \mu}{3}\Big),$$
and
$$\Pr\big(X \le (1-\delta)\mu\big) \le \exp\Big(-\frac{\delta^2 \mu}{2}\Big).$$
Lemma 7.
The list $\mathcal{L}$ contains all the true clusters of $V$ of size at least $c \log n$ at the end of the algorithm, with high probability.
Proof.
From Lemma 5, any cluster that is added to $\mathcal{L}$ in Phase 2 is a subset of some original cluster in $V$ with high probability, and has size at least $c \log n$. Moreover, whenever $S$ contains a subcluster of size at least $c \log n$, it is retrieved by the algorithm and added to $\mathcal{L}$.
When a vertex $v$ is added to a cluster $C \in \mathcal{L}$, we have $|C| \ge c \log n$ at that time, and there exist $c \log n$ distinct members of $C$, say $u_1, \dots, u_{c \log n}$, such that the majority of the queries of $v$ with these vertices returned "yes". Let, if possible, $v \notin V_i$, where $C \subseteq V_i$. Then the expected number among these $c \log n$ queries that are answered "yes" (+1) is $p \cdot c \log n < \frac{c \log n}{2}$. We now use the Chernoff bound, Lemma 6, to conclude that the probability that a majority of the queries is answered "yes" is polynomially small in $n$.

On the other hand, if there exists a cluster $C \in \mathcal{L}$ with $C \subseteq V_i$ such that $v \in V_i$, then while growing $C$, $v$ will be added to it (either $v$ already belongs to $S$, or $v$ is a newly considered vertex). This again follows from the Chernoff bound: here the expected number of queries answered "yes" is $(1-p) \cdot c \log n$, and hence the probability that fewer than $\frac{c \log n}{2}$ queries are answered "yes" is polynomially small in $n$. Therefore, for all $v$, if $v$ is included in a cluster in $\mathcal{L}$, the assignment is correct with probability at least $1 - \frac{1}{\mathrm{poly}(n)}$. Also, the assignment happens as soon as such a cluster is formed in $\mathcal{L}$ and $v$ is explored (whichever happens first).
Furthermore, two clusters in $\mathcal{L}$ cannot be merged. Suppose, if possible, there are two clusters $C_1, C_2 \in \mathcal{L}$ which ought to be subsets of the same cluster $V_i$ in $V$. Without loss of generality, let $C_2$ be added later to $\mathcal{L}$. Consider the first vertex $v \in C_2$ that is considered by our algorithm. If $C_1$ is already in $\mathcal{L}$ at that time, then with high probability $v$ would be added to $C_1$ in Phase 3. Therefore, $C_1$ must have been added to $\mathcal{L}$ after $v$ has been considered by our algorithm and added to $S$. Now, at the time $C_1$ is added to $\mathcal{L}$ in Phase 2, $v \in S$, and again $v$ will be added to $C_1$ with high probability in Phase 2, thereby giving a contradiction.
This completes the proof of the lemma. ∎
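The majority-vote failure probability that drives Lemma 7 can be computed exactly for small parameters (a sketch with our own function name); with $\Theta(\log n)$ queries per candidate cluster, the per-vertex failure probability is polynomially small in $n$, which is what the union bound over all vertices needs:

```python
from math import comb

def majority_error(reps, p):
    """Exact probability that the majority of `reps` independent answers,
    each wrong with probability p, comes out wrong (ties count as wrong)."""
    return sum(comb(reps, i) * p**i * (1 - p)**(reps - i)
               for i in range((reps + 1) // 2, reps + 1))

# Doubling the number of queries per candidate cluster squares (roughly)
# the failure probability, the exponential decay promised by Chernoff.
assert majority_error(41, 0.3) < 0.01
assert majority_error(81, 0.3) < majority_error(41, 0.3)
```

Choosing the constant $c$ in $c \log n$ large enough relative to $\frac{1}{1-2p}$ pushes this below any desired inverse polynomial in $n$.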
Theorem 3.
If the ML estimate of the clustering of $V$ with all possible $\binom{n}{2}$ queries returns the true clustering, then the algorithm for the faulty oracle returns the true clusters with high probability. Moreover, it returns all the true clusters of $V$ of size at least $c \log n$ with high probability.
Proof.
From Lemma 5 and Lemma 7, $\mathcal{L}$ contains all the true clusters of $V$ of size at least $c \log n$ with high probability. Any vertex that is not included in the clusters in $\mathcal{L}$ at the end of the algorithm is in $S$, and $G(S)$ contains all possible pairwise queries among these vertices. Clearly, then, the ML estimate of $G(S)$ will be the true ML estimate of the clustering restricted to these remaining clusters. ∎
Finally, once all the clusters in $\mathcal{L}$ are grown, we have a fully queried graph $G(S)$ containing the small clusters, which can be retrieved in Phase 4. This completes the correctness of the algorithm. With the following lemma, we get Theorem 2.
Lemma 8.
The query complexity of the algorithm for the faulty oracle is $O\big(\frac{nk \log n}{(1-2p)^2}\big)$.
Proof.
Let there be $k' \le k$ clusters in $\mathcal{L}$ when a vertex $v$ is considered by the algorithm. $k'$ could be $0$, in which case $v$ is considered in Phase 1; else $v$ is considered in Phase 3. Therefore, $v$ is queried with at most $c \log n$ members from each of the $k'$ clusters. If $v$ is not included in one of these clusters, then $v$ is added to $S$ and queried with all vertices in $S$. We have seen in the correctness proof (Lemma 5) that if $S$ contains at least $c \log n$ vertices from any original cluster, then the ML estimate on $G(S)$ retrieves those vertices as a cluster with high probability. Hence, when $v$ is queried with the vertices in $S$, $|S| \le ck \log n$. Thus the total number of queries made when the algorithm considers $v$ is at most $k \cdot c \log n + ck \log n = O(ck \log n)$, where $c = O\big(\frac{1}{(1-2p)^2}\big)$ when the error probability is $p$. This gives the query complexity of the algorithm over all $n$ vertices, which matches the lower bound computed in Section 2 within an $O(\log n)$ factor. ∎
Now combining all these we get the statement of Theorem 2.
Running Time & Connection to Planted Clique
While the algorithm described above is very close to information-theoretically optimal, its running time is not polynomial. Moreover, it is unlikely that the algorithm can be made efficient.

A crucial step of our algorithm is to find, inside the queried set $S$, a cluster of size at least $c \log n$, which can of course be computed by exhaustive search. However, since the size of $S$ is bounded by $O(k \log n)$, the running time to compute such a heaviest weight subgraph is quasi-polynomial in $n$. This running time is unlikely to be improved to polynomial; this follows from the planted clique conjecture.
Conjecture 1 (Planted Clique Hardness).
Given an Erdős-Rényi random graph $G(n, \frac{1}{2})$, the planted clique conjecture states that if we plant in $G(n, \frac{1}{2})$ a clique of size $t$ where $t = o(\sqrt{n})$, then there exists no polynomial time algorithm to recover the largest clique in this planted model.
Reduction. Given such a graph $G$ with a planted clique of size $t$, we can construct a new graph $G'$ by randomly deleting each edge of $G$ with probability $\frac{1}{3}$. Then in $G'$, there is one cluster (the planted clique) of size $t$ where the intra-cluster edge error probability is $\frac{1}{3}$, and the remaining clusters are singletons with the inter-cluster edge error probability also being $\frac{1}{3}$: a clique edge survives with probability $\frac{2}{3}$, while any other pair has an edge with probability $\frac{1}{2} \cdot \frac{2}{3} = \frac{1}{3}$. So, if we could detect the heaviest weight subgraph in polynomial time in the faulty oracle algorithm, then there would be a polynomial time algorithm for the planted clique problem.
In fact, the reduction shows that if it is computationally hard to detect a planted clique of size $t$ for some value of $t$, then it is also computationally hard to detect a cluster of size $t$ in the faulty oracle model; note that $t = o(\sqrt{n})$ under the conjecture. In the next section, we propose a computationally efficient algorithm which recovers, with high probability, all clusters above a size threshold that is the best possible assuming the conjecture, and which can potentially recover much smaller clusters when the number of clusters is small.
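The reduction can be sketched directly (the function names and the statistical check are ours): plant a clique in $G(n, \frac{1}{2})$, then delete each edge independently with probability $\frac{1}{3}$; clique pairs keep their edge with probability $\frac{2}{3}$ while all other pairs have an edge with probability $\frac{1}{3}$, i.e., a symmetric faulty-oracle instance with $p = \frac{1}{3}$:

```python
import random
from itertools import combinations

def planted_clique_graph(n, t, seed=0):
    """G(n, 1/2) with a clique planted on vertices {0, ..., t-1}."""
    rng = random.Random(seed)
    edges = set()
    for u, v in combinations(range(n), 2):
        if v < t or rng.random() < 0.5:   # clique pairs always get an edge
            edges.add((u, v))
    return edges

def reduce_to_faulty_oracle(edges, seed=0):
    """Delete each edge independently with probability 1/3. Clique pairs
    then have an edge w.p. 2/3 (intra-cluster error 1/3) and all other
    pairs w.p. (1/2)(2/3) = 1/3 (inter-cluster error 1/3) -- a faulty
    oracle instance with symmetric error p = 1/3."""
    rng = random.Random(seed)
    return {e for e in edges if rng.random() >= 1 / 3}
```

A polynomial-time heaviest-subgraph routine for this instance would therefore locate the planted clique, contradicting the conjecture.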
3.2. Computationally Efficient Algorithm
Known $k$. We first design an algorithm for the case when $k$, the number of clusters, is known. Then we extend it to the case of unknown $k$. The algorithm is completely deterministic.
Theorem 4.
There exists a polynomial time algorithm for QueryCluster with error probability $p$ and known $k$ that recovers all sufficiently large clusters; its query complexity and the exact cluster-size threshold are established in the analysis below.
The algorithm is given below.
Algorithm 2. Let $c$ be a suitably large constant. We define two weight thresholds, whose values are fixed in the analysis. The algorithm is as follows.
Phase 1-2C: Select a Small Subgraph. Initially we have an empty graph $G(S)$, and all vertices in $V$ are unassigned to any cluster.

1. Select new vertices arbitrarily from the unassigned vertices in $V$ and add them to $S$. If there are not enough vertices left in $V$, select all of them. Update $G(S)$ by querying $(u, v)$ for every pair such that $u \in S$ and $v \in S$, and assigning a weight of