1 Introduction
A clique is a complete subgraph of an undirected graph G, i.e., each pair of nodes in the clique is connected by an edge. A maximal clique is a clique that is not a subgraph of any other clique. The procedure of enumerating all maximal cliques in a graph is called Maximal Clique Enumeration (MCE).
MCE has a range of applications in different fields, such as discovering communities in social networks [1], identifying co-expressed genes [2], detecting protein-protein interaction complexes [3], supporting the construction of intelligent agent systems [4] and recognizing emergent patterns in terrorist networks [5].
Many works [6, 7, 8, 9, 10, 11, 12] focus on improving the efficiency of MCE, which takes exponential time in the worst case. This is because the number of maximal cliques in a graph can be very large: even a graph of modest size can contain a huge number of maximal cliques [13]. Counting the number of maximal cliques in a general graph is #P-complete [14]. This means that the output of any MCE procedure is hard for downstream applications to consume. Fortunately, there typically exists substantial overlap between different cliques. This motivates us to report a summary set of all maximal cliques which has less overlap but can still represent all the cliques. Wang et al. [13] introduced the concept of a visible summary, a set of maximal cliques which promises that every maximal clique in graph G is covered by at least one maximal clique in the summary with a ratio of at least τ. Here, τ is given by a user and reflects the user's tolerance of overlap. For example, a summary with τ = 0.8 ensures that any maximal clique has at least 80% of its nodes covered by some clique in the summary. This summary model is interesting, e.g., in the marketing domain: if a certain percentage of users in a clique community has been covered, we expect that the covered users will spread a message across the community. Consequently, targeting fewer communities to reduce marketing cost while still ensuring broad final user coverage is very desirable. The work [13] modified the depth-first MCE [15] by adding a sampling function that determines whether a new clique enumeration subprocedure should be entered. It was proved that the expected visibility of such a sampled summary is at least τ.
However, expected visible summaries are not unique. Clearly, as long as a summary is visible, the more concise it is, the better. Hence three questions arise naturally in sequence:

Is there any sampling strategy that can find a better (smaller) expected visible summary?

What kind of sampling strategy is optimal?

If achieving the optimal is difficult or impossible, how can we provide the best effort?
We will tackle these three questions in this paper. Our main contributions are summarized as follows:

We introduce a new sampling strategy to identify an expected visible maximal clique summary. We prove that the new sampling strategy guarantees better performance than the state-of-the-art method, producing a smaller summary while still meeting the visibility threshold τ.

We give a theoretical analysis showing that the sampling can be optimal under certain conditions, which substantiates the good performance of the proposed sampling strategy in practice. Future investigations could also be directed at approximating the optimal conditions.

We show that the sampling approach can get close to optimal with clique-size bounding and enumeration-ordering strategies. We then propose a truss order and a truss bound to further improve the performance of our sampling strategy.

We conduct experimental studies on eight real-world datasets to verify the superiority of the new sampling method, as well as our newly designed truss order and truss bound, in terms of both effectiveness and efficiency.
The rest of this paper is organized as follows. In Section 2, we review the definition of visible summary and an existing sampling approach. In Section 3, we give our motivation, introduce a novel sampling function and prove its superiority. The conditions of optimality are analyzed in Section 4. We propose the truss vertex order and the truss bound to practically instantiate the optimality conditions in Section 5. Extensive experiments are conducted in Section 6 for evaluation. Related work and conclusions are given in Section 7 and Section 8.
2 Visible Summary
A clique refers to a complete subgraph of an undirected graph G. A clique is maximal if it is not contained by any other clique. When the context is clear, we also use C to denote the node set of a maximal clique C. Given the set of all maximal cliques in graph G, denoted as C(G), a summary S is a subset of C(G), i.e., S ⊆ C(G). To measure to what extent a summary can witness a clique, visibility is defined in [13], restated as Definitions 1 and 2. We then introduce expected visibility in Definitions 3 and 4.
Definition 1 (Visibility).
Given a summary S, the visibility of a maximal clique C is defined as:
(1) vis_S(C) = max_{C′ ∈ S} |C ∩ C′| / |C|
Note that C′ is allowed to be the same as C. This means that if C ∈ S, C's visibility with respect to S is 1. In other words, if C ∈ S, the summary can completely witness C.
Definition 2 (Visible Summary).
A summary S is called visible iff ∀C ∈ C(G),
(2) vis_S(C) ≥ τ
Rather than the exact visible summary defined above, our work looks for an expected visible summary. Before we give the formal definition, we explain what the term expected means intuitively. Since the number of maximal cliques is likely to be exponential, it is infeasible to first compute all the cliques and then decide the summary. Instead, it is more practical to decide on the fly while enumerating, i.e., decide whether to keep or discard a new clique, possibly with some probability, when the clique is found. More proactively, a decision can be made on whether to enter each enumeration branch with some probability. This means that each maximal clique C has a probability p to be included in S and a corresponding probability 1 − p to be discarded. For a clique C, if it is selected to be included in S, its visibility is 1, since it is witnessed by itself; otherwise this value is vis_S(C), which stays unknown before S is finalized. Given the above discussion of visibility, we can state the mathematical expectation of vis_S(C) in Definition 3:
Definition 3 (Expected Visibility).
The expected visibility of a clique C with regard to a summary S, E[vis_S(C)], is defined as
(3) E[vis_S(C)] = p · 1 + (1 − p) · v
where p is the probability that C is included in S and v is the visibility of C when it is not included.
One may question that, before S is finally known, v is unavailable to Formula (3), since this value relies on a materialization of S. However, we point out that such a v does exist, albeit it is hard to know its value early. We will see under which conditions v can be calculated without S being known in Section 4. Currently, we only need a lower bound of it, since we want to make sure the lower bound is sufficiently large, so that the expectation of vis_S(C) is larger than a user-given threshold; that is, we want to find a summary with a good visibility expectation guarantee. Definition 4 defines this case:
Definition 4 (Expected Visible Summary).
A summary S is called expected visible iff ∀C ∈ C(G),
(4) E[vis_S(C)] ≥ τ
where τ is a given threshold.
In this paper, we focus on developing theories and algorithms for finding a good expected visible summary. The key issue we address is how to keep or discard enumeration branches so that the cliques finally found form a summary which is visible and of small size. Note that in an expected visible summary, there may exist a clique which cannot be covered by any other clique in the summary to the extent of τ. However, we still aim for expected rather than exact visibility, because (1) visibility itself already means that the summary is an approximation, hence there is little gain in enforcing exact visibility; and (2) the basic MCE algorithm (BKMCE), which we will introduce in Section 2.1, is a depth-first search approach. Under expected visibility semantics, the great pruning power of a sampling approach can terminate search subtrees as early as possible, so the exponential search space is reduced significantly and the summary is guaranteed to be concise with a sufficient quality guarantee. An algorithm serving exact visibility has to decide whether a search subtree can be discarded at a relatively late stage, which slows down the running time.
Next, we start by introducing an existing depth-first procedure [15] for maximal clique enumeration in Section 2.1, and then explain how it is modified to find an expected visible summary by the state-of-the-art work [13] in Section 2.2. Important notations are listed in Table I.
2.1 Maximal Clique Enumeration
The BKMCE algorithm [15] (Algorithm 1) is a backtracking approach, which recursively grows the current partial clique Q by adding a new node from the candidate set until a maximal clique is found. Here we denote the set of all neighbor nodes of a node v by N(v). Q is the current partial clique, or configuration, which is still growing. cand and fini are candidate sets whose elements are common neighbors of Q, while fini only contains nodes which have been contained by some earlier output maximal cliques grown from the current Q. Algorithm 1 takes graph G as input and outputs all the maximal cliques in G. Initially it calls the recursive procedure with Q = ∅, cand = V and fini = ∅ (line 1). The procedure is then called recursively (line 10) until a maximal clique is generated. At every recursive stage it first checks whether cand = ∅ and fini = ∅ (line 3). If so, no candidate node is left, and therefore the current Q is output as a maximal clique (line 4). If not, generally speaking, it removes a node v from cand and adds it into Q. Then it recursively calls the procedure on (Q ∪ {v}, cand ∩ N(v), fini ∩ N(v)); intersecting with N(v) refines cand and fini by deleting all nodes which are not neighbors of v, ensuring that every node in cand or fini remains a common neighbor of the current Q. Finally, since v is sure to be contained by some future cliques grown from Q, v is added into fini (lines 8-12). Note that a pivot u_p is chosen from cand ∪ fini to avoid branches which would generate the same maximal clique (line 6). This is because, from the current configuration, a maximal clique containing a node v which is a neighbor of u_p can be grown either from u_p or from v, so only candidates which are not neighbors of u_p need to be expanded directly.
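As a concrete reference, the recursion just described can be sketched in a few lines of Python. The function and variable names (bk_mce, Q, cand, fini) are ours, and adjacency is represented as a dict mapping each vertex to its neighbor set; this is a minimal sketch of the pivoting variant, not the exact pseudocode of Algorithm 1.

```python
def bk_mce(graph):
    """Enumerate all maximal cliques of an undirected graph.

    graph: dict mapping each vertex to the set of its neighbors.
    Returns the maximal cliques as frozensets (BKMCE with pivoting).
    """
    cliques = []

    def enumerate_cliques(Q, cand, fini):
        if not cand and not fini:
            cliques.append(frozenset(Q))   # no candidate left: Q is maximal
            return
        # Pick a pivot maximizing |cand ∩ N(u_p)| to skip branches that
        # would regenerate the same maximal cliques.
        u_p = max(cand | fini, key=lambda u: len(cand & graph[u]))
        for v in list(cand - graph[u_p]):
            enumerate_cliques(Q | {v},
                              cand & graph[v],   # keep only common neighbors
                              fini & graph[v])
            cand.remove(v)                 # v now belongs to fini
            fini.add(v)

    enumerate_cliques(set(), set(graph), set())
    return cliques
```

On a triangle with a pendant vertex, the sketch reports exactly the two maximal cliques, the triangle and the pendant edge.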
Notation  Meaning

G = (V, E)  the graph with vertex set V and edge set E
G[V′]  the induced graph of vertex set V′ on graph G
C  a maximal clique
C(G)  the set of all maximal cliques in graph G
S  a summary, which is a subset of C(G)
vis_S(C)  the visibility of maximal clique C w.r.t. summary S
E[vis_S(C)]  the expectation of visibility
τ  the user-specified threshold
N(v)  the neighbor node set of node v
cand  the candidate set in the BKMCE algorithm
fini  the candidate set whose elements should not be touched in the BKMCE algorithm
u_p  the pivot in the BKMCE algorithm
r  the local visibility, see Formula (5)
c_max  the upper bound of the size of a maximal clique
r_lb  the lower bound of the local visibility
s(r)  the sampling function used in [13]
s*(r)  the conditionally optimal sampling function
B  a search subtree
T, H, C  truss bound (T), H bound (H) and core bound (C)
U, I, R  truss order (U), degeneracy order (I) and random order (R)
2.2 Summarization by Sampling
Let us first ignore sampling and consider a deterministic enumeration that can find a visible summary. Recall that BKMCE is a depth-first algorithm: it outputs maximal cliques in such an order that two cliques produced next to each other share a large portion of common nodes. We denote this property as locality. Let C_prev be the last generated maximal clique which has been added into summary S. When a new clique C is generated, we can compare it with C_prev, rather than with every clique in S, to compute a local visibility r (Formula (5)). If r ≥ τ, discard C; otherwise, keep C. Such a deterministic strategy is guaranteed to produce a visible summary.
(5) r(C) = |C ∩ C_prev| / |C|
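A minimal sketch of this deterministic filter, assuming cliques arrive in BKMCE's depth-first order so that consecutive cliques tend to overlap heavily (the function names are ours):

```python
def local_visibility(C, C_last):
    """Local visibility r of clique C w.r.t. the last kept clique (Formula (5))."""
    if C_last is None:
        return 0.0
    return len(C & C_last) / len(C)

def deterministic_summary(cliques, tau):
    """Keep a clique only if its local visibility falls below tau.

    cliques: maximal cliques in the depth-first order BKMCE emits them.
    """
    S, C_last = [], None
    for C in cliques:
        if local_visibility(C, C_last) < tau:
            S.append(C)
            C_last = C     # future cliques are compared against this one
    return S
```

A discarded clique has r ≥ τ against a clique that was kept, so the resulting summary is visible by construction.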
However, it would be desirable to discard a whole search branch with good confidence when we find that the branch has a significant overlap with the last found clique C_prev. This leads to the idea of deliberately pruning some recursive subprocedures with some probability; let us call it sampling. Meanwhile, we must guarantee that the summary has expected visibility at least τ.
Details of invoking a sampling method to produce an expected visible summary are shown in Algorithm 2. The key idea is to execute a sampling operation (line 8) to determine whether the current new branch should be grown before entering a new recursive procedure (line 13). In line 7, c_max denotes an upper bound of the size of the next maximal clique and r_lb denotes a lower bound of the local visibility r. As we have not found the clique yet, i.e., its size and r are unknown, we can only estimate c_max and r_lb. The sampling probability function is designed as a function of r_lb and c_max. The work in [13] chose the probability function s(r) shown in Formula (6):
(6)
and proved that applying s(r) in Algorithm 2 produces a summary with expected visibility at least τ. Due to the space limit, we only briefly introduce the rationale of s(r). From Formula (6), s(r) is a decreasing function with range [0,1]. This means that when the estimated r_lb becomes larger, the probability of keeping the current search branch becomes smaller. When c_max is estimated larger, the recursion will be entered more times, hence the per-step probability of keeping the current search branch is made larger. Algorithm 1 and Algorithm 2 follow the clique enumeration paradigm, so the time complexities of both are bounded by O(3^{n/3}), because an n-vertex graph has at most 3^{n/3} maximal cliques [16]. Algorithm 2 should be practically faster due to early pruning, but has the same complexity in the worst case.
3 A new sampling function
Expected visible summaries are not unique. Clearly, the more concise (smaller) a summary is, the better. Three questions arise naturally:

Are there any better sampling strategies?

What kind of sampling strategy is optimal?

If finding the optimal is difficult, how can we provide the best effort?
We will address question (1) in this section and discuss questions (2) and (3) in Section 4 and Section 5, respectively. In Section 3.1, we explain why we believe a better sampling function should exist; we then introduce the new sampling function and prove its superiority in Section 3.2.
3.1 Intuition
Our new sampling strategy is based on the following two observations:
Observation 1: In Formula (6), when r ≥ τ, we always have s(r) > 0. This means that even if we know the newly generated clique is visible with respect to the current S, there is still a positive probability of adding it into the summary. Thus S becomes more redundant because of these unnecessary cliques. A better strategy is to set the probability to 0 in such cases, i.e., not to add these cliques at all.
Observation 2: In Formula (6), when r = 0, we have s(r) = 1. This means that once we find a maximal clique whose nodes are totally new to the current summary, we add it into S without hesitation. This seems reasonable; however, there is still some possibility for this brand-new clique to be covered by some future cliques. Moreover, since we are looking for an expected visible summary, we have the option not to include a brand-new clique as long as the final summary is expected visible. In other words, it is safe to add the brand-new clique with a certain probability less than 1. Cases where r is in (0, τ) are similar.
3.2 Sampling Function
Following the observations in Section 3.1, we give a new sampling function s*(r) in Formula (7).
(7) s*(r) = (τ − r) / (1 − r) if r < τ;  s*(r) = 0 if r ≥ τ
The sampling function implies: if r ≥ τ, discard the current search branch; otherwise, keep the current search branch with probability (τ − r)/(1 − r). The rationale of this setting will be shown in Theorem 2.
Next, we prove that s*(r) is a better function than s(r). This means we need to prove: (1) s*(r) samples with a lower probability (Theorem 1); and (2) s*(r) can still produce an expected visible summary (Theorem 2).
Theorem 1.
s*(r) samples with a lower probability than s(r), i.e.,
(8) s*(r) ≤ s(r)
Equality holds iff r = 1.
Proof.
We show that this inequality holds when r ≥ τ and when r < τ separately:

if r ≥ τ, we have s*(r) = 0 and s(r) ≥ 0, so the inequality is clearly satisfied; equality holds only when s(r) = 0 (corresponding to r = 1).

if r < τ, comparing s*(r) = (τ − r)/(1 − r) with the form of s(r) in Formula (6), we have
(9)
Combining these two cases, we complete this proof. ∎
While Theorem 1 promises us a more concise summary S, we still have to prove that this S is indeed expected visible:
Theorem 2.
Algorithm 2, with sampling function s*(r), can produce an expected visible summary S.
Proof.
First, we give the probability of a maximal clique C being added into S. Then we calculate the expected visibility and show it is no less than τ.
Recall that every time before Algorithm 2 starts a new search subtree, line 7 computes a new pair of bounds. For the i-th sampling step on the path growing a maximal clique C, denote them by c_max^i and r_lb^i, where 1 ≤ i ≤ c and c is the size of the clique to be grown. Since every c_max^i is an upper bound of c and every r_lb^i is a lower bound of r, together with the monotonicity of s*, the overall probability p that C is added into S satisfies
(10) p ≥ s*(r)
If C is not included in S, its visibility is no less than the local visibility r; if C is included in S, its visibility is 1. Now we can calculate the expectation of vis_S(C):
(11) E[vis_S(C)] = p · 1 + (1 − p) · v ≥ s*(r) · 1 + (1 − s*(r)) · v ≥ s*(r) + (1 − s*(r)) · r
We consider the cases r < τ and r ≥ τ separately:

if r < τ, s*(r) + (1 − s*(r)) · r = (τ − r)/(1 − r) + r · (1 − τ)/(1 − r) = τ(1 − r)/(1 − r) = τ.

if r ≥ τ, s*(r) = 0, so s*(r) + (1 − s*(r)) · r = r ≥ τ.
Combining these two cases, we complete this proof. ∎
Summary: Theorem 1 and Theorem 2 jointly show that s*(r) is a valid sampling function and is better than s(r).
4 Optimality
In this section, for the purpose of analyzing the optimality of the sampling function, we show what conditions should be satisfied. We prove the optimality of s*(r) under such conditions and further explain why the performance of s*(r) is good even when the conditions are not fully satisfied.
4.1 Conditions for Optimality Analysis
Since we can find a better sampling function s*(r), another question arises naturally: under the restriction of expected visibility, does an optimal sampling function (even better than s*(r)) with the smallest probability exist?
In the proof of Theorem 2, we can only prove the expectation E[vis_S(C)] ≥ τ. Intuitively, the smaller the sampling probability is, the smaller the expectation is. It is hard to determine whether a sampling function is optimal without knowing how loose this inequality is. If we intend to analyze the optimality of any function, we first need to tighten the inequality into an equation. We now show under what conditions the theoretical analysis of optimality is feasible.
In the proof of Theorem 2, we relax the expectation twice:
The first inequality sign of Formula (11) is derived from Formula (10). Algorithm 2 implements the sampling operation in each recursive procedure using a probability determined by r_lb and c_max. Since c_max and r_lb are an upper bound of c and a lower bound of r respectively, we have s*(r_lb) ≥ s*(r), thus p ≥ s*(r). There are two approaches to eliminate this inequality. The first approach applies if we want to analyze the property of the sampling function itself: we set the other factors ideal. That is, if we do not care about the details of how to calculate c_max, we can assume that this upper bound is ideal, so that c_max = c, and similarly r_lb = r. Note this hypothesis is made only for the purpose of analyzing the function theoretically, not for implementing Algorithm 2 in practice. With this assumption, we have p = s*(r). The second approach is to modify the sampling procedure: sample with probability s*(r) only after a complete maximal clique is generated, rather than each time a new node is grown. If so, it is obvious that p = s*(r). One may argue that it is meaningless to do sampling once a maximal clique is already found. It is true that if we added every clique whose r is no more than τ into the summary, this summary would be strictly visible. However, as we explained in Section 3, this would introduce more redundancy to S. In some applications, we only need the summary to be expected visible, so this one-step sampling procedure is significant for producing a concise S.
The second inequality sign of Formula (11) comes from the definition of r. Due to the locality of Algorithm 2, we use r to replace the real visibility, which is no less than r. We now need to assume that such locality is sufficiently strong (by which we mean that two similar cliques are produced consecutively), so that r is indeed the visibility defined in Formula (1). In practice, we do not need to enforce such strong locality to implement Algorithm 2. We introduce this hypothesis only for theoretical consideration: we only need it to construct a framework under which we can analyze the optimality of sampling functions. Once this hypothesis is made, the second inequality becomes an equation.
Now we can turn Formula (11) into an equation:
(12) E[vis_S(C)] = s*(r) + (1 − s*(r)) · r
if the following two conditions are satisfied:

The bounds c_max and r_lb are ideal, or we sample only once, each time a full maximal clique is generated.

The locality property is strong enough for r to be the real visibility.
4.2 Optimality of s*(r)
In this paper, we define the optimality of a sampling function as follows: given the local visibility r of clique C, sample this clique with the lowest probability while still promising expected visibility. We can now analyze the optimality of s*(r) in the framework introduced in Section 4.1.
Theorem 3.
Under the two conditions of Section 4.1, s*(r) is the optimal sampling function.
Proof.
We show that if there exists a sampling function s′(r) such that s′(r) ≤ s*(r) for all r, and s′(r₀) < s*(r₀) for at least one point r₀, then s′ cannot be used to generate an expected visible summary.
Note that when r ≥ τ, s*(r) = 0, so r₀ cannot lie in this range, since a valid probability must be nonnegative. Hence r₀ < τ, and
(13) E[vis_S(C)] = s′(r₀) + (1 − s′(r₀)) · r₀ < s*(r₀) + (1 − s*(r₀)) · r₀ = τ
This means that the summary generated by cannot be expected visible. ∎
Theorem 3 is a conditional theoretical guarantee for the optimality of s*(r). Note that even if in general these two strong conditions are not fully satisfied, Theorem 3 is still useful for the practical implementation of Algorithm 2: if the bounds c_max and r_lb are well estimated and the locality property is strong, the inequalities in Formula (11) are very tight. In such cases, s*(r) still shows good performance.
One may be concerned that it is not clear to what extent the good performance of s*(r) can be achieved in practice by (1) tightening bounds and (2) strengthening the locality. In the next section, we address the first concern by reviewing two existing bounds and proposing a new one that outperforms the other two by large margins. For the second concern, we show that stronger locality can be achieved by reordering vertices carefully: we review an existing vertex order and design a better one.
5 Bounds and Locality
In this section, we show how to approach good performance of the new sampling strategy by tightening bounds (Section 5.1) and by reordering vertices (Section 5.2).
5.1 Bound Analysis
The first inequality of (11) is derived from bound estimation. The formula we use to calculate the lower bound r_lb is the same as that introduced in [13]:

(14) r_lb = min_{1 ≤ g ≤ c_max − |Q|} ( |Q ∩ C_prev| + max(0, g − k) ) / ( |Q| + g )

where C_prev is the previous maximal clique added into S; g is the number of vertices to be used for growing the partial configuration Q into a full maximal clique; c_max, which satisfies c_max ≥ |Q| + g, is the upper bound of the clique size; and k is an upper bound of the number of vertices, out of the g growth vertices, that are not covered by C_prev. Formula (14) can be understood in this way. Suppose we know that the current partial configuration Q still needs g vertices to grow into a full maximal clique C; then the denominator |Q| + g is the size of C. Since at most k out of the g new vertices are not contained in C_prev, at least g − k of them are covered by C_prev; thus max(0, g − k) is a lower bound of the coverage of the growth part. (The max operator is needed because, depending on the estimation method of k, g − k may be negative.) The two parts of the numerator are |Q ∩ C_prev| and a lower bound of |(C \ Q) ∩ C_prev| respectively, so their sum is a lower bound of |C ∩ C_prev|, and the whole fraction is a lower bound of r. Since we lack information on the exact value of g, we enumerate all possible g in [1, c_max − |Q|] and choose the minimum as the lower bound. k can be estimated as |cand \ C_prev|, or simply as g, or as the number of vertices in cand \ C_prev whose degrees are at least |Q| + g − 1 (because growth vertices must be contained in a clique of that size). We see that the upper bound c_max is only used to bound the range of g, while the fraction after the min operator has nothing to do with c_max once g is given; so the quality of r_lb is determined by the tightness of c_max. Thus, in the following, we focus on estimating c_max.
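A sketch of evaluating this lower bound, assuming k is estimated as the number of candidate vertices outside C_prev (one of the options described above; the function and parameter names are ours):

```python
def r_lower_bound(Q, C_prev, cand, c_max):
    """Lower bound r_lb of the local visibility (a sketch of Formula (14)).

    Q: current partial clique; C_prev: last clique added to the summary;
    cand: candidate set; c_max: upper bound on the final clique size.
    """
    base = len(Q & C_prev)     # vertices of Q already covered by C_prev
    k = len(cand - C_prev)     # growth vertices that may fall outside C_prev
    best = 1.0
    for g in range(1, c_max - len(Q) + 1):
        # at least max(0, g - k) of the g new vertices lie inside C_prev
        r = (base + max(0, g - k)) / (len(Q) + g)
        best = min(best, r)
    return best
```

For example, with Q = {1, 2}, C_prev = {1, 2, 3}, cand = {3, 4} and c_max = 4, the minimum is attained at g = 1, giving r_lb = 2/3.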
One valid and tight bound of c_max is the size of the maximum clique in the candidate set cand; however, finding such a maximum clique is itself a problem of exponential time. As a result, we consider cheaper bounds instead. In the following, we review two bounds discussed in the previous work [13], then propose a new one to further improve the effectiveness of s*(r).
Let G′ = G[cand] be the induced graph of the candidate set cand on graph G; two existing upper bounds of the maximum clique size in G′ are:

H bound, denoted by b_H, is the maximum h such that there exist at least h vertices in G′ whose degrees are no less than h − 1. The maximum clique size can be bounded by b_H because if there exists an h-clique, there must also exist at least h vertices in G′ whose degrees are no less than h − 1. Therefore b_H ≥ h holds for all possible cliques, including the maximum clique.

Core bound, denoted by b_C = k_max + 1, where k_max denotes the maximum core number in G′. We first review the definitions of k-core [17] and core number.
Definition 5 (k-core).
The k-core of a graph is the largest induced subgraph in which the degree of each vertex is at least k.
Definition 6 (Core Number).
The core number of graph G′, denoted as k_max, is the largest k such that a k-core is contained in G′.
The core number can serve as an upper bound because core is a weaker structure than clique: a c-clique must be a (c − 1)-core, while a (c − 1)-core may not be a clique. Thus b_C = k_max + 1 is no less than the maximum clique size in G′. Now we define our newly proposed bound.
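Both existing bounds reduce to simple degree bookkeeping and minimum-degree peeling. A sketch (the helper names b_H/b_C as functions h_bound/core_bound are ours):

```python
def h_bound(adj):
    """H bound: largest h with at least h vertices of degree >= h - 1."""
    degs = sorted((len(n) for n in adj.values()), reverse=True)
    h = 0
    for i, d in enumerate(degs):   # i + 1 vertices inspected so far
        if d >= i:                 # their degrees are all >= (i + 1) - 1
            h = i + 1
    return h

def core_bound(adj):
    """Core bound: maximum core number plus one.

    Peels a minimum-degree vertex repeatedly (core decomposition); the
    largest degree seen at removal time is the maximum core number.
    """
    adj = {v: set(n) for v, n in adj.items()}   # local copy
    k_max = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        k_max = max(k_max, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return k_max + 1
```

On a triangle with a pendant vertex, both bounds evaluate to 3, which matches the maximum clique size there.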
Definition 7 (Truss Bound).
The truss bound, denoted by b_T, is the maximum truss number t_max in G′.
We now review the definitions of k-truss and truss number, and then explain why the maximum truss number is a valid upper bound of the maximum clique size.
Definition 8 (k-truss).
The k-truss of a graph is the largest induced subgraph in which each edge is part of at least k − 2 triangles in this subgraph.
Definition 9 (Truss Number).
The truss number of graph G′, denoted as t_max, is the largest k such that a k-truss is contained in G′.
b_T = t_max is an upper bound of the maximum clique size. This is because a clique of the maximum size c is also a c-truss, since each edge in a c-clique is contained in exactly c − 2 triangles within the clique. Thus the truss number cannot be less than the maximum clique size c.
These three bounds satisfy the following inequality:
(15) b_H ≥ b_C ≥ b_T
The first inequality holds because the H bound does not enforce the vertices to be connected to each other, while the core bound does. The second inequality comes from the fact that a k-truss must be a (k − 1)-core: the endpoints of each edge e are incident to no less than k − 1 edges (including e itself), since e is guaranteed to be involved in at least k − 2 triangles.
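A sketch of computing the truss bound by support peeling (the helper name is ours): the largest support seen at removal time, plus 2, is the maximum truss number, because an edge of a k-truss lies in at least k − 2 triangles.

```python
def truss_bound(adj):
    """Truss bound: the maximum truss number of the graph.

    Repeatedly removes the edge supported by the fewest triangles and
    tracks the largest support observed at removal time.
    """
    adj = {v: set(n) for v, n in adj.items()}   # local copy
    support = {frozenset((v, u)): 0 for v in adj for u in adj[v]}
    for e in support:
        u, v = tuple(e)
        support[e] = len(adj[u] & adj[v])   # triangles through edge (u, v)
    k_max = 2 if support else 0             # any edge is in a 2-truss
    while support:
        e = min(support, key=support.get)   # minimum-support edge
        u, v = tuple(e)
        k_max = max(k_max, support[e] + 2)
        for w in adj[u] & adj[v]:           # removing (u, v) breaks these triangles
            support[frozenset((u, w))] -= 1
            support[frozenset((v, w))] -= 1
        adj[u].discard(v); adj[v].discard(u)
        del support[e]
    return k_max
```

On a triangle with a pendant vertex the bound is 3, and on the complete graph K4 it is 4, matching the maximum clique sizes exactly.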
The costs of evaluating these bounds are:
(16) b_H: O(|V′|);  b_C: O(|V′| + |E′|);  b_T: O(|E′|^1.5)
where V′ and E′ are the vertex set and edge set of the induced graph G′, respectively. The induced graph can be constructed when selecting the pivot, so its construction incurs no extra cost. For b_H, when constructing G′, we can maintain an array of length |V′| recording the number of vertices at each degree value; this can be done in O(|V′|). The value b_H can then be found by scanning this array from tail (where the counts for higher degree values are stored) to head (lower degree values) until h vertices whose degrees are no less than h − 1 are found. This step is also O(|V′|). For b_C, a core decomposition [18] is needed after G′ is found, taking O(|V′| + |E′|). For b_T, the truss decomposition takes O(|E′|^1.5) to find the maximum truss number [19].
We see that the truss bound is the tightest of the three, and therefore it promises the best performance in terms of effectiveness. The intrinsic reason is that the truss structure is more compact (or cohesive) than the other two. (This compactness can also be used to design vertex orders that enhance locality; we give a detailed discussion in Section 5.2.) Users may have their own preferences to balance running time and summary size, so which bound to select depends on to what extent effectiveness can be improved by sacrificing efficiency. In Section 6, we conduct experimental studies to compare the practical performance of the different bounds in terms of both effectiveness and efficiency.
5.2 Locality Analysis
Strong locality implies that two similar cliques are produced consecutively. This means that, for a new clique C, the local visibility computed against the previously output clique should be close to the global visibility computed against the clique most similar to C in the summary. However, such a condition is difficult to meet exactly. In practice, one typical lever is the vertex order we follow to grow the current partial clique. An effective vertex order with strong locality should ensure that each candidate set of the current configuration has a sufficiently compact structure. Here, by compact (or cohesive) we mean that the nodes of a candidate set are well connected with each other, so that cliques in this set have a higher probability of overlapping.
One question arises: in the outer recursion level of BKMCE, since the neighbor set of the only vertex v in the current partial clique Q = {v} is uniquely determined by the graph G, why do we still expect a particular structure in the candidate set of Q? The answer is that if we grow cliques following a fixed vertex order, then when we include v into Q, all the neighbors of v that precede it in the order can be safely moved into the set fini. The key point is that the difference between N(v) and the candidate set of Q is determined by the particular order we choose, which gives us the very opportunity to reshape the structure of the candidate set. The same holds for each level of the recursion.
Now we see that strong locality can be achieved by reordering vertices. In the following, we explain why the degeneracy order can be employed to achieve this goal, even though its initial purpose was to bound the time complexity of BKMCE [12]. Then we propose a novel truss order, based on truss decomposition, to further enhance locality. We begin with the definition of degeneracy.
Definition 10 (Degeneracy).
Given a graph G, the degeneracy of G is the smallest value d such that every subgraph of G contains a vertex whose degree is no more than d.
Degeneracy is naturally related to a special vertex order below.
Definition 11 (Degeneracy Order).
The vertices of a d-degeneracy graph have a degeneracy order, in which each vertex has only d or fewer neighbors after itself.
A degeneracy order can be formed by repeatedly deleting the minimum-degree vertex, together with all its edges, from the current subgraph. Note that this is exactly the core decomposition procedure [18]; thus this order sorts vertices by core number from low to high.
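A sketch of this peeling procedure (the helper name is ours):

```python
def degeneracy_order(adj):
    """Degeneracy order: repeatedly delete a minimum-degree vertex.

    Returns the deletion order; every vertex has at most d neighbors
    among the vertices that come after it, where d is the degeneracy.
    """
    adj = {v: set(n) for v, n in adj.items()}   # local copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # current minimum-degree vertex
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order
```

On a path graph (degeneracy 1), each vertex in the returned order has at most one neighbor placed after it.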
The reason this order enhances locality is straightforward. Consider the particular moment during the MCE procedure when a vertex v is being moved from the candidate set cand to the partial clique Q. Vertex v and all vertices of G that are ordered after v in the degeneracy order induce a subgraph G_v. By the construction of the degeneracy order, v is the minimum-degree vertex in G_v; denote its degree there by k. Thus G_v is a k-core. Since the candidate set is a subset of G_v, we reach the conclusion that cand is contained in a k-core, which is our desired compact structure with strong locality. Although existing works [12] [10] studied using degeneracy to speed up MCE in terms of running time, to the best of our knowledge, our work is the first to exploit the degeneracy order to strengthen locality for the purpose of reducing overlapping cliques.
To further enhance locality, we note that the key is to guarantee that the candidate set is contained in a compact structure, e.g., a core. Hence if we can find a novel vertex order with a stronger guarantee, e.g., that the candidate set is covered by a truss, then we can foresee that its effectiveness will outperform that of the degeneracy order. Following this intuition, we carefully inspect the relationship between core decomposition and degeneracy order, and find that an analogous relationship holds between truss decomposition and a new vertex order (the truss order).
Definition 12 (Truss Order).
Vertices sorted by the truss order satisfy the following property: if k is the maximum value such that some k-truss contains vertex v, then all the vertices ordered after v are also contained in that k-truss.
The truss order can be formed during the procedure of truss decomposition. We first delete the edge e contained in the fewest triangles (this number is called the support of e). After e is removed, the support of every edge that formed a triangle with e decreases by 1. The procedure repeats until all edges are removed. The order in which vertices are peeled off (i.e., lose their last incident edge) is then a valid truss order, because it sorts each vertex by the maximum k such that a k-truss contains it. The same analysis that shows the degeneracy order enhances locality applies to the truss order: by Definition 12, the candidate set is guaranteed to be contained in a truss.
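The edge-peeling procedure above can be sketched as follows. This is an illustrative implementation, not the paper's code: it recomputes supports naively each round for brevity, whereas a real implementation maintains supports incrementally [19].

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <utility>
#include <vector>

// Repeatedly delete the edge with the fewest triangles (its "support");
// a vertex joins the truss order when its last incident edge is removed.
// Edges are stored as pairs (u, v) with u < v.
std::vector<int> truss_order(int n, std::set<std::pair<int, int>> edges) {
    std::vector<int> order;
    std::vector<int> deg(n, 0);
    for (auto& [u, v] : edges) { ++deg[u]; ++deg[v]; }
    // Vertices that start isolated are peeled immediately.
    for (int v = 0; v < n; ++v) if (deg[v] == 0) order.push_back(v);
    // Support of an edge = number of common neighbors of its endpoints.
    auto support = [&](std::pair<int, int> e) {
        int s = 0;
        for (int w = 0; w < n; ++w)
            if (edges.count({std::min(e.first, w), std::max(e.first, w)}) &&
                edges.count({std::min(e.second, w), std::max(e.second, w)})) ++s;
        return s;
    };
    while (!edges.empty()) {
        auto e = *std::min_element(edges.begin(), edges.end(),
            [&](auto a, auto b) { return support(a) < support(b); });
        edges.erase(e);
        for (int v : {e.first, e.second})
            if (--deg[v] == 0) order.push_back(v);  // vertex peeled off
    }
    return order;
}
```

On a triangle with a pendant edge, the pendant vertex (in only a 2-truss) is peeled before the triangle vertices (in a 3-truss), matching the low-to-high ordering described above.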
Since what we desire is a compact structure containing the candidate set, a truss is clearly more favorable than a core. In Section 6 we report experimental results comparing these two vertex orders, with the random order as a baseline, in terms of both effectiveness and efficiency.
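To illustrate how a vertex order confines candidate sets, the following sketch runs a minimal Bron–Kerbosch recursion [15] (without pivoting, for brevity; names are illustrative) whose outer loop processes vertices in a given order. Each top-level candidate set then contains only neighbors ordered later, so with a degeneracy order it lies inside a core, and with a truss order inside a truss.

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

using VSet = std::set<int>;

// Minimal Bron–Kerbosch recursion: R is the partial clique, P the candidate
// set, X the excluded set; a clique is reported when P and X are both empty.
void expand(const std::vector<VSet>& adj, VSet R, VSet P, VSet X,
            std::vector<VSet>& out) {
    if (P.empty() && X.empty()) { out.push_back(R); return; }
    VSet Pcopy = P;
    for (int v : Pcopy) {
        VSet R2 = R; R2.insert(v);
        VSet P2, X2;
        for (int u : P) if (adj[v].count(u)) P2.insert(u);
        for (int u : X) if (adj[v].count(u)) X2.insert(u);
        expand(adj, R2, P2, X2, out);
        P.erase(v); X.insert(v);
    }
}

// Outer loop in a given vertex order: the candidate set for v holds only
// v's neighbors that appear after v, earlier neighbors go to X.
std::vector<VSet> enumerate_in_order(const std::vector<VSet>& adj,
                                     const std::vector<int>& order) {
    int n = adj.size();
    std::vector<int> pos(n);
    for (int i = 0; i < n; ++i) pos[order[i]] = i;
    std::vector<VSet> out;
    for (int v : order) {
        VSet P, X;
        for (int u : adj[v]) (pos[u] > pos[v] ? P : X).insert(u);
        expand(adj, {v}, P, X, out);
    }
    return out;
}
```

Each maximal clique is discovered exactly once, from its earliest vertex in the order; the sampling function of Section 3 would sit in front of each top-level expansion.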
6 Experimental Evaluation
In this section, we use experiments to investigate three research questions. (1) To what extent can the summary size and running time be reduced by our RMCE compared with the baseline RMCE? (2) To what extent can the effectiveness of RMCE be further improved by our newly proposed truss order and truss bound? (3) To what extent do the newly designed truss order and truss bound affect efficiency (both running time and memory requirement)? For short, we refer to the visible MCE algorithm of [13] as the baseline RMCE and to our algorithm simply as RMCE. All algorithms are implemented in C++ and tested on a MacBook Pro with 16GB memory and a 2.6GHz Intel Core i7 CPU. We evaluated both effectiveness (in terms of summary size) and efficiency (in terms of first-result time, total running time and total memory requirement) with the visibility threshold τ varied over a range of values. Both algorithms were implemented with three types of bounds (truss bound (T), core bound (C), H bound (H)) as well as three vertex orders (truss order (U), degeneracy order (I), random order (R)). All results are reported as an average of five runs.
Datasets. We use eight real-world datasets from different domains with various data properties to evaluate the algorithms. Details are shown in Table II, where |V| denotes the number of vertices, |E| the number of edges, and Cliques the total number of maximal cliques, given as reference in the 4th column. The 5th and 6th columns give the summary-size fraction for the baseline RMCE-TU and our RMCE-TU, respectively, under the best configuration (truss bound (T) and truss order (U)). The percentages before and after "/" are the values at the smallest and largest tested τ, respectively. For example, for soc-Epinions1 in the 5th column, the entry means that the summaries produced by the baseline occupy 18.1% and 77.9% of the total number of maximal cliques at the two τ endpoints.
All datasets used in this paper can be found in the Stanford Large Network Dataset Collection (available at http://snap.stanford.edu/data/index.html).
Name  |V|  |E|  Cliques  RMCE-TU (baseline)  RMCE-TU (ours)
soc-Epinions1  75,879  508,837  1,775,065  18.1% / 77.9%  2.5% / 31.3%
loc-Gowalla  196,591  950,327  960,916  33.9% / 85.3%  3.9% / 30.3%
amazon0302  262,111  1,234,877  403,360  68.8% / 95.6%  15.8% / 37.8%
email-EuAll  265,214  420,045  377,750  71.3% / 93.6%  3.7% / 14.0%
com-dblp  317,080  1,049,866  257,552  72.6% / 96.0%  16.8% / 46.2%
web-NotreDame  325,729  1,497,134  495,947  69.2% / 93.7%  5.1% / 17.0%
com-youtube  1,134,890  2,987,624  3,265,951  62.8% / 93.8%  6.0% / 23.5%
soc-pokec  1,632,803  30,622,564  19,376,873  61.1% / 93.3%  6.0% / 27.3%
6.1 Effectiveness
To evaluate the effectiveness of our algorithm, we compare the sizes of the summaries generated by the baseline RMCE and our RMCE in Section 6.1.1 (both with the T bound and U order as defaults). To see to what extent our proposed truss bound and truss order benefit effectiveness, we ran both algorithms with the three orders (U, I, R; bound T as default) in Section 6.1.2, and with the three bounds (T, C, H; order U as default) in Section 6.1.3.
6.1.1 Summary size
We ran the baseline and our algorithm with the best configuration, using the truss bound and truss order; both are denoted RMCE-TU for short. The results are shown in Fig. 1. Our RMCE-TU consistently outperforms the baseline RMCE-TU on all datasets for all τ values.
When τ is large, our RMCE-TU already reduces the number of output cliques substantially vs. the baseline on all datasets; two datasets (Fig. 1(g), 1(h)) and two more (Fig. 1(d), 1(f)) achieve especially large reductions. As τ decreases, the difference becomes more dramatic, i.e., the percentage of reduction monotonically increases. At the smallest tested τ, the reduction is significant on all datasets: it is largest on two datasets (Fig. 1(a), 1(b)), and on four datasets (Fig. 1(d), 1(f), 1(g), 1(h)) the summary size is reduced by more than one order of magnitude. This monotonically increasing trend implies that the advantage of our RMCE-TU over the baseline grows as τ decreases. One reason is that for a small threshold, the baseline includes, with high probability, unnecessary cliques whose visibilities are already greater than τ into the summary, which confirms our intuition in Section 3 about the choice of the sampling probability. Another reason is that for a clique C whose visibility is close to τ, the baseline outputs C immediately, while our RMCE considers the potential that C may be covered by some future clique, and thus more carefully outputs such a clique only with a proper probability. To show the robustness of our proposed method, we tested the algorithms on eight real-world datasets of different scales. The results show that our RMCE achieves relatively better performance on large graphs. For convenience, we focus on the results at the smallest tested τ. Among the four datasets (amazon0302, web-NotreDame, com-youtube, soc-pokec) with the largest reductions, three (web-NotreDame, com-youtube, soc-pokec) are the top three largest graphs among all eight datasets. This implies that our proposed method is more capable of handling contemporary large-scale graphs than the state-of-the-art approach.
6.1.2 Effect of vertex orders
To see to what extent the performance of RMCE can be further improved by employing a vertex order with strong locality, we ran the baseline and our RMCE with three types of orders: random order (R), degeneracy order (I) and truss order (U). The default bound was the truss bound (T). The results are shown in Fig. 2.
Fig. 2 shows that the truss order consistently outperforms the degeneracy order and the random order for both the baseline and our RMCE, while the degeneracy order is generally superior to the random order, with four exceptions (for the baseline: loc-Gowalla and com-dblp at certain τ values; for our RMCE: com-dblp at certain τ values). We now focus on our RMCE. RMCE-TI generally reduces the output size vs. RMCE-TR for all τ values on 7 out of 8 datasets (except web-NotreDame), but the reduction is not significant, especially given the exceptions. However, the reduction of RMCE-TU vs. RMCE-TI is much more dramatic: at the smallest tested τ, the reduction percentage varies across datasets (from com-youtube to soc-pokec), and 5 out of 8 datasets achieve large reductions (except soc-Epinions1, amazon0302, soc-pokec). This confirms our assumption that the effectiveness of RMCE can be further improved by properly reordering vertices. The newly designed truss order outperforms the degeneracy order by a large margin due to the strong locality provided by the cohesiveness of a truss.
6.1.3 Effect of bounds
To see to what extent the effectiveness of RMCE can be further improved by employing a tight bound, we ran the baseline and our RMCE with three different bounds: H bound (H), core bound (C) and truss bound (T). The truss order (U) was the default. The results are shown in Fig. 3.
We see that for both the baseline and our RMCE, effectiveness consistently follows this order: T outperforms C, and C outperforms H. Focusing on our RMCE, the results show that RMCE-CU reduces the summary size vs. RMCE-HU by only a small margin for all τ values on all datasets, whereas the reduction of RMCE-TU vs. RMCE-CU is considerably larger; at the smallest tested τ, it is large on 5 out of 8 datasets (except email-EuAll, web-NotreDame, com-youtube). Fig. 3 confirms that the effectiveness of RMCE can be further improved by employing tight bounds. Although the benefit brought by a good bound is smaller than that brought by a vertex order with strong locality, our proposed truss bound still surpasses the state-of-the-art core bound by a significant margin.
6.2 Efficiency
While our main concern in this paper is the output size, we also report the efficiency of the baseline and our RMCE (with the three types of bounds and orders). To provide a fuller picture, we plot both the total running time and the memory requirement.
6.2.1 Running time
Name  RMCE-TU (baseline)  RMCE-TU (ours)
soc-Epinions1  0.62  0.61
loc-Gowalla  8.55  8.55
amazon0302  0.21  0.21
email-EuAll  1.22  1.20
com-dblp  0.48  0.47
web-NotreDame  5.97  5.97
com-youtube  12.33  12.32
soc-pokec  40.11  39.57
We compared the total running times of the baseline and our RMCE under the default setting of U and T. The results show that our RMCE consistently surpasses the baseline on all eight datasets for all τ values. When τ is large, the time reduction is already notable on all datasets, with three datasets (soc-Epinions1, email-EuAll, com-youtube) benefiting the most. When τ is small, the reduction grows on all datasets, and four of them (soc-Epinions1, amazon0302, email-EuAll, com-youtube) benefit even more.
To understand why our proposed method also benefits efficiency (although our initial target was effectiveness), we recorded the first-result time, i.e., the duration from the start of the algorithm until the first maximal clique is included in the summary. We found that this result varies very little across bounds and vertex orders, so Table III briefly summarizes it (with T and U as defaults). The first-result time takes up only a very small proportion of the total running time, and this holds for both the baseline and our RMCE on all eight datasets for all τ values. In other words, most of the running time is consumed by the enumeration procedure, which implies that the efficiency gain of our RMCE comes from its early pruning power, which speeds up the enumeration recursion. The search tree of our RMCE does not have to be explored as deeply as that of the baseline to determine whether to discard a candidate clique, so less time is wasted on growing cliques that would result in redundancy.
6.2.2 Efficiency of orders
To test the efficiency of the three types of vertex orders, we ran the baseline and our RMCE with orders U, I and R. The default bound was T. We recorded both the total running time and the memory requirement for all experiments. The details are shown in Fig. 5 and Fig. 6.
Running time: Fig. 5 shows that the results of the baseline and our RMCE are very similar on each dataset, hence we focus on the orange curves of our RMCE. RMCE-TU shows the best performance on four out of eight datasets (soc-Epinions1, amazon0302, email-EuAll, com-youtube), with reductions vs. RMCE-TI varying by dataset. It performs similarly to RMCE-TI on three datasets (com-dblp, web-NotreDame, soc-pokec), where the two lines coincide. RMCE-TU shows the worst performance on one special dataset, loc-Gowalla, because of its small degeneracy. These results imply that, benefiting from its summarization effectiveness, RMCE-TU performs comparably to or even better than the state-of-the-art order on a variety of real-world datasets. However, the degeneracy order remains the best choice for graphs with small degeneracy, for which that order was originally designed.
Memory requirement: Fig. 6 shows the memory requirement under different orders. The truss order U consistently outperforms the other two for both the baseline and our RMCE. The memory reduction of RMCE-TU vs. RMCE-TI varies little as τ changes and is substantial on all eight datasets, with two datasets (email-EuAll, com-youtube) benefiting the most. The memory results closely resemble the output-size results. This is because the memory requirement depends heavily on the depth of recursion: more deep branches result in higher memory consumption. The strong locality, and thus the early pruning power, of RMCE-TU prevents redundant branches from growing unnecessarily deep; hence the memory requirement is reduced significantly, for the same reason the output summary size is reduced.
6.2.3 Efficiency of bounds
We test the efficiency of different bounds (T, C, H) with default vertex order U. Both the total running time (Fig. 7) and memory requirement (Fig. 8) are recorded.
Running time: Fig. 7 shows that for both the baseline and our RMCE, the H bound is the fastest choice on five out of eight datasets (the exceptions being soc-Epinions1, amazon0302, email-EuAll), while the T bound runs most slowly on seven out of eight datasets (except soc-Epinions1). However, the time gap between RMCE-TU and RMCE-CU narrows as τ decreases on all datasets. This is consistent with Fig. 3: since the summary reduction increases as τ decreases, the benefit of early pruning gradually offsets the cost of bound computation. This explains why RMCE-TU shows the best performance when τ is small.
Memory requirement: As explained in Section 6.2.2, the memory results resemble the output-size results. The ranking of the three bounds for both the baseline and our RMCE is clear: T is better than C, and C is better than H. Focusing on our RMCE, the memory reduction of RMCE-TU vs. RMCE-CU is substantial on all eight datasets, with three datasets (soc-Epinions1, com-youtube, soc-pokec) benefiting the most. This reduction mainly arises because a tight bound, and thus early pruning, keeps redundant search branches from growing unnecessarily deep, which shows the superiority of the truss bound.
6.3 Summary
After a full discussion of all experiments, we can now answer the three questions at the beginning of Section 6:
(1) Our RMCE consistently outperforms the baseline in both effectiveness and efficiency on all datasets for all τ values. The output reduction can be up to one order of magnitude, and the time reduction is largest at small τ. Our RMCE achieves relatively better performance on large graphs than the baseline.
(2) When implemented in RMCE, the truss order substantially reduces the output size vs. the state-of-the-art degeneracy order at small τ, and the truss bound likewise reduces it vs. the core bound. The boost from the vertex order is more significant than that from the bound.
(3) The running time of the truss order implemented in RMCE is comparable to or better than that of the degeneracy order, except on graphs with small degeneracy. The memory requirement of the truss order is consistently the best, with a substantial reduction vs. the degeneracy order. Although the truss bound is surpassed in running time by the core bound and the H bound, the gap narrows as τ decreases. The memory requirement of the truss bound is also the best, with a significant reduction vs. the core bound.
7 Related Work
The number of maximal cliques in an undirected graph is proved to be exponential [16]. Bron and Kerbosch [15] and Akkoyunlu [20] introduced backtracking algorithms to enumerate all maximal cliques in a graph. There are sufficient studies focusing on the efficiency of MCE. To effectively reduce the search space, pruning strategies were introduced in [6, 21, 7] by selecting good pivots; the key idea is to avoid searching unnecessary branches that lead to duplicated results. Degeneracy vertex ordering was introduced by [12] to bound the time complexity: with the degeneracy order, the size of the candidate set at the first recursion level is bounded by the degeneracy, and thus the candidate sets at all depths of the search tree are bounded. Pivot selection strategies were studied by [10, 11] to optimize the algorithms. Naudé [11] relaxed the restriction on pivot selection while keeping the time complexity unchanged. Segundo et al. [10] improved the practical performance by avoiding excessive time spent selecting the pivot. With distributed computing paradigms, scalable and parallel algorithms were designed for MCE in [8, 9, 22, 23]. Schmidt et al. [8] decomposed the search tree to enable parallelization. Xu et al. [9] proposed a distributed MCE algorithm based on a share-nothing architecture. Blanuša et al. [22] developed a scalable parallel implementation using hash-join-based set-intersection algorithms within MCE. Das et al. [23] designed shared-memory parallel algorithms both for MCE and for maintaining the set of all maximal cliques on a dynamic graph. The I/O performance of MCE in massive networks was improved by [24, 25]. The external-memory algorithm for MCE was first introduced by [24] to bound the memory consumption. A partition-based MCE algorithm was designed by [25] to reduce the memory used for processing large graphs.
Maximal spatial clique enumeration was studied by [26], where geometric properties were used to enhance the enumeration efficiency. Dynamic maximal clique enumeration was studied in [27, 28, 29], where the graph structure can evolve mildly; all three works consider the dynamic case where edges can be added or deleted. For uncertain graphs, where a graph is a probability distribution over a set of deterministic graphs, the uncertain version of MCE was designed by [30, 31]. Mukherjee et al. [31] designed an algorithm to enumerate all maximal cliques in an uncertain graph. The size of an uncertain graph can be reduced by core-based algorithms proposed by [30]. The top-k maximal clique finding problem was also studied by [32] on uncertain graphs. While these efficient approaches reduce the running time of MCE, the bottleneck in applications is the large output size, which is our main focus.
There is a large volume of work [33, 34, 35, 36, 37] on the maximum clique problem, which aims to find a clique of the largest size. An approximate coloring technique was employed by [33] to bound the maximum clique size, and was further improved by [34] and [35]. Lu et al. [36] proposed a randomized algorithm with a binary search technique to find the maximum clique in massive graphs, while [37] studied this problem over sparse graphs by transforming maximum clique computation on a sparse graph into computation over dense subgraphs. Although the maximum clique is closely related to maximal cliques, MCE and maximum clique finding are two distinct problems, and there is no need to summarize the output of the latter since the number of maximum cliques is typically small.
Summarization has also been studied for frequent pattern mining [38, 39, 40]. Afrati et al. [38] studied how to find at most k patterns that span a collection of patterns, approximating the original pattern set. Yan et al. [39] proposed a profile-based approach to summarize all frequent patterns by representatives. Pattern redundancy was introduced by [40], which studied how to extract redundancy-aware top-k significant patterns. While cliques share great similarity with frequent patterns, these algorithms cannot be used to summarize maximal cliques efficiently due to their offline nature. Some studies focus on online summarization. Saha et al. [41] and Ausiello et al. [42] studied how to find diversified sets to represent all sets in a streaming fashion, based on which [43] introduced an online algorithm to report diversified top-k maximal cliques. In these works, k is normally small, and coverage is not the focus.
Our work is closest to [13], which introduced the visible summary of maximal cliques. Beyond giving a better sampling function than our earlier version [44], we further discuss the optimality conditions and propose to approach the optimum by introducing the novel truss vertex order and truss bound.
8 Conclusion and Future Work
In this paper, we have studied how to report a summary of less overlapping maximal cliques during the online maximal clique enumeration process. We have proposed the best sampling strategy so far, which guarantees that the summary represents all the maximal cliques in expectation while keeping the summary sufficiently concise, i.e., each maximal clique is, in expectation, covered by at least one maximal clique in the summary with a ratio of at least τ (τ is given by a user and reflects the user's tolerance of overlap). We have proved the optimality of this sampling approach under two conditions (ideal bound estimation and sufficiently strong locality), and proposed the novel truss order as well as the truss bound to approach the optimum. Experimental studies have shown that the new strategy outperforms the state-of-the-art approach in both effectiveness and efficiency on eight real-world datasets. Future work could be directed towards approaching the optimality conditions further. It would also be interesting to solve the problem in parallel, given that maximal clique enumeration is expensive on large graphs.
Acknowledgments
The work was supported by Australian Research Council Discovery Projects DP170104747 and DP180100212. We would like to thank Yujun Dai for her effort on the earlier version [44].
References
 [1] Z. Lu, J. Wahlström, and A. Nehorai, “Community detection in complex networks via clique conductance,” Scientific Reports, vol. 8, no. 1, pp. 5982–5997, 2018.
 [2] O. Rokhlenko, Y. Wexler, and Z. Yakhini, “Similarities and differences of gene expression in yeast stress conditions,” Bioinformatics, vol. 23, no. 2, pp. 184–190, 2007.
 [3] B. Zhang, B.-H. Park, T. Karpinets, and N. F. Samatova, “From pull-down data to protein interaction networks and complexes with biological relevance,” Bioinformatics, vol. 24, no. 7, pp. 979–986, 2008.
 [4] A. Tandon and K. Karlapalem, “Agent strategies for the hide-and-seek game,” in Proceedings of the 17th International Conference on Autonomous Agents and Multi-Agent Systems. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2018, pp. 2088–2090.
 [5] N. Berry, T. Ko, T. Moy, J. Smrcka, J. Turnley, and B. Wu, “Emergent clique formation in terrorist recruitment,” in AAAI-04 Workshop on Agent Organizations: Theory and Practice, 2004.
 [6] I. Koch, “Enumerating all connected maximal common subgraphs in two graphs,” Theoretical Computer Science, vol. 250, no. 1, pp. 1–30, Jan. 2001.
 [7] E. Tomita, A. Tanaka, and H. Takahashi, “The worst-case time complexity for generating all maximal cliques and computational experiments,” Theoretical Computer Science, vol. 363, no. 1, pp. 28–42, Oct. 2006.
 [8] M. C. Schmidt, N. F. Samatova, K. Thomas, and B.H. Park, “A scalable, parallel algorithm for maximal clique enumeration,” Journal of Parallel and Distributed Computing, vol. 69, no. 4, pp. 417–428, Apr. 2009.
 [9] Y. Xu, J. Cheng, and A. W. Fu, “Distributed maximal clique computation and management,” IEEE Transactions on Services Computing, vol. 9, no. 1, pp. 110–122, Jan. 2016.
 [10] P. San Segundo, J. Artieda, and D. Strash, “Efficiently enumerating all maximal cliques with bit-parallelism,” Computers & Operations Research, vol. 92, pp. 37–46, Apr. 2018.
 [11] K. A. Naudé, “Refined pivot selection for maximal clique enumeration in graphs,” Theoretical Computer Science, vol. 613, pp. 28–37, Feb. 2016.
 [12] D. Eppstein, M. Löffler, and D. Strash, “Listing all maximal cliques in large sparse real-world graphs,” Journal of Experimental Algorithmics, vol. 18, pp. 3.1–3.21, Dec. 2013.
 [13] J. Wang, J. Cheng, and A. W.-C. Fu, “Redundancy-aware maximal cliques,” in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Chicago, Illinois, USA: ACM Press, 2013, pp. 122–130.
 [14] L. G. Valiant, “The complexity of enumeration and reliability problems,” SIAM Journal on Computing, vol. 8, no. 3, pp. 410–421, 1979.
 [15] C. Bron and J. Kerbosch, “Algorithm 457: finding all cliques of an undirected graph,” Communications of the ACM, vol. 16, no. 9, pp. 575–577, Sep. 1973.
 [16] J. W. Moon and L. Moser, “On cliques in graphs,” Israel Journal of Mathematics, vol. 3, no. 1, pp. 23–28, Mar. 1965.
 [17] S. B. Seidman, “Network structure and minimum degree,” Social networks, vol. 5, no. 3, pp. 269–287, 1983.
 [18] W. Khaouid, M. Barsky, V. Srinivasan, and A. Thomo, “K-core decomposition of large networks on a single PC,” Proceedings of the VLDB Endowment, vol. 9, no. 1, pp. 13–23, Sep. 2015.
 [19] J. Wang and J. Cheng, “Truss decomposition in massive networks,” arXiv preprint arXiv:1205.6693, 2012.
 [20] E. Akkoyunlu, “The enumeration of maximal cliques of large graphs,” SIAM Journal on Computing, vol. 2, no. 1, pp. 1–6, Mar. 1973.
 [21] F. Cazals and C. Karande, “A note on the problem of reporting maximal cliques,” Theoretical Computer Science, vol. 407, no. 1–3, pp. 564–568, Nov. 2008.
 [22] J. Blanuša, R. Stoica, P. Ienne, and K. Atasu, “Many-core clique enumeration with fast set intersections,” Proceedings of the VLDB Endowment, vol. 13, no. 12, pp. 2676–2690, 2020.
 [23] A. Das, S.-V. Sanei-Mehri, and S. Tirthapura, “Shared-memory parallel maximal clique enumeration from static and dynamic graphs,” ACM Transactions on Parallel Computing (TOPC), vol. 7, no. 1, pp. 1–28, 2020.
 [24] J. Cheng, Y. Ke, A. W.-C. Fu, J. X. Yu, and L. Zhu, “Finding maximal cliques in massive networks by H*-graph,” in Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data. New York, USA: ACM, 2010, pp. 447–458.
 [25] J. Cheng, L. Zhu, Y. Ke, and S. Chu, “Fast algorithms for maximal clique enumeration with limited memory,” in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Beijing, China: ACM, 2012, pp. 1240–1248.
 [26] C. Zhang, Y. Zhang, W. Zhang, L. Qin, and J. Yang, “Efficient maximal spatial clique enumeration,” in 2019 IEEE 35th International Conference on Data Engineering (ICDE), Apr. 2019, pp. 878–889.
 [27] A. Das, M. Svendsen, and S. Tirthapura, “Incremental maintenance of maximal cliques in a dynamic graph,” The VLDB Journal, vol. 28, no. 3, pp. 351–375, 2019.
 [28] V. Stix, “Finding All Maximal Cliques in Dynamic Graphs,” Computational Optimization and Applications, vol. 27, no. 2, pp. 173–186, Feb. 2004.
 [29] S. Sun, Y. Wang, W. Liao, and W. Wang, “Mining maximal cliques on dynamic graphs efficiently by local strategies,” in 2017 IEEE 33rd International Conference on Data Engineering (ICDE), Apr. 2017, pp. 115–118.
 [30] R. Li, Q. Dai, G. Wang, Z. Ming, L. Qin, and J. X. Yu, “Improved algorithms for maximal clique search in uncertain networks,” in 2019 IEEE 35th International Conference on Data Engineering (ICDE), Apr. 2019, pp. 1178–1189.
 [31] A. P. Mukherjee, P. Xu, and S. Tirthapura, “Mining maximal cliques from an uncertain graph,” in 2015 IEEE 31st International Conference on Data Engineering (ICDE), Apr. 2015, pp. 243–254.
 [32] Z. Zou, J. Li, H. Gao, and S. Zhang, “Finding top-k maximal cliques in an uncertain graph,” in 2010 IEEE 26th International Conference on Data Engineering (ICDE), Mar. 2010, pp. 649–652.
 [33] E. Tomita and T. Kameda, “An efficient branch-and-bound algorithm for finding a maximum clique with computational experiments,” Journal of Global Optimization, vol. 37, no. 1, pp. 95–111, Jan. 2007.
 [34] E. Tomita, Y. Sutani, T. Higashi, S. Takahashi, and M. Wakatsuki, “A simple and faster branch-and-bound algorithm for finding a maximum clique,” in WALCOM: Algorithms and Computation, ser. Lecture Notes in Computer Science, M. S. Rahman and S. Fujita, Eds. Springer Berlin Heidelberg, 2010, pp. 191–203.
 [35] E. Tomita, K. Yoshida, T. Hatta, A. Nagao, H. Ito, and M. Wakatsuki, “A much faster branch-and-bound algorithm for finding a maximum clique,” in Frontiers in Algorithmics, ser. Lecture Notes in Computer Science, D. Zhu and S. Bereg, Eds. Springer International Publishing, 2016, pp. 215–226.
 [36] C. Lu, J. X. Yu, H. Wei, and Y. Zhang, “Finding the maximum clique in massive graphs,” Proceedings of the VLDB Endowment, vol. 10, no. 11, pp. 1538–1549, Aug. 2017.
 [37] L. Chang, “Efficient maximum clique computation over large sparse graphs,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York, USA: ACM, 2019, pp. 529–538.
 [38] F. Afrati, A. Gionis, and H. Mannila, “Approximating a collection of frequent sets,” in Proceedings of the 2004 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Seattle, WA, USA: ACM Press, 2004, pp. 12–19.
 [39] X. Yan, H. Cheng, J. Han, and D. Xin, “Summarizing itemset patterns: a profile-based approach,” in Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. Chicago, Illinois, USA: ACM Press, 2005, pp. 314–323.
 [40] D. Xin, H. Cheng, X. Yan, and J. Han, “Extracting redundancy-aware top-k patterns,” in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Philadelphia, PA, USA: ACM Press, 2006, pp. 444–453.
 [41] B. Saha and L. Getoor, “On maximum coverage in the streaming model & application to multi-topic Blog-Watch,” in Proceedings of the 2009 SIAM International Conference on Data Mining, C. Apte, H. Park, K. Wang, and M. J. Zaki, Eds. Philadelphia, PA: Society for Industrial and Applied Mathematics, Apr. 2009, pp. 697–708.
 [42] G. Ausiello, N. Boria, A. Giannakos, G. Lucarelli, and V. T. Paschos, “Online maximum k-coverage,” in Fundamentals of Computation Theory, O. Owe, M. Steffen, and J. A. Telle, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, vol. 6914, pp. 181–192.
 [43] L. Yuan, L. Qin, X. Lin, L. Chang, and W. Zhang, “Diversified top-k clique search,” The VLDB Journal, vol. 25, no. 2, pp. 171–196, Apr. 2016.
 [44] X. Li, R. Zhou, Y. Dai, L. Chen, C. Liu, Q. He, and Y. Yang, “Mining maximal clique summary with effective sampling,” in 2019 IEEE International Conference on Data Mining (ICDM). IEEE, 2019, pp. 1198–1203.