Risk-Averse Matchings over Uncertain Graph Databases

January 9, 2018 · Charalampos E. Tsourakakis et al. · Boston University, Yale University, University of Washington

A large number of applications, such as querying sensor networks and analyzing protein-protein interaction (PPI) networks, rely on mining uncertain graph and hypergraph databases. In this work we study the following problem: given an uncertain, weighted (hyper)graph, how can we efficiently find a (hyper)matching with high expected reward and low risk? This problem naturally arises in the context of several important applications, such as online dating, kidney exchanges, and team formation. We introduce a novel formulation for finding matchings with maximum expected reward and bounded risk under a general model of uncertain weighted (hyper)graphs that we introduce in this work. Our model generalizes probabilistic models used in prior work, and captures both continuous and discrete probability distributions, thus allowing us to handle privacy-related applications that inject appropriately distributed noise into (hyper)edge weights. Given that our optimization problem is NP-hard, we turn our attention to designing efficient approximation algorithms. For the case of uncertain weighted graphs, we provide a 1/3-approximation algorithm, and a 1/5-approximation algorithm with near-optimal running time. For the case of uncertain weighted hypergraphs, we provide an Ω(1/k)-approximation algorithm, where k is the rank of the hypergraph (i.e., any hyperedge includes at most k nodes), that runs in almost (modulo log factors) linear time. We complement our theoretical results by testing our approximation algorithms on a wide variety of synthetic experiments, where we observe, in a controlled setting, interesting findings on the trade-off between reward and risk. We also provide an application of our formulation for recommending teams that are likely to collaborate and have high impact.

1 Introduction

Graphs model a wide variety of datasets that consist of a set of entities and pairwise relations among them. In several real-world applications, these relations are inherently uncertain. For example, protein-protein interaction (PPI) networks are associated with uncertainty since protein interactions are obtained via noisy, error-prone measurements [4]. In privacy applications deterministic edge weights become appropriately defined random variables [9, 29], in dating applications each recommended link is associated with the probability that a date will be successful [13], in viral marketing the extent to which an idea propagates through a network depends on the 'influence probability' of each social interaction [30], in link prediction possible interactions are assigned probabilities [37, 52], and in entity resolution a classifier outputs, for each pair of entities, a probability that they refer to the same object.

Mining uncertain graphs poses significant challenges. Simple queries, such as distance queries, that are tractable on deterministic graphs become #P-complete problems [55] on uncertain graphs [24]. Furthermore, approaches that maximize the expected value of a given objective typically yield high-risk solutions. On the other hand, risk-averse methods are based on obtaining several graph samples, a procedure that is computationally expensive, or even prohibitive, for large-scale uncertain graphs.

Before we discuss the main focus of this work, two remarks about the uncertain graph models used in prior work are in order. First, the datasets used in the majority of prior work are uncertain, unweighted graphs; there appears to be less work related to uncertain, weighted hypergraphs, which can model a wider variety of datasets, specifically those containing more than just pairwise relationships (i.e., hyperedges). Secondly, the model of uncertain graphs used in prior work [11, 23, 31, 32, 34, 38, 42, 43, 44, 45] is the model of inhomogeneous random graphs [10]. More formally, let $\mathcal{G} = (V, E, p)$ be an uncertain graph, where $p: E \rightarrow (0, 1]$ is the function that assigns a probability of success to each edge independently from the other edges. The possible-world semantics [10, 15] interprets $\mathcal{G}$ as a set of possible deterministic graphs (worlds), each defined by a subset of $E$. The probability of observing any possible world $G = (V, E_G)$, $E_G \subseteq E$, is

$$\Pr[G] = \prod_{e \in E_G} p(e) \prod_{e \in E \setminus E_G} \big(1 - p(e)\big).$$

This model restricts the distribution of each edge to be a Bernoulli distribution, and does not capture important applications such as privacy applications, where noise is injected into the weight of each edge [9, 29].

In this work, we focus on risk-averse matchings over uncertain (hyper)graphs. To motivate our problem, consider Figure 1, which shows a probabilistic graph (i.e., a 2-regular hypergraph) with two perfect matchings, $M_1$ and $M_2$. Each edge $e$ follows a Bernoulli distribution with success probability $p(e)$, and is associated with a reward $w(e)$ that is obtained only when the edge is successfully realized; these two parameters annotate each edge in Figure 1. The maximum weight matching in expectation is $M_1$. However, with probability $\frac{1}{4}$ the reward we receive from $M_1$ equals zero, whereas the second matching $M_2$ has expected reward equal to 80 with probability 1. In other words, matching $M_1$ offers potentially higher reward but entails higher risk than $M_2$. Indeed, in many situations with asymmetric rewards, one observes that high reward solutions are accompanied by higher risks, and that such solutions may be shunned by agents in favor of safer options [33].

Figure 1: A probabilistic graph on nodes A, B, C, D; each edge is annotated with $(p(e), w(e))$, its probability and its reward/weight. The matching $M_1$ has higher expected weight than $M_2$. However, the reward of the former matching is 0 with probability $\frac{1}{4}$, while the reward of the latter matching is 80 with probability 1. For details, see Section 1.

Another way to observe that matching $M_1$ entails greater risk is to draw graph samples from this probabilistic graph multiple times, and observe that around 25% of the realizations of $M_1$ result in zero reward. However, sampling is computationally expensive on large-scale uncertain graphs. Furthermore, in order to obtain statistical guarantees, a large number of samples may be needed [43], which makes the approach computationally intensive or infeasible even for medium-scale graphs. Finally, it is challenging, and sometimes not clear at all, how to aggregate different samples [43]. These drawbacks are well known to the database community, and recently Parchas et al. [43] suggested a heuristic to extract representative instances of uncertain graphs. While their work makes an important practical contribution, their method is an intuitive heuristic whose theoretical guarantees and worst-case running time are not well understood [43].

Motivated by these concerns, we focus on the following central question:

How can we design efficient, risk-averse algorithms with solid theoretical guarantees for finding maximum weight matchings in uncertain weighted graphs and hypergraphs?

This question is well-motivated, as it naturally arises in several important applications. In online dating applications, a classifier may output the probability that two people will match successfully [54]. In kidney exchange markets, a kidney exchange is successful according to some probability distribution that is determined by a series of medical tests. Typically, this distribution is unknown, but its parameters, such as the mean and the variance, can be empirically estimated [13]. Finally, the success of any large organization that employs skilled human resources crucially depends on the choice of teams that will work on its various projects. Basic team formation algorithms output a set of teams (i.e., hyperedges) that combine a certain set of desired skills [3, 21, 25, 26, 36, 40]. A classifier can leverage features that relate to crowd psychology, conformity, group decision-making, valued diversity, mutual trust, and effective and participative leadership [28] to estimate the probability of success of a team.

In detail, our contributions are summarized as follows.

Novel Model and Formulation. We propose a general model for weighted uncertain (hyper)graphs, and a novel formulation for risk-averse maximum matchings. Our goal is to select (hyper)edges that have high expected reward, but also bounded risk of failure. Our problem is a novel variation of the well-studied stochastic matching problem [5, 13].

Approximation algorithms. We design efficient approximation algorithms. For the case of uncertain graphs, using Edmonds' blossom algorithm [17] as a black-box, we provide a risk-averse solution that is a $\frac{1}{3}$-approximation of the optimal risk-averse solution. Similarly, using a greedy matching algorithm as a black-box we obtain a risk-averse $\frac{1}{5}$-approximation. For hypergraphs of rank $k$ (i.e., any hyperedge contains at most $k$ nodes) we obtain a risk-averse $\Omega(\frac{1}{k})$-approximation guarantee. Our algorithms are risk-averse, do not need to draw graph samples, and come with solid theoretical guarantees. Perhaps more importantly, the proposed algorithms that are based on greedy matchings run in $O(m \log^2 m + n)$ time, where $n, m$ represent the number of nodes and (hyper)edges in the uncertain (hyper)graph, respectively; this makes them easy to deploy on large-scale real-world networks such as the ones considered in our experiments (see Section 4).

Experimental evaluation. We evaluate our proposed algorithm in a wide variety of synthetic experiments, where we observe interesting findings on the trade-offs between reward and risk. There appears to be little (or even no) empirical work on uncertain, weighted hypergraphs. We use the Digital Bibliography and Library Project (DBLP) dataset to create a hypergraph where each node is an author, each hyperedge is a team of co-authors of a paper, the probability of a hyperedge is the probability of collaboration estimated from historical data, and the weight of a hyperedge is its citation count. This uncertain hypergraph is particularly interesting, as there exist hyperedges with high reward (citations) whose authors have a low probability of collaborating, while, on the other hand, there exist papers with a decent number of citations whose co-authors collaborate consistently. Intuitively, the more risk-averse we are, the more we should prefer the latter hyperedges. We evaluate our proposed method on this real dataset, where we observe several interesting findings. The code and the datasets are publicly available at https://github.com/tsourolampis/risk-averse-graph-matchings.

2 Related Work

Uncertain graphs. Uncertain graphs naturally model various datasets, including protein-protein interactions [4, 35], kidney exchanges [46], dating applications [13], sensor networks whose connectivity links are uncertain due to various kinds of failures [48], entity resolution [42], viral marketing [30], and privacy applications [9].

Given the increasing number of applications that involve uncertain graphs, researchers have put considerable effort into developing algorithmic tools that tackle several important graph mining problems; see [11, 23, 31, 32, 34, 38, 42, 43, 44, 45]. However, with few exceptions, these methods suffer from a critical drawback: either they are not risk-averse, or they rely on obtaining many graph samples. Risk-aversion has been implicitly discussed by Liu et al. in their work on reliable clustering [38], where the authors show that interpreting probabilities as weights does not result in good clusterings. Jin et al. provide a risk-averse algorithm for distance queries on uncertain graphs [24]. Parchas et al. have proposed a heuristic to extract a good possible world in order to combine risk-aversion with efficiency [43]. However, their work comes with no theoretical guarantees.

Graph matching is a major topic in combinatorial optimization; the interested reader may consult Lovász and Plummer [39] for a solid exposition. Finding a maximum weight matching in a weighted graph is solvable in polynomial time [17, 20]. A faster algorithm sorts the edges by decreasing weight and adds them to the matching greedily; this greedy algorithm is a $\frac{1}{2}$-approximation to the optimum matching. Finding a maximum weight hypergraph matching is NP-hard, even in unweighted 3-uniform hypergraphs (a.k.a. 3-dimensional matching) [27]. The greedy algorithm provides a $\frac{1}{k}$-approximation (intuitively, for each hyperedge we greedily add to the matching, we lose at most $k$ hyperedges of no larger weight), where $k$ is the maximum cardinality of a hyperedge.
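A minimal sketch of this greedy procedure for weighted hypergraph matching (the graph case is simply rank $k = 2$); the representation of hyperedges as weight/node-tuple pairs is our own illustrative choice:

```python
def greedy_matching(hyperedges):
    """Greedy 1/k-approximate maximum weight hypergraph matching.

    hyperedges: list of (weight, nodes) pairs, where nodes is an
    iterable of node identifiers (a rank-2 hypergraph is a graph).
    Returns a list of (weight, nodes) pairs forming a matching.
    """
    matched_nodes = set()
    matching = []
    # Consider hyperedges in decreasing order of weight.
    for weight, nodes in sorted(hyperedges, key=lambda e: -e[0]):
        nodes = tuple(nodes)
        # Add the hyperedge only if it touches no already-matched node.
        if all(v not in matched_nodes for v in nodes):
            matching.append((weight, nodes))
            matched_nodes.update(nodes)
    return matching

# Example: a triangle with weights 3, 2, 2; greedy picks the weight-3 edge.
print(greedy_matching([(3, ('a', 'b')), (2, ('b', 'c')), (2, ('a', 'c'))]))
```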

Stochastic Matchings. Various stochastic versions of graph matching have been studied in the literature. We discuss two papers that lie close to our work [5, 13]. Both of these works consider a random graph model with a Bernoulli distribution on each edge, i.e., a graph $G(V, E)$ on $n$ nodes, where each edge $e$ exists with probability $p_e$, independently of the other edges. In contrast to our work, these models allow the central designer to probe each edge to verify its realization: if the edge exists, it is irrevocably added to the matching. While Chen et al. [13] provide a constant factor approximation on unweighted graphs based on a simple greedy approach, Bansal et al. [5] obtain a constant factor approximation even for weighted graphs using an LP-rounding algorithm. On the other hand, our work focuses on designing fast algorithms that achieve good matchings with bounded risk on weighted graphs without probing the edges. Finally, since the hypergraph matching problem is also known as the set packing problem, the above problems are special cases of stochastic set packing [16].

Risk-averse optimization is a major topic in operations research, control theory, and finance. The typical setting of risk-averse optimization is the following: suppose that $F(x, \xi)$ is a cost function of a decision variable $x$ and a random variable $\xi$. Different choices of $x$ lead to different values of the mean cost $\mathbb{E}[F(x, \xi)]$. There is also a risk function $R(x)$ associated with each choice of $x$. The goal of risk-averse optimization is to choose $x$ such that both $\mathbb{E}[F(x, \xi)]$ and $R(x)$ are small. This framework captures optimization problems that arise in a number of environments with uncertainty. For example, modern portfolio theories of investment are based on the idea that risk-averse investors should maximize expected profit conditional on a given level of market risk; this is intuitive, as higher rewards come with higher risk in markets [41]. Other examples of risk-averse optimization include risk-averse Markov decision processes [47], risk-averse stochastic shortest paths [6], risk-averse linear/quadratic/Gaussian control [56], risk-averse covering of integer programs [49, 50], and risk-averse bandit arm selection [57].

3 Model and Proposed Method

Uncertain Weighted Bernoulli hypergraphs. Before we define a general model for uncertain weighted hypergraphs that allows for both continuous and discrete probability distributions, we introduce a simple probabilistic model for weighted uncertain hypergraphs that generalizes the existing model for random graphs. Each hyperedge $e$ is distributed as a weighted Bernoulli variable, independently from the rest: with probability $p(e)$ it exists and its weight/reward is equal to $w(e)$, and with the remaining probability $1 - p(e)$ it does not exist, i.e., its weight is zero. More formally, let $\mathcal{H} = ([n], E, p, w)$ be an uncertain hypergraph on $n$ nodes with $m = |E|$ potential hyperedges, where $p: E \rightarrow (0, 1]$ is the function that assigns a probability of existence to each hyperedge independently from the other hyperedges, and $w: E \rightarrow \mathbb{R}_{>0}$ assigns a weight to each hyperedge. The value $w(e)$ is the reward we receive from hyperedge $e$ if it exists. Let $r(e) = p(e)w(e)$ be the expected reward from hyperedge $e$. According to the possible-world semantics [10, 15], the probability of observing any possible world $H = ([n], E_H)$, $E_H \subseteq E$, where each hyperedge $e \in E_H$ has weight $w(e)$, is

$$\Pr[H] = \prod_{e \in E_H} p(e) \prod_{e \in E \setminus E_H} \big(1 - p(e)\big).$$
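For concreteness, a minimal sketch of sampling one possible world from an uncertain weighted Bernoulli hypergraph under this model; the data layout is our own illustrative choice:

```python
import random

def sample_possible_world(hyperedges, seed=None):
    """Sample one possible world of a weighted Bernoulli hypergraph.

    hyperedges: dict mapping a hyperedge (tuple of nodes) to (p, w).
    Returns a dict mapping each realized hyperedge to its weight w.
    """
    rng = random.Random(seed)
    # Each hyperedge is realized independently with probability p.
    return {e: w for e, (p, w) in hyperedges.items() if rng.random() < p}

H = {('a', 'b'): (0.5, 100.0), ('c', 'd'): (1.0, 40.0)}
print(sample_possible_world(H, seed=7))
```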

Uncertain Weighted hypergraphs. More generally, let $\mathcal{H} = ([n], E, \{f_e\}_{e \in E})$ be an uncertain hypergraph on $n$ nodes, with hyperedge set $E$. The reward $w(e)$ of each hyperedge $e \in E$ is drawn according to some probability distribution with density $f_e$ and parameters $\theta_e$, i.e., $w(e) \sim f_e(x; \theta_e)$. We assume that the reward for each hyperedge is drawn independently from the rest; each probability distribution is assumed to have finite mean and finite variance. Given this model, we define the probability (density) of a given hypergraph $H$ with weights $\{x_e\}_{e \in E}$ on the hyperedges as

$$\Pr[H] = \prod_{e \in E} f_e(x_e; \theta_e).$$

For example, suppose the reward of hyperedge $e$ is distributed as a normal random variable $w(e) \sim N(\mu_e, \sigma_e^2)$. Then, the probability (density) of a hypergraph $H$ with weights $\{x_e\}_{e \in E}$ is

$$\Pr[H] = \prod_{e \in E} \frac{1}{\sqrt{2\pi}\,\sigma_e} \exp\left( -\frac{(x_e - \mu_e)^2}{2\sigma_e^2} \right).$$

Our model allows for both discrete and continuous distributions, as well as mixed discrete and continuous distributions. In our experiments (Section 4) we focus on the weighted Bernoulli and Gaussian cases.

Problem definition. In contrast to prior work on stochastic matchings [5, 13], we do not probe edges to verify their existence; our goal is to output a matching with high expected reward and low variance. Formally, let $\mathcal{M}$ be the set of all matchings of the hyperedge set $E$. The total reward associated with a matching $M \in \mathcal{M}$ is its expected reward, i.e.,

$$r(M) = \mathbb{E}\Big[\sum_{e \in M} w(e)\Big] = \sum_{e \in M} \mu_e.$$

Similarly, the associated risk, in terms of the standard deviation, is defined as

$$\mathrm{risk}(M) = \sum_{e \in M} \sigma_e,$$

where $\sigma_e$ denotes the standard deviation of the distribution $f_e$.

Given an uncertain weighted hypergraph and a risk upper-bound $B$, our goal is to maximize the expected reward over all matchings with risk at most $B$. We refer to this problem as the Bounded Risk Maximum Weighted Matching (BR-MWM) problem. Specifically,

$$\max_{M \in \mathcal{M}} r(M) \quad \text{subject to} \quad \mathrm{risk}(M) \le B. \tag{1}$$

In the case of uncertain weighted Bernoulli hypergraphs, Formulation (1) becomes

$$\max_{M \in \mathcal{M}} \sum_{e \in M} p(e)w(e) \quad \text{subject to} \quad \sum_{e \in M} w(e)\sqrt{p(e)\big(1 - p(e)\big)} \le B, \tag{2}$$

and in the case of uncertain weighted Gaussian hypergraphs

$$\max_{M \in \mathcal{M}} \sum_{e \in M} \mu_e \quad \text{subject to} \quad \sum_{e \in M} \sigma_e \le B. \tag{3}$$
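A small sketch of how the per-hyperedge mean and standard deviation are derived for the two special cases above (weighted Bernoulli and Gaussian rewards); the function names are ours:

```python
from math import sqrt

def bernoulli_moments(p, w):
    """Mean and s.t.d. of a weighted Bernoulli reward: w w.p. p, else 0."""
    mu = p * w
    sigma = w * sqrt(p * (1.0 - p))  # Var = w^2 p (1 - p)
    return mu, sigma

def gaussian_moments(mu, sigma):
    """For Gaussian rewards the parameters are the moments themselves."""
    return mu, sigma

def reward_and_risk(matching_moments):
    """Expected reward and risk (sum of s.t.d.'s) of a matching."""
    reward = sum(mu for mu, _ in matching_moments)
    risk = sum(sigma for _, sigma in matching_moments)
    return reward, risk

edges = [bernoulli_moments(0.5, 100.0), gaussian_moments(40.0, 3.0)]
print(reward_and_risk(edges))  # (90.0, 53.0)
```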

Finally, we remark that the BR-MWM problem is NP-hard even on graphs, via a simple reduction from Knapsack: a collection of items, each with a value and a size, maps to a collection of vertex-disjoint edges whose expected rewards equal the values and whose standard deviations equal the sizes, with the risk bound $B$ playing the role of the knapsack capacity.

Other Measures of Risk. It is worth noting that our model and proposed method adapt easily to other risk measures. For example, if we define the risk of a matching in terms of its variance, i.e.,

$$\mathrm{risk}(M) = \sum_{e \in M} \sigma_e^2, \tag{4}$$

then all of our theoretical guarantees, and the insights gained via our experiments, still hold with minor changes in the algorithm. At the end of this section, we discuss the required changes in detail. For the sake of convenience and concreteness, we present our results in terms of the standard-deviation version of the risk.

An LP-based approximation algorithm. The Hypermatching Assignment Problem (HAP) was introduced by Cygan et al. [14]: given a $k$-uniform hypergraph $H = (V, E)$ and a set of clients, where each client $j$ has a budget $B_j$, and each hyperedge $e$ has a profit $p_{e,j}$ and a cost $c_{e,j}$ for client $j$, the goal is to compute a matching $M$ and partition it into subsets, one per client, so that the total profit is maximized and the budget constraint is satisfied for every client $j$. Our BR-MWM problem is a special case of HAP with a single client, where the profit of a hyperedge is its expected reward $r_e$, the cost is its standard deviation $\sigma_e$, and the budget is $B$. Notice that, without any loss of generality, we can convert the uncertain hypergraph to a $k$-uniform hypergraph, where $k$ is the maximum cardinality of a hyperedge, by adding dummy nodes. Therefore, we can invoke the randomized $\frac{1}{k+1+\epsilon}$-approximation algorithm for HAP [14] to solve our problem, where $\epsilon > 0$ is constant. However, this approach, at least for the moment, is unlikely to scale well: it requires solving a linear program with a number of variables that is exponential in $\frac{1}{\epsilon}$, and then strengthening this LP by one round of Lasserre's lift-and-project method. This motivates the design of scalable approximation algorithms.

Algorithm. Our algorithm is described in pseudocode as Algorithm 1. It takes as input an uncertain weighted hypergraph, as well as a hypergraph matching algorithm Match-Alg used as a black-box: the black-box takes a weighted hypergraph and returns a hypergraph matching. First, our algorithm removes all hyperedges that have non-positive expected reward, as they are not part of any optimal solution. Similarly, it removes any hyperedge $e$ for which $\sigma_e > B$; since the risk of any matching is the sum of the standard deviations of its edges, any such edge cannot be part of any optimal solution either. For any given hyperedge $e$, define $\alpha_e = \frac{r_e}{\sigma_e}$, the expected reward per unit of risk. Now, we label the hyperedges in $E$ as $e_1, e_2, \ldots, e_m$ such that $\alpha_{e_1} \ge \alpha_{e_2} \ge \cdots \ge \alpha_{e_m}$, breaking ties arbitrarily. Sorting the $\alpha$ values requires $O(m \log m)$ time. Next, we consider the nested sequence of hypergraphs $H_1 \subseteq H_2 \subseteq \cdots \subseteq H_m$, where $H_i$ contains the hyperedges $e_1, \ldots, e_i$, and each hyperedge $e$ is weighted by its expected reward $r_e$.

Let $M(i)$ be the matching returned by Match-Alg on $H_i$ with weights $\{r_e\}$. We first compute the maximum weight matching $M(m)$ on $H_m$. If the quantity $\mathrm{risk}(M(m))$ is less than or equal to $B$, then we output $M(m)$. Otherwise, we binary search the nested sequence of hypergraphs to find any index $i$ for which

$$\mathrm{risk}(M(i)) \le B < \mathrm{risk}(M(i+1)).$$

Figure 2: An uncertain graph on nodes A, B, C, D illustrating that the risk of the optimum matching is not monotonically increasing with $i$. For details, see Section 3.

The final output matching $M_{out}$ is either $M(i)$ or the single-hyperedge matching $\{e_{i+1}\}$, depending on which one achieves greater expected reward. Intuitively, the latter case is required when there exists a single high-reward hyperedge whose risk is comparable to the upper bound $B$. In general, there may be more than one index $i$ that satisfies the above condition, since the risk of $M(i)$ is not monotonically increasing with $i$. Figure 2 provides such an example, showing that increasing the set of allowed edges can actually decrease the overall risk of the optimum matching. Specifically, Figure 2 shows an uncertain graph in which each edge is annotated with its pair $(r_e, \sigma_e)$; one can always find distributions that satisfy these parameters. We consider Algorithm 1 with the black-box matching algorithm Match-Alg set to the optimum matching algorithm on weighted graphs. As our algorithm considers edges in decreasing order of their $\alpha$-value, the optimum matchings $M(i)$ on successive prefixes change, and a newly added edge with high expected reward can displace riskier edges from the optimum matching, so that $\mathrm{risk}(M(i+1)) < \mathrm{risk}(M(i))$. Thus, the risk of the optimum matching is not monotonically increasing with $i$.

0:  Input: uncertain hypergraph $\mathcal{H} = ([n], E, \{f_e\}_{e \in E})$, risk bound $B$, black-box algorithm Match-Alg
  Let $r_e$ be the expectation, and $\sigma_e$ the standard deviation (s.t.d.), of $f_e$ for each hyperedge $e$
  Remove all hyperedges that have either non-positive expected reward ($r_e \le 0$) or s.t.d. greater than $B$ ($\sigma_e > B$) {Such edges are not part of any optimal solution.}
  Sort the hyperedges in decreasing order according to $\alpha_e = r_e/\sigma_e$; let $e_1, e_2, \ldots, e_m$ be the resulting order.
  $M(m) \leftarrow$ Match-Alg($H_m$)
  if $\mathrm{risk}(M(m)) \le B$ then
     $M_{out} \leftarrow M(m)$
     Return $M_{out}$
  end if
  $(low, high) \leftarrow (1, m)$
  while True do
     $mid \leftarrow \lfloor (low + high)/2 \rfloor$
     Compute $\mathrm{risk}(M(mid))$, $\mathrm{risk}(M(mid+1))$
     if $\mathrm{risk}(M(mid)) \le B < \mathrm{risk}(M(mid+1))$ then
        $M_{out} \leftarrow$ the matching among $M(mid)$, $\{e_{mid+1}\}$ with the greater expected reward
        Return $M_{out}$
     else if $\mathrm{risk}(M(mid)) > B$ then
        $high \leftarrow mid$
     else
        $low \leftarrow mid$
     end if
  end while
Algorithm 1 Algorithm for computing an $\frac{\alpha}{2+\alpha}$-approximate matching for the BR-MWM problem on uncertain weighted hypergraphs.

While it is not hard to see how the binary search works, we provide the details for completeness. We maintain the invariant that $\mathrm{risk}(M(low)) \le B$ and $\mathrm{risk}(M(high)) > B$; this holds initially, since every remaining edge has $\sigma_e \le B$ (so $\mathrm{risk}(M(1)) \le B$), and the initial check guarantees $\mathrm{risk}(M(m)) > B$. Let $mid = \lfloor (low + high)/2 \rfloor$. We probe the middle position between $low$ and $high$, and compute $\mathrm{risk}(M(mid))$ and $\mathrm{risk}(M(mid+1))$. If $\mathrm{risk}(M(mid)) \le B < \mathrm{risk}(M(mid+1))$, then we set $i$ equal to $mid$ and return. If not, then if $\mathrm{risk}(M(mid)) > B$, we repeat the same procedure with $high = mid$. Otherwise, if $\mathrm{risk}(M(mid+1)) \le B$, we repeat with $low = mid$. This requires $O(\log m)$ iterations, and each iteration requires at most two maximum weighted matching computations.

Our proposed algorithm uses the notion of a black-box reduction: we take an arbitrary $\alpha$-approximation algorithm Match-Alg ($0 < \alpha \le 1$) for computing a maximum-weight hypergraph matching and leverage its properties to derive an algorithm that, in addition to maximizing the expected weight, also has low risk. This black-box approach has a significant practical benefit: organizations may have already invested in graph processing software for deterministic graphs, which they can continue to use regardless of the uncertainty inherent in the data. Our search takes $O(T(m) \log m)$ time, where $T(m)$ is the running time of the maximum weighted matching algorithm Match-Alg.
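A self-contained sketch of Algorithm 1 with the greedy procedure from Section 2 as the black-box, where the risk of a matching is the sum of its per-edge standard deviations; the input layout and helper names are our own illustrative choices, not the paper's reference implementation:

```python
def br_mwm(hyperedges, B):
    """Approximate Bounded Risk Maximum Weighted Matching (sketch).

    hyperedges: list of (r, sigma, nodes) with expected reward r and
    s.t.d. sigma per hyperedge. Returns a matching as a list of tuples.
    """
    def greedy_match(edges):
        # Greedy approximate maximum weight matching on expected rewards.
        matched, matching = set(), []
        for r, sigma, nodes in sorted(edges, key=lambda e: -e[0]):
            if all(v not in matched for v in nodes):
                matching.append((r, sigma, nodes))
                matched.update(nodes)
        return matching

    risk = lambda M: sum(s for _, s, _ in M)
    reward = lambda M: sum(r for r, _, _ in M)

    # Prune edges that cannot belong to any optimal solution.
    E = [e for e in hyperedges if e[0] > 0 and e[1] <= B]
    # Sort by alpha_e = r_e / sigma_e, decreasing; sigma = 0 means
    # a risk-free edge, which should come first (alpha = infinity).
    E.sort(key=lambda e: -(e[0] / e[1]) if e[1] > 0 else float('-inf'))

    m = len(E)
    M = lambda i: greedy_match(E[:i])  # matching on the prefix H_i
    if not E or risk(M(m)) <= B:
        return M(m)

    low, high = 1, m  # invariant: risk(M(low)) <= B < risk(M(high))
    while True:
        mid = (low + high) // 2
        M_mid, M_next = M(mid), M(mid + 1)
        if risk(M_mid) <= B < risk(M_next):
            single = [E[mid]]  # the single-edge matching {e_{mid+1}}
            return M_mid if reward(M_mid) >= reward(single) else single
        elif risk(M_mid) > B:
            high = mid
        else:
            low = mid
```

The single-edge matching `[E[mid]]` is always feasible here, since every edge with $\sigma_e > B$ was pruned up front.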

$\frac{1}{3}$-approximation for uncertain weighted graphs. First we analyze our algorithm for the important case of uncertain weighted graphs. Unlike in general hypergraphs, we can find a maximum weight graph matching in polynomial time using Edmonds' algorithm [20]. Our main result is stated as the following theorem.

Theorem 1.

Assuming an exact maximum weight matching algorithm Match-Alg, Algorithm 1 returns a matching whose risk is less than or equal to $B$, and whose expected reward is at least $\frac{1}{3}$ of the optimal solution to the Bounded Risk Maximum Weighted Matching problem on uncertain weighted graphs.

Before we prove Theorem 1, it is worth pointing out that, besides the fact that our proposed algorithm can be easily implemented using existing graph matching software, it also provides a better approximation than the one achieved using [14], namely $\frac{1}{3} > \frac{1}{3+\epsilon}$ for any constant $\epsilon > 0$.

Proof.

Let $OPT$ denote an optimum matching whose risk is at most $B$. Since it is immediately clear from the description of our algorithm that $\mathrm{risk}(M_{out}) \le B$, our goal is to prove that the matching returned by our algorithm has reward at least one-third as good as the reward of the optimum matching, i.e., $r(M_{out}) \ge \frac{1}{3} r(OPT)$.

In order to prove this bound, we prove a series of inequalities. By definition, $H_{i+1}$ differs from $H_i$ in exactly one edge, namely $e_{i+1}$. We also know that the maximum weight matching $M(i+1)$ in $H_{i+1}$ is different from the maximum weight matching $M(i)$ in $H_i$, since the former entails risk that exceeds the budget $B$. We conclude that $M(i+1)$ contains the edge $e_{i+1}$.

Therefore, we have that $r(M(i+1) \setminus \{e_{i+1}\}) \le r(M(i))$. This is true because $M(i)$ is the maximum weight matching in $H_i$, and so its weight is larger than or equal to that of the matching $M(i+1) \setminus \{e_{i+1}\} \subseteq H_i$. In conclusion, our first non-trivial inequality is:

$$r(M(i+1)) \le r(M(i)) + r_{e_{i+1}}. \tag{5}$$

Next, we lower-bound $r(M(i+1))$ by using the facts that $\alpha_e \ge \alpha_{e_{i+1}}$ for all $e \in H_{i+1}$, and that the total risk of $M(i+1)$ exceeds $B$ by the definition of the index $i$. Specifically,

$$r(M(i+1)) = \sum_{e \in M(i+1)} \alpha_e \sigma_e \ge \alpha_{e_{i+1}} \sum_{e \in M(i+1)} \sigma_e \tag{6}$$
$$> \alpha_{e_{i+1}} B. \tag{7}$$

Now we show upper bounds on the optimum solution $OPT$ to the BR-MWM problem. We divide $OPT$ into two parts: $OPT \cap H_i$ and $OPT \setminus H_i$, where the first part is the set of edges of $OPT$ in $H_i$ and the second part is the set of edges of $OPT$ not present in $H_i$. We present separate upper bounds on $r(OPT \cap H_i)$ and $r(OPT \setminus H_i)$. By definition, $OPT \cap H_i$ is a matching on the set of edges $H_i$. Therefore, its reward is smaller than or equal to that of the optimum matching on $H_i$, which happens to be $M(i)$. Hence,

$$r(OPT \cap H_i) \le r(M(i)). \tag{8}$$

Next, consider $OPT \setminus H_i$. To upper-bound $r(OPT \setminus H_i)$ we also use inequalities (5), (7): every edge $e \notin H_i$ satisfies $\alpha_e \le \alpha_{e_{i+1}}$, and the total risk of $OPT$ is at most $B$, so

$$r(OPT \setminus H_i) = \sum_{e \in OPT \setminus H_i} \alpha_e \sigma_e \le \alpha_{e_{i+1}} \sum_{e \in OPT \setminus H_i} \sigma_e \le \alpha_{e_{i+1}} B < r(M(i+1)) \le r(M(i)) + r_{e_{i+1}}.$$

Now, we are ready to complete the proof. Recall that the output of the algorithm satisfies $r(M_{out}) = \max\{r(M(i)), r_{e_{i+1}}\}$. Combining the upper bounds for $r(OPT \cap H_i)$ and $r(OPT \setminus H_i)$ yields

$$r(OPT) \le r(M(i)) + r(M(i)) + r_{e_{i+1}} \le 3\, r(M_{out}).$$

This completes the proof. ∎

Running time: Assuming that the $O(n(m + n\log n))$ implementation [20] of Edmonds' algorithm is used as the black-box, we remark that the run time of Algorithm 1 is $O(n(m + n\log n)\log m)$.

Fast $\frac{1}{5}$-approximation for uncertain weighted graphs. Since the running time using Edmonds' algorithm is prohibitively expensive for large graphs, we show how the approximation guarantee changes when we use the (much faster) greedy algorithm for maximum weighted matchings as Match-Alg. Recall that the greedy matching algorithm runs in $O(m \log m)$ time.

Theorem 2.

If the black-box Match-Alg is set to be the greedy matching algorithm, then Algorithm 1 computes a $\frac{1}{5}$-approximation to the optimal solution of the BR-MWM problem in $O(m \log^2 m)$ time.

The proof is omitted, as it is essentially identical to the proof of Theorem 1, the only change being that the greedy matching algorithm provides a $\frac{1}{2}$-approximation to the maximum weighted matching problem.

Fast $\Omega(\frac{1}{k})$-approximation for uncertain weighted hypergraphs. Recall that finding a maximum weight hypergraph matching is NP-hard, even for unweighted 3-uniform hypergraphs [27]. However, there exist various algorithms that achieve different approximation factors $\alpha$. For example, the greedy algorithm provides a $\frac{1}{k}$-approximation guarantee, where $k$ is the rank of the hypergraph (i.e., any hyperedge contains at most $k$ nodes). Our main theoretical result follows.

Theorem 3.

Given any $\alpha$-approximation, polynomial-time algorithm Match-Alg ($0 < \alpha \le 1$) for the maximum weighted hypergraph matching problem, we can compute in polynomial time a hypermatching $M$ such that its risk is at most $B$, and its expected weight is an $\frac{\alpha}{2+\alpha}$-approximation to the expected weight of the optimal hypermatching that has risk at most $B$.

Again, the proof proceeds step by step as the proof of Theorem 1, and is omitted. In what follows, we restrict our attention to using the greedy hypermatching algorithm as the black-box. Our focus on greedy matchings stems from the fact that its approximation factor $\frac{1}{k}$ is asymptotically optimal [8, 12], that it is easy to implement, and that it runs in $O(m \log m)$ time using appropriate data structures. Since we will be using the greedy algorithm in our experiments (Section 4), we provide the following corollary.

Corollary 1.

For any hypergraph of rank $k$, we can compute in polynomial time a hypergraph matching whose risk is at most $B$ and whose expected weight is a $\frac{1}{2k+1}$-approximation to the optimum bounded-risk hypergraph matching.

Algorithm 1 using the greedy hypermatching algorithm in lieu of Match-Alg runs in $O(m \log^2 m)$ time.

Remark. We reiterate that our algorithm can be used to compute risk-averse matchings for other notions of risk, such as the variance. For instance, if we define risk as in Equation (4), then the only thing that changes in our algorithm is the definition of $\alpha_e$, namely that $\alpha_e$ is set equal to $\frac{r_e}{\sigma_e^2}$ for each (hyper)edge $e$. The rest, including the theoretical guarantees, remains identical.
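As a sketch, the corresponding per-edge quantities under the variance risk, mirroring the earlier illustrative implementation (our helper name, not the paper's code):

```python
def variance_risk_params(r, sigma):
    """Per-edge quantities when risk is the variance (Equation (4)):
    alpha_e = r_e / sigma_e^2, and each edge contributes sigma_e^2
    to the risk of a matching, which must stay at most B."""
    alpha = r / sigma ** 2 if sigma > 0 else float('inf')
    risk_contribution = sigma ** 2
    return alpha, risk_contribution

print(variance_risk_params(50.0, 5.0))  # (2.0, 25.0)
```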

4 Experimental Results

4.1 Experimental Setup and Normalization

We test our proposed algorithm on a diverse range of datasets, where the orders of magnitude of risk (e.g., standard deviation) can vary greatly across datasets. In order to have a consistent interpretation of the trade-off between expected reward and risk across datasets, we normalize the allowed risk relative to the maximum possible standard deviation of a benchmark matching, $B_{max}$. For the purpose of computing, or more precisely approximating, $B_{max}$, we run the greedy matching algorithm on the (hyper)graph in which the weight of each (hyper)edge $e$ is set to $\sigma_e$, and set $B_{max}$ to be the aggregate risk of the computed matching. While in theory one may observe a matching with greater risk than the obtained value $B_{max}$, this does not occur in any of our simulations. We range $B$ according to the rule

$$B = B_{norm} \cdot B_{max},$$

where the parameter $B_{norm} \in [0, 1]$ is incremented in small steps. We refer to $B_{norm}$ as the normalized risk from now on.
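A short sketch of this normalization, reusing the `greedy_matching` and `br_mwm` helpers from the earlier sketches (names and layout are our illustrative choices):

```python
def estimate_B_max(hyperedges):
    """Approximate the benchmark risk B_max: run greedy matching with
    sigma_e as the edge weight and sum the sigmas of the result.

    hyperedges: list of (r, sigma, nodes) tuples.
    """
    sigma_weighted = [(sigma, nodes) for _, sigma, nodes in hyperedges]
    matching = greedy_matching(sigma_weighted)
    return sum(sigma for sigma, _ in matching)

# Sweep the normalized risk B_norm over [0, 1] and solve BR-MWM per value.
# B_max = estimate_B_max(H)
# for B_norm in [i / 20 for i in range(21)]:
#     M = br_mwm(H, B_norm * B_max)
```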

Code. We implement our proposed fast approximation algorithm for uncertain weighted hypergraphs in Python. The code is available on GitHub [2].

Machine specs. All experiments were performed on a laptop with 1.7 GHz Intel Core i7 processor and 8GB of main memory.

4.2 Controlled Experiments

Synthetic experiments. We experiment with two random graph topologies [19]: Erdős-Rényi random graphs $G(n, p)$, and preferential attachment graphs generated according to the Barabási-Albert model. For each normalized risk bound $B_{norm}$, we generate random graphs that in expectation have 90 000 edges. For $G(n, p)$ we choose $n$ and $p$ accordingly; the resulting graphs are connected, as $p$ is above the connectivity threshold $\frac{\log n}{n}$. For the Barabási-Albert model we choose the parameters analogously. For each random graph we generate, we choose the weights according to some distribution. Once we have fixed the weights, we sample edge probabilities according to some probability distribution. This procedure generates uncertain weighted Bernoulli graphs. In a similar way we generate uncertain weighted Gaussian graphs, by first sampling means, and then the variances.

Specifically, for uncertain weighted Bernoulli graphs, we sample weights independently four times from (i) uniform, and (ii) Gaussian distributions. Then, for each choice of weights, we create four different Bernoulli probability settings, sampling probabilities according to (i) uniform, and (ii) Gaussian distributions. Notice that for the Gaussian distribution, we carefully set the variance to be of the same order of magnitude as the mean, to allow a greater range of values.

For uncertain weighted Gaussian graphs we first sample the means independently from (i) uniform, and (ii) Gaussian distributions; again, we sample four times for each distribution. For each choice of means, we create four different variance settings, sampling the variance of each edge independently from (i) uniform, or (ii) Gaussian distributions.
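A hedged sketch of generating one uncertain weighted Bernoulli Erdős-Rényi graph in the format expected by the earlier `br_mwm` sketch; the particular weight and probability distributions below are illustrative assumptions, not the paper's exact parameters:

```python
import random

def uncertain_bernoulli_er(n, p_edge, seed=0):
    """Generate an uncertain weighted Bernoulli Erdos-Renyi graph.

    Returns a list of (r, sigma, (u, v)) tuples: expected reward and
    s.t.d. of each edge. The weight and probability laws below are
    illustrative choices only.
    """
    rng = random.Random(seed)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p_edge:  # include edge in the topology
                w = rng.uniform(1.0, 100.0)                    # assumed weight law
                p = min(max(rng.gauss(0.5, 0.2), 0.01), 1.0)   # assumed edge prob.
                r = p * w
                sigma = w * (p * (1.0 - p)) ** 0.5
                edges.append((r, sigma, (u, v)))
    return edges

G = uncertain_bernoulli_er(200, 0.05)
print(len(G))
```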


Figure 3: Per column: average (avg.) expected weight, avg. probability, avg. number of edges in the output matching, and the avg. run time of our greedy approximation algorithm vs. the normalized risk bound across different choices of probability distributions for weights, and probabilities on uncertain weighted Bernoulli Erdős-Rényi graphs. For details, see Section 4.2.
Figure 4: Per column: average (avg.) expected weight, avg. number of edges in the output matching, and the avg. run time of our greedy approximation algorithm vs. the normalized risk bound across different choices of probability distributions for weights, and probabilities on uncertain weighted Gaussian Erdős-Rényi graphs. For details, see Section 4.2.
Figure 5: Per column: average (avg.) expected weight, avg. probability, avg. number of edges in the output matching, and the avg. run time of our greedy approximation algorithm vs. the normalized risk bound across different choices of probability distributions for weights, and probabilities on uncertain weighted Bernoulli Barabási-Albert graphs. For details, see Section 4.2.
Figure 6: Per column: average (avg.) expected weight, avg. number of edges in the output matching, and the avg. run time of our greedy approximation algorithm vs. the normalized risk bound across different choices of probability distributions for weights, and probabilities on uncertain weighted Gaussian Barabási-Albert graphs. For details, see Section 4.2.

Figure 3 plots our findings for the case of uncertain weighted Bernoulli graphs on Erdős-Rényi topologies. Each plot shows averages, with error bars showing the variability of our findings; overall, the results are well concentrated. In all plots the x-axis corresponds to the normalized risk bound $B_{norm}$. Each row corresponds to a different setting of sampling distributions for the edge weights and the edge probabilities: the first row corresponds to choosing both the weights and the edge probabilities uniformly at random; the second to uniform weights and Gaussian probabilities; the third to Gaussian weights and uniform probabilities; and the fourth to Gaussian weights and Gaussian probabilities. The first column corresponds to the average expected weight, i.e., the expected weight of the output matching, averaged over all experiments per $B_{norm}$ value; the second column to the average probability of the hyperedges chosen in the matching, averaged over all experiments; the third to the average number of edges in the matching; and the fourth to the average run time. We observe similar results across all settings in how the objective changes as a function of the normalized bound $B_{norm}$.

Figure 4 shows our findings for uncertain weighted Gaussian graphs using an Erdős-Rényi topology. Since the edge weight distribution is continuous, there is no plot for the average probability, as there is in the case of uncertain Bernoulli graphs. We observe that the expected reward is greater for Erdős-Rényi topologies than for the Barabási-Albert topologies discussed below. Interestingly, when the variance is sampled from a Gaussian, the number of edges in the output matching grows as a roughly linear function of $B_{norm}$. A positive side-effect of risk-aversion is faster run times: the smaller the $B_{norm}$, the faster the algorithm completes.

The corresponding plots for the Barabási-Albert topologies, with both Bernoulli and Gaussian distributions, are presented in Figures 5 and 6, respectively. These plots are largely similar to the plots in Figures 3 and 4 and indicate the same trends. Erdős-Rényi graphs seem to yield matchings with higher expected reward on average when compared to Barabási-Albert graphs. For $B_{norm}$ values close to 1, this is to be expected, since the former graphs contain a perfect matching with high probability when $p$ is above the connectivity threshold [18]. Proving that this relation holds for intermediate $B_{norm}$ values as well is an interesting question.

Figure 7: (a) Expected reward, (b) average probability (over matching’s edges), (c) number of edges in the matching, and (d) running time in seconds versus normalized risk for the uncertain PPI network. For details, see Section 4.2.

Uncertain Unweighted PPI network. We use a real-world uncertain protein-protein interaction (PPI) network that contains 7 123 protein-protein interactions among 2 708 proteins [35]. The input graph is unweighted, i.e., all weights are equal to one. The dataset is publicly available as supplementary material to [35]. Figure 7 shows our findings for the PPI network. The observed trends are similar to those seen in the case of synthetic topologies. It is worth noting that when $B_{norm}$ is small, the algorithm quickly picks the most certain edges, and then keeps adding edges of lower probability as $B_{norm}$ grows.

4.3 Recommending impactful but probable collaborations

Dataset. In many ways, academic collaboration is an ideal playground in which to explore the effect of risk-averse team formation for research projects, as there exist teams of researchers that have the potential for high impact but collaborate only rarely. To explore this further, we use our proposed algorithm for uncertain weighted hypergraphs as a tool for identifying a set of disjoint collaborations that are both impactful and likely to take place. For this purpose, we use the Digital Bibliography and Library Project (DBLP) database. From each paper, we obtain a team that corresponds to the set of authors of that paper. As a proxy for the impact of the paper we use its citation count. Unfortunately, we could not obtain citation counts from Google Scholar for the whole DBLP dataset, as we would get rate-limited by Google after making too many requests. Therefore, we used the AMiner citation network dataset [1], which contains citation counts but is unfortunately not as up-to-date as Google Scholar.

We preprocessed the dataset by removing all single-author papers, since the corresponding hyperedge probabilities are one. Furthermore, multiple identical hyperedges are treated as one, with citation count equal to the sum of the citation counts of the individual papers. To give an example, if there exist three papers in the dataset that have been co-authored by the same set of authors $\{u_1, u_2\}$, with citation counts $c_1, c_2, c_3$, we create one hyperedge on the nodes that correspond to $u_1, u_2$, with weight equal to $c_1 + c_2 + c_3$. If there exists another paper co-authored by $u_1, u_2, u_3$, this yields a different hyperedge/team, and we do not include its citations in the impact of team $\{u_1, u_2\}$.

For a hyperedge $e = \{u_1, \ldots, u_k\}$, we find the sets of papers $S_{u_1}, \ldots, S_{u_k}$ authored by $u_1, \ldots, u_k$, respectively. We set the probability of hyperedge $e$ as

$$p(e) = \frac{|S_{u_1} \cap \cdots \cap S_{u_k}|}{|S_{u_1} \cup \cdots \cup S_{u_k}|}.$$

Figure 8: (a) DBLP citation histogram. (b) Hypergraph rank versus normalized risk $B_{norm}$. For details, see Section 4.3.

Intuitively, this is the empirical probability of collaboration between the specific set of authors.
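A minimal sketch of this estimate; the author-to-papers mapping is assumed as input, and the helper name is ours:

```python
def collaboration_probability(author_papers, team):
    """Empirical collaboration probability of a team of authors:
    papers they all co-authored over papers any of them authored.

    author_papers: dict mapping an author to a set of paper ids.
    team: iterable of authors forming the hyperedge.
    """
    paper_sets = [author_papers[a] for a in team]
    joint = set.intersection(*paper_sets)
    total = set.union(*paper_sets)
    return len(joint) / len(total)

papers = {'u1': {1, 2, 3}, 'u2': {2, 3, 4, 5}}
print(collaboration_probability(papers, ['u1', 'u2']))  # 2/5 = 0.4
```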

To sum up, we create an uncertain weighted hypergraph using the DBLP dataset, where each node corresponds to an author, and each hyperedge represents a paper whose reward follows a weighted Bernoulli distribution, with weight equal to the number of its citations and probability equal to the empirical likelihood of collaboration. The final hypergraph will be made publicly available on the first author's website. The largest collaboration is a paper co-authored by 27 people, i.e., the rank of the hypergraph is 27. Figure 8(a) shows the histogram of citations.

Figure 9: (a) Expected reward, (b) average probability (over the hypermatching's edges), (c) number of edges in the hypermatching, and (d) running time in seconds, versus the normalized risk $B_{norm}$. For details, see Section 4.3.

Results. Figure 9 shows our findings as we vary the normalized risk bound $B_{norm}$ and obtain a hypermatching for each value of this parameter using our algorithm. For the record, when $B_{norm} = 1$, then $B = B_{max}$. Figure 9(a) plots the expected weight of the hypermatching versus $B_{norm}$. We observe an interesting phase transition when $B_{norm}$ changes from 0.15 to 0.2: past this point, the average probability of the hypermatching drops significantly. This is shown in Figure 9(b), which plots the average probability of the edges in each hypermatching computed by our algorithm versus $B_{norm}$. Figures 9(a),(b) strongly indicate what we verified by inspecting the output: up to $B_{norm} = 0.15$, our algorithm picks teams of co-authors that tend to collaborate frequently. This finding illustrates that our tool may also be used for certain anomaly detection tasks. Figures 9(c),(d) plot the number of hyperedges returned by our algorithm, and its running time in seconds, versus $B_{norm}$. We observe that a positive side-effect of using small risk bounds is speed: for small $B_{norm}$ values, the algorithm computes fewer maximum matchings.

By carefully inspecting the output of our algorithm for different $B_{norm}$ values, we see that at low values we find hyperedges typically with 50 to 150 citations, with probabilities typically ranging from 0.66 to 1. When $B_{norm}$ becomes large, we find hyperedges with significantly more citations but with lower probability. For example, at large $B_{norm}$ we find the team of David Bawden and Lyn Robinson, with weight 934 and probability 0.085. Additionally, we observe that the rank of the hypermatching our algorithm outputs upon termination increases as a function of $B_{norm}$; this is shown in Figure 8(b). This is intuitive, as collaborations with many co-authors are less likely to happen regularly.

Figure 10: First row: histograms of the hyperedge probabilities in the hypermatching returned by our algorithm for four increasing normalized risk values $B_{norm}$. Second row: the corresponding histograms of citation counts. For details, see Section 4.3.

Finally, Figure 10 shows four pairs of histograms corresponding to the output of our algorithm for four different normalized risk values $B_{norm}$. Each pair plots the histogram of the probabilities, and of the numbers of citations, of the hyperedges selected by our algorithm for the corresponding $B_{norm}$. The histograms provide a view of how the probabilities decrease and the citations increase as we increase $B_{norm}$, i.e., as we allow higher risk.

5 Conclusion

In this work we study the problem of finding matchings with high expected reward and bounded risk on large-scale uncertain hypergraphs. We introduce a general model for uncertain weighted hypergraphs that allows for both continuous and discrete probability distributions, we provide a novel stochastic matching formulation that is NP-hard, and develop fast approximation algorithms. We verify the efficiency of our proposed methods on several synthetic and real-world datasets.

In contrast to the majority of prior work on uncertain graph databases, we show that it is possible to combine risk aversion, time efficiency, and theoretical guarantees simultaneously. Moving forward, a natural research direction is to design risk-averse algorithms for other graph mining tasks, such as motif clustering [7, 53], the $k$-clique densest subgraph problem [22, 51], and $k$-core decompositions [11].

Acknowledgements

Charalampos Tsourakakis would like to thank his newborn son Eftychios for the happiness he brought to his family.

References

  • [1] Aminer citation network dataset, August 2017. https://aminer.org/citation.
  • [2] Risk-averse matchings over uncertain graph databases, January 2018. https://github.com/tsourolampis/risk-averse-graph-matchings.
  • [3] A. Anagnostopoulos, L. Becchetti, C. Castillo, A. Gionis, and S. Leonardi. Online team formation in social networks. In Proceedings of WWW 2012, pages 839–848, 2012.
  • [4] S. Asthana, O. D. King, F. D. Gibbons, and F. P. Roth. Predicting protein complex membership using probabilistic network reliability. Genome research, 14(6):1170–1175, 2004.
  • [5] N. Bansal, A. Gupta, J. Li, J. Mestre, V. Nagarajan, and A. Rudra. When lp is the cure for your matching woes: Improved bounds for stochastic matchings. Algorithmica, 63(4):733–762, 2012.
  • [6] M. G. Bell. Hyperstar: A multi-path astar algorithm for risk averse vehicle navigation. Transportation Research Part B: Methodological, 43(1):97–107, 2009.
  • [7] A. R. Benson, D. F. Gleich, and J. Leskovec. Higher-order organization of complex networks. Science, 353(6295):163–166, 2016.
  • [8] P. Berman. A d/2 approximation for maximum weight independent set in d-claw free graphs. Proceedings of SWAT 2000, pages 31–40, 2000.
  • [9] P. Boldi, F. Bonchi, A. Gionis, and T. Tassa. Injecting uncertainty in graphs for identity obfuscation. Proceedings of the VLDB Endowment, 5(11):1376–1387, 2012.
  • [10] B. Bollobás, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. Random Structures & Algorithms, 31(1):3–122, 2007.
  • [11] F. Bonchi, F. Gullo, A. Kaltenbrunner, and Y. Volkovich. Core decomposition of uncertain graphs. In Proceedings of the KDD 2014, pages 1316–1325, 2014.
  • [12] Y. H. Chan and L. C. Lau. On linear and semidefinite programming relaxations for hypergraph matching. Mathematical programming, 135(1-2):123–148, 2012.
  • [13] N. Chen, N. Immorlica, A. R. Karlin, M. Mahdian, and A. Rudra. Approximating matches made in heaven. In Proceedings of ICALP 2009, pages 266–278. Springer, 2009.
  • [14] M. Cygan, F. Grandoni, and M. Mastrolilli. How to sell hyperedges: The hypermatching assignment problem. In Proceedings of SODA 2013, pages 342–351, 2013.
  • [15] N. N. Dalvi and D. Suciu. Efficient query evaluation on probabilistic databases. VLDB J., 16(4):523–544, 2007.
  • [16] B. C. Dean, M. X. Goemans, and J. Vondrák. Adaptivity and approximation for stochastic packing problems. In Proceedings of SODA 2005, pages 395–404, 2005.
  • [17] J. Edmonds. Paths, trees, and flowers. Canadian Journal of mathematics, 17(3):449–467, 1965.
  • [18] A. Frieze. On matchings and Hamilton cycles in random graphs. Carnegie Mellon University, Department of Mathematics, 1988.
  • [19] A. Frieze and M. Karoński. Introduction to random graphs. Cambridge University Press, 2015.
  • [20] H. N. Gabow. Data structures for weighted matching and nearest common ancestors with linking. In Proceedings of SODA 1990, pages 434–443, 1990.
  • [21] A. Gajewar and A. Das Sarma. Multi-skill collaborative teams based on densest subgraphs. In Proceedings of ICDM 2012, pages 165–176, 2012.
  • [22] A. Gionis and C. E. Tsourakakis. Dense subgraph discovery: Kdd 2015 tutorial. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2313–2314. ACM, 2015.
  • [23] X. Huang, W. Lu, and L. V. Lakshmanan. Truss decomposition of probabilistic graphs: Semantics and algorithms. In Proceedings of SIGMOD 2016, pages 77–90, 2016.
  • [24] R. Jin, L. Liu, and C. C. Aggarwal. Discovering highly reliable subgraphs in uncertain graphs. In Proceedings of KDD 2011, pages 992–1000, 2011.
  • [25] M. Kargar and A. An. Discovering top-k teams of experts with/without a leader in social networks. In Proceedings of CIKM 2011, pages 985–994, 2011.
  • [26] M. Kargar, A. An, and M. Zihayat. Efficient bi-objective team formation in social networks. Machine Learning and Knowledge Discovery in Databases, pages 483–498, 2012.
  • [27] R. M. Karp. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer, 1972.
  • [28] J. R. Katzenbach. Peak performance: Aligning the hearts and minds of your employees. Harvard Business Press, 2000.
  • [29] M. Kearns, A. Roth, Z. S. Wu, and G. Yaroslavtsev. Private algorithms for the protected in social network search. Proceedings of the National Academy of Sciences, 113(4):913–918, 2016.
  • [30] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of KDD 2003, pages 137–146. ACM, 2003.
  • [31] A. Khan, F. Bonchi, A. Gionis, and F. Gullo. Fast reliability search in uncertain graphs. In EDBT, pages 535–546, 2014.
  • [32] A. Khan and L. Chen. On uncertain graphs modeling and queries. Proceedings of the VLDB Endowment, 8(12):2042–2043, 2015.
  • [33] G. Kolata. Grant system leads cancer researchers to play it safe. New York Times, 24, 2009.
  • [34] G. Kollios, M. Potamias, and E. Terzi. Clustering large probabilistic graphs. IEEE Transactions on Knowledge and Data Engineering, 25(2):325–336, 2013.
  • [35] N. J. Krogan, G. Cagney, H. Yu, G. Zhong, X. Guo, A. Ignatchenko, J. Li, S. Pu, N. Datta, A. P. Tikuisis, et al. Global landscape of protein complexes in the yeast saccharomyces cerevisiae. Nature, 440(7084):637, 2006.
  • [36] T. Lappas, K. Liu, and E. Terzi. Finding a team of experts in social networks. In Proceedings of KDD 2009, pages 467–476. ACM, 2009.
  • [37] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. Journal of the Association for Information Science and Technology, 58(7):1019–1031, 2007.
  • [38] L. Liu, R. Jin, C. Aggarwal, and Y. Shen. Reliable clustering on uncertain graphs. In Proceedings of ICDM 2012, pages 459–468. IEEE, 2012.
  • [39] L. Lovász and M. D. Plummer. Matching theory, volume 367. American Mathematical Soc., 2009.
  • [40] A. Majumder, S. Datta, and K. Naidu. Capacitated team formation problem on social networks. In Proceedings of KDD 2012, pages 1005–1013, 2012.
  • [41] H. M. Markowitz. Foundations of portfolio theory. The journal of finance, 46(2):469–477, 1991.
  • [42] W. E. Moustafa, A. Kimmig, A. Deshpande, and L. Getoor. Subgraph pattern matching over uncertain graphs with identity linkage uncertainty. In Proceedings of ICDE 2014, pages 904–915. IEEE, 2014.
  • [43] P. Parchas, F. Gullo, D. Papadias, and F. Bonchi. The pursuit of a good possible world: extracting representative instances of uncertain graphs. In Proceedings SIGMOD 2014, pages 967–978, 2014.
  • [44] P. Parchas, N. Papailiou, D. Papadias, and F. Bonchi. Uncertain graph sparsification. arXiv preprint arXiv:1611.04308, 2016.
  • [45] M. Potamias, F. Bonchi, A. Gionis, and G. Kollios. K-nearest neighbors in uncertain graphs. Proceedings of the VLDB Endowment, 3(1-2):997–1008, 2010.
  • [46] A. E. Roth, T. Sönmez, and M. U. Ünver. Kidney exchange. The Quarterly Journal of Economics, 119(2):457–488, 2004.
  • [47] A. Ruszczyński. Risk-averse dynamic programming for Markov decision processes. Mathematical Programming, 125(2):235–261, 2010.
  • [48] A. K. Saha and D. B. Johnson. Modeling mobility for vehicular ad-hoc networks. In Proceedings of the 1st ACM international workshop on Vehicular ad hoc networks, pages 91–92. ACM, 2004.
  • [49] A. Srinivasan. Approximation algorithms for stochastic and risk-averse optimization. In Proceedings of SODA 2007, pages 1305–1313. Society for Industrial and Applied Mathematics, 2007.
  • [50] C. Swamy. Risk-averse stochastic optimization: Probabilistically-constrained models and algorithms for black-box distributions. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1627–1646, 2011.
  • [51] C. Tsourakakis. The k-clique densest subgraph problem. In Proceedings of WWW 2015, pages 1122–1132, 2015.
  • [52] C. E. Tsourakakis, M. Mitzenmacher, J. Błasiok, B. Lawson, P. Nakkiran, and V. Nakos. Predicting positive and negative links with noisy queries: Theory & practice. arXiv preprint arXiv:1709.07308, 2017.
  • [53] C. E. Tsourakakis, J. Pachocki, and M. Mitzenmacher. Scalable motif-aware graph clustering. In Proceedings of WWW 2017, pages 1451–1460, 2017.
  • [54] K. Tu, B. Ribeiro, D. Jensen, D. Towsley, B. Liu, H. Jiang, and X. Wang. Online dating recommendations: matching markets and learning preferences. In Proceedings of WWW 2014, pages 787–792, 2014.
  • [55] L. G. Valiant. The complexity of computing the permanent. Theoretical computer science, 8(2):189–201, 1979.
  • [56] P. Whittle. Risk-sensitive linear/quadratic/gaussian control. Advances in Applied Probability, 13(4):764–777, 1981.
  • [57] J. Y. Yu and E. Nikolova. Sample complexity of risk-averse bandit-arm selection. In Proceedings of IJCAI 2013, pages 2576–2582, 2013.