Maximizing Social Welfare Subject to Network Externalities: A Unifying Submodular Optimization Approach

02/17/2021, by S. Rasoul Etesami et al., University of Illinois at Urbana-Champaign

We consider the problem of allocating multiple indivisible items to a set of networked agents to maximize the social welfare subject to network externalities. Here, the social welfare is given by the sum of agents' utilities, and externalities capture the effect that one user of an item has on the item's value to others. We first provide a general formulation that captures some of the existing models as special cases. We then show that the social welfare maximization problem enjoys certain diminishing or increasing marginal return properties. This allows us to devise polynomial-time approximation algorithms using the Lovász extension and the multilinear extension of the objective functions. Our principled approach recovers or improves some of the existing algorithms and provides a simple and unifying framework for maximizing social welfare subject to network externalities.


I Introduction

One of the major challenges in resource allocation over networks is the existence of externalities among the agents. Network externality (also called network effect) is the effect that one user of a good or service has on the product’s value to other people. Externalities exist in many network systems such as social, economic, and cyber-physical networks and can substantially affect resource allocation strategies and outcomes. In fact, due to the rapid proliferation of online social networks such as Facebook, Twitter, and LinkedIn, the magnitude of such network effects has increased to an entirely new level [9]. Here are just a few examples.

Allocation of Networked Goods: Many goods have higher values when used in conjunction with others [23]. For instance, people often derive higher utility when using the same product such as cellphones or game consoles (Figure 1). One reason is that companies often provide extra benefits for those who adopt their products. Another reason is that the users who buy the same product can share many benefits, such as installing similar Apps or exchanging video games. Such products are often referred to as networked goods and are said to exhibit positive network externalities.

Cyber-Physical Network Security: An essential task in cyber-physical security is that of providing a resource allocation mechanism for securing the operation of a set of networked agents (e.g., service providers, computers, or data centers) despite external malicious attacks. One way of doing that is to distribute limited security resources among the agents efficiently (e.g., by installing antivirus software on a subset of servers) [22]. However, since the agents are interconnected, the compromise of one agent puts its neighbors at higher risk, and such a failure can cascade over the entire network. As a result, deciding how much to invest in an agent’s security will indirectly affect all the others. Therefore, an efficient allocation of scarce security resources among the agents who experience network externalities is a major challenge in network security.

Network Congestion Games: There are many instances of socioeconomic systems such as transportation or facility location networks in which the utility of agents decreases as more agents use the same resource [17, 29]. For instance, as more drivers (agents) use the same road (resource), the traffic congestion on that road will increase, hence increasing the travel time and energy consumption for the drivers (Figure 1). Such network effects are often referred to as negative externalities and have been studied under the general framework of congestion games in both centralized and game-theoretic settings [17, 4, 28, 27, 29].

Fig. 1: The left figure shows an instance of networked goods with four different cellphone products. Individuals tend to buy a product that is adopted by most of their friends. The middle figure shows networked servers that are highly interconnected and an adversary who has compromised one of them and hence influences all others. The right figure illustrates the GPS map of a traffic network. As more drivers use the same road, they will negatively influence each other's travel time.

Motivated by the above, and many other similar examples, our objective in this paper is to study allocation problems when agents exhibit network externalities. While this problem has received significant attention in the past literature [23, 8, 1, 3], those results are mainly focused on allocating and pricing copies of a single item; other than a handful of results [17, 13, 2], the problem of maximizing the social welfare by allocating multiple items subject to network externalities has not been well-studied before. Therefore, we consider the more realistic situation with multiple competing items and when the agents in the network are demand-constrained.

I-A Related Work

There are many papers that consider resource allocation under various network externality models. For example, negative externalities have been studied in routing [29, 16], facility location [17, 18], welfare maximization in congestion games [4], and monopoly pricing over a social network [3]. On the other hand, positive externalities have been addressed in the context of mechanism design and optimal auctions [2, 23], congestion games with positive externalities [13, 4], and pricing networked goods [20, 8]. There are also some results that consider unrestricted externalities where a mixture of both positive and negative externalities may exist in the network [4, 10]. However, those results are often for simplified anonymous models in which the agents do not care about the identity of others who share the same resource with them. One reason is that for unrestricted and non-anonymous externalities, maximizing the social welfare with $n$ agents is $n^{1-\epsilon}$-inapproximable for any $\epsilon>0$ [13, 4]. Thus, in this work, we consider maximizing social welfare with non-anonymous agents but with either positive or negative externalities.

Optimal resource allocation subject to network effects is typically NP-hard and even hard to approximate [14, 23, 17, 4]. Therefore, a large body of past literature has been devoted to devising polynomial-time approximation algorithms with a good performance guarantee. Maximizing social welfare subject to network externalities can often be cast as a special case of a more general combinatorial welfare maximization problem [25]. However, combinatorial welfare maximization with general valuation functions is hard to approximate to within a factor better than $\sqrt{m}$, where $m$ is the number of items [5]. Therefore, to obtain improved approximation algorithms for the special case of welfare maximization with network externalities, one must rely on more tailored algorithms that take into account the special structure of the agents’ utility functions.

Another closely related problem on resource allocation under network effects is submodular optimization [12, 7]. The reason is that utility functions of the networked agents often exhibit diminishing return property as more agents adopt the same product. That property makes a variety of submodular optimization techniques quite amenable to design improved approximation algorithms. While this connection has been studied in the past literature for the special case of a single item [23], it has not been leveraged for the more complex case of multiple items. As we will show in this paper, there is a close relation between multi-item welfare maximization under network externalities, and the so-called submodular cost allocation [11, 15], multi-agent submodular optimization [30], and a generalization of both known as multivariate submodular optimization [31]. However, adapting those general frameworks naively to our problem setting can only deliver poly-logarithmic approximation factors, and that is the best one can hope for (as nearly matching logarithmic lower bounds are known). Instead, we will use the special structure of the agents’ utility functions and new ideas from submodular optimization to devise improved approximation algorithms for maximizing the social welfare subject to network externalities.

I-B Contributions and Organization

We provide a general model for multi-item social welfare maximization subject to network externalities. We show that the proposed model subsumes some of the existing ones as special cases. By making a connection to multi-agent submodular optimization, we devise unified approximation algorithms for that general problem using continuous extensions of the objective functions. Our principled approach not only recovers or improves the existing algorithms but can also be used to devise approximation algorithms for the maximum social welfare under more complex constraints.

The paper is organized as follows. In Section II, we formally introduce the problem of multi-item social welfare maximization subject to network externalities. In Section III, we provide some preliminary results from submodular optimization for later use. In Section IV, we consider a special case of positive linear externality functions and provide a 2-approximation algorithm for that problem. In Section V, we extend our results to positive polynomial and more general convex externality functions and devise improved approximation algorithms in terms of the curvature of the externality functions. In Section VI, we use a distributed primal-dual method to devise an approximation algorithm for positive concave externality functions. Finally, in Section VII, we consider the problem of maximum social welfare under negative externalities and provide a constant-factor approximation algorithm for that problem. We conclude the paper in Section VIII.

Notations: We adopt the following notation throughout the paper. For a positive integer $n$, we set $[n]:=\{1,2,\ldots,n\}$. We use bold symbols for vectors and matrices. For a matrix $\boldsymbol{x}$, we use $\boldsymbol{x}^i$ to refer to its $i$th row and $\boldsymbol{x}_j$ to refer to its $j$th column. Given a vector $\boldsymbol{v}$, we denote the $\ell_2$-norm and the transpose of $\boldsymbol{v}$ by $\|\boldsymbol{v}\|$ and $\boldsymbol{v}'$, respectively. We let $\boldsymbol{1}$ and $\boldsymbol{0}$ be a column vector of all ones and all zeros, respectively. We denote the cardinality of a finite set $S$ by $|S|$. Finally, we denote the set of $n\times m$ matrices whose rows are probability vectors by $\mathcal{X}$, and let $\Pi_{\mathcal{X}}$ be the projection operator with respect to the $\ell_2$-norm onto $\mathcal{X}$.

II Problem Formulation

Consider a set $[n]=\{1,\ldots,n\}$ of agents and a set $[m]=\{1,\ldots,m\}$ of distinct indivisible items (resources). There are unlimited copies of each item $j\in[m]$; however, each agent can receive at most one item. For any ordered pair of agents $k,i\in[n]$ and any item $j\in[m]$, there is a nonnegative weight $w^j_{ki}$ indicating the amount by which the utility of agent $i$ gets influenced by agent $k$, given that both agents $i$ and $k$ receive the same item $j$ (see Figure 2). Let $S_j$ denote the set of agents that receive item $j$ in a given allocation. For any $i\in S_j$, the utility that agent $i$ derives from such an allocation is given by $f^i_j\big(\sum_{k\in S_j} w^j_{ki}\big)$, where $f^i_j:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is a nondecreasing nonnegative function. Depending on whether the functions $f^i_j$ are linear, convex, or concave, we will refer to them as linear externalities, convex externalities, or concave externalities. In the maximum social welfare (MSW) problem, the goal is to assign exactly one item to each agent in order to maximize the social welfare. In other words, we want to find a partition $(S_1,\ldots,S_m)$ of the agents (i.e., $\cup_{j=1}^{m}S_j=[n]$ and $S_j\cap S_{j'}=\emptyset$ for $j\neq j'$) to maximize the sum of agents' utilities given by

$$\max_{(S_1,\ldots,S_m)}\ \sum_{j=1}^{m}\sum_{i\in S_j} f^i_j\Big(\sum_{k\in S_j} w^j_{ki}\Big). \tag{1}$$

We note that MSW (1) can be formulated as the following integer program (IP):

$$\max_{\boldsymbol{x}}\quad \sum_{j=1}^{m}\sum_{i=1}^{n} x_{ij}\, f^i_j\Big(\sum_{k=1}^{n} w^j_{ki}\,x_{kj}\Big) \tag{2}$$
$$\text{s.t.}\quad \sum_{j=1}^{m} x_{ij}=1,\qquad \forall i\in[n], \tag{3}$$
$$\qquad\quad x_{ij}\in\{0,1\},\qquad \forall i\in[n],\ \forall j\in[m], \tag{4}$$

where $x_{ij}=1$ if and only if item $j$ is assigned to agent $i$. In particular, using the fact that the variables $x_{ij}$ are binary, the IP (2) can be written in the equivalent form of

$$\max_{\boldsymbol{x}}\quad \sum_{j=1}^{m} g_j(\boldsymbol{x}_j) \tag{5}$$
$$\text{s.t.}\quad \sum_{j=1}^{m} x_{ij}=1,\qquad \forall i\in[n], \tag{6}$$
$$\qquad\quad x_{ij}\in\{0,1\},\qquad \forall i\in[n],\ \forall j\in[m], \tag{7}$$

where $\boldsymbol{x}_j:=(x_{1j},\ldots,x_{nj})'$ denotes the $j$th column of the allocation matrix $\boldsymbol{x}$ and $g_j(\boldsymbol{x}_j):=\sum_{i=1}^{n} x_{ij}\, f^i_j\big(\sum_{k=1}^{n} w^j_{ki}\,x_{kj}\big)$.
Fig. 2: An instance of the MSW with $n$ agents and $m=2$ items: a blue item and a red item. Each layer represents the directed influence graph between the agents for that specific item. The influence weights are captured by $w^j_{ki}$. If there is no edge from agent $k$ to agent $i$ in an item layer $j$, it means that $w^j_{ki}=0$. Note that each agent can adopt at most one item. In the above figure, two of the agents are allocated the red item.
Example 1

For the special case of linear functions $f^i_j(y)=y$, the objective function in (1) becomes $\sum_{j=1}^{m}\sum_{i\in S_j}\sum_{k\in S_j} w^j_{ki}$, hence recovering the optimization problem studied in [13]. We refer to such externality functions as linear externalities.
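To make the formulation concrete, the following Python sketch evaluates the social welfare objective (1) for a given allocation. The data layout (dictionaries keyed by items and ordered agent pairs) and the function name social_welfare are illustrative choices, not anything prescribed by the model.

```python
def social_welfare(partition, w, f):
    """Evaluate the MSW objective (1) for a given allocation (a sketch).

    partition: dict mapping item j to the set S_j of agents receiving j.
    w: dict with w[(j, k, i)] = influence weight of agent k on agent i under item j
       (missing keys are treated as 0).
    f: dict with f[(i, j)] = externality function of agent i for item j.
    """
    welfare = 0.0
    for j, S_j in partition.items():
        for i in S_j:
            influence = sum(w.get((j, k, i), 0.0) for k in S_j)
            welfare += f[(i, j)](influence)
    return welfare

# Linear externalities (Example 1): f_j^i(y) = y for all i, j.
w = {("red", 1, 2): 1.0, ("red", 2, 1): 2.0, ("blue", 1, 2): 0.5}
f = {(i, j): (lambda y: y) for i in (1, 2) for j in ("red", "blue")}
print(social_welfare({"red": {1, 2}, "blue": set()}, w, f))   # 3.0
print(social_welfare({"red": set(), "blue": {1, 2}}, w, f))   # 0.5
```

In this toy instance, assigning both agents the red item yields a higher welfare, reflecting the positive externality they exert on each other under that item.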

Example 2

Let $G=([n],E)$ be a fixed directed graph among the agents, and denote the set of out-neighbors of agent $i$ by $N_i$. In the special case when each agent treats all of its neighbors equally regardless of what item they use (i.e., for every item $j$ we have $w^j_{ki}=1$ if $k\in N_i$ and $w^j_{ki}=0$ otherwise), the objective function in (1) becomes $\sum_{j=1}^{m}\sum_{i\in S_j} f^i_j\big(|N_i\cap S_j|\big)$. Therefore, we recover the social welfare maximization problem studied in [2]. For this special case, it was shown in [2, Theorem 3.9] that when the externality functions are convex and bounded above by a polynomial of degree $d$, one can find an approximation to the optimum social welfare allocation whose factor grows exponentially with $d$. In this work, we will improve this result for the more general setting of (1).

III Preliminary Results

This section provides some definitions and preliminary results, which will be used later to establish our main results. We start with the following definition.

Definition 1

Given a finite ground set $V$, a set function $f:2^V\to\mathbb{R}$ is called submodular if and only if $f(A)+f(B)\geq f(A\cup B)+f(A\cap B)$ for any $A,B\subseteq V$. Equivalently, $f$ is submodular if for any two nested subsets $A\subseteq B\subseteq V$ and any element $e\in V\setminus B$, we have $f(A\cup\{e\})-f(A)\geq f(B\cup\{e\})-f(B)$. A set function $f$ is called supermodular if $-f$ is submodular. A set function $f$ is called monotone if $f(A)\leq f(B)$ for $A\subseteq B$.
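For small ground sets, these definitions can be checked directly by brute force. The sketch below assumes set functions are given as Python callables on frozensets of elements, and uses a coverage function as a stock example of a monotone submodular function; both choices are illustrative.

```python
from itertools import combinations

def is_submodular(f, V):
    """Brute-force check of f(A) + f(B) >= f(A|B) + f(A&B) for all A, B subsets of V."""
    subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-9
               for A in subsets for B in subsets)

# Example: a coverage function, which is submodular and monotone.
coverage_sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
f = lambda S: len(set().union(*(coverage_sets[e] for e in S))) if S else 0
print(is_submodular(f, {1, 2, 3}))  # True
```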

III-A Lovász Extension

Let $V$ be a ground set of cardinality $N$. Each real-valued set function on $V$ corresponds to a function over the vertices of the hypercube $\{0,1\}^N$, where each subset is represented by its binary characteristic vector. Therefore, by abuse of notation, we use $f(S)$ and $f(\boldsymbol{1}_S)$ interchangeably, where $\boldsymbol{1}_S\in\{0,1\}^N$ is the characteristic vector of the set $S$. Given a set function $f$, the Lovász extension of $f$ to the continuous unit cube $[0,1]^N$, denoted by $\hat{f}$, is defined by

$$\hat{f}(\boldsymbol{x})=\mathbb{E}_{\theta}\big[f(\boldsymbol{x}^{\theta})\big], \tag{8}$$

where $\theta$ is a uniform random variable on $[0,1]$, and $\boldsymbol{x}^{\theta}\in\{0,1\}^N$ for a given vector $\boldsymbol{x}\in[0,1]^N$ is defined as: $x^{\theta}_i=1$ if $x_i>\theta$, and $x^{\theta}_i=0$, otherwise. In other words, $\boldsymbol{x}^{\theta}$ is a random binary vector obtained by rounding all the coordinates of $\boldsymbol{x}$ that are above $\theta$ to $1$, and the remaining ones to $0$. In particular, $\hat{f}(\boldsymbol{x})$ is equal to the expected value of $f$ at the rounded solution $\boldsymbol{x}^{\theta}$, where the expectation is with respect to the randomness posed by $\theta$. It is known that the Lovász extension $\hat{f}$ is a convex function of $\boldsymbol{x}$ if and only if the corresponding set function $f$ is submodular [26]. This property makes the Lovász extension a suitable continuous extension for submodular minimization.
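The threshold description above translates directly into an exact evaluation procedure: the level set $\{i:x_i>\theta\}$ only changes at the coordinate values of $\boldsymbol{x}$, so the expectation in (8) reduces to a finite sum. The following sketch illustrates this, with the cut function of a small graph as an assumed example.

```python
def lovasz_extension(f, x):
    """Exact evaluation of the Lovász extension at x in [0,1]^N (a sketch).

    Uses the threshold view hat{f}(x) = E_theta[ f({i : x_i > theta}) ] with theta
    uniform on [0,1]; the level set is constant between consecutive coordinate
    values of x, so the expectation is a finite sum over threshold intervals.
    """
    n = len(x)
    breakpoints = sorted(set([0.0, 1.0] + list(x)))
    value = 0.0
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        theta = 0.5 * (lo + hi)                       # any point strictly inside (lo, hi)
        level_set = frozenset(i for i in range(n) if x[i] > theta)
        value += (hi - lo) * f(level_set)             # P(theta in (lo, hi)) * f(level set)
    return value

# Example: the (submodular) cut function of a path graph at a fractional point.
edges = [(0, 1), (1, 2)]
cut = lambda S: sum(1 for (u, v) in edges if (u in S) != (v in S))
print(lovasz_extension(cut, [0.2, 0.9, 0.5]))         # ≈ |0.2-0.9| + |0.9-0.5| = 1.1
```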

III-B Multilinear Extension

As mentioned earlier, the Lovász extension provides a convex continuous extension of a submodular function, which is not very useful for maximizing a submodular function. For the maximization problem, one can instead consider another continuous extension known as the multilinear extension. The multilinear extension of a set function $f$ at a given vector $\boldsymbol{x}\in[0,1]^N$, denoted by $F(\boldsymbol{x})$, is given by the expected value of $f$ at a random set $R_{\boldsymbol{x}}$ that is sampled from the ground set by including each element $i$ in $R_{\boldsymbol{x}}$ independently with probability $x_i$, i.e.,

$$F(\boldsymbol{x})=\mathbb{E}\big[f(R_{\boldsymbol{x}})\big]=\sum_{S\subseteq V} f(S)\prod_{i\in S} x_i\prod_{i\notin S}(1-x_i).$$
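Unlike the Lovász extension, the multilinear extension is an expectation over independent rounding, so in practice it is usually estimated by sampling. A minimal Monte Carlo sketch follows; the function name and the small coverage function used as an example are assumptions for illustration.

```python
import random

def multilinear_extension(f, x, num_samples=20000, seed=0):
    """Monte Carlo estimate of the multilinear extension F(x) = E[f(R_x)],
    where element i enters the random set R_x independently with probability x[i]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        R = frozenset(i for i, p in enumerate(x) if rng.random() < p)
        total += f(R)
    return total / num_samples

# Example: coverage function (monotone submodular) at a fractional point.
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(multilinear_extension(f, [0.5, 0.5, 0.5]))   # exact value is 2.0
```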

One can show that the Lovász extension of a submodular function is always a lower bound for its multilinear extension, i.e., $\hat{f}(\boldsymbol{x})\leq F(\boldsymbol{x})$ for all $\boldsymbol{x}\in[0,1]^N$. Moreover, at any binary vector $\boldsymbol{x}\in\{0,1\}^N$, we have $\hat{f}(\boldsymbol{x})=F(\boldsymbol{x})=f(\boldsymbol{x})$. In general, the multilinear extension of a submodular function is neither convex nor concave. However, it is known that there is a polynomial-time continuous greedy algorithm that can approximately maximize the multilinear extension of a nonnegative submodular function subject to a certain class of constraints. That result is formally stated in the following lemma.

Lemma 1

[21, Theorems I.1 & I.2] For any nonnegative submodular function $f$ and any down-monotone solvable polytope $\mathcal{P}\subseteq[0,1]^N$ (a polytope is solvable if linear functions can be maximized over it in polynomial time; it is down-monotone if $\boldsymbol{x}\in\mathcal{P}$ and $\boldsymbol{0}\leq\boldsymbol{y}\leq\boldsymbol{x}$ coordinate-wise imply $\boldsymbol{y}\in\mathcal{P}$), there is a polynomial-time continuous greedy algorithm that finds a point $\boldsymbol{x}\in\mathcal{P}$ such that $F(\boldsymbol{x})\geq\frac{1}{e}\,F(\boldsymbol{x}^{\mathrm{opt}})$, where $\boldsymbol{x}^{\mathrm{opt}}$ is the optimal integral solution to the maximization problem $\max\{F(\boldsymbol{x}):\boldsymbol{x}\in\mathcal{P}\cap\{0,1\}^N\}$. If, in addition, the submodular function is monotone, the approximation guarantee can be improved to $1-\frac{1}{e}$.

According to Lemma 1, the multilinear extension provides a suitable relaxation for devising an approximation algorithm for submodular maximization. The reason is that one can first approximately solve the multilinear extension in polynomial time and then round the solution to obtain an approximate integral feasible solution.
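For intuition, the sketch below shows the basic monotone continuous greedy idea in Frank-Wolfe form over the polytope of row-stochastic allocation matrices; the measured continuous greedy of [21], which also handles non-monotone functions, is more delicate. Here grad_estimate stands in for a sampled gradient of the multilinear extension, and all names are illustrative.

```python
import numpy as np

def continuous_greedy(grad_estimate, n, m, steps=100):
    """Frank-Wolfe style (monotone) continuous greedy over row-stochastic matrices."""
    x = np.zeros((n, m))
    for _ in range(steps):
        g = grad_estimate(x)                          # approximate gradient of F at x
        v = np.zeros((n, m))
        v[np.arange(n), np.argmax(g, axis=1)] = 1.0   # best polytope vertex: top item per agent
        x += v / steps                                # move a 1/steps step toward that vertex
    return x

# Toy usage with a dummy constant gradient, for illustration only.
x_frac = continuous_greedy(lambda x: np.ones_like(x), n=3, m=2)
print(x_frac.sum(axis=1))   # each row sums to 1, so x_frac is a feasible fractional point
```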

III-C Fair Contention Resolution

Here, we provide some background on a general randomized rounding scheme known as fair contention resolution that allows one to round a fractional solution to an integral one while preserving specific properties. Intuitively, given a fractional solution to a resource allocation problem, one ideally wants to round the solution to an integral allocation such that each item is allocated to only one agent. However, a natural randomized rounding often does not achieve that property as multiple agents may receive the same item. To resolve that issue, one can use a “contention resolution scheme,” which determines which agent should receive the item while losing at most a constant factor in the objective value.

More precisely, suppose $n$ agents compete for an item independently with probabilities $p_1,\ldots,p_n$. Denote by $R$ the random set of agents who request the item in the first phase, i.e., $\mathbb{P}(i\in R)=p_i$ independently for each $i\in[n]$. In the second phase, if $|R|\leq 1$, we do not make any change to the allocation. Otherwise, we allocate the item to each agent $i\in R$ who requested the item in the first phase with probability

$$r_i=\frac{1}{\sum_{k=1}^{n}p_k}\Bigg(\sum_{k\in R\setminus\{i\}}\frac{p_k}{|R|-1}+\sum_{k\notin R}\frac{p_k}{|R|}\Bigg).$$

Note that for any realization of $R\neq\emptyset$, we have $\sum_{i\in R}r_i=1$, so that after the second phase, the item is allocated to exactly one agent with probability $1$. The importance of such a fair contention resolution scheme is that if the item was requested in the first phase by an agent, then after the second phase, that agent still receives the item with probability at least $1-\frac{1}{e}$ whenever $\sum_{k=1}^{n}p_k\leq 1$. More precisely, it can be shown that [19]:

Lemma 2

[19, Lemma 1.5] Conditioned on agent $i$ requesting the item in the first phase, she obtains it after the second phase with probability exactly $\frac{1-\prod_{k=1}^{n}(1-p_k)}{\sum_{k=1}^{n}p_k}$.
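A direct simulation of the two-phase scheme, using the allocation probabilities described above, looks as follows; the function name and the use of Python's random module are incidental choices, and the weights implement the reweighting formula stated before Lemma 2.

```python
import random

def fair_contention_resolution(p, rng=random.Random(0)):
    """Two-phase fair contention resolution (sketch).

    p[i] is the probability that agent i requests the item. Returns the index of
    the single agent who ends up with the item, or None if nobody requests it.
    """
    n = len(p)
    R = [i for i in range(n) if rng.random() < p[i]]   # phase 1: independent requests
    if not R:
        return None
    if len(R) == 1:
        return R[0]
    total = sum(p)
    # Phase 2: allocate to i in R with the "fair" probability r_i.
    weights = [
        sum(p[k] for k in R if k != i) / (len(R) - 1)
        + sum(p[k] for k in range(n) if k not in R) / len(R)
        for i in R
    ]
    weights = [w / total for w in weights]             # the r_i's, summing to 1 over R
    return rng.choices(R, weights=weights, k=1)[0]
```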

IV MSW with Positive Linear Externalities

In this section, we consider MSW with positive linear externalities (see Example 1). This problem was first introduced and studied by [13], where the authors provided a $2$-approximation algorithm for MSW by solving a linear programming relaxation and then rounding the solution using an iterative greedy algorithm. However, the analysis in [13] is relatively involved and appears to be somewhat ad hoc. In particular, it is not clear how one can extend those results to the case of nonlinear externalities.

Here, we establish the same approximation guarantee using a natural nonlinear relaxation for MSW with positive linear externalities. By showing that the objective function is a monotone supermodular function, we provide a 2-approximation algorithm by solving a concave program with $nm$ variables based on the Lovász extension of the objective function. We then show that a well-known randomized rounding algorithm for submodular minimization readily returns a 2-approximate solution. Therefore, we obtain a very natural and principled method for devising a constant-factor approximation algorithm. In particular, our analysis is simple and provides a clear explanation of why the iterative greedy rounding performs well by making a novel connection to submodular optimization. This approach allows us to substantially extend those results to more general convex externalities using the same principled method. Interestingly, one can show that derandomization of our randomized algorithm for the case of positive linear externalities recovers the iterative greedy algorithm developed in [13].

For the case of linear externalities $f^i_j(y)=y$, the IP (5) can be written in the form of the following IP:

$$\max_{\boldsymbol{x}}\ \sum_{j=1}^{m}\boldsymbol{x}_j'\boldsymbol{W}^j\boldsymbol{x}_j\qquad\text{s.t.}\quad \boldsymbol{x}\boldsymbol{1}=\boldsymbol{1},\quad x_{ij}\in\{0,1\}\ \ \forall i\in[n],\ j\in[m], \tag{9}$$

where for any $j\in[m]$, we define $\boldsymbol{x}_j$ to be the column vector $(x_{1j},\ldots,x_{nj})'$, and $g_j(\boldsymbol{x}_j)=\boldsymbol{x}_j'\boldsymbol{W}^j\boldsymbol{x}_j$ is a quadratic function with $\boldsymbol{W}^j:=(w^j_{ki})_{k,i\in[n]}$. Moreover, we use $\boldsymbol{1}$ to refer to a column vector of all ones.

Since all the entries of $\boldsymbol{W}^j$ are nonnegative, it is easy to see that the function $g_j$ is a nonnegative and monotone supermodular set function. Now let $\hat{g}_j$ be the continuous Lovász extension of $g_j$ from the hypercube $\{0,1\}^n$ to the unit cube $[0,1]^n$. Since $-g_j$ is submodular, the function $-\hat{g}_j$ is convex, which implies that $\hat{g}_j$ is a nonnegative concave function. In fact, using (8) and the quadratic structure of $g_j$, one can derive a closed form for $\hat{g}_j$ as

$$\hat{g}_j(\boldsymbol{x}_j)=\mathbb{E}_{\theta}\Big[\sum_{i=1}^{n}\sum_{k=1}^{n} w^j_{ki}\,\mathbb{1}\{x_{ij}>\theta\}\,\mathbb{1}\{x_{kj}>\theta\}\Big] \tag{10}$$
$$\qquad=\sum_{i=1}^{n}\sum_{k=1}^{n} w^j_{ki}\,\min\{x_{ij},x_{kj}\}. \tag{11}$$

From the above relation, it is easy to see that for any $j\in[m]$, the function $\hat{g}_j$ is indeed a nonnegative and monotone concave function. As the objective function in (9) is separable across the column variables $\boldsymbol{x}_j$, by linearity of expectation $\sum_{j=1}^{m}\hat{g}_j(\boldsymbol{x}_j)$ equals the Lovász extension of the objective function in (9), which is also a concave function. Therefore, we obtain the following concave relaxation for the IP (9), whose optimal value upper-bounds that of (9).

$$\max_{\boldsymbol{x}}\ \sum_{j=1}^{m}\hat{g}_j(\boldsymbol{x}_j)\qquad\text{s.t.}\quad \boldsymbol{x}\boldsymbol{1}=\boldsymbol{1},\quad x_{ij}\in[0,1]\ \ \forall i\in[n],\ j\in[m]. \tag{12}$$

By abuse of notation, let us denote the optimal (fractional) solution to (12), which can be found in polynomial time, by $\boldsymbol{x}^*$, where $\boldsymbol{x}^*$ is an $n\times m$ matrix whose $j$th column is given by $\boldsymbol{x}^*_j$. In what follows, we provide a simple algorithm to round $\boldsymbol{x}^*$ to a feasible integral solution to IP (9) whose expected objective value is at least $\frac{1}{2}$ of the optimum value of (12).
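Given the closed form (10)-(11), the relaxation objective in (12) can be evaluated with elementary operations; a small numpy sketch, with an assumed weight layout W[j, k, i] = $w^j_{ki}$, is shown below. Any off-the-shelf concave (equivalently, convex) programming solver could then be used to obtain $\boldsymbol{x}^*$.

```python
import numpy as np

def relaxation_objective(W, X):
    """Closed-form Lovász-extension objective of the concave relaxation (a sketch).

    W: array of shape (m, n, n) with W[j, k, i] = w^j_{ki}.
    X: fractional allocation of shape (n, m) with rows summing to 1.
    Returns sum_j sum_{i,k} w^j_{ki} * min(x_{ij}, x_{kj}).
    """
    n, m = X.shape
    total = 0.0
    for j in range(m):
        xj = X[:, j]
        pairwise_min = np.minimum.outer(xj, xj)   # matrix of min(x_{ij}, x_{kj})
        total += float(np.sum(W[j] * pairwise_min))
    return total
```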

IV-A An Iterative Randomized Rounding Algorithm

Here, we show that a slight variant of the randomized rounding algorithm derived from the work of Kleinberg and Tardos (KT) for metric labeling [24] provides a 2-approximation for the IP (9) when applied to the optimal solution of (12). The rounding scheme is summarized in Algorithm 1.

Let $\boldsymbol{x}^*$ be the optimal solution to the concave program (12).

During the course of the algorithm, let $A$ be the set of allocated agents and $S_j$ be the set of agents that are allocated item $j$. Initially set $A=\emptyset$ and $S_j=\emptyset$ for all $j\in[m]$.

While $A\neq[n]$, pick $(j,\theta)\in[m]\times[0,1]$ uniformly at random. Let $R=\{i\in[n]\setminus A:\ x^*_{ij}>\theta\}$, and update $S_j\leftarrow S_j\cup R$ and $A\leftarrow A\cup R$.

Return $(S_1,\ldots,S_m)$.

Algorithm 1 Iterative KT Rounding Algorithm

Algorithm 1 proceeds in several rounds until all the agents are assigned an item. At each round, the algorithm selects a random item $j$ and a random subset $R$ of unassigned agents, and assigns item $j$ to the agents in $R$.
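For concreteness, a minimal Python rendering of Algorithm 1 is given below; the dense list-of-lists input X_star and the function name kt_rounding are illustrative choices.

```python
import random

def kt_rounding(X_star, rng=random.Random(0)):
    """Iterative KT-style rounding of a fractional solution (a sketch).

    X_star: n x m list of lists with each row summing to 1.
    Returns a list `assign` with assign[i] = item allocated to agent i.
    """
    n, m = len(X_star), len(X_star[0])
    assign = [None] * n
    unassigned = set(range(n))
    while unassigned:
        j = rng.randrange(m)            # random item
        theta = rng.random()            # random threshold in [0, 1]
        newly = [i for i in unassigned if X_star[i][j] > theta]
        for i in newly:
            assign[i] = j
        unassigned -= set(newly)
    return assign
```

Since every row of X_star sums to one, each unassigned agent has at least one positive entry, so the loop terminates with probability one.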

Theorem 1

Algorithm 1 is a $2$-approximation algorithm for the IP (9) with positive linear externalities.

We show that the expected value of the solution returned by Algorithm 1 is at least $\frac{1}{2}\,\hat{g}(\boldsymbol{x}^*)$, where $\hat{g}(\boldsymbol{x}):=\sum_{j=1}^{m}\hat{g}_j(\boldsymbol{x}_j)$, and $\boldsymbol{x}^*$ is the optimal solution to (12). To that end, let $R$ be the random set of agents that are selected during the first round of the algorithm. It is enough to show

$$\mathbb{E}\big[g_{j^*}(R)\big]\ \geq\ \frac{1}{2}\,\mathbb{E}\big[\hat{g}(\boldsymbol{x}^*_R)\big], \tag{13}$$

where $j^*$ denotes the item selected during the first round and $\hat{g}(\boldsymbol{x}^*_R)$ denotes the value of the Lovász extension when restricted to the rows of $\boldsymbol{x}^*$ corresponding to the agents in $R$ (equivalently, $\hat{g}(\boldsymbol{x}^*_R)$ is equal to evaluating $\hat{g}$ at the solution that is obtained from $\boldsymbol{x}^*$ by setting all the rows outside $R$ to $\boldsymbol{0}$). In other words, inequality (13) shows that the expected utility of the agents assigned during the first round is at least $\frac{1}{2}$ of the expected value that those agents fractionally contribute to the Lovász extension. If we can show (13), the theorem then follows by an induction on the number of agents and using superadditivity of the functions $g_j$. More precisely, given $R$, let $(S'_1,\ldots,S'_m)$ be the (random) sets returned by the algorithm when applied on the remaining agents in $[n]\setminus R$. Using the induction hypothesis on the remaining set of agents $[n]\setminus R$, we have

(14)

where the maximization variable is restricted only to the agents in $[n]\setminus R$, and the second inequality holds because the restriction of $\boldsymbol{x}^*$ to those agents is a feasible solution to the middle maximization. Now, we can write

(15)
(16)
(17)
(18)

where the first inequality uses the superadditivity of the functions $g_j$, that is, $S\cap T=\emptyset$ implies $g_j(S\cup T)\geq g_j(S)+g_j(T)$, and the last inequality holds by (13) and (14).

Finally, to establish (13), we can write

(19)

Let us consider the restriction of the solution $\boldsymbol{x}^*$ to a pair of rows $i$ and $k$. Thus, we can write

(20)
(21)
(22)
(23)
(24)
(25)

where the second equality in (20) holds because a pair of rows $i$ and $k$ contributes to the objective exactly when at least one of $i$ or $k$ belongs to $R$, and contributes nothing otherwise. Moreover, the last inequality uses the fact that the rows of $\boldsymbol{x}^*$ sum to one. Finally, by combining (20) and (19), we obtain (13), which completes the proof.

V MSW with Positive Monotone Convex Externalities

Here, we extend the previous section’s results by considering the maximum social welfare problem with nonnegative monotone convex externalities.

Lemma 3

Given nondecreasing convex externality functions $f^i_j$ with $f^i_j(0)=0$, the objective function in (1) is a nondecreasing and nonnegative supermodular set function.

Let $g_j(S):=\sum_{i\in S} f^i_j\big(\sum_{k\in S} w^j_{ki}\big)$, and note that the objective function in (1) can be written in the separable form $\sum_{j=1}^{m} g_j(S_j)$. Thus, it is enough to show that each $g_j$ is a monotone supermodular set function. The monotonicity and nonnegativity of $g_j$ immediately follow from the nonnegativity of the weights $w^j_{ki}$, and the monotonicity and nonnegativity of $f^i_j$. To show supermodularity of $g_j$, for any nested subsets $A\subseteq B\subseteq[n]$ and any agent $e\notin B$, we can write

(26)
(27)
(28)
(29)
(30)
(31)

Here, the first inequality holds by the monotonicity of the functions $f^i_j$ and by $A\subseteq B$ (note that each of the summands is nonnegative). The second inequality in (26) follows from convexity of the functions $f^i_j$. More precisely, given any agent $i$, let $a=\sum_{k\in A}w^j_{ki}$, $b=\sum_{k\in B}w^j_{ki}$, and $\delta=w^j_{ei}$, where we note that $a\leq b$. By convexity of $f^i_j$ we have $f^i_j(b+\delta)-f^i_j(b)\geq f^i_j(a+\delta)-f^i_j(a)$, or equivalently $f^i_j(b+\delta)+f^i_j(a)\geq f^i_j(a+\delta)+f^i_j(b)$, which is exactly the second inequality in (26).

V-A Convex Polynomial Externalities of Bounded Degree

Here, we show that if the externality functions can be represented (or uniformly approximated) by nonnegative-coefficient polynomials of degree less than $d$, then Algorithm 1 returns a $d$-approximate solution to the maximum social welfare problem.

Theorem 2

Assume each externality function $f^i_j$ is a polynomial with nonnegative coefficients of degree less than $d$. Then Algorithm 1 is a $d$-approximation algorithm for the MSW (1).

As each $f^i_j$ is a convex and nondecreasing function, using Lemma 3, the objective function in (1) is a monotone supermodular function. Therefore, the Lovász extension relaxation of the objective function in (1), given by $\sum_{j=1}^{m}\hat{g}_j(\boldsymbol{x}_j)$, is a concave function, where

$$\hat{g}_j(\boldsymbol{x}_j)=\mathbb{E}_{\theta}\Big[\sum_{i=1}^{n} x^{\theta}_{ij}\, f^i_j\Big(\sum_{k=1}^{n} w^j_{ki}\, x^{\theta}_{kj}\Big)\Big]. \tag{32}$$

As a result, one can solve the Lovász extension relaxation of the MSW using the corresponding concave program (12), whose optimal solution, denoted by $\boldsymbol{x}^*$, can be found in polynomial time. Since each $x^{\theta}_{ij}$ in (32) is a binary random variable, we have $(x^{\theta}_{ij})^r=x^{\theta}_{ij}$ for any integer $r\geq 1$. Moreover, since each $f^i_j$ is a polynomial with nonnegative coefficients of degree less than $d$, after expanding all the terms in (32), there are nonnegative coefficients $c^j_T$, indexed by subsets $T\subseteq[n]$ with $|T|\leq d$, such that

$$\sum_{i=1}^{n} x^{\theta}_{ij}\, f^i_j\Big(\sum_{k=1}^{n} w^j_{ki}\, x^{\theta}_{kj}\Big)=\sum_{T\subseteq[n]:\,|T|\leq d} c^j_T\prod_{i\in T} x^{\theta}_{ij}.$$

Note that we may assume $f^i_j$ does not have any constant term, as it does not affect the MSW optimization. Taking expectations in the above relation and summing over all $j\in[m]$, we obtain

$$\sum_{j=1}^{m}\hat{g}_j(\boldsymbol{x}_j)=\sum_{j=1}^{m}\sum_{T\subseteq[n]:\,|T|\leq d} c^j_T\,\min_{i\in T}x_{ij}, \tag{33}$$

where $c^j_T\geq 0$ for all $T$ and $j$. Now, using the same argument as in Theorem 1, let $R$ be the random set obtained during the first round of Algorithm 1. Then, the term $c^j_T\min_{i\in T}x_{ij}$ contributes to the expected value only if at least one of the agents in $T$ belongs to $R$. Therefore, using (33) and linearity of expectation, we can write

(34)
(35)
(36)
(37)
(38)
(39)

In the above derivations, the first inequality holds because each set $T$ contains at most $d$ agents, and the second inequality holds because the terms $c^j_T\min_{i\in T}x_{ij}$ are nonnegative. This shows that after the first round of Algorithm 1, the expected utility of the assigned agents is at least $\frac{1}{d}$ of the expected value that those agents fractionally contribute to the Lovász extension. Now, by following the same argument as in the first part of the proof of Theorem 1, one can see that the expected value of Algorithm 1 is at least $\frac{1}{d}\sum_{j=1}^{m}\hat{g}_j(\boldsymbol{x}^*_j)$.

Remark 1

For convex polynomial externalities of degree at most $d$, the $(d+1)$-approximation guarantee of Theorem 2 is an exponential improvement over the approximation guarantee given in [2, Theorem 3.9], which grows exponentially with $d$.

V-B Monotone Convex Externalities of Bounded Curvature

In this part, we provide an approximation algorithm for the MSW with general monotone and nonnegative convex externalities. Unfortunately, for general convex externalities, the Lovász extension of the objective function does not admit a closed-form structure. For that reason, we develop an approximation algorithm whose performance guarantee depends on the curvature of the externality functions.

Definition 2

Given $\gamma\in(0,1]$, we define the $\gamma$-curvature of a nonnegative nondecreasing convex function $f$ as $c_{\gamma}(f):=\inf_{y>0}\frac{f(\gamma y)}{\gamma f(y)}$, where we note that $c_{\gamma}(f)\leq 1$, since $f(\gamma y)\leq\gamma f(y)$ for any convex $f$ with $f(0)=0$.

Theorem 3

The MSW (1) with nondecreasing convex externality functions admits a polynomial-time approximation algorithm whose approximation factor is governed by the $\gamma$-curvature of the externality functions $f^i_j$.

Using Lemma 3, the MSW (1) with monotone convex externality functions can be cast as the supermodular maximization problem (9). Relaxing that problem via the Lovász extension, we obtain the concave program (12), whose optimal solution, denoted by $\boldsymbol{x}^*$, can be found in polynomial time. We round the optimal fractional solution to an integral one using the two-stage fair contention resolution scheme. It is instructive to think about the rounding process as a two-stage process for rounding the fractional matrix $\boldsymbol{x}^*$. In the first stage, the columns are rounded independently, and in the second stage, the rows of the resulting solution are randomly revised to create the final integral solution $\bar{\boldsymbol{x}}$, satisfying the partition constraints in (9). We note that although $\bar{\boldsymbol{x}}$ is correlated across its columns (due to the row-wise rounding), because the objective function in (9) is separable across the column variables, using linearity of expectation and regardless of the rounding scheme we have $\mathbb{E}\big[\sum_{j=1}^{m} g_j(\bar{\boldsymbol{x}}_j)\big]=\sum_{j=1}^{m}\mathbb{E}\big[g_j(\bar{\boldsymbol{x}}_j)\big]$.

The first stage of the rounding simply follows from the definition of the Lovász extension: for each item $j\in[m]$, pick an independent uniform random variable $\theta_j\sim U[0,1]$, and for every $i\in[n]$ let $\hat{x}_{ij}=1$ if $x^*_{ij}>\theta_j$, and $\hat{x}_{ij}=0$, otherwise. Then, from the definition of the Lovász extension, we have $\mathbb{E}\big[g_j(\hat{\boldsymbol{x}}_j)\big]=\hat{g}_j(\boldsymbol{x}^*_j)$ for every $j\in[m]$. Thus, after the first stage we obtain a binary random matrix $\hat{\boldsymbol{x}}$ whose expected objective value equals the maximum value of the concave relaxation (12). Unfortunately, after the first phase, the rounded solution may not satisfy the partition constraints, as multiple items may be assigned to the same agent $i$. To resolve that issue, in the second stage we round $\hat{\boldsymbol{x}}$ to $\bar{\boldsymbol{x}}$ by separately applying the fair contention resolution scheme to every row of $\hat{\boldsymbol{x}}$. More precisely, for each agent $i\in[n]$, let $R_i=\{j\in[m]:\hat{x}_{ij}=1\}$ be a random set denoting the positions in the $i$th row of $\hat{\boldsymbol{x}}$ that are rounded to $1$ in the first stage of rounding. If $|R_i|\leq 1$, we do nothing. Otherwise, we set all the entries of the $i$th row to $0$ except the $j$th entry, where $j$ is selected from $R_i$ according to the fair contention resolution probabilities of Section III-C. Thus, after the second stage of rounding, $\bar{\boldsymbol{x}}$ is a feasible solution to (9). (If a row of $\hat{\boldsymbol{x}}$ is completely zero, we can set an arbitrary entry in that row to $1$; by monotonicity of the objective function, such a change can only increase the rounded solution's objective value.)
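Putting the two stages together, a self-contained sketch of this rounding is given below; as before, the data layout and names are illustrative, and the row-wise step implements the fair contention resolution probabilities of Section III-C with the fractional row entries as the request probabilities.

```python
import random

def two_stage_rounding(X_star, rng=random.Random(0)):
    """Sketch of the two-stage rounding: column-wise threshold rounding (Lovász
    extension view), followed by row-wise fair contention resolution."""
    n, m = len(X_star), len(X_star[0])

    # Stage 1: one independent threshold per column (item).
    thetas = [rng.random() for _ in range(m)]
    X_hat = [[1 if X_star[i][j] > thetas[j] else 0 for j in range(m)] for i in range(n)]

    # Stage 2: each row keeps at most one of its rounded-up entries.
    X_bar = [[0] * m for _ in range(n)]
    for i in range(n):
        p = X_star[i]
        R = [j for j in range(m) if X_hat[i][j] == 1]
        if not R:
            j_keep = max(range(m), key=lambda j: p[j])      # empty row: pick any item
        elif len(R) == 1:
            j_keep = R[0]
        else:
            total = sum(p)
            weights = [sum(p[k] for k in R if k != j) / (len(R) - 1)
                       + sum(p[k] for k in range(m) if k not in R) / len(R)
                       for j in R]
            j_keep = rng.choices(R, weights=[w / total for w in weights], k=1)[0]
        X_bar[i][j_keep] = 1
    return X_bar
```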

Since $\theta_1,\ldots,\theta_m$ are independent uniform random variables, for any $i\in[n]$ and $j\in[m]$ we have $\mathbb{P}(\hat{x}_{ij}=1)=x^*_{ij}$. Therefore, for any row $i$, one can imagine that the entries of row $i$ compete independently with probabilities $x^*_{i1},\ldots,x^*_{im}$ to receive the resource in the contention resolution scheme. Let us consider an arbitrary column $j$. Using Lemma 2 for row $i$, and given $\hat{x}_{ij}=1$, the probability that item $j$ is given to agent $i$ is at least $1-\frac{1}{e}$, that is, $\mathbb{P}\big(\bar{x}_{ij}=1\mid\hat{x}_{ij}=1\big)\geq 1-\frac{1}{e}$. Now consider any and note that , as the event