Price of Dependence: Stochastic Submodular Maximization with Dependent Items

Shaojie Tang et al. · May 23, 2019

In this paper, we study the stochastic submodular maximization problem with dependent items subject to a variety of packing constraints, such as matroid and knapsack constraints. The input of our problem is a finite set of items, and each item is in a particular state from a set of possible states. After picking an item, we are able to observe its state. We assume a monotone and submodular utility function over items and states, and our objective is to adaptively select a group of items so as to maximize the expected utility. Previous studies on stochastic submodular maximization often assume that items' states are independent; however, this assumption may not hold in general. This motivates us to study the stochastic submodular maximization problem with dependent items. We first introduce the concept of degree of independence to capture the degree to which one item's state depends on the others'. We then propose a non-adaptive policy based on a modified continuous greedy algorithm and show that its approximation ratio is $\alpha\big(1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}\big)$, where the value of $\alpha$ depends on the type of constraint, e.g., $\alpha=1$ for a matroid constraint, $\kappa$ is the degree of independence, e.g., $\kappa=1$ for independent items, and $m$ is the number of items.


1 Introduction

Stochastic submodular maximization (SSM) has been extensively studied recently (Asadpour et al. 2008). The input of SSM is a set of items, and each item is in a particular state from a set of possible states. After picking an item, we are able to observe its state. Given a monotone and submodular utility function over all items and their states, our objective is to adaptively select a group of items that maximizes the expected utility subject to a variety of constraints. One example is the stochastic sensor cover problem. In this example, we are given a set of sensors, and the state of each sensor is the subset of targets it covers; this subset may change due to uncertain environmental conditions. After selecting a sensor, we are able to observe its state, i.e., the actual subset of targets that can be covered by this sensor. The objective of the stochastic sensor cover problem is then to adaptively select a group of sensors that covers the largest number of targets (in expectation).
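To make the sensor cover example concrete, here is a minimal sketch (our own illustration with hypothetical sensor and target names, not code from the paper) of a state-dependent coverage utility $f(S, \phi)$: a realization maps each sensor to the subset of targets it actually covers, and the utility of a set of sensors is the number of distinct targets covered under that realization.

# Minimal illustration (not from the paper): a coverage utility f(S, phi),
# where phi maps each sensor to the subset of targets it actually covers.

def coverage_utility(picked_sensors, realization):
    """Number of distinct targets covered by the picked sensors under the
    given realization (a dict: sensor -> set of covered targets)."""
    covered = set()
    for sensor in picked_sensors:
        covered |= realization.get(sensor, set())
    return len(covered)

# Example: two possible realizations of the same three sensors.
phi_good = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t4"}}
phi_bad  = {"s1": {"t1"},       "s2": set(),        "s3": {"t4"}}

print(coverage_utility({"s1", "s2"}, phi_good))  # 3 targets covered
print(coverage_utility({"s1", "s2"}, phi_bad))   # 1 target covered

Coverage functions of this form are monotone and submodular for every fixed realization, which is the setting assumed throughout the paper.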

The majority of existing work assumes that items are independent of each other, i.e., one item's state does not depend on the others'. However, this assumption does not always hold in reality. Consider the stochastic sensor cover example above: since each sensor's state is affected by environmental conditions that are shared among all sensors, their states are correlated. Another example comes from viral marketing, where one customer's decision on whether or not to buy a product could depend on her neighbors' decisions. Golovin and Krause (2011) extended the previous studies to dependent items; however, their results only hold when the utility function is adaptive submodular, and it is not clear how to generalize their results to more general settings. In this paper, we study a very general setting for SSM with dependent items. To capture the degree to which one item's state is correlated with the others', we introduce the concept of degree of independence. A larger degree of independence indicates a weaker correlation among all items' states; in particular, this value is 1 for independent items. We then propose a non-adaptive policy based on a modified continuous greedy algorithm. We say a policy is non-adaptive if it always picks the next item before observing the states of the items already picked. We show that our non-adaptive policy achieves approximation ratio $\alpha\big(1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}\big)$, where $\alpha$ depends on the type of constraint, e.g., $\alpha=1$ for a matroid constraint, $\kappa$ is the degree of independence, and $m$ is the number of items. Since our policy is non-adaptive, this ratio also bounds the adaptivity gap between the utilities of the best adaptive and the best non-adaptive policies.

2 Preliminaries

2.1 Submodular Function and Multilinear Extension

A submodular function is a set function $f: 2^{E} \rightarrow \mathbb{R}_{\geq 0}$, where $2^{E}$ denotes the power set of $E$, which satisfies a natural "diminishing returns" property: the marginal gain from adding an element to a set $S$ is at least as high as the marginal gain from adding the same element to a superset of $S$. Formally, a submodular function satisfies the following property: for every $S \subseteq T \subseteq E$ and every $e \in E \setminus T$, we have $f(S \cup \{e\}) - f(S) \geq f(T \cup \{e\}) - f(T)$. We say a submodular function is monotone if $f(S) \leq f(T)$ whenever $S \subseteq T$. Consider any vector $y \in [0,1]^{E}$. The multilinear extension of $f$ is defined as $F(y) = \sum_{S \subseteq E} f(S) \prod_{e \in S} y_e \prod_{e \notin S} (1 - y_e)$.
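Because the multilinear extension sums over exponentially many sets, it is typically estimated by sampling: draw a random set $R(y)$ that contains each element $e$ independently with probability $y_e$ and average $f(R(y))$. Below is a minimal sketch of this estimator (our own illustration, with a generic set function passed in as a callable; it is not code from the paper).

import random

def estimate_multilinear_extension(f, ground_set, y, num_samples=1000, rng=random):
    """Estimate F(y) = E[f(R(y))], where R(y) contains each element e
    independently with probability y[e]."""
    total = 0.0
    for _ in range(num_samples):
        sampled = {e for e in ground_set if rng.random() < y[e]}
        total += f(sampled)
    return total / num_samples

# Example with a simple coverage-style submodular function.
ground = ["a", "b", "c"]
cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
f = lambda S: len(set().union(*(cover[e] for e in S)))
print(estimate_multilinear_extension(f, ground, {"a": 0.5, "b": 0.5, "c": 0.5}))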

3 Notations and Problem Formulation

3.1 Items and States

Let $E$ denote a finite set of $m$ items, and each item $e \in E$ is in a particular state from a set $O$ of possible states. Let $\phi: E \rightarrow O$ denote a realization of item states. Let $\Phi$ be a random realization, where $\Phi(e)$ denotes the random state of item $e$. After picking an item $e$, we are able to observe its realization $\Phi(e)$. Let $O^{E}$ denote the set of all realizations; we assume there is a known prior probability distribution $p$ over realizations, i.e., $p(\phi) = \Pr[\Phi = \phi]$.
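As a toy illustration of a correlated prior (our own example, not from the paper), the sketch below encodes a distribution $p$ over joint realizations of two items whose states are driven by a shared condition, so observing one item's state is informative about the other's.

# Toy correlated prior over realizations of two items (hypothetical example).
# A shared "weather" condition drives both sensors' states, so the items'
# states are dependent: p(phi) does not factor into per-item marginals.
prior = {
    # realization phi given as (state of s1, state of s2): probability
    ("good", "good"): 0.45,   # clear weather
    ("bad",  "bad"):  0.45,   # foggy weather
    ("good", "bad"):  0.05,
    ("bad",  "good"): 0.05,
}

# Marginals are uniform, but the joint is far from the product of marginals.
p_s1_good = sum(p for (s1, _), p in prior.items() if s1 == "good")   # 0.5
p_joint_good = prior[("good", "good")]                               # 0.45 != 0.25
print(p_s1_good, p_joint_good)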

3.2 Utility Function and Problem Formulation

Let $f: 2^{E} \times O^{E} \rightarrow \mathbb{R}_{\geq 0}$ be a monotone and submodular function over all items and their states. A policy $\pi$ is a function that specifies which item to pick next given the observations made so far. Note that $\pi$ can be regarded as a decision tree that specifies a rule for picking items adaptively. Let $\mathcal{F}$ be a downward-closed family of subsets of $E$. Let $E(\pi, \phi)$ denote the subset of items picked by policy $\pi$ under realization $\phi$. The utility of $\pi$ can then be expressed as $f_{avg}(\pi) = \sum_{\phi} p(\phi)\, f(E(\pi, \phi), \phi)$, where $p(\phi)$ denotes the probability that $\phi$ is realized. We say a policy $\pi$ is feasible if $E(\pi, \phi) \in \mathcal{F}$ for any $\phi$. Our goal is to identify the best feasible policy that maximizes the expected utility.
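The expected utility $f_{avg}(\pi)$ can be computed directly when the realization space is small enough to enumerate. The sketch below is a hypothetical illustration (in the spirit of the toy prior above, not the paper's code): a policy is represented as a function from the observations made so far to the next item to pick, and we simulate it against every realization.

def expected_utility(policy, prior, utility):
    """f_avg(pi) = sum_phi p(phi) * f(E(pi, phi), phi).

    `policy(observed)` takes a dict {item: observed state} and returns the
    next item to pick, or None to stop; `prior` maps realizations (given as
    tuples of (item, state) pairs) to probabilities."""
    total = 0.0
    for phi_items, prob in prior.items():
        phi = dict(phi_items)
        observed = {}
        while True:
            nxt = policy(observed)
            if nxt is None:
                break
            observed[nxt] = phi[nxt]   # pick nxt, then observe its state
        total += prob * utility(set(observed), phi)
    return total

# Adaptive policy: always pick s1; pick s2 only if s1 turned out "bad".
def my_policy(observed):
    if "s1" not in observed:
        return "s1"
    if observed["s1"] == "bad" and "s2" not in observed:
        return "s2"
    return None

prior = {
    (("s1", "good"), ("s2", "good")): 0.45,
    (("s1", "bad"),  ("s2", "bad")):  0.45,
    (("s1", "good"), ("s2", "bad")):  0.05,
    (("s1", "bad"),  ("s2", "good")): 0.05,
}
utility = lambda S, phi: sum(1 for e in S if phi[e] == "good")
print(expected_utility(my_policy, prior, utility))  # 0.55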

3.3 More Notations and Degree of Independence

By abuse of notation, define $f(S) = \mathbb{E}_{\Phi}[f(S, \Phi)]$ as the (expected) value of a set of items $S \subseteq E$. Let $\Delta_{e}(S) = f(S \cup \{e\}) - f(S)$ denote the marginal value of item $e$ with respect to $S$. Define $\Delta_{\phi(e)}(S)$ as the marginal value of item $e$'s state $\phi(e)$ with respect to $S$.

Given a vector $y \in [0,1]^{E}$, let $R(y)$ be a random set obtained by picking each item $e$ independently with probability $y_e$; then the multilinear extension $F$ is defined as the expected value of $f(R(y))$: $F(y) = \mathbb{E}[f(R(y))]$. Let $\Delta_{e}(y)$ denote the marginal value of item $e$ with respect to $y$, and define $\Delta_{\phi(e)}(y)$ as the marginal value of item $e$'s state with respect to $y$. For notational convenience, let $y_{-e}$ denote the vector obtained from $y$ by setting the entry of $e$ to 0.

We next introduce the concept of degree of independence, which captures the degree to which one item's state is correlated with the others'. Definition (Degree of Independence): the degree of independence $\kappa$ of a known prior probability distribution $p$ is defined as follows.

(1)

Notice that if all items' states are realized independently of each other, then the degree of independence is 1, i.e., $\kappa = 1$.

4 Algorithm Design

In this section, we present a non-adaptive policy; later, we show that the ratio of its utility to that of the best adaptive policy is bounded. This bound is also known as the adaptivity gap. The general idea is to first find a fractional solution using a modified continuous greedy algorithm (Algorithm 1) and then round it to an integral solution.

Stage 1: Modified Continuous Greedy Algorithm

We first explain the design of the modified continuous greedy algorithm. Algorithm 1 maintains a fractional solution $y$, starting with $y = \mathbf{0}$. Let $R(y)$ be a random set that contains each item $e$ independently with probability $y_e$. In each round $t$, it updates the weight $w_e$ of each item $e$ as the expected marginal value of $e$ with respect to $R(y_{-e})$, i.e., $w_e = \mathbb{E}\big[f(R(y_{-e}) \cup \{e\}) - f(R(y_{-e}))\big]$, where $R(y_{-e})$ is a random set drawn after excluding $e$ (recall that $y_{-e}$ sets the entry of $e$ in $y$ to 0). Since we are not able to obtain the exact value of $w_e$, we estimate it by averaging over sufficiently many independent samples. Let $\hat{w}_e$ denote the estimated value of $w_e$. Notice that, compared with the standard continuous greedy algorithm (Calinescu et al. 2011), we define the weight of each item in a different way: Calinescu et al. (2011) adopt the weight function $w_e = \mathbb{E}\big[f(R(y) \cup \{e\}) - f(R(y))\big]$. Assuming $\mathcal{P}$, the convex relaxation of $\mathcal{F}$, is a down-monotone solvable polytope, we solve the following optimization problem.

P1: Maximize $\sum_{e \in E} \hat{w}_e x_e$ subject to: $x \in \mathcal{P}$.

After solving P1 at round $t$ and obtaining an optimal solution $x$, we update the fractional solution as $y \leftarrow y + \delta x$, where $\delta$ is the step size. After $1/\delta$ rounds, $y$ is returned as the final solution.
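A minimal sketch of this two-step loop is given below (our own illustration, not the paper's implementation): weights are estimated by sampling $R(y_{-e})$, and the direction-finding step P1 is shown for the simple case where $\mathcal{P}$ is a cardinality polytope $\{x : \sum_e x_e \le k\}$, so P1 reduces to picking the $k$ largest estimated weights rather than calling a general linear-programming solver.

import random

def modified_continuous_greedy(f, ground_set, k, delta=0.05, num_samples=200, rng=random):
    """Sketch of the modified continuous greedy for a cardinality polytope
    {x in [0,1]^E : sum_e x_e <= k}.  `f` is a (monotone submodular) set
    function; weights use samples of R(y_{-e}), i.e., e's own coordinate
    is zeroed out before sampling."""
    y = {e: 0.0 for e in ground_set}
    t = 0.0
    while t < 1.0:
        # Estimate w_e = E[f(R(y_{-e}) + e) - f(R(y_{-e}))] by sampling.
        w_hat = {}
        for e in ground_set:
            gain = 0.0
            for _ in range(num_samples):
                sample = {u for u in ground_set
                          if u != e and rng.random() < y[u]}   # draw R(y_{-e})
                gain += f(sample | {e}) - f(sample)
            w_hat[e] = gain / num_samples
        # P1 for a cardinality polytope: put all weight on the k best items.
        top_k = sorted(ground_set, key=lambda e: w_hat[e], reverse=True)[:k]
        x = {e: (1.0 if e in top_k else 0.0) for e in ground_set}
        # Update the fractional solution and advance time.
        for e in ground_set:
            y[e] = min(1.0, y[e] + delta * x[e])
        t += delta
    return y

For a matroid or knapsack polytope, the top-$k$ step would be replaced by linear optimization over $\mathcal{P}$, which is exactly what "solvable polytope" requires.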

Stage 2: Rounding Fractional Solution

In the second stage, we round the fractional solution $y$ to an integral solution. As shown in Chekuri et al. (2014), if there exists an $\alpha$-balanced contention resolution scheme for $\mathcal{F}$, where $\alpha \in [0,1]$, then we can find a feasible solution $S \in \mathcal{F}$ such that $\mathbb{E}[f(S)] \geq \alpha F(y)$. It turns out that many useful constraints admit good $\alpha$-balanced contention resolution schemes, including matroid constraints and knapsack constraints; for example, Chekuri et al. (2014) give such schemes when $\mathcal{F}$ is an intersection of matroid and knapsack constraints. Notice that when $\mathcal{F}$ is a single matroid constraint, we can apply the pipage rounding technique (Ageev and Sviridenko 2004) to find a feasible solution $S$ with expected utility at least $F(y)$, i.e., $\alpha = 1$.
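The sketch below illustrates pipage-style rounding for the special case of a uniform (cardinality) matroid; it is our own simplified illustration rather than the general scheme. Two fractional coordinates are repeatedly shifted against each other, in whichever direction does not decrease a (possibly sampled) estimate of $F$, until the vector is essentially integral.

import random

def pipage_round_cardinality(F_hat, y, eps=1e-9, rng=random):
    """Pipage-style rounding sketch for a uniform matroid: repeatedly pick two
    fractional coordinates and move mass between them (preserving their sum)
    in the direction whose estimated F-value is no worse.  `F_hat(y)` is an
    estimator of the multilinear extension, e.g. by sampling R(y)."""
    y = dict(y)
    while True:
        frac = [e for e, v in y.items() if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        # Two extreme moves that keep y_i + y_j fixed.
        up = min(1 - y[i], y[j])
        down = min(y[i], 1 - y[j])
        y_a = dict(y); y_a[i] += up;   y_a[j] -= up
        y_b = dict(y); y_b[i] -= down; y_b[j] += down
        y = y_a if F_hat(y_a) >= F_hat(y_b) else y_b
    # Round away any single residual fractional coordinate (sketch simplification).
    return {e for e, v in y.items() if v > 0.5}

The design rationale: along the direction $e_i - e_j$ the multilinear extension is convex, so at least one of the two extreme moves does not decrease $F$; with an exact oracle the rounding is lossless, and with a sampled estimator it is approximate.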

1:  Set $y \leftarrow \mathbf{0}$, $t \leftarrow 0$.
2:  while $t < 1$ do
3:     Let $R(y)$ be a random set which contains each item $e$ independently with probability $y_e$.
4:     For each $e \in E$, estimate the weight $\hat{w}_e$ by averaging the marginal value of $e$ with respect to samples of $R(y_{-e})$.
5:     Solve P1 with estimated weight $\hat{w}_e$ for each $e \in E$ and obtain an optimal solution $x$:
6:      P1: Maximize $\sum_{e \in E} \hat{w}_e x_e$ subject to: $x \in \mathcal{P}$, where $\mathcal{P}$ is a down-monotone solvable polytope.
7:     Let $y \leftarrow y + \delta x$;
8:     Increment $t \leftarrow t + \delta$;
Algorithm 1 Modified Continuous Greedy

5 Performance Analysis

In this section, we prove the following main results.

Theorem. Assume $\pi^{*}$ is the optimal adaptive policy and there exists an $\alpha$-balanced contention resolution scheme for $\mathcal{F}$. Then our non-adaptive policy achieves expected utility at least $\alpha\big(1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}\big) f_{avg}(\pi^{*})$, where $\kappa$ is the degree of independence.

Proof: Observe that if there exists an $\alpha$-balanced contention resolution scheme for $\mathcal{F}$, we can find a feasible solution $S$ such that $\mathbb{E}[f(S)] \geq \alpha F(y)$, where $y$ is the fractional solution returned by Algorithm 1. It therefore suffices to prove that $F(y) \geq \big(1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}\big) f_{avg}(\pi^{*})$. Thus, in the rest of the proof, we focus on proving this inequality.

For every item $e \in E$, let $y^{*}_e$ denote the probability that $e$ is picked by $\pi^{*}$. Because $\pi^{*}$ is a feasible policy, $y^{*}$ is a convex combination of (indicator vectors of) feasible solutions in $\mathcal{F}$; thus, $y^{*} \in \mathcal{P}$. We first prove the following inequality for any vector $y \in [0,1]^{E}$.

(2)

The first two inequalities are due to the submodularity of . The third inequality is due to . The last inequality is due to the definition of degree of independence (Definition 3.3).

We next provide a lower bound on the increase in $F(y)$ during one round of Algorithm 1. To simplify the notation, we use $x^{(t)}$ to denote the optimal solution to P1 in round $t$.

(3)
(6)
(8)

Inequality (3) is due to Lemma 3.3 in (Calinescu et al. 2011). Inequality (6) is due to the fact that $x^{(t)}$ is an optimal solution to P1 while $y^{*}$ is a feasible solution to P1, and the rest of the argument is similar to that of Lemma 3.2 in (Calinescu et al. 2011). The last inequality is due to Inequality (2).

By induction over the rounds, we obtain a lower bound on $F(y)$ after the final round. It follows that $F(y) \geq \big(1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}\big) f_{avg}(\pi^{*})$. Finally, since $\mathbb{E}[f(S)] \geq \alpha F(y)$, we have $\mathbb{E}[f(S)] \geq \alpha\big(1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}\big) f_{avg}(\pi^{*})$.
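For intuition, the step from a per-round gain to the exponential-form bound follows the standard continuous greedy argument. The display below sketches it under the simplifying assumption that each round improves $F$ by at least $\delta c\,\big(f_{avg}(\pi^{*}) - F(y^{(t)})\big)$ for some constant $c > 0$ delivered by the analysis above (in our setting, $c$ absorbs the dependence on $\kappa$ and $m$); it is a generic sketch, not the paper's exact derivation.

% Standard continuous-greedy induction, assuming the per-round gain
%   F(y^{(t+1)}) - F(y^{(t)}) >= delta * c * (OPT - F(y^{(t)})),  OPT = f_avg(pi*).
\begin{align*}
\mathrm{OPT} - F(y^{(t+1)}) &\le (1 - \delta c)\,\bigl(\mathrm{OPT} - F(y^{(t)})\bigr) \\
\mathrm{OPT} - F(y^{(T)})   &\le (1 - \delta c)^{T}\,\bigl(\mathrm{OPT} - F(y^{(0)})\bigr)
                              \le e^{-\delta c T}\,\mathrm{OPT} = e^{-c}\,\mathrm{OPT}
                              && (T = 1/\delta,\ F(y^{(0)}) \ge 0) \\
\Longrightarrow\quad F(y^{(T)}) &\ge \bigl(1 - e^{-c}\bigr)\,\mathrm{OPT}.
\end{align*}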

As a byproduct of our main theorem, we have the following corollary: the adaptivity gap of SSM with dependent items is bounded by $1 - e^{-\kappa/2 + \kappa/(18m^2) - (\kappa+2)/(3m\kappa)}$, i.e., the best non-adaptive policy obtains at least this fraction of the expected utility of the best adaptive policy.

6 Conclusion

Previous studies on SSM often assume that items are independent; however, this assumption may not always hold. In this paper, we study SSM with dependent items. To capture the impact of item dependency, we first introduce the concept of degree of independence. We then propose a non-adaptive policy based on a modified continuous greedy algorithm and show that its performance is close to that of the optimal adaptive policy. In particular, we prove that our non-adaptive policy achieves an approximation ratio whose value depends on the degree of independence.

References

  • Ageev and Sviridenko (2004) Ageev, Alexander A., Maxim I. Sviridenko. 2004. Pipage rounding: A new method of constructing algorithms with proven performance guarantee. Journal of Combinatorial Optimization 8(3) 307–328.
  • Asadpour et al. (2008) Asadpour, Arash, Hamid Nazerzadeh, Amin Saberi. 2008. Stochastic submodular maximization. International Workshop on Internet and Network Economics. Springer, 477–489.
  • Calinescu et al. (2011) Calinescu, Gruia, Chandra Chekuri, Martin Pál, Jan Vondrák. 2011. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing 40(6) 1740–1766.
  • Chekuri et al. (2014) Chekuri, Chandra, Jan Vondrák, Rico Zenklusen. 2014. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM Journal on Computing 43(6) 1831–1879.
  • Golovin and Krause (2011) Golovin, Daniel, Andreas Krause. 2011. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research 42 427–486.