# An FPTAS for Stochastic Unbounded Min-Knapsack Problem

In this paper, we study the stochastic unbounded min-knapsack problem (Min-SUKP). The ordinary unbounded min-knapsack problem states the following: there are n types of items, with an infinite supply of each type, and items of the same type have the same cost and weight. We want to choose a set of items such that the total weight is at least W and the total cost is minimized. Min-SUKP generalizes this problem to the stochastic setting, where the weight of each item is a random variable following a known distribution and items of the same type follow the same weight distribution. In Min-SUKP, different types of items may have different costs and weight distributions. In this paper, we provide an FPTAS for Min-SUKP, i.e., the approximate value our algorithm computes is at most (1+ϵ) times the optimum, and our algorithm runs in poly(1/ϵ, n, W) time.


## 1 Introduction

In this paper, we study the stochastic unbounded min-knapsack problem (Min-SUKP). The problem is motivated by the following renewal decision problem introduced in [7]. A system (e.g., a motor vehicle) must operate for W units of time. A particular component (e.g., a battery) is essential for its operation and must be replaced each time it fails. There are n different types of replacement components, each with an infinite supply. A type-i replacement costs c_i and has a random lifetime with a distribution depending on i. The problem is to choose the initial component and subsequent replacements from among the n types so as to minimize the total expected cost of providing an operative component for the W units of time. Formally, we would like to solve the Min-SUKP problem, defined as follows:

###### Problem 1 (stochastic unbounded min-knapsack)

There are n types of items 1, 2, …, n. For an item of type i, the cost is a deterministic value c_i, and the weight is a random value X_i which follows a known distribution with non-negative integer support. Let d_i(t) denote Pr{X_i = t} and D_i(t) denote Pr{X_i ≤ t}. Each type has an infinite supply, and the weight of each item is independent of the weights of items of other types and of other items of the same type. Besides, there is a knapsack with capacity W. Our objective is to insert items into the knapsack one by one until the total weight of the items in the knapsack is at least W. The realized weight of an item is revealed to us as soon as it is inserted into the knapsack. What is the expected cost of the strategy that minimizes the expected total cost of the items we insert?

###### Remark 1

The above problem is the stochastic version of the ordinary unbounded min-knapsack problem. Compared with the ordinary knapsack problem, there is an infinite number of items of each type, and the objective is to minimize the total cost (rather than maximize the total profit).

###### Remark 2

It can be shown that Min-SUKP is NP-hard. In [9], the authors mention that the unbounded knapsack problem (UKP) is NP-hard, and it can easily be shown that unbounded min-knapsack is NP-hard as well, since there is a polynomial reduction between these two problems. Min-SUKP is NP-hard since it generalizes unbounded min-knapsack.

Derman et al. [7] discussed Min-SUKP when the weight distributions of the items are exponential and provided an exact algorithm to compute the optimal policy. Assaf [1] discussed Min-SUKP when the weight distributions of the items have a common matrix phase-type representation.

In this paper, we present a fully polynomial time approximation scheme (FPTAS) for this problem for general discrete distributions.

Roughly speaking, we borrow the idea of the FPTAS for the knapsack problem and the method for computing the distribution of the sum of random variables [16]. However, there are a few technical difficulties we need to handle. The outline of our algorithm is as follows. We first compute a constant factor approximation for the optimal cost (Section 2), and then we apply the discretization and a dynamic program based on the approximation value (Section 3). However, the dynamic program can only solve the problem in a restricted case where the cost for any item is ‘not too small’ (the cost of each item is larger than a specific value). To solve the whole problem, we consider a reduction from the general setting to the restricted setting and show that the error of the reduction is negligible (Section 4).

### 1.1 Related Work

The knapsack problem is a classical problem in combinatorial optimization. The classical knapsack problem (max-knapsack problem) is the following problem: Given a set of items with sizes and costs, and a knapsack with a capacity, our goal is to select some items and maximize the total cost of selected items with the constraint that the total size of selected items does not exceed the capacity of the knapsack.

The min-knapsack problem (Min-KP) [5] is a natural variant of the ordinary knapsack problem. In the min-knapsack problem, the goal is to minimize the total cost of the selected items such that the total size of the selected items is not less than the capacity of the knapsack. Although the min-knapsack problem is similar to the max-knapsack problem, a polynomial-time approximation scheme (PTAS) for the max-knapsack problem does not directly lead to a PTAS for the min-knapsack problem. For the (deterministic) min-knapsack problem, approximation algorithms with constant factors are given in [5, 10, 4]. Han and Makino [12] considered an online version of min-knapsack, that is, the items are given one-by-one over time.

There is also a line of work focusing on FPTASes for the unbounded knapsack problem (UKP). UKP is similar to the original 0-1 knapsack problem, except that there is an infinite number of items of each type. The first FPTAS for UKP was introduced by [13], who obtained it by extending their FPTAS for the 0-1 knapsack problem. Its time and space complexities were subsequently improved by [15], and in 2018, [14] presented an FPTAS that is faster still.

However, in some applications, precisely knowing the size of each item is not realistic. In many real applications, we can only obtain the size distribution of a type of item. This leads to the stochastic knapsack problem (SKP [19]), which is a generalization of KP. In SKP, the cost of each item is deterministic, but the sizes of items are random variables with known distributions, and we learn the realized size of an item as soon as it is inserted into the knapsack. The goal is to compute a solution policy which indicates which item to insert into the knapsack given the remaining capacity. For the stochastic max-knapsack problem, a constant-factor approximation was provided in the seminal work [6]. The current best approximation ratio for SKP is 2 [3, 18]. An approximation with relaxed capacity (bi-criterion PTAS) is given in [2, 17]. Besides, Deshpande et al. [8] gave a constant-factor approximation algorithm for the stochastic min-knapsack.

Gupta et al. [11] considered a generalization of SKP in which the costs of items may be correlated and an item may be cancelled during its execution by the policy. Cancelling works as follows: each time we insert an item, we may set a bounding size; if the realized size of the item exceeds the bounding size, we cancel the item, in which case the size of the item equals the bounding size and its cost is zero. This generalization is referred to as Stochastic Knapsack with Correlated Rewards and Cancellations (SK-CC). Gupta et al. [11] gave a constant-factor approximation for SK-CC based on an LP relaxation. A bicriterion PTAS for SK-CC is provided in [17].

### 1.2 Preliminary

###### Proposition 1

Without loss of generality, we can assume that the weight distribution of an item of type i has positive integer support.

The proof of Proposition 1 is deferred to Appendix 0.B.

From now on, we suppose that each type of item has a weight distribution with positive integer support.

In Min-SUKP, the optimal item to add is determined by the remaining capacity. Let OPT_w denote the expected cost of the optimal strategy when the remaining capacity is w, with OPT_w = 0 for all w ≤ 0. We can assume that the support of each X_i is contained in {1, …, W}, since any realized weight of at least W fills the knapsack. From the dynamic program

$$OPT_w = \min_k \Big\{ c_k + \sum_{j=1}^{W} d_k(j)\, OPT_{w-j} \Big\},$$

we have the pseudo-polynomial-time Algorithm 1 that computes the exact optimal value.

Algorithm 1 runs in O(nW²) time.
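A minimal sketch of this dynamic program, assuming the weight distributions are given as dictionaries (the interface is illustrative, not the paper's Algorithm 1 verbatim):

```python
def exact_min_sukp(costs, dists, W):
    """Pseudo-polynomial dynamic program for Min-SUKP (sketch of Algorithm 1).

    costs[i] -- deterministic cost c_i of type i.
    dists[i] -- dict w -> d_i(w) = Pr[X_i = w], with positive integer
                support (Proposition 1); weights above W may be merged
                into W, since any such item fills the knapsack.
    Returns the table OPT[0..W], where OPT[w] is the optimal expected
    cost at remaining capacity w (and OPT[w] = 0 for w <= 0)."""
    n = len(costs)
    OPT = [0.0] * (W + 1)
    for w in range(1, W + 1):                 # fill capacities bottom-up
        OPT[w] = min(
            costs[i] + sum(p * OPT[max(w - j, 0)]
                           for j, p in dists[i].items())
            for i in range(n)
        )
    return OPT
```

With supports of size at most W, each of the W capacities takes O(nW) work, matching the O(nW²) bound above.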

In this paper, we show an FPTAS to compute OPT_W. Our algorithm runs in time polynomial in n, 1/ϵ, and log W, and returns a value \hat{V}, which is an approximation of OPT_W such that (1−ϵ)OPT_W ≤ \hat{V} ≤ (1+ϵ)OPT_W. We assume that there is an oracle for the weight distributions: given i and t, it returns D_i(t) (from which d_i(t) is also available). Since we require our algorithm to run in polynomial time, it can call the oracle at most polynomially many times.

## 2 A Constant Factor Estimation

In this section, we show that there is a constant factor approximation for the optimal value. This constant factor approximation serves to estimate the optimal value roughly, and our FPTAS uses the standard discretization technique based on this rough estimation.

Define b_i = c_i / E[X_i]. When we insert an item of type i, the expected weight is E[X_i], and the cost is c_i. Suppose m = argmin_i b_i; we will show that b_m W is a constant-factor approximation for the optimal value OPT_W. Formally, we have the following lemma.

###### Lemma 1

For all w ≥ 1, b_m w ≤ OPT_w ≤ b_m (w + W), where m = argmin_i b_i.

This lemma can be proved by induction; see Appendix 0.C for the formal proof.

Specifically, when w = W, we get b_m W ≤ OPT_W ≤ 2 b_m W directly from the above lemma. However, computing E[X_i] exactly requires enumerating the whole support. To avoid this expensive enumeration, we compute E[X_i] approximately, rounding each realized weight down to a power of 2. That is, let

$$\bar{E}[X_i] = \sum_{j=1}^{W} d_i(j)\, 2^{\lfloor \log_2 j \rfloor} = D_i(1) + \sum_{j=0}^{\lfloor \log W \rfloor} 2^j \Big( D_i(2^{j+1}) - D_i(2^j) \Big).$$

We have E[X_i]/2 ≤ \bar{E}[X_i] ≤ E[X_i], since each weight j is rounded to a value in [j/2, j].

Let \bar{b}_i = c_i / \bar{E}[X_i]. From the previous argument, we have b_i ≤ \bar{b}_i ≤ 2 b_i, which means \bar{b}_m W is still a constant-factor approximation of OPT_W.

Let T = min_i \bar{b}_i · W. We have OPT_W/2 ≤ T ≤ 2 OPT_W, so T is within a factor of 2 of OPT_W on either side. T is our estimate of OPT_W.
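The doubling computation of \bar{E}[X_i] above can be sketched as follows; the CDF oracle `D` and its call signature are illustrative assumptions:

```python
def approx_mean(D, W):
    """Approximate E[X_i] with O(log W) calls to a CDF oracle D, where
    D(t) = Pr[X_i <= t] and X_i has positive integer support bounded by W.
    Each realized weight is rounded down to a power of two (losing at most
    a factor of 2), so the result Ebar satisfies E/2 <= Ebar <= E."""
    ebar = D(1)                        # all weight-1 mass, coefficient 2^0
    j = 0
    while (1 << j) < W:
        hi = min(1 << (j + 1), W)      # bucket (2^j, min(2^{j+1}, W)]
        ebar += (1 << j) * (D(hi) - D(1 << j))
        j += 1
    return ebar
```

For example, for X uniform on {1, 2, 3, 4} (so E[X] = 2.5), the oracle is called only at t = 1, 2, 4, and the returned estimate lies in [E/2, E].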

## 3 FPTAS Under Certain Assumption

In this section, we discuss Min-SUKP under the following assumption.

###### Definition 1 (Cheap/Expensive type)

Let θ be a parameter to be fixed later. We call type i an expensive type if c_i ≥ T/θ; otherwise we call type i a cheap type.

###### Assumption 1

We assume all the types are expensive.

Under Assumption 1, we give an algorithm in this section whose approximation error is bounded as stated in Theorem 3.1 below.

In general, our algorithm for Min-SUKP is inspired by the FPTAS for the ordinary knapsack problem [20]. We define f(v) = max{w : OPT_w ≤ v} and compute an approximation of f. However, the domain of f is the set of real numbers, so we discretize it and only compute an approximation of f(iδT) for all i, where i is a non-negative integer, δ is a discretization step, and T is the estimate from Section 2. In our algorithm, we use dynamic programming to compute f_i, which is the approximation of f(iδT). Then we use the values f_i to get an approximate value of OPT_W: since f is monotonically increasing with respect to its argument, we can find the smallest i such that f_i ≥ W and return the value iδT as the approximate value of OPT_W.

Now we show how to compute f_i. First, suppose that w^* = f(iδT); from the dynamic program, we have

$$OPT_{w^*} = \min_k \Big\{ c_k + \sum_{j=1}^{W} d_k(w^*-j)\, OPT_j \Big\}.$$

Since OPT_w is non-decreasing in w while f is increasing, recalling w^* = f(iδT), we get

$$w^* = \max\Big\{ w' \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{W} d_k(w'-j)\, OPT_j \le i\delta T \Big\} = \max\Big\{ w' \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{w'-1} d_k(w'-j)\, OPT_j \le i\delta T \Big\}.$$

Define \hat{g}_j to be OPT_j rounded up to the next multiple of δT, for all j. Then \hat{g}_j is the rounded-up discretization of OPT_j, and we can approximately compute w^* (let \hat{w} denote the approximate value) by

$$\hat{w} = \max\Big\{ w' \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{w'-1} d_k(w'-j)\, \hat{g}_j \le i\delta T \Big\}.$$

However, we do not have the values OPT_j during the computation. Instead, we use the following quantity g to approximate \hat{g}. Given f_1, …, f_{i−1}, define g_w = jδT for all w with f_{j−1} < w ≤ f_j, and define g_w = 0 for all w ≤ f_0. Then we have

$$f_i = \max\Big\{ w' \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{w'-1} d_k(w'-j)\, g_j \le i\delta T \Big\}. \tag{1}$$
###### Remark 3

When we compute f_i, we have already obtained f_1, …, f_{i−1}.

To compute f_i, we use binary search over w' and accept the largest w' that satisfies the constraint in (1).

The pseudo-code of our algorithm is shown in Algorithm 2. The detailed version of the pseudo-codes is presented in Appendix 0.A.

In detail, we enumerate i = 1, 2, … and compute f_i until f_i reaches the capacity W. To compute f_i, we run a binary search starting from the range (f_{i−1}, W]. In each step of the binary search, we take the midpoint w' of the current range, evaluate the left-hand side of the constraint in (1) at w', and recurse into the appropriate half according to whether the constraint is satisfied, until the range shrinks to a single point, which is f_i.
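The binary-search step can be sketched as follows; `best_cost` and `next_threshold` are hypothetical helper names, and `g` is passed as a precomputed list with `g[j] = 0` for out-of-range indices handled inline:

```python
def best_cost(costs, dists, g, w):
    """min over types k of  c_k + sum_{j=1}^{w-1} d_k(w-j) * g[j],
    the left-hand side of constraint (1); terms with w - j <= 0 vanish."""
    return min(c + sum(p * (g[w - j] if w - j > 0 else 0.0)
                       for j, p in d.items())
               for c, d in zip(costs, dists))

def next_threshold(costs, dists, g, lo, hi, budget):
    """Largest w in [lo, hi] with best_cost(w) <= budget, found by binary
    search; best_cost is non-decreasing in w because g is non-decreasing."""
    ans = lo - 1                     # sentinel: no feasible w found yet
    while lo <= hi:
        mid = (lo + hi) // 2
        if best_cost(costs, dists, g, mid) <= budget:
            ans, lo = mid, mid + 1   # feasible: try a larger w
        else:
            hi = mid - 1             # infeasible: try a smaller w
    return ans
```

Monotonicity of the left-hand side in w is what licenses the binary search: enlarging w only adds non-negative terms with non-decreasing g values.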

To quantify the approximation error of Algorithm 2, we have the following theorem.

###### Theorem 3.1

The output \hat{V} of Algorithm 2 satisfies (1−O(ϵ)) OPT_W ≤ \hat{V} ≤ (1+O(ϵ)) OPT_W under Assumption 1.

Generally speaking, this result can be shown in two steps: first, we show that the real optimal value is upper bounded by the value computed by our algorithm; next, we show that under Assumption 1, the difference between the value computed by Algorithm 2 and the real optimal value is upper bounded by a small quantity. Given these two results, we can prove Theorem 3.1. Please see Appendix 0.D for the formal proof of Theorem 3.1.

From the above theorem, we know that the output of Algorithm 2 is a (1±O(ϵ))-approximation of OPT_W.

## 4 FPTAS in the General Case

In the previous section, we showed that there is an FPTAS for Min-SUKP under Assumption 1 (when all types are expensive). In this section, we remove Assumption 1 and show that there is an FPTAS for Min-SUKP in general. We first present the general idea of our algorithm.

Our Ideas: If we used the algorithm from the last section to compute OPT_W in the general case, the error would not be bounded. The key reason is that we may insert many items of cheap types. Our idea is to bundle many items of the same cheap type i into bigger items (an induced type i'), such that i' is expensive, and then replace type i by the new type i'. Now we can use the algorithm from the last section. However, we must then use bundled items even if we only want a single item of a certain cheap type. Luckily, using some extra items of cheap types does not weaken the policy very much.

The remaining problem is how to compute the weight distribution of many items of type i. Suppose we always use h_i items of type i at a time. We discretize the weight distribution of X_i and use a doubling trick to compute approximate distributions of the partial sums X_i^{(1)}, X_i^{(1)}+X_i^{(2)}, …, X_i^{(1)}+⋯+X_i^{(h_i)} one by one, where the X_i^{(j)} are independent of each other and follow the same distribution as X_i. We can show that using these approximate distributions in the computation does not lead to much error.

### 4.1 Adding Limitations to Strategy

For type i, if c_i < T/θ (with T and θ as defined in the previous section), then there exists a positive integer h_i such that h_i c_i ≥ T/θ. For convenience, if c_i ≥ T/θ, we let h_i = 1. We impose the following restriction on the strategy.

###### Definition 2 (Restricted strategy)

A strategy is called a restricted strategy if, for every type i, the total number of items of type i we insert is always a multiple of h_i.

If we know that for every type i the total number of items of type i is always a multiple of h_i, we would hope that each time we use items of type i, we use h_i of them together. This leads to the following definition.

###### Definition 3 (Block strategy)

A strategy is called a block strategy if we always insert items of type i in groups of h_i at a time.

The following theorem shows that adding this limitation to the strategy does not affect the optimal value too much.

###### Theorem 4.1

Suppose the expected cost of the best block strategy is OPT'_W; then OPT_W ≤ OPT'_W ≤ (1+O(ϵ)) OPT_W.

Because of the space limitation, we will present the proof sketch below. For the formal proof of Theorem 4.1, please see Appendix 0.E.

###### Proof (Proof sketch)

The proof of Theorem 4.1 is divided into two parts. The first part shows that the optimal value of the original problem does not differ much from the optimal value over restricted strategies (see Definition 2), and the second part shows that the optimal value over restricted strategies is the same as the optimal value over block strategies (see Definition 3). The first part is simple, since we can add some items after following the optimal strategy of the original problem. The second part follows from the intuition that if we must use an item in the future, it is just as good to use it right now.

### 4.2 Computing the Summation Distribution of Many Items of the Same Type

In the last part, we defined the block strategy by adding a constraint to ordinary strategies, and we found that the expected cost of the optimal block strategy is close to that of the optimal strategy.

Block strategies conform to Assumption 1 of Section 3. If we know the distribution of the total weight of h_i items of type i, we can compute the approximate optimal expected cost by Algorithm 2. In this part, we give an algorithm which approximately computes the distribution of the total weight of h_i items of type i.

Due to the space limitation, we present our algorithm in this section, and we put the analysis of our algorithm into the appendix (see Appendix 0.F). To present our idea, we need the following definitions.

###### Definition 4 (Distribution Array)

For a random variable X with positive integer support, we use S[w] to denote the probability that X ≥ w, i.e., S[w] = Pr{X ≥ w}, and we use the array S = (S[1], …, S[W]) to denote the distribution. We call S the distribution array of the variable X.

###### Remark 4

From the definition, we know that S is a non-increasing array. Besides, in the definition, S has only W elements, since we only care about w ≤ W.

###### Definition 5

For any non-increasing array A of length W with A[1] ≤ 1 and A[W] ≥ 0, there is a random variable X such that Pr{X ≥ w} = A[w] for all 1 ≤ w ≤ W. We say that X is the variable corresponding to the distribution array A, denoted by var(A).

Suppose X^{(1)}, X^{(2)}, … are identically distributed independent random variables. Let S_i denote the sum X^{(1)}+⋯+X^{(i)} and, abusing notation, let S_i[·] denote its distribution array. We want to compute the distribution array of S_{2i} from that of S_i, and we have the following equations,

$$\Pr\{S_{2i}=w\} = \sum_{j=1}^{w-1} \Pr\{S_i=j\}\cdot\Pr\{S_i=w-j\}, \quad \forall\, 1\le w\le W, \tag{2}$$
$$S_{2i}[w] = \Pr\{S_i\ge w\} + \sum_{j=1}^{w-1} \Pr\{S_i=j\}\cdot\Pr\{S_i\ge w-j\} \tag{3}$$
$$\phantom{S_{2i}[w]} = S_i[w] + \sum_{j=1}^{w-1} \big(S_i[j]-S_i[j+1]\big)\cdot S_i[w-j]. \tag{4}$$

Note that S_{2i} can be computed from S_i, so we only need to compute S_1, S_2, S_4, … successively until we reach S_{h_i} (we may assume h_i is a power of two by rounding it up). Note that S_1 can be obtained from the oracle.
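The doubling step of equations (2)–(4) can be sketched as follows, assuming arrays are 1-indexed with a dummy entry at index 0:

```python
def double_array(S, W):
    """Given the distribution array S of S_i (S[w] = Pr[S_i >= w] for
    w = 1..W; S[0] is unused), return the distribution array of
    S_{2i} = S_i + S_i' for an independent copy S_i', via equation (4):
        S_{2i}[w] = S_i[w] + sum_{j=1}^{w-1} (S_i[j] - S_i[j+1]) * S_i[w-j].
    Weights above W are implicitly truncated to W."""
    S2 = [0.0] * (W + 1)
    for w in range(1, W + 1):
        acc = S[w]                    # the first copy alone already reaches w
        for j in range(1, w):         # first copy sums to exactly j < w
            acc += (S[j] - S[j + 1]) * S[w - j]
        S2[w] = acc
    return S2
```

For example, doubling the array of X uniform on {1, 2} yields the tail probabilities of X + X', namely Pr{≥3} = 0.75 and Pr{≥4} = 0.25.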

However, computing the exact distribution of S_{h_i} in this way is too slow, so we compute an approximate value of it. To introduce our method for approximately computing the distribution, we need the following definitions.

###### Definition 6 (η-Approximate Array)

Given a positive real number η, for a distribution array A = (a_1, …, a_W), define A' = (a'_1, …, a'_W) as the η-approximate array of A, where for all i,

$$a'_i = (1+\eta)^{\lceil \log_{1+\eta} a_i \rceil}.$$
###### Definition 7 ((ζ,η)-Approximate Array)

Given positive real numbers ζ and η, for a distribution array A = (a_1, …, a_W), define A' = (a'_1, …, a'_W) as the (ζ,η)-approximate array of A, where for all i,

$$a'_i = \begin{cases} (1+\eta)^{\lceil \log_{1+\eta} a_i \rceil}, & a_i > (1+\eta)^{-\zeta} \\ (1+\eta)^{-\zeta}, & a_i \le (1+\eta)^{-\zeta}. \end{cases}$$
###### Definition 8 ((ζ,η)-Approximation)

For a random variable X, let A' be the (ζ,η)-approximate array of the distribution array of X. Define var(A') as the (ζ,η)-approximation of X.

###### Remark 5

The (ζ,η)-approximation of a random variable is still a random variable. And for any random variable with integer support in {1, …, W}, its (ζ,η)-approximation has at most ζ+1 different possible values, since every entry of its distribution array is one of the powers (1+η)^0, (1+η)^{−1}, …, (1+η)^{−ζ}.
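Definition 7 can be sketched as follows; the function name is an illustrative assumption:

```python
import math

def approx_array(S, zeta, eta):
    """(zeta, eta)-approximate array of Definition 7: round each entry up
    to the nearest power of (1+eta); entries at or below the floor
    (1+eta)^(-zeta) are clamped to the floor.  The output therefore uses
    at most zeta + 1 distinct values and stays non-increasing."""
    floor = (1.0 + eta) ** (-zeta)
    out = []
    for a in S:
        if a <= floor:
            out.append(floor)
        else:
            # smallest integer k with (1+eta)^k >= a
            k = math.ceil(math.log(a, 1.0 + eta))
            out.append((1.0 + eta) ** k)
    return out
```

Rounding *up* keeps every approximate tail probability at least as large as the true one, which is the direction the error analysis in Appendix 0.F relies on.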

Fix the parameters ζ and η (their values are set in Appendix 0.F), and our algorithm is as follows. We first compute the (ζ,η)-approximation of X_i, whose distribution array we denote by \tilde{S}_1. Then, for each j, we compute the distribution array of the sum of two independent copies of the variable corresponding to \tilde{S}_{2^j}, and compute its (ζ,η)-approximate array, which we denote by \tilde{S}_{2^{j+1}}. Finally, we obtain \tilde{S}_{h_i}, the distribution array of a random variable approximating the total weight of h_i items of type i.

When we compute the sum of two independent copies corresponding to \tilde{S}_{2^j}, note that \tilde{S}_{2^j} contains at most ζ+1 distinct entries, so each copy has only a small number of distinct possible values. We can therefore enumerate the pairs of possible values (w_1, w_2) of the two copies, accumulate the probability of each total w_1 + w_2, and finally sort the totals and arrange them to obtain the distribution array of the sum. This shows that we can compute the approximate distribution efficiently.

Formally, we have Algorithm 4 to compute this approximate distribution.

Before we state the main theorem that bounds the approximation error of our algorithm, we combine the full procedure and get our final Algorithm 5 for Min-SUKP.

Then, we have our main theorem, which discusses the approximation error of Algorithm 5.

###### Theorem 4.2

The output of Algorithm 5 satisfies

$$(1-\epsilon)\, OPT_W \le \hat{V} \le (1+\epsilon)\, OPT_W.$$

To prove this theorem, we first show that the distribution computed in Algorithm 4 is a good approximation of the distribution of the total weight of h_i items of type i, by constructing another strategy which is strictly better and whose expected cost is close to the expected cost of the optimal strategy (induction is used). Then we combine all the errors in Algorithm 5 and prove that Algorithm 5 is an FPTAS for Min-SUKP. For details, please see Appendix 0.F.

### 4.3 Time Complexity

Our algorithm runs in time polynomial in n, log W, and 1/ϵ. Combined with Theorem 4.2, this shows that Algorithm 5 is an FPTAS for Min-SUKP. The theorem on the time complexity of Algorithm 5 is stated as follows.

###### Theorem 4.3

Algorithm 5 runs in polynomial time and thus is an FPTAS for Min-SUKP. More specifically, Algorithm 5 has time complexity

$$O\!\left(\frac{n \log^6 W}{\epsilon^3} + \frac{n^3 \log W}{\epsilon^4}\right).$$

This theorem can be proved by recalling the parameters we have set, counting the number of times each operation is performed, and expanding the parameters in terms of ϵ, n, and W. Please see Appendix 0.G for the formal proof.

## 5 Conclusions and Further Work

We obtain the first FPTAS for Min-SUKP in this paper. We focus on approximately computing the optimal value, but our algorithms and proofs immediately imply how to construct an approximate strategy in polynomial time.

There are some other directions related to Min-SUKP which are still open. It would be interesting to design a PTAS (or FPTAS) for the 0/1 stochastic minimization knapsack problem, the 0/1 stochastic (maximization) knapsack problem, and the stochastic unbounded (maximization) knapsack problem. Hopefully, our techniques can be helpful in solving these problems.

## Acknowledgement

The authors would like to thank Jian Li for several useful discussions and the help with polishing the paper. The research is supported in part by the National Basic Research Program of China Grant 2015CB358700, the National Natural Science Foundation of China Grants 61822203, 61772297, 61632016, and 61761146003, and a grant from Microsoft Research Asia.

## References

• [1] Assaf, D.: Renewal decisions when category life distributions are of phase-type. Mathematics of Operations Research 7(4), 557–567 (1982)
• [2] Bhalgat, A., Goel, A., Khanna, S.: Improved approximation results for stochastic knapsack problems. In: Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, Philadelphia (2011)
• [3] Bhalgat, A.: A (2+ϵ)-approximation algorithm for the stochastic knapsack problem. Manuscript (2012)
• [4] Carnes, T., Shmoys, D.: Primal-dual schema for capacitated covering problems, Lecture Notes in Computer Science, vol. 5035, pp. 288–302. Springer-Verlag Berlin, Berlin (2008)
• [5] Csirik, J., Frenk, J.B.G., Labbé, M., Zhang, S.: Heuristics for the 0-1 min-knapsack problem. Acta Cybernetica 10(1-2), 15–20 (1991)
• [6] Dean, B.C., Goemans, M.X., Vondrák, J.: Approximating the stochastic knapsack problem: the benefit of adaptivity. In: 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 208–217. IEEE Computer Soc., Los Alamitos (2004)
• [7] Derman, C., Lieberman, G.J., Ross, S.M.: A renewal decision problem. Management Science 24(5), 554–561 (1978)
• [8] Deshpande, A., Hellerstein, L., Kletenik, D.: Approximation algorithms for stochastic submodular set cover with applications to boolean function evaluation and min-knapsack. ACM Transactions on Algorithms 12(3),  28 (2016)
• [9] Garey, M.R., Johnson, D.S.: Computers and intractability, vol. 29. wh freeman New York (2002)
• [10] Guntzer, M.M., Jungnickel, D.: Approximate minimization algorithms for the 0/1 knapsack and subset-sum problem. Operations Research Letters 26(2), 55–66 (2000)
• [11] Gupta, A., Krishnaswamy, R., Molinaro, M., Ravi, R.: Approximation algorithms for correlated knapsacks and non-martingale bandits. In: IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22-25, 2011. pp. 827–836 (2011)
• [12] Han, X., Makino, K.: Online Minimization Knapsack Problem, Lecture Notes in Computer Science, vol. 5893, pp. 182–193. Springer-Verlag Berlin, Berlin (2010)
• [13] Ibarra, O.H., Kim, C.E.: Fast approximation algorithms for the knapsack and sum of subset problems. Journal of the ACM (JACM) 22(4), 463–468 (1975)
• [14] Jansen, K., Kraft, S.E.: A faster fptas for the unbounded knapsack problem. European Journal of Combinatorics 68, 148–174 (2018)
• [15] Kellerer, H., Pferschy, U., Pisinger, D.: Multidimensional knapsack problems. In: Knapsack problems, pp. 235–283. Springer (2004)
• [16] Li, J., Shi, T.L.: A fully polynomial-time approximation scheme for approximating a sum of random variables. Operations Research Letters 42(3), 197–202 (2014)
• [17] Li, J., Yuan, W.: Stochastic combinatorial optimization via poisson approximation. In: Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing. pp. 971–980. STOC ’13, ACM, New York, NY, USA (2013)
• [18] Ma, W.: Improvements and generalizations of stochastic knapsack and multi-armed bandit approximation algorithms. In: Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms (2014)
• [19] Ross, K.W., Tsang, D.H.: The stochastic knapsack problem. IEEE Transactions on communications 37(7), 740–747 (1989)
• [20] Sahni, S.: Approximate algorithms for 0/1 knapsack problem. Journal of the ACM 22(1), 115–124 (1975)

## Appendix 0.A Detailed Version of Algorithm 2

In this section, we provide the detailed version of Algorithm 2, which is shown below as Algorithm 6.

## Appendix 0.B Proof of Proposition 1

###### Proof (Proof of Proposition 1)

First, recall that in the definition of Min-SUKP, X_i has non-negative integer support. If we add an item of type i and the realized weight is 0, then, because there is an infinite number of items of each type and the state does not change, the dynamic program tells us we should keep adding items of type i until a realized weight is nonzero. We can therefore construct another type i' with distribution d'_i and cost c'_i to replace type i, where using one item of type i' is equivalent to using items of type i until the realized weight of one item is positive. Then X_{i'} has positive integer support, and formally we have

$$c'_i = \frac{c_i}{1-d_i(0)}, \qquad d'_i(t) = \frac{d_i(t)}{1-d_i(0)}, \quad \forall\, t>0,$$

where d_i(0) < 1 (recall that d_i(0) = Pr{X_i = 0}).

Then we can get

$$D'_i(t) = \sum_{j=1}^{t} d'_i(j) = \frac{D_i(t)-D_i(0)}{1-d_i(0)}.$$
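The transformation in this proof can be sketched as follows, assuming the distribution is given as a dictionary (the function name is illustrative):

```python
def remove_zero_weight(c, d):
    """Condition a weight distribution on being positive (Proposition 1).

    d is a dict w -> d_i(w) = Pr[X_i = w] with d.get(0, 0) < 1.  One item
    of the induced type i' stands for repeatedly inserting items of type i
    until one has positive realized weight, so
        c' = c / (1 - d_i(0)),    d'(t) = d_i(t) / (1 - d_i(0)) for t > 0."""
    p0 = d.get(0, 0.0)
    scale = 1.0 / (1.0 - p0)          # expected number of insertions (geometric)
    return c * scale, {t: p * scale for t, p in d.items() if t > 0}
```

The cost formula reflects that the number of insertions until a positive weight is geometric with mean 1/(1 − d_i(0)).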

## Appendix 0.C Proof of Lemma 1

###### Proof (Proof of Lemma 1)

We prove the lemma by induction. First, for all w ≤ 0 we have OPT_w = 0, so the base case

$$b_m w \le OPT_w \le b_m (w+W)$$

holds for all −W ≤ w ≤ 0.

Suppose that for all w ≤ k, b_m w ≤ OPT_w ≤ b_m (w + W). Then we have that

$$\begin{aligned} OPT_{k+1} &\le c_m + E_{X_m}\big[OPT_{k+1-X_m}\big] = c_m + \sum_{w=1}^{W} d_m(w)\, OPT_{k+1-w} \\ &\le c_m + \sum_{w=1}^{W} d_m(w)\, b_m(k+1-w+W) = c_m + b_m(k+1+W) - b_m E[X_m] \\ &= b_m(k+1+W), \end{aligned}$$

using c_m = b_m E[X_m] in the last step, and

$$\begin{aligned} OPT_{k+1} &= \min_i \Big( c_i + \sum_{w=1}^{W} d_i(w)\, OPT_{k+1-w} \Big) \ge \min_i \Big( c_i + \sum_{w=1}^{W} d_i(w)\, b_m(k+1-w) \Big) \\ &= \min_i \big( c_i + b_m(k+1) - b_m E[X_i] \big) \ge b_m(k+1), \end{aligned}$$

where the last inequality uses c_i = b_i E[X_i] ≥ b_m E[X_i].

Combining the two bounds, we get

$$b_m(k+1) \le OPT_{k+1} \le b_m(k+1+W).$$

This completes the proof by induction.∎

## Appendix 0.D Proof of Theorem 3.1

In this section, we analyze the approximation error of Algorithm 2 and prove Theorem 3.1. We rely on two lemmas. Generally speaking, the first lemma shows that the real optimal value is upper bounded by the value computed in our algorithm, and the second lemma shows that, under Assumption 1, the real optimal value is lower bounded by the value computed in our algorithm minus a small quantity.

Before proving the theorem, let us first recall Assumption 1: for each type i, c_i ≥ T/θ, where c_i is the cost of type i and T is the estimate from Section 2. We also recall the variables and notation defined previously.

We use OPT_W to denote the optimal value of Min-SUKP and T to denote the estimate of the optimal value from Section 2. We let δ denote the discretization step defined in Section 3.

Similar to the FPTAS for the ordinary knapsack problem, we define f(v) = max{w : OPT_w ≤ v} and compute an approximation of f. However, the domain of f is the set of real numbers, so we discretize it and only compute an approximation of f(iδT) for all non-negative integers i. In our algorithm, we use dynamic programming to compute f_i, which is the approximation of f(iδT).

We also define \hat{g}_j to be OPT_j rounded up to the next multiple of δT, for all j. In the algorithm, we use the following quantity to approximate \hat{g}: given f_1, …, f_{i−1}, define g_w = jδT for all w with f_{j−1} < w ≤ f_j, and define g_w = 0 for all w ≤ f_0. The ideas behind Algorithm 2 and the process of computing f_i are shown in Section 3.

Then, we will prove Theorem 3.1. We first have the following lemmas.

###### Lemma 2

For all i, we have f_i ≤ \hat{f}_{iδT} (defined below), which means OPT_{f_i} ≤ iδT.

###### Proof (Proof of Lemma 2)

We prove this by induction. The base case i = 0 holds trivially, since f_0 = 0 and OPT_0 = 0. Now assume the statement is true for all j < i; we prove that it is also true for i.

By the induction hypothesis, OPT_{f_j} ≤ jδT for all j < i. So for all j < i and all w ≤ f_j,

$$OPT_w \le OPT_{f_j} \le j\delta T.$$

We know g_w = jδT for f_{j−1} < w ≤ f_j, so g_w ≥ OPT_w for all w ≤ f_{i−1}.

When we compute f_i, we define g_w as above for all w, so g_w ≥ OPT_w for all relevant w.

We know

$$f_i = \max\Big\{ w \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{w-1} d_k(w-j)\, g_j \le i\delta T \Big\},$$

and

$$\hat{f}_{i\delta T} = \max\Big\{ w' \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{w'-1} d_k(w'-j)\, OPT_j \le i\delta T \Big\}.$$

Since g_j ≥ OPT_j, we get f_i ≤ \hat{f}_{iδT}, which implies OPT_{f_i} ≤ iδT.∎

###### Lemma 3

For all i, OPT_{f_i+1} > iδT − iδ²θT.

###### Proof (Proof of Lemma 3)

We prove it by induction. The base case i = 0 holds trivially. Now assume the statement is true for all j < i; we show that it is also true for i.

By the induction hypothesis, OPT_{f_{j−1}+1} > (j−1)δT − (j−1)δ²θT for all j ≤ i. So for all j ≤ i and all w > f_{j−1},

$$OPT_w \ge OPT_{f_{j-1}+1} > (j-1)\delta T - (j-1)\delta^2\theta T.$$

We know g_w = jδT where f_{j−1} < w ≤ f_j, so

$$OPT_w \ge (1-\delta\theta)\, g_w - \delta T, \qquad \forall\, f_{j-1} < w \le f_j. \tag{5}$$

We know

$$f_i = \max\Big\{ w' \,\Big|\, \exists k,\; c_k + \sum_{j=1}^{w'-1} d_k(w'-j)\, g_j \le i\delta T \Big\}.$$

Let w^* = f_i + 1. Then for all k,

$$c_k + \sum_{j=1}^{w^*-1} d_k(w^*-j)\, g_j > i\delta T.$$

From (5), we know that for all k,

$$\begin{aligned} c_k + \sum_{j=1}^{w^*-1} d_k(w^*-j)\, OPT_j &\ge c_k - \delta T + (1-\delta\theta) \sum_{j=1}^{w^*-1} d_k(w^*-j)\, g_j \\ &\ge c_k - \delta\theta c_k + (1-\delta\theta) \sum_{j=1}^{w^*-1} d_k(w^*-j)\, g_j \\ &\ge (1-\delta\theta)\Big( c_k + \sum_{j=1}^{w^*-1} d_k(w^*-j)\, g_j \Big) \\ &> (1-\delta\theta)\, i\delta T = i\delta T - i\delta^2\theta T. \end{aligned}$$

This means

$$OPT_{w^*} = \min_k \Big\{ c_k + \sum_{j=1}^{W} d_k(w^*-j)\, OPT_j \Big\} > i\delta T - i\delta^2\theta T.$$

This completes the proof by induction.∎

Given Lemma 2 and Lemma 3, we can prove the main theorem (Theorem 3.1) in Section 3.

###### Proof (Proof of Theorem 3.1)

First, our algorithm returns \hat{V} = iδT, where i is the smallest index with f_{i−1} < W ≤ f_i. Then, because OPT_w is increasing with respect to w, we know that

$$OPT_{f_{i-1}+1} \le OPT_W \le OPT_{f_i}.$$

Combining Lemma 2 and Lemma 3, we have

$$OPT_{f_i} \le i\delta T, \qquad OPT_{f_{i-1}+1} > (i-1)\delta T - (i-1)\delta^2\theta T.$$