# The Power of the Weighted Sum Scalarization for Approximating Multiobjective Optimization Problems

We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. In case that an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions - both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously by supported solutions.


## 1 Introduction

Almost any real-world optimization problem asks for optimizing more than one objective function (e.g., the minimization of cost and time in transportation systems or the maximization of profit and safety in investments). Clearly, these objectives are conflicting, often incommensurable, and, yet, they have to be taken into account simultaneously. The discipline dealing with such problems is called multiobjective optimization. Typically, multiobjective optimization problems are solved according to the Pareto principle of optimality: a solution is called efficient (or Pareto optimal) if no other feasible solution exists that is not worse in any objective function and better in at least one objective. The images of the efficient solutions in the objective space are called nondominated points. In contrast to single objective optimization, where one typically asks for one optimal solution, the main goal of multiobjective optimization is to compute the set of all nondominated points and, for each of them, one corresponding efficient solution. Each of these solutions corresponds to a different compromise among the set of objectives and may potentially be relevant for a decision maker.

Several results in the literature, however, show that multiobjective optimization problems are hard to solve exactly [9, 8] and, in addition, the cardinalities of the set of nondominated points (the nondominated set) and the set of efficient solutions (the efficient set) may be exponentially large for discrete problems (and are typically infinite for continuous problems). This impairs the applicability of exact solution methods to real-life problems and provides a strong motivation for studying approximations of multiobjective optimization problems.

Both exact and approximate solution methods for multiobjective optimization problems often resort to using single objective auxiliary problems, which are called scalarizations of the original multiobjective problem. This refers to the transformation of a multiobjective optimization problem into a single objective auxiliary problem based on a procedure that might use additional parameters, auxiliary points, or variables. The resulting scalarized optimization problems are then solved using methods from single objective optimization and the obtained solutions are interpreted in the context of Pareto optimality.

The simplest and most widely used scalarization technique is the weighted sum scalarization (see, e.g., [9]). Here, the scalarized auxiliary problem is constructed by assigning a weight to each of the objective functions and summing up the resulting weighted objective functions in order to obtain the objective function of the scalarized problem. If the weights are chosen to be positive, then every optimal solution of the resulting weighted sum problem is efficient. Moreover, the weighted sum scalarization does not change the feasible set and, in many cases, boils down to the single objective version of the given multiobjective problem — which represents an important advantage of this scalarization especially for combinatorial problems. However, only some efficient solutions (called supported solutions) can be obtained by means of the weighted sum scalarization, while many other efficient solutions (called unsupported solutions) cannot. Consequently, a natural question is to determine which approximations of the whole efficient set can be obtained by using this very important scalarization technique.
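To make the mechanics concrete, the following toy sketch (data made up, not from the paper) shows that minimizing a weighted sum with strictly positive weights over an explicit finite image set can only return efficient points:

```python
# Illustrative toy example: with strictly positive weights, minimizing the
# weighted sum over a finite image set always returns an efficient point;
# the dominated point (5, 5) never wins.

def dominates(y_prime, y):
    """y' dominates y (minimization): y' != y and y'_j <= y_j for all j."""
    return y_prime != y and all(a <= b for a, b in zip(y_prime, y))

def weighted_sum_argmin(points, weights):
    """Exact solver for the weighted sum scalarization over a finite set."""
    return min(points, key=lambda y: sum(w * yj for w, yj in zip(weights, y)))

points = [(1, 9), (3, 4), (4, 3), (9, 1), (5, 5)]   # (5, 5) is dominated
for w in [(1, 1), (1, 3), (3, 1), (0.2, 5)]:
    y = weighted_sum_argmin(points, w)
    assert not any(dominates(z, y) for z in points)  # returned point is efficient
```

The converse fails: an efficient point lying strictly above the convex hull of the other images (an unsupported point) is never returned by any such call, which is exactly the limitation studied in this paper.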

### 1.1 Previous work

Besides many specialized approximation algorithms for particular multiobjective optimization problems, there exist several general approximation algorithms that can be applied to broad classes of multiobjective problems.

Most of these general approximation methods for multiobjective problems are based on the seminal work of Papadimitriou and Yannakakis [19], who present a method for generating a (1+ε, …, 1+ε)-approximation (for any ε > 0) for general multiobjective minimization and maximization problems with a constant number of positive-valued, polynomially computable objective functions. They show that a (1+ε, …, 1+ε)-approximation whose size is polynomial in the encoding length of the input and in 1/ε always exists. Moreover, their results show that the construction of such an approximation is possible in (fully) polynomial time, i.e., the problem admits a multiobjective (fully) polynomial time approximation scheme or MPTAS (MFPTAS), if and only if a certain auxiliary problem called the gap problem can be solved in (fully) polynomial time. More recent articles building upon the results of [19] present methods that additionally yield bounds on the size of the computed (1+ε, …, 1+ε)-approximation relative to the size of the smallest such approximation possible [22, 7, 4].

Besides the general approximation methods mentioned above that work for both minimization and maximization problems, there exist several general approximation methods that are restricted either to minimization problems or to maximization problems.

For minimization problems, there are two general approximation methods that are both based on using (approximations of) the weighted sum scalarization. The previously best general approximation method for multiobjective minimization problems with an arbitrary constant number of objectives that uses the weighted sum scalarization can be obtained by combining two results of Glaßer et al. [13, 12]. They introduce another auxiliary problem called the approximate domination problem, which is similar to the gap problem. Glaßer et al. show that, if this problem is solvable in polynomial time for some approximation factor α ≥ 1, then an approximating set providing an approximation factor of α⋅(1+ε) in every objective function can be computed in fully polynomial time for every ε > 0. Moreover, they show that the approximate domination problem with α = σ⋅p can be solved by using a σ-approximation algorithm for the weighted sum scalarization of the p-objective problem. Together, this implies that a σ⋅p⋅(1+ε)-approximation can be computed in fully polynomial time for p-objective minimization problems provided that the objective functions are positive-valued and polynomially computable and a σ-approximation algorithm for the weighted sum scalarization exists. As this result is not explicitly stated in [13, 12], no bounds on the running time are provided.

For biobjective minimization problems, Halffmann et al. [15] show how to obtain an approximation whose factors in the two objectives depend on a given parameter δ > 0 if a polynomial time σ-approximation algorithm for the weighted sum scalarization is given.

Obtaining general approximation methods for multiobjective maximization problems using the weighted sum scalarization seems to be much harder than for minimization problems. Indeed, Glaßer et al. [12] show that certain translations of approximability results from the weighted sum scalarization of an optimization problem to the multiobjective version that work for minimization problems are not possible in general for maximization problems.

An approximation method specifically designed for multiobjective maximization problems is presented by Bazgan et al. [3]. Their method is applicable to biobjective maximization problems that satisfy an additional structural assumption on the set of feasible solutions and the objective functions: For each two feasible solutions neither of which approximates the other one by a given factor in both objective functions, a third solution approximating both given solutions in both objective functions by a certain factor depending on the problem's parameters must be computable in polynomial time. The approximation factor obtained by the algorithm then depends on these parameters.

### 1.2 Our contribution

Our contribution is twofold: First, in order to better capture the approximation quality in the context of multiobjective optimization problems, we introduce a new notion of approximation for the multiobjective case. This new notion comprises the common notion of approximation, but is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. Second, we provide a precise analysis of the approximation quality obtainable for multiobjective optimization problems by means of an exact or approximate algorithm for the weighted sum scalarization – with respect to both the common and the new notion of approximation.

In order to motivate the new notion of approximation, consider the biobjective case, in which a (2+ε, 2+ε)-approximation can be obtained from the results of Glaßer et al. [13, 12] using an exact algorithm for the weighted sum scalarization. As illustrated in Figure 1, this approximation guarantee is actually too pessimistic: Since each point y in the image of the approximating set is nondominated (since it is the image of an optimal solution of the weighted sum scalarization), no images of feasible solutions can be contained in the shaded area. Thus, every feasible solution is actually either (1, 2+ε)- or (2+ε, 1)-approximated. Consequently, the approximation quality obtained in this case can be more accurately described by using two vectors of approximation factors. In order to capture such situations and allow for a more precise analysis of the approximation quality obtained for multiobjective problems, our new multi-factor notion of approximation uses a set of vectors of approximation factors instead of only a single vector.

The second part of our contribution consists of a detailed analysis of the approximation quality obtainable by using the weighted sum scalarization – both for multiobjective minimization problems and for multiobjective maximization problems. For minimization problems, we provide an efficient algorithm that approximates a multiobjective problem using an exact or approximate algorithm for its weighted sum scalarization. We analyze the approximation quality obtained by the algorithm both with respect to the common notion of approximation that uses only a single vector of approximation factors as well as with respect to the new multi-factor notion. With respect to the common notion, our algorithm matches the best previously known approximation guarantee of σ⋅p + ε obtainable for p-objective minimization problems and any ε > 0 from a σ-approximation algorithm for the weighted sum scalarization. More importantly, we show that this result is best possible in the sense that it comes arbitrarily close to the best approximation guarantee obtainable by supported solutions in the case that an exact algorithm is used to solve the weighted sum problem (i.e., when σ = 1).

When analyzing the algorithm with respect to the new multi-factor notion of approximation, however, a much stronger approximation result is obtained. Here, we show that every feasible solution is approximated with some (possibly different) vector (α1,…,αp) of approximation factors such that ∑_{j:αj>1} αj ≤ σ⋅p + ε. In particular, the worst-case approximation factor of σ⋅p + ε can actually be tight in at most one objective for any feasible point. This shows that the multi-factor notion of approximation yields a much stronger approximation result by allowing a refined analysis of the obtained approximation guarantee. Moreover, for σ = 1, we show that the obtained multi-factor approximation result comes arbitrarily close to the best multi-factor approximation result obtainable by supported solutions. We also demonstrate that our algorithm applies to a large variety of multiobjective minimization problems and yields the currently best approximation results for several problems.

Multiobjective maximization problems, however, turn out to be much harder to approximate by using the weighted sum scalarization. Here, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously when using only supported solutions.

In summary, our results yield essentially tight bounds on the power of the weighted sum scalarization with respect to the approximation of multiobjective minimization and maximization problems – both in the common notion of approximation and in the new multi-factor notion.

The remainder of the paper is organized as follows: In Section 2, we formally introduce multiobjective optimization problems and provide the necessary definitions concerning their approximation. Section 3 contains our general approximation algorithm for minimization problems (Subsection 3.1) as well as a faster algorithm for the biobjective case (Subsection 3.2). Moreover, we show in Subsection 3.3 that the obtained approximation results are tight. Section 4 presents applications of our results to specific minimization problems. In Section 5, we present our impossibility results for maximization problems. Section 6 concludes the paper and lists directions for future work.

## 2 Preliminaries

In the following, we consider a general multiobjective minimization or maximization problem Π of the following form (where either all objective functions are to be minimized or all objective functions are to be maximized):

 min/max f(x)=(f1(x),…,fp(x)) s. t. x∈X

Here, as usual, we assume a constant number p ≥ 2 of objectives. The elements x ∈ X are called feasible solutions and the set X is referred to as the feasible set. An image y = f(x) of a feasible solution x is also called a feasible point. We let Y := f(X) := {f(x) : x ∈ X} denote the set of feasible points.

We assume that the objective functions take only positive rational values and are polynomially computable. Moreover, for each j ∈ {1,…,p}, we assume that there exist strictly positive rational lower and upper bounds LB(j) and UB(j) of polynomial encoding length such that LB(j) ≤ fj(x) ≤ UB(j) for all x ∈ X. We let LB := min_j LB(j) and UB := max_j UB(j).

###### Definition 1

For a minimization problem Π, we say that a point y = f(x) ∈ Y is dominated by another point y′ = f(x′) ∈ Y if y′ ≠ y and

 y′_j = fj(x′) ≤ fj(x) = y_j for all j ∈ {1,…,p}.

Similarly, for a maximization problem Π, we say that a point y = f(x) ∈ Y is dominated by another point y′ = f(x′) ∈ Y if y′ ≠ y and

 y′_j = fj(x′) ≥ fj(x) = y_j for all j ∈ {1,…,p}.

If the point y = f(x) is not dominated by any other point y′ ∈ Y, we call y nondominated and the feasible solution x efficient. The set Y_N of nondominated points is called the nondominated set and the set X_E of efficient solutions is called the efficient set or Pareto set.
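Definition 1 translates directly into code. The following sketch (with a small, made-up image set) filters a finite point set down to its nondominated set for the minimization case:

```python
# A direct implementation of Definition 1 for a finite image set (toy data):
# remove every point that is dominated by another point.

def dominates(y_prime, y):
    """y' dominates y (minimization): y' != y and y'_j <= y_j for all j."""
    return y_prime != y and all(a <= b for a, b in zip(y_prime, y))

def nondominated_set(points):
    return [y for y in points if not any(dominates(z, y) for z in points)]

Y = [(2, 8), (3, 3), (8, 2), (4, 4), (9, 9)]
# (4, 4) is dominated by (3, 3); (9, 9) is dominated by every other point.
assert nondominated_set(Y) == [(2, 8), (3, 3), (8, 2)]
```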

### 2.1 Notions of approximation

We first recall the standard definitions of approximation for single objective optimization problems.

###### Definition 2

Consider a single objective optimization problem Π and let α ≥ 1. If Π is a minimization problem, we say that a feasible solution x α-approximates another feasible solution x′ if f(x) ≤ α⋅f(x′). If Π is a maximization problem, we say that a feasible solution x α-approximates another feasible solution x′ if α⋅f(x) ≥ f(x′). A feasible solution that α-approximates an optimal solution of Π is called an α-approximation for Π.

A (polynomial time) α-approximation algorithm is an algorithm that, for every instance I of Π, computes an α-approximation for I in time bounded by a polynomial in the encoding length of I.

The following definition extends the concept of approximation to the multiobjective case.

###### Definition 3

Let α = (α1,…,αp) with αj ≥ 1 for all j ∈ {1,…,p}.

For a minimization problem Π, we say that a feasible solution x α-approximates another feasible solution x′ if

 fj(x) ≤ αj⋅fj(x′) for all j ∈ {1,…,p}.

Similarly, for a maximization problem Π, we say that a feasible solution x α-approximates another feasible solution x′ if

 αj⋅fj(x) ≥ fj(x′) for all j ∈ {1,…,p}.
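For the minimization case, Definition 3 amounts to a componentwise check, sketched here on toy numbers:

```python
# Sketch of Definition 3 (minimization case): x alpha-approximates x' when
# f_j(x) <= alpha_j * f_j(x') in every objective. Toy data, not from the paper.

def approximates_min(fx, fx_prime, alpha):
    return all(a <= aj * b for a, aj, b in zip(fx, alpha, fx_prime))

assert approximates_min((2, 6), (2, 3), (1.0, 2.0))      # 2 <= 2 and 6 <= 2*3
assert not approximates_min((2, 7), (2, 3), (1.0, 2.0))  # 7 > 2*3
```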

The standard notion of approximation for multiobjective optimization problems used in the literature is the following one.

###### Definition 4

Let α = (α1,…,αp) with αj ≥ 1 for all j ∈ {1,…,p}.

A set P ⊆ X of feasible solutions is called an α-approximation for the multiobjective problem Π if, for any feasible solution x ∈ X, there exists a solution x′ ∈ P that α-approximates x.

In the following definition, we generalize the standard notion of approximation for multiobjective problems by allowing a set of vectors of approximation factors instead of only a single vector, which allows for tighter approximation results.

###### Definition 5

Let A be a set of vectors α = (α1,…,αp) with αj ≥ 1 for all j ∈ {1,…,p} and all α ∈ A. Then a set P ⊆ X of feasible solutions is called a (multi-factor) A-approximation for the multiobjective problem Π if, for any feasible solution x ∈ X, there exist a solution x′ ∈ P and a vector α ∈ A such that x′ α-approximates x.

Note that, in the case where A = {α} is a singleton, an A-approximation for a multiobjective problem according to Definition 5 is equivalent to an α-approximation according to Definition 4.
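A minimal sketch of Definition 5 for the minimization case (the factor set, points, and candidate set below are illustrative):

```python
# Sketch of Definition 5 on toy data: P is an A-approximation if every
# feasible point is alpha-approximated by some member of P for SOME
# vector alpha in A (minimization sense).

def approximates_min(fx, fx_prime, alpha):
    return all(a <= aj * b for a, aj, b in zip(fx, alpha, fx_prime))

def is_multifactor_approximation(P, X, A):
    return all(
        any(approximates_min(p, x, alpha) for p in P for alpha in A)
        for x in X
    )

A = [(1.0, 2.0), (2.0, 1.0)]    # a disjunctive factor set
X = [(1, 4), (4, 1), (2, 2)]    # all feasible images
P = [(1, 4), (4, 1)]            # candidate approximating set
# (2, 2) is (1, 2)-approximated by (1, 4): 1 <= 1*2 and 4 <= 2*2.
assert is_multifactor_approximation(P, X, A)
```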

### 2.2 Weighted sum scalarization

Given a p-objective optimization problem Π and a vector w = (w1,…,wp) with wj > 0 for all j ∈ {1,…,p}, the weighted sum problem Π_WS(w) (or weighted sum scalarization) associated with Π is defined as the following single objective optimization problem:

 min/max ∑_{j=1}^{p} wj⋅fj(x) s. t. x∈X
###### Definition 6

A point y = f(x) ∈ Y is called supported if there exists a vector w of positive weights such that x is an optimal solution of the weighted sum problem Π_WS(w). In this case, the feasible solution x is called a supported solution. The set of all supported solutions will be denoted by X_S.

If the supported point y = f(x) is an extreme point of the convex hull conv(Y), then y is called an extreme supported point and x is called an extreme supported solution. The set of all extreme supported solutions will be denoted by X_ES.

It is well-known that every supported point is nondominated and, correspondingly, every supported solution is efficient (cf. [9]).

In the following, we assume that there exists a polynomial time σ-approximation algorithm WS for the weighted sum problem for some σ ≥ 1, where σ can be either a constant or a function of the input size. When calling WS with some specific weight vector w, we denote this by WS(w). This algorithm then returns a solution x̂ such that ∑_j wj⋅fj(x̂) ≤ σ⋅∑_j wj⋅fj(x*), if Π is a minimization problem, and σ⋅∑_j wj⋅fj(x̂) ≥ ∑_j wj⋅fj(x*), if Π is a maximization problem, where x* is an optimal solution of Π_WS(w). The running time of algorithm WS is denoted by T_WS.

The following result shows that a σ-approximation for the weighted sum problem also σ-approximates every feasible solution in at least one of the objectives.

###### Lemma 1

Let x̂ be a σ-approximation for Π_WS(w) for some positive weight vector w. Then, for any feasible solution x ∈ X, there exists at least one j ∈ {1,…,p} such that x̂ σ-approximates x in objective fj.

###### Proof

Consider the case where Π is a multiobjective minimization problem (the proof for the case where Π is a maximization problem works analogously). Then, we must show that, for any feasible solution x ∈ X, there exists at least one j ∈ {1,…,p} such that fj(x̂) ≤ σ⋅fj(x).

Assume by contradiction that there exists some x ∈ X such that fj(x̂) > σ⋅fj(x) for all j ∈ {1,…,p}. Then, we obtain ∑_{j=1}^{p} wj⋅fj(x̂) > σ⋅∑_{j=1}^{p} wj⋅fj(x) ≥ σ⋅∑_{j=1}^{p} wj⋅fj(x*), which contradicts the assumption that x̂ is a σ-approximation for Π_WS(w).
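Lemma 1 is easy to validate empirically. The following sketch checks it on random, made-up three-objective minimization instances (all names are illustrative):

```python
# Empirical check of Lemma 1: any solution whose weighted sum value is within
# a factor sigma of the optimum sigma-approximates every feasible solution
# in at least one objective.
import random

def wsum(y, w):
    return sum(wj * yj for wj, yj in zip(w, y))

def lemma1_holds(points, w, sigma):
    opt = min(wsum(y, w) for y in points)
    near_opt = [y for y in points if wsum(y, w) <= sigma * opt]
    return all(
        any(yh[j] <= sigma * x[j] for j in range(len(x)))
        for yh in near_opt
        for x in points
    )

random.seed(0)
for _ in range(100):
    pts = [tuple(random.uniform(1, 10) for _ in range(3)) for _ in range(8)]
    w = tuple(random.uniform(0.1, 2.0) for _ in range(3))
    assert lemma1_holds(pts, w, 1.5)
```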

## 3 A multi-factor approximation result for minimization problems

In this section, we study the approximation of multiobjective minimization problems by solving weighted sum problems. In Subsection 3.1, we propose a multi-factor approximation algorithm that significantly improves upon the σ⋅p⋅(1+ε)-approximation algorithm that can be derived from Glaßer et al. [12]. The biobjective case is then investigated in Subsection 3.2. Finally, we show in Subsection 3.3 that the resulting approximation is tight.

### 3.1 General results

###### Proposition 1

Let x̄ be a feasible solution of Π and let b1, …, bp be such that bj ≤ fj(x̄) ≤ (1+ε)⋅bj for j = 1,…,p and some ε ≥ 0. Applying WS with wj := 1/bj for j = 1,…,p yields a solution x̂ that (α1,…,αp)-approximates x̄ for some α1,…,αp ≥ 1 such that αj ≤ σ for at least one j and

 ∑_{j:αj>1} αj = (1+ε)⋅σ⋅p.

###### Proof

Let x* be an optimal solution for Π_WS(w). Since x̂ is the solution returned by WS(w), we have

 ∑_{j=1}^{p} (1/bj)⋅fj(x̂) ≤ σ⋅(∑_{j=1}^{p} (1/bj)⋅fj(x*)) ≤ σ⋅(∑_{j=1}^{p} (1/bj)⋅fj(x̄)) ≤ σ⋅(1+ε)⋅(∑_{j=1}^{p} 1) = (1+ε)⋅σ⋅p.

Since bj ≤ fj(x̄) for all j, we get fj(x̂)/fj(x̄) ≤ fj(x̂)/bj, which yields

 ∑_{j=1}^{p} fj(x̂)/fj(x̄) ≤ (1+ε)⋅σ⋅p.

Setting αj := max{1, fj(x̂)/fj(x̄)} for j = 1,…,p, we have

 ∑_{j:αj>1} αj ≤ (1+ε)⋅σ⋅p.

The worst-case approximation factors are then obtained when equality holds in the previous inequality.

Moreover, by Lemma 1, there exists at least one j such that fj(x̂) ≤ σ⋅fj(x̄). Thus, we have αj ≤ σ for at least one j, which proves the claim.

Proposition 1 motivates applying the given σ-approximation algorithm WS for the weighted sum problem iteratively for different weight vectors w in order to obtain an approximation of the multiobjective minimization problem Π. This is formalized in Algorithm 1, whose correctness and running time are established in Theorem 1.

Algorithm 1 (pseudocode omitted)
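Since the pseudocode of Algorithm 1 is not reproduced above, the following Python sketch reconstructs its structure as described in the proof of Theorem 1: solve the weighted sum problem for weights wj = 1/bj over a geometric grid of vectors b covering [LB(j), UB(j)], keeping only grid vectors with at least one component at its lower bound. All names and the toy instance are illustrative assumptions, not the paper's exact bookkeeping.

```python
# Hedged sketch of the structure of Algorithm 1 (exact weighted sum solver,
# i.e. sigma = 1). Toy instance; not the paper's pseudocode.
import itertools, math

def algorithm1_sketch(points, p, LB, UB, eps, ws_solver):
    base = 1 + eps / p            # grid ratio for an exact solver (sigma = 1)
    levels = [range(int(math.log(UB[j] / LB[j], base)) + 1) for j in range(p)]
    solutions = set()
    for exponents in itertools.product(*levels):
        if 0 not in exponents:    # redundant: scale so one component is LB(j)
            continue
        b = [LB[j] * base ** exponents[j] for j in range(p)]
        w = [1.0 / bj for bj in b]
        solutions.add(ws_solver(points, w))
    return solutions

def exact_ws(points, w):
    return min(points, key=lambda y: sum(wj * yj for wj, yj in zip(w, y)))

points = [(1, 9), (3, 4), (4, 3), (9, 1), (5, 5)]
P = algorithm1_sketch(points, 2, [1, 1], [9, 9], 0.5, exact_ws)
# for p = 2 and eps = 0.5, every feasible point should be
# (1, 2.5)- or (2.5, 1)-approximated (p + eps = 2.5):
for x in points:
    assert any(
        (y[0] <= x[0] and y[1] <= 2.5 * x[1]) or
        (y[0] <= 2.5 * x[0] and y[1] <= x[1])
        for y in P
    )
```

The grid granularity and the restriction to weight vectors with one component at its lower bound follow the redundancy argument in the proof of Theorem 1; the exact constants in the paper may differ.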

###### Theorem 1

For a p-objective minimization problem and any given ε > 0, Algorithm 1 outputs an A-approximation where

 A = {(α1,…,αp) : α1,…,αp ≥ 1, αi ≤ σ for at least one i, and ∑_{j:αj>1} αj = σ⋅p + ε}

in time O(T_WS ⋅ p ⋅ ⌈log_{1+ε/(σ⋅p)}(UB/LB)⌉^{p−1}).

###### Proof

In order to approximate all feasible solutions, we can iteratively apply Proposition 1 with ε′ := ε/(σ⋅p) instead of ε, leading to the modified constraint on the sum of the αj where the right-hand side becomes σ⋅p + ε. More precisely, we iterate with bj = LB(j)⋅(1+ε′)^{ij} and ij ∈ {0, 1, …, γj}, where γj is the largest integer such that LB(j)⋅(1+ε′)^{γj−1} < UB(j), for each j ∈ {1,…,p}. Actually, this iterative application of Proposition 1 involves redundant weight vectors. More precisely, consider a weight vector w with wj = 1/bj, where bj = LB(j)⋅(1+ε′)^{ij} for j = 1,…,p, and let k be an index such that ik = min_j ij. Then problem Π_WS(w) is equivalent to problem Π_WS(w′) with w′ = (1+ε′)^{ik}⋅w, where w′j = 1/b′j with b′j = LB(j)⋅(1+ε′)^{ij−ik} for j = 1,…,p. Therefore, it is sufficient to consider all weight vectors w for which at least one component bj is set to LB(j) (see Figure 2 for an illustration). The running time follows.

Note that, depending on the structure of the weighted sum algorithm WS, the practical running time of Algorithm 1 could be improved by not solving every weighted sum problem from scratch, but by using the information obtained in previous iterations.

Also note that, as illustrated in Figure 2, Algorithm 1 directly yields a subdivision of the objective space into hyperrectangles such that all solutions whose images are in the same hyperrectangle are approximated by the same solution (possibly with different approximation guarantees): For each weight vector w with wj = 1/bj considered in the algorithm (where bj = LB(j) for at least one j), all solutions x whose images lie in one of the hyperrectangles ∏_{j=1}^{p} [(1+ε′)^{t}⋅bj, (1+ε′)^{t+1}⋅bj] for t = 0, 1, 2, … (with ε′ := ε/(σ⋅p)) are approximated by the solution returned by WS(w).

When the weighted sum problem can be solved exactly in polynomial time, Theorem 1 immediately yields the following result:

###### Corollary 1

If WS is an exact algorithm for the weighted sum problem (i.e., σ = 1), Algorithm 1 outputs an A-approximation where

 A = {(α1,…,αp) : α1,…,αp ≥ 1, αi = 1 for at least one i, and ∑_{j:αj>1} αj = p + ε}

in time O(T_WS ⋅ p ⋅ ⌈log_{1+ε/p}(UB/LB)⌉^{p−1}).

Note that the multi-factor approximation result in Corollary 1 holds independently of which optimal solution the algorithm WS returns for each weighted sum problem Π_WS(w). Since, for each weight vector w considered, there always exists at least one optimal solution of Π_WS(w) that is extreme supported, we obtain the following structural result:

###### Corollary 2

For any ε > 0, the set X_ES of extreme supported solutions is an A-approximation, where

 A = {(α1,…,αp) : α1,…,αp ≥ 1, αi = 1 for at least one i, and ∑_{j:αj>1} αj = p + ε}.

Another special case worth mentioning is the situation where the weighted sum problem admits a polynomial time approximation scheme. Here, similar to the case in which an exact algorithm is available for the weighted sum problem (see Corollary 1), we can still obtain a set A of vectors of approximation factors with ∑_{j:αj>1} αj = p + ε while only losing the property that at least one component αi equals 1.

###### Corollary 3

If the weighted sum problem admits a polynomial time (1+δ)-approximation algorithm for every δ > 0, then, for any ε > 0 and any τ > 0, Algorithm 1 can be used to compute an A-approximation where

 A = {(α1,…,αp) : α1,…,αp ≥ 1, αi ≤ 1+τ for at least one i, and ∑_{j:αj>1} αj = p + ε}

in polynomial time.

###### Proof

Given ε > 0 and τ > 0, set δ := min{τ, ε/(2⋅p)} and apply Algorithm 1 using the polynomial time (1+δ)-approximation algorithm for the weighted sum problem as WS and with ε − δ⋅p in place of ε. Theorem 1 then yields αi ≤ 1+δ ≤ 1+τ for at least one i and ∑_{j:αj>1} αj = (1+δ)⋅p + (ε − δ⋅p) = p + ε.

Since any component of a vector in the set A from Theorem 1 can get arbitrarily close to σ⋅p + ε in the worst case, the best “classical” approximation result using only a single vector of approximation factors that is obtainable from Theorem 1 reads as follows:

###### Corollary 4

Algorithm 1 computes a (σ⋅p + ε, …, σ⋅p + ε)-approximation in time O(T_WS ⋅ p ⋅ ⌈log_{1+ε/(σ⋅p)}(UB/LB)⌉^{p−1}).

### 3.2 Biobjective Problems

In this subsection, we focus on biobjective minimization problems. We first specialize some of the general results of the previous subsection to the case p = 2. Afterwards, we propose a specific approximation algorithm for biobjective problems, which significantly improves upon the running time of Algorithm 1 in the case where an exact algorithm WS for the weighted sum problem is available.

Theorem 1, which is the main general result of the previous subsection, can trivially be specialized to the case p = 2. It is more interesting to consider the situation where the weighted sum problem can be solved exactly, corresponding to Corollary 1. In that case, we obtain the following result:

###### Corollary 5

If WS is an exact algorithm for the weighted sum problem and p = 2, Algorithm 1 yields an A-approximation where

 A= {(1,2+ε),(2+ε,1)}

in time O(T_WS ⋅ (1/ε) ⋅ log(UB/LB)).

It is worth pointing out that, unlike for the previous results, the set A of approximation factors is now finite. This type of result can be interpreted as a disjunctive approximation result: Algorithm 1 outputs a set P ensuring that, for any x ∈ X, there exists x′ ∈ P such that x′ (1, 2+ε)-approximates x or x′ (2+ε, 1)-approximates x.
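The disjunctive guarantee can be observed numerically. In the following sketch (a made-up biobjective instance), the supported points found by a sweep of exact weighted sums (1, 2+ε)- or (2+ε, 1)-approximate every feasible point:

```python
# Numerical illustration of the disjunctive guarantee of Corollaries 5 and 6
# on a toy biobjective instance (data not from the paper).

def supported(points, weight_ratios):
    """Weighted sum optima for weight vectors (r, 1)."""
    return {min(points, key=lambda y: r * y[0] + y[1]) for r in weight_ratios}

points = [(1, 16), (2, 7), (5, 6), (6, 3), (16, 1), (4, 10)]
S = supported(points, [2 ** k / 8 for k in range(8)])   # ratios 0.125 .. 16
eps = 0.01
for x in points:
    assert any(
        (y[0] <= x[0] and y[1] <= (2 + eps) * x[1]) or
        (y[0] <= (2 + eps) * x[0] and y[1] <= x[1])
        for y in S
    )
```

Note that the dominated point (4, 10) and the unsupported point (5, 6) are never returned by any weighted sum, yet both are covered by the disjunctive guarantee.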

Similar to Corollary 2, we obtain the following structural result about the approximation guarantee obtained by the set of extreme supported solutions in the biobjective case:

###### Corollary 6

For any ε > 0, the set X_ES of extreme supported solutions of a biobjective problem is an A-approximation, where

 A= {(1,2+ε),(2+ε,1)}.

In the biobjective case, we may scale the weights in the weighted sum problem to be of the form (w, 1) for some w > 0. In the following, we make use of this observation and refer to a weight vector (w, 1) simply as w.

Algorithm 2 is a refinement of Algorithm 1 in the biobjective case when an exact algorithm WS for the weighted sum problem is available. Algorithm 1 requires testing all the weights (1+ε′)^{−γ}, …, (1+ε′)^{−1}, 1, (1+ε′), …, (1+ε′)^{γ}, or equivalently the weights of the form w_i = (1+ε′)^{i}, where i ∈ {−γ, …, γ}, ε′ := ε/2, and γ ∈ O(log_{1+ε′}(UB/LB)). Instead of testing all these weights, Algorithm 2 considers only a subset of these weights. More precisely, in each iteration, the algorithm selects a subset of consecutive weights w_ℓ, …, w_t, solves Π_WS(w_m) for the weight w_m with m = ⌊(ℓ+t)/2⌋, and decides whether 0, 1, or 2 of the subsets w_ℓ, …, w_m and w_m, …, w_t need to be investigated further. This process can be viewed as developing a binary tree where the root, which corresponds to the initialization, requires solving two weighted sum problems, while each other node requires solving one weighted sum problem. This representation is useful to bound the running time of our algorithm. The following technical result on binary trees, whose proof is given in the appendix, will be useful for this purpose:

###### Lemma 2

A binary tree with height h in which k nodes have two children contains O(k⋅h) nodes.

Algorithm 2 (pseudocode omitted)
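As the pseudocode of Algorithm 2 is not reproduced above, the following is a hedged reconstruction of the bisection scheme described in the surrounding text: maintain intervals [ℓ, t] of grid weight indices, solve the weighted sum for the middle index, and recurse into a half only when the endpoint solutions do not already provide the (1, 2+ε)/(2+ε, 1) guarantee. All names and the toy instance are illustrative.

```python
# Hedged sketch of the bisection structure of Algorithm 2 (exact weighted
# sum solver). Ratios are sorted in DECREASING order, so f1 increases and
# f2 decreases along the index range.

def solve(points, r):
    """Exact weighted sum solver for weight vector (r, 1)."""
    return min(points, key=lambda y: r * y[0] + y[1])

def approx_1_2(y, x, eps):   # y (1, 2+eps)-approximates x
    return y[0] <= x[0] and y[1] <= (2 + eps) * x[1]

def approx_2_1(y, x, eps):   # y (2+eps, 1)-approximates x
    return y[0] <= (2 + eps) * x[0] and y[1] <= x[1]

def algorithm2_sketch(points, ratios, eps):
    sols = {0: solve(points, ratios[0]), len(ratios) - 1: solve(points, ratios[-1])}
    stack = [(0, len(ratios) - 1)]
    while stack:
        l, t = stack.pop()
        if t - l <= 1:
            continue
        xl, xt = sols[l], sols[t]
        # interval already covered by its endpoint solutions: prune it
        if approx_1_2(xl, xt, eps) or approx_2_1(xt, xl, eps):
            continue
        m = (l + t) // 2
        sols[m] = solve(points, ratios[m])
        stack += [(l, m), (m, t)]
    return set(sols.values())

points = [(1, 16), (2, 7), (5, 6), (6, 3), (16, 1), (4, 10)]
ratios = [16, 8, 4, 2, 1, 0.5, 0.25, 0.125]
eps = 0.01
S = algorithm2_sketch(points, ratios, eps)
for x in points:   # disjunctive guarantee still holds despite pruning
    assert any(approx_1_2(y, x, eps) or approx_2_1(y, x, eps) for y in S)
```

The pruning tests mirror the conditions discussed in the proof of Theorem 2; the exact control flow of the paper's Algorithm 2 may differ.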

###### Theorem 2

For a biobjective minimization problem, Algorithm 2 returns a {(1, 2+ε), (2+ε, 1)}-approximation in time

 O(T_WS ⋅ log(1/ε ⋅ log(UB/LB)) ⋅ log(UB/LB)).

###### Proof

The approximation guarantee of Algorithm 2 derives from Theorem 1. We just need to prove that the subset of weights used here is sufficient to preserve the approximation guarantee.

In its first pruning step, Algorithm 2 does not consider the weights w_i for ℓ < i < t if x_ℓ (1, 2+ε)-approximates x_t or if x_t (2+ε, 1)-approximates x_ℓ (where x_ℓ and x_t denote the solutions returned by WS(w_ℓ) and WS(w_t), respectively). We show that, indeed, these weights are not needed.

To this end, first observe that any solution x_i for ℓ ≤ i ≤ t is such that

 f1(x_ℓ) ≤ f1(x_i) ≤ f1(x_t) and f2(x_ℓ) ≥ f2(x_i) ≥ f2(x_t)

since x_ℓ, x_i, and x_t are optimal for the corresponding weighted sum problems and the weights are monotonically ordered. Thus, if x_ℓ (1, 2+ε)-approximates x_t, we obtain

 f2(x_ℓ) ≤ (2+ε)⋅f2(x_t) ≤ (2+ε)⋅f2(x_i),

which shows that x_ℓ also (1, 2+ε)-approximates x_i. Therefore, x_i and the corresponding weight w_i are not needed.

Similarly, if x_t (2+ε, 1)-approximates x_ℓ, we have

 f1(x_t) ≤ (2+ε)⋅f1(x_ℓ) ≤ (2+ε)⋅f1(x_i),

which shows that x_t (2+ε, 1)-approximates x_i. Therefore, x_i and the corresponding weight w_i are again not needed.

In its second pruning step, Algorithm 2 does not consider the weights w_i for m < i < t if x_m (1, 2+ε)-approximates x_t or if x_t (2+ε, 1)-approximates x_m, for similar reasons.

Also, the middle weight w_m can be discarded and the weights w_i for ℓ < i < t can be ignored if x_ℓ (1, 2+ε)-approximates x_m and x_t (2+ε, 1)-approximates x_m. Indeed, using similar arguments as before, we obtain that x_ℓ (1, 2+ε)-approximates x_i for ℓ ≤ i ≤ m and that x_t (2+ε, 1)-approximates x_i for m ≤ i ≤ t in this case. Consequently, compared to Algorithm 1, only superfluous weights are discarded in Algorithm 2 and the approximation guarantee follows by Theorem 1.

We now prove the claimed bound on the running time. Algorithm 2 explores a set of weights of cardinality 2γ + 1 ∈ O(1/ε ⋅ log(UB/LB)). The running time is obtained by bounding the number of calls to algorithm WS, which corresponds to the number of nodes of the binary tree implicitly developed by the algorithm. The height of this tree is in O(log(1/ε ⋅ log(UB/LB))).

In order to bound the number of nodes with two children in the tree, we observe that we generate such a node (i.e., add both the pair (ℓ, m) and the pair (m, t) to the set of weight ranges still to be explored) only if x_ℓ does not (1, 2+ε)-approximate x_m and x_m does not (2+ε, 1)-approximate x_ℓ, and also x_m does not (1, 2+ε)-approximate x_t and x_t does not (2+ε, 1)-approximate x_m. Hence, whenever a node with two children is generated, the corresponding solution x_m neither (1, 2+ε)- nor (2+ε, 1)-approximates any previously generated solution and vice versa, so their values in both of the two objective functions must differ by more than a factor of 2+ε. Using that the j-th objective value of any feasible point is between LB(j) and UB(j), this implies that there can be at most

 min{ log_{2+ε}(UB(1)/LB(1)), log_{2+ε}(UB(2)/LB(2)) } ∈ O(log(UB/LB))

nodes with two children in the tree.

Using the obtained bounds on the height of the tree and the number of nodes with two children, Lemma 2 shows that the total number of nodes in the tree is

 O(log(1/ε ⋅ log(UB/LB)) ⋅ log(UB/LB)),

which proves the claimed bound on the running time.

### 3.3 Tightness results

When solving the weighted sum problem exactly, Corollary 1 states that Algorithm 1 obtains a set A of approximation factors in which, for each α ∈ A, at least one component equals 1 and ∑_{j:αj>1} αj = p + ε.

The following theorem shows that this multi-factor approximation result is arbitrarily close to the best possible result obtainable by supported solutions:

###### Theorem 3

For $p\geq 2$ and $0<\varepsilon<p-1$, let

 $A\colonequals\left\{\alpha\in\mathbb{R}^p:\alpha_1,\dots,\alpha_p\geq 1,\ \alpha_i=1\text{ for at least one }i,\text{ and }\sum_{j:\alpha_j>1}\alpha_j=p-\varepsilon\right\}.$

Then there exists an instance of a $p$-objective minimization problem for which the set of supported solutions is not an $\alpha$-approximation for any $\alpha\in A$.

###### Proof

In the following, we only specify the set $Y$ of images. A corresponding instance consisting of a set $X$ of feasible solutions and an objective function $f$ can then easily be obtained, e.g., by setting $X\colonequals Y$ and letting $f$ be the identity mapping.

For $M>0$, let $Y\colonequals\{y^1,\dots,y^p,\hat{y}\}$ with $y^1=(0,M,\dots,M)$, $y^2=(M,0,M,\dots,M)$, …, $y^p=(M,\dots,M,0)$, and $\hat{y}=(\nicefrac{(M+1)}{p},\dots,\nicefrac{(M+1)}{p})$. Note that the point $\hat{y}$ is unsupported, while $y^1,\dots,y^p$ are supported (an illustration for the case $p=2$ is provided in Figure 3).

Moreover, for each $i$, the ratio of the $i$-th components of the points $y^j$ with $j\neq i$ and $\hat{y}$ is exactly

 $\frac{M}{\nicefrac{(M+1)}{p}}=\frac{p\cdot M}{M+1},$

which is larger than $p-\varepsilon$ for $M>\nicefrac{p}{\varepsilon}-1$. Since every $\alpha\in A$ satisfies $\alpha_j\leq p-\varepsilon$ for all $j$, the point $\hat{y}$ is, for such $M$, not $\alpha$-approximated by any of the supported points $y^1,\dots,y^p$ for any $\alpha\in A$, which proves the claim.

We remark that the set of points $Y$ constructed in the proof of Theorem 3 can easily be obtained from instances of many well-known multiobjective minimization problems such as multiobjective shortest path, multiobjective spanning tree, multiobjective minimum $s$-$t$-cut, or multiobjective TSP (for multiobjective shortest path, for example, a collection of disjoint $s$-$t$-paths whose cost vectors correspond to the points in $Y$ suffices). Consequently, the result from Theorem 3 holds for each of these specific problems as well.
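To make the tightness construction concrete, the following Python sketch (our own illustration; the helper name and the concrete values $p=3$, $\varepsilon=0.5$, $M=100$ are assumptions) builds $p$ supported points that each have one zero coordinate and value $M$ elsewhere, together with the point whose coordinates all equal $(M+1)/p$, and checks that no supported point $\alpha$-approximates the latter when every component of $\alpha$ is at most $p-\varepsilon$:

```python
def check_tightness(p=3, eps=0.5, M=100):
    """Numerically check the tightness configuration: p supported points
    with one zero coordinate and M elsewhere, plus the point with all
    coordinates equal to (M+1)/p."""
    supported = [tuple(0 if j == i else M for j in range(p)) for i in range(p)]
    y_hat = tuple((M + 1) / p for _ in range(p))

    # Ratio of the non-zero components of a supported point to y_hat:
    ratio = M / ((M + 1) / p)  # equals p*M/(M+1)

    # Every alpha with sum_{j: alpha_j > 1} alpha_j = p - eps has each
    # component at most p - eps, so we test approximability with the
    # largest admissible component-wise factor.
    max_factor = p - eps
    approximated = any(
        all(y[j] <= max_factor * y_hat[j] for j in range(p))
        for y in supported
    )
    return ratio, approximated

ratio, approximated = check_tightness()
```

With $M=100$ the ratio $pM/(M+1)=300/101$ already exceeds $p-\varepsilon=2.5$, so the check reports that $\hat{y}$ is not approximated.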

Moreover, note that also the classical approximation result obtained in Corollary 4 is arbitrarily close to best possible in case that the weighted sum problem is solved exactly: While Corollary 4 shows that a $(p+\varepsilon)$-approximation is obtained from Algorithm LABEL:alg:mainAlgo when solving the weighted sum problem exactly, the instance constructed in the proof of Theorem 3 shows that the supported solutions do not yield an approximation guarantee of $p-\varepsilon$ for any $\varepsilon>0$. This yields the following theorem:

###### Theorem 4

For any $p\geq 2$ and any $\varepsilon>0$, there exists an instance of a $p$-objective minimization problem for which the set of supported solutions is not a $(p-\varepsilon)$-approximation.
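For concreteness, in the biobjective case the construction behind Theorems 3 and 4 can be written out explicitly (our own worked instance, assuming the supported points have one zero coordinate as in the proof of Theorem 3):

```latex
% Biobjective instance (p = 2), M large:
y^1 = (0, M), \qquad y^2 = (M, 0), \qquad
\hat{y} = \Bigl(\tfrac{M+1}{2}, \tfrac{M+1}{2}\Bigr).
% Approximating \hat{y} by y^1 or y^2 requires a factor of at least
\frac{M}{(M+1)/2} = \frac{2M}{M+1} \;\xrightarrow{\;M\to\infty\;}\; 2,
% which exceeds 2 - \varepsilon as soon as M > 2/\varepsilon - 1.
```

Thus, for $M$ large enough, the two supported points fail to $(2-\varepsilon)$-approximate $\hat{y}$.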

## 4 Applications

Our results can be applied to a large variety of minimization problems since exact or approximate polynomial time algorithms are available for the weighted sum scalarization of many problems.

### 4.1 Problems with a polynomial time solvable weighted sum scalarization

If the weighted sum scalarization can be solved exactly in polynomial time, Corollary 1 shows that Algorithm LABEL:alg:mainAlgo yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee $\alpha$ such that $\sum_{j:\alpha_j>1}\alpha_j\leq p+\varepsilon$ and $\alpha_i=1$ for at least one $i$.

Many problems of this kind admit an MFPTAS, i.e., a $(1+\varepsilon,\dots,1+\varepsilon)$-approximation that can be computed in time polynomial in the encoding length of the input and $\nicefrac{1}{\varepsilon}$. The approximation guarantee we obtain is worse in this case, even though the sum of the approximation factors for which an error can be observed is roughly $p$ in both approaches. The running time, however, is usually significantly better in our approach.

For the multiobjective shortest path problem, for example, the existence of an MFPTAS was shown in [19], while several specific MFPTAS have been proposed since. Among these, the MFPTAS with the best running time is the one proposed in [21]. For the biobjective case, their running time for general digraphs with $n$ vertices and $m$ arcs is significantly larger than ours, which only requires one of the fastest algorithms for the single objective shortest path problem [20] as a subroutine; using Theorem 2 together with the same single objective algorithm, our running time improves even further.
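The subroutine our algorithm needs here is simply an exact single objective shortest path solver applied to the scalarized arc costs $w_1 c_1 + w_2 c_2$. A minimal Python sketch using textbook Dijkstra (our own illustration standing in for the faster specialized algorithm of [20]; the function name and input format are assumptions):

```python
import heapq

def weighted_sum_shortest_path(n, arcs, w, source):
    """Solve the weighted sum scalarization of a biobjective shortest path
    instance by running Dijkstra on the scalarized costs w[0]*c1 + w[1]*c2.
    arcs: list of (u, v, c1, c2) with nonnegative costs; returns the
    distance from source to every vertex."""
    adj = [[] for _ in range(n)]
    for u, v, c1, c2 in arcs:
        adj[u].append((v, w[0] * c1 + w[1] * c2))
    dist = [float("inf")] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return dist
```

Each call of the approximation algorithm to its weighted sum oracle then amounts to one such Dijkstra run with a different weight vector $w$.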

There are, however, also problems for which the weighted sum scalarization can be solved exactly in polynomial time, but whose multiobjective version does not admit an MFPTAS unless $\mathrm{P}=\mathrm{NP}$. For example, this is the case for the minimum $s$-$t$-cut problem [19]. For yet other problems, such as the minimum weight perfect matching problem, only a randomized MFPTAS is known so far [19]. In both cases, our algorithm can still be applied.

### 4.2 Problems with a polynomial time approximation scheme for the weighted sum scalarization

For problems where the weighted sum scalarization admits a polynomial time approximation scheme, Corollary 3 shows that Algorithm LABEL:alg:mainAlgo yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee $\alpha$ such that $\sum_{j:\alpha_j>1}\alpha_j\leq p+\varepsilon$. Thus, only the property that $\alpha_i=1$ for at least one $i$ is lost compared to the case where the weighted sum scalarization can be solved exactly in polynomial time.

Since there exists a vast variety of single objective problems that admit polynomial time approximation schemes, this result is also widely applicable.