1 Introduction
Truthful scheduling of unrelated parallel machines is a prototypical problem in algorithmic mechanism design, introduced in the seminal paper of Nisan and Ronen [NR99] that essentially initiated this field of research. It extends the classical combinatorial problem of makespan minimization (see, e.g., [Vaz03, Ch. 17] or [Hal97, Sec. 1.4]) with the added twist that machines are now rational, strategic agents that would not hesitate to lie about their actual processing times for each job, if this can reduce their personal cost, i.e., their own completion time. The goal is to design a scheduling mechanism, using payments as incentives for the machines to report their processing costs truthfully, that allocates all jobs so as to minimize the makespan, i.e., the maximum completion time across machines.
Nisan and Ronen [NR01] showed right away that no truthful deterministic mechanism can achieve an approximation better than 2 to the optimum makespan; this is true even for just two machines. It is worth emphasizing that this lower bound is not conditioned on any computational complexity assumptions; it is purely a direct consequence of the added truthfulness requirement and holds even for mechanisms with unbounded computational capabilities. It is interesting to compare this with the classical (i.e., non-strategic) algorithmic setting, where we do know [LST90] that a 2-approximate polynomial-time algorithm exists and that it is NP-hard to approximate the minimum makespan within a factor smaller than 3/2. On the positive side, it is also shown in [NR01] that the mechanism that myopically allocates each job to the machine with the fastest reported time for it, and compensates her with a payment equal to the report of the second-fastest machine, achieves an approximation ratio of $n$ (where $n$ is the number of machines); this mechanism is truthful and corresponds to the paradigmatic VCG mechanism (see, e.g., [Nis07]).
Based on these results, Nisan and Ronen [NR01, Conjecture 4.9] made the bold conjecture that their upper bound of $n$ is actually the tight answer for the approximation ratio of deterministic truthful scheduling; more than 20 years after the first conference version of their paper [NR99], though, very little progress has been made in closing their gap of $[2, n]$. Thus, the Nisan–Ronen conjecture remains up to this day one of the most important open questions in algorithmic mechanism design. Christodoulou et al. [CKV07] improved the lower bound to $1+\sqrt{2} \approx 2.414$, even for instances with only 3 machines and, soon after, Koutsoupias and Vidali [KV07] showed that by letting the number of machines grow arbitrarily the lower bound can be increased to $1+\varphi \approx 2.618$, where $\varphi$ is the golden ratio. The journal versions of these papers can be found at [CKV09] and [KV13], respectively. In our paper we provide the first improvement on this lower bound in well over a decade.
Another line of work tries to provide better lower bounds by imposing further assumptions on the mechanism, in addition to truthfulness. Most notably, Ashlagi et al. [ADL12] were actually able to resolve the Nisan–Ronen conjecture for the important special case of anonymous mechanisms, by providing a lower bound of $n$. The same can be shown for mechanisms with strongly-monotone allocation rules [MS18, Sec. 3.2] and for mechanisms with additive or local payment rules [NR01, Sec. 4.3.3].
Better bounds have also been achieved by modifying the scheduling model itself. For example, Lavi and Swamy [LS09] showed that if the processing times of all jobs can take only two values (“high” and “low”) then there exists a 2-approximate truthful mechanism; they also give a lower bound of 11/10. Very recently, Christodoulou et al. [CKK20] showed a stronger lower bound for a slightly generalized model where the completion times of machines are allowed to be submodular functions (of the costs of the jobs assigned to them), instead of additive as in the standard setting.
Although in this paper we focus exclusively on deterministic mechanisms, randomization is also of great interest and has attracted a significant amount of attention [NR01, MS18, Yu09], in particular for the two-machine case [LY08b, LY08a, Lu09, CDZ15, KV19]. The currently best general lower bound on the approximation ratio of randomized (universally) truthful mechanisms is $2 - 1/n$ [MS18], while the best upper bound is due to [LY08a]. For the more relaxed notion of truthfulness in expectation, an improved upper bound is given in [LY08b]. Related to the randomized case is also the fractional model, where mechanisms (but also the optimum makespan itself) are allowed to split jobs among machines. For this case, [CKK10] prove lower and upper bounds of $2 - 1/n$ and $(n+1)/2$, respectively; the latter is also shown to be tight for task-independent mechanisms.
Other variants of the strategic unrelated machine scheduling problem that have been studied include the Bayesian model [CHMS13, DW15, GK17] (where job costs are drawn from probability distributions), scheduling without payments [Kou14, GKK19] or with verification [NR01, PV14, Ven14], and strategic behaviour beyond (dominant-strategy) truthfulness [FRGL19]. The related machines model, which is essentially a single-dimensional mechanism design variant of our problem, has of course also been well-studied (see, e.g., [AT01, DDDR11, APPP09]) and a deterministic PTAS exists [CK13].
1.1 Our Results and Techniques
We present new lower bounds on the approximation ratio of deterministic truthful mechanisms for the prototypical problem of scheduling unrelated parallel machines, under the makespan minimization objective, introduced in the seminal work of Nisan and Ronen [NR01]. Our main result (Theorem 2) is a bound of $\rho$, where $\rho$ is the (unique real) solution of the cubic equation (6). This improves upon the lower bound of $1+\varphi \approx 2.618$ by Koutsoupias and Vidali [KV13], which appeared well over a decade ago [KV07]. Similar to [KV13], we use a family of instances with the number of machines $n$ growing arbitrarily large ($n \to \infty$).
Furthermore, our construction (see Section 3.4) provides improved lower bounds also pointwise, as a function of the number of machines $n$ that we are allowed to use. More specifically, for $n = 3$ we recover the $1+\sqrt{2} \approx 2.414$ bound of [CKV09]. For $n = 4$ we can already match the $1+\varphi \approx 2.618$ bound that [KV13] could achieve only in the limit as $n \to \infty$. The first strict improvement comes from $n = 5$. As the number of machines grows, our bound converges to $\rho$. Our results are summarized in Table 1.
A central feature of our approach is the formulation of our lower bound as the solution to a (nonlinear) optimization programme (NLP); we then provide optimal, analytic solutions to it for all values of $n$ (Lemma 3). It is important to clarify here that, in principle, just giving feasible solutions to this programme would still suffice to provide valid lower bounds for our problem. However, the fact that we pin down and use the actual optimal ones gives rise to an interesting implication: our lower bounds are provably the best ones that can be derived using our construction.
There are two key elements that allow us to derive our improved bounds, compared to the approach in previous related works [CKV09, KV13]. First, we deploy the weak-monotonicity characterization of truthfulness (Theorem 1) in a slightly more delicate way; see Lemma 1. This gives us better control and flexibility in considering deviating strategies for the machines (see our case analysis in Section 3). Secondly, we consider more involved instances, with two auxiliary parameters (see, e.g., (3) and (4)) instead of just one. On the one hand, this increases the complexity of the solution, which now has to be expressed in an implicit way via the aforementioned optimization programme (NLP). But at the same time, fine-tuning the optimal choice of the variables allows us to (provably) push our technique to its limits. Finally, let us mention that, for a small number of machines, the optimal choice collapses the two parameters into a single free one, and our construction becomes closer to that of [CKV09, KV13]; in fact, for $n = 3$ machines it is essentially the same construction as in [CKV09] (which explains why we recover the same lower bound). However, for more machines we need a more delicate choice of the parameters.
2 Notation and Preliminaries
Before we go into the construction of our lower bound (Section 3), we use this section to introduce basic notation and recall the notions of mechanism, truthfulness, monotonicity, and approximation ratio. We also provide a technical tool (Lemma 1) that is a consequence of weak monotonicity (Theorem 1); this lemma will be used several times in the proof of our main result.
2.1 Unrelated Machine Scheduling
In the unrelated machine scheduling setting, we have a number $n$ of machines and a number $m$ of tasks to allocate to these machines. These tasks can be performed in any order, and each task has to be assigned to exactly one machine; machine $i$ requires $t_{ij}$ units of time to process task $j$. Hence, the complete description of a problem instance can be given by the $n \times m$ cost matrix of the values $t_{ij}$, which we denote by $t$. In this matrix, row $i$, denoted by $t_i$, represents the processing times of machine $i$ (on the different tasks) and column $j$, denoted by $t^j$, represents the processing times for task $j$ (on the different machines). These values are assumed to be nonnegative real quantities, $t_{ij} \geq 0$.
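To fix ideas, a cost matrix can be stored row-major, with rows indexed by machines and columns by tasks; a small Python sketch (the variable and function names here are ours, not the paper's):

```python
# A cost matrix t for 2 machines and 3 tasks, stored row-major:
# t[i][j] is the time machine i needs to process task j.
t = [
    [1.0, 4.0, 2.0],   # row t_1: processing times of machine 1
    [3.0, 1.0, 5.0],   # row t_2: processing times of machine 2
]

def row(t, i):
    """Row t_i: the processing times of machine i (0-indexed)."""
    return t[i]

def column(t, j):
    """Column t^j: the processing times of task j across machines."""
    return [t_i[j] for t_i in t]

assert row(t, 0) == [1.0, 4.0, 2.0]
assert column(t, 1) == [4.0, 1.0]
```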
Applying the methodology of mechanism design, we assume that the processing times for machine are known only by machine herself. Moreover, machines are selfish agents; in particular, they are not interested in running a task unless they receive some compensation for doing so. They may also lie about their processing times if this would benefit them. This leads us to consider the central notion of (directrevelation) mechanisms: each machine reports her values, and a mechanism decides on an allocation of tasks to machines, as well as corresponding payments, based on the reported values.
Definition 1 (Allocation rule, payment rule, mechanism).
Given machines and tasks,

a (deterministic) allocation rule is a function that describes the allocation of tasks to machines for each problem instance. Formally, it is represented as a function $x$ mapping every cost matrix $t$ to a 0/1 matrix $x(t)$ such that, for every $t$ and every task $j$, there is exactly one machine $i$ with $x_{ij}(t) = 1$, that is,
$$\sum_{i=1}^{n} x_{ij}(t) = 1 \quad \text{for every task } j; \tag{1}$$
a payment rule is a function that describes the payments to the machines for each problem instance. Formally, it is represented as a function $p$ mapping every cost matrix $t$ to a vector $p(t) = (p_1(t), \dots, p_n(t)) \in \mathbb{R}^n$;

a (direct-revelation, deterministic) mechanism is a pair $(x, p)$ consisting of an allocation rule and a payment rule.
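Condition (1), that every task goes to exactly one machine, is easy to check programmatically; a minimal sketch (the naming is our own):

```python
def is_feasible(x):
    """Check condition (1): every task is assigned to exactly one machine.

    x is an n-by-m 0/1 matrix; x[i][j] == 1 iff machine i gets task j.
    """
    m = len(x[0])
    return all(sum(x[i][j] for i in range(len(x))) == 1 for j in range(m))

# Two machines, three tasks: machine 0 gets tasks 0 and 2, machine 1 gets task 1.
x_good = [[1, 0, 1],
          [0, 1, 0]]
x_bad  = [[1, 0, 1],
          [0, 0, 0]]   # task 1 is left unassigned: infeasible

assert is_feasible(x_good)
assert not is_feasible(x_bad)
```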
We let $\mathcal{A}$ denote the set of feasible allocations, that is, 0/1 matrices satisfying (1). Given a feasible allocation $x \in \mathcal{A}$, we let $x_i$ denote its row $i$, that is, the allocation to machine $i$. Similarly, given a payment vector $p$, we let $p_i$ denote the payment to machine $i$; note that the payments represent an amount of money given to the machine, which is somewhat the opposite situation compared to other mechanism design frameworks (such as auctions, where payments are made by the agents to the mechanism designer).
2.2 Truthfulness and Monotonicity
Whenever a mechanism assigns an allocation $x_i$ and a payment $p_i$ to machine $i$, this machine incurs a quasilinear utility equal to her payment minus the sum of the true processing times of the tasks allocated to her,
$$u_i = p_i - \sum_{j=1}^{m} x_{ij} t_{ij}.$$
Note that the above quantity depends on both the machine’s true and reported processing times, which in principle might differ: the allocation and the payment are computed from the reports, while the processing times incurred are the true ones. As already explained, machines behave selfishly. Thus, from the point of view of a mechanism designer, we wish to ensure a predictable behaviour of all parties involved. In particular, we are only interested in mechanisms that encourage agents to report their true valuations.
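The quasilinear utility of a machine, given her true cost row, her 0/1 allocation row, and her payment, can be sketched as follows (an illustration with our own naming):

```python
def utility(t_true_i, x_i, p_i):
    """Quasi-linear utility of machine i: payment minus the true
    processing times of the tasks allocated to her (x_i is a 0/1 row)."""
    return p_i - sum(tij for tij, xij in zip(t_true_i, x_i) if xij == 1)

# Machine gets tasks 0 and 2 (true costs 2.0 and 1.0) and is paid 4.0.
assert utility([2.0, 5.0, 1.0], [1, 0, 1], 4.0) == 1.0
```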
Definition 2 (Truthful mechanism).
A mechanism is truthful if every machine maximizes her utility by reporting truthfully, regardless of the reports of the other machines. Formally, for every machine $i$, every cost matrix $t$, and every alternative report $t_i'$, we have that
$$p_i(t) - \sum_{j} x_{ij}(t)\, t_{ij} \;\geq\; p_i(t_i', t_{-i}) - \sum_{j} x_{ij}(t_i', t_{-i})\, t_{ij}. \tag{TR}$$
In (TR), we “freeze” the reports of all machines other than $i$. The left-hand side corresponds to the utility achieved by machine $i$ when her processing times are given by $t_i$ and she truthfully reports $t_i$. The right-hand side corresponds to the utility achieved if machine $i$ lies and reports $t_i'$ instead.
The most important example of a truthful mechanism in this setting is the VCG mechanism, which assigns each task independently to the machine that can perform it fastest, and pays that machine (for that task) a value equal to the second-lowest processing time. Note that this is somewhat the equivalent, for the scheduling setting, of second-price auctions that sell each item independently.
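The task-independent VCG mechanism just described can be sketched as follows; this is an illustration of the allocation and payment rules, with our own naming, assuming at least two machines and ties broken by machine index:

```python
def vcg_schedule(t):
    """Task-independent VCG: each task goes to the machine with the lowest
    reported time, and that machine is paid the second-lowest report."""
    n, m = len(t), len(t[0])
    x = [[0] * m for _ in range(n)]
    p = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: t[i][j])  # fastest first
        winner, runner_up = order[0], order[1]
        x[winner][j] = 1
        p[winner] += t[runner_up][j]  # second-price payment for task j
    return x, p

t = [[1.0, 4.0],
     [3.0, 2.0]]
x, p = vcg_schedule(t)
assert x == [[1, 0], [0, 1]]   # each machine wins the task it is fastest on
assert p == [3.0, 4.0]         # each is paid the second-lowest report
```

Truthfulness here follows task by task from the usual second-price argument: a machine's payment for a task does not depend on her own report.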
A fundamental result in the theory of mechanism design is the very useful property of truthful mechanisms, in terms of “local” monotonicity of the allocation function with respect to singlemachine deviations.
Theorem 1 (Weak monotonicity [NR01, LS09]).
Let $t$ be a cost matrix, $i$ be a machine, and $t_i'$ another report from machine $i$. Let $x_i$ be the allocation of $i$ for cost matrix $t$ and $x_i'$ be the allocation of $i$ for cost matrix $(t_i', t_{-i})$. Then, if the mechanism is truthful, it must be that
$$(t_i - t_i') \cdot (x_i - x_i') \;\leq\; 0. \tag{WMON}$$
As a matter of fact, (WMON) is also a sufficient condition for truthfulness, thus providing an exact characterization of truthful mechanisms [SY05]. However, for our purposes in this paper we will only need the direction stated in Theorem 1 above. We will make use of the following lemma, which exploits the notion of weak monotonicity in a straightforward way. The second part of this lemma can be understood as a refinement of a technical lemma that appeared before in [CKV09, Lemma 2] (see also [KV13, Lemma 1]).
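The (WMON) condition for a single machine's deviation is a simple inner-product check; a sketch (our own naming):

```python
def wmon_holds(t_i, t_i_new, x_i, x_i_new):
    """Check (WMON): (t_i - t_i') . (x_i - x_i') <= 0, where t_i, t_i'
    are a machine's two reports and x_i, x_i' her two 0/1 allocations."""
    return sum((a - b) * (xa - xb)
               for a, b, xa, xb in zip(t_i, t_i_new, x_i, x_i_new)) <= 0

# Machine lowers her report on task 0 and thereby wins it: consistent with WMON.
assert wmon_holds([5.0, 2.0], [1.0, 2.0], [0, 1], [1, 1])
# Raising a report and *gaining* the task would violate WMON.
assert not wmon_holds([1.0, 2.0], [5.0, 2.0], [0, 1], [1, 1])
```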
Lemma 1.
Suppose that machine $i$ changes her report from $t_i$ to $t_i'$, and that a truthful mechanism correspondingly changes her allocation from $x_i$ to $x_i'$. Let $(S_1, S_2, S_3)$ be a partition of the tasks into three disjoint sets.

Suppose that (a) the costs of $i$ on $S_1$ do not change, that is, $t_{ij} = t'_{ij}$ for all $j \in S_1$, and (b) the allocation of $i$ on $S_2$ does not change, that is, $x_{ij} = x'_{ij}$ for all $j \in S_2$. Then
$$\sum_{j \in S_3} (t_{ij} - t'_{ij})(x_{ij} - x'_{ij}) \;\leq\; 0.$$

Suppose additionally that (c) the costs of $i$ strictly decrease on her allocated tasks in $S_3$ and strictly increase on her unallocated tasks in $S_3$. Then her allocation on $S_3$ does not change, that is, $x_{ij} = x'_{ij}$ for all $j \in S_3$.
Proof.
To prove the first point, simply apply (WMON) and split the sum into the three sets of tasks,
$$0 \;\geq\; (t_i - t_i') \cdot (x_i - x_i') \;=\; \sum_{j \in S_1} (t_{ij} - t'_{ij})(x_{ij} - x'_{ij}) + \sum_{j \in S_2} (t_{ij} - t'_{ij})(x_{ij} - x'_{ij}) + \sum_{j \in S_3} (t_{ij} - t'_{ij})(x_{ij} - x'_{ij});$$
since $t_{ij} = t'_{ij}$ on $S_1$ and $x_{ij} = x'_{ij}$ on $S_2$, the first two sums vanish and the result follows.
To prove the second point, we look at each term appearing in the sum $\sum_{j \in S_3} (t_{ij} - t'_{ij})(x_{ij} - x'_{ij})$. Let $j \in S_3$ be a task which was originally allocated to machine $i$; then, $x_{ij} = 1$ and, by assumption, $t_{ij} - t'_{ij} > 0$. Since $x'_{ij}$ is either $0$ or $1$, it follows that the term $(t_{ij} - t'_{ij})(x_{ij} - x'_{ij})$ is either $0$ (if the allocation does not change) or strictly positive (if the allocation changes). Similarly, assume now that $j \in S_3$ was originally not allocated to machine $i$; then, $x_{ij} = 0$ and, by assumption, $t_{ij} - t'_{ij} < 0$. Since $x'_{ij}$ is either $0$ or $1$, the term is again either $0$ (if the allocation does not change) or strictly positive (if the allocation changes). By the first point, the sum over all these terms must be nonpositive. We conclude that all these terms must be zero, and hence the allocation of machine $i$ for the tasks in $S_3$ must not change. ∎
2.3 Approximation ratio
One of the main open questions in the theory of algorithmic mechanism design is to figure out what is the “best” possible truthful mechanism, with respect to the objective of makespan minimization. This can be quantified in terms of the approximation ratio of a mechanism.
Definition 3.
Given machines and tasks:

Let $x$ be a feasible allocation and $t$ a problem instance. The makespan of $x$ on $t$ is defined as the quantity
$$\mathrm{MS}(x, t) = \max_{i} \sum_{j} x_{ij} t_{ij}.$$

Let $t$ be a problem instance. The optimal makespan is defined as the quantity
$$\mathrm{OPT}(t) = \min_{x \in \mathcal{A}} \mathrm{MS}(x, t).$$

Let $x$ be an allocation rule. We say that $x$ has approximation ratio $\rho \geq 1$ if, for any problem instance $t$, we have that
$$\mathrm{MS}(x(t), t) \;\leq\; \rho \cdot \mathrm{OPT}(t);$$
if no such quantity exists, we say that $x$ has infinite approximation ratio.
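Both quantities of Definition 3 can be computed directly on small instances; the brute-force optimum below is exponential in the number of tasks and is meant purely as an illustration (naming is our own):

```python
from itertools import product

def makespan(x, t):
    """Makespan of allocation x on instance t: the maximum total
    processing time over machines."""
    return max(sum(tij for tij, xij in zip(row_t, row_x) if xij)
               for row_t, row_x in zip(t, x))

def opt_makespan(t):
    """Optimal makespan by brute force: try every assignment of each
    task to one of the n machines (exponential, illustration only)."""
    n, m = len(t), len(t[0])
    best = float("inf")
    for assignment in product(range(n), repeat=m):
        x = [[1 if assignment[j] == i else 0 for j in range(m)]
             for i in range(n)]
        best = min(best, makespan(x, t))
    return best

t = [[1.0, 4.0],
     [3.0, 2.0]]
assert opt_makespan(t) == 2.0  # assign each task to its fastest machine
```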
As shown in [NR01], the VCG mechanism has an approximation ratio of $n$, the number of machines. The long-standing conjecture by Nisan and Ronen states that this mechanism is essentially the best one; any truthful mechanism is believed to attain a worst-case approximation ratio of at least $n$ (for sufficiently many tasks). In this paper, we prove lower bounds on the approximation ratio of any truthful mechanism (see Theorem 2); our bounds converge to $\rho$, the root of the cubic equation (6), as $n \to \infty$.
3 Lower Bound
To prove our lower bound, from here on we assume $n \geq 3$ machines, since the case of a single machine is trivial and the case of two machines is resolved by [NR99] (with an approximation ratio of 2). Our construction will be made with the choice of two parameters; for now we shall simply treat them as fixed. Later we will optimize their choice in order to achieve the best lower bound possible by our construction.
We will use $D(x)$ to denote the square matrix with $x$ on its diagonal and $\infty$ everywhere else.
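As a sketch, such a diagonal-$x$, infinity-elsewhere matrix can be built as follows (the name `D` mirrors our notation; the construction is an illustration):

```python
INF = float("inf")

def D(x, n):
    """The n-by-n matrix with x on its diagonal and infinity elsewhere."""
    return [[x if i == j else INF for j in range(n)] for i in range(n)]

assert D(1.0, 2) == [[1.0, INF], [INF, 1.0]]
```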
We should mention here that allowing entries equal to $\infty$ is a technical convenience. If only finite values are allowed, we can replace $\infty$ by an arbitrarily high value. We also follow the usual convention and use an asterisk to denote a full or partial allocation. Our lower bound begins with the following cost matrix:
(2) 
The tasks of cost matrix (2) can be partitioned in two groups. The first $n$ tasks (i.e., the ones corresponding to the $D(\cdot)$ submatrix) will be called dummy tasks. Machine $i$ has a finite cost for dummy task $i$ and a cost of $\infty$ for all other dummy tasks. The remaining tasks will be called proper tasks. Notice that machines 1 and 2 have the same costs for the proper tasks. Finally, for $i \geq 3$, machine $i$ has a finite cost on a single proper task and a cost of $\infty$ for all other proper tasks.
In order for a mechanism to have a finite approximation ratio, it must not assign any tasks with unbounded costs. In particular, each dummy task must be assigned to the unique machine that completes it in finite time, and the first proper task must be assigned to either machine 1 or machine 2. Since the costs of machines 1 and 2 are the same on all proper tasks, we can without loss of generality assume that machine 1 receives this task. Hence, the allocation should be as designated by the asterisks in (2).
Next, we reduce the costs of all proper tasks for machine 1, and get the cost matrix
(3) 
Under the new matrix (3), the cost of machine 1 for her first proper task is reduced, and her cost for every other proper task is reduced by a constant factor. The key idea in this step is the following: we want to impose a constraint on the two parameters that ensures that at least one of the proper tasks is still allocated to machine 1. Using the properties of truthfulness, this can be achieved via the following lemma:
Lemma 2.
Consider a truthful scheduling mechanism that, on cost matrix (2), assigns the first proper task to machine 1. Suppose also that
(4) 
Then, on cost matrix (3), machine 1 must receive at least one of the first two proper tasks.
Proof.
We apply part 1 of Lemma 1, taking $S_1 = \emptyset$, $S_2$ as the set of dummy tasks, and $S_3$ as the set of proper tasks. If $x$, $x'$ denote the allocations of machine 1 for cost matrices (2), (3), respectively, we get that
Assume further, for the sake of obtaining a contradiction, that on cost matrix , machine 1 does not get either task or ; that is, . Notice that (since machine 1 gets task on cost matrix ) and we have the lower bounds as well as for . Combining all these, we get
where in the last step we observe that the terms for tasks form a telescoping sum. Thus, we obtain that , which contradicts our original assumption (4). ∎
For the remainder of our construction, we assume that and are such that (4) is satisfied. Next, we split the analysis depending on the allocation of the proper tasks to machine on cost matrix , as restricted by Lemma 2.
3.1 Case 1: Machine gets all proper tasks
In this case, we perform the following changes to machine 1’s costs, obtaining a new cost matrix. We increase the cost of her dummy task 1, and we decrease the costs of all her proper tasks by an arbitrarily small amount. Notice that

for the mechanism to achieve a finite approximation ratio, it must still allocate the dummy task 1 to machine 1;

given that the mechanism does not change the allocation on dummy task 1, and that machine 1 only decreases the completion times of her proper tasks, part 2 of Lemma 1 implies that machine 1 still gets all proper tasks.
Thus, the allocation must be as shown below (for ease of exposition, in the cost matrices that follow we omit the “arbitrarily small” amounts by which we change allocated / unallocated tasks):
This allocation achieves a makespan of , while a makespan of can be achieved by assigning each proper task to machine . Hence, this case yields an approximation ratio of at least .
3.2 Case 2: Machine 1 gets the first proper task, but does not get all proper tasks.
That is, at least one of the proper tasks is not assigned to machine 1. Suppose that task $j$ is the lowest-indexed proper task that is not allocated to her. We decrease the costs of her allocated proper tasks, while increasing the cost of her (unallocated) proper task $j$ by an arbitrarily small amount. By Lemma 1, the allocation of machine 1 on the proper tasks does not change. Hence we get a cost matrix of the form
Since task $j$ is not allocated to machine 1, and the mechanism has finite approximation ratio, it must be allocated to one of the other machines with finite cost for it. In either case, we increase the cost of the dummy task of this machine, while decreasing the cost of her proper task by an arbitrarily small amount. For example, if this machine got task $j$, we would end up with
Similarly to the previous Case 1, the mechanism must still allocate the dummy task to this machine, and given that the allocation does not change on the dummy task, Lemma 1 implies that the allocation must also remain unchanged on the proper task . Finally, observe that the present allocation achieves a makespan of at least , while a makespan of can be achieved by assigning proper task to machine and proper task to machine , for . Hence, this case yields an approximation ratio of at least
3.3 Case 3: Machine 1 does not get the first proper task
By Lemma 2, machine 1 must then receive the second proper task. In this case, we decrease the cost of this task, while increasing the cost of her (unallocated) first proper task by an arbitrarily small amount. Since, by truthfulness, the allocation of machine 1 for these two tasks does not change, the allocation must be as below:
Since the first proper task is not allocated to machine 1, and the mechanism has finite approximation ratio, it must be allocated to machine 2. We now increase the cost of the dummy task of machine 2, while decreasing the cost of her proper task by an arbitrarily small amount. Similarly to Cases 1 and 2, the mechanism must still allocate the dummy task to machine 2, and preserve the allocation of machine 2 on the proper task. Thus, we get the allocation shown below:
This allocation achieves a makespan of at least , while a makespan of can be achieved by assigning proper tasks to machine and proper task to machine , for all . Hence, this case yields an approximation ratio of at least
3.4 Main result
The three cases considered above give rise to possibly different approximation ratios; our construction will then yield a lower bound equal to the smallest of these ratios. First notice that Case 3 always gives a worse bound than Case 2: the approximation ratio for the former is , whereas for the latter it is . Thus we only have to consider the minimum between Cases 1 and 3.
Our goal then is to find a choice of the two parameters that achieves the largest possible such value. We can formulate this as a nonlinear optimization problem over these parameters. To simplify the exposition, we also consider an auxiliary variable, which will be set to the minimum of the approximation ratios:
This can be enforced by the constraints , and . Thus, our optimization problem becomes
(NLP)  
s.t.  
Notice that any feasible solution of (NLP) gives rise to a lower bound on the approximation ratio of truthful machine scheduling. In our next lemma, we characterize the limiting optimal solution of the above optimization problem. Thus, the lower bound achieved corresponds to the best possible lower bound using the general construction in this paper.
Lemma 3.
An optimal solution to the optimization problem given by (NLP) is as follows.

For , choose , , and as the positive solution of the equation

For , choose , , and as the positive solution of the equation
(5)
We defer the (admittedly technical) proof of Lemma 3 to Section 3.5 below; for the time being, we show how this lemma allows us to prove our main result.
Theorem 2.
No deterministic truthful mechanism for unrelated machine scheduling can have an approximation ratio better than $\rho$, where $\rho$ is the (unique real) solution of the equation
(6) 
For a restricted number of machines, the corresponding lower bounds can be seen in Table 1.
Proof.
For large enough $n$ we can use Case 2 of Lemma 3. In particular, taking the limit of (5) as $n \to \infty$, we can ensure a lower bound of $\rho$, where $\rho$ is the (unique) real solution of the equation
Performing the transformation , and multiplying throughout by , we get exactly (6).
For a fixed number of machines $n$, we can directly solve the equations given by either Case 1 or Case 2 of Lemma 3 to derive the corresponding lower bound. In particular, for $n = 3$ and $n = 4$ one gets the bounds $1+\sqrt{2} \approx 2.414$ and $1+\varphi \approx 2.618$ (where $\varphi$ is the golden ratio), respectively. The values for larger numbers of machines are given in Table 1. ∎
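Roots of the low-degree polynomial equations appearing in Lemma 3 and Theorem 2 are easy to approximate numerically, e.g., by bisection; as an illustration (with our own naming) we compute the golden ratio, the positive root of $x^2 - x - 1 = 0$ mentioned in the proof above:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection, assuming f is continuous
    and f(lo), f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(mid) < 0) == (flo < 0):
            lo, flo = mid, f(mid)  # root lies in the upper half
        else:
            hi = mid               # root lies in the lower half
    return (lo + hi) / 2

# The golden ratio is the positive root of x^2 - x - 1 = 0.
phi = bisect_root(lambda x: x * x - x - 1, 1.0, 2.0)
assert abs(phi - (1 + 5 ** 0.5) / 2) < 1e-9
```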
3.5 Proof of Lemma 3
For the remainder of the paper we focus on proving Lemma 3, that is, we characterize the limiting optimal solution of (NLP). We begin by introducing a new variable, and restate the problem in terms of it.
(7)  
s.t.  
Notice that the objective function, defined on the feasibility domain, has a continuous extension to the closure of this domain, which is a compact set. By the extreme value theorem, the continuous extension must achieve its supremum at some point of this closure; that is to say, the supremum of (7) corresponds to the maximum of the relaxed problem,
s.t.  
(8) 
which always exists.
Let be an optimal solution. Our next step is to prove that . Suppose otherwise; then, since , one must have that either
(9) 
We will show that, under such circumstances, we could find a perturbed solution with a strictly better objective value, thus yielding a contradiction. Our analysis proceeds in three cases.
Case 1: . This implies that , and thus . Also, since for , (8) is not tight, that is to say, . Thus, we can increase the variable by an arbitrarily small amount, yielding a feasible solution with a strictly better objective value.
Case 3: and . Take sufficiently small and perturb to a new pair , so that , , and (9) remains valid. Notice that, under this perturbation, decreases, increases, and remains constant. Hence, we do not leave the feasibility region; in particular, (8) can be written as , and this inequality can only remain valid after the perturbation. Finally, the perturbation increases both left-hand sides and decreases both right-hand sides of (9). Therefore, the perturbed pair gives rise to a strictly better objective value.
We have thus deduced that in an optimal solution. This allows us to restate the optimization problem,
s.t.  
Further rearranging, and removing unnecessary inequalities, yields
s.t.  
Next observe that we can remove the dependency on by setting , as long as a feasible choice of exists. Thus, we end up with
s.t.  (10)  
(11)  
(12) 
Notice that (12) is redundant and can be removed. Also, we can rewrite (10) and (11) as
In both of the above inequalities, the left-hand side is decreasing in , from as to at , whereas the right-hand side is increasing in , from either or at to at . Hence, there are unique positive solutions , to the equations
and moreover, (10), (11) are equivalent to , respectively. Since is a valid feasible point, we also get that . Since our goal is to maximize , this is obtained by taking to be the maximum of , .
We can finally convert back to the original variables. Since , . We recover via . Also, the reciprocals of , correspond to the unique positive solutions , to the equations
(13)  