Unbounded lower bound for k-server against weak adversaries

November 5, 2019 ∙ by Marcin Bienkowski et al.

We study the resource augmented version of the k-server problem, also known as the k-server problem against weak adversaries or the (h,k)-server problem. In this setting, an online algorithm using k servers is compared to an offline algorithm using h servers, where h < k. For uniform metrics, it has been known since the seminal work of Sleator and Tarjan (1985) that for any ϵ > 0, the competitive ratio drops to a constant if k = (1+ϵ) · h. This result was later generalized to weighted stars (Young 1994) and trees of bounded depth (Bansal et al. 2017). The main open problem for this setting is whether a similar phenomenon occurs on general metrics. We resolve this question negatively. With a simple recursive construction, we show that the competitive ratio is at least Ω(log log h), even as k → ∞. Our lower bound holds for both deterministic and randomized algorithms. It also disproves the existence of a competitive algorithm for the infinite server problem on general metrics.


1 Introduction

The k-server problem is one of the most well-studied and influential online problems in competitive analysis, defined in 1990 by Manasse et al. [MMS90]. It generalizes many problems in which an algorithm has to maintain a feasible state while satisfying a sequence of requests. Formally, the k-server problem is defined as follows. There are k servers in a metric space, and a sequence of requests to metric space points appears online. In response to a request r, an algorithm has to move its servers so that one of them ends at point r. The goal is to minimize the cost, defined as the total distance traveled by all servers.
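To make the cost objective concrete, the following minimal sketch (our illustration, not code from the paper) simulates the k-server cost accounting on the real line with a naive nearest-server policy; the metric, the request sequence, and the policy itself are illustrative assumptions.

```python
# Minimal k-server cost accounting on the real line (illustrative only).
# The greedy nearest-server policy is known to be non-competitive in general;
# it is used here only to demonstrate the movement-cost objective.

def serve_greedy(servers, requests):
    """Serve each request with the nearest server; return total distance moved."""
    servers = list(servers)
    cost = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)  # distance traveled by the chosen server
        servers[i] = r               # that server now sits on the request
    return cost

# Two servers thrash between nearby requests, paying total cost 10.
print(serve_greedy(servers=[0.0, 10.0], requests=[4.0, 6.0, 4.0, 6.0]))
```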

1.1 From uniform to general metrics

The definition of the k-server problem is deceivingly simple, but it has led to substantial progress in many branches of competitive analysis. Historically, the results were obtained first for the case of a uniform metric space: the k-server problem is then equivalent to the paging problem with a cache of size k [ST85]. In particular, the competitive ratio for paging is k for deterministic algorithms, and there is a lower bound of k that holds for arbitrary metric spaces of more than k points [MMS90]. This led to the bold k-server conjecture [MMS90], stating that this ratio is k for all metric spaces. Following several papers proving an upper bound of k for particular metrics (e.g., trees or lines), the conjecture was positively resolved (in the asymptotic sense) by the celebrated upper bound of 2k−1 due to Koutsoupias and Papadimitriou [KP95]. For a more thorough treatment of the history of deterministic approaches, see the survey by Koutsoupias [Kou09].

Similarly, randomized competitive solutions for uniform metrics [MS91, ACN00, FKL91] showed that the achievable competitive ratio is exactly H_k = Θ(log k) and led to the analogous randomized k-server conjecture, stating that the randomized competitive ratio is Θ(log k) on arbitrary metrics. Some cornerstone results towards resolving this conjecture deserve closer attention. On the lower bound side, Bartal et al. [BBM06] used Ramsey-type phenomena for metric spaces to show that the randomized competitive ratio is at least Ω(log k / log² log k) for any metric space. (In the description of all lower bounds on the competitive ratio for the k-server problem, we silently assume that the metric space in question has more than k points.) On the algorithmic side, a major breakthrough (building on a long line of results for particular metrics) was obtained by Bansal et al. [BBMN15], who constructed an algorithm of ratio poly-logarithmic in the number of metric space points, based on HST embeddings (hierarchically separated trees) and the so-called fractional allocation problem. It was recently improved by Bubeck et al. [BCL18], who used mirror descent dynamics with multi-scale entropic regularization to obtain an O(log² k)-competitive algorithm on HSTs and an O(log² k · log n)-competitive algorithm on general n-point metrics. Based on this, Lee [Lee18] proposed a dynamic embedding technique to achieve a competitive ratio poly-logarithmic in k on arbitrary metrics.

1.2 Weak adversaries

A way to compensate for the online algorithm’s lack of knowledge of the future is to assume that the algorithm has more “resources” than the offline optimum it is compared to. This natural concept, called resource augmentation, has led to spectacular success for online scheduling problems (see e.g. [KP00, PSTW02]) and can be a way to overcome pessimistic worst-case bounds of the original setting. In the context of the k-server problem, it is also known as the weak adversaries model [Kou99, BEJ18] or the (h,k)-server problem: an online algorithm with k servers is compared to an optimal algorithm (an adversary) with h ≤ k servers. For a metric space M, we denote by R(h,k,M) and R_rand(h,k,M) the best competitive ratio of deterministic and randomized algorithms, respectively, for the (h,k)-server problem on M.

Again, the first results for the (h,k)-server problem were developed for uniform metrics: Sleator and Tarjan [ST85] gave an exact answer of k/(k−h+1), with the upper bound being achieved by the LRU (least recently used) paging strategy. This implies that having k = (1+ϵ) · h servers suffices to attain a constant competitive ratio of roughly (1+ϵ)/ϵ. It is natural to ask whether such a phenomenon extends to other metrics. This question was raised already by Manasse et al. [MMS90] when they introduced the k-server problem.
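The uniform-metric bound is easy to probe empirically. The sketch below (our illustration; all names and parameters are hypothetical) compares LRU with a cache of size k against the offline-optimal Belady rule with h slots; for k = 2h the measured fault ratio respects the k/(k−h+1) ≤ 2 guarantee, while adversarial (not random) sequences are needed to push the ratio toward k at k = h.

```python
import random

def lru_cost(k, requests):
    """Number of page faults of LRU with a cache of size k."""
    cache = []  # least recently used first, most recently used last
    faults = 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)  # evict the least recently used page
        cache.append(p)
    return faults

def opt_cost(h, requests):
    """Page faults of Belady's offline-optimal rule with h slots:
    on a fault, evict the page whose next use lies furthest in the future."""
    cache, faults = set(), 0
    for t, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == h:
            future = requests[t + 1:]
            def next_use(q):
                return future.index(q) if q in future else len(future)
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

random.seed(0)
h = 5
seq = [random.randrange(2 * h + 1) for _ in range(2000)]
for k in (h, 2 * h):
    print(k, lru_cost(k, seq) / max(1, opt_cost(h, seq)))
```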

Formally, we study the following questions:

Strong (h,k)-server hypothesis: For any metric space M and any ϵ > 0, R(h,k,M) = O(1) whenever k ≥ (1+ϵ) · h, where the constant may depend on ϵ and M but not on h.

Weak (h,k)-server hypothesis: For any metric space M and any h, R(h,k,M) = O(1) as k → ∞, where the constant is independent of h.

Generalizing the result for uniform metrics, the same competitive ratio of k/(k−h+1) was later also obtained for weighted star metrics [You94]. More recently, Bansal et al. [BEJK19] confirmed the strong (h,k)-server hypothesis also for trees of bounded depth. Using randomization, tight bounds of Θ(log(k/(k−h+1))) were obtained for uniform metrics [You91] and weighted stars [BBN12] when k > h. The recent results by Bubeck et al. [BCL18] and Buchbinder et al. [BGMN19] for the k-server problem extend also to the (h,k)-server setting, implying a competitive ratio of O(d · log(k/(k−h+1))) for HSTs of depth d when k > h. (For general trees of depth d, they obtain a fractional algorithm achieving the same competitive ratio.)

Surprisingly, the performance of some classical algorithms can slightly degrade when additional online servers are available. Bansal et al. [BEJ18, BEJK19] showed that this can occur for both the Work Function algorithm and the Double Coverage algorithm. On the positive side, Koutsoupias [Kou99] showed that the Work Function algorithm with k servers obtains a competitive ratio of at most 2h simultaneously for all h ≤ k. The algorithm of [BEJK19] confirming the (h,k)-server hypothesis on bounded-depth trees is actually a variant of the Double Coverage algorithm.

In [CKL17], the infinite server problem (denoted the ∞-server problem here) was introduced as a possible way to resolve the question on general metrics. This is the variant of the k-server problem with k = ∞, where all of the infinitely many servers initially reside at the same point of the metric space. The existence of a competitive algorithm for the ∞-server problem was shown to be equivalent to an affirmative resolution of the weak (h,k)-server hypothesis.

In terms of lower bounds, it is known that, unlike the case of uniform and weighted star metrics, the ratio does not converge to 1 on general metrics even as k → ∞. Namely, Bar-Noy and Schieber [BE98, page 175] showed that R(2,k,M) ≥ 2 for all k when M is the line metric. For large h, the lower bound was improved to 2.4 [BEJK19] using depth-2 trees and later to 3.146 [CKL17] by a reduction from the ∞-server problem. In the absence of any super-constant lower bounds, the (h,k)-server hypothesis continued to seem plausible. In fact, Bansal et al. [BEJK19] argued that it would be very surprising if R(h,k,M) were not O(1) once k is sufficiently large compared to h.

1.3 Our results

Our main result is the refutation of both versions of the (h,k)-server hypothesis:

Theorem 1

There exists a tree metric M such that R_rand(h,k,M) = Ω(log log h), even for arbitrarily large k.

Since R(h,k,M) ≥ R_rand(h,k,M), the lower bound obviously extends to deterministic algorithms. The underlying construction is simple. It is based on recursively combining Young’s lower bound for randomized (h,k)-paging [You91] along many scales. At higher scales, the construction is applied to groups of servers rather than to individual servers.

Due to the connection between the (h,k)-server problem and the ∞-server problem [CKL17], a direct consequence of Theorem 1 is that there is no competitive algorithm for the ∞-server problem on general metrics. In fact, we first found the lower bound by analyzing the ∞-server problem.

Corollary 1

The competitive ratio of the ∞-server problem on trees of depth d is Ω(d). In particular, there exists no competitive algorithm for the ∞-server problem on general metrics.

1.4 Preliminaries

An online algorithm Alg is called R-competitive if

  Alg(σ) ≤ R · Opt(σ) + α

for all request sequences σ, where Alg(σ) and Opt(σ) denote the cost of Alg and the optimal (offline) cost for σ, respectively, and α is a constant independent of σ. The competitive ratio of a problem is the infimum over all R such that an R-competitive algorithm exists. In the case of randomized algorithms, Alg(σ) is replaced by its expectation. Note that for the (h,k)-server problem, Opt denotes the optimal solution using h servers, while Alg uses k servers.

An algorithm is fractional if it is allowed to move an arbitrary fraction of a server, paying the same fraction of the distance moved, but it is still required to bring a total mass of at least one server to the requested point. A fractional algorithm can be derived from a randomized one by setting the server mass at each point to the expected number of servers there; clearly, the cost of the fractional algorithm is at most the expected cost of the randomized algorithm. (On weighted stars and HST metrics, the converse is also true: any fractional algorithm can be rounded online to a randomized integral one while increasing its cost by at most a multiplicative constant [BBN12, BBMN15]. It is unknown whether this also holds for general metrics.)
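A minimal sketch of this randomized-to-fractional conversion (our illustration with toy data): the fractional state is simply the expected number of servers at each point.

```python
from collections import defaultdict

def fractional_state(distribution):
    """Convert a distribution over server configurations into fractional
    server mass: the expected number of servers at each point.
    `distribution` is a list of (configuration, probability) pairs."""
    mass = defaultdict(float)
    for config, prob in distribution:
        for point in config:
            mass[point] += prob
    return dict(mass)

# Toy example: servers at {a, b} with probability 1/2, at {a, c} otherwise.
print(fractional_state([(('a', 'b'), 0.5), (('a', 'c'), 0.5)]))
# -> {'a': 1.0, 'b': 0.5, 'c': 0.5}; by linearity, moving mass this way
#    costs at most the expected movement of the randomized algorithm.
```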

All metric spaces constructed in this paper are trees with a distinguished root, and we assume that all servers initially reside at the root. We will charge cost (to both the online and offline algorithms) only for traversing edges in the direction away from the root. Since movement away from the root is within a factor of 2 of the total movement, the error due to this is absorbed in the asymptotic notation of our results.

We will further assume, without loss of generality, that all algorithms are downwards lazy: by this, we mean that they move server mass away from the root only if it is used to serve the current request; however, they might move server mass towards the root in a non-lazy fashion.

For an infinite request sequence σ, we denote its prefix consisting of the first t requests by σ_t.

2 Proof of the lower bound

Below we state the main lemma towards proving Theorem 1.

Lemma 1

Let d ∈ ℕ and ϵ > 0 be arbitrary. There exist a ratio R_d = Ω(d), integers h_d < k_d with log log h_d = O(d) and k_d ≥ R_d · h_d, and a tree T_d of depth d such that, for any fractional online k_d-server algorithm Alg, there exists an infinite request sequence σ in T_d satisfying two properties:

  1. Alg(σ_t) ≥ R_d · Opt(σ_t) − ϵ for all t, where Opt(σ_t) denotes the optimal cost for serving σ_t using h_d servers,

  2. Alg(σ_t) → ∞ as t → ∞.

Proof

It suffices to prove that the lemma holds for some fixed value of ϵ: by scaling all distances by a small multiplicative constant, ϵ can then be made arbitrarily small.

The lemma is proved by induction on d. For d = 1, we choose T_1 to be the unweighted star. The statement of the lemma then follows from lower bounds on randomized (h,k)-paging, which extend to fractional algorithms [You91, Theorem 2.2].
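For intuition, the adversary underlying such paging lower bounds can be phrased as a one-line rule: always request a point where the online algorithm keeps little mass. The sketch below shows this rule only; Young's actual bound [You91] requires a careful choice of the point set and an averaging argument, which we do not reproduce.

```python
def paging_adversary(points, mass, num_requests):
    """Adversarial requests against a fractional paging algorithm on a
    uniform metric: repeatedly request the point with the least server mass,
    forcing the algorithm to keep buying mass there. `mass(p)` is an oracle
    for the online algorithm's current mass at point p."""
    for _ in range(num_requests):
        yield min(points, key=mass)
```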

For the induction step, suppose the lemma holds for d − 1, with parameters h′ = h_{d−1}, k′ = k_{d−1} and R′ = R_{d−1}, and let T′ = T_{d−1} be the corresponding tree. Throughout the induction step, we write k = k_d for the total server mass of Alg. The root of T_d has infinitely many children at distance 1; all the subtrees rooted at these children are copies of T′. We will issue requests in a copy of T′ only if the server mass in it is at most k′. Since Alg is downwards lazy, this guarantees that the server mass is at most k′ in each subtree before each request. This allows us to invoke the induction hypothesis on the subtrees: if the mass inside a subtree is k′ − μ for some μ ≥ 0, we interpret this as mass μ sitting at the root of the subtree. (From the point of view of the subtree, moving mass to the root is a non-lazy move. Strictly speaking, the sub-algorithms for the different subtrees are not independent of each other, as a request in one subtree can trigger movement towards the root in another subtree. However, we construct the request sequence in an online manner where each request is independent of decisions of the algorithm for future requests, and thus we can analyze the sub-algorithms independently of each other.)

The request sequence consists of phases. In each phase, m subtrees will be marked, where m is a sufficiently large integer. The marked subtrees of phase 0 are chosen arbitrarily; this phase does not contain any requests. All other phases proceed as follows:

  • Mark a fresh subtree that has never received any requests before.

  • While the server mass in the fresh subtree is at most k′, issue requests in it according to the induction hypothesis.

  • For i = 2, …, m (see the sketch following this list):

    • Among the subtrees that were marked in the last phase but have not been marked (yet) in the current phase, mark the one with the least server mass.

    • While the server mass in some marked subtree of the current phase is at most k′, issue requests in it according to the induction hypothesis.
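The phase structure above can be summarized as a short procedure. The following sketch is our pseudocode rendering of it; the mass oracle and the per-subtree request generation (the recursive call to the depth-(d−1) construction) are left abstract.

```python
def run_phase(prev_marked, fresh, m, k_sub, mass, issue_request_in):
    """One phase of the recursive lower-bound construction.

    prev_marked:         the m subtrees marked in the previous phase.
    fresh:               a subtree that has never received any request.
    m:                   number of subtrees marked per phase.
    k_sub:               the threshold k' on online server mass in a subtree.
    mass(T):             oracle for the current online server mass in T.
    issue_request_in(T): issue one request in T per the induction hypothesis.
    Returns the list of subtrees marked in this phase.
    """
    marked = [fresh]
    while mass(fresh) <= k_sub:        # may loop forever; that only helps
        issue_request_in(fresh)        # the lower bound (Property 2)
    candidates = list(prev_marked)
    for _ in range(2, m + 1):          # iterations i = 2, ..., m
        least = min(candidates, key=mass)  # re-mark the least-loaded subtree
        candidates.remove(least)
        marked.append(least)
        while any(mass(T) <= k_sub for T in marked):
            needy = next(T for T in marked if mass(T) <= k_sub)
            issue_request_in(needy)
    return marked
```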

The request sequence clearly satisfies Property 2: either some while-loop runs forever, in which case the induction hypothesis applies, or there are infinitely many phases, each forcing online cost at least k′.

We compare Alg against an offline algorithm Adv with m · h′ servers that always has h′ servers at each marked subtree of the current phase, and uses the servers optimally within the subtrees.

Consider some phase. Denote by Alg_d and Alg_{<d} the cost of Alg incurred during the phase along edges of level d and of level at most d − 1, respectively. We define Adv_d and Adv_{<d} analogously. Here, we use the convention that edges incident to the leaves have level 1 and edges incident to the root have level d.

Consider the case that the phase under consideration is complete. We analyze first the cost along edges incident to the root. Alg pays at least k′ to move server mass to the fresh subtree. At the beginning of iteration i of the for-loop, Alg has server mass at least k′ in each of the i − 1 subtrees that were marked during the current phase. Thus, the average amount of server mass in the m − i + 2 subtrees that were marked in the last phase but not yet in the current phase is at most (k − (i−1) · k′) / (m − i + 2). In effect, the cost to move mass to the subtree that is marked in the i-th iteration is at least

  k′ − (k − (i−1) · k′) / (m − i + 2).

Hence, the total cost of moving server mass to the marked subtrees of the phase is at least

  Alg_d ≥ k′ + Σ_{i=2}^{m} ( k′ − (k − (i−1) · k′) / (m − i + 2) ).

In contrast, the offline cost during the phase along edges incident to the root is only

  Adv_d ≤ h′,

because the offline algorithm moves only h′ servers from the subtree that was marked in the last but not the current phase to the fresh subtree of the current phase.
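As a numeric sanity check of the displayed bounds (with hypothetical parameters, not values from the paper): once m · k′ is comfortably larger than k, the per-phase online cost along the root edges approaches m · k′, which exceeds the offline cost h′ by a large factor.

```python
def online_root_cost(k, k_sub, m):
    """Evaluate the lower bound on Alg's per-phase root-edge cost:
    k' for the fresh subtree plus the marking costs of iterations 2..m."""
    total = k_sub  # cost of filling the fresh subtree
    for i in range(2, m + 1):
        avg = max(0.0, (k - (i - 1) * k_sub) / (m - i + 2))
        total += max(0.0, k_sub - avg)
    return total

# Hypothetical parameters: k = 100 units of online mass, k' = 10, m = 40.
print(online_root_cost(k=100, k_sub=10, m=40))  # ~388, close to m * k' = 400
```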

For the cost within the subtrees, the induction hypothesis yields

  Alg_{<d} ≥ R′ · Adv_{<d} − m · ϵ′,

where the term m · ϵ′ is due to the fact that there are m marked subtrees, and the induction hypothesis was invoked for ϵ′. For the total cost during a complete phase, we obtain

  Alg_d + Alg_{<d} ≥ R_d · (Adv_d + Adv_{<d}) − m · ϵ′,

where R_d > R′ holds for m chosen sufficiently large, since then the online cost along the root edges exceeds the offline cost h′ along those edges by a factor much larger than R′.

In the last phase, which may be incomplete, we have

  Alg_d + Alg_{<d} ≥ R_d · (Adv_d + Adv_{<d}) − m · ϵ′ − c,

where the subtrahend c depends on m, k′ and h′ but not on the number of requests. Scaling all distances by a small factor, the subtrahend (and the cost of Adv to bring servers to the marked subtrees of phase 0) can be made arbitrarily small. Summing over all phases yields Property 1, which completes the induction.

We obtain the main result by combining the trees guaranteed by this lemma:

Proof (Proof of Theorem 1)

For d ∈ ℕ, let h_d, k_d, R_d and the tree T_d be as guaranteed by Lemma 1 (for a suitably small ϵ > 0). The lower bound holds on the following tree T: it contains as subtrees, for each d, infinitely many copies of the tree T_d guaranteed by Lemma 1 for h_d and k_d. The roots of the subtrees are connected to the root of T by edges of length 1.

Let h and k ≥ h be the numbers of offline and online servers, respectively. Let d be the largest integer such that h_d ≤ h. The request sequence consists of phases: in each phase, select a subtree of type d whose online server mass is zero. Requests are issued in this subtree as induced by Lemma 1. As soon as the online server mass in the subtree exceeds k_d, the phase ends and a new phase begins.

At the start of each phase, the offline algorithm brings h_d servers to the subtree of that phase. For a given phase, denote by Alg_T and Adv_T, respectively, the online and offline cost suffered within the active subtree of the phase. By Lemma 1,

  Alg_T ≥ R_d · Adv_T − ϵ.

If the phase runs indefinitely (because the algorithm never brings the required number of servers to the subtree), then the cost within the active subtree dominates the competitive ratio. Since R_d = Ω(d) = Ω(log log h), the theorem follows.

Otherwise, the online algorithm pays at least k_d to bring as many servers to the subtree, whereas the offline algorithm pays only h_d to move its servers to the subtree. Thus, the ratio of the total online to offline cost during each phase is at least

  (k_d + Alg_T) / (h_d + Adv_T) ≥ min{ k_d / h_d , R_d } − δ = Ω(log log h),

where we use that k_d ≥ R_d · h_d, and where δ accounts for the additive ϵ of Lemma 1 and can be made arbitrarily small.

Proof (Proof of Corollary 1)

Consider the same tree as in the proof of Theorem 1, except that it contains the subtrees T_d for only one value of d. By the identical arguments as in the proof of Theorem 1, we obtain a lower bound of Ω(d) for trees of depth d + 1. If the subtrees T_d are included for all d, we obtain a metric space with no competitive algorithm for the ∞-server problem.

3 Conclusions

We have refuted the (h,k)-server hypothesis by proving that R(h,k,M) = Ω(log log h) for a tree M of sufficient depth, even when k is arbitrarily large. When expressed in terms of the depth d of the tree, the lower bound amounts to Ω(d) and applies also to the ∞-server problem.

The construction of our lower bound is inherently fractional: on higher scales, even a deterministic algorithm behaves like a fractional one, since it can move a fraction of a group of servers. It would be interesting to show a lower bound for deterministic algorithms that is substantially larger than the randomized one.

Intriguing gaps remain between the lower and upper bounds. The upper bound that would follow from the randomized k-server conjecture when disabling the extra servers, O(log h), is exponentially larger than our Ω(log log h) lower bound. For deterministic algorithms, the gap is even doubly exponential.

References

  • [ACN00] Dimitris Achlioptas, Marek Chrobak, and John Noga. Competitive analysis of randomized paging algorithms. Theoretical Computer Science, 234(1–2):203–218, 2000.
  • [BBM06] Yair Bartal, Béla Bollobás, and Manor Mendel. Ramsey-type theorems for metric spaces with applications to online problems. Journal of Computer and System Sciences, 72(5):890–921, 2006.
  • [BBMN15] Nikhil Bansal, Niv Buchbinder, Aleksander Madry, and Joseph Naor. A polylogarithmic-competitive algorithm for the k-server problem. Journal of the ACM, 62(5):40:1–40:49, 2015.
  • [BBN12] Nikhil Bansal, Niv Buchbinder, and Joseph Naor. A primal-dual randomized algorithm for weighted paging. Journal of the ACM, 59(4):19, 2012.
  • [BCL18] Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry. k-server via multiscale entropic regularization. In Proc. 50th ACM Symp. on Theory of Computing (STOC), pages 3–16, 2018.
  • [BE98] Allan Borodin and Ran El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
  • [BEJ18] Nikhil Bansal, Marek Eliáš, Lukasz Jez, Grigorios Koumoutsos, and Kirk Pruhs. Tight bounds for double coverage against weak adversaries. Theory of Computing Systems, 62(2):349–365, 2018.
  • [BEJK19] Nikhil Bansal, Marek Eliáš, Lukasz Jez, and Grigorios Koumoutsos. The (h, k)-server problem on bounded depth trees. ACM Transactions on Algorithms, 15(2):28:1–28:26, 2019.
  • [BGMN19] Niv Buchbinder, Anupam Gupta, Marco Molinaro, and Joseph (Seffi) Naor. k-servers with a smile: Online algorithms via projections. In Proc. 30th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 98–116, 2019.
  • [CKL17] Christian Coester, Elias Koutsoupias, and Philip Lazos. The infinite server problem. In Proc. 44th Int. Colloq. on Automata, Languages and Programming (ICALP), pages 14:1–14:14, 2017.
  • [CL91] Marek Chrobak and Lawrence L. Larmore. The server problem and on-line games. In On-Line Algorithms, Proceedings of a DIMACS Workshop, pages 11–64, 1991.
  • [FKL91] Amos Fiat, Richard M. Karp, Michael Luby, Lyle A. McGeoch, Daniel D. Sleator, and Neal E. Young. Competitive paging algorithms. Journal of Algorithms, 12(4):685–699, 1991.
  • [Kou99] Elias Koutsoupias. Weak adversaries for the k-server problem. In Proc. 40th IEEE Symp. on Foundations of Computer Science (FOCS), pages 444–449, 1999.
  • [Kou09] Elias Koutsoupias. The k-server problem. Computer Science Review, 3(2):105–118, 2009.
  • [KP95] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. Journal of the ACM, 42(5):971–983, 1995.
  • [KP00] Bala Kalyanasundaram and Kirk Pruhs. Speed is as powerful as clairvoyance. Journal of the ACM, 47(4):617–643, 2000.
  • [Lee18] James R. Lee. Fusible HSTs and the randomized k-server conjecture. In Proc. 59th IEEE Symp. on Foundations of Computer Science (FOCS), pages 438–449, 2018.
  • [MMS90] Mark S. Manasse, Lyle A. McGeoch, and Daniel D. Sleator. Competitive algorithms for server problems. Journal of Algorithms, 11(2):208–230, 1990.
  • [MS91] Lyle A. McGeoch and Daniel D. Sleator. A strongly competitive randomized paging algorithm. Algorithmica, 6(6):816–825, 1991.
  • [PSTW02] Cynthia A. Phillips, Clifford Stein, Eric Torng, and Joel Wein. Optimal time-critical scheduling via resource augmentation. Algorithmica, 32(2):163–200, 2002.
  • [ST85] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2):202–208, 1985.
  • [You91] Neal E. Young. On-line caching as cache size varies. In Proc. 2nd ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 241–250, 1991.
  • [You94] Neal E. Young. The k-server dual and loose competitiveness for paging. Algorithmica, 11(6):525–541, 1994.