1 Introduction
The $k$-server problem is one of the most fundamental and extensively studied problems in the theory of online algorithms. In this problem, we are given a metric space and $k$ mobile servers located at points of the metric space. At each step, a request arrives at a point of the metric space and must be served by moving a server there. The goal is to minimize the total distance travelled by the servers.
The $k$-server problem generalizes various online problems, most notably the paging (caching) problem, which corresponds to the $k$-server problem on uniform metric spaces. Paging, first studied in the seminal work of Sleator and Tarjan [26], is well-understood: the competitive ratio is $k$ for deterministic algorithms and $H_k = \Theta(\log k)$ for randomized ones; those algorithms and matching lower bounds are folklore results for online algorithms [26, 22, 1].
The $k$-server problem in general metric spaces is much deeper and more intriguing. In a landmark result, Koutsoupias and Papadimitriou [18] showed that the Work Function Algorithm (WFA) is $(2k-1)$-competitive, which is almost optimal for deterministic algorithms since the competitive ratio of any deterministic algorithm is at least $k$ [21]. For randomized algorithms, it is believed that a $\Theta(\log k)$-competitive algorithm is possible; despite several breakthrough results over the last decade [2, 9, 10, 20], this conjecture still remains open.
Memoryless Algorithms:
One drawback of the online algorithms achieving the best-known competitive ratios for the $k$-server problem is that they are computationally inefficient. For example, the space used by the WFA is proportional to the number of different configurations of the servers, i.e., $\binom{n}{k}$ for a metric space on $n$ points, which makes the whole approach quite impractical.
This motivates the study of trade-offs between the competitive ratio and computational efficiency. A starting point in this line of research is to determine the competitive ratio of memoryless algorithms: a memoryless algorithm decides its next move based solely on the current configuration of the servers and the given request.
Memoryless algorithms for the $k$-server problem have been extensively studied (see e.g., [8, 17] for detailed surveys). The most natural memoryless algorithm is the Harmonic Algorithm, which moves each server with probability inversely proportional to its distance from the requested point. It is known that its competitive ratio is $O(2^k \log k)$ [5] and $\Omega(k^2)$. It is conjectured that the Harmonic Algorithm is in fact $O(k^2)$-competitive; this remains a longstanding open problem. For special cases such as uniform metrics and resistive metric spaces, an improved competitive ratio of $k$ can be achieved, and this is the best possible for memoryless algorithms [13].

We note that the study of memoryless algorithms for the $k$-server problem is of interest only for randomized algorithms; it is easy to see that any deterministic memoryless algorithm is not competitive. Throughout this paper, we adopt the standard benchmark to evaluate randomized memoryless algorithms, which is comparing them against an adaptive online adversary, unless stated otherwise. For a detailed discussion of the different adversary models and the relations between them, see [8, 6].
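As an illustration, the Harmonic Algorithm's move rule can be written in a few lines; the following is a minimal sketch (the function name, the one-dimensional metric, and the tie handling are our own choices, not from the original):

```python
import random

def harmonic_move(servers, request, dist, rng):
    """Return the index of the server to move to `request`, chosen with
    probability inversely proportional to its distance (the Harmonic rule)."""
    # A server already at the requested point serves it at zero cost.
    for i, s in enumerate(servers):
        if dist(s, request) == 0:
            return i
    # Otherwise, pick server i with probability (1/d_i) / sum_j (1/d_j).
    weights = [1.0 / dist(s, request) for s in servers]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(servers) - 1  # guard against floating-point round-off

# Example on the real line: distances 2 and 8, so the closer server
# is chosen with probability (1/2) / (1/2 + 1/8) = 0.8.
rng = random.Random(0)
moved = harmonic_move([0.0, 10.0], 2.0, lambda a, b: abs(a - b), rng)
```

Note that the rule looks only at the current server positions and the request, which is exactly the memorylessness property discussed above.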
The generalized $k$-server problem.
In this work, we focus on the generalized $k$-server problem, a far-reaching extension of the $k$-server problem, introduced by Koutsoupias and Taylor [19]. Here, each server $s_i$ lies in a different metric space $M_i$ and a request is a tuple $r = (r_1, \ldots, r_k)$, where $r_i \in M_i$; to serve it, some server $s_i$ should move to the point $r_i$. The standard $k$-server problem is the very special case where all metric spaces are identical, i.e., $M_1 = \cdots = M_k = M$, and all requests are of the form $(r, \ldots, r)$. Other well-studied special cases of the generalized $k$-server problem are (i) the weighted $k$-server problem [14, 3], where all metrics are scaled copies of a fixed metric $M$, i.e., $M_i = w_i \cdot M$, and all requests are of the form $(r, \ldots, r)$, and (ii) the CNN problem [19, 16], where all metrics are real lines.
Previous Work.
The generalized $k$-server problem has a much richer structure than the classic $k$-server problem and is much less understood. For general metric spaces, no $f(k)$-competitive algorithms are known, except for the special case $k = 2$ [24, 25, 23]. For $k \ge 3$, competitive algorithms are known only for the following special cases:

Uniform Metrics: All metric spaces are uniform (possibly with a different number of points), with the same pairwise distance, say 1.

Weighted Uniform Metrics: All metrics are uniform, but they have different weights; the cost of moving in metric $M_i$ is $w_i$.
Perhaps surprisingly, those two cases are qualitatively very different. For deterministic algorithms, Bansal et al. [4] obtained algorithms with (almost) optimal competitive ratio. For uniform metrics their algorithm is $O(k \cdot 2^k)$-competitive, while the best possible ratio is at least $2^k$ [19]. For weighted uniform metrics, they obtained a doubly exponential $2^{2^{O(k)}}$-competitive algorithm (by extending an algorithm of Fiat and Ricklin [14] for weighted $k$-server on uniform metrics), while the lower bound for the problem is also doubly exponential, $2^{2^{\Omega(k)}}$ [3].
Memoryless Algorithms for Generalized $k$-server: Recently, Chiplunkar and Vishwanathan [11] studied randomized memoryless algorithms in weighted uniform metrics. They showed tight doubly exponential ($2^{2^{\Theta(k)}}$) bounds on the competitive ratio. Interestingly, the memoryless algorithm achieving the optimal bound in this case is different from the Harmonic Algorithm.
Since the weighted uniform case seems to be much harder than the uniform case, it is natural to expect that a better bound can be achieved by memoryless algorithms in uniform metrics. Moreover, in weighted uniform metrics the competitive ratios of deterministic algorithms (with memory) and randomized memoryless algorithms are essentially the same. Recall that a similar phenomenon occurs for the paging problem (standard $k$-server on uniform metrics), where both deterministic algorithms and randomized memoryless algorithms have a competitive ratio of $k$. Thus, it is natural to guess that for uniform metrics a competitive ratio of order $2^k$ (i.e., the same as the deterministic competitive ratio) can be achieved by memoryless algorithms.
1.1 Our Results
In this work we study the power of memoryless algorithms for the generalized $k$-server problem in uniform metrics, and we determine the exact competitive ratio by obtaining tight bounds.
First, we determine the competitive ratio of the Harmonic Algorithm on uniform metrics.
Theorem 1.
The Harmonic Algorithm for the generalized $k$-server problem on uniform metrics is $k \cdot \alpha_k$-competitive, where $\alpha_k$ is the solution of the recursion $\alpha_k = 1 + (k-1) \cdot \alpha_{k-1}$, with $\alpha_1 = 1$.
It is not hard to see that $\alpha_k = \Theta((k-1)!)$, therefore the competitive ratio of the Harmonic Algorithm is $\Theta(k!)$. This shows that uniform metrics indeed allow for a substantial improvement in performance compared to weighted uniform metrics, where there is a doubly exponential lower bound.
To obtain this result, we analyse the Harmonic Algorithm using Markov Chains and random walks, based on the Hamming distance between the configuration of the algorithm and that of the adversary, i.e., the number of metric spaces where they have their servers at different points. Based on this, we then provide a proof using a potential function, which essentially captures the expected cost of the algorithm until it reaches the same configuration as the adversary. The proof is in Section 2.

Next, we show that the upper bound of Theorem 1 is tight, by providing a matching lower bound.
Theorem 2.
The competitive ratio of any randomized memoryless algorithm for the generalized $k$-server problem on uniform metrics is at least $k \cdot \alpha_k$.
Here the analysis differs, since the Hamming distance is not the right metric to capture the “distance” between the algorithm and the adversary: assume that all their servers are at the same points, except one, say the server of metric $M_i$. Then, on the next request, the algorithm will reach the configuration of the adversary with probability $p_i$; clearly, if $p_i$ is large, the algorithm is in a favourable position compared to the case where $p_i$ is small.
This suggests that the structure of the algorithm is not solely characterized by the number of differing servers (i.e., the Hamming distance) between the algorithm and the adversary, but also the labels of the servers matter. For that reason, we need to focus on the subset of differing servers, which gives a Markov Chain on $2^k$ states. Unfortunately, analyzing such chains in a direct way can be done only for easy cases like $k = 2$ or $k = 3$. For general values of $k$, we find an indirect way to characterize the solution of this Markov Chain. A similar approach was taken by Chiplunkar and Vishwanathan [11] for weighted uniform metrics; we use some of the properties they showed, but our analysis differs, since we need to make use of the special structure of our problem to obtain our bounds.
In fact, we are able to show that any memoryless algorithm other than the Harmonic Algorithm has competitive ratio strictly larger than $k \cdot \alpha_k$. We describe the details in Section 3.
On the positive side, our results show that improved guarantees can be achieved compared to the weighted uniform case. On the other hand, the competitive ratio of memoryless algorithms ($\Theta(k!)$) is asymptotically worse than the deterministic competitive ratio of $O(k \cdot 2^k)$. This is somewhat surprising, since (as discussed above) in most uniform-metric settings of $k$-server and its generalizations, the competitive ratios of deterministic algorithms (with memory) and of randomized memoryless algorithms are (almost) the same.
1.2 Notation and Preliminaries
Memoryless Algorithms.
A memoryless algorithm for the generalized $k$-server problem receives a request $r = (r_1, \ldots, r_k)$ and decides which server to move based only on its current configuration and $r$. For the case of uniform metrics, a memoryless algorithm is fully characterized by a probability distribution $p = (p_1, \ldots, p_k)$; whenever it needs to move a server, it moves the server of metric $M_i$ with probability $p_i$. Throughout the paper we assume for convenience (possibly by relabeling the metrics) that for a given memoryless algorithm $p_1 \ge p_2 \ge \cdots \ge p_k$. We also assume that $p_i > 0$ for all $i$; otherwise it is trivial to show that the algorithm is not competitive.

The Harmonic Algorithm.
In the context of generalized $k$-server on uniform metrics, the Harmonic Algorithm is the memoryless algorithm which moves in all metric spaces with equal probability, i.e., $p_i = 1/k$ for all $i$.
The harmonic recursion.
We now define the recursion that will be used to obtain the competitive ratio of the Harmonic Algorithm, which we call the harmonic recursion, and make some basic observations that will be useful throughout the paper.
Definition 1 (Harmonic recursion).
The harmonic recursion satisfies the recurrence $\alpha_k = 1 + (k-1) \cdot \alpha_{k-1}$ for $k \ge 2$, and $\alpha_1 = 1$.
Based on the definition, we make the following observation:
Observation 1.
The harmonic recursion is strictly increasing, i.e., $\alpha_{k+1} > \alpha_k$ for any $k \ge 1$.
Also, it is easy to show that $\alpha_k$ has a closed form, given by

(1)   $\alpha_k = (k-1)! \cdot \sum_{i=0}^{k-1} \frac{1}{i!}$
Based on this closed form, we get the following:
Observation 2.
For any $k \ge 1$, it holds that $(k-1)! \le \alpha_k \le e \cdot (k-1)!$.
This observation also shows that for any $k$, we have $\alpha_k = \Theta((k-1)!)$.
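Both the recursion and the closed form (1), together with the bounds of Observation 2, are easy to check numerically. A small sketch in exact arithmetic (the function names are ours):

```python
import math
from fractions import Fraction

def alpha_rec(k):
    # Harmonic recursion: alpha_1 = 1, alpha_k = 1 + (k-1) * alpha_{k-1}.
    a = 1
    for j in range(2, k + 1):
        a = 1 + (j - 1) * a
    return a

def alpha_closed(k):
    # Closed form (1): alpha_k = (k-1)! * sum_{i=0}^{k-1} 1/i!.
    return math.factorial(k - 1) * sum(
        Fraction(1, math.factorial(i)) for i in range(k))

# First values of the harmonic recursion: 1, 2, 5, 16, 65, ...
values = [alpha_rec(k) for k in range(1, 6)]
```

Since $\sum_{i=0}^{k-1} 1/i! < e$, the closed form immediately gives the bounds of Observation 2.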
2 Upper Bound
In this section we prove Theorem 1. More precisely, we use a potential function argument to show that for any request sequence, the expected cost of the Harmonic Algorithm is at most $k \cdot \alpha_k$ times the cost of the adversary.
Organization.
In Section 2.1, we define a potential between the Harmonic Algorithm’s and the adversary’s configurations that is inspired by random walks on a special type of Markov Chain [15] we refer to as the “Harmonic Chain”. The required background on Markov Chains is presented in Appendix A. Then, in Section 2.2 we use this potential to prove the upper bound of Theorem 1 with a standard potential-based analysis.
2.1 Definition of the Potential Function
We begin by presenting the intuition behind the definition of our potential function. Our first observation is that since (i) the metrics are uniform with equal weights and (ii) the Harmonic Algorithm does not distinguish between metrics, since it uses equal probabilities $p_i = 1/k$, it makes sense for the potential between two configurations to depend only on their Hamming distance and not on the labels of their points. In order to come up with an appropriate potential, we need to understand how the Hamming distance between the Harmonic Algorithm’s and the adversary’s configurations evolves over time.
Imagine that the adversary moves to an “optimal” configuration of its choice and then serves requests until the Harmonic Algorithm reaches this configuration as well. Since the adversary must serve all requests using a server from its configuration, we know that for each request, at least one of the requested points, say in metric $M_i$, coincides with the adversary’s server in $M_i$. In that case, with probability $1/k$ the Harmonic Algorithm moves in metric $M_i$, and thus decreases its Hamming distance from the adversary by 1. On the other hand, if the Hamming distance is $d$, then $k - d$ of the algorithm’s servers coincide with those of the adversary, and with probability $(k-d)/k$ the algorithm moves in one of those metrics and increases its Hamming distance from the optimal configuration by 1. This shows that the evolution of the Hamming distance between the Harmonic Algorithm’s and the adversary’s configurations is captured by a random walk on the following Markov Chain, which we refer to as the Harmonic Chain.
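The transition structure just described (down with probability $1/k$, up with probability $(k-d)/k$, and a self-loop otherwise) can be written down explicitly; a minimal sketch in exact arithmetic (the function name is ours):

```python
from fractions import Fraction

def harmonic_chain_row(k, d):
    """Transition distribution of the Harmonic Chain out of state d
    (d = Hamming distance): to d-1 with prob. 1/k, to d+1 with prob.
    (k-d)/k, stay with prob. (d-1)/k; state 0 is absorbing."""
    if d == 0:
        return {0: Fraction(1)}
    row = {d - 1: Fraction(1, k),
           d + 1: Fraction(k - d, k),
           d: Fraction(d - 1, k)}
    # Drop zero-probability entries (e.g., no upward move from state k).
    return {state: prob for state, prob in row.items() if prob > 0}
```

For example, for $k = 3$ the walk leaves state 1 either down (probability $1/3$) or up (probability $2/3$), with no self-loop.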
While not necessary for the definition of the potential, a formal definition of the Harmonic Chain is included in Appendix A. In the scenario described above, the expected movement cost of the Harmonic Algorithm until it reaches the adversary’s configuration from an initial Hamming distance of $d$ would be $E[T_d]$, where $T_d$ denotes the random variable defined as $T_d = \min\{t \ge 0 : X_t = 0\}$ given $X_0 = d$, and $X_t$ denotes the state of the Harmonic Chain at time $t$. In the literature, this quantity is known as the Expected Extinction Time (EET) [15] of a Markov Chain, and we use $h_d$ to denote it. Intuitively, the ratio $h_d / d$ should immediately give an upper bound on the competitive ratio of the Harmonic Algorithm.

We study the Harmonic Chain and prove the following theorem:
Theorem 3.
For any initial state $d \in \{0, 1, \ldots, k\}$, the EET of the Harmonic Chain is given by $h_d = k \cdot \sum_{i=k-d+1}^{k} \alpha_i$.
Proof.
By conditioning on the first transition of the Harmonic Chain, we get that for any $1 \le d < k$,

$h_d = 1 + \frac{1}{k}\, h_{d-1} + \frac{k-d}{k}\, h_{d+1} + \frac{d-1}{k}\, h_d.$
This yields a second-order recurrence relation that we need to solve for $h_d$. A formal proof is given in Appendix A, where we derive the theorem from the EET of the more general class of Markov Chains called Birth-Death Chains. ∎
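The claimed EET values can be cross-checked mechanically against the one-step conditioning equations; a sketch in exact arithmetic, assuming the transition probabilities $1/k$, $(k-d)/k$ and $(d-1)/k$ of the Harmonic Chain:

```python
from fractions import Fraction

def alpha(k):
    # Harmonic recursion: alpha_1 = 1, alpha_k = 1 + (k-1) * alpha_{k-1}.
    a = 1
    for j in range(2, k + 1):
        a = 1 + (j - 1) * a
    return a

def eet(k):
    # Theorem 3: h_d = k * sum_{i = k-d+1}^{k} alpha_i, with h_0 = 0.
    return [k * sum(alpha(i) for i in range(k - d + 1, k + 1))
            for d in range(k + 1)]

# For k = 3: h = [0, 15, 21, 24], so in particular h_1 = k * alpha_k = 15.
h3 = eet(3)
```

The test below checks that these values satisfy both the interior equation and the boundary equation at state $k$ (which has no upward transition).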
From Theorem 3 and Observations 1 and 2, we immediately get that $h_d$ is (strictly) increasing in $d$ and that $h_1 = k \cdot \alpha_k$. Furthermore, we make the following observation:
Observation 3.
For any $1 \le d \le k$, in the Harmonic Chain it holds that $h_d \le d \cdot h_1$, with equality holding only for $d = 1$.
Proof.
Fix any $d \ge 2$. Then:

$h_d = k \cdot \sum_{i=k-d+1}^{k} \alpha_i < k \cdot d \cdot \alpha_k = d \cdot h_1,$

where the inequality holds from Observation 1, which states that $\alpha_k$ is strictly increasing. ∎
Suppose that the adversary moves $d$ servers whenever the algorithm reaches its configuration and then does not move until the algorithm reaches its new configuration. Intuitively, the competitive ratio would be $h_d / d$, which is maximized for $d = 1$ by Observation 3. This means that $h_1 = k \cdot \alpha_k$ is an upper bound for the competitive ratio of the Harmonic Algorithm. While this intuition is very important, it is not enough to formally prove Theorem 1. However, motivated by it, we will define the potential between two configurations at Hamming distance $d$ as $h_d$. Formally,
Definition 2 (Potential Function).
The potential between two configurations $A$ and $S$ at Hamming distance $d$ is defined as $\Phi(A, S) = h_d = k \cdot \sum_{i=k-d+1}^{k} \alpha_i$.
2.2 Bounding the Competitive Ratio
In this section, we prove the upper bound of Theorem 1 by using the potential we defined in Section 2.1. Fix any request sequence $r_1, \ldots, r_T$. Let $A_t$ be the configuration of the Harmonic Algorithm and $S_t$ the configuration of the adversary after serving request $r_t$. Also, let $A_0 = S_0$ be the initial configuration of the instance. We will prove that when the adversary moves $\ell$ servers, the increase in potential is at most $k \cdot \alpha_k \cdot \ell$, and when the Harmonic Algorithm moves one server, the expected decrease in potential is at least 1. Then, using these properties, we will prove Theorem 1.
To simplify the analysis, we make the following observation for the potential function.
Observation 4.
For any $d_1 \le d_2$, it holds that $h_{d_2} - h_{d_1} = k \cdot \sum_{i=k-d_2+1}^{k-d_1} \alpha_i$.
Proof.
By telescoping we have

$h_{d_2} - h_{d_1} = \sum_{d=d_1+1}^{d_2} (h_d - h_{d-1}) = \sum_{d=d_1+1}^{d_2} k \cdot \alpha_{k-d+1} = k \cdot \sum_{i=k-d_2+1}^{k-d_1} \alpha_i,$

where the second equality holds by the definition of the potential. ∎
Using this observation, we are now ready to prove the following lemmata:
Lemma 1 (Adversary Moves).
For any $t$, it holds that $\Phi(A_{t-1}, S_t) - \Phi(A_{t-1}, S_{t-1}) \le k \cdot \alpha_k \cdot \ell_t$, where $\ell_t$ is the number of servers moved by the adversary at time $t$.
Proof.
Let $d_1$ be the Hamming distance between $A_{t-1}$ and $S_{t-1}$, and let $d_2$ be the Hamming distance between $A_{t-1}$ and $S_t$. Clearly, if the adversary moves $\ell$ servers then $d_2 \le d_1 + \ell$. Since the potential is strictly increasing in the Hamming distance, if $d_2 \le d_1$ then the adversary’s move did not increase the potential and the lemma follows trivially. Thus, we only need to prove the lemma for $d_2 > d_1$. We have:

$\Phi(A_{t-1}, S_t) - \Phi(A_{t-1}, S_{t-1}) = h_{d_2} - h_{d_1} = k \cdot \sum_{i=k-d_2+1}^{k-d_1} \alpha_i \le k \cdot (d_2 - d_1) \cdot \alpha_k \le k \cdot \alpha_k \cdot \ell. \quad ∎$
Lemma 2 (Harmonic Moves).
For any $t$, it holds that $E[\Phi(A_t, S_t) - \Phi(A_{t-1}, S_t)] \le -E[\mathrm{cost}_t(\mathrm{ALG})]$, i.e., the expected decrease in potential is at least the expected movement cost of the Harmonic Algorithm at time $t$.
Proof.
If the Harmonic Algorithm serves the request without moving, then $A_t = A_{t-1}$ and the lemma follows trivially. Otherwise, by definition, it moves to a configuration $A_t$ with $A_t(i) = r_i$ for the metric $M_i$ in which it moves. Let $d$ be the Hamming distance between $A_{t-1}$ and $S_t$. Also, let $m = |\{i : S_t(i) = r_i\}|$, i.e., the number of the adversary’s servers that could serve the current request. By definition, $S_t$ must serve $r_t$, which gives $m \ge 1$. Furthermore, $A_{t-1}$ does not serve the request but $S_t$ does, so each of these $m$ metrics is one where $A_{t-1}$ and $S_t$ differ, giving $1 \le m \le d$.
Recall that the Harmonic Algorithm moves in a metric chosen uniformly at random in order to serve a request. If it moves in any of the $m$ metrics where the adversary serves the request, the Hamming distance becomes $d - 1$, so the potential decreases with probability $m/k$. If it moves in any of the $k - d$ metrics where $A_{t-1}(i) = S_t(i)$, the Hamming distance becomes $d + 1$, so the potential increases with probability $(k-d)/k$. In any other case, the Hamming distance remains $d$ and the potential does not change. We have:

$E[\Phi(A_t, S_t) - \Phi(A_{t-1}, S_t)] = \frac{m}{k}(h_{d-1} - h_d) + \frac{k-d}{k}(h_{d+1} - h_d) = -m \cdot \alpha_{k-d+1} + (k-d) \cdot \alpha_{k-d} = (1 - m) \cdot \alpha_{k-d+1} - 1 \le -1,$

where the first equality follows from the possible changes in the Hamming distance between the algorithm and the adversary, the second equality follows from Observation 4 and the definition of $m$, the third equality follows from the definition of the harmonic recursion (for $d = k$ the term $(k-d)\,\alpha_{k-d}$ is simply zero), and the inequality follows from $m \ge 1$. ∎
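The chain of equalities above can be checked mechanically for small parameters: with $d \ge 1$ differing servers and $m$ adversary servers able to serve ($1 \le m \le d$), the expected potential change works out to $(1-m)\,\alpha_{k-d+1} - 1 \le -1$. A sketch in exact arithmetic:

```python
from fractions import Fraction

def alpha(k):
    # Harmonic recursion: alpha_1 = 1, alpha_k = 1 + (k-1) * alpha_{k-1}.
    a = 1
    for j in range(2, k + 1):
        a = 1 + (j - 1) * a
    return a

def expected_change(k, d, m):
    """Expected potential change of the Harmonic Algorithm's move:
    with prob. m/k the Hamming distance drops (potential -k*alpha_{k-d+1}),
    with prob. (k-d)/k it grows (potential +k*alpha_{k-d})."""
    down = Fraction(m, k) * (-k) * alpha(k - d + 1)
    up = Fraction(k - d, k) * k * (alpha(k - d) if d < k else 0)
    return down + up
```

The test verifies both the closed form of the drift and the fact that it is at most $-1$ for every admissible $(k, d, m)$.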
Proof of Theorem 1
We are now ready to prove Theorem 1. By combining Lemmata 1 and 2, we get that for any $t$, the expected difference in potential satisfies

$E[\Phi(A_t, S_t) - \Phi(A_{t-1}, S_{t-1})] \le k \cdot \alpha_k \cdot \mathrm{cost}_t(\mathrm{ADV}) - E[\mathrm{cost}_t(\mathrm{ALG})].$

Now, let $\mathrm{cost}(\mathrm{ADV})$ denote the total cost of the adversary and $E[\mathrm{cost}(\mathrm{ALG})]$ denote the expected total cost of the Harmonic Algorithm. Summing over all $t$ and telescoping, we finally get

$E[\Phi(A_T, S_T)] - \Phi(A_0, S_0) \le k \cdot \alpha_k \cdot \mathrm{cost}(\mathrm{ADV}) - E[\mathrm{cost}(\mathrm{ALG})],$

where $T$ is the length of the request sequence; and since $\Phi(A_0, S_0) = 0$ (i.e., $A_0 = S_0$) and $\Phi(A_T, S_T) \ge 0$, we get that $E[\mathrm{cost}(\mathrm{ALG})] \le k \cdot \alpha_k \cdot \mathrm{cost}(\mathrm{ADV})$, which concludes the proof of Theorem 1.
3 Lower Bound
In this section we prove Theorem 2. More precisely, we construct an adversarial request sequence against any memoryless algorithm and prove that its competitive ratio is lower bounded by the solution of a linear system of equations. Since solving this system directly is possible only for easy cases like $k = 2$ or $k = 3$, we show how to obtain a lower bound on its solution (similarly to the approach taken by Chiplunkar and Vishwanathan [11] for weighted uniform metric spaces) and thus on the competitive ratio of any memoryless algorithm.
Organization.
3.1 Constructing the adversarial instance
Before we state the adversarial instance, it is useful to give the intuition behind it. It is natural to construct an adversary that moves only when it has the same configuration as the algorithm.
In fact, we construct an adversary that moves in only one metric space: the one that the algorithm uses with the smallest probability (ties broken arbitrarily). Recall that in the analysis of the Harmonic Algorithm from Section 2, the competitive ratio is also maximized when in each “phase” the adversary starts with only one server different from the algorithm and does not move until the configurations (of algorithm and adversary) match (Observation 3).
Let $B$ be any online algorithm and $A$ be the adversary. Consider a “phase” to be a part of the request sequence which begins when the configurations of $B$ and $A$ coincide and ends when $B$ matches the (new) configuration of $A$. Since $A$ must serve all requests, in each request $r$ one point $r_i$ is such that $r_i = A(i)$; we say that the $i$th position of $A$ is revealed by such a request. Thus every request reveals to the algorithm exactly one of the positions of the adversary’s servers in some metric space $M_i$. The main idea behind our lower bound instance is that, in each request, out of the metric spaces where the servers of $B$ and $A$ differ, we reveal to the algorithm the position of $A$ in the metric that $B$ serves with the highest probability; this implies that whenever $B$ and $A$ differ in only one server, this will be in metric $M_k$. Intuitively, this way we best exploit the “asymmetries” in the distribution of $B$ (this is formalized in Lemma 3).
The instance.
Recall that any memoryless algorithm $B$ for the generalized $k$-server problem on uniform metric spaces is fully characterized by a probability distribution $p = (p_1, \ldots, p_k)$ over the metric spaces $M_1, \ldots, M_k$. W.l.o.g., we can assume that $p_1 \ge p_2 \ge \cdots \ge p_k$. Let $B_t$ and $A_t$ denote the configurations of the algorithm and the adversary after serving request $r_t$, respectively. Also, let $B_0 = A_0$ denote the initial configuration of both the algorithm and the adversary. We will now construct the request sequence. For $t = 1, 2, \ldots$:

1. Observe $B_{t-1}$, i.e., the algorithm’s current configuration.

2. If $B_{t-1} = A_{t-1}$, then the adversary moves its server in metric $M_k$ to a fresh point: $A_t(k) = q$ for any point $q \in M_k$ such that $q \ne B_{t-1}(k)$, and $A_t(i) = A_{t-1}(i)$ for all $i \ne k$; otherwise, $A_t = A_{t-1}$.

3. Determine $i^* = \min \{ i : B_{t-1}(i) \ne A_t(i) \}$, i.e., the differing metric that the algorithm serves with the highest probability.

4. Pick any request $r_t$ such that $r_{i^*} = A_t(i^*)$ and, for every $j \ne i^*$, $r_j \notin \{ B_{t-1}(j), A_t(j) \}$.
Note that for steps 2 and 4, we need each metric space to have at least 3 points in order to pick a point that is occupied by neither the algorithm’s nor the adversary’s server. As we explain in Section 4, this is a necessary requirement; if all metrics have 2 points, then the competitive ratio of the Harmonic Algorithm is $O(2^k)$ and therefore a lower bound of order $k \cdot \alpha_k = \Theta(k!)$ is not possible.
As an example of our instance, for $k = 4$, let $B_{t-1} = (b_1, b_2, b_3, b_4)$ and $A_t = (b_1, a_2, a_3, b_4)$ for some points $a_2 \ne b_2$ and $a_3 \ne b_3$. Clearly, the algorithm and the adversary have different servers in metrics $M_2$ and $M_3$. From step 3, $i^* = 2$, i.e., $M_2$ is the metric space that the algorithm serves with the highest probability out of the metric spaces where it and the adversary have their servers at different points. Then, from step 4, $r_t = (x_1, a_2, x_3, x_4)$ (the selection of the other three coordinates is arbitrary as long as neither the algorithm nor the adversary has its server on the chosen point).
Notice that the adversary $A$ moves one server in metric space $M_k$ whenever it has the same configuration as $B$. On the other hand, $B$ never serves a request with its current configuration and thus moves at every time step. This means that the competitive ratio of $B$ is lower bounded by the expected number of requests it takes for it to reach the configuration of $A$.
3.2 Proving the Lower Bound
Our approach.
We define the state of the algorithm at time $t$ as $D_t = \{ i : B_t(i) \ne A_t(i) \}$, i.e., the subset of metric spaces where the algorithm and the adversary have different servers. In this context, $E[S]$ is used to denote the expected number of requests it takes for the algorithm to reach the adversary’s configuration, i.e., state $\emptyset$, starting from some state $S$. By the request sequence we defined, $E[\{k\}]$ is a lower bound on the competitive ratio of any memoryless algorithm.
By observing how the state of the algorithm (and by extension $E[S]$) evolves under the request sequence, we can write down a linear system of equations in the variables $E[S]$, $S \subseteq \{1, \ldots, k\}$. In fact, these equations give the EET of a random walk on a Markov Chain with $2^k$ states. We then prove a lower bound on $E[\{k\}]$, and thus on the competitive ratio of any memoryless algorithm. Notice that for the given instance, if we were analyzing the Harmonic Algorithm, then the Hamming distance between it and the adversary would be captured by the Harmonic Chain and we would immediately get that $E[\{k\}] = h_1 = k \cdot \alpha_k$.
Analysis.
Fix any two different configurations for the algorithm and the adversary that are represented by a state $S$ with $1 \le |S| \le k$. Then, we know that for the next request $r$ we have constructed, it holds that $r_{i^*} = A(i^*)$ for $i^* = \min S$ and $r_j \notin \{B(j), A(j)\}$ for any $j \ne i^*$. Recall that the memoryless algorithm will move in a random metric $M_j$ with probability $p_j$, reaching a new configuration that is captured by a state $S'$. We distinguish between the following three cases:

1. If $j = i^*$, then the algorithm moves to $r_{i^*} = A(i^*)$, and thus $S' = S \setminus \{i^*\}$.

2. If $j \notin S$, then $B(j) = A(j)$ before the move and $B(j) = r_j \ne A(j)$ after it, and thus $S' = S \cup \{j\}$.

3. If $j \in S \setminus \{i^*\}$, then $B(j) \ne A(j)$ both before and after the move, and thus $S' = S$.
Since $E[S]$ denotes the expected number of steps until the state of the algorithm becomes $\emptyset$ starting from $S$, from the above cases, combined with the fact that obviously $E[\emptyset] = 0$, we get the following system of linear equations with $2^k$ variables:

(3)   $E[S] = 1 + p_{i^*} \cdot E[S \setminus \{i^*\}] + \sum_{j \notin S} p_j \cdot E[S \cup \{j\}] + \sum_{j \in S \setminus \{i^*\}} p_j \cdot E[S]$  for all $S \ne \emptyset$, where $i^* = \min S$, and $E[\emptyset] = 0$.
Normally, we would like to solve this linear system to compute $E[\{k\}]$, and this would be the proven lower bound for the memoryless algorithm. However, even for small values of $k$ it is hopeless to find closed-form expressions for the solutions of this system. Interestingly, similar equations were studied by Chiplunkar and Vishwanathan [11] for the weighted uniform metric case. In their study, they showed a monotonicity property of the solutions of their linear system that directly transfers to our setting and is stated in Lemma 3 below. Using this, combined with the special structure of our problem, we show how to derive a lower bound of $k \cdot \alpha_k$ on $E[\{k\}]$ instead of solving (3) to compute it directly.
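Although closed forms are out of reach, for small $k$ the system (3) can be solved numerically, which makes for a useful sanity check: for the uniform distribution one recovers $E[\{k\}] = k\,\alpha_k$ (e.g., 15 for $k = 3$), and for a skewed distribution the value is strictly larger. A sketch by Gauss–Seidel iteration (0-indexed metrics; $i^*$ is the highest-probability differing metric; the function name is ours):

```python
from itertools import combinations

def solve_system(k, p, sweeps=20000):
    """Gauss-Seidel iteration for system (3):
    E[S] = 1 + p[i*] E[S \\ {i*}] + sum_{j not in S} p[j] E[S u {j}]
             + (sum_{j in S, j != i*} p[j]) E[S],  with E[empty] = 0."""
    states = [frozenset(c) for r in range(k + 1)
              for c in combinations(range(k), r)]
    E = {S: 0.0 for S in states}
    for _ in range(sweeps):
        for S in states:
            if not S:
                continue
            i_star = max(S, key=lambda i: p[i])
            stay = sum(p[j] for j in S if j != i_star)
            rhs = 1 + p[i_star] * E[S - {i_star}] + sum(
                p[j] * E[S | {j}] for j in range(k) if j not in S)
            E[S] = rhs / (1 - stay)
    return E

E_uniform = solve_system(3, [1/3, 1/3, 1/3])
E_skewed = solve_system(3, [0.5, 0.3, 0.2])
```

For the uniform distribution the state only matters through its size, so the values coincide with the EETs of the Harmonic Chain.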
Lemma 3.
For any $S$ and any $i, j \in S$ with $p_i \ge p_j$ (and thus $i \le j$), the solutions of linear system (3) satisfy $E[S \setminus \{j\}] \le E[S \setminus \{i\}]$.
The proof is deferred to Appendix B. Let us first see the intuition behind the inequality of Lemma 3. Let $S$ be the subset of metric spaces where the servers of $B$ and $A$ occupy different points: then, in the next move, the expected time to match decreases the most if $B$ first matches the $i$th server of the adversary (i.e., the “state” changes from $S$ to $S \setminus \{i\}$), where $i$ is the metric with the smallest probability $p_i$. This explains why in our adversarial instance we choose to reveal to $B$ the location of $A$ in the metric it serves with the highest probability: this makes sure that the decrease in the expected time to reach $A$ is minimized.
Using Lemma 3, we can now prove the following:
Lemma 4.
For any with and , the solutions of linear system (3) satisfy
Proof.
We are now ready to prove the main theorem of this section.
Theorem 4.
The solution of linear system (3) satisfies $E[\{k\}] \ge \frac{\alpha_k}{p_k}$.
Proof.
In order to prove the theorem, it suffices to show that for any such that and , it holds that
Then, by setting () and , we get , and since by definition, the Theorem follows. It remains to prove the desired property. This can be shown by induction on the size of .
Base case: If (this means that then for any , by (3) we have
Inductive hypothesis: Suppose that for any with and any , we have
Inductive step: Let be any set with and be any element of this set. By Lemma 4, we have that
Now, for any we can use the hypothesis on the set with size . Thus, we have
for any . Combining, we get
∎
Proof of Theorem 2.
Since $p_1 \ge p_2 \ge \cdots \ge p_k$, we have that $p_k \le 1/k$. Thus, by Theorem 4 we have that $E[\{k\}] \ge k \cdot \alpha_k$ for any distribution $p$. Since $E[\{k\}]$ is a lower bound on the competitive ratio of any memoryless algorithm, the theorem follows.
Corollary 1.
The Harmonic Algorithm is the only memoryless algorithm with a competitive ratio of $k \cdot \alpha_k$.
Proof.
By Theorem 4, the competitive ratio of the Harmonic Algorithm is at least $k \cdot \alpha_k$, and combined with the upper bound of Theorem 1 we get that the Harmonic Algorithm is $(k \cdot \alpha_k)$-competitive. Any other memoryless algorithm has $p \ne (1/k, \ldots, 1/k)$, and hence $p_k < 1/k$. Thus, by Theorem 4 its competitive ratio is lower bounded by $\alpha_k / p_k > k \cdot \alpha_k$, which is strictly worse than the competitive ratio of the Harmonic Algorithm. ∎
4 Concluding Remarks
We provided tight bounds on the competitive ratio of randomized memoryless algorithms for the generalized $k$-server problem in uniform metrics. Combining our results with the work of Chiplunkar and Vishwanathan [11], the power of memoryless algorithms in uniform and weighted uniform metrics is completely characterized. It might be interesting to determine the power of memoryless algorithms for other metric spaces, such as weighted stars. However, we note that memoryless algorithms are not competitive in arbitrary metric spaces, even for $k = 2$; this was shown by Chrobak and Sgall [12] and by Koutsoupias and Taylor [19] independently. We conclude with some side remarks.
Metrics with two points.
In our lower bound instance from Section 3 we require that all metric spaces have at least 3 points. We observe that this is necessary: if all metric spaces have 2 points, the Harmonic Algorithm is $O(2^k)$-competitive, thus a lower bound of order $k \cdot \alpha_k = \Theta(k!)$ cannot be achieved. The underlying reason is the following: in the Harmonic Chain described in Section 2, while being at state $d$ (i.e., having $d$ servers different from the adversary), the algorithm moves to state $d-1$ with probability $1/k$ and remains in the same state with probability $(d-1)/k$. This happens because if all metrics have at least 3 points, then given the algorithm’s configuration $B$ and the adversary’s configuration $A$, we can construct a request $r$ such that $r_i \ne B(i)$ and $r_i \ne A(i)$ in all but one of the metric spaces where $B$ and $A$ differ. However, if all metrics have only 2 points, the algorithm moves only for $r = \bar{B}$ (i.e., $r$ is the algorithm’s anti-configuration), and then $r_i = A(i)$ in every metric where $B$ and $A$ differ; hence if the algorithm moves in any of these $d$ metrics, it reduces the number of different servers to $d-1$. Thus, the Markov Chain used to analyse this instance becomes the following: from state $d$, the walk moves to state $d-1$ with probability $d/k$ and to state $d+1$ with probability $(k-d)/k$.
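For this modified chain, the expected time to reach state 0 from state 1 can be computed exactly via the standard backward recursion on the differences $h_d - h_{d-1}$; it works out to $2^k - 1$, which is where the $O(2^k)$ behaviour comes from. A sketch in exact arithmetic (the function name is ours):

```python
from fractions import Fraction

def eet_from_one_two_point(k):
    """EET h_1 of the birth-death chain with down-probability d/k and
    up-probability (k-d)/k (the two-point-metrics chain)."""
    # delta_d = h_d - h_{d-1} satisfies delta_k = 1/q_k and
    # delta_d = 1/q_d + (p_d/q_d) * delta_{d+1}.
    delta = Fraction(1)  # delta_k: q_k = k/k = 1
    for d in range(k - 1, 0, -1):
        q = Fraction(d, k)
        p = Fraction(k - d, k)
        delta = 1 / q + (p / q) * delta
    return delta  # h_1 = delta_1
```

Since $2^k - 1$ is exponentially smaller than $k \cdot \alpha_k = \Theta(k!)$, a factorial lower bound is indeed impossible in this regime.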
Randomized Algorithms with Memory.
References
 [1] Dimitris Achlioptas, Marek Chrobak, and John Noga. Competitive analysis of randomized paging algorithms. Theor. Comput. Sci., 234(1-2):203–218, 2000.
 [2] Nikhil Bansal, Niv Buchbinder, Aleksander Madry, and Joseph Naor. A polylogarithmic-competitive algorithm for the k-server problem. J. ACM, 62(5):40, 2015.
 [3] Nikhil Bansal, Marek Eliáš, and Grigorios Koumoutsos. Weighted k-server bounds via combinatorial dichotomies. In 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 493–504, 2017.
 [4] Nikhil Bansal, Marek Eliáš, Grigorios Koumoutsos, and Jesper Nederlof. Competitive algorithms for generalized k-server in uniform metrics. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 992–1001, 2018.
 [5] Yair Bartal and Eddie Grove. The harmonic k-server algorithm is competitive. J. ACM, 47(1):1–15, 2000.
 [6] Shai Ben-David, Allan Borodin, Richard M. Karp, Gábor Tardos, and Avi Wigderson. On the power of randomization in online algorithms. Algorithmica, 11(1):2–14, 1994.
 [7] Marcin Bienkowski, Lukasz Jez, and Pawel Schmidt. Slaying hydrae: Improved bounds for generalized k-server in uniform metrics. In 30th International Symposium on Algorithms and Computation, ISAAC 2019, pages 14:1–14:14, 2019.
 [8] Allan Borodin and Ran ElYaniv. Online computation and competitive analysis. Cambridge University Press, 1998.

 [9] Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry. k-server via multiscale entropic regularization. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 3–16, 2018.
 [10] Niv Buchbinder, Anupam Gupta, Marco Molinaro, and Joseph (Seffi) Naor. k-servers with a smile: Online algorithms via projections. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, pages 98–116, 2019.
 [11] Ashish Chiplunkar and Sundar Vishwanathan. Randomized memoryless algorithms for the weighted and the generalized k-server problems. ACM Trans. Algorithms, 16(1):14:1–14:28, 2020.
 [12] Marek Chrobak and Jiří Sgall. The weighted 2-server problem. Theor. Comput. Sci., 324(2-3):289–312, 2004.
 [13] Don Coppersmith, Peter Doyle, Prabhakar Raghavan, and Marc Snir. Random walks on weighted graphs and applications to online algorithms. J. ACM, 40(3):421–453, 1993.
 [14] Amos Fiat and Moty Ricklin. Competitive algorithms for the weighted server problem. Theor. Comput. Sci., 130(1):85–99, 1994.
 [15] Charles M. Grinstead and J. Laurie Snell. Introduction to Probability. AMS, 2003.
 [16] Kazuo Iwama and Kouki Yonezawa. Axis-bound CNN problem. IEICE TRANS, pages 1–8, 2001.
 [17] Elias Koutsoupias. The k-server problem. Computer Science Review, 3(2):105–118, 2009.
 [18] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. J. ACM, 42(5):971–983, 1995.
 [19] Elias Koutsoupias and David Scot Taylor. The CNN problem and other k-server variants. Theor. Comput. Sci., 324(2-3):347–359, 2004.
 [20] James R. Lee. Fusible HSTs and the randomized k-server conjecture. In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, pages 438–449, 2018.
 [21] Mark S. Manasse, Lyle A. McGeoch, and Daniel D. Sleator. Competitive algorithms for server problems. J. Algorithms, 11(2):208–230, 1990.
 [22] Lyle A. McGeoch and Daniel Dominic Sleator. A strongly competitive randomized paging algorithm. Algorithmica, 6(6):816–825, 1991.
 [23] René Sitters. The generalized work function algorithm is competitive for the generalized 2-server problem. SIAM J. Comput., 43(1):96–125, 2014.
 [24] René Sitters, Leen Stougie, and Willem de Paepe. A competitive algorithm for the general 2-server problem. In ICALP, pages 624–636, 2003.
 [25] René A. Sitters and Leen Stougie. The generalized two-server problem. J. ACM, 53(3):437–458, 2006.
 [26] Daniel Dominic Sleator and Robert Endre Tarjan. Amortized efficiency of list update and paging rules. Commun. ACM, 28(2):202–208, 1985.
[27] Zikun Wang and Xiangqun Yang. Birth and Death Processes and Markov Chains. Springer, Berlin; New York, revised edition, 1992. Revised edition of the original Chinese edition.
Appendix A Analysis of the Harmonic Chain (Proof of Theorem 3)
In this part of the Appendix, our main objective is to prove Theorem 3, which states the EET of the Harmonic Chain, a special type of Markov Chain that we use in our analysis. We note that a family of Markov Chains of similar structure, called Birth-Death Chains, has been extensively studied in the literature [27].
Definition 3 (Birth-Death Chain).
A Birth-Death Markov Chain is an important subclass of discrete-time Markov Chains that limits transitions to only adjacent states. Formally, a Markov process with state space $\{0, 1, \dots, n\}$ for some $n \in \mathbb{N}$ is characterized as a Birth-Death Chain if its transition matrix $P$ has the following form:
$$P_{i,i+1} = p_i, \qquad P_{i,i-1} = q_i, \qquad P_{i,i} = r_i = 1 - p_i - q_i,$$
where $q_0 = 0$ and $p_n = 0$ for the endpoints of the chain. A graphical representation of a Birth-Death chain is given in Figure 3.
Furthermore, a Birth-Death Chain will be called absorbing on state $0$ if $r_0 = 1$, which means that the random process will remain on state $0$ if it ever reaches it. As we mentioned, the Harmonic Chain is a special case of Birth-Death Chains. A formal definition is given below:
Definition 4 (Harmonic Chain).
The Harmonic Chain can be defined as a Birth-Death Chain with state space $\{0, 1, \dots, n\}$ for some $n \in \mathbb{N}$, forward probabilities $p_i$ and backward probabilities $q_i$. A graphical representation of a Harmonic Chain is given in Figure 4.
We will compute the Expected Extinction Time (EET) of a Birth-Death Chain starting from an initial state $m$, which is defined as the expected number of transitions needed to reach state $0$ for the first time, starting from state $m$. Formally, the EET of a Birth-Death chain starting from some state $m$ is defined as $\mathrm{EET}(m) = \mathbb{E}[T_m]$, where the random variable $T_m$ is defined as $T_m = \min\{t \geq 0 : X_t = 0\}$ given $X_0 = m$. A closed-form expression for the EET of a Birth-Death Chain is given by the following Theorem:
Theorem 5.
For any Birth-Death Chain with states $\{0, 1, \dots, n\}$ for some $n \in \mathbb{N}$, transition probabilities $p_i, q_i$ and absorption state $0$, the EET starting from an initial state $m$ is given by
$$\mathrm{EET}(m) = \sum_{i=1}^{m} \left( d_1 \prod_{l=1}^{i-1} \frac{q_l}{p_l} \;-\; \sum_{j=1}^{i-1} \frac{1}{p_j} \prod_{l=j+1}^{i-1} \frac{q_l}{p_l} \right),$$
where $d_1 = \mathrm{EET}(1) = \frac{1}{\pi_0} - 1$ and $\pi$ is the stationary distribution of the chain obtained by making state $0$ reflecting (i.e., setting $p_0 = 1$).
While this result is by no means novel, a Birth-Death Chain is usually defined on an infinite state space $\{0, 1, 2, \dots\}$ in the literature, while we study Birth-Death Chains with finite state space $\{0, 1, \dots, n\}$. Thus, for the sake of completeness, we give a formal proof of Theorem 5 in Appendix A.1.
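To make the closed form concrete, here is a small sketch in Python (exact rational arithmetic) that computes the EET in two independent ways: via the solved difference recurrence with $d_1 = 1/\pi_0 - 1$, and via the elementary downward-step recursion $q_i t_i = 1 + p_i t_{i+1}$, where $t_i$ is the expected time to move from state $i$ to state $i-1$. The example chain is an illustrative assumption, not the Harmonic Chain.

```python
from fractions import Fraction

def eet_closed_form(p, q, m):
    """EET(m) for a birth-death chain on {0,...,n} absorbed at 0.

    p[i]/q[i] are the forward/backward probabilities from state i
    (q[0] = 0 and p[n] = 0).  Computes d_1 = 1/pi_0 - 1, where pi is
    the stationary distribution of the chain with state 0 made
    reflecting (p_0 = 1), then sums the solved difference recurrence.
    """
    n = len(p) - 1
    # Stationary weights of the reflecting chain via detailed balance:
    # pi_i = pi_0 * prod_{j=1}^{i} p_{j-1}/q_j  (with p_0 replaced by 1).
    w = [Fraction(1)]
    for i in range(1, n + 1):
        step = Fraction(1) if i == 1 else p[i - 1]
        w.append(w[-1] * step / q[i])
    pi0 = Fraction(1) / sum(w)
    d1 = Fraction(1) / pi0 - 1  # d_1 = EET(1) = E[return time of 0] - 1
    # EET(m) = sum_{i=1}^m d_i with
    # d_i = d_1 * prod_{l<i} q_l/p_l - sum_{j<i} (1/p_j) * prod_{j<l<i} q_l/p_l
    total = Fraction(0)
    for i in range(1, m + 1):
        d_i = d1
        for l in range(1, i):
            d_i *= q[l] / p[l]
        for j in range(1, i):
            corr = Fraction(1) / p[j]
            for l in range(j + 1, i):
                corr *= q[l] / p[l]
            d_i -= corr
        total += d_i
    return total

def eet_by_recursion(p, q, m):
    """Reference answer: t_i = (1 + p_i * t_{i+1}) / q_i, EET(m) = t_1 + ... + t_m."""
    n = len(p) - 1
    t = [Fraction(0)] * (n + 2)
    for i in range(n, 0, -1):
        t[i] = (1 + p[i] * t[i + 1]) / q[i]
    return sum(t[1 : m + 1])

# Illustrative chain on {0, 1, 2, 3} (not the Harmonic Chain).
p = [Fraction(0), Fraction(1, 2), Fraction(1, 3), Fraction(0)]
q = [Fraction(0), Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)]
```

For this chain the two computations coincide for every starting state; for instance, both give $\mathrm{EET}(1) = 14$.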
Proof of Theorem 3
With Theorem 5 stated, we are now ready to prove Theorem 3. We have defined the Harmonic Chain as a Birth-Death Chain with the forward probabilities $p_i$ and backward probabilities $q_i$ of Definition 4. From these probabilities and Theorem 5, it is simple to compute that for any initial state $m$, the EET of the Harmonic Chain is given by
(4) 
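Independently of any closed form, the extinction time of a birth-death chain can be sanity-checked by direct simulation. Below is a minimal Monte Carlo sketch; the probabilities are illustrative assumptions (a two-state chain whose exact EET is easy to compute), not the Harmonic Chain's actual values.

```python
import random

def simulate_eet(p, q, start, trials=20000, seed=0):
    """Monte Carlo estimate of the EET: average number of steps
    until the walk first hits the absorbing state 0."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state, steps = start, 0
        while state != 0:
            u = rng.random()
            if u < p[state]:
                state += 1          # birth: move to state + 1
            elif u < p[state] + q[state]:
                state -= 1          # death: move to state - 1
            steps += 1              # staying put also costs a transition
        total += steps
    return total / trials

# Chain on {0, 1}: from state 1, fall to 0 w.p. 1/2 and stay w.p. 1/2,
# so the extinction time is geometric and the exact EET(1) is 2.
est = simulate_eet([0.0, 0.0], [0.0, 0.5], start=1)
```

With 20000 trials the estimate lands close to the exact value 2, which is a useful cross-check for any analytically derived EET.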
A.1 Analysis of the Birth-Death Chain (Proof of Theorem 5)
Obviously, for the EET of the absorbing state we have $\mathrm{EET}(0) = 0$ by definition. For any other initial state $m \geq 1$, by conditioning on the first transition in the definition of the EET, we get
$$\mathrm{EET}(m) = 1 + q_m \, \mathrm{EET}(m-1) + r_m \, \mathrm{EET}(m) + p_m \, \mathrm{EET}(m+1). \tag{5}$$
This is a second-order recurrence relation that we need to solve in order to compute the EET of a Birth-Death Chain. There are two main points in the proof. Firstly, by solving the equations on the differences $d_m = \mathrm{EET}(m) - \mathrm{EET}(m-1)$ instead of on $\mathrm{EET}(m)$, we can reduce the problem to a first-order recurrence relation that is easier to solve. Secondly, a second-order recurrence relation generally needs two initial conditions in order to be solved, and we only know $\mathrm{EET}(0) = 0$. Using well-known results from the literature on Markov Chains, we show how to compute $\mathrm{EET}(1)$ and overcome this technical problem.
By rearranging (5) and using $r_m = 1 - p_m - q_m$, we get
$$p_m \left( \mathrm{EET}(m+1) - \mathrm{EET}(m) \right) = q_m \left( \mathrm{EET}(m) - \mathrm{EET}(m-1) \right) - 1,$$
with $\mathrm{EET}(0) = 0$. For $m \geq 1$, we define $d_m = \mathrm{EET}(m) - \mathrm{EET}(m-1)$ and get:
$$d_{m+1} = a_m d_m + b_m,$$
where $a_m = \frac{q_m}{p_m}$ and $b_m = -\frac{1}{p_m}$. This is a first-order non-homogeneous recurrence relation with variable coefficients that yields the solution
$$d_{m+1} = d_1 \prod_{i=1}^{m} a_i + \sum_{j=1}^{m} b_j \prod_{i=j+1}^{m} a_i.$$
Finally, by substituting $a_i = \frac{q_i}{p_i}$ and $b_j = -\frac{1}{p_j}$, and by using the telescoping property $\mathrm{EET}(m) = \sum_{i=1}^{m} d_i$ (which follows from $\mathrm{EET}(0) = 0$), we get that for any $m \geq 1$:
$$\mathrm{EET}(m) = \sum_{i=1}^{m} \left( d_1 \prod_{l=1}^{i-1} \frac{q_l}{p_l} \;-\; \sum_{j=1}^{i-1} \frac{1}{p_j} \prod_{l=j+1}^{i-1} \frac{q_l}{p_l} \right). \tag{6}$$
It remains to determine the value of $d_1 = \mathrm{EET}(1)$. Notice that if we set $p_0 = 1$ (i.e., state $0$ always transitions to state $1$ instead of being absorbing), the EET of the chain won't change, since we are only interested in the transitions until state $0$ is reached for the first time. However, if $p_0 = 1$ and $R_0$ denotes the return time of state $0$ (i.e., the time it takes to return to state $0$ starting from state $0$), it holds that
$$\mathbb{E}[R_0] = 1 + \mathrm{EET}(1). \tag{7}$$
For Markov Chains, it is known [15] that the Expected Return Time of a state $i$ is given by $\mathbb{E}[R_i] = \frac{1}{\pi_i}$, where $\pi$ is the stationary distribution of the Markov Chain. The stationary distribution of a Birth-Death Chain is known to satisfy the detailed balance equations
$$\pi_i \, p_i = \pi_{i+1} \, q_{i+1}, \qquad 0 \leq i < n.$$
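The balance equations determine the stationary distribution in product form. The sketch below builds that product-form vector and verifies that it is indeed a fixed point of the transition matrix; the example probabilities are illustrative assumptions.

```python
from fractions import Fraction

def stationary_birth_death(p, q):
    """Product-form stationary distribution from detailed balance:
    pi_i * p_i = pi_{i+1} * q_{i+1}, normalized to sum to 1."""
    n = len(p) - 1
    w = [Fraction(1)]
    for i in range(1, n + 1):
        w.append(w[-1] * p[i - 1] / q[i])
    s = sum(w)
    return [x / s for x in w]

def is_fixed_point(pi, p, q):
    """Check pi P = pi, where P has p_i above the diagonal, q_i below,
    and r_i = 1 - p_i - q_i on the diagonal."""
    n = len(p) - 1
    for i in range(n + 1):
        inflow = (1 - p[i] - q[i]) * pi[i]
        if i > 0:
            inflow += p[i - 1] * pi[i - 1]
        if i < n:
            inflow += q[i + 1] * pi[i + 1]
        if inflow != pi[i]:
            return False
    return True

# Illustrative ergodic chain on {0, 1, 2} (q_0 = 0, p_2 = 0).
p = [Fraction(1, 2), Fraction(1, 3), Fraction(0)]
q = [Fraction(0), Fraction(1, 4), Fraction(1, 2)]
pi = stationary_birth_death(p, q)
```

Working in exact rationals makes the fixed-point check an equality test rather than a floating-point comparison.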