## 1. Introduction

The processor sharing (PS) policy is a well-known service discipline, introduced in the 1960s by Kleinrock [9] in the performance evaluation of computer networks (see also [10]). Under this service discipline, jobs in the queue are served in an egalitarian way: if there are $n$ jobs in the queue, each job receives the fraction $1/n$ of the server capacity. The M/M/1-PS queue has been studied in the queuing literature by several authors, see for instance [13, 15, 16], who computed the Laplace transform of the sojourn time of a tagged customer conditioned on its service time. For the specific case of the M/M/1 queue, it is possible to obtain an explicit expression for the sojourn time distribution of an arbitrary customer [3, 12]. In [6], an orthogonal structure involving Pollaczek polynomials was introduced to solve the infinite differential system satisfied by the vector composed of the distributions of the sojourn time of a customer conditioned on the number of customers in the system upon arrival. The M/M/1-PS system has since been further extended to account for permanent customers [2, 8, 17].

From a practical point of view, the processor sharing discipline has gained renewed interest in the study of resource sharing in the Internet. As a matter of fact, the PS discipline can be used to model how TCP connections share the bandwidth of a bottleneck in a packet network [11]. More recently, in the context of cloud platforms, processor sharing can reflect how the capacity of a multi-core platform is shared among several tenants [14].

In this paper, instead of directly computing the distribution of the sojourn time of a customer, we study the numbers of arrivals and of departures seen by a tagged customer while it is in service. We explicitly compute the distributions of these two quantities in the stationary regime, and we notably establish that they are equal in distribution. As a byproduct, we can recover the distribution of the sojourn time of the tagged customer in the queue. To compute these distributions, we use the orthogonal structure underlying the M/M/1-PS queue, namely Pollaczek polynomials and their associated orthogonality measure [6].

The knowledge of the distribution of the number of departures allows us to easily test the accuracy of the following approximation: since at each departure all customers have an equal chance of leaving the system, because of the memoryless property of the exponential distribution, we can suppose that when a tagged customer enters the queue, this customer is picked uniformly at random among the customers served in the residual busy period (i.e., the busy period starting at the arrival epoch of the tagged customer and ending when the queue empties). Under this assumption, we compute the distribution of the number of customers leaving the system before the tagged customer is picked. Even if this approximation seems rough at first glance, numerical results show that it yields a reasonable upper bound for the number of departures seen by the tagged customer. The motivation for this approximation is to develop a method for approximating the sojourn time of a batch in a PS queue with batch arrivals. While the Laplace transform of the sojourn time of an individual job has been computed in [7] when batch sizes are geometrically distributed, obtaining results for the sojourn time of an entire batch is much more challenging.
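This approximation lends itself to a direct simulation check. The sketch below is our own illustration, not the paper's computation: it assumes Poisson arrivals of rate `lam < 1` and a unit service rate equally shared among the customers present, and all function names are ours. It estimates, for a tagged customer finding `n` customers, the mean number of departures it actually sees, and the mean of the approximate count obtained by picking the tagged customer uniformly among the customers served in the residual busy period.

```python
import random

def departures_seen(n, lam, rng):
    """Exact model: run the event chain of the PS queue and count the
    departures of other customers before the tagged customer leaves."""
    deps = 0
    while True:
        if rng.random() < lam / (1 + lam):      # next event is an arrival
            n += 1
        elif rng.random() < 1 / (n + 1):        # departure of the tagged customer
            return deps
        else:                                   # departure of another customer
            n -= 1
            deps += 1

def approx_departures(n, lam, rng):
    """Approximation: count the customers served in the residual busy period,
    then pick the tagged customer's rank uniformly among them."""
    served, m = 0, n + 1                        # tagged customer plus n others
    while m > 0:
        if rng.random() < lam / (1 + lam):
            m += 1
        else:
            m -= 1
            served += 1
    return rng.randrange(served)                # departures before the tagged one

rng = random.Random(0)
lam, n, trials = 0.5, 2, 20000
exact = sum(departures_seen(n, lam, rng) for _ in range(trials)) / trials
approx = sum(approx_departures(n, lam, rng) for _ in range(trials)) / trials
print(exact, approx)
```

Comparing the two printed means gives a quick idea of how tight the approximation is for a given load.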

The organization of this paper is as follows. In Section 2, we introduce the model and the various random variables; in particular, these random variables are related to an absorbed discrete-time Markov chain, whose transition matrix induces a self-adjoint operator in an ad-hoc Hilbert space. The spectral properties of this operator are studied in Section 3, and the distributions of the random variables are computed in Section 4. Numerical results are presented in Section 5. Further research directions are discussed in Section 6.

## 2. Model description

We consider a classical processor sharing queue with arrival rate $\lambda$ and unit service rate. We assume that a tagged customer arrives at the queue at time 0 and that there are $n$ customers in the queue upon the arrival of this tagged customer. We introduce the discrete-time process describing the number of customers in the queue other than the tagged one, observed at the departure and arrival epochs of customers. When the tagged customer completes its service, the process is absorbed in an additional absorbing state. The index of the absorption step is thus the number of departures from and arrivals at the queue before the process gets absorbed. The initial state of the process is $n$, since we assume that there are $n$ customers in the queue upon the arrival of the tagged customer.
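As a quick sanity check on this construction, the following sketch (our own notation; it assumes Poisson arrivals of rate `lam` and a unit service rate equally shared, as in the model above) simulates one trajectory of the absorbed chain and verifies an elementary counting identity: the number of customers left behind equals the initial number plus arrivals minus the departures of other customers.

```python
import random

def simulate_tagged(n0, lam, rng):
    """Simulate the event chain seen by a tagged customer arriving with n0
    customers present.  At each step an arrival occurs with probability
    lam/(1+lam); otherwise a departure occurs, and it is the tagged customer
    with probability 1/(n+1) when n other customers are present.  Returns
    (arrivals, other_departures, customers_left_at_absorption)."""
    n, arrivals, departures = n0, 0, 0
    while True:
        if rng.random() < lam / (1 + lam):
            n += 1
            arrivals += 1
        elif rng.random() < 1 / (n + 1):
            return arrivals, departures, n      # absorption: tagged customer leaves
        else:
            n -= 1
            departures += 1

rng = random.Random(1)
a, d, left = simulate_tagged(3, 0.7, rng)
# Counting identity: customers left = initial customers + arrivals - departures.
print(a, d, left, left == 3 + a - d)
```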

The state space of the process is the set of nonnegative integers augmented with the absorbing state, and the process is a discrete-time Markov chain. Its non-null transition probabilities are as follows: since the total event rate is $1+\lambda$ in every state (arrivals occur at rate $\lambda$ and the unit service capacity is equally shared among the customers present), the chain jumps from state $n$ to state $n+1$ with probability $\lambda/(1+\lambda)$ (an arrival), to state $n-1$ with probability $n/((n+1)(1+\lambda))$ (the departure of a customer other than the tagged one), and to the absorbing state with probability $1/((n+1)(1+\lambda))$ (the departure of the tagged customer).

Let us now consider the sub-matrix $Q$ obtained by deleting from the transition matrix the row and the column associated with the absorbing state, with the convention that the coefficient corresponding to a downward transition from state 0 is null. This matrix is tridiagonal and sub-stochastic, and gives the transition probabilities of the Markov chain before absorption.

Let $e_n$ be the column vector with all entries equal to 0 except the $n$-th one, which is equal to 1. Then, for $k \ge 0$ and $m \ge 0$, the probability that the chain started at $n$ is in state $m$ after $k$ steps without having been absorbed is $e_n^{\top} Q^k e_m$, where $e_n^{\top}$ is the row vector equal to the transpose of the column vector $e_n$. The probability that the tagged customer leaves the system at stage $k+1$ and leaves $m$ customers in the queue is hence equal to $e_n^{\top} Q^k e_m \, a_m$, where $a_m$ denotes the absorption probability from state $m$, that is, the defect of the $m$-th row of $Q$.

Let $D$ denote the number of customers left in the system by the tagged customer upon service completion. We have, for $m \ge 0$,

$$
\mathbb{P}(D = m) \;=\; \sum_{k \ge 0} e_n^{\top} Q^k e_m \, a_m \;=\; \big[(I - Q)^{-1}\big]_{n,m}\, a_m . \qquad (1)
$$
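Numerically, the total mass of the distribution in Equation (1) can be checked on a truncated version of the matrix. In the sketch below, the transition coefficients are those assumed in the model description (arrival rate `lam`, unit shared service rate), and the truncation level `K` is our own numerical device.

```python
import numpy as np

lam, K, n0 = 0.5, 120, 2

# Truncated tridiagonal sub-stochastic matrix on states 0..K: up-jump (arrival)
# with probability lam/(1+lam), down-jump (departure of another customer)
# with probability n/((n+1)(1+lam)).
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam / (1 + lam)
    if n > 0:
        Q[n, n - 1] = n / ((n + 1) * (1 + lam))

# Absorption probability from state n (departure of the tagged customer).
a = np.array([1.0 / ((n + 1) * (1 + lam)) for n in range(K + 1)])

# P(absorption at stage k+1, leaving m customers) = (Q^k)[n0, m] * a[m];
# summing over all k and m must give total mass 1, up to truncation error.
v = np.zeros(K + 1)
v[n0] = 1.0
total = 0.0
for _ in range(800):
    total += float(v @ a)
    v = v @ Q
print(total)
```

For a stable queue (`lam < 1`) the mass escaping above the truncation level is negligible, so the printed total is very close to 1.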

Similarly, the distribution of the time at which the tagged customer leaves the system, as well as the joint distribution of this departure time and of the number of customers left behind, can be expressed in terms of the powers of the same sub-stochastic matrix.

Let $N^+$ and $N^-$ respectively denote the number of arrivals (excluding the tagged customer) and the number of departures while the tagged customer is in the system. Assume that there are $n$ customers in the system at the arrival time of the tagged customer and $m$ customers in the system upon service completion of the tagged customer. Each arrival increases the occupancy by one and each departure decreases it by one, so that

$$
N^+ - N^- = m - n .
$$

In view of Equation (1), we see that, for characterizing the distributions of the various random variables introduced above, we have to compute the resolvent of the infinite sub-stochastic matrix describing the chain before absorption, as well as the powers of this matrix. For this purpose, we prove in the next section that this matrix induces a self-adjoint operator in an ad-hoc Hilbert space.
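Concretely, on a truncated version of the matrix (same assumed coefficients as above: arrival rate `lam`, unit shared service rate; the truncation level `K` is ours), the resolvent amounts to a single linear solve and directly yields the distribution of the number of customers left behind:

```python
import numpy as np

lam, K, n0 = 0.5, 200, 2

Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam / (1 + lam)
    if n > 0:
        Q[n, n - 1] = n / ((n + 1) * (1 + lam))
a = np.array([1.0 / ((n + 1) * (1 + lam)) for n in range(K + 1)])

# Row n0 of the resolvent (I - Q)^{-1} gives the expected number of visits to
# each state before absorption; e_{n0}^T (I - Q)^{-1} solves (I - Q)^T x = e_{n0}.
e = np.zeros(K + 1)
e[n0] = 1.0
visits = np.linalg.solve((np.eye(K + 1) - Q).T, e)

# Distribution of the number of customers left behind by the tagged customer.
dist = visits * a
print(dist[:5], dist.sum())
```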

## 3. Spectral properties of the matrix

### 3.1. Self-adjointness properties

We introduce the same real Hilbert space as in [6]: the space of real sequences that are square-summable with respect to a positive weight sequence determined by the reversibility measure of the Markov chain. This Hilbert space is equipped with the corresponding weighted scalar product and norm. The infinite matrix induces in this space an operator that we denote in the same way. By using the same arguments as in [6], we can easily prove the following lemma, where we use the operator norm, defined as the supremum of the norms of the images of unit vectors; by definition, the operator is bounded if this norm is finite.

###### Lemma 1.

The operator is symmetric and bounded in this Hilbert space, and hence self-adjoint.

###### Proof.

The symmetry of the operator is straightforward: the reversibility property of the chain makes the matrix symmetric with respect to the weighted scalar product. For any vector of unit norm, the Schwarz inequality applied to each component of the image shows that the image has finite norm, uniformly in the chosen vector. This implies that the operator is bounded, and a bounded symmetric operator is self-adjoint. ∎
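For completeness, the symmetry computation can be written out for a generic matrix $Q = (q_{ij})$ that is reversible with respect to a positive sequence $(\pi_i)$ defining the weighted scalar product; the notation here is generic, the actual weights being those of [6]:

```latex
\langle Qx,\, y\rangle
  = \sum_{i\ge 0} \pi_i \Big(\sum_{j\ge 0} q_{ij}\, x_j\Big) y_i
  = \sum_{i,j\ge 0} \pi_i\, q_{ij}\, x_j\, y_i
  = \sum_{i,j\ge 0} \pi_j\, q_{ji}\, x_j\, y_i
  = \langle x,\, Qy\rangle ,
```

where the middle step uses the detailed balance (reversibility) relation $\pi_i q_{ij} = \pi_j q_{ji}$.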

### 3.2. Spectrum

The spectrum of the operator is defined as the set of real numbers $\theta$ such that the operator minus $\theta$ times the identity is not boundedly invertible. Since the operator is self-adjoint with norm at most 1 (the matrix being sub-stochastic and reversible), we know that the spectrum is a closed subset of $[-1, 1]$.

Let us consider a sequence $x = (x_n)_{n \ge 0}$ such that $Qx = \theta x$ for some real number $\theta$, where $Q$ denotes the sub-stochastic matrix of Section 2 and $\lambda$ the arrival rate. By setting without loss of generality $x_0 = 1$ and $x_{-1} = 0$, we have for $n \ge 0$

$$
\frac{\lambda}{1+\lambda}\, x_{n+1} + \frac{n}{(n+1)(1+\lambda)}\, x_{n-1} = \theta\, x_n , \qquad (2)
$$

which can be rewritten as a three-term recurrence relation for a family of orthogonal polynomials in the variable $\theta$.

Let us introduce the Pollaczek polynomials, defined for a real argument and the relevant parameters by a three-term recursion together with its standard initial conditions (see [6] for the precise definition). It is then easily checked that the components of any solution of the above recurrence are expressed in terms of these polynomials evaluated at the spectral variable.

In the following, we introduce the vectors whose components are given by these polynomials, together with a trigonometric parametrization of the spectral variable.

The Pollaczek polynomials admit an explicit generating function, valid in a disc around the origin; they are orthogonal with respect to a weight function supported by a compact interval, so that

(3)

where $\delta_{m,n}$ is the Kronecker symbol.

It follows that the polynomials are orthogonal with respect to the measure given by

(4)

or, equivalently, by the corresponding density in the trigonometric parametrization, and satisfy

(5)

The polynomials also admit an explicit generating function, valid for arguments of sufficiently small modulus, and this generating function satisfies a first-order differential equation. It is worth noting that, for a real argument,

(6)

In particular,

(7)

With the above observations, we state the following lemma.

###### Lemma 2.

The operator has a purely continuous spectrum, with spectral measure defined by Equation (4).

## 4. Computation of distributions

Let us first consider the number of customers left in the system by the tagged customer and the time at which it leaves the system.

###### Proposition 1.

The distribution of the number of jobs in the system upon service completion of the tagged job, conditionally on the number of jobs in the system upon the arrival of the tagged job, is given by

(8)

Moreover,

(9)

and

(10)

###### Proof.

It is worth noting that the entries of the resolvent appearing in Equation (1) can be expressed by means of the spectral measure of the self-adjoint operator introduced in Section 3; carrying out the corresponding integration yields Equation (8), and Equations (9) and (10) follow. By introducing the Laplace transform of the sojourn time of a tagged job entering the PS queue while a given number of jobs are in service, we recover from [6] the known expression for this transform, as expected. This shows in particular that, for all admissible states,

(11)

∎

By using the above computations, we have the following corollary.

###### Corollary 1.

When the tagged job enters the PS queue in the stationary regime, the number of jobs left in the system upon the service completion of the tagged customer has the distribution

(12)

The generating function of the random variable giving the time at which the tagged customer leaves the system is given by

(13)

###### Proof.

Equation (12) is consistent with the fact that, in the stationary regime, the distribution of the queue occupancy seen by departing customers is the same as the distribution seen by arriving customers (equal to the stationary distribution, owing to the PASTA property); this is a classical result in queuing theory. Moreover, the sojourn time of the tagged customer is the sum of the inter-event times elapsed before its departure; since the total event rate is $1+\lambda$ in every state ($\lambda$ for arrivals plus the unit total service rate), these inter-event times form a sequence of independent and identically distributed exponential random variables with mean $1/(1+\lambda)$. It follows that the Laplace transform of the sojourn time is obtained by evaluating the generating function (13) at the Laplace transform of one inter-event time. By inverting the Laplace transform we eventually obtain the sojourn time distribution, which corresponds to Equation (15) in [6].
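The composition argument can be checked by simulation. In the sketch below (our notation; Poisson arrivals of rate `lam` and a unit shared service rate are assumed), the empirical Laplace transform of the sojourn time is compared with the empirical generating function of the number of steps evaluated at $(1+\lambda)/(1+\lambda+s)$, which is the Laplace transform of one exponential inter-event time.

```python
import math
import random

def sojourn_and_steps(n, lam, rng):
    """One trajectory: returns the sojourn time of the tagged customer and the
    number of event steps; inter-event times are iid Exp(1 + lam)."""
    t, steps = 0.0, 0
    while True:
        t += rng.expovariate(1 + lam)
        steps += 1
        if rng.random() < lam / (1 + lam):
            n += 1                              # arrival
        elif rng.random() < 1 / (n + 1):
            return t, steps                     # departure of the tagged customer
        else:
            n -= 1                              # departure of another customer

rng = random.Random(42)
lam, s, trials = 0.5, 0.3, 50000
r = (1 + lam) / (1 + lam + s)                   # Laplace transform of Exp(1+lam) at s
lt = gf = 0.0
for _ in range(trials):
    t, k = sojourn_and_steps(2, lam, rng)
    lt += math.exp(-s * t)                      # estimator of E[exp(-s T)]
    gf += r ** k                                # estimator of E[r^K]
print(lt / trials, gf / trials)                 # the two estimates agree
```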

###### Corollary 2.

The number of arrivals at the queue while the tagged job is in service has the distribution

(14)

and the number of departures over the same period has the same distribution (equality in distribution).

###### Proof.

Let us fix an integer value for the number of arrivals. The corresponding probability can be written as an integral of a polynomial expression against the spectral measure. Since a polynomial of a given degree is orthogonal to all Pollaczek polynomials of higher degree, the orthogonality property yields

(15)

for the relevant range of indices. It follows that the conditional distribution takes an explicit form, and Equation (14) follows by deconditioning on the number of customers found in the system upon arrival.

By using similar arguments, the probability of observing a given number of departures can be written as an integral of the same type; by using Equation (15), we deduce that this probability coincides with the probability of observing the same number of arrivals, and hence the numbers of arrivals and of departures are equal in distribution. ∎
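The equality in distribution can be illustrated numerically. The sketch below is our own check: it assumes Poisson arrivals of rate `lam < 1`, a unit shared service rate, and that the number of customers found by the tagged customer in the stationary regime is geometric with ratio `lam` (the classical stationary occupancy of the queue).

```python
import random

def counts(n, lam, rng):
    """Arrivals and departures of other customers seen by the tagged customer."""
    arrivals = departures = 0
    while True:
        if rng.random() < lam / (1 + lam):
            n += 1
            arrivals += 1
        elif rng.random() < 1 / (n + 1):
            return arrivals, departures         # tagged customer leaves
        else:
            n -= 1
            departures += 1

rng = random.Random(7)
lam, trials = 0.5, 40000
arrs, deps = [], []
for _ in range(trials):
    n = 0                                       # stationary occupancy: geometric(lam)
    while rng.random() < lam:
        n += 1
    a, d = counts(n, lam, rng)
    arrs.append(a)
    deps.append(d)
print(sum(arrs) / trials, sum(deps) / trials)   # close means, as predicted
```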

To conclude this section, let us study the asymptotic behavior of the common distribution of the numbers of arrivals and departures seen by the tagged customer.

###### Proposition 2.

When the number of events tends to infinity, we have

(16)

where

(17)

###### Proof.

The idea of the proof is to derive, for large indices, an asymptotic equivalent of the terms of the distribution, and to use the same technique as in [5] to obtain Equation (16).

By definition, the probability under consideration can be written as an integral against the spectral measure. We split this integral into the contribution of a small neighborhood of the edge of the spectrum and a remainder term. The contribution of the neighborhood is evaluated by using the normalizing condition (3); it follows that, if the neighborhood is chosen sufficiently small, the remainder term is negligible with respect to the main contribution, which yields the announced asymptotic behavior.
