Poly-Symmetry in Processor-Sharing Systems

10/03/2016 · by Thomas Bonald, et al.

We consider a system of processor-sharing queues with state-dependent service rates. These are allocated according to balanced fairness within a polymatroid capacity set. Balanced fairness is known to be both insensitive and Pareto-efficient in such systems, which ensures that the performance metrics, when computable, will provide robust insights into the real performance of the system considered. We first show that these performance metrics can be evaluated with a complexity that is polynomial in the system size if the system is partitioned into a finite number of parts, so that queues are exchangeable within each part and asymmetric across different parts. This in turn allows us to derive stochastic bounds for a larger class of systems which satisfy less restrictive symmetry assumptions. These results are applied to practical examples of tree data networks, such as backhaul networks of Internet service providers, and computer clusters.


1 Introduction

Systems of processor-sharing queues with state-dependent service rates have been extensively used to model a large variety of real communication and computation systems, like content delivery systems [17, 18], computer clusters [2, 11] and data networks [8, 14]. They are natural models for such real systems as they capture the complex interactions between different jobs while offering the promise of analytically tractable user performance under stochastic loads. Indeed, in the past two decades researchers have been able to obtain explicit performance expressions and bounds for several such systems, see [4, 5, 6, 7, 12, 13, 14, 17, 18].

However, few performance results scale well with the system size. Those that do rely on restrictive assumptions related to the topology or the symmetry of the system [14, 18]. One of the main goals of this paper is to provide scalable performance results for a class of processor-sharing systems which find applications in bandwidth-sharing networks and computer clusters.

One of the key features of processor-sharing systems is the allocation of the service rates per queue in each state. A particular class of resource allocations which is more tractable for performance analysis is characterized by the balance property which constrains the relative gain in the service rate at one queue when we remove a job from another queue. Processor-sharing systems where the resource allocation satisfies this property are called Whittle networks [16]. In particular, if the service rates are constrained by some capacity set, corresponding to the resources of the real system considered, then there exists a unique policy which satisfies the balance property while being efficient, namely balanced fairness [3]. In this paper we focus on systems which are constrained by a polymatroid capacity set [10, 17] and operate under balanced fair resource allocation.

It was proved in [17] that balanced fairness is Pareto-efficient when it is applied in polymatroid capacity sets, which in practice yields explicit recursion formulas for the performance metrics. However, if no further assumptions are made on the structure of the system, the time complexity to compute these metrics is exponential in the number of queues. It was proved in [17] that it can be made linear at the cost of strict assumptions on the overall symmetry of the capacity set and the traffic intensity at each queue. Under symmetry in interaction across queues, it was shown in [18] that the performance is robust to heterogeneity in loads and system configuration under an appropriate scaling regime. However, there is little understanding of performance for scenarios where queues themselves interact in heterogeneous fashion.

In this paper, we consider a scenario where the processor-sharing system is partitioned into a finite number of parts, so that queues are exchangeable within each part and asymmetric across different parts. For such systems, that we call poly-symmetric, we obtain a performance expression with computational complexity which is polynomial in the number of queues. We demonstrate the usefulness of these results by applying them to tree data networks, which are representative of backhaul networks, and to randomly configured heterogeneous computer clusters. In addition, we provide a monotonicity result which allows us to bound the performance of systems whose capacity regions are 'nearly' poly-symmetric.

The paper is organized as follows. Section 2 introduces the model and shows that it applies to real systems as varied as tree data networks and computer clusters. We also recall known facts about balanced fairness. In Section 3, we introduce the notion of poly-symmetry and show that it yields explicit recursion formulas for the performance metrics which have a complexity that is polynomial in the number of queues in the processor-sharing system. Finally, Section 4 gives stochastic bounds to compare the performance of different systems. We conclude in Section 5.

2 System model

2.1 Processor-sharing queueing system with a polymatroid capacity set

We consider a system of $n$ processor-sharing queues with coupled service rates and we denote by $\mathcal{I} = \{1,\ldots,n\}$ the set of queue indices. For each $i \in \mathcal{I}$, jobs enter the system at queue $i$ according to some Poisson process with intensity $\lambda_i$ and have i.i.d. exponential service requirements with mean $\sigma_i$, resulting in a traffic intensity $\rho_i = \lambda_i \sigma_i$ at queue $i$. Jobs leave the system immediately after service completion. Such a queueing system will be called a processor-sharing system throughout the paper.

The system state is described by the vector $x = (x_1,\ldots,x_n)$, where $x_i$ is the number of jobs at queue $i$ for each $i \in \mathcal{I}$. For each state $x$, $\mathcal{A}(x) = \{i \in \mathcal{I} : x_i > 0\}$ denotes the set of active queues in state $x$. Queues have state-dependent service rates. For each state $x$, $\phi(x) = (\phi_1(x),\ldots,\phi_n(x))$ denotes the vector of service rates per queue when the system is in state $x$.

The system is characterized by a capacity set, which is defined as the set of all feasible resource allocations $\phi$. This capacity set may be specified by practical constraints like the capacities of the links in a data network or the service rates of the servers in a computer cluster. We are interested in queueing systems whose capacity set is a particular type of polytope called a polymatroid [10].

Definition 1.

A polytope $\mathcal{C}$ in $\mathbb{R}_+^n$ is a polymatroid if there exists a non-negative function $\mu$ defined on the power set of $\mathcal{I}$ such that

$\mathcal{C} = \Bigl\{\phi \in \mathbb{R}_+^n : \sum_{i \in A} \phi_i \le \mu(A), \ \forall A \subseteq \mathcal{I}\Bigr\}$

and $\mu$ satisfies the following properties:

Normalization: $\mu(\emptyset) = 0$,

Monotonicity: for all $A, B \subseteq \mathcal{I}$, if $A \subseteq B$, then $\mu(A) \le \mu(B)$,

Submodularity: for all $A, B \subseteq \mathcal{I}$, $\mu(A \cup B) + \mu(A \cap B) \le \mu(A) + \mu(B)$.

$\mu$ is called the rank function of the polymatroid $\mathcal{C}$.
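To make the definition concrete, here is a minimal sketch (our own, not from the paper) that checks the three properties of a candidate rank function by exhaustive enumeration over the power set; the function name `is_rank_function`, the dictionary encoding of $\mu$, and the numeric capacities in the example are all illustrative choices.

```python
# Check that a candidate rank function mu, given as a dict mapping
# frozensets to values, satisfies Definition 1 by brute-force enumeration.
from itertools import chain, combinations

def powerset(ground):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]

def is_rank_function(mu, ground):
    subsets = powerset(ground)
    if mu[frozenset()] != 0:                                  # normalization
        return False
    for a in subsets:
        for b in subsets:
            if a <= b and mu[a] > mu[b]:                      # monotonicity
                return False
            if mu[a | b] + mu[a & b] > mu[a] + mu[b] + 1e-9:  # submodularity
                return False
    return True

# Example in the spirit of Figure 1(d), with hypothetical capacities
# mu({1}) = mu({2}) = 2 and mu({1,2}) = 3.
mu = {frozenset(): 0, frozenset({1}): 2, frozenset({2}): 2, frozenset({1, 2}): 3}
print(is_rank_function(mu, {1, 2}))  # True
```

Note that this check visits all pairs of subsets and is therefore exponential in $n$; it is meant for small illustrative instances only.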

Figure 1: A tree data network and a computer cluster with their representation as a processor-sharing system with two queues. (a) User routes of the tree data network. (b) Assignment graph of the cluster, with job classes 1, 2 and servers 1, 2, 3. (c) Equivalent processor-sharing system. (d) Capacity set.

Before we specify the resource allocation, we give two examples of real systems that fit into this model.

2.2 Tree data networks

The first example is a data network with a tree topology [6], representative of backhaul networks of Internet service providers. There are $n$ users that can generate flows in parallel and we denote by $\mathcal{I} = \{1,\ldots,n\}$ the set of user indices. For any $i \in \mathcal{I}$, user $i$ generates data flows according to some Poisson process with intensity $\lambda_i$ that is independent of the other users. All flows generated by user $i$ follow the same route in the network and have i.i.d. exponentially distributed sizes with mean $\sigma_i$ in bits, resulting in a traffic intensity $\rho_i = \lambda_i \sigma_i$ in bit/s. The state of the network is described by the vector $x = (x_1,\ldots,x_n)$, where $x_i$ is the number of ongoing flows of user $i$, for each $i \in \mathcal{I}$.

We make the following assumptions on the allocation of the resources. The capacity of each link can be divided continuously among the flows that cross it. Also, the resource allocation per flow only depends on the number of flows of each user in progress. In particular, all flows of a user receive the same capacity, so that the per-flow resource allocation is entirely defined in any state $x$ by the total capacity $\phi_i(x)$ allocated to the flows of user $i$, for any $i \in \mathcal{I}$.

Under these assumptions, we can represent the data network by a processor-sharing system with $n$ queues, one per user. For each $i \in \mathcal{I}$, the jobs at queue $i$ in the equivalent processor-sharing system are the ongoing flows of user $i$ in the data network, and the service rate $\phi_i(x)$ of this queue in state $x$ is the total capacity allocated to the flows of user $i$. We will now describe the corresponding capacity set.

Each link can be identified with the set of users whose flows cross it. Specifically, we can describe the network by a family $\mathcal{L}$ of subsets of $\mathcal{I}$, where a set $L$ is in $\mathcal{L}$ if and only if there is a link crossed by the flows of exactly the users $i \in L$. We assume that the network is a tree in the following way.

Definition 2.

The network is called a tree if for all $L, L' \in \mathcal{L}$, $L \cap L' \neq \emptyset$ implies that $L \subseteq L'$ or $L' \subseteq L$.

There is no loss of generality in assuming that $\mathcal{I} \in \mathcal{L}$, for if not, the network is a forest where each subtree can be considered independently. For each $L \in \mathcal{L}$, we denote by $C_L$ the capacity in bit/s of link $L$. We assume that all links are constraining since otherwise we can simply ignore the non-constraining ones. The resource allocation must then satisfy the capacity constraints

$\sum_{i \in L} \phi_i \le C_L, \quad \forall L \in \mathcal{L}, \qquad (1)$

so that the capacity set is given by

$\mathcal{C} = \Bigl\{\phi \in \mathbb{R}_+^n : \sum_{i \in L} \phi_i \le C_L, \ \forall L \in \mathcal{L}\Bigr\}.$

Example 1.

Figures 1(a), 1(c) and 1(d) give the example of a tree data network with $n = 2$ users. The routes of the users are given in Figure 1(a). The flows of each user cross one link that is individual and another that is shared by both users. The representation of this data network as a processor-sharing system is given in Figure 1(c) and the corresponding capacity set is given in Figure 1(d). It is easy to see that it is a polymatroid for any value of the link capacities.

The following theorem generalizes this last remark to any tree data network.

Theorem 1.

The capacity set $\mathcal{C}$ of a tree data network is a polymatroid with rank function $\mu$ defined by

$\mu(A) = \min\Bigl\{\sum_{L \in \mathcal{F}} C_L : \mathcal{F} \subseteq \mathcal{L}, \ A \subseteq \bigcup_{L \in \mathcal{F}} L\Bigr\}$

for all non-empty sets $A \subseteq \mathcal{I}$. In addition, we have $\mu(\{i\}) = \min_{L \in \mathcal{L} : i \in L} C_L$ for each $i \in \mathcal{I}$.

Proof.

We can certainly assume that $\mathcal{L}$ contains all the singletons, since letting $C_{\{i\}} = \min_{L \in \mathcal{L} : i \in L} C_L$ for each $i \in \mathcal{I}$ does not modify the capacity set $\mathcal{C}$. We can easily see that the result remains true if we do not make this assumption.

We apply the following lemma which is a direct consequence of Theorems 2.5 and 2.6 of [10] about intersecting-submodular functions on intersecting families of subsets.

Lemma 1.

Let $\mathcal{L}$ be a family of subsets of $\mathcal{I}$ and $(C_L)_{L \in \mathcal{L}}$ be such that, for all $L, L' \in \mathcal{L}$ with $L \cap L' \neq \emptyset$, we have $L \cup L' \in \mathcal{L}$, $L \cap L' \in \mathcal{L}$ and $C_{L \cup L'} + C_{L \cap L'} \le C_L + C_{L'}$. Further assume that $\mathcal{I} \in \mathcal{L}$ and that $\mathcal{L}$ contains all the singletons of $\mathcal{I}$. Then the set of solutions in $\mathbb{R}^n$ of the equations

$\sum_{i \in L} \phi_i \le C_L, \quad \forall L \in \mathcal{L},$

is given by

$\hat{\mathcal{C}} = \Bigl\{\phi \in \mathbb{R}^n : \sum_{i \in A} \phi_i \le \hat{\mu}(A), \ \forall A \subseteq \mathcal{I}\Bigr\},$

where $\hat{\mu}$ is the real-valued, normalized, submodular function defined on the power set of $\mathcal{I}$ by

$\hat{\mu}(A) = \min\Bigl\{\sum_{L \in \mathcal{P}} C_L : \mathcal{P} \subseteq \mathcal{L} \text{ is a partition of } A\Bigr\}.$

The definition of a tree ensures that $\mathcal{L}$ satisfies the assumptions of the lemma: whenever two links $L, L' \in \mathcal{L}$ intersect, one contains the other, so that $L \cup L'$ and $L \cap L'$ are again in $\mathcal{L}$ and the submodularity condition on the capacities holds trivially. Hence, the set of solutions of the capacity constraints (1) in $\mathbb{R}^n$ is

$\hat{\mathcal{C}} = \Bigl\{\phi \in \mathbb{R}^n : \sum_{i \in A} \phi_i \le \hat{\mu}(A), \ \forall A \subseteq \mathcal{I}\Bigr\},$

where $\hat{\mu}$ is the normalized, submodular function given by

$\hat{\mu}(A) = \min\Bigl\{\sum_{L \in \mathcal{P}} C_L : \mathcal{P} \subseteq \mathcal{L} \text{ is a partition of } A\Bigr\}.$

Note that no claim about the monotonicity of $\hat{\mu}$ can be made above because the points in $\hat{\mathcal{C}}$ can have negative components. This is illustrated in Figure 2, where the intersection point of the sides of $\hat{\mathcal{C}}$ corresponding to the sets $\{2\}$ and $\{1,2\}$ has a negative ordinate because $C_{\{1,2\}} < C_{\{2\}}$.

Figure 2: Construction of the capacity set $\mathcal{C}$ of a tree data network in $\mathbb{R}_+^2$ from the set of solutions $\hat{\mathcal{C}}$ of its capacity constraints in $\mathbb{R}^2$

Since the components of a vector of resource allocation are always non-negative, the capacity set of the data network is given by $\mathcal{C} = \hat{\mathcal{C}} \cap \mathbb{R}_+^n$. As we will see, since we restrict ourselves to points with non-negative components, the function $\mu$ which characterizes $\mathcal{C}$ is not only normalized and submodular like $\hat{\mu}$ but also non-decreasing. This is illustrated in Figure 2, which shows that we can replace $\hat{\mu}(\{2\})$ by $\mu(\{2\}) = \min(\hat{\mu}(\{2\}), \hat{\mu}(\{1,2\}))$ to describe the side corresponding to the set $\{2\}$ in $\mathbb{R}_+^2$.

More formally, we prove that $\mathcal{C}$ is equal to the polymatroid $\mathcal{C}'$ with rank function $\mu$ given by

$\mu(A) = \min\{\hat{\mu}(B) : A \subseteq B \subseteq \mathcal{I}\}.$

One can check that this function coincides with the one given in the theorem statement. We first show that $\mu$ is indeed a rank function and then we prove that $\mathcal{C}$ is equal to $\mathcal{C}'$.

The normalization of $\mu$ follows from that of $\hat{\mu}$. Also $\mu$ is non-decreasing by construction. Finally, for all $A, B \subseteq \mathcal{I}$, we have $\mu(A) = \hat{\mu}(A')$ and $\mu(B) = \hat{\mu}(B')$ for some $A', B'$ such that $A \subseteq A'$ and $B \subseteq B'$, and also

$\mu(A) + \mu(B) = \hat{\mu}(A') + \hat{\mu}(B') \ge \hat{\mu}(A' \cup B') + \hat{\mu}(A' \cap B') \ge \mu(A \cup B) + \mu(A \cap B),$

where the first inequality holds by submodularity of $\hat{\mu}$ and the second by definition of $\mu$, since $A \cup B \subseteq A' \cup B'$ and $A \cap B \subseteq A' \cap B'$. Hence $\mu$ is submodular.

We finally prove that $\mathcal{C} = \mathcal{C}'$. It is clear that any vector in $\mathcal{C}'$ is also in $\mathcal{C}$ since $\mu(A) \le \hat{\mu}(A)$ for all $A \subseteq \mathcal{I}$. Conversely, consider $\phi \in \mathcal{C}$. If $\phi$ is not in $\mathcal{C}'$, then there is $A \subseteq \mathcal{I}$ so that $\sum_{i \in A} \phi_i > \mu(A)$, which implies that $\mu(A) < \hat{\mu}(A)$. By definition of $\mu$, it follows that there is $B \subseteq \mathcal{I}$ so that $A$ is a strict subset of $B$ and $\hat{\mu}(B) = \mu(A)$. But then

$\sum_{i \in B \setminus A} \phi_i = \sum_{i \in B} \phi_i - \sum_{i \in A} \phi_i \le \hat{\mu}(B) - \sum_{i \in A} \phi_i < \mu(A) - \mu(A) = 0,$

so that at least one component of $\phi$ is negative. This is a contradiction. ∎

Example 2.

Figure 3: Representation of a tree data network. (a) User routes. (b) Capacity set.

Figure 3 gives the example of a tree data network with its capacity set. The routes of the users are given in Figure 3(a). Each link is labeled with the set of user indices whose flows cross this link. The capacity constraints are those of (1) for the family $\mathcal{L}$ of link labels, and the rank function of the capacity set is the one given by Theorem 1.
2.3 Computer clusters

We consider a cluster of $m$ servers which can be pooled to process jobs in parallel. The set of servers is denoted by $\mathcal{S} = \{1,\ldots,m\}$. There are $n$ classes of jobs and we denote by $\mathcal{I} = \{1,\ldots,n\}$ the set of class indices. For any $i \in \mathcal{I}$, class-$i$ jobs enter the cluster as a Poisson process with intensity $\lambda_i$ and have i.i.d. exponential service requirements with mean $\sigma_i$, resulting in a traffic intensity $\rho_i = \lambda_i \sigma_i$ for class $i$. Jobs leave the cluster immediately after service completion. The state of the cluster is described by the vector $x = (x_1,\ldots,x_n)$, where $x_i$ is the number of jobs of class $i$, for each $i \in \mathcal{I}$.

The class of a job defines the set of servers that can process it. The server assignment is given by a family $(\mathcal{S}_i)_{i \in \mathcal{I}}$ of subsets of $\mathcal{S}$, where $\mathcal{S}_i$ denotes the set of servers that can serve class-$i$ jobs, for each $i \in \mathcal{I}$. Equivalently, the server assignment can be described by a bipartite graph between classes and servers, called the assignment graph of the computer cluster. The service capacity of server $s$ is $C_s$ for each $s \in \mathcal{S}$. For any set $A \subseteq \mathcal{I}$ of job classes, we let

$\mu(A) = \sum_{s \in \bigcup_{i \in A} \mathcal{S}_i} C_s \qquad (2)$

denote the aggregate capacity available for the classes in $A$.

We make the following assumptions on the allocation of the server capacities. Servers can be pooled to process jobs in parallel. When a job is in service on several servers, its service rate is the sum of the rates allocated by each server to this job. We also assume that the capacity of each server can be divided continuously among the jobs it can serve. Finally, the allocation of the service rates per job only depends on the number of jobs of each class in the cluster. In particular, all jobs of a class receive service at the same rate, so that the per-job resource allocation is entirely defined in any state $x$ by the total capacity $\phi_i(x)$ allocated to class-$i$ jobs, for each $i \in \mathcal{I}$.

Under these assumptions, we can describe the evolution of the cluster with a processor-sharing system with $n$ queues, one per class. For each $i \in \mathcal{I}$, queue $i$ contains class-$i$ jobs and its service rate $\phi_i(x)$ in state $x$ is the total capacity allocated to class-$i$ jobs collectively. It was proved in [17] that the capacity set of such a cluster is a polymatroid and that the function $\mu$ defined by (2) is its rank function.

Example 3.

Figure 1(b) gives the assignment graph for an example of a computer cluster, where job classes are on the left and servers are on the right. Server 2 can serve both classes whereas servers 1 and 3 are specialized. The corresponding processor-sharing system with two queues is shown in Figure 1(c) and its capacity set, which is a polymatroid in $\mathbb{R}_+^2$, is depicted in Figure 1(d). The vertical and horizontal sides correspond to the individual constraints of classes 1 and 2, with $\mu(\{1\}) = C_1 + C_2$ and $\mu(\{2\}) = C_2 + C_3$. The diagonal side corresponds to the joint constraint on classes 1 and 2, with $\mu(\{1,2\}) = C_1 + C_2 + C_3$.
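The rank function (2) is easy to compute directly from the assignment graph. The sketch below does so for a cluster shaped like Example 3; the names `cluster_rank`, `S` and `c`, and the unit capacities are our own illustrative choices.

```python
# Rank function (2) of a computer cluster: mu(A) is the total capacity of
# the servers able to serve at least one class in A.
def cluster_rank(A, S, c):
    servers = set().union(*(S[i] for i in A)) if A else set()
    return sum(c[s] for s in servers)

S = {1: {1, 2}, 2: {2, 3}}          # class -> servers that can serve it
c = {1: 1.0, 2: 1.0, 3: 1.0}        # hypothetical unit server capacities
print(cluster_rank({1}, S, c))      # 2.0, individual constraint of class 1
print(cluster_rank({1, 2}, S, c))   # 3.0, joint constraint
```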

2.4 Balanced fairness

The service rates are allocated by applying balanced fairness [3] in the polymatroid capacity set introduced in Section 2.1.

For each $i \in \mathcal{I}$, let $e_i$ denote the $n$-dimensional vector with $1$ in position $i$ and $0$ elsewhere. Balanced fairness is defined as the only resource allocation that both satisfies the balance property

$\phi_i(x - e_j)\,\phi_j(x) = \phi_j(x - e_i)\,\phi_i(x), \quad \forall x, \ \forall i, j \in \mathcal{A}(x),$

and maximizes the resource utilization in the following sense: in any state $x \neq 0$, $\phi(x) \in \mathcal{C}$ and there exists $A \subseteq \mathcal{I}$ such that

$\sum_{i \in A} \phi_i(x) = \mu(A).$

The balance property ensures that there exists a balance function $\Phi$ on $\mathbb{N}^n$ such that $\Phi(0) = 1$ and

$\phi_i(x) = \frac{\Phi(x - e_i)}{\Phi(x)}, \quad \forall x, \ \forall i \in \mathcal{A}(x).$

The second condition implies that $\Phi$ satisfies the recursion

$\Phi(x) = \max_{A \subseteq \mathcal{A}(x)} \frac{\sum_{i \in A} \Phi(x - e_i)}{\mu(A)}, \quad \forall x \neq 0.$

In [17] it is proved that balanced fairness is Pareto-efficient in polymatroid capacity sets, which means that this maximum is always achieved by the set of active queues:

$\Phi(x) = \frac{\sum_{i \in \mathcal{A}(x)} \Phi(x - e_i)}{\mu(\mathcal{A}(x))}, \quad \forall x \neq 0. \qquad (3)$

Since the balance property is satisfied, the processor-sharing system defined in Section 2.1 is a Whittle network [16]. A stationary measure of the system state is

$\pi(x) = \pi(0)\,\Phi(x)\,\rho^x, \quad \forall x \in \mathbb{N}^n,$

where we use the notation $\rho^x = \prod_{i \in \mathcal{I}} \rho_i^{x_i}$ for any $x \in \mathbb{N}^n$. Substituting (3) into this expression yields

$\pi(x) = \frac{\sum_{i \in \mathcal{A}(x)} \rho_i\, \pi(x - e_i)}{\mu(\mathcal{A}(x))}, \quad \forall x \neq 0.$

It is proved in [3] that the system is stable, in the sense that the underlying Markov process is ergodic, if and only if

$\sum_{i \in A} \rho_i < \mu(A), \quad \forall A \subseteq \mathcal{I}, \ A \neq \emptyset,$

which means that the vector $\rho = (\rho_1,\ldots,\rho_n)$ of traffic intensities belongs to the interior of the capacity set. In the rest of the paper, we assume that this condition is satisfied and we denote by $\pi$ the stationary distribution of the system state.

2.5 Performance metrics

By abuse of notation, for each $A \subseteq \mathcal{I}$, we denote by $\pi(A)$ the stationary probability that the set of active queues is $A$:

$\pi(A) = \sum_{x : \mathcal{A}(x) = A} \pi(x).$

For each $i \in \mathcal{I}$, let $L_i$ denote the mean number of jobs at queue $i$ and, for each $A \subseteq \mathcal{I}$, let $L_i(A)$ denote the mean number of jobs at queue $i$ given that the set of active queues is $A$. By the law of total expectation, we have

$L_i = \sum_{A \subseteq \mathcal{I}} \pi(A)\, L_i(A).$

The following theorem gives a recursive formula for $\pi(A)$ and $L_i(A)$ for any $A \subseteq \mathcal{I}$ and $i \in \mathcal{I}$. It is a restatement of a theorem of [17] using the same ideas as in [18].

Theorem 2.

For each non-empty set $A \subseteq \mathcal{I}$, we have

$\pi(A) = \frac{\displaystyle\sum_{i \in A} \rho_i\, \pi(A \setminus \{i\})}{\displaystyle\mu(A) - \sum_{i \in A} \rho_i}. \qquad (4)$

Let $i \in \mathcal{I}$. For each set $A \subseteq \mathcal{I}$, we have $L_i(A) = 0$ if $i \notin A$, and otherwise

$\pi(A)\,L_i(A) = \frac{\displaystyle\rho_i\bigl(\pi(A) + \pi(A \setminus \{i\})\bigr) + \sum_{j \in A \setminus \{i\}} \rho_j\, \pi(A \setminus \{j\})\, L_i(A \setminus \{j\})}{\displaystyle\mu(A) - \sum_{j \in A} \rho_j}. \qquad (5)$

Observe that (4) allows one to evaluate $\pi(A)$ recursively for each $A \subseteq \mathcal{I}$, from which $\pi(\emptyset)$ can be computed by normalization. Similarly, for each $i \in \mathcal{I}$, (5) allows one to evaluate $L_i(A)$ recursively for each $A \subseteq \mathcal{I}$, from which the value of $L_i$ can be deduced. One could then compute performance metrics like the mean delay or the mean service rate per queue from $L_i$ by applying Little's law. Note that the complexity is exponential in the number of queues.
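The recursions (4) and (5), as reconstructed above, can be implemented verbatim by iterating over all subsets of queues, which makes the exponential complexity explicit. The sketch below is our own illustration; `psi` and `lbar` denote unnormalized versions of $\pi(A)$ and $\pi(A)L_i(A)$.

```python
# Mean number of jobs at queue i via the recursions (4) and (5).
# Exponential in n: every subset of the ground set is visited.
from itertools import chain, combinations
from functools import lru_cache

def subsets(ground):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]

def mean_jobs(i, ground, mu, rho):
    @lru_cache(maxsize=None)
    def psi(A):                                  # unnormalized pi(A), psi(empty) = 1
        if not A:
            return 1.0
        slack = mu(A) - sum(rho[j] for j in A)
        return sum(rho[j] * psi(A - {j}) for j in A) / slack

    @lru_cache(maxsize=None)
    def lbar(A):                                 # unnormalized pi(A) * L_i(A)
        if i not in A:
            return 0.0
        slack = mu(A) - sum(rho[j] for j in A)
        acc = rho[i] * (psi(A) + psi(A - {i}))
        acc += sum(rho[j] * lbar(A - {j}) for j in A if j != i)
        return acc / slack

    Z = sum(psi(A) for A in subsets(ground))     # normalization constant
    return sum(lbar(A) for A in subsets(ground)) / Z

# Single-queue sanity check: mu(A) = 1 and rho = 0.5 give the M/M/1 value
# rho / (1 - rho) = 1.
print(mean_jobs(0, frozenset({0}), lambda A: 1.0, {0: 0.5}))
```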

3 Poly-symmetry

3.1 Definition

The exponential complexity of the formulas of Theorem 2 makes them impractical when we want to predict the performance of large-scale systems. To cope with this, we introduce the notion of poly-symmetry, which allows us to obtain formulas with a complexity that is polynomial in the number of queues, at the cost of some regularity assumptions on the capacity set and the traffic intensity at each queue. Poly-symmetry is a generalization of the notion of symmetry which was considered in [17, 18].

The following definition will be used subsequently to introduce poly-symmetry. It is easy to check that it defines an equivalence relation on the set of indices.

Definition 3.

Let $\mathcal{C}$ be a polymatroid on $\mathbb{R}_+^n$ and denote its rank function by $\mu$. Let $i, j \in \mathcal{I}$ with $i \neq j$. We say that indices $i$ and $j$ are exchangeable in $\mathcal{C}$ if

$\mu(A \cup \{i\}) = \mu(A \cup \{j\}), \quad \forall A \subseteq \mathcal{I} \setminus \{i, j\}.$

As the name suggests, two indices are exchangeable if and only if exchanging these indices does not modify the capacity set. Note that the exchangeability of two indices $i$ and $j$ implies that they have the same individual constraints, i.e. $\mu(\{i\}) = \mu(\{j\})$. The reverse implication is not true when $n > 2$, as we will see in the following example.

Example 4.

Figure 4: Computer cluster with two exchangeable indices and a third index. (a) Assignment graph with classes 1, 2 and 3. (b) Capacity set.

Consider the computer cluster with the assignment graph depicted in Figure 4(a), where all servers have the same unit capacity. The corresponding polymatroid capacity set is illustrated in Figure 4(b). We have $\mu(\{1\}) = \mu(\{2\})$ and $\mu(\{1,3\}) = \mu(\{2,3\})$, so that indices 1 and 2 are exchangeable. Index 3 is not exchangeable with any of the two other indices because $\mu(\{3\}) = \mu(\{1\})$ while $\mu(\{1,3\}) \neq \mu(\{1,2\})$.

Let us now define poly-symmetry. Suppose $K \le n$ and consider a partition $\Sigma = (\mathcal{I}_k)_{k=1,\ldots,K}$ of $\mathcal{I}$ in $K$ parts.

Definition 4.

Let $\mathcal{C}$ be a polymatroid in $\mathbb{R}_+^n$. $\mathcal{C}$ is called poly-symmetric with respect to partition $\Sigma$ if for any $k \in \{1,\ldots,K\}$, all indices in $\mathcal{I}_k$ are pairwise exchangeable in $\mathcal{C}$.

Since the exchangeability of indices defines an equivalence relation on $\mathcal{I}$, we can consider the quotient set of $\mathcal{I}$ by this relation, which is the partition of $\mathcal{I}$ into the maximal sets of pairwise exchangeable indices. Definition 4 can then be rephrased as follows: a polymatroid $\mathcal{C}$ is poly-symmetric with respect to a partition $\Sigma$ if and only if $\Sigma$ is a refinement of the quotient set of $\mathcal{I}$ by the exchangeability relation in $\mathcal{C}$. It follows directly from the definition that the polymatroid of Example 2 is poly-symmetric with respect to a non-trivial partition when some of the link capacities coincide, as we can see in Figure 3(b). Also in Example 4, the polymatroid is poly-symmetric with respect to partition $\{\{1,2\},\{3\}\}$.
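Definition 3 can be checked mechanically from the rank function. The following sketch (our own) tests exchangeability by enumeration and builds the quotient partition, relying on the fact that exchangeability is an equivalence relation; the rank values in the example are hypothetical and chosen to reproduce the situation of Example 4.

```python
# Exchangeability test: mu(A + i) == mu(A + j) for every A avoiding both
# i and j, and the quotient partition it induces.
from itertools import chain, combinations

def exchangeable(i, j, ground, mu):
    rest = ground - {i, j}
    for A in chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1)):
        if mu(frozenset(A) | {i}) != mu(frozenset(A) | {j}):
            return False
    return True

def quotient(ground, mu):
    parts = []
    for i in sorted(ground):
        for part in parts:
            if exchangeable(i, part[0], ground, mu):  # compare to a representative
                part.append(i)
                break
        else:
            parts.append([i])
    return parts

# Hypothetical rank values: indices 0 and 1 exchangeable, index 2 not.
values = {frozenset(): 0, frozenset({0}): 2, frozenset({1}): 2, frozenset({2}): 2,
          frozenset({0, 1}): 4, frozenset({0, 2}): 3, frozenset({1, 2}): 3,
          frozenset({0, 1, 2}): 5}
print(quotient({0, 1, 2}, lambda A: values[A]))  # [[0, 1], [2]]
```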

For each $k \in \{1,\ldots,K\}$, let $n_k = |\mathcal{I}_k|$ denote the size of part $\mathcal{I}_k$, where by part we mean a subset of the partition. For any $A \subseteq \mathcal{I}$, let $|A|_\Sigma = (|A \cap \mathcal{I}_k|)_{k=1,\ldots,K}$ denote the vector of sizes of each part of $A$ in the partition. The set of these vectors is denoted by

$\mathcal{N} = \{0,\ldots,n_1\} \times \cdots \times \{0,\ldots,n_K\}.$

We now give an alternative definition of poly-symmetry which is equivalent to Definition 4. It is a generalization of the definition of symmetry given in [17, 18]. We will use it to express and prove Theorem 3.

Definition 5.

Let $\mathcal{C}$ be a polymatroid in $\mathbb{R}_+^n$ and denote its rank function by $\mu$. $\mathcal{C}$ is called poly-symmetric with respect to partition $\Sigma$ if for any $A \subseteq \mathcal{I}$, $\mu(A)$ depends on $A$ only through the size of $A \cap \mathcal{I}_k$ for each $k$. Equivalently, there exists a componentwise non-decreasing function $h$ defined on $\mathcal{N}$ such that $\mu(A) = h(|A|_\Sigma)$ for all $A \subseteq \mathcal{I}$. We call $h$ the cardinality rank function of $\mathcal{C}$ with respect to partition $\Sigma$.

Proof of the equivalence. We only prove that Definition 4 implies Definition 5; the reverse implication is clear. For any $A, B \subseteq \mathcal{I}$ with $|A|_\Sigma = |B|_\Sigma$, we can write $A = (A \cap B) \sqcup (A \setminus B)$ and $B = (A \cap B) \sqcup (B \setminus A)$, where $\sqcup$ denotes the union of two disjoint sets. Since we have $|A \setminus B|_\Sigma = |B \setminus A|_\Sigma$, we are thus reduced to proving that $\mu(C \sqcup D) = \mu(C \sqcup D')$ for all disjoint sets $C, D$ and $C, D'$ such that $|D|_\Sigma = |D'|_\Sigma$. This can be done by ascending induction on the cardinality of $D$ and $D'$. ∎

Example 5.

Figure 5: Computer cluster with a polymatroid capacity set which is poly-symmetric with respect to partition $\{\{1,2\},\{3\}\}$. (a) Assignment graph with classes 1, 2 and 3. (b) Capacity set.

Consider the computer cluster with the assignment graph depicted in Figure 5(a), where all servers have the same unit capacity. The corresponding capacity set is illustrated in Figure 5(b). It is poly-symmetric with respect to partition $\{\{1,2\},\{3\}\}$ and the corresponding cardinality rank function $h$ follows from the rank function (2).

3.2 Performance metrics

Let $\Sigma = (\mathcal{I}_k)_{k=1,\ldots,K}$ be a partition of $\mathcal{I}$. We consider a processor-sharing system with a polymatroid capacity set which is poly-symmetric with respect to $\Sigma$. For each $A \subseteq \mathcal{I}$, the vector $|A|_\Sigma$ gives the number of active queues in each part of the partition when the set of active queues is $A$. By abuse of notation, for each $k$, we denote by $e_k$ the vector of $\mathbb{N}^K$ with $1$ in component $k$ and $0$ elsewhere.

As in Section 2.4, the resources are allocated by applying balanced fairness in this capacity set under some vector of traffic intensities which satisfies the stability constraints. For simplicity of notation, for each $t \in \mathcal{N}$, we denote by $\pi(t)$ the probability that the number of active queues in part $\mathcal{I}_k$ is $t_k$ for each $k$:

$\pi(t) = \sum_{A \subseteq \mathcal{I} : |A|_\Sigma = t} \pi(A).$

For each $k$, let $L_k$ denote the mean number of jobs in the queues of part $\mathcal{I}_k$ and, for each $t \in \mathcal{N}$, let $L_k(t)$ denote the mean number of jobs in the queues of part $\mathcal{I}_k$ given that there are $t_l$ active queues in part $\mathcal{I}_l$ for each $l$. The regularity assumptions ensure that these quantities, divided by the number of queues concerned, give the mean number of jobs at any single queue of part $\mathcal{I}_k$. By the law of total expectation, we have

$L_k = \sum_{t \in \mathcal{N}} \pi(t)\, L_k(t).$

The following theorem gives recursive formulas for $\pi(t)$ and $L_k(t)$ that allow one to compute these quantities with a complexity $O(K \prod_{k=1}^{K} (n_k + 1))$, which is polynomial in the number of queues for a fixed number of parts. The proof is given in Appendix A.

Theorem 3.

Consider a system of processor-sharing queues with state-dependent service rates allocated according to balanced fairness in a polymatroid capacity set $\mathcal{C}$. Assume that $\mathcal{C}$ is poly-symmetric with respect to partition $\Sigma$ and denote by $h$ the corresponding cardinality rank function. Further assume that for each $k$, all queues of $\mathcal{I}_k$ receive jobs with the same traffic intensity $\rho_k$, i.e. $\rho_i = \rho_k$ for all $i \in \mathcal{I}_k$. For each $t \in \mathcal{N}$, $t \neq 0$, we have

$\pi(t) = \frac{\displaystyle\sum_{k : t_k \ge 1} (n_k - t_k + 1)\,\rho_k\, \pi(t - e_k)}{\displaystyle h(t) - \sum_{k=1}^{K} t_k\, \rho_k}. \qquad (6)$

Let $k \in \{1,\ldots,K\}$. For each $t \in \mathcal{N}$, we have $L_k(t) = 0$ if $t_k = 0$, and otherwise

$\pi(t)\,L_k(t) = \frac{\displaystyle\rho_k\bigl(t_k\,\pi(t) + (n_k - t_k + 1)\,\pi(t - e_k)\bigr) + \sum_{l : t_l \ge 1} (n_l - t_l + 1)\,\rho_l\, \pi(t - e_l)\,L_k(t - e_l)}{\displaystyle h(t) - \sum_{l=1}^{K} t_l\, \rho_l}. \qquad (7)$

This result applies to Example 5 with the partition $\{\{1,2\},\{3\}\}$ when classes 1 and 2 have the same traffic intensity. The set of suitable vectors of traffic intensities is depicted as the darkly shaded region in Figure 5(b).
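The recursions (6) and (7), as reconstructed above, operate on part-count vectors $t \in \mathcal{N}$ rather than on subsets of queues, which is where the polynomial complexity comes from. The sketch below (our own illustration) computes the normalized mean number of jobs per part; the helper name `mean_jobs_per_part` is ours, and it is reused in the usage sketches of Sections 3.3 and 3.4.

```python
# Recursions (6) and (7): psi(t) is an unnormalized pi(t) and lbar(t, k)
# an unnormalized pi(t) * L_k(t). Complexity O(K * (n_1+1) * ... * (n_K+1)).
from functools import lru_cache
from itertools import product

def mean_jobs_per_part(n, rho, h):
    """Mean number of jobs in each part; n and rho are K-tuples, h maps t to h(t)."""
    K = len(n)
    dec = lambda t, k: tuple(t[j] - (j == k) for j in range(K))

    @lru_cache(maxsize=None)
    def psi(t):                                    # recursion (6), unnormalized
        if sum(t) == 0:
            return 1.0
        slack = h(t) - sum(t[k] * rho[k] for k in range(K))
        return sum((n[k] - t[k] + 1) * rho[k] * psi(dec(t, k))
                   for k in range(K) if t[k] >= 1) / slack

    @lru_cache(maxsize=None)
    def lbar(t, k):                                # recursion (7), unnormalized
        if t[k] == 0:
            return 0.0
        slack = h(t) - sum(t[l] * rho[l] for l in range(K))
        acc = rho[k] * (t[k] * psi(t) + (n[k] - t[k] + 1) * psi(dec(t, k)))
        acc += sum((n[l] - t[l] + 1) * rho[l] * lbar(dec(t, l), k)
                   for l in range(K) if t[l] >= 1)
        return acc / slack

    grid = list(product(*(range(nk + 1) for nk in n)))
    Z = sum(psi(t) for t in grid)                  # normalization constant
    return [sum(lbar(t, k) for t in grid) / Z for k in range(K)]

# Sanity check: one part with a single queue and h(t) = 1 is an M/M/1 queue,
# whose mean number of jobs at load 0.5 is 0.5 / (1 - 0.5) = 1.
print(mean_jobs_per_part((1,), (0.5,), lambda t: 1.0))  # [1.0]
```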

In this theorem, we have assumed that the cardinality rank function $h$ was given. Given a real system like those of Sections 2.2 and 2.3 which is known to be poly-symmetric with respect to some partition $\Sigma$, one could ask if it is also possible to build $h$ with a complexity that is polynomial in the number of queues. This is straightforward for a computer cluster, since $h$ can be read off the rank function (2). Concerning the tree data networks, we can apply a method similar to that of the proof of Theorem 1. Specifically, we first define recursively a concave function $\hat{h}$ on $\mathcal{N}$ from the link capacities, mirroring the function $\hat{\mu}$ of the proof, from which we can construct $h$ by taking the componentwise monotone envelope:

$h(t) = \min\{\hat{h}(s) : s \in \mathcal{N}, \ s \ge t\}.$
We will now see two examples of real systems where this result applies.

3.3 Application to tree data networks

We consider the simple example of a tree data network where each user has an individual access line and all users share an aggregation link which has a capacity $C$ in bit/s. The user access lines can have $K$ different capacities $r_1, \ldots, r_K$ in bit/s. This corresponds to the model introduced in [1] to predict some performance metrics in Internet service provider access networks, where the individual access lines represent subscriber lines which are connected to the aggregation link by the digital subscriber line access multiplexer (DSLAM).

Example 6.

Figure 6: User routes. Access lines with capacities $r_1$ and $r_2$ are connected to an aggregation link with capacity $C$.

Figure 6 gives a toy example with $K = 2$ possible access rates $r_1$ and $r_2$. There are three users with access rate $r_1$ and two users with access rate $r_2$. All users are constrained by the aggregation link with capacity $C$.

For each $k \in \{1,\ldots,K\}$ we denote by $\mathcal{I}_k$ the set of users with access rate $r_k$. These sets form a partition $\Sigma$ of the set $\mathcal{I}$ of users. Theorem 1 ensures that the capacity set of this data network is a polymatroid with rank function $\mu$ given by

$\mu(A) = \min\Bigl(\sum_{k=1}^{K} |A \cap \mathcal{I}_k|\, r_k,\ C\Bigr), \quad \forall A \subseteq \mathcal{I}. \qquad (8)$

It is poly-symmetric with respect to partition $\Sigma$. The corresponding cardinality rank function is given by

$h(t) = \min\Bigl(\sum_{k=1}^{K} t_k\, r_k,\ C\Bigr), \quad \forall t \in \mathcal{N}.$
We further assume that for each $k$, all users with access rate $r_k$ have the same traffic intensity $\rho_k$. Then the network is stable whenever $\rho_k < r_k$ for each $k$ and $\sum_{k=1}^{K} n_k \rho_k < C$, and it meets the conditions of Theorem 3.

A metric of interest is the mean throughput per user. For each $i \in \mathcal{I}$, we denote by $\mathbb{P}_i$ and $\mathbb{E}_i$ the conditional probability measure and expectation given that user $i$ is active, corresponding to the stationary distribution $\pi$. For each $k$ and each $i \in \mathcal{I}_k$, the mean throughput perceived by user $i$ is then given by

$\gamma_k = \frac{\rho_k}{\mathbb{E}[x_i]} = \frac{\mathbb{E}[\phi_i(x)]}{\mathbb{E}[x_i]},$

where the second equality holds by the conservation equation $\mathbb{E}[\phi_i(x)] = \rho_i$ for all $i \in \mathcal{I}$. Using the notations of Section 3.2, the mean throughput of the users with access rate $r_k$ is given by

$\gamma_k = \frac{n_k\, \rho_k}{L_k},$

where $L_k$ for each $k$ can be computed with a complexity polynomial in the number of users by (6) and (7). Other performance metrics such as the mean congestion rate per user can be computed similarly.
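Putting the pieces together, the following usage sketch (our own, with hypothetical rates and loads) evaluates the toy network of Figure 6 by plugging the cardinality rank function into `mean_jobs_per_part` from the sketch in Section 3.2 and applying the throughput formula above.

```python
# Toy access network of Figure 6: hypothetical rates r = (2, 4) in Mbit/s,
# aggregation capacity C = 8, three users of the first kind and two of the
# second, per-user loads rho = (0.5, 1.0). Requires mean_jobs_per_part
# from the earlier sketch.
r, C = (2.0, 4.0), 8.0
h = lambda t: min(t[0] * r[0] + t[1] * r[1], C)       # cardinality rank function
n, rho = (3, 2), (0.5, 1.0)                           # stable: 3*0.5 + 2*1.0 < C
L = mean_jobs_per_part(n, rho, h)
gamma = [n[k] * rho[k] / L[k] for k in range(2)]      # mean throughput per user
print(gamma)
```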

3.4 Application to computer clusters

Let $p, q \ge 1$. We consider a computer cluster with $pq$ servers and $p + q$ classes. All servers have the same unit capacity and all jobs have a unit mean size. The set of classes is partitioned into two subsets $\mathcal{I}_1$ and $\mathcal{I}_2$. $\mathcal{I}_1$ contains $p$ classes that can each be served by $q$ servers and $\mathcal{I}_2$ contains $q$ classes that can each be served by $p$ servers. For any $k = 1,\ldots,p$, the $k$-th class of $\mathcal{I}_1$ can be served by the servers $(k-1)q + l$ for $l = 1,\ldots,q$. For any $l = 1,\ldots,q$, the $l$-th class of $\mathcal{I}_2$ can be served by the servers $(k-1)q + l$ for $k = 1,\ldots,p$. Figure 7 gives a toy example with $p = 2$ and $q = 3$.

Figure 7: Computer cluster with $p = 2$ and $q = 3$: the five classes on the left are assigned to the six servers on the right.

Any class of $\mathcal{I}_1$ shares exactly one server with any class of $\mathcal{I}_2$, and this server is dedicated to these two classes. The rank function of this cluster is thus given by

$\mu(A) = q\,|A \cap \mathcal{I}_1| + p\,|A \cap \mathcal{I}_2| - |A \cap \mathcal{I}_1|\,|A \cap \mathcal{I}_2|, \quad \forall A \subseteq \mathcal{I}.$

The polymatroid capacity set defined by this rank function is poly-symmetric with respect to partition $\Sigma = \{\mathcal{I}_1, \mathcal{I}_2\}$ and the corresponding cardinality rank function is given by

$h(t_1, t_2) = q\,t_1 + p\,t_2 - t_1\,t_2, \quad \forall t \in \mathcal{N}.$

For each $k \in \{1, 2\}$, assume that all classes in $\mathcal{I}_k$ have the same traffic intensity $\rho_k$. Further assume that the vector of traffic intensities stabilizes the system, that is

$t_1\,\rho_1 + t_2\,\rho_2 < h(t_1, t_2), \quad \forall t \in \mathcal{N}, \ t \neq 0.$

We can then apply Theorem 3 with partition $\Sigma = \{\mathcal{I}_1, \mathcal{I}_2\}$ to compute the mean number of jobs of each class with a complexity $O(pq)$. We deduce the mean delay of class-$i$ jobs for each $i \in \mathcal{I}_k$ by Little's law:

$\mathbb{E}[T_k] = \frac{L_k}{n_k\,\lambda_k},$

where $n_1 = p$, $n_2 = q$ and $\lambda_k = \rho_k$ since jobs have unit mean size.

4 Stochastic Bounds

4.1 Monotonicity result

While the property of poly-symmetry is not often satisfied in practice, except in specific cases like the examples of Sections 3.3 and 3.4, it can be used to derive stochastic bounds on most systems, as shown below. The following result will allow us to control the impact of the capacity set on performance.

Given $\alpha > 0$ and a polymatroid $\mathcal{C}$ in $\mathbb{R}_+^n$ with rank function $\mu$, we denote by $\alpha\,\mathcal{C}$ the polymatroid in $\mathbb{R}_+^n$ with rank function $\alpha\,\mu$; similarly, for $\beta > 0$, we denote by $\beta\,\mathcal{C}$ the polymatroid in $\mathbb{R}_+^n$ with rank function $\beta\,\mu$.

Theorem 4.

Let $0 < \alpha \le \beta$. Consider two polymatroids $\mathcal{C}$ and $\mathcal{C}'$ in $\mathbb{R}_+^n$ such that $\mathcal{C}'$ is a subset of $\beta\,\mathcal{C}$ and a superset of $\alpha\,\mathcal{C}$. Let $\rho$ be an element in the interior of $\alpha\,\mathcal{C}$ and denote respectively by $\pi_\beta$, $\pi'$ and $\pi_\alpha$ the steady state distributions of the processor-sharing systems with capacity sets $\beta\,\mathcal{C}$, $\mathcal{C}'$ and $\alpha\,\mathcal{C}$ under traffic intensity $\rho$. Then $\pi'$ is stochastically bounded between $\pi_\beta$ and $\pi_\alpha$.

Specifically, for each $i \in \mathcal{I}$, we have

$L_{\beta,i} \le L_i' \le L_{\alpha,i},$

where $L_{\beta,i}$, $L_i'$ and $L_{\alpha,i}$ are the mean numbers of jobs at queue $i$ under distributions $\pi_\beta$, $\pi'$ and $\pi_\alpha$ respectively.

Proof.

Denote by $\mu$ and $\mu'$ the rank functions of $\mathcal{C}$ and $\mathcal{C}'$ respectively. Let $\Phi_\beta$, $\Phi'$ and $\Phi_\alpha$ denote the balance functions of the resource allocations defined by balanced fairness in the capacity sets $\beta\,\mathcal{C}$, $\mathcal{C}'$ and $\alpha\,\mathcal{C}$ respectively. We first prove by induction on $|x| = \sum_{i \in \mathcal{I}} x_i$ that

$\Phi_\beta(x) \le \Phi'(x) \le \Phi_\alpha(x), \quad \forall x \in \mathbb{N}^n.$

The property holds for $x = 0$. Let $x \neq 0$ and assume the inequality is valid for any state $y$ such that $|y| < |x|$.