A Processor-Sharing Model for the Performance of Virtualized Network Functions

04/11/2019
by Fabrice Guillemin et al.

The parallel execution of requests in a Cloud Computing platform, as for Virtualized Network Functions, is modeled by an M^[X]/M/1 Processor-Sharing (PS) system, where each request is seen as a batch of unit jobs. The performance of such a parallelized system can then be measured by the quantiles of the batch sojourn time distribution. In this paper, we address the evaluation of this distribution for the M^[X]/M/1-PS queue with batch arrivals and geometrically distributed batch sizes. We first derive general results on the residual busy period (the busy period following a tagged batch arrival) and on the number of unit jobs served during this residual busy period. These results yield an approximation for the tail of the batch sojourn time distribution, whose accuracy is confirmed by simulation.
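The model described in the abstract is straightforward to simulate directly, which is how the paper's tail approximation is validated. The sketch below is not taken from the paper; it is a minimal event-driven simulation of an M^[X]/M/1-PS queue with geometrically distributed batch sizes, written in Python, where the function name `simulate_batch_sojourn` and the parameters `lam` (batch arrival rate), `mu` (unit job service rate) and `q` (geometric parameter, mean batch size 1/q) are illustrative choices. It returns empirical quantiles of the batch sojourn time, i.e. the time until the last unit job of a batch leaves the system.

```python
import numpy as np

def simulate_batch_sojourn(lam=0.3, mu=1.0, q=0.5, n_batches=100_000, seed=1):
    """Event-driven simulation of an M^[X]/M/1-PS queue (illustrative sketch).

    Batches arrive according to a Poisson process with rate `lam`; batch sizes
    are geometric on {1, 2, ...} with parameter `q`; each unit job requires an
    Exp(mu) amount of service.  The server (capacity 1) is shared equally among
    all jobs present (processor sharing).  The batch sojourn time runs from the
    batch arrival until its last job departs.  Stability needs lam / (q * mu) < 1.
    """
    rng = np.random.default_rng(seed)
    t = 0.0
    remaining = []        # remaining service requirement of each job in system
    owner = []            # index of the batch each job belongs to
    arrival, last_exit = {}, {}
    next_arrival = rng.exponential(1.0 / lam)
    b = 0                 # number of batches that have arrived so far

    while b < n_batches or remaining:
        n = len(remaining)
        # Under PS each job is served at rate 1/n, so the job with the
        # smallest remaining work completes after min(remaining) * n.
        t_dep = t + min(remaining) * n if n else np.inf
        t_arr = next_arrival if b < n_batches else np.inf

        if t_arr < t_dep:
            # Advance to the batch arrival and add its unit jobs.
            if n:
                served = (t_arr - t) / n
                remaining = [r - served for r in remaining]
            t = t_arr
            size = int(rng.geometric(q))                  # batch size in {1, 2, ...}
            remaining += list(rng.exponential(1.0 / mu, size))
            owner += [b] * size
            arrival[b] = t
            b += 1
            next_arrival = t + rng.exponential(1.0 / lam)
        else:
            # Advance to the next job completion and remove that job.
            served = (t_dep - t) / n
            remaining = [r - served for r in remaining]
            t = t_dep
            i = int(np.argmin(remaining))
            remaining.pop(i)
            last_exit[owner.pop(i)] = t   # latest departure seen for this batch

    sojourn = np.array([last_exit[k] - arrival[k] for k in arrival])
    return np.quantile(sojourn, [0.5, 0.9, 0.99])
```

With the default parameters the load is lam / (q * mu) = 0.6, and the call returns the empirical median, 90th and 99th percentiles of the batch sojourn time; quantile estimates of this kind are the simulation baseline against which a tail approximation such as the one derived in the paper can be checked.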

Related research

09/28/2020 - On the sojourn time of a batch in the M^[X]/M/1 Processor Sharing Queue
In this paper, we analyze the sojourn of an entire batch in a processor ...

04/19/2021 - Asymptotic analysis of the sojourn time of a batch in an M^[X]/M/1 Processor Sharing Queue
In this paper, we exploit results obtained in an earlier study for the L...

04/11/2019 - On the sojourn of an arbitrary customer in an M/M/1 Processor Sharing Queue
In this paper, we consider the number of both arrivals and departures se...

12/31/2021 - BatchLens: A Visualization Approach for Analyzing Batch Jobs in Cloud Systems
Cloud systems are becoming increasingly powerful and complex. It is high...

09/19/2022 - Capacity Allocation for Clouds with Parallel Processing, Batch Arrivals, and Heterogeneous Service Requirements
Problem Definition: Allocating sufficient capacity to cloud services is ...

02/05/2019 - Optimal Divisible Load Scheduling for Resource-Sharing Network
Scheduling is an important task allowing parallel systems to perform eff...

12/13/2019 - Queueing Analysis of GPU-Based Inference Servers with Dynamic Batching: A Closed-Form Characterization
GPU-accelerated computing is a key technology to realize high-speed infe...
