I-A Background and Problem Statement
To provide agile service responses and alleviate the load on backbone networks, edge and cloud computing are gradually converging by hosting services as close as possible to where requests are generated [22, 20]. Edge-cloud systems are commonly built on Kubernetes (k8s) [4, 12, 17, 3] and are designed to seamlessly integrate distributed and hierarchical computing resources at the edge and in the cloud. One fundamental problem in supporting efficient edge-cloud systems is how to schedule request dispatch and service orchestration (placement) within the k8s architecture. However, the native k8s architecture is ill-suited to managing geographically distributed edge computing resources, while the customized k8s-based edge-cloud frameworks (e.g., KubeEdge, OpenYurt and Baetyl) do not address the above scheduling issues.
To serve various requests, the edge-cloud system needs to manage the corresponding service entities across the edge and the cloud while being able to determine where these requests should be processed. Though k8s is the most popular tool for managing cloud-deployed services, it is not yet able to accommodate both edge and cloud infrastructure or to support request dispatch at the distributed edge. In this case, the key to an efficient edge-cloud system is how to (i) adapt k8s components and extend their current logic to bind the distributed edge to the cloud, and (ii) devise scheduling algorithms that can fit within k8s.
I-B Limitations of Prior Art and Motivation
Most scheduling solutions for request dispatch and service orchestration rely on accurate modeling or prediction of service response times, network fluctuations, request arrival patterns, etc. [7, 19, 15]. Nevertheless, (i) the heterogeneous edge nodes and the cloud cluster are connected in uncertain network environments and in practice form a dynamic, hierarchical computing system. As shown in Fig. 1, the system behavior, i.e., the average throughput rate of the system managed by native k8s, varies substantially with the available resources and the request loads (refer to Sec. V for detailed settings). More importantly, (ii) the underlying model that captures this behavior is highly nonlinear and far from trivial. Hence, even when rich historical data are available, it is hard to estimate these metrics exactly [28, 2] and then design scheduling policies for any specific request arrivals, system scales and structures, or heterogeneous resources. Further, (iii) few solutions carefully consider whether the proposed scheduling framework or algorithms are applicable to the actual deployment environment, i.e., whether they are compatible with k8s or similar tooling so as to integrate with the existing cloud infrastructure. Therefore, a scheduling framework for a k8s-oriented edge-cloud system that does not rely on assumptions about system dynamics is desired.
I-C Technical Challenges and Solutions
In this paper, we show that learning techniques can help edge-cloud systems by automatically learning effective scheduling policies to cope with stochastic service request arrivals. We propose KaiS, a k8s-oriented and learning-based scheduling framework for edge-cloud systems. Given only a high-level goal, e.g., maximizing the long-term throughput of service processing, KaiS automatically learns sophisticated scheduling policies from the experience of system operation, without relying on assumptions about system execution parameters and operating states. To guide KaiS toward learning such policies, we need to tailor learning algorithms in the following aspects: the coordinated learning of multiple agents, the effective encoding of system states, the dimensionality reduction of scheduling actions, etc.
For request dispatch, as depicted in Fig. 2, KaiS needs to scale to hundreds of distributed edge Access Points (eAPs). Traditional learning algorithms, such as DQN and DDPG, which usually rely on a single centralized learning agent, are not feasible for KaiS since the distributed eAPs would incur an explosion of the dispatch action space. To ensure timely dispatch, KaiS requires the dispatch action to be determined where the request arrives, i.e., at the eAPs, in a decentralized (rather than centralized) manner. Thus, we leverage Multi-Agent Deep Reinforcement Learning (MADRL) and place a dispatch agent at each eAP. However, such a setting (i) requires numerous agents to interact with the system at each step and (ii) yields dispatch action spaces that vary with the available system resources, making it difficult for these agents to learn scheduling policies. Hence, we decouple the centralized critic from the distributed actors, feeding global observations into critic training to stabilize each agent's learning process, and design a policy context filtering mechanism that lets actors respond to dynamic changes of the dispatch action space.
Besides, KaiS must orchestrate dozens of services or more according to the system's global resources, and adapt to different system scales and structures. Hence, KaiS requires our learning techniques to (i) encode massive and diverse system state information and (ii) represent a large and complex orchestration action space. Thus, we employ Graph Neural Networks (GNNs) and multiple policy networks to encode the system information and to reduce the orchestration dimensionality, respectively, without manual feature engineering. Compared with common DRL solutions with raw states and fixed action spaces, our design reduces model complexity, benefiting the learning of scheduling policies.
I-D Main Contributions
A coordinated multi-agent actor-critic algorithm for decentralized request dispatch with a policy context filtering mechanism that can deal with dynamic dispatch action spaces to address time-varying system resources.
A GNN-based policy gradient algorithm for service orchestration that employs GNNs to efficiently encode system information and multiple policy networks to reduce orchestration dimensionality by stepwise scheduling.
A two-time-scale scheduling framework implementation of the tailored learning algorithms for the k8s-oriented edge-cloud system, i.e., KaiS, and an evaluation of KaiS with real workload traces in various scenarios and against baselines.
II Scheduling Problem Statement
We focus on scheduling request dispatch and service orchestration for the edge-cloud system to improve its long-term throughput rate, i.e., the ratio of processed requests that meet delay requirements during the long-term system operation.
II-A Edge-Cloud System
As shown in Fig. 2, neighboring eAPs and edge nodes form a resource pool, i.e., an edge cluster, and connect with the cloud. When requests arrive at eAPs, the edge cluster handles them together with the cloud cluster. For clarity, we take only one edge cluster to exemplify KaiS, and consider the case where there is no cooperation between geographically distributed edge clusters. Nonetheless, by maintaining a service orchestrator for each edge cluster, KaiS can be easily generalized to support geographically distributed edge clusters.
Edge Cluster and Edge Nodes. To process requests, the edge cluster must host the corresponding service entities. An edge cluster consists of a set of eAPs, each of which manages the set of edge nodes attached to it; together, these sets form the edge nodes of the cluster. All eAPs, along with their associated edge nodes, are connected by a Local Area Network (LAN). A request arriving at the edge can be dispatched to an edge node or to the cloud by the eAP that admits it for processing.
Cloud Cluster. The cloud cluster has abundant computing and storage resources compared to the edge and is connected to the eAPs through a WAN (Wide Area Network). It can undertake requests that the edge clusters cannot process. In addition, it manages all geographically distributed edge clusters, including orchestrating all service entities in each edge cluster according to the system's available resources.
II-B Scheduling to Improve Long-term System Throughput
We adopt a two-time-scale mechanism to schedule request dispatch and service orchestration, i.e., KaiS performs request dispatch at a smaller time scale (each slot), while carrying out service orchestration at a larger time scale (each frame, which consists of multiple slots).
Dispatch of Requests at eAPs. Delay-sensitive service requests arrive stochastically at the eAPs. Each eAP maintains a queue for the requests that arrive at it, together with a dispatch policy that varies over time. According to this policy, at each slot, an eAP dispatches a request either to an edge node where the required service entity is deployed and sufficient resources are available, or to the cloud cluster, which has sufficient computing resources for processing. Processing each request consumes both computation resources and network bandwidth at the edge or in the cloud. Moreover, dispatching requests to the cloud may incur extra transmission delay, since the cloud is farther from the end devices, i.e., where requests are generated. Each edge node and the cloud maintain a queue of dispatched requests and process their respective queues by a specific strategy, e.g., prioritizing requests with strict delay requirements. To ensure timely scheduling, it is ideal to have the eAPs, where requests first arrive, perform request dispatch independently, instead of letting the cloud or the edge make dispatch decisions in a centralized manner, which may incur high scheduling delays. Requests that are not processed in time are dropped at each slot.
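As a concrete illustration of the per-slot dispatch-and-drop behavior described above, here is a minimal, self-contained sketch; the queue structures, the `policy` callable, and all names are illustrative stand-ins, not KaiS's implementation:

```python
import collections

Request = collections.namedtuple("Request", "service deadline")

def slot_step(eap_queues, node_queues, policy, now):
    """One dispatch slot (sketch): each eAP pops at most one request and
    sends it where `policy` says (an edge-node key or "cloud"); requests
    whose deadline has already passed are dropped first."""
    dropped = 0
    for eap, queue in eap_queues.items():
        # Drop requests that can no longer meet their delay requirement.
        while queue and queue[0].deadline < now:
            queue.popleft()
            dropped += 1
        if queue:  # each eAP dispatches at most one request per slot
            req = queue.popleft()
            node_queues[policy(eap, req)].append(req)
    return dropped
```

The edge nodes and the cloud then drain their own queues by whatever local strategy they use.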
Orchestration of Services at the Edge Cluster. Due to the storage capacity and memory limits of edge nodes, not all services can be stored and hosted on each of them. In this case, the service entities at the edge cluster must be orchestrated, which involves the following questions: (i) which service should be placed on which edge node and (ii) how many replicas the edge node should maintain for that service. Besides, service request arrivals at different times may exhibit different patterns, so the intensity of demand for different services varies over time. Hence, the scheduling should be able to capture and identify such patterns and, based on them, orchestrate services to fulfill stochastically arriving requests. Unlike request dispatch, overly frequent large-scale service orchestration in the edge cluster may cause system instability and high operational costs. A more appropriate solution is to have the cloud perform service orchestration for the edge with a dynamic scheduling policy at each frame. Based on this policy, the cloud determines the number of replicas of each service on each edge node during the frame; in particular, a value of zero means that the edge node does not host that service.
The scheduling objective is to maximize the long-term system throughput, defined via the numbers of requests processed in time by each edge node and by the cloud in each frame. To avoid an unbounded objective, we use a more practical metric, the long-term system throughput rate, i.e., the ratio of requests completed within their delay requirements to the total number of requests arriving at the system, where the arrivals are counted at each eAP per frame. In this case, our scheduling problem for both request dispatch and service orchestration can be formulated as
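The throughput-rate metric just defined can be written as a small helper; the argument names and list shapes are illustrative, not the paper's notation:

```python
def throughput_rate(processed_edge, processed_cloud, arrived):
    """Long-term throughput rate: timely-processed requests over all arrivals.

    processed_edge[t][n]: requests finished within their deadline by edge
    node n in frame t; processed_cloud[t]: the same for the cloud;
    arrived[t][m]: requests arriving at eAP m during frame t.
    """
    done = sum(sum(frame) for frame in processed_edge) + sum(processed_cloud)
    total = sum(sum(frame) for frame in arrived)
    return done / total if total else 0.0
```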
where, for clarity, we represent the problem with the scheduling policies themselves rather than a series of scheduling variables over slots and frames. Compared to related formulations in prior work, our scheduling problem is more complicated since it involves integer dispatch variables. More details on the constraints and the NP-hardness of such long-term scheduling problems can be found in the literature. In this work, we tailor learning algorithms for KaiS to improve the long-term system throughput rate.
III Algorithm Design
The overall training and scheduling process of KaiS is given in Algorithm 1. We explain the technical details of request dispatch and service orchestration in the following. Detailed training settings are presented in Sec. IV-C.
III-A Tailored MADRL for Decentralized Request Dispatch
Request dispatch lets each eAP independently decide whether an arrived request should be served by an edge node or by the cloud. The goal of dispatch is to maximize the long-term system throughput rate by (i) balancing the workloads among edge nodes and (ii) offloading some requests to the cloud.
III-A1 Markov Game Formulation
To employ MADRL, we formulate the independent request dispatch of the eAPs as a Markov game among eAP agents. Formally, the game is defined as follows.
State. At each slot, we periodically construct a local state for each eAP agent, which consists of (i) the service type and delay requirement of the request currently being dispatched, (ii) the queue information of requests awaiting dispatch at the eAP, (iii) the queue information of unprocessed requests at the attached edge nodes, (iv) the remaining CPU, memory and storage resources of those edge nodes, (v) the number of attached edge nodes, and (vi) the measured network latency between the eAP and the cloud. Meanwhile, for centralized critic training, we maintain a global state, which includes (i) the above information for all eAPs and edge nodes, instead of only one eAP and its attached nodes, and (ii) the queue information of unprocessed requests at the cloud cluster.
Action space. The joint action space is the product of the individual action spaces of the eAP agents, where each individual action space specifies where the current request can be dispatched. For an edge cluster, we consider all available edge nodes as a resource pool, i.e., cooperation between eAPs is enabled. In this case, each individual action space consists of discrete actions specifying dispatch either to the cloud or to one of the edge nodes. At each slot, the joint dispatch action collects the dispatch decisions for all requests to be scheduled at all eAPs. Note that although multiple requests may be queued at an eAP, we allow each eAP agent to dispatch only one request per slot. Meanwhile, for KaiS, we set the slot length to a moderate value (refer to Sec. IV) to ensure timely scheduling of arriving requests.
Reward function. All agents in the same edge cluster share a reward function, and each agent seeks to maximize its own expected discounted return under a discount factor. The immediate reward combines (i) the ratio of requests that violate their delay requirements during the slot, (ii) the standard deviation of the CPU and memory usage across all edge nodes, and (iii) a weight that controls the degree of load balancing among edge nodes. The load-balancing term stabilizes the system by preventing too much load from being imposed on a few edge nodes: the closer the standard deviation is to zero, the more balanced the loads of the edge nodes, leaving more scheduling room for dispatch. Such a reward improves the long-term throughput while ensuring load balancing at the edge.
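A minimal sketch of a reward of this shape, combining the delay-violation ratio with a weighted load-imbalance penalty; the function name, argument layout, and default weight are illustrative, not the paper's exact definition:

```python
import statistics

def dispatch_reward(violations, arrivals, cpu_usage, mem_usage, beta=0.5):
    """Immediate reward shared by the eAP agents of one cluster (sketch).

    violations/arrivals: requests that missed their deadline vs. total
    requests in the slot; cpu_usage/mem_usage: per-node utilisation in
    [0, 1]; beta: weight of the load-balancing term.
    """
    violate_ratio = violations / arrivals if arrivals else 0.0
    # Population std-dev of the pooled CPU and memory utilisation values:
    # zero when all nodes are equally loaded.
    imbalance = statistics.pstdev(cpu_usage + mem_usage)
    return -(violate_ratio + beta * imbalance)
```

Maximizing this reward simultaneously pushes the violation ratio and the imbalance toward zero.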
State transition probability. We use a transition probability function to indicate the probability of moving from one global state to the next given a joint dispatch action. The action itself is deterministic: an agent's dispatch action directly sends the current request to the chosen edge node (or the cloud) at that slot.
III-A2 Coordinated Multi-Agent Actor-Critic
The challenges of training these dispatch agents are: (i) the environment of each agent is non-stationary because the other agents are learning and affecting the environment simultaneously; in particular, each agent learns its own policy, which changes over time, increasing the difficulty of coordinating them; (ii) the action space of each agent changes dynamically, since its feasible dispatch options vary with the available system resources, which vanilla DRL algorithms cannot handle. For instance, if an edge node has run out of memory at a slot, dispatching the request to that node should not be an available action.
Therefore, we design coordinated Multi-Agent Actor-Critic (cMMAC), as illustrated in Fig. 3: (i) we adopt a centralized critic and distributed actors to coordinate learning, i.e., all agents share a centralized state-value function when training the critic, while during distributed actor training and inference each actor observes only its local state; (ii) through policy context filtering, the agents can adjust their policies to tolerate dynamic action spaces and establish explicit coordination that facilitates successful training. The details are as follows.
Centralized state-value function (Critic). The state-value function shared by the eAP agents is obtained by minimizing a loss function derived from the Bellman equation, where separate parameter sets are maintained for the value network and the target value network. In total, there is one state-value output per eAP agent at each slot, namely the expected return received by that agent at that slot. To stabilize the learning of the state-value function, we fix the target value network and update it only at the end of each training episode.
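The critic's training step can be sketched as a mean-squared TD error against the target network; `v_apply` and `v_target_apply` are hypothetical callables standing in for the value network and the periodically synced target network, and the discount factor is an assumed value:

```python
import numpy as np

def critic_td_loss(v_apply, v_target_apply, states, next_states,
                   rewards, gamma=0.95):
    """Mean-squared Bellman (TD) loss for the shared centralized critic.

    v_apply / v_target_apply: map a batch of global states to state values.
    The target r + gamma * V_target(s') is held fixed while v_apply's
    parameters are optimized.
    """
    targets = rewards + gamma * v_target_apply(next_states)  # Bellman target
    td_error = v_apply(states) - targets
    return np.mean(td_error ** 2)
```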
Policy context filtering (Actors). Policy context filtering is mainly reflected in the resource context when scheduling request dispatch. In the operating edge-cloud system, the available resources of edge nodes fluctuate with scheduling events. To avoid, as far as possible, the situation where an eAP agent dispatches a request to an edge node with insufficient resources, before dispatch we compute a resource context for each eAP agent: a binary vector that filters out invalid dispatch actions. Each element of this vector indicates the validity of dispatching the current request to the corresponding edge node, while the cloud cluster is always a valid dispatch target. The coordination of agents is likewise achieved by masking the available action space based on the resource context. To proceed, we take the original output logits of the actor policy network for an agent conditioned on its state, and multiply them element-wise with the resource context to obtain the valid logits; the output logits are restricted to be positive so that the masking is effective. The probability of each valid dispatch action for an agent is then given by normalizing its valid logits. Finally, for cMMAC, the policy gradient of each actor can be derived using an advantage computed from the centralized critic.
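A minimal sketch of the filtering and of a standard one-step advantage estimate, assuming positive logits and a binary context vector as described above; the function names are illustrative and the advantage shown is the generic actor-critic form, not necessarily the paper's exact equation:

```python
import numpy as np

def masked_policy(logits, resource_ctx):
    """Policy context filtering: element-wise-mask the positive actor
    logits with the binary resource context, then normalize the surviving
    entries into action probabilities."""
    valid = logits * resource_ctx          # invalid actions get weight 0
    return valid / valid.sum()

def advantage(reward, v_next, v_now, gamma=0.95):
    """One-step advantage: immediate reward plus discounted next-state
    value from the centralized critic, minus the current state value."""
    return reward + gamma * v_next - v_now
```

With all logits kept positive (e.g., via a ReLU+1 output layer), a zero context entry drives the corresponding action probability exactly to zero.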
III-B GNN-based Learning for Service Orchestration
We propose a GNN-based Policy Gradient (GPG) algorithm and describe how (i) the system state information is processed flexibly and (ii) the high-dimensional service orchestration is decomposed into stepwise scheduling actions: selecting high-value edge nodes and then performing service scaling on them.
III-B1 GNN-based System State Encoding
As shown in Fig. 4, KaiS must convert system states into feature vectors at each observation and pass them to the policy networks. A common choice is to directly stack system states into flat vectors. However, the edge-cloud system is in practice a graph consisting of connected eAPs, edge nodes, and the cloud cluster. Simply stacking states has two defects: (i) processing a high-dimensional feature vector requires sophisticated policy networks, which increases training difficulty; (ii) it cannot efficiently model the graph structure of the system, making it hard for KaiS to generalize to various system scales and structures. Therefore, we use GNNs to encode system states into a set of embeddings layer by layer, as follows.
Embedding of edge nodes. Each edge node associated with an eAP carries the following attributes at each frame, collected into an attribute vector: (i) its available CPU, memory, storage and other resources, (ii) its periodically measured network latency to the eAP and the cloud, (iii) the queue information of its backlogged requests, and (iv) the indexes of the services deployed on it and the number of replicas of each. Given these attributes, KaiS performs embedding for each edge node. To do so, for each edge node we build a virtual graph by treating the other edge nodes as its neighbor nodes. Then, as depicted in Fig. 4, we traverse the edge nodes and compute their embedding results one by one; once an edge node has been embedded, only its embedding results feed the subsequent embedding of the remaining edge nodes. The embedding of an edge node is computed by propagating information from its neighbor nodes to itself in a message passing step: the node aggregates messages from all of its neighbors and combines them with its own attributes through two non-linear transformations implemented by Neural Networks (NNs), which together can express a wide variety of aggregation functions. Throughout the embedding, the same pair of NNs is reused.
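One message-passing step of this form can be sketched as follows; `f` and `g` stand in for the two shared non-linear NNs and are arbitrary callables here, and the additive combination with the node's own attributes is an assumption of the sketch:

```python
def embed_node(x_v, neighbor_embeddings, f, g):
    """One message-passing step (sketch): node v aggregates messages
    f(e_u) from its neighbors' embeddings, transforms the sum with g,
    and combines the result with its own attribute vector x_v."""
    msg = sum(f(e_u) for e_u in neighbor_embeddings)
    return g(msg) + x_v
```

Reusing the same `f` and `g` for every node is what lets the encoder handle edge clusters of any size and topology without retraining.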
Embedding of eAPs and the edge cluster. Similarly, we leverage GNNs to compute an embedding for each eAP and, further, an edge cluster embedding over all eAPs. To compute an eAP's embedding, we add an eAP summary node to the virtual graph and treat all edge nodes managed by that eAP as its neighbor nodes; these summary nodes also store their respective eAP embeddings. The eAP embedding is then obtained by aggregating messages from all neighboring nodes, using the same message passing step as for edge nodes. In turn, the eAP summary nodes are regarded as the neighbors of an edge cluster summary node, so that the same step also yields the global embedding. Although the eAP and cluster embeddings are computed in the same way, each level uses its own pair of non-linear transformation NNs.
III-B2 Stepwise Scheduling for Service Orchestration
The key challenge in encoding service orchestration actions is the learning and computational complexity of high-dimensional action spaces. A direct solution is to maintain one huge policy network and orchestrate all services for all edge nodes at once based on the embedding results of Sec. III-B1. However, in this manner, KaiS must choose actions from a combinatorially large set, which increases sample complexity and slows down training. Besides, overly frequent large-scale service orchestration brings huge system overhead and harms system stability.
Therefore, we consider stepwise scheduling, which in each frame first selects a small number of high-value edge nodes, and then scales services for each of them within a customized action space of much smaller size. Specifically, KaiS passes the embedding vectors from Sec. III-B1 as inputs to the policy networks, which output a joint orchestration action comprising (i) the action of selecting high-value edge nodes and (ii) the joint service scaling action for those nodes.
Selection of high-value edge nodes. At each frame, KaiS first uses a policy network to select high-value edge nodes. As illustrated in Fig. 5, for each edge node, a non-linear value-evaluation function implemented by a NN maps its embedding vectors to a scalar value, which specifies the priority of performing service scaling at that node. A softmax over these values yields the probability of selecting each edge node, and KaiS selects the nodes with the highest probabilities as the high-value edge nodes for service scaling.
Service scaling for high-value edge nodes. For each selected high-value edge node, KaiS uses an action-evaluation function, implemented by another NN, to compute a value for each candidate scaling action at that node in the current frame. The scaling action space is interpreted as follows: (i) one action leaves all services unchanged, (ii) one action per service deletes a replica of that service, and (iii) one action per service adds a replica of that service. If a scaling action is invalid due to the resource limitations of an edge node, KaiS always transforms it into the no-change action. A softmax over the action values gives the probabilities of the scaling actions, and the action with the highest probability is performed. Across all high-value edge nodes, KaiS thus generates a joint service scaling action at each frame.
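The two steps can be sketched end to end as follows; the data layout (per-node scalar values and per-node action-value vectors) and the greedy top-k/argmax choices are illustrative assumptions of the sketch:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))        # shift for numerical stability
    return z / z.sum()

def stepwise_orchestration(node_values, scaling_values, k=2):
    """Stepwise scheduling (sketch): pick the k highest-probability
    ("high-value") edge nodes from a softmax over per-node values, then
    for each chosen node pick the scaling action with the highest
    probability. scaling_values[v] holds node v's action-evaluation
    outputs over {keep, delete-replica, add-replica, ...}."""
    probs = softmax(node_values)
    chosen = np.argsort(probs)[-k:][::-1]          # top-k nodes
    return {int(v): int(np.argmax(softmax(scaling_values[v])))
            for v in chosen}
```

Splitting selection and scaling keeps each policy network's output small instead of one network covering every (node, service, scaling) combination.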
While KaiS decouples request dispatch and service orchestration, this does not affect our objective of improving the long-term throughput rate. In fact, since the dispatch policy is contained in a regularly updated policy network that provides an appropriately load-balanced edge cluster for orchestration, we also implicitly optimize dispatch when optimizing orchestration.
To guide GPG, KaiS generates a reward after each service orchestration, based on the queue lengths of unprocessed requests at the edge nodes. With this design, GPG gradually leads KaiS to reduce the backlog of requests as much as possible, thereby improving the throughput rate, as we show in the experiments. KaiS adopts a policy gradient algorithm to train the NNs used in GPG; the scheduling policy gives the probability of taking a joint orchestration action when observing the GNN-encoded system state. At each frame, KaiS collects the observation and updates the jointly denoted NN parameters along the policy gradient over a training episode, using a learning rate and a baseline that reduces the variance of the gradient. One way to compute the baseline is to set it to the cumulative reward from the current frame onwards, averaged over all training episodes.
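An episode-level update of this kind can be sketched as a REINFORCE-with-baseline step; `grad_log_pi[t]` stands in for the per-frame score vector (the gradient of the log-policy), and all names are illustrative stand-ins for the GPG internals:

```python
def reinforce_update(theta, grad_log_pi, rewards, baselines, lr=1e-3):
    """Policy-gradient update over one episode (sketch): theta moves along
    grad log pi(a_t|s_t) weighted by the reward-to-go minus the baseline,
    which shrinks the variance of the estimate without biasing it."""
    for t in range(len(rewards)):
        reward_to_go = sum(rewards[t:])
        theta = theta + lr * grad_log_pi[t] * (reward_to_go - baselines[t])
    return theta
```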
IV Implementation Design
All services are hosted in the system as Docker containers. KaiS is implemented on top of k8s and k3s (a lightweight k8s for the edge) on Ubuntu 16.04 using Python 3.6.
IV-A Edge-Cloud System Setup
Requests. Real-world workload traces from Alibaba are modified and used to generate service requests. We classify the workload requests in the trace into services according to their "task_type" field, and acquire the delay requirement of each request by properly scaling its "end_time" minus "start_time". Instead of employing real end devices, we implement a request generator that produces service requests and forwards them to the k3s master nodes (eAPs) at random.
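A preprocessing step of this shape might look as follows; the CSV layout, column names, and scaling factor are illustrative guesses about the modified trace, not the paper's exact pipeline:

```python
import csv

def load_requests(trace_path, delay_scale=0.1):
    """Turn trace rows into (service, delay_requirement) requests (sketch).

    Assumes a CSV with `task_type`, `start_time`, `end_time` columns;
    the delay requirement is the scaled task duration."""
    requests = []
    with open(trace_path) as fh:
        for row in csv.DictReader(fh):
            duration = float(row["end_time"]) - float(row["start_time"])
            requests.append((row["task_type"], delay_scale * duration))
    return requests
```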
Edge cluster and nodes. By default, we set up k3s clusters in different geographic regions of the Google Cloud Platform (GCP) to emulate geographic distribution, each cluster consisting of one k3s master node and several k3s edge nodes. The k3s master nodes and k3s edge nodes use the GCP Virtual Machine (VM) configurations "2 vCPU, 4 GB memory, and 0.3 TB disk" and "1-2 vCPU, 2-4 GB memory, and 0.3 TB disk", respectively. Besides, we use more powerful k3s master nodes to accelerate offline training.
Cloud cluster. We build a homogeneous 15-VM cluster as the cloud cluster, where each VM has "4 vCPU, 16 GB memory, and 1 TB disk". A k8s master node is deployed on one VM to manage the others. We handcraft services with various CPU and memory consumption and store their Docker images in the cloud. Moreover, we intentionally control the network bandwidth and delay between the cloud and the edge with Linux TC to simulate practical scenarios.
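Bandwidth and delay shaping of this kind is typically done by stacking a `netem` qdisc under a token-bucket filter; the interface name and all rate/delay numbers below are illustrative, not the testbed's actual settings:

```shell
# Cap the edge<->cloud bandwidth on the WAN-facing interface (illustrative):
tc qdisc add dev eth0 root handle 1: tbf rate 50mbit burst 32kbit latency 400ms
# Add propagation delay with jitter on top of the bandwidth cap:
tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 40ms 10ms
```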
IV-B Main Components of KaiS
We decouple KaiS into two main parts as shown in Fig. 6.
Decentralized request dispatchers. KaiS maintains a k3s dispatcher at each k3s master node to periodically observe and collect the current system states via a state monitor, in the following manner. Each k3s edge node (i) runs a Kubelet process and (ii) reads the virtual filesystem /proc/* in Linux to collect states about the Docker services and the physical node. Concerning network status, each k3s edge node and k3s master node hosts a latency probe to measure network latency. The state monitors at the k3s edge nodes periodically push the collected system states to the state monitor at the k3s master node for fusion. To implement cMMAC, we deploy a cMMAC agent at each k3s master node as a k3s cMMAC service, while maintaining a k8s cMMAC service at the k8s master node to support training. At each scheduling slot, whose length is empirically determined from the experiment results shown in Fig. 9, the k3s cMMAC service at a k3s master node computes a dispatch action from the local states observed by the state monitor, and then notifies the k3s dispatcher to execute this dispatch for the current request.
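The /proc-based collection can be sketched as below, showing two of the metrics a state monitor might read; the function name and the `proc` argument (which only eases testing on non-root paths) are illustrative:

```python
def node_resource_snapshot(proc="/proc"):
    """Read coarse node state from the Linux /proc filesystem (sketch)."""
    with open(proc + "/loadavg") as fh:
        load_1min = float(fh.read().split()[0])      # 1-minute CPU load
    meminfo = {}
    with open(proc + "/meminfo") as fh:
        for line in fh:
            key, rest = line.split(":", 1)
            meminfo[key] = int(rest.split()[0])       # values are in kB
    return {"load_1min": load_1min,
            "mem_free_ratio": meminfo["MemAvailable"] / meminfo["MemTotal"]}
```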
Centralized service orchestrator. To implement GPG, KaiS holds a set of GNN encoding services with different GNNs (Sec. III-B1) at the k3s edge nodes, the k3s master nodes and the k8s master node. These GNN encoding services communicate with each other to compute the embeddings of the edge nodes, the eAPs (i.e., k3s master nodes) and the edge cluster, respectively. Once KaiS finishes the GNN-based encoding, the GNN encoding service at the k8s master node merges all embedding results. The remaining part of GPG, i.e., the policy networks, is realized as a GPG service deployed at the k8s master node. The frame length is set to a fixed number of slots to ensure system stability. At each frame, the GPG service pulls all embeddings from the GNN encoding service and computes the orchestration action. Then, the GPG service calls the k8s orchestrator to communicate with the relevant k3s API servers and accomplish service scaling via python-k8sclient. Unlike other scaling actions, a service can only be deleted by the k3s API server when it is idle; otherwise, KaiS delays the scaling until that condition is met.
IV-C Training Settings
We implement Algorithm 1 based on TensorFlow 1.14. The detailed settings are as follows. cMMAC: cMMAC involves a critic network and an actor policy network, both trained using the Adam optimizer with a fixed learning rate. The critic is parameterized by a four-layer ReLU NN, with 256, 128, 64 and 32 units per layer; the actor is implemented as a three-layer ReLU NN, with 256, 128 and 32 hidden units per layer. Note that the output layer of the actor uses ReLU+1 as its activation function to ensure that the elements of the original logits are positive. GPG: GPG uses (i) six GNNs and (ii) two policy networks. The GNNs are implemented as two-hidden-layer NNs with 64 and 32 hidden units per layer, and the two policy networks are three-hidden-layer NNs with 128, 64 and 32 units from the first to the last layer. All GPG-related NNs use the Adam optimizer for parameter updates.
V Performance Evaluation
In our evaluation, the baseline scheduling methods are: (i) Greedy (for dispatch), which schedules each request to the edge node with the lowest resource utilization; (ii) Native (for orchestration), i.e., the default Horizontal Pod Autoscaler in k8s, based on the observation of specific system metrics; (iii) GSP-SS (for both), which assumes that the request arrival rate of each service is known in advance; (iv) Firmament (for dispatch), designed to find the policy-optimal assignment of work (requests) to cloud cluster resources.
We consider three main performance metrics: (i) per-frame throughput rate, which reflects the short-term behavior of the long-term throughput rate; (ii) scheduling delay, the time required to make a scheduling action; (iii) scheduling cost, primarily the network bandwidth consumption, including the additional packet forwarding due to request dispatch and the bandwidth consumed when the edge pulls service Docker images from the cloud during service orchestration. For clarity, we normalize some metrics where necessary and report their statistical characteristics over multiple experiment runs.
V-A Learning Ability and Practicability of KaiS
KaiS should be able to learn how to cope with request arrivals that have underlying statistical patterns, and even with purely stochastic request arrivals. According to the service type, we sample or clip the workload dataset of Sec. IV-A to acquire request arrival sequences with four patterns, as shown in Fig. 7, viz., (i) a periodically fluctuating total CPU load; (ii) a periodically fluctuating total memory load; (iii) periodic fluctuation with varying frequency; (iv) raw stochastic request arrivals clipped from the trace.
Learning ability. Fig. 8(a) gives the performance evolution of KaiS during training for different request patterns. The throughput rate in all cases improves over time, which demonstrates that KaiS can gradually learn to cope with different request patterns. In particular, KaiS requires experiencing notably more frames to achieve stable scheduling when coping with stochastic request arrivals (Pattern 4) than with the other patterns. Nonetheless, once KaiS converges, its scheduling performance gap across requests of different patterns is marginal.
Decentralized or centralized dispatch? Fig. 8(b) shows that the scheduling delay of centralized service orchestration is markedly longer than that of decentralized request dispatch, while the latter can be completed within milliseconds. Moreover, for comparison, we maintain a cMMAC agent for each eAP in the cloud to dispatch requests in a centralized manner. From Fig. 8(c), we observe that decentralized dispatch brings higher throughput rates, since centralized dispatch requires additional time to upload local observations and wait for dispatch decisions. These extra delays are not trivial for delay-sensitive service requests.
Two-time-scale scheduling and stepwise orchestration. Frequent scheduling may not lead to better performance. As shown in Fig. 9, when a slot is too short, cMMAC agents often experience similar system states in adjacent slots, which weakens their learning ability. When a slot is too long, untimely dispatch also degrades performance. Besides, overly frequent service orchestration results in more scheduling cost and makes cMMAC agents hard to converge. Though selecting more high-value edge nodes for service orchestration at each frame can benefit the throughput, beyond a moderate number the improvement is very limited, while selecting even more nodes leads to more scheduling cost. The capability of KaiS is affected by all of the above factors. We will show that a default configuration of slot length, frame length, and orchestration scale can already yield decent performance compared to baselines.
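The two-time-scale pattern described above can be sketched as a nested loop: dispatch runs every slot (fast scale) and orchestration once per frame (slow scale). The `dispatch`/`orchestrate` hooks and the slot/frame counts below are hypothetical placeholders for the actual cMMAC and GPG components.

```python
def run_schedule(num_frames, slots_per_frame, dispatch, orchestrate):
    """Two-time-scale loop: request dispatch every slot (fast time scale),
    service orchestration once per frame (slow time scale)."""
    log = []
    for frame in range(num_frames):
        for slot in range(slots_per_frame):
            dispatch(frame, slot)          # fast: per-slot request dispatch
            log.append(("dispatch", frame, slot))
        orchestrate(frame)                 # slow: per-frame service orchestration
        log.append(("orchestrate", frame))
    return log

log = run_schedule(2, 3, lambda f, s: None, lambda f: None)
dispatches = sum(1 for e in log if e[0] == "dispatch")
orchestrations = sum(1 for e in log if e[0] == "orchestrate")
# 6 dispatch actions vs. 2 orchestration actions across the two frames
```

Separating the two scales is what keeps orchestration (which pulls Docker images and shifts pods) infrequent and cheap, while dispatch stays responsive to every request.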
V-B Impact of Load Balancing
In Fig. 10, we present the scheduling performance of KaiS trained with different settings of the reward weight that controls the degree of edge load balancing. KaiS achieves the best throughput at a moderate weight, while its performance drops sharply when the weight is too large. This performance gap arises because, with an overly large weight, KaiS focuses too much on load balancing and in many cases waives the dispatch options that could tackle requests more efficiently. Besides, when the weight is zero, namely load balancing is not considered, both the throughput rate and load balancing are still better than in the over-weighted case. This demonstrates that even if we do not deliberately account for load balancing when designing cMMAC, KaiS can still learn load-balancing policies that are beneficial for improving the throughput. Nonetheless, setting a moderate weight in the reward function leads KaiS to learn such policies more effectively.
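One way such a weight can enter the reward is as a penalty on utilization imbalance. The paper's actual reward is not reproduced here; the throughput term and the variance-based penalty below are illustrative assumptions only.

```python
import statistics

def reward(completed_requests, node_utilizations, balance_weight):
    """Illustrative reward: a throughput term minus a weighted imbalance
    penalty (population variance of per-node utilization). Both terms
    are assumptions, not the paper's exact formulation."""
    imbalance = statistics.pvariance(node_utilizations)
    return completed_requests - balance_weight * imbalance

balanced   = [0.5, 0.5, 0.5, 0.5]   # variance 0.0
imbalanced = [0.9, 0.1, 0.9, 0.1]   # variance 0.16
r_bal = reward(10, balanced, balance_weight=5.0)    # 10.0
r_imb = reward(10, imbalanced, balance_weight=5.0)  # 10 - 5.0 * 0.16 = 9.2
```

With the weight at zero the two allocations are indistinguishable to the agent, which matches the zero-weight case discussed above: load balancing then only emerges indirectly, through its effect on throughput.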
V-C Role of GNN-based Service Orchestration
Next, we combine the request arrival sequences of the four patterns to construct a series of longer ones, in order to evaluate the ability of GPG to respond to pattern-fluctuating request arrivals. Note that these long request arrival sequences are constructed to reflect extreme scenarios with high variability.
Coping with stochastic request arrivals. In Fig. 11(a), we present the scheduling performance of KaiS, Greedy-GPG, cMMAC-Native, and Greedy-Native in these scenarios with high variability. The results make the following points evident: (i) KaiS achieves a higher average throughput rate than the closest competing baselines, and in particular, whenever the request arrival pattern changes, KaiS can quickly learn a scheduling policy adapted to the new pattern; (ii) for the patterns with obvious regularity, cMMAC-Native can achieve scheduling performance close to KaiS, because an efficient request dispatch algorithm, e.g., cMMAC, can already address request arrivals with obvious patterns; (iii) for the sophisticated Pattern 4, i.e., requests arriving stochastically as in the raw traces, the performance of cMMAC-Native and Greedy-Native deteriorates due to the lack of service orchestration to adaptively release and capture global system resources.
GNN-based encoding against vector stacking. We show in Fig. 11(b) the role of GNN-based system state encoding (Sec. III-B1) in KaiS. For evaluation, we build two edge clusters with different system scales: (i) the default setting introduced in Sec. IV-A; (ii) a complex setting with more k3s master nodes, each of which manages a larger set of heterogeneous k3s edge nodes. From Fig. 11(b), we observe that at the smaller system scale, the effect of stacking system states is very close to that of GNN-based encoding, with only a marginal loss of scheduling performance. However, in the complex scenario, simply stacking system states cannot effectively help KaiS understand the request characteristics and system information, resulting in a clear performance loss. Instead, GNN-based encoding significantly reduces KaiS's dependence on the model complexity of NNs, which is key to efficient and fast learning. Further, it embeds the network latency and the system structure information, helping KaiS scale to large edge-cloud systems.
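The contrast between the two encodings can be seen in a minimal sketch: stacking concatenates per-node features into a vector whose size grows with the cluster, while one round of mean-aggregation message passing with shared weights yields a fixed-size embedding. The adjacency, feature sizes, and single-layer aggregation below are illustrative assumptions, not KaiS's actual GNN.

```python
import numpy as np

def stack_states(node_features):
    # naive encoding: concatenate all per-node features into one flat
    # vector, whose length grows linearly with the number of nodes
    return np.concatenate(node_features)

def gnn_encode(node_features, adjacency, W):
    """One round of message passing: each node averages its neighbors'
    features (plus its own via a self-loop) and applies a shared linear
    map W; mean-pooling then gives a size-independent cluster embedding."""
    X = np.stack(node_features)
    A = adjacency + np.eye(len(node_features))   # add self-loops
    A = A / A.sum(axis=1, keepdims=True)         # row-normalize: mean aggregation
    H = np.maximum(A @ X @ W, 0.0)               # shared weights + ReLU
    return H.mean(axis=0)                        # pooled cluster embedding

rng = np.random.default_rng(1)
feats = [rng.standard_normal(4) for _ in range(5)]  # 5 nodes, 4 features each
adj = (rng.random((5, 5)) < 0.4).astype(float)      # hypothetical topology
W = rng.standard_normal((4, 8))
flat = stack_states(feats)        # length 20, grows with the cluster
emb = gnn_encode(feats, adj, W)   # length 8, fixed regardless of cluster size
```

Because the downstream policy network sees a fixed-size embedding, its parameter count does not need to grow with the number of edge nodes, which is the scaling advantage discussed above.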
V-D Performance Comparison with Baselines
To evaluate KaiS, we need to consider both scheduling performance and cost. We clip the workload dataset to acquire request arrival sequences of the same length and use them to evaluate KaiS. From Fig. 12(b-c), we observe that in almost all cases, regardless of how the loads and the delay requirements of requests fluctuate (Fig. 12(a)), KaiS yields a higher throughput rate and a lower scheduling cost than the closest competing baselines.
Particularly, when the request loads and delay requirements are mild at some frames, the scheduling performance of GSP-SS can be very close to that of KaiS. However, in contrast to KaiS, the scheduling performance of GSP-SS degrades during frames with high loads: since it does not understand the system's capability to process requests, it cannot load-balance the edge cluster to apportion these loads when the request load level is high, thereby narrowing the available scheduling space. Besides, (i) KaiS adopts two-time-scale scheduling and, unlike GSP-SS, does not perform large-scale orchestration at each frame, and (ii) it only selects a fixed number of high-value edge nodes to perform service orchestration. Hence, the scheduling cost of KaiS is bounded in each frame, thereby reducing the overall cost, as shown in Fig. 12(b).
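Why the per-frame cost stays bounded can be sketched as a simple top-k selection: only a fixed number of highest-value nodes are orchestrated per frame, so the cost is at most k times the per-node cost no matter how large the cluster grows. The node "value" scores below are hypothetical; in KaiS they come from the GPG policy.

```python
def select_orchestration_targets(node_values, k):
    """Pick the k highest-value edge nodes to orchestrate this frame,
    bounding per-frame scheduling cost by k * (cost per node)."""
    ranked = sorted(node_values.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

values = {"edge-1": 0.9, "edge-2": 0.2, "edge-3": 0.7, "edge-4": 0.4}
targets = select_orchestration_targets(values, k=2)
# only the two highest-value nodes are touched, however large the cluster
```

This is the contrast with GSP-SS drawn above: orchestration that may touch the whole cluster each frame has a cost that scales with cluster size, whereas a fixed-k selection does not.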
VI Related Work
Though existing optimization works explore upper bounds of scheduling performance, they are not applicable to practical deployment environments (e.g., k8s) due to their various model assumptions. Besides, to our knowledge, no existing system design work accommodates decentralized request dispatch.
VI-1 Theoretical Analysis Work
Many works, e.g., [31, 6], give scheduling solutions for offloading stochastic computation or service requests, which complement our work. The works of [7, 19, 15] set a theoretical basis for jointly optimizing request dispatch and service orchestration. However, the one-shot scheduling optimization proposed in [19, 15] cannot address continuously arriving service requests, i.e., it does not consider the long-term impact of scheduling. Other authors propose to perform optimization on two different time scales to maximize the number of requests completed in each schedule; nevertheless, this long-term optimization relies on the accurate prediction of future service requests, which is difficult to achieve in practice. Last but not least, these works [7, 19, 15] cannot be applied in practice since: (i) they assume that the computing resources, network requirements, or processing times of specific requests can be accurately modeled or predicted; (ii) the dispatch is scheduled in a centralized manner, which takes extra time to wait for the aggregation of context information across the entire system.
VI-2 System Design Work
Many efficient schedulers have been developed for k8s-based cloud clusters. These works either schedule all tasks through minimum-cost maximum-flow optimization for general workloads, or exploit domain-specific knowledge of, e.g., deep learning, to improve overall cluster utilization for specific workloads. However, they cannot accommodate decentralized request dispatch at the edge, since their schedulers are deployed at the cloud in a centralized fashion. Another proposed scheduler orchestrates services by periodically measuring the latency between edge nodes to estimate whether the expected processing delay of service requests can meet requirements. The work most related to ours uses model-based RL to deal with service orchestration and is compatible with geographically distributed edge clusters. Nonetheless, none of these works considers request dispatch at the edge.
VII Conclusion
Leveraging k8s to seamlessly merge the distributed edge and the cloud is the future of edge-cloud systems. In this paper, we introduced KaiS, a scheduling framework with tailored learning algorithms for k8s-based edge-cloud systems, which dynamically learns scheduling policies for request dispatch and service orchestration to improve the long-term system throughput rate. Our results show the behavior of KaiS across different scenarios and demonstrate that KaiS can enhance the average system throughput rate while reducing scheduling cost. In addition, by modifying the scheduling action spaces and reward functions, KaiS is also applicable to other scheduling optimization goals, such as minimizing the long-term system overhead.
- Alibaba-clusterdata.
- (2019) vrAIn: A Deep Learning Approach Tailoring Computing and Radio Resources in Virtualized RANs. In ACM MobiCom.
- Baetyl: extend cloud computing, data and service seamlessly to edge devices.
- (2016) Borg, Omega, and Kubernetes. Commun. ACM.
- (2008) A comprehensive survey of multiagent reinforcement learning. IEEE Trans. Syst., Man, Cybern. C, Appl. Rev. 38(2), pp. 156–172.
- (2016) Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing. IEEE/ACM Trans. Netw. 24(5), pp. 2795–2808.
- (2019) Service Placement and Request Scheduling for Data-intensive Applications in Edge Clouds. In IEEE INFOCOM.
- (2016) Firmament: Fast, centralized cluster scheduling at scale. In USENIX OSDI.
- (2004) Variance reduction techniques for gradient estimates in reinforcement learning. J. Mach. Learn. Res.
- (2019) Sharpening Kubernetes for the Edge. In ACM SIGCOMM Posters and Demos.
- K8s documentation: Horizontal Pod Autoscaler.
- KubeEdge: Kubernetes native edge computing framework (project under CNCF).
- k3s: Lightweight Kubernetes.
- (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
- (2020) Cooperative Service Caching and Workload Scheduling in Mobile Edge Computing. In IEEE INFOCOM.
- (2015) Human-level control through deep reinforcement learning. Nature 518(7540), pp. 529–533.
- OpenYurt: extending your native Kubernetes to edge.
- (2019) Service Placement with Provable Guarantees in Heterogeneous Edge Computing Systems. In IEEE INFOCOM.
- (2019) Joint Service Placement and Request Routing in Multi-cell Mobile Edge Computing Networks. In IEEE INFOCOM.
- (2019) A Survey on End-Edge-Cloud Orchestrated Network Computing Paradigms. ACM Comput. Surv. 52(6), pp. 1–36.
- (2020) Geo-distributed efficient deployment of containers with Kubernetes. Comput. Commun. 159, pp. 161–174.
- (2016) Edge Computing: Vision and Challenges. IEEE Internet Things J. 3(5), pp. 637–646.
- (2014) Deterministic policy gradient algorithms. In ICML.
- (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), pp. 484–489.
- Reinforcement learning: An introduction.
- (2017) Online job dispatching and scheduling in edge-clouds. In IEEE INFOCOM.
- (2020) Intelligent Video Caching at Network Edge: A Multi-Agent Deep Reinforcement Learning Approach. In IEEE INFOCOM.
- (2020) Convergence of Edge Computing and Deep Learning: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 22(2), pp. 869–904.
- (2018) Gandiva: Introspective cluster scheduling for deep learning. In USENIX OSDI.
- (2020) Deep Learning on Graphs: A Survey. IEEE Trans. Knowl. Data Eng. (Early Access).
- (2020) A3C-DO: A Regional Resource Scheduling Framework based on Deep Reinforcement Learning in Edge Scenario. IEEE Trans. Comput. (Early Access).