Pagurus: Eliminating Cold Startup in Serverless Computing with Inter-Action Container Sharing

08/25/2021 ∙ by Zijun Li, et al. ∙ Shanghai Jiao Tong University

Serverless computing provides fine-grain resource sharing between Cloud tenants through containers. Each function invocation (action) runs in an individual container. When no already-started container exists for a user function, a new container has to be created for it. However, the long cold startup time of a container results in long response latency for the action. Our investigation shows that the containers for some user actions share most of their software packages. If an action that requires a new container can "borrow" a similar warm container from other actions, the long cold startup can be eliminated. Based on this finding, we propose Pagurus, a runtime container management system for eliminating the cold startup in serverless computing. Pagurus comprises an inter-action container scheduler and an intra-action container scheduler for each action. The inter-action container scheduler schedules shared containers among actions; the intra-action container scheduler manages the container lifecycle. Our experimental results show that Pagurus effectively eliminates the time-consuming container cold startup. An action may start to run in 10ms with Pagurus, even if there is no warm container for it.


I Introduction

With serverless computing, Cloud tenants submit functions directly, without renting virtual machines of different specifications; Cloud vendors schedule the tenants' functions automatically. For its high maintainability and testability, most hyperscalers now provide serverless computing services (such as Amazon Lambda [5], Google Cloud Function [4], Microsoft Azure Functions [2], and Alibaba Function Compute [1]). Serverless computing is a perfect fit for Internet services with unstable query loads, since tenants are charged per query execution instead of for long-term renting.

We use the terminology of Apache OpenWhisk [6], an event-driven serverless computing platform. An action represents the invocation of a user function. Whenever an action is received, the serverless platform runs it in either a newly-launched container or a running warm container. Warm containers keep serving queries (warm startup) until they time out and are recycled. Because containers of different actions rely on different software packages, containers are not shared between actions. If there is no warm container for an action, a new cold container needs to be started for it. The long latency of booting containers, together with the software environment and code initialization, restricts the performance of serverless computing [30, 43, 11, 33, 34].

While some actions suffer from the long container cold startup time, we observe that current serverless computing systems may launch too many containers for other actions. For instance, for Internet services with diurnal load patterns [14, 16] (the low load is less than 30% of the peak load), they may launch too many containers at the peak load. We also observe idle containers for some actions even when their loads are stable. Although these warm containers are idle and waste system resources, they are not used by any other action, because the containers of different actions install different software packages.

With the development of the micro-service architecture, actions tend to use popular, common library packages. For instance, by extracting likely dependencies of projects on packages in the popular Python Package Index (PyPI) repository, 36% of imports are of just 20 popular packages [31]. Actions thus tend to use similar software packages. In this scenario, if an action that requires a cold container startup is able to utilize the idle containers of other actions, the cold startup is turned into a warm startup, and its end-to-end latency can be greatly reduced.

There are three main challenges in achieving the above purpose. First, the loads of the actions are not stable [35]; it is difficult to determine whether an action can safely lend an idle container to other actions without affecting its own Quality-of-Service (QoS). Second, existing serverless computing systems do not support the "borrow" operation; container sharing is not allowed. Third, multiple renters and lenders coexist, and it is non-trivial to design an efficient container sharing strategy among actions that minimizes the number of cold startups.

To tackle the above challenges, we propose Pagurus, a container management system that reduces container cold startups through adaptive inter-action container sharing. In Pagurus, the containers are classified into lender containers, executant containers, and renter containers. The executant containers can only be used by the owner action itself. The lender containers can be lent to other actions, upon which they turn into renter containers. Pagurus proposes an enhanced container component that enables container sharing between multiple actions and guarantees the security of users' code. For each action, an intra-action container scheduler manages its executant containers, the renter containers borrowed from other actions, and the lender containers to be lent to other actions. The whole serverless computing system adopts an inter-action container scheduler that schedules the containers between the actions and handles the proactive re-packing based on the package similarity of the actions and their workloads.

The main contributions of this paper are as follows.

  • The container enhancement that enables sharing. We enhance the container design to support runtime package re-packing, which enables container sharing between different actions.

  • The design of a similarity-based container re-packing policy. Because the containers of different actions install different software packages, we analyze the similarities of the actions and minimize the number of packages installed for container sharing.

  • The design of an efficient inter-action container sharing mechanism. This mechanism divides the containers into three types, based on which Pagurus manages them in different ways and enables efficient inter-action container sharing.

Through adaptive inter-action container sharing, Pagurus greatly reduces the possibility that an action suffers from a cold container startup. Pagurus can also be integrated with prior work on reducing the container cold startup time, minimizing the overhead of serverless computing in all cases.

II Background and Related Work

II-A Background

In serverless computing, containers act as lightweight virtualization that creates multiple isolated user-space instances for actions. The serverless platform uses containers to encapsulate and execute the queries.

Figure 1 shows the way a user action is scheduled to run in serverless computing. As shown in the figure, a container must be booted/restored/initialized/invoked to host the action. If an action is invoked for the first time, or there is no alive container for the action, the serverless system encapsulates it and starts up a new container, initializes the software environment, loads the application-specific code [26], and runs the function. All these steps make up a cold startup [24, 13] and may take several seconds. The container cold startup significantly increases the end-to-end latency of user queries [43, 11, 33, 34], especially since the processing of a single query is often short (hundreds of milliseconds) for Internet services.

Fig. 1: Four possibilities to start an action in a serverless computing system.
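For concreteness, the unit being scheduled here is a plain user function. Below is a minimal OpenWhisk-style Python action; the greeting logic is just an illustration, but the main(args) entry point returning a dict is OpenWhisk's actual Python convention. On a cold startup, the container boot, runtime initialization, and module imports all happen before main() can run; on a warm startup, only main() runs.

# A minimal OpenWhisk-style Python action. The platform calls main() with
# the query parameters and expects a JSON-serializable dict in return.
import time  # imported once per container, i.e., once per cold startup

def main(args):
    name = args.get("name", "world")
    return {"greeting": "Hello, %s!" % name, "timestamp": time.time()}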

To deal with this problem, researchers have proposed the CRIU (checkpoint-restore in user-space) technique [3, 42, 41, 40], which restores container images from checkpoints to reduce the cold startup time. However, it still incurs long end-to-end latency [18]. Another approach, the prewarm startup adopted by OpenWhisk, spawns stem cell containers that are already initialized with the software environment in advance. Though it skips the container startup, so that users only need to perform application-specific code initialization [6, 7, 22, 31], its pre-loaded libraries can either make the image size too large [22, 39] or cause long startup latency for the prewarm container [31, 15].

If a container for a type of action is alive (it has just completed the previous invocation), a new action query of the same type can be directly executed in the running container (warm startup). Warm startup eliminates container booting and initialization, and warm containers keep serving actions to achieve better end-to-end latency [30]. However, warm startups are not always possible, because warm containers are recycled after a timeout, after which the cold startup happens again [24, 13].

II-B Related Work

There is already much prior work on reducing the container startup latency to improve the performance of serverless computing [23, 12, 18, 36]. SAND [12] separates different applications from each other via containers, while allowing functions of one application to run in the same container as different processes. X-Container [36] has been proposed as a new security paradigm for isolating cloud-native containers to achieve higher throughput. Catalyzer [18] utilizes CRIU with on-demand recovery. Hendrickson et al. [23] propose OpenLambda to deal with the long function startup latency and locality considerations.

Slacker [22] and SOCK [31] share a similar idea: containers are launched by generalizing zygote initialization to reduce the startup latency. Unikernels [28, 27] achieve lower latency and better throughput by bypassing the kernel in serverless environments while preserving function isolation. McGrath et al. [30] propose to reuse and create containers by introducing a queuing scheme in which workers collect the availability in different queues.

Existing work mainly focuses on seeking more lightweight virtualization technologies to pursue lower overhead, or on reducing the container startup time for one kind of action. Our work instead makes different actions work collaboratively to alleviate the container cold startup problem. Furthermore, Pagurus can be combined with different container technologies to achieve even lower cold startup latency.

III Motivation

III-A Experimental Setup

In this investigation, we use Apache OpenWhisk [6] with a local cache as the representative serverless computing platform, and the representative benchmark suites FunctionBench [25] and Faas-Profiler [34] as the benchmarks. The experiments are based on a 2-node cluster in which the nodes are connected with a 25Gb/s Ethernet switch: one node performs the computing, and the other generates the queries. Table I shows the hardware and software configurations of each node, and Table II lists the benchmark workloads used.

Node
  CPU: Intel Xeon Platinum 8163 @ 2.50GHz
  Cores: 40; shared L3 cache: 32MB
  DRAM: 256GB; Disk: NVMe SSD
  Network Interface Card (NIC): 25Gb/s
Network
  25Gb/s Ethernet switch
Software
  Nginx version: nginx/1.10.3; Database: Apache/couchdb:2.3
  Container runtime: Python-3.7.0, Linux with kernel 4.15.0
  Docker server and client version: 19.03
  Docker runc version: 1.0.0-rc10; Docker containerd version: 1.2.13
  Memory and timeout of serverless containers: 256MB, 60s
  Operating system: Linux with kernel 3.10.0
TABLE I: Hardware and software setups
FunctionBench
  dd: convert and copy a file.
  float_operation (fop): float operations (sin, cos, and sqrt).
  cloud_storage (clou): cloud storage service.
  mapreduce (mr): MapReduce wordcount workload.
  video_processing (vid): video processing.
  linpack (lp): solve linear equations.
  matmul (mm): matrix multiplication.
  k-means (kms): model training of k-means.
  image_resize (img): resize an image to several icons.
Faas-Profiler
  couchdb (cdb): JSON dump from CouchDB files.
  markdown2html (md): render Markdown text to HTML.
TABLE II: Benchmarks used in this paper

III-B Breakdown of the End-to-end Latency

The end-to-end latency of processing a user's query seriously affects the user experience. We therefore break down the end-to-end latencies of the benchmarks on serverless computing.

In a container-based serverless computing system, a cold container startup happens when a user query is received and no idle container exists. In this scenario, the system creates a new container to serve the query. For an action, the cold container startup includes operations such as initializing the customized execution environment. Traditionally, the cold startup overhead includes the container startup, the initialization of the function's software environment, and the application-specific code initialization. These operations may incur significant extra latency. Figure 2 shows the percentage of time spent on the cold container startup in the end-to-end latencies of the benchmarks.

Fig. 2: The distribution and percentages of time spent on the cold container startup and action execution in the end-to-end latencies of the representative benchmarks.

As observed from Figure 2, the cold startup overhead increases the end-to-end latencies of the benchmarks. In general, the container cold startup time is relatively stable. In the best case, the cold container startup still takes 48.2% of the end-to-end latency (cdb); in the worst case, it takes 93.8% (dd).

If we can eliminate the container cold startup, the end-to-end latencies of applications with serverless computing can be greatly reduced. To this end, Cloud vendors [4, 1, 2, 5, 32, 19], as well as recent works [23, 13, 29, 38], have focused on reducing the container startup time, as discussed in Section II.

Even if the container cold startup time is reduced to about 40ms in the best case [18], the cold startup still takes longer than the case where the query directly gets a warm container (10ms) [6, 34]. The increasing usage of high-level languages like Python can make the cold startup even more expensive [12, 34]. For latency-sensitive applications with millisecond-level latency targets, such as Internet services, the extra 30ms already results in poor user experience.

III-C Existence of Redundant Warm Containers

An important feature of serverless computing is elasticity. With the current container startup strategy, containers are started up when queries wait in the queue, and are recycled when there is no query for a certain period (e.g., 60 seconds in OpenWhisk). Therefore, whenever the running containers fail to catch up with the queries waiting in the queue, a new container must be started and experience a cold startup. In a large-scale serverless computing platform, a large number of actions from different users may coexist. If some actions have redundant warm containers, we envision that it is possible to reuse these containers to eliminate cold startups. To verify this, we manually schedule the container startup process to reuse the redundant warm containers, and check the number of containers launched and the QoS in the metric of end-to-end latency. Figure 3 reports the results, which confirm our assumption.

(a) The number of containers launched in OpenWhisk
(b) The number of actually needed containers
Fig. 3: The number of containers launched (a) and required (b) to ensure the 95%-ile latency target of an example benchmark vid.

Taking the benchmark vid as an example, Figure 3(a) and Figure 3(b) show the number of containers launched and the 95%-ile end-to-end latencies of the queries under various loads, for OpenWhisk and our manual scheduling, respectively. In the figure, the x-axis represents the query load (Query-per-Second, QPS); the bars show the number of warm containers (left y-axis), and the line shows the 95%-ile latency (right y-axis).

As observed from Figure 3(a), the 95%-ile latency of the benchmark shows a pattern of cyclic variation. This is because a small increase of the QPS at the saturation point brings in a new container startup, which lowers the overall latency, while further increasing the QPS raises the overall latency again. Since there is still headroom between the 95%-ile latency and the QoS target, it is possible to use fewer containers without violating the QoS requirement. This is exactly what we do in the manual scheduling. As shown in Figure 3(b), in some cases (the bars in black), we can safely remove at least one container while still achieving the 95%-ile latency target. This proves that some applications in the serverless computing platform do have warm idle containers during execution. In addition, from Figure 3(b), we also observe that the idle containers usually appear at a minimum turning point of the latency (the blue circles in the figure). This is because the minimum turning point usually comes with a new container startup that deals with the increasing queries in the waiting queue. We find this to be a common phenomenon on OpenWhisk: besides vid, the other benchmarks produce similar results. Furthermore, it can easily be anticipated that there will be even more idle containers when the query load drops suddenly. It has been widely witnessed that real Internet services have diurnal load patterns [14, 16], which will potentially bring in even more idle containers for reuse.

III-D Challenges in Reusing Containers and Ensuring QoS

Based on the above analysis and investigation, there is an opportunity to leverage the warm containers of some actions to eliminate the cold container startups of other actions. Because actions may require different software packages and execution environments, an action's container cannot be used directly by another action; it has to be re-packed first, and the re-packing operation may take a relatively long time.

By extracting the packages from the benchmark suite FunctionBench [25], we also find that 16.7% of the benchmarks import the same two libraries, and some common libraries are shared by as many as 22.2% of them. This finding indicates that different actions tend to share a high proportion of packages with others. Therefore, it is possible to build a shared container image, allowing several actions to run without installing extra packages. Even if in some cases the similarity of libraries between different actions is not high, specifically designed algorithms can build connections between them. However, building a shared container image is not an easy task, as several challenges remain:

  • Actions are not able to share containers. Because the containers of different actions pack different software packages, the containers of one action cannot be reused by other actions by default.

  • Container reuse brings extra time overhead. To allow other actions to run in an action's container, the reused container must install extra packages. Inappropriate package installation brings a large time overhead that negates the latency reduction from eliminating the cold container startup.

  • Security concerns about inter-action container sharing. When containers are shared between different actions, isolation is weakened, yet the security and privacy of the actions must still be ensured.

  • Inter-action container reuse brings extra scheduling complexity. With multiple actions active concurrently, an efficient mechanism is needed to manage container lending and renting between the actions.

IV Design of Pagurus

To tackle the above challenges, we propose Pagurus, a runtime container management system for eliminating the cold container startup through inter-action container sharing.

In a traditional serverless computing system, distributed deployment is usually implemented in one of two ways. One way is to divide the nodes into master and slave nodes: load balancing is realized by the central controller in the master node, and the data in the slave nodes (server nodes) is synchronized through the database, as shown in Figure 4. However, according to previous studies, the network bandwidth between the server nodes and the database is usually the bottleneck of serverless computing, which makes such a master-slave design impractical [24, 31, 23].

Fig. 4: The Master-Slave design of a serverless computing system.

Therefore, we design Pagurus with single-node management, where all nodes communicate with each other to maintain and update local databases and files. By such means, the performance is only relevant to the computing power of the server node. Figure 5 shows the design overview of Pagurus. For the management of shared containers between actions, an inter-action container scheduler is introduced; it is also responsible for re-packing containers at runtime when necessary. For each action, an intra-action container scheduler coordinates three container pools: an executant container pool, a lender container pool, and a renter container pool. Whenever a container experiences a cold startup, it is added to the executant container pool by default, and it stays there as long as it is recognized as a warm container. When no query requests the container for a certain period, the container is identified as idle and moved to the lender container pool for possible cross-action reuse. The renter container pool keeps the containers rented from other actions' lender container pools. When a cold startup is about to occur for an action, Pagurus first checks whether the action can reuse an existing lender container from another action to avoid the cold startup. Note that lender actions can also obtain renter containers whenever their images are re-packed by others.

Fig. 5: Design of Pagurus.

To enable container sharing, we customize the container structure of Pagurus with four modules: code load, action run, lend and rent, and code encryption, as shown in Figure 5. Code load and action run are the same as in current serverless computing platforms; they are responsible for loading code from the database and for execution monitoring when invoking actions, respectively. Lend and rent and code encryption are specially introduced in Pagurus for container sharing and safety guarantees, respectively. Lend and rent consists of the lend and rent functions: lend transfers an executant container into a lender container, and rent lets a renter inherit a container and its properties from a lender. The code encryption module guarantees container security during code reload.
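The following sketch illustrates how the four modules might map onto a container object. All class and method names here are our own illustration, not Pagurus's actual implementation.

# Illustrative sketch of the four container-side modules described above.
class SharedContainer:
    def __init__(self, image, scheduler):
        self.image = image        # image the container booted from
        self.owner = scheduler    # managing intra-action container scheduler
        self.state = "executant"  # executant -> lender -> renter
        self.code = None

    # code load: fetch the action code from the database
    def load_code(self, action_db, action_id):
        self.code = action_db.fetch(action_id)

    # action run: execute one query and monitor the invocation
    def run(self, query):
        return self.code.invoke(query)

    # lend: turn an idle executant container into a lender container,
    # wiping the previous owner's code and cached state first (statelessness)
    def lend(self):
        self.code = None
        self.state = "lender"

    # rent: let a renter action inherit this container and its properties
    def rent(self, renter_scheduler, decrypted_code):
        self.owner = renter_scheduler
        self.code = decrypted_code
        self.state = "renter"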

Container re-packing and container life-cycle management play critical roles in Pagurus. We will discuss the two issues in the following two sections, respectively, with special emphasis on answering the following questions:

  • Under what conditions can we identify a container in the executant container pool as idle and transfer it to the lender container pool?

  • Given a set of renter candidates, how do we select the appropriate renter containers while maintaining runtime performance efficiency in the metric of end-to-end latency?

  • How do we guarantee the security of the lender and the renters without exposing either side's code and data?

V Image Re-packing

V-A Idle Container Identification

Idle container identification and lender container generation are two essential functions of the intra-action container scheduler. To realize container sharing, it is first important to identify the containers of an action that can be lent, i.e., the idle containers. In a serverless system, the queries are executed by the running containers; if the total capacity of the containers exceeds what is required to ensure the QoS of the queries during a period, idle containers arise. Whether a container can be judged as idle depends on the query load, the processing power, and the desired QoS. Therefore, we first model the query processing logic of serverless computing as a producer-consumer problem, and analyze it via queuing theory, which has been widely applied to communication, computation, and storage systems.

Without loss of generality, the query arrivals of an action can be described as a Poisson process whose inter-arrival times are exponentially distributed with mean 1/λ. The query processing time follows an exponential distribution with mean 1/μ, independent of the arrivals. The queries are fairly allocated among the action's c containers. Thus, we can apply the M/M/c model [20] to analyze the query processing.

When the traffic intensity ρ = λ/(cμ) < 1, the system is in a stable state. In this case, we can derive the stationary probability that there are k queries in the system as

  p_k = p_0 (cρ)^k / k!  (0 ≤ k < c),    p_k = p_0 c^c ρ^k / c!  (k ≥ c),    (1)

where p_0 = [ Σ_{n=0}^{c-1} (cρ)^n / n! + (cρ)^c / (c! (1-ρ)) ]^{-1}. For brevity, the detailed analysis is omitted. From (1) we can further derive the waiting time W (i.e., the time a query spends in the waiting queue) under the stable state. No query waits if the number of queries in the system is less than the number of containers, i.e., k < c. Thus, the waiting time distribution satisfies

  P(W = 0) = 1 - (cρ)^c p_0 / (c! (1-ρ))    (2)

and

  P(W > t) = (cρ)^c p_0 / (c! (1-ρ)) · e^{-cμ(1-ρ)t},  t > 0.    (3)

Summing up (2) and (3), we obtain the general waiting time distribution as

  F_c(t) = P(W ≤ t) = 1 - (cρ)^c p_0 / (c! (1-ρ)) · e^{-cμ(1-ρ)t}.    (4)

Let p% denote the target percentile, define L(c) as the p%-ile latency of an action when it has c containers, and let L_QoS be the p%-ile latency requested by the action. When the waiting time is set to the maximum tolerable waiting time t_max, F_c(t_max) indicates whether the QoS requirement can be satisfied. Thus, we can derive the discriminant function determining whether an idle container of an action exists as

  L(c) ≤ L_QoS  and  L(c-1) ≤ L_QoS.    (5)

Both criteria in (5) need to be satisfied to identify an idle container. It is necessary that L(c) ≤ L_QoS; otherwise the current QoS of the action cannot be satisfied even with all c containers running in the serverless platform, and the action suffers QoS violation due to cold startup. Upon QoS satisfaction, we further evaluate whether one container can be removed from the executant container pool by checking the achievable QoS after the removal. If L(c-1) ≤ L_QoS, then c-1 containers are already enough to satisfy the QoS requirement, and an idle container will be removed by its intra-action container scheduler. Meanwhile, the inter-action container scheduler will re-pack the lender image for it.

Thus, the intra-action container scheduler of an action can apply the criteria in (5) to identify the idle containers for possible reuse by the other actions, and send the corresponding container information to the inter-action container scheduler for lender container image re-packing, as will be discussed in the next subsection.
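The discriminant can be evaluated numerically. The sketch below assumes the M/M/c formulas reconstructed above; the Erlang C probability and the percentile inversion follow standard queueing results, while the function names and the example load are our own.

import math

def erlang_c(c, lam, mu):
    """Probability that an arriving query must wait in an M/M/c queue."""
    rho = lam / (c * mu)
    if rho >= 1.0:
        return 1.0  # unstable: every query eventually waits
    a = lam / mu    # offered load, a = c * rho
    p0_inv = sum(a**n / math.factorial(n) for n in range(c))
    p0_inv += a**c / (math.factorial(c) * (1.0 - rho))
    return (a**c / (math.factorial(c) * (1.0 - rho))) / p0_inv

def pctile_wait(c, lam, mu, p=0.95):
    """p-th percentile of W, inverted from F_c(t) = 1 - Pw * exp(-c*mu*(1-rho)*t)."""
    rho = lam / (c * mu)
    if rho >= 1.0:
        return float("inf")   # unstable: no finite percentile exists
    pw = erlang_c(c, lam, mu)
    if pw <= 1.0 - p:
        return 0.0            # the p-th percentile query does not wait at all
    return math.log(pw / (1.0 - p)) / (c * mu * (1.0 - rho))

def has_idle_container(c, lam, mu, t_max, p=0.95):
    """Discriminant (5): QoS holds with c containers and still holds with c-1."""
    return (pctile_wait(c, lam, mu, p) <= t_max and
            c > 1 and pctile_wait(c - 1, lam, mu, p) <= t_max)

# Example: 4 containers, 20 queries/s, 8 queries/s per container, 50 ms target.
print(has_idle_container(4, 20.0, 8.0, t_max=0.05))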

V-B Similarity-based Re-packing

Re-packing refers to adding extra dependent libraries to a container image to maximize the possibility of reuse by other actions, as different actions usually ask for containers with different libraries. Intuitively, we could add arbitrarily many libraries to build a lender container for maximal reuse. However, this would result in an extremely large container, leading to high overhead. Fortunately, we notice that different actions share some libraries to different degrees. This motivates the similarity-based container re-packing in Pagurus.

The inter-action container scheduler analyzes the software environment of each action, and re-packs the lender container image for each running intra-action container scheduler by checking the similarities between the lender action and other actions. To filter out the actions similar to a lender action, we apply collaborative filtering and adopt Nearest Neighbor Search (NNS) to calculate the similarity between two actions, using cosine-based similarity, a well-known similarity metric traditionally used in user interest recommendation. We refer to the actions that require additional libraries as action-α, and to the actions without additional libraries as action-β. The inter-action container scheduler generates the lender container images by re-packing the similar images in the following steps.

  • Collect information about all actions. All the information about their libraries is recorded, including the names and versions of the libraries. For each action, the additional libraries declared by the user are recorded as name-version pairs. When users do not declare the version of a library, the inter-action container scheduler takes the latest version by default, which can introduce the hazard of library version contradictions. For example, if one action requires version 1.0 of a library while another requires version 2.0 of the same library, neither can lease the other's containers because of the version contradiction.

  • Create a vector to hold the libraries of each action. For each lender action, the scheduler first filters the actions that have common libraries with the lender action as the candidate actions. It then checks whether any library in a candidate action is inconsistent with the lender action (e.g., a version contradiction); in that case, the candidate is removed. Finally, the scheduler takes the union set of the libraries of the lender action and the remaining candidate actions to form a library vector for distance calculation.

  • Calculate the cosine distance between the lender action and the candidate actions as the similarity. The filter logic selects the top similarity values and takes the corresponding actions as renters. If no candidate actions exist (for example, when an action-β is selected as the lender action), action-αs without version contradictions are added at random as renters. Besides, a number of action-βs are also selected at random as renters.

Therefore, up to k_α action-αs and k_β action-βs will be selected as renters, and the inter-action container scheduler wraps the renters' additional libraries into the image of the lender action. Meanwhile, the renters' code files are also re-packed by the code encryption module for safety. k_α and k_β are hyper-parameters, and their values obviously affect the re-packing overhead and time. Their values should be set according to (6), so that all actions get the chance to be re-packed into lender containers.

(6)
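A sketch of the selection logic above, assuming libraries are recorded as name-version pairs per action; the helper names, the binary library vectors, and the top-k cutoff are our simplifications of the collaborative-filtering step, not Pagurus's exact code.

import math

def conflicts(libs_a, libs_b):
    """Version contradiction: the same library pinned to different versions."""
    return any(lib in libs_b and libs_b[lib] != ver
               for lib, ver in libs_a.items())

def cosine_similarity(libs_a, libs_b):
    """Cosine similarity over the union of the two actions' library sets."""
    union = set(libs_a) | set(libs_b)
    va = [1 if lib in libs_a else 0 for lib in union]
    vb = [1 if lib in libs_b else 0 for lib in union]
    dot = sum(x * y for x, y in zip(va, vb))
    na, nb = math.sqrt(sum(va)), math.sqrt(sum(vb))
    return dot / (na * nb) if na and nb else 0.0

def select_renters(lender_libs, actions, k=2):
    """Pick the top-k most similar, non-conflicting actions as renters."""
    candidates = [(name, cosine_similarity(lender_libs, libs))
                  for name, libs in actions.items()
                  if set(lender_libs) & set(libs)
                  and not conflicts(lender_libs, libs)]
    candidates.sort(key=lambda t: t[1], reverse=True)
    return [name for name, _ in candidates[:k]]

# Libraries recorded as {name: version} pairs per action (illustrative data).
actions = {
    "img": {"Pillow": "8.0"},
    "vid": {"Pillow": "8.0", "opencv-python": "4.5"},
    "kms": {"scikit-learn": "0.24", "numpy": "1.19"},
}
print(select_renters({"Pillow": "8.0", "numpy": "1.19"}, actions))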

Fig. 6: Timeline of Pagurus operations.

Figure 6 shows the re-packing operations along the timeline. The inter-action container scheduler re-packs the lender container images based on the collected data (the image re-packing phase in Figure 6). The time cost of lender re-packing is hard to predict due to the uncertain cardinality of the library vector: undoubtedly, a higher cardinality, i.e., more libraries to re-pack, implies a longer re-packing time. However, according to our experiments, the re-packing phase usually takes less than 10s for most actions. After re-packing, the images are committed to the different intra-action container schedulers for creating lender containers; the overhead depends on the number of additional libraries to be installed. If some libraries take a relatively long time to re-pack, users will resort to submitting a virtual environment [10] or a custom container image [9] to avoid the long installation time. In this case, Pagurus adopts traditional CRIU to generate the containers instead of re-packing.

The inter-action container scheduler deals with creating and updating the re-packed image, while the intra-action container scheduler is responsible for managing the container pools, e.g., starting an executant container from the default image and generating the lender container from the re-packed image. Unless the re-packed image is updated, a container only boots from it the first time; any subsequent container uses CRIU to accelerate the startup. The renter container check module makes sure that the runtime and libraries in the intra-action container scheduler are consistent with those in the inter-action container scheduler when performing the container re-packing.

V-C Security Guarantee

In Pagurus, as a lender container may be shared by several renter actions, a natural and inevitable concern is the security of the lender container. Meanwhile, as the code files of renter actions need to be re-packed into the shared container, the renters' security cannot be ignored either.

For the lenders' security, Pagurus exploits the stateless nature of serverless computing to clean up the user code and cache of the lender container before re-packing a lender image; no renter action can get any previous information about the action owning the lender container. For the renters' security, Pagurus encrypts a renter action's code file with the code encryption module before re-packing, to prevent code disclosure. The code encryption consists of two parts. First, to protect the privacy of the users' file names, Pagurus adopts a renaming strategy that renames code files uniformly, as adopted by OpenWhisk [8], and the environment folder of a user action is encrypted into a ZIP file with the user's password. Second, when a lender container is generated, all the renters' code files coexist in this container; in this case, all the renters' folders are encrypted with a password specified by the inter-action container scheduler to protect the renters' privacy and code security in the lender container. Note that the cleanup and code decryption are executed by the inter-action container scheduler, so neither side can get any information about the other.

In conclusion, although Pagurus weakens the level of isolation, the code security and privacy required by isolation are still ensured. Using encryption to secure data and files in cloud computing is common practice [21, 17, 37], so it is acceptable for actions to adopt encryption to address the security concern in container sharing.
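As a concrete illustration of the two-level encryption, the sketch below uses the third-party pyzipper library to produce password-protected AES ZIPs. The paths, passwords, and the choice of AES-zip are our assumptions, since the paper only specifies password-protected ZIP files.

# Minimal sketch of the two-level encryption (pip install pyzipper).
# Paths and passwords are illustrative, not Pagurus's actual values.
import os
import pyzipper

def encrypt_action_dir(src_dir, dst_zip, password):
    """Pack an action's (already renamed) code folder into an encrypted ZIP."""
    with pyzipper.AESZipFile(dst_zip, "w",
                             compression=pyzipper.ZIP_DEFLATED,
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(password)
        for root, _, files in os.walk(src_dir):
            for f in files:
                path = os.path.join(root, f)
                zf.write(path, arcname=os.path.relpath(path, src_dir))

# Level 1: the user's own password protects the action's environment folder.
encrypt_action_dir("env/", "action_env.zip", b"user-password")
# Level 2: the inter-action scheduler's password protects each renter's
# folder that coexists inside a lender container.
encrypt_action_dir("renter_code/", "renter_code.zip", b"scheduler-password")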

VI Inter-action Container Management

In this section, we describe the steps of creating lender containers from idle containers, and using the borrowed container to run an action.

VI-A Generating a Lender Container

If an executant container of an action is identified to be idle, its intra-action container scheduler will generate a lender container from the re-packed image returned by the inter-action container scheduler. Figure 7 shows the detailed workflow of generating a lender container.

Fig. 7: Generating a lender container from an idle executant container.

As shown in Figure 7, the executant containers of an action periodically report their status to the intra-action container scheduler. Based on the status of each container, the intra-action container scheduler identifies the redundant idle containers. Once an idle container is identified among the executant containers, the intra-action container scheduler re-packs the idle container into a lender container (Steps 2.1 and 2.2). In more detail, the idle container is deleted from the executant container pool, and the corresponding lender container is added to the lender container pool. This information is then fed back to the intra-action container scheduler (Steps 3.1 and 3.2), so that the scheduler is aware of the change. In the last step, the intra-action container scheduler informs the inter-action container scheduler of the change (Step 4). In this way, other actions are able to borrow the container through the inter-action container scheduler.
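The workflow can be summarized in a short procedural sketch; the scheduler objects and method names below are illustrative, not Pagurus's actual code.

# Procedural sketch of the lender-generation workflow in Figure 7.
def generate_lender(intra, inter, action):
    # Step 1: executant containers periodically report their status.
    statuses = [c.report_status() for c in intra.executant_pool]

    # Steps 2.1/2.2: pick an idle container (criteria (5)) and re-create it
    # from the re-packed lender image provided by the inter-action scheduler.
    idle = intra.find_idle(statuses)
    if idle is None:
        return None
    lender = idle.rebuild_from(inter.repacked_image(action))

    # Steps 3.1/3.2: move it between pools so the scheduler sees the change.
    intra.executant_pool.remove(idle)
    intra.lender_pool.append(lender)

    # Step 4: inform the inter-action scheduler so other actions can rent it.
    inter.register_lender(action, lender)
    return lender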

VI-B Renting a Container from Other Actions

When an action ACT needs a container to run but there is no free warm container for it, its intra-action container scheduler submits a rent request to the inter-action container scheduler. If there exists a lender container that is already prepared for ACT by another action, the container is changed into a renter container of ACT and is put in the renter container pool of ACT. Figure 8 shows the detailed steps of renting a container from other actions.

Fig. 8: The steps of Action_B renting a container from Action_A.

Among these steps, it is crucial to guarantee the lender's container delivery and to ensure the information safety of the lender. In Step 3, the inter-action container scheduler deletes the code and data of Action_A, as well as the other renters' code files in the lender container, and decrypts the code file of Action_B. The inter-action container scheduler then informs Action_A and Action_B to prepare for the container transfer (Steps 3.1 and 3.2). Once Action_B's intra-action container scheduler receives the returned container status, it schedules this lender container into its renter container pool (Step 4.2), and Action_A's lender container pool clears the related status and information (Step 4.1). Meanwhile, the management privilege of the lender container is transferred from Action_A's intra-action container scheduler to Action_B's intra-action container scheduler.

The code cleaning of Action_A and the code decryption of Action_B are executed in parallel. Since the overhead of cleaning is less than the time cost of code decryption, the overhead of code cleaning is hidden from users.
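A companion sketch of the renting workflow, with the same illustrative object model; note how the parallel cleaning and decryption from the paragraph above map onto two futures.

# Procedural sketch of the renting workflow in Figure 8 (names are ours).
from concurrent.futures import ThreadPoolExecutor

def rent_container(inter, action_a, action_b):
    container = inter.lender_for(action_b)  # lender prepared for Action_B
    if container is None:
        return None                          # fall back to a cold startup

    # Step 3: clean Action_A's code/data (and the other renters' files)
    # while decrypting Action_B's own code file, in parallel; cleaning is
    # cheaper, so its cost hides behind the decryption.
    with ThreadPoolExecutor(max_workers=2) as pool:
        cleaning = pool.submit(container.clean_foreign_code, action_b)
        decrypting = pool.submit(inter.decrypt_code, action_b)
        cleaning.result()
        container.code = decrypting.result()

    # Steps 4.1/4.2: hand over pool membership and management privilege.
    action_a.intra.lender_pool.remove(container)
    action_b.intra.renter_pool.append(container)
    container.owner = action_b.intra
    return container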

Fig. 9: The state transition diagram of three containers.

Based on Sections VI-A and VI-B, we can summarize the state transitions of the three container types in Figure 9. An action is executed in executant containers by default, and all cold-started containers are managed in the executant container pool. A lender container is transformed from an idle executant container, identified by its intra-action container scheduler, into a shared container re-generated from the re-packed image. A renter container is inherited from another action's lender container, so that queries get executed without a container cold startup. All these containers keep running until they time out and are recycled.

VI-C Recycling Containers in Different Pools

In serverless computing, when the load of an action drops, some warm containers of the action are recycled to save resources. Recycling is done by monitoring the status of the containers and setting a timeout period for each container: if a container does not receive new requests within the timeout period (60s in OpenWhisk), it is recycled. This recycling policy cannot be used in Pagurus directly, as there are three types of containers in Pagurus.

Therefore, we design a priority-based recycling strategy for Pagurus. In this strategy, the inter-action container scheduler manages the recycling of all its containers, including executant containers, lender containers, and renter containers. For an action, Pagurus recycles the renter containers before all other containers, and recycles the lender containers after all other containers. The design philosophy is that an action does not need extra rented containers when its own containers tend to be recycled. Figure 10 shows the order of recycling containers when the load of an action drops.

Fig. 10: Recycling the three types of containers.

Specifically, we set different timeout periods for the three types of containers. The renter container pool has the minimum timeout period (Fig. 10). Since an executant container does not store information and libraries for other actions, recycling it does not affect the scheduling of the intra-action container scheduler; the timeout period of the executant containers (Fig. 10) is slightly larger than that of the renter containers. Because the lender containers re-pack additional libraries for multiple actions, a lender container can still serve invocations of the action even if all executant and renter containers are recycled. For the above reasons, the lender containers have the maximum timeout period (Fig. 10).

In our current implementation, we set the timeout periods for the renter containers, executant containers, and lender containers to be 40s, 60s, and 120s by default.
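A minimal sketch of this recycling policy; the 40s/60s/120s defaults come from the paper, while the loop and data structures are our own.

# Sketch of the priority-based recycling with per-pool timeouts.
import time

TIMEOUTS = {"renter": 40, "executant": 60, "lender": 120}

def recycle_idle(pools, now=None):
    """Recycle containers whose idle time exceeds their pool's timeout.
    Renters expire first (40s), then executants (60s), then lenders (120s)."""
    now = now or time.time()
    for kind in ("renter", "executant", "lender"):  # recycling priority order
        for container in list(pools[kind]):
            if now - container.last_used > TIMEOUTS[kind]:
                pools[kind].remove(container)
                container.destroy()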

VII Evaluation of Pagurus

In this section, we first evaluate the performance of Pagurus in reducing the end-to-end latencies of applications when there are no warm containers for them. We then discuss the probability that Pagurus eliminates the cold startup, its effect in supporting bursty loads, and its effect when integrated with orthogonal techniques.

VII-A Experimental Setup

In the experiments, we evaluate Pagurus on the 2-node cluster described in Section III-A; the nodes also serve the inter-action container scheduler for re-packing. Although we only use a small-scale cluster in this section, Section IV covers the situation in large-scale Clouds, where serverless computing platforms often manage the containers on each node independently. Since containers are bound to the process context, containers are not supposed to be migrated to other nodes in most cases.

Fig. 11: The configurations of actions running in the background.

We use the representative serverless computing benchmark suites FunctionBench [25] and Faas-Profiler [34] to evaluate Pagurus; Table II lists the benchmark workloads used. In the following experiments, we set the maximum number of containers in the renter pool to 2, and randomly run two benchmarks in the background with high loads to simulate a real-system situation. To better illustrate the background configuration, Figure 11 shows a schematic diagram. In a real system, long-running services and occasional queries share the same serverless computing platform, and the services running in the background are also uncertain. So there are 45 combinations of experimental configurations for Pagurus when we randomly select two benchmarks to be the lenders. In each experimental configuration, we run each benchmark 100 times by invoking the benchmark once every 60 seconds; in this way, the benchmark suffers from a cold container startup in all 100 tests. In the following experiments, we collect the end-to-end latencies of the 100 tests of each benchmark under the above setup.
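The measurement loop implied by this setup can be sketched as follows. The wsk action invoke command with --result and --blocking is the real OpenWhisk CLI; the driver itself is our reconstruction of the stated methodology, not the authors' script.

# Invoke a benchmark once every 60s (the OpenWhisk recycle timeout), so
# every invocation is a cold start, and record end-to-end latency.
import subprocess
import time

def measure_cold_starts(action, runs=100, interval=60):
    latencies = []
    for _ in range(runs):
        start = time.time()
        subprocess.run(["wsk", "action", "invoke", action,
                        "--result", "--blocking"],
                       check=True, capture_output=True)
        latencies.append(time.time() - start)
        time.sleep(interval)  # let the warm container time out and recycle
    return latencies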

VII-B Reducing End-to-end Latency

This subsection shows the effectiveness of Pagurus in reducing the end-to-end latency of a benchmark. In this experiment, for each benchmark, we randomly select two of the other 10 benchmarks to be the lenders in the background for Pagurus. We compare Pagurus with Apache OpenWhisk [6] and a restore-based method [3]. OpenWhisk creates a new container for a benchmark from the corresponding container image and starts up the new container. The restore-based method stores the checkpoint of the container in memory, and restores the checkpoint from main memory when needed.

Fig. 12: The end-to-end latencies of the benchmarks when they suffer from cold startup with OpenWhisk, Restore-based method, and Pagurus.

Figure 12 shows the end-to-end latencies of the benchmarks with OpenWhisk, the restore-based method, and Pagurus. In the figure, "optimal" reports the latencies of the benchmarks when they get warm containers directly. As shown in the figure, all the benchmarks achieve the shortest end-to-end latencies with Pagurus. In the best case, where actions get lender containers, Pagurus reduces the end-to-end latencies of the benchmarks by 75.6% and 51.9% compared with OpenWhisk and the restore-based method, respectively. Compared with the optimal scenario, Pagurus only introduces 0.48% longer end-to-end latency on average.

Pagurus greatly reduces the end-to-end latencies because it schedules the idle shared containers to speed up the actions that would otherwise suffer from cold container startups. If an action query is hosted in a shared container, the container startup phase for the query is skipped and only the user-specific code initialization is needed. According to our measurements, Pagurus schedules a lender container to a query in less than 15µs, and completes the container cleaning and application-specific code initialization in less than 10ms.

The restore-based method also reduces the end-to-end latencies of the benchmarks compared with OpenWhisk, mainly because it eliminates the overhead of creating new container images. However, it consumes a large memory space and still results in longer end-to-end latencies than Pagurus.

VII-C Eliminating the Container Cold Startup

It is possible that no renter container is available for an action. For an action, the extra software libraries it packs determine the probability that it skips a cold startup. This probability of eliminating the cold startup is an important indicator of the effectiveness of Pagurus.

In this experiment, we run each benchmark in the 45 experimental setups; in each setup, we select 2 out of the other 10 benchmarks as the lenders. Figure 13 shows the percentage of the setups in which each benchmark skips the cold container startup.

Fig. 13: The probability of eliminating cold startup with Pagurus.

As observed from Figure 13, Pagurus eliminates all the cold container startups for dd, fop, lp, mm, cdb, and clou, because these benchmarks can always rent containers from the lenders. They can always find lenders because they do not require additional libraries to initialize the software environment; in this case, the container re-packing algorithm is able to re-pack the redundant idle containers of any action into their renter containers.

For the benchmarks that require extra libraries (img, vid, kms, mr, and md), the possibility of eliminating the cold startup depends on the library similarity between the lender actions and the renter actions. The more common and popular the additional libraries an action requires, the higher the probability that it is re-packed by the lender actions. For instance, in 77.3%, 59.1%, and 57.6% of the configurations, Pagurus eliminates the cold container startup for vid, kms, and img, respectively, because these benchmarks mainly use the widely shared Pillow and sk-learn packages. However, the packages used by mr and md are unpopular, so lender actions give them lower priority when re-packing lender containers. This leads to the relatively low probabilities (34.8% and 36.4%) of eliminating the cold startup for mr and md.

Fig. 14: The benchmark similarities in the container re-packing algorithm.

To better understand this problem, Figure 14 shows the heat map of the benchmark similarities in the container re-packing algorithm. In the figure, the small square at row vid and column img represents the possibility that vid serves as the lender for img's rental; the small square at row img and column vid represents the possibility that vid serves as the renter from img. Observed from the figure, none of the benchmarks tend to re-pack containers for mr and md. These results explain why Pagurus shows a relatively low possibility of eliminating cold container startups for mr and md. One method to further resolve this problem is taking prior knowledge into consideration; another is increasing the number of renters that each lender can choose.

Note that the heat map in Figure 14 is asymmetric, because the benchmarks rely on different software packages. Assume an action A relies on two software packages, and another action B relies on only one of them. In this case, the containers of A have all the packages needed by B, while the containers of B have only half of the packages needed by A. The possibilities of re-packing the containers of A for B and re-packing the containers of B for A are therefore different.

A lender container can be prepared for multiple renter actions, but not all the benchmarks can always skip the cold container startup.

VII-D Integrating with Work on Reducing Cold Startup Time

While Pagurus eliminates container cold startups, it can also be integrated with prior work that reduces the container cold startup time. In this subsection, we integrate Pagurus with the restore-based method and with Catalyzer [18], and report their performance. For each benchmark, we still run it in the 45 experimental setups; in each setup, we launch the benchmark 100 times with an interval of 60 seconds. Figure 15 shows the average container startup time of each benchmark across the tests.

As observed from Figure 15, Restore+Pagurus reduces the average container startup time of the benchmarks by 43.4% on average compared with the original restore-based method; Catalyzer+Pagurus reduces it by 12.2% on average compared with Catalyzer. Pagurus further reduces the average container startup time because it can sometimes skip the container startup phase entirely. Even if no appropriate lender container is returned, Pagurus does not slow down the container startup. Therefore, Pagurus can be integrated with prior work to further reduce the average cold startup time.

Fig. 15: The average cold startup latency when Pagurus is integrated with Restore-based method, and Catalyzer respectively.
(a) mm
(b) img
Fig. 16: The cumulative distribution of the benchmarks’ cold startup end-to-end latencies in Pagurus and Restore.

Figure 16 shows the cumulative distribution of the end-to-end latencies of mm and img with the restore-based method and Restore+Pagurus. In the figure, "optimal" shows the latencies of the benchmarks when all the containers are warm. By integrating Pagurus with the restore-based method, the end-to-end latencies of all 4,500 tests of mm are greatly reduced, and 52.1% of the queries of img show much shorter end-to-end latencies. For mm, action queries completely skip the container startup, so the cold startup overhead is eliminated. For img, about 52.1% of the action queries eliminate the cold startup overhead; the rest still have to experience the container startup phase with the restore-based method.

For Pagurus, we can observe a large discontinuity in Figure 16(b), as Pagurus helps most of the queries skip the container startup phase, and only a few of them still suffer from the cold startup problem.

Fig. 17: The average cold startup latencies of the benchmarks with the prewarm startup policy. 'Prewarm for each' means that each action gets one prewarmed container, and 'prewarm for all' means all actions can initialize one specific container created from a common cache.

Traditionally, the prewarm startup policy (introduced in Section II) can also be integrated with these advanced container startup techniques. From Figure 17, we observe that Pagurus still performs better than the 'prewarm for all' method. This is because the libraries in the specific prewarmed container may conflict with the user action; in this case, these prewarmed containers cannot be initialized, making the user actions experience the cold startup. The 'prewarm for each' method shows lower end-to-end latency than Pagurus, since its prewarmed containers run continuously in the background, but it requires an additional 2.75GB of memory. So, despite its comparatively high performance, the 'prewarm for each' method is impractical because of its extremely high resource usage.

VII-E Supporting Bursty Loads

Traditional serverless platforms (e.g., OpenWhisk) fail to support bursty loads without causing QoS violations, due to the long latency of cold container startups. Pagurus is able to smoothly process the bursty load of an action through inter-action container sharing.

Fig. 18: The supported bursty loads of the benchmarks with Pagurus.

Figure 18 shows the supported bursty loads of the benchmarks without QoS violation, when Pagurus allows a benchmark to rent 1 or 2 more containers. As observed from Figure 18, for all the benchmarks, Pagurus is able to support 3× bursty loads if the benchmarks are able to rent 2 renter containers from other actions. This is mainly because the overhead of renting containers from other actions is much lower than that of creating new containers.

Fig. 19: The size of the reduced memory usage to support the bursty loads with Pagurus compared with OpenWhisk.

Besides, Pagurus also reduces the memory consumed to support the bursty loads of the benchmarks compared with OpenWhisk. To support a bursty load with OpenWhisk, a straightforward method is to maintain more warm containers; however, these containers consume a large main memory space. On the contrary, with Pagurus, there is no need to launch additional containers to support the bursty loads. Figure 19 shows the size of the main memory space saved with Pagurus: 0.25GB to 3GB of memory is saved in the case of 1 renter container, and 0.5GB to 6.75GB in the case of 2 renter containers, compared with OpenWhisk.

VII-F Overheads in Pagurus

Type                      | Position                  | Time/Resource Overhead
Encrypted code files size | Running lender container  | 4.3125KB
Re-packed image size      | Creating lender container | 485MB
Re-packing image time     | Creating lender container | 6.647s
Checkpoint files size     | Creating container        | 332KB
CPU overhead              | Re-packing container      | 1.61%
TABLE III: Time and space overheads introduced by Pagurus

Table III shows the overheads introduced by Pagurus, which come in five types. In Pagurus, the lender container stores the renters' encrypted code files as an extra operation, and decrypts the corresponding renter's code file when eliminating a cold startup. This approach only takes 4.3125KB of space to save the information and less than 10ms to decrypt, far less than the roughly 200ms of database transmission. The operation of re-packing images is introduced in the generation of lender containers: creating the extra re-packed images takes 6.647s on average, and 485MB of space is allocated for storing them. The preparation of lender containers is done asynchronously and does not cause long end-to-end latency. Containers only need to boot from images the first time; in other cases, container startup is accelerated by checkpoint files, which take an average of 332KB of space. Besides, the re-packed images and the checkpoint files are recycled when the corresponding actions are no longer invoked.

The most important part of the overhead is the CPU usage when Pagurus re-packs containers. Our experiment shows that when the inter-action container scheduler re-packs the container image for an action, the average CPU utilization on the node is only about 1.61%. Taking the communication and synchronization between nodes into account, the re-packing phase consumes about 2.4% of the CPU resources. If we limit the CPU usage of re-packing to less than 10% of the server, each node can re-pack container images for about 34 actions during the 1-minute data collection. In conclusion, the overhead incurred by Pagurus is negligible.

VIII Conclusion

This paper presents Pagurus, a container management system for serverless computing that eliminates container cold startups through inter-action container sharing. We implement the design by introducing three unique container pools: lender containers, executant containers, and renter containers. The inter-action container scheduler, cooperating with the intra-action container scheduler of each action, enables containers to be scheduled between different actions to reduce container cold startups. The evaluation results show that Pagurus significantly eliminates cold startups. Besides, Pagurus can also be integrated with several container technologies to minimize the container startup overhead of serverless computing.

References

  • [1] Alibaba Function Compute. https://cn.aliyun.com/product/fc, Apr. 2020.
  • [2] Microsoft Azure Functions. azure.microsoft.com/en-us/services/functions, Apr. 2019.
  • [3] CRIU: Checkpoint/Restore In Userspace. https://github.com/checkpoint-restore/criu, Apr. 2019.
  • [4] Google Cloud Functions. cloud.google.com/functions, Apr. 2019.
  • [5] AWS Lambda. aws.amazon.com/cn/lambda, Apr. 2019.
  • [6] Apache OpenWhisk. openwhisk.apache.org, Apr. 2019.
  • [7] OpenWhisk annotations. github.com/apache/openwhisk/blob/90c20a847b9a70b43e316fd89a0a15ae2ee39cc4/docs/annotations.md, Apr. 2019.
  • [8] OpenWhisk Python actions. https://github.com/apache/openwhisk/blob/master/docs/actions-python.md, Apr. 2019.
  • [9] OpenWhisk Docker actions. https://github.com/apache/openwhisk/blob/master/docs/actions-docker.md, Apr. 2019.
  • [10] OpenWhisk Python actions. https://github.com/apache/openwhisk/blob/master/docs/actions-python.md, Apr. 2019.
  • [11] 2018 serverless community survey. serverless.com/blog/2018-serverless-community-survey-huge-growth-usage, Apr. 2019.
  • [12] I. E. Akkus, R. Chen, I. Rimac, M. Stein, and K. Satzke (2018) SAND: towards high-performance serverless computing. In USENIX ATC, pp. 923–935.
  • [13] I. Baldini, P. Castro, K. Chang, P. Cheng, S. Fink, V. Ishakian, N. Mitchell, V. Muthusamy, R. Rabbah, A. Slominski, et al. (2017) Serverless computing: current trends and open problems. In Research Advances in Cloud Computing, pp. 1–20.
  • [14] L. A. Barroso, J. Dean, and U. Hölzle (2003) Web search for a planet: the Google cluster architecture. IEEE Micro 23 (2), pp. 22–28.
  • [15] E. A. Brewer (2015) Kubernetes and the path to cloud native. In Proceedings of the Sixth ACM Symposium on Cloud Computing (SoCC 2015), Kohala Coast, Hawaii, USA, pp. 167.
  • [16] J. Dean and L. A. Barroso (2013) The tail at scale. Communications of the ACM 56 (2), pp. 74–80.
  • [17] X. Dong, J. Yu, Y. Luo, Y. Chen, G. Xue, and M. Li (2013) Achieving secure and efficient data collaboration in cloud computing. In 21st IEEE/ACM International Symposium on Quality of Service (IWQoS 2013), Montreal, Canada, pp. 195–200.
  • [18] D. Du, T. Yu, Y. Xia, B. Zang, G. Yan, C. Qin, Q. Wu, and H. Chen (2020) Catalyzer: sub-millisecond startup for serverless computing with initialization-less booting. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pp. 467–481.
  • [19] Firecracker: lightweight virtualization for serverless computing. https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/, Apr. 2019.
  • [20] N. Gautam (2012) Analysis of Queues: Methods and Applications. CRC Press.
  • [21] V. Goyal, O. Pandey, A. Sahai, and B. Waters (2006) Attribute-based encryption for fine-grained access control of encrypted data. In Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS 2006), Alexandria, VA, USA, pp. 89–98.
  • [22] T. Harter, B. Salmon, R. Liu, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau (2016) Slacker: fast distribution with lazy Docker containers. In 14th USENIX Conference on File and Storage Technologies (FAST 2016), Santa Clara, CA, USA, pp. 181–195.
  • [23] S. Hendrickson, S. Sturdevant, E. Oakes, T. Harter, V. Venkataramani, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau (2016) Serverless computation with OpenLambda. ;login: Usenix Mag. 41 (4).
  • [13] I. Baldini, P. Castro, K. Chang, P. Cheng, S. Fink, V. Ishakian, N. Mitchell, V. Muthusamy, R. Rabbah, A. Slominski, et al. (2017) Serverless computing: current trends and open problems. In Research Advances in Cloud Computing, pp. 1–20. Cited by: §II-A, §II-A, §III-B.
  • [14] L. A. Barroso, J. Dean, and U. Hölzle (2003) Web search for a planet: the google cluster architecture. IEEE micro (2), pp. 22–28. Cited by: §I, §III-C.
  • [15] E. A. Brewer (2015) Kubernetes and the path to cloud native. In Proceedings of the Sixth ACM Symposium on Cloud Computing, SoCC 2015, Kohala Coast, Hawaii, USA, August 27-29, 2015, pp. 167. External Links: Link, Document Cited by: §II-A.
  • [16] J. Dean and L. A. Barroso (2013) The tail at scale. Communications of the ACM 56 (2), pp. 74–80. Cited by: §I, §III-C.
  • [17] X. Dong, J. Yu, Y. Luo, Y. Chen, G. Xue, and M. Li (2013) Achieving secure and efficient data collaboration in cloud computing. In 21st IEEE/ACM International Symposium on Quality of Service, IWQoS 2013, Montreal, Canada, 3-4 June 2013, pp. 195–200. External Links: Link, Document Cited by: §V-C.
  • [18] D. Du, T. Yu, Y. Xia, B. Zang, G. Yan, C. Qin, Q. Wu, and H. Chen (2020) Catalyzer: sub-millisecond startup for serverless computing with initialization-less booting. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 467–481. Cited by: §II-A, §II-B, §III-B, §VII-D.
  • [19] Firecracker lightweight virtualization for serverless computing.. Note: https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/Apr., 2019 Cited by: §III-B.
  • [20] N. Gautam (2012) Analysis of queues: methods and applications. CRC Press. Cited by: §V-A.
  • [21] V. Goyal, O. Pandey, A. Sahai, and B. Waters (2006) Attribute-based encryption for fine-grained access control of encrypted data. In Proceedings of the 13th ACM Conference on Computer and Communications Security, CCS 2006, Alexandria, VA, USA, Ioctober 30 - November 3, 2006, A. Juels, R. N. Wright, and S. D. C. di Vimercati (Eds.), pp. 89–98. External Links: Link, Document Cited by: §V-C.
  • [22] T. Harter, B. Salmon, R. Liu, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau (2016) Slacker: fast distribution with lazy docker containers. In 14th USENIX Conference on File and Storage Technologies, FAST 2016, Santa Clara, CA, USA, February 22-25, 2016, pp. 181–195. External Links: Link Cited by: §II-A, §II-B.
  • [23] S. Hendrickson, S. Sturdevant, E. Oakes, T. Harter, V. Venkataramani, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau (2016) Serverless computation with openlambda. login Usenix Mag. 41 (4). External Links: Link Cited by: §II-B, §III-B, §IV.
  • [24] E. Jonas, J. Schleier-Smith, V. Sreekanti, C. Tsai, A. Khandelwal, Q. Pu, V. Shankar, J. Carreira, K. Krauth, N. Yadwadkar, et al. (2019) Cloud programming simplified: a berkeley view on serverless computing. arXiv preprint arXiv:1902.03383. Cited by: §II-A, §II-A, §IV.
  • [25] J. Kim and K. Lee (2019) FunctionBench: a suite of workloads for serverless cloud function service. In CLOUD, pp. 502–504. Cited by: §III-A, §III-D, §VII-A.
  • [26] Knative. Note: https://github.com/knativeApr., 2019 Cited by: §II-A.
  • [27] R. Koller and D. Williams (2017) Will serverless end the dominance of linux in the cloud?. In Proceedings of the 16th Workshop on Hot Topics in Operating Systems, HotOS 2017, Whistler, BC, Canada, May 8-10, 2017, pp. 169–173. External Links: Link, Document Cited by: §II-B.
  • [28] A. Madhavapeddy and D. J. Scott (2014) Unikernels: the rise of the virtual library operating system. Commun. ACM 57 (1), pp. 61–69. External Links: Link, Document Cited by: §II-B.
  • [29] F. Manco, C. Lupu, F. Schmidt, J. Mendes, S. Kuenzer, S. Sati, K. Yasukata, C. Raiciu, and F. Huici (2017) My VM is lighter (and safer) than your container. In Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, October 28-31, 2017, pp. 218–233. External Links: Link, Document Cited by: §III-B.
  • [30] M. G. McGrath and P. R. Brenner (2017) Serverless computing: design, implementation, and performance. In 37th IEEE International Conference on Distributed Computing Systems Workshops, ICDCS Workshops 2017, Atlanta, GA, USA, June 5-8, 2017, pp. 405–410. External Links: Link, Document Cited by: §I, §II-A, §II-B.
  • [31] E. Oakes, L. Yang, D. Zhou, K. Houck, T. Caraza-Harter, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau (2018) SOCK: serverless-optimized containers. login Usenix Mag. 43 (3). External Links: Link Cited by: §I, §II-A, §II-B, §IV.
  • [32] Open-sourcing gvisor, a sandboxed container runtime.. Note: https://cloud.google.com/blog/products/gcp/open-sourcing-gvisor-a-sandboxed-container-runtime.Apr., 2019 Cited by: §III-B.
  • [33] Q. Pu, S. Venkataraman, and I. Stoica (2019) Shuffling, fast and slow: scalable analytics on serverless infrastructure. In 16th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2019, Boston, MA, February 26-28, 2019, pp. 193–206. External Links: Link Cited by: §I, §II-A.
  • [34] M. Shahrad, J. Balkind, and D. Wentzlaff (2019) Architectural implications of function-as-a-service computing. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2019, Columbus, OH, USA, October 12-16, 2019, pp. 1063–1075. External Links: Link, Document Cited by: §I, §II-A, §III-A, §III-B, §VII-A.
  • [35] V. Shankar, K. Krauth, Q. Pu, E. Jonas, S. Venkataraman, I. Stoica, B. Recht, and J. Ragan-Kelley (2018) Numpywren: serverless linear algebra. CoRR abs/1810.09679. External Links: Link, 1810.09679 Cited by: §I.
  • [36] Z. Shen, Z. Sun, G. Sela, E. Bagdasaryan, C. Delimitrou, R. van Renesse, and H. Weatherspoon (2019) X-containers: breaking down barriers to improve performance and isolation of cloud-native containers. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2019, Providence, RI, USA, April 13-17, 2019, I. Bahar, M. Herlihy, E. Witchel, and A. R. Lebeck (Eds.), pp. 121–135. External Links: Link, Document Cited by: §II-B.
  • [37] W. Tai, Y. Chang, and W. Huang (2020) Security analyses of a data collaboration scheme with hierarchical attribute-based encryption in cloud computing. I. J. Network Security 22 (2), pp. 212–217. External Links: Link Cited by: §V-C.
  • [38] J. Thalheim, P. Bhatotia, P. Fonseca, and B. Kasikci (2018) Cntr: lightweight OS containers. In 2018 USENIX Annual Technical Conference, USENIX ATC 2018, Boston, MA, USA, July 11-13, 2018, pp. 199–212. External Links: Link Cited by: §III-B.
  • [39] E. Tilevich and H. Mössenböck (Eds.) (2018) Proceedings of the 15th international conference on managed languages & runtimes, manlang 2018, linz, austria, september 12-14, 2018. ACM. External Links: Link, Document, ISBN 978-1-4503-6424-9 Cited by: §II-A.
  • [40] R. S. Venkatesh, T. Smejkal, D. S. Milojicic, and A. Gavrilovska (2019) Fast in-memory criu for docker containers. In Proceedings of the International Symposium on Memory Systems, MEMSYS ¡¯19, New York, NY, USA, pp. 53–65. External Links: ISBN 9781450372060, Link, Document Cited by: §II-A.
  • [41] M. Vrable, J. Ma, J. Chen, D. Moore, E. Vandekieft, A. C. Snoeren, G. M. Voelker, and S. Savage (2005) Scalability, fidelity, and containment in the potemkin virtual honeyfarm. In Proceedings of the 20th ACM Symposium on Operating Systems Principles 2005, SOSP 2005, Brighton, UK, October 23-26, 2005, pp. 148–162. External Links: Link, Document Cited by: §II-A.
  • [42] K. A. Wang, R. Ho, and P. Wu (2019) Replayable execution optimized for page sharing for a managed runtime environment. In Proceedings of the Fourteenth EuroSys Conference 2019, Dresden, Germany, March 25-28, 2019, pp. 39:1–39:16. External Links: Link, Document Cited by: §II-A.
  • [43] L. Wang, M. Li, Y. Zhang, T. Ristenpart, and M. Swift (2018) Peeking behind the curtains of serverless platforms. In ATC, pp. 133–146. Cited by: §I, §II-A.