I Introduction
Cloud computing offers cloud users utility-like computing services in a pay-as-you-go fashion [1]. Computing resources including CPU, RAM, disk storage and bandwidth can be leased in custom packages with minimal management overhead. Virtualization technologies help cloud providers pack cloud resources into functional packages for serving user jobs. Such packages used to be predominantly virtual machines (VMs), until the recent emergence of cloud containers, e.g., Google Container Engine [2], Amazon EC2 Container Service (ECS) [3], Aliyun Container Service [4], Azure Container Service [5], and IBM Containers. Compared with general-purpose VMs, containers are more flexible and lightweight, enabling efficient and agile resource management. Applications are encapsulated inside containers without running in a dedicated operating system [6]. A representative cloud container is only megabytes in size and takes seconds to start [6], while launching a VM may take minutes. In the era of VMs, a VM remains on throughout the life of the job. Because of the transient nature of a container, a job can be separated into several containers, making resource allocation more convenient.
A complex cloud job in practice is often composed of subtasks [7]. For example, a social game server [8] typically consists of a front-end web server tier, a load balancing tier and a back-end data storage tier; a network security application may consist of an intrusion detection system (IDS), a firewall, and a load balancer. Different subtasks require different configurations of CPU, RAM, disk storage and bandwidth resources. Each subtask can be served by a custom-made container following the resource profile defined by the cloud user [9]. Some cloud containers are to be launched after others finish execution, following the input-output relation of their corresponding tasks. Such a dependence relation among containers is captured by a container (dependence) graph. For example, in Amazon ECS, a cloud user submits a job definition including resource requirements, type of docker image, a container graph, and environment variables. ECS then provisions the containers on a shared operating system, instead of running VMs with complete operating systems [10].
In the growing cloud marketplace (e.g., Amazon EC2 and ECS), fixed pricing mechanisms [11] and auctions complement each other. While the former is simple to implement, the latter can automatically discover the market price of cloud services and allocate resources to cloud users who value them the most [12]. A series of recent cloud auction mechanisms implicitly aim at non-elastic cloud jobs. These include both one-round cloud auctions [12] and online cloud auctions [13], [14]. In both cases, the provider processes each bid immediately and commits to an irrevocable decision. Furthermore, even in the online auctions, each user's service time window is predefined by the start and finish times in its bid [13], [14].
A large fraction of cloud jobs are elastic in nature, as exemplified by big data analytics and Google crawling data processing. They require a certain computing job to be completed without demanding always-on computing service, and may tolerate a certain level of delay in bid acceptance and in job completion. For example, since Sanger et al. published the first complete genome sequence of an organism in 1977, DNA sequencing efforts around the globe have come to produce 15 billion gigabytes of data per annum for cloud processing [15]. A typical DNA testing job takes hours to complete, while the user is happy to receive the final result anytime within a few days after job submission [16].
Given that bids from cloud users can tolerate a certain level of delay in bid admission, it is natural to revise the common practice of immediate irrevocable decision making in online cloud auctions. We can group bids from a common time window into a batch, and apply batch bid processing to make more informed decisions on all bids from the same batch simultaneously. Actually, if one considers only online optimization and not online auctions, then such batch processing has already been studied in operations research, such as online scheduling to minimize job completion time [17], and scheduling batch and heterogeneous jobs with runtime elasticity in cloud computing platforms [18].
We study efficient auctions for cloud container services, where a bid submitted by a cloud user specifies: (i) the container dependence graph of the job; (ii) the resource profile of each container; (iii) the deadline of the job; and (iv) the willingness to pay (bidding price). Cloud containers can be agilely created and dropped to handle dynamic subtasks in cloud jobs, making it practically feasible to suspend and resume a subtask. As long as a container is scheduled to run for a sufficient number of time slots, its subtask will finish.
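For concreteness, the bid format above, items (i)-(iv), can be sketched as a simple data structure. This is a toy illustration only; all class and field names below are ours, not part of the mechanism or of any cloud API:

```python
from dataclasses import dataclass

@dataclass
class ContainerSpec:
    """Resource profile of one container (one subtask)."""
    cpu: float
    ram: float
    disk: float
    slots_needed: int   # total number of time slots the subtask must run

@dataclass
class Bid:
    """A cloud user's bid: container graph, resource profiles, deadline, price."""
    containers: list    # one ContainerSpec per subtask
    edges: list         # dependence edges (m, m2): subtask m must finish before m2 starts
    arrival: int        # arrival time slot
    deadline: int       # the job must finish by this slot
    price: float        # overall willingness to pay

# A two-container job: container 1 depends on container 0.
bid = Bid(
    containers=[ContainerSpec(0.5, 1.0, 2.0, 3), ContainerSpec(1.0, 2.0, 1.0, 2)],
    edges=[(0, 1)],
    arrival=0, deadline=10, price=8.0,
)
```

A schedule for this bid must give container 0 three slots and container 1 two slots, all within slots 0 through 10, with container 1's slots strictly after container 0's last slot.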
This work advances the state-of-the-art in the literature of cloud auctions along two directions. First, while batch algorithms have been extensively studied in the field of online optimization, to the authors' knowledge, this work is the first that studies batch auctions in online auction design. Second, this work is the first cloud auction mechanism designed for container services, with expressive bids based on container graphs. Our mechanism design simultaneously targets the following goals: (i) truthfulness, i.e., bidding true valuation for executing its job on the cloud maximizes a user's utility, regardless of how other users bid; (ii) time efficiency: all components of the auction must run in polynomial time, for practical implementation; (iii) expressiveness: the target auction permits a user to specify its job deadline, desired cloud containers, and inter-container dependence relations; and (iv) social welfare maximization, i.e., the overall 'happiness' of the cloud ecosystem is maximized.
Corresponding to the above goals, our auction design leverages the following classic and new techniques in algorithm and mechanism design. For effectively expressing and handling user bids that admit deadline specification and container dependence graphs, we develop the technique of compact exponential Integer Linear Programs (ILPs). We transform a natural formulation of the social welfare optimization ILP into a compact ILP with an exponential number of variables corresponding to valid container schedules. Although such a reformulation substantially inflates the ILP size, it lays the foundation for later efficient primal-dual approximation algorithm design, helping deal with non-conventional constraints that arise from container dependence and job deadlines, whose dual variables are hard to interpret and update directly. A combinatorial subroutine later helps identify good container schedules efficiently without exhaustively enumerating them.

Towards truthful batch auction design, we leverage recent developments in posted price auctions [19]. At a high level, such an auction maintains an estimate of marginal prices for each resource type, based on expected supply and demand. Upon processing each batch of bids, it accepts those bids whose willingness to pay surpasses the estimated cost to serve them, based on the resource demand of the container graph and the projected marginal prices of resources. A winning user is charged this estimated cost, which is independent of its bidding price. Truthfulness is hence guaranteed by Myerson's celebrated characterization of truthful mechanisms [20].

The social welfare maximization problem in our container auction is NP-hard even in the offline setting, with all inputs given at once. A third key element of our cloud container auction is therefore the classic primal-dual schema for designing efficient approximation algorithms, with rigorous guarantees on worst-case performance. It is further integrated with the posted price framework, in that the marginal resource prices are associated with dual variables. The primal-dual framework relies on a subroutine that computes the optimal schedule of a given container graph based on static resource prices (fixing dual variables, updating the primal solution). We apply dynamic programming [21] and graph traversal algorithms [22] to design the subroutine for (i) service chain type jobs from network function virtualization, and (ii) general jobs with arbitrary topologies in their container graphs. We evaluate the effectiveness of our cloud container auction through rigorous theoretical analysis and trace-driven simulation studies.
II Related Work
There exists a large body of studies in the recent cloud computing literature on cloud auction design. Shi et al. [23] studied online auctions where users bid for heterogeneous types of VMs, and proposed RSMOA, an online cloud auction for dynamic resource provisioning. Zhang et al. [24] proposed COCA, a framework for truthful online cloud auctions based on a monotonic payment rule and a utility-maximizing allocation rule. These auction mechanisms are all confined to the solution space of immediately accepting or rejecting an arriving bid. To our knowledge, this work is the first that designs batch-type online auctions, both in the field of cloud computing and in the general literature of auction mechanism design.
In terms of batch-type online algorithms, Deng et al. [17] study online scheduling in a batch processing system. Kumar et al. [18] design scheduling mechanisms for runtime elasticity of heterogeneous workloads, proposing DelayedLOS and HybridLOS, two algorithms that improve an existing dynamic programming based scheduler. These works resemble ours in postponing immediate responses for more informed decision making, although they focus on algorithm design only and consider neither payments nor incentive compatibility.
Along the direction of posted price algorithms and mechanisms, Huang et al. [25] study online combinatorial auctions with production costs; they show that posted price mechanisms are incentive compatible and achieve optimal competitive ratios. Etzion et al. [26] present a simulation model that extends a previous analytical framework, focusing on a firm selling consumer goods online using posted prices and auctions at the same time. This work was inspired in part by this line of recent developments on using posted prices to achieve effective resource allocation and bid-independent charges.
III The Cloud Container Auction Model
We consider a public cloud in which the cloud provider (auctioneer) manages a pool of $R$ types of resources, as exemplified by CPU, RAM, disk storage and bandwidth; the capacity of type-$r$ resource is $c_r$. Integer set $\{1, 2, \ldots, X\}$ is denoted by $[X]$. There are $I$ cloud users arriving over a large time span $\{1, 2, \ldots, T\}$, acting as bidders in the auction. Each user $i$ submits a job bid that is a tuple:

$B_i = \{t_i, d_i, W_i, b_i\}$ (1)

Here $W_i$ is the workload of user $i$, $t_i$ is the arrival time of user $i$, and $d_i$ is its required deadline for job completion. $b_i$ is user $i$'s overall willingness-to-pay for finishing its job by $d_i$.

From a user's workload, the cloud platform obtains the detailed information of the job: the number $M_i$ of subtasks of the job, each requiring a container to process it ($M_i$ is thus also the number of containers); the container graph $G_i$ that describes the dependence among subtasks; the number $N_{im}$ of requested time slots for each subtask $m$, which can be suspended and resumed as long as its total execution time accumulates to $N_{im}$; and the resource configuration $a_{imr}$ of container $m$ of user $i$, i.e., its demand for type-$r$ resource.
A (container) schedule is a mapping from resources and time slots to cloud containers, serving accepted cloud jobs to meet their deadlines. We postpone immediate decision making on the bids, to judiciously exploit cloud jobs' tolerable delays in bid admission. We group bids from every $\tau$ time slots into a batch, resulting in $Q$ batches within the large time span $T$. Let $I_q$ be the number of users arriving within batch $q$. A binary variable $x_i$ indicates whether user $i$'s bid is accepted (1) or not (0). Another binary variable $x_{im}(t)$ indicates whether to execute user $i$'s subtask $m$ at time slot $t$ (1) or not (0); it encodes a schedule of user $i$'s job. The cloud provider further computes a payment $p_i$ to charge a winning cloud user $i$. The holy grail of auction mechanism design is truthfulness, the property that greatly simplifies bidder strategy space and analysis of the auction mechanism.

TABLE I: Notation

$I$ | # of users
$T$ | # of time slots
$c_r$ | capacity of type-$r$ resource
$M_i$ | # of subtasks/containers of one job
$W_i$ | workload of user $i$
$G_i$ | dependence graph of user $i$'s subtasks
$N_{im}$ | # of time slots requested by user $i$'s container $m$
$a_{imr}$ | demand of type-$r$ resource by user $i$'s container $m$
$t_i$ | user $i$'s arrival time
$d_i$ | deadline of user $i$'s bid
$b_i$ | bidding price of user $i$'s bid
$x_i$ | accept user $i$'s bid (1) or not (0)
$I_q$ | # of users arriving within batch $q$
$f_{iS}(r,t)$ | total type-$r$ resource occupation of schedule $S$ in slot $t$
$\tau$ | # of time slots within one batch interval
$x_{im}(t)$ | user $i$'s container $m$ allocated at time slot $t$ (1) or not (0)
$z_r(t)$ | amount of allocated type-$r$ resource at time $t$
$c_r - z_r(t)$ | availability of type-$r$ resource at time slot $t$
$p_r(t)$ | marginal price of type-$r$ resource at time slot $t$
$L_r$ | minimum value of user valuation per unit of type-$r$ resource
$U_r$ | maximum value of user valuation per unit of type-$r$ resource
$\zeta_i$ | the set of valid schedules for user $i$
$u_i$ | user $i$'s utility

Lemma 1.
Let $P_i(b_i)$ denote the probability of bidder $i$ winning the auction when bidding $b_i$, and let $b_{-i}$ be the bidding prices of all bidders except $i$. A mechanism is truthful if and only if the following hold for any fixed $b_{-i}$ [28]: 1) $P_i(b_i)$ is monotonically non-decreasing in $b_i$; 2) bidder $i$ is charged $b_i P_i(b_i) - \int_0^{b_i} P_i(b)\,db$.
Lemma 1 can be interpreted as follows: the payment charged to bidder $i$ for a fixed $b_{-i}$ is independent of $b_i$. We will use this principle to design a posted price function in Sec. IV, which addresses the challenge that online batch auction decisions are to be made based on hitherto information only. If user $i$'s job is accepted, its utility is $u_i = v_i - p_i$, which equals $b_i - p_i$ under truthful bidding. The cloud provider's utility is $\sum_i p_i x_i$. The social welfare, capturing the overall utility of both the provider and the users, is $(\sum_i (b_i - p_i) x_i) + (\sum_i p_i x_i)$. With payments cancelling themselves, the social welfare is simplified to $\sum_i b_i x_i$.
Under the assumption of truthful bidding, the Social Welfare Maximization problem in our cloud container auction can be formulated into the following Integer Linear Program (ILP):

maximize $\sum_{i \in [I]} b_i x_i$ (2)

subject to:

$x_{im}(t) = 0, \forall i \in [I], \forall m \in [M_i], \forall t < t_i$ (2a)

$x_{im}(t) = 0, \forall i \in [I], \forall m \in [M_i], \forall t > d_i$ (2b)

$N_{im}\, x_{im'}(t) \le \sum_{t' < t} x_{im}(t'), \forall i \in [I], \forall (m \to m') \in G_i, \forall t \in [T]$ (2c)

$\sum_{t \in [T]} x_{im}(t) \ge N_{im}\, x_i, \forall i \in [I], \forall m \in [M_i]$ (2d)

$\sum_{i \in [I]} \sum_{m \in [M_i]} a_{imr}\, x_{im}(t) \le c_r, \forall r \in [R], \forall t \in [T]$ (2e)

$x_i, x_{im}(t) \in \{0, 1\}, \forall i \in [I], \forall m \in [M_i], \forall t \in [T]$ (2f)
Constraints (2a) and (2b) ensure that user $i$'s job is scheduled to execute only between its arrival time and deadline. (2c) enforces the inter-task dependence of user $i$'s subtasks: a subtask $m'$ may run at time $t$ only after each of its predecessors $m$ has accumulated its $N_{im}$ required time slots. (2d) makes sure that the total number of allocated time slots for each container of an accepted job suffices to finish the corresponding subtask. Constraint (2e) states that the total amount of type-$r$ resource utilized at time slot $t$ is capped by the system capacity $c_r$.
Even in the offline setting with all inputs given, ILP (2) is NP-hard. This can be verified by observing that with constraints (2e) and (2f) alone, and a single resource type and time slot, ILP (2) degenerates into the classic knapsack problem, known to be NP-hard. We resort to the classic primal-dual schema [29] for efficient algorithm design. We first reformulate ILP (2) into an equivalent compact exponential version, to hide the non-conventional constraints that arise from container dependence and job deadlines, whose dual variables would be hard to interpret and to update:
maximize $\sum_{i \in [I]} \sum_{S \in \zeta_i} b_{iS}\, x_{iS}$ (3)

subject to:

$\sum_{i \in [I]} \sum_{S \in \zeta_i} f_{iS}(r,t)\, x_{iS} \le c_r, \forall r \in [R], \forall t \in [T]$ (3a)

$\sum_{S \in \zeta_i} x_{iS} \le 1, \forall i \in [I]$ (3b)

$x_{iS} \in \{0, 1\}, \forall i \in [I], \forall S \in \zeta_i$ (3c)

In the compact exponential ILP above, $\zeta_i$ represents the set of valid schedules for user $i$'s subtasks that meet constraints (2a), (2b), (2c) and (2d). $b_{iS}$ represents the bidding price of user $i$ under schedule $S$. Since a time slot can serve two or more containers, we let $f_{iS}(r,t)$ represent the total type-$r$ resource occupation of user $i$'s schedule $S$ in slot $t$. Constraints (3a) and (3b) correspond to (2e) and (2f) in ILP (2). We relax the integer constraints to $x_{iS} \ge 0$, and introduce dual variable vectors $p_r(t)$ and $u_i$ for constraints (3a) and (3b) respectively, to formulate the dual of the LP relaxation of ILP (3):

minimize $\sum_{t \in [T]} \sum_{r \in [R]} c_r\, p_r(t) + \sum_{i \in [I]} u_i$ (4)

subject to:

$u_i \ge b_{iS} - \sum_{t \in [T]} \sum_{r \in [R]} f_{iS}(r,t)\, p_r(t), \forall i \in [I], \forall S \in \zeta_i$ (4a)

$p_r(t) \ge 0,\; u_i \ge 0, \forall i \in [I], \forall r \in [R], \forall t \in [T]$ (4b)
While the reformulated ILP (3) is compact in its form, it has an exponential number of variables, arising from the exponential number of feasible job schedules. Correspondingly, the dual problem (4) has an exponential number of constraints. Even though an exponential number of schedule options are available, we select only a polynomial number of them to compute an approximately optimal objective, through a sub-algorithm (Sec. IV-B). We next design an auction algorithm that efficiently solves the primal and dual compact exponential ILPs simultaneously, pursuing social welfare maximization (in the primal solution) while computing payments (in the dual solution).
IV Batch Auction Algorithm for Social Welfare Maximization
IV-A The Batch Algorithm
Departing from traditional online auctions that make immediate and irrevocable decisions, our auction mechanism takes a batch processing approach to handle user bids. In each batch, we aim to choose a subset of bids to accept, and to dynamically provision containers, through choosing a feasible assignment of the primal variables $x_{iS}$. If user $i$'s bid with schedule $S$ is accepted, we let $x_{iS} = 1$, allocate time slots according to the schedule, and update the amount of resources occupied.
We now focus on batch bid processing and container provisioning for social welfare maximization. A set of dual constraints (4a) exists for each primal variable $x_{iS}$. We minimize the increase of the dual objective and maintain dual feasibility (4a) by leveraging complementary slackness. Once the dual constraint (4a) is tight for user $i$'s schedule $S$ (KKT conditions [14]), the primal variable $x_{iS}$ is updated to 1. According to constraint (4b), the dual variable $u_i \ge 0$. Therefore, we let $u_i$ be the maximum of 0 and the RHS of (4a). If $u_i = 0$, the bid is rejected.
$S_i = \arg\max_{S \in \zeta_i} \big\{ b_{iS} - \sum_{t \in [T]} \sum_{r \in [R]} f_{iS}(r,t)\, p_r(t) \big\}$ (5)

$p_r(t)$ can be viewed as the marginal price per unit of type-$r$ resource at time slot $t$. Consequently, $\sum_{t}\sum_{r} f_{iS}(r,t)\, p_r(t)$ represents the cost of serving user $i$ by schedule $S$, and $b_{iS} - \sum_{t}\sum_{r} f_{iS}(r,t)\, p_r(t)$ is the utility of user $i$'s bid. The assignment (5) chooses the schedule that maximizes the job's utility.
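The cost-and-utility computation behind assignment (5) can be sketched in a few lines of code. This is an illustrative toy only (function names and the dictionary encoding of a schedule as a map from (resource, slot) pairs to occupied amounts are our assumptions):

```python
def schedule_cost(schedule, prices):
    """Cost of serving a schedule: its resource occupation f_S(r, t)
    priced at the marginal prices p_r(t), summed over slots and resources."""
    return sum(amount * prices[key] for key, amount in schedule.items())

def best_schedule(bid_price, candidate_schedules, prices):
    """Assignment (5) in miniature: pick the utility-maximizing schedule;
    return (None, 0.0) if no candidate yields positive utility."""
    best, best_utility = None, 0.0
    for s in candidate_schedules:
        utility = bid_price - schedule_cost(s, prices)
        if utility > best_utility:
            best, best_utility = s, utility
    return best, best_utility

# One resource type (r = 0), two time slots; slot 2 is pricier than slot 1.
prices = {(0, 1): 1.0, (0, 2): 3.0}
s_cheap = {(0, 1): 2.0}   # occupies 2 units in slot 1 -> cost 2
s_dear = {(0, 2): 2.0}    # occupies 2 units in slot 2 -> cost 6
chosen, u = best_schedule(5.0, [s_cheap, s_dear], prices)
# chosen is s_cheap with utility 5 - 2 = 3; s_dear would give negative utility.
```

In the actual mechanism the candidate set $\zeta_i$ is never enumerated; the sub-algorithms of Sec. IV-B search it implicitly.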
Our auction strives to reserve a certain amount of resource for potential highvalue bids in the future. Careful implementation of such an intuition through dual price design is crucial in guaranteeing a good competitive ratio of the auction.
Let $U_r$ and $L_r$ represent the maximum and minimum user valuation per unit of type-$r$ resource, respectively. $z_r(t)$ denotes the amount of allocated type-$r$ resource at time slot $t$. We define the marginal price $p_r(t)$ to be an increasing function of $z_r(t)$:

$p_r(t) = \frac{L_r}{k} \Big( \frac{k U_r}{L_r} \Big)^{z_r(t)/c_r}$ (6)

where $U_r = \max_{i \in [I], S \in \zeta_i} b_{iS} / \sum_{t} f_{iS}(r,t)$ and $L_r = \min_{i \in [I], S \in \zeta_i} b_{iS} / \sum_{t} f_{iS}(r,t)$.

The initial price of each type-$r$ resource should be low enough that any user's bid can be accepted; otherwise there might be a large amount of idle resource. Thus we lower the starting price by a coefficient $k > 1$; the detailed choice of $k$ is given in Theorem 5. For all $z_r(t) \le c_r$ we have $p_r(t) \le U_r$, and the price reaches $U_r$ when $z_r(t) = c_r$; in that case, the cloud provider will not further allocate any type-$r$ resource. The parameter $\alpha$ is defined as the minimum occupation rate of all types of resources within the $T$ slots, i.e., $\alpha = \min_{r \in [R]} \sum_{t \in [T]} z_r(t) / (T c_r)$.
We assume that there are enough cloud users to potentially exhaust resources within each slot; the resource occupation rate $\alpha$ is thus close to 1.
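The exponential price curve (6) can be sketched directly; the numeric values below are arbitrary illustrations, not parameters from the paper's evaluation:

```python
def marginal_price(z, c, L, U, k):
    """Sketch of Eq. (6): marginal price of one resource type as a
    function of the allocated amount z. The price starts at L/k when
    z = 0 (low enough to admit any bid) and climbs to U when z = c
    (so no further allocation can be profitable)."""
    return (L / k) * (k * U / L) ** (z / c)

c, L, U, k = 50.0, 0.1, 2.0, 4.0
p_empty = marginal_price(0.0, c, L, U, k)   # = L/k = 0.025
p_full = marginal_price(c, c, L, U, k)      # = U = 2.0
p_half = marginal_price(c / 2, c, L, U, k)  # strictly between the two
```

The two endpoint identities are what the text asserts: $p_r(t) = L_r/k$ at $z_r(t) = 0$ and $p_r(t) = U_r$ at $z_r(t) = c_r$.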
We present the batch auction algorithm in Alg. 1, together with a container scheduling sub-algorithm in Alg. 2 or Alg. 3 that selects the optimal container schedule under different circumstances. Alg. 1 defines the posted price function and initializes the primal and dual variables in line 1. Upon the arrival of users within a batch, we first select for each user the schedule that maximizes its utility, through the dual oracle (lines 4-6). Line 7 computes the weighted total resource demand of user $i$, so that $u_i$ normalized by this demand can be interpreted as user $i$'s value per unit of resource, and we select the bid with the maximum unit-resource value. If this user obtains positive utility, we update the primal variable $x_{iS}$ and the dual variables according to its schedule (lines 9-16).
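The batch round of Alg. 1 can be sketched as follows. This is a simplified stand-in under our own assumptions: bids are plain (price, schedule) pairs, the unit-resource-value ranking is approximated by a one-shot utility ordering, and the toy price update below replaces the exponential rule (6):

```python
def run_batch(bids, prices):
    """One batch round (sketch of Alg. 1). bids: list of (bid_price,
    schedule) pairs, where a schedule maps (resource, slot) -> amount.
    Bids are visited in decreasing order of their initial utility;
    each winner pays the posted cost, and its allocation pushes prices
    up, which can turn later bids unprofitable."""
    def utility(bid):
        price, sched = bid
        return price - sum(a * prices[key] for key, a in sched.items())

    accepted = []
    for bid in sorted(bids, key=utility, reverse=True):
        u = utility(bid)            # re-evaluate: earlier winners raised prices
        if u <= 0:
            continue                # reject: posted cost exceeds willingness to pay
        price, sched = bid
        accepted.append(bid)
        for key, a in sched.items():
            prices[key] *= 1.0 + a  # toy price update; Alg. 1 uses Eq. (6)
    return accepted

prices = {(0, 1): 1.0}
bids = [(5.0, {(0, 1): 1.0}), (1.5, {(0, 1): 1.0})]
winners = run_batch(bids, prices)
# The 5.0 bid wins and doubles the slot price; the 1.5 bid then costs 2.0 > 1.5 and is rejected.
```

Note the key property for truthfulness: a winner's charge depends only on the prices posted before its admission, never on its own bid value.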
IV-B Sub-algorithm of the Auction Mechanism
Our container scheduling algorithms select only the utility-maximizing schedule for each job, rather than enumerating an exponential number of schedules. Since $b_{iS}$ is fixed for a given job, this amounts to computing a schedule that minimizes the cost of serving the job.
In our auction mechanism, the dependence graph of user tasks is complicated to handle. We first focus on a relatively small yet representative class of jobs from Network Function Virtualization [30], where each container graph is a service chain. We exploit the sequential chain structure to design Algorithm 2 with polynomial time complexity, based on dynamic programming. By choosing time slots that ensure the correct operating sequence and minimum payment for each subtask, the first two nested for loops select a minimum-cost schedule for the containers (lines 3-10). The second for loop then updates the cost and schedule for each container $m$ (lines 11-15); the last line updates the cost and utility of user $i$'s schedule.
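A dynamic program in the spirit of Alg. 2 can be sketched as follows (a simplified reconstruction under our own naming, not the paper's exact pseudocode): process chain containers in order, and for each candidate window pick the cheapest required slots, since suspend/resume means a container only needs enough slots after its predecessor finishes:

```python
def chain_min_cost(needs, cost, T):
    """Minimum-cost schedule of a service chain over slots 0..T-1.
    needs[m]  : number of time slots container m must run in total
    cost[m][t]: cost of running container m in slot t (its resource
                demand priced at slot t's marginal prices)
    f[e] = min cost of the chain prefix scheduled so far, with the
    last handled container using only slots < e."""
    INF = float("inf")
    f = [0.0] * (T + 1)                     # empty prefix costs nothing
    for m, n in enumerate(needs):
        g = [INF] * (T + 1)
        for s in range(T + 1):              # container m uses slots >= s ...
            if f[s] == INF:
                continue
            for e in range(s + n, T + 1):   # ... and only slots < e
                # Suspend/resume: the n cheapest slots in the window suffice.
                cheapest = sorted(cost[m][t] for t in range(s, e))[:n]
                g[e] = min(g[e], f[s] + sum(cheapest))
        f = g
    return min(f)

# Chain of two one-slot containers over 3 slots: run container 0 in
# slot 1 (cost 1), then container 1 in slot 2 (cost 2), total 3.
total = chain_min_cost([1, 1], [[5, 1, 9], [1, 1, 2]], 3)
```

Enumerating the window boundary $e$ for every container covers all feasible orderings, so the sketch is exact for chains, at a higher polynomial cost than a tuned implementation.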
Container graphs in practice can be more complex than a chain structure. For general jobs with arbitrary container graph topology, the container scheduling problem is NP-hard, as proven in Theorem 1; we design Algorithm 3 to solve the optimization. Lines 2-8 in Algorithm 3 sort the available time slots by price. Algorithm 3 then employs Depth-First Search (DFS) (line 9), adapted to select available time slots with minimum cost in a recursive process that decides a container schedule. Truthfulness requires solving the problem exactly, and the algorithm runs in time exponential in the number of subtasks in a job, which is mostly small and can be viewed as a constant in practice.
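The exhaustive recursive search can be sketched as below (our own simplified reconstruction, not Alg. 3 itself): containers are visited in a topological order of the dependence graph, and the recursion branches on each container's finish boundary, which is what makes the worst case exponential in the number of containers:

```python
def general_min_cost(needs, cost, preds, T):
    """Min-cost schedule for a general container graph over slots 0..T-1.
    needs[m]  : slots container m must run;  cost[m][t]: its cost in slot t
    preds[m]  : list of m's predecessors; container indices are assumed
                to already be in topological order.
    Exponential in the number of containers, which is small in practice."""
    INF = float("inf")
    M = len(needs)

    def search(m, finish):
        # finish[p] = chosen boundary of predecessor p (it used slots < finish[p])
        if m == M:
            return 0.0
        start = max([0] + [finish[p] for p in preds[m]])
        best = INF
        for e in range(start + needs[m], T + 1):   # branch on m's boundary
            cheapest = sorted(cost[m][t] for t in range(start, e))[:needs[m]]
            best = min(best, sum(cheapest) + search(m + 1, finish + [e]))
        return best

    return search(0, [])

# Same chain as before, expressed as a graph: cost 3.
chain_total = general_min_cost([1, 1], [[5, 1, 9], [1, 1, 2]], [[], [0]], 3)
# Two independent containers may share slots: each takes its cheapest slot.
par_total = general_min_cost([1, 1], [[5, 1, 9], [4, 1, 2]], [[], []], 3)
```

Independent containers are allowed to occupy the same slot, matching the model's remark that one time slot can serve several containers.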
Theorem 1.
In each batch of container based auction, given fixed resource prices, choosing the schedule of subtasks with minimum cost with a general container graph is NPhard.
Proof: We construct a polynomial-time reduction to subtask scheduling from the classic NP-hard subset sum problem: given a set of positive integers and a target value $V$, decide whether some subset sums to $V$. Given such an instance, we build a scheduling instance in which a user's job has $M$ containers whose slot requirements equal the given integers, and the resource pool contains a single type of resource; minimizing cost then requires packing containers whose requirements sum to exactly the capacity of the cheapest slots. If a polynomial-time algorithm solved this capacitated container scheduling problem, it would solve the corresponding subset sum problem as well, and vice versa. Consequently, the subset sum problem can be viewed as a special case of the subtask scheduling problem, which must therefore be NP-hard as well. ∎
V Analysis of the Auction Mechanism
V-A Truthfulness of the Batch Algorithm
Theorem 2.
The batch auction in Algorithm 1 that computes resource allocation and payment is truthful.
Proof: In Algorithm 1, under our posted price mechanism, the payment that user $i$ pays to the cloud provider (if its bid is accepted) depends only on the amount of resources already allocated and user $i$'s demand. That is, user $i$'s bidding price does not affect its payment; moreover, the allocation rule is monotone, since raising $b_i$ can only increase user $i$'s chance of winning. Therefore, leveraging Lemma 1, our online batch auction is truthful. ∎
V-B Solution Feasibility of the Batch Algorithm
Theorem 3.
Algorithm 1 computes a feasible solution to ILP (2).
Proof: $x_{iS}$ is initialized to 0 and updated to 1 only in line 10 of Algorithm 1, so the solution of our algorithm is binary valued and satisfies constraint (2f). The container scheduling algorithms (Alg. 2 and Alg. 3) guarantee that the schedule $S$ of each accepted bid satisfies constraints (2a), (2b), (2c) and (2d). For container provisioning and scheduling, both sub-algorithms select only time slots satisfying the resource capacity limits, hence constraint (2e) is satisfied. In summary, the solution we obtain is feasible for ILP (2). ∎
Theorem 4.
Algorithm 1 with Algorithm 2 runs in polynomial time.
Proof: We first consider the case of service chains (Algorithm 2). Line 1 in Algorithm 1 takes linear time to initialize the price function and the primal and dual variables. In each batch, the while loop iterates at most $I_q$ times to find the user with the maximum unit-resource value, and then updates the primal and dual variables in linear time. In the for loop (lines 4-6), Algorithm 1 iterates $I_q$ times to select the best (utility-maximizing) schedule for each user. Each call to Algorithm 2 takes a polynomial number of steps to compute the price of each time slot and examine the resource capacity limits for each container, so choosing the utility-maximizing schedule for user $i$ takes time polynomial in $M_i$, $T$ and $R$. In summary, the running time of Algorithm 1 with Algorithm 2 is polynomial in the input size. We next consider the case of general container graphs (Algorithm 3). The complexity of Algorithm 3 is exponential in the number of containers in the container graph, which is mostly small and can be viewed as a constant. ∎
V-C Competitive Ratio of the Batch Algorithm
The competitive ratio is an upper bound on the ratio of the optimal social welfare achieved by ILP (2) to the social welfare achieved by our batch algorithm. The primal-dual framework in our batch algorithm design enables a competitive ratio analysis based on LP duality theory [29]. Let $P_i$ and $D_i$ be the primal objective value (3) and the dual objective value (4) after processing user $i$'s job, respectively. We let $P_0$ and $D_0$ be the initial objective values of the primal (3) and dual (4) programs, with $P_0 = 0$. $P_I$ and $D_I$ are the final primal and dual objective values achieved by our algorithm. Let $OPT_2$ and $OPT_3$ be the optimal objective values of (2) and (3), respectively. Since the compact exponential ILP is equivalent to the original ILP, we have $OPT_2 = OPT_3$, hereafter referred to as $OPT$.
Lemma 2.
With the initial marginal price of each time slot, the initial dual objective value satisfies $D_0 \le \frac{1}{2} OPT$.
Proof: We first show a lower bound on the optimal social welfare: $OPT \ge \alpha T \sum_{r \in [R]} c_r L_r$.
Recall that $\alpha$ denotes the minimum resource occupation rate within the $T$ slots, and $L_r$ can be interpreted as the minimum social welfare generated by a job per unit of type-$r$ resource and per unit of time. Therefore, $\alpha T \sum_{r} c_r L_r$ lower-bounds the social welfare generated by all users.
According to the dual (4) and the marginal price function (6): $D_0 = \sum_{t \in [T]} \sum_{r \in [R]} c_r\, p_r(t)\big|_{z_r(t)=0} = \frac{T}{k} \sum_{r \in [R]} c_r L_r$.
Choosing $k \ge 2/\alpha$ (one of the conditions on $k$ in Theorem 5) then gives $D_0 \le \frac{\alpha T}{2} \sum_r c_r L_r \le \frac{1}{2} OPT$, i.e., the initial dual objective value is bounded by half of $OPT$. ∎
Lemma 3.
If there is a constant $\gamma \ge 1$ such that the primal and dual objective value increases from handling each user $i$'s job satisfy $P_i - P_{i-1} \ge \frac{1}{\gamma}(D_i - D_{i-1})$, then the batch algorithm is $2\gamma$-competitive.
Proof: Since the inequality is satisfied for all users, we sum it up over all users $i$:
$P_I = P_I - P_0 \ge \frac{1}{\gamma}(D_I - D_0)$.
According to weak duality and Lemma 2, $D_I \ge OPT$ and $D_0 \le \frac{1}{2} OPT$. Therefore,
$P_I \ge \frac{1}{\gamma}\big(OPT - \frac{1}{2} OPT\big) = \frac{OPT}{2\gamma}$,
with the fact that $P_0 = 0$. Our batch algorithm is hence $2\gamma$-competitive. ∎
Next we define an Allocation Price Relation to identify this $\gamma$. If the Allocation Price Relation is satisfied for a given $\gamma$, the objective values achieved by our algorithm obey the inequality in Lemma 3.

Definition 1.
The Allocation Price Relation for $\gamma \ge 1$ is
$p_r^{i-1}(t)\big(z_r^i(t) - z_r^{i-1}(t)\big) \ge \frac{c_r}{\gamma}\big(p_r^i(t) - p_r^{i-1}(t)\big), \forall i \in [I], \forall r \in [R], \forall t \in [T]$,
where $p_r^i(t)$ represents the price of type-$r$ resource at slot $t$ after processing user $i$'s job, and $z_r^i(t)$ is the total amount of allocated type-$r$ resource at slot $t$ after accepting user $i$.
Lemma 4.
For a given $\gamma \ge 1$, if the price function satisfies the Allocation Price Relation, then Algorithm 1 guarantees $P_i - P_{i-1} \ge \frac{1}{\gamma}(D_i - D_{i-1})$ for each user $i$.
Proof: If bid $i$ is rejected, $P_i - P_{i-1} = D_i - D_{i-1} = 0$. Now assume that bid $i$ is accepted, and let $S$ be the job schedule of user $i$. Knowing that our algorithm accepts a bid only when constraint (4a) is tight, $u_i = b_{iS} - \sum_t \sum_r f_{iS}(r,t)\, p_r^{i-1}(t)$. So the increase of the primal objective is:
$P_i - P_{i-1} = b_{iS} = u_i + \sum_{t}\sum_{r} f_{iS}(r,t)\, p_r^{i-1}(t)$.
According to the dual (4), the increase of the dual objective is:
$D_i - D_{i-1} = u_i + \sum_{t}\sum_{r} c_r\big(p_r^i(t) - p_r^{i-1}(t)\big)$.
Since $f_{iS}(r,t) = z_r^i(t) - z_r^{i-1}(t)$ and the Allocation Price Relation holds, we have
$\sum_t \sum_r f_{iS}(r,t)\, p_r^{i-1}(t) \ge \frac{1}{\gamma} \sum_t \sum_r c_r\big(p_r^i(t) - p_r^{i-1}(t)\big)$,
and therefore, using $u_i \ge \frac{u_i}{\gamma}$ (as $\gamma \ge 1$), $P_i - P_{i-1} \ge \frac{1}{\gamma}(D_i - D_{i-1})$. ∎
We next find, for each type-$r$ resource, a value $\gamma_r$ that satisfies the Allocation Price Relation; the overall $\gamma$ is then the maximum among all $\gamma_r$. Since the capacity of a type-$r$ resource is much larger than a single user's demand, we approximate the increment $z_r^i(t) - z_r^{i-1}(t)$ by the differential $dz_r(t)$. We first prepare with the following definition.

Definition 2.
The Differential Allocation Price Relation for $\gamma_r \ge 1$ is $p_r(t)\, dz_r(t) \ge \frac{c_r}{\gamma_r}\, dp_r(t), \forall r \in [R], \forall t \in [T]$.

Lemma 5.
The marginal price defined in (6) satisfies the Differential Allocation Price Relation with $\gamma_r = \ln \frac{k U_r}{L_r}$.
Proof: The derivative of the marginal price function (6) with respect to $z_r(t)$ is:
$\frac{dp_r(t)}{dz_r(t)} = \frac{L_r}{k} \Big(\frac{k U_r}{L_r}\Big)^{z_r(t)/c_r} \cdot \frac{1}{c_r} \ln \frac{k U_r}{L_r} = \frac{p_r(t)}{c_r} \ln \frac{k U_r}{L_r}$.
Thus $p_r(t)\, dz_r(t) = \frac{c_r}{\ln(k U_r / L_r)}\, dp_r(t)$, and we can take $\gamma_r = \ln \frac{k U_r}{L_r}$. ∎
Lemma 6.
The batch auction in Algorithm 1 is $2\gamma$-competitive in social welfare with $\gamma = \max_{r \in [R]} \ln \frac{k U_r}{L_r}$.
Proof: Lemma 5 implies that $\gamma_r = \ln \frac{k U_r}{L_r}$ satisfies the Differential Allocation Price Relation for every resource type. As mentioned above, the capacity of each resource is much larger than a single user's demand, so the Differential Allocation Price Relation implies the Allocation Price Relation with $\gamma = \max_{r} \gamma_r$.
Thus, by Lemmas 3 and 4, our batch algorithm is $2\gamma$-competitive. ∎
Theorem 5.
If $k = 2/\alpha$, the smallest value compatible with Lemma 2, the competitive ratio of the batch auction algorithm is minimized, and equals $2 \max_{r \in [R]} \ln \frac{2 U_r}{\alpha L_r}$.
Proof: By Lemma 6, the competitive ratio of our batch algorithm is $2 \max_{r} \ln \frac{k U_r}{L_r}$, a function of $k$. Differentiating it with respect to $k$ yields $\frac{2}{k}$, which is positive for $k > 0$, so the ratio increases in $k$ and is minimized at the smallest feasible $k$. When $k$ satisfies $k = 2/\alpha$, we obtain the minimum competitive ratio $2 \max_{r} \ln \frac{2 U_r}{\alpha L_r}$.
When competition for resources is intense, $\alpha$ is close to 1; with $k$ close to 2, the competitive ratio is close to $2 \max_{r} \ln \frac{2 U_r}{L_r}$, as illustrated in Fig. 1. ∎
V-D Setting the Batch Interval
In our batch auction, the more jobs we handle in a batch, the more information we have for social welfare maximization. Nonetheless, we cannot overextend the length of a batch, given that cloud jobs have deadlines to meet. Precise optimization of the batch interval length is left as future research; we provide here a brief discussion only. Let $T_i$ be the time required to execute job $i$, and let $\lambda$ be the expected number of user arrivals per slot. In general, an appropriate length of a batch round depends on the values of $T_i$ and on the deadline $d_i$ and arrival time $t_i$ of each user $i$. We can set a target threshold on the job loss rate (e.g., 10%), the fraction of jobs that cannot meet their deadlines due to delayed bid admission.
Assume that the job processing time $T_i$ and the slack $d_i - t_i$ are normally distributed, by $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$, respectively. The maximum waiting time of each user equals $d_i - t_i - T_i$, and is thus also normally distributed, as $N(\mu_2 - \mu_1, \sigma_1^2 + \sigma_2^2)$. If user $i$'s maximum waiting time is smaller than the batch interval $\tau$, we lose this job. Thus the length of the batch interval can be set to the 10th percentile of this distribution (for 10% job loss): $\tau = (\mu_2 - \mu_1) - 1.28\sqrt{\sigma_1^2 + \sigma_2^2}$.
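The percentile rule above can be computed directly; the sketch below assumes the waiting-time distribution has already been reduced to its mean and standard deviation (the numeric inputs are illustrative, not from the paper):

```python
from statistics import NormalDist

def batch_interval(mu_wait, sigma_wait, loss_rate=0.10):
    """Pick the batch interval as the loss_rate-quantile of the (assumed
    normal) maximum waiting time d_i - t_i - T_i: then only about a
    loss_rate fraction of jobs wait less than the interval and risk
    missing their deadlines due to delayed bid admission."""
    return NormalDist(mu_wait, sigma_wait).inv_cdf(loss_rate)

# Waiting time ~ N(12, 3^2) slots: the 10th percentile is ~ 12 - 1.28 * 3.
tau = batch_interval(mu_wait=12.0, sigma_wait=3.0)
```

Raising the loss-rate target lengthens the admissible batch interval, matching the trade-off discussed above.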
VI Performance Evaluation
We evaluate our batch auction algorithm and its sub-algorithms through trace-driven simulation studies. We leverage Google cluster data [31], which captures rich information on user jobs, including start time, resource demand (CPU, RAM and Disk), and duration. We translate cloud job requests into bids arriving within a one-month time window. We assume that each subtask consumes 1 to 10 time slots, and each time slot is one hour. Job deadlines are set randomly between the arrival time and the end of the system time span. The demand for each resource (CPU, RAM and Disk) is set randomly within [0, 1], with the resource capacity set to 50. We use user density to express the number of users in one batch interval, with arrivals following a Poisson process.
A. Comparison with Classic Online Auctions
We compare our batch auction with a traditional online auction in terms of social welfare, as shown in Fig. 4. Under the same simulation settings, we compare the two algorithms in 10 different sets of simulation studies; our batch auction achieves higher social welfare in all of them. Intuitively, the online auction processes bids in a FCFS fashion, while the batch auction considers the most attractive bids first in each batch. Fig. 4 shows another set of comparisons: the superiority of the batch auction remains clear under different numbers of time slots and user densities. Social welfare fluctuates as the number of users and the user density increase, and the batch auction performs better with higher user density. The influence of the batch interval $\tau$ on performance is also illustrated in Fig. 4. As $\tau$ grows, social welfare initially grows as well. However, when $\tau$ is too large, more bids are lost due to delays: a gradual decrease in the percentage of winners leads to a decreasing trend in social welfare. Recall from the analysis of $\tau$ in the previous section that a too large $\tau$ is not suitable for our batch auction.
B. Competitive Ratio of the Batch Auction
Next we study the competitive ratio achieved by our batch auction. As proven in Theorem 5, the competitive ratio depends on the ratio $U_r/L_r$. Fig. 7 shows that the competitive ratio grows as this ratio increases. The observed competitive ratio is much better than the theoretical bound and remains smaller than 2; this can be partly explained by the fact that the theoretical bound reflects a pessimistic worst-case scenario uncommon in practice. The ratio fluctuates with the user population and slightly decreases as the user density increases. The batch auction favors intensive user arrivals.
C. Performance of the Batch Auction: The Role of System Parameters
We next examine the resource occupation ratio $\alpha$ (defined in Sec. III) of our batch auction. As shown in Fig. 7, under different numbers of time slots and user densities, the resource occupation ratio of the batch auction mechanism is constantly beyond 90% and often close to 1. Fig. 7 also demonstrates the variation of social welfare with different numbers of users: social welfare grows mildly but steadily as the number of users and the number of time slots grow.
VII Conclusion
This work is the first in the cloud computing literature that studies efficient auction algorithm design for container services. It is also the first that designs batch online auctions, aiming at more informed decision making through exploiting the elastic nature of cloud jobs. We combined techniques from compact exponential optimization, posted price mechanisms, and primaldual algorithms for designing a cloud container auction that is incentive compatible, computationally efficient, and economically efficient. As future directions, it will be interesting to study (i) cloud jobs that cannot be suspended and resumed; (ii) preprocessing of cloud jobs with tight deadlines to choose between immediate acceptance or delayed processing of their bids; and (iii) cloud container auctions that make revocable decisions, where a partially executed cloud job may or may not contribute towards social welfare of the cloud.
References
 [1] X. Qiu, H. Li, C. Wu, and Z. Li, "Cost-minimizing dynamic migration of content distribution services into hybrid clouds," in INFOCOM, 2012 Proceedings IEEE, 2012, pp. 2571–2575.
 [2] Google Container Engine, http://cloud.google.com/containerengine/.
 [3] Amazon ECS, https://aws.amazon.com/cn/ecs/.
 [4] Aliyun Container Engine, http://cn.aliyun.com/product/containerservice.
 [5] Azure Container, https://azure.microsoft.com/enus/services/containerservice/.
 [6] X. Xu, H. Yu, and X. Pei, “A Novel Resource Scheduling Approach in Container Based Clouds,” in Proc. of IEEE ICCS, 2014.
 [7] H. Li, C. Wu, Z. Li, and F. C. M. Lau, "Profit-maximizing virtual machine trading in a federation of selfish clouds," in INFOCOM, 2013 Proceedings IEEE, 2013, pp. 25–29.
 [8] RightScale, “Social Gaming in the Cloud: A Technical White Paper,” 2013.
 [9] S. He, L. Guo, Y. Guo, and C. Wu, “Elastic Application Container: A Lightweight Approach for Cloud Resource Provisioning,” in Proc. of IEEE International Conference on Advanced Information NETWORKING and Applications, 2012.
 [10] A. Tosatto, P. Ruiu, and A. Attanasio, "Container-Based Orchestration in Cloud: State of the Art and Challenges," in Proc. of Ninth International Conference on Complex, Intelligent, and Software Intensive Systems, 2015.
 [11] J. Zhao, H. Li, C. Wu, Z. Li, Z. Zhang, and F. C. M. Lau, “Dynamic pricing and profit maximization for the cloud with geodistributed data centers,” in INFOCOM, 2014 Proceedings IEEE, 2014, pp. 118–126.
 [12] L. Zhang, Z. Li, and C. Wu, “Dynamic Resource Provisioning in Cloud Computing: A Randomized Auction Approach,” in Proc. of IEEE INFOCOM, 2014.
 [13] W. Shi, L. Zhang, C. Wu, Z. Li, and F. Lau, “An Online Auction Framework for Dynamic Resource Provisioning in Cloud Computing,” in Proc. of ACM SIGMETRICS, 2014.
 [14] X. Zhang, Z. Huang, C. Wu, Z. Li, and F. Lau, “Online Auctions in IaaS Clouds: Welfare and Profit Maximization with Server Costs,” in Proc. of ACM SIGMETRICS, 2015.
 [15]