Consistent Dynamic Server Assignment in Content Delivery Network

04/15/2019 ∙ by Lemei Huang, et al. ∙ Peking University

Server assignment is an essential part of a Content Delivery Network (CDN), responsible for choosing a surrogate server that provides the best Quality of Service (QoS) for clients while balancing the server load. It faces two critical challenges in practice: 1) the constraint of DNS schedule granularity makes it difficult to distribute client demand in arbitrary proportions; 2) dynamic server assignment may bring about a huge amount of cached content migration among servers. To address these issues, we work with one of the largest CDN operators in the world and design a consistent dynamic server assignment algorithm, DynDNS. It gives a more "sticky" solution that can be easily implemented in DNS-based load balancing while keeping the overall QoS and server load satisfactory. To the best of our knowledge, this is the first quantitative model for the dynamic DNS server assignment problem. Theoretical analysis proves that the proposed greedy algorithm has considerable optimality guarantees, and evaluation shows that DynDNS can avoid about 50% of the shift in demand allocation and yield a more stable cache hit ratio.


I Introduction

Today, Content Delivery Networks (CDNs) have become indispensable for delivering online video content. A typical CDN comprises surrogate servers distributed geographically across numerous infrastructure networks over a vast area. User requests, or user demands, are directed to surrogate server(s) to get the requested content. We refer to the mapping between content requests and edge servers as server assignment (Fig. 1). The CDN's server assignment manager is responsible for figuring out the most appropriate (i.e., proximal, lightly loaded, etc.) servers to meet the demand and responds with a list of IP address(es) via DNS name resolution [1].

Pioneering works endeavor to present server assignment strategies that guarantee Quality of Service (QoS) for users and balanced load for servers, e.g., the Stable Matching algorithm of Akamai [2], which uses preference lists to match servers and requests. On the one hand, however, unpredictable and dynamic traffic makes the assignments prone to load "oscillation" between servers, which may cause frequent video content migrations. On the other hand, the coarse scheduling granularity of DNS-based CDNs makes it difficult to distribute requests in arbitrary proportions. Both pose significant challenges to CDN server assignment in practice.


Fig. 1: Common CDN Framework: DNS-based server assignment, aiming at (1) better Quality of Service and (2) load balancing

Oscillation of Server Assignment. In the real world, both client demands and network status change dynamically [3], for which server assignment decisions should be adaptive and therefore become ever-changing (shown in Sec. V). Unfortunately, when content outsourcing is pull-based, as generally adopted in current CDNs [4], "inconsistent" server assignments map requests to "virgin" servers that do not hold the requested objects, which brings about a drop in cache hit ratio. Video content migration must then be done to regain a high cache hit ratio. A "sensitive" server assignment strategy may thus lead to frequent content migration and slower content distribution, which may further cause frequent rebuffering, low bitrate, long join time, and even severely degraded user QoE. Therefore, a more consistent server assignment strategy is preferred.

For example, suppose requests under Y.com are remapped from edge server A to B. If B has not served Y.com before, it has to re-cache these objects from the source. These requests therefore "cold start" and suffer many inevitable cache misses, and re-caching a massive amount of objects is a huge burden on the CDN in bandwidth and storage cost.

In this paper, we work with one of the largest CDN operators in the world and design a consistent dynamic server assignment algorithm, DynDNS. It adaptively handles dynamic client demands, network changes and server status with consistent assignments and QoS guarantees. To the best of our knowledge, it is the first quantitative model for the dynamic DNS server assignment problem. Moreover, DynDNS produces server assignment decisions that can be easily implemented, meeting the practical concerns of dynamic DNS-based load balancing. Extensive simulations show that the algorithm yields more consistent server assignments with satisfactory service quality and server load.

Traffic Granularity of DNS-based Scheduling. DNS-based load balancing is realized through a carefully decided IP list. But due to the size limitation of DNS packets, demand cannot be distributed in arbitrary proportions, which is a practical concern omitted in many server selection algorithms [2, 5, 6], and may bring about complexity in dealing with the server selection problem. (In most cases, DNS responses are propagated in a single UDP packet [1]. Due to the maximum length, i.e., 512 bytes, of a UDP packet in the DNS protocol [7], the number of IP addresses in the replied list has an upper bound, which may empirically be assumed to be 16. Although TCP or the EDNS option can be used to expand the packet length, this requires the support of either the application or the DNS server.)

For example, if the demand assigned to servers A and B should be split 3:1, the IP list may contain three identical IP addresses of A and one IP address of B [8]. However, this limits the flexibility of load balancing in practice.
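The rounding from a desired demand split onto a fixed-size DNS answer list can be sketched in a few lines of Python (an illustrative sketch only: the function name, the largest-remainder rounding, and the list size of four are our own choices, not part of the DNS protocol or of DynDNS):

```python
from fractions import Fraction

def build_ip_list(weights, list_size=4):
    """Approximate a demand split with a fixed-size DNS answer list.

    weights: dict mapping server IP -> desired share of demand (integers).
    Returns a list of IPs whose multiplicities approximate the shares,
    using largest-remainder rounding onto `list_size` slots.
    """
    total = sum(weights.values())
    # Exact (fractional) number of slots each server deserves.
    exact = {ip: Fraction(w, total) * list_size for ip, w in weights.items()}
    counts = {ip: int(x) for ip, x in exact.items()}  # floor
    leftover = list_size - sum(counts.values())
    # Hand remaining slots to the servers with the largest remainders.
    for ip in sorted(exact, key=lambda i: exact[i] - counts[i], reverse=True)[:leftover]:
        counts[ip] += 1
    return [ip for ip, c in counts.items() for _ in range(c)]
```

With `{"A": 3, "B": 1}` and four slots, the list contains A three times and B once, matching the 3:1 example above; a 2:1 split onto four slots, by contrast, can only be approximated, which is exactly the granularity limitation discussed here.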

The main contributions of this paper are summarized as follows:

  • We quantitatively formulate the Consistent Dynamic Server Assignment problem (Sec. III), i.e. the problem of how to assign the content requests to the edge servers with the constraints of traffic granularity and assignment consistency.

  • We present a simple greedy algorithm with a 2/3 approximation ratio to efficiently solve the problem (Sec. IV), based on the property that the formulation can be transformed equivalently into maximizing a monotone submodular function subject to matroid constraints.

II Background and Related Work

Server assignment refers to the question: which surrogate server is the best for each request? Earlier on, only proximity was considered: the CDN chooses the nearest server according to various measured criteria such as network distance [9], RTT [10], and hop count [11]. However, choosing the closest server does not always yield the best QoS [12], due to the influence of network connectivity and congestion. As Internet traffic grows rapidly, even the surrogate servers themselves are likely to be swamped by flash crowds [13], which necessitates load balancing among CDN servers.

The task of load balancing is left to the server assignment scheme, which dispatches the demand among multiple CDN servers. Many works have theoretically formulated the server assignment (or load balancing) problem under various conditions [6, 5, 14]. The output, the portion of demand allocated to each server, is always assumed to be a real number. In DNS-based CDNs, however, load balancing can only be done at coarse granularity, and thus the formulations must take the form of Integer Programming or Mixed Integer Programming, which brings about complexity in solving the problem.

Due to dynamic client demand, as well as changing network and server status, dynamic server assignment becomes necessary to better utilize CDN resources [15]. Most dynamic server selection schemes recompute their decisions on short time scales such as 1 min. However, few of them consider the damage caused by inconsistent server assignment decisions, especially in pull-based content-outsourcing CDNs.

In 2015, Akamai groundbreakingly transformed the load balancing problem into a variant of the stable marriage problem and presented a server selection scheme based on a generalized Gale-Shapley algorithm [2]. Difficulties arise because client demand for the same domain name can be divided into arbitrary proportions in the solution but not in practice. In addition, as is claimed in [2], innovations are needed to design a more consistent server assignment strategy. One inspiration is to raise the preference of a server that is likely to cache the requested contents so as to maintain consistency. What is challenging, however, is balancing the tradeoff between proximity and consistency within the same preference list. In this paper, we consider this tradeoff in our model.

III DynDNS: Dynamic Server Assignment

In this section, we quantitatively formulate the Consistent Dynamic Server Assignment problem, the problem of how to assign content requests to edge servers with the constraints of traffic granularity and assignment consistency.

III-A Flow Partitioning


Fig. 2: Flow partitioning and server assignment

There is a group of edge servers S in a CDN. Due to the limits of bandwidth, processing power, memory, and disk capacity [2], the amount of demand a CDN server can serve has an upper bound, a.k.a. its capacity, denoted by C_j for server j. A server is overburdened once it receives more demand than its capacity.

Let there be a set of domain names D and client locations L. An element in L can either be the IP address of a local DNS acting on behalf of a group of clients, or the edns-client-subnet indicating the real location of clients [16]. Thus, the client demands are partitioned into different flows, with each flow f_{d,l} representing the requests from location l ∈ L under domain name d ∈ D.

Due to the limited granularity of DNS-based load balancing, we assume that each flow is divided a priori into sub-flows with an equal estimated demand of δ, by some kind of rounding method. We assume that all sub-flows belonging to the same flow are identical, sharing the same group of requested contents, as well as client location and domain name. The "long tail" distribution of demands brings about many small flows whose demand is less than δ. To better utilize the server capacity, flows in the same or nearby locations can be merged into one flow with about δ demand in practice. δ is empirically set to 1/16 of the maximal demand among all the flows, which ensures that solutions to our model are realizable in DNS-based load balancing. Thus, a sub-flow is a map unit i ∈ M, as depicted in Fig. 2, where i can be regarded as the index of each sub-flow.
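The a priori split of a flow's estimated demand into equal sub-flows can be sketched as follows (an illustrative sketch; the function name and the rounding-up choice are our own assumptions, not taken from the paper):

```python
import math

def partition_into_subflows(flow_demand, delta):
    """Split a flow's estimated demand into equal-sized sub-flows.

    flow_demand: estimated demand of the flow.
    delta: target demand per sub-flow (the DNS granularity unit).
    Returns (n_subflows, demand_per_subflow). Rounding the count up and
    dividing evenly keeps every sub-flow identical, as assumed above.
    """
    n = max(1, math.ceil(flow_demand / delta))
    return n, flow_demand / n
```

For instance, a flow of demand 35 with δ = 10 is split into 4 identical sub-flows of 8.75 each; flows smaller than δ become a single map unit (and, as noted above, could be merged with nearby small flows in practice).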

Let x_{ij} be the 0-1 variable that indicates whether map unit i is assigned to server j (x_{ij} = 1) or not (x_{ij} = 0). Then x = (x_{ij}) is our server assignment decision.

III-B Quality of Service

Many CDN providers establish their own scoring systems to estimate the complex network status (e.g., latency, connectivity) and evaluate the expected QoS of assigning a map unit to an edge server [17]. We let the QoS score of dispatching a single request belonging to map unit i to server j be q_{ij}. Note that the estimated QoS is the same for sub-flows belonging to the same flow, because of their uniformity.

The estimation of q_{ij} comprises multiple concerns, the most dominant of which is proximity: if map unit i comes from a location with lower latency to server j, q_{ij} is supposed to be higher. Besides, the contract terms, server preference for business types, and whether the demand comes from a downstream ISP, etc., are taken into account when scoring the QoS, which is beyond the scope of this paper.

Now the total QoS can be quantified as:

Q(x) = Σ_{i∈M} Σ_{j∈S} δ·q_{ij}·x_{ij}    (1)

CDN providers want to achieve as high a QoS as possible.

III-C Load Balancing

Load balancing is another goal of CDN providers, which brings two requirements:

No overload. No server may be overloaded. Once a CDN server receives more requests than it can handle, the service may degrade or even fail. Consequently, we introduce the capacity constraints:

L_j(x) = Σ_{i∈M} δ·x_{ij} ≤ C_j,  ∀ j ∈ S    (2)

where L_j(x) quantifies the demand that server j receives, which represents the load of server j as well.

Balanced load. To reduce the risk of overload, it is favorable that the loads of servers with similar QoS grow evenly. We employ a function g(·) to quantify server j's degree of load balance, where g can be any decreasing concave function of the load ratio L_j(x)/C_j; our paper adopts one such function. Then,

B(x) = Σ_{j∈S} g(L_j(x)/C_j)    (3)

quantifies the load balancing degree of the whole system.
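The effect of a decreasing concave g can be illustrated with a small sketch (here g(u) = √(1 − u) is a stand-in choice of ours, not necessarily the function the paper adopts; any decreasing concave g on [0, 1] behaves the same way):

```python
import math

def balance_score(loads, capacities, g=lambda u: math.sqrt(1.0 - u)):
    """Sum of a decreasing concave function of each server's load ratio.

    Because g is concave and decreasing, shifting load from a busy
    server to an idle one with the same capacity always increases the
    total score, so maximizing it pushes loads toward evenness.
    """
    return sum(g(l / c) for l, c in zip(loads, capacities))

# Even loads score higher than skewed loads at the same total demand:
even = balance_score([50, 50], [100, 100])   # both servers at 50%
skew = balance_score([90, 10], [100, 100])   # 90% vs 10%
```

Here `even > skew`, showing why maximizing B(x) in (3) favors balanced assignments.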

III-D Decision Consistency

In a pull-based content-outsourcing CDN, it is favorable to achieve similar QoS and degree of load balancing while maintaining high decision consistency. That is, if server j received the requests under domain name d in previous server assignment decisions, it is preferred to assign map units under the same domain name to server j. We regard a portion of client demand as "consistent" if, after a recomputation, it is assigned to a server that previously served demand under the same domain name.

Let p_{ij} be the parameter indicating the favorability of mapping unit i to server j. Note that the sub-flow index makes no difference to p_{ij} either. The estimation of p_{ij} is based on the previous solution x^prev. Suppose map unit i belongs to domain name d; then p_{ij} can be defined as: p_{ij} = 1 if some map unit under domain name d was assigned to server j in x^prev, and p_{ij} = 0 otherwise.

Thus, we can use

P(x) = Σ_{i∈M} Σ_{j∈S} δ·p_{ij}·x_{ij}    (4)

to quantify the overall consistency of server assignment decisions.
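A minimal sketch of such a consistency indicator, assuming the previous solution is given as a list of (domain name, server) pairs (this encoding is our own, for illustration):

```python
def consistency_weight(unit_domain, server, prev_assignment):
    """1 if `server` already served some demand under the same domain
    name in the previous solution, else 0.

    unit_domain: domain name of the map unit being (re)assigned.
    server: candidate server.
    prev_assignment: iterable of (domain, server) pairs from x^prev.
    """
    return 1 if any(d == unit_domain and s == server
                    for (d, s) in prev_assignment) else 0
```

For example, with the earlier Y.com scenario: if the previous solution mapped Y.com to server A, then reassigning a Y.com map unit to A has weight 1, while moving it to a "virgin" server B has weight 0.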

III-E Dynamic Server Assignment Model

Taking the above-discussed factors into account, we can formulate the server assignment problem as:

max_x  α·Q(x) + β·B(x) + γ·P(x)    (5a)
s.t.  Σ_{i∈M} δ·x_{ij} ≤ C_j,  ∀ j ∈ S    (5b)
      Σ_{j∈S} x_{ij} ≤ 1,  ∀ i ∈ M    (5c)
      x_{ij} ∈ {0, 1},  ∀ i, j    (5d)

(5a) is the objective function, where α, β and γ are constant factors balancing the tradeoff among the three optimization goals: high quality of service, load balancing and decision consistency. (5c) indicates that each map unit should be assigned to at most one server. When the total capacity of all edge servers is insufficient to meet the demand, requests belonging to some map units may be left without an assigned server and are directed to the content source to fetch the data. Finally, (5b) and (5d) are the capacity and integrality constraints.
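Putting the three terms together, the objective of (5a) can be evaluated for a candidate assignment as sketched below (the √(1 − u) balance function, the data layout, and the unit default coefficients are illustrative assumptions of ours, not the paper's exact choices):

```python
import math

def objective(x, q, unit_domain, delta, caps, prev_domains,
              alpha=1.0, beta=1.0, gamma=1.0):
    """Evaluate alpha*Q(x) + beta*B(x) + gamma*P(x) for an assignment.

    x: dict map_unit -> server (partial assignments allowed, per (5c)).
    q: dict (map_unit, server) -> per-request QoS score q_ij.
    unit_domain: dict map_unit -> domain name.
    delta: demand of each sub-flow (map unit).
    caps: dict server -> capacity C_j.
    prev_domains: dict server -> set of domains it served previously.
    """
    # QoS term (1); delta is constant, so it is folded out here.
    qos = sum(q[(i, j)] for i, j in x.items())
    # Load of each server, for the balance term (3).
    load = {j: 0.0 for j in caps}
    for j in x.values():
        load[j] += delta
    balance = sum(math.sqrt(max(0.0, 1.0 - load[j] / caps[j])) for j in caps)
    # Consistency term (4): count units kept on a previously-used server.
    consistency = sum(1 for i, j in x.items()
                      if unit_domain[i] in prev_domains.get(j, set()))
    return alpha * qos + beta * balance + gamma * consistency
```

A greedy or exact solver for (5) then only needs this evaluator plus the capacity check of (5b).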

Computational Hardness of (5). However, (5) is an NP-hard quadratic integer programming problem [18]. Exact approaches such as branch-and-bound and cutting-plane become too computationally inefficient as the dimension of the solution increases, as in our problem.

IV Efficient Algorithm with Optimality Guarantee

The hardness of (5) necessitates a computationally efficient approximate algorithm. In this section, we show that there exists a simple, elegant greedy algorithm with a considerable approximation ratio of 2/3 for solving (5). Before introducing the algorithm and proving the optimality guarantee, we first show that (5) is equivalent to maximizing a monotone submodular function subject to matroid constraints:

Properties of (5) (abstract). The integrality constraint (5d) enables every assignment decision to be written as a set A = {(i, j) : x_{ij} = 1} ⊆ Ω, where Ω is the ground set of all (map unit, server) pairs. Thus, the constraints of (5) can be written as matroid constraints, according to the definition of partition matroids [19]. Moreover, the objective function (5a) can be written as a set function [20] as well. It is proved in [21] that this set function is a monotone submodular function, which means (5) can be written as an optimization problem maximizing a monotone submodular function subject to matroid constraints. Due to space constraints, the detailed proof is in Appendix -A.

Since (5) can be written as maximizing a monotone submodular function with matroid constraints, one of the simplest and most common algorithms is the greedy algorithm:

Input:  submodular objective f, ground set Ω; matroid constraints M
Output: approximate optimal solution S to (5)
1:  Initialize: S ← ∅, candidates C ← Ω
2:  while C ≠ ∅ do
3:     e* ← argmax_{e∈C} [f(S ∪ {e}) − f(S)]
4:     if f(S ∪ {e*}) − f(S) ≤ 0 then
5:        break
6:     else if S ∪ {e*} is independent in M then
7:        S ← S ∪ {e*}, C ← C \ {e*}
8:     else
9:        C ← C \ {e*}
10:    end if
11: end while
12: return S
Algorithm 1 The greedy algorithm

The solution is initialized as an empty set, and all elements of the ground set are candidates to be put into the solution. In each iteration, the algorithm greedily selects the candidate with maximal marginal gain (meanwhile removing it from the candidates) and adds it to the solution if doing so maintains the solution's feasibility.

Convergence. On the one hand, the number of candidates is finite and decreases with each iteration. On the other hand, the gain of adding a new element decreases in each iteration, because the objective function is submodular. Both ensure that the algorithm eventually stops.

Time Complexity. Each time an element is dropped from the candidates, finding the element with maximal marginal gain takes time linear in the number of remaining candidates, and checking whether the new solution remains feasible takes constant time for partition matroids; both are O(|Ω|). There are |Ω| = |M|·|S| elements to drop, for which the time complexity of Algorithm 1 is O(|Ω|²), appropriate for frequent recomputation.

Approximation Ratio. [22] proves that the greedy algorithm yields a tight approximation ratio of 1/(p + κ) if the constraint can be written as the intersection of p matroids, where κ is the total curvature of the objective. In our problem, it is proved that p = 1 and κ ≤ 1/2 (in Appendix -A), for which the approximation ratio of Algorithm 1 is 2/3.

V Evaluation

In this section, we evaluate the performance of DynDNS under both stationary and temporal workloads, compare it with the state of the art, and explore the main benefits brought by DynDNS.

V-A Experimental Setup

V-A1 Network Status


Fig. 3: Converged performance under stationary workloads, with the total demand varying from 50% to 90% of the total capacity. DynDNS yields more balanced server load while maintaining the QoS

In our simulations, we assume client demands are generated and served within a single ISP for simplicity. On the one hand, a small-scale CDN (e.g., a country-wide CDN) rarely does cross-ISP load balancing due to expensive inter-ISP links. On the other hand, DynDNS works just as well in multi-ISP cases, with similar results.

We generate a scale-free topology of about 1,000 routers with aSHIIP [23] to simulate an ISP-like topology, using the Barabasi-Albert model [24]. All links are assumed to have equal latency, and end-to-end transmissions are assumed to go via the shortest path. Thus, we regard the latency between two routers as their shortest-path distance.
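Under the equal-latency assumption, the latency between two routers reduces to a BFS hop count, which can be sketched as follows (the adjacency-list encoding and the function name are our own choices for illustration):

```python
from collections import deque

def hop_latency(adj, src):
    """Shortest hop count from `src` to every reachable router.

    adj: dict router -> list of neighbor routers (undirected graph).
    With all links of equal latency, BFS hop distance is exactly the
    shortest-path latency model used in the simulation.
    """
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```

For example, on a 4-router chain 0-1-2-3, the latency from router 0 to router 3 is 3 hops.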

Without loss of generality, 25 routers are randomly chosen as the locations of CDN surrogate servers. We allocate a randomly decided capacity to each server while keeping the total capacity constant. In addition, each surrogate server is allocated a well-designed cache size: large enough to attain a cache hit ratio of over 90%, but not large enough to cache all the contents, as is the situation in the real world.

V-A2 Client Demand

For simplicity, we suppose that all routers in the network are edge routers with access to clients. Although in practice some of them are core routers, this makes little difference to the evaluation results. We also assume that the client locations in L are all local DNS servers, with one local DNS allocated in each region. We therefore partition the 1,000 routers into 5 regions by a clustering method such as k-means, considering that routers in the same region have similar latencies to the same server, and then randomly choose a router from each region as the location of its local DNS. In the real world, regions with more netizens are likely to have higher demand, so we let the demand in each region be proportional to the number of routers in the region. We suppose there are 100 domain names whose popularity follows the Zipf law with a fixed Zipf exponent. Once the total demand is given, the expected demand for each domain name can thus be derived.
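Drawing per-domain demands from a Zipf law can be sketched as follows (the exponent is left as a free parameter here, since the paper's exact value is not reproduced; the function name is our own):

```python
def zipf_demands(total_demand, n_domains, exponent):
    """Split total demand across domain names by a Zipf law.

    The k-th most popular domain gets demand proportional to k**(-exponent),
    so a few head domains dominate and the rest form the "long tail".
    """
    weights = [k ** -exponent for k in range(1, n_domains + 1)]
    z = sum(weights)
    return [total_demand * w / z for w in weights]
```

With 100 domains, the resulting list is strictly decreasing in popularity rank and sums to the given total demand, which is all the model above requires.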

V-A3 Benchmarks and Performance Metrics

We compare DynDNS with two benchmarks. Stable Matching: the generalized Gale-Shapley algorithm presented by Akamai [2]. NonDynDNS: the non-consistent form of DynDNS, whose objective function comprises the QoS and load-balancing terms but not the consistency term.

CDN performance is evaluated by five metrics:

Standard Deviation of Server Loads: We calculate the proportional load of each server and use the standard deviation across servers to estimate the degree of load balancing. Server loads are more balanced when the standard deviation is smaller.
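This metric can be computed directly (a small sketch; the proportional load is taken as load divided by capacity, as described above):

```python
import math

def load_stddev(loads, capacities):
    """Population standard deviation of per-server proportional load
    (load / capacity): smaller means more balanced servers."""
    ratios = [l / c for l, c in zip(loads, capacities)]
    mean = sum(ratios) / len(ratios)
    return math.sqrt(sum((r - mean) ** 2 for r in ratios) / len(ratios))
```

For two identical servers loaded 90% and 10%, the metric is 0.4; perfectly even loads give 0.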

Quality per Request: We measure the average assignment quality per request to estimate the QoS provided by the CDN. Higher assignment quality indicates faster content delivery, and consequently higher QoS.

Fraction of Consistency: the fraction of "consistent" demand, i.e., demand whose assigned server served demand under the same domain name in the previous solution. It shows the consistency of server assignment algorithms.

Number of Misses: the number of cache misses during the period between two consecutive decision recomputations, which reflects the impact of "inconsistent" demand to some extent, as "inconsistent" demand causes more cache misses.

Real-time Cache Hit Ratio: the real-time cache hit ratio of the whole CDN, another metric showing the influence of "inconsistent" demand, as the cache hit ratio drops immediately and drastically due to "inconsistent" demand.

V-B Scenario 1: Stationary Demand

In this scenario, the demands of clients are stationary, and thus the server assignment decisions are stationary as well. CDN performance will converge to a stable state once the server assignments are decided.

We tune the total demand from 10% of the total capacity to 90% and evaluate the converged performance. When the server assignment decisions do not change, DynDNS becomes exactly the same as NonDynDNS, because the amount of "inconsistent" demand is always 0. Therefore, we compare DynDNS only with Stable Matching, in terms of the standard deviation of server loads and the average quality per request. Results are averaged over 100 distinct runs of this scenario.

Results. Results are depicted in Fig. 3. As the total demand increases, DynDNS and Stable Matching yield very close QoS. However, when there is a surplus of resources, DynDNS does better in load balancing: when the alternatives provide similar QoS, DynDNS tends to assign demand to servers with more spare capacity, while in Stable Matching the capacity of a server is filled up before the rest of the demand goes to a second choice, resulting in unbalanced server load. When the total demand peaks at 90% of the total capacity, nearly all the resources are used to serve requests, for which the load of each server is over 95% and thus the variance of server loads decreases.


Fig. 4: Domain names are partitioned into different sections according to popularity, and demands within each section are randomly shuffled to simulate random demand shifts. The section length is tuned from 5% to 100% of the number of domain names. DynDNS yields higher consistency and thus fewer cache misses while maintaining the overall QoS

Average quality per request decreases as the total demand grows, because demands are assigned to their best choices when capacity is adequate, but to their second or even worst choices when it is insufficient.


Fig. 5: Real-time cache hit ratio in the first 300 seconds. The consistent server assignment strategy yields only a modest decline in cache hit ratio after reassignment. Demand shuffles and server assignment recomputations happen every 30 s

V-C Scenario 2: Randomly-shuffled Demand

In this scenario, the demands of clients vary dynamically over time, in order to evaluate assignment consistency. We partition the domain names into sections with lengths of {5%, 10%, 20%, 50%, 100%} of the number of domain names, respectively. Domain names in the same section can be regarded as being in the same popularity grade. We randomly shuffle the demands within each section and recompute the server assignment decisions every 30 seconds; demands therefore suffer more drastic oscillation when the sections are larger. Before logging the results, we proactively run for 300 seconds to warm up the cache. The evaluation lasts 6000 seconds.

Results. DynDNS outperforms the other two algorithms in terms of assignment consistency. Fig. 4 shows that, when the demands of clients sustain different degrees of oscillation, the average quality per request changes little (no more than 5.4%), but the server assignment decisions of the benchmarks are likely to change a lot. Their recomputed solutions can bring about up to nearly 60% "inconsistent" demand, while DynDNS brings no more than 30%, which shows that both Stable Matching and NonDynDNS are more sensitive to demand changes than DynDNS. The consistency of DynDNS results in its better behavior under cache hit ratio oscillation. Fig. 5 depicts part of the real-time cache hit ratio. Each time the server assignment decisions are recomputed, the cache hit ratio suffers an immediate dive and then grows gradually back to convergence. More "inconsistent" demand leads to a more severe decline in cache hit ratio, and thus more cache misses.

V-D Scenario 3: Surging Demand

In the real world, however, client demands do not change as randomly as in Scenario 2; a demand surge from one or two relatively popular domain names is more common. For example, Youtube is a popular video domain name with a great amount of demand, and its demand increases sharply when Youtube provides live coverage of an event such as the 2018 World Cup. To simulate this situation, we randomly choose a domain name from the top 10% most popular domain names and tune its demand to 150% of the previous demand. We run the scenario 100 times to reduce the effect of randomness.


Fig. 6: CDN performance when demand for one of the popular domain names surges to 150%, which is common in the real world.

Results. From the CDFs in Fig. 6, we find that the QoS attained by each algorithm is very close. However, DynDNS keeps the fraction of inconsistency within 10% in most cases, while NonDynDNS and Stable Matching yield double or even quadruple that. Consequently, DynDNS outperforms NonDynDNS and Stable Matching in terms of the number of cache misses as well. Due to space constraints, the real-time cache hit ratio is not presented in this paper, but, similarly to Scenario 2, DynDNS yields a more stable cache hit ratio with less decline immediately after solution recomputation.

VI Conclusion

In this paper, we address two challenges posed by dynamic server assignment in a DNS-based CDN for online video streaming. One is the need for a consistent server assignment strategy to avoid frequent video content migration, and the other is the coarse traffic granularity of DNS-based scheduling. We design a consistent dynamic server assignment algorithm, DynDNS. Our paper begins with the formulation of the dynamic server assignment problem, considering the tradeoff among three goals: (1) better QoS, (2) balanced server load, and (3) decision consistency. Then, we prove that our formulation can be transformed equivalently into maximizing a monotone submodular function subject to matroid constraints, based on which we present a simple greedy algorithm that solves the model with a 2/3 approximation ratio. Evaluation shows that DynDNS yields more balanced server load and better consistency with a less fluctuating cache hit ratio.

References

  • [1] J. Pan, Y. T. Hou, and B. Li, “An overview of dns-based server selections in content distribution networks,” Computer Networks, vol. 43, no. 6, pp. 695–711, 2003.
  • [2] B. M. Maggs and R. K. Sitaraman, “Algorithmic nuggets in content delivery,” ACM SIGCOMM Computer Communication Review, vol. 45, no. 3, pp. 52–66, 2015.
  • [3] G. Tang, K. Wu, and R. Brunner, “Rethinking CDN design with distributed time-varying traffic demands,” in IEEE INFOCOM 2017 - IEEE Conference on Computer Communications.   IEEE, 2017, pp. 1–9.
  • [4] G. Pallis and A. Vakali, “Insight and perspectives for content delivery networks,” Communications of the ACM, vol. 49, no. 1, pp. 101–106, 2006.
  • [5] D. Leong, T. Ho, and R. Cathey, “Optimal content delivery with network coding,” in Information Sciences and Systems, 2009. CISS 2009. 43rd Annual Conference on.   IEEE, 2009, pp. 414–419.
  • [6] P. Wendell, J. W. Jiang, M. J. Freedman, and J. Rexford, “Donar: decentralized server selection for cloud services,” ACM SIGCOMM Computer Communication Review, vol. 40, no. 4, pp. 231–242, 2010.
  • [7] P. Mockapetris, “Domain names—implementation and specification,” RFC 1035, Nov. 1987.
  • [8] T. Brisco, “DNS support for load balancing,” RFC 1794, 1995.
  • [9] J. D. Guyton and M. F. Schwartz, “Locating nearby copies of replicated internet servers,” in ACM SIGCOMM Computer Communication Review, vol. 25, no. 4.   ACM, 1995, pp. 288–298.
  • [10] M. Sayal, Y. Breitbart, P. Scheuermann, and R. Vingralek, “Selection algorithms for replicated web servers,” ACM SIGMETRICS Performance Evaluation Review, vol. 26, no. 3, pp. 44–50, 1998.
  • [11] G. Pierre and M. Van Steen, “Globule: a collaborative content delivery network,” IEEE Communications Magazine, vol. 44, no. 8, pp. 127–133, 2006.
  • [12] H. A. Tran, S. Hoceini, A. Mellouk, J. Perez, and S. Zeadally, “Qoe-based server selection for content distribution networks,” IEEE Transactions on Computers, vol. 63, no. 11, pp. 2803–2815, 2014.
  • [13] N. Yoshida, “Dynamic cdn against flash crowds,” in Content Delivery Networks.   Springer, 2008, pp. 275–296.
  • [14] J. M. Almeida, D. L. Eager, M. K. Vernon, and S. J. Wright, “Minimizing delivery cost in scalable streaming content distribution systems,” IEEE Transactions on Multimedia, vol. 6, no. 2, pp. 356–365, 2004.
  • [15] H. He, Y. Feng, Z. Li, Z. Zhu, W. Zhang, and A. Cheng, “Dynamic load balancing technology for cloud-oriented cdn,” Computer Science and Information Systems, vol. 12, no. 2, pp. 765–786, 2015.
  • [16] F. Chen, R. K. Sitaraman, and M. Torres, “End-user mapping: Next generation request routing for content delivery,” in ACM SIGCOMM Computer Communication Review, vol. 45, no. 4.   ACM, 2015, pp. 167–181.
  • [17] E. Nygren, R. K. Sitaraman, and J. Sun, “The akamai network: a platform for high-performance internet applications,” ACM SIGOPS Operating Systems Review, vol. 44, no. 3, pp. 2–19, 2010.
  • [18] C. A. Floudas and V. Visweswaran, “Quadratic optimization,” in Handbook of global optimization.   Springer, 1995, pp. 217–269.
  • [19] E. L. Lawler, Combinatorial optimization: networks and matroids.   Courier Corporation, 1976.
  • [20] “Set function - encyclopedia of mathematics,” https://www.encyclopediaofmath.org/index.php/Set_function.
  • [21] N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: Wireless video content delivery through distributed caching helpers,” in INFOCOM, 2012 Proceedings IEEE.   IEEE, 2012, pp. 1107–1115.
  • [22] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey, “An analysis of approximations for maximizing submodular set functions—ii,” in Polyhedral combinatorics.   Springer, 1978, pp. 73–87.
  • [23] J. Tomasik and M.-A. Weisser, “aSHIIP: autonomous generator of random Internet-like topologies with inter-domain hierarchy,” 2010.
  • [24] R. Albert and A.-L. Barabási, “Topology of evolving networks: local events and universality,” Physical review letters, vol. 85, no. 24, p. 5234, 2000.
  • [25] A. Schrijver, Combinatorial optimization: polyhedra and efficiency.   Springer Science & Business Media, 2003, vol. 24.
  • [26] D. Welsh, “Matroid theory. 1976,” London Math. Soc. Monogr, 1976.