Measuring and exploiting the cloud consolidation of the Web

by Debopam Bhattacherjee et al.

We present measurements showing that the top one million most popular Web domains are reachable within 13ms (in the median) from a collective of just 12 cloud data centers. We explore the consequences of this Web "consolidation", focusing on its potential for speeding up the evolution of the Web. That most popular services reside in or near a small number of data centers implies that new application and transport technologies can be rapidly deployed for these Web services, without the involvement of their operators. We show how this may be achieved by orchestrating a handful of reverse proxies deployed in the same data centers, with new technologies deployed at these proxies being nearly as effective as deploying them directly to the Web servers. We present early measurements of this approach, demonstrating a >50% reduction in Web page load times for users with high latencies to Web servers. We also show that this model, using a small number of proxies, can be surprisingly competitive with extensive CDN deployments, especially in geographies with high last-mile latencies.







1 Introduction

Where does the Web live? While Internet hypergiants like Google and Facebook run their own extensive infrastructure to back their services, and others at the opposite end of the spectrum operate out of their own servers and clusters, a large number of today’s popular Web services are supported by public cloud infrastructure.

Prior studies have shed light on the increasing use of cloud services like Amazon Web Services (AWS) and Microsoft Azure by comparing IP addresses seen in traffic towards popular Web services to address blocks published by these cloud providers [36, 68, 54]. We take the opposite perspective: given the well-established usage of cloud platforms by popular Web services, we conduct latency measurements to Web services from these platforms, thus effectively conducting a census of Web services in terms of their proximity to AWS and Azure.

One key finding from our measurements is that the top million most popular Web domains (per Alexa [13]) are reachable in under 13 milliseconds (in the median) from at least one vantage point out of the just 12 we deployed in Amazon's EC2 and Microsoft's Azure. While the replication of Web services via content distribution networks certainly decreases latency to them, including from EC2 and Azure, such replication alone does not entirely explain our results — per a recent analysis, only a minority of the most popular Web sites use CDNs, with CDN usage declining rapidly with decrease in popularity [30]. Thus, we conclude that a substantial fraction of the one million most popular Web services is hosted in or near a small number of cloud data centers. We refer to this observation as the cloud consolidation of the Web. (We deliberately avoid the term "centralization", as it is already in use in a different, albeit related context — coarsely, the idea that the attention of Web users is increasingly controlled by a small number of companies [44].)

The extent of cloud consolidation revealed by our measurements is much larger than is obvious from past work using address matching. For instance, He et al. [36] found that only a small percentage of the Alexa top million used EC2/Azure. While we find that this percentage has moved up in the intervening years, it still understates the cloud consolidation of the Web. Our latency-based measurements thus allow an examination beyond the tight constraint of matching IP addresses, revealing that a large fraction of Web services not hosted directly on the largest cloud platforms are still hosted (or at least replicated) somewhere in their near vicinity.

We further observe that a few key Web hosting providers are the primary cause of this massive consolidation. Although the total number of providers hosting domains near the public cloud is quite large, very few of them have a presence across multiple cloud data centers and consistently host a large fraction of these domains. Just a handful of key Web hosting providers host the bulk of such near-cloud domains across AWS locations.

We also explore an interesting implication of this consolidation: its utility for evolving the Web. While Web and Internet technology is advancing at a rapid pace with the goal of improving responsiveness and user experience, there is a long tail of still-popular Web services which lags the technology adoption curve – even GZip use is far from universal [11], and newer technologies like WebP see still lower adoption rates [11]. Likewise, while modern transport protocols like BBR [22] and QUIC [42] may see rapid uptake and interest at the likes of Google and Akamai, a large share of Web servers still use small TCP initial congestion windows [56], despite the IETF having recommended larger initial windows years ago [25]. Further, as we shall see later, technology adoption is even slower in the developing world.

If we can place vantage points near most Web servers, like our measurements show, we can use these as proxies for Web content. Using new Web and Internet technologies at such near-server proxies can often be nearly as effective as using them directly at the Web servers, and does not require any involvement from the operators of Web services. While proxy-based solutions for speeding up the Web are already well-studied [11, 64, 49, 59, 51, 14, 29], our measurements reveal a simple and cheap deployment model for such proxies. At the small latencies between the Web servers and the proxies in our proposed deployment, old transport and Web stacks incur small overheads, and wide-area connectivity between these proxies and clients can benefit from modern network stacks. Thus, a small number of proxies can speed up the Web’s evolution, and substantially improve performance for many clients and services today.

We build a prototype of this approach, Fetch, and evaluate it using both emulation across network configurations with varying latency, bandwidth, and loss, as well as with a small number of Internet vantage points, finding that it can improve Web performance by more than 50% for users with high latencies to Web servers. We also show that Fetch can be surprisingly competitive against Web page delivery over an extensive, well-tuned content delivery network, achieving page load times close to those of the CDN-based approach.

Figure 1: Latencies from EC2 to Web servers. As the legend indicates, circle sizes reflect the number of servers geolocated to that location, and circle color reflects average latency across those domains, green (lighter in grayscale) being lowest (under  ms), and red (darker) being the highest ( ms). Note that for visual clarity, circle sizes are proportional to the cube-root of the number of domains geolocated to a location.

2 Cloud consolidation of Web services

We quantify round-trip latency to Web services from Amazon EC2 data centers. From each of these data centers, we measured round-trip times to Web servers hosting the top one million most popular Web sites (per Alexa's list [13]). We used hping [58] to conduct our RTT measurements, allowing us to send TCP SYN packets to the Web servers and record when the TCP SYN-ACKs were received at our Amazon nodes. We refer to the least of the measurements for each Web site as the "latency from EC2". Of course, for services which use geo-replicated deployments with anycast or DNS redirects, our measurements may not correspond to the same physical Web server, but this does not hinder our goal of quantifying the latencies to these Web services from each of our measurement sites.
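The same SYN/SYN-ACK timing idea can be approximated without raw-socket privileges by timing full TCP handshakes and keeping the minimum over several attempts. The sketch below is illustrative, not the paper's tooling; the function name and defaults are ours, and `connect()` includes slightly more than one RTT.

```python
import socket
import time

def tcp_rtt_ms(host, port=80, attempts=3, timeout=2.0):
    """Approximate RTT (ms) as the minimum TCP handshake time over several
    attempts, mirroring the paper's use of SYN/SYN-ACK timing (hping sends
    raw SYN probes; an unprivileged connect() is a close stand-in).
    Returns float('inf') if no attempt succeeds."""
    best = float("inf")
    for _ in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        t0 = time.perf_counter()
        try:
            s.connect((host, port))
            best = min(best, (time.perf_counter() - t0) * 1000.0)
        except OSError:
            pass  # refused / timed out: skip this attempt
        finally:
            s.close()
    return best
```

Taking the minimum across attempts matches the paper's "least of the measurements" convention, filtering out transient queuing delay.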

Fig. 1 shows a visualization of latencies from EC2 to the one million most popular Web domains. In addition to the hping measurements, this visualization also uses geolocation to map the Web servers which correspond to the smallest measured latency from EC2. For this purpose, we used MaxMind's free geolocation product [45], rounding locations to one decimal degree. While there are bound to be some geolocation errors, it is unlikely that these change the visualization significantly, as it is broadly in line with our latency measurements (which we shall detail shortly). Each circle's size denotes the number of services in the top million that were geolocated to the circle's center. For clarity, circle sizes are scaled to the cube root of the number of domains geolocated to a location, rather than linearly. Color denotes average latency across domains geolocated to that location, with darker / warmer colors being worse. This visualization caps the largest measurements (a small share of the data) at a fixed maximum, for greater resolution in the rest of the data.

Figure 2: EC2 achieves lower latencies to services than Lahore and Zürich.

As is clear from the figure, the EC2 view of services is far from uniform across the globe, with most being consolidated in a small number of locations: a few large circles contain most of the mass (the cubic scaling understates this concentration). Further, most services are reachable within a small latency from our EC2 nodes: most of the mass in the figure is green-yellow.

We also measured RTTs to the same Web services from two university-based nodes, one in Europe (Zürich) and another in Asia (Lahore). Measurements from Lahore were much slower, so we used a random sample of Web sites to test. As Fig. 2 shows, latency is lowest from EC2 and highest from our Asia-based client, with both median and tail latencies increasing in the order EC2, Zürich, Lahore. Thus, the median RTT from EC2 is substantially smaller than from either Zürich or Lahore. Note that both of these clients are university-hosted, and "real" end-user connectivity may be worse. We omit visualizations equivalent to Fig. 1 for Zürich and Lahore, but note that the structure of the map is nearly identical in each case; for Zürich, the map is largely red (high latency) outside of Europe, while for Lahore, it is almost entirely red. Several interesting features are evident in Fig. 2:


  • Measurements from Zürich reveal a step structure, with substantial parts of the distribution contained in some steps. These steps correspond to Web servers consolidated in data centers. We identify and point out several such "steps" in Fig. 2: Frankfurt, London, N. Virginia, and N. California (Amazon does not provide more specific location identifiers). The Frankfurt and London steps each contain a substantial share of the entire distribution's mass within a narrow latency interval.

  • The "plateau" in latencies from Zürich, where there are few measurements, is mostly due to the trans-Atlantic latency from Zürich to servers in the Americas.

  • From Lahore, most Web servers are far away. We conjecture that the step-characteristic of the Zürich measurements is absent here due to greater latency variations across longer paths.

2.1 Are these observations stable?

We also assess whether these measurements are stable, in two senses: (a) Is the large latency to some servers (the tail in Fig. 2) simply an artifact of variance which could be removed by taking the minimum across multiple measurements to each server? And (b) is the measured latency between an EC2 location and a Web server consistent across longer periods of days?

(a) Variance: It is reasonable to expect that out of several million hpings, some result in large latencies simply because some servers are temporarily overloaded, or due to transient network congestion. Thus, we also conducted repeat measurements to a sample of servers. For each latency bucket on the x-axis in Fig. 2, we randomly sampled servers for which one-shot RTTs were in that bucket, and collected measurements to each over the course of several hours. For each server, we consider the minimum of these measurements as its definitive RTT. These definitive RTTs are indeed lower than one-shot measurements, but the differences are small across the entire range of RTTs seen in Fig. 2. Thus, the tail in Fig. 2 is not just an artifact of variance.

(b) Long-term consistency over days and weeks: We created a mapping of each Web server to its closest EC2 node. After a period of one week, we repeated measurements between Web servers and their mapped EC2 nodes, and quantified the (absolute) changes in RTT across server-EC2-node pairs between these measurements. The changes are typically small, both in the median and at the tail. Further, we find that for Web servers to which the initial RTT measurement is small, this variation is even smaller. These results are not surprising: we expect that the way a service is hosted does not typically change on a weekly basis.

Figure 3: Lowering latencies further: (a) latencies from EC2, Azure, and both together; (b) beyond a small number of chosen locations, adding more EC2 and Azure locations yields small marginal gains; (c) adding measurements from a Moscow data center reduces latencies further, as does filtering out Chinese domains and Web servers geolocated to additional locations; and (d) latencies from EC2 have improved over recent months for the top domains.

2.2 Lowering latency even further

While latencies from EC2 to Web servers are small, we also investigate ways of lowering them even further.

First, we add similar measurements from Microsoft's Azure platform to compare with those from EC2. On Azure, a number of data center locations were accessible to us. As Fig. 3 shows, latencies from Azure are somewhat larger in the median, but using both platforms together (such that the reported latency is the least across all measurements for each Web site) reduces median latency a bit further.

We also attempt to quantify the contribution of additional locations to these reductions in latency. Given a candidate set of locations, and a "budget" of locations we can choose, we would like to pick the locations such that they minimize a latency objective, such as the median, average, or a high percentile. A trivial reduction from the facility location problem [34] to this problem establishes its NP-hardness. While our brute-force attempts failed even with this small number of candidate sites, we used a simple iterative heuristic: choose a subset of locations (a size for which brute-force suffices) and keep the best set of locations, discarding the rest. Repeating this procedure a few times reveals that the incremental benefit of adding locations beyond a handful is minimal. This characteristic is shown in Fig. 3. Using the locations that minimize the average, we find that the median, average, and tail latencies all remain small.
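The iterative heuristic above can be sketched as follows on a synthetic latency matrix. All names and parameters are illustrative (not the paper's implementation): we repeatedly brute-force the best k-subset within a small candidate pool that always retains the best set found so far.

```python
import itertools
import random
import statistics

def objective(lat, sites):
    """Median, over all domains, of the best latency from any chosen site.
    `lat[s][d]` is the latency from site s to domain d."""
    return statistics.median(
        min(lat[s][d] for s in sites) for d in range(len(lat[0]))
    )

def pick_sites(lat, k, pool_size=8, rounds=20, seed=0):
    """Heuristic site selection: in each round, brute-force all k-subsets
    of a small pool (the incumbent best set plus random candidates),
    keeping the best set found and discarding the rest."""
    rng = random.Random(seed)
    all_sites = list(range(len(lat)))
    best = rng.sample(all_sites, k)
    for _ in range(rounds):
        pool = set(best) | set(rng.sample(all_sites, min(pool_size, len(all_sites))))
        for combo in itertools.combinations(sorted(pool), k):
            if objective(lat, combo) < objective(lat, best):
                best = list(combo)
    return sorted(best)
```

With a pool small enough for exhaustive search per round, each round can only improve the incumbent, so a few rounds typically suffice on data with the strong redundancy our measurements exhibit.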

However, we observe that the main reason additional locations do not help is their natural redundancy – most Azure locations are near some EC2 location, while neither covers some parts of the globe. As Fig. 1 shows, the Web servers for which we see high latencies are also consolidated in or near a small number of big population centers, including Beijing, Istanbul, Johannesburg, Moscow, Shanghai, and Tehran. Each of these also corresponds to data center locations, just not ones available to us with EC2 or Azure. For instance, there are Telehouse data centers [66] in Johannesburg and Istanbul. Amazon and Azure do in fact offer locations in China, but using these requires registering a Chinese legal entity [16].

We assess the impact of adding presence at these locations in three ways: (a) adding measurements from a Moscow data center; (b) filtering out Chinese prefixes; and (c) coarsely emulating the inclusion of additional locations by geolocating high-latency servers and setting measurements corresponding to the largest locations (i.e., with most domains geolocated to those locations) to  milliseconds, in line with our observation that latencies within the vicinity of a data center are usually under  ms. Note that this emulation is likely to understate the impact of adding more locations, as we cannot account for reduction in latencies for other Web sites not in the immediate vicinity of these emulated locations.

Fig. 3 shows latencies from EC2, with the addition of measurements from the Moscow data center, and further, with the filtering of Chinese domains and other (city-level) locations. The filtered latencies are lower than those from EC2 both in the median and at the tail. Thus, adding a small number of locations in a targeted manner may offer substantial further reductions on the already small latencies we measure.

Figure 4: Mean weighted RTTs to non-origin domain servers.

2.3 Further consolidation over time

We measured the latencies from EC2 to the top domains out of the previously measured million domains again after a period of several months (in September), in order to analyze the consolidation happening over time. We limited this analysis to the top domains as we observed that less popular domains lower in the list have a high long-term churn rate, which would result in measurements to very different sets of services. Our later measurements also cover an additional, relatively new EC2 data center in Paris. Fig. 3 shows that there has been further significant consolidation during this period, even if we account for only the data centers that we considered the previous time: both median and tail latencies improved. If we take the Paris data center into account, latency has dropped further still, in the median as well as at the tail.

2.4 What about non-origin domains?

Most Web services today do not operate as monoliths, with the content of Web pages often composed of responses from many servers across many domains besides the origin. It is thus plausible that we measure small latencies to the origin, but the non-origin domains serve a large fraction of a Web page’s content and are reachable only at high latencies. We thus measure latencies to these non-origin domains and compare them to those for the origin servers.

We restrict these measurements to popular sites for which we observe small latencies for the origin. We use the top Alexa Web pages (excluding Google pages for greater diversity) for which RTTs from Azure’s US WEST 2 data center are less than ms. For each Web page, we record the number of bytes fetched from each non-origin domain and the RTT to that domain. We then compute for each page, mwRTT, the mean RTT weighted by the number of bytes across these domains.
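The per-page mwRTT is simply a bytes-weighted mean of per-domain RTTs; a minimal sketch (the function name and input shape are ours):

```python
def mean_weighted_rtt(transfers):
    """mwRTT: mean RTT across a page's non-origin domains, weighted by
    the number of bytes fetched from each. `transfers` is a list of
    (bytes_fetched, rtt_ms) pairs, one entry per non-origin domain."""
    total_bytes = sum(b for b, _ in transfers)
    if total_bytes == 0:
        return 0.0
    return sum(b * rtt for b, rtt in transfers) / total_bytes
```

For example, a page fetching 90 KB from a 10 ms domain and 10 KB from a 110 ms domain has an mwRTT of 20 ms: the weighting keeps a few small, slow fetches from dominating the metric.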

Fig. 4 shows RTTs to the origin servers and mwRTTs, both including and excluding ad servers (identified using EasyList [27], which is used by popular ad blockers). Measurements with and without ad servers filtered out are broadly similar. The median mwRTT including ad servers is a bit higher than the RTT to the origin server, but still small in absolute terms.

Summary: Our measurements and analysis reveal massive consolidation of Web services in or near a few big cloud data centers, with most services being reachable at low latencies from at least one of a handful of vantage points in these data centers.

Figure 5: A few large Web hosting providers dominate: across EC2 data centers, (a) the top hosting providers are responsible for a large share of the domains; and (b) the top providers' relative dominance varies (Frankfurt vs. Tokyo) while their aggregate share remains significant.

3 What is behind cloud consolidation?

We investigated how the primary domains, which are consolidated in or near the cloud data centers, are hosted. For each of the EC2 locations that we consider in §2, we examined the top and a random sample of domains which have small RTTs from the data center. For each EC2 site, we used whois [39] to query the organization names (orgnames) hosting the responding servers. Certain organizations have varying orgnames across domains and locations. One such example is Amazon [2], which uses the orgname "Amazon.com, Inc." as well as "Amazon Technologies, Inc."; we verified that Amazon Technologies, Inc. operates as a subsidiary of Amazon.com, Inc. [20]. We mapped such related orgnames to unique organizations for our results.
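Mapping related orgnames to a single organization can be done with light normalization plus a small alias table; the sketch below is illustrative (the alias entries are examples seeded from manual checks like the Amazon subsidiary relationship above, not the paper's full table).

```python
import re

# Illustrative alias table mapping normalized whois orgnames to one
# canonical organization; a real run would be built from manual vetting.
ALIASES = {
    "amazon com inc": "Amazon",
    "amazon technologies inc": "Amazon",
    "cloudflare inc": "Cloudflare",
}

def canonical_org(orgname):
    """Normalize a whois orgname (case, dots, commas, extra spaces) and
    map known aliases to a single organization; unknown names pass
    through in title-cased normalized form."""
    key = re.sub(r"[.,]", " ", orgname.lower())
    key = re.sub(r"\s+", " ", key).strip()
    return ALIASES.get(key, key.title())
```

Grouping whois results through such a function before computing provider shares avoids splitting one organization's share across its orgname variants.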

3.1 A few key providers per location

In Fig. 5 we plot the percentage of domains with valid orgnames hosted by the top few (and also only the top) Web hosting providers across the EC2 data centers, both for the top domains and for random domains. As is evident, the share of the most significant provider, which is consistently Cloudflare [5] across our measurements, varies across locations (with Tokyo and Sao Paulo at the extremes) for random domains close to the cloud. For top domains as well, Cloudflare is consistently the most significant provider, with its domain share varying between the Ohio and Sao Paulo extremes. If we consider the top providers for each location, the range spans Frankfurt to Mumbai for random domains, and likewise for top domains. Nowhere is the aggregate share of the top providers small. This shows that the consolidation of the Web near the public cloud is the result of a few key Web providers hosting a large chunk of these domains close to the cloud.

It is also evident from Fig. 5 that the aggregate shares of the top providers are, in general, higher for locations outside North America and Europe. We further observe that the average number of providers who host a sizable fraction of the domains tested per location is higher for North America and Europe than for the rest of the world. This implies that the shares of the top providers are lower across North America and Europe because a handful of other hosting providers are present in those locations with significant deployments in or close to the public cloud.

3.2 Provider share across EC2 sites

For each of the regions (North America, Europe, and the rest), we examine the EC2 location where the share (for random domains) of the top provider (Cloudflare) is the lowest. Fig. 5 shows the domain share of each of the top providers across these locations. In the case of Frankfurt, Cloudflare has a significantly higher share than the rest (several times the 2nd major provider); in the case of Tokyo, the shares are somewhat more uniform (closer to the 2nd major provider). However, as pointed out in the previous section, Cloudflare is consistently the dominant provider, and the aggregate share of the top providers covers most of the domains tested. Note that RIPE NCC [9] and APNIC [3] are governance entities and do not provide hosting services. The corresponding vertical bars in Fig. 5 are the result of default whois settings; excluding them and including the next top providers does not change the results significantly.

The top five Web hosting providers responsible for the bulk of the top domains under consideration across EC2 locations are Cloudflare, Amazon, Google [8], Akamai [1], and Fastly [6], in that order. For the random domains, however, Google and Amazon change positions, while the 4th and 5th positions are occupied by Automattic [4] (notable for [10]) and GoDaddy [7] respectively.

Summary: Our measurements show that a few key Web hosting providers play a significant role in the cloud consolidation of the Web that we observed. If these providers together capture an even larger percentage of the hosting market, we may observe further consolidation of the Web near the public cloud.

Figure 6: For the top 200 Web pages, the percentage of non-HTTP/2 requests per Web page varies substantially across countries at the tail. HTTP/2 adoption is highest in the US.

4 Speeding up Web evolution

Our measurement of Web consolidation in or near a few public cloud data centers opens up an interesting possibility: speeding up the evolution of Web services. We first motivate this use case (§4.1), and then discuss how Web consolidation can help address it (§4.2).

4.1 The slow-moving long tail

While the networking community is continually making significant performance improvements in the application and network stack, Web service operators beyond the industry leaders are often slow to adopt these technologies. Even universally supported technologies like using GZip for compression of Web page content are not ubiquitous [11]. Newer developments like WebP image compression [32], HTTP/2, and novel transport protocols see even less penetration. Further, we find that the adoption of new technologies varies substantially across geographies. As one instance of this, we measure the adoption of HTTP/2 across different countries. For several countries, we evaluate the Web pages most popular in that country. Fig. 6 plots the CDF of the percentage of non-HTTP/2 (i.e., HTTP/1.1 or lower) requests per Web page for various countries. For several countries, including Egypt, Pakistan, and India, a large share of even this set of most popular pages generates mostly non-HTTP/2 requests. In comparison to the US, adoption is substantially lower elsewhere.
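Per-page non-HTTP/2 shares like those in Fig. 6 can be computed from a page's request log, e.g., the protocol fields of a HAR export. The sketch below is ours; the protocol strings follow common browser reporting ('h2', 'http/1.1'), and we count HTTP/3 as "new" alongside HTTP/2.

```python
def non_h2_percentage(requests):
    """Given one page's requests as (url, protocol) pairs, return the
    percentage of requests that did not use HTTP/2 or newer. Protocol
    strings like 'h2' and 'http/1.1' follow typical HAR captures."""
    if not requests:
        return 0.0
    modern = ("h2", "http/2", "h3", "http/3")
    old = sum(1 for _, proto in requests if proto.lower() not in modern)
    return 100.0 * old / len(requests)
```

Applying this per page and taking the distribution across a country's most popular pages yields the CDFs plotted in Fig. 6.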

There is thus a large fraction of Web services which evolve slowly in terms of adopting new network and application layer advances, leading to sub-optimal performance and user experience.

4.2 Consolidation enables evolution

Inevitably, a large fraction of Web services will continue to operate with outdated and sub-optimal Web stacks far after the industry’s leading edge has deployed more efficient technologies. However, cloud consolidation of the Web can help address this issue without requiring the involvement of slow-adopter Web service operators.

Given that a small number of cloud vantage points can achieve proximity to most Web services, we can use these vantage points as proxies. Deploying new technologies at these proxies can often be nearly as effective as modifying the Web servers themselves, because at the small latencies between these proxies and Web servers, the inefficiencies from older stacks are minimal, and we can get the benefit of the new stacks’ greater efficiency over (high-latency) wide-area connectivity between the clients and proxies. We outline a simple design, Fetch, along these lines, with two ingredients: (a) client software, which directs Web service requests to Fetch proxies which are nearest to those Web services; (b) Fetch proxies that receive and fulfill client requests.

Client-software: A client needs to map any target Web service to the Fetch proxy closest to that service, and then send its request to this particular proxy. This mapping is a small file (few MB), which the client obtains from Fetch. Given the stability of the latencies between Web servers and their mapped proxies (§2.1), measurements, computations, transfers, and updates of these mappings are all infrequent (e.g., once a day). In the occasional event that mappings are out of date (indicating that the Web service has migrated to a different data center, or its replica in the mapped location is slow or unavailable), there is a performance penalty, but the content can still be obtained – the proxy simply fetches it like any other device on the Internet.
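The client-side lookup described above amounts to a dictionary keyed by domain, with a direct-fetch fallback when a service is unmapped. A minimal sketch follows; the JSON file format, proxy names, and suffix-matching policy are our assumptions, not Fetch's actual formats.

```python
import json

def load_mapping(path):
    """Load the (small, infrequently updated) domain-to-proxy mapping,
    e.g. {"example.com": "proxy-us-west.fetch.example:8443", ...}."""
    with open(path) as f:
        return json.load(f)

def proxy_for(mapping, host):
    """Return the proxy mapped to this Web service, trying successively
    shorter domain suffixes (www.shop.example.com -> example.com).
    None means: fetch directly, as for any unmapped or migrated service."""
    parts = host.lower().split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in mapping:
            return mapping[candidate]
    return None
```

The `None` fallback realizes the behavior described above: a stale mapping costs only performance, since the proxy (or the client itself) can still fetch the content like any other Internet host.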

Fetch proxies: Each Fetch proxy runs in a cloud data center. On receiving a page request, the Fetch proxy gets the content from a nearby Web server over (many) very short RTTs, potentially performs other optimizations (e.g., compression), and delivers the content to the client in a batch, minimizing the number of long client-Fetch RTTs. This can be achieved using: (a) a transport like QUIC [42], which eliminates the handshake in most cases (with Fetch, clients connect only to a handful of proxies for most requests, and thus rarely need a full handshake); (b) remembering and persisting initial window sizes between clients and proxies: the client logs the last observed TCP window size for incoming data, and sends it to the proxy in a cookie with every new request, thus letting the server start with this (usually) larger sending window; and (c) using the superior loss recovery of recent transport protocols like BBR [22].

Fetch proxies also measure latency to popular Web services, and aggregate and process these to construct the proxy to Web-service mapping.

In a real deployment, Fetch proxies can be an adaptation of any of a number of existing proxy-based solutions. The proxies may themselves be replicated and use standard load balancing. For simplicity, we refer to one cloud deployment location as one proxy throughout.

Prototype implementation: Our implementation uses a small piece of software at the client, which the browser uses as a local proxy through standard configuration mechanisms. It forwards Web requests to a suitable Fetch proxy based on the mapping discussed above, and as content is received from the Fetch proxy, it serves the content to the browser, which starts loading the page. Standard transport layer optimizations like larger window sizes [26] and handshake-free connections [53, 42] are used between the clients and the proxies. At each Fetch location, we maintain a pool of headless browsers (presently PhantomJS [37]), to which incoming client requests are assigned. Depending on load, new instances may be launched on separate virtual machines, and requests may be spread across these via a load balancer. On receiving a page request, a browser instance fetches it as usual. Any content received is transmitted in parallel to the client, even as the browser processes it.

Our goal is not to re-architect proxies, but to develop a simple way for deploying and orchestrating a small number of proxies at appropriate locations, in line with our measurements. We thus focus on a minimal implementation, relying on experience with deployed systems like Google’s Flywheel [11] for software issues related to the substantial complexity of the Web ecosystem.

5 Evaluation: how much could this help?

We evaluate Web page performance with Fetch, using (known) transport optimizations, batching responses from the Fetch proxies to the clients to minimize round-trips, and with and without compression. We use emulation to sweep through a large space of network configurations, and also present a smaller set of results using vantage points on the Internet.

To be able to control network characteristics tightly, we test pages whose server latencies are negligible to begin with, as we can then easily evaluate the impact of additional latency. We thus deploy a Fetch proxy in Azure US WEST 2, and identify the most popular HTTP sites (we discuss HTTPS in §7.3) reachable within  ms latency from this location. The Fetch proxy uses  cores and  GB memory, and runs PhantomJS [37]. For automating Web page loads and recording performance metrics, we use [61].

We measured several metrics, including page load time (PLT), time to first paint, last visual change, speed index, and time for visual completion (viz, indicative of when most of the visual content is populated). While we observed large differences in the absolute values of these metrics, our interest is in the percentage improvement of these metrics when loading pages through Fetch compared to today’s default. We find that metrics other than PLT and last visual change show closely consistent results, while PLT and last visual change show somewhat smaller improvements. This is due to the dependence of PLT and last visual change on ad content (e.g., when the network RTT is  ms, enabling and disabling ad blocking changes median PLT by , while there is no change in viz), which requires multiple round-trips even with Fetch (due to ad requests being generated during rendering). Thus, the rest of our evaluation uses viz.

5.1 Performance on emulated networks

We deploy the Fetch proxy and the client within the same Azure US WEST 2 data center, thus ensuring that all latencies involved (proxy-server, client-proxy, and client-server) are negligible, and bandwidth can be controlled at the client. This allows us to vary network conditions and measure differences in page loads with and without Fetch. Page loads without Fetch are referred to as the default case. In keeping with our observation about being able to achieve close proximity between proxies and servers (§2), we always set the client-proxy latency to be the same as the client-server latency, with the proxy-server latency remaining as is (i.e., without any manipulation). The client runs the Chrome browser on Ubuntu virtual machines with  cores and  GB memory.

Figure 7: With bandwidth fixed at  Mbps, as client-server latency increases, the default case shows a very large linear increase in viz time. Using Fetch, the dependence on the network RTT is substantially reduced, resulting in a large improvement for clients with large latency to the server.

The impact of latency: Fig. 7 shows the median viz time across pages, with increasing client-server RTT, bandwidth fixed at  Mbps, and with no (added) loss. Results for other bandwidth settings show similar trends. In the default case, as the RTT increases, viz time shows a large linear increase. In contrast, Fetch diminishes this dependence, showing much smaller increases in viz time. In the absence of significant loss, the two transport variants tested with Fetch achieve similar results. We find that bandwidth plays a much more limited role beyond a few Mbps (in line with expectations, hence results excluded). We also test the default from a host with more resources ( cores and  GB memory), but this does not reduce viz time substantially.

For clients with large latencies to Web servers, Fetch can provide a large speedup — at  ms latency to the Web server, Fetch loads pages ( s) faster. Such large latencies are in fact typical in many parts of the World, as we discuss later in §6.1.

An interesting by-product of our investigation is the dependence of page load times on client-server RTTs. The most frequently cited work on the impact of increasing RTT is Mike Belshe’s measurement of popular Web pages [17]. Others have also quantified the relationship between measured last-mile latency and page load times [65, 19] over small numbers of pages (less than ). We provide fresh measurements of this using our setup, which allows tight control of latency starting from nearly zero (under  ms). Note that this kind of measurement is only made possible by our measurement observation in §2, with other setups starting from the already significant latencies they observe. While record-and-replay tools could also be used to produce such results, they often add significant inaccuracy (e.g., in the median with MahiMahi [50]). Fig. 8 shows for each of the pages tested (which are all within  ms from our client), how viz increases nearly in linear fashion with RTT (with bandwidth fixed to  Mbps). The regression-based best-fit (over the medians at each RTT value, with RTTs being in seconds) is:


Thus, for every  ms of increase in RTT, (median) load time increases by  ms. (Faster compute affects the constant in that equation but not the linear factor.) Of course, there is substantial variation across pages, as shown by the individual lines in the plot, with some pages incurring substantially more RTTs than others.
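The regression over per-RTT medians can be reproduced with ordinary least squares. A sketch (the sample data in the test is synthetic; the paper's measured coefficients are not reproduced here):

```python
# Least-squares fit of viz = intercept + slope * RTT over the per-RTT
# medians, mirroring the regression described in the text.
from statistics import median

def fit_viz_vs_rtt(samples):
    """samples: {rtt_seconds: [viz times across pages at that RTT]}.
    Returns (intercept, slope) of the best-fit line through the
    per-RTT medians."""
    xs = sorted(samples)
    ys = [median(samples[x]) for x in xs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope
```

The slope directly gives the added load time per unit of RTT; comparing slopes for default and Fetch page loads quantifies how much Fetch reduces the RTT dependence.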

A similar regression-fit with Fetch results in:


Thus, Fetch’s batching of results, together with optimized transport, results in a substantially smaller dependence on RTTs, although it incurs some additional overhead for the processing at the on-path proxy.

Figure 8: As client-server RTT increases, load time increases linearly, with every  ms of RTT increase adding  ms in the median. The individual dashed, light lines are for individual pages, and also show the linearity, albeit with variations for some pages, and with different slopes. The solid line represents the medians.
Figure 9: With loss, Fetch performs worse than the default with TCP Cubic, but better with BBR between the Fetch proxy and the client.

The impact of loss: Recent measurements from Google show that packet loss is also typically higher for connections with higher RTTs [42], so merely achieving improvements in settings with high RTTs but no loss is uninteresting. We thus simulate random network loss by traffic shaping at the client using Linux tc. For increasing RTTs, we additionally impose losses in line with Google’s reported loss rates at those RTTs.
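The shaping step can be scripted. A sketch that builds the corresponding tc netem invocations (the interface name and parameter values are placeholders, not the paper's settings; the commands require root privileges to run):

```python
def tc_netem_cmds(iface, delay_ms, loss_pct):
    """Build Linux tc commands that add symmetric delay and uniform
    random loss on an interface via the netem qdisc. The first command
    clears any existing root qdisc (it errors harmlessly if none is
    installed)."""
    return [
        f"tc qdisc del dev {iface} root",
        f"tc qdisc add dev {iface} root netem "
        f"delay {delay_ms}ms loss {loss_pct}%",
    ]
```

In an experiment harness, these strings would be passed to a shell (e.g., via subprocess) before each page-load run, with delay and loss swept over the desired configurations.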

Fig. 9 shows median viz time across pages for different (RTT, loss) configurations. Here, using more sophisticated congestion control makes a large difference. With TCP Cubic as the congestion control mechanism between the server and the client, Fetch performs worse than the default, as our current implementation uses a single TCP connection compared to the default’s multiple connections. Nevertheless, when the BBR congestion control mechanism is used between the Fetch proxy and the client, Fetch performs substantially better – with  ms latency and  loss, Fetch with BBR is  faster than the default. For the other three configurations, these improvements are even larger.

Figure 10: Performance of Fetch with BBR congestion control under uniform random loss of  and under GE loss with different parameter sets that correspond to the same  loss rate. RTT is fixed to  ms.

While the results in Fig. 9 use a uniform random loss model, we also tested performance using a bursty Gilbert-Elliott (GE) loss model [35]. To compare performance under uniform random loss and GE loss, we identify GE parameter sets which result in the same average loss rate of . The GE model uses the parameters , , and , which we set (in the same order) as follows for our configurations: set : {, , , }; set : {, , , }; set : {, , , }; and set : {, , , }. Fig. 10 shows no significant differences between the two loss models – BBR is robust to both. The rest of our experiments use BBR between the Fetch proxies and the clients.
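The GE model itself is a two-state Markov chain — a Good and a Bad state with per-state delivery probabilities — and matching its average loss rate to a uniform model reduces to a closed-form stationary calculation. A sketch (parameter values in the test are illustrative, not the paper's sets):

```python
# Gilbert-Elliott bursty loss: two states (Good/Bad) with transition
# probabilities p = P(Good->Bad) and r = P(Bad->Good), and delivery
# probabilities k (in Good) and h (in Bad).
import random

def gilbert_elliott(n, p, r, k, h, seed=0):
    """Simulate n packets; return the observed loss fraction."""
    rng = random.Random(seed)
    good, lost = True, 0
    for _ in range(n):
        deliver_p = k if good else h
        if rng.random() >= deliver_p:
            lost += 1
        # State transition for the next packet.
        if good:
            good = rng.random() >= p   # stay Good with prob 1 - p
        else:
            good = rng.random() < r    # recover with prob r
    return lost / n

def expected_loss(p, r, k, h):
    """Stationary average loss rate: per-state loss weighted by the
    chain's stationary occupancy (pi_bad = p / (p + r))."""
    pi_bad = p / (p + r)
    return (1 - k) * (1 - pi_bad) + (1 - h) * pi_bad
```

To build parameter sets with equal average loss, one fixes the target rate and solves `expected_loss(p, r, k, h) = target` for one free parameter while varying the others, trading burst length against in-burst loss density.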

Figure 11: Performance of Fetch is comparable to that of Flywheel in the median even without critical optimizations like caching compressed content. A Fetch node which deploys both compression and caching is  faster in the median compared to Google’s Flywheel.

Compression and Flywheel: Google’s Flywheel [11] uses a proxy-based approach to compress Web page resources with GZip and WebP as appropriate before transmitting them to clients. While Flywheel is targeted at mobile devices, the same functionality is also made available by Google for the desktop Chrome browser in the form of the “Data Saver” extension [31].

We evaluate the same Web pages with Flywheel, with Fetch, and with neither. For Fetch, we test two additional variants: (a) with WebP compression for images using the webp-imageio library [43] and GZip compression for text content on the fly; and (b) with compression and caching at the Fetch proxy, such that cached resources are not compressed on the fly. Flywheel, being a compression proxy, is most useful in low-bandwidth settings, so we first discuss results for a client with  Mbps bandwidth (and a fixed network RTT of  ms).

To make the effect of compression comparable, we first loaded several pages with and without Flywheel, storing their default images and Flywheel’s WebP versions of the same images, and set parameters for our compression to match Flywheel’s resulting images. For the Fetch variant with caching, we use a hashmap-based in-memory key-value store to cache the compressed content. To analyze the effect of caching, we load each page with this Fetch node twice (clearing the client-side cache, but populating the Fetch cache) and consider every second page load. (We realize that this is the best-case caching result; but running the comparison over popular pages implies that Flywheel also gets the same or similar benefit, as they also cache compressed versions of resources for popular pages [11].)
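The compress-and-cache variant can be sketched as below. This only illustrates the caching structure: gzip stands in for the text path, the WebP image path is omitted, and the key scheme is our assumption:

```python
# In-memory cache of compressed resources: repeat requests for the same
# content skip on-the-fly compression, keeping the compute cost off the
# critical path (the variant the evaluation found fastest).
import gzip
from hashlib import sha256

class CompressionCache:
    def __init__(self):
        self.store = {}  # (url, content hash) -> compressed bytes

    def get(self, url, body):
        # Keying on the content hash re-compresses if the body changes.
        key = (url, sha256(body).hexdigest())
        if key not in self.store:
            self.store[key] = gzip.compress(body)
        return self.store[key]
```

On a cache hit the stored bytes are returned directly; only the first request for a resource pays the compression cost.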

As Fig. 11 shows, Flywheel’s compression makes it faster than the default case in this bandwidth-constrained setting. Fetch without compression is faster than Flywheel in the median, but is slower at higher percentiles. At this constrained bandwidth, the lack of compression hurts Fetch’s load times, particularly for large image-rich pages. Interestingly, adding on-the-fly compression actually makes Fetch slower in the median, because this incurs significant compute time on the critical path, eliminating the advantage from the faster transfer of compressed resources. As expected, the Fetch variant with both compression and caching is substantially faster: faster in the median compared to Google’s Flywheel. In high-bandwidth settings, even Fetch without compression and caching is faster than Flywheel – at  Mbps, faster in the median (plot omitted).

Note that Flywheel benefits from Google’s extensive data center and networking infrastructure; and as is typical across proxy-based designs so far, routes client requests to locations nearest to them. In contrast, Fetch only depends on a small number of public cloud virtual machines. Achieving proximity to a large number of clients (for near-client proxies) is fundamentally more challenging than achieving proximity to most Web servers (like for Fetch).

5.2 Performance across the Internet

Figure 12: Fetch vs. default, for: (a) a client in the US, (b) clients in Europe; and (c) clients in Asia.

Apart from our sweep across many emulated network configurations, we also include results on a small evaluation across real Internet connections.

We run experiments from vantage points obtained informally through friends. To side-step privacy concerns, we still test the same pages as in §5.1, instead of using real browsing data. Our clients load the same pages with and without Fetch across their varied network connectivity. Fig. 12 shows the CDFs of the viz times from the different locations. For each of the clients in Asia (Fig. 12(c)), due to their high latency to the Web servers, Fetch clearly outperforms the default, with median improvements between  and . For the well-connected clients in the US (Fig. 12(a)) and Europe (Fig. 12(b)), Fetch is comparable to or better than the default in the median, but slower at higher percentiles.

Figure 13: Although the median RTT from the European city of Fig. 12(b) to the Web services is  ms, the mwRTT is still low:  ms in the median.

To understand performance differences at higher percentiles, we examined RTTs and mwRTTs (as in §2.4) for one European client. Fig. 13 shows the results. While the median RTT is indeed large (as most of these Web servers are in the US) at  ms, the mwRTT is only  ms, with  of bytes served from replicas somewhere closer than the US. Thus, many resources in this setting are fetched over small network RTTs, negating the advantage of Fetch, which is larger when more data is fetched over high-latency paths. As noted in §5.1, large populations of clients are in such regimes (e.g., with high last-mile latency).

6 Competitiveness with a CDN?

Content distribution networks are a well-established way of speeding up Web services today. While CDN usage is far from ubiquitous, we would nevertheless like to understand how Web page acceleration with Fetch would compare to CDN-based acceleration.

6.1 The limitations of CDNs

Content distribution networks build worldwide infrastructures to establish proximity between content and users, and thus allow users to reach their services at lower latencies. However, they are far from a catch-all solution for several reasons:

Content coverage: Due to their expense, CDNs are estimated to be used by under  of the top  most popular Web sites, with usage dropping with declining popularity [30]. Further, most CDN-enabled Web sites today do not use CDNs for full-site delivery – often, the initial HTML, as well as any dynamic or personalized content, must still be fetched from the content originator’s servers, which can incur a large latency.

Last-mile latency: Last-mile latencies in many parts of the World are extremely poor. For instance, in Pakistan, last-mile latencies for  out of  measured providers are around  ms in the median, and average latency to M-Lab servers within the same city exceeds  ms [15]. While not strictly last-mile, median latencies to the nearest M-Lab servers are  ms in Africa, Asia, South America, and Oceania [38], even though there are several M-Lab servers in most of these geographies. In Asia, even the fastest of the measured CDNs required an average of  sec to deliver a small (latency-bounded)  KB object, compared to under  sec in Europe and North America [24]. On mobile networks, last-mile latencies will likely continue to be poor in many parts of the World for several years, as the HSPA/HSPA+ standards have been or are being deployed, with latencies on the order of  ms [33]. For users with such large last-mile latencies, even latency to geographically nearby CDN servers can exceed  milliseconds, and the many RTTs needed to fetch Web content from these servers can still add up to large page load times and poor user experience.

Figure 14: Akamai’s extensive, constantly growing footprint.

Complex, expensive infrastructure: CDNs depend on a large, expensive, and constantly spreading infrastructural footprint. While such networks do not release information about their growth over time, we were able to gather some (approximate) data points using the “Internet Wayback Machine” [41] on Akamai’s public page disclosing their network’s size [12]. As shown in Fig. 14, on average, Akamai adds  servers per month. In the same period, the number of networks to which they connect increased from  to . Relatedly, provisioning CDN infrastructure is a challenging problem because of the need to forecast demand before deploying hardware. When provisioning fails to handle content demand, CDNs must redirect users to farther-away servers.

The above challenges stem from a fundamental issue: establishing low-latency connectivity covering most users is made difficult and expensive by the spread of users and the diversity of their last-mile connectivity. In contrast, as our measurements show (§2), achieving proximity to most Web servers is extremely easy and cheap.

Figure 15: With client-server latency set to  ms, we emulate results for possible uses of a CDN, under increasing client-CDN latency: with the CDN serving only images; serving a random  of requests; and serving all requests.

6.2 CDN vs. Fetch acceleration

Our results in Fig. 7 comparing Fetch-based Web page delivery to the default show substantial performance improvements. However, these results were obtained by adding additional latency to all requests from a client to emulate large client-server latency. In a CDN-supported Web service, this is not a realistic comparison – CDNs make extensive efforts to place their servers closer to clients, and resources fetched from CDNs may thus be delivered over much smaller RTTs. This is already evident from our results in Fig. 12 and Fig. 13, with the latter showing the median mwRTT to be significantly smaller than the median RTT to Web servers. Hence, we next model how fetching Web pages using Fetch compares to fetching pages supported by a CDN.

We emulate different extents of CDN support, with the CDN used for: (a) fetching only images (the byte-dominant part of most Web pages – for Flywheel’s deployment, images comprised  of the bytes in pages served [11]); (b) fulfilling a random  of requests; and (c) fulfilling all requests. The rationale for evaluating options (a) and (b) stems from our conversations with a large CDN operator, which indicate that most of their customers do not use their services for full-site delivery, instead offloading only parts of it. For these experiments, we add smaller amounts of latency to requests fulfilled through the CDN than to other requests, thus modeling a smaller client-CDN latency and a larger client-server latency.

Results in Fig. 15 show the dependence of visual completion time on the client-CDN latency (with client-server latency fixed to  ms). As expected, the more the content served through the CDN, the better the performance. However, as the client-CDN latency increases, performance degrades quickly. In contrast, as observed earlier, Fetch effectively reduces the dependence on the network RTT, thus gaining an advantage when RTTs are larger. Thus, particularly for users with large last-mile latency, Fetch would continue to have a performance advantage over CDN-based delivery.

A big caveat to these results is that they do not account for the possibility that CDNs, like Fetch, themselves may push towards delivering batched data in minimal RTTs. CDNs do already use more aggressive transport than most Web servers, with some using congestion window sizes as large as packets [23] compared to the Linux default of . Likewise, Akamai is already starting to use QUIC [57]. It should be rather obvious that if a CDN performs full-site delivery, and uses the same optimizations as a proxy (for both networking and resource prioritization), then it will outperform the proxy, by virtue of incurring a shorter RTT. In the following, we use our experimental data to build a simple model to explore a scenario where CDNs use the same aggressive optimizations towards batching and better transport as Fetch.

We construct a simple, linear model of how the involved round-trip latencies — client-server (), client-Fetch (), and client-CDN () — impact Web performance using CDNs and Fetch. For simplicity, drawing on our measurements, we assume that Fetch-server latency is nearly zero, i.e., . We also assume that enough bandwidth is available – in our experiments, beyond a few Mbps, bandwidth changes had only a small impact on performance. We use the  Mbps data to build our model.

The regression-based best fits for viz for default page loads () and Fetch-supported page loads () were already discussed in §5.1 (Eqn. 1 and 2). Thus, Fetch depends much less on network RTT (less than ) than the default (). However, the addition of an on-path proxy incurs some additional overhead in the form of the larger constant for Fetch ( sec vs. for default). Based on this network vs. compute breakdown, we can construct two hypotheticals – a well-tuned Fetch with near-zero processing overhead (Fetch*), and a well-tuned CDN with Fetch-like dependence on the network (CDN*) as follows:


Fetch* incurs , where is the last-mile latency, which the CDN incurs. Suppose for some . We can then estimate the performance of the CDN and Fetch using this model across different values of and . Fig. 16 shows these estimates for Fetch (normalized to performance as measured from the Web ecosystem today), and for Fetch* (normalized to estimated performance for CDN*). As our experimental results already showed, Fetch achieves substantial performance benefits over default page fetches today (with normalized performance values under ). Further, this advantage improves as last mile latency increases.

Of greater interest here is the comparison to an ideal CDN*. As noted above, Fetch* will achieve performance lower than CDN*, but the difference is not very large. In particular, for small , performance is roughly  worse than CDN*. Note that a small  can be easy to achieve with relatively little replication (effectively, a “CDN-lite”), because it does not require presence at the edge. (Recall that  is the additional latency to a Fetch proxy beyond the last-mile.) Further, performance relative to CDN* improves (i.e., becomes closer to  in Fig. 16) as the last-mile latency increases – in such regimes, the advantage of CDN* is reduced by the last-mile latency forming a larger fraction of the client-server RTT. Lastly, note that this model assumes the CDN does not incur any round-trips to the server, which is unlikely to be the case for dynamic, personalized content. Thus, a simple design based on a small number of worldwide sites can not only compete effectively with today’s Web ecosystem, but also come within striking distance of a well-tuned CDN doing full-site delivery and aggressive network optimization.
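One plausible way to write the model out explicitly is the following; the symbol names ($c$ for the compute constant, $m$ for the RTT slope, $l$ for last-mile latency, $\delta$ for the extra latency to the Fetch proxy) and the choice of constants for the hypotheticals are ours, since the fitted coefficients are not reproduced above:

```latex
% Linear fits from §5.1, with t the relevant client-side RTT:
\begin{align*}
  D(t) &= c_D + m_D\, t, &
  F(t) &= c_F + m_F\, t, \qquad m_F \ll m_D,\\
  \mathit{Fetch}^{*}(t) &= c_D + m_F\, t, &
  \mathit{CDN}^{*}(t) &= c_D + m_F\, t,
\end{align*}
% with Fetch* evaluated at t = l + \delta and CDN* at t = l,
% so the relative gap between them is
\begin{equation*}
  \frac{\mathit{Fetch}^{*}(l+\delta) - \mathit{CDN}^{*}(l)}
       {\mathit{CDN}^{*}(l)}
  \;=\; \frac{m_F\,\delta}{c_D + m_F\, l},
\end{equation*}
% which shrinks as the last-mile latency l grows.
```

This makes the two qualitative claims above mechanical: the gap is proportional to $\delta$, and for a fixed $\delta$ it falls as $l$ dominates the denominator.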

Figure 16: As last-mile latency dominates, Fetch’s relative performance improves.

7 Towards realizing Fetch

While we already have a prototype implementation, it is worth discussing what it would take to deploy Fetch and make it widely available.

7.1 Wouldn’t Fetch be very expensive?

Figure 17: A small number of concurrent clients do not substantially deteriorate a Fetch instance’s performance. Client-Fetch latency =  ms; bandwidth =  Mbps.

We present a coarse per-user cost analysis based on our (first-cut) Fetch proxy implementation running on the same hardware as for our experiments in §5.

The average page size across our test pages is  MB, and requires  seconds of service time at the Fetch proxy, computed as the time from when the proxy receives a request to when it finishes processing it. The costs (averaged across Azure sites for the D4s v3 instances we used) of compute and networking are per hour and per GB [46]. Using an estimate of , Web pages per month fetched per user [67], the monthly costs of network traffic and compute per user are $ and $ respectively, coming to a total of per user per month.

Further, a single D4s v3 instance can concurrently support several users’ requests without much performance degradation. We vary the number of clients fetching the same set of Web pages simultaneously through a single instance, without any caching. Each client also shuffles the order in which it fetches the set of Web pages to avoid server-side overheads from (near) simultaneous arrival of these requests. As Fig. 17 shows, performance does not deteriorate for up to  clients. Adjusting the compute costs (the network component remaining unchanged) for  concurrent clients brings the total cost per user per month down to under . Admittedly, this cost estimate assumes full system utilization, while in practice demand can be expected to vary over time, necessarily leading to some inefficiencies. However, in the cloud environment, it is fairly easy to adjust compute resources to handle demand fluctuations.
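The per-user arithmetic above can be captured in a small helper; all input values in the test are placeholders, not the paper's measured figures:

```python
def monthly_cost_per_user(pages_per_month, page_mb, service_seconds,
                          price_per_gb, price_per_hour,
                          concurrent_users=1):
    """Back-of-the-envelope per-user monthly cost, mirroring the §7.1
    breakdown: network cost scales with bytes served, compute cost with
    proxy service time, amortized over concurrent users."""
    network = pages_per_month * page_mb / 1024 * price_per_gb
    compute_hours = pages_per_month * service_seconds / 3600
    compute = compute_hours * price_per_hour / concurrent_users
    return network + compute
```

With the concurrency term set to 1, the helper reproduces the single-user estimate; raising it shows how shared instances amortize the compute component while the network component stays fixed.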

Our estimate is based on an unoptimized proxy loading pages to completion. Recent work has already demonstrated that such proxies can be made more efficient, e.g., by not performing rendering [69], and not executing all scripts [63]. Thus, an operational Fetch system could likely be run at much lower cost.

7.2 Who runs Fetch?

The lowest-hanging fruit is to use the measurement observation behind Fetch’s design in existing proxy-based systems. For instance, Google’s Flywheel [11] and Facebook’s Free Basics [29] could be made more efficient simply by placing the proxies involved nearer to the servers. For Free Basics, recent work [60] reveals that each Web request uses (in addition to a proxy near the client) a proxy that interacts with the target Web service; this proxy is located in one of two Facebook data centers (Luleå, Sweden and Prineville, USA). By using a small number of proxies in the right locations, together with simple, known transport optimizations, Web performance in such deployments could be improved substantially.

Beyond the above simplest deployment model, Fetch could essentially be operated by any independent third party. For instance, a browser extension could incorporate the client-side logic, with its operator running the proxies. The cost could be recouped via ads or a subscription fee for users (note that  per month is many times smaller than the price differential between bandwidth tiers in most of the World). This is operationally similar to operating a VPN service (except that here the proxies do not merely forward traffic). In principle, even individual users could operate such a service for themselves. At present, running (sparsely used) proxies full-time at several locations would be cost prohibitive, but as cloud billing aligns closer to actual usage, this could become plausible.

While a low-expense or ad-based service that speeds up the Web could be attractive to many users, this line of thought moves the burden of improving Web performance from Web service providers to users. This is potentially necessary to address the issue of slow-evolving Web service providers with inefficient services, which frustrate users. Another possibility, for service providers looking for cheaper alternatives to CDNs, is to make entry into the Fetch mappings paid. With this model, Fetch only accelerates Web services that incur a (potentially per-request) fee for being served over Fetch. Effectively, this involves deploying your Web service in a major cloud data center (or replicating in a small number, per §6.2) and then outsourcing your reverse proxy for Fetch to run.

7.3 Security & trust

While our present prototype has only been tested with HTTP, the Web is rapidly moving to HTTPS. This presents a challenge for all proxy-based architectures, which fundamentally operate using network middlemen. In fact, even with CDNs, trust is not truly end-to-end; rather, in a majority of deployments, Web service operators trust CDNs with their private keys [21]. Fetch can make a similar compromise, but at the client side, with the client having to trust Fetch – the client uses a secure connection to Fetch, which uses a new secure connection to Web services. This is precisely how Free Basics supports HTTPS today [28].

In the long-term, this solution is unsatisfactory. One could further ensure that client data is not seen even by a Fetch operator, using solutions like mbTLS [47], running over private computing platforms like Intel SGX [40].

7.4 Further improvements

We have only explored a deliberately simple design for Fetch here, but there is significant potential for further improvement. For instance, in computing the mapping between Web services and Fetch proxies, we always pick only the strictly lowest-latency pairings. This can reduce Fetch’s benefits in some scenarios. Consider an example service that is replicated in two locations, one in Europe () and another in the US (). Even if the - latency is larger than - latency by under a millisecond, European clients of Fetch will connect to through . This can be addressed by including multiple candidate proxies for each service together with their measured latencies in the mappings shared with clients, with client-side software deciding which mappings to use. For a client’s most frequently visited services, it should even be possible to learn which Fetch proxy ultimately leads to the best performance. We also expect large improvements from minimizing processing time at the Fetch proxies.
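The multi-candidate improvement amounts to letting the client minimize the full client→proxy→service path rather than just the proxy→service leg. A sketch (the data layout is our assumption):

```python
def pick_proxy(candidates, client_rtt):
    """candidates: {proxy: proxy-to-service RTT in ms}, from a mapping
    that carries several candidate proxies per service along with their
    measured latencies. client_rtt: the client's own measured RTT to
    each proxy. Choosing by total path length avoids routing a European
    client through a US proxy just because the US proxy is marginally
    closer to a US replica."""
    return min(candidates,
               key=lambda p: client_rtt.get(p, float("inf"))
                             + candidates[p])
```

In the replicated-service example above, even if the European proxy's latency to the European replica is slightly worse than the US pairing, the European client's small RTT to the European proxy makes the total path shorter, so it is chosen.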

There are also potential provider-side issues that could arise, e.g., Fetch can redirect traffic for replicated services to one location (or a small number of locations with the above described improvement), causing load-balancing problems. At minimum, large providers already using such replication can opt-out of their service being proxied through Fetch.

Summary: We believe existing proxy-based solutions need only minor changes to benefit from our results. We focused our work on minimizing the proxy-server latency, and on making the client-proxy interaction as efficient as possible with today’s cutting-edge networking techniques, while remaining open to the easy adoption of other novel methods as they become available. For the substantial software challenges of making proxies work transparently and securely, and the impact of such proxies on the traffic seen by Web servers, we defer to past work like Flywheel and Free Basics, which has tackled many such problems.

8 Related work

Measurement work: Prior work [36, 54, 68] evaluated the use of cloud services like Amazon and Azure by popular Web sites using IP matching, finding that of Alexa’s top million domains use Amazon EC2 or Microsoft Azure. Our results quantify latency to popular services directly from these platforms, showing that even services not hosted directly within Amazon or Microsoft data centers are hosted (or at least replicated) nearby.

Protocols and server-side enhancements: Our measurement work reveals a vector for deploying a variety of enhancements, as we demonstrate by using BBR [22]. We also leverage known techniques such as larger TCP window sizes [26] and persistent connections [53].

Other work [69, 55, 48] has observed that Web pages involve complex compute and network dependencies, and can be sped up by re-ordering edges in this dependency graph and/or optimizing bottlenecks. However, this work depends on adoption by Web site operators.

Cloud-assisted browsers: Using proxies to speed up Web browsing is not new. Opera’s Mini/Turbo [52, 51] and Google’s Flywheel [11] are compression proxies. Amazon Silk [14] offloads processing from thin clients to well-provisioned servers in the cloud. These designs preserve resources at clients, but their impact on latency is not consistently positive, as Sivakumar et al. [62] show in their analysis of a popular cloud-assisted browser. We compare our approach to Flywheel in §5.1.

WebPro [59], PARCEL [64], and Cumulus [49] invoke proxies to batch data fetched from Web servers before delivering it to the clients. However, a single proxy as envisioned in these systems (“a well-provisioned cloud server” in Cumulus [49]; a proxy “implemented on a powerful server” in PARCEL [64]) does not realize the full potential of a proxy-based approach. Unlike other past work, Shandian [69] does indeed suggest that their proxies be placed near Web servers. However, their suggested approach (i.e., the Web service operators co-locate Shandian with their reverse-proxies) requires action from Web service operators. Complementary to this work, our focus is on showing that: (a) a small number of cloud proxies suffice to achieve proximity to most Web services, and (b) these can be easily exploited to build an immediately deployable system, without the cooperation of Web service operators.

Lastly, unlike any past work, we also evaluate the competitiveness of Fetch against CDNs, showing that it can achieve performance close to that of an idealized CDN.

Workshop paper: In our own preliminary work [18], we presented a small measurement on server consolidation, and a brief result on potential PLT reduction. We extend that work in several substantial ways here: (a) with extensive measurements covering more servers, visualization, and new analysis on how latencies could be further reduced; (b) a design and implementation evaluated across a variety of network conditions and metrics; (c) adding results on transport and compression; (d) modeling CDN- and Fetch-based delivery and comparing to an idealized CDN; and (e) a tighter cost analysis.

9 Conclusion

We present measurements showing the consolidation of Web services in or near a small number of cloud data centers, with only 12 points of presence in these data centers needed to achieve a median latency of 13 ms across the top million most popular Web pages. We also discuss the potential of a small number of additional sites to lower this collective latency even further.

Based on these measurements, we describe and evaluate the design of a simple proxy-based approach, Fetch, for speeding up the Web today, as well as enabling faster evolution of the Web in the future. We show that Fetch can improve Web performance by more than 50% for users with large latencies to Web servers. Using our experimental data, we also build a simple model for the dependence of Web performance on client-server and last-mile latencies, and show that Fetch can achieve results only modestly worse than a well-optimized content delivery network serving the same pages. Fetch flips the CDN-based model of Web page delivery upside down, establishing user presence near the Web servers, which is substantially easier to achieve and manage than the other way around.
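The dependence of page load time on client-server latency can be illustrated with a back-of-the-envelope sketch (the RTT counts and the purely sequential-fetch structure below are simplifying assumptions for illustration, not the paper's fitted model):

```python
def direct_plt_ms(client_server_rtt: float, sequential_fetches: int) -> float:
    """Client walks the page's dependency chain over the wide-area path."""
    return sequential_fetches * client_server_rtt


def fetch_plt_ms(client_proxy_rtt: float, proxy_server_rtt: float,
                 sequential_fetches: int, delivery_rtts: int = 2) -> float:
    """A proxy near the servers walks the dependency chain over short RTTs,
    then ships the assembled page to the client in a few wide-area RTTs."""
    return sequential_fetches * proxy_server_rtt + delivery_rtts * client_proxy_rtt


# A client 150 ms from the servers, with a proxy 5 ms from them:
direct = direct_plt_ms(150.0, sequential_fetches=10)        # 1500 ms
proxied = fetch_plt_ms(150.0, 5.0, sequential_fetches=10)   # 350 ms
```

The toy model makes the intuition concrete: because the dependency chain's many round trips are paid over the short proxy-server path, the wide-area latency is incurred only a constant number of times, so the benefit grows with client-server distance.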

Nevertheless, there is still substantial work required to address trust, security, and the interplay with service providers, and to explore opportunities to further optimize Fetch. To solicit help in this endeavor, our code and measurement data will be released with the paper.

10 Acknowledgments

We thank Balakrishnan Chandrasekaran for his help with geolocation. We also appreciate the generous support from Amazon EC2 and Microsoft Azure in the form of their cloud credits programs – “AWS Cloud Credits for Research” and “Microsoft Azure Research Award”, respectively – which enabled us to conduct large-scale measurements and experiments.


  • [1] Akamai: Cloud Delivery, Performance, and Security. [Online; accessed 05-Oct-2018].
  • [2] Amazon (company). [Online; accessed 05-Oct-2018].
  • [3] APNIC. [Online; accessed 21-Feb-2019].
  • [4] Automattic. [Online; accessed 05-Oct-2018].
  • [5] Cloudflare. [Online; accessed 05-Oct-2018].
  • [6] Fastly. [Online; accessed 05-Oct-2018].
  • [7] GoDaddy. [Online; accessed 11-Oct-2018].
  • [8] Google. [Online; accessed 05-Oct-2018].
  • [9] RIPE NCC. [Online; accessed 21-Feb-2019].
  • [10] [Online; accessed 05-Oct-2018].
  • [11] V. Agababov, M. Buettner, V. Chudnovsky, M. Cogan, B. Greenstein, S. McDaniel, M. Piatek, C. Scott, M. Welsh, and B. Yin. Flywheel: Google’s Data Compression Proxy for the Mobile Web. USENIX NSDI, 2015.
  • [12] Akamai. Facts & Figures. [Online; accessed 25-May-2018].
  • [13] Amazon Web Services, Inc. Alexa Top Sites. [Online; accessed 25-May-2018].
  • [14] Amazon Web Services, Inc. Amazon Silk Documentation. [Online; accessed 25-May-2018].
  • [15] M. F. Awan, T. Ahmad, S. Qaisar, N. Feamster, and S. Sundaresan. Measuring broadband access network performance in Pakistan: A comparative study. IEEE LCN Workshops, 2015.
  • [16] AWS (China). AWS (China) FAQs. [Online; accessed 25-May-2018].
  • [17] M. Belshe. More bandwidth doesn’t matter (much). Google Inc, 2010.
  • [18] D. Bhattacherjee, M. Tirmazi, and A. Singla. A Cloud-based Content Gathering Network. USENIX HotCloud, 2017.
  • [19] Z. S. Bischof, J. S. Otto, and F. E. Bustamante. Up, down and around the stack: ISP characterization from network intensive applications. ACM SIGCOMM W-MUST, 2012.
  • [20] Bloomberg. Company Overview of Amazon Technologies, Inc. [Online; accessed 11-Oct-2018].
  • [21] F. Cangialosi, T. Chung, D. Choffnes, D. Levin, B. M. Maggs, A. Mislove, and C. Wilson. Measurement and Analysis of Private Key Sharing in the HTTPS Ecosystem. ACM CCS, 2016.
  • [22] N. Cardwell, Y. Cheng, C. S. Gunn, S. H. Yeganeh, et al. BBR: congestion-based congestion control. Communications of the ACM, 60(2):58–66, 2017.
  • [23] CDNPlanet. Initcwnd settings of major CDN providers., 2017. [Online; accessed 25-May-2018].
  • [24] Dan Rayburn for Frost & Sullivan. Comparing CDN Performance: Amazon CloudFront’s Last Mile Testing Results., 2012. [Online; accessed 25-May-2018].
  • [25] N. Dukkipati, Y. Cheng, and M. Mathis. Increasing TCP’s Initial Window. RFC 6928, RFC Editor, April 2013.
  • [26] N. Dukkipati, T. Refice, Y. Cheng, J. Chu, T. Herbert, A. Agarwal, A. Jain, and N. Sutin. An argument for increasing TCP’s initial congestion window. ACM CCR, 2010.
  • [27] EasyList and EasyPrivacy authors. EasyList. [Online; accessed 25-May-2018].
  • [28] Facebook. Free Basics - Technical Guidelines. [Online; accessed 25-May-2018].
  • [29] Facebook. by Facebook. [Online; accessed 25-May-2018].
  • [30] Y. Gilad, A. Herzberg, M. Sudkovitch, and M. Goberman. CDN-on-demand: An Affordable DDoS Defense via Untrusted Clouds. Internet Society NDSS, 2016.
  • [31] Google. Data Saver. [Online; accessed 25-May-2018].
  • [32] Google Developers. WebP: A new image format for the Web. [Online; accessed 25-May-2018].
  • [33] I. Grigorik. Mobile Performance from Radio Up., 2013. [Online; accessed 25-May-2018].
  • [34] S. Guha and S. Khuller. Greedy strikes back: Improved facility location algorithms. Journal of algorithms, 1999.
  • [35] G. Hasslinger and O. Hohlfeld. The Gilbert-Elliott Model for Packet Loss in Real Time Services on the Internet. MMB, 2008.
  • [36] K. He, A. Fisher, L. Wang, A. Gember, A. Akella, and T. Ristenpart. Next stop, the cloud: understanding modern web service deployment in EC2 and Azure. ACM IMC, 2013.
  • [37] A. Hidayat. PhantomJS 1.2 Release Notes. [Online; accessed 25-May-2018].
  • [38] T. Høiland-Jørgensen, B. Ahlgren, P. Hurtig, and A. Brunstrom. Measuring Latency Variation in the Internet. ACM CoNEXT, 2016.
  • [39] ICANN. whois., 2018. [Online; accessed 5-Oct-2018].
  • [40] Intel Developer Zone. Intel Software Guard Extensions (Intel SGX). [Online; accessed 25-May-2018].
  • [41] Internet Archive. Wayback Machine. [Online; accessed 25-May-2018].
  • [42] A. Langley, A. Riddoch, A. Wilk, A. Vicente, C. Krasic, D. Zhang, F. Yang, F. Kouranov, I. Swett, J. Iyengar, J. Bailey, J. Dorfman, J. Roskind, J. Kulik, P. Westin, R. Tenneti, R. Shade, R. Hamilton, V. Vasiliev, W.-T. Chang, and Z. Shi. The QUIC Transport Protocol: Design and Internet-Scale Deployment. ACM SIGCOMM, 2017.
  • [43] Lonny Jacobson. webp-imageio. [Online; accessed 25-May-2018].
  • [44] M. Maack. Pirate Bay founder: We’ve lost the Internet, it’s all about damage control now. [Online; accessed 25-May-2018].
  • [45] MaxMind. GeoLite2 Free Downloadable Databases. [Online; accessed 25-May-2018].
  • [46] Microsoft Azure. Azure pricing. [Online; accessed 25-May-2018].
  • [47] D. Naylor, R. Li, C. Gkantsidis, T. Karagiannis, and P. Steenkiste. And Then There Were More: Secure Communication for More Than Two Parties. ACM CoNEXT, 2017.
  • [48] R. Netravali, A. Goyal, J. Mickens, and H. Balakrishnan. Polaris: Faster Page Loads Using Fine-grained Dependency Tracking. USENIX NSDI, 2016.
  • [49] R. Netravali, A. Sivaraman, S. Das, A. Goyal, K. Winstein, J. Mickens, and H. Balakrishnan. Mahimahi: Accurate Record-and-Replay for HTTP. USENIX ATC, 2015.
  • [50] R. Netravali, A. Sivaraman, K. Winstein, S. Das, A. Goyal, and H. Balakrishnan. Mahimahi: a lightweight toolkit for reproducible web measurement. In ACM CCR, pages 129–130, 2014.
  • [51] Opera Software. Data savings and turbo mode. [Online; accessed 25-May-2018].
  • [52] Opera Software. Opera Mini: Data-saving mobile browser with ad blocker. [Online; accessed 25-May-2018].
  • [53] S. Radhakrishnan, Y. Cheng, J. Chu, A. Jain, and B. Raghavan. TCP fast open. ACM CoNEXT, 2011.
  • [54] Robert McMillan for Wired. Amazon’s secretive cloud carries 1 percent of the Internet., 2012. [Online; accessed 25-May-2018].
  • [55] V. Ruamviboonsuk, R. Netravali, M. Uluyol, and H. V. Madhyastha. Vroom: Accelerating the Mobile Web with Server-Aided Dependency Resolution. ACM SIGCOMM, 2017.
  • [56] J. Rüth, C. Bormann, and O. Hohlfeld. Large-Scale Scanning of TCP’s Initial Window. ACM IMC, 2017.
  • [57] J. Rüth, I. Poese, C. Dietzel, and O. Hohlfeld. A First Look at QUIC in the Wild. arXiv preprint arXiv:1801.05168, 2018.
  • [58] S. Sanfilippo. hping. [Online; accessed 25-May-2018].
  • [59] A. Sehati and M. Ghaderi. Network assisted latency reduction for mobile Web browsing. Elsevier Computer Networks Journal, 2016.
  • [60] R. Sen, S. Ahmad, A. Phokeer, Z. A. Farooq, I. A. Qazi, D. Choffnes, and K. P. Gummadi. Inside the Walled Garden: Deconstructing Facebook’s Free Basics Program. ACM CCR, 2017.
  • [61] Welcome to the wonderful world of Web Performance. [Online; accessed 25-May-2018].
  • [62] A. Sivakumar, V. Gopalakrishnan, S. Lee, S. Rao, S. Sen, and O. Spatscheck. Cloud is not a silver bullet: A case study of cloud-based mobile browsing. ACM HotMobile, 2014.
  • [63] A. Sivakumar, C. Jiang, Y. S. Nam, S. P. Narayanan, V. Gopalakrishnan, S. G. Rao, S. Sen, M. Thottethodi, and T. Vijaykumar. NutShell: Scalable Whittled Proxy Execution for Low-Latency Web over Cellular Networks. ACM MobiCom, 2017.
  • [64] A. Sivakumar, S. Puzhavakath Narayanan, V. Gopalakrishnan, S. Lee, S. Rao, and S. Sen. PARCEL: Proxy Assisted BRowsing in Cellular Networks for Energy and Latency Reduction. ACM CoNEXT, 2014.
  • [65] S. Sundaresan. Characterizing and improving last mile performance using home networking infrastructure. PhD thesis, Georgia Institute of Technology, 2014.
  • [66] Telehouse. Telehouse Data Centres. [Online; accessed 25-May-2018].
  • [67] The Nielsen Company (US), LLC. Nielsen provides topline U.S. Web data for March 2010. [Online; accessed 25-May-2018].
  • [68] L. Wang, A. Nappa, J. Caballero, T. Ristenpart, and A. Akella. WhoWas: A Platform for Measuring Web Deployments on IaaS Clouds. ACM IMC, 2014.
  • [69] X. S. Wang, A. Krishnamurthy, and D. Wetherall. Speeding up Web Page Loads with Shandian. USENIX NSDI, 2016.