Faa$T: A Transparent Auto-Scaling Cache for Serverless Applications

04/28/2021
by Francisco Romero, et al.

Function-as-a-Service (FaaS) has become an increasingly popular way for users to deploy their applications without the burden of managing the underlying infrastructure. However, existing FaaS platforms rely on remote storage to maintain state, limiting the set of applications that can be run efficiently. Recent caching work for FaaS platforms has tried to address this problem, but has fallen short: it disregards the widely different characteristics of FaaS applications, does not scale the cache based on data access patterns, or requires changes to applications. To address these limitations, we present Faa$T, a transparent auto-scaling distributed cache for serverless applications. Each application gets its own Faa$T cache. After a function executes and the application becomes inactive, the cache is unloaded from memory with the application. Upon reloading for the next invocation, Faa$T pre-warms the cache with objects likely to be accessed. In addition to traditional compute-based scaling, Faa$T scales based on working set and object sizes to manage cache space and I/O bandwidth. We motivate our design with a comprehensive study of data access patterns in a large-scale commercial FaaS provider. We implement Faa$T for the provider's production FaaS platform. Our experiments show that Faa$T can improve performance by up to 92% (57% on average) for challenging applications, and reduce cost for most users compared to state-of-the-art caching systems, i.e., the cost of having to stand up additional serverful resources.
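The cache lifecycle described above (a per-application cache that is unloaded when the application becomes inactive and pre-warmed with likely-hot objects on reload) can be illustrated with a small sketch. This is not the Faa$T implementation; it is a minimal Python illustration under assumed names (AppCache, RemoteStore, a persisted hot-key index) of how such a cache might record access frequencies, persist them at unload time, and use them to pre-warm on the next load.

```python
# Minimal sketch of a per-application cache that is unloaded with the app
# and pre-warmed on reload. All names here are hypothetical illustrations,
# not part of the Faa$T implementation described in the paper.

import json
import time


class RemoteStore:
    """Stand-in for remote storage (e.g., a blob store). Hypothetical."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, value):
        self._blobs[key] = value

    def get(self, key):
        time.sleep(0.01)  # simulate remote-access latency
        return self._blobs[key]


class AppCache:
    """Per-application cache, loaded and unloaded with the app instance."""

    def __init__(self, store, capacity_bytes=1 << 20, prewarm_top_k=4):
        self.store = store
        self.capacity_bytes = capacity_bytes
        self.prewarm_top_k = prewarm_top_k
        self._data = {}           # key -> bytes cached locally
        self._access_counts = {}  # key -> access count, used for pre-warming

    def get(self, key):
        self._access_counts[key] = self._access_counts.get(key, 0) + 1
        if key in self._data:
            return self._data[key]       # local hit
        value = self.store.get(key)      # miss: fetch from remote storage
        self._admit(key, value)
        return value

    def _admit(self, key, value):
        # Naive size-based admission: evict arbitrary entries until it fits.
        while self._data and self._used() + len(value) > self.capacity_bytes:
            self._data.pop(next(iter(self._data)))
        self._data[key] = value

    def _used(self):
        return sum(len(v) for v in self._data.values())

    def unload(self):
        """App becomes inactive: persist an index of frequently accessed
        keys so the next load can pre-warm them, then drop the cache."""
        hot = sorted(self._access_counts, key=self._access_counts.get,
                     reverse=True)[: self.prewarm_top_k]
        self.store.put("_prewarm_index", json.dumps(hot).encode())
        self._data.clear()

    def load(self):
        """Next invocation: pre-warm objects likely to be accessed."""
        try:
            hot = json.loads(self.store.get("_prewarm_index"))
        except KeyError:
            return  # no history yet; start cold
        for key in hot:
            self._admit(key, self.store.get(key))


if __name__ == "__main__":
    store = RemoteStore()
    store.put("model", b"x" * 1024)

    cache = AppCache(store)
    cache.get("model")   # first access: remote fetch
    cache.unload()       # app goes inactive; hot-key index persisted

    cache2 = AppCache(store)
    cache2.load()        # app reloads; "model" is pre-warmed
    assert cache2.get("model") == b"x" * 1024  # served from the local cache
```

A real implementation would also size the cache from the observed working set and object sizes, as the abstract describes for Faa$T's scaling policy; the fixed capacity_bytes above is only a placeholder for that decision.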

Related research

Access Trends of In-network Cache for Scientific Data (05/11/2022)
Scientific collaborations are increasingly relying on large volumes of d...

InfiniCache: Exploiting Ephemeral Serverless Functions to Build a Cost-Effective Memory Cache (01/28/2020)
Internet-scale web applications are becoming increasingly storage-intens...

Ditto: An Elastic and Adaptive Memory-Disaggregated Caching System (09/19/2023)
In-memory caching systems are fundamental building blocks in cloud servi...

CTDGM: A Data Grouping Model Based on Cache Transaction for Unstructured Data Storage Systems (09/30/2020)
Cache prefetching technology has become the mainstream data access optim...

Limited Associativity Caching in the Data Plane (03/09/2022)
In-network caching promises to improve the performance of networked and ...

A Hybrid Cache Architecture for Meeting Per-Tenant Performance Goals in a Private Cloud (06/04/2019)
The in-memory cache system is an important component in a cloud for the ...

Modeling the Linux page cache for accurate simulation of data-intensive applications (01/05/2021)
The emergence of Big Data in recent years has resulted in a growing need...
