Cheetah: Accelerating Database Queries with Switch Pruning

04/10/2020 ∙ by Muhammad Tirmazi, et al. ∙ Harvard University

Modern database systems are growing increasingly distributed and struggle to reduce query completion time over large volumes of data. In this paper, we leverage programmable switches in the network to partially offload query computation to the switch. While switches provide high performance, they have resource and programming constraints that make implementing diverse queries difficult. To fit these constraints, we introduce the concept of data pruning: filtering out entries that are guaranteed not to affect the output. The database system then runs the same query on the pruned data, which significantly reduces processing time. We propose pruning algorithms for a variety of queries. We implement our system, Cheetah, on a Barefoot Tofino switch and Spark. Our evaluation on multiple workloads shows a 40-200% improvement in query completion time compared to Spark.


1. Introduction

Database systems serve as the foundation for many applications such as data warehousing, data analytics, and business intelligence (thusoo2010data). Facebook is reported to run more than 30,000 database queries that scan over a petabyte per day (prestointeractfacebook). As workloads grow, the challenge for database systems today is providing high performance for queries over a large, distributed set of data.

A popular database query processing system is Spark SQL (sparksql). Spark SQL optimizes query completion time by assigning tasks to workers (each working on one data partition) and aggregating the query result at a master worker. Spark maximizes the parallelism and minimizes the data movement for query processing. For example, each worker sends a stream of the resulting metadata (e.g., just the columns relevant to the query) before sending the entire rows that are requested by the master. Despite the optimizations, the query performance is still limited by software speed.

We propose Cheetah, a query processing system that partially offloads queries to programmable switches. Programmable switches are supported by major switch vendors (tofino; tofinov2; Trident; XPliant). They allow programmatic processing of multiple Tbps of traffic (tofino), which is orders of magnitude higher throughput than software servers and alternative hardware such as FPGAs and GPUs. Moreover, switches already sit between the workers and thus can process aggregated data across partitions.

However, we cannot simply offload all database operations to switches, as they have a constrained programming model (RMT): switches process incoming packets in a pipeline of stages. At each stage, there is a limited amount of memory and computation, and only a limited number of bits can be transferred across stages. These constraints are at odds with the large amount of data, diverse query functions, and many intermediate states in database systems.

To meet these constraints, we propose a new abstraction called pruning. Instead of offloading full functionality to programmable switches, we use the switch to prune a large portion of data based on the query, and the master only needs to process the remaining data in the same way as it does without the switch. For example, the switch may remove some duplicates in a DISTINCT query, and let the master remove the rest, thus accelerating the query.

The pruning abstraction allows us to design algorithms that fit within the constrained programming model of switches: First, we do not need to implement all database functions on the switches, only those that fit the switch's programming model. Second, to fit in the limited memory of switches, we store either a cached set of results or summarized intermediate results while ensuring a high pruning rate. Third, to reduce the number of comparisons, we use in-switch partitioning of the data such that each entry is only compared with a small number of entries in its partition. We also use projection techniques that map high-dimensional data points into scalars, which allows efficient comparisons.

Based on the pruning abstraction, we design and develop multiple query algorithms ranging from filtering and DISTINCT to more complex operations such as JOIN or GROUP BY. Our solutions are rigorously analyzed and we prove bounds on the resulting pruning rates. We build a prototype on the Barefoot Tofino programmable switch (tofino) and demonstrate reduction of query completion times compared with Spark SQL.

2. Using programmable switches

Cheetah leverages programmable switches to reduce the amount of data transferred to the query server and thus improve query performance. Programmable switches that follow the PISA model consist of multiple pipelines through which network packets pass; each pipeline contains stages with disjoint memory that can perform a limited set of operations as the packet passes through them. See (PRECISION) for more information about these limitations. In this section, we discuss the benefits and constraints of programmable switches to motivate our pruning abstraction.

2.1. Benefits of programmable switches

We use Spark SQL as an example of a database query execution engine. Spark SQL is widely used in industry (sparksql), adopts common optimizations such as columnar memory-optimized storage and vectorized processing, and has comparable performance to Amazon's Redshift (amplabbenchmarkruns) and Google's BigQuery (bigqueryredshiftbenchmark).

When a user submits a query, Spark SQL uses a Catalyst optimizer that generates the query plan and operation code (in the form of tasks) to run on a cluster of workers. The worker fetches a partition from data sources, runs its assigned task, and passes the result on to the next batch of workers. At the final stage, the master worker aggregates the results and returns them to the user. Spark SQL optimizes query completion time by having workers process data in their partition as much as possible and thus minimizes the data transferred to the next set of workers. As a result, the major portion of query completion time is spent at the tasks the workers run. Thus, Spark SQL query performance is often bottlenecked by the server processing speed and not the network (sparksql).

Cheetah rides on the trend of significant growth in network capacity (e.g., up to 100Gbps or 400Gbps per port) and the advent of programmable switches, which are now provided by major switch vendors (e.g., Barefoot (tofino; tofinov2), Broadcom (Trident), and Cavium XPliant (XPliant)). These switches can process billions of packets per second, already exist in the data path, and thus introduce no extra latency or additional cost. Table 5 compares the throughput and delay of Spark SQL on commodity servers with those of programmable Tofino switches. The best throughput of servers is 10-100Gbps, but switches can reach 6.5-12.8Tbps. The switch throughput is also orders of magnitude better than alternative hardware choices such as FPGAs, GPUs, and smart NICs. Switches also add less than 1μs of delay per packet.

These switches already exist in the cluster and already see the data transferred between the workers, so they are at a natural place to help process queries. By offloading part of the processing to switches, we can reduce the workload at the workers and thus significantly reduce the query completion time despite more data being transferred in the network. Compared with server-side or storage-side acceleration, switches have the extra benefit that they can process the aggregated data across workers. We defer the detailed comparison of Cheetah with alternative hardware solutions to section 10.

2.2. Constraints of programmable switches

Programmable switches make it possible to offload part of queries because they parse custom packet formats and thus can understand data block formats. Switches can also store a limited amount of state in their match-action tables and make comparisons across data blocks to see if they match a given query. However, there are several challenges in implementing queries on switches:

Function constraints: Only limited operations can run on switches (e.g., hashing, bit shifting, bit matching). These are insufficient for queries that sometimes require string operations and other arithmetic operations (e.g., multiplication, division, log) on numbers that are not powers of two.

Limited pipeline stages and ALUs: Programmable switches use a pipeline of match-action tables. The pipeline has a limited number of stages (e.g., 12-60) and a limited number of ALUs per stage. This means we can only do a limited number of computations at each stage (e.g., no more than ten comparisons in one stage for some switches). This is not enough for queries that require many comparisons across entries (e.g., DISTINCT) or across many dimensions (e.g., SKYLINE).

Memory and bit constraints: To reach high throughput and low latency, switches have a limited amount of on-chip memory (e.g., under 100MB of SRAM and up to 100K-300K TCAM entries) that is partitioned between stages. However, if we use switches to run queries, we have to store, compare, and group a large number of past entries, which can easily exceed the memory limit. Moreover, switches can only parse a limited number of bits and transfer them across stages (e.g., 10-20 bytes). Some queries may need more bits for the keys, especially when queries are on multiple dimensions or long strings.

3. Cheetah design

The pruning abstraction: We introduce the pruning abstraction for partially offloading queries onto switches. Instead of offloading the complete query, we offload only a critical part of it. In this way, we best leverage the high throughput and low latency of switches while staying within their function and resource constraints. With pruning, the switch simply filters the data sent from the workers, but does not guarantee query completion. The master runs queries on the pruned dataset and generates the same query result as if it had run the query on the original dataset. Formally, we define pruning as follows: let Q(D) denote the result (output) of query Q when applied to input data D. A pruning algorithm P_Q for Q gets D and produces a subset D' ⊆ D such that Q(D') = Q(D). That is, the algorithm computes a subset of the data such that the output of running Q on the subset is equivalent to that of applying the query to the whole of D.
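As a sanity check, the pruning contract Q(D') = Q(D) can be exercised in a few lines of Python. The sketch below is our own illustration (the function names and the simple FIFO eviction policy are not from the paper): it plays the role of an in-switch pruner for a DISTINCT query, dropping an entry only when a bounded cache proves it is a duplicate, so the master recovers the exact answer from the pruned stream.

```python
# Sketch of the pruning contract: Q(prune(D)) == Q(D).
# A hypothetical in-switch pruner for DISTINCT with a small exact cache:
# it may forward duplicates (false negatives) but never drops a first
# occurrence, so the master's answer is unchanged.

def distinct_query(data):
    """The master's query: unique values, in first-appearance order."""
    seen, out = set(), []
    for x in data:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def switch_prune(data, cache_size=4):
    """Forward an entry unless it is currently in the bounded cache."""
    cache, forwarded = [], []
    for x in data:
        if x in cache:
            continue  # pruned: provably a duplicate
        forwarded.append(x)
        cache.append(x)
        if len(cache) > cache_size:
            cache.pop(0)  # evict oldest (FIFO stand-in for LRU)
    return forwarded

stream = ["a", "b", "a", "c", "b", "a", "d", "a"]
pruned = switch_prune(stream)
assert distinct_query(pruned) == distinct_query(stream)
```

The equality holds for any cache size, because the cache can only err by forwarding a duplicate, never by dropping a first occurrence.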

To make our description easier, for the rest of the paper, we focus on queries with one stage of workers and one master worker. We also assume a single switch between them. An example is a rack-scale query framework where all the workers are located in one rack and the top-of-rack switch runs Cheetah pruning solutions. Our solutions can work with multiple stages of workers by having the switch prune data for each stage. We discuss how to handle multiple switches in section 9.

Cheetah architecture: Cheetah can be easily integrated within Spark without affecting its normal workflow. Figure 1 shows the Cheetah design:

Figure 1. Cheetah Design.

Query planner: The way users specify a query in Spark remains unchanged. For example, a filtering query (e.g., SELECT * WHERE x > a) has three parameters: (1) the query type (filtering in this example), (2) the query parameters (the constant a), and (3) the relevant columns (x in this example). In addition to distributing tasks to workers, the query planner sends (1) and (2) to the switch control plane, which updates the switch memory accordingly. Once the master receives an ACK from the switch, acknowledging that it is ready, it starts the tasks at the workers.

CWorkers: With Cheetah, the workers do not need to run computationally intensive tasks on the data. Instead, we implement a Cheetah module (CWorker), which intercepts the data flow at workers and sends the data directly through the switch. Therefore, Cheetah reduces the processing time at the workers and partially offloads their tasks to the switch. The workers and the master only need to process the remaining part of the query. CWorkers also convert the related columns into packets that are readable by the switch. For example, if some entries have variable width or are excessively wide (e.g., a DISTINCT query on multiple columns), CWorkers may compute fingerprints before sending the data out.

Cheetah Switch: The Cheetah switch is the core component of our system. We pre-compile all the common query algorithms at the switch. At runtime, when the switch receives a query and its parameters, it simply installs match-action rules based on the query specification. Since most queries need only tens of rules, rule installation takes less than 1 ms in our evaluation. According to these rules, the switch prunes incoming packets by leveraging ALUs, match-action tables, registers, and TCAM, as explained in section 4. The switch only forwards the remaining packets to the CMaster. The switch identifies incoming packets from workers based on pre-specified port numbers. This allows the switch to be fully compatible with other network functions and applications sharing the same network. Since the switch only adds acceleration functions by pruning packets, the original query pipeline can work without the switch. If the switch fails, operators can simply reboot the switch with empty state or use a backup ToR switch. We also introduce a new communication protocol, explained in detail in section 7.2, that allows the workers to distinguish between pruned packets that are legitimately dropped and lost packets that should be retransmitted.

CMaster: At the master, we implement a Cheetah module (CMaster) that converts the packets back to the original Spark data format. The Spark master works in the same way with and without Cheetah: it “thinks” that we are running the query on the pruned dataset rather than the original one and completes the operation. As the pruned dataset is much smaller, the Spark master takes less time to complete with Cheetah. Many Spark queries adopt late materialization: Spark first runs queries on the metadata fields (i.e., those columns of an entry that the query conditions on) and then fetches all the requested columns for those entries that match the criteria. In this case, Cheetah prunes data for the metadata query and does not modify the final fetches.

name    seller     price
Burger  McCheetah  4
Pizza   Papizza    7
Fries   McCheetah  2
Jello   JellyFish  5
(a) Products

name     taste  texture
Pizza    7      5
Cheetos  8      6
Jello    9      4
Burger   5      7
Fries    3      3
(b) Ratings

Table 3. Running Database Example

4. Query Pruning Algorithms

In this section, we explore the high-level primitives used for our query pruning algorithms. An example of input tables which we will use to illustrate them is given in Table 3. We also provide a table summarizing our algorithms, their parameters and pruning guarantee type in the full version (fullVersion).

4.1. Handling Function Constraints

Due to the switch’s function and resource limitations, we cannot always prune a complete query. In such cases, Cheetah automatically decomposes the query into two parts and prunes parts of the query which are supported.

Example #1: Filtering Query:
Consider the common database query of selecting entries matching a WHERE expression, for example:

SELECT * FROM Ratings WHERE (taste > 5) OR (texture > 4 AND name LIKE 'e%s')

The switch may not be able to compute some expressions due to lack of function support (e.g., if it cannot evaluate LIKE 'e%s') or may lack the ALUs or memory to compute some functions. Cheetah runs a subset of the predicates at the switch to prune the data and runs the rest either at the workers or at the master.

The Cheetah query compiler decomposes the predicates into two parts: Consider a monotone Boolean formula F over binary predicate variables x_1, ..., x_n, and assume that predicates x_1, ..., x_k can be evaluated at the switch while x_{k+1}, ..., x_n cannot. The Cheetah query compiler replaces each unsupported variable with a tautology (e.g., true) and applies standard reductions to simplify the resulting expression. The resulting formula is computable at the switch and allows Cheetah to prune packets.

In our example, we transform the query into:

(taste > 5) OR (texture > 4)

Therefore, Cheetah prunes entries that satisfy neither (taste > 5) nor (texture > 4), and lets the master node complete the operation by removing the remaining entries that fail the full predicate (i.e., by evaluating name LIKE 'e%s').

In other cases, Cheetah uses workers to compute the predicates that cannot be evaluated on the switch. For instance, the CWorker can compute (name LIKE 'e%s') and add the result as one of the values in the sent packet. This way the switch can complete the filtering execution, as all predicate values are now known.

Cheetah supports combined predicates by computing the basic predicates they contain (taste > 5, texture > 4, and the LIKE result in this example) and then checking the condition based on the true/false result obtained for each basic predicate. Cheetah writes the values of the predicates as a bit vector and looks up the value in a truth table to decide whether to drop or forward the packet.
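The decomposition can be sketched in Python. In this illustration (our own code; the row values are invented and the relaxed/full predicates mirror the running example), the "switch" evaluates the formula with the unsupported LIKE predicate replaced by a tautology, and the master re-runs the full predicate on whatever survives:

```python
# Sketch: decompose WHERE (taste > 5) OR (texture > 4 AND name LIKE 'e%s').
# The LIKE predicate is assumed unsupported on the switch, so it is
# replaced by True; the relaxed formula only over-approximates the
# match set, which makes pruning on it safe.

import re

def like_es(name):
    """Master-side predicate: LIKE 'e%s' (starts with 'e', ends with 's')."""
    return re.fullmatch(r"e.*s", name) is not None

def switch_filter(row):
    """Relaxed formula: unsupported LIKE replaced by a tautology."""
    return row["taste"] > 5 or (row["texture"] > 4 and True)

def master_filter(row):
    """Full predicate, evaluated on the pruned data only."""
    return row["taste"] > 5 or (row["texture"] > 4 and like_es(row["name"]))

rows = [
    {"name": "eels",  "taste": 3, "texture": 6},
    {"name": "pizza", "taste": 2, "texture": 6},
    {"name": "jello", "taste": 7, "texture": 1},
    {"name": "fries", "taste": 1, "texture": 2},   # pruned at the switch
]
pruned = [r for r in rows if switch_filter(r)]
result = [r for r in pruned if master_filter(r)]
assert result == [r for r in rows if master_filter(r)]  # same final answer
```

Because the relaxation is monotone (replacing a predicate with true can only let more rows through), the switch never drops a row the full predicate would accept.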

4.2. Handling Stage/ALU Constraints

Switches have limited stages and limited ALUs per stage. Thus, we cannot compare the current entry with a sufficiently large set of points. Fortunately, for many queries, we can partition the data into multiple rows such that each entry is only compared with those in its row. Depending on the query, the partitioning can be either randomized or hash-based as in the following.

Example #2: DISTINCT Query:
The DISTINCT query selects all the distinct values in a subset of the input columns, e.g.,

SELECT DISTINCT seller FROM Products

returns (Papizza, McCheetah, JellyFish). To prune DISTINCT queries, the idea is to store all past values in the switch. When the switch sees a new entry, it checks if the new value matches any past values. If so, the switch prunes this entry; if not, the switch forwards it to the master node. However, storing all entries on the switch may take too much memory.

To reduce memory usage, an intuitive idea is to use Bloom filters (BFs) (bloom1970space). However, BFs have false positives. For DISTINCT, this means that the switch may drop entries even if their values have not appeared. Therefore, we need a data structure that ensures no false positives but can have false negatives. Caches match this goal. The master can then remove the false negatives to complete the execution.

We propose to use a matrix with R rows and C columns in which we cache entries. Every row serves as a Least Recently Used (LRU) cache that stores the last C entries mapped to it. When an entry arrives, we first hash it to a row, so that the same entry always maps to the same row. Cheetah then checks whether the current entry's value appears in the row and, if so, prunes the packet. To implement the LRU policy, we do a rolling replacement of the entries: the new entry replaces the first slot, the first slot's previous value moves to the second, and so on. By using multiple rows, we reduce the number of per-packet comparisons, which allows an implementation on switches.
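A software model of this cache matrix is straightforward. The sketch below is ours and purely illustrative (the toy R and C values and helper names are not from the paper): it hashes each value to a row, prunes on a cache hit, and otherwise inserts with rolling replacement:

```python
# Minimal sketch of the R×C cache matrix for DISTINCT: a hit is a
# certain duplicate (prune); a miss is forwarded, so there are no
# false positives, only false negatives the master cleans up.

R, C = 4, 3
matrix = [[None] * C for _ in range(R)]

def process(value):
    """Return True if the switch forwards the entry, False if pruned."""
    row = matrix[hash(value) % R]
    if value in row:
        return False              # cached -> provably a duplicate
    # rolling replacement: shift entries right, insert at slot 0
    for i in range(C - 1, 0, -1):
        row[i] = row[i - 1]
    row[0] = value
    return True                   # forward to the master

stream = [1, 2, 1, 3, 2, 1, 4, 1]
forwarded = [v for v in stream if process(v)]
# every distinct value survives; the master dedups whatever remains
assert set(forwarded) == set(stream)
```

Note that a value can only be pruned if an identical value was forwarded earlier, which is exactly the no-false-positive guarantee the text requires.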

In the full version (fullVersion), we analyze the pruning ratio on random-order streams. Intuitively, if a row sees d distinct values and each arriving entry is compared with the C entries stored for that row in switch memory, then a duplicate is pruned whenever its previous occurrence is still cached, which happens with probability that grows with C/d. For example, for one concrete setting of the number of distinct entries and the matrix dimensions R and C analyzed in the full version, we are expected to prune 58% of the duplicate entries (i.e., entry values that have appeared previously).

Theorem 1. Consider a random-order stream with d distinct entries. (It is possible to optimize other cases, but this appears to be the common case.) Our algorithm, configured with R rows and C columns, is expected to prune at least a fraction of the duplicate entries that grows with R and C; the exact bound appears in the full version (fullVersion).

4.3. Handling Memory Constraints

Due to switch memory constraints, we can only store a few past entries at switches. The key question is: how do we decide which data to store at switches that maximizes pruning rate without removing useful entries? We give a few examples below to show how we set thresholds (in the TOP N query) or leverage sketches (in JOIN and HAVING) to achieve these goals.

Example #3: TOP N Query:
Consider a TOP N query (with an ORDER BY clause), in which we are required to output the N entries with the largest values in the queried input column (in different systems this operation has different names; e.g., MySQL supports LIMIT while Oracle has ROWNUM); e.g.,

SELECT TOP 3 name, texture FROM Ratings ORDER BY taste

may return (Jello 4, Cheetos 6, Pizza 5). Pruning in a TOP N query means that we may return a superset of the N largest entries. The intuitive solution is to store the N largest values, one at each stage, compare against them, and maintain a rolling minimum across the stages. However, when N is much larger than the number of stages (say, N in the hundreds or thousands compared to 10-20 stages), this approach does not work.

Instead, we use a small number of threshold-based counters to enable pruning for the TOP N query. The switch first computes the minimal value T of the first N entries. Afterward, the switch can safely filter out everything smaller than T. It then tries to increase the threshold by counting how many values larger than 2T it observes. Once N such entries have been processed, it can start pruning entries smaller than 2T, and it then continues with larger thresholds 4T, 8T, and so on. We set the thresholds exponentially (T, 2T, 4T, ...) in case the first T is much smaller than most values in the data; this power-of-two choice also makes the thresholds easy to implement in switch hardware. When using these thresholds, our algorithm may eventually prune points smaller than the true N-th largest value, if enough larger ones exist.
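The threshold-doubling idea can be sketched in Python (our illustrative code; the variable names and example stream are invented). The invariant is that a value is pruned only when N previously forwarded entries already exceed the current threshold, so the true TOP N always reaches the master:

```python
# Sketch of the doubling-threshold pruner for TOP N: learn T = min of
# the first N entries, prune below the current threshold, and raise the
# bar to 2T once N values above 2T have passed (then 4T, 8T, ...).
# The master still selects the exact TOP N from the forwarded superset.

def prune_top_n(stream, n):
    forwarded, first = [], []
    threshold, above_next = None, 0
    for v in stream:
        if threshold is None:                 # warm-up: first N entries
            first.append(v)
            forwarded.append(v)
            if len(first) == n:
                threshold = min(first)
            continue
        if v < threshold:
            continue                          # safe to prune: N larger exist
        forwarded.append(v)
        if v > 2 * threshold:
            above_next += 1
            if above_next == n:               # bar can be raised
                threshold, above_next = 2 * threshold, 0
    return forwarded

stream = [3, 9, 4, 1, 12, 8, 15, 2, 11, 30, 7, 25, 14, 1, 40]
out = prune_top_n(stream, n=3)
assert sorted(out, reverse=True)[:3] == sorted(stream, reverse=True)[:3]
```

Pruning below T is safe because, at any moment, at least N already-forwarded entries are at least as large as T, so nothing smaller can belong to the TOP N.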

Example #4: JOIN Query:
In a JOIN operation we combine two tables based on the target input columns. (We refer here to INNER JOIN, which is SQL's default; with slight modifications, Cheetah can also prune LEFT/RIGHT OUTER joins.) For example, the query

SELECT * FROM Products JOIN Ratings    ON Products.name = Ratings.name

gives:

Burger  McCheetah  4  5  7
Pizza   Papizza    7  7  5
Fries   McCheetah  2  3  3
Jello   JellyFish  5  9  4

In the example, we can save computation if the switch identifies that the key "Cheetos" does not appear in the Products table and prunes it. To support JOIN, we propose to send the data through the switch in two passes. In the first pass, we use Bloom filters (bloom1970space) to track the observed keys. Specifically, consider joining tables A and B on input column (or input columns) C. Initially, we allocate two empty Bloom filters, B_A and B_B, to approximately record the observed values, using an input-column optimization to stream the values of C from both tables. Whenever a key from table A (or B) is processed on the switch, Cheetah adds it to B_A (or B_B). Then, we start a second pass in which the switch prunes each packet from A (respectively, B) if B_B (respectively, B_A) does not report a match. As Bloom filters have no false negatives, we are guaranteed that Cheetah does not prune any matched entry. In the case of JOIN, the false positives of Bloom filters only affect the pruning ratio, while correctness is guaranteed. Such a two-pass strategy causes more network traffic but significantly reduces the processing time at the workers.

If the joined tables are of significantly different size, we can optimize the processing further. We first stream the small table without pruning while creating a Bloom filter for it. Since it is smaller, we do not lose much by not pruning and we can create a filter with significantly lower false positive rate. Then, we stream the large table while pruning it with the filter.
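The two-pass scheme is easy to model in software. Below is a toy Python sketch (the filter size, hash construction, and class names are our illustrative choices, not the paper's): pass 1 builds a filter per table, and pass 2 prunes rows whose key the other table's filter has provably never seen:

```python
# Two-pass JOIN pruning sketch with tiny Bloom filters. A Bloom filter
# answers "absent" with certainty (no false negatives), so dropping a
# row on an "absent" answer never loses a matching key.

M, K = 64, 2                       # filter bits and hash count (toy sizes)

def hashes(key):
    return [hash((i, key)) % M for i in range(K)]

class Bloom:
    def __init__(self):
        self.bits = [False] * M
    def add(self, key):
        for h in hashes(key):
            self.bits[h] = True
    def maybe_contains(self, key):
        return all(self.bits[h] for h in hashes(key))

products = [("Burger",), ("Pizza",), ("Fries",), ("Jello",)]
ratings = [("Pizza",), ("Cheetos",), ("Jello",), ("Burger",), ("Fries",)]

bf_a, bf_b = Bloom(), Bloom()
for (k,) in products: bf_a.add(k)      # pass 1: record keys of each table
for (k,) in ratings:  bf_b.add(k)

kept_a = [r for r in products if bf_b.maybe_contains(r[0])]  # pass 2
kept_b = [r for r in ratings  if bf_a.maybe_contains(r[0])]

# every truly matching key survives (no false negatives)
match = {k for (k,) in products} & {k for (k,) in ratings}
assert match <= {k for (k,) in kept_a} and match <= {k for (k,) in kept_b}
```

A false positive (e.g., "Cheetos" slipping through) merely lowers the pruning ratio; the master's join discards it, so the result is unchanged.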

Example #5: HAVING Query:

HAVING runs a filtering operation on top of an aggregate function (e.g., MIN/MAX/SUM/COUNT). For example,

SELECT seller FROM Products GROUP BY seller HAVING SUM(price) > 5

should return (McCheetah, Papizza). We first check the aggregate function on each incoming entry. For MAX and MIN, we simply maintain the current max or min value. If the condition is satisfied, we proceed to our DISTINCT solution (see section 4.2): if it reports that the current key has not appeared before, we add it to the data structure and forward the entry; otherwise we prune it.

SUM and COUNT are more challenging because a single entry is not enough to conclude whether we should pass it. We leverage sketches to store the function values for different keys in a compact fashion. We choose the Count-Min sketch over the Count sketch and other algorithms because Count-Min is easy to implement at switches and has one-sided error. That is, for HAVING f(key) > T, where f is SUM or COUNT, Count-Min gives an estimator f̂(key) that satisfies f̂(key) ≥ f(key). Therefore, by pruning only if f̂(key) ≤ T, we guarantee that every key for which f(key) > T makes it to the master. Thus, the sketch estimation error only affects the pruning rate. After the sketching pass, the switch blocks all the traffic to the master. We then make a partial second pass (i.e., stream the data again), only for the keys requested by the master. That is, the master gets a superset of the keys that it should output and requests all entries that belong to them. It can then compute the correct SUM/COUNT and remove the incorrect keys (whose true value is at most T). We defer the support for the remaining aggregate operations to future work.

4.4. Projection for High-dimensional Data

So far we mainly focus on database operations on one dimension. However, some queries depend on values of multiple dimensions (e.g., in SKYLINE). Due to the limited number of stages and memory at switches, it is not possible to store and compare each dimension. Therefore, we need to project the multiple dimensions to one value (i.e., a fingerprint). The normal way of projection is to use hashing, which is useful for comparing if an entry is equal to another (e.g., DISTINCT and JOIN). However, for other queries (e.g., SKYLINE), we may need to order multiple dimensions so we need a different projection strategy to preserve the ordering.

Example #6: SKYLINE Query:
The SKYLINE query (borzsony2001skyline) returns all points on the Pareto curve of the d-dimensional input. Formally, a point p is dominated by a point q only if q dominates it on all dimensions, i.e., p_i ≤ q_i for every dimension i. The goal of a SKYLINE query is to find all the points that are not dominated in a dataset. (For simplicity, we consider maximizing all dimensions; we can extend the solution to support minimizing all dimensions with small modifications.) For example, the query

SELECT name FROM Ratings          SKYLINE OF taste, texture

should return (Cheetos, Jello, Burger).

Because skyline relates to multiple dimensions, when we decide whether to store an incoming entry at the switch, we have to compare with all the stored entries because there is no strict ordering among them. For each entry, we have to compare all the dimensions to decide whether to replace it. But the switch does not support conditional write under multiple conditions in one stage. These constraints make it challenging to fit SKYLINE queries on the switch with a limited number of stages.

To address this challenge, we propose to project each high-dimensional entry into a single numeric value. We define a function g that gives a single score to d-dimensional points. We require that g is monotonically increasing in all dimensions, which ensures that if p is dominated by q then g(p) ≤ g(q). In contrast, g(p) ≤ g(q) does not imply that p is dominated by q. For example, we can define g to be the sum or product of the coordinates.

The switch stores a total of s points. Each stored point q takes two stages: one for g(q) and another for all the dimensions of q. When a new point p arrives, for each stored q, the switch first checks whether g(p) > g(q). If so, we replace q and g(q) by p and g(p). Otherwise, we check whether p is dominated by q and, if so, mark the packet for pruning without changing the state. Note that the replace decision is based only on a single comparison (and is thus implementable at switches); the pruning decision is based on comparing all the dimensions, but the switch only drops the packet at the end of the pipeline (not as a same-stage action).

If p replaced some stored point q, we put q in the packet and continue the pipeline with the evicted values. We use a rolling minimum (according to the g values), so the points stored in the switch are those with the highest g values seen so far, which are among the true skyline.
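The projection-plus-domination logic can be modeled in a few lines of Python. This is our own sketch (the store size s, the product score, and the sorted-list store are illustrative; a real switch implements the rolling minimum across pipeline stages). A point is pruned only when a stored point dominates it on every dimension, so no true skyline point is lost:

```python
# SKYLINE sketch: project each point to a monotone score g (product
# here), keep the s highest-scoring points, and prune a point only
# when a stored point dominates it on every dimension.

def g(p):                     # monotone in every coordinate
    out = 1
    for x in p:
        out *= x
    return out

def dominates(q, p):          # q dominates p on all dimensions
    return all(qi >= pi for qi, pi in zip(q, p)) and q != p

S = 2                         # points the "switch" can store
stored = []                   # list of (score, point) pairs

def process(p):
    """Return True to forward p to the master, False to prune."""
    for _score, q in stored:
        if dominates(q, p):
            return False      # provably not on the skyline
    stored.append((g(p), p))
    stored.sort(reverse=True)
    del stored[S:]            # rolling minimum: keep the top-s scores
    return True

ratings = [(7, 5), (8, 6), (9, 4), (5, 7), (3, 3)]
forwarded = [p for p in ratings if process(p)]
skyline = [p for p in ratings
           if not any(dominates(q, p) for q in ratings)]
assert set(skyline) <= set(forwarded)   # pruning is safe
```

Evicting a low-scoring stored point can only reduce future pruning, never correctness, which is why the rolling minimum is safe under the tight store size.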

The remaining question is how to choose the function g. Product (i.e., g(p) = p_1 * ... * p_d) is better than sum (i.e., g(p) = p_1 + ... + p_d) because sum is biased towards the dimension with the larger range (consider one dimension with a much smaller range than another). However, products are hard to implement on switches because they require large values and multiplication. Instead, we use an Approximate Product Heuristic (APH), which uses the monotonicity of the logarithm to represent products as sums of logarithms, and uses the switch TCAM and lookup tables to approximate the logarithm values (see more details in the full version (fullVersion)).

5. Pruning w/ Probabilistic Guarantees

The previous section focuses on providing deterministic pruning guarantees, which always ensure the correctness of the query results. Today, to improve query execution time, database systems sometimes adopt probabilistic guarantees (e.g., (hu2019output)). This means that with a high probability (e.g., 99.99%), we ensure that the output is exactly as expected (i.e., no missing entries and no extra entries). That is, Q(A(D)) = Q(D) with high probability, where Q is the query, A is the pruning algorithm, and D is the data, as in section 3. Such probabilistic guarantees allow users to get query results much faster. (The master can check the extra entries before sending the results to users; Spark can also return the few missing entries at a later time.)

By relaxing to probabilistic guarantees, we can improve the pruning rate by leveraging randomized algorithms to select the entries to store at switches and adopting hashing to reduce the memory usage.

Example #7: (Probabilistic) TOP N Query:
Our randomized TOP N algorithm aims at, with high probability, returning a superset of the expected output (i.e., none of the output entries is pruned). Cheetah randomly partitions the entries into rows as explained in section 4.2: when an entry arrives, we choose a random row for it in {1, ..., R}. In each row, we track the C largest entries mapped to it using a rolling minimum; that is, the largest entry in the row is stored first, then the second largest, and so on. We prune any entry that is smaller than all the entries cached in its row. Cheetah leverages the balls-and-bins framework to determine how to set the dimensions of the matrix (R and C) given N, the goal probability, and the resource constraints (the number of stages limits C while the per-stage memory restricts R).
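This randomized scheme is simple to model. In the Python sketch below (the toy R and C and the seeded RNG are our choices), each value lands in a uniformly random row that keeps its C largest values; we check only the weak invariant that the stream's maximum is never pruned, while the paper's balls-and-bins analysis sizes R and C so that the whole TOP N survives with high probability:

```python
# Sketch of the randomized TOP N pruner: random row per entry, each row
# keeps its C largest values via a rolling minimum, and an entry smaller
# than everything cached in its row is pruned.

import random

R, C = 4, 3
row_caches = [[] for _ in range(R)]

def process(v, rng):
    row = row_caches[rng.randrange(R)]    # random row, not hash-based
    if len(row) == C and v <= min(row):
        return False                      # smaller than all cached: prune
    row.append(v)
    if len(row) > C:
        row.remove(min(row))              # rolling minimum
    return True

rng = random.Random(0)
stream = list(range(1, 101))
random.Random(1).shuffle(stream)
forwarded = [v for v in stream if process(v, rng)]
# weak invariant: the overall maximum is always forwarded; with suitably
# sized R and C, the entire TOP N survives with high probability
assert max(forwarded) == max(stream)
```

The maximum can never be pruned because it is strictly larger than any cached value; whether the rest of the TOP N survives depends on how the random row assignment spreads the large values, which is exactly what the balls-and-bins sizing controls.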

The algorithm is illustrated in Figure 2.

Figure 2. TOP N example on a stream (7,4,7,5,3,2). One entry was mapped to the third row and pruned because all values stored in that row were larger. In contrast, another entry was mapped to the second row and was not pruned. The matrix dimensions are chosen so that, with high probability, none of the TOP N entries is pruned.

The proper configuration of the algorithm is quite delicate. In the full version (fullVersion), we analyze how to set C given a constraint on R, or vice versa. We also show that we achieve the best pruning rate when the matrix size is minimized (if there is no constraint on R or C), thus optimizing the space and the pruning rate simultaneously.

The goal of our algorithm is to ensure that, with probability 1 - δ, where δ is an error parameter set by the user, no more than C of the TOP N values are mapped into the same row. In turn, this guarantees that the pruning operation is successful and that no output entry is pruned. In the following, we assume that R is given (it can be derived from the amount of per-stage memory available on the switch) and discuss how to set the number of matrix columns C. Having more space (a larger R) reduces the required C, while having too few rows may require an excessive number of matrix columns, which may be infeasible due to the limited number of pipeline stages; the concrete values of C needed, e.g., to find the TOP 1000 with probability 99.99%, appear in the full version. Due to lack of space, the proof of the theorem also appears in the full version (fullVersion).

Theorem 1.

For R and δ as above and C chosen accordingly (the exact expression is deferred to the full version (fullVersion)), the TOP N query succeeds with probability at least 1−δ.

In the worst case, if the input stream is monotonically increasing, the switch must pass all entries to ensure correctness. In practice, streams are unlikely to be adversarial, as the order in which they are stored is optimized for performance. We therefore analyze the performance on random streams or, equivalently, on arbitrary streams that arrive in a random order. Going back to the above example of finding the TOP 1000, our algorithm is expected to prune at least 99% of the data, and for a larger table our bound implies expected pruning of over 99.9%. Observe that the logarithmic dependency on the stream length in the following theorem implies that our algorithm works better for larger datasets. The theorem's proof is deferred to the full version (fullVersion).

Theorem 2.

Consider a random-order stream and the TOP N operation with the algorithm parameters discussed above. Then, in expectation, our algorithm prunes at least a fraction of the elements that grows with the stream length (the exact bound is deferred to the full version (fullVersion)).

Optimizing the Space and Pruning Rate

The above analysis considers the number of rows R as given and computes the optimal value for the number of matrix columns C. However, unless one wishes to use the minimal number of matrix columns possible for a given per-stage space constraint, we can simultaneously optimize the space and the pruning rate. To that end, observe that the required space for the algorithm is R·C, while the pruning rate is monotonically decreasing in the matrix size, as shown in Theorem 2. Therefore, by minimizing the product R·C we optimize the algorithm in both aspects. Next, we note that for a fixed error probability δ, the value of C is monotonically decreasing in R, as shown in Theorem 1. We therefore minimize R·C, viewed as a function of R, over the possible values of R (this omits the flooring of C, as otherwise the function is not continuous; the actual optimum, which must be integral, is attained either at that value of R or at one off by one). The solution to this optimization sets R using the Lambert W function, defined as the inverse of f(z) = z·e^z. For example, when finding the TOP 1000 with probability 99.99%, the resulting R and C should be used even if the per-stage space allows a larger R.

Example #8: (Probabilistic) DISTINCT Query:
Some DISTINCT queries run on multiple input columns or on variable-width fields that are too wide and exceed the number of bits that can be parsed from a packet. To reduce the bits, we use fingerprints, which are short hashes of all the input columns that the query runs on.

However, fingerprint collisions may cause the switch to prune entries that have not appeared before and thus produce inaccurate output (note that for some other queries, e.g., JOIN, fingerprint collisions only affect the pruning rate, not correctness). Interestingly, not all collisions are harmful: because the DISTINCT algorithm hashes each entry into a row of the matrix, a fingerprint collision between two entries is harmful only if they are mapped to the same row.
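The row-plus-fingerprint idea can be sketched with a small host-side model. This is our own illustration under assumed names (DistinctPruner, fp_bits); the real switch stores fingerprints in register arrays, and the hash functions here are stand-ins.

```python
import hashlib

def _h(data, salt, mod):
    """Salted hash (a stand-in for the switch's hash units)."""
    digest = hashlib.blake2b(data.encode(), key=salt).digest()
    return int.from_bytes(digest[:8], "big") % mod

class DistinctPruner:
    """Sketch (ours) of fingerprint-based DISTINCT pruning: each entry
    hashes to one of R rows, and the row remembers the fingerprints it
    has seen. A fingerprint collision is harmful only when two distinct
    entries share both the row and the fingerprint."""

    def __init__(self, rows, fp_bits):
        self.rows = [set() for _ in range(rows)]
        self.fp_mod = 1 << fp_bits

    def process(self, entry):
        """Return True to forward to the master, False to prune."""
        row = self.rows[_h(entry, b"row", len(self.rows))]
        fp = _h(entry, b"fp", self.fp_mod)
        if fp in row:
            return False   # fingerprint seen in this row: prune
        row.add(fp)
        return True        # first occurrence (w.h.p.): forward
```

The first occurrence of each key is forwarded and later repeats are pruned; a wrongly pruned fresh key requires both hashes to collide simultaneously, which is what Theorem 3 bounds.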

We prove the following bound on the required fingerprint length in the full version (fullVersion).

Theorem 3.

Let D denote the number of distinct items in the input, and consider storing fingerprints whose length in bits is chosen as a function of D, R, and δ (the exact expression appears in the full version (fullVersion)). Then, with probability 1−δ, there are no harmful false positives and the DISTINCT operation terminates successfully.


The analysis leverages the balls-and-bins framework to derive bounds on the sum of squared loads, where each load is the number of distinct elements mapped into a row. It then bounds the number of distinct elements we can support without a same-row fingerprint collision; even for a small error probability, short fingerprints suffice regardless of the data size. Additionally, we analyze the expected pruning rate in random-order streams and show, in the full version (fullVersion), that we can prune at least a fraction of the entries that depends on the number of distinct elements in the input.

6. Handling multiple queries

Cheetah supports the use case where the query is not known beforehand but only the set of queries (e.g., DISTINCT, TOP N, and JOIN) we wish to accelerate is. Alternatively, the workload may contain complex queries that combine several of our operations. In this scenario, one alternative would be to reprogram the switch once a query arrives. However, this could take upwards of a minute and may not be better than performing the query without Cheetah. Instead, we concurrently pack the different queries that we wish to support onto the switch data plane, splitting the ALU/memory resources among them. This limits the set of queries we can accommodate in parallel, but allows for interactive query processing in a matter of seconds and without recompiling the switch. Further, not all algorithms are heavy in the same type of resource. Some of our queries (e.g., SKYLINE) require many stages but few ALUs and only a little SRAM. In contrast, JOIN may use only a couple of stages while requiring most of the SRAM in them. These differences enable Cheetah to efficiently pack algorithms on the same stages.

At the switch, all queries are performed on the incoming data, giving us a prune/no-prune bit for each query. A single pipeline stage then selects the bit relevant to the current query. We fit multiple queries by repurposing the functionality of ALU results and stages. We evaluate one such combined query in Figure 4. Query A is a filtering query and query B is a SUM + GROUP BY query. To prune the filtering query, we use only a single ALU and 32 bits of stage memory (one index of a 32-bit register) in a stage. We use the remaining ALUs in the same stage to compute (1) hash values and (2) sums required for query B, as discussed in our pruning algorithms. We also use the remaining stage memory in that same stage to store SUM results, ensuring the additional filter query has no impact on the performance of our GROUP BY query.

In more extreme cases, where the number of computation operations required exceeds the ALU count on the switch, it is still possible to fit a set of queries by reusing ALUs and registers across queries with similar processing. For example, an ALU that performs comparisons for filtering queries can be reconfigured, using control plane rules, to work as part of the comparator of a TOP N or HAVING query. We can also use a single stage for more than one task by partitioning its memory, e.g., dedicating part of it to fingerprints for DISTINCT and another part to storing SKYLINE prune points.

Algorithm Defaults #stages #ALUs SRAM #TCAM
DISTINCT FIFO 64b
LRU
SKYLINE SUM 64b 0
APH 64b + 32b
TOP N Det 64b
Rand 64b
GROUP BY 64b 0
JOIN BF MB,
RBF 64b
HAVING 64b 0
Table 4. Summary of the resource consumption of our algorithms, parameterized by the number of ALUs per stage on the switch. The algorithms denoted by (*) assume that same-stage ALUs can access the same memory space. For SKYLINE, the above assumes a bound on the dimension of the data.

7. Implementation

7.1. Cheetah prototype

We built the Cheetah data plane with in-network pruning using a Barefoot Tofino (tofino) switch and P4 (P4Spec). Each query requires between 10 and 20 control plane rules, excluding the rules needed for TCP/IP routing and forwarding. Any of the Big Data benchmark workloads can be configured using fewer than 100 control plane rules. We also developed a DPDK-based Cheetah end-host service using about 3500 lines of C.

We deploy five Spark workers, each along with an instance of CWorker, connected to the switch via DPDK-compliant 40G Mellanox NICs. We restrict the NIC bandwidth to 10G and 20G for our evaluation. All workers have two CPU cores and 4 GB of memory. The CWorker sends data to the master via UDP at a rate of 10 million packets per second (i.e., a throughput of 5.1 Gbps, since the minimum Ethernet frame is 64 bytes) with one entry per packet. We use optimized tasks for Spark for a fair comparison. We also mount a Linux tmpfs RAM disk on the workers to store the dataset partitions, allowing Spark to take advantage of its main-memory-optimized query plans.

Spark optimizes the completion time by minimizing data movement. In addition to running tasks on workers to reduce the volume sent to the master, Spark compresses the data and packs multiple entries in each packet (often, the maximum allowed by the network MTU). In contrast, Cheetah must send the data uncompressed while packing only a small number of entries in each packet. Spark also leverages an optimization called late materialization (latematerialization) in which only a metadata stream is sent in the first stage, and the entire entries are requested by the master once it computes which tuples are part of the result. We expand on how Cheetah supports late materialization in the full version (fullVersion).

Currently, our prototype includes the DISTINCT, SKYLINE, TOP N, GROUP BY, JOIN, and filtering queries. We also support combining these queries and running them in parallel without reprogramming the switch.

Cheetah Modules: We create two background services that communicate with PySpark, called Cheetah Master (CMaster) and Cheetah Worker (CWorker), running on the same servers that run the Spark master and Spark workers, respectively. The CMaster bypasses a supported PySpark query and instead sends a control message to all CWorkers specifying the dataset and the columns relevant to the query. The CWorker packs the necessary columns into UDP packets with Cheetah headers and sends them via Intel DPDK (dpdk). Our experiments show a CWorker can generate over 12 million packets per second when the link and NIC are not a bottleneck.

In parallel, the master server also communicates with the switch to install the control plane rules relevant to the query. The switch parses and prunes some of the packets it processes. The remaining packets are received at the master using an Intel DPDK packet memory buffer, are parsed, and copied into userspace. The remaining processing for the query is implemented in C. The Cheetah master continues processing all entries it receives from the switch until receiving FINs from all CWorkers, indicating that data transmission is done. Finally, the CMaster sends the final set of row ids and values to the Python PySpark script (or shell).

Switch logic: We use a Barefoot Tofino (tofino) and implement the pruning algorithms (see Section 3) in the P4 language. The switch parses the header and extracts the values, which then proceed to the algorithm processing. The switch then decides whether to prune each packet or forward it to the master. It also participates in our reliability protocol, which takes two pipeline stages on the hardware switch.

Resource Overheads: All our algorithms are parametric and can be configured for a wide range of resources. We summarize the hardware resource requirements of the algorithms in Table 4 and expand on how they are calculated in the full version of the paper (fullVersion).

7.2. Communication protocol

Figure 3. Cheetah packet and ACK format. The packets encode the flow and entry identifiers in addition to the relevant values.

Query and response format: For communication between the CMaster node and the CWorkers (passing through, and getting pruned by, the switch), we implement a reliable transmission protocol built on top of UDP. Each message contains the entry identifier along with the relevant column values or hashes. Our protocol runs on a separate port from Spark and uses a separate header; it does not use Spark's serialization implementation, and its channel is completely decoupled from, and transparent to, the ordinary communication between the Spark master and worker nodes. Our packet and header formats appear in Figure 3. We support variable-length headers to allow different numbers of columns and column widths (e.g., TOP N has one value per entry while JOIN/GROUP BY have two or more). The number of values is specified in an 8-bit field. The flow ID (fid) field is needed when processing multiple datasets and/or queries concurrently.

For simplicity, we store one entry in each packet; we discuss how to handle multiple entries per packet in Section 9.
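A variable-length record of this shape can be sketched with Python's struct module. The 8-bit value count comes from the text above; the other field widths (a 16-bit fid, a 32-bit entry identifier, 32-bit column values) are our assumptions, not the paper's wire format.

```python
import struct

# Assumed layout (ours): fid (16b) | entry id (32b) | count (8b) | values (32b each)
_FIXED = "!HIB"                       # network byte order, 7-byte fixed part

def pack_cheetah(fid, entry_id, values):
    """Serialize one Cheetah record with a variable number of column values."""
    return struct.pack(f"{_FIXED}{len(values)}I", fid, entry_id,
                       len(values), *values)

def unpack_cheetah(buf):
    """Parse a record back into (fid, entry_id, values)."""
    fid, entry_id, n = struct.unpack_from(_FIXED, buf)
    values = struct.unpack_from(f"!{n}I", buf, struct.calcsize(_FIXED))
    return fid, entry_id, list(values)
```

A TOP N record would carry one value while a JOIN/GROUP BY record would carry two or more, with the count field telling the parser how many values to read.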

Reliability protocol: We use UDP connections for CWorkers to send metadata responses to CMasters to ensure low latency. However, we need to add a reliability protocol on top of UDP to ensure the correctness of query results.

The key challenge is that we cannot simply maintain a sequence number at the CWorkers and identify lost packets at the CMasters, because the switch prunes some packets. Thus, we need the switch to participate in the reliability protocol and ACK the pruned packets to distinguish them from unintentional packet losses.

Each worker uses the entry identifiers also as packet sequence numbers. It maintains a timer for every unACKed packet and retransmits the packet if no ACK arrives in time. The master simply ACKs every packet it receives. For each fid, the switch maintains the sequence number of the last packet it processed, denoted lastSeq, regardless of whether that packet was pruned. When a packet with sequence number seq arrives at the switch, the action taken depends on the relation between seq and lastSeq.

If seq = lastSeq + 1, the switch processes the packet, increments lastSeq, and decides whether to prune or forward the packet. If the switch prunes the packet, it sends an ACK(seq) message to the worker; otherwise, the master that receives the packet sends the ACK. If seq ≤ lastSeq, this is a retransmission of a packet that the switch has processed before, so the switch forwards the packet without processing it. If seq > lastSeq + 1, a previous packet was lost before reaching the switch; the switch drops the packet and waits for the missing one to be retransmitted.

This protocol guarantees that every packet either reaches the master or gets pruned by the switch. Importantly, the protocol maintains the correctness of the execution even if ACKs for pruned packets are lost and the retransmissions make it to the master. The reason is that all our algorithms have the property that any superset of the data the switch chooses not to prune results in the same output. For example, in a DISTINCT query, if an entry is pruned but its retransmission reaches the master, the master can simply discard the duplicate.
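The switch-side decision logic above can be sketched as a tiny state machine. This is a host-side model under our own naming (SwitchReliability, the action strings, and the "next expected" convention are ours); on hardware this is realized in two pipeline stages.

```python
class SwitchReliability:
    """Sketch (ours) of the per-fid switch logic: track the next expected
    sequence number and decide what to do with each arriving packet."""

    def __init__(self, pruner):
        self.expected = {}   # fid -> next expected sequence number
        self.pruner = pruner # per-packet prune decision (query algorithm)

    def on_packet(self, fid, seq, value):
        """Return 'prune+ack', 'forward', or 'drop' for this packet."""
        exp = self.expected.get(fid, 0)
        if seq == exp:                     # in-order: process normally
            self.expected[fid] = exp + 1
            return "prune+ack" if self.pruner(value) else "forward"
        if seq < exp:                      # retransmission of a packet the
            return "forward"               # switch already processed once
        return "drop"                      # gap: wait for the missing packet
```

Forwarding retransmissions unprocessed is what makes correctness hinge on the superset property: a once-pruned entry may reach the master, which can safely keep or discard it.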

8. Evaluation

Figure 4. A comparison of Cheetah and Spark in terms of completion time on the Big Data benchmark for the benchmark queries (first four) and the other queries supported by Cheetah.

We perform test-bed experiments and simulations. Our test-bed experiments show that Cheetah achieves a 40-200% improvement in query completion time over Spark. Our simulations show that Cheetah achieves a high pruning ratio with a modest amount of switch resources.

8.1. Evaluation setup

Benchmarks: Our test-bed experiments use the Big Data (pavlo) and TPC-H (tpch) benchmarks. From the Big Data benchmark, we run queries A (filtering; as the data is nearly sorted on the filtered column, we run the query on a random permutation of the table), B (we offload the group-by), and A+B (both A and B executed sequentially). For TPC-H, we run query 3, which consists of two join operations, three filtering operations, a group-by, and a TOP N. We also evaluate each algorithm separately using a dedicated query (explained in (fullVersion)) on the Big Data benchmark's tables.

8.2. Testbed experiments

We run the Big Data benchmark on a local cluster with five workers and one master, all of which are connected directly to the switch. Our sample contains 31.7 million rows of the uservisits table and 18 million rows of the rankings table. We run the TPC-H benchmark at its default scale with one worker and one master. Cheetah offloads the join part of the TPC-H query because it takes 67% of the query time and is the most effective use of switch resources.

8.2.1. Benchmark Performance

Figure 4 shows that Cheetah decreases the completion time of BigData B, BigData A+B, and TPC-H Query 3 compared both to Spark's first run and to its subsequent runs. Spark's subsequent runs are faster than the first because Spark indexes the dataset based on the given workload after the first run. Cheetah also reduces the completion time of other database operations such as DISTINCT, GROUP BY, SKYLINE, TOP N, and JOIN. Cheetah improves performance on these computation-intensive aggregation queries because it offloads the expensive task computation that Spark runs at the workers to the switch's data plane.

BigData A (filtering) does not have a high computation overhead. Hence, Cheetah's performance is comparable to Spark's first run but worse than Spark's subsequent runs. This is because Cheetah has the extra overhead of serializing data at the workers to allow processing at the switch, and this serialization adds more latency than the time saved by switch pruning. Cheetah performs the combined query A+B faster than the sum of the individual completion times because it pipelines the pre-processing of columns for the combined query, resulting in faster serialization at the CWorker.

8.2.2. Effect of Data Scale and Number of Workers

In Figure 7(a), we vary the number of entries per partition (worker) while keeping the total number of entries fixed. Not only is Cheetah quicker than Spark, but the gap widens as the data scale grows; Cheetah may therefore also offer better scalability for large datasets. Figure 7(b) shows the performance when fixing the overall number of entries and varying the number of workers. Cheetah improves on Spark by about the same factor across different numbers of partitions. In both experiments, we ignore the completion time of Spark's first run on the query and only show subsequent runs (which perform better due to caching/indexing and JIT compilation effects (dataanalyticsbottleneck; sparksql)).

(a) Varying partition size
(b) Varying partition count
Figure 7. The performance of Cheetah vs. Spark SQL on DISTINCT query.

8.2.3. Effect of Network Rate

Unlike Spark, which is often bottlenecked by computation (dataanalyticsbottleneck), Cheetah is mainly limited by the network when using a 10G NIC limit. We therefore also run Cheetah with a 20G NIC limit and show a breakdown of where each system spends its time. Figure 9 illustrates how Cheetah diminishes the time spent at the workers at the expense of more sending time and longer processing at the master; the computation for Cheetah is done entirely at the master server, with the workers just serializing packets to send them over the network. When the speed is increased to 20G, the completion time of Cheetah improves by nearly 2x, confirming that the network is the bottleneck. As in Section 8.2.2, we discard Spark's first run.

Figure 8. Overhead of moving results from the switch dataplane to the master server via packet draining on TPC-H Q3’s order key join. We vary the result size by changing filter ranges in the query.

8.2.4. Comparison with NetAccel (lerner2019case)

NetAccel is a recent system that offloads entire queries to switches. Since the switch data plane limitations may not allow a complete computation of queries, NetAccel overflows some of the data to the switch’s CPU. At the end of the execution, NetAccel drains the output (which is on the switch) and provides it to the user. Cheetah’s fundamental difference is that it does not aim to complete the execution on the switch and only prunes to reduce data size. As a result, Cheetah is not limited to queries whose execution can be done on the switch (NetAccel only supports join and group by), does not need to read the output from the switch (thereby saving latency), and can efficiently pipeline query execution.

As NetAccel stores results at switches, it must drain the output to complete the execution. This process adds latency, as shown in Figure 8. We note that NetAccel's code is not publicly available, and the depicted results are a lower bound obtained by measuring the time it takes to read the output from the switch. That is, this lower bound represents an ideal case where there are enough resources in the data plane for the entire execution and no packet is overflowed to the CPU. We also assume that NetAccel's pruning rate is as high as Cheetah's. Moreover, the query engine cannot pipeline partial results into the next operation in the workload if they are stored in the switch.

Figure 9. Breakdown of Spark's and Cheetah's delay for different network rates. Spark's bottleneck is not the network, and it does not improve when using a faster NIC.
Figure 10. The time it takes the Master to complete DISTINCT and max-GROUP BY queries for a given pruning rate.
(a) DISTINCT
(b) SKYLINE
(c) TOP N
(d) DISTINCT
(e) SKYLINE (APH Heuristic)
(f) TOP N
Figure 17. The pruning performance of our algorithms for a given resource setting ((a)-(c)) and vs. the data scale ((d)-(f)). Notice that the y-axis is logarithmic and shows the unpruned fraction; for example, a value of 0.01 means that 99% of the entries are pruned.

8.3. Pruning Rate Simulations

We use simulations to study the pruning rates under various algorithm settings and switch constraints. The pruning rates dictate the number of entries that reach the master and therefore impact the completion time.

To understand how pruning rate affects completion time, we measure the time the master needs to complete the execution once it gets all entries. Figure 10 shows that the time it takes the master to complete the query significantly increases with more unpruned entries. The increase is super-linear in the unpruned rate since the master can handle each arriving entry immediately when almost all entries are pruned. In contrast, when the pruning rate is low, the entries buffer up at the master, causing an increase in the completion time.

The desired pruning rate depends on the complexity of a query's software algorithm. For example, TOP N is implemented on the master using an N-sized heap and processes millions of entries per second. In contrast, SKYLINE is computationally expensive, and thus we should prune more entries to avoid the master becoming a bottleneck. We only show three queries here; experiments for the remaining queries appear in the full version (fullVersion).

Pruning Rate vs. Resources Tradeoff: We evaluate the pruning rate that Cheetah achieves for given hardware constraints. In all figures, OPT depicts a hypothetical stream algorithm with no resource constraints; for example, in TOP N it shows the fraction of entries that were among the N largest entries from the beginning of the stream. Therefore, OPT is an upper bound on the pruning rate of any switch algorithm. The results are depicted in Figures 17(a)-(c). We ran each randomized algorithm five times and used a two-tailed Student's t-test to determine the 95% confidence intervals. We configured the randomized algorithms for a 99.99% success probability. In Figure 17(a), we see that Cheetah can prune all non-distinct entries; with a smaller matrix or the FIFO policy the pruning rate is slightly lower, but Cheetah still prunes over 99% of the entries using just a few KB of space. Figure 17(b) shows the SKYLINE results; as expected, for the same number of points, APH outperforms the SUM heuristic and prunes all non-skyline points. Both APH and SUM prune over 99% of the entries, while Baseline, which stands for an algorithm that stores arbitrary points for pruning, requires more points for 99% pruning. Both heuristics allow the switch to "learn" a good set of points to use for the pruning. Figure 17(c) shows TOP N and illustrates the power of the randomized approach. While the deterministic algorithm can run with fewer stages and ensure correctness, if we allow just a 0.01% chance of failure, we can significantly increase the pruning rate. Here, the randomized algorithm reaches about 99.995% pruning, leaving only a small constant factor more packets than the optimal number. The strict requirement for a high success probability forces the algorithm to prune only entries that are almost certainly not among the TOP N.

Pruning Rate vs. the Data Scale: The pruning rate of each algorithm behaves differently as the data scale grows. Here, each data point refers to a prefix of the relevant data set. Figures 17(d)-(f) show how the pruning rate varies as the scale grows. For the shown queries, Cheetah achieves a better pruning rate on larger data. For DISTINCT, this is because we cannot prune the first occurrence of an output key, but once our data structure reflects these keys, pruning improves. In SKYLINE and TOP N, a smaller fraction of the input entries is needed for the output as the data scale grows, allowing the pruning of more entries. In contrast, the algorithms for JOIN and HAVING have better pruning rates on smaller data sets, as shown in the full version (fullVersion). In JOIN, the algorithm experiences more false positives as the data keeps arriving and therefore prunes a smaller fraction of the entries. The HAVING query is conceptually different: as it asks for the codes of languages whose sum of ad revenue is larger than $1M, the output is empty if the data is too small. The one-sided error of the Count-Min sketch that we use guarantees that we do not miss any of the correct output keys, but the number of false positives increases as the data grows. Nevertheless, with a modest number of counters in each of the three rows, Cheetah gets near-perfect pruning throughout the evaluation.

9. Extensions

Multiple switches: We have considered a single programmable switch in the path between the workers and the master. However, having multiple switches boosts our performance further. For example, we can use a “master switch” to partition the data and offload each partition to a different switch. Each switch can perform local pruning of its partition and return it to the master switch which prunes the data further. This increases the hardware resources at our disposal and allows superior pruning results.

DAG of workers: Our paper focuses on the case where there is one master and multiple workers. However, in large-scale deployments or complex workloads, query planning may result in a directed acyclic graph (DAG) of workers, each of which takes several inputs, runs a task, and outputs to a worker in the next level. In such cases, we can run Cheetah on each edge over which data is sent between workers. To distinguish between edges, each has a dedicated port number and a set of resources (stages, ALUs, etc.) allocated to it. To that end, we use the same packing algorithm described in Section 6.

Packing multiple entries per packet: Cheetah spends a significant portion of its query processing time on transmitting the entries from the workers. This is due to two factors: first, it does not run tasks on the workers that filter many of the entries; second, it packs only one entry in each packet. While the switch cannot process a very large number of entries per packet due to its limited ALUs, we can still pack several (e.g., four) entries in a packet, thereby significantly reducing this delay. P4 switches allow popping header fields (P4Spec) and thereby support pruning some of the entries in a packet. The limit on the number of entries per packet depends on the number of ALUs per stage (all our algorithms use at least one ALU per entry per stage) and on the number of stages (we can split a logical stage into several physical stages if the pipeline is long enough). Our DISTINCT, TOP N, and GROUP BY algorithms support multiple entries per packet while maintaining correctness: if several entries are mapped to the same matrix row, we can avoid processing them while not pruning them.

10. Related Work

This work has not been published elsewhere except for a 2-page poster at SIGCOMM (sigcommPoster). The poster discusses simple filtering, DISTINCT, and TOP N. This work significantly advances the poster by providing pruning algorithms for additional queries, an evaluation on two popular benchmarks, and a comparison with NetAccel (lerner2019case). This work also discusses probabilistic pruning, optimizing multiple queries, using multiple switches, and a reliability protocol.

Hardware-based query accelerators: Cheetah follows a trend of accelerating database computations by offloading computation to hardware. Industrial systems  (Netezza; Exadata) offload parts of the computation to the storage engine. Academic works suggest offloading to FPGAs (Cipherbase; dennl2012fly; sukhwani2012database; woods2014ibex), SSDs (do2013query), and GPUs (paul2016gpl; Govindaraju:2004:FCD:1007568.1007594; Sun:2003:HAS:872757.872813). These either consider offloading the entire operation to hardware (lerner2019case; woods2014ibex), or doing a per-partition exact aggregation/filtering before transmitting the data for aggregation (do2013query). However, exact query computation on hardware is challenging and these only support basic operations (e.g., filtering (Netezza; do2013query)) or primitives (e.g., partitioning (FPGAPartitioning) or aggregation (dennl2013acceleration; woods2014ibex)).

Cheetah uses programmable switches, which are either cheaper or offer better performance than alternative hardware such as FPGAs. Compared to FPGAs, switches handle two orders of magnitude more throughput per Watt and ten times more throughput per dollar (switchcost; Tokusashi:2019:CIC:3302424.3303979). GPUs consume 2-3x more energy than FPGAs for equivalent data processing workloads and cost double that of an FPGA with similar characteristics (Owaida:2019:LLD:3357377.3365457). A summary of the attributes of the different alternatives appears in Table 5. Switches are also readily available in the network at no additional cost. We offload partial functions onto switches using the pruning abstraction and support a variety of database queries.

System: Server | GPU (gpubw) | FPGA (netfpga) | SmartNIC (mellanox) | Tofino V2 (tofinov2)
Throughput: 10-100 Gbps | 40-120 Gbps | 10-100 Gbps | 10-100 Gbps | 12.8 Tbps
Latency: 10-100 µs | 8-25 µs | 10 µs | 5-10 µs | 1 µs
Table 5. Performance comparison of hardware choices.

Sometimes, FPGAs and GPUs also incur extra data-transmission overhead. For example, a GPU's separate memory system introduces significant performance overhead, with extra computation and memory demand (woods2014ibex). When an FPGA is attached to the PCIe bus (Cipherbase; Netezza), we have to copy the data to and from the FPGA explicitly (woods2014ibex). One work has used FPGAs as an in-datapath accelerator to avoid this transfer (woods2014ibex).

Moreover, switches can see the aggregated traffic across workers and the master, and thus allow optimizations across data partitions. In contrast, FPGAs are typically connected to individual workers due to bandwidth constraints  (Netezza) and can only optimize the query for each partition. That said, Cheetah complements these works as switches can be used with FPGA and GPU-based solutions for additional performance.

Offloading to programmable switches: Several works use programmable switches for offloading different functionality that was handled in software (NetCache; DistCache; NetChain; NetPaxos; SilkRoad; Jepsen:2018:LFL:3185467.3185494; Jepsen:2018:PSP:3286062.3286092; Snappy; PRECISION; harrison2018network). Cheetah offloads database queries, which brings new challenges to fit the constrained switch programming model because database queries often provide a large amount of data and require diverse computations across many entries. One opportunity in databases is that the master can complete the query from the pruned data set.

In the network telemetry context, researchers proposed Sonata, a general monitoring abstraction that allows scripting for analytics and security applications (sonata). Sonata supports filtering, map, and a constrained version of DISTINCT in the data plane but relies on a software stream processor for other operations (e.g., GROUP BY, TOP N, SKYLINE). Conceptually, as Sonata offloads only operations that can be fully computed in the data plane, its scope is limited. Sparser (palkar2018filter) accelerates text-based filtering using SIMD.

Recently, NetAccel (lerner2019case) suggested using programmable switches for query acceleration. We discuss and evaluate the differences between Cheetah and NetAccel in section 8.2.4. Jumpgate  (mustard2019jumpgate) also suggests accelerating Spark using switches. It uses a method similar to NetAccel. However, while Cheetah and NetAccel are deployed in between the worker and master server, Jumpgate stands between the storage engine and compute nodes. Jumpgate does not include an implementation, is specific to filtering and partial aggregation, and cannot cope with packet loss.

11. Conclusion

We present Cheetah, a new query processing system that significantly reduces query completion time compared to the current state-of-the-art for a variety of query types. Cheetah accelerates queries by leveraging programmable switches while using a pruning abstraction to fit in-switch constraints without affecting query results.

12. Acknowledgements

We thank the anonymous reviewers for their valuable feedback. We thank Mohammad Alizadeh and Geeticka Chauhan for their help and guidance in the early stages of this project, and Andrew Huang for helping us validate Spark SQL queries. This work is supported by the National Science Foundation under grant CNS-1829349 and the Zuckerman foundation.

References