Approximate Distributed Joins in Apache Spark

05/15/2018 ∙ by Do Le Quoc, et al.

The join operation is a fundamental building block of parallel data processing. Unfortunately, computing an equi-join across massive datasets is very resource-intensive. The approximate computing paradigm allows users to trade accuracy for latency in expensive data processing operations. The equi-join operator is thus a natural candidate for optimization using approximation techniques. Although sampling-based approaches are widely used for approximation, sampling over joins is a compelling but challenging task with respect to output quality. Naive approaches, which perform joins over dataset samples, do not preserve the statistical properties of the join output. To realize the potential of approximate joins, we interweave Bloom filter sketching and stratified sampling with the join computation in a new operator, ApproxJoin, that preserves the statistical properties of the join output. ApproxJoin leverages a Bloom filter to avoid shuffling non-joinable data items around the network and then applies stratified sampling to obtain a representative sample of the join output. Our analysis shows that ApproxJoin scales well and significantly reduces data movement, without sacrificing tight error bounds on the accuracy of the final results. We implemented ApproxJoin in Apache Spark and evaluated it using microbenchmarks and real-world case studies. The evaluation shows that ApproxJoin achieves a speedup of 6-9x over unmodified Spark-based joins with the same sampling rate. Furthermore, the speedup is accompanied by a significant reduction in the shuffled data volume, which is 5-82x less than unmodified Spark-based joins.

1. Introduction

The volume of digital data has grown exponentially over the last ten years. A key contributor to this growth has been loosely-structured raw data that are perceived to be cost-prohibitive to clean, organize and store in a database management system (DBMS). These datasets are frequently stored in data repositories (often called “data lakes”) for just-in-time querying and analytics. Extracting useful knowledge from a data lake is a challenge since it requires data analytics systems that adapt to variety in the output of different data sources and answer ad-hoc queries over vast amounts of data quickly.

To pluck the valuable information from raw data, data processing frameworks such as Hadoop (hadoop, ), Apache Spark (spark, ) and Apache Flink (flink, ) are widely used to perform ad-hoc data manipulations and then combine data from different input sources using a join operation. While joins are a critical building block of any analytics pipeline, they are expensive to perform, especially with regard to communication costs in distributed settings. It is not uncommon for a parallel data processing framework to take hours to process a complex join query (Hive2010, ).

Figure 1. Comparison between different sampling strategies for distributed join with varying sampling fractions.

Parallel data processing frameworks are thus embracing approximate computing to answer complex queries over massive datasets quickly (BlinkDB, ; BlinkDB-2, ; quickr-sigmod, ; wander-join, ). The approximate computing paradigm is based on the observation that approximate rather than exact results suffice if real-world applications can reason about measures of statistical uncertainty such as confidence intervals (approx-ex-1, ; approx-ex-2, ). Such applications sacrifice accuracy for lower latency by processing only a fraction of massive datasets. What response time and accuracy targets are acceptable for each particular problem is determined by the user who has the necessary domain expertise.

However, approximating join results by sampling is an inherently difficult problem from a correctness perspective, because uniform random samples of the join inputs cannot construct an unbiased random sample of the join output (samplingoverjoin, ). In practice, as shown in Figure 1, sampling input datasets before the join and joining the samples sacrifices up to an order of magnitude in accuracy; sampling after the join is accurate but also slower due to the data that are shuffled to compute the join result.

Obtaining a correct and precondition-free sample of the join output in a distributed computing framework is a challenging task. Previous work has assumed some prior knowledge about the joined tables, often in the form of an offline sample or a histogram (quickr-sigmod, ; BlinkDB, ; aqua, ). Continuously maintaining histograms or samples over the entire dataset (petabytes of data) is unrealistic, as ad-hoc analytical queries process raw data selectively. Join approximation techniques for a DBMS, like RippleJoin (ripple-join, ) and WanderJoin (wander-join, ), have not considered the intricacies of HDFS-based processing, where random disk accesses are notoriously inefficient and data have not been indexed in advance. In addition, both algorithms are designed for single-node join processing; parallelizing the optimization procedure for a Spark cluster is non-trivial.

In this work, we design a novel approximate distributed join algorithm that combines a Bloom filter sketching technique with stratified sampling during the join operation. The Bloom filter curtails redundant shuffling of tuples that will not participate in the subsequent join operations, thus reducing communication and processing overhead. In addition, ApproxJoin automatically selects and progressively refines the sampling rate to meet user-defined latency and quality requirements by using the Bloom filter to estimate the cardinality of the join output.

Once the sampling rate has been determined, stratified sampling over the remaining tuples produces a sample of the join output that approximates the result of the entire join. However, sampling without coordination from concurrent processes can introduce bias in the final result, adversely affecting the accuracy of the approximation. ApproxJoin removes this bias using the Central Limit Theorem and the Horvitz-Thompson estimator. The proposed join mechanism can be used for both two-way joins and multi-way joins. As shown in Figure 1, sampling during the join produces accurate results with fast response times.

We implemented ApproxJoin in Apache Spark (spark, ; spark-nsdi-2012, ) and evaluated its effectiveness via microbenchmarks, TPC-H queries, and a real-world workload. Our evaluation shows that ApproxJoin achieves a speedup of 6-9x over Spark-based joins with the same sampling fraction. ApproxJoin leverages Bloom filtering to reduce the shuffled data volume during the join operation by 5-82x compared to Spark-based systems. Without any sampling, our microbenchmark evaluation shows that ApproxJoin is faster than both the native Spark RDD join (spark-nsdi-2012, ) and a Spark repartition join. In addition, our evaluation with the TPC-H benchmark shows that ApproxJoin is faster than the state-of-the-art SnappyData system (snappydata, ).

To summarize, our contributions are:


  • A novel mechanism to perform stratified sampling over joins in parallel computing frameworks that relies on a Bloom filter sketching technique to preserve the statistical quality of the join output.

  • A progressive refinement procedure that automatically selects a sampling rate that meets user-defined latency and accuracy targets for approximate join computation.

  • An extensive evaluation of an implementation of ApproxJoin in Apache Spark using microbenchmarks, TPC-H queries, and a real-world workload that shows that ApproxJoin outperforms native Spark-based joins and the state-of-the-art SnappyData system by a substantial margin.

The remainder of the paper is organized as follows. We first provide an overview of the system (§2). Next, we describe our approach to mitigate the overhead of distributed join operations (§3). Thereafter, we present the implementation of ApproxJoin (§4) and its evaluation (§5 and §6). Finally, we present the related work and conclusions in §7 and §8, respectively.

2. Overview

ApproxJoin is designed to mitigate the overhead of distributed join operations in big data analytics systems, such as Apache Flink or Apache Spark. The input of ApproxJoin consists of several datasets to be joined. We facilitate joins on the input datasets by providing a simple user interface. The user submits the join query and its corresponding query execution budget. The query budget can be in the form of expected latency guarantees or the desired accuracy level. Our system ensures that the input data is processed within the specified query budget. To achieve this, ApproxJoin applies the approximate computing paradigm by processing only a subset of the data items from the input datasets to produce an approximate output with error bounds. At a high level, ApproxJoin makes use of a combination of sketching and sampling to select a subset of the input datasets based on the user-specified query budget.

Design goals. We had the following goals when we designed and implemented ApproxJoin:


  • Transparency: Provide a simple programming interface to users that is similar to the join operation of state-of-the-art systems. This goal implies that there will be negligible (or no) modifications to existing programs.

  • Query budget guarantees: Ensure that the join operation is performed within the query budget supplied by the user in the form of desired latency or desired error bound. This goal implies that the system should accurately estimate the latency and error bounds of the approximation in the join operation.

  • Efficiency: Handle very large input datasets in an efficient and cost-effective manner. This goal implies that the system reduces the usage of resources (e.g., network, CPU) as much as possible.

Query interface. ApproxJoin supports joins with algebraic aggregation functions, such as SUM, AVG, COUNT, and STDEV. In addition, a query execution budget is provided to specify either the latency requirement or the desired error bound. More specifically, consider the case where a user wants to perform an aggregation query after an equi-join on a common attribute K across n input datasets R_1, ..., R_n. The user sends the query and supplies a query budget to ApproxJoin. The query budget can be in the form of a desired latency t_d or a desired error bound e_d. For instance, if the user wants to achieve a desired latency (e.g., t_d seconds) or a desired error bound (e.g., e_d with a confidence level of c%), he/she defines the query as follows: SELECT SUM(R_1.V + R_2.V + … + R_n.V)
FROM R_1, R_2, …, R_n
WHERE R_1.K = R_2.K = … = R_n.K
WITHIN t_d SECONDS
OR
ERROR e_d CONFIDENCE c%

ApproxJoin executes the query and returns the most accurate query result within the desired latency of t_d seconds, or returns the query result within the desired error bound e_d at the c% confidence level.

Figure 2. ApproxJoin system overview (shaded boxes depict the implemented modules in Apache Spark).

Design overview. The basic idea of ApproxJoin is to address the shortcomings of the existing join operations in big data systems by reducing the number of data items that need to be processed. Our first intuition is that we can reduce the latency and computation of a distributed join by removing redundant transfer of data items that are not going to participate in the join. Our second intuition is that the exact results of the join operation may be desired, but not necessarily critical, so that an approximate result with well-defined error bounds can also suffice for the user.

Figure 2 shows an overview of our approach. There are two stages in ApproxJoin for the execution of the user’s query:

Stage #1: Filtering redundant items. In the first stage, ApproxJoin determines the data items that are going to participate in the join operation and filters the non-participating items. This filtering reduces the data transfer that needs to be performed over the network for the join operation. It also ensures that the join operation will not include ‘null’ results in the output that will require special handling, as in WanderJoin (wander-join, ). ApproxJoin employs a well-known data structure, Bloom filter (bloom-filter, ). Our filtering algorithm executes in parallel at each node that stores partitions of the input and handles multiple input tables at the same time.

Stage #2: Approximation in distributed joins. In the second stage, ApproxJoin uses a sampling mechanism that is executed during the join process: we sample the input datasets while the cross product is being computed. This mechanism overcomes the limitations of the previous approaches and enables us to achieve low latency as well as preserve the quality of the output as highlighted in Figure 1. Our sampling mechanism is executed during the join operation and preserves the statistical properties of the output.

In addition, we combine our mechanism with stratified sampling (stratified-sampling, ), where tuples with distinct join keys are sampled independently with simple random sampling. As a result, data items with different join keys are fairly selected to represent the sample, and no join key will be overlooked. The final sample will contain all join keys—even the ones with few data items—so that the statistical properties of the sample are preserved.

More specifically, ApproxJoin executes the following steps for approximation in distributed joins:

Step #2.1: Determine sampling parameters. ApproxJoin employs a cost function to compute an optimal sample size according to the corresponding budget of the query. This computation ensures that the query is executed within the desired latency and error bound parameters of the user.

Step #2.2: Sample and execute query. Using this sampling rate parameter, ApproxJoin samples during the join and then executes the aggregation query using the obtained sample.

Step #2.3: Estimate error. After executing the query, ApproxJoin provides the approximate result together with a corresponding error bound, in the form of output ± error, to the user.

Note that our sampling parameter estimation provides an adaptive interface for selecting the sampling rate based on the user-defined accuracy and latency requirements. ApproxJoin adapts by activating a feedback mechanism to refine the sampling rate after learning the data distribution of the input datasets (shown by the dashed line in Figure 2).

Figure 3. Bloom filter building for two datasets. Algorithm 1 generalizes for distributed multi-way joins.

3. Design

In this section, we explain the design details of ApproxJoin. We first describe how we filter redundant data items for multiple datasets to support multiway joins (§3.1). Then, we describe how we perform approximation in distributed joins using three main steps: (1) how we determine the sampling parameter to satisfy the user-specified query budget (§3.2), (2) how our novel sampling mechanism executes during the join operation (§3.3), and finally (3) how we estimate the error for the approximation (§3.4).

3.1. Filtering Redundant Items

In a distributed setting, join operations can be expensive due to the communication cost of the data items. This cost can be especially high in multi-way joins, where several datasets are involved in the join operation. One reason for this high cost is that data items not participating in the join are shuffled through the network during the join operation.

To reduce this communication cost, we need to distinguish such redundant items and avoid transferring them over the network. In ApproxJoin, we use Bloom filters for this purpose. The basic idea is to utilize Bloom filters as a compressed set of all items present at each node and combine them to find the intersection of the datasets used in the join. This intersection will represent the set of data items that are going to participate in the join.

A Bloom filter is a data structure designed to query the presence of an element in a dataset in a rapid and memory-efficient way (bloom-filter, ). There are three reasons why we choose Bloom filters for our purpose. First, querying the membership of an element is efficient: it has O(h) complexity, where h denotes the (constant) number of hash functions. Second, the size of the filter is linearly correlated with the size of the input, but it is significantly smaller than the original input. Finally, constructing a Bloom filter is fast and requires a single pass over the input.
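To make the data structure concrete, the following is a minimal Bloom filter sketch in Scala. It is not ApproxJoin's actual implementation; the class name, the use of double hashing, and the bit-set representation are illustrative choices. It exposes the two merge operations that the rest of this section relies on: a bitwise AND (intersection, used for the join filter) and a bitwise OR (union, used for the dataset filters).

  import java.util.BitSet
  import scala.util.hashing.MurmurHash3

  class SimpleBloomFilter(numBits: Int, numHashes: Int) extends Serializable {
    val bits = new BitSet(numBits)
    // Double hashing: position_i(x) = (h1(x) + i * h2(x)) mod numBits
    private def positions(key: String): Seq[Int] = {
      val h1 = MurmurHash3.stringHash(key, 0x9747b28c)
      val h2 = MurmurHash3.stringHash(key, 0x5bd1e995)
      (0 until numHashes).map(i => ((h1 + i * h2) & Int.MaxValue) % numBits)
    }
    def add(key: String): Unit = positions(key).foreach(bits.set)            // single-pass insert
    def mightContain(key: String): Boolean = positions(key).forall(bits.get) // O(numHashes) lookup
    def and(other: SimpleBloomFilter): SimpleBloomFilter = {                 // intersection (join filter)
      val m = new SimpleBloomFilter(numBits, numHashes)
      m.bits.or(this.bits); m.bits.and(other.bits)
      m
    }
    def or(other: SimpleBloomFilter): SimpleBloomFilter = {                  // union (dataset filter)
      val m = new SimpleBloomFilter(numBits, numHashes)
      m.bits.or(this.bits); m.bits.or(other.bits)
      m
    }
  }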

Bloom filters have been exploited to improve distributed joins in the past (bloomfilter-join, ; bloomfilter-join1, ; bloomfilter-join2, ; hashjoin-bloomfilter, ). However, these proposals support only two-way joins. Although one can cover joins with multiple input datasets by chaining two-way joins, this approach would add to the latency of the join results. ApproxJoin handles multiple datasets at the same time and supports multi-way joins without introducing additional latency.

For simplicity, we first explain how our algorithm uses a Bloom filter to find the intersection of two input datasets. Afterwards, we explain how our algorithm finds the intersection of multiple datasets at the same time.

I: Two-way Bloom filter. For the two-way filtering, consider the join operation of two datasets (see Figure 3). First, we construct a Bloom filter for each input (step 1 in Figure 3), which we refer to as a dataset filter. We then perform a bitwise AND among the dataset filters (step 2). The resulting Bloom filter represents the intersection of both datasets and is referred to as the join filter.

Afterwards, we broadcast the join filter to all nodes (step 3). Each node checks the membership of the data items in its respective input dataset against the join filter. If a data item is not present, it is discarded. In Figure 3, all data items whose keys appear in both datasets are preserved.

Input:
n : number of input datasets
m : size of the Bloom filter
f : false positive rate of the Bloom filter
R_1, ..., R_n : input datasets
1 // Build a Bloom filter for the join input datasets
2 buildJoinFilter(R_1..R_n, m, f)
3 begin
4       // Build a Bloom filter for each input
5       // Executed in parallel at worker nodes
6       ∀ i ∈ [1, n]: BF_i ← buildInputFilter(R_i, m, f);
7       // Merge input filters BF_i for the overlap between inputs
8       // Executed sequentially at master node
9       BF ← BF_1 ∧ BF_2 ∧ ... ∧ BF_n;
10       return BF
11
12 // Build a Bloom filter for input R_i
13 // Executed in parallel at worker nodes
14 buildInputFilter(R_i, m, f)
15 begin
16       p := number of partitions of input dataset R_i
17       R_i := {R_i1, ..., R_ip}, where R_ij is partition j
18       // MAP PHASE
19       // Initialize a filter for each partition
20       forall R_ij in R_i do
21             p-BF_j ← BloomFilter(m, f);
22             ∀ item ∈ R_ij: p-BF_j.add(item.key);
23
24      // REDUCE PHASE
25       // Merge partition filters to the dataset filter
26       BF_i ← p-BF_1 ∨ p-BF_2 ∨ ... ∨ p-BF_p;
27       return BF_i
28
Algorithm 1 Filtering using multi-way Bloom filter

II: Multi-way Bloom filter. We generalize the two-way Bloom filter so that it applies to n input datasets. Consider the case where we want to perform a join operation between multiple input datasets R_1 ⋈ R_2 ⋈ ... ⋈ R_n.

Algorithm 1 presents the two main steps to construct the multi-way join filter. In the first step, we create a Bloom filter BF_i for each input R_i, where i ∈ [1, n] (lines 4-6); this is executed in parallel at all worker nodes that host the input datasets. In the second step, we combine the dataset filters into the join filter by simply applying the logical AND operation between the dataset filters (lines 7-9). This operation adds virtually no additional overhead to building the join filter, because the logical AND over Bloom filters is fast, even though the number of dataset filters being combined is n instead of two.

Note that an input dataset may consist of several partitions hosted on different nodes. To build the dataset filter for these partitioned inputs, we perform a simple MapReduce job that can be executed in a distributed fashion: we first build the partition filters p-BF_j, where j ∈ [1, p] and p is the number of partitions of input dataset R_i, during the Map phase, which is executed at the nodes hosting the partitions of each input (lines 15-21). Then, we combine the partition filters to obtain the dataset filter BF_i in the Reduce phase by merging them via the logical OR operation into the corresponding dataset filter BF_i (lines 22-24). This process is executed for each input dataset in parallel (see buildInputFilter()).
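The following sketch shows how Algorithm 1 could map onto Spark RDD primitives, assuming the SimpleBloomFilter sketch from above; the function names and types are illustrative, not ApproxJoin's actual code. Per-partition filters are built in the Map phase, OR-merged into the dataset filter, AND-merged into the join filter, and finally broadcast to drop non-participating tuples.

  import org.apache.spark.SparkContext
  import org.apache.spark.rdd.RDD

  // Build the dataset filter BF_i for one input: per-partition filters in the
  // Map phase, OR-merged in the Reduce phase (here with treeReduce).
  def buildInputFilter(input: RDD[(String, String)],
                       numBits: Int, numHashes: Int): SimpleBloomFilter =
    input.mapPartitions { iter =>
      val pBF = new SimpleBloomFilter(numBits, numHashes)   // partition filter p-BF_j
      iter.foreach { case (key, _) => pBF.add(key) }
      Iterator(pBF)
    }.treeReduce((a, b) => a.or(b))

  // AND-merge the dataset filters into the join filter, broadcast it, and
  // keep only the tuples that may participate in the join.
  def filterInputs(sc: SparkContext, inputs: Seq[RDD[(String, String)]],
                   numBits: Int, numHashes: Int): Seq[RDD[(String, String)]] = {
    val joinFilter = inputs.map(buildInputFilter(_, numBits, numHashes)).reduce(_ and _)
    val bcast = sc.broadcast(joinFilter)
    inputs.map(_.filter { case (key, _) => bcast.value.mightContain(key) })
  }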

Figure 4. Shuffled data size comparison between join mechanisms: (a) varying the number of inputs with a fixed overlap fraction; (b) varying overlap fractions with three input datasets.

3.1.1. Is Filtering Sufficient?

After constructing the join filter and broadcasting it to the nodes, one straightforward approach would be to complete the join operation by performing the cross product of the data items present in the intersection. Figure 4(a) shows the advantage of performing such a join operation with multiple input datasets based on a simulation (see §A.1). With the broadcast join and repartition join mechanisms, the transferred data size gradually increases with the number of input datasets. With the Bloom filter based join approach, however, the transferred data size remains small even as the number of datasets in the join operation increases.

Although this filtering can significantly reduce the data transferred among nodes, such a reduction is not always possible. Figure 4(b) shows that even with a modest overlap fraction between three input datasets (i.e., 40%), the amount of transferred data becomes comparable to that of the repartition join mechanism. (In this paper, the overlap fraction is defined as the total number of data items participating in the join operation divided by the total number of data items of all inputs.) Furthermore, the cross-product operation will involve a significant number of data items, potentially becoming the bottleneck.

In ApproxJoin, we first filter redundant data items as described in this section. Afterwards, we check whether the overlap fraction between the datasets is small enough, such that we can meet the latency requirements of the user. If so, we perform the cross product of the data items participating in the join. In other words, we do not need approximation in this case (i.e., we compute the exact join result). If the overlap fraction is large, we continue with our approximation technique, which we describe next.

3.2. Approximation: Cost Function

ApproxJoin supports a query budget interface for users to define a desired latency t_d or a desired error bound e_d, as described in §2. ApproxJoin ensures that the join operation is executed within the specified query budget by tuning the sampling parameter accordingly. In this section, we describe how ApproxJoin converts the join requirements given by the user into an optimal sampling parameter. To meet the budget, ApproxJoin makes use of two types of cost functions to determine the sample size: (i) a latency cost function and (ii) an error bound cost function.

I: Latency cost function. In ApproxJoin, we consider the latency of the join operation to be dominated by two factors: 1) the time to filter and transfer the participating join data items, t_filter, and 2) the time to compute the cross product, t_cross. To execute the join operation within the latency requirement of the user, we have to estimate each contributing factor.

The latency for filtering and transferring the participating join data items, t_filter, is measured during the filtering stage (described in §3.1). We then compute the remaining time allowed for the join operation:

(1)   t_remain = t_d − t_filter

To satisfy the latency requirement, the following must hold:

(2)   t_cross ≤ t_remain

In order to estimate the latency of the cross-product phase, we need to estimate how many cross products we have to perform. Imagine that the output of the filtering stage consists of data items with k distinct keys C_1, ..., C_k. To fairly select data items, we perform sampling for each join key independently (explained in §3.3). In other words, we perform stratified sampling, such that each key C_i corresponds to a stratum and has B_i data items. Let b_i represent the sample size for C_i. The total number of cross products is given by:

(3)   C_total = Σ_{i=1}^{k} b_i

The latency of the cross-product phase is then:

(4)   t_cross = β · C_total

where β denotes a scale factor that depends on the computation capacity of the cluster (e.g., #cores, total memory).

We determine β empirically via a microbenchmark by profiling the compute cluster as an offline stage. In particular, we measure the latency of performing cross products with varying input sizes. Figure 5 shows that the latency is linearly correlated with the input size, which is consistent with plenty of I/O-bound queries in parallel distributed settings (BlinkDB, ; mapreduce-cost1, ; mapreduce-cost2, ). Based on this observation, we estimate the latency of the cross-product phase as follows:

(5)   t_cross = β · Σ_{i=1}^{k} b_i + γ

where γ is a noise parameter.

Given a desired latency t_d, the sampling fraction p can be computed as:

(6)   p = (t_remain − γ) / (β · Σ_{i=1}^{k} B_i)

The sample size of stratum C_i can then be selected as follows:

(7)   b_i = p · B_i

According to this estimation, ApproxJoin checks whether the query can be executed within the latency requirement of the user. If not, the user is informed accordingly.
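Since the original equations are not reproduced verbatim here, the following Scala sketch only illustrates the assumed linear cost model t_cross ≈ β · Σ b_i; the function name, units, and the exact form of the fraction are assumptions rather than the paper's equations.

  // Sketch of Equations 6-7 under the assumed linear model.
  // strataSizes maps each join key C_i to its full join-output size B_i;
  // beta comes from offline profiling; all times are in milliseconds.
  def latencyDrivenSampleSizes(strataSizes: Map[String, Long],
                               tDesired: Double, tFilter: Double,
                               beta: Double): Map[String, Long] = {
    val tRemain = math.max(0.0, tDesired - tFilter)          // Eq. (1)
    val totalB  = strataSizes.values.sum.toDouble
    val p       = math.min(1.0, tRemain / (beta * totalB))   // sampling fraction, Eq. (6)
    strataSizes.map { case (key, bigB) =>                    // per-stratum size, Eq. (7)
      key -> math.max(1L, math.ceil(p * bigB).toLong)
    }
  }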

II: Error bound cost function. If the user specified a requirement for the error bound, we have to execute our sampling mechanism such that we satisfy this requirement. Our sampling mechanism utilizes simple random sampling for each stratum (see §3.3). As a result, the error can be computed as follows (sampling-3, ):

(8)   e = z_{α/2} · σ_i / √(b_i)

where b_i represents the sample size of stratum C_i and σ_i represents its standard deviation.

Unfortunately, the standard deviation of a stratum cannot be determined without knowing the data distribution. To overcome this problem, we design a feedback mechanism to refine the sample size (the implementation details are in §4): for the first execution of a query, the standard deviation σ_i of each stratum C_i is computed and stored. For all subsequent executions of the query, we utilize these stored values to calculate the optimal sample size using Equation 10. Alternatively, one can estimate the standard deviation using a bootstrap method (BlinkDB, ; sampling-3, ). Using this method, however, would require performing offline profiling of the data.

With the knowledge of σ_i, solving Equation 8 for b_i gives:

(9)   b_i = (z_{α/2} · σ_i / e)²

With a 95% confidence level, we have α = 0.05; thus, z_{α/2} = 1.96. The achieved error e should be less than or equal to the desired error bound e_d, so we have:

(10)   b_i ≥ (1.96 · σ_i / e_d)²

Equation 10 allows us to calculate the optimal sample size given a desired error bound e_d.
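A one-line sketch of this calculation follows: the per-stratum sample size needed for a desired error bound e_d, assuming the standard simple-random-sampling bound e = z · σ_i / √(b_i) with z = 1.96 for a 95% confidence level.

  // b_i = ceil( (z * sigma_i / e_d)^2 )
  def errorDrivenSampleSize(sigma: Double, eDesired: Double, z: Double = 1.96): Long =
    math.ceil(math.pow(z * sigma / eDesired, 2)).toLong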

Figure 5. Latency cost function: offline profiling of the compute cluster to determine β. The plot shows the latency of cross products with varying input sizes.

III: Combining latency and error bound. From Equations 7 and 10, we obtain a trade-off function between the latency and the error bound at a confidence level of 1 − α:

(11)   e_d = z_{α/2} · σ_i / √(p · B_i)

where the sampling fraction p is determined by the latency budget via Equation 6.

3.3. Approximation: Sampling and Execution

In this section, we describe our sampling mechanism that executes during the cross product phase of the join operation. Executing approximation during the cross product enables ApproxJoin to have highly accurate results compared to pre-join sampling. To preserve the statistical properties of the exact join output, we combine our technique with stratified sampling. Stratified sampling ensures that no join key is overlooked: for each join key, we perform simple random sampling over data items independently. This method fairly selects data items from different join keys. The filtering stage (3.1) guarantees that this selection is executed only from the data items participating in the join.

Figure 6. Cross-product bipartite graph of the join data items for a single join key. Bold lines represent sampled edges.

For simplicity, we first describe how we perform stratified sampling during the cross product on a single node. We then describe how the sampling can be performed on multiple nodes in parallel.

I: Single node stratified sampling. Consider an inner join of two inputs containing key-value pairs: a pair (k1, v1) from the first input joins with a pair (k2, v2) from the second input, producing an output item if and only if k1 = k2.

Consider that the first input contains three data items with join key k and that the second input contains four data items with the same key. The join operation on key k can be modeled as a complete bipartite graph (shown in Figure 6). To execute stratified sampling over the join, we perform random sampling on the data items having the same join key (i.e., key k). As a result, this process is equivalent to performing edge sampling on the complete bipartite graph.

Sampling edges from the complete bipartite graph would require building the graph, which would correspond to computing the full cross product. To avoid this cost, we propose a mechanism to randomly select edges from the graph without building the complete graph. The function sampleAndExecute() in Algorithm 2 describes the algorithm to sample edges from the complete bipartite graph. To include an edge in the sample, we randomly select one endpoint vertex from each side and then yield the edge connecting these vertices (lines 19-23). To obtain a sample of size b_i, we repeat this selection b_i times (lines 17-18 and 24). This process is repeated for each join key (lines 15-24).
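The core of this per-key step is small enough to sketch directly; the following Scala function draws b_i edges for one join key without materializing the cross product (sampling with replacement, as in Algorithm 2). The names and the generic signature are illustrative.

  import scala.util.Random

  // Draw sampleSize random edges (pairs) for one join key; left and right are
  // the filtered items of that key from the two inputs.
  def sampleEdges[L, R](left: IndexedSeq[L], right: IndexedSeq[R],
                        sampleSize: Int, rng: Random = new Random()): Seq[(L, R)] =
    Seq.fill(sampleSize) {
      (left(rng.nextInt(left.size)), right(rng.nextInt(right.size)))  // one sampled join tuple
    }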

II: Distributed stratified sampling. The sampling mechanism can naturally be adapted to execute in a distributed setting. Algorithm 2 describes how this adaptation can be achieved. In the distributed setting, the data items participating in the join are distributed to worker nodes based on the join keys using a partitioner (e.g., hash-based partitioner). A master node facilitates this distribution and directs each worker to start sampling (lines 4-5). Each worker then performs the function sampleAndExecute() in parallel to sample the join output and execute the query (lines 12-26).

III: Query execution. After the sampling, each node executes the input query on its sample to produce a partial query result and returns it to the master node (lines 25-26). The master node collects these partial results and merges them to produce the final query result (lines 6-8). The master node also performs the error bound estimation (lines 9-10), which we describe in the following subsection (§3.4). Afterwards, the approximate query result and its error bounds are returned to the user (line 11).

Input:
b_i : sample size of join key C_i
V_i & W_i : sets of vertices (items) on the two sides of the complete bipartite graph of join key C_i
k : number of join keys
C : set of all join keys (i.e., C = {C_1, ..., C_k})
1 // Executed sequentially at master node
sampleDuringJoin() begin
2       foreach worker w in workers do
3             w.sampleAndExecute();// Direct workers to sample and execute the query
4
5       result ← ∅; // Initialize empty query result
6       foreach worker w in workers do
7             result ← merge(result, result_w);// Merge query results from workers
8
9      // Estimate error for the result
10       error ← errorEstimation(result);
11       return (result ± error);
12
13// Executed in parallel at worker nodes
sampleAndExecute() begin
14       foreach C_i in C do
15             S_i ← ∅; // Sample of join key C_i
16             count ← 0;// Initialize a count to keep track of # selected items
17             while count < b_i do
18                   // Select two random vertices
19                   u ← random(V_i);
20                   v ← random(W_i);
21                   // Add an edge between the selected vertices and update the sample
22                   S_i.add((u, v));
23                   count ← count + 1; // Update counting
24
25             result_w ← query(S_i); // Execute query over sample
26             return result_w;
27
28
Algorithm 2 : Stratified sampling over join

3.4. Approximation: Error Estimation

As the final step, ApproxJoin computes an error bound for the approximate result. The approximate result is then provided to the user as output ± error bound.

Our sampling algorithm (i.e., the sampleAndExecute() function in Algorithm 2) described in the previous section can produce an output with duplicate edges. For such cases, we use the Central Limit Theorem to estimate the error bounds for the output. This error estimation is possible because the sampling mechanism works as a random sampling with replacement.

We can also remove the duplicate edges during the sampling process by using a hash table, and repeat the algorithm steps until we reach the desired number of data items in the sample. This approach might worsen the randomness of the sampling mechanism and could introduce bias into the sample data. In this case, we use the Horvitz-Thompson (horvitz-thompson, ) estimator to remove this bias. We next explain the details of these two error estimation mechanisms.

I: Error estimation using the Central Limit Theorem. Suppose we want to compute the approximate sum of data items after the join operation. The output of the join operation contains data items with k different keys C_1, ..., C_k; each key (stratum) C_i has B_i data items, and each such data item j has an associated value v_ij. To compute the approximate sum of the join output, we sample b_i items from each join key according to the parameter we computed (described in §3.2). Afterwards, we estimate the sum from this sample as τ̂ ± ε, where τ̂ = Σ_{i=1}^{k} (B_i / b_i) · Σ_{j=1}^{b_i} v_ij and the error bound ε is defined as:

(12)   ε = t_{f, 1−α/2} · √( V̂(τ̂) )

Here, t_{f, 1−α/2} is the value of the t-distribution (i.e., t-score) with f degrees of freedom at confidence level 1 − α. The degrees of freedom are calculated as:

(13)   f = Σ_{i=1}^{k} b_i − k

The estimated variance for the sum, V̂(τ̂), can be expressed as:

(14)   V̂(τ̂) = Σ_{i=1}^{k} B_i² · s_i² / b_i

Here, s_i² is the sample estimate of the population variance in stratum C_i. We use the statistical theory of stratified sampling (samplingBySteve, ) to compute the error bound.
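A sketch of this stratified sum estimate with a CLT-based error bound follows (Equations 12-14); for brevity it uses a z-score in place of the t-score, and the per-stratum inputs (B_i, sampled values) are assumed to be collected already.

  // Estimate SUM over the join output from per-stratum samples.
  // strata: join key -> (B_i = stratum size in the full join output, sampled values)
  case class ApproxSum(estimate: Double, errorBound: Double)

  def estimateSum(strata: Map[String, (Long, Seq[Double])], z: Double = 1.96): ApproxSum = {
    val perStratum = strata.values.map { case (bigB, sample) =>
      val b    = sample.size
      val mean = sample.sum / b
      val s2   = if (b > 1) sample.map(v => (v - mean) * (v - mean)).sum / (b - 1) else 0.0
      (bigB * mean,                      // stratum contribution: B_i * sample mean
       bigB.toDouble * bigB * s2 / b)    // variance contribution: B_i^2 * s_i^2 / b_i
    }
    ApproxSum(perStratum.map(_._1).sum, z * math.sqrt(perStratum.map(_._2).sum))
  }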

II: Error estimation using the Horvitz-Thompson estimator.

Consider the second case, where we remove the duplicate edges and resample the endpoint vertices until another edge is yielded. The bias introduced by this process can be estimated using the Horvitz-Thompson estimator. Horvitz-Thompson is an unbiased estimator for the population sum and mean, regardless of whether sampling is with or without replacement.

Let π_i be a positive number representing the probability that a data item having key C_i is included in the sample under a given sampling scheme. Let τ_i be the sample sum of the items having key C_i: τ_i = Σ_{j=1}^{b_i} v_ij. The Horvitz-Thompson estimate of the total is computed as (samplingBySteve, ):

(15)   τ̂_HT = Σ_{i=1}^{k} τ_i / π_i

where the error bound is given by:

(16)   ε = t_{f, 1−α/2} · √( V̂(τ̂_HT) )

where t has f degrees of freedom. The estimated variance of the Horvitz-Thompson estimate is computed as:

(17)   V̂(τ̂_HT) = Σ_{i=1}^{k} (1/π_i² − 1/π_i) · τ_i² + Σ_{i=1}^{k} Σ_{j≠i} (1/(π_i·π_j) − 1/π_ij) · τ_i·τ_j

where π_ij is the probability that both data items having keys C_i and C_j are included in the sample.

Note that the Horvitz-Thompson estimator does not depend on how many times a data item may be selected: each distinct item of the sample is used only once (samplingBySteve, ).
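The point estimate itself is a one-liner; the following sketch computes the Horvitz-Thompson total (Equation 15), weighting each sampled group sum τ_i by the inverse of its inclusion probability π_i. The variance estimate (Equation 17) additionally requires the pairwise inclusion probabilities π_ij and is omitted here.

  // Horvitz-Thompson estimate of the total: sum of tau_i / pi_i over sampled keys.
  // sample: per join key, (tau_i = sample sum for that key, pi_i = inclusion probability)
  def horvitzThompsonTotal(sample: Seq[(Double, Double)]): Double =
    sample.map { case (tau, pi) => tau / pi }.sum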

4. Implementation

Figure 7. System implementation: the figure shows distributed dataflow graph execution (on y-axis) for different stages (on x-axis) in ApproxJoin.

In this section, we describe the implementation details of ApproxJoin. At the high level, ApproxJoin is composed of two main modules: (i) filtering and (ii) approximation. The filtering module constructs the join filter to determine the data items participating in the join. These data items are fed to the approximation module to perform the join query within the query budget specified by the user.

We implemented our design by modifying Apache Spark (spark, ). Spark uses Resilient Distributed Datasets (RDDs) (spark-nsdi-2012, ) for scalable and fault-tolerant distributed data-parallel computing. An RDD is an immutable collection of objects distributed across a set of machines. To support existing programs, we provide a simple programming interface that is also based on RDDs. In other words, all operations in ApproxJoin, including filtering and approximation, are transparent to the user. To this end, we have implemented an approxjoin() function for pair RDDs that performs the join query within the query budget over inputs in the form of RDDs. Figure 7 shows in detail the directed acyclic graph (DAG) execution of ApproxJoin.
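Roughly, the intended usage looks like the following; approxjoin() is the function named above, but the budget parameters shown here are illustrative names for the query budget interface of §2, not the exact signatures.

  // Hypothetical call sites (parameter names illustrative):
  val byLatency = customers.approxjoin(orders, latencyBudgetSec = 60)
  val byError   = customers.approxjoin(orders, errorBound = 0.01, confidence = 0.95)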

Figure 8. Benefits of filtering in two-way joins. We show the total latency and the breakdown latency of (a) ApproxJoin, (b) Spark repartition join, and (c) native Spark join.

I: Filtering module. The join Bloom filter module implements the filtering stage described in §3.1 to eliminate the non-participating data items. A straightforward way to implement buildJoinFilter() in Algorithm 1 is to build Bloom filters for all partitions (p-BFs) of each input and merge them in the Spark driver in the Reduce phase. However, with this approach the driver quickly becomes a bottleneck when there are many data partitions located on many workers in the cluster. To solve this problem, we leverage the treeReduce scheme (incoop, ; slider, ). In this model, we combine the Bloom filters in a hierarchical fashion, where the reducers are arranged in a tree with the root performing the final merge (Figure 7). If the number of workers increases (i.e., ApproxJoin is deployed in a bigger cluster), more layers are added to the tree to ensure that the load on the driver remains unchanged. After building the join filter, ApproxJoin broadcasts it to determine the participating join items in all inputs and feeds them to the approximation module.
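In Spark, this hierarchical merge is a one-liner; depth controls how many layers of intermediate reducers sit between the partition filters and the driver. The names below reuse the SimpleBloomFilter sketch from §3.1, and partitionFilters is an assumed RDD of per-partition filters.

  // partitionFilters: RDD[SimpleBloomFilter], one per partition of an input dataset.
  // A larger depth adds reducer layers so the driver merges only a few filters.
  val datasetFilter = partitionFilters.treeReduce((a, b) => a.or(b), depth = 3)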

The approximation module consists of three submodules including the cost function, sampling and error estimation. The cost function submodule implements the mechanism in 3.2 to determine the sampling parameter according to the requirements in the query budget. The sampling submodule performs the proposed sampling mechanism (described in 3.3) and executes the join query over the filtered data items with the sampling parameter. The error estimation submodule computes the error-bound (i.e., confidence interval) for the query result from the sampling module (described in 3.4). This error estimation submodule also performs fine-tuning of the sample size used by the sampling submodule to meet the accuracy requirement in subsequent runs.

II: Approximation: Cost function submodule. The cost function submodule converts the query budget requirements provided by the user into the sampling parameter used in the sampling submodule. We implemented a simple cost function by building a model to convert the desired latency into the sampling parameter. To build the model, we perform offline profiling of the compute cluster. This model empirically establishes the relationship between the input size and the latency of the cross-product phase by computing the parameter β from the microbenchmarks. Afterwards, we utilize Equation 7 to compute the sample sizes.
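A sketch of the offline profiling step: fitting the scale factor β with a least-squares line through the origin over (number of cross products, measured latency) points from the microbenchmark. The function name and the through-origin fit are assumptions about how such a model could be built, not the exact implementation.

  // points: (numCrossProducts, latencyMs) pairs measured offline on the cluster.
  def fitBeta(points: Seq[(Double, Double)]): Double =
    points.map { case (x, y) => x * y }.sum / points.map { case (x, _) => x * x }.sum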

III: Approximation: Sampling submodule. After receiving the intersection of the inputs from the filtering module and the sampling parameter from the cost function submodule, the sampling submodule performs the sampling during the join as described in §3.3. We implemented the proposed sampling mechanism in this submodule by creating a new Spark PairRDD function sampleDuringJoin() that executes stratified sampling during the join.

The original join() function in Spark uses two operations: 1) cogroup() shuffles the data in the cluster, and 2) a cross product performs the final phase of the join. In our approxjoin() function, we replace the second operation with sampleDuringJoin(), which implements our mechanism described in §3.3 and Algorithm 2. Note that the data shuffled by the cogroup() function is the output of the filtering stage. As a result, the amount of shuffled data can be significantly reduced if the overlap fraction between datasets is small. Another thing to note is that sampleDuringJoin() also performs the query execution as described in Algorithm 2.
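The following sketch shows how the cross-product phase of join() can be replaced by per-key sampling on top of cogroup(); here sampleSizes would come from the cost function submodule, and the real sampleDuringJoin() additionally runs the aggregation, which is omitted. Names and types are illustrative.

  import org.apache.spark.rdd.RDD

  def approxJoinSample(a: RDD[(String, Double)], b: RDD[(String, Double)],
                       sampleSizes: Map[String, Int]): RDD[(String, (Double, Double))] =
    a.cogroup(b).flatMap { case (key, (leftIter, rightIter)) =>
      val left  = leftIter.toIndexedSeq
      val right = rightIter.toIndexedSeq
      if (left.isEmpty || right.isEmpty) Iterator.empty      // key not joinable
      else {
        val m   = sampleSizes.getOrElse(key, left.size * right.size) // fall back to full cross product
        val rng = new scala.util.Random(key.hashCode)
        Iterator.fill(m)((key, (left(rng.nextInt(left.size)), right(rng.nextInt(right.size)))))
      }
    }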

IV: Approximation: Error estimation submodule. After the query execution is performed in sampleDuringJoin(), the error estimation submodule implements the function errorEstimation() to compute the error bounds of the query result. The submodule also activates a feedback mechanism to re-tune the sample sizes in the sampling submodule to achieve the specified accuracy target as described in §3.2. We use the Apache Common Math library (math-apache, ) to implement the error estimation mechanism described in §3.4.

5. Evaluation: Microbenchmarks

Figure 9. Benefits of filtering in multi-way joins, with different overlap fractions and different numbers of input datasets.

In this section, we present the evaluation results of ApproxJoin based on microbenchmarks and the TPC-H benchmark. In the next section, we will report evaluation based on real-world case studies.

5.1. Experimental Setup

Cluster setup. Our cluster consists of nodes each equipped with two Intel Xeon E5405 quad-core CPUs, main memory, and a SATA-2 hard disk, running Ubuntu Linux.

Synthetic datasets. We analyze the performance of ApproxJoin using synthetic datasets following Poisson distributions with the rate parameter λ varied over a range of values. The number of distinct join keys is set to be proportional to the number of workers.

Metrics. We evaluate ApproxJoin using three metrics: latency, shuffled data size, and accuracy loss. Specifically, the latency is defined as the total time consumed to process the join operation; the shuffled data size is defined as the total size of the data shuffled across nodes during the join operation; and the accuracy loss is defined as |approx − exact| / exact, where approx and exact denote the results from the executions with and without sampling, respectively.

5.2. Benefits of Filtering

The join operation in ApproxJoin consists of two main stages: (i) filtering stage for reducing shuffled data size, and (ii) sampling stage for approximate computing. In this section, we activate only the filtering stage (without the sampling stage) in ApproxJoin, and evaluate the benefits of the filtering stage.

I: Two-way joins. First, we report the evaluation results with two-way joins. Figures 8(a), (b), and (c) show the latency breakdowns of ApproxJoin, Spark repartition join, and native Spark join, respectively. Unsurprisingly, the results show that building Bloom filters in ApproxJoin takes only a small fraction of the total time, whereas the cross-product-based join execution is substantially more expensive. The results also show that the cross-product-based join execution is fairly expensive across all three systems.

When the overlap fraction is small, ApproxJoin achieves considerably shorter latencies than Spark repartition join and native Spark join. However, as the overlap fraction increases, an increasingly large amount of data has to be shuffled and the expensive cross-product operation cannot be eliminated in the filtering stage; therefore, the benefit of the filtering stage in ApproxJoin gets smaller and the speedup over Spark repartition join and native Spark join shrinks. When the overlap fraction increases to 20%, ApproxJoin's latency does not improve (or may even get worse) compared with the Spark repartition join. At this point, we need to activate the sampling stage of ApproxJoin to reduce the latency of the join operation, which we evaluate in §5.3.

Figure 10. Comparison between ApproxJoin and Spark join systems in terms of (a) scalability, (b) latency with sampling, and (c) accuracy loss with sampling.
Figure 11. Effectiveness of the cost function.

II: Multi-way joins. Next, we present the evaluation results with multi-way joins. Specifically, we first conduct the experiment with three-way joins whereby we create three synthetic datasets with the same aforementioned Poisson distribution.

We measure the latency and the shuffled data size during the join operations in ApproxJoin, Spark repartition join, and native Spark join, with varying overlap fractions. Figure 9(a) shows that, with a small overlap fraction, ApproxJoin is markedly faster than both Spark repartition join and native Spark join. However, with larger overlap fractions, ApproxJoin does not achieve much latency gain (and may even perform worse) compared with Spark repartition join. This is because, similar to the two-way joins, the increase of the overlap fraction leads to a prohibitively larger amount of data that needs to be shuffled and fed into the cross product. Note also that we do not report results for native Spark join at the largest overlap fractions, simply because that system runs out of memory. In addition, Figure 9(b) shows that ApproxJoin significantly reduces the shuffled data size compared with both Spark repartition join and native Spark join.

Next, we conduct experiments with two-way, three-way, and four-way joins. In two-way joins, we use two synthetic datasets A and B with a fixed overlap fraction; in three-way joins, the three synthetic datasets A, B, and C share the same overlap fraction, overall and between any two of them; in four-way joins, the four synthetic datasets likewise share the same overlap fraction, overall and pairwise.

Figure 9(c) presents the latency and the shuffled data size during the join operation with different numbers of input datasets. With two-way joins, ApproxJoin is faster and shuffles less data than both Spark repartition join and native Spark join. In addition, with three-way and four-way joins, ApproxJoin achieves an even larger performance gain. This is because, with the increase of the number of input datasets, the number of non-joining data items also increases; therefore, ApproxJoin gains more benefit from the filtering stage.

III: Scalability. Finally, we fix the overlap fraction and evaluate the scalability of ApproxJoin with different numbers of compute nodes. Figure 10(a) shows that ApproxJoin achieves a lower latency than Spark repartition join and native Spark join, both with two nodes and as the cluster grows.

5.3. Benefits of Sampling

Figure 12. Comparison between ApproxJoin and the state-of-the-art SnappyData system in terms of (a) latency with different TPC-H queries, (b) latency with different sampling fractions, and (c) accuracy with different sampling fractions.

As shown in previous experiments, ApproxJoin does not gain much latency benefit from the filtering stage when the overlap fraction is large. To reduce the latency of the join operation in this case, we activate the second stage of ApproxJoin, i.e., the sampling stage.

For a fair comparison, we re-purpose Spark’s built-in sampling algorithm (i.e., stratified sampling via sampleByKey) to build a “sampling over join” mechanism for the Spark repartition join system. Specifically, we perform the stratified sampling over the join results after the join operation has finished in the Spark repartition join system. We then evaluate the performance of ApproxJoin, and compare it with this extended Spark repartition join system.
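For reference, this extended baseline is essentially the following two lines: a full repartition join followed by Spark's built-in stratified sampling, where a and b are the input pair RDDs and fractions maps each join key to its sampling fraction (both names assumed for illustration).

  // Baseline: sample only after the full join has been computed.
  val joined  = a.join(b)                                              // full repartition join
  val sampled = joined.sampleByKey(withReplacement = false, fractions) // per-key stratified sampling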

I: Latency. We measure the latency of ApproxJoin and the extended Spark repartition join with varying sampling fractions. Figure 10(b) shows that the Spark repartition join system scales poorly with a significantly higher latency as it could perform stratified sampling only after finishing the join operation. Even if we were to enable the Spark repartition join system to perform stratified sampling over the input datasets and then perform the join operation over these samples, this would come with a significant accuracy loss.

II: Accuracy. Next, we measure the accuracy of ApproxJoin and the extended Spark repartition join. Figure 10(c) shows that the accuracy losses in both systems decrease with the increase of sampling fractions, although ApproxJoin’s accuracy is slightly worse than the Spark repartition join system. Note however that, as shown in Figure 10(b), ApproxJoin achieves an order of magnitude speedup compared with the Spark repartition join system since ApproxJoin performs sampling during the join operation.

5.4. Effectiveness of the Cost Function

ApproxJoin provides users with a query budget interface and uses a cost function to convert the query budget into a sample size (see §3.2). In this experiment, a user sends ApproxJoin a join query along with a latency budget (i.e., the desired latency the user wants to achieve). ApproxJoin uses the cost function, whose scale factor β is set according to the microbenchmarks on our cluster, to convert the desired latency into the sample size. We measure the latency of ApproxJoin and the extended Spark repartition join in performing the join operations with the identified sample size. Figure 11(a) shows that ApproxJoin can rely on the cost function to achieve the desired latency quite well (with the maximum error being less than 12 seconds). Note also that the Spark repartition join incurs a much higher latency than ApproxJoin since it performs the sampling after the join operation has finished. In addition, Figure 11(b) shows that ApproxJoin achieves an accuracy very similar to the Spark repartition join system.

5.5. Comparison with SnappyData using TPC-H

In this section, we evaluate ApproxJoin using the TPC-H benchmark. The TPC-H benchmark consists of 22 queries and has been widely used to evaluate various database systems. We compare ApproxJoin with the state-of-the-art related system, SnappyData (snappydata, ).

SnappyData is a hybrid distributed data analytics framework that supports a unified programming model for transactions, OLAP, and data stream analytics. It integrates GemFire, an in-memory transactional store, with Apache Spark. SnappyData inherits approximate computing techniques from BlinkDB (BlinkDB, ) (off-line sampling techniques) and its data synopses to provide interactive analytics. SnappyData does not support sampling over joins.

In particular, we compare ApproxJoin with SnappyData using three TPC-H queries that contain join operations. To make a fair comparison, we keep only the join operations and remove the other operations in these queries. We run the benchmark with a fixed scale factor.

First, we use the TPC-H benchmark to analyze the performance of ApproxJoin with the filtering stage but without the sampling stage. Figure 12(a) shows the end-to-end latencies of ApproxJoin and SnappyData in processing the three queries. ApproxJoin is faster than SnappyData for the query that contains only one join operation, and it also achieves a speedup over SnappyData for the query that consists of two join operations as well as for the third query.

Next, we evaluate ApproxJoin with both the filtering and sampling stages activated. In this experiment, we perform a query to answer the question “what is the total amount of money the customers had before ordering?”. To process this query, we need to join two tables in the TPC-H benchmark and then sum up two of their fields.

Since SnappyData does not support sampling over the join operation, in this experiment it first executes the join operation between the two tables, then performs the sampling over the join output, and finally calculates the sum of the two fields. Figure 12(b) presents the latencies of ApproxJoin and SnappyData in processing the aforementioned query with varying sampling fractions. SnappyData has a significantly higher latency than ApproxJoin, simply because it performs sampling only after the join operation finishes; it is faster only when neither system performs sampling at all. Note, however, that sampling is inherently needed when one handles joins with large-scale inputs that require a significant number of cross-product operations. Figure 12(c) shows the accuracy losses of ApproxJoin and SnappyData: ApproxJoin achieves an accuracy level similar to SnappyData across the sampling fractions we tested.

6. Evaluation: Real-world Datasets

Figure 13. Comparison between ApproxJoin, Spark repartition join, and native Spark join based on two real-world datasets: (1) Network traffic monitoring dataset (denoted as [Network]), and (2) Netflix Prize dataset (denoted as [Netflix]).

We evaluate ApproxJoin based on two real-world datasets: (a) network traffic monitoring dataset and (b) Netflix Prize dataset.

6.1. Network Traffic Monitoring Dataset

Dataset. We use the CAIDA network traces (caida2015, ), which were collected on Internet backbone links in Chicago in 2015. The dataset contains TCP, UDP, and ICMP flows. Here, a flow denotes a two-tuple network flow that has the same source and destination IP addresses.

Query. We use ApproxJoin to process the query: What is the total size of the flows that appeared in all TCP, UDP and ICMP traffic? To answer this query, we need to perform a join operation across TCP, UDP and ICMP flows.

Results. Figure 13(a) first shows the latency comparison between ApproxJoin (with filtering but without sampling), Spark repartition join, and native Spark join. ApproxJoin achieves a lower latency than both Spark repartition join and native Spark join. Interestingly, native Spark join achieves a lower latency than Spark repartition join. This is because the dataset is distributed quite uniformly across worker nodes in terms of the join-participating flow items, i.e., there is little data skew. Figure 13(a) also shows that ApproxJoin significantly reduces the shuffled data size compared with the two Spark join systems.

Next, different from the experiments in §5, we extend Spark repartition join by enabling it to sample the dataset before the actual join operation. This leads to the lowest latency it could achieve. Figure 13(b) shows that ApproxJoin achieves a similar latency even to this extended Spark repartition join. In addition, Figure 13(c) shows the accuracy loss comparison between ApproxJoin and Spark repartition join with different sampling fractions. As the sampling fraction increases, the accuracy losses of ApproxJoin and Spark repartition join decrease, but not linearly. ApproxJoin produces more accurate query results than the Spark repartition join system at the same sampling fraction.

6.2. Netflix Prize Dataset

Dataset. We also evaluate ApproxJoin based on the Netflix Prize dataset, which includes around 100 million ratings of movies by users. Specifically, the dataset contains one file per movie in the training set folder. The first line of each such file contains the movie ID, and each subsequent line corresponds to a rating from a user and the date, in the form of (user ID, rating, date). There is another file whose lines indicate a movie ID, a user ID, and a rating.

Query. We perform the join operation between these two datasets to evaluate ApproxJoin in terms of latency. Note that we could not find a meaningful aggregation query for this dataset; therefore, we focus only on the latency and not the accuracy of the join operation.

Results. Figure 13(a) shows the latency and the shuffled data size of ApproxJoin (with filtering but without sampling), Spark repartition join, and native Spark join. ApproxJoin is faster than both Spark repartition join and native Spark join, and it also shuffles considerably less data than either of them. In addition, Figure 13(b) presents the latency comparison between these systems with different sampling fractions: across the sampling fractions, ApproxJoin is faster than Spark repartition join and native Spark join, and it remains faster even without sampling.

7. Related Work

Over the last decade, approximate computing has been applied in various domains such as programming languages (green, ; enerj, ), hardware design (approxhardware1, ), and distributed systems (hop, ; online-aggregation, ; online-aggregation-mapreduce, ). Our techniques mainly target the database research community (aqua, ; BlinkDB, ; quickr-sigmod, ; daq, ; cs2, ; stratified-sampling-joins-1, ; stratified-sampling-joins-2, ; incapprox-www-2016, ; streamapprox-middleware17, ; privapprox-atc17, ; approxiot-icdcs-2018, ; privapprox-tech-report, ; streamapprox-tech-report, ). In particular, various approximation techniques have been proposed to trade off required resources and output quality, including sampling (stratified-sampling, ; sampling-2, ), sketches (sketching, ), and online aggregation (online-aggregation, ; online-aggregation-mapreduce, ). Chaudhuri et al. provide a sampling-over-join mechanism by taking a sample of one input and considering all statistical characteristics and indices of the other inputs (samplingoverjoin, ). The AQUA system (aqua, ) makes use of simple random sampling to take a sample of joins of inputs that have primary key-foreign key relations. BlinkDB (BlinkDB, ) proposes an approximate distributed query processing engine that uses stratified sampling (stratified-sampling, ) to support ad-hoc queries with error and response time constraints. SnappyData (snappydata, ) and SparkSQL (spark-sql, ) adopt the approximation techniques from BlinkDB to support approximate queries. Quickr (quickr-sigmod, ) deploys distributed sampling operators to reduce execution costs of parallel, ad-hoc queries that may contain multiple join operations. In particular, Quickr first injects sampling operators into the query plan and then searches for an optimal query plan among sampled query plans to execute input queries.

Unfortunately, all of these systems require a priori knowledge of the inputs. For example, AQUA (aqua, ) requires join inputs to have primary key-foreign key relationships, and the sampling-over-join mechanism (samplingoverjoin, ) needs the statistical characteristics and indices of the inputs. Finally, BlinkDB (BlinkDB, ) performs off-line stratified sampling over the most frequently used column sets. The samples are then cached so that queries can be served by selecting the relevant samples and executing the queries over them. While useful in many applications, BlinkDB and these other systems cannot process ad-hoc queries over new inputs, since such queries and inputs are typically not known in advance.

Ripple Join (ripple-join, ) implements online aggregation for joins. It repeatedly takes a sample from each input, and every newly selected item is joined with all items selected so far from the other inputs. Recently, Wander Join (wander-join, ) improved over Ripple Join by performing random walks over the join data graph of a multi-way join. However, this approach crucially depends on the availability of indices, which are not readily available in "big data" systems such as Apache Spark. In addition, the current Wander Join implementation is single-threaded, and parallelizing its walk-plan optimization procedure is non-trivial. In this work, we propose a simple but efficient sampling mechanism over joins that works not only on a single node but also in a distributed setting.

Recently, an approximate query processing (AQP) formulation (AQP, ) has been proposed to provide low-error approximate results without any preprocessing or a priori knowledge of the inputs. The formulation, based on probability theory, allows the results of past queries to be reused to improve the performance of future query processing. However, the current version of the AQP formulation does not support joins.

8. Conclusion

The keynote speakers at SIGMOD 2017 (Kraska-keynote, ; AQP-Chaudhuri1, ; Mozafari-keynote, ) highlighted the challenges and opportunities in approximate query processing. In a succinct follow-up blog post (AQP-Chaudhuri2, ), Chaudhuri explains why, in spite of decades of technical results, approximate joins remain hard even for a simple join query with group-by and aggregation. In this work, we strive to address the challenges of performing approximate joins in distributed data analytics systems. We achieve this by sampling during the join operation, which yields low latency as well as high accuracy. In particular, we employ a sketching technique (Bloom filters) to reduce the size of the data shuffled during the join, and we propose a stratified sampling mechanism that executes during the join in a distributed setting. We implemented these techniques in a system called ApproxJoin on top of Apache Spark and evaluated its effectiveness using a series of microbenchmarks and real-world case studies. Our evaluation shows that, compared with state-of-the-art systems, ApproxJoin significantly reduces query response times as well as the data shuffled through the network, without losing accuracy in the query results.

Supplementary material. The appendix contains an analysis of ApproxJoin covering both communication and computational complexity (Appendix A). In addition, we discuss three alternative design choices for Bloom filters (Appendix B).

References

  • [1] Apache Flink. https://flink.apache.org/. Accessed: October, 2017.
  • [2] Apache Hadoop. http://hadoop.apache.org/. Accessed: October, 2017.
  • [3] Apache Spark. https://spark.apache.org. Accessed: October, 2017.
  • [4] Approximate query processing: Where do we go from here? http://wp.sigmod.org/?p=2183. Accessed: October, 2017.
  • [5] Quickr: Lazily Approximating Complex Ad-Hoc Queries in Big Data Clusters. In Proceedings of the ACM International Conference on Management of Data (SIGMOD), 2016.
  • [6] S. Acharya, P. B. Gibbons, V. Poosala, and S. Ramaswamy. The aqua approximate query answering system. In Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data (SIGMOD), 1999.
  • [7] S. Agarwal, H. Milner, A. Kleiner, A. Talwalkar, M. Jordan, S. Madden, B. Mozafari, and I. Stoica. Knowing when you’re wrong: Building fast and reliable approximate query processing systems. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), 2014.
  • [8] S. Agarwal, B. Mozafari, A. Panda, H. Milner, S. Madden, and I. Stoica. Blinkdb: Queries with bounded errors and bounded response times on very large data. In Proceedings of the ACM European Conference on Computer Systems (EuroSys), 2013.
  • [9] M. Al-Kateb and B. S. Lee. Stratified reservoir sampling over heterogeneous data streams. In Proceedings of the 22nd International Conference on Scientific and Statistical Database Management (SSDBM), 2010.
  • [10] G. Ananthanarayanan, S. Kandula, A. Greenberg, I. Stoica, Y. Lu, B. Saha, and E. Harris. Reining in the outliers in map-reduce clusters using Mantri. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI), 2010.
  • [11] M. Armbrust, R. S. Xin, C. Lian, Y. Huai, D. Liu, J. K. Bradley, X. Meng, T. Kaftan, M. J. Franklin, A. Ghodsi, and M. Zaharia. Spark SQL: relational data processing in spark. In Proceedings of the International Conference on Management of Data (SIGMOD), 2015.
  • [12] W. Baek and T. M. Chilimbi. Green: A framework for supporting energy-conscious programming using controlled approximation. In Proceedings of the 31st ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2010.
  • [13] R. Barber, G. Lohman, I. Pandis, V. Raman, R. Sidle, G. Attaluri, N. Chainani, S. Lightstone, and D. Sharpe. Memory-efficient hash joins. In Proceedings of the International Conference on Very Large Data Bases (VLDB), 2014.
  • [14] P. Bhatotia, U. A. Acar, F. P. Junqueira, and R. Rodrigues. Slider: Incremental Sliding Window Analytics. In Proceedings of the 15th International Middleware Conference (Middleware), 2014.
  • [15] P. Bhatotia, A. Wieder, R. Rodrigues, U. A. Acar, and R. Pasquini. Incoop: MapReduce for Incremental Computations. In Proceedings of the ACM Symposium on Cloud Computing (SoCC), 2011.
  • [16] B. H. Bloom. Space/time trade-offs in hash coding with allowable errors. Commun. ACM, 1970.
  • [17] F. Bonomi, M. Mitzenmacher, R. Panigrahy, S. Singh, and G. Varghese. An improved construction for counting bloom filters. In Proceedings of the 14th Conference on Annual European Symposium (ESA), 2006.
  • [18] CAIDA. The CAIDA UCSD Anonymized Internet Traces 2015 (equinix-chicago-dirA). http://www.caida.org/data/passive/passive_2015_dataset.xml.
  • [19] S. Chaudhuri, B. Ding, and S. Kandula. Approximate query processing: No silver bullet. In Proceedings of the ACM International Conference on Management of Data (SIGMOD), 2017.
  • [20] S. Chaudhuri, R. Motwani, and V. Narasayya. On random sampling over joins. In Proceedings of the ACM International Conference on Management of Data (SIGMOD), 1999.
  • [21] T. Condie, N. Conway, P. Alvaro, J. M. Hellerstein, K. Elmeleegy, and R. Sears. MapReduce Online. In Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation (NSDI), 2010.
  • [22] G. Cormode, M. Garofalakis, P. J. Haas, and C. Jermaine. Synopses for massive data: Samples, histograms, wavelets, sketches. Found. Trends databases, 2012.
  • [23] A. Doucet, S. Godsill, and C. Andrieu. On sequential monte carlo sampling methods for bayesian filtering. Statistics and Computing, 2000.
  • [24] A. Galakatos, A. Crotty, E. Zgraggen, C. Binnig, and T. Kraska. Revisiting reuse for approximate query processing. In Proceedings of the International Conference on Very Large Data Bases (VLDB), 2017.
  • [25] M. N. Garofalakis and P. B. Gibbon. Approximate Query Processing: Taming the TeraBytes. In Proceedings of the International Conference on Very Large Data Bases (VLDB), 2001.
  • [26] M. T. Goodrich and M. Mitzenmacher. Invertible bloom lookup tables. CoRR, 2011.
  • [27] P. J. Haas and J. M. Hellerstein. Ripple joins for online aggregation. In Proceedings of the ACM International Conference on Management of Data (SIGMOD), 1999.
  • [28] J. M. Hellerstein, P. J. Haas, and H. J. Wang. Online aggregation. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), 1997.
  • [29] D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American statistical Association, 1952.
  • [30] N. Kamat and A. Nandi. Perfect and maximum randomness in stratified sampling over joins. CoRR, 2016.
  • [31] N. Kamat and A. Nandi. A unified correlation-based approach to sampling over joins. In Proceedings of the 29th International Conference on Scientific and Statistical Database Management (SSDBM), 2017.
  • [32] T. Kraska. Approximate query processing for interactive data science. In Proceedings of the 2017 ACM International Conference on Management of Data (SIGMOD), 2017.
  • [33] D. R. Krishnan, D. L. Quoc, P. Bhatotia, C. Fetzer, and R. Rodrigues. IncApprox: A Data Analytics System for Incremental Approximate Computing. In Proceedings of the International Conference on World Wide Web (WWW), 2016.
  • [34] T. Lee, K. Kim, and H.-J. Kim. Join processing using bloom filter in mapreduce. In Proceedings of the 2012 ACM Research in Applied Computation Symposium (RACS), 2012.
  • [35] F. Li, B. Wu, K. Yi, and Z. Zhao. Wander join: Online aggregation via random walks. In Proceedings of the 2016 International Conference on Management of Data (SIGMOD), 2016.
  • [36] S. Lohr. Sampling: design and analysis. Cengage Learning, 2009.
  • [37] C. Math. The Apache Commons Mathematics Library. http://commons.apache.org/proper/commons-math. Accessed: October, 2017.
  • [38] B. Mozafari. Approximate query engines: Commercial challenges and research opportunities. In Proceedings of the 2017 ACM International Conference on Management of Data (SIGMOD), 2017.
  • [39] S. Natarajan. Imprecise and Approximate Computation. Kluwer Academic Publishers, 1995.
  • [40] N. Pansare, V. R. Borkar, C. Jermaine, and T. Condie. Online aggregation for large mapreduce jobs. In Proceedings of the International Conference on Very Large Data Bases (VLDB), 2011.
  • [41] P. S. Almeida, C. Baquero, N. Preguiça, and D. Hutchison. Scalable Bloom Filters. Information Processing Letters, 2007.
  • [42] N. Potti and J. M. Patel. DAQ: A New Paradigm for Approximate Query Processing. In Proceedings of the International Conference on Very Large Data Bases (VLDB), 2015.
  • [43] D. L. Quoc, M. Beck, P. Bhatotia, R. Chen, C. Fetzer, and T. Strufe. Privacy preserving stream analytics: The marriage of randomized response and approximate computing. 2017.
  • [44] D. L. Quoc, M. Beck, P. Bhatotia, R. Chen, C. Fetzer, and T. Strufe. PrivApprox: Privacy-Preserving Stream Analytics. In Proceedings of the 2017 USENIX Conference on USENIX Annual Technical Conference (USENIX ATC), 2017.
  • [45] D. L. Quoc, R. Chen, P. Bhatotia, C. Fetzer, V. Hilt, and T. Strufe. Approximate Stream Analytics in Apache Flink and Apache Spark Streaming. CoRR, abs/1709.02946, 2017.
  • [46] D. L. Quoc, R. Chen, P. Bhatotia, C. Fetzer, V. Hilt, and T. Strufe. StreamApprox: Approximate Computing for Stream Analytics. In Proceedings of the International Middleware Conference (Middleware), 2017.
  • [47] J. Ramnarayan, B. Mozafari, S. Wale, S. Menon, N. Kumar, H. Bhanawat, S. Chakraborty, Y. Mahajan, R. Mishra, and K. Bachhav. Snappydata: A hybrid transactional analytical store built on spark. In Proceedings of the International Conference on Management of Data (SIGMOD), 2016.
  • [48] A. Sampson, W. Dietl, E. Fortuna, D. Gnanapragasam, L. Ceze, and D. Grossman. EnerJ: Approximate data types for safe and general low-power computation. In Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2011.
  • [49] A. Sampson, J. Nelson, K. Strauss, and L. Ceze. Approximate storage in solid-state memories. In Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2013.
  • [50] S. K. Thompson. Sampling. Wiley Series in Probability and Statistics, 2012.
  • [51] A. Thusoo, J. S. Sarma, N. Jain, Z. Shao, P. Chakka, N. Zhang, S. Anthony, H. Liu, and R. Murthy. Hive - A petabyte scale data warehouse using Hadoop. In Proceedings of the 26th International Conference on Data Engineering, ICDE 2010, March 1-6, 2010, Long Beach, California, USA, pages 996–1005, 2010.
  • [52] Y. Tian, F. Özcan, T. Zou, R. Goncalves, and H. Pirahesh. Building a hybrid warehouse: Efficient joins between data stored in hdfs and enterprise warehouse. ACM Trans. Database Syst., 2016.
  • [53] Y. Tian, T. Zou, F. Ozcan, R. Goncalves, and H. Pirahesh. Joins for hybrid warehouses: Exploiting massive parallelism in hadoop and enterprise data warehouses. In In Proceedings of the 2015 International Conference on Extending Database Technology (EDBT), pages 373–384, 2015.
  • [54] Z. Wen, D. L. Quoc, P. Bhatotia, R. Chen, and M. Lee. ApproxIoT: Approximate Analytics for Edge Computing. In Proceedings of the 38th IEEE International Conference on Distributed Computing Systems (ICDCS), 2018.
  • [55] F. Yu, W.-C. Hou, C. Luo, D. Che, and M. Zhu. CS2: A New Database Synopsis for Query Estimation. In Proceedings of the 2013 International Conference on Management of Data (SIGMOD), 2013.
  • [56] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, and I. Stoica. Resilient Distributed Datasets: A Fault Tolerant Abstraction for In-Memory Cluster Computing. In Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation (NSDI), 2012.
  • [57] M. Zaharia, A. Konwinski, A. D. Joseph, R. Katz, and I. Stoica. Improving mapreduce performance in heterogeneous environments. In Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation (OSDI), 2008.

Appendix

Appendix A Analysis of ApproxJoin

Symbol Meaning
R A join input
The number of join inputs
The number of join keys
The number of worker nodes
The number of hash functions in bloom filters
BF Bloom filter of input
p-BF Bloom filter of partition in
join filter The global Bloom filter for all inputs
Desired latency
Desired error
A join key (a stratum)
The total number of data items having join key
sample size of join key
Data transfer delay
Cross-product computing delay
Total data transfered size
Total cross-product size
Parameter for execution environment
The error bound of a join key
Noise parameter of execution environment
The number of partitions of input
Table 1. Symbols and terms used in this paper

In this section, we first analyze the communication complexity of ApproxJoin. Thereafter, we provide a computational complexity analysis of the proposed stratified sampling over joins in the sampling stage of ApproxJoin.

a.1. Communication Complexity

For the communication complexity, we analyze the performance gain of ApproxJoin in terms of the shuffled data size during the filtering stage with various Bloom filter settings, using a model-based analysis. We compare the gains of ApproxJoin with the broadcast and repartition join mechanisms. Based on our analysis, we also show how to select the Bloom filter input parameters to achieve an optimal trade-off between reducing the shuffled data volume and the desired false positive rate of the Bloom filters.

Suppose we want to execute a multi-way join operation on attribute for input datasets , where is an input dataset. For simplicity, we assume that . The number of nodes in our experimental cluster is and .

I: Broadcast join. In broadcast join, we broadcast all smaller datasets to all nodes that contain the largest dataset. The total shuffled data volume is bounded by:

(18)

When we add one more node to the cluster, the relative increase in the shuffled data volume in broadcast join will be:

(19)

When we add one more dataset to the join operation, the relative increase in the shuffled data volume will be:

(20)
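For concreteness, the following minimal PySpark sketch shows the broadcast join pattern this model describes: the smaller input is collected and broadcast to every node holding a partition of the larger input. The paths, delimiter, and variable names are illustrative assumptions, not part of ApproxJoin.

```python
# Broadcast (map-side) join: the small dataset is shipped to all nodes, so
# the shuffled volume grows with the cluster size and with each added input.
from pyspark import SparkContext

sc = SparkContext(appName="broadcast-join-model")
large = sc.textFile("hdfs:///input/large").map(lambda l: tuple(l.split(",", 1)))
small = dict(sc.textFile("hdfs:///input/small")
               .map(lambda l: tuple(l.split(",", 1)))
               .collect())                      # small input collected to the driver
small_b = sc.broadcast(small)                   # ...and broadcast to every node

# Each partition of the large input is joined locally against the broadcast copy.
joined = (large.filter(lambda kv: kv[0] in small_b.value)
               .map(lambda kv: (kv[0], (kv[1], small_b.value[kv[0]]))))
```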

II: Repartition join. In repartition join, we shuffle the data items of all datasets across the cluster so that each node keeps at least one chunk/partition of each dataset. Therefore, the shuffled data volume in repartition join is computed as follows:

(21)

When we add one more node to the cluster, the relative increase in the shuffled data volume in repartition join will be:

(22)

When we add one more dataset to the join operation, the relative increase in the shuffled data volume will be:

(23)
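Similarly, a minimal PySpark sketch of the repartition join pattern assumed by this model is shown below: both inputs are hash-partitioned on the join key before matching records are co-located. Paths and names are again illustrative.

```python
# Repartition (shuffle) join: every record of every input crosses the network
# once when it is repartitioned by the join key.
from pyspark import SparkContext

sc = SparkContext(appName="repartition-join-model")
r1 = sc.textFile("hdfs:///input/r1").map(lambda l: tuple(l.split(",", 1)))
r2 = sc.textFile("hdfs:///input/r2").map(lambda l: tuple(l.split(",", 1)))

# rdd.join shuffles both sides by key before co-locating matching records.
joined = r1.join(r2, numPartitions=64)
```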

III: ApproxJoin. Algorithm 1 describes our proposed Bloom filter based filtering for multi-way joins. The algorithm builds a Bloom filter BF for each input dataset using the function buildInputFilter. In the Map phase, the function builds Bloom filters for all partitions of the input dataset. In the Reduce phase, the partitioned Bloom filters are merged to obtain the Bloom filter BF of the input dataset. Since we fix the size of all Bloom filters, the volume of data shuffled to build the Bloom filters of all inputs is computed as , where is the size of each Bloom filter. Thereafter, the Bloom filters of all inputs are combined into a join Bloom filter of size for all join inputs using the function buildJoinFilter.

Next, the algorithm broadcasts the join Bloom filter to all nodes to filter out all data items that do not participate in the join operation. The shuffled data size of the broadcast step is calculated as . The volume of shuffled data for the filtering step is computed as ; where is the size of data items participating in the join operation of input .

In summary, the total volume of shuffled data in the proposed filtering mechanism is calculated as follows:

(24)
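As a rough illustration of this filtering step (not the authors' implementation), the following PySpark sketch mirrors the structure of Algorithm 1: per-partition Bloom filters are built in a map phase, OR-merged per input, intersected across inputs into a join filter, and broadcast to prune non-joinable items before any shuffle. Inputs are assumed to be key-value RDDs keyed by the join attribute; the class and function names (SimpleBloom, build_input_filter, approx_filter) and the default parameters are ours.

```python
from pyspark import SparkContext
import hashlib

# Fixed-size Bloom filter kept as one big integer, so partition-level filters
# can be merged with bitwise OR (union) and combined across inputs with AND.
class SimpleBloom(object):
    def __init__(self, m, k, bits=0):
        self.m, self.k, self.bits = m, k, bits

    def _positions(self, key):
        return [int(hashlib.md5(("%d:%s" % (i, key)).encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def contains(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

    def union(self, other):
        return SimpleBloom(self.m, self.k, self.bits | other.bits)

    def intersect(self, other):
        return SimpleBloom(self.m, self.k, self.bits & other.bits)

def build_input_filter(rdd, m, k):
    # Map phase: one Bloom filter per partition; Reduce phase: OR-merge them.
    def per_partition(items):
        bf = SimpleBloom(m, k)
        for key, _ in items:
            bf.add(key)
        yield bf
    return rdd.mapPartitions(per_partition).reduce(lambda a, b: a.union(b))

def approx_filter(sc, inputs, m=1 << 20, k=7):
    # Join filter = intersection of all per-input filters; broadcast it and
    # prune non-joinable items before they are shuffled for the join.
    join_filter = None
    for rdd in inputs:
        bf = build_input_filter(rdd, m, k)
        join_filter = bf if join_filter is None else join_filter.intersect(bf)
    bcast = sc.broadcast(join_filter)
    return [rdd.filter(lambda kv: bcast.value.contains(kv[0])) for rdd in inputs]
```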

When we add one more node to the cluster, the relative increase in the shuffled data volume in ApproxJoin will be:

(25)

When we add one more dataset to the join operation, the relative increase of the shuffled data volume is computed as:

(26)

Note that for Bloom filters, false positives are possible, but false negatives are not. There is a trade-off between the size m of the bit vector and the probability of a false positive: a larger m yields fewer false positives but consumes more memory, whereas a smaller m requires less memory at the risk of more false positives. With k hash functions and n data items inserted into the Bloom filter, the false positive rate can be computed as [16]: p = (1 - e^{-kn/m})^k. For a given m and n, the value of k that minimizes the false positive probability [16, 41] is k = (m/n) ln 2. Therefore, we have p = (1 - e^{-ln 2})^{(m/n) ln 2}, which can be simplified to ln p = -(m/n)(ln 2)^2. Thus m can be computed as follows:

(27)    m = -(n ln p) / (ln 2)^2

In our design, we select ; where is the size of the largest input dataset.
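A small helper implementing the sizing formulas above; the function name and the example parameters are ours.

```python
import math

# Standard Bloom filter sizing: m = -n ln p / (ln 2)^2 bits for n inserted
# items at target false positive rate p, with optimal k = (m/n) ln 2 hashes.
def bloom_parameters(n, p):
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))   # bits in the filter
    k = max(1, round((m / n) * math.log(2)))                # number of hash functions
    return m, k

# Example: sizing a filter for 10 million keys at a 1% false positive rate
# gives roughly 95.9 Mbit (about 12 MB) and 7 hash functions.
m, k = bloom_parameters(10_000_000, 0.01)
```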

Figure 14. Volumes of shuffled data in broadcast join, repartition join, optimal ApproxJoin and ApproxJoin. Optimal ApproxJoin is the case that there are no false positives during the filtering operation.

We use a simulation-driven approach, based on the aforementioned model, to analyze the trade-off between reducing the shuffled data volume and the desired false positive rate of the Bloom filters. In the simulation, we create three input datasets, set the overlap fraction, the number of keys in each dataset, and the number of nodes in the cluster, and let the value of each data item follow a Poisson distribution. We then run the simulation with these input parameters and analyze the shuffled data volume for different false positive rates. Figure 14 shows the shuffled data volume of broadcast join, repartition join, optimal ApproxJoin, and ApproxJoin, where optimal ApproxJoin is the case in which no false positives occur during the join operation. Once the false positive rate is set low enough, ApproxJoin reaches the optimal case.

This simulation allows us to quickly set the desired false positive parameter for ApproxJoin with varying input datasets.
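For reference, the sketch below shows one plausible, heavily simplified instantiation of such a simulation. The accounting of filter-construction, broadcast, and payload terms, as well as all dataset parameters, are assumptions here and need not match the exact terms of the model above.

```python
import math

def bloom_bits(n, p):
    # Bits needed for n items at false positive rate p (cf. Eq. 27).
    return -n * math.log(p) / (math.log(2) ** 2)

def approxjoin_shuffled_bits(sizes, overlap, h, p, item_bits=256):
    # sizes: items per input; overlap: fraction of joinable items;
    # h: number of worker nodes; p: Bloom filter false positive rate.
    m = bloom_bits(max(sizes), p)
    filter_build = len(sizes) * m          # per-input filters assembled once
    filter_bcast = h * m                   # join filter broadcast to all nodes
    payload = sum((overlap + (1 - overlap) * p) * n * item_bits for n in sizes)
    return filter_build + filter_bcast + payload

for p in (0.5, 0.1, 0.01, 0.001):
    gb = approxjoin_shuffled_bits([10**8] * 3, 0.01, 20, p) / 8e9
    print("p=%g -> %.2f GB shuffled" % (p, gb))
```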

a.2. Computational Complexity

Since ApproxJoin significantly reduces the communication overhead of distributed join operations, it is important to ensure that the bottleneck does not simply shift to another part of the system, which would hinder the improved performance. Thus, another important aspect of the performance analysis is the computational complexity, which theoretically represents the amount of time required to execute the proposed algorithm. Here, we provide a computational complexity analysis of our sampling mechanism (§3.3) in comparison with the broadcast and repartition join mechanisms.

Consider that we want to perform a join operation for inputs , where is the input dataset. The inputs contain join keys . Let be the number of data items participating in the join operation from input with join key .

In repartition join or broadcast join, we need to perform the full cross product operation over these data items. As a result, the computational complexity for each join key is O().

On the other hand, ApproxJoin performs sampling over the cross-product operation. As a result, for each join key, the sampling mechanism performs a number of random selections, equal to the sample size for that key, on each side of the bipartite graph (§3.3). The sample size is determined by the sampling fraction, so the computational complexity of the proposed sampling mechanism for each join key is proportional to the sample size rather than to the full cross-product size. To summarize, the computational complexity of ApproxJoin is lower than that of the broadcast and repartition join mechanisms by a factor determined by the sampling fraction.
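The following stand-alone Python sketch illustrates this difference for a single join key in a two-way join; all names and sizes are illustrative.

```python
# Per-key sampling over the cross product: instead of materializing all pairs
# for a join key (full cross product), s random pairs are drawn from the
# bipartite graph formed by the two sides.
import itertools
import random

def full_cross_product(left, right):
    return list(itertools.product(left, right))            # O(|left| * |right|)

def sampled_cross_product(left, right, fraction, seed=42):
    rng = random.Random(seed)
    s = max(1, int(fraction * len(left) * len(right)))      # sample size for this key
    return [(rng.choice(left), rng.choice(right)) for _ in range(s)]  # O(s)

left  = ["l%d" % i for i in range(1000)]
right = ["r%d" % i for i in range(1000)]
print(len(full_cross_product(left, right)),
      len(sampled_cross_product(left, right, 0.01)))
```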

Appendix B Bloom Filter Configuration

We discuss three alternative design choices for Bloom filters that we considered in ApproxJoin for filtering out redundant items (step 1). To evaluate the different Bloom filter variants in terms of size and computation cost, we used a simulation with one input dataset and built the corresponding Bloom filters. Figure 15 shows the size of each Bloom filter variant.

I: Invertible Bloom filter. In addition to the membership check, an Invertible Bloom Filter (IBF) [26] also allows retrieving the list of all items present in the filter. As a result, the items participating in the join can be obtained using the subtraction operation of the IBF. However, the IBF comes at a higher cost in computation and in storage of the filter: each cell in an IBF is not a single bit as in a regular Bloom filter, but a data structure with a count that tracks the number of collisions and an invertible sum of the inserted keys. Moreover, just as regular Bloom filters have false positives, there is a probability that a get operation returns a "not found" result even though the data is in the filter, because collisions prevent it from being recovered. This probability is the same as the false positive rate of the corresponding Bloom filter. Thus, the filtering step may have false negatives (due to the "not found" result), negatively affecting the join result. Note that such false negatives are not possible with regular Bloom filters.

II: Counting Bloom filter. One can also use a Counting Bloom Filter (CBF) [17] for the filtering stage. CBFs also provide a remove/subtraction operation, similar to IBFs, but not a get operation. Unlike in an IBF, each cell in a CBF is only a counter that tracks the number of collisions. As a result, CBFs can be considerably smaller than IBFs (see Figure 15). However, the size of CBFs is still significantly larger than that of regular Bloom filters (see Figure 15).

Figure 15. Comparison of size of different Bloom filters with varying false positive rates.

III: Scalable Bloom filter. In our design, we need to know the size of the input datasets for configuring optimal values for the Bloom filters. In practice, however, this information may not always be available in advance. As a result, non-optimal values for Bloom filters may be chosen. To address this problem, we could employ Scalable Bloom filters (SBFs) [41], where the input dataset can be represented without knowing the number of data items to be put in the filter. This mechanism adapts to the growth of the input size by using a series of regular Bloom filters of increasing sizes and tighter error probabilities.

To build a global SBF (as our join filter), we need to merge the local SBFs from all worker nodes in the cluster. Unfortunately, the current design and implementation of SBFs do not support a union operation to perform this merging. We show how to implement this merge operation in a pull request (https://github.com/josephfox/pythonbloomfilter/pull/11) to the SBF repository. Our implementation takes advantage of the fact that an SBF contains a set of regular Bloom filters; as a result, we perform the union of two SBFs by executing the union of their regular Bloom filters under the hood.
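The following stand-in classes (not the python-bloomfilter API) illustrate the idea, assuming both SBFs were created with identical per-level parameters so that filters at the same growth level can be OR-ed bit by bit.

```python
# Minimal sketch of an SBF union: a scalable Bloom filter is a series of
# regular Bloom filters, so two SBFs are merged by unioning the filters at
# each growth level and keeping any unmatched trailing filters.
class Bloom(object):
    def __init__(self, m, bits=0):
        self.m, self.bits = m, bits

    def union(self, other):
        assert self.m == other.m, "levels must use identical parameters"
        return Bloom(self.m, self.bits | other.bits)

class ScalableBloom(object):
    def __init__(self, filters=None):
        self.filters = list(filters or [])

    def union(self, other):
        a, b = self.filters, other.filters
        merged = [x.union(y) for x, y in zip(a, b)]
        merged.extend((a if len(a) > len(b) else b)[len(merged):])
        return ScalableBloom(merged)

# Worker-local SBFs can then be reduced pairwise into a single global filter.
global_sbf = ScalableBloom([Bloom(64, 0b1010)]).union(
    ScalableBloom([Bloom(64, 0b0110), Bloom(64, 0b0001)]))
```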