Efficient Approximate Query Answering over Sensor Data with Deterministic Error Guarantees

With the recent proliferation of sensor data, there is an increasing need for the efficient evaluation of analytical queries over multiple sensor datasets. The magnitude of such datasets makes exact query answering infeasible, leading researchers to develop approximate query answering approaches. However, existing approximate query answering algorithms are not suited for the efficient processing of queries over sensor data, as they exhibit at least one of the following shortcomings: (a) they do not provide deterministic error guarantees, resorting to weaker probabilistic error guarantees that are in many cases not acceptable, (b) they allow queries only over a single dataset, thus not supporting the multitude of queries over multiple datasets that appear in practice, such as correlation or cross-correlation, and (c) they support relational data in general and thus miss speedup opportunities created by the special nature of sensor data, which are not random but follow a typically smooth underlying phenomenon. To address these problems, we propose PlatoDB, a system that exploits the nature of sensor data to compress them and provide efficient processing of queries over multiple sensor datasets, while providing deterministic error guarantees. PlatoDB achieves the above through a novel architecture that (a) at data import time pre-processes each dataset, creating for it an intermediate hierarchical data structure that provides a hierarchy of summarizations of the dataset together with appropriate error measures and (b) at query processing time leverages the pre-computed data structures to compute an approximate answer and deterministic error guarantees for ad hoc queries, even when these combine multiple datasets.

1 Introduction

The increasing affordability of sensors and storage has recently led to the proliferation of sensor data in a variety of domains, including transportation, environmental protection, healthcare, and fitness. These data are typically of high granularity and as a result have substantial storage requirements, ranging from a few GB to many TB. For instance, a Formula 1 car produces 20GB of data during two 90-minute practice sessions (http://www.zdnet.com/article/formula-1-racing-sensors-data-speed-and-the-internet-of-things/), while a commercial aircraft may generate 2.5TB of data per day (http://www.datasciencecentral.com/profiles/blogs/that-s-data-science-airbus-puts-10-000-sensors-in-every-single).

The magnitude of sensor datasets creates a significant challenge when it comes to query evaluation. Running analytical queries over the data (such as finding correlations between signals), which typically involve aggregates, can be very expensive, as the queries have to access significant amounts of data. This problem becomes worse when queries combine multiple sensor datasets in ad hoc ways. For instance, consider a data analytics scenario where a user wants to combine (a) a location dataset providing the location of users for different points in time (as recorded by their smartphones' GPS) and (b) an air pollution dataset recording the air quality at different points in time and space (as recorded by air quality sensors) to compute the average quality of air inhaled by each user over a certain time period. (This is a real example encountered during the DELPHI project conducted at UC San Diego, which studied how health-related data about individuals, including large amounts of sensor data, can be leveraged to discover the determinants of health conditions [18].) Answering this query requires accessing all location and air pollution measurements in the time period of interest, which can be substantial for long periods. To solve this problem, researchers have proposed approximate query processing algorithms [17, 1, 37, 2, 26, 31, 24] that approximate the query result by looking at a subset of the data.

However, existing approaches have the following shortcomings when it comes to the query processing of multiple sensor data sets:

  • Lack of deterministic error guarantees. Most query approximation algorithms provide probabilistic error guarantees. While this is sufficient for some use cases, it does not cover scenarios where the user needs deterministic guarantees ensuring that the returned answer is within the specified error bounds.

  • Lack of support of queries over multiple datasets. Many techniques, such as wavelets, provide error guarantees only for queries over a single dataset. The errors can be arbitrarily large for queries ranging over multiple datasets, as they are unaware of how multiple datasets interact with each other.

  • Data agnosticism. The majority of existing techniques work for relational data in general and do not leverage compression opportunities that come from the fact that sensor data are not random in nature but typically follow smooth, continuous phenomena.

To overcome these limitations, we design the PlatoDB system, which leverages the nature of sensor data to compress them and provide efficient processing of analytical queries over multiple sensor datasets, while providing deterministic error guarantees. In a nutshell, PlatoDB operates as follows: When initiated, it preprocesses each time series dataset and builds for it a binary tree structure, which provides a hierarchy of summarizations of segments of the original time series. A node in the tree structure summarizes a segment of the time series through two components: (i) a compression function estimating the data points in the segment, and (ii) error measures indicating the distance between the compressed segment and the original one. Lower-level nodes correspond to finer-grained segments and smaller errors. At runtime, PlatoDB takes as input an aggregate query over potentially multiple sensor datasets together with an error or time budget and utilizes the tree structure of each of the datasets involved in the query to obtain an approximate answer together with a deterministic error guarantee that satisfies the time/error budget.
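To make the node contents concrete, the following sketch shows one possible in-memory representation of a segment tree node, populated with a PAA (constant) summary. The dataclass layout and all names are our own illustration, not PlatoDB's actual code.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SegmentNode:
    """One node of a segment tree (illustrative layout, not PlatoDB's code)."""
    start: int                        # index of the segment's first data point
    end: int                          # index one past the segment's last data point
    compress: Callable[[int], float]  # compression function: index -> estimated value
    err_l1: float                     # sum of |original - compressed| over the segment
    err_max: float                    # max |original value| in the segment
    err_max_c: float                  # max |compressed value| in the segment
    left: Optional["SegmentNode"] = None
    right: Optional["SegmentNode"] = None

# Example: a PAA summary of the segment with values 1, 2, 2, 3 (average 2.0)
values = [1.0, 2.0, 2.0, 3.0]
mean = sum(values) / len(values)
node = SegmentNode(
    start=0, end=4,
    compress=lambda i: mean,                      # every point estimated by the average
    err_l1=sum(abs(v - mean) for v in values),    # 1 + 0 + 0 + 1 = 2
    err_max=max(abs(v) for v in values),          # 3
    err_max_c=abs(mean),                          # 2
)
```

A leaf node has `left` and `right` set to `None`; an internal node's children partition its index range `[start, end)`.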


Figure 1: PlatoDB’s architecture, including details on the segment tree generation and query processing.

Contributions. In this work, we make the following contributions:

  • We define a query language over sensor data, which is powerful enough to express most common statistics over both single and multiple time series, such as variance, correlation, and cross-correlation (Section 3).

  • We propose a novel tree structure (structurally similar to hierarchical histograms) and a corresponding tree generation algorithm that provides a hierarchical summarization of each time series independently of the other time series. The summarization is based on the combination of arbitrary compression functions that can be reused from the literature together with three novel error measures that can be used to provide deterministic error guarantees, regardless of the employed compression function (Section 4).

  • We design an efficient query processing algorithm operating on the pre-computed tree structures, which can provide deterministic error guarantees for queries ranging over multiple time series, even though each tree refers to one time series in isolation. The algorithm is based on a combination of error estimation formulas that leverage the error measures of individual time series segments to compute an error for an entire query (Section 5) together with a tree navigation algorithm that efficiently traverses the time series tree to quickly compute an approximate answer that satisfies the error guarantees (Section 6).

  • We conduct experiments on two real-life datasets to evaluate our algorithms. The results show that our algorithm outperforms the baseline by 1-3 orders of magnitude (Section 7).

2 System Architecture

Figure 1 depicts PlatoDB’s architecture. PlatoDB operates in two steps, performed at two different points in time. At data import time, PlatoDB pre-processes the incoming time series data, creating a segment tree structure for each time series. At query execution time, it leverages these segment trees to provide an approximate query answer together with deterministic error guarantees. We next describe these two steps in detail.

Off-line Pre-Processing. At data import time, PlatoDB takes as input a set of time series. The time series are created from the raw sensor data by typical Extract-Transform-Load (ETL) scripts, potentially combined with de-noising algorithms; this process is outside the focus of this paper.

For each such time series, PlatoDB's Segment Tree Generator creates a hierarchy of summarizations of the data in the form of a segment tree: a tree whose nodes summarize the data for segments of the original time series. Intuitively, the structure of the segment tree corresponds to a way of splitting the time series recursively into smaller segments: The root of the tree corresponds to the entire time series, which can be split into two subsegments (generally of different length), represented by the root's children. Each of these segments can in turn be split further into two smaller segments, represented by the children of the corresponding node, and so on. Since each node provides a brief summarization of the corresponding segment, lower levels of the tree provide a more precise representation of the time series than upper levels. As we will see later, this hierarchical structure of segments is crucial for the query processor's ability to adapt to a wide variety of error/time budgets provided by the user. When the user is willing to accept a large error, the query processor will mostly use the top levels of the trees, providing a quick response. On the other hand, if the user demands a lower error, the algorithm will be able to satisfy the request by visiting lower levels of the segment trees (exactly which nodes are visited also depends on the query and the interplay of the time series in it). Leveraging the trees, PlatoDB can even provide users with continuously improving approximate answers and error guarantees, allowing them to stop the computation at any time, similar to works in online aggregation [15, 7, 26].

Each node of the tree summarizes the corresponding segment through two data items: (a) a compression function, which represents the data points in a segment in a compact way (e.g., through a constant [21] or a line [19]), and (b) a set of error measures, which are metrics of the distance between the data point values estimated by the compression function and the actual values of the data points. As we will see, the query processor uses the compression function and error measures of the segment tree nodes to produce an approximate answer of the query and the error guarantees, respectively. Interestingly, PlatoDB’s internals are agnostic of the compression function used. As we will discuss in Section 4, PlatoDB’s query processor works independently of the employed compression functions, allowing the system to be combined with all popular compression techniques. For instance, in our example above we utilized the Piecewise Aggregate Approximation (PAA)  [21], which returns the average of a set of values. However, we could have used other compression techniques, such as the Adaptive Piecewise Constant Approximation (APCA) [20], the Piecewise Linear Representation (PLR) [19], or others.

Remark. It is important to note that the segment tree is not necessarily a balanced tree. PlatoDB decides whether a segment needs to be split based on how close the values derived from the compression function are to the actual values of the segment, splitting the segment when the difference is large. Intuitively, this means that the segment tree contains more nodes for parts of the domain where the time series is irregular and/or rapidly changing, and fewer nodes for the smooth parts. PlatoDB treats the problem of finding the splitting positions as an optimization problem, splitting at the positions that bring the largest error reduction. We will present the segment tree generation algorithms in Section 4.

Example 1

Figure 1(a) shows the segment tree for a time series T. The root node of the tree (corresponding to the segment covering the entire time series) summarizes this segment through two items: a set of parameters describing a compression function (in this case the function returns the average of the values of the time series and can therefore be described by a single value) and a set of error measures (the details of error measures will be presented in Section 4). This entire segment is split into two subsegments, giving rise to the identically-named tree nodes. Note that the tree is not balanced. The first subsegment is not split further, as its compression function correctly predicts the values within the corresponding segment. In contrast, the second subsegment displays great variability in the time series' values and is thus split further into two smaller segments.

On-line Query Processing. At query evaluation time, PlatoDB’s Query Processor receives a query and a time or error budget and leverages the pre-processed segment trees to produce an approximate query answer and a corresponding error guarantee satisfying the provided budget.

To compute the answer and error guarantee, PlatoDB traverses the segment trees of all time series involved in the query in parallel, in a top-down fashion. At any step of this process, it uses the compression functions and error measures in the currently accessed nodes to calculate an approximate query answer and the corresponding error. If it has not yet exhausted the time/error budget (i.e., if there is still time left or if the current error is still greater than the error budget), PlatoDB greedily chooses among all the currently accessed nodes the one whose children would yield the greatest error reduction and uses them to replace their parent in the answer and error estimation. Otherwise, PlatoDB stops accessing further nodes of the segment trees and outputs the currently computed approximate answer and error. Query processing is described in detail in Sections 5 and 6.
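The greedy refinement loop described above can be sketched as follows. This is a simplified illustration with hypothetical helper names: `error_of` stands in for PlatoDB's error estimation over a set of currently accessed nodes (the "frontier"), and time budgets and segment alignment are omitted.

```python
def refine_greedily(roots, error_of, budget):
    """Best-first refinement of the node frontier (illustrative sketch).

    roots    -- segment-tree root nodes, one per time series in the query
    error_of -- function mapping a frontier (list of nodes) to an estimated error
    budget   -- stop once the estimated error is at most this value
    """
    frontier = list(roots)
    error = error_of(frontier)
    while error > budget:
        # Find the frontier node whose replacement by its children
        # reduces the estimated error the most.
        best_gain, best_idx = 0.0, None
        for idx, node in enumerate(frontier):
            if node.left is None:          # leaf: cannot be refined further
                continue
            candidate = frontier[:idx] + [node.left, node.right] + frontier[idx + 1:]
            gain = error - error_of(candidate)
            if gain > best_gain:
                best_gain, best_idx = gain, idx
        if best_idx is None:               # no expansion helps; budget unreachable
            break
        node = frontier.pop(best_idx)
        frontier[best_idx:best_idx] = [node.left, node.right]
        error = error_of(frontier)
    return frontier, error
```

Descendants of nodes that never enter the frontier are never touched, which is the source of the performance savings noted in Example 2.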

Remark. It is important to note that, in contrast to existing approximate query answering systems, PlatoDB can answer queries that span different time series, even though the segment trees were pre-processed for each time series individually. As we will see, the fact that the segment trees were generated for each time series individually leads to interesting problems at query processing time, such as aligning the segments of different time series and reasoning about how these segments interact to produce the query answer and error guarantees. Finally, it is also important to note that PlatoDB adapts to the provided error budget by accessing a different number of nodes: larger error budgets lead to fewer node accesses, while smaller error budgets require more node accesses.

Example 2

Consider a query involving two time series and an error budget ε. Figure 1(b) shows how the query processing algorithm uses the pre-computed segment trees of the two time series. PlatoDB first accesses the root nodes of both segment trees in parallel and computes the current approximate query answer and error ε_cur, using the compression functions and error measures in the root nodes. Let us assume that ε_cur > ε. Since the error budget is not yet satisfied, PlatoDB keeps traversing the trees by greedily choosing a node and replacing it by its children, so that the error reduction at each step is maximized. This process continues until the error budget is satisfied. For instance, assume that using the yellow shaded nodes in Figure 1(b) PlatoDB obtains an error ε_cur ≤ ε. Then PlatoDB stops traversing the trees and outputs the approximate answer and the error ε_cur. Note that none of the descendants of the shaded nodes is touched, resulting in big performance savings.

As a result of this architecture, PlatoDB achieves speedups of 1-3 orders of magnitude in query processing of sensor data compared to approaches that use the entire dataset to compute exact query answers (more details are included in PlatoDB’s experimental evaluation in Section 7).

3 Data and Queries

Before describing the PlatoDB system, we first present its data model and query language.

Statistic | Definition | Query Expression
Mean μ_X | (1/n) Σ_i x_i | (1/n) · Sum(X, [1, n])
Variance σ²_X | (1/n) Σ_i (x_i − μ_X)² | (1/n) · Sum(Times(Minus(X, SeriesGen(μ_X, n)), Minus(X, SeriesGen(μ_X, n))), [1, n])
Covariance Cov(X, Y) | (1/n) Σ_i (x_i − μ_X)(y_i − μ_Y) | (1/n) · Sum(Times(Minus(X, SeriesGen(μ_X, n)), Minus(Y, SeriesGen(μ_Y, n))), [1, n])
Correlation ρ(X, Y) | Cov(X, Y) / (σ_X · σ_Y) | ratio of the covariance expression to the product of the standard deviation expressions
Cross-correlation | ρ(X, Y) with Y shifted by a lag l | correlation expression over X and the lag-shifted Y
Table 1: Query expressions for common statistics.

Data Model. For the purpose of this work, a time series T = (t_1, d_1), (t_2, d_2), …, (t_n, d_n) is a sequence of (time, data point) pairs (t_i, d_i), such that the data point d_i was observed at time t_i. We follow existing work [13] to normalize and standardize the time series so that all time series are in the same domain and have the same resolution. Since all time series are aligned, for ease of exposition we omit the exact time points and use instead the index of the data points whenever we need to define a time interval. For instance, we will denote the above time series simply as T = d_1, d_2, …, d_n and use [i, j] to refer to the time interval [t_i, t_j]. A subsequence of a time series is called a time series segment. For example, d_2, d_3 is a segment of the time series d_1, d_2, d_3, d_4.

Figure 2: Grammar of query expressions.

Query Language. PlatoDB supports queries whose main building blocks are aggregation queries over time series. Figure 2 shows the formal definition of the query language and Table 1 lists several common statistics that can be expressed in this language.

A query expression is an arithmetic expression over terms combined with the standard arithmetic operators (+, −, ×, ÷), where each term is either an arithmetic literal or an aggregation expression over a time series. An aggregation expression Sum(T, [i, j]) over a time series T computes the sum of all data points of T in the time interval [i, j]. Note that the time series that is aggregated can be either a base time series or a derived time series that was computed from a set of base time series through a set of time series operators. PlatoDB allows a series of time series operators, including Plus, Minus, and Times (which return a time series whose data points are computed by adding, subtracting, and multiplying the respective data points of the input time series, respectively), as well as SeriesGen, which takes as input a value c and a count n and creates a new time series that contains n data points with the value c.

Note that the query language can be used to express many common statistics over time series encountered in practice and all the queries we encountered during the DELPHI project conducted at UC San Diego, which explored how health-related data about individuals, including large amounts of sensor data, can be leveraged to discover the determinants of health conditions and which served as the motivation for this work [18]. These include the mean and variance of a single time series, as well as the covariance, correlation, and cross-correlation between two time series. Table 1 shows how common statistics can be expressed in PlatoDB’s query language.
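To illustrate how the operators compose, the following sketch evaluates the variance of a series with known mean directly over raw data. The operator names (SeriesGen, Minus, Times, Sum) follow the text; the Python encoding of each operator as a list-valued function is our own illustration.

```python
def series_gen(value, n):
    """SeriesGen: a constant time series of n data points with the given value."""
    return [value] * n

def minus(x, y):
    """Minus: pointwise difference of two aligned time series."""
    return [a - b for a, b in zip(x, y)]

def times(x, y):
    """Times: pointwise product of two aligned time series."""
    return [a * b for a, b in zip(x, y)]

def ts_sum(x):
    """Sum: aggregation over the whole series."""
    return sum(x)

# Variance of T with a known mean mu, composed from the operators above:
T = [1.0, 2.0, 3.0, 4.0]
mu, n = 2.5, len(T)
dev = minus(T, series_gen(mu, n))          # deviations from the mean
variance = ts_sum(times(dev, dev)) / n     # mean squared deviation
```

PlatoDB evaluates the same operator tree not over raw data points but over per-segment compression functions and error measures, as Section 5 describes.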

4 Segment Tree

As explained in Section 2, at data import time PlatoDB creates for each time series a hierarchy of summarizations of the series in the form of a segment tree. In this section, we first explain the structure of the tree and then describe the segment tree generation algorithm.

4.1 Segment Tree Structure

Let T be a time series. The segment tree of T is a binary tree whose nodes summarize segments of the time series, with nodes higher up the tree summarizing larger segments and nodes lower down the tree summarizing progressively smaller segments. In particular, the root node summarizes the entire time series T. Moreover, for each node of the tree summarizing a segment s of T, its left and right children summarize two subsegments that together form a partitioning of the original segment s. As we will see in Section 6, this hierarchical structure allows PlatoDB to adapt to varying error/time budgets by accessing only the parts of the tree required to achieve the given error/time budget.

At each node corresponding to a segment s, PlatoDB summarizes the segment by keeping two types of measures: (a) a description of a compression function that is used to approximately represent the time series values in the segment and (b) a set of error measures describing how far the above approximate values are from the real values. As we will see in Sections 5 and 6, at query processing time PlatoDB uses the compression function and error measures stored in each node to compute an approximate answer of the query and deterministic error guarantees, respectively. We next describe the compression functions and error measures stored within each segment tree node in detail.

Segment Compression Function. Let s be a segment. PlatoDB summarizes its contents through a compression function f chosen by the user. PlatoDB supports the use of any of the compression functions suggested in the literature [21, 20, 19, 11, 5, 4]. Examples include, but are not limited to, the Piecewise Aggregate Approximation (PAA) [21], the Adaptive Piecewise Constant Approximation (APCA) [20], the Piecewise Linear Representation (PLR) [19], the Discrete Fourier Transformation (DFT) [11], the Discrete Wavelet Transformation (DWT) [5], and the Chebyshev polynomials (CHEB) [4].

To describe the function, PlatoDB stores in the segment node parameters describing the function. These parameters depend on the type of the function. For instance, if f is a Piecewise Aggregate Approximation (PAA), estimating all values within a segment by a single value c, then the parameter is just that single value c. On the other hand, if f is a Piecewise Linear Representation (PLR), estimating the values in the segment through a line f(i) = a·i + b, then the function parameters are the coefficients a and b of the line.

In the rest of the document, we will refer directly to the compression function f (instead of the parameters that are used to describe it). Given a segment, we will use f(i) to denote the value for element i of the segment, as derived by f.

Segment Error Measures. In addition to the compression function, PlatoDB also stores a set of error measures for each time series segment s. PlatoDB stores the following three error measures:

  • ε_1: The sum of the absolute distances between the original and the compressed time series (also known as the Manhattan or L1 distance), i.e., ε_1 = Σ_i |d_i − f(i)|.

  • ε_max: The maximum absolute value of the original time series, i.e., ε_max = max_i |d_i|.

  • ε̂_max: The maximum absolute value of the compressed time series, i.e., ε̂_max = max_i |f(i)|.
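All three measures can be computed in a single pass over a segment. The following sketch (our own variable names) returns the L1 distance, the maximum absolute original value, and the maximum absolute compressed value:

```python
def error_measures(values, f, start=0):
    """Compute the three per-segment error measures in one pass.

    values -- the raw data points of the segment
    f      -- compression function mapping a (global) index to an estimated value
    start  -- index of the segment's first data point within the time series
    """
    err_l1 = 0.0     # Manhattan (L1) distance between original and compressed
    err_max = 0.0    # max absolute value of the original data points
    err_max_c = 0.0  # max absolute value of the compressed data points
    for offset, d in enumerate(values):
        est = f(start + offset)
        err_l1 += abs(d - est)
        err_max = max(err_max, abs(d))
        err_max_c = max(err_max_c, abs(est))
    return err_l1, err_max, err_max_c

# PAA example: the segment [1, 2, 2, 3] compressed by its average 2.0
measures = error_measures([1.0, 2.0, 2.0, 3.0], lambda i: 2.0)
```

For this PAA example the measures are (2.0, 3.0, 2.0), matching Example 3's pattern: the L1 error sums the deviations from the average, and the compressed maximum is the absolute value of the average itself.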

Example 3

For instance, consider a segment summarized through the PAA compression function, which estimates every data point in the segment by the segment average μ (i.e., f(i) = μ). Then ε_1 = Σ_i |d_i − μ|, ε_max = max_i |d_i|, and ε̂_max = |μ|.

As we will see in Section 5, the above three error measures are sufficient to compute deterministic error guarantees for any query supported by the system, regardless of the employed compression function . This allows administrators to select the compression function best suited to each time series, without worrying about computing the error guarantees, which is automatically handled by PlatoDB.

4.2 Segment Tree Generation

We next describe the algorithm generating the segment tree. To build the tree, the algorithm has to decide how to build the children nodes from a parent node; i.e., how to partition a segment into two non-overlapping subsegments. Each possible splitting point will lead to different children segments and as a result to different errors when PlatoDB uses the children segments to answer a query at query processing time. Ideally, the splitting point should be the one that minimizes the error among all possible splitting points. However, since PlatoDB supports ad hoc queries and since each query may benefit from a different splitting point, there is no way for PlatoDB to choose a splitting point that is optimal for all queries.

Segment Tree Generation Algorithm. Based on this observation, PlatoDB chooses the splitting point that minimizes the error for the basic query that simply computes the sum of all data points of the original segment. In particular, the segment tree generation algorithm starts from the root and, proceeding in a top-down fashion, selects for each segment a splitting point that leads to two subsegments such that the sum of the Manhattan distances of the new subsegments is minimized.

The algorithm stops splitting a segment further when one of the following two conditions holds: (i) when the Manhattan distance of the segment is smaller than a threshold t_ε, or (ii) when the size of the segment is below a threshold t_s. The choice between conditions (i) and (ii) and the values of the corresponding thresholds t_ε and t_s are specified by the system administrator.

Since the algorithm needs time proportional to the size of a segment to compute the splitting point of a single segment, and it repeats this process for every non-leaf tree node, it exhibits a worst-case time complexity of O(n · m), where n is the size of the original time series (i.e., the number of its data points) and m the number of nodes in the resulting segment tree.
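A minimal version of the generation algorithm, using PAA as the compression function and an exhaustive scan over candidate splitting points, might look as follows. This is an illustrative sketch: a production version would compute split errors incrementally rather than recomputing the L1 error from scratch for every candidate split.

```python
def l1_error_paa(values):
    """L1 error of summarizing `values` by their average (PAA compression)."""
    mean = sum(values) / len(values)
    return sum(abs(v - mean) for v in values)

def build_tree(values, start, err_threshold, min_size):
    """Recursively split a segment at the point minimizing the combined
    L1 error of the two children (stopping conditions (i) and (ii))."""
    node = {"start": start, "end": start + len(values),
            "mean": sum(values) / len(values),
            "err_l1": l1_error_paa(values), "left": None, "right": None}
    if node["err_l1"] <= err_threshold or len(values) <= min_size:
        return node  # segment is accurate enough or too small to split
    # Try every split point; keep the one with the smallest combined L1 error.
    best_split, best_err = None, node["err_l1"]
    for k in range(1, len(values)):
        err = l1_error_paa(values[:k]) + l1_error_paa(values[k:])
        if err < best_err:
            best_split, best_err = k, err
    if best_split is not None:
        node["left"] = build_tree(values[:best_split], start,
                                  err_threshold, min_size)
        node["right"] = build_tree(values[best_split:], start + best_split,
                                   err_threshold, min_size)
    return node
```

On a series such as [1, 1, 5, 5], the sketch splits between the second and third points, producing two children that each summarize their constant subsegment with zero L1 error; this also illustrates how the resulting tree adapts its depth to where the series changes.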

Discussion. Note that by deciding independently how to split each individual segment into two subsegments, the segment tree generation algorithm is a greedy algorithm, which, even though it makes optimal local decisions for the basic aggregation query, may not lead to optimal global decisions. For instance, there is no guarantee that the nodes that exist at a particular level of the segment tree correspond to the nodes that minimize the error of the basic aggregation query. The literature contains a multitude of algorithms that can provide such a guarantee for a given number of segments k; i.e., algorithms that can, given a time series and a number k, produce k segments of the time series that minimize some error metric. Examples include the optimal algorithm of [3], as well as approximation algorithms with formal guarantees presented in [34]. However, all these algorithms have very high worst-case time complexity that makes them prohibitive for the large number of data points typically found in sensor datasets, and they are therefore not considered in this work. Though several heuristic segmentation algorithms exist, such as the Sliding Windows [33], Top-Down [22], and Bottom-Up [23] algorithms, similar to our greedy algorithm they do not provide any formal guarantees.

Finally, note that the tree generated by the above algorithm will in general be unbalanced. Intuitively, the algorithm will create more nodes and corresponding tree levels to cover segments that contain data points that are more irregular and/or rapidly changing, utilizing fewer nodes for smooth segments.

5 Computing Approximate Query Answers and Error Guarantees

Given pre-computed segment trees for time series T_1, …, T_k, PlatoDB answers ad hoc queries over the time series by accessing their segment trees. In particular, to answer a given query q under an error/time budget, PlatoDB navigates the segment trees of the time series involved in q, selects segment nodes (or simply segments) that satisfy the budget, and computes an approximate answer for q together with deterministic error guarantees.

We will next present the query processing algorithm. For ease of exposition, we will start by describing how PlatoDB computes an approximate query answer and the associated error guarantees assuming that the segment nodes have already been chosen, and will explain in Section 6 how PlatoDB traverses the trees to choose the segment nodes.

Approximate query answering problem under given segments. Formally, let T_1, …, T_k be time series, such that each time series T_i is partitioned into a set of segments. Given (a) these segments and the associated measures as described above and (b) a query q over the time series T_1, …, T_k, we will show how PlatoDB computes an approximate query answer ã and an estimated error ε, such that the approximate query answer is guaranteed to be within ε of the accurate query answer a, i.e., |a − ã| ≤ ε. (By accurate answer we mean the answer obtained by running the query over the raw data; note, however, that PlatoDB can estimate errors without computing the accurate answers.)

For ease of exposition, we first describe the simple case where each time series consists of a single segment perfectly aligned with the single segments of the other series, before describing the general case, where each time series contains multiple segments, which may moreover not be perfectly aligned with the segments of the other time series.

5.1 Single Time Series Segment

Let T_1, …, T_k be time series with single aligned segments, i.e., each T_i is approximated by a single segment s_i. Also let f_i be the compression function and (ε_1^i, ε_max^i, ε̂_max^i) the error measures of segment s_i. To compute the approximate answer and error guarantees of a query q over T_1, …, T_k using the single segments s_1, …, s_k, PlatoDB employs an algebraic approach, computing in a bottom-up fashion for each algebraic operator of q the approximate answer and error guarantees for the subquery corresponding to the subtree rooted at that operator.

This algebraic approach is based on formulas that, for each algebraic query operator, given an approximate query answer and error for the inputs of the operator, provide the corresponding query answer and error for the output of the operator. Figure 3 shows the formulas employed by PlatoDB for each algebraic query operator supported by the system. Note that the output signatures differ between operators. This is due to the different types of operators supported by PlatoDB, as explained next. Recall from Section 3 that PlatoDB's query language consists of three types of operators: (i) time series operators, (ii) the aggregation operator, and (iii) arithmetic operators. While time series operators output a time series, aggregation and arithmetic operators output a single number. As a result, the formulas used for answer and error estimation treat these two classes of operators differently: For time series operators, the formulas return, similarly to the input time series, the compression function and error measures of the output time series. For aggregation and arithmetic operators on the other hand, which return a single number and not an entire time series, the formulas return simply a single approximate answer and estimated error. Figure 3 shows the resulting formulas. (Out of these formulas, the most involved are the output measure estimation formulas of the Times operator. More details on how they were derived can be found in Appendix A.1.)
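For intuition on where such error formulas come from, consider the Sum aggregation over a single segment: the approximate answer is the sum of the compressed values, and by the triangle inequality the true sum can deviate from it by at most the segment's L1 error measure, since |Σ d_i − Σ f(i)| ≤ Σ |d_i − f(i)| = ε_1. A minimal sketch in our own notation (not the exact formula set of Figure 3):

```python
def sum_with_guarantee(node):
    """Approximate Sum over a single segment, with a deterministic bound.

    node -- dict with the segment's index range, compression function, and
            L1 error measure (illustrative layout)

    Returns (approximate_sum, error): the true sum is guaranteed to lie
    within `error` of `approximate_sum` by the triangle inequality.
    """
    approx = sum(node["compress"](i) for i in range(node["start"], node["end"]))
    return approx, node["err_l1"]

# Segment [1, 2, 2, 3] compressed by its average 2.0 has err_l1 = 2;
# the approximate sum is 4 * 2.0 = 8.0, and the true sum 8.0 lies within 2 of it.
seg = {"start": 0, "end": 4, "compress": lambda i: 2.0, "err_l1": 2.0}
approx, err = sum_with_guarantee(seg)
```

The formulas for the remaining operators follow the same pattern, propagating the three error measures through each operator so that the final bound remains deterministic.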

Figure 3: Formulas for estimating answer and error for each algebraic operator (single segment). The figure gives, for each time series operator (SeriesGen, Plus, Minus, Times), the compression function and error measures of its output, and, for the aggregation operator Sum and the arithmetic operators, the approximate output and estimated error.

Without going into the details of each formula, we next show through an example how they can be used to compute the answer and corresponding error guarantees for an entire query.

Figure 4: Approximate query answer and associated error for the variance query of Example 4. Compression functions and error measures are shown in blue and red, respectively.
Example 4

This example shows how to use the formulas of Figure 3 to compute the approximate answer and associated error for a query computing the variance of a time series T consisting of a single segment s. To keep the query expression simple, we assume that the mean μ of T is known in advance (note that even if μ were not known, the query would still be expressible in PlatoDB’s query language, albeit through a longer expression). Let f be the compression function and E the error measures of s. The variance query can then be expressed by subtracting the constant series SeriesGen(μ) from T, squaring the result through the Times operator, and aggregating through Sum. Figure 4 shows how PlatoDB evaluates this query in a bottom-up fashion. It first uses the formula of the SeriesGen operator to compute the compression function and error measures for the output of the SeriesGen operator. It then computes the compression function and error measures for the output of the Minus operator. The computation continues in a bottom-up fashion, until PlatoDB computes the output of the Sum operator in the form of an approximate answer (an expression involving the number n of data points in T) and an estimated error.
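To make the bottom-up evaluation concrete, the following is a minimal, self-contained sketch of such an algebraic evaluation for the variance query. It is not PlatoDB's actual formulas: it assumes a 0-degree (piecewise-constant) compression with a single pointwise L-infinity error measure, and all function names are hypothetical.

```python
import math

# A series is represented by an approximation array plus a pointwise
# L-infinity bound eps, so true[i] lies in [approx[i]-eps, approx[i]+eps].

def series_gen(value, n):
    # Generate an exactly-known constant series: zero error.
    return ([value] * n, 0.0)

def minus(a, b):
    av, ae = a; bv, be = b
    return ([x - y for x, y in zip(av, bv)], ae + be)

def times(a, b):
    av, ae = a; bv, be = b
    # |AB - A'B'| <= |A'|*eps_B + |B'|*eps_A + eps_A*eps_B, pointwise.
    vals = [x * y for x, y in zip(av, bv)]
    eps = max(abs(x) * be + abs(y) * ae + ae * be for x, y in zip(av, bv))
    return (vals, eps)

def agg_sum(a):
    av, ae = a
    return (sum(av), len(av) * ae)  # summing n points scales the bound by n

def variance(approx_series, mean):
    n = len(approx_series[0])
    centered = minus(approx_series, series_gen(mean, n))
    total, err = agg_sum(times(centered, centered))
    return total / n, err / n

# Usage: approximate a smooth series by one constant segment (its mean).
true_series = [math.sin(i / 50.0) for i in range(100)]
mean = sum(true_series) / len(true_series)
seg_value = mean  # 0-degree compression, single segment
eps = max(abs(v - seg_value) for v in true_series)
approx = ([seg_value] * len(true_series), eps)

approx_var, err_bound = variance(approx, mean)
exact_var = sum((v - mean) ** 2 for v in true_series) / len(true_series)
assert abs(exact_var - approx_var) <= err_bound  # deterministic bound holds
```

The deterministic guarantee follows because every operator propagates a worst-case bound rather than a statistical estimate.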

Importantly, the formulas shown in Figure 3 are guaranteed to produce the best error estimation among all formulas that use the three error measures employed by PlatoDB, as stated by the following theorem:

Theorem 1

The estimated errors produced through the use of the formulas shown in Figure 3 are the lowest among all possible error estimations produced by using the error measures described in Section 4.

The proof can be found in Appendix A.2.

5.2 Multiple Segment Time Series

Let us now consider the general case, where each time series contains multiple segments of varying sizes. As a result of the varying segment sizes, segments of different time series may not fully align.

Example 5

For instance, consider the top two time series of Figure 5 (ignore the third time series for now). A segment of the first time series may overlap with two segments of the second, and vice versa.

One may think that this can be easily solved by creating subsegments that are perfectly aligned and then using for each of them the answer and error estimation formulas of Section 5.1.

Example 6

Continuing our example, the two time series of Figure 5 can be split into the three aligned subsegments shown as the output time series in the figure. Then, for each of these output segments, we can compute the error based on the formulas of Section 5.1.

However, the problem with this approach is that the resulting error will be severely overestimated as the error of a single segment of the original time series may be counted multiple times, as it overlaps with multiple output segments.

Example 7

For instance, for a query over the two time series of Figure 5, the error of a segment spanning two output segments will be double-counted, as it is counted towards the error of both output segments.

To avoid this pitfall, PlatoDB does not estimate the error for each segment individually but instead computes the error holistically for the entire time series. Figures 6 and 7 show the resulting answer and error estimation formulas for the time series operators and the aggregation operator, respectively. The formulas of the arithmetic operators are omitted, as they remain the same as in the single-segment case: arithmetic operators take single numbers instead of time series as input and are thus not affected by multiple segments.
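As a toy numeric illustration of the double-counting pitfall (with made-up error numbers, not the paper's formulas):

```python
# Series A has one segment whose total error contribution is 1.0 and which
# spans points 0..9; series B's boundaries split those points into two
# aligned output segments (0..4 and 5..9).
seg_a_err = 1.0  # total error mass of A's single segment

# Naive per-output-segment estimation charges A's full error to each
# overlapping output segment:
naive = seg_a_err + seg_a_err   # = 2.0, severely overestimated

# Holistic estimation charges A's error once for the whole series:
holistic = seg_a_err            # = 1.0

assert naive == 2 * holistic    # the naive estimate doubles the error
```

This is why PlatoDB's multiple-segment formulas account for each input segment once, rather than once per overlapping output segment.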

Figure 5: Example of aligned time series segments. The newly generated time series is shown in red.

(Figure 6 — Time Series Operators over multiple segments: for each of SeriesGen, Plus, Minus, and Times, the output compression function and error measures.)

Figure 6: Formulas for estimating answer and error for time series operators (multiple segments). For each output time series segment, the formulas refer to the input segments that overlap with it.

(Figure 7 — Aggregation Operator over multiple segments: for Sum, the approximate output and estimated error.)

Figure 7: Formulas for estimating answer and error for the aggregation operator (multiple segments).

6 Navigating the Segment Tree

So far we have seen how PlatoDB computes the approximate answer to a query and its associated error, assuming that the segments used for query processing have already been selected. In this section, we explain how this selection is performed. In particular, we show how PlatoDB navigates the segment trees of the time series involved in the query to continuously compute better estimations of the query answer until the given error or time budget is satisfied.

Query Processing Algorithm. Let T1, …, Tn be a set of time series and t1, …, tn the respective segment trees. Let also q be a query over T1, …, Tn and ε/τ an error/time budget, respectively. To answer q under the given budget, PlatoDB first starts from the roots of t1, …, tn and uses them to compute the approximate query answer and corresponding error using the formulas presented in Section 5. If the estimated error is greater than the error budget ε, or if the elapsed time is still within the allowed time budget τ, PlatoDB chooses one of the tree nodes used above, replaces it with its children, and repeats the above procedure using the newly selected nodes until the given error/time budget is reached. What is important is the criterion used to choose the node that is replaced at each step by its children. In general, PlatoDB has to select between several nodes, as it is exploring in which segment tree, and moreover in which part of the selected segment tree, it pays off to navigate further down. Since PlatoDB aims to reduce the estimated error as much as possible, at each step it greedily chooses the node whose replacement by its children leads to the biggest reduction in the estimated error. The resulting procedure is shown as Algorithm 1. (Note that the algorithm is shown for both the error and the time budget case. In contrast to the time budget case, in which the algorithm always has to keep a computed estimated answer so that it can be returned when the time budget runs out, in the error budget case this is not required. Thus, in the latter case, it suffices to compute the answer only at the very last step of the algorithm, avoiding its iterative computation during the while loop.)

Input: Segment Trees , query , error budget or time budget
Output: Approximate answer and error
1 Access the roots of ;
2 Compute and by using the compression functions and error measures of the currently accessed nodes (see Section 5 for details);
3 while the estimated error exceeds the error budget or the elapsed time is within the time budget do
4        Choose a node maximizing the error reduction;
5        Update the current answer and error using the compression functions and error measures of the currently accessed nodes;
Return the approximate answer and error;
Algorithm 1 PlatoDB Query Processing
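The greedy loop of Algorithm 1 can be sketched as follows. This is a hedged, simplified illustration, not PlatoDB's implementation: it handles only the error-budget case, and it assumes each node carries a precomputed error contribution (a "fine-error-reduction" tree, where a child never reduces error more than its ancestor). The `Node` class and all names are hypothetical.

```python
import heapq

class Node:
    def __init__(self, error, children=()):
        self.error = error          # this node's contribution to query error
        self.children = list(children)

    def reduction(self):
        # Error reduction from replacing this node with its children.
        if not self.children:
            return 0.0              # leaves cannot be refined further
        return self.error - sum(c.error for c in self.children)

def greedy_answer_error(root, error_budget):
    # Max-heap on reduction; counter breaks ties deterministically.
    frontier, counter = [], 0
    def push(n):
        nonlocal counter
        heapq.heappush(frontier, (-n.reduction(), counter, n))
        counter += 1
    push(root)
    current_error = root.error
    while current_error > error_budget and frontier:
        _, _, node = heapq.heappop(frontier)
        if not node.children:
            continue                 # leaf: no further refinement possible
        current_error -= node.reduction()
        for c in node.children:
            push(c)
    return current_error

# Usage: a tiny tree where the root's error 1.0 splits into 0.3 + 0.2.
tree = Node(1.0, [Node(0.3, [Node(0.1), Node(0.05)]), Node(0.2)])
assert abs(greedy_answer_error(tree, 0.6) - 0.5) < 1e-9   # one expansion
assert abs(greedy_answer_error(tree, 0.4) - 0.35) < 1e-9  # expands 0.3 node
```

The heap realizes the greedy criterion: the node offering the largest error reduction is always expanded next.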

Algorithm Optimality. Given its greedy nature, one may wonder whether the query processing algorithm is optimal. To answer this question, we first have to define optimality. Since the aim of the query processing algorithm is to produce the lowest possible error in the fastest possible time (which can be approximated by the number of nodes that are accessed), we say that an algorithm is optimal if, for every possible query, set of segment trees, and error budget, it answers the query under the given budget accessing no more nodes than any other possible algorithm. Since comparing against arbitrary algorithms is hard, we also restrict our attention to deterministic algorithms that access the segment trees in a top-down fashion (i.e., to access a node, all its ancestor nodes must also be accessed); we refer to this class as top-down deterministic algorithms. It turns out that no top-down deterministic algorithm can be optimal, as the following theorem states:

Theorem 2

There is no optimal top-down deterministic algorithm.

Consider the following segment trees of two time series. The segment tree of the first time series is shown in Figure 8, and the segment tree of the second is a tree containing a single node. Now consider a query over these two time series and a suitable error budget. Assume that whenever the query processing algorithm replaces a node by its children, the error for the query is reduced only slightly, with the exception of the shaded node of Figure 8, which, when replaced by its children, leads to an error reduction large enough to bring the error within the budget. This means that the query processing algorithm can only terminate after accessing the children of the shaded node; otherwise, the error estimated by the algorithm exceeds the error budget and thus does not allow the algorithm to terminate. Since the shaded node can be placed at an arbitrary position in the tree, for every given deterministic algorithm we can place the shaded node so that the algorithm accesses the children of the shaded node only after it has accessed all the other nodes in the tree. However, this is suboptimal, as there is a way to access the children of the shaded node with fewer node accesses (namely, by following the path from the root to the shaded node). Therefore, no top-down deterministic algorithm is optimal.

Figure 8: Segment Tree for Theorem 2.

As a result of the above theorem, PlatoDB’s query processing algorithm cannot be optimal in general. However, we can show that it is optimal for segment trees that exhibit the following property: for every pair of nodes n and n' of the segment tree such that n' is a descendant of n, the error reduction achieved by replacing n with its children is greater than or equal to the error reduction achieved by replacing n' with its children. We call such a tree a fine-error-reduction tree; intuitively, the property guarantees that any node leads to a greater or equal error reduction than any of its descendants. If all segment trees satisfy this property, PlatoDB’s query processing algorithm is optimal:

Theorem 3

In the presence of segment trees that are fine-error-reduction trees, PlatoDB’s query processing algorithm is optimal.

(Table 2 — incremental error update formulas for the Plus, Minus, and Times operators.)
Table 2: Incremental update of estimated errors for time series operators.

Incremental Error Update. Having proven the optimality of the algorithm for fine-error-reduction trees, we next discuss an optimization that can be employed to speed up the algorithm. It is easy to observe that when the algorithm replaces a node by its two children, it recomputes the error using all currently accessed nodes, although only one node of the previous node set has been replaced by two new ones.

This observation leads to the incremental error update optimization of PlatoDB’s query processing algorithm. Instead of recomputing the error from scratch using all nodes, PlatoDB incrementally updates the error using only the error measures of the replaced node and of its two newly inserted children, together with the error measures of the segments of the other time series that overlap with them. The estimated error using the children can then be computed incrementally from the error using the parent node through the incremental error update formulas shown in Table 2. (The SeriesGen operator is omitted, since its input is not a time series and as a result there is no segment tree associated with its input.)
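The idea behind the optimization can be illustrated with a toy example (made-up numbers, not Table 2's actual formulas): when the estimated error decomposes into per-node contributions, replacing a node needs only a local adjustment rather than a full recomputation.

```python
# Per-node error contributions of the currently accessed nodes.
errors = {"n1": 0.4, "n2": 0.6}
total = sum(errors.values())               # full computation: 1.0

# Replace n2 by its children n2a, n2b with contributions 0.25 and 0.15.
total += (0.25 + 0.15) - errors.pop("n2")  # incremental O(1) update
errors.update({"n2a": 0.25, "n2b": 0.15})

assert abs(total - sum(errors.values())) < 1e-12  # matches recomputation
```

For non-additive operators such as Times, the update also involves the overlapping segments of the other operand, which is what the formulas of Table 2 capture.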

Probabilistic Extension. While PlatoDB provides deterministic error guarantees, which as discussed above are in many cases required, it is interesting to note that it can be easily extended to provide probabilistic error guarantees if needed. Most importantly, this can be done simply by changing the error measures computed for each segment to the variance and the maximal absolute value of the approximation error. We can then employ the Central Limit Theorem (CLT) [10] to bound the actual error, with a confidence level that can be adjusted by the user. It is interesting that the rest of the system, including the hierarchical structure of the segment tree and the tree navigation algorithm employed at query processing time, does not need to be modified. In future work we plan to further explore this probabilistic extension and compare it to existing approximate query answering techniques with probabilistic guarantees.
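The shape of such a CLT-style bound can be sketched as follows. This is an illustration only, not PlatoDB's formulas: for a Sum query over n residuals with variance var, the CLT suggests an approximate bound of z * sqrt(n * var) on the total error, where z controls the confidence level (z = 3 corresponds to roughly 99.7% for normal-like error sums). The toy residuals below are deterministic, chosen only to make the arithmetic checkable.

```python
import math

n, z = 10_000, 3.0
residuals = [0.5 if i % 2 == 0 else -0.5 for i in range(n)]  # toy residuals
var = sum(r * r for r in residuals) / n  # = 0.25 (residuals have zero mean)
bound = z * math.sqrt(n * var)           # = 3 * sqrt(2500) = 150.0
total_error = sum(residuals)             # = 0.0 for this toy input
assert abs(total_error) <= bound         # the probabilistic bound holds here
```

Note the contrast with the deterministic case: the CLT bound grows as sqrt(n) rather than n, which is why probabilistic guarantees are tighter but may occasionally be violated.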

7 Experimental Evaluation

To evaluate PlatoDB’s performance and verify our hypothesis that PlatoDB is able to provide significant savings in the query processing of sensor data, we conducted preliminary experiments on real sensor data. We present the early results here.

Datasets. For our preliminary experiments, we used two real sensor datasets:

  1. Intel Lab Data (ILD) (http://db.lcs.mit.edu/labdata/labdata.html): Smart home data (humidity and temperature) collected at 31-second intervals from 54 sensors deployed at the Intel Berkeley Research Lab between February 28th and April 5th, 2004. The dataset contains about 2.3 million tuples (i.e., 4.6 million sensor readings in total).

  2. EPA Air Quality Data (AIR) (https://www.epa.gov/outdoor-air-quality-data): Air quality data collected at hourly intervals from about 1000 sensors from January 1st, 2000 to April 1st, 2016. The dataset contains about 133 million tuples (i.e., 266 million sensor readings in total).

From each dataset we extracted multiple time series, each corresponding to a single attribute of the dataset: Humidity and Temperature for ILD, and Ozone and SO2 for AIR. We then used PlatoDB to create the corresponding segment tree for each time series and to answer queries over them.

Experimental platform. All experiments were performed on a computer with a 4th generation Intel i7-4770 processor, running Ubuntu 14.04.1. All algorithms were implemented in C++ and compiled with g++ 4.8.4 using -O3 optimization. All data was stored in main memory.

7.1 Experimental Results

In our preliminary evaluation, we measured two quantities: first, the size of the segment tree created by PlatoDB (since the segment tree is stored in main memory), and second, the query processing performance of PlatoDB compared to a system that answers queries using the entirety of the raw sensor data. In future work, we will conduct a more thorough evaluation of the system. We next present our preliminary results:

Dataset   Tuples        Raw Data   Segment Tree (0-degree)   Segment Tree (1-degree)
ILD       2,313,153     35.29 MB   0.14 MB                   0.67 MB
AIR       133,075,510   1.98 GB    4.37 MB                   8.11 MB
Table 3: Raw data and segment tree sizes.

Segment tree size. Table 3 shows the size of the raw data and the combined size of the segment trees built for all the time series extracted from the ILD and AIR datasets. (To make a fair comparison, the raw data size refers only to the combined size of the attributes used in the time series and does not include other attributes that exist in the original dataset, such as location codes.) We experimented with two different compression functions, resulting in different segment tree sizes: a 0-degree polynomial (corresponding to the Piecewise Aggregate Approximation [21], where each value within a segment is approximated through the average of the values in the segment) and a 1-degree polynomial (corresponding to the Piecewise Linear Approximation [19], where each segment is approximated through a line). As shown, the segment trees are significantly smaller than the raw sensor data (between roughly 50 and 460 times smaller, depending on the dataset and the compression function). As a result, the segment trees of the time series can easily be kept in main memory, even when the system stores a large number of time series.
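The 0-degree compression can be sketched in a few lines. This is a simplified, fixed-width variant (PAA-style) for illustration, not PlatoDB's adaptive segmentation: each segment stores only its mean plus a deterministic L-infinity error bound, so smooth data compresses tightly. The function name is hypothetical.

```python
import math

def compress_paa(series, width):
    # Split the series into fixed-width segments; store (mean, err, count).
    segments = []
    for start in range(0, len(series), width):
        chunk = series[start:start + width]
        mean = sum(chunk) / len(chunk)
        err = max(abs(v - mean) for v in chunk)  # deterministic L-inf bound
        segments.append((mean, err, len(chunk)))
    return segments

# Usage: a smooth signal of 10,000 points collapses to 100 small triples.
series = [math.sin(i / 1000.0) for i in range(10_000)]
segs = compress_paa(series, 100)
assert len(segs) == 100
assert all(err < 0.1 for _, err, _ in segs)  # smooth data -> tight bounds
```

A hierarchy of such summaries (coarser segments higher up) is what yields a segment tree that is orders of magnitude smaller than the raw data.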

(a) ILD (b) AIR
Figure 9: Query processing performance for correlation query (time shown in ms).

Query processing performance. We next compared the query processing performance of PlatoDB against a baseline: a custom in-memory algorithm that computes the exact answer of the queries using the raw data. To compare the systems, we measured the time required to process a correlation query between two time series (i.e., correlation(Humidity, Temperature) in ILD and correlation(Ozone, SO2) in AIR) with a varying error budget (ranging from 5% to 25%). Figure 9 shows the resulting times for each of the two datasets. Each graph depicts the performance of three systems: Exact, which is the baseline method of answering queries over the raw data, and PlatoDB-0 and PlatoDB-1, which are instances of PlatoDB using the 0-degree and 1-degree polynomial compression functions, respectively, as explained above.

By studying Figure 9, we can make the following observations.

  • Both instances of PlatoDB outperform Exact by one to three orders of magnitude, depending on the provided error budget.

  • In contrast to Exact, which always uses the entire raw dataset to compute exact query answers, PlatoDB allows the user to select the appropriate tradeoff between time spent in query processing and resulting error by specifying the desired error budget. The system adapts to the budget by providing faster responses as the allowed error budget increases.

  • Notably, PlatoDB remains significantly faster than Exact even for small error budgets; in particular, both PlatoDB instances substantially outperform Exact even when the error budget is as low as 5%, in both ILD and AIR.

In summary, our preliminary results show that PlatoDB has significant potential for speeding up the processing of ad hoc queries over large amounts of sensor data, as it outperforms exact query processing algorithms in many cases by several orders of magnitude. Moreover, it can provide such speedups while providing deterministic error guarantees, in contrast to existing sampling-based approximate query answering approaches that provide only probabilistic guarantees, which may not hold in practice. Despite the difference in guarantees, in future work we will conduct a more thorough evaluation of the system, comparing it also against sampling-based systems.

8 Related Work

Approximate query answering has been the focus of an extensive body of work, which we summarize next. However, to the best of our knowledge, this is the first work that provides deterministic guarantees for aggregation queries over multiple time series.

Approximate query answering with probabilistic error guarantees. Most of the existing work on approximate query processing has focused on using sampling to compute approximate query answers by appropriately evaluating the queries on small samples of the data [17, 1, 37, 2, 26]. Such approaches typically leverage statistical inequalities and the central limit theorem to compute the confidence interval or variance of the computed approximate answer. As a result, their error guarantees are probabilistic. While probabilistic guarantees are often sufficient, they are not suitable for scenarios where one wants to be certain that the answer will fall within a certain interval. (Note that, as discussed in Section 6, PlatoDB can also be extended to provide probabilistic guarantees when deterministic guarantees are not required, simply by modifying the error measures computed for each segment.)

A special form of sampling-based methods are online aggregation approaches, which provide a continuously improving query answer, allowing users to stop the query evaluation when they are satisfied with the resulting error  [15, 7, 26]. With its hierarchical segment tree, PlatoDB can support the online aggregation paradigm, while providing deterministic error guarantees.

Approximate query answering with deterministic error guarantees. Approximately answering queries while providing deterministic error guarantees has so far received only very limited attention [31, 24, 30]. Existing work in the area has focused on simple aggregation queries that involve a single relational table. In contrast, PlatoDB provides deterministic error guarantees on queries that may involve multiple time series (each of which can be thought of as a single relational table), enabling the evaluation of many common statistics that span tables, such as correlation and cross-correlation.

Approximate query answering over sensor data. PlatoDB is also one of the first approximate query answering systems that leverage the fact that sensor data are not random but follow a usually smooth underlying phenomenon. The majority of existing work on approximate query answering has targeted general relational data, and the works that did study approximate query processing for sensor data focused on the networking aspect of the problem, studying how aggregate queries can be efficiently evaluated in a distributed sensor network [25, 8, 9]. In contrast, our work focuses on the continuous nature of the sensor data, which it leverages to accelerate query processing even in a single-machine scenario, where historical sensor data already accumulated on the machine have to be analyzed.

Data summarizations. Last but not least, there has been extensive work on creating summarizations of sensor data. Work in this area has come mostly from two communities: the database community [16, 30, 27, 35] and the signal processing community [21, 20, 19, 5, 11].

The database community has mostly focused on creating summarizations (also referred to as synopses or sketches) that can be used to answer specific queries. These include, among others, histograms [16, 30, 12, 29] (e.g., EquiWidth and EquiDepth histograms [28], V-Optimal histograms [16], Hierarchical Model Fitting (HMF) histograms [36], and Compact Hierarchical Histograms (CHH) [32]), as well as sampling methods [14, 6], used among others for cardinality estimation [16] and selectivity estimation [30]. In contrast to such special-purpose approaches, PlatoDB supports a large class of queries over arbitrary sensor data.

The signal processing community, on the other hand, has produced a variety of methods that can be used to compress time series data. These include, among others, the Piecewise Aggregate Approximation (PAA) [21], the Adaptive Piecewise Constant Approximation (APCA) [20], the Piecewise Linear Representation (PLR) [19], the Discrete Wavelet Transform (DWT) [5], and the Discrete Fourier Transform (DFT) [11]. However, this line of work has not been concerned with how such compression techniques can be used to answer general queries. PlatoDB’s modular architecture allows the easy incorporation of such techniques as compression functions, which are then automatically leveraged by the system to enable approximate answering of a large class of queries with deterministic error guarantees.

9 Conclusion

In this paper, we proposed PlatoDB, a system that enables the efficient computation of approximate answers to queries over sensor data. By utilizing the novel segment tree data structure, PlatoDB creates at data import time a set of hierarchical summarizations of each time series, which are used at query processing time not only to enable the efficient processing of queries over multiple time series with varying error/time budgets but also to provide error guarantees that are deterministic and therefore guaranteed to hold, in contrast to the multitude of existing approaches that only provide probabilistic error guarantees. Our preliminary results show that the system can, in real use cases, lead to several orders of magnitude improvement over systems that access the entire dataset to provide exact query answers. In future work, we plan to perform a thorough experimental evaluation of the system, in order both to study its behavior on different datasets and query workloads and to compare it against systems that provide probabilistic error guarantees.

References

  • [1] S. Agarwal, B. Mozafari, A. Panda, H. Milner, S. Madden, and I. Stoica. Blinkdb: queries with bounded errors and bounded response times on very large data. In EuroSys, pages 29–42, 2013.
  • [2] B. Babcock, M. Datar, and R. Motwani. Load shedding for aggregation queries over data streams. In ICDE, pages 350–361, 2004.
  • [3] R. Bellman. On the approximation of curves by line segments using dynamic programming. Communications of the ACM, 4(6):284, 1961.
  • [4] Y. Cai and R. T. Ng. Indexing spatio-temporal trajectories with chebyshev polynomials. In SIGMOD, pages 599–610, 2004.
  • [5] K. Chan and A. W. Fu. Efficient time series matching by wavelets. In ICDE, pages 126–133, 1999.
  • [6] Y. Chen and K. Yi. Two-level sampling for join size estimation. In SIGMOD, pages 759–774, 2017.
  • [7] T. Condie, N. Conway, P. Alvaro, J. M. Hellerstein, K. Elmeleegy, and R. Sears. Mapreduce online. In NSDI, pages 313–328, 2010.
  • [8] J. Considine, F. Li, G. Kollios, and J. Byers. Approximate aggregation techniques for sensor databases. In ICDE, pages 449–460, 2004.
  • [9] J. Considine, F. Li, G. Kollios, and J. W. Byers. Approximate aggregation techniques for sensor databases. In ICDE, pages 449–460, 2004.
  • [10] R. M. Dudley. Uniform central limit theorems, volume 23. Cambridge Univ Press, 1999.
  • [11] C. Faloutsos, M. Ranganathan, and Y. Manolopoulos. Fast subsequence matching in time-series databases. In SIGMOD, pages 419–429, 1994.
  • [12] P. B. Gibbons, Y. Matias, and V. Poosala. Fast incremental maintenance of approximate histograms. In VLDB, pages 466–475, 1997.
  • [13] D. Q. Goldin and P. C. Kanellakis. On similarity queries for time-series data: Constraint specification and implementation. In CP, pages 137–153, 1995.
  • [14] P. J. Haas and A. N. Swami. Sequential sampling procedures for query size estimation. In SIGMOD, pages 341–350, 1992.
  • [15] J. M. Hellerstein, P. J. Haas, and H. J. Wang. Online aggregation. In SIGMOD Record, volume 26, pages 171–182, 1997.
  • [16] Y. E. Ioannidis and V. Poosala. Balancing histogram optimality and practicality for query result size estimation. In SIGMOD, pages 233–244, 1995.
  • [17] C. M. Jermaine, S. Arumugam, A. Pol, and A. Dobra. Scalable approximate query processing with the DBO engine. In SIGMOD, pages 725–736, 2007.
  • [18] Y. Katsis, C. Baru, T. Chan, S. Dasgupta, C. Farcas, W. Griswold, J. Huang, L. Ohno-Machado, Y. Papakonstantinou, F. Raab, et al. Delphi: Data e-platform for personalized population health. In e-Health Networking, Applications & Services (Healthcom), 2013 IEEE 15th International Conference on, pages 115–119. IEEE, 2013.
  • [19] E. Keogh. Fast similarity search in the presence of longitudinal scaling in time series databases. In ICTAI, pages 578–584, 1997.
  • [20] E. Keogh, K. Chakrabarti, M. Pazzani, and S. Mehrotra. Locally adaptive dimensionality reduction for indexing large time series databases. SIGMOD Record, 30(2):151–162, 2001.
  • [21] E. J. Keogh, K. Chakrabarti, M. J. Pazzani, and S. Mehrotra. Dimensionality reduction for fast similarity search in large time series databases. KAIS, 3(3):263–286, 2001.
  • [22] E. J. Keogh and M. J. Pazzani. An enhanced representation of time series which allows fast and accurate classification, clustering and relevance feedback. In KDD, pages 239–243, 1998.
  • [23] E. J. Keogh and M. J. Pazzani. Relevance feedback retrieval of time series data. In SIGIR, pages 183–190, 1999.
  • [24] I. Lazaridis and S. Mehrotra. Progressive approximate aggregate queries with a multi-resolution tree structure. In SIGMOD, pages 401–412, 2001.
  • [25] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. Tag: A tiny aggregation service for ad-hoc sensor networks. SIGOPS, 36(SI):131–146, 2002.
  • [26] N. Pansare, V. R. Borkar, C. Jermaine, and T. Condie. Online aggregation for large mapreduce jobs. PVLDB, 4(11):1135–1145, 2011.
  • [27] O. Papapetrou, M. N. Garofalakis, and A. Deligiannakis. Sketch-based querying of distributed sliding-window data streams. PVLDB, 5(10):992–1003, 2012.
  • [28] G. Piatetsky-Shapiro and C. Connell. Accurate estimation of the number of tuples satisfying a condition. In SIGMOD, pages 256–276, 1984.
  • [29] V. Poosala, V. Ganti, and Y. E. Ioannidis. Approximate query answering using histograms. IEEE Data Eng. Bull., 22(4):5–14, 1999.
  • [30] V. Poosala, Y. E. Ioannidis, P. J. Haas, and E. J. Shekita. Improved histograms for selectivity estimation of range predicates. In SIGMOD, pages 294–305, 1996.
  • [31] N. Potti and J. M. Patel. DAQ: A new paradigm for approximate query processing. PVLDB, 8(9):898–909, 2015.
  • [32] F. Reiss, M. N. Garofalakis, and J. M. Hellerstein. Compact histograms for hierarchical identifiers. In VLDB, pages 870–881, 2006.
  • [33] H. Shatkay and S. B. Zdonik. Approximate queries and representations for large data sequences. In ICDE, pages 536–545, 1996.
  • [34] E. Terzi and P. Tsaparas. Efficient algorithms for sequence segmentation. In SDM, pages 316–327, 2006.
  • [35] D. Ting. Towards optimal cardinality estimation of unions and intersections with sketches. In SIGKDD, pages 1195–1204, 2016.
  • [36] H. Wang and K. C. Sevcik. Histograms based on the minimum description length principle. VLDB J., 17(3):419–442, 2008.
  • [37] S. Wu, B. C. Ooi, and K. Tan. Continuous sampling for online aggregation over multiple queries. In SIGMOD, pages 651–662, 2010.

Appendix A Proofs

a.1 Error measures for the Times operator (Single Segment)

Let f and g be the compression functions of the two input time series, and let their error measures be given as well. For the Times() operator, the compression function and the error measures for the output time series are computed as follows:

  • The output compression function is the product f · g of the two input compression functions.

  • For the error measures, there are two options to transform the error expression of the product.
    Option 1: .
    Option 2: