1 Introduction
The streaming model of computation has become an increasingly popular model for processing massive datasets. In this model, the data is presented sequentially, and the objective is to answer some predefined query. The overwhelmingly large size of the dataset imposes a number of restrictions on any algorithm designed to answer such a query. For example, a streaming algorithm is permitted only a few passes, or in many cases, only a single pass over the data. Moreover, the algorithm should use space sublinear in, or even logarithmic in, the size of the data. For more details on the background and applications of the streaming model, [BBD02, Mut05, Agg07] provide excellent surveys.
Informally, a coreset for a given problem is a small summary of the dataset such that the cost of any candidate solution on the coreset is approximately the same as its cost on the original set. Coresets have been used in a variety of problems, including generalized facility location [FFS06], $k$-means clustering [FMS07, BFL16], principal component analysis [FSS13], and regression [DDH09]. Coresets also have a number of applications in distributed models (see [IMMM14, MZ15, BENW16, AK17], for example). To maintain coresets throughout the data stream, one possible approach is the so-called merge-and-reduce method, in which multiple sets may be adjusted and combined. Several well-known coreset constructions [HM04, Che09] for the $k$-median and $k$-means problems are based on the merge-and-reduce paradigm.

1.1 Motivation
Many applications discard obsolete data, choosing to favor relatively recent data as the basis for their queries. This motivates the time decay model, in which there exists a function $w$ so that the weight of the $i$-th most recent item is $w(i)$. Note that this is a generalization of both the insertion-only streaming model, where $w(i) = 1$ for all $i$, and the sliding-window model, where $w(i) = 1$ for the $W$ most recent items and $w(i) = 0$ otherwise. In this paper, we study the problem of maintaining coresets in a polynomial decay model, where $w(i) = i^{-c}$ for some parameter $c > 0$, and an exponential decay model, where the weight of the $i$-th most recent item is $2^{-i/h}$ for some half-life parameter $h$.
Although the exponential decay model is well-motivated by natural phenomena that exhibit half-life behavior, [CS03] observes that exponential decay and the sliding window model are often insufficient for many applications, because the decay occurs too quickly, and suggests that polynomial decay may be a reasonable alternative for some applications, such as measuring the availability of network links. For example, consider a network link that fails at every time step during some early time interval, and a second network link that fails only once, at a later time. Intuitively, it seems that the second link should be better, but under many parameter settings, the exponential decay model and the sliding window model will both judge the first link to be better. Fortunately, under the polynomial decay model, events that occur near the same time have approximately the same weight, and we obtain a view in which the second link is preferred [KP05]. In practice, time decay functions have been used in natural language understanding to give more importance to recent utterances than to past ones [SYC18].
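As a concrete (hypothetical) illustration of this contrast, the following sketch scores two failure histories under both decay models. The half-life, the decay exponent, and the failure times are our own illustrative choices, not values from the paper:

```python
def exp_weight(age, h=10.0):
    """Exponential decay with half-life h: weight 2^(-age/h)."""
    return 2.0 ** (-age / h)

def poly_weight(age, c=2.0):
    """Polynomial decay with exponent c: weight (age + 1)^(-c)."""
    return (age + 1.0) ** (-c)

# Link A fails at every step in [0, 9]; link B fails once, at time 50.
# At time T = 100 we score each link by the total decayed weight of its
# failures: a lower score means fewer "recent" failures, i.e. a better link.
T = 100
link_a = list(range(10))
link_b = [50]

for name, w in [("exponential", exp_weight), ("polynomial", poly_weight)]:
    score_a = sum(w(T - t) for t in link_a)
    score_b = sum(w(T - t) for t in link_b)
    print(name, "prefers", "A" if score_a < score_b else "B")
# Exponential decay prefers A (its burst of failures is ancient and nearly
# weightless), while polynomial decay prefers B, matching the intuition.
```

With these parameters the exponential model ranks the chronically failing link A as better, because its failures have decayed to almost nothing, whereas the polynomial model still charges A enough for its burst of failures to prefer B.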
Organization.
The rest of the paper is organized as follows. In Section 2, we summarize the main results of the paper and our algorithmic approaches. In Section 3, we discuss related work, and in Section 4, we formalize the problem and cover the required preliminaries. In Sections 5 and 6, we treat the polynomial and exponential decay models, respectively, in detail, presenting both the algorithms and their complete analysis.
2 Our Contributions
We summarize our results and give a high-level idea of our approach for problems in the polynomial and exponential decay models in the following subsections, respectively. We refer the reader to Sections 5 and 6 for details.
2.1 Polynomial decay
In the polynomial decay model, a stream of points arrives sequentially, and the weight of the $i$-th most recent point is $i^{-c}$, where $c > 0$ is a given constant parameter of the decay function. We first state a theorem showing that an offline coreset construction mechanism can be used to give a coreset for the polynomial decay model.
Theorem 1.
Given an algorithm that takes a set of points as input and constructs an $\varepsilon$-coreset in bounded time, there exists a polynomial decay algorithm that maintains an $\varepsilon$-coreset, with a corresponding bound on the number of stored points and on the update time.
Theorem 1 applies to any time-decay problem on data streams that admits an approximation algorithm via coresets. Among its applications are the problems of $k$-median and $k$-means clustering, $M$-estimator clustering, projective clustering, and subspace approximation. We list a few of these results in Table 1. Our result is a generalization of the vanilla merge-and-reduce approach used to convert offline coresets to streaming counterparts. In particular, plugging in $c = 0$ (i.e., no decay), we recover the vanilla streaming model, and the theorem recovers the corresponding guarantees.
Table 1: Problems to which our framework applies, with the offline coreset constructions used.

Problem | Coreset size | Offline algorithm
Metric $k$-median clustering | | [FL11]
Metric $k$-means clustering | | [BFL16]
Metric $M$-estimator | | [BFL16]
Subspace approximation | | [FL11]
Low-rank approximation | | [GLPW16]
Approach.
A natural starting point would be to attempt to generalize existing sliding window algorithms to time decay models. These algorithms typically use a histogram data structure [BO07], in which multiple instances of a streaming algorithm are started at various points in time, one of which well-approximates the objective evaluated on the data set represented by the sliding window. However, generalizing these histogram data structures to time-decay models does not seem to work, since the weights of all data points change upon each new update in a time-decay model, whereas streaming algorithms typically assume static weights for each data point.
Instead, our algorithm partitions the stream into blocks, where each block represents a disjoint collection of data points between certain time points. Each arriving element initially begins as its own block, containing one element. The algorithm maintains an unweighted coreset for each block and merges blocks (i.e., the corresponding coresets) as they become older. However, at the end, each block is to be weighted according to some function, and so the algorithm chooses to merge blocks when the weights of the blocks become “close”. Thus, a coreset for each block represents its set of points well, since the weights of the points within each block do not differ by too much.
2.2 Exponential decay
We also provide an algorithm that achieves a constant-factor approximation for $k$-median clustering in the exponential decay model. Our guarantees also extend to $k$-means clustering and $M$-estimators.
Given a set of points in a metric space, let $\Delta$ denote its aspect ratio, i.e., the ratio between the largest and the smallest nonzero distance between any two points in the set. The weight of a point that arrived at time $t$ is $2^{-(n-t)/h}$ at the current time $n$, where $h$ is the half-life parameter of the exponential decay function.
Theorem 2.
There exists a streaming algorithm that, given a stream of points with exponentially decaying weights, with aspect ratio $\Delta$ and half-life $h$, produces a constant-factor approximate solution to $k$-median clustering; its running time and space are bounded in terms of $k$, $\Delta$, and $h$.
Approach.
Although our previous framework would work for other decay models, the algorithm may use prohibitively large space. The intuition behind the polynomial decay approach is that a separate coreset is maintained for each set of points that have roughly the same weight. In other words, the previous framework maintains a separate coreset each time the weight of the points decreases by some constant factor, so that if $R$ is the ratio between the largest weight and the smallest weight, then the total number of coresets stored by the algorithm is roughly $O(\log R)$. In the polynomial decay model, the number of stored coresets is therefore logarithmic, but in the exponential decay model, it would be nearly linear in the length of the stream, which is no longer sublinear in the size of the input. Hence, we require a new approach for the exponential decay model.
Instead, we use the online facility location (OFL) algorithm of Meyerson [Mey01] as a subroutine to solve $k$-median clustering in the exponential decay model. In the online facility location problem, we are given a metric space along with a facility cost for each point/location that appears in the data stream. The objective is to choose a (small) number of facility locations to minimize the total facility cost plus the service cost, where the service cost of a point is its distance to the closest facility. For more details, please see Section 6.
Our algorithm for the exponential time decay model proceeds over the data stream in phases. Each phase corresponds to an increasing “guess” for the value of the cost of the optimal clustering. Using this guess, each phase queries the corresponding instance of OFL. If the guess is correct, then the subroutine selects a bounded number of facilities. On the other hand, if either the cost or the number of selected facilities surpasses a certain quantity, then the guess for the optimal cost must be incorrect, and the algorithm triggers a phase change. Upon a phase change, our algorithm uses an offline $k$-median clustering algorithm to cluster the facility set down to exactly $k$ points. It then runs a new instance of OFL with a larger guess and continues processing the data stream.
However, there is a slight subtlety in this analysis. The number of points stored by OFL depends on the weights of the points. Under an exponential decay function, the ratio between the largest and smallest weights of points in the data set may be exponentially large. Thus, to prevent OFL from keeping more than a logarithmic number of points, we force OFL to terminate after seeing a bounded number of points during a phase. Furthermore, we then store points verbatim until we see sufficiently many distinct points, upon which we trigger a phase change. We show that forcing this phase change does indeed correspond to an increase in the guess for the value of the optimal cost.
3 Related Work
The first insertion-only streaming algorithm for the $k$-median clustering problem was presented in 2000 by Guha, Mishra, Motwani, and O’Callaghan [GMMO00]. Their algorithm uses $O(n^{\varepsilon})$ space for a $2^{O(1/\varepsilon)}$-approximation, for some $\varepsilon < 1$. Subsequently, Charikar et al. [COP03] presented a constant-factor approximation algorithm for $k$-means clustering using small space. Their algorithm uses a number of phases, each corresponding to a different guess for the value of the cost of the optimal solution. The guesses are then used in the online facility location (OFL) algorithm of [Mey01], which provides a set of centers whose number and cost allow the algorithm to reject or accept the guess. This technique is now one of the standard approaches for handling service problems. Braverman et al. [BMO11] improve the space usage of this technique. [BLLM15] and [BLLM16] develop algorithms for $k$-means clustering on sliding windows, in which expired data should not be included in determining the cost of a solution.
Another line of approach for service problems is the construction of coresets, in particular when the data points lie in Euclidean space. Har-Peled and Mazumdar [HM04] give an insertion-only streaming algorithm for $k$-medians and $k$-means that provides a $(1+\varepsilon)$-approximation, using space that depends exponentially on the dimension $d$ of the space. Similarly, Chen [Che09] introduced an algorithm with the same approximation guarantees, using space that depends only polynomially on $d$.
Cohen and Strauss [CS03] studied problems in time-decaying data streams in 2003. There are a number of results [KP05, CTX07, CKT08, CTX09] in this line of work, but the most prominent time-decay model is the sliding window model. Datar, Gionis, Indyk, and Motwani [DGIM02] introduced the exponential histogram as a framework in the sliding window model for estimating statistics such as count, sum of positive integers, average, and norms. This initiated an active line of research, including improvements to count and sum [GT02], frequent itemsets [CWYM06, BGL18], frequency counts and quantiles [AM04, LT06], rarity and similarity [DM02], variance and $k$-medians [BDMO03], and other geometric and numerical linear algebra problems [FKZ05, CS06, BDU18].

4 Preliminaries
Let $X$ be the set of possible points in a space with metric $d$. A weighted set is a pair $(P, w)$ with a set $P \subseteq X$ and a weight function $w : P \to \mathbb{R}_{\geq 0}$. A query space is a tuple $(P, w, Q, f)$ that combines a weighted set with a set $Q$ of possible queries and a function $f : X \times Q \to \mathbb{R}_{\geq 0}$. A query space induces a function
$$\bar{f}(P, w, q) = \sum_{p \in P} w(p) \, f(p, q).$$
We now instantiate the above with some simple examples.
Example 1 ($k$-means).
Let $Q$ be all sets of $k$ points in $\mathbb{R}^d$, and for $q \in Q$ define $f(p, q) = \min_{c \in q} \lVert p - c \rVert_2^2$. The $k$-means cost of $q$ to $(P, w)$ is
$$\bar{f}(P, w, q) = \sum_{p \in P} w(p) \min_{c \in q} \lVert p - c \rVert_2^2.$$
Example 2 ($k$-median).
Let $Q$ be all sets of $k$ points in $\mathbb{R}^d$, and for $q \in Q$ define $f(p, q) = \min_{c \in q} \lVert p - c \rVert_2$. The $k$-median cost of $q$ to $(P, w)$ is
$$\bar{f}(P, w, q) = \sum_{p \in P} w(p) \min_{c \in q} \lVert p - c \rVert_2.$$
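A minimal, runnable rendering of the two cost functions above (the notation and function names are ours):

```python
import math

def dist(p, q):
    """Euclidean distance between two points given as coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans_cost(points, weights, centers):
    """Weighted k-means cost: each point pays its squared distance
    to the nearest candidate center."""
    return sum(w * min(dist(p, q) for q in centers) ** 2
               for p, w in zip(points, weights))

def kmedian_cost(points, weights, centers):
    """Weighted k-median cost: each point pays its distance
    to the nearest candidate center."""
    return sum(w * min(dist(p, q) for q in centers)
               for p, w in zip(points, weights))

P = [(0.0, 0.0), (2.0, 0.0), (10.0, 0.0)]
w = [1.0, 1.0, 1.0]
Q = [(0.0, 0.0), (10.0, 0.0)]  # a candidate set of k = 2 centers
print(kmedian_cost(P, w, Q))   # 2.0: only (2, 0) pays, distance 2
print(kmeans_cost(P, w, Q))    # 4.0: only (2, 0) pays, squared distance 4
```

The two functions differ only in whether the distance to the nearest center is squared, which is exactly the difference between the two query spaces above.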
Note that both the $k$-median and $k$-means costs are captured by $\bar{f}$. We now define an $\varepsilon$-coreset.
Definition 1 ($\varepsilon$-coreset).
An $\varepsilon$-coreset for the query space $(P, w, Q, f)$ is a tuple $(C, u)$, where $C$ is a set of points and $u$ gives their corresponding weights, such that for every $q \in Q$,
$$(1 - \varepsilon) \, \bar{f}(P, w, q) \leq \bar{f}(C, u, q) \leq (1 + \varepsilon) \, \bar{f}(P, w, q).$$
An important property of coresets is that they are closed under operations like union and composition. We formalize this below.
Proposition 1 (Merge-and-reduce).
[Che09] Coresets satisfy the following two properties.

1. If $C_1$ and $C_2$ are $\varepsilon$-coresets of disjoint sets $P_1$ and $P_2$ respectively, then $C_1 \cup C_2$ is an $\varepsilon$-coreset of $P_1 \cup P_2$.

2. If $C_1$ is an $\varepsilon_1$-coreset of $C_2$ and $C_2$ is an $\varepsilon_2$-coreset of $P$, then $C_1$ is a $\big((1 + \varepsilon_1)(1 + \varepsilon_2) - 1\big)$-coreset of $P$.
We now define the approximate triangle inequality, a property that allows us to extend results obtained in metric spaces to spaces with semi-distance functions. In particular, this allows us to extend results for $k$-median clustering to $k$-means and $M$-estimators in exponential decay streams.
Definition 2 ($\gamma$-approximate triangle inequality).
A function $d$ on a space $X$ satisfies the $\gamma$-approximate triangle inequality if for all $x, y, z \in X$,
$$d(x, z) \leq \gamma \big( d(x, y) + d(y, z) \big).$$
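For example, the squared Euclidean distance underlying $k$-means is not a metric, but it satisfies an approximate triangle inequality with factor $2$: for any points $x, y, z$,

```latex
\|x - z\|^2 \;\le\; \big(\|x - y\| + \|y - z\|\big)^2 \;\le\; 2\big(\|x - y\|^2 + \|y - z\|^2\big),
```

where the last step uses $(a + b)^2 \leq 2(a^2 + b^2)$. This is the property that lets $k$-median guarantees transfer to $k$-means.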
5 Polynomial decay
We consider a time decay model wherein a point that arrived at time $t$ in the stream has weight $(n - t + 1)^{-c}$ at time $n$, for some parameter $c > 0$. Equivalently, the $i$-th most recent element has weight $i^{-c}$.
We present a general framework which, for a given problem, takes an offline coreset construction algorithm and adapts it to polynomial decay streams. Our technique can be viewed as a generalization of the merge-and-reduce technique of Bentley and Saxe [BS80]. We also briefly discuss some applications towards that end. We start by stating our main theorem for polynomial decay streams.
Theorem 3.
Given an offline algorithm that takes a set of points as input and constructs an $\varepsilon$-coreset in bounded time, there exists a polynomial decay algorithm that maintains an $\varepsilon$-coreset, with a corresponding bound on the number of stored points and on the update time.
Notation.
We use $\mathbb{N}$ to denote the set of natural numbers. We use CSRAM to denote an offline coreset construction algorithm which, given a set of points, constructs an $\varepsilon$-coreset in bounded time and space. We abuse notation by using CSRAM to also refer to the corresponding coreset.
5.1 Algorithm
We start by giving high-level intuition for the algorithm. Given a stream of points, the algorithm implicitly maintains a partition of the stream into disjoint blocks. A block is a collection of consecutive points in the stream, and is represented by two positive integers $(s, e)$, where $s$ is the position of the first point in the block and $e$ the position of the last, relative to the start of the stream. Let the set of blocks be denoted by $B$. Our algorithm stores the points of a given block by maintaining a coreset for the points in that block. As the stream progresses, we merge older blocks, i.e., the corresponding coresets. Informally, a merge happens when the weights of the blocks become close.
We first define a set of integer markers $b_1 < b_2 < \cdots$, which depend on the decay parameter $c$ and the target accuracy $\varepsilon$. These markers dictate when to merge blocks as the stream progresses. For a given $b_j$, we define $b_{j+1}$ to be the minimum integer greater than or equal to $b_j$ such that
$$\left(\frac{b_{j+1}}{b_j}\right)^{c} \geq 1 + \varepsilon.$$
Equivalently, we can write $b_{j+1} = \big\lceil (1 + \varepsilon)^{1/c} \, b_j \big\rceil$. Note that each of the points between positions $b_j$ and $b_{j+1}$ in the stream has weight within a $(1 + \varepsilon)$ factor of the weight at position $b_j$. Moreover, the $b_j$'s can be exactly precomputed from this equation, and we therefore assume that they are implicitly stored by the algorithm. Each new element in the stream starts as a new block. As mentioned before, the blocks are represented by two integers, and the points are stored as a coreset. When a block reaches a marker, the algorithm merges the corresponding blocks into a single coreset. In the end, the algorithm outputs the weighted union of the coresets of the blocks.
To visualize this, consider the integer line, and suppose that the markers $b_1, b_2, \ldots$ are marked on the positive side of the line. The tuple indices of the blocks represent the relative positions of the points in the stream, with the start point being $s$ and the end point being $e$. At the start, the stream is on the non-positive end, with the first point at the origin. As time progresses, the stream moves to the right. Therefore, when we observe the first element, it moves to position $1$; we then store it as a new block, represented by $(1, 1)$, and simultaneously store a coreset corresponding to it. As time progresses further, a block reaches a marker $b_j$ for some $j$. We then merge all blocks in the range between the previous marker and $b_j$. Note that, by the definition of the markers, we will have observed all of these elements, and we will not merge partial blocks. We present this idea in full in Algorithm 1 and the intuition in Figure 1. We remark that when we construct coresets, we use an offline algorithm CSRAM which, given a set of points and a query space, produces an $\varepsilon$-coreset.
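The block-maintenance logic above can be sketched as follows. This is our own simplified rendering: the bucketing rule is one plausible reading of the marker construction, and the trivial "keep everything" stand-in for CSRAM only serves to make the structure runnable:

```python
import math

def bucket(age, c=1.0, eps=0.5):
    """Index of the marker interval containing this age: ages in the same
    bucket have weights age^(-c) within a (1 + eps) factor of each other."""
    return int(c * math.log(age) / math.log(1.0 + eps))

def coreset(points):
    return list(points)  # placeholder for the offline construction CSRAM

def process_stream(points, c=1.0, eps=0.5):
    """Maintain blocks (start, end, coreset), oldest first; merge adjacent
    blocks as soon as all their points' current ages share one bucket."""
    blocks = []
    for t in range(1, len(points) + 1):
        blocks.append((t, t, coreset([points[t - 1]])))
        merged = True
        while merged:
            merged = False
            for i in range(len(blocks) - 1):
                s1, _, c1 = blocks[i]
                _, e2, c2 = blocks[i + 1]
                # ages of the oldest point of block i and youngest of block i+1
                if bucket(t - s1 + 1, c, eps) == bucket(t - e2 + 1, c, eps):
                    blocks[i:i + 2] = [(s1, e2, coreset(c1 + c2))]
                    merged = True
                    break
    return blocks
```

Because bucket widths grow geometrically while blocks cover contiguous positions, the number of blocks alive at any time stays logarithmic in the stream length, mirroring the block bound proved below.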
5.2 Analysis
We first show that a weighted combination of blocks gives us an $\varepsilon$-coreset. For each block, we assign a weight $w^*$ chosen between the smallest and largest decayed weights of the points in the block. The following lemma shows that any such choice produces a coreset.
Lemma 1.
Let be an coreset for . Let be such that for every , then is a coreset for .
Proof.
Since is an coreset for , therefore for every ,
Note that for , we have . Therefore is a coreset for . ∎
Having assigned weights to the blocks, we can take their union to get the coreset of the whole stream. For simplicity, we fix one such choice of weight in Algorithm 1. We now present a lemma that bounds the number of blocks maintained by the algorithm.
Lemma 2.
Given a polynomial decay stream of points as input to Algorithm 1, the number of blocks produced is .
Proof.
Consider any two adjacent blocks. By the definition of the $b_j$'s, the ratio between the weights of the oldest and youngest elements is at least $1 + \varepsilon$. In the full stream of $n$ points, the oldest element has weight $n^{-c}$ and the youngest element has weight $1$. Let $m$ be the number of blocks, so that $(1 + \varepsilon)^m \leq n^c$. Solving for $m$, we get $m \leq \frac{c \log n}{\log(1 + \varepsilon)}$. We now lower bound the denominator using the numerical inequality $\log(1 + x) \geq x/2$ for $x \in (0, 1]$. We get $\log(1 + \varepsilon) \geq \varepsilon/2$, and therefore we have $m = O\!\left(\frac{c \log n}{\varepsilon}\right)$. ∎
We now give the proof of the main theorem for the polynomial decay model.
Proof of Theorem 3.
From Proposition 1, we get that when we merge disjoint blocks, we do not sacrifice the coreset approximation parameter $\varepsilon$. However, when we reduce, for instance, two coresets, we incur a slight loss in the approximation parameter. For $n$ points observed in the stream, note that there are at most $O(\log n)$ reduce operations applied to any block; this follows from the fact that the sizes of successive blocks increase exponentially. Therefore, using an offline coreset construction algorithm CSRAM with a suitably rescaled accuracy parameter, we get that merging and reducing the blocks produces an $\varepsilon$-coreset (by Proposition 1). Finally, from Lemma 1, we get that taking a union of these blocks, weighted as described above, gives us an $\varepsilon$-coreset.
For the space bound, we have from Lemma 2 the bound on the number of blocks. Since we maintain an $\varepsilon$-coreset for each block, the space complexity is the number of blocks times the size of each coreset, plus the working space of the offline coreset construction algorithm. For the update time, note that for $n$ points we have the above number of blocks, and the coreset algorithm is invoked once per block merge; multiplying the two gives the total update time. ∎
Applications.
Coresets have been designed for a wide variety of geometric, numerical linear algebra, and learning problems. Some examples include $k$-median and $k$-means clustering [Che09], low-rank approximation [Sar06], regression [CW09], projective clustering [DRVW06], subspace approximation [FMSW10], kernel methods [ZP17, HCB16], etc. We instantiate our framework with a few of these problems and present the results in Table 1.

6 Exponential decay
We now discuss another model of time decay, in which the weights of previous points decay exponentially with time. Analogous to our polynomial decay model, a point that first appeared in the stream at time $t$ has weight $2^{-(n-t)/h}$ at time $n$, where the parameter $h$ is the half-life of the decay function. However, we adopt a different viewpoint to simplify the analysis: we maintain that the weight of a point observed at time $t$ is fixed to be $2^{t/h}$. The two views are equivalent, since the ratio of weights between successive points is the same in both models.
Online Facility Location.
We first discuss the problem of online facility location (OFL), as our algorithm uses it as a subroutine. The facility location problem, given a set of points $D$ called demands, a distance function $d$, and a fixed cost $f$ conventionally called the facility cost, asks to find a set of points $F$ that minimizes the objective
$$f \cdot |F| + \sum_{p \in D} \min_{q \in F} d(p, q).$$
Informally, it seeks a set of facilities such that the cumulative cost of opening new facilities (the first term) and serving the demands (the second term, known as the service cost) is minimized. Online facility location is the variant of this problem in the streaming setting, wherein the facility assignments and incurred service costs are irrevocable. That is to say, once a point is assigned to a facility, it cannot be reassigned to a different facility at a later point in time, even if the newer facility is closer. A simple and popular algorithm for this problem is due to Meyerson [Mey01]: upon receiving a point, it calculates the distance to the nearest facility and flips a coin with bias equal to this distance divided by the facility cost. If the outcome is heads, it opens a new facility at the point; otherwise, the nearest facility serves this demand, and the algorithm incurs a service cost equal to the distance. From here on, we abuse notation and use OFL to refer to the algorithm of Meyerson [Mey01].
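A runnable sketch of Meyerson's rule as just described (the function names and the cap of the probability at 1 are our conventions; this is an illustration, not the paper's exact algorithm):

```python
import math
import random

def ofl(points, facility_cost, seed=0):
    """Online facility location, Meyerson-style: open the first point as a
    facility; for each later point, let d be its distance to the nearest
    open facility, open a facility there with probability min(d / f, 1),
    and otherwise pay service cost d."""
    rng = random.Random(seed)
    facilities, service_cost = [], 0.0
    for p in points:
        if not facilities:
            facilities.append(p)
            continue
        d = min(math.dist(p, q) for q in facilities)
        if rng.random() < min(d / facility_cost, 1.0):
            facilities.append(p)    # heads: open a new facility at p
        else:
            service_cost += d       # tails: nearest facility serves p
    return facilities, service_cost

# Two tight clusters: with a moderate facility cost, OFL typically opens
# a facility near each cluster and keeps the total service cost small.
data_rng = random.Random(1)
cluster = lambda cx: [(cx + data_rng.gauss(0, 0.1), data_rng.gauss(0, 0.1))
                      for _ in range(50)]
stream = cluster(0.0) + cluster(10.0)
facilities, cost = ofl(stream, facility_cost=5.0)
```

The coin bias makes far-away points likely to open facilities (their service cost would be expensive) while nearby points are usually absorbed, which is the balance the analysis below exploits.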
6.1 Algorithm
Our algorithm for exponentially decaying streams is a variant of the popular $k$-median clustering algorithm of [BMO11, COP03], which uses OFL as a subroutine. We first briefly discuss the algorithm of [BMO11] and then explain how we adapt it to exponential decay streams. The algorithm operates in phases, where in each phase it maintains a guess for a lower bound on the optimal cost. It then uses this guess to instantiate the OFL algorithm of [Mey01] on a set of points in the stream. If the service cost of OFL grows too high or the number of facilities grows too large, it infers that the guess is too low and triggers a phase change. It then increases the guess by a constant factor (to be set appropriately), the facilities are fed back in at the start of the stream, and another round of OFL is run.
Notation.
We first define and explain some key quantities. The aspect ratio of a set is defined as the ratio between the largest distance and the smallest nonzero distance between any two points in the set; we use $\Delta$ to denote the aspect ratio of the stream. For simplicity of presentation, we assume that the minimum nonzero distance between two points is at least $1$. We also define the weight ratio of a prefix of the stream as the total weight of its points divided by the minimum weight.
For a set $S$, we use $\mathrm{OPT}(S)$ to denote the optimal $k$-median clustering cost for the set. For two sets $S$ and $C$, we use $\mathrm{cost}(S, C)$ to denote the cost of clustering $S$ with $C$ as medians. Whenever we use OPT, it corresponds to the optimal cost of $k$-median clustering of the stream seen up to the point in context. We use KMRAM to denote an offline constant-factor approximate $k$-median clustering algorithm in the random access model (RAM). Given a set of points $S$ and a positive integer $k$, KMRAM outputs a set $C$ of $k$ points with $\mathrm{cost}(S, C) = O(\mathrm{OPT}(S))$.
Our Algorithm.
Our algorithm, inspired by [COP03, BMO11], works in phases; we, however, introduce important differences. Each of our phases is subdivided into two subphases. In the first subphase, we execute OFL as in [COP03, BMO11], and after each point we check whether the cost or the number of facilities has grown too large. If this is indeed the case, we trigger a phase change. However, if we read too many points in a phase, then we move on to the second subphase of the algorithm. Here we simply count points and store them verbatim; upon reading sufficiently many distinct points, we trigger a phase change. The intuition for this subphase is that a phase change should be triggered whenever OPT increases by a constant factor: after enough points, subsequent points are so heavy relative to points of the previous phase that any service cost is large enough to ensure OPT has increased. Therefore, we restrict the algorithm to read a bounded number of points in a single phase. When we start a new phase, we cluster the existing facility set to extract exactly $k$ points using an off-the-shelf constant-factor approximate KMRAM algorithm, and then continue processing the stream. We present this idea in full in Algorithm 2. We now state our main theorem for exponential decay streams.
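The phase structure just described can be sketched as follows. Everything concrete here — the threshold constants, the guess growth factor, and the greedy stand-in for KMRAM — is our own placeholder choice intended only to show the control flow; the exponentially growing weights and the verbatim second subphase are omitted for brevity:

```python
import math
import random

def greedy_k_centers(pts, k):
    """Stand-in for KMRAM: farthest-point greedy, keeps k spread-out points."""
    centers = pts[:1]
    while len(centers) < min(k, len(pts)):
        centers.append(max(pts, key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def phase_clustering(points, k, growth=4.0, cost_factor=50.0, fac_limit=30, seed=0):
    rng = random.Random(seed)
    guess = 1.0                      # current guess for a lower bound on OPT
    facilities, cost = [], 0.0
    for p in points:
        f = guess / fac_limit        # facility cost tied to the current guess
        if facilities:
            d = min(math.dist(p, q) for q in facilities)
            if rng.random() < min(d / f, 1.0):
                facilities.append(p)     # OFL opens a facility at p
            else:
                cost += d                # OFL serves p from nearest facility
        else:
            facilities.append(p)
        if cost > cost_factor * guess or len(facilities) > fac_limit:
            # Phase change: the guess was too low. Re-cluster the facility
            # set down to k points, raise the guess, and keep streaming.
            facilities = greedy_k_centers(facilities, k)
            guess *= growth
            cost = 0.0
    return greedy_k_centers(facilities, k)
```

Each phase change compresses the facility set to $k$ representatives and raises the guess, so the number of stored facilities never exceeds a fixed threshold no matter how long the stream runs.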
Theorem 4.
There exists a streaming algorithm that, given a stream of points with exponentially decaying weights, with aspect ratio $\Delta$ and half-life $h$, produces a constant-factor approximate solution to $k$-median clustering; its running time and space are bounded in terms of $k$, $\Delta$, and $h$.
6.2 Analysis
We first analyze the service cost and space complexity of OFL. For the $i$-th point in the stream, we denote its weight by $w_i = 2^{i/h}$. The following two lemmas establish bounds on the service cost and the number of facilities of OFL.
Lemma 3.
When OFL is run on a stream of points with exponentially decaying weights, with facility cost where , it produces a service cost of at most with probability at least .
Proof.
The proof follows the standard analysis of online facility location. Let $S$ be the set of points read in a phase. Instead of looking at distinct points with varying weights, we view each weighted point as repeated points of unit (minimum) weight. The total number of points is therefore bounded by the weight ratio of the phase.
We remind the reader that OPT denotes the optimal cost, and we must bound the total service cost incurred by OFL. Let the optimum facilities be denoted, with each one serving a corresponding set of points from $S$. We now further partition each such region into rings: the first ring around an optimum facility contains the half of the region's points nearest to it, the second ring contains one-quarter of the points, and so on, so the rings can be defined inductively. Note that the rings may not be uniquely identifiable, but their existence suffices for the sake of the analysis. For a point, we consider both its optimal cost and the cost it incurs in the algorithm.
We look at two cases. In the first case, suppose each region has a facility open, and consider the cost incurred by subsequent points arriving in this region. A subsequent point incurs a cost of at most its distance to this facility, which by the triangle inequality is bounded by the point's optimal cost plus the facility's distance to the optimal center. Summing over all points in a ring, then over all rings, and finally over all regions, we get a bound on the total cost in the first case. We now look at the second case, wherein a region does not yet have a facility open. The number of points is bounded, and therefore so is the number of regions. The expected service cost incurred by a region before opening a facility is bounded by the facility cost (see [Lan17]). Therefore, the total service cost in this case is also bounded. Combining the two cases, we get the claimed bound on the expected service cost. Note that when we store points verbatim, we do not incur any service cost. With a simple application of Markov's inequality, we get that the service cost bound holds with the claimed probability. ∎
Lemma 4.
When OFL is run on a stream of points with exponentially decaying weights, with facility cost where , the number of facilities produced is at most , with probability at least .
Proof.
Considering the points as repeated points of minimum weight, the total number of points is at most and the total number of regions is at most . One facility in each region gives us facilities. After opening a facility in a region, each subsequent point has probability to open a facility. Therefore, the expected number of facilities is . We showed in Lemma 3 that . Hence, the expected number of facilities is at most . A simple application of Markov’s inequality completes the proof. ∎
$k$-median clustering.
We now state some key lemmas that will help us establish that the algorithm produces an approximation to the $k$-median clustering cost. We then show how these come together and present the detailed guarantees in Theorem 5.
Lemma 5.
At every phase change, with probability at least , if and .
Proof.
A phase change is triggered in one of two ways: either the cost or the number of facilities grows too large (more precisely, the cost exceeds its threshold or the number of facilities exceeds its threshold), or we read too many points. Let us look at the first case. Assume that the guess is at least OPT; then from Lemmas 3 and 4, we get that, with good probability, the cost and the number of facilities are both below their thresholds. Thus neither of the two conditions is met, and the premise that a phase change was triggered gives us a contradiction. Hence, in the first case, the guess is less than OPT with good probability.
In the other case, we store points exactly (incurring no additional cost). The only danger in this case is performing a phase change too early (before OPT has doubled). Let be the value of OPT at the beginning of the phase, which we assume starts at time . Since points cannot be at distance greater than , then
Now let be the value of OPT after terminating the phase (which occurs after reading distinct points after the initial points of the phase). We must prove that . Observe that after reading distinct points, we must cluster at least points across a distance of at least (since we can have at most centers). The weights of these points begin at . Therefore,
where the second inequality follows from straightforward arithmetic. Let be the value of in the previous phase. Thus,
where the second inequality holds with probability at least , as justified above. Setting completes the proof. ∎
Lemma 6.
At any part in the algorithm, we have .
Proof.
We know that the increase of in the current phase is upper bounded by the variable COST (see Algorithm 2). In a single phase, we have . Therefore, outside the phase loop, we just need to show that it is at most . Note that it changes only by the KMRAM algorithm, which incurs cost of . Suppose that it holds in the previous phase and let be the value of in the previous phase. Then the cost outside the loop is , which finishes the proof. ∎
Lemma 7.
With probability at least , .
Proof.
We condition on the event of Lemma 5, which occurs with the stated probability, and consider the values of the guess and the cost in the previous phase. From the update equation of the guess, there are two cases. In the first case, we directly get the claim of the lemma. In the second case, we invoke the guarantee of the KMRAM algorithm; it is easy to see, by a simple application of the triangle inequality over all the points, that the cost of the stream is bounded in terms of the cost of clustering the facility set. Moreover, from Lemma 6, the stored cost is bounded. Combining these, we get the claim of the lemma. ∎
We now restate the theorem for the exponential decay model, tailored to Algorithm 2, with all the algorithmic details precisely stated.
Theorem 5.
Let the input be a stream of points with exponentially decaying weights parametrized by the half-life parameter $h$, and let $k$ be a positive integer. Algorithm 2, run on the stream with appropriate parameters, outputs $k$ points which produce an approximation to the optimal cost of $k$-median clustering on the stream with high probability.
Proof.
We emphasize that we give a streaming guarantee; that is, at any fixed point in the stream, the guarantee holds with respect to all the points seen until then. Note that in the proofs of Lemmas 5 and 7, we only need the random event to hold with the stated probability in the previous phase. We can therefore amplify the probability of success by running parallel instances, so that the bounds hold with high probability. The space bound of the algorithm follows from the condition in the algorithm that we do not allow the number of facilities to grow beyond its threshold, combined with the fact that we store only a bounded number of points verbatim in the second subphase. ∎
Extensions.
References
 [Agg07] Charu C. Aggarwal, editor. Data Streams – Models and Algorithms, volume 31 of Advances in Database Systems. Springer, 2007.
 [AK17] Sepehr Assadi and Sanjeev Khanna. Randomized composable coresets for matching and vertex cover. In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 3–12, 2017.
 [AM04] Arvind Arasu and Gurmeet Singh Manku. Approximate counts and quantiles over sliding windows. In Proceedings of the Twenty-third ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 286–296, 2004.
 [BBD02] Brian Babcock, Shivnath Babu, Mayur Datar, Rajeev Motwani, and Jennifer Widom. Models and issues in data stream systems. In Proceedings of the Twenty-first ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS), pages 1–16, 2002.
 [BDMO03] Brian Babcock, Mayur Datar, Rajeev Motwani, and Liadan O’Callaghan. Maintaining variance and k-medians over data stream windows. In Proceedings of the Twenty-Second ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS), pages 234–243, 2003.
 [BDU18] Vladimir Braverman, Petros Drineas, Jalaj Upadhyay, David P. Woodruff, and Samson Zhou. Numerical linear algebra in the sliding window model. arXiv preprint arXiv:1805.03765, 2018.
 [BENW16] Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, and Justin Ward. A new framework for distributed submodular maximization. In IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 645–654, 2016.
 [BFL16] Vladimir Braverman, Dan Feldman, and Harry Lang. New frameworks for offline and streaming coreset constructions. arXiv preprint arXiv:1612.00889, 2016.

 [BGL18] Vladimir Braverman, Elena Grigorescu, Harry Lang, David P. Woodruff, and Samson Zhou. Nearly optimal distinct elements and heavy hitters on sliding windows. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM, pages 7:1–7:22, 2018.
 [BLLM15] Vladimir Braverman, Harry Lang, Keith Levin, and Morteza Monemizadeh. Clustering on sliding windows in polylogarithmic space. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS, pages 350–364, 2015.
 [BLLM16] Vladimir Braverman, Harry Lang, Keith Levin, and Morteza Monemizadeh. Clustering problems on sliding windows. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1374–1390, 2016.
 [BMO11] Vladimir Braverman, Adam Meyerson, Rafail Ostrovsky, Alan Roytman, Michael Shindler, and Brian Tagiku. Streaming k-means on well-clusterable data. In Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms, pages 26–40. Society for Industrial and Applied Mathematics, 2011.
 [BO07] Vladimir Braverman and Rafail Ostrovsky. Smooth histograms for sliding windows. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2007), October 20–23, 2007, Providence, RI, USA, Proceedings, pages 283–293, 2007.
 [BS80] Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: static-to-dynamic transformation. Journal of Algorithms, 1(4):301–358, 1980.
 [Che09] Ke Chen. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM Journal on Computing, 39(3):923–947, 2009.
 [CKT08] Graham Cormode, Flip Korn, and Srikanta Tirthapura. Time-decaying aggregates in out-of-order streams. In Proceedings of the Twenty-Seventh ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS, pages 89–98, 2008.

 [COP03] Moses Charikar, Liadan O’Callaghan, and Rina Panigrahy. Better streaming algorithms for clustering problems. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pages 30–39. ACM, 2003.
 [CS03] Edith Cohen and Martin Strauss. Maintaining time-decaying stream aggregates. In Proceedings of the Twenty-Second ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 223–233, 2003.
 [CS06] Timothy M. Chan and Bashir S. Sadjad. Geometric optimization problems over sliding windows. Int. J. Comput. Geometry Appl., 16(2-3):145–158, 2006. A preliminary version appeared in the Proceedings of Algorithms and Computation, 15th International Symposium (ISAAC), 2004.
 [CTX07] Graham Cormode, Srikanta Tirthapura, and Bojian Xu. Time-decaying sketches for sensor data aggregation. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Principles of Distributed Computing, PODC, pages 215–224, 2007.
 [CTX09] Graham Cormode, Srikanta Tirthapura, and Bojian Xu. Time-decayed correlated aggregates over data streams. In Proceedings of the SIAM International Conference on Data Mining, SDM, pages 271–282, 2009.
 [CW09] Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 205–214. ACM, 2009.

 [CWYM06] Yun Chi, Haixun Wang, Philip S. Yu, and Richard R. Muntz. Catch the moment: maintaining closed frequent itemsets over a data stream sliding window. Knowl. Inf. Syst., 10(3):265–294, 2006. A preliminary version appeared in the Proceedings of the 4th IEEE International Conference on Data Mining (ICDM), 2004.
 [DDH09] Anirban Dasgupta, Petros Drineas, Boulos Harb, Ravi Kumar, and Michael W. Mahoney. Sampling algorithms and coresets for regression. SIAM J. Comput., 38(5):2060–2078, 2009. A preliminary version appeared in the Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2008.
 [DGIM02] Mayur Datar, Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Maintaining stream statistics over sliding windows. SIAM J. Comput., 31(6):1794–1813, 2002. A preliminary version appeared in the Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2002.
 [DM02] Mayur Datar and S. Muthukrishnan. Estimating rarity and similarity over data stream windows. In Algorithms – ESA 2002, 10th Annual European Symposium, Proceedings, pages 323–334, 2002.
 [DRVW06] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. In Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms, pages 1117–1126. Society for Industrial and Applied Mathematics, 2006.
 [FFS06] Dan Feldman, Amos Fiat, and Micha Sharir. Coresets for weighted facilities and their applications. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 315–324, 2006.
 [FKZ05] Joan Feigenbaum, Sampath Kannan, and Jian Zhang. Computing diameter in the streaming and sliding-window models. Algorithmica, 41(1):25–41, 2005.
 [FL11] Dan Feldman and Michael Langberg. A unified framework for approximating and clustering data. In Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC, pages 569–578, 2011.
 [FMS07] Dan Feldman, Morteza Monemizadeh, and Christian Sohler. A PTAS for k-means clustering based on weak coresets. In Proceedings of the 23rd ACM Symposium on Computational Geometry (SoCG), pages 11–18, 2007.
 [FMSW10] Dan Feldman, Morteza Monemizadeh, Christian Sohler, and David P. Woodruff. Coresets and sketches for high dimensional subspace approximation problems. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms, pages 630–649. Society for Industrial and Applied Mathematics, 2010.
 [FSS13] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1434–1453, 2013.
 [GLPW16] Mina Ghashami, Edo Liberty, Jeff M Phillips, and David P Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762–1792, 2016.
 [GMMO00] Sudipto Guha, Nina Mishra, Rajeev Motwani, and Liadan O’Callaghan. Clustering data streams. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS), pages 359–366. IEEE, 2000.
 [GT02] Phillip B. Gibbons and Srikanta Tirthapura. Distributed streams algorithms for sliding windows. In SPAA, pages 63–72, 2002.

 [HCB16] Jonathan Huggins, Trevor Campbell, and Tamara Broderick. Coresets for scalable Bayesian logistic regression. In Advances in Neural Information Processing Systems, pages 4080–4088, 2016.
 [HM04] Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing (STOC), pages 291–300, 2004.
 [IMMM14] Piotr Indyk, Sepideh Mahabadi, Mohammad Mahdian, and Vahab S. Mirrokni. Composable coresets for diversity and coverage maximization. In Proceedings of the 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), pages 100–108, 2014.
 [KP05] Tsvi Kopelowitz and Ely Porat. Improved algorithms for polynomial-time decay and time-decay with additive error. In Theoretical Computer Science, 9th Italian Conference, ICTCS Proceedings, pages 309–322, 2005.
 [Lan17] Harry Lang. Online facility location on semi-random streams. arXiv preprint arXiv:1711.09384, 2017.
 [LT06] Lap-Kei Lee and H. F. Ting. A simpler and more efficient deterministic scheme for finding frequent items over sliding windows. In Proceedings of the Twenty-Fifth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 290–297, 2006.
 [Mey01] Adam Meyerson. Online facility location. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 426–431. IEEE, 2001.
 [Mut05] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2), 2005.
 [MZ15] Vahab S. Mirrokni and Morteza Zadimoghaddam. Randomized composable coresets for distributed submodular maximization. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing (STOC), pages 153–162, 2015.
 [Sar06] Tamas Sarlos. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 143–152. IEEE, 2006.
 [SYC18] Shang-Yu Su, Pei-Chieh Yuan, and Yun-Nung Chen. How time matters: Learning time-decay attention for contextual spoken language understanding in dialogues. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2133–2142, 2018.
 [ZP17] Yan Zheng and Jeff M Phillips. Coresets for kernel regression. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 645–654. ACM, 2017.