1. Introduction
Time series data is of enormous interest across all domains of life: from health sciences and weather forecasts to retail and finance, time-dependent data is ubiquitous. Despite the diversity of applications, time series problems commonly confront the same two pervasive obstacles: interpolation and extrapolation in the presence of noisy and/or missing data. Specifically, we consider a discrete-time setting with $t \in [T]$ representing the time index and $f: \mathbb{Z} \to \mathbb{R}$ representing the latent discrete-time time series of interest (we denote $\mathbb{R}$ as the field of real numbers and $\mathbb{Z}$ as the integers). For each $t \in [T]$, and with probability $p \in (0, 1]$, we observe the random variable $X(t)$ such that $\mathbb{E}[X(t)] = f(t)$. While the underlying mean signal $f$ is of course strongly correlated, we assume the per-step noise is independent across $t$ and has uniformly bounded variance. Under this setting, we have two objectives: (1) interpolation, i.e., estimate $f(t)$ for all $t \in [T]$; (2) extrapolation, i.e., forecast $f(t)$ for $t > T$. Our interest is in designing a generic method for interpolation and extrapolation that is applicable to a large model class while being agnostic to the time dynamics and noise distribution.
We develop an algorithm based on matrix estimation, a topic which has received widespread attention, especially with the advent of large datasets. In the matrix estimation setting, there is a "parameter" matrix $M$ of interest, and we observe a sparse, corrupted signal matrix $X$ where $\mathbb{E}[X] = M$. The aim then is to recover the entries of $M$ from the noisy and partial observations given in $X$. For our purposes, the attractiveness of matrix estimation derives from the property that these methods are fairly model agnostic in terms of the structure of $M$ and the distribution of $X$ given $M$. We utilize this key property to develop a model- and noise-agnostic time series imputation and prediction algorithm.
1.1. Overview of contributions
Time series as a matrix. We transform the time series of observations $X(t)$ for $t \in [T]$ into what is known as the Page matrix (cf. (Damen et al., 1982)) by placing contiguous segments of size $L$ (an algorithmic hyperparameter) of the time series into non-overlapping columns; see Figure 1 for a caricature of this transformation.
As the key contribution, we establish that—in expectation—this generated matrix is either exactly or approximately low-rank for a large class of models $f$. Specifically, $f$ can be from the following families:


Linear Recurrent Formulae (LRF): $f(t) = \sum_{g=1}^{G} \alpha_g f(t-g)$.

Compact Support: $f$ is generated by composing a Lipschitz function with arguments of compact support. (We say $g$ is Lipschitz if there exists $C > 0$ such that $|g(x) - g(y)| \le C \|x - y\|$ for all $x, y$, where $\|\cdot\|$ denotes the standard Euclidean norm. It can also be verified that if $f$ is an LRF satisfying a mild boundedness condition, then it admits this compact-support form with appropriately defined constants and functions; see Proposition D.2 of Appendix D for details.)

Sublinear: $f$ is a sublinear function of $t$; see Section 5.3 for the precise conditions.
Over the past decade, the matrix estimation community has developed a plethora of methods to recover an exact or approximately low-rank matrix from its noisy, partial observations in a noise and model agnostic manner. Therefore, by applying such a matrix estimation method to this transformed matrix, we can recover the underlying mean matrix (and thus $f(t)$ for $t \in [T]$) accurately. In other words, we can interpolate and denoise the original corrupted and incomplete time series without any knowledge of its time dynamics or noise distribution. Theorem 4.1 and Corollary 4.1 provide finite-sample analyses for this method and establish the consistency property of our algorithm, as long as the underlying model satisfies Property 4.1 and the matrix estimation method satisfies Property 2.1. In Section 5, we show that any additive mixture of the three function classes listed above satisfies Property 4.1. Effectively, Theorem 4.1 establishes a statistical reduction between time series imputation and matrix estimation. Our key contribution with regard to imputation lies in establishing that a large class of time series models (see Section 5) satisfies Property 4.1.
It is clear that for an LRF, the last row of the mean transformed matrix can be expressed as a linear combination of the other rows. An important representation result of the present paper, which generalizes this notion, is that an approximate LRF relationship holds for the other two model classes. Therefore, we can forecast future values as follows: apply matrix estimation to the transformed data matrix as done in imputation; then, linearly regress the last row with respect to the other rows in the matrix; finally, compute the inner product of the learnt regression vector with the vector containing the previous values that were estimated via the matrix estimation method. Theorem 4.2 and Corollary 4.2 imply that the mean-squared error of our predictions decays to zero provided the matrix estimation method satisfies Property 2.2 and the underlying model satisfies Property 4.2. Similar to the case of imputation, establishing that Property 4.2 holds for the three function classes is novel (see Section 5).
Noisy regression. Our proposed forecasting algorithm performs regression with noisy and incomplete features. In the literature, this is known as errors-in-variables regression. Recently, there has been exciting progress in understanding this problem, especially in the high-dimensional setting (Poling and Wainwright, 2012; Belloni et al., 2017; Datta and Zou, 2017). Our algorithm offers an alternate solution for the high-dimensional setting through the lens of matrix estimation: first, utilize matrix estimation to denoise and impute the feature observations, and then perform least squares with the pre-processed feature matrix. We demonstrate that if the true, underlying feature matrix is (approximately) low-rank, then our algorithm provides a consistent estimator of the true signal (with finite sample guarantees). Our analysis further suggests the usage of a non-standard error metric, the max row sum error (MRSE) (see Property 2.2 for details).
Class of applicable models. As aforementioned, our algorithm enjoys strong performance guarantees provided the underlying mean matrix induced by the time series satisfies certain structural properties, i.e., Properties 4.1 and 4.2. We argue that a broad class of commonly used time series models meets the requirements of the three function classes listed above.
LRFs include the following important family of time series: a finite sum of products of exponentials, harmonics, and finite-degree polynomials (Golyandina et al., 2001). Further, since stationary processes and integrable functions are well approximated by a finite summation of harmonics, LRFs encompass a vitally important family of models. For this model class, we show that the structural properties required of the time series matrix for both imputation and prediction are indeed satisfied.
However, there are many important time series models that do not admit a finite-order LRF representation. Time series models with compact support, on the other hand, include models composed of a finite summation of periodic functions. Utilizing our low-rank representation result, we establish that models with compact support possess the desired structural properties. We further demonstrate that sublinear functions, which include models composed of a finite summation of non-(super)linear functions, also possess the necessary structural properties. Importantly, we argue that a finite mixture of the above processes satisfies the necessary structural properties.
Recovering the hidden state. Because our algorithm is noise and time-dynamics agnostic, it is well suited to recovering the hidden state from noisy, partial observations, as in a Hidden Markov-like Model. For example, imagine having access to partial observations of a time-varying truncated Poisson process (i.e., a Poisson random variable capped at a positive, bounded constant), without knowledge that the process is Poisson. By applying our imputation algorithm, we can recover the time-varying parameters of this process accurately and, thus, the hidden states. If we were to apply an Expectation-Maximization (EM) like algorithm, it would require knowledge that the underlying model is Poisson; moreover, theoretical guarantees are not clear for such an approach.
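A minimal sketch of this hidden-state recovery, where the sinusoidal rate, truncation level, window length, and rank below are all illustrative choices rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
C = 10.0                                   # truncation level (illustrative)
T, L = 2000, 40
t = np.arange(T)
lam = 3.0 + 2.0 * np.sin(2 * np.pi * t / 50.0)   # hidden time-varying Poisson rate

# Truncated Poisson observations; each is revealed with probability p = 0.7.
X = np.minimum(rng.poisson(lam), C)
mask = rng.random(T) < 0.7

# Model-agnostic recovery: Page matrix of the zero-filled, rescaled
# observations, followed by a low-rank approximation. At no point is the
# Poisson form of the noise used.
P = (np.where(mask, X, 0.0) / mask.mean()).reshape(L, T // L, order="F")
U, s, Vt = np.linalg.svd(P, full_matrices=False)
k = 3                                      # a data-driven threshold could pick this instead
mean_hat = ((U[:, :k] * s[:k]) @ Vt[:k]).reshape(T, order="F")
# mean_hat estimates E[X(t)], which in turn pins down the hidden rate lam(t).
```

An EM-style alternative would have to posit the Poisson likelihood explicitly; the sketch above never does.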
Sample complexity. Given the generality and model agnostic nature of our algorithm, it is expected that its sample complexity for a specific model class will be worse than that of model-aware optimal algorithms. Interestingly, our finite sample analysis suggests that, for the model classes stated above, the performance loss incurred due to this generality is minor. See Section 5.6 for a detailed analysis.
Experiments. Using synthetic and real-world datasets, our experiments establish that our method outperforms existing standard software packages (including R) for the tasks of interpolation and extrapolation in the presence of noisy and missing observations. When the data is generated synthetically, we "help" the existing software package by choosing the correct parametric model and algorithm, while our algorithm remains oblivious to the underlying model; despite this disadvantage, our algorithm continues to outperform the standard packages with missing data.
1.2. Related works
There are two related topics: matrix estimation and time series analysis. Given the richness of both fields, we cannot do justice to either with a full overview. Instead, we provide a high-level summary of known results, with references that provide further details.
Matrix estimation. Matrix estimation is the problem of recovering a data matrix from an incomplete and noisy sampling of its entries. This has become of great interest due to its connection to recommendation systems (cf. (Keshavan et al., 2010a, b; Negahban and Wainwright, 2011; Chen and Wainwright, 2015; Chatterjee, 2015; Lee et al., 2016; Candès and Tao, 2010; Recht, 2011; Davenport et al., 2014)), social network analysis (cf. (Abbe and Sandon, 2015a, b, 2016; Anandkumar et al., 2013; Hopkins and Steurer, 2017)), and graph learning (graphon estimation) (cf. (Airoldi et al., 2013; Zhang et al., 2015; Borgs et al., 2015, 2017)). The key realization of this rich literature is that one can estimate the true underlying matrix from noisy, partial observations by simply taking a low-rank approximation of the observed data. We refer the interested reader to recent works such as (Chatterjee, 2015; Borgs et al., 2017) and references therein.
Time series analysis. The question of time series analysis is, in some form, potentially as old as civilization. A few textbook-style references include (Brockwell and Davis, 2013; Box and Reinsel, 1994; Hamilton, 1994; Robert H. Shumway, 2015). At the highest level, time series modeling primarily involves viewing a given time series as a function indexed by time (integer or real valued), and the goal of model learning is to identify this function from observations (over finite intervals). Given that the space of such functions is complex, the task is to utilize functional forms (i.e., "basis functions") so that, for the given setting, the time series observations admit a sparse representation. For example, in communication and signal processing, the harmonic or Fourier representation of a time series has been widely utilized, due to the fact that communicated signals are periodic in nature. The approximation of stationary processes via harmonics or ARIMA has made them a popular model class for learning stationary-like time series, with domain-specific popular variations, such as 'Autoregressive Conditional Heteroskedasticity' (ARCH) in finance. To capture non-stationary or "trend-like" behavior, polynomial bases have been considered. There are rich connections to the theory of stochastic processes and information theory (cf. (Cover, 1966; Shields, 1998; Rissanen, 1984; Feder et al., 1992)). Popular time series models with latent structure are Hidden Markov Models (HMM) in probabilistic form (cf. (Kalman et al., 1960; Baum and Petrie, 1966)) and Recurrent Neural Networks (RNN) in deterministic form (cf. (Schmidhuber, 1992)).
The question of learning time series models with missing data has received comparatively less attention. A common approach is to utilize HMMs or general State-Space Models to learn with missing data (cf. (Dunsmuir and Robinson, 1981; Shumway and Stoffer, 1982)). To the best of the authors' knowledge, most work within this literature is restricted to such classes of models (cf. (Durbin and Koopman, 2012)). Recently, building on the literature in online learning, sequential approaches have been proposed to address prediction with missing data (cf. (Anava et al., 2015)).
Time series and matrix estimation. The use of a matrix structure for time series analysis has roughly two streams of related work: SSA for a single time series (as in our setting), and the use of multiple time series. We discuss relevant results for both of these topics.
Singular Spectrum Analysis (SSA)
of time series has been around for some time. Generally, it assumes access to time series data that is noiseless and fully observed. The core steps of SSA for a given time series are as follows: (1) create a Hankel matrix from the time series data; (2) perform a Singular Value Decomposition (SVD) of it; (3) group the singular values based on the user's belief about the model that generated the process; (4) perform diagonal averaging to "Hankelize" the grouped rank-1 matrices outputted by the SVD, creating a set of time series; (5) learn a linear model for each "Hankelized" time series for the purpose of forecasting.
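The five steps above can be sketched as follows (a bare-bones SSA, with the grouping supplied by the user as described; the forecasting step (5) is omitted):

```python
import numpy as np

def ssa(series, L, groups):
    """Basic SSA decomposition, following the five steps above.

    series : fully observed, noiseless time series (SSA's usual assumption)
    L      : window length for the Hankel (trajectory) matrix
    groups : user-chosen grouping of singular-value indices, e.g. [[0, 1], [2]]
    """
    T = len(series)
    K = T - L + 1
    # Step 1: Hankel matrix with repeated (anti-diagonal) entries.
    H = np.array([series[i:i + K] for i in range(L)])
    # Step 2: SVD.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    components = []
    for g in groups:
        # Step 3: group rank-1 terms according to user belief about the model.
        Hg = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in g)
        # Step 4: "Hankelization" -- diagonal averaging back to a time series.
        comp = np.array([np.mean(Hg[::-1].diagonal(k)) for k in range(-L + 1, K)])
        components.append(comp)
    # Step 5 (a linear forecasting model per component) is omitted here.
    return components
```

For instance, a linear trend yields a rank-2 Hankel matrix, so grouping the top two singular values reconstructs the series exactly.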
At the highest level, SSA and our algorithm are cosmetically similar to one another. There are, however, several key differences: (i) matrix transformation—while SSA uses a Hankel matrix (with repeated entries), we transform the time series into a Page matrix (with nonoverlapping structure); (ii) matrix estimation—SSA heavily relies on the SVD while we utilize general matrix estimation procedures (with SVD methods representing one specific procedural choice); (iii) linear regression—SSA assumes access to fully observed and noiseless data while we allow for corrupted and missing entries.
These differences are key to being able to derive theoretical results. For example, numerous recent works have attempted to apply matrix estimation methods to the Hankel matrix, inspired by SSA, for imputation, but these works do not provide any theoretical guarantees (Shen et al., 2015; Schoellhamer, 2001; Tsagkatakis et al., 2016). In effect, the Hankel structure creates strong correlations in the noise across matrix entries, which is an impediment to proving theoretical results. Our use of the Page matrix overcomes this challenge, and we argue that in doing so, we still retain the underlying structure in the matrix. With regard to forecasting, matrix estimation methods that provide guarantees with respect to the MRSE rather than the standard MSE are needed (for which SSA provides no theoretical analysis). While we do not explicitly discuss such methods in this work, they are explored in detail in (Agarwal et al., 2018). With regard to imputation, SSA does not provide direction on how to group the singular values, which is instead done based on the user's belief about the generating process. However, due to recent advances in the matrix estimation literature, there exist algorithms that provide data-driven methods to perform spectral thresholding (cf. (Chatterjee, 2015)). Finally, it is worth noting that, to the best of the authors' knowledge, the classical literature on SSA seems to lack finite sample analysis in the presence of noisy observations, which we do provide for our algorithm.
Multiple time series viewed as a matrix. In a recent line of work (Amjad and Shah, 2017; Yu et al., 2016; Xie et al., 2016; Rallapalli et al., 2010; Chen and Cichocki, 2005; Amjad et al., 2017), multiple time series have been viewed together as a matrix, with the primary goal of imputing missing values or denoising them. Some of these works also require prior model assumptions on the underlying time series. For example, in (Yu et al., 2016), as stated in their Section 1, the second step of the algorithm changes based on the user's belief about the model that generated the data, in addition to the requirement of multiple time series.
In summary, to the best of our knowledge, ours is the first work to give rigorous theoretical guarantees for a matrix estimation inspired algorithm for a single, univariate time series.
Recovering the hidden state. The question of recovering the hidden state from noisy observations is quite prevalent and a workhorse of classical systems theory. For example, most of the system identification literature focuses on recovering model parameters of a Hidden Markov Model. While Expectation-Maximization and Baum-Welch are the go-to approaches, there is limited theoretical understanding of them in general (for example, see the recent work (Yang et al., 2017) for an overview), and knowledge of the underlying model is required. For instance, (Bertsimas et al., 1999) proposed an optimization based, statistically consistent estimation method. However, the optimization "objective" encoded knowledge of the precise underlying model.
It is worth comparing our method with a recent work (Amjad and Shah, 2017) where the authors attempt to recover the hidden timevarying parameter of a Poisson process via matrix estimation. Unlike our work, they require access to multiple time series. In essence, our algorithm provides the solution to the same question without requiring access to any other time series!
1.3. Notation
For any positive integer $N$, let $[N] = \{1, \dots, N\}$. For any vector $v$, we denote its Euclidean ($\ell_2$) norm by $\|v\|_2$; in general, the $\ell_p$ norm of a vector $v$ is defined as $\|v\|_p = \big(\sum_i |v_i|^p\big)^{1/p}$.
For a real-valued matrix $A$, its spectral/operator norm, denoted by $\|A\|_2$, is defined as $\|A\|_2 = \sigma_1$, where $\sigma_1 \ge \sigma_2 \ge \dots$ are the singular values of $A$ (repeated by multiplicities). The Frobenius norm, also known as the Hilbert-Schmidt norm, is defined as $\|A\|_F = \big(\sum_{i,j} A_{ij}^2\big)^{1/2}$. The max-norm, or sup-norm, is defined as $\|A\|_{\max} = \max_{i,j} |A_{ij}|$. The Moore-Penrose pseudo-inverse of $A$ is defined as $A^{\dagger} = \sum_{i: \sigma_i > 0} \sigma_i^{-1} v_i u_i^{\top}$, with $u_i$ and $v_i$ being the left and right singular vectors of $A$, respectively.
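For concreteness, these norms and the pseudo-inverse can be computed as follows (a NumPy sketch of the standard definitions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))

# Spectral/operator norm: the largest singular value of A.
spectral = np.linalg.norm(A, 2)
# Frobenius (Hilbert-Schmidt) norm: square root of the sum of squared entries.
frobenius = np.linalg.norm(A, "fro")
# Max-norm (sup-norm): largest entry in absolute value.
maxnorm = np.abs(A).max()
# Moore-Penrose pseudo-inverse, computed from the SVD of A.
A_pinv = np.linalg.pinv(A)

# Sanity checks implied by the definitions:
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(spectral, s[0])                      # operator norm = top singular value
assert np.isclose(frobenius, np.sqrt((s ** 2).sum()))  # Frobenius norm from singular values
assert maxnorm <= frobenius <= np.sqrt(4) * spectral   # standard norm inequalities
assert np.allclose(A @ A_pinv @ A, A)                  # defining property of the pseudo-inverse
```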
For a random variable $X$, we define its subgaussian norm as $\|X\|_{\psi_2} = \sup_{q \ge 1} q^{-1/2} \big(\mathbb{E}|X|^q\big)^{1/q}$. If $\|X\|_{\psi_2}$ is bounded by a constant, we call $X$ a subgaussian random variable.
Let $f$ and $g$ be two functions defined on the same space. We say that $f(t) = O(g(t))$ if and only if there exist a positive real number $M$ and a real number $t_0$ such that for all $t \ge t_0$, $|f(t)| \le M\,|g(t)|$. Similarly, we say $f(t) = \Omega(g(t))$ if and only if there exist $M > 0$ and $t_0$ such that for all $t \ge t_0$, $|f(t)| \ge M\,|g(t)|$.
1.4. Organization
In Section 2, we list the desired properties needed from a matrix estimation method in order to achieve our theoretical guarantees for imputation and prediction. In Section 3, we formally describe the matrix estimation based algorithms we utilize for time series analysis. In Section 4, we identify the required properties of time series models under which we can provide finite sample analyses of imputation and prediction performance. In Section 5, we list a broad set of time series models that satisfy the properties in Section 4, and we analyze the sample complexity of our algorithm for each of these models. Lastly, in Section 6, we corroborate our theoretical findings with detailed experiments.
2. Matrix Estimation
2.1. Problem setup
Consider an $m \times n$ matrix $M$ of interest. Suppose we observe a random subset of the entries of a noisy signal matrix $X$, such that $\mathbb{E}[X] = M$. For each $i \in [m]$ and $j \in [n]$, the $(i,j)$-th entry $X_{ij}$ is a random variable that is observed with probability $p$ and is missing with probability $1-p$, independently of all other entries. Given $X$, the goal is to produce an estimator $\hat{M}$ that is "close" to $M$. We use two metrics to quantify the estimation error:
(1) mean-squared error,
$$\mathrm{MSE}(\hat{M}, M) = \frac{1}{mn}\,\mathbb{E}\Big[\sum_{i=1}^{m}\sum_{j=1}^{n} \big(\hat{M}_{ij} - M_{ij}\big)^{2}\Big]; \qquad (1)$$
(2) max row sum error,
$$\mathrm{MRSE}(\hat{M}, M) = \max_{i \in [m]} \Big(\frac{1}{n}\,\mathbb{E}\Big[\sum_{j=1}^{n} \big(\hat{M}_{ij} - M_{ij}\big)^{2}\Big]\Big)^{1/2}. \qquad (2)$$
Here, $\hat{M}_{ij}$ and $M_{ij}$ denote the $(i,j)$-th elements of $\hat{M}$ and $M$, respectively. We highlight that the MRSE is a non-standard matrix estimation error metric, but we note that it is a stronger notion than the root mean-squared error (RMSE); in particular, it is easily seen that $\mathrm{RMSE} \le \mathrm{MRSE}$. Hence, for any results we prove in Section 4 regarding the MRSE, any known lower bounds for the RMSE of matrix estimation algorithms immediately hold for our results. We now give a definition of a matrix estimation algorithm, which will be used in the following sections.
Definition 2.0 ().
A matrix estimation algorithm, denoted as ME, takes as input a noisy matrix $X$ and outputs an estimator $\hat{M}$.
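The two error metrics can be sketched as follows (the exact normalization of the MRSE here is our assumption; the relation to the RMSE is immediate either way):

```python
import numpy as np

def mse(M_hat, M):
    """Mean-squared error: squared error averaged over all entries."""
    return np.mean((M_hat - M) ** 2)

def mrse(M_hat, M):
    """Max row sum error: the worst row's root-mean-squared error.
    (The paper's normalization may differ slightly; this variant makes
    the comparison with the RMSE immediate.)"""
    return np.sqrt(np.max(np.mean((M_hat - M) ** 2, axis=1)))

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 8))
M_hat = M + 0.1 * rng.standard_normal((6, 8))

# The overall RMSE averages the per-row errors, so it can never exceed
# the worst row's error: RMSE <= MRSE.
assert np.sqrt(mse(M_hat, M)) <= mrse(M_hat, M) + 1e-12
```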
2.2. Required properties of matrix estimation algorithms
As aforementioned, our algorithm (Section 3.3) utilizes matrix estimation as a pivotal "black-box" subroutine, which enables accurate imputation and prediction in a model and noise agnostic setting. Over the past decade, the field of matrix estimation has spurred tremendous theoretical and empirical research interest, leading to the emergence of a myriad of algorithms based on spectral, convex optimization, and nearest neighbor approaches. Consequently, as the field continues to advance, our algorithm will continue to improve in parallel. We now state the properties needed of a matrix estimation algorithm to achieve our theoretical guarantees (formalized through Theorems 4.1 and 4.2); refer to Section 1.3 for matrix norm definitions.
Property 2.1 ().
Let ME satisfy the following: Define where if is observed, and otherwise. Then, for all and some , the produced estimator satisfies
(3) 
Here, $\hat{p}$ denotes the proportion of observed entries in $X$, and $C$ is a universal constant.
We argue that the two quantities appearing in Property 2.1 are natural. The first quantifies the amount of noise corruption on the underlying signal matrix; for many settings, this norm concentrates well (e.g., the operator norm of a matrix with independent zero-mean subgaussian entries scales as the square root of its larger dimension with high probability (Vershynin, 2010)). The second quantifies the inherent model complexity of the latent signal matrix; this norm is well behaved in an array of situations, including low-rank and Lipschitz matrices (e.g., for low-rank matrices it scales with the rank $r$ of the matrix; see (Chatterjee, 2015) for bounds under various settings). We note that the universal singular value thresholding algorithm proposed in (Chatterjee, 2015) is one such algorithm that satisfies Property 2.1. We provide more intuition for why we choose Property 2.1 for our matrix estimation methods in Section 4.2, where we bound the imputation error.
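A minimal sketch of a USVT-style estimator in the spirit of (Chatterjee, 2015); the threshold multiplier is a tuning constant of our choosing, not a value prescribed by that work:

```python
import numpy as np

def usvt(X, mask, threshold_mult=2.5):
    """Universal singular value thresholding, sketched.

    X    : observed matrix with arbitrary values where mask == 0
    mask : 1 where an entry is observed, 0 where it is missing
    """
    n, m = X.shape
    p_hat = max(mask.mean(), 1.0 / (n * m))  # estimated observation probability
    Y = np.where(mask == 1, X, 0.0)          # fill missing entries with zeros
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Keep only singular values above ~ sqrt(max-dimension * p_hat); the
    # multiplier is a tuning constant for this sketch.
    cutoff = threshold_mult * np.sqrt(max(n, m) * p_hat)
    s_thresh = np.where(s >= cutoff, s, 0.0)
    # Rescale by 1/p_hat to undo the bias introduced by zero-filling.
    return (U * s_thresh) @ Vt / p_hat
```

Note that no rank is supplied: the threshold is computed adaptively from the observed data, in line with the data-driven nature of the algorithm.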
Property 2.2 ().
Let ME satisfy the following: For all , the produced estimator satisfies
(4) 
where .
Property 2.2 requires the normalized max row sum error to decay to zero as we collect more data. While spectral thresholding and convex optimization methods accurately bound the average mean-squared error, minimizing norms akin to the normalized max row sum error requires matrix estimation methods that utilize "local" information, e.g., nearest neighbor type methods. For instance, (Zhang et al., 2015) satisfies Property 2.2 for generic latent variable models (which include low-rank models); (Lee et al., 2016) also satisfies Property 2.2; and (Borgs et al., 2017) establishes it for low-rank models under a suitable sampling rate.
3. Algorithm
3.1. Notations and definitions
Recall that $X(t)$ denotes the observation at time $t$, where $t \in [T]$. Furthermore, we define $L > 1$ to be an algorithmic hyperparameter (the number of rows in the matrices below). For any matrix, we shall refer to its last row and to the submatrix obtained by removing its last row.
3.2. Viewing a univariate time series as a matrix.
We begin by introducing the crucial step of transforming a single, univariate time series into the corresponding Page matrix. Given time series data $X(1), \dots, X(T)$, we construct $L$ different shifted matrices, defined as
(5) 
where the $k$-th matrix, $k \in \{0, \dots, L-1\}$, is built from the $k$-step shifted series. (Technically, to define each shifted matrix we need access to a few additional time steps of data; to reduce notational overload, and since it has no bearing on our theoretical analysis, we ignore this boundary effect.) In words, the Page matrix is obtained by dividing the time series into non-overlapping contiguous intervals, each of length $L$, thus constructing the columns; the $k$-th matrix is the $k$-step shifted version. For the purpose of imputation, we shall only utilize the unshifted matrix. In the case of forecasting, however, we shall utilize all $L$ shifted matrices. We define the mean counterparts analogously, using $f$ instead of $X$.
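The transformation can be sketched as follows (dropping the incomplete tail column, consistent with the boundary convention above):

```python
import numpy as np

def page_matrix(series, L, shift=0):
    """Non-overlapping Page matrix of window length L, starting at `shift`.

    Each column is a contiguous length-L segment of the series; unlike a
    Hankel matrix, no entry is repeated. For simplicity we drop the tail
    that does not fill a complete column.
    """
    x = np.asarray(series, dtype=float)[shift:]
    n_cols = len(x) // L
    return x[: L * n_cols].reshape(L, n_cols, order="F")
```

Imputation uses only `shift=0`; forecasting applies the construction once per shift in `0, ..., L-1`.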
3.3. Algorithm description
We will now describe the imputation and forecast algorithms separately (see Figure 1).
Imputation. Due to the matrix representation of the time series, the task of imputing missing values and denoising observed values translates to that of matrix estimation.
Forecast. In order to forecast future values, we first denoise and impute via the procedure outlined above, and then learn a linear relationship between the last row and the remaining rows through linear regression.

For each , apply the imputation algorithm to produce from .

For each , define .

Produce the estimate at time as follows:

Let and .

Let .

Produce the estimate: .

Why the repetition over shifted matrices is necessary for forecasting: For imputation, we are attempting to denoise all observations made up to time $T$; hence, it suffices to use only the unshifted matrix, since it contains all of the relevant information. However, in the case of making predictions, we only create an estimator for the last row. Thus, if we were to use a single matrix, our prediction algorithm would only produce estimates for every $L$-th entry. Therefore, we must repeat this procedure $L$ times, once per shift, in order to produce an estimate for each entry.
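A sketch of one pass of the forecasting procedure, using rank truncation as a stand-in for a generic matrix estimation subroutine (the window length and rank below are illustrative):

```python
import numpy as np

def denoise_and_regress(series, mask, L, rank):
    """One pass of the forecasting algorithm on a single (unshifted) Page
    matrix: denoise via rank truncation (a stand-in for a generic matrix
    estimation subroutine), then regress the last row on the rows above it.
    The full algorithm repeats this over all L shifts to cover every index.
    """
    x = np.asarray(series, dtype=float)
    p_hat = max(mask.mean(), 1e-6)
    n_cols = len(x) // L
    P = (np.where(mask, x, 0.0) / p_hat)[: L * n_cols].reshape(L, n_cols, order="F")
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    M_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # denoised Page matrix
    features, target = M_hat[:-1].T, M_hat[-1]            # rows above vs last row
    beta, *_ = np.linalg.lstsq(features, target, rcond=None)
    return beta, M_hat

# Sanity check on a noiseless harmonic (an LRF): the last row of the mean
# Page matrix is an exact linear combination of the rows above it, so the
# learnt weights reproduce the final value exactly.
t = np.arange(200)
x = np.sin(2 * np.pi * t / 20)
beta, M_hat = denoise_and_regress(x, np.ones(200, dtype=bool), L=8, rank=3)
prediction = beta @ M_hat[:-1, -1]
```

Forecasting a genuinely future value proceeds the same way: the learnt weights are applied to the vector of the most recent denoised values.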
4. Main Results
4.1. Properties
We now introduce the properties required of the underlying mean and observation matrices in order to identify the time series models for which our algorithm provides an effective method for imputation and prediction. Under these properties, we state Theorems 4.1 and 4.2, which establish the efficacy of our algorithm. The proofs of these theorems can be found in Appendices B and C, respectively. In Section 5, we argue that these properties are satisfied by a large class of time series models.
Property 4.1 ().
()imputable
Let matrices and satisfy the following:


A. For each and :

are independent subgaussian random variables satisfying the stated mean and variance conditions. (Recall that this condition only requires the per-step noise to be independent; the underlying mean time series remains highly correlated.)

is observed with probability , independent of other entries.


B. There exists a matrix of rank such that for ,
Property 4.2 ().
()forecastable
For all , let matrices and satisfy the following:


A. For each and :

, where the noise terms are independent subgaussian random variables with zero mean and bounded variance.

is observed with probability , independent of other entries.


B. There exists a with for some constant and such that
For forecasting, we make the more restrictive additive noise assumption since we focus on linear forecasting methods. Such methods generally require additive noise models. If one can construct linear forecasters under less restrictive assumptions, then we should be able to lift the analysis of such a forecaster to our setting in a straightforward way.
4.2. Imputation
The imputation algorithm produces an estimate of the underlying time series at every observed time index. We measure the imputation error through the relative mean-squared error:
(6) 
Recall from the imputation algorithm in Section 3.3 that the Page matrix is constructed from the observations and that ME produces the corresponding estimate. It is then easy to see that for any matrix estimation method we have
(7) 
Thus, we can immediately translate the (unnormalized) MSE of any matrix estimation method to the imputation error of the corresponding time series.
However, to highlight how the rank and the low-rank approximation error of the underlying mean matrix affect the error bound, we rely on Property 2.1, which elucidates these dependencies. Thus, we have the following theorem, which establishes a precise link between time series imputation and matrix estimation methods.
Theorem 4.1 states that any matrix estimation subroutine ME that satisfies Property 2.1 will accurately filter noisy observations and recover missing values. This is achieved provided that the rank of the mean matrix and our low-rank approximation error are not too large. Note that knowledge of the rank is not required a priori for many standard matrix estimation algorithms. For instance, (Chatterjee, 2015) does not utilize the rank in its estimation procedure; instead, it performs spectral thresholding of the observed data matrix in an adaptive, data-driven manner. Theorem 4.1 implies the following consistency property of our estimator.
Corollary 4.1 ().
Let the conditions for Theorem 4.1 hold, and suppose the mean signal is bounded away from zero. (Note this condition is easily satisfied for any time series by adding a constant shift to every observation.) Further, suppose the model is imputable in the sense of Property 4.1. Then the relative imputation error decays to zero as the number of observations grows.
We note that Theorem 4.1 follows in a straightforward manner from Property 2.1 and standard results from random matrix theory (Vershynin, 2010). However, we again highlight that our key contribution lies in establishing that the conditions of Corollary 4.1 hold for a large class of time series models (Section 5).
4.3. Forecast
Recall that the forecasting algorithm can only utilize information up to the present time. For each time index, our forecasting algorithm learns from the preceding time steps. We measure the forecasting error through:
(9) 
Here, we compare against the vector of forecasted values. The following result relies on a novel analysis of how applying a matrix estimation pre-processing step affects the prediction error of errors-in-variables regression problems (in particular, it requires analyzing a non-standard error metric, the MRSE).
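A sketch of this two-step errors-in-variables approach (the dimensions, noise level, and rank-truncation denoiser are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, r = 200, 30, 3

# True feature matrix is low-rank; responses follow a linear model on it.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
beta_true = rng.standard_normal(d)
y = A @ beta_true

# We only see noisy, partially missing features (errors-in-variables).
mask = rng.random((n, d)) < 0.9
Z = np.where(mask, A + 0.3 * rng.standard_normal((n, d)), 0.0) / 0.9

# Step 1: matrix estimation (here, hard rank-r SVD truncation) to denoise
# and impute the features.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
A_hat = (U[:, :r] * s[:r]) @ Vt[:r]

# Step 2: ordinary least squares on the pre-processed features.
beta_hat, *_ = np.linalg.lstsq(A_hat, y, rcond=None)
beta_naive, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Prediction error on the *true* features; denoising first typically
# beats naive OLS on the corrupted features.
err_me = np.linalg.norm(A @ beta_hat - y)
err_naive = np.linalg.norm(A @ beta_naive - y)
```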
Theorem 4.2 ().
Note that this quantity is trivially bounded by assumption (see Section 3). If the underlying matrix is low-rank, then ME algorithms such as the USVT algorithm (cf. (Chatterjee, 2015)) will output an estimator of correspondingly small rank. However, since our bound holds for general ME methods, we explicitly state the dependence on this quantity.
In essence, Theorem 4.2 states that any matrix estimation subroutine ME that satisfies Property 2.2 will produce accurate forecasts from noisy, missing data. This is achieved provided the linear model approximation error is not too large (recall by Property 2.2). Additionally, Theorem 4.2 implies the following consistency property of .
Corollary 4.2 ().
Let the conditions for Theorem 4.2 hold. Suppose is forecastable for any and for any . Then for , such that for ,
Similar to the case of imputation, a large contribution of this work is in establishing that the conditions of Corollary 4.2 hold for a large class of time series models (Section 5). Effectively, Corollary 4.2 demonstrates that learning a simple linear relationship among the singular vectors of the denoised matrix is sufficient to drive the empirical error to zero for a broad class of time series models. The simplicity of this linear method suggests that our estimator will have low generalization error, but we leave that as future work.
We should also note that for autoregressive processes (i.e., processes in which each value is a linear combination of past values plus mean-zero noise), previous works (e.g., (Nardi and Rinaldo, 2011)) have already shown that simple linear forecasters are consistent estimators. For such models, it is easy to see that the underlying mean matrix is not (approximately) low-rank, and so it is not necessary to pre-process the data matrix via a matrix estimation subroutine as we propose in Section 3.3.
5. Family of Time Series That Fit Our Framework
In this section, we list a broad set of time series models that satisfy Properties 4.1 and 4.2, which are required for the results stated in Section 4. The proofs of these results can be found in Appendix D. To that end, we shall repeatedly use the following model types for our observations.


Model Type 1. For each time index, let the observations form a sequence of independent subgaussian random variables whose means and variances satisfy the stated conditions. Note the noise here is generic (e.g., non-additive).

Model Type 2. Let each observation equal the model value plus additive noise, where the noise terms are independent subgaussian random variables with zero mean and bounded variance.
5.1. Linear recurrent functions (LRFs)
For , let
(10) 
Proposition 5.1 ().
.

Under Model Type 1, the LRF satisfies Property 4.1 with the stated parameters. (To see this, consider WLOG the first column; the recurrence determines each subsequent entry from the preceding ones, and by induction the claim holds for any finite order.)

Under Model Type 2, satisfies Property 4.2 with and for all where is an absolute constant.
Corollary 5.1 ().
Under Model Type 1, let the conditions of Theorem 4.1 hold. Let for any . Then for some , if
we have .
Corollary 5.2 ().
Under Model Type 2, let the conditions of Theorem 4.2 hold. Let for any . Then for some , if
we have .
We now provide the rank of an important class of time series models—a finite sum of products of polynomials, harmonics, and exponential time series functions.
Proposition 5.2 ().
Let be a polynomial of degree . Then,
admits a representation as in (10). Further, the order of the representation is independent of the number of observations, and is bounded by
where .
5.2. Functions with compact support
For , let
(11) 
where takes the form with ; and is Lipschitz for some .
Proposition 5.3 ().
Corollary 5.3 ().
Under Model Type 1, let the conditions of Theorem 4.1 hold. Let for any . Then for some and any , if
we have .
Corollary 5.4 ().
Under Model Type 2, let the conditions of Theorem 4.2 hold. Let for any . Then for some and any , if
we have .
As the following proposition will make precise, any Lipschitz function of a periodic time series falls into this family.
Proposition 5.4 ().
Let
(12) 
where the outer function is Lipschitz and each period is rational, admits a representation as in (11). Let $T_0$ denote the fundamental period. (The "fundamental period" $T_0$ is the smallest value such that $T_0$ divided by each period is an integer. Writing each period as a fraction, let $Q$ be the least common multiple (LCM) of the denominators; rewriting all periods over the common denominator $Q$, the numerators are all integers, and we define their LCM as $P$. It is easy to verify that $P/Q$ is indeed a fundamental period.) Then the Lipschitz constant of the composite function is bounded by
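The fundamental-period computation described in the footnote can be sketched as follows (assuming the periods are supplied as exact rationals):

```python
from fractions import Fraction
from math import gcd

def fundamental_period(periods):
    """Smallest T such that T / T_i is an integer for every rational
    period T_i: clear all periods to a common denominator Q (the LCM of
    the denominators), then take the LCM P of the resulting integer
    numerators; the fundamental period is P / Q.
    """
    fracs = [Fraction(p).limit_denominator() for p in periods]
    common_den = 1
    for f in fracs:
        common_den = common_den * f.denominator // gcd(common_den, f.denominator)
    numerators = [f.numerator * (common_den // f.denominator) for f in fracs]
    lcm_num = 1
    for m in numerators:
        lcm_num = lcm_num * m // gcd(lcm_num, m)
    return Fraction(lcm_num, common_den)
```

For periods 3/2 and 2, for example, the common denominator is 2, the numerators become 3 and 4 with LCM 12, and the fundamental period is 12/2 = 6.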
5.3. Finite sum of sublinear trends
Consider such that
(13) 
for some .
Proposition 5.5 ().
By Proposition 5.5 and Theorems 4.1 and 4.2, we immediately have the following corollaries on the finite sample performance guarantees of our estimators.
Corollary 5.5 ().
Under Model Type 1, let the conditions of Theorem 4.1 hold. Let for any . Then for some , if
we have .
Corollary 5.6 ().
Under Model Type 2, let the conditions of Theorem 4.2 hold. Let for any . Then for some and for any , if
we have .
Proposition 5.6 ().
5.4. Additive mixture of dynamics
We now show that the imputation results hold even when we consider an additive mixture of any of the models described above. For , let
(15) 
Here, each is such that under Model Type 1 with , Property 4.1 is satisfied with and for .
Proposition 5.7 ().
Under Model Type 1, satisfies Property 4.1 with and .
Corollary 5.7 ().
Under Model Type 1, let the conditions of Theorem 4.1 hold. For each , let and for some . Then,