Time Series Analysis via Matrix Estimation

by Anish Agarwal, et al.

We consider the task of interpolating and forecasting a time series in the presence of noise and missing data. As the main contribution of this work, we introduce an algorithm that transforms the observed time series into a matrix, utilizes singular value thresholding to simultaneously recover missing values and de-noise observed entries, and performs linear regression to make predictions. We argue that this method provides meaningful imputation and forecasting for a large class of models: finite sums of harmonics (which approximate stationary processes), non-stationary sublinear trends, Linear Time-Invariant (LTI) systems, and their additive mixtures. In general, our algorithm recovers the hidden state of dynamics based on its noisy observations, like that of a Hidden Markov Model (HMM), provided the dynamics obey the above stated models. We demonstrate on synthetic and real-world datasets that our algorithm outperforms standard software packages not only in the presence of significant amounts of missing data and high levels of noise, but also when the packages are given the underlying model while our algorithm remains oblivious. This is in line with the finite sample analysis for these model classes.




1. Introduction

Time series data is of enormous interest across all domains of life: from health sciences and weather forecasts to retail and finance, time dependent data is ubiquitous. Despite the diversity of applications, time series problems are commonly confronted by the same two pervasive obstacles: interpolation and extrapolation in the presence of noisy and/or missing data. Specifically, we consider a discrete-time setting with $t \in \mathbb{Z}$ representing the time index and $f: \mathbb{Z} \to \mathbb{R}$ representing the latent discrete-time time series of interest. (We denote by $\mathbb{R}$ the field of real numbers and by $\mathbb{Z}$ the integers.) For each $t \in [T]$ and with probability $p \in (0, 1]$, we observe the random variable $X(t)$ such that $\mathbb{E}[X(t)] = f(t)$. While the underlying mean signal $f$ is of course strongly correlated, we assume the per-step noise is independent across $t$ and has uniformly bounded variance. Under this setting, we have two objectives: (1) interpolation, i.e., estimate $f(t)$ for all $t \in [T]$; (2) extrapolation, i.e., forecast $f(t)$ for $t > T$. Our interest is in designing a generic method for interpolation and extrapolation that is applicable to a large model class while being agnostic to the time dynamics and noise distribution.

We develop an algorithm based on matrix estimation, a topic which has received widespread attention, especially with the advent of large datasets. In the matrix estimation setting, there is a "parameter" matrix $M$ of interest, and we observe a sparse, corrupted signal matrix $X$ with $\mathbb{E}[X] = M$. The aim then is to recover the entries of $M$ from the noisy and partial observations given in $X$. For our purposes, the attractiveness of matrix estimation derives from the property that these methods are fairly model agnostic in terms of the structure of $M$ and the distribution of $X$ given $M$. We utilize this key property to develop a model and noise agnostic time series imputation and prediction algorithm.

1.1. Overview of contributions

Time series as a matrix. We transform the time series of observations $X(t)$, $t \in [T]$, into what is known as the Page matrix (cf. (Damen et al., 1982)) by placing contiguous segments of size $L$ (an algorithmic hyper-parameter) of the time series into non-overlapping columns; see Figure 1 for a caricature of this transformation.
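To make the transformation concrete, the following Python sketch builds a Page matrix from a raw series; `page_matrix` is an illustrative helper (its name and the trailing-segment handling are our assumptions, not from the paper):

```python
import numpy as np

def page_matrix(x, L):
    """Arrange a length-T series into an L x floor(T/L) Page matrix:
    contiguous, non-overlapping segments of length L become the columns."""
    T = (len(x) // L) * L          # drop any trailing partial segment
    return x[:T].reshape(-1, L).T  # column j holds x[j*L : (j+1)*L]

# Toy example: 12 observations, segments of length L = 3.
x = np.arange(12.0)
X = page_matrix(x, L=3)            # 3 rows, 4 non-overlapping columns
```

Note that the columns do not overlap, in contrast to the Hankel matrix used by SSA (discussed in Section 1.2).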

As the key contribution, we establish that—in expectation—this generated matrix is either exactly or approximately low-rank for a large class of models of $f$. Specifically, $f$ can be from the following families:

  1. Linear Recurrent Formulae (LRF): $f(t) = \sum_{g=1}^{G} \alpha_g f(t - g)$ for some finite order $G$ and coefficients $\alpha_1, \dots, \alpha_G$.

  2. Compact Support: $f$ is the composition of a Lipschitz function with components of compact support. (We say a function $g$ is $\mathcal{L}$-Lipschitz if $|g(x) - g(y)| \le \mathcal{L}\,\|x - y\|_2$ for all $x, y$, where $\|\cdot\|_2$ denotes the standard Euclidean norm. It can be verified that an LRF of the above form also satisfies the compact-support form with appropriately defined constants and functions; see Proposition D.2 of Appendix D for details.)

  3. Sublinear: $f$ is a sublinear trend, i.e., $f(t) = O(t^{\alpha})$ for some $\alpha \in (0, 1)$.

Over the past decade, the matrix estimation community has developed a plethora of methods to recover an exactly or approximately low-rank matrix from its noisy, partial observations in a noise and model agnostic manner. Therefore, by applying such a matrix estimation method to this transformed matrix, we can recover the underlying mean matrix (and thus the latent series $f(t)$ for $t \in [T]$) accurately. In other words, we can interpolate and de-noise the original corrupted and incomplete time series without any knowledge of its time dynamics or noise distribution. Theorem 4.1 and Corollary 4.1 provide finite-sample analyses for this method and establish the consistency of our algorithm, as long as the underlying model satisfies Property 4.1 and the matrix estimation method satisfies Property 2.1. In Section 5, we show that any additive mixture of the three function classes listed above satisfies Property 4.1. Effectively, Theorem 4.1 establishes a statistical reduction from time series imputation to matrix estimation. Our key contribution with regards to imputation lies in establishing that a large class of time series models (see Section 5) satisfies Property 4.1.

Figure 1. Caricature of imputation and forecast algorithms. We first transform the noisy time series (with "?" indicating missing data) into a Page matrix with non-overlapping entries. For imputation, we apply a matrix estimation (ME) algorithm to this matrix to obtain estimates of the de-noised and filled-in entries. For forecasting, we first apply ME to the submatrix excluding the last row, and then fit a linear model between the last row and all other rows to obtain the forecast estimates.

It is clear that for an LRF, the last row of the mean transformed matrix can be expressed as a linear combination of the other rows. An important representation result of the present paper, which generalizes this notion, is that an approximate LRF relationship holds for the other two model classes. Therefore, we can forecast the next value of the series as follows: apply matrix estimation to the transformed data matrix as done in imputation; then, linearly regress the last row against the other rows of the matrix; finally, compute the inner product of the learnt regression vector with the vector containing the previous values that were estimated via the matrix estimation method. Theorem 4.2 and Corollary 4.2 imply that the mean-squared error of our predictions decays to zero provided the matrix estimation method satisfies Property 2.2 and the underlying model satisfies Property 4.2. Similar to the case of imputation, establishing that Property 4.2 holds for the three function classes is novel (see Section 5).

Noisy regression. Our proposed forecasting algorithm performs regression with noisy and incomplete features. In the literature, this is known as error-in-variable regression. Recently, there has been exciting progress in understanding this problem, especially in the high-dimensional setting (Loh and Wainwright, 2012; Belloni et al., 2017; Datta and Zou, 2017). Our algorithm offers an alternate solution for the high-dimensional setting through the lens of matrix estimation: first, utilize matrix estimation to de-noise and impute the feature observations, and then perform least squares with the pre-processed feature matrix. We demonstrate that if the true, underlying feature matrix is (approximately) low-rank, then our algorithm provides a consistent estimator of the true signal (with finite sample guarantees). Our analysis further suggests the usage of a non-standard error metric, the max row sum error (MRSE) (see Property 2.2 for details).

Class of applicable models. As aforementioned, our algorithm enjoys strong performance guarantees provided the underlying mean matrix induced by the time series satisfies certain structural properties, i.e., Properties 4.1 and 4.2. We argue that a broad class of commonly used time series models meets the requirements of the three function classes listed above.

LRFs include the following important family of time series: a finite sum of products of exponentials, harmonics, and finite-degree polynomials (Golyandina et al., 2001). Further, since stationary processes and integrable functions are well approximated by a finite summation of harmonics, LRFs encompass a vitally important family of models. For this model class, we show that the structural properties required from the time series matrix for both imputation and prediction are indeed satisfied.

However, there are many important time series models that do not admit a finite-order LRF representation. Time series models with compact support, on the other hand, include models composed of a finite summation of periodic functions. Utilizing our low-rank representation result, we establish that models with compact support possess the desired structural properties. We further demonstrate that sublinear functions, which include models composed of a finite summation of non-(super-)linear functions, also possess the necessary structural properties. Importantly, we argue that finite mixtures of the above processes satisfy the necessary structural properties as well.

Recovering the hidden state. Our algorithm, being noise and time-dynamics agnostic, makes it relevant to recover the hidden state from its noisy, partial observations as in a Hidden Markov-like Model. For example, imagine having access to partial observations of a time-varying truncated Poisson process (a truncated Poisson random variable is a Poisson random variable capped at a positive, bounded constant) without knowledge that the process is Poisson. By applying our imputation algorithm, we can recover the time-varying parameters of this process accurately and, thus, the hidden states. If we were to apply an Expectation-Maximization (EM) like algorithm, it would require knowledge that the underlying model is Poisson; moreover, theoretical guarantees are not clear for such an approach.
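The hidden-state recovery just described can be sketched end to end. In the snippet below, the sinusoidal rate, observation probability, and rank choice are illustrative assumptions, a plain rank truncation of the SVD stands in for the matrix estimation subroutine, and (for simplicity) we use an untruncated Poisson; crucially, nothing in the recovery step uses the fact that the counts are Poisson:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 20_000, 100
t = np.arange(T)
lam = 10 + 5 * np.sin(2 * np.pi * t / 500)      # hidden time-varying rate (the "state")
obs = rng.poisson(lam).astype(float)
mask = rng.random(T) < 0.7                      # each count observed w.p. 0.7

X = np.where(mask, obs, 0.0).reshape(-1, L).T   # Page matrix, missing -> 0
p_hat = mask.mean()

# Model-agnostic matrix estimation: rescale by 1/p_hat (unbiased for the
# mean matrix) and keep only the top singular directions.
U, s, Vt = np.linalg.svd(X / p_hat, full_matrices=False)
k = 3                                           # one harmonic + offset has rank <= 3
M_hat = (U[:, :k] * s[:k]) @ Vt[:k]
lam_hat = M_hat.T.reshape(-1)                   # unfold back to a time series
```

The recovered series tracks the hidden rate even though roughly 30% of the counts are missing and the algorithm never learns that the noise is Poisson.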

Sample complexity. Given the generality and model agnostic nature of our algorithm, it is expected that its sample complexity for a specific model class will be worse than model aware optimal algorithms. Interestingly, our finite sample analysis suggests that for the model classes stated above, the performance loss incurred due to this generality is minor. See Section 5.6 for a detailed analysis.

Experiments. Using synthetic and real-world datasets, our experiments establish that our method outperforms existing standard software packages (including R packages) for the tasks of interpolation and extrapolation in the presence of noisy and missing observations. When the data is generated synthetically, we "help" the existing software packages by choosing the correct parametric model and algorithm while our algorithm remains oblivious to the underlying model; despite this disadvantage, our algorithm continues to outperform the standard packages with missing data.

Further, our empirical studies demonstrate that our imputation algorithm accurately recovers the hidden state for Hidden Markov-like Models, verifying our theoretical imputation guarantees (see Theorem 4.1). All experimental findings can be found in Section 6.

1.2. Related works

There are two related topics: matrix estimation and time series analysis. Given the richness of both fields, we cannot do justice in providing a full overview. Instead, we provide a high-level summary of known results with references that provide details.

Matrix estimation. Matrix estimation is the problem of recovering a data matrix from an incomplete and noisy sampling of its entries. This has become of great interest due to its connection to recommendation systems (cf. (Keshavan et al., 2010a, b; Negahban and Wainwright, 2011; Chen and Wainwright, 2015; Chatterjee, 2015; Lee et al., 2016; Candès and Tao, 2010; Recht, 2011; Davenport et al., 2014)), social network analysis (cf. (Abbe and Sandon, 2015a, b, 2016; Anandkumar et al., 2013; Hopkins and Steurer, 2017)), and graph learning (graphon estimation) (cf. (Airoldi et al., 2013; Zhang et al., 2015; Borgs et al., 2015, 2017)). The key realization of this rich literature is that one can estimate the true underlying matrix from noisy, partial observations by simply taking a low-rank approximation of the observed data. We refer the interested reader to recent works such as (Chatterjee, 2015; Borgs et al., 2017) and references therein.

Time series analysis. The question of time series analysis is potentially as old as civilization in some form. A few textbook-style references include (Brockwell and Davis, 2013; Box and Reinsel, 1994; Hamilton, 1994; Robert H. Shumway, 2015). At the highest level, time series modeling primarily involves viewing a given time series as a function indexed by time (integer or real valued), and the goal of model learning is to identify this function from observations (over finite intervals). Given that the space of such functions is complex, the task is to utilize a functional form (i.e., "basis functions") so that, for the given setting, the time series observations admit a sparse representation. For example, in communication and signal processing, the harmonic or Fourier representation of a time series has been widely utilized, due to the fact that communicated signals are periodic in nature. The approximation of stationary processes via harmonics or ARIMA has made them a popular model class for learning stationary-like time series, with domain-specific variations such as 'Autoregressive Conditional Heteroskedasticity' (ARCH) in finance. To capture non-stationary or "trend-like" behavior, polynomial bases have been considered. There are rich connections to the theory of stochastic processes and information theory (cf. (Cover, 1966; Shields, 1998; Rissanen, 1984; Feder et al., 1992)). Popular time series models with latent structure are Hidden Markov Models (HMMs) in probabilistic form (cf. (Kalman et al., 1960; Baum and Petrie, 1966)) and Recurrent Neural Networks (RNNs) in deterministic form (cf. (Schmidhuber, 1992)).

The question of learning time series models with missing data has received comparatively less attention. A common approach is to utilize HMMs or general State-Space Models to learn with missing data (cf. (Dunsmuir and Robinson, 1981; Shumway and Stoffer, 1982)). To the best of the authors' knowledge, most work within this literature is restricted to such classes of models (cf. (Durbin and Koopman, 2012)). Recently, building on the literature in online learning, sequential approaches have been proposed to address prediction with missing data (cf. (Anava et al., 2015)).

Time series and matrix estimation. The use of a matrix structure for time series analysis has roughly two streams of related work: SSA for a single time series (as in our setting), and the use of multiple time series. We discuss relevant results for both of these topics.

Singular Spectrum Analysis (SSA) of time series has been around for some time. Generally, it assumes access to time series data that is noiseless and fully observed. The core steps of SSA for a given time series are as follows: (1) create a Hankel matrix from the time series data; (2) perform a Singular Value Decomposition (SVD) of it; (3) group the singular values based on user belief of the model that generated the process; (4) perform diagonal averaging to "Hankelize" the grouped rank-1 matrices outputted from the SVD, creating a set of time series; (5) learn a linear model for each "Hankelized" time series for the purpose of forecasting.

At the highest level, SSA and our algorithm are cosmetically similar to one another. There are, however, several key differences: (i) matrix transformation—while SSA uses a Hankel matrix (with repeated entries), we transform the time series into a Page matrix (with non-overlapping structure); (ii) matrix estimation—SSA heavily relies on the SVD while we utilize general matrix estimation procedures (with SVD methods representing one specific procedural choice); (iii) linear regression—SSA assumes access to fully observed and noiseless data while we allow for corrupted and missing entries.

These differences are key in being able to derive theoretical results. For example, there have been numerous recent works that have attempted to apply matrix estimation methods to the Hankel matrix inspired by SSA for imputation, but these works do not provide any theoretical guarantees (Shen et al., 2015; Schoellhamer, 2001; Tsagkatakis et al., 2016). In effect, the Hankel structure creates strong correlation of noise in the matrix, which is an impediment to proving theoretical results. Our use of the Page matrix overcomes this challenge, and we argue that in doing so, we still retain the underlying structure in the matrix. With regards to forecasting, the use of matrix estimation methods that provide guarantees with respect to MRSE rather than standard MSE is needed (which SSA provides no theoretical analysis for). While we do not explicitly discuss such methods in this work, such methods are explored in detail in (Agarwal et al., 2018). With regards to imputation, SSA does not provide direction on how to group the singular values, which is instead done based on user belief of the generating process. However, due to recent advances in the matrix estimation literature, there exist algorithms that provide data-driven methods to perform spectral thresholding (cf. (Chatterjee, 2015)). Finally, it is worth noting that, to the best of the authors' knowledge, the classical literature on SSA seems to lack finite sample analysis in the presence of noisy observations, which we do provide for our algorithm.

Multiple time series viewed as matrix. In a recent line of work (Amjad and Shah, 2017; Yu et al., 2016; Xie et al., 2016; Rallapalli et al., 2010; Chen and Cichocki, 2005; Amjad et al., 2017), multiple time series have been viewed as a matrix with the primary goal of imputing missing values or de-noising them. Some of these works also require prior model assumptions on the underlying time series. For example in (Yu et al., 2016), as stated in Section 1, the second step of their algorithm changes based on the user’s belief in the model that generated the data along with the multiple time series requirement.

In summary, to the best of our knowledge, ours is the first work to give rigorous theoretical guarantees for a matrix estimation inspired algorithm for a single, univariate time series.

Recovering the hidden state. The question of recovering the hidden state from noisy observations is quite prevalent and a workhorse of classical systems theory. For example, most of the system identification literature focuses on recovering model parameters of a Hidden Markov Model. While Expectation-Maximization or Baum-Welch are the go-to approaches, there is limited theoretical understanding of them in general (see, for example, the recent work (Yang et al., 2017) for an overview), and knowledge of the underlying model is required. For instance, (Bertsimas et al., 1999) proposed an optimization based, statistically consistent estimation method. However, the optimization "objective" encoded knowledge of the precise underlying model.

It is worth comparing our method with a recent work (Amjad and Shah, 2017) where the authors attempt to recover the hidden time-varying parameter of a Poisson process via matrix estimation. Unlike our work, they require access to multiple time series. In essence, our algorithm provides the solution to the same question without requiring access to any other time series!

1.3. Notation

For any positive integer $N$, let $[N] = \{1, \dots, N\}$. For any vector $v$, we denote its Euclidean ($\ell_2$) norm by $\|v\|_2$, and define $\|v\|_\infty = \max_i |v_i|$. In general, the $\ell_p$ norm of a vector $v$ is defined as $\|v\|_p = \big(\sum_i |v_i|^p\big)^{1/p}$.

For a real-valued matrix $A$, its spectral/operator norm, denoted by $\|A\|_2$, is defined as $\|A\|_2 = \sigma_1$, where $\sigma_1 \ge \sigma_2 \ge \cdots$ are the singular values of $A$ (assumed to be in decreasing order and repeated by multiplicities). The Frobenius norm, also known as the Hilbert-Schmidt norm, is defined as $\|A\|_F = \big(\sum_{i,j} A_{ij}^2\big)^{1/2}$. The max-norm, or sup-norm, is defined as $\|A\|_{\max} = \max_{i,j} |A_{ij}|$. The Moore-Penrose pseudoinverse of $A$ is defined as $A^{\dagger} = \sum_{i:\, \sigma_i > 0} \sigma_i^{-1} v_i u_i^{T}$, with $u_i$ and $v_i$ being the left and right singular vectors of $A$, respectively.

For a random variable $X$, we define its sub-gaussian norm as $\|X\|_{\psi_2} = \sup_{q \ge 1} q^{-1/2} \big(\mathbb{E}|X|^q\big)^{1/q}$. If $\|X\|_{\psi_2}$ is bounded by a constant, we call $X$ a sub-gaussian random variable.

Let $f$ and $g$ be two functions defined on the same space. We say that $f(x) = O(g(x))$ if and only if there exists a positive real number $M$ and a real number $x_0$ such that for all $x \ge x_0$, $|f(x)| \le M\,|g(x)|$. Similarly, we say $f(x) = \Omega(g(x))$ if and only if for all $x \ge x_0$, $|f(x)| \ge M\,|g(x)|$.
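As a quick concretization of the matrix norms and the pseudoinverse defined above (the example matrix is arbitrary):

```python
import numpy as np

# A small worked example of the matrix quantities defined above.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

spectral  = np.linalg.norm(A, 2)      # sigma_1, the largest singular value
frobenius = np.linalg.norm(A, 'fro')  # sqrt of the sum of squared entries
max_norm  = np.abs(A).max()           # sup-norm: largest |A_ij|
A_pinv    = np.linalg.pinv(A)         # Moore-Penrose pseudoinverse

# Consequences of the definitions: sigma_1 <= ||A||_F (the Frobenius norm
# sums *all* squared singular values), and A A^+ A = A.
```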

1.4. Organization

In Section 2, we list the desired properties needed from a matrix estimation method in order to achieve our theoretical guarantees for imputation and prediction. In Section 3, we formally describe the matrix estimation based algorithms we utilize for time series analysis. In Section 4, we identify the required properties of time series models under which we can provide finite sample analysis for imputation and prediction performance. In Section 5, we list a broad set of time series models that satisfy the properties in Section 4, and we analyze the sample complexity of our algorithm for each of these models. Lastly, in Section 6, we corroborate our theoretical findings with detailed experiments.

2. Matrix Estimation

2.1. Problem setup

Consider an $m \times n$ matrix $M$ of interest. Suppose we observe a random subset of the entries of a noisy signal matrix $X$, such that $\mathbb{E}[X] = M$. For each $i \in [m]$ and $j \in [n]$, the $(i,j)$-th entry $X_{ij}$ is a random variable that is observed with probability $p$ and is missing with probability $1 - p$, independently of all other entries. Given $X$, the goal is to produce an estimator $\hat{M}$ that is "close" to $M$. We use two metrics to quantify the estimation error:

(1) mean-squared error,

$\mathrm{MSE}(\hat{M}, M) = \frac{1}{mn}\, \mathbb{E}\Big[\sum_{i=1}^{m} \sum_{j=1}^{n} \big(\hat{M}_{ij} - M_{ij}\big)^2\Big];$

(2) max row sum error,

$\mathrm{MRSE}(\hat{M}, M) = \max_{i \in [m]} \frac{1}{\sqrt{n}}\, \mathbb{E}\Big[\Big(\sum_{j=1}^{n} \big(\hat{M}_{ij} - M_{ij}\big)^2\Big)^{1/2}\Big].$

Here, $\hat{M}_{ij}$ and $M_{ij}$ denote the $(i,j)$-th elements of $\hat{M}$ and $M$, respectively. We highlight that the MRSE is a non-standard matrix estimation error metric, but we note that it is a stronger notion than the RMSE (the square root of the MSE); in particular, it is easily seen that $\mathrm{RMSE} \le \mathrm{MRSE}$. Hence, for any results we prove in Section 4 regarding the MRSE, any known lower bounds for the RMSE of matrix estimation algorithms immediately hold for our results. We now give a definition of a matrix estimation algorithm, which will be used in the following sections.
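For concreteness, the two metrics can be computed as follows; the `mrse` formalization below is our assumed reading of the definition above (max over rows of the row-wise root-mean-squared error), so treat it as a sketch rather than the paper's exact definition:

```python
import numpy as np

def mse(M_hat, M):
    # mean-squared error, averaged over all m * n entries
    return np.mean((M_hat - M) ** 2)

def mrse(M_hat, M):
    # max row sum error: the worst row's root-mean-squared error
    return np.sqrt(np.max(np.mean((M_hat - M) ** 2, axis=1)))

# One bad entry in a 4 x 3 matrix: the max over rows dominates the
# average over rows, so sqrt(MSE) <= MRSE.
M = np.zeros((4, 3))
M_hat = np.zeros((4, 3))
M_hat[0, 0] = 1.0
```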

Definition 2.1.

A matrix estimation algorithm, denoted as ME, takes as input a noisy matrix $X$ and outputs an estimator $\hat{M}$.

2.2. Required properties of matrix estimation algorithms

As aforementioned, our algorithm (Section 3.3) utilizes matrix estimation as a pivotal “blackbox” subroutine, which enables accurate imputation and prediction in a model and noise agnostic setting. Over the past decade, the field of matrix estimation has spurred tremendous theoretical and empirical research interest, leading to the emergence of a myriad of algorithms including spectral, convex optimization, and nearest neighbor based approaches. Consequently, as the field continues to advance, our algorithm will continue to improve in parallel. We now state the properties needed of a matrix estimation algorithm to achieve our theoretical guarantees (formalized through Theorems 4.1 and 4.2); refer to Section 1.3 for matrix norm definitions.

Property 2.1.

Let ME satisfy the following: define $Y$ by $Y_{ij} = X_{ij}$ if $X_{ij}$ is observed, and $Y_{ij} = 0$ otherwise. Then, for all $p$ and for some $r$, the produced estimator $\hat{M}$ satisfies


Here, $\hat{p}$ denotes the proportion of observed entries in $Y$ (precisely, this proportion lower-bounded away from zero so that it is well defined), and $C$ is a universal constant.

We argue the two quantities appearing in the bound of Property 2.1 are natural. The first quantifies the amount of noise corruption on the underlying signal matrix $M$; for many settings, this spectral norm concentrates well (e.g., for a matrix with independent zero-mean sub-gaussian entries, it scales as the square root of the larger dimension with high probability (Vershynin, 2010)). The second quantifies the inherent model complexity of the latent signal matrix; this quantity is well behaved in an array of situations, including low-rank and Lipschitz matrices (e.g., for low-rank matrices, it scales with $\sqrt{r}$, where $r$ is the rank of the matrix; see (Chatterjee, 2015) for bounds under various settings). We note the universal singular value thresholding (USVT) algorithm proposed in (Chatterjee, 2015) is one such algorithm that satisfies Property 2.1. We provide more intuition for why we choose Property 2.1 for our matrix estimation methods in Section 4.2, where we bound the imputation error.
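A minimal sketch of USVT-style estimation follows. The rescaling-by-observed-fraction and the singular value threshold follow the general recipe of (Chatterjee, 2015), but the exact threshold constant here is an assumption for illustration, not the paper's specification:

```python
import numpy as np

def usvt(Y, p_hat, eta=0.01):
    """Sketch of universal singular value thresholding: SVD the zero-filled
    observation matrix, keep singular values above a universal threshold,
    and rescale by the observed fraction. Threshold constant is assumed."""
    m, n = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    thresh = (2 + eta) * np.sqrt(max(m, n) * p_hat)
    keep = s >= thresh
    return (U[:, keep] * s[keep]) @ Vt[keep] / p_hat

# Toy check: recover a rank-1 mean matrix from noisy, ~80%-observed entries.
rng = np.random.default_rng(1)
m = n = 60
M = np.outer(np.ones(m), np.linspace(1.0, 2.0, n))   # rank-1 "parameter" matrix
noise = 0.5 * rng.standard_normal((m, n))
mask = rng.random((m, n)) < 0.8
Y = np.where(mask, M + noise, 0.0)                   # missing entries set to 0
M_hat = usvt(Y, mask.mean())
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

The threshold is data-driven in the sense that no knowledge of the rank of `M` is used.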

Property 2.2.

Let ME satisfy the following: for all matrix dimensions, the produced estimator satisfies


where the error term decays to zero as the dimensions grow.

Property 2.2 requires the normalized max row sum error to decay to zero as we collect more data. While spectral thresholding and convex optimization methods accurately bound the average mean-squared error, minimizing norms akin to the normalized max row sum error requires matrix estimation methods that utilize "local" information, e.g., nearest neighbor type methods. For instance, (Zhang et al., 2015) satisfies Property 2.2 for generic latent variable models (which include low-rank models); (Lee et al., 2016) also satisfies Property 2.2; (Borgs et al., 2017) establishes this for low-rank models under a mild condition on the fraction of observed entries.

3. Algorithm

3.1. Notations and definitions

Recall that $X(t)$ denotes the observation at time $t$. Furthermore, we define the number of rows of the Page matrix, $L$, to be an algorithmic hyperparameter, with the number of columns determined by the length of the series. For any matrix, we shall also refer to its last row and to the submatrix obtained by removing that last row.

3.2. Viewing a univariate time series as a matrix.

We begin by introducing the crucial step of transforming a single, univariate time series into the corresponding Page matrix. Given the time series data up to time $T$, we construct $L$ different matrices, one per shift $s \in \{0, \dots, L-1\}$: the $s$-th matrix is obtained by dividing the series, shifted by $s$ time steps, into non-overlapping contiguous intervals each of length $L$ and placing those intervals as columns. (Technically, to define each shifted matrix we need access to a few additional time steps of data; to reduce notational overload, and since it has no bearing on our theoretical analysis, we ignore this boundary effect.) For the purpose of imputation, we shall only utilize the unshifted matrix. In the case of forecasting, however, we shall utilize all $L$ shifted matrices. We define the corresponding mean matrices analogously, using the latent means $f(t)$ in place of the observations $X(t)$.
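A small sketch of the shifted construction (the helper name and the boundary handling are our assumptions):

```python
import numpy as np

def shifted_page_matrices(x, L):
    """Build L shifted Page matrices: the s-th matrix arranges x[s:]
    into non-overlapping length-L segments as columns (s = 0, ..., L-1)."""
    n = (len(x) - (L - 1)) // L        # columns available for every shift
    return [x[s:s + n * L].reshape(n, L).T for s in range(L)]

x = np.arange(10.0)
M0, M1, M2 = shifted_page_matrices(x, L=3)
# For this ramp signal, each shifted matrix is the previous one plus 1,
# since shifting the ramp by one step adds 1 to every entry.
```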

3.3. Algorithm description

We will now describe the imputation and forecast algorithms separately (see Figure 1).

Imputation. Due to the matrix representation of the time series, the task of imputing missing values and de-noising observed values translates to that of matrix estimation.

  1. Transform the data into the matrix via the method outlined in Subsection 3.2.

  2. Apply a matrix estimation method (as in Definition 2.1) to produce .

  3. Produce the estimate: read the de-noised, filled-in time series values off the corresponding entries of the estimated matrix.
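The three imputation steps can be sketched as a single routine; here a plain rank-k SVD truncation (with rescaling by the observed fraction) stands in for the ME black box, and the signal, noise level, and rank are illustrative assumptions:

```python
import numpy as np

def impute_time_series(obs, mask, L, k):
    """Steps 1-3 above, sketched: Page matrix -> matrix estimation (a plain
    rank-k SVD truncation stands in for the ME subroutine) -> unfold."""
    T = (len(obs) // L) * L
    X = np.where(mask[:T], obs[:T], 0.0).reshape(-1, L).T
    p_hat = max(mask[:T].mean(), 1.0 / T)        # proportion of observed entries
    U, s, Vt = np.linalg.svd(X / p_hat, full_matrices=False)
    M_hat = (U[:, :k] * s[:k]) @ Vt[:k]
    return M_hat.T.reshape(-1)                   # back to a time series

# Toy check on a sum of harmonics (an LRF-type model), 60% observed.
rng = np.random.default_rng(2)
T, L = 6000, 50
t = np.arange(T)
f = np.sin(2 * np.pi * t / 60) + 0.5 * np.cos(2 * np.pi * t / 25)
obs = f + 0.5 * rng.standard_normal(T)
mask = rng.random(T) < 0.6
f_hat = impute_time_series(obs, mask, L, k=5)

mse_raw = np.mean((obs[mask] - f[mask]) ** 2)    # error of the raw noisy observations
mse_hat = np.mean((f_hat - f) ** 2)              # error after imputation
```

On this example the imputed series is closer to the latent means than the raw observations are, while also filling in the 40% of entries that were never observed.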

Forecast. In order to forecast future values, we first de-noise and impute via the procedure outlined above, and then learn a linear relationship between the last row and the remaining rows through linear regression.

  1. For each of the $L$ shifts, apply the imputation algorithm to produce a de-noised, filled-in matrix from the corresponding shifted Page matrix.

  2. For each shift, take the submatrix consisting of all rows but the last as the regression features.

  3. Produce the estimate at a given time as follows: linearly regress the last row against the de-noised feature rows via least squares, and output the inner product of the learnt regression vector with the vector of previously estimated values.
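A hedged sketch of these forecasting steps, using a single (unshifted) Page matrix and a truncated SVD as the stand-in ME subroutine; the signal and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 4000, 40
t = np.arange(T)
f = np.sin(2 * np.pi * t / 80)                  # latent mean series
obs = f + 0.3 * rng.standard_normal(T)

X = obs.reshape(-1, L).T                        # L x (T/L) Page matrix
U, s, Vt = np.linalg.svd(X[:-1], full_matrices=False)
k = 3
F_hat = (U[:, :k] * s[:k]) @ Vt[:k]             # de-noised feature rows (step 1-2)

# Step 3: regress the last row on the de-noised rows, then predict.
beta, *_ = np.linalg.lstsq(F_hat.T, X[-1], rcond=None)
pred = F_hat.T @ beta                           # forecasts at the last-row times
truth = f[L - 1::L]                             # true means at those times
mse_pred = np.mean((pred - truth) ** 2)
```

A forecast for a future time step would then be the inner product of `beta` with the de-noised values at the preceding L - 1 times, exactly as in step 3.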

Why the $L$ shifted matrices are necessary for forecasting: For imputation, we are attempting to de-noise all observations made up to time $T$; hence, it suffices to use only the unshifted matrix, since it contains all of the relevant information. However, in the case of making predictions, we are only creating an estimator for the last row. Thus, if we were to use only the unshifted matrix, then it is not hard to see that our prediction algorithm would only produce estimates for times $L, 2L, 3L$, and so on. Therefore, we must repeat this procedure $L$ times in order to produce an estimate for each entry.

Choosing the number of rows: Theorems 4.1 and 4.2 (and the associated corollaries) suggest the number of rows $L$ should be as large as possible, subject to the stated requirement on its growth relative to $T$.

4. Main Results

4.1. Properties

We now introduce the required properties of the observation and mean matrices, which identify the time series models for which our algorithm provides an effective method for imputation and prediction. Under these properties, we state Theorems 4.1 and 4.2, which establish the efficacy of our algorithm. The proofs of these theorems can be found in Appendices B and C, respectively. In Section 5, we argue these properties are satisfied by a large class of time series models.

Property 4.1.

Let the observation matrix and its mean satisfy the following:

  • A. For each row index and each column index:

    • the entries are independent sub-gaussian random variables (recall that this condition only requires the per-step noise to be independent; the underlying mean time series remains highly correlated) whose means are the corresponding entries of the mean matrix and whose sub-gaussian norms are uniformly bounded;

    • each entry is observed with probability $p$, independent of all other entries.

  • B. There exists a matrix of low rank whose entries approximate those of the mean matrix up to a small error.

Property 4.2.

For all shifts, let the observation matrices and their means satisfy the following:

  • A. For each row index and each column index:

    • each observation equals the corresponding latent mean plus noise, where the noise terms are independent sub-gaussian random variables with zero mean and uniformly bounded sub-gaussian norm;

    • each entry is observed with probability $p$, independent of all other entries.

  • B. There exists a coefficient vector, with norm bounded by a constant, that expresses the last row of the mean matrix as a linear combination of the remaining rows up to a small approximation error.

For forecasting, we make the more restrictive additive noise assumption since we focus on linear forecasting methods. Such methods generally require additive noise models. If one can construct linear forecasters under less restrictive assumptions, then we should be able to lift the analysis of such a forecaster to our setting in a straightforward way.

4.2. Imputation

The imputation algorithm produces an estimate of the underlying time series $f$. We measure the imputation error through the relative mean-squared error:


Recall from the imputation algorithm in Section 3.3 that the observations are arranged into a Page matrix and that ME produces the corresponding estimated matrix. It is then easy to see that for any matrix estimation method we have


Thus, we can immediately translate the (un-normalized) MSE of any matrix estimation method to the imputation error of the corresponding time series.

However, to highlight how the rank and the low-rank approximation error of the underlying mean matrix (induced by $f$) affect the error bound, we rely on Property 2.1, which elucidates these dependencies. Thus, we have the following theorem that establishes a precise link between time series imputation and matrix estimation methods.

Theorem 4.1.

Assume Property 4.1 holds and ME satisfies Property 2.1. Then, for some universal constant,


Theorem 4.1 states that any matrix estimation subroutine ME that satisfies Property 2.1 will accurately filter noisy observations and recover missing values. This is achieved provided that the rank of the mean matrix and our low-rank approximation error are not too large. Note that knowledge of the rank is not required a priori for many standard matrix estimation algorithms. For instance, (Chatterjee, 2015) does not utilize the rank in its estimation procedure; instead, it performs spectral thresholding of the observed data matrix in an adaptive, data-driven manner. Theorem 4.1 implies the following consistency property of our estimator.

Corollary 4.1 ().

Let the conditions for Theorem 4.1 hold. Let . (Note the condition is easily satisfied for any time series by adding a constant shift to every observation .) Further, suppose is ()-imputable for some and . Then for

We note that Theorem 4.1 follows in a straightforward manner from Property 2.1 and standard results from random matrix theory (Vershynin, 2010). However, we again highlight that our key contribution lies in establishing that the conditions of Corollary 4.1 hold for a large class of time series models (Section 5).

4.3. Forecast

Recall can only utilize information until time . For all , our forecasting algorithm learns with the previous time steps. We measure the forecasting error through:


Here, denotes the vector of forecasted values. The following result relies on a novel analysis of how applying a matrix estimation pre-processing step affects the prediction error of error-in-variable regression problems (in particular, it requires analyzing a non-standard error metric, the MRSE).
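A bare-bones version of the linear forecasting step (regress the last row of the de-noised Page matrix on the preceding rows, then apply the learned weights to the most recent observations) might read as follows; the function names and the least-squares solver are illustrative choices:

```python
import numpy as np

def learn_forecaster(M_hat):
    """Regress the last row of the de-noised Page matrix on the first
    L-1 rows; each column contributes one training example."""
    features = M_hat[:-1].T   # shape (num_columns, L-1)
    targets = M_hat[-1]       # shape (num_columns,)
    beta, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return beta

def forecast_next(history, beta):
    """One-step-ahead forecast from the last L-1 observed values."""
    return history[-len(beta):] @ beta

# On a noiseless harmonic the learned linear rule forecasts exactly.
t = np.arange(64, dtype=float)
f = np.cos(2 * np.pi * t / 16)
M = f.reshape(-1, 4).T               # Page matrix with L = 4
beta = learn_forecaster(M)
prediction = forecast_next(f, beta)  # forecast of the value at t = 64
```

In the full algorithm the regression is run on the output of the matrix estimation subroutine rather than on the raw (noisy, partially observed) matrix; the sketch uses the noiseless matrix only to keep the example self-contained.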

Theorem 4.2 ().

Assume Property 4.2 holds and ME satisfies Property 2.2, with 101010Refer to Section 2.2 for lower bounds on for various ME algorithms. The dependence of the bound on is implicitly captured in .. Let . Then,

Note that is trivially bounded by by assumption (see Section 3). If the underlying matrix is low-rank, then ME algorithms such as the USVT algorithm (cf. (Chatterjee, 2015)) will output an estimator with a small . However, since our bound holds for general ME methods, we explicitly state the dependence on .

In essence, Theorem 4.2 states that any matrix estimation subroutine ME that satisfies Property 2.2 will produce accurate forecasts from noisy, missing data. This is achieved provided the linear model approximation error is not too large (recall by Property 2.2). Additionally, Theorem 4.2 implies the following consistency property of .

Corollary 4.2 ().

Let the conditions for Theorem 4.2 hold. Suppose is -forecastable for any and for any . Then for , such that for ,

Similar to the case of imputation, a large contribution of this work is in establishing that the conditions of Corollary 4.2 hold for a large class of time series models (Section 5). Effectively, Corollary 4.2 demonstrates that learning a simple linear relationship among the singular vectors of the de-noised matrix is sufficient to drive the empirical error to zero for a broad class of time series models. The simplicity of this linear method suggests that our estimator will have low generalization error, but we leave that as future work.

We should also note that for auto-regressive processes (i.e., where is mean-zero noise), previous works (e.g., (Nardi and Rinaldo, 2011)) have already shown that simple linear forecasters are consistent estimators. For such models, it is easy to see that the underlying mean matrix is not (approximately) low-rank, and so it is not necessary to pre-process the data matrix via a matrix estimation subroutine as we propose in Section 3.3.

5. Family of Time Series That Fit Our Framework

In this section, we list a broad set of time series models that satisfy Properties 4.1 and 4.2, which are required for the results stated in Section 4. The proofs of these results can be found in Appendix D. To that end, we shall repeatedly use the following model types for our observations.


  • Model Type 1. For any , let be a sequence of independent sub-gaussian random variables with and . Note the noise on is generic (e.g., non-additive).

  • Model Type 2. For , let where are independent sub-gaussian random variables with and .
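For concreteness, observations under the additive-noise model (Model Type 2) with independent Bernoulli sampling of entries can be simulated as below; the noise level, sampling probability, and NaN encoding are illustrative choices of ours:

```python
import numpy as np

def observe(f_vals, sigma=0.1, p=0.7, seed=0):
    """Simulate additive-noise observations: add zero-mean Gaussian
    (hence sub-gaussian) noise to f(t), then reveal each entry
    independently with probability p; unobserved entries become NaN."""
    rng = np.random.default_rng(seed)
    noisy = f_vals + rng.normal(0.0, sigma, size=len(f_vals))
    mask = rng.random(len(f_vals)) < p
    return np.where(mask, noisy, np.nan)

obs = observe(np.zeros(10000))
frac_observed = np.mean(~np.isnan(obs))
```

Model Type 1 differs only in allowing the noise to enter non-additively, which the forecasting analysis rules out but the imputation analysis permits.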

5.1. Linear recurrent functions (LRFs)

For , let

Proposition 5.1 ().


  • Under Model Type 1, satisfies Property 4.1 with and . (To see this, take for example. WLOG, consider the first column. Then , which in turn gives and . By induction, it is not hard to see that this holds more generally for any finite .)

  • Under Model Type 2, satisfies Property 4.2 with and for all where is an absolute constant.

By Proposition 5.1, Theorems 4.1 and 4.2 give the following corollaries:

Corollary 5.1 ().

Under Model Type 1, let the conditions of Theorem 4.1 hold. Let for any . Then for some , if

we have .

Corollary 5.2 ().

Under Model Type 2, let the conditions of Theorem 4.2 hold. Let for any . Then for some , if

we have .

We now provide the rank for an important class of time series models: finite sums of products of polynomials, harmonics, and exponentials.

Proposition 5.2 ().

Let be a polynomial of degree . Then,

admits a representation as in (10). Further, the order of is independent of , the number of observations, and is bounded by

where .
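The claim that the order does not grow with the number of observations can be checked numerically. The sketch below computes the Page-matrix rank of f(t) = t·cos(t), a degree-1 polynomial times a harmonic, which has four-dimensional structure; the specific frequency and matrix shapes are our own choices:

```python
import numpy as np

def page_rank(T, L):
    """Rank of the L x floor(T/L) Page matrix of f(t) = t * cos(t)."""
    t = np.arange((T // L) * L, dtype=float)
    f = t * np.cos(t)
    M = f.reshape(-1, L).T
    return np.linalg.matrix_rank(M)
```

The rank stays fixed as T grows, which is exactly what the finite-sample corollaries above require.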

5.2. Functions with compact support

For , let


where takes the form with ; and is -Lipschitz for some .

Proposition 5.3 ().

For any ,

  • Under Model Type 1, satisfies Property 4.1 with and for some .

  • Under Model Type 2, satisfies Property 4.2 with and for all .

Using Proposition 5.3, Theorems 4.1 and 4.2 immediately lead to the following corollaries.

Corollary 5.3 ().

Under Model Type 1, let the conditions of Theorem 4.1 hold. Let for any . Then for some and any , if

we have .

Corollary 5.4 ().

Under Model Type 2, let the conditions of Theorem 4.2 hold. Let for any . Then for some and any , if

we have .

As the following proposition will make precise, any Lipschitz function of a periodic time series falls into this family.

Proposition 5.4 ().



where is -Lipschitz and is rational, admits a representation as in (11). Let denote the fundamental period, i.e., the smallest value such that is an integer for all . (Concretely: let and let be the least common multiple (LCM) of . Rewriting as , the numerators are all integers, and we define their LCM as . It is easy to verify that is indeed a fundamental period. As an example, consider , in which case the above computation yields .) Then the Lipschitz constant of is bounded by

5.3. Finite sum of sublinear trends

Consider such that


for some .

Proposition 5.5 ().

Let for some . Then for any ,

  • Under Model Type 1, satisfies Property 4.1 with and .

  • Under Model Type 2, satisfies Property 4.2 with and for all .

By Proposition 5.5 and Theorems 4.1 and 4.2, we immediately have the following corollaries on the finite sample performance guarantees of our estimators.

Corollary 5.5 ().

Under Model Type 1, let the conditions of Theorem 4.1 hold. Let for any . Then for some , if

we have .

Corollary 5.6 ().

Under Model Type 2, let the conditions of Theorem 4.2 hold. Let for any . Then for some and for any , if

we have .

Proposition 5.6 ().

For with for ,


admits a representation as in (13).

5.4. Additive mixture of dynamics

We now show that the imputation results hold even when we consider an additive mixture of any of the models described above. For , let


Here, each is such that under Model Type 1 with , Property 4.1 is satisfied with and for .

Proposition 5.7 ().

Under Model Type 1, satisfies Property 4.1 with and .

Proposition 5.7 and Corollary 4.1 imply the following.

Corollary 5.7 ().

Under Model Type 1, let the conditions of Theorem 4.1 hold. For each , let and for some . Then,