Generalization error bounds for stationary autoregressive models

03/04/2011 · Daniel J. McDonald et al., Carnegie Mellon University

We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that imposing stationarity is enough to control the Gaussian complexity without further regularization. This lets us use structural risk minimization for model selection. We demonstrate our methods by predicting interest rate movements.


1 Introduction

In standard machine learning situations, we observe one variable, $X$, and wish to predict another variable, $Y$, with an unknown joint distribution. Time series models are slightly different: we observe a sequence of observations $X_1, X_2, \ldots, X_n$ from some process, and we wish to predict $X_{n+h}$, for some horizon $h \geq 1$. Throughout what follows, $(X_t)$ will be a sequence of random variables, i.e., each $X_t$ is a measurable mapping from some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ into a measurable space $(\mathcal{X}, \mathcal{B})$. A block of the random sequence will be written $X_i^j = (X_i, X_{i+1}, \ldots, X_j)$, where either limit may go to infinity.

The goal in building a predictive model is to learn a function $f$ which maps the past into predictions for the future, evaluating the resulting forecasts through a loss function $\ell$ which gives the cost of errors. Ideally, we would use $f^*$, the function which minimizes the risk

$$R(f) = \mathbb{E}\left[\ell\left(f(X_1^t), X_{t+h}\right)\right]$$

over all $f \in \mathcal{F}$, the class of prediction functions we can use.

Since the true joint distribution of the sequence is unknown, so is $R(f)$, but it is often estimated with the error on a training sample of size $n$,

$$\hat{R}_n(f) = \frac{1}{n} \sum_{t=1}^{n} \ell\left(f(X_1^{t-1}), X_t\right), \qquad (1)$$

with $\hat{f}$ being the minimizer of $\hat{R}_n$ over $\mathcal{F}$. This is "empirical risk minimization".

While $\hat{R}_n(\hat{f})$ converges to $R(\hat{f})$ for many algorithms, one can show that when $\hat{f}$ minimizes (1), $\mathbb{E}[\hat{R}_n(\hat{f})] \leq R(\hat{f})$. This is because the choice of $\hat{f}$ adapts to the training data, causing the training error to be an over-optimistic estimate of the true risk. Moreover, training error must shrink as model complexity grows. Thus, empirical risk minimization gives unsatisfying results: it will tend to overfit the data and give poor out-of-sample predictions. Statistics and machine learning propose two mitigation strategies. The first is to restrict the class $\mathcal{F}$. The second, which we follow, is to change the optimization problem by penalizing model complexity. Without the true distribution, the prediction risk, or generalization error, is inaccessible. Instead, the goal is to find bounds on the risk which hold with high probability: "probably approximately correct" (PAC) bounds. A typical result is a confidence bound on the risk which says that, with probability at least $1 - \eta$,

$$R(\hat{f}) \leq \hat{R}_n(\hat{f}) + \Delta\left(C(\mathcal{F}), \eta, n\right),$$

where $C(\mathcal{F})$ measures the complexity of the model class $\mathcal{F}$, and $\Delta$ is a function of this complexity, the confidence level, and the number of observed data points.
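To make the over-optimism of training error concrete, here is a small simulated sketch (not from the paper; the data-generating function, noise level, and model classes are arbitrary illustrative choices). Empirical risk minimization over nested polynomial classes drives the in-sample error down monotonically with complexity, even though the extra flexibility only chases noise.

```python
import numpy as np

# Illustrative simulation: training error shrinks as model complexity grows,
# because each larger (nested) class can only fit the sample better.
rng = np.random.default_rng(0)

n = 50
x = np.linspace(-1, 1, n)
y = np.sin(np.pi * x) + rng.normal(scale=0.5, size=n)  # noisy training data

def training_error(degree):
    """Fit a polynomial of the given degree by least squares; return in-sample MSE."""
    coefs = np.polyfit(x, y, degree)
    fitted = np.polyval(coefs, x)
    return float(np.mean((y - fitted) ** 2))

degrees = (1, 3, 7, 15)
train_errs = [training_error(d) for d in degrees]

# Nested classes: the empirical risk minimizer's training error is non-increasing.
assert all(a >= b for a, b in zip(train_errs, train_errs[1:]))
```

The same monotone decrease is what makes raw training error useless for choosing the model order; the bound developed below penalizes the decrease with a complexity term.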

The statistics and machine learning literature contains many generalization error bounds for both classification and regression problems with IID data, but their extension to time series prediction is a fairly recent development; in 1997, Vidyasagar [21] named extending such results to time series as an important open problem. Yu [22] sets forth many of the uniform ergodic theorems needed to derive generalization error bounds for stochastic processes. Meir [11] is one of the first papers to construct risk bounds for time series. His approach was to consider a stationary but infinite-memory process, and to decompose the training error of a finite-memory predictor, chosen through empirical risk minimization, into three parts: an estimation error incurred from the use of limited and noisy data, an approximation error due to selecting a predictor from a class of limited complexity, and a loss from approximating an infinite-memory process with a finite-memory one.

More recently, others have provided PAC results for non-IID data. Steinwart and Christmann [20] prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from α-mixing processes, a fairly general sort of weak serial dependence, obtaining learning rates for least-squares support vector machines (SVMs) close to the optimal IID rates.

Mohri and Rostamizadeh [13] prove stability-based generalization bounds when the data are stationary and φ-mixing or β-mixing, strictly generalizing IID results and applying to all stable learning algorithms. (We define β-mixing below.) Karandikar and Vidyasagar [8] show that if an algorithm is "sub-additive" and yields a predictor whose risk can be upper bounded when the data are IID, then the same algorithm yields predictors whose risk can be bounded if the data are β-mixing. They use this result to derive generalization error bounds in terms of the learning rates for IID data and the β-mixing coefficients.

All these generalization bounds for dependent data rely on notions of complexity which, while common in machine learning, are hard to apply to models and algorithms ubiquitous in the time series literature. SVMs, neural networks, and kernel methods have known complexities, so their risk can be bounded on dependent data as well. On the other hand, autoregressive moving average (ARMA) models, generalized autoregressive conditional heteroskedasticity (GARCH) models, and state-space models in general have unknown complexity and are therefore neglected theoretically. (This does not keep them from being used in applied statistics, or even in machine learning and robotics, e.g., [17, 15, 18, 2, 10].) Arbitrarily regularizing such models will not do, as often the only assumption applied researchers are willing to make is that the time series is stationary.

We show that the assumption of stationarity regularizes autoregressive (AR) models implicitly, allowing for the application of risk bounds without the need for additional penalties. This result follows from work in the optimal control and systems design literatures but the application is novel. In §2, we introduce concepts from time series and complexity theory necessary for our results. Section 3 uses these results to calculate explicit risk bounds for autoregressive models. Section 4 illustrates the applicability of our methods by forecasting interest rate movements. We discuss our results and articulate directions for future research in §5.

2 Preliminaries

Before developing our results, we need to explain the idea of effective sample size for dependent data and the closely related measure of serial dependence called β-mixing, as well as the Gaussian complexity technique for measuring model complexity.

2.1 Time series

Because time-series data are dependent, the number of data points in a sample exaggerates how much information the sample contains. Knowing the past allows forecasters to predict future data (at least to some degree), so actually observing those future data points gives less information about the underlying data-generating process than in the IID case. Thus, the sample size term in a probabilistic risk bound must be adjusted to reflect the dependence in the data source. This effective sample size may be much less than the nominal sample size $n$.

We investigate only stationary β-mixing input data. We first remind the reader of the notion of (strict or strong) stationarity.

Definition 2.1 (Stationarity).

A sequence of random variables $(X_t)$ is stationary when all its finite-dimensional distributions are invariant over time: for all $t$ and all non-negative integers $k$ and $a$, the random vectors $X_t^{t+k}$ and $X_{t+a}^{t+k+a}$ have the same distribution.

From among all the stationary processes, we restrict ourselves to those where widely-separated observations are asymptotically independent. Stationarity does not imply that the random variables $X_t$ are independent across time, only that their joint distribution is constant in time. The next definition describes the nature of the serial dependence which we are willing to allow.

Definition 2.2 (β-Mixing).

Let $\sigma_i^j = \sigma\left(X_i^j\right)$ be the σ-field of events generated by the appropriate collection of random variables. Let $\mathbb{P}_{1:t}$ be the restriction of $\mathbb{P}$ to $\sigma_1^t$, $\mathbb{P}_{t+a:\infty}$ be the restriction of $\mathbb{P}$ to $\sigma_{t+a}^\infty$, and $\mathbb{P}_{t,a}$ be the restriction of $\mathbb{P}$ to $\sigma\left(X_1^t, X_{t+a}^\infty\right)$. The coefficient of absolute regularity, or β-mixing coefficient, $\beta(a)$, is given by

$$\beta(a) = \left\| \mathbb{P}_{1:t} \otimes \mathbb{P}_{t+a:\infty} - \mathbb{P}_{t,a} \right\|_{TV}, \qquad (2)$$

where $\|\cdot\|_{TV}$ is the total variation norm. A stochastic process is absolutely regular, or β-mixing, if $\beta(a) \to 0$ as $a \to \infty$.

This is only one of many equivalent characterizations of β-mixing (see Bradley [3] for others). This definition makes clear that a process is β-mixing if the joint probability of events which are widely separated in time increasingly approaches the product of the individual probabilities, i.e., that the process is asymptotically independent. Typically, a supremum over $t$ is taken in (2); however, this is unnecessary for stationary processes, i.e., $\beta(a)$ as defined above does not depend on $t$.
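As a concrete illustration, for a finite-state Markov chain the β-mixing coefficient at lag $a$ can be computed exactly by averaging the total variation distance between the $a$-step transition distribution and the stationary distribution (the Markov-chain form of $\beta(a)$ used again in §4). The chain below is an arbitrary hypothetical example, not anything from the paper:

```python
import numpy as np

# Hypothetical two-state Markov chain.  For a homogeneous Markov chain,
#   beta(a) = sum_x pi(x) * TV(P^a(x, .), pi),
# where P^a is the a-step transition matrix and pi the stationary distribution.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

def beta(a):
    """beta-mixing coefficient at lag a for this chain."""
    Pa = np.linalg.matrix_power(P, a)
    tv_per_state = 0.5 * np.abs(Pa - pi).sum(axis=1)  # TV(P^a(x,.), pi) per x
    return float(pi @ tv_per_state)

coeffs = [beta(a) for a in range(1, 11)]
# The chain is beta-mixing: the coefficients decay geometrically toward zero.
assert all(b1 >= b2 for b1, b2 in zip(coeffs, coeffs[1:]))
assert coeffs[-1] < 0.02
```

The geometric decay here is typical of well-behaved Markov chains; processes with longer memory mix more slowly, shrinking the effective sample size discussed above.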

2.2 Gaussian complexity

Statistical learning theory provides several ways of measuring the complexity of a class of predictive models. The results we use here rely on Gaussian complexity (see, e.g., Bartlett and Mendelson [1]), which can be thought of as measuring how well the model can (seem to) fit white noise.

Definition 2.3 (Gaussian Complexity).

Let $X_1^n$ be a (not necessarily IID) sample drawn according to $\mathbb{P}$. The empirical Gaussian complexity of a class $\mathcal{F}$ is

$$\hat{G}_n(\mathcal{F}) = \mathbb{E}_W\left[\sup_{f \in \mathcal{F}} \frac{2}{n} \sum_{i=1}^{n} W_i f(X_i) \,\middle|\, X_1^n\right],$$

where $W_1, \ldots, W_n$ are a sequence of random variables, independent of each other and of everything else, and drawn from a standard Gaussian distribution. The Gaussian complexity is

$$G_n(\mathcal{F}) = \mathbb{E}\left[\hat{G}_n(\mathcal{F})\right],$$

where the expectation is over sample paths generated by $\mathbb{P}$.

The term inside the supremum, $\frac{2}{n}\sum_{i} W_i f(X_i)$, is (twice) the sample covariance between the noise and the predictions of a particular model $f$. The Gaussian complexity takes the largest value of this sample covariance over all models in the class (mimicking empirical risk minimization), then averages over realizations of the noise.

Intuitively, Gaussian complexity measures how well our models could seem to fit outcomes which were really just noise, giving a baseline against which to assess the risk of over-fitting or failing to generalize. As the sample size $n$ grows, for any given $f$ the sample covariance $\frac{1}{n}\sum_i W_i f(X_i) \to 0$, by the ergodic theorem. The overall Gaussian complexity should also shrink, though more slowly, unless the model class is so flexible that it can fit absolutely anything, in which case one can conclude nothing about how well it will predict in the future from the fact that it performed well in the past.
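A Monte Carlo sketch of Definition 2.3 for a toy class in which the supremum has a closed form (the class, sample sizes, and number of noise draws are illustrative choices, not from the paper): for $\mathcal{F} = \{f(x) = \theta x : |\theta| \leq 1\}$, the supremum over the class is $\frac{2}{n}\left|\sum_i W_i x_i\right|$, so the empirical Gaussian complexity is easy to average over draws of the noise $W$.

```python
import numpy as np

# Monte Carlo estimate of the empirical Gaussian complexity of the class
# F = { f(x) = theta * x : |theta| <= 1 } on a fixed sample x.
rng = np.random.default_rng(1)

def empirical_gaussian_complexity(x, n_draws=2000):
    n = len(x)
    W = rng.normal(size=(n_draws, n))      # iid standard Gaussian noise draws
    sups = (2.0 / n) * np.abs(W @ x)       # closed-form sup over the class
    return float(sups.mean())

x_small = rng.normal(size=50)
x_large = rng.normal(size=5000)

g_small = empirical_gaussian_complexity(x_small)
g_large = empirical_gaussian_complexity(x_large)

# Complexity shrinks as the sample grows (roughly at the 1/sqrt(n) rate).
assert g_large < g_small
```

The shrinkage with $n$ is exactly the behavior described above: a fixed, suitably constrained class looks less and less able to fit pure noise as the sample grows.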

2.3 Error bounds for β-mixing data

Mohri and Rostamizadeh [12] present Gaussian-complexity-based error bounds for stationary β-mixing sequences, a generalization of similar bounds presented earlier for the IID case. (In fact, they present their bounds in terms of the Rademacher complexity, a closely related idea; using Gaussian complexity instead requires no modification of their results while simplifying the proofs contained here. The constant in Theorem 2.4 is given in Ledoux and Talagrand [9].) The results are data-dependent and measure the complexity of a class of hypotheses based on the training sample.

Theorem 2.4.

Let $\mathcal{F}$ be a space of candidate predictors and let $\mathcal{L}_{\mathcal{F}}$ be the space of induced losses:

$$\mathcal{L}_{\mathcal{F}} = \left\{ \ell_f : \ell_f(\cdot) = \ell(f, \cdot),\ f \in \mathcal{F} \right\}$$

for some loss function $\ell$ bounded by $M$. Then for any sample $X_1^n$ drawn from a stationary β-mixing distribution, and for any $\mu, a > 0$ with $2\mu a = n$ and $\eta > 4(\mu - 1)\beta(a)$, where $\beta(a)$ is the mixing coefficient, with probability at least $1 - \eta$,

$$R(f) \leq \hat{R}_n(f) + \hat{G}_\mu(\mathcal{L}_{\mathcal{F}}) + 3M\sqrt{\frac{\log \frac{4}{\eta'}}{2\mu}}$$

and

$$R(f) \leq \hat{R}_n(f) + G_\mu(\mathcal{L}_{\mathcal{F}}) + M\sqrt{\frac{\log \frac{2}{\eta'}}{2\mu}},$$

where $\eta' = \eta - 4(\mu - 1)\beta(a)$, with $\log(4/\eta')$ appearing in the first case and $\log(2/\eta')$ in the second.

The generalization error bounds in Theorem 2.4 have a straightforward interpretation. The risk of a chosen model is controlled, with high probability, by three terms. The first term, the training error, describes how well the model performs in-sample. More complicated models can more closely fit any data set, so increased complexity leads to smaller training error. This is penalized by the second term, the Gaussian complexity. The first bound uses the empirical Gaussian complexity, which is calculated from the data, while the second uses the expected Gaussian complexity, and is therefore tighter. The third term is the confidence term, a function only of the confidence level and the effective number of data points $\mu$ on which the model was based. While the model was actually trained on $n$ data points, dependence forces this number to be discounted. The discounting is accomplished by taking widely spaced blocks of points: under the asymptotic independence quantified by $\beta(a)$, the spacing lets us treat these blocks as nearly independent.
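The blocking arithmetic can be sketched as follows. The constants below mirror the generic PAC confidence form $\sqrt{\log(2/\eta')/(2\mu)}$ and the mixing correction $\eta' = \eta - 4(\mu-1)\beta(a)$ rather than reproducing Theorem 2.4 exactly; the numbers are illustrative:

```python
import math

# Schematic blocking arithmetic: n dependent observations form 2*mu blocks of
# length a; only mu alternating, widely spaced blocks act as "effectively
# independent" data.
def effective_sample_size(n, a):
    """Number of usable blocks mu when n points form 2*mu blocks of length a."""
    return n // (2 * a)

def confidence_term(mu, eta, beta_a):
    """Generic PAC confidence penalty with the beta-mixing correction."""
    eta_prime = eta - 4 * (mu - 1) * beta_a
    if eta_prime <= 0:
        raise ValueError("mixing too strong for this block length / confidence")
    return math.sqrt(math.log(2 / eta_prime) / (2 * mu))

mu = effective_sample_size(12000, 50)   # e.g. 12000 points, blocks of length 50
assert mu == 120
term = confidence_term(mu, eta=0.05, beta_a=1e-5)
assert 0 < term < 1
```

Note the trade-off in choosing the block length $a$: longer blocks make $\beta(a)$ smaller (the blocks look more independent) but leave fewer blocks $\mu$, inflating the confidence term.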

3 Results

Autoregressive models are used frequently in economics, finance, and other disciplines. Their main utility lies in their straightforward parametric form, as well as their interpretability: predictions for the future are linear combinations of some fixed length of previous observations. See Shumway and Stoffer [19] for a standard introduction.

Suppose that $(X_t)$ is a real-valued random sequence, evolving as

$$X_t = \alpha_1 X_{t-1} + \alpha_2 X_{t-2} + \cdots + \alpha_d X_{t-d} + \epsilon_t,$$

where $(\epsilon_t)$ has mean zero and finite variance, $\mathbb{E}[\epsilon_t \epsilon_s] = 0$ for all $t \neq s$, and $\mathbb{E}[\epsilon_t X_s] = 0$ for all $s < t$. This is the traditional specification of an autoregressive model of order $d$, or AR($d$). Having observed data $X_1^n$, and supposing $d$ to be known, fitting the model amounts to estimating the coefficients $\alpha = (\alpha_1, \ldots, \alpha_d)^\top$. The most natural way to do this is to use ordinary least squares (OLS). Let $\mathbf{Y} = (X_{d+1}, \ldots, X_n)^\top$, and let $\mathbf{Z}$ be the $(n-d) \times d$ design matrix whose $i$th row is $\mathbf{Z}_i = (X_{d+i-1}, X_{d+i-2}, \ldots, X_i)$.

Generalization error bounds for these processes follow from an ability to characterize their Gaussian complexity. The theorem below uses stationarity to bound the risk of AR models. The remainder of this section provides the components necessary to prove the results.

Theorem 3.1.

Let $X_1^n$ be a sample of length $n$ from a stationary β-mixing distribution. For any $\mu, a > 0$ with $2\mu a = n$ and $\eta > 4(\mu - 1)\beta(a)$, then under squared-error loss truncated at $M$, the prediction error of an AR($d$) model ($d \geq 2$) can be bounded with probability at least $1 - \eta$ using

$$R(f) \leq \hat{R}_n(f) + \frac{2L\sqrt{2\log V}}{\mu} \max_{1 \leq v \leq V} \sqrt{\sum_{i=1}^{\mu} \left(q_v^\top \mathbf{Z}_i\right)^2} + 3M\sqrt{\frac{\log \frac{4}{\eta'}}{2\mu}}$$

or

$$R(f) \leq \hat{R}_n(f) + \mathbb{E}\left[\frac{2L\sqrt{2\log V}}{\mu} \max_{1 \leq v \leq V} \sqrt{\sum_{i=1}^{\mu} \left(q_v^\top \mathbf{Z}_i\right)^2}\right] + M\sqrt{\frac{\log \frac{2}{\eta'}}{2\mu}},$$

where $\eta' = \eta - 4(\mu - 1)\beta(a)$, $L$ is the Lipschitz constant of the truncated loss, $V$ is the number of vertices of the stability domain $\Lambda_d$, $q_v$ is the $v$th vertex of the stability domain, and $\mathbf{Z}_i$ is the $i$th row of the design matrix $\mathbf{Z}$.

For $d = 1$, slight adjustments are required, since the stability domain is simply an interval whose extreme points are $\pm 1$. We state this result as a corollary.

Corollary 3.2.

Under the same conditions as above, the prediction error of an AR(1) model can be bounded with probability at least $1 - \eta$ using

$$R(f) \leq \hat{R}_n(f) + \frac{2L}{\mu}\sqrt{\frac{2}{\pi}\sum_{i=1}^{\mu} Z_i^2} + 3M\sqrt{\frac{\log \frac{4}{\eta'}}{2\mu}}$$

or

$$R(f) \leq \hat{R}_n(f) + \mathbb{E}\left[\frac{2L}{\mu}\sqrt{\frac{2}{\pi}\sum_{i=1}^{\mu} Z_i^2}\right] + M\sqrt{\frac{\log \frac{2}{\eta'}}{2\mu}}.$$

3.1 Proof components

To prove Theorem 3.1 it is necessary to control the size of the model class by using the stationarity assumption.

3.1.1 Stationarity controls the hypothesis space

Define, as an estimator of $\alpha$,

$$\hat{\alpha} = \operatorname*{argmin}_{\alpha \in \mathbb{R}^d} \left\| \mathbf{Y} - \mathbf{Z}\alpha \right\|_2^2, \qquad (3)$$

where $\|\cdot\|_2$ is the Euclidean norm. (There are other ways to estimate AR models, but they typically amount to very similar optimization problems.) Equation 3 has the usual closed-form OLS solution:

$$\hat{\alpha} = \left(\mathbf{Z}^\top \mathbf{Z}\right)^{-1} \mathbf{Z}^\top \mathbf{Y}. \qquad (4)$$

Despite the simplicity of Eq. 4, modellers often require that the estimated autoregressive process be stationary. This can be checked algebraically: the complex roots of the characteristic polynomial

$$z^d - \alpha_1 z^{d-1} - \cdots - \alpha_{d-1} z - \alpha_d$$

must lie strictly inside the unit circle. Eq. 3 is thus not quite right for estimating a stationary autoregressive model, as it does not incorporate this constraint.

Constraining the roots of the characteristic polynomial constrains the coefficients $\alpha$. The set of coefficient vectors for which the process is stationary is the stability domain, $\Lambda_d \subset \mathbb{R}^d$. Clearly, $\Lambda_1$ is just the interval $(-1, 1)$. Fam and Meditch [6] give a recursive method for determining $\Lambda_d$ for general $d$. In particular, they show that the convex hull of the space of stationary solutions is a convex polyhedron whose vertices are the extreme points of $\Lambda_d$. This convex hull essentially determines the complexity of stationary AR models.
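For $d = 2$, the stability domain is the familiar open triangle $\{(\alpha_1, \alpha_2) : \alpha_1 + \alpha_2 < 1,\ \alpha_2 - \alpha_1 < 1,\ |\alpha_2| < 1\}$, whose closure has vertices $(2, -1)$, $(-2, -1)$, and $(0, 1)$. A quick numerical check confirms that the root criterion agrees with this characterization:

```python
import numpy as np

# Verify the AR(2) stability triangle against the characteristic-root criterion.
def is_stationary(a1, a2):
    """True iff all roots of z^2 - a1*z - a2 lie strictly inside the unit circle."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))

vertices = [(2.0, -1.0), (-2.0, -1.0), (0.0, 1.0)]

# Points slightly inside the triangle (shrunk vertices) are stationary;
# points slightly beyond the vertices are not.
for (v1, v2) in vertices:
    assert is_stationary(0.99 * v1, 0.99 * v2)
    assert not is_stationary(1.01 * v1, 1.01 * v2)

assert is_stationary(0.5, -0.3)       # interior point
assert not is_stationary(1.2, 0.3)    # violates a1 + a2 < 1
```

It is this small set of extreme points, three for $d = 2$, that the Gaussian-complexity calculation below reduces to.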

3.1.2 Gaussian complexity of AR models

Returning to the AR($d$) model, it is necessary to find the Gaussian complexity of the function class

$$\mathcal{F}_d = \left\{ f : f(\mathbf{z}) = \alpha^\top \mathbf{z},\ \alpha \in \Lambda_d \right\}.$$

Theorem 3.3.

For the AR($d$) model with $d \geq 2$, the empirical Gaussian complexity is bounded by

$$\hat{G}_n(\mathcal{F}_d) \leq \frac{2\sqrt{2\log V}}{n} \max_{1 \leq v \leq V} \sqrt{\sum_{i} \left(q_v^\top \mathbf{Z}_i\right)^2},$$

where $V$ is the number of vertices of the convex hull of the stability domain, $q_v$ is the $v$th vertex, and $\mathbf{Z}_i$ is the $i$th row of the design matrix $\mathbf{Z}$.

The proof relies on the following version of Slepian’s Lemma (see, for example Ledoux and Talagrand [9] or Bartlett and Mendelson [1]).

Lemma 3.4 (Slepian).

Let $\xi_1, \ldots, \xi_V$ be random variables with $\xi_v = \sum_i a_{vi} W_i$ for all $v$, where $W_1, \ldots, W_n$ are iid standard normal random variables. Then,

$$\mathbb{E}\left[\max_{1 \leq v \leq V} \xi_v\right] \leq \sqrt{2 \log V}\, \max_{1 \leq v \leq V} \sqrt{\sum_i a_{vi}^2}.$$

Proof of Theorem 3.3.

$$\hat{G}_n(\mathcal{F}_d) = \mathbb{E}_W\left[\sup_{\alpha \in \Lambda_d} \frac{2}{n} \sum_{i} W_i\, \alpha^\top \mathbf{Z}_i\right] = \mathbb{E}_W\left[\sup_{\alpha \in \operatorname{conv}(\Lambda_d)} \frac{2}{n} \sum_{i} W_i\, \alpha^\top \mathbf{Z}_i\right],$$

where the last equality follows from Theorem 12 in [1]. By standard results from convex optimization, this supremum is attained at one of the vertices of $\operatorname{conv}(\Lambda_d)$. Therefore,

$$\hat{G}_n(\mathcal{F}_d) = \mathbb{E}_W\left[\max_{1 \leq v \leq V} \frac{2}{n} \sum_i W_i\, q_v^\top \mathbf{Z}_i\right],$$

where $q_v$ is the $v$th vertex of $\operatorname{conv}(\Lambda_d)$. Let $\xi_v = \sum_i W_i\, q_v^\top \mathbf{Z}_i$. Then by Lemma 3.4,

$$\hat{G}_n(\mathcal{F}_d) \leq \frac{2\sqrt{2\log V}}{n} \max_{1 \leq v \leq V} \sqrt{\sum_i \left(q_v^\top \mathbf{Z}_i\right)^2},$$

where $q_v^\top \mathbf{Z}_i = \sum_j (q_v)_j Z_{ij}$ and $Z_{ij}$ is the $(i,j)$-entry of the design matrix. ∎

When $d = 1$, as in Corollary 3.2, we can calculate the complexity directly rather than bounding it. The proof's last line shows that we are essentially interested in the diameter of the stability domain projected onto the column space of $\mathbf{Z}$, which gives a tighter bound than the general results on linear prediction in, e.g., Kakade et al. [7].

Since we care about the complexity of the model class viewed through the loss function $\ell$, we must also account for this additional complexity. For $L$-Lipschitz loss functions, this just means multiplying the Gaussian complexity by $L$.

4 Application

We illustrate our results by predicting interest rate changes, specifically the 10-year Treasury Constant Maturity Rate series from the Federal Reserve Bank of St. Louis' FRED database (available at http://research.stlouisfed.org/fred2/series/DGS10?cid=115), recorded daily from January 2, 1962 to August 31, 2010. Transforming the series into daily natural-log growth rates gives the observations shown in Figure 1.

Figure 1: Growth rate of 10-year treasury bond
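The transformation is just first differences of logs. A minimal sketch (the numbers below are made up for illustration, not actual DGS10 values):

```python
import numpy as np

# Daily natural-log growth rates: r_t = log(y_t) - log(y_{t-1}).
y = np.array([4.06, 4.03, 3.99, 4.02, 4.05])  # hypothetical rate levels
r = np.diff(np.log(y))

assert len(r) == len(y) - 1
# The transform is invertible: accumulating the growth rates recovers the levels.
assert np.allclose(np.exp(np.cumsum(r)) * y[0], y[1:])
```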

The changing variance apparent in the figure is why interest rates are typically forecast with GARCH(1,1) models. For this illustration, however, we will use an AR model, picking the memory order $d$ by the risk bound.

Figure 2 shows the training error

$$\hat{R}_n(d) = \frac{1}{n - d} \sum_{t = d+1}^{n} \left(X_t - \hat{X}_t\right)^2,$$

where $X_t$ is the $t$th datapoint and $\hat{X}_t$ is the model's prediction. The training error shrinks as the order $d$ of the model grows, as it must, since ordinary least squares minimizes $\hat{R}_n(d)$ for each given $d$. Also shown is the gap between the AIC for different $d$ and the lowest attainable value; AIC would select an AR(36) model.

Figure 2: Training error (top panel) and AIC (bottom panel) against model order

A better strategy uses the probabilistic risk bound derived above. The goal of model selection is to pick, with high probability, the model with the smallest risk; this is Vapnik's structural risk minimization principle. Here, it is clear that AIC is dramatically overfitting. The optimal model according to the risk bound is an AR(1). Figure 3 plots the risk bound against $d$ with the loss function truncated at a fixed cap $M$. (No daily interest rate change in the sample has loss larger than the cap, and results are fairly insensitive to its level.) This bound says that, with 95% probability, regardless of the true data-generating process, the AR(1) model will make mistakes with squared error no larger than the plotted bound. Had we instead predicted every change to be zero, a loss that large would have occurred three times.

Figure 3: Generalization error bound for different model orders

One issue with Theorem 2.4 is that it requires knowledge of the β-mixing coefficients $\beta(a)$. Of course, the dependence structure of these data is unknown, so we calculated the coefficients under generous assumptions on the data-generating process. For a homogeneous Markov process, the β-mixing coefficients work out to

$$\beta(a) = \int \left\| P^a(x, \cdot) - \pi \right\|_{TV} \, \pi(dx),$$

where $P^a$ is the $a$-step transition operator and $\pi$ is the stationary distribution [14, 4]. Since AR models are Markovian, we estimated an AR($d$) model with Gaussian errors for large $d$ and calculated the mixing coefficients using the stationary and transition distributions. To create the bound, we then chose the block length $a$ and the resulting effective sample size $\mu$. We address non-parametric estimation of β-mixing coefficients elsewhere [Anon.].
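To illustrate the computation, the sketch below evaluates the Markov-chain formula numerically for a Gaussian AR(1) with an arbitrary coefficient; this illustrates the procedure, not the paper's actual fitted model. For $X_t = \phi X_{t-1} + \epsilon_t$ with $\epsilon_t \sim N(0,1)$, the $a$-step transition from $x$ is $N(\phi^a x,\ (1-\phi^{2a})/(1-\phi^2))$ and the stationary law is $N(0,\ 1/(1-\phi^2))$:

```python
import numpy as np

# Numerical beta-mixing coefficients for a Gaussian AR(1) with phi = 0.6,
# via grid integration of the total variation distance between the a-step
# transition law and the stationary law, averaged over the stationary law.
phi = 0.6

def normal_pdf(z, mean, var):
    return np.exp(-(z - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def beta(a, grid=np.linspace(-10, 10, 2001)):
    dz = grid[1] - grid[0]
    s2 = 1.0 / (1.0 - phi ** 2)                      # stationary variance
    v_a = (1.0 - phi ** (2 * a)) / (1.0 - phi ** 2)  # a-step conditional variance
    pi = normal_pdf(grid, 0.0, s2)
    # TV(P^a(x, .), pi) for each x on the grid, then average against pi.
    tv = np.array([0.5 * np.sum(np.abs(normal_pdf(grid, phi ** a * x, v_a) - pi)) * dz
                   for x in grid])
    return float(np.sum(tv * pi) * dz)

coeffs = [beta(a) for a in (1, 2, 5, 10)]
# The Gaussian AR(1) is beta-mixing: coefficients decay rapidly toward zero.
assert all(b1 >= b2 for b1, b2 in zip(coeffs, coeffs[1:]))
assert coeffs[-1] < 0.05
```

In practice these coefficients then feed into the choice of block length $a$ and effective sample size $\mu$ when evaluating the bound.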

5 Discussion

We have constructed a finite-sample predictive risk bound for autoregressive models, using the stationarity assumption to constrain OLS estimation. Interestingly, stationarity — a common assumption among applied researchers — constrains the model space enough to yield bounds without further regularization. Moreover, this is the first predictive risk bound we know of for any of the standard models of time series analysis.

Traditionally, time series analysts have selected models by blending empirical risk minimization, more-or-less quantitative inspection of the residuals (e.g., the Box-Ljung test; see [19]), and AIC. In many applications, however, what really matters is prediction, and none of these techniques, including AIC, controls generalization error, especially with mis-specification. (Cross-validation is a partial exception, but it is tricky for time series; see [16] and references therein.) Our bound controls prediction risk directly. Admittedly, our bound covers only univariate autoregressive models, the plainest of a large family of traditional time series models, but we believe a similar result will cover the more elaborate members of the family such as vector autoregressive, autoregressive-moving average, or autoregressive conditionally heteroskedastic models. While the characterization of the stationary domain from [6] on which we relied breaks down for such models, they are all variants of the linear state space model [5], whose parameters are restricted under stationarity, and so we hope to obtain a general risk bound, possibly with stronger variants for particular specifications.

References

  • Bartlett and Mendelson [2002] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
  • Becker et al. [2008] B.C. Becker, H. Tummala, and C.N. Riviere. Autoregressive modeling of physiological tremor under microsurgical conditions. In Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE, pages 1948–1951. IEEE, 2008.
  • Bradley [2005] Richard C. Bradley. Basic properties of strong mixing conditions. A survey and some open questions. Probability Surveys, 2:107–144, 2005. URL http://arxiv.org/abs/math/0511078.
  • Davydov [1973] Y.A. Davydov. Mixing conditions for Markov chains. Theory of Probability and its Applications, 18(2):312–328, 1973.
  • Durbin and Koopman [2001] J. Durbin and S.J. Koopman. Time Series Analysis by State Space Methods. Oxford Univ Press, Oxford, 2001.
  • Fam and Meditch [1978] Adly T. Fam and James S. Meditch. A canonical parameter space for linear systems design. IEEE Transactions on Automatic Control, 23(3):454–458, 1978.
  • Kakade et al. [2008] Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. Technical report, NIPS, 2008. URL http://ttic.uchicago.edu/~karthik/rad-paper.pdf.
  • Karandikar and Vidyasagar [2009] R. L. Karandikar and M. Vidyasagar. Probably approximately correct learning with beta-mixing input sequences. submitted for publication, 2009.
  • Ledoux and Talagrand [1991] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. A Series of Modern Surveys in Mathematics. Springer Verlag, Berlin, 1991. ISBN 3540520139.
  • Li and Moore [2008] J. Li and A.W. Moore. Forecasting web page views: Methods and observations. Journal of Machine Learning Research, 9:2217–2250, 2008.
  • Meir [2000] Ron Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, 39(1):5–34, 2000. URL http://www.ee.technion.ac.il/~rmeir/Publications/MeirTimeSeries00.pdf.
  • Mohri and Rostamizadeh [2009] Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-iid processes. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, volume 21, pages 1097–1104, 2009.
  • Mohri and Rostamizadeh [2010] Mehryar Mohri and Afshin Rostamizadeh. Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research, 11:789–814, February 2010.
  • Mokkadem [1988] A. Mokkadem. Mixing properties of ARMA processes. Stochastic Processes and their Applications, 29(2):309–315, 1988.
  • Olsson and Hansen [2006] R.K. Olsson and L.K. Hansen. Linear state-space models for blind source separation. The Journal of Machine Learning Research, 7:2585–2602, 2006. ISSN 1532-4435.
  • Racine [2000] J. Racine. Consistent cross-validatory model-selection for dependent data: Hv-block cross-validation. Journal of Econometrics, 99(1):39–61, 2000.
  • Ruiz-del Solar and Vallejos [2005] J. Ruiz-del Solar and P. Vallejos. Motion detection and tracking for an AIBO robot using motion compensation and Kalman filtering. In Lecture Notes in Computer Science 3276 (RoboCup 2004), pages 619–627. Springer Verlag, 2005.
  • Sak et al. [2006] M. Sak, D.L. Dowe, and S. Ray. Minimum message length moving average time series data mining. In Computational Intelligence Methods and Applications, 2005 ICSC Congress on, page 6. IEEE, 2006. ISBN 1424400201.
  • Shumway and Stoffer [2000] R.H. Shumway and D.S. Stoffer. Time Series Analysis and Its Applications. Springer Series in Statistics. Springer Verlag, New York, 2000.
  • Steinwart and Christmann [2009] Ingo Steinwart and Andreas Christmann. Fast learning from non-i.i.d. observations. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1768–1776. MIT Press, 2009. URL http://books.nips.cc/papers/files/nips22/NIPS2009_1061.pdf.
  • Vidyasagar [1997] M. Vidyasagar. A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Springer Verlag, Berlin, 1997.
  • Yu [1994] Bin Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94–116, 1994.
