
Moments of Maximum: Segment of AR(1)

Let X_t denote a stationary first-order autoregressive process. Consider five contiguous observations (in time t) of the series (e.g., X_1, ..., X_5). Let M denote the maximum of these. Let ρ be the lag-one serial correlation, which satisfies |ρ| < 1. For what value of ρ is E(M) maximized? How does V(M) behave for increasing ρ? Answers to these questions lie in Afonja (1972), suitably decoded.


09/05/2019


1 Partial Correlations

Set $n = 3$ for simplicity.  Fix $1 \le \ell \le 3$.  There exists a unique ordered pair $(i, j)$ with $i < j$ such that $\{i, j, \ell\} = \{1, 2, 3\}$.  Define a matrix
$$R = \begin{pmatrix} 1 & \rho_{ij} & \rho_{i\ell} \\ \rho_{ij} & 1 & \rho_{j\ell} \\ \rho_{i\ell} & \rho_{j\ell} & 1 \end{pmatrix},$$
which captures all correlations among $X_i$, $X_j$, $X_\ell$.  Let $C_{uv}$ be the cofactor of the $(u, v)$ element in the expansion of the determinant of $R$.  The partial correlation between $X_i$ and $X_j$, given $X_\ell$, is prescribed by
$$\rho_{ij \cdot \ell} = \frac{-C_{ij}}{\sqrt{C_{ii}\, C_{jj}}}.$$
The notation here [1] differs from that used elsewhere [2, 3, 4].  In words, $\rho_{ij \cdot \ell}$ measures the linear dependence of $X_i$ and $X_j$ in which the influence of $X_\ell$ is removed.  Define finally a matrix collecting these partial correlations, as was to be done.

Set now $n = 4$.  Fix $1 \le \ell \le 4$.  There exists a unique ordered triple $(i, j, k)$ with $i < j < k$ such that $\{i, j, k, \ell\} = \{1, 2, 3, 4\}$.  The preceding discussion extends naturally, the single ordering treated before being supplemented by the additional possible positions of $\ell$ relative to $i$, $j$, $k$.  Define finally a matrix collecting the resulting partial correlations.

We could go on for larger $n$, but this is all that is needed for our purposes.

Again, set $n = 4$.  Fix $1 \le k < \ell \le 4$.  There exists a unique ordered pair $(i, j)$ with $i < j$ such that $\{i, j, k, \ell\} = \{1, 2, 3, 4\}$.  Define a matrix
$$R = \begin{pmatrix} 1 & \rho_{ij} & \rho_{ik} & \rho_{i\ell} \\ \rho_{ij} & 1 & \rho_{jk} & \rho_{j\ell} \\ \rho_{ik} & \rho_{jk} & 1 & \rho_{k\ell} \\ \rho_{i\ell} & \rho_{j\ell} & \rho_{k\ell} & 1 \end{pmatrix},$$
which captures all correlations among $X_i$, $X_j$, $X_k$, $X_\ell$.  Let $C_{uv}$ be the cofactor of the $(u, v)$ element in the expansion of the determinant of $R$.  The partial correlation between $X_i$ and $X_j$, given $X_k$ and $X_\ell$, is prescribed by
$$\rho_{ij \cdot k\ell} = \frac{-C_{ij}}{\sqrt{C_{ii}\, C_{jj}}}.$$
In words, $\rho_{ij \cdot k\ell}$ measures the linear dependence of $X_i$ and $X_j$ in which the influence of $X_k$ and $X_\ell$ is removed.  Define finally a matrix collecting these partial correlations, as was to be done.

Set now $n = 5$.  Fix $1 \le k < \ell \le 5$.  There exists a unique ordered triple $(i, j, m)$ with $i < j < m$ such that $\{i, j, m, k, \ell\} = \{1, 2, 3, 4, 5\}$.  The preceding discussion extends naturally, supplemented by the additional possible positions of $k$ and $\ell$ relative to $i$, $j$, $m$.  Define finally a matrix collecting the resulting partial correlations.

We could go on for larger $n$, but this is all that is needed for our purposes.
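The cofactor prescription can be checked numerically.  The following sketch (the function names are mine, not from [1]) computes $\rho_{ij\cdot\mathrm{rest}} = -C_{ij}/\sqrt{C_{ii} C_{jj}}$ and confirms the Markov property of AR(1): $X_1$ and $X_3$ are conditionally uncorrelated given $X_2$.

```python
import numpy as np

def partial_correlation(R, i, j):
    """rho_{ij.rest} = -C_ij / sqrt(C_ii * C_jj), where C_uv is the
    cofactor of the (u, v) element of the correlation matrix R,
    conditioning on all remaining variables."""
    def cofactor(u, v):
        minor = np.delete(np.delete(R, u, axis=0), v, axis=1)
        return (-1.0) ** (u + v) * np.linalg.det(minor)
    return -cofactor(i, j) / np.sqrt(cofactor(i, i) * cofactor(j, j))

# AR(1) correlation matrix for n = 3, rho = 0.6: entries rho^|i-j|.
rho = 0.6
R = rho ** np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
# Markov property: X_1 and X_3 are conditionally uncorrelated given X_2.
print(partial_correlation(R, 0, 2))
```

The same routine reproduces the textbook formula $\rho_{12 \cdot 3} = (\rho_{12} - \rho_{13}\rho_{23})/\sqrt{(1-\rho_{13}^2)(1-\rho_{23}^2)}$ for the non-degenerate pairs.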

2 Small Segments

For convenience, define an auxiliary quantity admitting two equivalent expressions; the latter, while more cumbersome, exhibits symmetry in the indices $i$, $j$.

If $n = 2$, then
$$E(M) = \sqrt{\frac{1 - \rho_{12}}{\pi}} \qquad \text{and} \qquad E(M^2) = 1.$$
We have
$$V(M) = 1 - \frac{1 - \rho_{12}}{\pi}.$$
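The $n = 2$ closed forms $E(M) = \sqrt{(1-\rho)/\pi}$ and $V(M) = 1 - (1-\rho)/\pi$ admit a quick Monte Carlo check; the following sketch (sample size and seed are my choices) draws correlated standard normal pairs directly.

```python
import numpy as np

# Monte Carlo check of the n = 2 closed forms for the maximum of a
# standard bivariate normal pair with correlation rho.
rng = np.random.default_rng(0)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
m = rng.multivariate_normal([0.0, 0.0], cov, size=400_000).max(axis=1)
print(m.mean(), np.sqrt((1 - rho) / np.pi))  # sample vs exact mean
print(m.var(), 1 - (1 - rho) / np.pi)        # sample vs exact variance
```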

If $n = 3$, then the pairwise correlations $\rho_{12}$, $\rho_{13}$, $\rho_{23}$ and the partial correlations of Section 1 enter, and closed-form expressions for $E(M)$ and $V(M)$ are again available.

In formula (3.6) for $E(M)$ in [1], one misprinted factor should be replaced by its correct counterpart.

If $n = 4$, then the expressions lengthen considerably.  In general, the pairwise correlations are no longer interchangeable, and thus the symmetry observed earlier fails.  We have a correspondingly longer sum for $E(M)$.

In formula (3.7) for $E(M)$ in [1], a missing factor should be inserted in front of the summation.

If $n = 5$, then the expressions lengthen further still.  Symmetry now fails for both moments under study.  We have lengthy sums for $E(M)$ and $E(M^2)$, comprising many terms each.

If $n \ge 6$, then the analysis contains non-elementary functions which require numerical integration (beyond our present scope).  In contrast, for $n \le 5$ everything remains an elementary sum of finitely many terms.  In formula (3.9) in [1], a constant term should be inserted in front of the first summation; further, the last summation should be taken over both of its indices (not merely one).

3 Time Series

Consider a discrete-time stationary first-order autoregressive process
$$X_t = \rho\, X_{t-1} + \varepsilon_t,$$
where $\varepsilon_t$ is white noise with variance $1 - \rho^2$, so that $\operatorname{Var}(X_t) = 1$.  The covariance matrix has $(i, j)$ element
$$\rho_{ij} = \rho^{|i-j|},$$
which leads to certain simplifications.  Let us make the reliance of $E(M)$ on $\rho$ explicit.  We have explicit expressions for $E(M)$ when $n \le 4$,

but that for $n = 5$ is too lengthy to record here.  In the limit as $\rho \to 0$, we obtain the values
$$\frac{1}{\sqrt{\pi}} = 0.5641\ldots, \qquad \frac{3}{2\sqrt{\pi}} = 0.8462\ldots, \qquad 1.0293\ldots, \qquad 1.1629\ldots$$
for $2 \le n \le 5$, and these are consistent with well-known values [5] corresponding to independent $X_i$.  Figure 1 displays $E(M)$, for $2 \le n \le 5$, as functions of $\rho$.  The left-hand endpoint (as $\rho \to -1$) is at $\sqrt{2/\pi} = 0.7978\ldots$ and is unsurprisingly the mean of a standard half-normal distribution.  The right-hand endpoint (as $\rho \to 1$) is at $0$.  Associated with $E(M)$ for $n = 3, 4, 5$ are interior maximum points; closed-form expressions for the latter two maximizing values of $\rho$ remain open.
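Both limiting behaviors can be probed by simulation, sampling a segment directly from the multivariate normal with covariance $\rho^{|i-j|}$.  A sketch (sample sizes and seed are mine):

```python
import numpy as np

# Monte Carlo estimate of E(M) for a length-5 AR(1) segment, drawn
# exactly from the multivariate normal with covariance rho^|i-j|.
rng = np.random.default_rng(1)

def expected_max(rho, n=5, n_samples=300_000):
    idx = np.arange(n)
    cov = rho ** np.abs(np.subtract.outer(idx, idx))
    x = rng.multivariate_normal(np.zeros(n), cov, size=n_samples)
    return x.max(axis=1).mean()

# rho = 0 reduces to five iid standard normals (E(M) = 1.1629...);
# as rho -> 1 the five observations coincide and E(M) -> 0.
print(expected_max(0.0), expected_max(0.9))
```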

We also have closed-form expressions for $E(M^2)$ when $n \in \{2, 3\}$, but those for $n \in \{4, 5\}$ are too lengthy to record here.  In the limit as $\rho \to 0$, we again obtain values consistent with well-known results [5] for independent $X_i$.  Associated with $E(M^2)$ for $n = 3, 4, 5$ are interior maximum points; closed-form expressions for the corresponding three maximizing values of $\rho$ remain open.  Figure 2 displays $V(M)$, for $2 \le n \le 5$, as functions of $\rho$.  The left-hand endpoint (as $\rho \to -1$) is at $1 - 2/\pi = 0.3633\ldots$, the variance of a standard half-normal distribution; the right-hand endpoint (as $\rho \to 1$) is at $1$.  Unlike $E(M)$ or $E(M^2)$, the variance is strictly increasing throughout the interval.  An intuitive reason for such behavior would be good to establish someday.
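The claimed monotonicity of the variance can likewise be probed numerically.  A rough Monte Carlo sketch (not a proof; parameters and seed are mine), again sampling the length-5 segment from the covariance $\rho^{|i-j|}$:

```python
import numpy as np

# Monte Carlo estimates of V(M) for the length-5 AR(1) segment at a
# few values of rho, to see the variance increase with rho.
rng = np.random.default_rng(2)

def var_max(rho, n=5, n_samples=300_000):
    idx = np.arange(n)
    cov = rho ** np.abs(np.subtract.outer(idx, idx))
    m = rng.multivariate_normal(np.zeros(n), cov, size=n_samples).max(axis=1)
    return m.var()

print([round(var_max(r), 3) for r in (-0.8, -0.4, 0.0, 0.4, 0.8)])
```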

4 Proof of Revision

Our general formula for $E(M)$ looks somewhat different from that presented by Afonja [1].  To demonstrate the equivalence of the two formulas, it suffices to prove a certain algebraic identity among the correlations and partial correlations involved.  The left-hand side of this identity can be rewritten, after simplification, as the right-hand side, as was to be shown.

5 Proof from First Principles

An exercise in [6] suggests that formulas for $E(M)$ and $V(M)$ in the case $n = 2$ should be derived from the identity
$$\max(X_1, X_2) = \tfrac{1}{2}(X_1 + X_2) + \tfrac{1}{2}\,|X_1 - X_2|.$$
It is instructive to similarly prove our formula for $E(M)$ in the case $n = 3$, using instead
$$\max(X_1, X_2, X_3) = X_3 + \max(X_1 - X_3,\; X_2 - X_3,\; 0).$$
Define $U = X_1 - X_3$ and $V = X_2 - X_3$.  Clearly $(U, V)$ is bivariate normally distributed with vector mean zero and covariance matrix
$$\begin{pmatrix} 2 - 2\rho_{13} & 1 + \rho_{12} - \rho_{13} - \rho_{23} \\ 1 + \rho_{12} - \rho_{13} - \rho_{23} & 2 - 2\rho_{23} \end{pmatrix}.$$
Also
$$E(M) = E(X_3) + E\,\max(U, V, 0) = E\,\max(U, V, 0).$$

The four integrals (depending on the signs of $U$ and $V$) underlying
$$E\,\max(U, V, 0)$$
can all be evaluated (however tediously).  Because the final outcome is comparatively simple, we suspect that a more elegant proof ought to be available.  Ideas on bridging this gap would be welcome.

In more detail, letting $f(u, v)$ denote the bivariate normal density of $(U, V)$, we obtain the contribution
$$\iint \max(u, v)\, f(u, v)\, du\, dv$$
when $U > 0$ and $V > 0$;
$$\iint u\, f(u, v)\, du\, dv$$
when $U > 0$ and $V \le 0$;
$$\iint v\, f(u, v)\, du\, dv$$
when $U \le 0$ and $V > 0$; and zero when $U \le 0$ and $V \le 0$.  Adding these contributions and dividing by the appropriate normalizing constant, we verify the formula for $E(M)$, as was desired.
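In the same first-principles spirit, the $n = 2$ expectation $E(M) = \sqrt{(1-\rho)/\pi}$ can be verified by direct numerical integration of $\max(x, y)$ against the bivariate normal density.  A sketch of mine, not the paper's computation; truncating the domain at $\pm 8$ assumes the tail mass is negligible:

```python
import numpy as np
from scipy import integrate

# Numerically integrate max(x, y) * f(x, y) over [-8, 8]^2, where f is
# the standard bivariate normal density with correlation rho, and
# compare with the closed form sqrt((1 - rho)/pi).
rho = 0.3
det = 1 - rho**2

def integrand(y, x):
    q = (x * x - 2 * rho * x * y + y * y) / det
    return max(x, y) * np.exp(-q / 2) / (2 * np.pi * np.sqrt(det))

num, _ = integrate.dblquad(integrand, -8, 8, -8, 8)
exact = np.sqrt((1 - rho) / np.pi)
print(num, exact)
```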

Calculating the variance of $M$ from first principles has not been attempted.  The variance of the median ($1/2$-tile) is also of interest, appearing explicitly in [7], but under the assumption of independence.

An alternative derivation of $E(M)$ and $V(M)$, based on the probability density of the maximum, can be found in [8, 9].  See also [10] for the expected range of a normal sample, [11] for the expected absolute maximum, and [12] for other aspects of AR(1).

6 Large Segments

Assuming the correlation $r_k$ between $X_i$ and $X_{i+k}$ depends only on the lag $k$, Berman [13, 14, 15] proved that if either
$$r_n \ln(n) \to 0 \quad \text{as } n \to \infty \qquad \text{or} \qquad \sum_{n=1}^{\infty} r_n^2 < \infty,$$
then
$$\lim_{n \to \infty} \mathbb{P}\!\left( \frac{M_n - b_n}{a_n} \le x \right) = \exp\!\left(-e^{-x}\right),$$
where
$$a_n = \frac{1}{\sqrt{2 \ln(n)}}, \qquad b_n = \sqrt{2 \ln(n)} - \frac{\ln\ln(n) + \ln(4\pi)}{2\sqrt{2 \ln(n)}}.$$
Further, the two hypotheses on $r_n$ cannot be significantly weakened.  This theorem clearly applies to a first-order autoregressive process, for which $r_n = \rho^n$ decays geometrically, although we note that neither $a_n$ nor $b_n$ incorporates the lag-one correlation at all.  A more precise asymptotic result might do so.
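As a sanity check, the Gumbel approximation can be compared with simulation, using the classical iid normalizing constants $a_n = (2\ln n)^{-1/2}$ and $b_n = \sqrt{2\ln n} - (\ln\ln n + \ln 4\pi)/(2\sqrt{2\ln n})$ of extreme-value theory; the parameter choices below are mine, and convergence in $n$ is notoriously slow, so only rough agreement is expected.

```python
import numpy as np

# Compare the Monte Carlo mean of M_n (rho = 0.5, n = 10^4) with the
# Gumbel approximation b_n + gamma * a_n, gamma = 0.5772... being
# Euler's constant (the mean of the standard Gumbel distribution).
rng = np.random.default_rng(3)
rho, n, reps = 0.5, 10_000, 2_000
x = rng.standard_normal(reps)               # stationary start, unit variance
m = x.copy()
for _ in range(n - 1):                      # AR(1) recursion, kept at unit variance
    x = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(reps)
    np.maximum(m, x, out=m)
a = (2 * np.log(n)) ** -0.5
b = 1 / a - (np.log(np.log(n)) + np.log(4 * np.pi)) / (2 / a)
print(m.mean(), b + 0.5772 * a)
```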

Other relevant works in the literature include [16, 17, 18, 19, 20, 21, 22, 23, 24].  In particular, Figure 2 of [19] depicts the density of the AR(1) maximum for several values of $n$ and of $\rho$.

7 Acknowledgements

Raymond Kan [25] symbolically evaluated the integrals in Section 5, at my request, for the special case