## 1 Partial Correlations

Set for simplicity. Fix

. There exists a unique ordered pair

such that . Define a matrix which captures all correlations among the three variables. Let be the cofactor of the element in the expansion of the determinant of . The partial correlation between and , given , is prescribed by

The notation here [1] differs from that used elsewhere [2, 3, 4]. In words, the partial correlation measures the linear dependence between and once the influence of is removed. Define finally a matrix

as was to be done.
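As a numerical sanity check (not part of the original derivation), the cofactor prescription can be confirmed in a few lines of Python against the elementary first-order recursion; the correlation matrix below is hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical correlation matrix for three jointly normal variables.
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

def cofactor(M, i, j):
    """Signed determinant of the minor obtained by deleting row i, column j."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

def partial_corr(R, i, j):
    """Partial correlation of variables i and j given all the others,
    via the cofactor prescription -C_ij / sqrt(C_ii * C_jj)."""
    return -cofactor(R, i, j) / np.sqrt(cofactor(R, i, i) * cofactor(R, j, j))

# Cross-check against the elementary recursion
# (rho_ij - rho_ik rho_jk) / sqrt((1 - rho_ik^2)(1 - rho_jk^2)), with k = 2.
i, j, k = 0, 1, 2
direct = (R[i, j] - R[i, k] * R[j, k]) / np.sqrt(
    (1 - R[i, k] ** 2) * (1 - R[j, k] ** 2))
assert abs(partial_corr(R, i, j) - direct) < 1e-12
```

The two expressions agree exactly (not merely numerically): the recursion is an algebraic consequence of the cofactor formula.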

Set now . Fix . There exists a unique ordered triple such that . The preceding discussion extends naturally, supplementing the case by additional possible cases and . Define finally a matrix

We could go on for larger , but this is all that is needed for our purposes.

Again, set . Fix . There exists a unique ordered pair such that . Define a matrix

which captures all correlations among the four variables. Let be the cofactor of the element in the expansion of the determinant of . The partial correlation between and , given and , is prescribed by

In words, the partial correlation measures the linear dependence between and once the influence of and is removed. Define finally a matrix

as was to be done.
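For the four-variable case, the cofactor ratio coincides with the normalized entry of the inverse correlation matrix (the factors of the determinant cancel), and both agree with the correlation of regression residuals. A Monte Carlo sketch, using a hypothetical covariance and conditioning variables 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-variable Gaussian; draw a large sample.
A = rng.standard_normal((4, 4))
X = rng.multivariate_normal(np.zeros(4), A @ A.T + 4 * np.eye(4), size=50_000)
R = np.corrcoef(X, rowvar=False)

# Cofactor ratio -C_01 / sqrt(C_00 C_11) equals -P_01 / sqrt(P_00 P_11)
# for the precision matrix P = R^{-1}, since cofactors scale by det R.
P = np.linalg.inv(R)
pc = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

# Independent check: correlation of OLS residuals after regressing
# variables 0 and 1 on the conditioning variables 2 and 3.
Z = np.column_stack([np.ones(len(X)), X[:, 2], X[:, 3]])
res0 = X[:, 0] - Z @ np.linalg.lstsq(Z, X[:, 0], rcond=None)[0]
res1 = X[:, 1] - Z @ np.linalg.lstsq(Z, X[:, 1], rcond=None)[0]
assert abs(pc - np.corrcoef(res0, res1)[0, 1]) < 1e-6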

Set now . Fix . There exists a unique ordered triple such that . The preceding discussion extends naturally, supplementing the case by additional possible cases and . Define finally a matrix

We could go on for larger , but this is all that is needed for our purposes.

## 2 Small Segments

For convenience, define

The latter expression, while more cumbersome, exhibits symmetry in , .

If , then and . We have

If , then

In general, and thus symmetry fails for . We have

In formula (3.7) for in [1], a factor should be inserted in front of the summation.
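The simplest of these formulas can be sanity-checked by simulation. For two zero-mean, unit-variance normals with correlation ρ, the classical closed form is E max = sqrt((1 − ρ)/π); this is only the standardized special case of the formula above (the general form, with arbitrary means and variances, is elided in the text), with ρ = 0.6 chosen as an illustrative value:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.6  # illustrative correlation

# Monte Carlo estimate of E[max(X1, X2)] for a standard bivariate normal.
cov = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
mc = X.max(axis=1).mean()

# Closed form for the zero-mean, unit-variance case.
closed = np.sqrt((1 - rho) / np.pi)
assert abs(mc - closed) < 5e-3
```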

If , then

Symmetry now fails for both and . We have

a total of terms and terms, respectively.

If , then contains non-elementary functions that require numerical integration (beyond our present scope). In contrast,

a total of terms. In formula (3.9) for in [1], a constant term should be inserted in front of the first summation; further, the last summation should be taken over both and (not merely ).
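In the independence special case of a segment of length five, the expected maximum is elementary to check: the tabulated value for five iid standard normals is 1.16296, and a quick Monte Carlo run (an illustration, not part of the text's derivation) confirms it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo estimate of E[max] for five independent standard normals,
# the independence special case of a length-5 segment.
samples = rng.standard_normal((2_000_000, 5))
mc = samples.max(axis=1).mean()

# Tabulated expected maximum of 5 iid standard normals.
assert abs(mc - 1.16296) < 5e-3
```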

## 3 Time Series

Consider a discrete-time stationary first-order autoregressive process

where is white noise. The covariance matrix has element

which leads to certain simplifications. Let us make the dependence of on explicit. We have

but is too lengthy to record here. In the limit as , we obtain

for , and these are consistent with well-known values [5] corresponding to independent . Figure 1 displays as functions of . The left-hand endpoint is at and is, unsurprisingly, the mean of a standard half-normal distribution. The right-hand endpoint is at . Associated with are maximum points with equal to

respectively. Closed-form expressions for the latter two quantities remain open.

We also have

but and are too lengthy to record here. In the limit as , we obtain

for , and these again are consistent with well-known values [5]. Associated with are maximum points with equal to

respectively. Closed-form expressions for the latter three quantities remain open. Figure 2 displays as functions of . The left-hand endpoint is at ; the right-hand endpoint is at . Unlike or , the variance is strictly increasing throughout the interval. An intuitive reason for such behavior would be good to establish someday.

## 4 Proof of Revision

Our general formula for looks somewhat different from that presented by Afonja [1]. To demonstrate the equivalence of the two formulas, it suffices to prove that if , and , then

The left-hand side is equal to

which is the right-hand side, as was to be shown.

## 5 Proof from First Principles

An exercise in [6] suggests that formulas for and should be derived from

It is instructive to similarly prove our formula for , using instead

Define and . Clearly is bivariate normally distributed with zero mean vector and covariance matrix

Also

The four integrals (depending on signs of and ) underlying

can all be evaluated (however tediously). Because

we suspect that a more elegant proof ought to be available. Ideas on bridging this gap would be welcome.
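One elementary route, at least for the two-variable building block, is the identity max(x, y) = (x + y + |x − y|)/2. In generic notation (the document's own symbols are elided here): for zero-mean, unit-variance jointly normal X, Y with correlation ρ,

```latex
\mathbb{E}\max(X,Y)
  \;=\; \tfrac12\,\mathbb{E}\,|X-Y|
  \;=\; \tfrac12\sqrt{2(1-\rho)}\,\sqrt{\tfrac{2}{\pi}}
  \;=\; \sqrt{\frac{1-\rho}{\pi}},
\qquad X-Y \sim \mathcal{N}\bigl(0,\;2(1-\rho)\bigr),
```

using E|Z| = σ·sqrt(2/π) for Z ~ N(0, σ²). Whether an analogous shortcut exists for the four-integral computation above is exactly the gap mentioned.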

In more detail, letting

denote the bivariate normal density, we obtain

when and ;

when and ;

when and ; and when and . Adding these contributions and dividing by , we verify

as was desired.

Calculating the variance of from first principles has not been attempted. The variance of the median (-tile) is also of interest, appearing explicitly in [7] for but under the assumption of independence.

## 6 Large Segments

Assuming depends only on , Berman [13, 14, 15] proved that if either

then

where

Further, the two hypotheses on cannot be significantly weakened. This theorem clearly applies for a first-order autoregressive process, although we note that does not incorporate lag-one correlation at all. A more precise asymptotic result might do so.
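For reference, the classical normalization in Berman's theorem, stated here in standard notation (an assumption on my part, since the document's displays are elided): if the covariances r_n of the standardized stationary Gaussian sequence satisfy either r_n log n → 0 or Σ r_n² < ∞, then

```latex
\Pr\{a_n (M_n - b_n) \le x\} \;\to\; \exp(-e^{-x}),
\qquad
a_n = \sqrt{2\log n}, \quad
b_n = \sqrt{2\log n} - \frac{\log\log n + \log 4\pi}{2\sqrt{2\log n}},
```

where M_n is the maximum of the first n terms. Note that a_n and b_n depend only on n, consistent with the remark that the limit law does not incorporate the lag-one correlation.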

## 7 Acknowledgements

Raymond Kan [25] symbolically evaluated the integrals in Section 5, at my request, for the special case