Breaking Bivariate Records

01/24/2019 ∙ James Allen Fill, et al. ∙ Johns Hopkins University

We establish a fundamental property of bivariate Pareto records for independent observations uniformly distributed in the unit square. We prove that the asymptotic conditional distribution of the number of records broken by an observation given that the observation sets a record is Geometric with parameter 1/2.

1. Introduction and main result

This paper proves an interesting phenomenon concerning the breaking of bivariate records, first observed empirically by Daniel Q. Naiman, whom we thank for an introduction to the problem considered. We begin with some relevant definitions, taken (with trivial changes) from [4; 3]. Although our attention in this paper will be focused on dimension $d = 2$ (see [3, Conj. 2.2] for general $d$), and the approach we utilize seems to be limited to the bivariate case, we begin by giving definitions that apply in general dimension $d$.

Let $\mathbb{1}(A) := 1$ or $0$ according as $A$ is true or false. We write $\ln$ for natural logarithm, $\lg$ for binary logarithm, and $\log$ when the base doesn't matter. For $d$-dimensional vectors $x = (x^{(1)}, \dots, x^{(d)})$ and $y = (y^{(1)}, \dots, y^{(d)})$, write $x \prec y$ to mean that $x^{(j)} < y^{(j)}$ for $1 \le j \le d$. The notation $x \succ y$ means $y \prec x$.

As do Bai et al. [2], we find it more convenient (in particular, expressions encountered in their computations and ours are simpler) to consider (equivalently) record-small, rather than record-large, values. Let $X_1, X_2, \dots$ be i.i.d. (independent and identically distributed) copies of a random vector $X$ with independent coordinates, each uniformly distributed over the unit interval.

Definition 1.1.

(a) We say that $X_k$ is a Pareto record (or simply record, or that $X_k$ sets a record at time $k$) if $X_i \nprec X_k$ for all $1 \le i < k$.

(b) If $1 \le k \le n$, we say that $X_k$ is a current record (or remaining record, or minimum) at time $n$ if $X_i \nprec X_k$ for all $1 \le i \le n$.

(c) If $k \ge 2$, we say that $X_k$ breaks (or kills) $j$ records if $X_k$ sets a record and there exist precisely $j$ values $i$ with $1 \le i < k$ such that $X_i$ is a current record at time $k - 1$ but is not a current record at time $k$.

For $1 \le n < \infty$ (or $n = \infty$, with the obvious conventions) let $R_n$ denote the number of records $X_k$ with $1 \le k \le n$, and let $r_n$ denote the number of remaining records at time $n$.
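
To make Definition 1.1 concrete, here is a minimal Python sketch (ours, not part of the paper; the names `prec`, `is_record`, `current_records`, and `kills` are our own). It implements parts (a)-(c) naively, directly from the definitions, for bivariate observations:

```python
def prec(p, q):
    """p ≺ q: strictly smaller in every coordinate (here d = 2)."""
    return p[0] < q[0] and p[1] < q[1]

def is_record(points, k):
    """Definition 1.1(a): points[k-1] sets a record at time k (1-indexed)
    iff no earlier observation lies strictly southwest of it."""
    return not any(prec(points[i], points[k - 1]) for i in range(k - 1))

def current_records(points, n):
    """Definition 1.1(b): the remaining records (minima) at time n."""
    pts = points[:n]
    return [p for p in pts if not any(prec(q, p) for q in pts if q != p)]

def kills(points, k):
    """Definition 1.1(c): the number of records broken by points[k-1];
    -1 (the convention of Theorem 1.2 below) if it sets no record."""
    if not is_record(points, k):
        return -1
    gone = set(current_records(points, k - 1)) - set(current_records(points, k))
    return len(gone)

# Tiny demo on ten uniform points:
import random
rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(10)]
print([kills(pts, k) for k in range(1, 11)])
```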

Here is the main result of this paper.

Theorem 1.2.

Suppose that independent bivariate observations, each uniformly distributed in $(0, 1)^2$, arrive at times $1, 2, \dots$. Let $K_n = -1$ if the $n$th observation is not a new record, and otherwise let $K_n$ denote the number of remaining records killed by the $n$th observation. Then $K_n$, conditionally given $K_n \ge 0$, converges in distribution to $K$, where $\mathbb{P}(K = k) = 2^{-(k+1)}$ for $k = 0, 1, 2, \dots$, as $n \to \infty$.

Equivalently, the conclusion (with asymptotics throughout referring to $n \to \infty$) is that

(1.1)  $\mathbb{P}(K_n = k \mid K_n \ge 0) = 2^{-(k+1)} + o(1)$ for each fixed $k \ge 0$.

Here is an outline of the proof. In Section 2 we provide a simple and short proof of the well-known result that

$\mathbb{P}(K_n \ge 0) = H_n / n \sim n^{-1} \ln n,$

where $H_n := \sum_{i=1}^{n} i^{-1}$ denotes the $n$th harmonic number. In Section 3 (see Theorem 3.9) we show that

(1.2)  $\mathbb{P}(K_n = k) = 2^{-(k+1)}\, n^{-1} (H_n + 1 - k) + R_n(k)$

for all $n$ and all $k \ge 0$, where the remainder $R_n(k)$ is bounded explicitly in Theorem 3.9. The improvement

(1.3)  $\mathbb{P}(K_n = k \mid K_n \ge 0) = 2^{-(k+1)} + \varepsilon_n(k) + n H_n^{-1} R_n(k)$

to (1.1) then follows immediately upon dividing by $\mathbb{P}(K_n \ge 0) = H_n / n$, where

$\varepsilon_n(k) := (1 - k)\, 2^{-(k+1)} / H_n$

is a first-order correction term to the Geometric(1/2) probability mass function (pmf) $g(k) := 2^{-(k+1)}$, $k \ge 0$. This improvement shows that approximation of the conditional pmf in Theorem 1.2 by the uncorrected Geometric pmf has (for large $n$) vanishingly small relative error not just for fixed $k$, but for $k = o(\log n)$. It also shows that the corrected approximation has small relative error over a considerably wider range of $k$. Of course we always have $K_n \le r_{n-1}$, and, by [4, Rmk. 4.3(b)], we have $r_n \sim \ln n$ almost surely; the corrected approximation thus gives small relative error for rather large values of $k$ indeed.

As one might expect, the correction terms $\varepsilon_n(k)$ sum to $0$. We observe that the correction is positive (and of largest magnitude in absolute-error terms) when $k = 0$, vanishes when $k = 1$, and is negative (and of nonincreasing magnitude) when $k \ge 2$.
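
As a quick numerical illustration, the following minimal Python sketch (ours, not part of the paper) tabulates the uncorrected Geometric(1/2) pmf next to the corrected approximation $2^{-(k+1)} + \varepsilon_n(k)$ from (1.3):

```python
from math import log

def harmonic(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

def geometric_pmf(k):
    """Limiting pmf of Theorem 1.2: 2^{-(k+1)} for k = 0, 1, 2, ..."""
    return 2.0 ** -(k + 1)

def corrected_pmf(k, n):
    """First-order corrected approximation to P(K_n = k | K_n >= 0):
    2^{-(k+1)} * (1 + (1 - k) / H_n), per the correction term eps_n(k)."""
    return geometric_pmf(k) * (1.0 + (1.0 - k) / harmonic(n))

n = 100_000
print(f"H_n = {harmonic(n):.4f}  (ln n = {log(n):.4f})")
for k in range(8):
    print(k, f"{geometric_pmf(k):.5f}", f"{corrected_pmf(k, n):.5f}")
# Since the corrections sum to 0 over k >= 0, both columns sum to 1.
```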

Formulation of Theorem 1.2 was motivated by [3, Table 1], reproduced here as Table 1. Table 1 tabulates, for the first 100,000 records generated in a single trial, the number of records that break $j$ remaining records, for each value of $j$. The Geometric pattern is striking. The precise relationship between Theorem 1.2 and the phenomenon observed in Table 1 is discussed in Section 4, where a main conjecture is stated and a possible plan for completing its proof is described.

Throughout, we denote the $n$th observation simply by $(x, y)$ (note: subscripted $x$'s and $y$'s will have a different use later) and, for any Borel subset $S$ of the unit square, the number of the first $n - 1$ observations falling in $S$ by $N(S)$.

 j      N_j    N_j/100,000
 0   50,334    0.50334
 1   24,667    0.24667
 2   12,507    0.12507
 3    6,335    0.06335
 4    3,040    0.03040
 5    1,571    0.01571
 6      782    0.00782
 7      364    0.00364
 8      202    0.00202
 9       94    0.00094
10       48    0.00048
11       24    0.00024
12       18    0.00018
13        8    0.00008
14        4    0.00004
15        0    0.00000
16        1    0.00001
17        0    0.00000
18        1    0.00001
Table 1. Results of a simulation experiment in which 100,000 bivariate records are generated, and for each new record the number $j$ of records it breaks is recorded. The number of records that break $j$ current records is denoted by $N_j$, and $N_j/100{,}000$ is the proportion of the 100,000 records that break $j$ records.
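
A pattern like Table 1's is easy to probe by direct simulation, with one caveat: the expected number of records among $n$ observations grows only like $(\ln n)^2 / 2$, so reaching 100,000 records by naive streaming is hopeless (efficient generation of records is the subject of [3]). The following self-contained Python sketch (ours; `kill_counts` is our own name) instead pools the kill counts of record-setting observations across independent trials, which already exhibits the Geometric(1/2) pattern, roughly:

```python
import random
from bisect import bisect_left, insort
from collections import Counter

def kill_counts(num_observations, seed=0):
    """Stream uniform observations; maintain the current records (minima)
    sorted by increasing first coordinate (hence decreasing second
    coordinate); tally, for each record-setter, how many records it breaks."""
    rng = random.Random(seed)
    frontier = []
    counts = Counter()
    for _ in range(num_observations):
        x, y = rng.random(), rng.random()
        i = bisect_left(frontier, (x, y))
        if i > 0 and frontier[i - 1][1] < y:
            continue          # dominated by a current record: not a record
        j = i
        while j < len(frontier) and frontier[j][1] > y:
            j += 1            # frontier[i:j] are the records broken
        counts[j - i] += 1
        del frontier[i:j]
        insort(frontier, (x, y))
    return counts

totals = Counter()
for t in range(200):          # pool 200 independent trials
    totals.update(kill_counts(100_000, seed=t))
n_rec = sum(totals.values())
for k in sorted(totals):
    print(f"{k:2d} {totals[k]:6d} {totals[k] / n_rec:.5f}")
```

Because the current records, sorted by increasing first coordinate, have decreasing second coordinates, both the record test and the broken run reduce to local comparisons near one binary-search position.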

2. The probability that $K_n \ge 0$

In this section we compute the probability $\mathbb{P}(K_n \ge 0)$ (that the $n$th observation is a record) exactly and approximate it asymptotically. This result is already well known, but we give a proof for completeness.

Proposition 2.1.

For $n \ge 1$ we have

$\mathbb{P}(K_n \ge 0) = \frac{H_n}{n} \sim \frac{\ln n}{n}.$

Proof.

We have

$\mathbb{P}(K_n \ge 0 \mid X_n = (x, y)) = (1 - xy)^{n-1},$

since the $n$th observation $(x, y)$ sets a record if and only if none of the first $n - 1$ observations falls in the rectangle $(0, x) \times (0, y)$, which has area $xy$. Integrating, we therefore have

$\mathbb{P}(K_n \ge 0) = \int_0^1 \!\! \int_0^1 (1 - xy)^{n-1} \, dx \, dy = \int_0^1 \frac{1 - (1 - y)^n}{n y} \, dy = \frac{1}{n} \int_0^1 \frac{1 - u^n}{1 - u} \, du = \frac{H_n}{n},$

as claimed. ∎
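
The identity $\mathbb{P}(K_n \ge 0 \mid X_n = (x, y)) = (1 - xy)^{n-1}$ used in the proof makes Proposition 2.1 easy to check numerically. A minimal Monte Carlo sketch (ours; the function name is hypothetical):

```python
import random
from math import log

def record_probability_mc(n, trials=200_000, seed=0):
    """Monte Carlo estimate of P(K_n >= 0) = E[(1 - xy)^(n-1)], obtained
    by conditioning on the n-th observation (x, y) being uniform."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        total += (1.0 - x * y) ** (n - 1)
    return total / trials

n = 1000
H_n = sum(1.0 / i for i in range(1, n + 1))
print(record_probability_mc(n))   # should be close to H_n / n
print(H_n / n, log(n) / n)        # exact value and its asymptotic form
```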

3. The probability that $K_n = k$

In this section, we compute $\mathbb{P}(K_n = k)$ for $k \ge 0$ exactly and produce the approximation (3.7) with its stated error bound.

3.1. The exact probability

Over the event $\{K_n = k\}$ (with $k \ge 1$), denote those remaining records at time $n - 1$ broken by the $n$th observation $(x, y)$, in order from southeast to northwest (that is, in decreasing order of first coordinate and increasing order of second coordinate), by $(x_1, y_1), \dots, (x_k, y_k)$. Note that if we read all the remaining records in order from southeast to northwest, then $(x_1, y_1), \dots, (x_k, y_k)$ appear consecutively.

If there are any remaining records at time $n - 1$ with second coordinate smaller than $y$, choose the largest such second coordinate and denote the corresponding remaining record by $(x_0, y_0)$ [and note that then $(x_0, y_0), (x_1, y_1), \dots, (x_k, y_k)$ appear consecutively]; otherwise, set $(x_0, y_0) := (1, 0)$.

Similarly, if there are any remaining records at time $n - 1$ with first coordinate smaller than $x$, choose the largest such first coordinate and denote the corresponding remaining record by $(x_{k+1}, y_{k+1})$ [and note that then $(x_1, y_1), \dots, (x_k, y_k), (x_{k+1}, y_{k+1})$ appear consecutively]; otherwise, set $(x_{k+1}, y_{k+1}) := (0, 1)$.

Observe that, (almost surely) over the event $\{K_n = k\}$, we have $x_0 > x$ and $y_{k+1} > y$. In results that follow we will only need to treat three cases: (i) $y_0 > 0$ and $x_{k+1} > 0$; (ii) $y_0 > 0$ and $x_{k+1} = 0$; and (iii) $y_0 = 0$ and $x_{k+1} = 0$. The fourth case, $y_0 = 0$ and $x_{k+1} > 0$, can be handled by symmetry with respect to the second case.
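
In computational terms, if the remaining records are kept sorted by increasing first coordinate (hence decreasing second coordinate), the broken run and the two neighboring records can be located by binary search. Here is a small Python sketch (ours, not the paper's; it assumes $(x, y)$ is already known to set a record and uses the boundary conventions $(x_0, y_0) = (1, 0)$ and $(x_{k+1}, y_{k+1}) = (0, 1)$ from above):

```python
from bisect import bisect_left

def broken_run_and_neighbors(frontier, x, y):
    """frontier: current records as (x, y) pairs sorted by increasing
    first coordinate. Returns the records broken by the new record (x, y),
    listed from southeast to northwest, plus the southeast neighbor
    (x_0, y_0) and the northwest neighbor (x_{k+1}, y_{k+1})."""
    i = bisect_left(frontier, (x, y))    # records with larger x start here
    j = i
    while j < len(frontier) and frontier[j][1] > y:
        j += 1                           # frontier[i:j] are broken
    broken = list(reversed(frontier[i:j]))           # SE-to-NW order
    southeast = frontier[j] if j < len(frontier) else (1.0, 0.0)
    northwest = frontier[i - 1] if i > 0 else (0.0, 1.0)
    return broken, southeast, northwest
```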

Our first result of this section specifies the exact joint distribution of $(x_0, y_0), (x_1, y_1), \dots, (x_{k+1}, y_{k+1})$. We write $n^{\underline{j}} := n (n - 1) \cdots (n - j + 1)$ for the falling factorial power, and we introduce abbreviations for certain sums that will appear frequently in the sequel.

Proposition 3.1.

(i) For and

we have

(ii) For and

we have

where here .

(iii) For and

we have

where here .

Proof.

We present only the proof of (i); the proofs of (ii) and (iii) are similar. We shall be slightly informal in regard to “differentials” in our presentation. The key is that the event in question (almost surely) equals the following event:

(3.1)

where $D$ is the following disjoint union of rectangular regions:

See Figure 1. But the probability of the event (3.1) is

which reduces easily to the claimed result. ∎


Figure 1. In this example, after $n - 1$ observations, none of which fall in the shaded region $D$, there are several remaining records. The $n$th observation, shown in green, breaks the remaining records shown in red but not the remaining records shown in blue.
Remark 3.2.

When $k = 0$, Proposition 3.1 is naturally and correctly interpreted as follows:

(i) For and and we have

(ii) For and and we have

(iii) For and and we have

To obtain an exact expression for $\mathbb{P}(K_n = k)$, one need only integrate out the variables in Proposition 3.1 to get

(3.2)  $\mathbb{P}(K_n = k) = p^{(\mathrm{i})}_k + 2\, p^{(\mathrm{ii})}_k + p^{(\mathrm{iii})}_k,$

where $p^{(\mathrm{i})}_k$, $p^{(\mathrm{ii})}_k$, and $p^{(\mathrm{iii})}_k$ (all of which also depend on $n$) correspond to parts (i), (ii), and (iii) of the proposition, respectively, the factor of $2$ accounting for the fourth case, which is symmetric to the second. For small values of $k$ this can be done explicitly, but for general $k$ we take an inductive approach. To get started on the induction, we first treat the case $k = 0$.

3.2. The case $k = 0$

Using Remark 3.2, we obtain the following result.

Proposition 3.3.

We have

and therefore

Proof.

Using Remark 3.2, we perform the computations in increasing order of difficulty. First, it is clear that for . Next, for we have

Finally, for we have

the final equality after two integrations by parts. Using the computation in the proof of Proposition 2.1 and the above computations, we therefore find

Now just use (3.2) to establish the asserted expression for $\mathbb{P}(K_n = 0)$. ∎

3.3. Simplifications

The expressions obtained from Proposition 3.1 for $p^{(\mathrm{i})}_k$, $p^{(\mathrm{ii})}_k$, and $p^{(\mathrm{iii})}_k$ for $k \ge 1$ are easily simplified by integrating out the four variables that don't appear in the integrand (when they do appear as variables). Here is the result.

Lemma 3.4.

Assume $k \ge 1$. Let $p^{(\mathrm{i})}_k$, $p^{(\mathrm{ii})}_k$, and $p^{(\mathrm{iii})}_k$ be defined as explained at (3.2).

(i) For we have

(ii) For we have

where here and if then the integral is taken over .

(iii) For we have

where here and if then the interpretation is .

Remark 3.5.

Alternative expressions involving only finite sums are available for $p^{(\mathrm{i})}_k$, $p^{(\mathrm{ii})}_k$, and $p^{(\mathrm{iii})}_k$ by recasting the expressions in square brackets in Lemma 3.4 as finite sums of nonnegative terms, expanding the integrand multinomially, and integrating the resulting polynomials explicitly. When this is done, one finds that $p^{(\mathrm{i})}_k$, $p^{(\mathrm{ii})}_k$, and $p^{(\mathrm{iii})}_k$ are all rational, as therefore are $\mathbb{P}(K_n = k)$ and $\mathbb{P}(K_n = k \mid K_n \ge 0)$.

Take as an example. We have

and carrying out this procedure yields

where the indicated sum is taken over -tuples of nonnegative integers summing to and the natural interpretation for is . Examples include

(3.3)

Since our aim is to compute $\mathbb{P}(K_n = k)$ only up to small additive error for large $n$, the following lemma will suffice to treat the contributions $p^{(\mathrm{iii})}_k$.

Lemma 3.6.

For $k \ge 0$, the probabilities $p^{(\mathrm{iii})}_k$ satisfy $p^{(\mathrm{iii})}_k \le \mathbb{P}(r_{n-1} = k)$.

Proof.

Recalling that $r_{n-1}$ denotes the number of remaining records at time $n - 1$, it is clear from the description of case (iii) leading up to Proposition 3.1 that over the event in question the $n$th observation breaks every remaining record at time $n - 1$, so that $r_{n-1} = k$. Therefore $p^{(\mathrm{iii})}_k \le \mathbb{P}(r_{n-1} = k)$. ∎
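
The distribution of $r_{n-1}$ is classical: sorting the observations by first coordinate turns the remaining records into the left-to-right minima of a uniformly random sequence, so $\mathbb{E}\, r_m = H_m$ and the tails are thin. A quick Monte Carlo sketch (ours; the function names are hypothetical):

```python
import random
from collections import Counter

def minima_count(points):
    """Number of remaining records (minima): sweep by increasing first
    coordinate and count left-to-right minima of the second coordinate."""
    count, best_y = 0, float("inf")
    for _, y in sorted(points):
        if y < best_y:
            count, best_y = count + 1, y
    return count

def r_distribution(m, trials=20_000, seed=0):
    """Monte Carlo pmf of r_m for i.i.d. uniforms in the unit square."""
    rng = random.Random(seed)
    tally = Counter(
        minima_count([(rng.random(), rng.random()) for _ in range(m)])
        for _ in range(trials)
    )
    return {k: tally[k] / trials for k in sorted(tally)}

print(r_distribution(1000))   # mean ≈ H_1000 ≈ 7.49, rapidly decaying tails
```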

3.4. Recurrence relations

In this subsection we establish recurrence relations for $p^{(\mathrm{i})}_k$ and $p^{(\mathrm{ii})}_k$ in the variable $k$, holding $n$ fixed and treating the probabilities $p^{(\mathrm{iii})}_k$ as known.

Lemma 3.7.

For $k \ge 1$ we have

  1. if ,

  2. if .

Proof.

(i) Begin with the expression for in Lemma 3.4 and integrate out the variable . This gives

with in the subtracted integral. For , observe that the variable does not appear within the square brackets in the integrand. Thus, integrating out and then shifting variable names, we find

where the last equality follows from Lemma 3.4. We see also from Lemma 3.4 that . This completes the proof of part (i).

(ii) The proof of part (ii) is similar. Begin with the expression for in Lemma 3.4 and integrate out the variable . This gives (with )

For , observe that the expression within equals , which doesn’t depend on . Thus, integrating out , we find

where the last equality follows from Lemma 3.4. We see also from Lemma 3.4 that . This completes the proof of part (ii). ∎

The recurrence relations of Lemma 3.7 are trivial to solve in terms of the probabilities $p^{(\mathrm{iii})}_k$ and the “initial conditions” delivered by Proposition 3.3.

Lemma 3.8.

For $n \ge 1$ and $k \ge 1$ we have

(3.4)
(3.5)
Proof.

Clearly we have (3.5) and likewise

(3.6)

Then plugging (3.5) into (3.6) and rearranging yields (3.4). ∎

3.5. Approximation to the probability $\mathbb{P}(K_n = k)$, with error bound

Theorem 3.9.

For all $n$ and every $k \ge 0$ we have

(3.7)  $\mathbb{P}(K_n = k) = 2^{-(k+1)}\, n^{-1} (H_n + 1 - k) + R_n(k),$

where $R_n(k)$ is the remainder term appearing in (1.2).
Proof.

Recall from (3.2) that $\mathbb{P}(K_n = k) = p^{(\mathrm{i})}_k + 2\, p^{(\mathrm{ii})}_k + p^{(\mathrm{iii})}_k$; substitute for $p^{(\mathrm{i})}_k$ and $p^{(\mathrm{ii})}_k$ using Lemma 3.8; then substitute for $p^{(\mathrm{i})}_0$ and $p^{(\mathrm{ii})}_0$ using Proposition 3.3; and finally rearrange.

For this gives

Denote the coefficient of (with ) by . Note that depends only on , and that (with equality for and ). So Lemma 3.6 gives the bound on the remainder term (with half as big a constant).

For this gives