# Breaking Bivariate Records

We establish a fundamental property of bivariate Pareto records for independent observations uniformly distributed in the unit square. We prove that the asymptotic conditional distribution of the number of records broken by an observation given that the observation sets a record is Geometric with parameter 1/2.


## 1. Introduction and main result

This paper proves an interesting phenomenon concerning the breaking of bivariate records first observed empirically by Daniel Q. Naiman, whom we thank for an introduction to the problem considered. We begin with some relevant definitions, taken (with trivial changes) from [4; 3]. Although our attention in this paper will be focused on dimension d = 2 (see [3, Conj. 2.2] for general d), and the approach we utilize seems to be limited to the bivariate case, we begin by giving definitions that apply for general dimension d.

Let 1(E) := 1 or 0 according as the event E is true or false. We write ln (or log_e) for natural logarithm, lg for binary logarithm, and log when the base doesn't matter. For d-dimensional vectors x = (x_1, …, x_d) and y = (y_1, …, y_d), write x ≺ y to mean that x_j < y_j for 1 ≤ j ≤ d. The notation x ≻ y means y ≺ x.

As do Bai et al. [2], we find it more convenient (in particular, expressions encountered in their computations and ours are simpler) to consider (equivalently) record-small, rather than record-large, values. Let X^(1), X^(2), … be i.i.d. (independent and identically distributed) copies of a random vector X with independent coordinates, each uniformly distributed over the unit interval.

###### Definition 1.1.

(a) We say that X^(k) is a Pareto record (or simply record, or that X^(k) sets a record at time k) if X^(i) ⊀ X^(k) for all 1 ≤ i < k.

(b) If 1 ≤ k ≤ n, we say that X^(k) is a current record (or remaining record, or minimum) at time n if X^(i) ⊀ X^(k) for all 1 ≤ i ≤ n.

(c) If k ≥ 0, we say that X^(n) breaks (or kills) k records if X^(n) sets a record and there exist precisely k values i with 1 ≤ i < n such that X^(i) is a current record at time n - 1 but is not a current record at time n.

For n ≥ 1 (or n = ∞, with the obvious conventions) let R_n denote the number of records X^(k) with 1 ≤ k ≤ n, and let r_n denote the number of remaining records at time n.
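To make the definitions concrete, here is a small sketch (Python; the helper names `is_record`, `remaining_records`, and `records_broken` are ours, introduced for illustration only, and follow the record-small convention above):

```python
def is_record(points, k):
    """points[k] sets a record iff no earlier observation is
    coordinatewise strictly smaller (record-small convention)."""
    x, y = points[k]
    return not any(px < x and py < y for px, py in points[:k])

def remaining_records(points):
    """Current records (minima): points strictly dominated by no other."""
    return [(px, py) for px, py in points
            if not any(qx < px and qy < py for qx, qy in points)]

def records_broken(points, k):
    """Number of current records among the first k points killed by
    points[k]; -1 if points[k] sets no record."""
    if not is_record(points, k):
        return -1
    x, y = points[k]
    return sum(1 for px, py in remaining_records(points[:k])
               if x < px and y < py)
```

For instance, with `pts = [(0.8, 0.8), (0.5, 0.9), (0.9, 0.5), (0.4, 0.4)]`, every observation sets a record, and the last one kills all three remaining records.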

Here is the main result of this paper.

###### Theorem 1.2.

Suppose that independent bivariate observations, each uniformly distributed in the unit square, arrive at times 1, 2, …. Let K_n = -1 if the n-th observation is not a new record, and otherwise let K_n denote the number of remaining records killed by the n-th observation. Then K_n, conditionally given K_n ≥ 0, converges in distribution to G ~ Geometric(1/2), where P(G = k) = 2^{-(k+1)} for integers k ≥ 0, as n → ∞.

Equivalently, the conclusion (with asymptotics throughout referring to n → ∞) is that

 P(K_n = k | K_n ≥ 0) → 2^{-(k+1)} for each (fixed) integer k ≥ 0. (1.1)

Here is an outline of the proof. In Section 2 we provide a simple and short proof of the well-known result that

 P(K_n ≥ 0) = n^{-1} H_n, n ≥ 1,

where H_n := ∑_{j=1}^n j^{-1} denotes the n-th harmonic number. In Section 3 (see Theorem 3.9) we show that

 |P(K_n = k) - [2^{-(k+1)} n^{-1} H_n - (k-1) 2^{-(k+2)} n^{-1}]| ≤ (1/2) n^{-2} (1.2)

for all n ≥ 1 and all k ≥ 0. The improvement

 |P(K_n = k | K_n ≥ 0) - [2^{-(k+1)} + α_{n,k}]| ≤ (1/2) n^{-1} H_n^{-1} (1.3)

to (1.1) then follows immediately, where

 α_{n,k} := -(k-1) 2^{-(k+2)} H_n^{-1}

is a first-order correction term to the Geometric(1/2) probability mass function (pmf) 2^{-(k+1)}. This improvement shows that approximation of the conditional pmf in Theorem 1.2 by the uncorrected Geometric pmf has (for large n) vanishingly small relative error not just for fixed k, but also for k growing slowly with n; the corrected approximation has small relative error over an even wider range of k. Of course we always have K_n ≤ r_{n-1}, and by [4, Rmk. 4.3(b)] r_{n-1} is almost surely only of logarithmic order; the corrected approximation thus gives small relative error for rather large values of k indeed.

As one might expect, the correction terms sum to zero: ∑_{k≥0} α_{n,k} = 0. We observe that the correction α_{n,k} is positive (and of largest magnitude in absolute-error terms) when k = 0, vanishes when k = 1, and is negative (and of nonincreasing magnitude) when k ≥ 2.
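These claims about the correction terms can be checked exactly in rational arithmetic (a sketch; the common factor H_n^{-1} is dropped since it affects neither the signs nor the zero sum):

```python
from fractions import Fraction

def term(k):
    """k-dependent part of alpha_{n,k} = -(k-1) 2^{-(k+2)} H_n^{-1}."""
    return Fraction(-(k - 1), 2 ** (k + 2))

partial = sum(term(k) for k in range(200))    # tail is O(k 2^{-k})
assert abs(partial) < Fraction(1, 10 ** 50)   # the terms sum to 0
assert term(0) > 0 and term(1) == 0           # positive at k=0, zero at k=1
assert all(term(k) < 0 for k in range(2, 20))
assert all(abs(term(k)) >= abs(term(k + 1)) for k in range(2, 20))
```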

Formulation of Theorem 1.2 was motivated by [3, Table 1], reproduced here as Table 1. Table 1 tabulates, for the first 100,000 records generated in a single trial, the number of records that break k remaining records, for each value of k. The Geometric(1/2) pattern is striking. The precise relationship between Theorem 1.2 and the phenomenon observed in Table 1 is discussed in Section 4, where a main conjecture is stated and a possible plan for completing its proof is described.
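The pattern is easy to reproduce with a direct (and deliberately naive) simulation; the sketch below streams uniform points, maintains the current minima, and tallies kills per record. Records arrive ever more rarely, so a naive approach collects only a few thousand records (Table 1's 100,000 records require the more clever generation analyzed in [3]); aggregating over several streams still shows proportions close to 2^{-(k+1)}:

```python
import random
from collections import Counter

def record_kill_counts(n_obs, rng):
    """Tally, for each record in a stream of n_obs uniform points in
    (0,1)^2, how many current minima it kills (record-small)."""
    minima, kills = [], Counter()
    for _ in range(n_obs):
        x, y = rng.random(), rng.random()
        if any(px < x and py < y for px, py in minima):
            continue                      # dominated: not a record
        killed = [(px, py) for px, py in minima if x < px and y < py]
        kills[len(killed)] += 1
        minima = [p for p in minima if p not in killed]
        minima.append((x, y))
    return kills

rng = random.Random(7)
kills = Counter()
for _ in range(50):                       # 50 independent streams
    kills += record_kill_counts(20_000, rng)
total = sum(kills.values())
for k in range(4):
    print(k, round(kills[k] / total, 3))  # roughly 0.5, 0.25, 0.125, ...
```

The proportions sit slightly above 1/2 at k = 0, consistent with the positive correction α_{n,0} discussed above.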

Throughout, we denote the n-th observation simply by X = (X, Y) (note: subscripted X's will have a different use later) and, for any Borel subset S of (0,1)^2, we denote the number of the first n - 1 observations falling in S by N_{n-1}(S).

## 2. The probability that K_n ≥ 0

In this section we compute exactly, and approximate asymptotically, the probability P(K_n ≥ 0) that the n-th observation sets a record. This result is already well known, but we give a proof for completeness.

###### Proposition 2.1.

For n ≥ 1 we have

 P(K_n ≥ 0) = n^{-1} H_n.
###### Proof.

We have

 P(K_n ≥ 0, X ∈ dx) = P(N_{n-1}((0,x)×(0,y)) = 0, X ∈ dx) = P(N_{n-1}((0,x)×(0,y)) = 0) P(X ∈ dx) = (1 - xy)^{n-1} dx dy.

Integrating, we therefore have

 P(K_n ≥ 0) = ∫_{x=0}^1 ∫_{y=0}^1 (1 - xy)^{n-1} dy dx = n^{-1} ∫_{x=0}^1 x^{-1} [1 - (1-x)^n] dx = n^{-1} ∑_{j=0}^{n-1} ∫_{x=0}^1 (1-x)^j dx = n^{-1} H_n,

as claimed. ∎
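Proposition 2.1 is easy to corroborate by simulation (a sketch; the estimate is Monte Carlo, so the agreement is only statistical):

```python
import random

def harmonic(n):
    return sum(1 / j for j in range(1, n + 1))

def estimate_record_probability(n, trials, rng):
    """Fraction of trials in which the n-th of n uniform points in
    (0,1)^2 sets a record, i.e. no earlier point lies strictly
    below and to its left."""
    hits = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        x, y = pts[-1]
        hits += not any(px < x and py < y for px, py in pts[:-1])
    return hits / trials

rng = random.Random(1)
n = 10
est = estimate_record_probability(n, 100_000, rng)
print(est, harmonic(n) / n)   # both near 0.293
```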

## 3. The probability that K_n = k

In this section, we compute P(K_n = k) for each k ≥ 0 exactly and produce the approximation (3.7) with its stated error bound.

### 3.1. The exact probability

Over the event K_n = k (with k ≥ 1), denote those remaining records at time n - 1 broken by X := X^(n), in order from southeast to northwest (that is, in decreasing order of first coordinate and increasing order of second coordinate), by X_1 = (X_1, Y_1), …, X_k = (X_k, Y_k). Note that if we read all the remaining records at time n - 1 in order from southeast to northwest, then X_1, …, X_k appear consecutively.

If there are any remaining records at time n - 1 with second coordinate smaller than Y, choose the one with the largest such second coordinate and denote it by X_0 = (X_0, Y_0) [and note that then X_0, X_1, …, X_k appear consecutively]; otherwise, set X_0 := e_1 = (1, 0).

Similarly, if there are any remaining records at time n - 1 with first coordinate smaller than X, choose the one with the largest such first coordinate and denote it by X_{k+1} = (X_{k+1}, Y_{k+1}) [and note that then X_1, …, X_k, X_{k+1} appear consecutively]; otherwise, set X_{k+1} := e_2 = (0, 1).

Observe that, (almost surely) over the event K_n = k, we have Y_0 < Y and X_{k+1} < X. In results that follow we will only need to treat three cases: (i) X_0 ≠ e_1 and X_{k+1} ≠ e_2; (ii) X_0 = e_1 and X_{k+1} ≠ e_2; and (iii) X_0 = e_1 and X_{k+1} = e_2. The fourth case, X_0 ≠ e_1 and X_{k+1} = e_2, can be handled by symmetry with respect to the second case.

Our first result of this section specifies the exact joint distribution of (X, X_0, X_1, …, X_{k+1}). We write n^{\underline{k}} for the falling factorial power

 n^{\underline{k}} := n(n-1)⋯(n-k+1) = k! \binom{n}{k},

and we introduce the abbreviations

 ∑_j^k := ∑_{i=j}^k (x_{i-1} - x_i) y_i,  ∑^k := ∑_1^k,

for sums that will appear frequently in the sequel.

###### Proposition 3.1.

(i) For k ≥ 0 and

 1 > x_0 > ⋯ > x_k > x > x_{k+1} > 0  and  0 < y_0 < y < y_1 < ⋯ < y_k < y_{k+1} < 1,

we have

 P(K_n = k; X ∈ dx; X_i ∈ dx_i for i = 0, …, k+1) = (n-1)^{\underline{k+2}} [1 - {∑^k + x_k y_{k+1}}]^{n-(k+3)} dx dx_0 ⋯ dx_{k+1}.

(ii) For k ≥ 0 and

 1 > x_1 > ⋯ > x_k > x > x_{k+1} > 0  and  0 < y < y_1 < ⋯ < y_k < y_{k+1} < 1,

we have

 P(K_n = k; X ∈ dx; X_0 = e_1; X_i ∈ dx_i for i = 1, …, k+1) = (n-1)^{\underline{k+1}} [1 - {∑^k + x_k y_{k+1}}]^{n-(k+2)} dx dx_1 ⋯ dx_{k+1},

where here x_0 := 1.

(iii) For k ≥ 0 and

 1 > x_1 > ⋯ > x_k > x > 0  and  0 < y < y_1 < ⋯ < y_k < 1,

we have

 P(K_n = k; X ∈ dx; X_0 = e_1; X_i ∈ dx_i for i = 1, …, k; X_{k+1} = e_2) = (n-1)^{\underline{k}} [1 - {∑^k + x_k}]^{n-(k+1)} dx dx_1 ⋯ dx_k,

where here x_0 := 1 and y_{k+1} := 1.

###### Proof.

We present only the proof of (i); the proofs of (ii) and (iii) are similar. We shall be slightly informal in regard to “differentials” in our presentation. The key is that the event in question (almost surely) equals the following event:

 {N_{n-1}(dx_i) = 1 for i = 0, …, k+1; N_{n-1}(S) = 0; X ∈ dx}, (3.1)

where S is the following disjoint union of rectangular regions:

 S = ∪_{i=1}^k [(x_i, x_{i-1}) × (0, y_i)] ∪ [(0, x_k) × (0, y_{k+1})].

See Figure 1. But the probability of the event (3.1) is

 (n-1)^{\underline{k+2}} [∏_{i=0}^{k+1} dx_i] × [1 - λ(S)]^{n-(k+3)} × dx,

where λ denotes Lebesgue measure, and this reduces easily to the claimed result. ∎

###### Remark 3.2.

When k = 0, Proposition 3.1 is naturally and correctly interpreted as follows:

(i) For 1 > x_0 > x > x_1 > 0 and 0 < y_0 < y < y_1 < 1 we have

 P(K_n = 0; X ∈ dx; X_0 ∈ dx_0; X_1 ∈ dx_1) = (n-1)^{\underline{2}} (1 - x_0 y_1)^{n-3} dx dx_0 dx_1.

(ii) For 1 > x > x_1 > 0 and 0 < y < y_1 < 1 we have

 P(K_n = 0; X ∈ dx; X_0 = e_1; X_1 ∈ dx_1) = (n-1) (1 - y_1)^{n-2} dx dx_1.

(iii) For 0 < x < 1 and 0 < y < 1 we have

 P(K_n = 0; X ∈ dx; X_0 = e_1; X_1 = e_2) = 1(n = 1) dx.

To obtain an exact expression for P(K_n = k), one need only integrate out the variables in Proposition 3.1 to get

 P(K_n = k) = A_k + 2 B_k + C_k, (3.2)

where A_k, B_k, and C_k (all of which also depend on n) correspond to parts (i), (ii), and (iii) of the proposition, respectively, and the factor of 2 accounts for the fourth case, which is symmetric to part (ii). For small values of k this can be done explicitly, but for general k we take an inductive approach. To get started on the induction, we first treat the case k = 0.

### 3.2. The case k=0

Using Remark 3.2, we obtain the following result.

###### Proposition 3.3.

We have

 A_0 = 1(n ≥ 3) [(1/2) n^{-1} H_n - (3/4) n^{-1}],  B_0 = 1(n ≥ 2) (1/2) n^{-1},  C_0 = 1(n = 1),

and therefore

 P(K_n = 0) = (1/2) n^{-1} H_n + (1/4) n^{-1} if n ≥ 2,  and P(K_1 = 0) = 1.
###### Proof.

Using Remark 3.2, we perform the computations in increasing order of difficulty. First, it is clear that C_0 = 1(n = 1). Next, for n ≥ 2 we have

 B_0 = ∫_{1 > x > x_1 > 0, 0 < y < y_1 < 1} (n-1) (1 - y_1)^{n-2} dx dx_1 dy dy_1 = (1/2) ∫_{y=0}^1 (1 - y)^{n-1} dy = (1/2) n^{-1}.

Finally, for n ≥ 3 we have

 A_0 = ∫_{1 > x_0 > x > x_1 > 0, 0 < y_0 < y < y_1 < 1} (n-1)^{\underline{2}} (1 - x_0 y_1)^{n-3} dx dx_0 dx_1 dy dy_0 dy_1,

the final evaluation following after two integrations by parts. Using the computation in the proof of Proposition 2.1 and the above computation of B_0, for n ≥ 3 we therefore find

 A_0 = (1/2) P(K_n ≥ 0) - (1/2) n^{-1} - (1/2) B_0 = (1/2) n^{-1} H_n - (1/2) n^{-1} - (1/4) n^{-1} = (1/2) n^{-1} H_n - (3/4) n^{-1}.

Now just use (3.2) to establish the asserted expression for P(K_n = 0). ∎
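Proposition 3.3 can be spot-checked by simulation (a sketch; the tolerance is statistical, and the helper `kills_of_last` is ours):

```python
import random

def harmonic(n):
    return sum(1 / j for j in range(1, n + 1))

def kills_of_last(pts):
    """K_n for the last point: -1 if it sets no record, else the
    number of minima of the earlier points that it dominates."""
    x, y = pts[-1]
    rest = pts[:-1]
    if any(px < x and py < y for px, py in rest):
        return -1
    minima = [(px, py) for px, py in rest
              if not any(qx < px and qy < py for qx, qy in rest)]
    return sum(1 for px, py in minima if x < px and y < py)

rng = random.Random(2)
n, trials = 8, 100_000
hits = sum(
    kills_of_last([(rng.random(), rng.random()) for _ in range(n)]) == 0
    for _ in range(trials))
exact = harmonic(n) / (2 * n) + 1 / (4 * n)   # (1/2) n^{-1} H_n + (1/4) n^{-1}
print(hits / trials, exact)
```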

### 3.3. Simplifications

The expressions obtained from Proposition 3.1 for A_k, B_k, and C_k are easily simplified by integrating out the four variables x, y, y_0, and x_{k+1} that don't appear in the integrand (when they do appear as variables). Here is the result.

###### Lemma 3.4.

Assume k ≥ 0. Let A_k, B_k, and C_k be defined as explained at (3.2).

(i) For n ≥ k + 3 we have

 A_k = (1/4) (n-1)^{\underline{k+2}} ∫_{1 > x_0 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_{k+1} < 1} x_k^2 y_1^2 [1 - {∑^k + x_k y_{k+1}}]^{n-(k+3)} dx_0 ⋯ dx_k dy_1 ⋯ dy_{k+1}.

(ii) For n ≥ k + 2 we have

 B_k = (1/2) (n-1)^{\underline{k+1}} ∫_{1 > x_1 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_{k+1} < 1} x_k^2 y_1 [1 - {∑^k + x_k y_{k+1}}]^{n-(k+2)} dx_1 ⋯ dx_k dy_1 ⋯ dy_{k+1},

where here x_0 := 1 and, if k = 0, the integral is taken over 0 < y_1 < 1 alone.

(iii) For n ≥ k + 1 we have

 C_k = (n-1)^{\underline{k}} ∫_{1 > x_1 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_k < 1} x_k y_1 [1 - {∑^k + x_k}]^{n-(k+1)} dx_1 ⋯ dx_k dy_1 ⋯ dy_k,

where here x_0 := 1 and y_{k+1} := 1, and if k = 0 then the interpretation is C_0 = 1(n = 1).

###### Remark 3.5.

Alternative expressions involving only finite sums are available for A_k, B_k, and C_k by recasting the expressions in square brackets in Lemma 3.4 as finite sums of nonnegative terms, expanding the integrand multinomially, and integrating the resulting polynomials explicitly. When this is done, one finds that A_k, B_k, and C_k are all rational, as therefore are P(K_n = k) and P(K_n = k | K_n ≥ 0).

Take C_k as an example. We have

 1 - {∑^k + x_k} = ∑_{i=1}^k (x_{i-1} - x_i)(1 - y_i),

and carrying out this procedure yields

 C_k = n^{-2} ∑ ∏_{i=1}^k (i + ∑_{ℓ=k+1-i}^k j_ℓ)^{-1},

where the indicated sum is taken over k-tuples (j_1, …, j_k) of nonnegative integers summing to n - (k+1), and the natural interpretation for k = 0 is C_0 = 1(n = 1). Examples include

 C_1 = n^{-2} (n-1)^{-1}, n ≥ 2;  C_2 = n^{-2} (n-1)^{-1} H_{n-2}, n ≥ 3;  C_{n-1} = n^{-2} ∏_{i=1}^{n-1} i^{-1} = (n! n)^{-1}, n ≥ 1. (3.3)
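The finite-sum expression for C_k can be checked exactly against these closed forms in rational arithmetic (a sketch; `compositions`, an illustrative helper of ours, enumerates the k-tuples (j_1, …, j_k)):

```python
from fractions import Fraction
from math import factorial

def harmonic(n):
    return sum(Fraction(1, j) for j in range(1, n + 1))

def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 0:
        return [()] if total == 0 else []
    return [(j,) + rest for j in range(total + 1)
            for rest in compositions(total - j, parts - 1)]

def C(n, k):
    """C_k = n^{-2} * sum over tuples of prod_i (i + j_{k+1-i}+...+j_k)^{-1}."""
    s = Fraction(0)
    for js in compositions(n - (k + 1), k):
        term = Fraction(1)
        for i in range(1, k + 1):
            term /= i + sum(js[k - i:])   # i + j_{k+1-i} + ... + j_k
        s += term
    return s / n ** 2

for n in range(2, 8):
    assert C(n, 1) == Fraction(1, n ** 2 * (n - 1))
    if n >= 3:
        assert C(n, 2) == harmonic(n - 2) / (n ** 2 * (n - 1))
    assert C(n, n - 1) == Fraction(1, n * factorial(n))
```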

Since our aim is to compute P(K_n = k) only up to additive error of order n^{-2} for large n, the following lemma will suffice to treat the contributions C_k.

###### Lemma 3.6.

For n ≥ 1, the probabilities C_k satisfy

 ∑_{k=0}^∞ C_k = ∑_{k=0}^{n-1} C_k = n^{-2}.
###### Proof.

Recalling that r_n denotes the number of remaining records at time n, it is clear from the description of case (iii) leading up to Proposition 3.1 that

 C_k = P(r_{n-1} = k, K_n = k) = P(r_{n-1} = k, K_n = r_{n-1}).

Therefore

 ∑_{k=0}^∞ C_k = P(K_n = r_{n-1}) = P(X ≺ X^{(i)} for all 1 ≤ i ≤ n-1) = n^{-2},

the last equality because the event in question requires each coordinate of X to be the minimum of the n corresponding i.i.d. uniform coordinates. ∎
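The key step of this proof can be seen directly in simulation: K_n = r_{n-1} happens exactly when the last point dominates every earlier point, i.e. when both of its coordinates are minimal, which has probability n^{-1} · n^{-1} (a sketch; the check is Monte Carlo):

```python
import random

rng = random.Random(3)
n, trials = 6, 200_000
hits = 0
for _ in range(trials):
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    x, y = pts[-1]
    # last point dominates every earlier point <=> both coords minimal
    hits += all(x < px and y < py for px, py in pts[:-1])
print(hits / trials, 1 / n ** 2)   # both near 1/36 ≈ 0.0278
```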

### 3.4. Recurrence relations

In this subsection we establish recurrence relations for A_k and B_k in the variable k, holding n fixed and treating the probabilities C_k as known.

###### Lemma 3.7.

For k ≥ 1 we have

1. A_k = (1/2)(A_{k-1} - B_k) if n ≥ k + 3,

2. B_k = (1/2)(B_{k-1} - C_k) if n ≥ k + 2.

###### Proof.

(i) Begin with the expression for A_k in Lemma 3.4 and integrate out the variable y_{k+1}. This gives

 A_k = (1/4) (n-1)^{\underline{k+1}} × (∫_{1 > x_0 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_k < 1} x_k y_1^2 [1 - {∑^k + x_k y_k}]^{n-(k+2)} - ∫_{1 > x_0 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_k < 1} x_k y_1^2 [1 - {∑^k + x_k}]^{n-(k+2)}),

with y_{k+1} := 1 in the subtracted integral. For the first integral, observe that ∑^k + x_k y_k = ∑^{k-1} + x_{k-1} y_k, so that the variable x_k does not appear within the square brackets in the integrand. Thus, integrating out x_k and then shifting variable names, we find that the first contribution is

 A'_k := (1/8) (n-1)^{\underline{k+1}} ∫_{1 > x_0 > ⋯ > x_{k-1} > 0, 0 < y_1 < ⋯ < y_k < 1} x_{k-1}^2 y_1^2 [1 - {∑^{k-1} + x_{k-1} y_k}]^{n-(k+2)} = (1/2) A_{k-1},

where the last equality follows from Lemma 3.4. We see also from Lemma 3.4 that the subtracted contribution equals (1/2) B_k. This completes the proof of part (i).

(ii) The proof of part (ii) is similar. Begin with the expression for B_k in Lemma 3.4 and integrate out the variable y_{k+1}. This gives (with x_0 := 1)

 B_k = (1/2) (n-1)^{\underline{k}} (∫_{1 > x_1 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_k < 1} x_k y_1 [1 - {∑^k + x_k y_k}]^{n-(k+1)} - ∫_{1 > x_1 > ⋯ > x_k > 0, 0 < y_1 < ⋯ < y_k < 1} x_k y_1 [1 - {∑^k + x_k}]^{n-(k+1)}),

with y_{k+1} := 1 in the subtracted integral. For the first integral, observe that the expression within square brackets equals 1 - {∑^{k-1} + x_{k-1} y_k}, which doesn't depend on x_k. Thus, integrating out x_k, we find that the first contribution is

 B'_k := (1/4) (n-1)^{\underline{k}} ∫_{1 > x_1 > ⋯ > x_{k-1} > 0, 0 < y_1 < ⋯ < y_k < 1} x_{k-1}^2 y_1 [1 - {∑^{k-1} + x_{k-1} y_k}]^{n-(k+1)} = (1/2) B_{k-1},

where the last equality follows from Lemma 3.4. We see also from Lemma 3.4 that the subtracted contribution equals (1/2) C_k. This completes the proof of part (ii). ∎

The recurrence relations of Lemma 3.7 are trivial to solve in terms of the probabilities C_k and the “initial conditions” A_0 and B_0 delivered by Proposition 3.3.

###### Lemma 3.8.

For k ≥ 0 and n ≥ 1 we have

 A_k = 1(n ≥ k+3) [2^{-k} A_0 - k 2^{-(k+1)} B_0 + ∑_{j=1}^k (k+1-j) 2^{-(k+2-j)} C_j], (3.4)

 B_k = 1(n ≥ k+2) [2^{-k} B_0 - ∑_{j=1}^k 2^{-(k+1-j)} C_j]. (3.5)
###### Proof.

Clearly the recurrence of Lemma 3.7(2) unrolls to (3.5), and likewise the recurrence of Lemma 3.7(1) gives

 A_k = 2^{-k} A_0 - ∑_{j=1}^k 2^{-(k+1-j)} B_j. (3.6)

Then plugging (3.5) into (3.6) and rearranging yields (3.4). ∎
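Lemma 3.8 can be confirmed exactly by iterating the recurrences of Lemma 3.7 from arbitrary rational inputs (a sketch; the indicator factors are ignored by pretending n is large, and the values below for A_0, B_0, and the C_j are placeholders, not the true probabilities):

```python
from fractions import Fraction as F

A0, B0 = F(3, 7), F(2, 11)                   # arbitrary rational "data"
Cs = {j: F(1, j + 5) for j in range(1, 13)}  # placeholder C_j values

def closed_A(k):   # right-hand side of (3.4), indicator omitted
    return (A0 / 2 ** k - k * B0 / 2 ** (k + 1)
            + sum((k + 1 - j) * Cs[j] / 2 ** (k + 2 - j)
                  for j in range(1, k + 1)))

def closed_B(k):   # right-hand side of (3.5), indicator omitted
    return B0 / 2 ** k - sum(Cs[j] / 2 ** (k + 1 - j)
                             for j in range(1, k + 1))

A, B = A0, B0
for k in range(1, 13):
    B = (B - Cs[k]) / 2        # Lemma 3.7(2): B_k = (B_{k-1} - C_k)/2
    A = (A - B) / 2            # Lemma 3.7(1): A_k = (A_{k-1} - B_k)/2
    assert A == closed_A(k) and B == closed_B(k)
```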

### 3.5. Approximation to the probability P(K_n = k), with error bound

###### Theorem 3.9.

For n ≥ 1 and every k ≥ 0 we have

 |P(K_n = k) - [2^{-(k+1)} n^{-1} H_n - (k-1) 2^{-(k+2)} n^{-1}]| ≤ (1/2) n^{-2}. (3.7)
###### Proof.

Recall from (3.2) that P(K_n = k) = A_k + 2B_k + C_k; substitute for A_k and B_k using Lemma 3.8; then substitute for A_0 and B_0 using Proposition 3.3; and finally rearrange.

For n ≥ k + 3 this gives

 P(K_n = k) = 2^{-k} A_0 - (k-4) 2^{-(k+1)} B_0 + ∑_{j=1}^{k-1} (k-3-j) 2^{-(k+2-j)} C_j + (1/4) C_k = 2^{-(k+1)} n^{-1} H_n - (k-1) 2^{-(k+2)} n^{-1} + ∑_{j=1}^{k-1} (k-3-j) 2^{-(k+2-j)} C_j + (1/4) C_k.

Denote the coefficient of C_j (with 1 ≤ j ≤ k) in the last expression by c_j. Note that c_j depends only on k - j, and that |c_j| ≤ 1/4 (with equality for j = k - 1 and j = k). So Lemma 3.6 gives the bound asserted in (3.7) on the remainder term (indeed, with half as big a constant).
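All the pieces above can be assembled into an exact rational computation of P(K_n = k), against which both the bound (3.7) and Proposition 2.1 can be verified for small n (a sketch; `compositions`, an illustrative helper of ours, enumerates the tuples in Remark 3.5's finite-sum formula for C_k):

```python
from fractions import Fraction as F

def H(n):
    return sum(F(1, j) for j in range(1, n + 1))

def compositions(total, parts):
    if parts == 0:
        return [()] if total == 0 else []
    return [(j,) + rest for j in range(total + 1)
            for rest in compositions(total - j, parts - 1)]

def C(n, k):
    """C_k via the finite-sum expression of Remark 3.5."""
    if k == 0:
        return F(int(n == 1))
    s = F(0)
    for js in compositions(n - (k + 1), k):
        t = F(1)
        for i in range(1, k + 1):
            t /= i + sum(js[k - i:])
        s += t
    return s / n ** 2

def P(n, k):
    """Exact P(K_n = k) assembled from (3.2), Prop. 3.3 and Lemma 3.8."""
    A0 = (n >= 3) * (H(n) / (2 * n) - F(3, 4 * n))
    B0 = (n >= 2) * F(1, 2 * n)
    Bk = (n >= k + 2) * (B0 / 2 ** k
         - sum(C(n, j) / 2 ** (k + 1 - j) for j in range(1, k + 1)))
    Ak = (n >= k + 3) * (A0 / 2 ** k - k * B0 / 2 ** (k + 1)
         + sum((k + 1 - j) * C(n, j) / 2 ** (k + 2 - j)
               for j in range(1, k + 1)))
    return Ak + 2 * Bk + C(n, k)

for n in range(1, 9):
    assert sum(P(n, k) for k in range(n)) == H(n) / n     # Proposition 2.1
    for k in range(n):
        approx = H(n) / (2 ** (k + 1) * n) - F(k - 1, 2 ** (k + 2) * n)
        assert abs(P(n, k) - approx) <= F(1, 2 * n ** 2)  # bound (3.7)
```

For instance, P(2, 1) = 1/4, matching the direct computation that the second observation dominates the first with probability 1/4.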

For n = k + 2 this gives

 P(K