# Optimal Group-Sequential Tests with Groups of Random Size

We consider sequential hypothesis testing based on observations which are received in groups of random size. The observations are assumed to be independent both within and between the groups. We assume that the group sizes are independent and their distributions are known, and that the groups are formed independently of the observations. We are concerned with a problem of testing a simple hypothesis against a simple alternative. For any (group-) sequential test, we take into account the following three characteristics: its type I and type II error probabilities and the average cost of observations. Under mild conditions, we characterize the structure of sequential tests minimizing the average cost of observations among all sequential tests whose type I and type II error probabilities do not exceed some prescribed levels.


## 1 Introduction

In this article, we consider sequential hypothesis testing when the observations are received in groups of a random size, rather than on a one-at-a-time basis (we adhere to the statistical model proposed by Mukhopadhyay and de Silva [2008] for this context). There are many practical situations where the random group size model arises, and many theoretical problems that come with it (see Mukhopadhyay and de Silva [2008]). In this article we address the problem of optimality of random group-sequential tests for the case of two simple hypotheses, covering the theoretical aspects of their optimality.

In the case of random groups of observations, there are different ways to quantify the volume of observations taken for the analysis; e.g., particular interest may lie in the total number of observations, or in the number of groups taken (see Mukhopadhyay and de Silva [2008]). To accommodate these differences, in this article we introduce a natural concept of the cost of observations, accounting for the number of groups and/or for the number of observations within the groups, and use the average cost as one of the characteristics to be taken into account.

Our main objective is to characterize all the tests minimizing the average cost of the experiment, among all group-sequential tests whose type I and type II error probabilities do not exceed some given levels.

As a natural starting point, recall the classical framework of one-per-group observations, where the average sample number is minimized under restrictions on the error probabilities of the first and second kind. Wald and Wolfowitz [1948] show that Wald's Sequential Probability Ratio Test (SPRT) has a minimum average sample number among all tests whose error probabilities do not exceed those of the SPRT. The minimum is attained both under the null hypothesis and under the alternative; this strong optimality property is known as the Wald-Wolfowitz optimality.

For the group-sequential model we adhere to in this article, Mukhopadhyay and de Silva [2008] proposed an extension of the classical SPRT, called RSPRT. In this article, we want to characterize the structure of optimal sequential tests and, in particular, show the optimality of the RSPRT, in the Wald-Wolfowitz sense, when the group sizes are identically distributed.

For the more general case, when the group sizes do not necessarily have the same distribution, we use a weaker approach related to the minimization of the average cost under one of the hypotheses (see Lorden [1980]). If a test optimal in the Wald-Wolfowitz sense is ever found for a specific group size distribution model, it must minimize the average cost under each of the hypotheses, and thus must be of the particular form we find here.

In this way, our main concern in this article is the characterization of optimal group-sequential tests, which minimize the average cost of the observations given restrictions on the error probabilities.

In Section 2, the main definitions and assumptions are presented.

In Section 3, the problem of finding optimal tests is reduced to an optimal stopping problem.

In Section 4, characterizations of optimal sequential rules are given.

In Section 5, the optimality of the random sequential probability ratio tests (RSPRT) is demonstrated.

The proofs of the main results are placed in the Appendix A.

## 2 Notation and Assumptions

We assume that i.i.d. observations are available to the statistician sequentially, in groups numbered by $i=1,2,\dots$. The group sizes are assumed to be values of some independent integer-valued random variables $\nu_1,\nu_2,\dots$. The distributions of the $\nu_i$ are assumed to be fixed and known to the statistician, but the distribution of the observations (we denote it $P_\theta$) depends on a parameter $\theta$, and the goal of the statistician is to test two simple hypotheses, $H_0:\theta=\theta_0$ against $H_1:\theta=\theta_1$. The number of groups to be taken for the analysis is up to the statistician, and is to be determined on the basis of the observations available up to the moment of stopping.

Below we formalize this procedure in detail.

For any natural $k$, we denote by $X_i^{(n_i)}=(X_{i1},\dots,X_{in_i})$ the vector of observations in the $i$-th group, $i=1,\dots,k$. If $n_i=0$ (no observations in the group), we will formally write $X_i^{(0)}=\emptyset$. The group sizes are assumed to be values of independent random variables $\nu_1,\nu_2,\dots$, with respective probability mass functions (p.m.f.)

$$P(\nu_i=n)=p_i(n),\quad n\in G\subset\{0,1,2,\dots\},\quad i=1,2,\dots\tag{2.1}$$

Then the joint distribution of the first $k$ consecutive group sizes (let us denote $\nu^{(k)}=(\nu_1,\dots,\nu_k)$ and $n=(n_1,\dots,n_k)$) is given by

$$P(\nu^{(k)}=n)=p(n)=\prod_{i=1}^{k}p_i(n_i),\quad n\in G^k,$$

for all $k=1,2,\dots$

It is assumed that the observations $X_{ij}$, $j=1,\dots,n_i$, $i=1,\dots,k$, are conditionally independent and identically distributed, given the group sizes $\nu_1=n_1,\dots,\nu_k=n_k$, for all $k=1,2,\dots$

Let each observation take its “values” in a measurable space $(\mathcal{X},\mathcal{B})$, and assume that its distribution $P_\theta$ has a “density” function $f_\theta$ on $\mathcal{X}$ (the Radon–Nikodym derivative of $P_\theta$ with respect to a $\sigma$-finite measure $\mu$ on $\mathcal{X}$).

In this way, for any number of groups $k$ and any consecutive group sizes $n=(n_1,\dots,n_k)$, the random vector of observations $X^{(n)}=(X_1^{(n_1)},\dots,X_k^{(n_k)})$ has the joint density

$$f_\theta^{(n)}(x^{(n)})=f_\theta^{(n)}(x_1^{(n_1)},\dots,x_k^{(n_k)})=\prod_{i=1}^{k}\prod_{j=1}^{n_i}f_\theta(x_{ij})$$

with respect to the product measure $\mu^{(n)}=\mu^{n_1}\otimes\dots\otimes\mu^{n_k}$, where $\mu^m=\mu\otimes\dots\otimes\mu$ ($m$ times). By definition, we assume here that $\mathcal{X}^0=\{\emptyset\}$ and that $\mu^0$ is a probability measure on the (trivial) $\sigma$-algebra on $\mathcal{X}^0$.

Throughout the paper, it will be assumed that the distributions of the observations under $\theta_0$ and under $\theta_1$ are distinct:

$$\mu\{x:f_{\theta_0}(x)\neq f_{\theta_1}(x)\}>0.\tag{2.2}$$

We define a (randomized) stopping rule as a family $\psi=(\psi_n)_{n\in G^k,\,k=1,2,\dots}$ of measurable functions $\psi_n=\psi_n(x^{(n)})$ with values in $[0,1]$, where $\psi_n(x^{(n)})$ represents the conditional probability to stop, given the number of groups $k$, the group sizes $n=(n_1,\dots,n_k)$, and the data $x^{(n)}$ observed up to the time of stopping.

In a similar manner, a (randomized) decision rule is a family $\phi=(\phi_n)_{n\in G^k,\,k=1,2,\dots}$ of measurable functions $\phi_n=\phi_n(x^{(n)})$ with values in $[0,1]$, where $\phi_n(x^{(n)})$ represents the conditional probability to reject $H_0$, given the number $k$ of groups observed, the group sizes $n$ and the data $x^{(n)}$ observed up to the time of the final decision (to accept or to reject $H_0$).

A group-sequential test is a pair $(\psi,\phi)$ of a stopping rule and a decision rule.

For $n\in G^k$, with any $k=1,2,\dots$, let

$$t_n^\psi(x^{(n)})=\bigl(1-\psi_{(n_1)}(x_1^{(n_1)})\bigr)\cdots\bigl(1-\psi_{(n_1,\dots,n_{k-1})}(x_1^{(n_1)},\dots,x_{k-1}^{(n_{k-1})})\bigr),\tag{2.3}$$

$$s_n^\psi(x^{(n)})=t_n^\psi(x^{(n)})\,\psi_n(x^{(n)})\tag{2.4}$$

(by definition, $t_n^\psi\equiv 1$ for $k=1$).

Let us also denote, for $n\in G^k$, $T_n^\psi=\{x^{(n)}:t_n^\psi(x^{(n)})>0\}$ and $S_n^\psi=\{x^{(n)}:s_n^\psi(x^{(n)})>0\}$.

In expressions like $E_\theta s_n^\psi$ below, we treat $s_n^\psi$ as $s_n^\psi(X^{(n)})$, in spite of its initial definition as a function of $x^{(n)}$. Throughout the paper, we will use this kind of double interpretation for any function of the observations, according to the following rule: if $h_n=h_n(x^{(n)})$, $n\in G^k$, is some function of the observations and its arguments are omitted, then $h_n$ is interpreted as $h_n(X^{(n)})$ when it is under a probability or expectation sign, and as $h_n(x^{(n)})$ otherwise.

Any stopping rule $\psi$ generates a random variable $\tau_\psi$ (the stopping time, i.e. the number of groups taken up to stopping), with p.m.f.

$$P_\theta(\tau_\psi=k)=\sum_{n\in G^k}p(n)E_\theta s_n^\psi,\quad k=1,2,\dots,$$

where $E_\theta$ is the expectation with respect to $P_\theta$. Under $H_i$, a stopping rule $\psi$ terminates the testing procedure with probability 1 if

$$P_{\theta_i}(\tau_\psi<\infty)=\sum_{k=1}^{\infty}\sum_{n\in G^k}p(n)E_{\theta_i}s_n^\psi=1.\tag{2.5}$$

Let us denote by $F^i$ the set of stopping rules satisfying (2.5), $i=0,1$, and let $F=F^0\cap F^1$ be the set of stopping rules terminating with probability one under both hypotheses; the group-sequential tests $(\psi,\phi)$ with $\psi\in F$ are the main object of our optimization.

For brevity, we will write $P_i$, $E_i$ and $f_i$ instead of $P_{\theta_i}$, $E_{\theta_i}$ and $f_{\theta_i}$, respectively, $i=0,1$.

For a group-sequential test $(\psi,\phi)$, the type I and type II error probabilities are defined as

$$\alpha(\psi,\phi)=\sum_{k=1}^{\infty}\sum_{n\in G^k}p(n)E_0\,s_n^\psi\phi_n,\qquad \beta(\psi,\phi)=\sum_{k=1}^{\infty}\sum_{n\in G^k}p(n)E_1\,s_n^\psi(1-\phi_n).$$

Let us assume that the cost of a group of $n$ observations is $c(n)$, $n\in G$. The cost of the first $k$ stages of the experiment, for given group sizes $n=(n_1,\dots,n_k)$, is then $c(n)=\sum_{i=1}^{k}c(n_i)$. The average cost of obtaining the $k$-th group of observations is

$$\bar c_k=\sum_{n\in G}p_k(n)c(n).$$

We assume that $\bar c_k<\infty$ for all $k$. The average total cost of the sequential sampling according to a stopping rule $\psi$ is, under $H_i$,

$$K_i(\psi)=\sum_{k=1}^{\infty}\sum_{n\in G^k}p(n)c(n)E_i s_n^\psi,\quad i=0,1.$$

Here are some natural particular cases of the cost structure. If $c(n)\equiv 1$ for all $n\in G$, then $K_i(\psi)$ corresponds to the average number of groups taken. Another particular case is $c(n)=n$ for all $n\in G$; it accounts for the average total number of observations. A combination of these two seems to be quite useful for some applications: $c(n)=c_1+c_2n$, with $c_1,c_2>0$ (see, for example, Schmitz [1993]).
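As a quick numeric illustration of the combined cost structure, the following sketch computes the average cost $\bar c_k$ of one group for $c(n)=c_1+c_2n$. All numeric values (the group-size p.m.f., $c_1$, $c_2$) are assumptions chosen for illustration, not taken from the paper.

```python
# Average cost of one group, c̄ = Σ_n p(n) c(n), for the combined cost
# c(n) = c1 + c2 * n (c1 charges per group taken, c2 per observation).
def average_group_cost(pmf, c1=1.0, c2=0.5):
    """pmf maps a group size n to its probability p(n)."""
    return sum(p * (c1 + c2 * n) for n, p in pmf.items())

pmf = {n: 0.25 for n in range(1, 5)}  # assumed: group size uniform on {1,2,3,4}
cbar = average_group_cost(pmf)        # = c1 + c2 * E[nu] = 1 + 0.5 * 2.5 = 2.25
```

For identically distributed group sizes (the stationary case of Section 5) this single value $\bar c$ is the cost of every stage.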

It is seen from the above definitions that our group-sequential experiment always starts with one group of observations (i.e., stage $k=1$ is always present). The usual (for sequential analysis) case when the experiment admits stopping without taking any observations can easily be incorporated in this scheme by supposing that the first group always has size $0$ (so that stopping at this stage means no observations will be taken).

The usual context for sequential hypothesis testing is to minimize average costs under restrictions on the type I and type II error probabilities. We are concerned in this article with minimizing $K_0(\psi)$ and/or $K_1(\psi)$ under the restrictions

$$\alpha(\psi,\phi)\le\alpha\quad\text{and}\quad\beta(\psi,\phi)\le\beta,\tag{2.6}$$

over all tests $(\psi,\phi)$ with $\psi\in F^0$ (and/or $\psi\in F^1$), where $\alpha$ and $\beta$ are some given numbers.

## 3 Reduction to optimal stopping problem

The problem of minimizing the average cost under the restrictions (2.6) on the error probabilities is routinely reduced to an unconstrained optimization problem by the method of Lagrange multipliers.

Let us define the Lagrangian function

$$L(\psi,\phi;\lambda_0,\lambda_1):=K_0(\psi)+\lambda_0\alpha(\psi,\phi)+\lambda_1\beta(\psi,\phi),\tag{3.1}$$

where $\lambda_0>0$ and $\lambda_1>0$ are constant multipliers.

The following lemma is the essence of the reduction, and is almost trivial. It is placed here for the convenience of reference.

###### Lemma 3.1.

Let $\lambda_0>0$ and $\lambda_1>0$, and let a test $(\psi,\phi)$ be such that

$$L(\psi,\phi;\lambda_0,\lambda_1)\le L(\psi',\phi';\lambda_0,\lambda_1)\tag{3.2}$$

for all tests $(\psi',\phi')$. Then for every test $(\psi',\phi')$ such that

$$\alpha(\psi',\phi')\le\alpha(\psi,\phi)\quad\text{and}\quad\beta(\psi',\phi')\le\beta(\psi,\phi),\tag{3.3}$$

it holds that

$$K_0(\psi')\ge K_0(\psi).\tag{3.4}$$

The inequality in (3.4) is strict if at least one of the inequalities in (3.3) is strict.

Let us define $g(z)=g(z;\lambda_0,\lambda_1)=\min\{\lambda_0,\lambda_1 z\}$ for all $z\in[0,\infty)$, and let $g(\infty;\lambda_0,\lambda_1)=\lambda_0$, so that $g$ is defined for $z=\infty$ as well.

For all $k=1,2,\dots$ and all $n\in G^k$ let

$$z_n=z_n(x^{(n)})=\begin{cases}\dfrac{f_1^{(n)}(x^{(n)})}{f_0^{(n)}(x^{(n)})},&\text{if }f_0^{(n)}(x^{(n)})>0,\\[4pt]\infty,&\text{if }f_0^{(n)}(x^{(n)})=0\text{ but }f_1^{(n)}(x^{(n)})>0,\\[2pt]0\ \text{(or whatever)},&\text{otherwise.}\end{cases}$$

From this point on, we will use the following notation. For numbers $A$ and $z$, let us write (say) $\phi\simeq I_{\{A\preccurlyeq z\}}$ when $I_{\{A<z\}}\le\phi\le I_{\{A\le z\}}$. It is easy to see that this is an equivalent way to say that $\phi=1$ if $z>A$, $\phi=0$ if $z<A$, and $\phi$ may take any value in $[0,1]$ when $z=A$.

If $\phi$ and $z$ are functions of some arguments, this agreement should be applied to their values calculated at any given arguments.

###### Theorem 3.1.

For all tests $(\psi,\phi)$ it holds that

$$L(\psi,\phi;\lambda_0,\lambda_1)\ge\sum_{k=1}^{\infty}\sum_{n\in G^k}p(n)E_0\,s_n^\psi\bigl(c(n)+g(z_n;\lambda_0,\lambda_1)\bigr).\tag{3.5}$$

There is an equality in (3.5) if and only if the following condition is satisfied: for all $k=1,2,\dots$ and for all $n\in G^k$,

$$\phi_n\simeq I_{\{\lambda_0/\lambda_1\preccurlyeq z_n\}}\quad P_0\text{-almost surely on }S_n^\psi.\tag{3.6}$$

The proof can be carried out along the lines of the proof of Theorem 2.2 in Novikov [2009].

Theorem 3.1 is the first step of the Lagrangian minimization discussed earlier in this section. It reduces the problem of minimizing $L(\psi,\phi;\lambda_0,\lambda_1)$ to that of minimizing

$$L(\psi;\lambda_0,\lambda_1)=\inf_{\phi}L(\psi,\phi;\lambda_0,\lambda_1)=\sum_{k=1}^{\infty}\sum_{n\in G^k}p(n)E_0\,s_n^\psi\bigl(c(n)+g(z_n;\lambda_0,\lambda_1)\bigr)\tag{3.7}$$

over all stopping rules $\psi$.

Indeed, if we have a stopping rule $\psi$ minimizing $L(\psi;\lambda_0,\lambda_1)$ over all stopping rules, then, combining this optimal $\psi$ with any decision rule $\phi$ satisfying

$$\phi_n\simeq I_{\{\lambda_0/\lambda_1\preccurlyeq z_n\}}\quad\text{for all }n\in G^k\text{ and }k=1,2,\dots$$

(cf. (3.6)), we obtain, for any test $(\psi',\phi')$,

$$L(\psi,\phi;\lambda_0,\lambda_1)=L(\psi;\lambda_0,\lambda_1)\le L(\psi';\lambda_0,\lambda_1)=L(\psi',\phi;\lambda_0,\lambda_1)\le L(\psi',\phi';\lambda_0,\lambda_1).$$

Consequently, (3.2) is satisfied for all tests $(\psi',\phi')$.
In such a way, starting from this moment, our focus will be on the problem of minimizing over all stopping rules .

In the meantime, a useful consequence of Theorem 3.1 can already be drawn for a particular (in fact, non-sequential) case, when the number of groups is fixed in advance (i.e., the test always stops after a prescribed number of groups). Combining the result of Theorem 3.1 with Lemma 3.1, one immediately obtains an alternative proof of Theorem 2.1 in Mukhopadhyay and de Silva [2008].

## 4 Optimal random group-sequential tests

### 4.1 Optimal stopping on a finite horizon

For any $N=1,2,\dots$, we define the class $F_N$ of truncated stopping rules as

$$F_N=\{\psi:(1-\psi_{(n_1)})(1-\psi_{(n_1,n_2)})\cdots(1-\psi_{n})\equiv 0\ \text{for all }n\in G^N\}.\tag{4.1}$$

Tests $(\psi,\phi)$ with $\psi\in F_N$ will be called truncated (at level $N$).

For $\psi\in F_N$, let us denote $L_N(\psi)=L(\psi;\lambda_0,\lambda_1)$ (see (3.7)); to keep the notation brief, we also write $g(z_n)$ for $g(z_n;\lambda_0,\lambda_1)$.

In this section we characterize the structure of all stopping rules which minimize $L_N(\psi)$ over all $\psi\in F_N$.

It follows from (4.1) that

$$L_N(\psi)=\sum_{k=1}^{N-1}\sum_{n\in G^k}p(n)E_0\,s_n^\psi\bigl(c(n)+g(z_n)\bigr)+\sum_{n\in G^N}p(n)E_0\,t_n^\psi\bigl(c(n)+g(z_n)\bigr).\tag{4.2}$$

The minimization of (4.2) is an optimal stopping problem, so it can be solved by the following variant of backward induction.

Let the functions $V_k^N(z)$, $z\ge 0$, $k=1,\dots,N$, be defined in the following way: starting from

$$V_N^N(z)\equiv g(z),\tag{4.3}$$

define recursively, for all $z\ge 0$,

$$V_{k-1}^N(z)=\min\Bigl\{g(z),\ \bar c_k+\sum_{n\in G}p_k(n)E_0 V_k^N(z z_n)\Bigr\},\quad k=N,N-1,\dots,2.\tag{4.4}$$

Let us denote

$$\bar V_k^N(z)=\sum_{n\in G}p_k(n)E_0 V_k^N(z z_n),\quad z\ge 0,\ k=1,2,\dots,N.\tag{4.5}$$

It is important to bear in mind that all the functions above are constructed on the basis of the constants $\lambda_0$ and $\lambda_1$ and of some definite cost structure $c(\cdot)$. Unfortunately, there is no satisfactory way to make all of these explicit in the notation, so we leave them implicit in all the elements of the construction.
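For a discrete model the backward recursion (4.3)-(4.4) can be evaluated exactly. The sketch below computes the lower bound $\bar c_1+\bar V_1^N(1)$ of (4.6); every numeric choice (Bernoulli hypotheses, the two-point group-size p.m.f., the cost $c(n)=1+0.5n$, the multipliers, the horizon) is an illustrative assumption.

```python
from math import comb

theta0, theta1 = 0.3, 0.6        # simple hypotheses (assumed)
lam0, lam1 = 10.0, 10.0          # Lagrange multipliers (assumed)
pmf = {1: 0.5, 2: 0.5}           # identical group-size p.m.f. p(n) (assumed)

def cost(n):                     # cost of one group of size n (assumed)
    return 1.0 + 0.5 * n

cbar = sum(p * cost(n) for n, p in pmf.items())
N = 4                            # truncation level (horizon)

def g(z):
    # g(z; lam0, lam1) = min{lam0, lam1 * z}: risk of the best terminal decision
    return min(lam0, lam1 * z)

def Vbar(k, z):
    # \bar V^N_k(z) = sum_n p(n) E_0 V^N_k(z * z_n); for Bernoulli data the
    # likelihood-ratio factor of one group depends only on its success count s
    total = 0.0
    for n, p in pmf.items():
        for s in range(n + 1):
            w = comb(n, s) * theta0**s * (1 - theta0)**(n - s)  # P_0(s successes)
            zfac = (theta1 / theta0)**s * ((1 - theta1) / (1 - theta0))**(n - s)
            total += p * w * V(k, z * zfac)
    return total

def V(k, z):
    # V^N_k(z): optimal remaining risk after k groups; V^N_N = g, eq. (4.3)
    if k == N:
        return g(z)
    return min(g(z), cbar + Vbar(k + 1, z))      # eq. (4.4)

lower_bound = cbar + Vbar(1, 1.0)                # \bar c_1 + \bar V^N_1(1), (4.6)
```

Since $V_k^N\le g$ pointwise, the bound never exceeds $\bar c_1+\lambda_0$; the recursion branches only over group size and success count, so it runs in a fraction of a second for small horizons.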

The following theorem characterizes the structure of all truncated stopping rules minimizing .

###### Theorem 4.1.

For every $\psi\in F_N$,

$$L_N(\psi)\ge\bar c_1+\bar V_1^N(1).\tag{4.6}$$

There is an equality in (4.6) if $\psi$ satisfies the following condition: for all $n\in G^k$ and all $k=1,\dots,N-1$,

$$\psi_n\simeq I_{\{g(z_n)\preccurlyeq\bar c_{k+1}+\bar V_{k+1}^N(z_n)\}}\quad P_0\text{-almost surely on }T_n^\psi.\tag{4.7}$$

Conversely, if there is an equality in (4.6) for some $\psi\in F_N$, then $\psi$ satisfies condition (4.7).

The proof of Theorem 4.1 is placed in the Appendix.

###### Corollary 4.1.

Let a truncated sequential test $(\psi,\phi)$, $\psi\in F_N$, be such that condition (4.7) of Theorem 4.1 and condition (3.6) of Theorem 3.1 are satisfied.

Then for all truncated tests $(\psi',\phi')$, $\psi'\in F_N$, such that

$$\alpha(\psi',\phi')\le\alpha(\psi,\phi)\quad\text{and}\quad\beta(\psi',\phi')\le\beta(\psi,\phi),\tag{4.8}$$

it holds that

$$K_0(\psi')\ge K_0(\psi).\tag{4.9}$$

The inequality in (4.9) is strict if at least one of the inequalities in (4.8) is strict.

If there are equalities in all the inequalities in (4.8) and (4.9), then conditions (4.7) and (3.6) are satisfied for $(\psi',\phi')$ as well.

Corollary 4.1 is a direct consequence of Theorem 3.1 and Theorem 4.1 in combination with Lemma 3.1.

### 4.2 Optimal stopping on infinite horizon

Very much like in Novikov [2009], the idea of this part is to pass to the limit, as $N\to\infty$, on both sides of (4.6), in order to obtain a lower bound for the Lagrangian function and conditions to attain it.

Let us first analyse the behaviour of the right-hand side of (4.6) as $N\to\infty$.

We have:

$$V_N^N(z)\ge V_N^{N+1}(z)\tag{4.10}$$

for all $z\ge 0$, because, by (4.4),

$$V_N^N(z)=g(z)\ge V_N^{N+1}(z)=\min\Bigl\{g(z),\ \bar c_{N+1}+\sum_{n\in G}p_{N+1}(n)E_0 V_{N+1}^{N+1}(z z_n)\Bigr\}.$$

Applying (4.4) to (4.10) again, we obtain $V_{N-1}^N(z)\ge V_{N-1}^{N+1}(z)$, and so on, finally arriving at

$$V_k^N(z)\ge V_k^{N+1}(z),\quad z\ge 0,\tag{4.11}$$

for any fixed $k\le N$. It follows from (4.11) that there exists $V_k(z)=\lim_{N\to\infty}V_k^N(z)$, $z\ge 0$, and, by the Lebesgue dominated convergence theorem, $\bar V_k(z)=\lim_{N\to\infty}\bar V_k^N(z)$, $z\ge 0$.

To pass to the limit on the left-hand side of (4.6), let us define the truncation $\psi^N$ of any stopping rule $\psi$ at level $N$ by setting $\psi_n^N=\psi_n$ for $n\in G^k$ with $k<N$, and $\psi_n^N\equiv 1$ for all $n\in G^N$. Because $\psi^N\in F_N$, we can apply (4.6) with $\psi^N$ in place of $\psi$. It is easy to see that $L_N(\psi)$ can be calculated directly via (4.2), whatever the stopping rule $\psi$ may be.

###### Lemma 4.1.

For any $\psi\in F^0$,

$$\lim_{N\to\infty}L_N(\psi)=L(\psi).\tag{4.12}$$

###### Lemma 4.2.

$$\inf_{\psi\in F^0}L(\psi)=\bar c_1+\bar V_1(1).$$

We place the proofs of Lemma 4.1 and Lemma 4.2 in the Appendix.

Now, we are able to characterize optimal stopping rules on infinite horizon.

###### Theorem 4.2.

For every $\psi\in F^0$,

$$L(\psi)\ge\bar c_1+\bar V_1(1).\tag{4.13}$$

There is an equality in (4.13) if $\psi$ satisfies the following condition: for all $n\in G^k$ and all $k=1,2,\dots$,

$$\psi_n\simeq I_{\{g(z_n)\preccurlyeq\bar c_{k+1}+\bar V_{k+1}(z_n)\}}\quad P_0\text{-almost surely on }T_n^\psi.\tag{4.14}$$

Conversely, if there is an equality in (4.13) for some $\psi\in F^0$, then $\psi$ satisfies condition (4.14).

The proof of Theorem 4.2 is laid out in the Appendix.

###### Corollary 4.2.

Let a sequential test $(\psi,\phi)$, $\psi\in F^0$, be such that condition (4.14) of Theorem 4.2 and condition (3.6) of Theorem 3.1 are satisfied.

Then for all sequential tests $(\psi',\phi')$, $\psi'\in F^0$, such that

$$\alpha(\psi',\phi')\le\alpha(\psi,\phi)\quad\text{and}\quad\beta(\psi',\phi')\le\beta(\psi,\phi),\tag{4.15}$$

it holds that

$$K_0(\psi')\ge K_0(\psi).\tag{4.16}$$

The inequality in (4.16) is strict if at least one of the inequalities in (4.15) is strict.

If there are equalities in all the inequalities in (4.15) and (4.16), then conditions (4.14) and (3.6) are satisfied for $(\psi',\phi')$ as well.

## 5 Optimality of the random sequential probability ratio test

In this section we apply the general results of the preceding sections to a particularly important model, assuming that the groups for the group-sequential test are formed in a stationary way. More precisely, we assume in this section that the group sizes are identically distributed (their common p.m.f. is $p(n)=p_i(n)$, for all $i=1,2,\dots$ and $n\in G$). Respectively, the average group cost keeps the same value $\bar c_k\equiv\bar c$ over the experiment time. We will characterize the structure of optimal group-sequential tests in the case of infinite horizon. In particular, we will prove the optimality property of the random sequential probability ratio test (RSPRT) proposed by Mukhopadhyay and de Silva [2008] for the random group-sequential model.

There are three constants involved in the construction of optimal tests in this case: $\lambda_0$, $\lambda_1$ and $\bar c$. It is easy to see that only two of them suffice to obtain all the optimal sequential tests of Corollary 4.2: dividing the Lagrangian function by $\lambda_1$, we are left with $\lambda=\lambda_0/\lambda_1$ and $c=\bar c/\lambda_1$ (so we may just assume that $\lambda_1=1$). The structure of the optimal tests of Theorem 4.2 now acquires a simpler form.

Let

$$\rho_0(z)=\rho_0(z;c,\lambda)=g(z;\lambda)=\min\{\lambda,z\},\quad z\ge 0,\tag{5.1}$$

and, recursively over $k=1,2,\dots$,

$$\rho_k(z)=\rho_k(z;c,\lambda)=\min\Bigl\{g(z;\lambda),\ c+\sum_{n\in G}p(n)E_0\,\rho_{k-1}(z z_n;c,\lambda)\Bigr\},\quad z\ge 0.\tag{5.2}$$

Let also

$$\bar\rho_k(z)=\bar\rho_k(z;c,\lambda)=\sum_{n\in G}p(n)E_0\,\rho_k(z z_n;c,\lambda).\tag{5.3}$$

It follows from (4.3)-(4.4) that

$$V_k^N(z)=\rho_{N-k}(z;c,\lambda),\quad z\ge 0,\tag{5.4}$$

and from (4.5),

$$\bar V_k^N(z)=\bar\rho_{N-k}(z;c,\lambda).\tag{5.5}$$

Let us define

$$\rho(z)=\rho(z;c,\lambda)=\lim_{k\to\infty}\rho_k(z;c,\lambda),\quad z\ge 0.\tag{5.6}$$

If we take the limit, as $N\to\infty$, in (5.4), then

$$V_k(z)=\rho(z;c,\lambda),\quad z\ge 0,\tag{5.7}$$

for all $k=1,2,\dots$, and by (5.5)

$$\bar V_k(z)=\bar\rho(z;c,\lambda),\quad z\ge 0.\tag{5.8}$$

Stopping rule (4.14) of Theorem 4.2 now transforms to

$$\psi_n\simeq I_{\{g(z_n;\lambda)\preccurlyeq c+\bar\rho(z_n;c,\lambda)\}},\tag{5.9}$$

so the form of the optimal stopping rules entirely depends on whether the inequality

$$g(z;\lambda)\le c+\bar\rho(z;c,\lambda)\tag{5.10}$$

(and/or its strict variant) is fulfilled or not at $z=z_n$.

First of all, it is easy to see that if

$$\lambda\le c+\bar\rho(\lambda;c,\lambda),\tag{5.11}$$

then (5.9) implies that $\psi_n\equiv 1$ for all $n$ (the optimal test stops after the first group is taken). Therefore, non-trivial optimal sequential tests are only obtained if

$$\lambda>c+\bar\rho(\lambda;c,\lambda),\tag{5.12}$$

which will be assumed in what follows.

###### Lemma 5.1.

If

$$P_0(f_1(X)>0)=1,\tag{5.13}$$

then for any positive $c$ and $\lambda$ satisfying (5.12) there exist $A$ and $B$, $0<A<B<\infty$, such that

$$g(A;\lambda)=c+\bar\rho(A;c,\lambda),\qquad g(B;\lambda)=c+\bar\rho(B;c,\lambda),\tag{5.14}$$

and

$$g(z;\lambda)<c+\bar\rho(z;c,\lambda)\quad\text{for all }z<A\text{ and all }z>B,\tag{5.15}$$

and

$$g(z;\lambda)>c+\bar\rho(z;c,\lambda)\quad\text{for all }A<z<B.\tag{5.16}$$

It follows from Lemma 5.1 that, if (5.13) holds, then (5.9) is equivalent to

$$I_{\{z_n\in(A,B)\}}\le 1-\psi_n\le I_{\{z_n\in[A,B]\}},\tag{5.17}$$

that is, any optimal test is a randomized version of the random sequential probability ratio test (RSPRT) of Mukhopadhyay and de Silva [2008], which, in our terms, can be described as $(\psi,\phi)$ with

$$\psi_n=I_{\{z_n\notin(A,B)\}}\quad\text{and}\quad\phi_n=I_{\{z_n\ge B\}},\tag{5.18}$$

for all $n\in G^k$ and $k=1,2,\dots$

Obviously, $\psi$ in (5.18) is a particular case of (5.17), and $\phi$ in (5.18) satisfies condition (3.6) of Theorem 3.1. In addition, by virtue of Theorem 3.1 in Mukhopadhyay and de Silva [2008], the RSPRT terminates with probability one under both hypotheses (the details can be found in the proof of Theorem 5.1 below). Consequently, it follows that this RSPRT is optimal in the sense of Corollary 4.2.

In the same way, all the sequential tests with $\psi$ satisfying (5.17) and $\phi$ satisfying (5.18) (for all $n\in G^k$ and $k=1,2,\dots$) share the optimality property with the RSPRT, whenever (5.13) is satisfied. In particular, this is obviously the case when the hypothesized distributions belong to a Koopman-Darmois (exponential) family.
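A run of the RSPRT (5.18) is straightforward to simulate. In the sketch below every numeric value (the thresholds $A$ and $B$, the Bernoulli parameters, the group-size distribution) is an assumption chosen for illustration.

```python
import random

def rsprt(theta_true, A=0.2, B=5.0, theta0=0.3, theta1=0.6,
          sizes=(1, 2), size_probs=(0.5, 0.5), seed=0):
    """One run of the RSPRT (5.18): sample whole groups of random size,
    update z_n, stop as soon as z_n leaves (A, B); reject H0 iff z_n >= B.
    Returns (reject, number_of_groups_taken)."""
    rng = random.Random(seed)
    z, groups = 1.0, 0
    while A < z < B:
        n = rng.choices(sizes, weights=size_probs)[0]   # random group size
        groups += 1
        for _ in range(n):                              # i.i.d. observations in the group
            x = 1 if rng.random() < theta_true else 0
            z *= (theta1 if x else 1 - theta1) / (theta0 if x else 1 - theta0)
    return z >= B, groups

# Monte Carlo check: under theta1 the test should usually reject H0
rejections = sum(rsprt(theta_true=0.6, seed=s)[0] for s in range(200))
```

Note that the test only stops at group boundaries: the whole group is observed even if $z_n$ would cross a threshold partway through it, which is exactly the group-sequential character of the procedure.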

If (5.13) is not satisfied, optimal tests with stopping rules (5.9) are not necessarily of the RSPRT type. This can be seen from the following simple example.

Let $H_0$ state that the (one-per-group) observations follow a uniform distribution on $[0,1]$, whereas under $H_1$ they are assumed to be uniform on $[0,1/2]$. Then the per-observation likelihood ratio equals $2$ when the observation falls in $[0,1/2]$, and $0$ otherwise. Using the definition of $\rho$ in (5.6), on the basis of $\rho_k$ defined in (5.1) and (5.2), with $c<1/2$ and $\lambda>2c$, one easily sees that $\rho(z;c,\lambda)=\min\{z,2c\}$ and $\bar\rho(z;c,\lambda)=\min\{z,c\}$. Let us consider an (optimal) test corresponding to (5.10), with a strict inequality. It is immediate that, with the above definitions of $c$ and $\lambda$, (5.10) holds with strict inequality if and only if $z<2c$. But the consecutive values of the probability ratio are, respectively, $z_n=2^k$, whenever all $k$ observations taken fall in $[0,1/2]$, so the test (minimizing $K_0$) only stops when, for the first time, an observation falls outside $[0,1/2]$, in which case $z_n=0$.

It is seen from this example, first, that the test minimizing $K_0$ is not an RSPRT (because an RSPRT should also stop when $z_n\ge B$, which happens, under $H_0$, with a positive probability, thus there would be a positive $\alpha$-error), and second, that it in no way minimizes $K_1$, because under $H_1$ it never stops.

Nevertheless, the following theorem shows that, even if (5.13) is not satisfied, not only the RSPRTs with $(\psi,\phi)$ satisfying (5.18), but also their “randomized” versions with $\psi$ satisfying (5.17), are optimal in the sense of Wald and Wolfowitz [1948], i.e. they minimize the average cost both under $H_0$ and under $H_1$, given restrictions on the error probabilities.

###### Theorem 5.1.

Let $A$ and $B$, $0<A<B<\infty$, be two constants. Let $\psi$ be any stopping rule satisfying

$$I_{\{z_n\in(A,B)\}}\le 1-\psi_n\le I_{\{z_n\in[A,B]\}},\tag{5.19}$$

for all $n\in G^k$ and $k=1,2,\dots$, and let $\phi$ be the decision rule defined as

$$\phi_n=I_{\{z_n\ge B\}}\tag{5.20}$$

for all $n\in G^k$ and $k=1,2,\dots$

Then $(\psi,\phi)$ terminates with probability one under both hypotheses, and it is optimal in the following sense: for any sequential test $(\psi',\phi')$ such that

$$\alpha(\psi',\phi')\le\alpha(\psi,\phi)\quad\text{and}\quad\beta(\psi',\phi')\le\beta(\psi,\phi),\tag{5.21}$$

it holds that

$$K_0(\psi)\le K_0(\psi')\quad\text{and}\quad K_1(\psi)\le K_1(\psi').\tag{5.22}$$

Both inequalities in (5.22) are strict if at least one of the inequalities in (5.21) is strict.

###### Remark 5.1.

The optimality property stated in Theorem 5.1 is known, in the case of one-per-group observations, as the Wald-Wolfowitz optimality (see Wald and Wolfowitz [1948]). Burkholder and Wijsman [1963] proved that all “extended” SPRTs, i.e. those admitting a randomized decision between stopping and continuing in case $z_n=A$ or $z_n=B$, share the same optimality property with the SPRT. Our Theorem 5.1 states the same in the case of random group-sequential tests: the “extended” group-sequential tests, i.e. those with stopping rules satisfying (5.19), minimize the average cost both under $H_0$ and under $H_1$.

###### Remark 5.2.

Very much like in the classical one-per-group case, when taking no observations is permitted, only the case $A<1<B$ is meaningful for the RSPRT, because otherwise a trivial test (namely, the one which, without taking any observations, accepts or rejects $H_0$ depending on whether $A\ge 1$ or $B\le 1$) performs better than the optimal tests of Theorem 5.1 (in terms of the optimal stopping of Theorem 4.2).

## Acknowledgment

A. Novikov thanks SNI by CONACyT, Mexico, for a partial support for this work. X.I. Popoca-Jiménez thanks CONACyT, Mexico, for scholarships for her studies.

## Appendix A Appendix

In this section, lengthy or too technical proofs are gathered together.

For simplicity, it is assumed throughout this section that all group sizes are identically distributed, even when this is not explicitly required in the respective statement.

###### Lemma A.1.

If a stopping rule $\psi$ is such that

$$\sum_{m\in G^r}p(m)E_\theta t_{n,m}^\psi\to 0\quad\text{as }r\to\infty,\tag{A.1}$$

for some $n\in G^k$, $k=1,2,\dots$, then

$$E_\theta s_n^\psi+\sum_{r=1}^{\infty}\sum_{m\in G^r}p(m)E_\theta s_{n,m}^\psi=E_\theta t_n^\psi,\tag{A.2}$$

where $(n,m)$ denotes the concatenation of $n\in G^k$ and $m\in G^r$.

Proof of Lemma A.1. Let $r$ be any natural number. Then

$$\sum_{m\in G^r}p(m)E_\theta t_{n,m}^\psi-\sum_{m\in G^{r+1}}p(m)E_\theta t_{n,m}^\psi=\sum_{m\in G^r}p(m)E_\theta t_{n,m}^\psi-\sum_{m\in G^r}\sum_{i\in G}p(m)p(i)E_\theta t_{n,m,i}^\psi$$

$$=\sum_{m\in G^r}p(m)\Bigl(E_\theta t_{n,m}^\psi-\sum_{i\in G}p(i)E_\theta t_{n,m,i}^\psi\Bigr)=\sum_{m\in G^r}p(m)E_\theta s_{n,m}^\psi,$$

so

$$\sum_{m\in G^r}p(m)E_\theta s_{n,m}^\psi=\sum_{m\in G^r}p(m)E_\theta t_{n,m}^\psi-\sum_{m\in G^{r+1}}p(m)E_\theta t_{n,m}^\psi.\tag{A.3}$$

Summing over $r$ from $1$ to $k$ on both sides of (A.3), the right-hand side telescopes:

$$\sum_{r=1}^{k}\sum_{m\in G^r}p(m)E_\theta s_{n,m}^\psi=\sum_{m\in G}p(m)E_\theta t_{n,m}^\psi-\sum_{m\in G^{k+1}}p(m)E_\theta t_{n,m}^\psi.$$

Letting $k\to\infty$ and using (A.1), we obtain

$$\sum_{r=1}^{\infty}\sum_{m\in G^r}p(m)E_\theta s_{n,m}^\psi=\sum_{m\in G}p(m)E_\theta t_{n,m}^\psi=E_\theta\bigl(t_n^\psi(1-\psi_n)\bigr)=E_\theta t_n^\psi-E_\theta s_n^\psi,$$

which is (A.2).