Method comparison with repeated measurements - Passing-Bablok regression for grouped data with errors in both variables

05/18/2019 · by Franz Baumdicker, et al. · University of Freiburg

The Passing-Bablok and Theil-Sen regressions are closely related non-parametric methods to estimate regression coefficients and to build tests on the relationship between the dependent and independent variables. Both methods rely on the slopes of the connecting lines between pairwise measurements. While Theil and Sen assume no measurement errors in the independent variable, the method of Passing and Bablok accounts for errors in both variables. Here we consider the case where multiple, e.g. repeated, measurements with errors in both variables are available for m samples. We show that in this case the slopes between repeated measurements need to be excluded to obtain an unbiased estimate. We prove that the resulting Block-Passing-Bablok estimate for grouped data is asymptotically normally distributed. If measurements of the independent variable are without error, the variance of the estimate equals the variance of the Theil-Sen method with tied ranks. If both variables are measured with imprecision, the result depends on the fraction of measurements between groups that fall within the range of each other. Only if no overlap between measurements of different groups occurs does the variance again equal the tied-ranks version; otherwise, the variance is smaller. We explicitly compute this variance and provide a method comparison test for data with repeated measurements, based on the method of Passing and Bablok for independent measurements. If repeated measurements are taken into account, this test has a higher power to detect the true relationship between two methods.


1 Introduction

Consider the simple linear regression model y = α + βx plus an error term, for each of n observations. Compared to ordinary least squares regression (OLS), non-parametric estimation of the regression coefficients α and β can be more robust: deviations from normally distributed error terms, like heavy tails or outliers, do not disturb the estimation as strongly as for OLS. A good introduction to robust estimation is given in [13].

Here we consider the non-parametric Theil-Sen regression (TSR) and Passing-Bablok regression (PBR), which are both based on Kendall’s rank correlation [4] and provide a robust estimate of the slope β. PBR and TSR do not rely on the assumption of normally distributed errors. The Theil-Sen estimate [11, 10] for β is given by the median of all slopes of the connecting lines between pairwise measurements. If measurement errors occur for both x and y, the TSR is biased towards zero. This phenomenon is also known from OLS and is called regression dilution or attenuation. For least squares, Deming regression can be used instead of OLS to account for errors in both variables. In addition, [2] suggest how to use repeated measurements to correct for regression dilution if errors are normally distributed. A variation of the Theil-Sen regression that accounts for errors in both variables is the Passing-Bablok estimator [8, 9]. The Passing-Bablok estimate is also given by the median of all slopes, but shifted by an offset to ensure that x and y are interchangeable. Interestingly, PBR and TSR seem to be popular in separate fields. While TSR is popular in metrology and environmental science, PBR is mainly used in clinical biochemistry, pharmacology, and laboratory medicine to compare two alternative measurement methods. This might be due to the fact that PBR is outlined in a guideline of the Clinical and Laboratory Standards Institute [7]. In [3] a protocol for method comparison studies in clinical laboratories is suggested, including a paragraph on PBR. Furthermore, there have been attempts to use PBR for batch effect removal in gene expression analysis [6]. Throughout this manuscript we will refer to the Passing-Bablok regression, but note that setting the offset to zero in the PBR yields the TSR, and the presented results naturally hold for both methods.
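To make the median-of-slopes idea concrete, here is a minimal sketch of the Theil-Sen estimator; it is an illustration of the principle, not an implementation from this paper:

```python
import statistics

def theil_sen_slope(x, y):
    """Median of all pairwise slopes; a minimal sketch of the Theil-Sen estimator.

    Pairs with identical x-values are skipped, since their slope is undefined.
    """
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    return statistics.median(slopes)

# Toy data: y = 2x + 1 with one gross outlier that OLS would chase.
x = list(range(20))
y = [2 * xi + 1 for xi in x]
y[10] = 100.0  # outlier
print(theil_sen_slope(x, y))  # the median of slopes stays at the true slope 2
```

Because most pairwise slopes are unaffected by a single outlier, the median remains a reliable estimate of the slope, which is exactly the robustness property discussed above.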

We consider the case where repeated measurements of two methods are available for a number of true value pairs. This scenario includes simultaneous repeated measurements of the same sample with two methods, as well as multiple measurements of the same sample taken separately with each method. In the latter case, measurements need to be combined randomly into observation pairs. If the data set for a PBR contains repeated measurements, it is important to account for this. In contrast to the slope between independent measurements, the slope between repeated measurements of the same sample carries no information about β. Hence, the slopes between points that correspond to repeated measurements of the same underlying true value are meaningless and, if included, would distort PB estimates and lower the power of the associated statistical test. Here we describe how the variance changes if we omit the meaningless slopes between repeated measurements, and provide the resulting test on the equivalence of two methods.

The paper is organized as follows: First, we consider the assumptions and definitions of Passing and Bablok and introduce the necessary adaptation for repeated measurements in section 2. Section 3 states our results, including the asymptotic confidence interval for the estimated slope in Corollary 3.4. The statistical test for the equivalence of two methods with repeated measurements is given in Corollary 3.6. In section 4 we discuss the implications of the suggested Block-Passing-Bablok procedure and compare it empirically to the PBR that ignores repeated measurements. Finally, proofs are given in section 5.

2 The Passing-Bablok regression for repeated measurements

2.1 The standard Passing-Bablok regression for independent measurements

Under the hypothesis of a structural linear relationship between two measurement methods, i.e. y = α + βx, where x corresponds to one method and y to the other, Passing and Bablok [8] considered the following assumptions:

Assumptions 2.1.

(standard Passing-Bablok Regression)

  • All points consist of a (non-random) ’true’ value of the measurement plus an error term in each coordinate.

  • All error terms come from an arbitrary continuous distribution with mean zero, and

  • a fixed relation between the variances of the error terms holds.

2.2 Block-Passing-Bablok regression for multiple dependent measurements

Sometimes the data at hand are not independent as stated in Assumption 2.1(ii) but grouped into dependent sets of measurements. Examples include common situations, e.g. measurements that have been repeated multiple times for the same sample, or a series of measurements taken under the same conditions. In this case, the Passing-Bablok regression cannot be used straight away. Here, we extend the Passing-Bablok regression such that it treats the data as being available in groups, each with a given number of members. We index our data points by group and by individual within the group. Each point represents a measurement of the true values of its group, and we again assume a linear relationship between the two methods.

We make the following assumptions for repeated measurement data:

Assumptions 2.2.

(Passing-Bablok Regression for grouped data)

  • All points consist of the ’true’ values of the measurements in their group plus error terms indexed by group and by individual within the group.

  • All error terms come from an arbitrary continuous distribution with mean zero, and

  • a fixed relation between the variances of the error terms holds.

Assumptions 2.3.

(non-overlapping groups)
All groups are strictly separated on the x-axis almost surely.

Remark 2.4.

Note that groups also need to be separated on the y-axis, analogously to Assumption 2.3, to maintain the symmetry of the method. However, we will mainly consider the general case with possibly overlapping groups, such that only Assumptions 2.2 (i)-(iii) hold.

Estimating the regression parameter from grouped data

The Passing-Bablok regression for independent measurements [8] makes use of the slopes between all pairs of points. Under Assumptions 2.1, it is easy to see that the expected slope equals β for all pairs. In contrast, under Assumptions 2.2, the expected slope between two points equals β only if they come from different groups, and is zero otherwise, i.e. for pairs within the same group. To include only the meaningful slopes in the estimation of the regression parameter β, we have to exclude all repeated-measurement pairs within each group from the analysis.

Definition 2.5 (Block-Passing-Bablok regression for grouped data).
  1. For the estimation of the regression parameters α and β, compute the slopes of the connecting lines between every pair of points from different groups.

  2. Discard pairs of identical measurements as well as all slopes with value −1, and thereby obtain N slopes.

  3. The slope parameter β is estimated by the shifted median of the slopes, with the offset K given by the number of slopes smaller than −1:

    (1)

    I.e., if the slopes are ranked in increasing order, β is estimated by the slope whose rank is the median position shifted by K.

  4. The intercept parameter α is estimated by the median of the residuals y − β̂x over all data points.

Remark 2.6.

Naturally, the description of the groupwise method in Definition 2.5 is close to the description of the original method introduced by Passing and Bablok. If every group contains exactly one measurement, we regain the classical estimator for the regression parameter [8]. In this case, we compute the slopes of the connecting lines between every pair of points. Without considering groups, there are n(n−1)/2 possible lines connecting any two points of an n-point data set. Again, pairs of identical measurements are not considered for the estimation, and any slopes with a value of −1 are disregarded, such that we have at most n(n−1)/2 slopes to consider.

Remark 2.7.

Note that setting the offset K to zero in Definition 2.5 results in the Theil-Sen estimator. To get an estimator in which the methods can be used interchangeably, Passing and Bablok defined the offset K as in equation (1). The definition of K as the number of slopes with a value smaller than −1 in equation (1) corresponds to the null hypothesis β = 1 and needs to be adapted for other hypothesized values of β. If the null hypothesis is not true, setting K in this way introduces a bias towards higher estimates of β, because the shifted median of the slopes is then no longer centered correctly. This effect can be seen in Table 1 for both the classic and the Block-PBR. Since the offset drives the median of the meaningless slopes between repeated measurements towards the hypothesized value, this can also lead to overconfidence if the classic PBR is used; Table 1 illustrates this effect.

3 Results

Definition 3.1.

Under Assumptions 2.2, let the test statistic be the difference between the number of slopes that are larger than β and the number of slopes that are smaller than β, i.e.

(2)

Note that in the special case of Assumptions 2.1, where no groups are considered, we denote this number by a separate symbol.

Theorem 1 (Variance of ).

If Assumptions 2.2 (i)-(iii) hold, with the expected fraction of triplets consisting of one point from one group and two points from a different group that fall within each other's range:

(a) for general group sizes,

(3)

(b) Consequently, if all group sizes are equal,

(4)
Remark 3.2 (classical Passing-Bablok regression).

If we specialize Theorem 1 to the case where each group contains a single measurement, we regain the result from Passing and Bablok [8] and Theil-Sen [10] for independent measurements, where
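For reference, the classical result alluded to here is the well-known variance of Kendall's S statistic for n independent continuous observations, to which the grouped formulas above reduce:

```latex
\operatorname{Var}(S) \;=\; \frac{n(n-1)(2n+5)}{18}
```

This is the variance underlying both the original Passing-Bablok test and the Theil-Sen procedure without ties.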

Remark 3.3.

In Theorem 1, note that (b) is a direct consequence of (a), as specializing equation (3) to equal group sizes gives

which leads to (4).

Theorem 2 (asymptotic normality of ).

Let the number of groups be fixed and consider the limit of a large sample size.

(a) If Assumptions 2.2 (i)-(iii) hold, the statistic is asymptotically normally distributed with mean zero and variance given by

where the relevant quantity is the expected fraction of triplets with one point from one group and two points from another group that fall within each other's range.

(b) More specifically, for groups with equal sizes as in Theorem 1(b), and if Assumptions 2.2 and 2.3 hold, the statistic is asymptotically normally distributed with mean zero and variance

Corollary 3.4 (confidence interval for ).

Given the appropriate quantile of the standard normal distribution and the variance from Theorem 2, the asymptotic confidence interval for the slope at the chosen significance level is given by

(5)
Remark 3.5 (confidence interval for ).

Take the lower and upper limits of the confidence interval for the slope in equation (5). Then,

(6)
(7)

are the corresponding limits for the intercept.

Corollary 3.6 (Statistical test for the equivalence of two methods with repeated measurements).

Given two measurement methods with paired measurements of the same samples:

  • The equivalence of both methods can be concluded if the confidence interval for the slope contains 1 and the confidence interval for the intercept contains 0.

  • An intercept different from 0 indicates a constant systematic difference between the two measurement methods.

  • A slope different from 1 indicates a proportional systematic difference between the measurement methods.

Remark 3.7 (non-overlapping groups).

For a data set fulfilling Assumption 2.3, i.e. with non-overlapping groups on the x-axis, the overlap fraction can be set to zero and formula (3) transforms to

(8)

and consequently, for equal group sizes,

(9)

Note that (8) equals the correction for tied ranks in one variable [10]. To see this, consider that moving each x-value to the mean of its group does not change the sign of any slope between separated groups. Compared to the variance for non-overlapping groups, the variance for overlapping groups is always smaller. For overlapping groups, the test from Corollary 3.6 based on equation (8) is thus a conservative test for the equivalence of the measurement methods, but the power of the test could in principle be improved if a reliable estimate of the overlap fraction is available.
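For orientation, the tied-ranks correction referred to here is the classical formula for the variance of Kendall's S when one variable contains tie groups of sizes t_1, …, t_m; in our setting the group sizes n_k play the role of the t_k:

```latex
\operatorname{Var}(S) \;=\; \frac{1}{18}\left( n(n-1)(2n+5) \;-\; \sum_{k=1}^{m} n_k\,(n_k-1)\,(2n_k+5) \right)
```

Each tie group removes its internal comparisons from the count, which is exactly the effect of excluding within-group slopes for separated groups.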

4 Discussion and empirical comparison of regular and Block-Passing-Bablok regression

Passing-Bablok regression is recommended as a robust method, such that extreme values can be included and the errors do not have to be normally distributed [3]. However, the assumption of independent measurements in the classical Passing-Bablok regression frequently seems not to be fulfilled. Duplicate and repeated measurements are often an important part of studies, as they allow assessing the variance of measurements and help to identify outliers [3]. As one example among many studies including repeated measurements, consider Figure 2 in [12], where measurements of different patients have been repeated varying numbers of times. But to what degree does this harm the results of a classical Passing-Bablok regression?

Recall that the Block-Passing-Bablok regression only considers slopes between pairs of points from different groups for estimating the regression parameter β. Since the regression parameters are estimated as medians, a relatively small difference in the number of considered slopes does not have a large influence on the estimates. If the group sizes are equal, only the within-group slopes are discarded, roughly a fraction 1/m of the slopes used by the original method, where m is the number of groups. As shown in Theorem 2(a), the discrepancy between the unadapted and the Block-Passing-Bablok regression will quickly vanish for a large number of separated and equally sized groups.
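The size of this reduction is easy to check by counting pairs; the helper below is illustrative, not from the paper:

```python
def slope_counts(group_sizes):
    """Count pairwise slopes with and without the within-group exclusion.

    Returns (all_pairs, between_group_pairs) for the given group sizes.
    """
    n = sum(group_sizes)
    all_pairs = n * (n - 1) // 2
    within = sum(g * (g - 1) // 2 for g in group_sizes)
    return all_pairs, all_pairs - within

# Ten equally sized groups of ten: only about a tenth of the slopes is lost.
total, between = slope_counts([10] * 10)
print(total, between)  # 4950 4500
```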

In a hypothetical example with strongly varying group sizes, the Block-Passing-Bablok regression yields better results than the original method. If one group is very large and another very small, the median of all slopes between them would be a slope from within the large group. Since we assume random errors within groups, this median slope would be sampled from a set of meaningless slopes, which, once the offset is applied, biases the estimate of β towards the hypothesized value and lowers the power of the test. The Block-Passing-Bablok regression provides more reliable estimates in this setting.

To illustrate the influence of the group sizes and of the overlap between groups on the estimation of the slope and on the power of the test in Corollary 3.6, we simulated 16 illustrative scenarios. The Passing-Bablok estimates were computed with the mcr package [5] for the classic PBR and with a custom script based on mcr for the Block-PBR. The script is available from the authors upon request. A subset of the considered settings is illustrated in Figure 1. The results of the simulations are shown in Table 1.

group sizes | overlap | mean estimate (cPBR, BPBR) | confidence interval (cPBR, BPBR) | coverage (cPBR, BPBR) | rejection rate (cPBR, BPBR)
100-100 low 1.001 1.001 [.923,1.085] [.923,1.085] .950 .950 .050 .050
high 1.003 1.004 [.870,1.156] [.856,1.177] .950 .949 .050 .051
180-20 low 1.003 1.002 [.860,1.169] [.874,1.148] .951 .949 .049 .051
high 1.005 1.01 [.812,1.246] [.771,1.326] .951 .947 .049 .053
10100 low 1 1 [.994,1.006] [.994,1.006] .950 .950 .050 .050
high 1 1 [.988,1.012] [.987,1.012] .952 .952 .048 .048
820-920 low 1 1 [.988,1.013] [.991,1.009] .951 .952 .049 .048
high 1 1 [.977,1.024] [.982,1.018] .951 .951 .049 .049
100-100 low .983 .981 [.907,1.067] [.904,1.064] .950 .950 .071 .076
high .989 .984 [.856,1.141] [.838,1.156] .950 .948 .054 .057
180-20 low .991 .983 [.850,1.157] [.856,1.128] .949 .947 .053 .061
high .998 .991 [.804,1.239] [.753,1.306] .948 .951 .049 .051
10100 low .980 .980 [.974,.986] [.974,.986] .950 .949 1 1
high .980 .980 [.968,.993] [.968,.993] .950 .951 .876 .880
820-920 low .981 .980 [.969,.994] [.971,.989] .950 .951 .838 .994
high .982 .980 [.959,1.006] [.963,.998] .946 .948 .326 .607
100-100 low .828 .801 [.757,.906] [.730,.877] .888 .953 .987 .998
high .854 .807 [.730,.997] [.672,.963] .881 .952 .536 .679
180-20 low .888 .802 [.754,1.051] [.686,.934] .769 .949 .303 .823
high .924 .812 [.738,1.158] [.592,1.095] .770 .950 .117 .305
10100 low .801 .800 [.795,.807] [.794,.806] .934 .948 1 1
high .802 .800 [.791,.813] [.789,.812] .935 .947 1 1
820-920 low .813 .800 [.802,.825] [.792,.808] .398 .953 1 1
high .824 .800 [.803,.847] [.784,.816] .385 .948 1 1
100-100 low .317 .202 [.257,.383] [.144,.261] .022 .948 1 1
high .461 .279 [.352,.588] [.165,.404] .001 .758 1 1
180-20 low .559 .201 [.415,.753] [.105,.302] 0 .952 .975 1
high .508 .280 [.318,.733] [.093,.501] 0 .906 .685 1
10100 low .204 .200 [.200,.209] [.196,.205] .567 .954 1 1
high .212 .204 [.203,.221] [.195,.213] .236 .846 1 1
820-920 low .274 .200 [.255,.302] [.194,.206] 0 .951 1 1
high .328 .202 [.299,.366] [.189,.215] 0 .941 1 1
Table 1: We consider four different combinations of group sizes, denoted by 100-100, 180-20, 10100, and 820-920. In addition, we vary the variance within groups to create high or low overlap between groups. We simulate the performance under the null hypothesis and under alternative scenarios. The table shows the difference between the classical Passing-Bablok regression (cPBR) and the Block-Passing-Bablok regression (BPBR) from Definition 2.5. In particular, we provide the mean estimate of the slope, its confidence interval, the probability that the true slope is covered by the confidence interval, and the probability to reject the null hypothesis.
Figure 1: Four scenarios of our simulations are shown. The solid gray line represents the true slope. The black long-dashed line is the Block-Passing-Bablok estimate and the dotted line is the classic Passing-Bablok estimate ignoring the group memberships. Group memberships are illustrated via the shape and color of the measurements. The red short-dashed line shows the identity.

5 Proofs

For Theorem 1, Theorem 2, and the verification of the confidence intervals in Corollary 3.4, we need to compute the variance of the statistic introduced in Definition 3.1, i.e. the number of slopes larger than β minus the number of slopes smaller than β.

To simplify the notation for the proofs, we will instead consider a transformed dataset in which the coordinates are rescaled such that the errors in both dimensions are identically distributed (see Assumption 2.2(iii)). If we additionally subtract the intercept, the sign of a slope in the transformed dataset determines whether a pair of points contributes positively or negatively to the statistic from Definition 3.1. In particular, all figures in the proof section are plotted in transformed coordinates.

5.1 Proof of Theorem 1

Proof.

By defining

as the sign of the slope in the transformed dataset, it follows that

We will hence compute

Since each sign has expectation zero, the second term on the right-hand side vanishes and we have

(10)

We split the right-hand side into the sum over the squares and the sum over the remaining mixed terms:

(a) The first term on the right-hand side is easy to compute, since each squared sign equals one for every considered pair. We get

(11)

(b) For the computation of the mixed terms, we only need to consider those with a common pair of indices, i.e. one of four possible cases of a shared index; ignoring group assignments, these combinations can be counted directly. All other combinations vanish in expectation, due to independence. For the expectation of the sum of mixed terms, we look at the covariances of the signs with each other. Since all four cases lead to the same result, we can treat one case without loss of generality and multiply the result by four. We will combine triplets of mixed terms to simplify the calculations. By a distinction of cases as sketched in Figure 2, we obtain for the expectation

(12)

considering that the two cases occur with probability 1/3 and 2/3, respectively.

Figure 2: Left: sketch of case 1. With probability 1/3, the shared point lies in between the two others, and the slopes have equal signs. Right: sketch of case 2. With probability 2/3, the shared point lies above or below both others, and the slopes have different signs.

Furthermore, we need to account for the fact that we counted each pair of slopes three times due to the arrangement in triplets. Consequently, we have to divide by three and get the interim result

(13)

So far we did not consider the groups. Since the method from Definition 2.5 does not consider any pairs of data points within the same group, we have to subtract those triplets of indices in which two or all three points belong to the same group.

(c) There are

(14)

possible combinations of three elements from two different groups. Suppose the two tied points come from the same group. In that case, only one term in (12) remains, and we have to subtract the other two terms from our computation of (10), because we wrongfully included them in step (b). In the case of strictly separated groups, however, those contributions vanish, as can be seen below in equation (15).

In the case of two equal group indices there are three different possibilities for how the points can be located relative to each other. Looking at the single point from the other group, it can be located above, in between, or below the two points from the same group (with respect to the y-axis). These arrangements occur with probability 1/3 each and are illustrated in Figure 3 and Figure 4 for the two relevant sign configurations. Summed up, these arrangements yield an expectation of

(15)

Figure 3: All possible arrangements of three elements from two different groups: two of the points come from the same group, and the third comes from another group.

Figure 4: Same as Figure 3 for the complementary sign configuration, each arrangement occurring with probability 1/3.

(d) Further, in (13) we wrongfully included one more degenerate case. This means we have to subtract

(16)

and can combine (11) and (13)-(16) to get for the term in (10)

the desired formula for the variance. Naturally, this formula holds likewise for the less complex setting of equally sized non-overlapping groups, i.e.

(17)
(18)

and for non-grouped data we regain the classic result

(19)

5.2 Proof of Theorem 2

Proof.

The proofs for the PBR and TSR for non-grouped measurements [8, 10] relied on results for rank correlation methods. Our proof follows the same route, as shown in chapter 5 of Kendall's book [4]. To show the asymptotic normality, we show that the moments of the distribution of the statistic tend to those of the normal distribution [1, Thm 30.2].

Under the null hypothesis, the distribution of the statistic is symmetric around zero, hence its moments of odd order vanish. For moments of even order, we need to compute

(20)

Consider the expansion of the sum above, which consists of summands with several sign factors each. Since each sign has mean zero and signs of disjoint pairs of points are independent, each summand containing a factor independent of all the others vanishes in expectation. Let us consider the number of summands whose factors are pairwise linked by exactly one common index. Each of these summands looks like this:

(21)

which, due to independence, could be split up into a product of expectations. However, by doing so we would no longer connect the cases as in equation (12). So we instead combine cases, as we have done in the proof of Theorem 1:

For each factor in a summand of the form (21) there are two other combinations that are exactly the same except for which member of the triplet is involved. Therefore we define, for each triplet of points, the corresponding pairings.

For each summand with pairwise tied indices, consider the triplets of indices used in the pairings and the set of sums they represent. Now we see that the expectation factorizes if the triplets are disjoint and hence independent sets of points.

The remaining questions are:

  1. How many summands with pairwise tied indices are there in the expanded sum ignoring group associations?

  2. Which of these need to be omitted due to group associations for non-overlapping groups?

  3. What changes for overlapping groups?

Answer to (1):

There are several ways to choose the first factors of the pairings and to assign the remaining factors. But since some combinations are counted more than once, we have to divide accordingly to get the final number of ways to choose the pairings. Given such a combination, the remaining distinct indices can then be chosen freely.

Thus we end up with

combinations with pairs of tied indices. Finally, since we counted each summand several times, we get for (20)

(22)

if we ignore group associations.
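As a sanity check on this counting, the number of perfect pairings of 2k factors equals the double factorial (2k−1)!!, which is exactly the 2k-th moment of a standard normal random variable; this is what makes the moment comparison in [1, Thm 30.2] work. A quick numerical check (illustrative, not part of the proof):

```python
from math import factorial

def pairings(two_k):
    """Number of perfect pairings of 2k objects: (2k)! / (2^k k!) = (2k-1)!!."""
    k = two_k // 2
    return factorial(two_k) // (2 ** k * factorial(k))

# Matches the double factorials (2k-1)!! = 1, 3, 15, 105, ...
print([pairings(2 * k) for k in range(1, 5)])  # [1, 3, 15, 105]
```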

Answer to (2):

For the group indices of each triplet there are 3 different scenarios.

Scenario 1: all three group indices are different.

No difference to the above calculation.

Scenario 2: all three group indices are equal.

The corresponding summand is not included in (20), which we mimic by setting its contribution to zero.

Scenario 3: exactly two of the group indices are equal.

Although some summands are not included in this case, we do not have to change anything, since the effect vanishes for separated groups, as shown in equation (15).

For separated groups, only scenario 2 alters the above calculation. So we have to subtract the number of combinations with at least one factor where all indices are from the same group. The probability to choose 3 indices from a single group follows from the group sizes, and thus the fraction of all sums with no contribution from scenario 2 tends to

(23)

which simplifies for groups with equal sizes to

(24)

So we have to multiply equations (22) and (23) and end up with

Answer to (3):

If groups can overlap, count separately the triplets where the groups do not overlap (scenario 3.1) and the triplets where the groups overlap (scenario 3.2).

In this case we get

So for each summand with an odd number of overlapping triplets we have to subtract accordingly, just as in the proof of Theorem 1. We set

The probability to get a summand with an odd number of such triplets, given that there is no triplet from scenario 2, is now given by the corresponding expression. This leads us to