pimeta: an R package of prediction intervals for random-effects meta-analysis

Kengo Nagashima et al., October 15, 2021

The prediction interval is gaining prominence in meta-analysis because it enables the assessment of uncertainty in treatment effects and of heterogeneity between studies. However, the coverage probabilities of the current standard method for constructing prediction intervals generally cannot retain their nominal levels, particularly when the number of synthesized studies is moderate or small, because its validity depends on large-sample approximations. Recently, several methods have been developed to address this issue. This paper briefly summarizes these recent developments in prediction interval methods and provides readable examples using R, with simple code, for multiple types of data. The pimeta package is an R package that provides these improved methods for calculating accurate prediction intervals, together with graphical tools to illustrate the results. The pimeta package is listed in “CRAN Task View: Meta-Analysis.” The analysis is easily performed in R using a series of R packages.

1 Introduction

Random-effects meta-analyses combine treatment effect estimates across studies while accounting for heterogeneity. One objective of a random-effects meta-analysis is to summarize the studies and to make statistical inferences about the average treatment effect. Quantifying and evaluating the magnitude of heterogeneity are also very important, because the true treatment effects often differ across studies due to differences in patient characteristics, follow-up times, drug regimens, and other factors. However, researchers often interpret results from random-effects models in the same manner as those from fixed-effect models, which assume that the true effect does not differ from study to study [RileyGates2011, RileyHiggins2011]. This means that they tend to pay little attention to the study-specific treatment effects, which can differ from the average treatment effect.

Skipka [Skipka2006] first proposed a prediction interval in random-effects meta-analysis, although the term heterogeneity interval was used there. Afterwards, it was recommended that a prediction interval [Higgins2009] be reported alongside a confidence interval and heterogeneity measures [RileyHiggins2011, Veroniki2019]. It can be interpreted as the range within which the true treatment effect in a new study is expected to lie, given the information from the combined studies. Prediction intervals provide useful information in addition to confidence intervals, because a prediction interval is a measure of treatment effects that accounts for heterogeneity. The importance of prediction intervals has come to be recognized, and several methods to construct prediction intervals for random-effects meta-analysis have been developed.

Higgins et al. [Higgins2009] proposed a method to calculate the prediction interval from a very simple formula. This prediction interval is convenient because it can be calculated from the summarized results of a random-effects meta-analysis (i.e., the standard error of the average effect and the estimate of the heterogeneity parameter). However, Partlett & Riley [Partlett2017] confirmed that Higgins et al.’s prediction interval can have poor coverage when the heterogeneity or the number of studies is small, because it is based on large-sample approximations.

Partlett & Riley [Partlett2017] proposed modifications of Higgins et al.’s prediction interval using restricted maximum likelihood (REML) estimation of the heterogeneity variance together with four variance estimators for the overall mean effect: (1) the approximate variance estimator; (2) the Hartung–Knapp variance estimator [Hartung1999, Hartung2001]; (3) the Sidik–Jonkman bias-corrected variance estimator [Sidik2006]; and (4) the Kenward–Roger approach [Kenward1997]. These prediction intervals exhibit better coverage performance than Higgins et al.’s prediction interval, but they are still insufficient when the heterogeneity or the number of studies is small [Partlett2017], because they rely on approximations similar to those underlying Higgins et al.’s prediction interval.

To address these issues, Nagashima et al. [Nagashima2018] proposed an alternative prediction interval based on a parametric bootstrap method using a confidence distribution [Schweder2002, Xie2013]. In their simulation studies, Nagashima et al.’s prediction interval performed very well even when the heterogeneity and the number of studies were small; it remained accurate over a wide range of $I^2$ values and numbers of studies.

Nagashima et al. [pimeta2018] have developed the R [R2019] package pimeta to facilitate implementation of these improved methods in meta-analyses. The pimeta package is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=pimeta or GitHub repository https://github.com/nshi-stat/pimeta. Moreover, the pimeta package is listed in “CRAN Task View: Meta-Analysis” [CRANTaskView2019].

The remainder of this paper is organized as follows. Section 2 briefly reviews the random-effects model for meta-analysis and prediction intervals. Section 3 introduces the analysis of prediction intervals using R with the pimeta package and provides examples based on meta-analysis datasets included in the package. Section 4 concludes the paper.

2 Prediction intervals for random-effects meta-analysis

2.1 Random-effects model

Let us consider combining information from a series of comparative studies using the random-effects model [Cochran1937, DerSimonian1986]. The random-effects model can be defined as

$y_k = \theta_k + \epsilon_k, \quad \theta_k = \mu + u_k, \quad k = 1, 2, \ldots, K,$   (1)

where $y_k$ is a random variable for the effect size estimate from the $k$th study, $\theta_k$ is the true treatment effect in the $k$th study, $\epsilon_k$ is a random error within a study, $\mu$ is an unknown parameter for the overall mean effect, and $u_k$ is a random variable reflecting the study-specific deviation from the overall mean effect. The errors $\epsilon_k$ and $u_k$ are assumed to be mutually independent and normally distributed, which means that $\epsilon_k \sim N(0, \sigma_k^2)$, $u_k \sim N(0, \tau^2)$, $\theta_k \sim N(\mu, \tau^2)$, and $y_k \sim N(\mu, \sigma_k^2 + \tau^2)$. The within-study variances $\sigma_k^2$ are treated as known and replaced by their efficient estimates [Biggerstaff2008]. The heterogeneity variance $\tau^2$ is an unknown parameter that must be estimated.

The overall mean effect $\mu$ can be estimated as $\hat{\mu} = \sum_{k=1}^{K} \hat{w}_k y_k \big/ \sum_{k=1}^{K} \hat{w}_k$, where $\hat{w}_k = (\sigma_k^2 + \hat{\tau}^2)^{-1}$ and $\hat{\tau}^2$ is an estimator of the heterogeneity variance; a number of such estimators have been proposed [Sidik2007, Veroniki2016, Petropoulou2017, Langan2018] (see also Section 3.3). Similarly, many methods to construct a confidence interval for the overall mean effect $\mu$ have also been proposed [Brockwell2001, Meca2008, Veroniki2019] (see also Section 3.3).
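As an informal illustration (the helper below is not part of the pimeta API; it simply restates the formula above), the pooled estimate and its approximate standard error can be computed from the effect estimates, the within-study standard errors, and a given heterogeneity estimate:

    # Illustrative sketch: inverse-variance pooling under the random-effects model,
    # given effect estimates y, within-study standard errors se, and a heterogeneity
    # estimate tau2 obtained from any of the estimators cited above.
    pool_mu <- function(y, se, tau2) {
      w <- 1 / (se^2 + tau2)          # random-effects weights w_k = 1 / (sigma_k^2 + tau^2)
      c(mu    = sum(w * y) / sum(w),  # weighted average mu-hat
        se_mu = sqrt(1 / sum(w)))     # approximate standard error of mu-hat
    }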

2.2 Higgins–Thompson–Spiegelhalter prediction interval

Higgins et al. [Higgins2009] proposed a simple plug-in type prediction interval that can be expressed as

$\hat{\mu} \pm t^{\alpha}_{K-2} \sqrt{\hat{\tau}^2_{\mathrm{DL}} + \widehat{\mathrm{Var}}(\hat{\mu})},$   (2)

where $\widehat{\mathrm{Var}}(\hat{\mu}) = \big(\sum_{k=1}^{K} \hat{w}_k\big)^{-1}$ is an approximate variance estimator of $\hat{\mu}$, $t^{\alpha}_{K-2}$ is the $100(1 - \alpha/2)$ percentile of the $t$ distribution with $K - 2$ degrees of freedom, and $\hat{\tau}^2_{\mathrm{DL}}$ is the DerSimonian–Laird estimator of the heterogeneity variance [DerSimonian1986], defined as $\hat{\tau}^2_{\mathrm{DL}} = \max\{0, (Q - (K - 1)) / (S_1 - S_2 / S_1)\}$, with $S_r = \sum_{k=1}^{K} \sigma_k^{-2r}$, $Q = \sum_{k=1}^{K} \sigma_k^{-2} (y_k - \bar{y})^2$, and $\bar{y} = \sum_{k=1}^{K} \sigma_k^{-2} y_k \big/ \sum_{k=1}^{K} \sigma_k^{-2}$. This prediction interval rests on two large-sample approximations, a normal approximation for the distribution of $\hat{\mu}$ and a $\chi^2$-type approximation for the distribution of $\hat{\tau}^2_{\mathrm{DL}}$, which generally do not hold for small $K$. Therefore, this prediction interval performs well only when the heterogeneity and the number of studies are moderate [Partlett2017]; in simulations, it was accurate only for moderate heterogeneity and a relatively large number of studies [Nagashima2018]. In practice, however, most meta-analyses include fewer than 20 studies [Kontopantelis2013].
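The following self-contained sketch restates interval (2) with the DerSimonian–Laird estimator; it is meant only to make the formula concrete and is not the pimeta implementation:

    # Minimal sketch of the Higgins-Thompson-Spiegelhalter prediction interval (2),
    # assuming the DerSimonian-Laird heterogeneity estimator and the t_{K-2} quantile.
    hts_pi <- function(y, se, alpha = 0.05) {
      K  <- length(y)
      w  <- 1 / se^2                           # fixed-effect (inverse-variance) weights
      yw <- sum(w * y) / sum(w)                # fixed-effect pooled estimate
      Q  <- sum(w * (y - yw)^2)                # Cochran's Q statistic
      tau2 <- max(0, (Q - (K - 1)) / (sum(w) - sum(w^2) / sum(w)))  # DL estimator
      wr   <- 1 / (se^2 + tau2)                # random-effects weights
      mu   <- sum(wr * y) / sum(wr)            # random-effects pooled estimate
      vmu  <- 1 / sum(wr)                      # approximate variance of mu-hat
      tq   <- qt(1 - alpha / 2, df = K - 2)
      c(mu = mu, lpi = mu - tq * sqrt(tau2 + vmu), upi = mu + tq * sqrt(tau2 + vmu))
    }
    # Example with the blood-pressure data defined in Section 3.1:
    # hts_pi(sbp$y, sbp$sigmak)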

2.3 Partlett–Riley prediction intervals

Partlett & Riley [Partlett2017] proposed prediction intervals based on restricted maximum likelihood (REML) estimation of the heterogeneity variance [Sidik2007, Harville1977] combined with four variance estimators for the overall mean effect. Each method replaces $\hat{\tau}^2_{\mathrm{DL}}$, $\hat{\mu}$, and $\widehat{\mathrm{Var}}(\hat{\mu})$ in Higgins et al.’s prediction interval (2) with the REML estimator $\hat{\tau}^2_{\mathrm{R}}$ (an iterative solution of the restricted likelihood equation), the corresponding REML-weighted estimate $\hat{\mu}_{\mathrm{R}}$, and one of the following variance estimators:

  1. The approximate variance estimator, $\widehat{\mathrm{Var}}(\hat{\mu}_{\mathrm{R}}) = \big(\sum_{k=1}^{K} \hat{w}_k\big)^{-1}$, with $\hat{w}_k = (\sigma_k^2 + \hat{\tau}^2_{\mathrm{R}})^{-1}$.

  2. The Hartung–Knapp variance estimator [Hartung1999, Hartung2001], $\widehat{\mathrm{Var}}_{\mathrm{HK}}(\hat{\mu}_{\mathrm{R}}) = \sum_{k=1}^{K} \hat{w}_k (y_k - \hat{\mu}_{\mathrm{R}})^2 \big/ \big[(K - 1) \sum_{k=1}^{K} \hat{w}_k\big]$.

  3. The Sidik–Jonkman bias-corrected variance estimator [Sidik2006], a leverage-adjusted, robust (sandwich-type) variance estimator.

  4. The Kenward–Roger approach [Kenward1997, Morris2018], which combines a bias-adjusted variance estimator, derived from the expected information for $\tau^2$ under the random-effects model (1), with approximate degrees of freedom for the $t$ quantile.

These prediction intervals are refinements of Higgins et al.’s prediction interval and provide more appropriate intervals. However, they are based on approximations similar to those underlying Higgins et al.’s prediction interval; in simulations, they were accurate only for moderate heterogeneity and a moderate or large number of studies [Nagashima2018].
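As a rough sketch (again, not the pimeta implementation), the approximate and Hartung–Knapp variance estimators can be computed as follows for a given heterogeneity estimate; substituting either of them, together with an REML estimate of $\tau^2$, into interval (2) gives the corresponding Partlett–Riley-type interval:

    # Sketch of two variance estimators for mu-hat; tau2 would typically be the REML estimate.
    var_mu <- function(y, se, tau2) {
      w  <- 1 / (se^2 + tau2)
      mu <- sum(w * y) / sum(w)
      c(APX = 1 / sum(w),                                        # approximate variance estimator
        HK  = sum(w * (y - mu)^2) / ((length(y) - 1) * sum(w)))  # Hartung-Knapp variance estimator
    }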

2.4 Nagashima–Noma–Furukawa prediction interval

Nagashima et al. [Nagashima2018] proposed a prediction interval that remains valid even when the heterogeneity variance and the number of studies are small. To avoid the approximations underlying Higgins et al.’s prediction interval, it is based on a parametric bootstrap approach using a confidence distribution [Schweder2002, Xie2013], which accounts for the uncertainty of the heterogeneity variance through an exact distribution estimator of $\hat{\tau}^2_{\mathrm{DL}}$ and uses the Hartung–Knapp variance estimator [Hartung1999, Hartung2001].

This prediction interval can be constructed using the algorithm described below:

  1. Generate $B$ bootstrap samples of the heterogeneity variance, $\tilde{\tau}^2_b$ ($b = 1, \ldots, B$), drawn from the exact distribution of $\hat{\tau}^2_{\mathrm{DL}}$.

  2. For each $b$, calculate the random-effects weights $\tilde{w}_{bk} = (\sigma_k^2 + \tilde{\tau}^2_b)^{-1}$, draw $\tilde{\mu}_b$ from an estimated distribution of $\hat{\mu}$ given $\tilde{\tau}^2_b$ (using the Hartung–Knapp variance estimator), and draw the predicted effect in a new study, $\tilde{\theta}_{\mathrm{new},b}$, from $N(\tilde{\mu}_b, \tilde{\tau}^2_b)$.

  3. Calculate the prediction limits as the $100 \times \alpha/2$ and $100 \times (1 - \alpha/2)$ percentage points of the bootstrap distribution of $\tilde{\theta}_{\mathrm{new},b}$.

The exact distribution function of the random variable $\hat{\tau}^2_{\mathrm{DL}}$ follows from the distribution of the quadratic form $Q$: $Q$ has the same distribution as a weighted sum of independent central $\chi^2$ random variables, each with one degree of freedom, whose weights are the eigenvalues of a matrix that depends on the single parameter $\tau^2$ and on the within-study variances [Biggerstaff2008]. By applying the inverse transformation method, a random sample of $\hat{\tau}^2_{\mathrm{DL}}$ can be computed by numerical inversion of this distribution function evaluated at a uniform random number $u \sim U(0, 1)$; if the inverted value is negative, the sample is truncated to zero.

This prediction interval has been found to perform very well even when the heterogeneity and the number of studies are small; in simulation studies, it was accurate across a wide range of $I^2$ values and numbers of studies [Nagashima2018].
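Purely as a schematic illustration of step 3, the prediction limits are empirical percentiles of the bootstrap draws; the vector ‘theta_new’ below is a placeholder, not the distribution that steps 1 and 2 would actually produce:

    # Schematic sketch of the percentile step: given B bootstrap draws of the effect
    # in a new study, the prediction limits are empirical percentiles of those draws.
    alpha <- 0.05
    theta_new <- rnorm(25000)   # placeholder draws; steps 1-2 would generate the real ones
    quantile(theta_new, probs = c(alpha / 2, 1 - alpha / 2))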

3 Calculating prediction intervals using R

3.1 The pimeta package

The prediction intervals introduced in Section 2 are implemented in the pimeta package. One of the main functions is ‘pima()’, and its simple usage is:

    library("pimeta")
    sbp <- data.frame(
      y = c(0.00, 0.10, -0.40, -0.80, -0.63, 0.22, -0.34, -0.51, 0.03, -0.81),
      sigmak = c(0.42347717, 0.21939179, 0.02551067, 0.19898325, 0.30102594,
                 0.30102594, 0.07142988, 0.10204269, 0.12245123, 0.30102594),
      label = c("Almond (2001)", "Cashew (2003)", "Pecan (2004)",
                "Macadamia (2007)", "Pistachio (2008)", "Hazelnut (2011)",
                "Coconut (2012)", "Walnut (2014)", "Chestnut (2015)",
                "Peanut (2017)")
    )
    piboot <- pima(sbp$y, sbp$sigmak, seed = 3141592)
    print(piboot)

The output is:

    Prediction & Confidence Intervals for Random-Effects Meta-Analysis

    A parametric bootstrap prediction and confidence intervals
    Heterogeneity variance: DerSimonian-Laird
    Variance for average treatment effect: Hartung (Hartung-Knapp)

    No. of studies: 10

    Average treatment effect [95% prediction interval]:
    -0.3341 [-0.8789, 0.2165]
    d.f.: 9

    Average treatment effect [95% confidence interval]:
    -0.3341 [-0.5673, -0.0985]
    d.f.: 9

    Heterogeneity measure
    tau-squared: 0.0282
    I-squared:  70.5%

Here, ‘y’ is the vector of effect size estimates $y_k$, ‘se’ is the vector of within-study standard errors $\sigma_k$, and ‘method’ is the name of the calculation method for the prediction interval (the default is “boot”, which corresponds to Nagashima et al.’s prediction interval; see Section 2.4). Because Nagashima et al.’s prediction interval uses bootstrapping, the ‘pima()’ function provides the ‘parallel’ option to reduce the computation time required for bootstrapping. The other arguments of the ‘pima()’ function are shown in Table 1.

Argument Type Description
‘y’ numeric vector The vector of effect size estimates.
‘se’ numeric vector The vector of within-study standard errors; ‘se’ should be positive; either ‘se’ or ‘v’ must be specified.
‘v’ numeric vector The vector of within-study variances; ‘v’ should be positive; either ‘se’ or ‘v’ must be specified.
‘method’ character The calculation method; “HTS” indicates Higgins et al., “APX”, “HK”, “SJ”, or “KR” indicates Partlett–Riley, and “boot” indicates Nagashima et al.; the default is “boot”.
‘alpha’ numeric The alpha level; the default is 0.05.
‘B’ integer The number of bootstrap samples; ‘B’ should be a positive integer; the default is 25000; ignored if ‘method’ is not “boot”.
‘parallel’ integer/logical The number of threads used in parallel computing, or ‘FALSE’; the default is ‘FALSE’, which means single threading; ignored if ‘method’ is not “boot”.
‘seed’ integer The value of the random seed; ignored if ‘method’ is not “boot”.
‘maxit1’ integer The maximum number of iterations for the exact distribution function of $\hat{\tau}^2_{\mathrm{DL}}$; ‘maxit1’ should be a positive integer; the default is 10000; ignored if ‘method’ is not “boot”.
‘eps’ numeric The desired level of accuracy for the exact distribution function of $\hat{\tau}^2_{\mathrm{DL}}$; ‘eps’ should be positive; ignored if ‘method’ is not “boot”.
‘lower’ numeric The lower limit of the random numbers for $\hat{\tau}^2_{\mathrm{DL}}$; ‘lower’ should be non-negative; the default is 0; ignored if ‘method’ is not “boot”.
‘upper’ numeric The upper limit of the random numbers for $\hat{\tau}^2_{\mathrm{DL}}$; ‘upper’ should be greater than ‘lower’; the default is 1000; ignored if ‘method’ is not “boot”.
‘maxit2’ integer The maximum number of iterations for numerical inversions; ‘maxit2’ should be a positive integer; the default is 1000; ignored if ‘method’ is not “boot”.
‘tol’ numeric The desired accuracy for numerical inversions; ‘tol’ should be positive; the default is ‘.Machine$double.eps^0.25’; ignored if ‘method’ is not “boot”.
‘rnd’ numeric vector A vector of random numbers from the exact distribution of $\hat{\tau}^2_{\mathrm{DL}}$; if ‘rnd’ is specified, the random number generation step for $\hat{\tau}^2_{\mathrm{DL}}$ is skipped; ignored if ‘method’ is not “boot”.
‘maxiter’ integer The maximum number of iterations for REML estimation; ‘maxiter’ should be a positive integer; the default is 100; ignored if ‘method’ is not “APX”, “HK”, “SJ”, or “KR”.
Table 1: Arguments for the ‘pima()’ function.
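For example, the bootstrap computation can be distributed over several threads via the ‘parallel’ argument; the argument values below are illustrative:

    # Illustrative call: 25000 bootstrap samples (the default) computed on 4 threads.
    pima(sbp$y, sbp$sigmak, method = "boot", B = 25000, parallel = 4, seed = 3141592)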

The ‘pima()’ function can also compute Higgins et al.’s (see Section 2.2) and the Partlett–Riley (see Section 2.3) prediction intervals by setting the ‘method’ option.

    pima(sbp$y, sbp$sigmak, method = "HTS")
    pima(sbp$y, sbp$sigmak, method = "HK")

The ‘pima()’ function returns a ‘pima’ object, a list whose components are listed in Table 2; the result can be summarized with the ‘print()’ method. Both the prediction and confidence intervals are provided, together with the summary statistics that are usually reported in a random-effects meta-analysis. The method used for the confidence interval corresponds to that used for the prediction interval. The result can also be displayed as a forest plot with the ‘plot()’ method, as shown in Figure 1. The plot method has an argument ‘studylabel’ to set the labels for each study.

Output Description
‘K’ the number of studies
‘muhat’ the average treatment effect estimate $\hat{\mu}$
‘lpi’, ‘upi’ the lower and upper prediction limits
‘lci’, ‘uci’ the lower and upper confidence limits
‘nup’ the degrees of freedom for the prediction interval
‘nuc’ the degrees of freedom for the confidence interval
‘tau2h’ the estimate of the heterogeneity variance $\tau^2$
‘i2h’ the estimate of $I^2$
Table 2: Output of the ‘pima()’ function.
Figure 1: A forest plot for a ‘pima’ object.
    cairo_pdf("forestplot.pdf", width = 6, height = 3, family = "Arial")
    plot(piboot, digits = 2, base_size = 10)
    dev.off()

3.2 Random-effects meta-analysis with binary outcomes

The pimeta package can handle random-effects meta-analyses of controlled clinical trials with binary outcomes. The ‘convert_bin()’ function in the pimeta package implements the methods described in Hartung & Knapp [Hartung2001b]. Let $m_{ik}$ be the number of successes, $n_{ik}$ the number of patients, and $p_{ik}$ the true binomial probability of success in treatment group $i$ (for $i = 1, 2$) of the $k$th study. From these counts, logarithmic odds ratios, logarithmic relative risks, or risk differences, together with their within-study variances, are computed as described by Hartung & Knapp [Hartung2001b]. These quantities are then used as the effect size estimates $y_k$ and within-study variances $\sigma_k^2$ in the random-effects model (1).
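As a rough illustration of the conversion for the log odds ratio, the sketch below applies the commonly used 0.5 continuity correction to every cell; the exact correction implemented in ‘convert_bin()’ follows Hartung & Knapp [Hartung2001b] and may differ in detail, so ‘convert_bin()’ should be preferred in practice.

    # Hedged sketch: log odds ratios and large-sample standard errors with a 0.5
    # continuity correction added to every cell (an assumption, not necessarily
    # identical to the correction used by convert_bin()).
    logor_manual <- function(m1, n1, m2, n2) {
      s1 <- m1 + 0.5; f1 <- n1 - m1 + 0.5   # treatment successes / failures (corrected)
      s2 <- m2 + 0.5; f2 <- n2 - m2 + 0.5   # control successes / failures (corrected)
      data.frame(y  = log((s1 / f1) / (s2 / f2)),            # log odds ratio
                 se = sqrt(1 / s1 + 1 / f1 + 1 / s2 + 1 / f2))  # large-sample standard error
    }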

The ‘convert_bin()’ function converts binary outcome data into vectors of effect size estimates and within-study standard errors. It has an argument ‘type’, a character string indicating the outcome measure (“logOR”, “logRR”, or “RD”). The input data ‘m1’, ‘n1’, ‘m2’, and ‘n2’, which are integer vectors, are converted to effect size estimates $y_k$ and within-study standard errors $\sigma_k$. As an example, we analyze a dataset of 13 placebo-controlled trials of cisapride reported by Hartung & Knapp [Hartung2001b]; this dataset is reproduced in Table 3.

Study Treatment (cisapride) group Placebo group
[m1/n1] [m2/n2]
Creytens 15/16 9/16
Milo 12/16 1/16
Francois and De Nutte 29/34 18/34
Deruyttere et al. 42/56 31/56
Hannon 14/22 6/22
Roesch 44/54 17/55
De Nutte et al. 14/17 7/15
Hausken and Bestad 29/58 23/58
Chung 10/14 3/15
Van Outryve et al. 17/26 6/27
Al-Quorain et al. 38/44 12/45
Kellow et al. 19/29 22/30
Yeoh et al. 21/38 19/38
Table 3: The cisapride data [Hartung2001b].

The converted data can also be analyzed by the ‘pima()’ function, and the results can be summarized as a forest plot (see Figure 2).

Figure 2: A forest plot of the cisapride data.
    m1 <- c(15, 12, 29, 42, 14, 44, 14, 29, 10, 17, 38, 19, 21)
    n1 <- c(16, 16, 34, 56, 22, 54, 17, 58, 14, 26, 44, 29, 38)
    m2 <- c( 9,  1, 18, 31,  6, 17,  7, 23,  3,  6, 12, 22, 19)
    n2 <- c(16, 16, 34, 56, 22, 55, 15, 58, 15, 27, 45, 30, 38)
    dat <- convert_bin(m1, n1, m2, n2, type = "logOR")
    pibin <- pima(dat$y, dat$se, seed = 2236067)
    print(pibin)
    binlabel <- c(
      "Creytens", "Milo", "Francois and De Nutte", "Deruyttere et al.",
      "Hannon", "Roesch", "De Nutte et al.", "Hausken and Bestad",
      "Chung", "Van Outryve et al.", "Al-Quorain et al.", "Kellow et al.",
      "Yeoh et al.")
    cairo_pdf("forestplot_bin.pdf", width = 6, height = 3.5, family = "Arial")
    plot(pibin, digits = 2, base_size = 10, studylabel = binlabel)
    dev.off()

The coverage performance of these prediction intervals was examined by Nagashima et al. [Nagashima2018]; Nagashima et al.’s prediction interval achieved the nominal coverage level under realistic meta-analysis settings.

3.3 Confidence intervals and heterogeneity variance estimators

Because Riley & Higgins [RileyHiggins2011] recommended that a prediction interval should be reported alongside a confidence interval and heterogeneity measure, the pimeta package also provides functions to calculate confidence intervals for the overall mean effect and heterogeneity variance estimators.

The ‘cima()’ function calculates a confidence interval for the overall mean effect. The arguments of the ‘cima()’ and ‘pima()’ functions are identical except for ‘method’, whose options are shown in Table 4.

‘method’ Description
“boot” A parametric bootstrap confidence interval [Nagashima2018]; this is the default.
“DL” A Wald-type confidence interval. The DerSimonian–Laird estimator for $\tau^2$ and an approximate variance estimator for the overall mean effect are used.
“APX” A Wald-type confidence interval. The REML estimator for $\tau^2$ and an approximate variance estimator are used.
“HK” A Wald-type confidence interval. The REML estimator for $\tau^2$ and the Hartung–Knapp variance estimator are used [Hartung1999, Hartung2001].
“SJ” A Wald-type confidence interval. The REML estimator for $\tau^2$ and the Sidik–Jonkman variance estimator are used [Sidik2006].
“KR” A Wald-type confidence interval. The REML estimator for $\tau^2$ and the Kenward–Roger approach are used [Partlett2017].
“PL” A profile likelihood confidence interval [Hardy1996].
“BC” A profile likelihood confidence interval with a Bartlett-type correction [Noma2011].
Table 4: The ‘method’ option for the ‘cima()’ function.

For example, the following script calculates a profile likelihood confidence interval with a Bartlett-type correction.

    cima(sbp$y, sbp$sigmak, method = "BC")
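Several of the methods in Table 4 can also be compared in a single loop, for example:

    # Compare confidence intervals across several methods listed in Table 4.
    for (m in c("DL", "HK", "PL", "BC")) {
      print(cima(sbp$y, sbp$sigmak, method = m))
    }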

The ‘tau2h()’ function calculates a heterogeneity variance estimate. It has two arguments that specify the methods for the heterogeneity variance estimator and for its confidence interval, ‘method’ and ‘methodci’, as shown in Table 5.

‘method’ Description
“DL” DerSimonian–Laird estimator [DerSimonian1986].
“VC” Variance component type estimator [Hedges1983].
“PM” Paule–Mandel estimator [Paule1982].
“HM” Hartung–Makambi estimator [Hartung2003].
“HS” Hunter–Schmidt estimator [Hunter2004]. This estimator has a negative bias [Viechtbauer2005].
“ML” Maximum likelihood (ML) estimator [DerSimonian1986].
“REML” Restricted maximum likelihood (REML) estimator [DerSimonian1986].
“AREML” Approximate restricted maximum likelihood estimator [Thompson1999].
“SJ” Sidik–Jonkman estimator [Sidik2005].
“SJ2” Sidik–Jonkman improved estimator [Sidik2007].
“EB” Empirical Bayes estimator [Morris1983].
“BM” Bayes modal estimator [Chung2013].
‘methodci’ Description
NA A confidence interval is not calculated.
“ML” Wald-type confidence interval based on the ML estimator [Biggerstaff1997].
“REML” Wald-type confidence interval based on the REML estimator [Biggerstaff1997].
Table 5: The ‘method’ and ‘methodci’ options for the ‘tau2h()’ function.

The REML estimate of $\tau^2$ and its Wald-type confidence interval based on the REML estimator can be calculated as follows.

    tau2h(sbp$y, sbp$sigmak, method = "REML", methodci = "REML")
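Because the choice of estimator can affect downstream results, it may be informative to compare several of the estimators in Table 5 on the same data, for example:

    # Compare heterogeneity variance estimates across several estimators from Table 5.
    for (m in c("DL", "PM", "REML", "EB")) {
      print(tau2h(sbp$y, sbp$sigmak, method = m))
    }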

3.4 Datasets in the pimeta package

The pimeta package includes the following three published random-effects meta-analyses.

  1. Set-shifting data: The set-shifting data [Roberts2007] included 14 studies evaluating set-shifting ability in people with eating disorders. Standardized mean differences in the time taken to complete the Trail Making Test between subjects with eating disorders and healthy controls were collected. Positive estimates indicate impaired set-shifting ability in people with eating disorders.

  2. Pain data: The pain data [Hauser2009] included 22 studies comparing the treatment effect of antidepressants on reducing pain in patients with fibromyalgia syndrome. The treatment effects were summarized using standardized mean differences on a visual analog scale for pain between the antidepressant group and control group. Negative estimates indicate the reduction of pain in the antidepressant group.

  3. Hypertension data: The hypertension data [Wang2005] included 7 studies comparing the treatment effect of anti-hypertensive treatment versus control on reducing diastolic blood pressure (DBP) in patients with hypertension. Negative estimates indicate the reduction of DBP in the anti-hypertensive treatment group.

These datasets are available in the pimeta package by using the following code.

Ψdata(setshift, package = "pimeta")
Ψdata(pain, package = "pimeta")
Ψdata(hyp, package = "pimeta")

We applied the methods introduced in Section 2 to these datasets using the ‘pima()’ function.

    pima(setshift$y, setshift$sigmak, seed = 2718281)
    pima(pain$y, pain$sigmak, seed = 1732050)
    pima(hyp$y, hyp$se, seed = 1414213)
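The remaining rows of Table 6 can be reproduced by varying the ‘method’ argument; for example, for the set-shifting data:

    # Compute all prediction intervals reported in Table 6 for the set-shifting data.
    for (m in c("HTS", "APX", "HK", "SJ", "KR", "boot")) {
      print(pima(setshift$y, setshift$sigmak, method = m, seed = 2718281))
    }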

Results are summarized in Table 6.

Method          Set-shifting          Pain                    Hypertension
95% PI
  HTS           0.36 [-0.02, 0.74]    -0.43 [-0.84, -0.02]    -8.83 [-11.21, -6.46]
  PR-APX        0.36 [ 0.06, 0.67]    -0.42 [-0.77, -0.07]    -8.92 [-12.74, -5.10]
  PR-HK         0.36 [ 0.05, 0.68]    -0.42 [-0.78, -0.06]    -8.92 [-12.77, -5.07]
  PR-SJ         0.36 [ 0.06, 0.67]    -0.42 [-0.77, -0.07]    -8.92 [-12.73, -5.12]
  PR-KR         0.36 [ 0.04, 0.68]    -0.42 [-0.79, -0.06]    -8.92 [-15.98, -1.86]
  NNF           0.36 [-0.12, 0.85]    -0.43 [-0.91,  0.03]    -8.83 [-12.76, -5.51]
tau^2 (DL)      0.023                  0.034                   0.639
tau^2 (REML)    0.013                  0.025                   1.729
I^2 (DL), %     22.5                   44.9                    69.4
I^2 (REML), %   14.5                   36.9                    86.0
Table 6: Estimated results for the three published meta-analysis datasets in the pimeta package: average treatment effect estimates with their 95% prediction intervals (PI: prediction interval; HTS: Higgins et al.’s prediction interval; PR-APX, PR-HK, PR-SJ, PR-KR: the Partlett–Riley prediction intervals with the approximate, Hartung–Knapp, Sidik–Jonkman, and Kenward–Roger variance estimators; NNF: Nagashima et al.’s prediction interval) and heterogeneity measures (the DerSimonian–Laird and REML estimates of tau^2 and the corresponding I^2 values).

As shown in Table 6, Nagashima et al.’s prediction intervals were substantially wider than Higgins et al.’s and the Partlett–Riley prediction intervals for the set-shifting and pain data. This is consistent with the simulation studies, which showed that Higgins et al.’s and the Partlett–Riley prediction intervals can be too short when the heterogeneity and the number of studies are small [Partlett2017, Nagashima2018]. In addition, the Partlett–Riley prediction interval with the Kenward–Roger variance estimator was much wider than the others for the hypertension data. This result is also consistent with the simulation studies: the Partlett–Riley prediction interval with the Kenward–Roger variance estimator may be overly conservative when there are considerable differences among the within-study variances [Partlett2017].

4 Conclusion

In this paper, we briefly summarized recent developments in methods for constructing prediction intervals in random-effects meta-analysis [Higgins2009, Partlett2017, Nagashima2018] and provided readable examples, with simple code, for multiple types of data. The pimeta package provides functions to compute prediction intervals and to draw forest plots of the estimated results. The R code used to generate the results described in this paper is given in the Supporting Information. The analysis is easily performed in R with a series of R packages.

Acknowledgements

This work was supported by JSPS KAKENHI Grant Number 19K20229.

References