Meta-analysis parameters computation: a Python approach to facilitate the crossing of experimental conditions

07/13/2020, by Flavien Quijoux et al.

Meta-analysis is a data aggregation method that establishes an overall and objective level of evidence based on the results of several studies. It is necessary to maintain a high level of homogeneity in the aggregation of data collected from a systematic literature review. However, current tools do not allow cross-referencing of the experimental conditions that could explain the heterogeneity observed between studies. This article proposes Python code containing several functions for the analysis and rapid visualization of data from many studies, while making it possible to cross-check the results by experimental condition.


1 Introduction

Meta-analysis is a popular statistical procedure used to combine the results of several clinical studies that address the same research question [11]. The objective of this method is to mitigate the bias associated with a particular selection of participants [23] in a single study and to increase the statistical significance of the conclusions [15]. Systematic reviews with meta-analysis are often seen as the highest level of evidence [17]. The main output is a quantitative measure of the effect of a treatment or medical condition, the so-called effect size. The statistical significance of this effect size is computed through a rigorous statistical approach, which allows researchers to conclude on the presence or absence of a certain effect. In addition, a sensitivity analysis is often included in meta-analyses to compare the impact of the different experimental conditions [10]. Indeed, collecting studies that scrupulously share the same recording conditions can be practically challenging, leading to biased or heterogeneous selections. On the other hand, thanks to the sensitivity analysis, researchers can identify spurious results (compared to other similar studies) [34, 10], and therefore take action to mitigate their impact on the common effect.

Related work.

To facilitate data aggregation and allow researchers to easily perform meta-analyses, the Cochrane Community has developed a software called Review Manager (RevMan) [6] which facilitates the writing of systematic reviews and the comparison of results from scientific articles. RevMan has a graphical interface and is particularly well disseminated in the scientific community because this software integrates many functionalities, including the writing of the review itself, the production of graphs, and the evaluation of bias [8]. However, Cochrane's software has two main drawbacks. First, the exact formulas used to compute the graphs are not easily found in the associated handbook. Second, when dealing with several subgroups in the clinical studies, researchers have to manually enter the same data several times, leading to potential copy errors. The amount of manual data duplication quickly grows as the number of considered subgroups increases. As an example, the systematic review in [31] considers 29 studies, 26 different variables of interest, and 8 experimental parameter values. This yields a table of 241 lines, where each line corresponds to a study, a variable and a combination of experimental conditions. We measured that, in total, 297 different meta-analyses could be performed by selecting all subsets of experimental settings. To compute all analyses in RevMan, we would need to manually enter each line about 6 times in the software, a procedure which can be error-prone. This difficulty of combining results due to variations in experimental conditions has been previously highlighted in systematic reviews [29, 30, 33]. For instance, in the field of postural control analysis, the number of extracted parameters can exceed one hundred [9], making the subgroup analysis for each outcome and each recording condition with RevMan impractical.

Contributions.

We aim to provide an easy-to-use tool for conducting a meta-analysis even when the number of conditions and experimental subgroups is large. Thanks to this work, the data can be quickly and easily selected according to the experimental conditions in which they were recorded, in a single operation and without manual data duplication. In addition, all calculations of the overall effect size, confidence interval, weights and model selection are carefully explained.

1.1 Outline

Section 2 describes the general objective of meta-analysis and provides an illustrative example. In Section 3, the exact steps to compute the overall effect size, the associated confidence interval, and the forest and funnel plots are formally explained. The last section presents the software that goes with this article, especially the computational requirements and the input and output formats.

2 Methodology of a meta-analysis

This section presents a general description of a meta-analysis and a simplified example to illustrate notions that will be used later, as well as to further motivate the need for tools which can deal with several experimental settings.

2.1 General principle

The objective of the meta-analysis is to determine if an outcome is significantly different between two groups (typically, an experimental or intervention group and a control group). The outcome is measured by one or several variables of interest, whose values (means and standard deviations) are measured on each group and compared. Depending on whether the difference between both groups is significantly different from zero, one can then conclude on the presence or absence of an association between the outcome and group membership. By combining results from individual clinical studies, a systematic review can summarize and provide more robust measures of influence. The first step of a meta-analysis is to gather similar clinical studies that try to answer the same research question and to extract the relevant information. Roughly, an article is included in a systematic review if it measures one of the variables of interest, in similar experimental settings [23]. The relevant information consists of the number of participants, the means and standard deviations of the considered variables, as well as contextual information about the clinical protocol.

After this data collection process, it is possible to quantify the influence of group membership on the considered outcome, by computing the so-called effect size which is, for a given variable of interest, proportional to the standardized mean difference. This quantity is computed, along with confidence intervals, for each individual study as well as for the considered pool of articles (the overall effect size). Since the results extracted from the selected studies contain some heterogeneity (random or not), extracted values are weighted according to the size of the associated cohorts and the variability of the measures. All computations (effect size and confidence interval) are often summarized in a visual representation, called a forest plot [35], which allows one to visualize the relative contribution of each article to the overall effect size, and to quickly assess if the combined studies confirm the influence of group membership on the outcome. In addition, a sensitivity analysis is also performed, to detect selection bias, which can occur because of the higher probability of publishing significant results in the scientific literature. To that end, another visual representation, called a funnel plot, is drawn. It consists of a scatter diagram where each study is represented according to its intra-study effect size and variance.

2.2 Illustrative example: the study of quiet stance balance and fall risk

(A complete version of this simplified meta-analysis protocol and results can be found in [31].)
In order to assess the risk of falling in the elderly population (falls are one of the main causes of deadly injuries in this population [42]), the different postural strategies of senior individuals are analyzed [2, 3, 4, 28]. To that end, medical researchers often focus on the static balance of patients, measured by force platforms [30]. Force platforms record the displacement of the center of pressure (COP), i.e. the resultant of the weight distribution between the two legs, and from that recording, several quantities (velocity, duration, covered surface, etc.) are computed. Generally, participants are asked to remain stable on the platform; however, a great variability in the experimental conditions of the recording can be observed, leading to possibly many different meta-analyses, depending on which experimental setting is preferred. Here, we focus on the COP mean velocity, one of the most common features, especially in the anteroposterior (AP) direction, and on two types of experimental conditions: eyes open or closed (the participant had his or her eyes open or closed during the recording) and retrospective or prospective fall recordings (the fall history was collected retrospectively before the recording, or falls were followed up prospectively after the recording). The objective of the meta-analysis is to determine the relationship between the variable of interest (here, the AP mean velocity) and the risk of falling (here, the population is simply divided into fallers and non-fallers). Seven clinical studies are included in this meta-analysis and Table 1 summarizes the information extracted from the associated articles, namely the number of participants, the means and standard deviations of the group of fallers and of the group of non-fallers, as well as the experimental conditions.

Study  Conditions  n_1  m_1  sd_1  n_2  m_2  sd_2
Howcroft, 2015 EO Retro 24 7.34 2.47 76 7.65 1.84
Howcroft, 2015 EC Retro 24 17.34 16.03 76 15.86 6.74
Howcroft, 2017 EC Pro 42 17.76 13.4 47 15.11 5.59
Howcroft, 2017 EO Pro 42 7.75 2.15 47 7.53 1.93
König, 2014 EC Retro 42 0.15 1.48 42 -0.12 0.12
Kwok, 2015 EO Retro 18 1.27 0.45 55 1.02 0.26
Maki, 1994 EC Pro 59 17.9 15.6 37 11.9 4.79
Maki, 1994 EO Pro 59 13 13.7 37 8.4 3.51
Maranesi, 2016 EC Retro 63 16.23 11.27 67 14.5 9.1
Maranesi, 2016 EO Retro 63 11 6.89 67 10 6.2
Pajala, 2008 EC Pro 189 12.46 5.09 230 12.5 6.8
Pajala, 2008 EO Pro 189 8.34 2.81 230 7.8 2.6
Table 1: Input example for the proposed meta-analysis procedure. Here a single variable of interest is considered (“Anteroposterior mean velocity”). “EO” and “EC” respectively denote “Eyes Open” and “Eyes Closed”. “Retro” corresponds to a retrospective fall history (before the recording) and “Pro” corresponds to a prospective follow-up of the falls (after the recording). Group 1 is the group of elderly fallers; group 2 is the group of elderly non-fallers. The group sample sizes n_1 and n_2, the means m_1 and m_2, and the standard deviations sd_1 and sd_2 are extracted from the associated articles. (See Section 3.1 for the notations.)

From the 12 measures in Table 1, 8 distinct meta-analyses can be performed, one for each combination of conditions (in detail: EO, EC, Pro, Retro, EC+Pro, EC+Retro, EO+Pro, EO+Retro). With RevMan, this would have resulted in manually duplicating those 12 lines in order to compute all of those analyses, because it is not designed to cope with multiple combinations of conditions. The meta-analysis tool that we propose is able to compute all conditions in one pass. In total, for this simple example, 36 lines would have been entered into RevMan while only 12 are needed with our method. Part of the usual output of meta-analyses, namely the forest plots, is shown on Figure 1. Those forest plots illustrate the sensitivity of the AP mean velocity to the different conditions. The effect size and its confidence interval are provided for each study, along with the overall effect size (diamond shape). The vertical dashed line represents the absence of effect and, intuitively, if the confidence interval of an individual study crosses this line, it means that there is no significant difference between the two groups for this study. Conversely, if the confidence interval of the overall effect size crosses the null-effect line, it indicates that the meta-analysis does not demonstrate a significant effect for the variable of interest.

(a) Experimental setting: eyes open (“EO”) and retrospective fall recording (“Retro”). (b) Experimental setting: eyes closed (“EC”) and retrospective fall recording (“Retro”). (c) Experimental setting: eyes open (“EO”) and prospective fall recording (“Pro”). (d) Experimental setting: eyes closed (“EC”) and prospective fall recording (“Pro”).
Figure 1: Forest plots for the input data of Table 1. The variable of interest is “AP mean velocity”. The effect sizes and associated confidence intervals are computed with our procedure for two types of experimental conditions (eyes open or closed, retrospective or prospective fall recording). Note that, for brevity, only 4 out of the 8 possible combinations of experimental settings are shown here.
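The eight combinations mentioned above can be enumerated programmatically. The following sketch is purely illustrative (the variable names are not part of the released tool): it counts the selections in which each experimental factor is either fixed to one of its values or left unrestricted.

    from itertools import product

    eye_conditions = ["EO", "EC", None]        # None = factor not restricted
    fall_recordings = ["Retro", "Pro", None]

    combinations = [
        (eye, fall)
        for eye, fall in product(eye_conditions, fall_recordings)
        if (eye, fall) != (None, None)         # at least one factor is fixed
    ]
    print(len(combinations))                   # 8 distinct meta-analyses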

3 Computing the effect sizes

This section presents the computation of the individual effect size, calculated for each study, and the overall effect size, calculated over several studies sharing the same experimental setting.

3.1 Notations

A subgroup of studies is defined by a variable of interest v and a set of experimental conditions c. All articles that evaluate this variable under the conditions c form the subgroup. For ease of notation, the pair (v, c) is denoted by s in the remainder of the article. The number of articles in the subgroup defined by s is denoted K (K ≥ 1). For each study k (k = 1, …, K), a user needs to extract the following quantities:

  • n_1^k, m_1^k, sd_1^k, respectively the number of subjects, the empirical mean and the (unbiased) standard deviation of v, for the first group (typically the study group),

  • n_2^k, m_2^k, sd_2^k, respectively the number of subjects, the empirical mean and the (unbiased) standard deviation of v, for the second group (typically the control group).

3.2 Individual effect size

Roughly speaking, for a given study k, the individual effect size quantitatively measures how the variable of interest varies between the two considered groups of subjects. For instance, this measure could determine if access to a certain treatment influences a certain measure of health status [26]. Two definitions of this quantity coexist: Cohen's, denoted d^k [7], and Hedges', denoted g^k [19]. (In the literature, those two quantities are commonly referred to as Cohen's d and Hedges' g.) The former is simply equal to the standardized mean difference

    d^k = \frac{m_1^k - m_2^k}{sd_{pooled}^k}    (1)

where m_1^k and m_2^k are extracted from the study k and sd_{pooled}^k is the weighted standard deviation [18] defined by

    sd_{pooled}^k = \sqrt{\frac{(n_1^k - 1)(sd_1^k)^2 + (n_2^k - 1)(sd_2^k)^2}{n_1^k + n_2^k - 2}}    (2)

with n_1^k, n_2^k, sd_1^k and sd_2^k extracted from study k. This statistic is known to be upwardly biased with small samples [14]. As a result, Hedges' g^k has been introduced to better estimate the effect size, even for studies with only few samples [16]. This measure is defined by

    g^k = \left(1 - \frac{3}{4(n_1^k + n_2^k) - 9}\right) d^k    (3)

In the literature, both statistics, Cohen's d^k (1) or Hedges' g^k (3), can be used to estimate the individual effect size and it is often left to the user to choose between one or the other. In the remainder of the article, \delta^k will denote either d^k or g^k.
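To make Equations (1)-(3) concrete, here is a minimal Python sketch (illustrative function names, not the exact code of the released tool) that computes Cohen's d and Hedges' g from the six extracted quantities:

    import math

    def cohen_d(n1, m1, sd1, n2, m2, sd2):
        """Cohen's d (Equation 1), i.e. the standardized mean difference."""
        # Weighted (pooled) standard deviation, Equation (2).
        sd_pooled = math.sqrt(
            ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
        )
        return (m1 - m2) / sd_pooled

    def hedges_g(n1, m1, sd1, n2, m2, sd2):
        """Hedges' g (Equation 3), a small-sample correction of Cohen's d."""
        d = cohen_d(n1, m1, sd1, n2, m2, sd2)
        return (1 - 3 / (4 * (n1 + n2) - 9)) * d

    # Usage with the first line of Table 1 (Howcroft, 2015, EO/Retro).
    print(hedges_g(24, 7.34, 2.47, 76, 7.65, 1.84))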

3.3 Overall effect size

In a nutshell, the overall effect size across studies, denoted \Delta, is a weighted average of the individual effect sizes:

    \Delta = \frac{\sum_{k=1}^{K} w^k \delta^k}{\sum_{k=1}^{K} w^k}    (4)

where \delta^k can either be d^k (1) or g^k (3), and the weights w^k are introduced in the following sections. Roughly speaking, depending on the heterogeneity between studies (the inter-study variability), one can choose between a fixed-effects model, in which case w^k = w_F^k (Section 3.3.1), or a random-effects model, in which case w^k = w_R^k (Section 3.3.2). Both are described below, along with a criterion to choose between them.

3.3.1 Heterogeneity between studies: fixed-effects model

The fixed-effects model assumes that the set of considered studies is homogeneous, meaning that the differences between the extracted values \delta^k and \delta^{k'} (for k ≠ k') are only the result of random noise [11, 32]. In this context, where no heterogeneity between articles is considered and all studies estimate the exact same effect, the weights of the fixed-effects model, denoted by w_F^k, are defined by

    w_F^k = \frac{1}{(\sigma^k)^2}    (5)

where (\sigma^k)^2 is the intra-study variance:

    (\sigma^k)^2 = \frac{n_1^k + n_2^k}{n_1^k n_2^k} + \frac{(\delta^k)^2}{2(n_1^k + n_2^k)}    (6)

where \delta^k can either be d^k (1) or g^k (3). According to this model, studies with a large number of samples (n_1^k, n_2^k) have a large weight w_F^k (5) and thus carry more information. Equation 4 can now be rewritten and the overall effect size for a fixed-effects model, denoted \Delta_F, is

    \Delta_F = \frac{\sum_{k=1}^{K} w_F^k \delta^k}{\sum_{k=1}^{K} w_F^k}    (7)
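A minimal sketch of the fixed-effects computation, assuming the individual effect sizes have already been computed as above (the names are illustrative, not the released tool's API):

    def intra_study_variance(n1, n2, delta):
        """Intra-study variance of an effect size, Equation (6)."""
        return (n1 + n2) / (n1 * n2) + delta ** 2 / (2 * (n1 + n2))

    def fixed_effects_overall(deltas, variances):
        """Overall effect size under the fixed-effects model, Equations (5) and (7)."""
        weights = [1.0 / v for v in variances]          # w_F^k = 1 / (sigma^k)^2
        overall = sum(w * d for w, d in zip(weights, deltas)) / sum(weights)
        return overall, weights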

3.3.2 Heterogeneity between studies: random-effects model

Whenever there is heterogeneity between studies, for instance because of differences in how measurements are taken or in how the variable of interest is computed [1], the fixed-effects model no longer applies: one can then resort to the random-effects model [12]. Before defining the weights of this model, two quantities are now introduced: the heterogeneity measure Q and the inter-study variance \tau^2. First, the heterogeneity measure Q is derived from the fixed-effects model:

    Q = \sum_{k=1}^{K} w_F^k (\delta^k - \Delta_F)^2    (8)

Second, the inter-study variance \tau^2 is as follows:

    \tau^2 = \frac{Q - (K - 1)}{C}    (9)

where the coefficient C is computed from the following equation:

    C = \sum_{k=1}^{K} w_F^k - \frac{\sum_{k=1}^{K} (w_F^k)^2}{\sum_{k=1}^{K} w_F^k}    (10)

(Note that thus defined, \tau^2 is not guaranteed to be positive, therefore, in practice, negative values are clipped to 0, as in [40, 20].) The weights of the individual studies under the random-effects model, denoted w_R^k, are defined by

    w_R^k = \frac{1}{(\sigma^k)^2 + \tau^2}    (11)

Equation 4 can now be rewritten and the overall effect size for a random-effects model, denoted \Delta_R, is

    \Delta_R = \frac{\sum_{k=1}^{K} w_R^k \delta^k}{\sum_{k=1}^{K} w_R^k}    (12)
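These steps translate directly into code; a minimal sketch under the same assumptions and naming conventions as before (it expects at least two studies):

    def random_effects_overall(deltas, variances):
        """Overall effect size under the random-effects model, Equations (8)-(12)."""
        k = len(deltas)
        w_f = [1.0 / v for v in variances]
        delta_f = sum(w * d for w, d in zip(w_f, deltas)) / sum(w_f)
        # Heterogeneity measure Q, Equation (8).
        q = sum(w * (d - delta_f) ** 2 for w, d in zip(w_f, deltas))
        # Coefficient C, Equation (10).
        c = sum(w_f) - sum(w ** 2 for w in w_f) / sum(w_f)
        # Inter-study variance tau^2, Equation (9), clipped to 0 when negative.
        tau2 = max(0.0, (q - (k - 1)) / c)
        # Random-effects weights and overall effect, Equations (11) and (12).
        w_r = [1.0 / (v + tau2) for v in variances]
        overall = sum(w * d for w, d in zip(w_r, deltas)) / sum(w_r)
        return overall, w_r, q, tau2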

3.3.3 Choose between fixed-effects and random-effects

In order to choose between a fixed-effects model and a random-effects model, the Cochrane handbook [5] proposes a quantitative methodology based on the heterogeneity measure Q (Equation 8), and more precisely, on a derived percentage, denoted I^2 [21, 22, 11]:

    I^2 = \frac{Q - (K - 1)}{Q} \times 100\%    (13)

According to the Cochrane handbook [5], when the value of I^2 is greater than 50%, the inter-study variability is substantial and the random-effects model should be chosen. Otherwise, the variability is considered to be moderate and the fixed-effects model should be preferred. Note that thus defined, I^2 is not guaranteed to be positive, therefore, in practice, negative values are clipped to 0, as in [22, 41].
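A minimal sketch of this model-selection rule, reusing the quantities returned by the previous functions (the 50% threshold is the one quoted above from the Cochrane handbook):

    def i_squared(q, k):
        """Heterogeneity percentage I^2, Equation (13), clipped to 0 when negative."""
        if q == 0:
            return 0.0
        return max(0.0, (q - (k - 1)) / q * 100.0)

    def choose_model(q, k):
        """Return 'random' if I^2 is greater than 50%, 'fixed' otherwise."""
        return "random" if i_squared(q, k) > 50.0 else "fixed"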

3.4 Visualizations of a meta-analysis

This section describes two visual tools commonly used by researchers to quickly assess the magnitude of the overall effect size and the contribution of each studies included in the meta-analysis, namely the forest plot and the funnel plot.

3.4.1 Forest plot

For a given pair s (variable of interest and experimental conditions), a forest plot displays the confidence intervals of the individual effect sizes, Cohen's d^k (1) or Hedges' g^k (3), and of the overall effect size, \Delta_F (7) or \Delta_R (12). An example is shown on Figure 1.

To that end, the effect sizes are modeled by Gaussian random variables. For each study k, the confidence interval of the individual effect size, at the level \alpha, is

    \left[ \delta^k - \Phi^{-1}(1 - \alpha/2)\, \sigma^k,\ \delta^k + \Phi^{-1}(1 - \alpha/2)\, \sigma^k \right]    (14)

where \delta^k stands for d^k (1) or g^k (3), (\sigma^k)^2 is the intra-study variance (6) and \Phi^{-1} is the quantile function of a standard normal distribution. Similarly, the confidence interval of the overall effect size, at the level \alpha, is

    \left[ \Delta - \Phi^{-1}(1 - \alpha/2)\, \sigma_\Delta,\ \Delta + \Phi^{-1}(1 - \alpha/2)\, \sigma_\Delta \right], \quad \text{with } \sigma_\Delta = \frac{1}{\sqrt{\sum_{k=1}^{K} w^k}}    (15)

where \Delta stands for \Delta_F (7) or \Delta_R (12) (depending on the adopted model), and w^k is either w_F^k (5) or w_R^k (11). Generally, a vertical line (at x-position equal to 0) represents the absence of effect. If the confidence interval associated with a publication crosses this line, it means that there is no statistically significant difference (in the variable v for the experimental conditions c) between the two studied groups (e.g. the study group and the control group) in the considered publication. Similarly, if the confidence interval of the overall effect size crosses this line, this indicates that the meta-analysis did not find any statistically significant effect between the two studied groups for the considered pool of publications. In addition to the confidence interval, the z-score Z of the overall effect size and the associated p-value are often computed as well:

    Z = \frac{\Delta}{\sigma_\Delta}    (16)

where \Delta can either be \Delta_F (7) or \Delta_R (12) and \sigma_\Delta is defined in Equation (15). The p-value is given by

    p = 2 \left( 1 - \Phi(|Z|) \right)    (17)

where \Phi is the standard normal cumulative distribution (two-tailed statistical test).
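A minimal sketch of Equations (14)-(17), using scipy only for the normal quantile and cumulative distribution functions (scipy is not in the tool's listed requirements; the names below are illustrative):

    from scipy.stats import norm

    def confidence_interval(effect, variance, alpha=0.05):
        """Confidence interval, Equations (14)-(15): pass the intra-study variance (6)
        for an individual study, or 1 / sum(weights) for the overall effect."""
        half_width = norm.ppf(1 - alpha / 2) * variance ** 0.5
        return effect - half_width, effect + half_width

    def overall_z_and_p(overall, weights):
        """z-score and two-tailed p-value of the overall effect, Equations (16)-(17)."""
        sigma = 1.0 / sum(weights) ** 0.5
        z = overall / sigma
        p = 2 * (1 - norm.cdf(abs(z)))
        return z, p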

3.4.2 Funnel plot

A funnel plot is a visual tool to assess publication bias. In a nutshell, publication bias is the consequence of an over-representation of statistically significant results, which can lead to biased effect sizes in meta-analyses [13, 36]. Formally, a funnel plot is a scatter plot in a two-dimensional plane where the x-axis shows the effect size and the y-axis, the intra-study variance. Each publication k (k = 1, …, K) is represented by a point of coordinates (\delta^k, (\sigma^k)^2), where \delta^k can either be d^k (1) or g^k (3), and (\sigma^k)^2 is defined in (6). Usually, the ordinate axis is inverted, so that publications with a large intra-study variance are below publications with a small intra-study variance. In addition, the overall effect is graphically represented by two lines forming a funnel, giving its name to this plot, defined by the equations x = \Delta ± \Phi^{-1}(1 - \alpha/2) \sqrt{y}, where \Delta can be \Delta_F (7) or \Delta_R (12) depending on the model (see Section 3.3.3), \alpha is a user-defined level of confidence (usually 5%), and \Phi^{-1} is the quantile function of a standard normal distribution. A vertical line at x = \Delta is also shown.
Intuitively, publications with a small intra-study variability are located in the top of the funnel (the narrow part) while publications with a large variability are dispersed in the bottom of the funnel. Certain phenomena can easily be seen with a funnel plot. For instance, an over-representation of articles with favourable results would result in an asymmetric distribution of the publications in this representation [39, 27]. Heterogeneity can also be a source of dispersion and contribute to the asymmetry of the funnel plot [24]. More interpretations of this representation can be found in [37, 38, 25]. An example is shown on Figure 2.

(a) Eyes open and retrospective fall recording. (b) Eyes closed and retrospective fall recording. (c) Eyes open and prospective fall recording. (d) Eyes closed and prospective fall recording.
Figure 2: Funnel plot for the input data of Table 1. The variable of interest is ‘AP mean velocity’. The effect size is equal to Hedges' g and the funnel lines are drawn for \alpha = 5%. The associated forest plot is displayed on Figure 1. Note that only four out of the eight possible combinations of experimental settings are shown here.
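The released tool renders its plots with LaTeX (see Section 4); the following matplotlib sketch only illustrates the construction described above (matplotlib is not among the tool's requirements, and the names are illustrative):

    import matplotlib.pyplot as plt
    from scipy.stats import norm

    def funnel_plot(deltas, variances, overall, alpha=0.05):
        """Illustrative funnel plot: effect sizes vs. intra-study variances."""
        z = norm.ppf(1 - alpha / 2)
        fig, ax = plt.subplots()
        ax.scatter(deltas, variances)
        # Funnel lines x = overall +/- z * sqrt(y) and the vertical line x = overall.
        y_max = max(variances)
        ys = [y_max * i / 100 for i in range(101)]
        ax.plot([overall - z * y ** 0.5 for y in ys], ys, "k--")
        ax.plot([overall + z * y ** 0.5 for y in ys], ys, "k--")
        ax.axvline(overall, color="k")
        ax.invert_yaxis()                      # large variances at the bottom
        ax.set_xlabel("Effect size")
        ax.set_ylabel("Intra-study variance")
        return fig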

4 Software description

4.1 User input

User input consists of three elements: the extracted information from the relevant studies (Table 1), the effect size formula to use (Cohen's d or Hedges' g) and the confidence level (usually 5%). To work properly, our meta-analysis tool requires a certain formatting of the extracted information, which is now described.
The extracted information is passed using Comma-Separated Values (CSV) files. Each file contains at least nine columns, separated with semi-colons (“;”), but possibly more, depending on the number of considered experimental conditions:

  • study: a unique name to identify a study. A common practice is to use “{Last name of the first author}, {year}”, for instance, “Quijoux, 2020”.

  • variable: a unique name to identify a variable measured during a study.

  • n_1: the number of participants in the first group.

  • n_2: the number of participants in the second group.

  • mean_1: empirical mean of the considered variable in the first group.

  • mean_2: empirical mean of the considered variable in the second group.

  • std_1: empirical standard deviation of the considered variable in the first group.

  • std_2: empirical standard deviation of the considered variable in the second group.

  • condition_1: first condition that defines the experimental setting.

  • condition_2, condition_3, condition_4,…: second, third, fourth,…, conditions that define the experimental setting. Those columns are optional and should be added only if several experimental conditions are indeed considered. An arbitrary number of columns condition_k can be added.

Users should be particularly careful to define consistent groups: for instance, group 1 can be the experimental/intervention group and group 2 the control group. Under this convention, in the forest plots (see Figure 1), the left-hand side (relative to the vertical dashed line) of the graph favours “control” while the right-hand side favours “experimental/intervention”. On a more practical note, the columns are separated with semi-colons (“;”) and not commas (“,”). Common formatting mistakes include: using commas as column separators or decimal delimiters (“1.2” and not “1,2”), leaving a semi-colon at the end of a line, using special characters (“\”, “#”, “%”, “$”, “{}”, “_”, etc.), inconsistent use of upper and lower case (“AP mean velocity”, “Ap mean velocity” and “ap mean velocity” are considered as three different variables), and leaving an empty string as a condition (it is better to provide a label). An example input file is displayed on Figure 3. It corresponds to the data of Table 1.

                study;variable;n_1;n_2;mean_1;std_1;mean_2;std_2;condition_1;condition_2
                Howcroft, 2015;AP mean velocity;24;76;7.34;2.47;7.65;1.84;EO;Retro
                Howcroft, 2017;AP mean velocity;42;47;7.75;2.15;7.53;1.93;EO;Pro
                Kwok, 2015;AP mean velocity;18;55;1.27;0.45;1.02;0.26;EO;Retro
                Maki, 1994;AP mean velocity;59;37;13;13.7;8.4;3.51;EO;Pro
                Maranesi, 2016;AP mean velocity;63;67;11;6.89;10;6.2;EO;Retro
                Pajala, 2008;AP mean velocity;189;230;8.34;2.81;7.8;2.6;EO;Pro
                Howcroft, 2015;AP mean velocity;24;76;17.34;16.03;15.86;6.74;EC;Retro
                Howcroft, 2017;AP mean velocity;42;47;17.76;13.4;15.11;5.59;EC;Pro
                König, 2014;AP mean velocity;42;42;0.15;1.48;-0.12;0.12;EC;Retro
                Maki, 1994;AP mean velocity;59;37;17.9;15.6;11.9;4.79;EC;Pro
                Maranesi, 2016;AP mean velocity;63;67;16.23;11.27;14.5;9.1;EC;Retro
                Pajala, 2008;AP mean velocity;189;230;12.46;5.09;12.5;6.8;EC;Pro
Figure 3: Raw version of Table 1. This is the input format for the information extracted from the clinical studies of a meta-analysis.
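As an illustration of how such a file can be consumed (this is a sketch, not the released tool's internal code; pandas is among the listed requirements and the file name is hypothetical), the following snippet loads the CSV and groups the rows by variable and experimental conditions:

    import pandas as pd

    # Semi-colon separated file, formatted as described above.
    data = pd.read_csv("input_file.csv", sep=";")

    # Columns condition_1, condition_2, ... define the experimental setting.
    condition_columns = [c for c in data.columns if c.startswith("condition_")]

    # One subgroup per (variable, conditions) combination.
    for key, subgroup in data.groupby(["variable"] + condition_columns):
        print(key, len(subgroup), "studies")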

4.2 Program output

The output of our meta-analysis tool is organized in folders, one for each combination of variable and experimental conditions (i.e. one per pair s = (v, c)). As an example, for a given variable, e.g. “AP mean velocity”, and a set of experimental conditions, e.g. eyes closed (“EC”) and retrospective fall recording (“Retro”), the associated forest and funnel plots can be found in the following folder:

output/AP mean velocity-EC|Retro/

The name of the variable is separated from the experimental setting by a hyphen (“-”) and the experimental conditions are separated by pipes (“|”). This organization is schematically shown on Figure 4. Within each folder, five files can be found:

  • data.csv contains all extracted information (study name, number of participants, empirical means, etc.) and all computed quantities (effect size, confidence interval, etc.) in a tabular form.

  • forest_plot.pdf contains the forest plot (see Figure 1 for examples). In addition, the original LaTeX code that produced the figure is provided in forest_plot.tex so that users can tweak the plot to their needs.

  • Similarly, the funnel plot is given in funnel_plot.pdf and funnel_plot.tex (see Figure 2 for examples).

    output/
        variable_1-condition_a/
            data.csv
            forest_plot.tex
            forest_plot.pdf
            funnel_plot.tex
            funnel_plot.pdf
        variable_1-condition_a|condition_b/
            data.csv
            forest_plot.tex
            forest_plot.pdf
            funnel_plot.tex
            funnel_plot.pdf
        variable_2-...
            ...
Figure 4: File structure of a meta-analysis output.

4.3 Interface

Command-line interface.

To launch a meta-analysis, the input CSV file (input_file.csv for instance) should be put in the same folder as the Python files (“.py” files), and the following command should be executed in the terminal:

    python3 main.py --input_fname input_file.csv --alpha 0.05 --which_delta Hedges

The value of \alpha can be changed using the --alpha argument: for \alpha = 0.01 (i.e. 1%), one only needs to replace “--alpha 0.05” by “--alpha 0.01”. To use Cohen's d instead of Hedges' g, again, one only needs to substitute “--which_delta Hedges” by “--which_delta Cohen”.

Online demonstration.
Requirements.

The proposed meta-analysis tool is implemented using well-known open-source languages: Python (python.org, version 3.6 or later) to compute the effect sizes and confidence intervals, and LaTeX (tug.org) to render the forest and funnel plots. The following Python libraries are needed: pandas, scikit-learn, jinja2, latex, click. They can easily be installed using pip (docs.python.org/3.6/installing).
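For instance, the listed dependencies can be installed with a single pip command:

    python3 -m pip install pandas scikit-learn jinja2 latex click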

5 Conclusion

Meta-analysis can be a useful tool for combining the results of studies dealing with the same issue and having similar methodologies, thus exceeding the individual scope of the selected studies [34]. However, the heterogeneity between the included studies is a limitation to the successful conclusion of systematic reviews using this analysis. It is then necessary to select the studies with the closest experimental conditions. The aim of this work was to propose a calculation tool in the Python programming language, due to its wide dissemination. It is worth noting that an R package already exists [11]; although R is widely disseminated in the scientific community, a library of functions written in Python can be useful, as the popularity of this open-source language keeps growing. The article details the calculations to facilitate the understanding of the process behind a meta-analysis, while simplifying the conduct of a sensitivity study. In the absence of any other Python library available at the moment, this code provides a rudimentary meta-analysis based simply on the data collected through a literature review.

6 Acknowledgement

This study was conducted in the context of a PhD program in partnership with the ORPEA group. This collaboration between the Centre Borelli of Paris Descartes University and the ORPEA group is framed within the French Industrial Agreements for Training through Research (CIFRE) managed by the National Association for Research and Technology (ANRT). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • [1] A. E. Ades, G. Lu, and J. P. T. Higgins (2005-11) The interpretation of random-effects meta-analysis in decision models. 25 (6), pp. 646–654. External Links: ISSN 0272-989X, 1552-681X, Link, Document Cited by: §3.3.2.
  • [2] J. Audiffren, I. Bargiotas, N. Vayatis, P. Vidal, and D. Ricard (2016) A non linear scoring approach for evaluating balance: classification of elderly as fallers and non-fallers. 11 (12), pp. e0167456. External Links: Link, Document Cited by: §2.2.
  • [3] I. Bargiotas, J. Audiffren, N. Vayatis, P. Vidal, S. Buffat, A. P. Yelnik, and D. Ricard (2018-02-23) On the importance of local dynamics in statokinesigram: a multivariate approach for postural control evaluation in elderly. 13 (2), pp. e0192868. External Links: ISSN 1932-6203, Link, Document Cited by: §2.2.
  • [4] F. Bloch, M. Thibaud, C. Tournoux-Facon, C. Brèque, A. Rigaud, B. Dugué, and G. Kemoun (2013-04) Estimation of the risk factors for falls in the elderly: can meta-analysis provide a valid answer?: estimation of falls risk factors in the elderly. 13 (2), pp. 250–263. External Links: ISSN 14441586, Link, Document Cited by: §2.2.
  • [5] J. Chandler, J. P. Higgins, J. J. Deeks, C. Davenport, and M. J. Clarke (2017) Cochrane handbook for systematic reviews of interventions. Cited by: §3.3.3.
  • [6] T. Cochrane (2008) Review manager (revman) 5.3. Copenhagen: The Nordic Cochrane Centre. Cited by: §1.
  • [7] J. Cohen (1988) Statistical power analysis for the behavioral sciences. 2nd ed edition, L. Erlbaum Associates. External Links: ISBN 978-0-8058-0283-2 Cited by: §3.2.
  • [8] T. C. Collaboration (2014-06-24) RevMan 5.3 user guide. External Links: Link Cited by: §1.
  • [9] L. Comber, J. J. Sosnoff, R. Galvin, and S. Coote (2018-03) Postural control deficits in people with multiple sclerosis: a systematic review and meta-analysis. 61, pp. 445–452. External Links: ISSN 09666362, Link, Document Cited by: §1.
  • [10] D.J. Cook, D.L. Sackett, and W.O. Spitzer (1995) Methodologic guidelines for systematic reviews of randomized control trials in health care from the potsdam consultation on meta-analysis. 48 (1), pp. 167–171. Cited by: §1.
  • [11] A. C. Del Re (2015) A practical tutorial on conducting meta-analysis in r. 11 (1), pp. 37–50. Cited by: §1, §3.3.1, §3.3.3, §5.
  • [12] R. DerSimonian and N. Laird (1986-09) Meta-analysis in clinical trials. 7 (3), pp. 177–188. External Links: ISSN 01972456, Link, Document Cited by: §3.3.2.
  • [13] H. Dubben and H. Beck-Bornholdt (2005-08-20) Systematic review of publication bias in studies on publication bias. 331 (7514), pp. 433–434. External Links: ISSN 0959-8138, 1468-5833, Link, Document Cited by: §3.4.2.
  • [14] S. V. Faraone Interpreting estimates of treatment effects. pp. 6. Cited by: §3.2.
  • [15] G. V. Glass (1976) Primary, secondary, and meta-analysis of research. 5 (10), pp. 3–8. External Links: ISSN 0013189X, 1935102X, Link, Document Cited by: §1.
  • [16] R. J. Grissom and J. J. Kim (2005) Effect sizes for research: a broad practical approach.. Effect sizes for research: A broad practical approach., Lawrence Erlbaum Associates Publishers. External Links: ISBN 0-8058-5014-7 (Hardcover) Cited by: §3.2.
  • [17] G. Guyatt, J. Cairns, D. Churchill, D. Cook, B. Haynes, J. Hirsh, J. Irvine, M. Levine, M. Levine, J. Nishikawa, D. Sackett, P. Brill-Edwards, H. Gerstein, J. Gibson, R. Jaeschke, A. Kerigan, A. Neville, A. Panju, A. Detsky, M. Enkin, P. Frid, M. Gerrity, A. Laupacis, V. Lawrence, J. Menard, V. Moyer, C. Mulrow, P. Links, A. Oxman, J. Sinclair, and P. Tugwell (1992-11) Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine. JAMA 268 (17), pp. 2420–2425. External Links: ISSN 0098-7484, Document, Link, https://jamanetwork.com/journals/jama/articlepdf/400956/jama_268_17_032.pdf Cited by: §1.
  • [18] L. V. Hedges and I. Olkin (2014) Statistical methods for meta-analysis. Academic press. Cited by: §3.2.
  • [19] L. V. Hedges (1981) Distribution theory for glass’s estimator of effect size and related estimators. 6 (2), pp. 107. External Links: ISSN 03629791, Link, Document Cited by: §3.2.
  • [20] J. P. T. Higgins (2008-10-01) Commentary: heterogeneity in meta-analysis should be expected and appropriately quantified. 37 (5), pp. 1158–1160. External Links: ISSN 0300-5771, 1464-3685, Link, Document Cited by: §3.3.2.
  • [21] J. P. T. Higgins and S. G. Thompson (2002-06-15) Quantifying heterogeneity in a meta-analysis. 21 (11), pp. 1539–1558. External Links: ISSN 0277-6715, 1097-0258, Link, Document Cited by: §3.3.3.
  • [22] J. P. Higgins, S. G. Thompson, J. J. Deeks, and D. G. Altman (2003) Measuring inconsistency in meta-analyses. 327 (7414), pp. 557. Cited by: §3.3.3.
  • [23] H. Israel and R. R. Richter (2011-07) A guide to understanding meta-analysis. 41 (7), pp. 496–504. External Links: ISSN 0190-6011, 1938-1344, Link, Document Cited by: §1, §2.1.
  • [24] Z. Jin, X. Zhou, and J. He (2014) Statistical methods for dealing with publication bias in meta‐analysis. pp. 18. Cited by: §3.4.2.
  • [25] L. Lin (2019-09) Graphical augmentations to sample-size-based funnel plot in meta-analysis. 10 (3), pp. 376–388. External Links: ISSN 17592879, Link, Document Cited by: §3.4.2.
  • [26] J. Littell, J. Corcoran, and V. Pillai (2009-01-01) Systematic reviews and meta-analysis. Vol. 5. External Links: Document Cited by: §3.2.
  • [27] A. Mlinarić, M. Horvat, and V. Šupak Smolčić (2017-10-15) Dealing with the positive publication bias: why you should really publish your negative results. 27 (3), pp. 030201. External Links: ISSN 1330-0962, 1846-7482, Link, Document Cited by: §3.4.2.
  • [28] S. W. Muir, K. Berg, B. Chesworth, N. Klar, and M. Speechley (2010-04) Quantifying the magnitude of risk for balance impairment on falls in community-dwelling older adults: a systematic review and meta-analysis. 63 (4), pp. 389–406. External Links: ISSN 0895-4356, Link, Document Cited by: §2.2.
  • [29] M. Piirtola and P. Era (2006-01-27) Force platform measurements as predictors of falls among older people – a review. 52 (1), pp. 1–16. External Links: ISSN 0304-324X, 1423-0003, Link, Document Cited by: §1.
  • [30] F. Quijoux, A. Vienne-Jumeau, F. Bertin-Hugault, M. Lefèvre, P. Zawieja, P. Vidal, and D. Ricard (2019-12) Center of pressure characteristics from quiet standing measures to predict the risk of falling in older adults: a protocol for a systematic review and meta-analysis. 8 (1), pp. 232. External Links: ISSN 2046-4053, Link, Document Cited by: §1, §2.2.
  • [31] F. Quijoux, A. Vienne-Jumeau, F. Bertin-Hugault, P. Zawieja, M. Lefevre, P. Vidal, and D. Ricard (2020-06) Center of pressure displacement characteristics differentiate fall risk in older people: a systematic review with meta-analysis. pp. 101117. External Links: ISSN 15681637, Link, Document Cited by: §1, §2.2.
  • [32] R. D. Riley, J. P. T. Higgins, and J. J. Deeks (2011) Interpretation of random effects meta-analyses. BMJ 342. External Links: Document, ISSN 0959-8138, Link, https://www.bmj.com/content Cited by: §3.3.1.
  • [33] A. Ruhe, R. Fejer, and B. Walker (2011-03) Center of pressure excursion as a measure of balance performance in patients with non-specific low back pain compared to healthy controls: a systematic review of the literature. 20 (3), pp. 358–368. External Links: ISSN 0940-6719, 1432-0932, Link, Document Cited by: §1.
  • [34] M. W. Russo (2007) How to review a meta-analysis. 3 (8), pp. 637. Cited by: §1, §5.
  • [35] D. L. Schriger, D. G. Altman, J. A. Vetter, T. Heafner, and D. Moher (2010-01) Forest plots in reports of systematic reviews: a cross-sectional study reviewing current practice. International Journal of Epidemiology 39 (2), pp. 421–429. External Links: ISSN 0300-5771, Document, Link, http://oup.prod.sis.lan/ije/article-pdf/39/2/421/14148621/dyp370.pdf Cited by: §2.1.
  • [36] F. Song, Hooper, and Y. Loke (2013-07) Publication bias: what is it? how do we measure it? how do we avoid it?. pp. 71. External Links: ISSN 1179-1519, Link, Document Cited by: §3.4.2.
  • [37] J. A. C. Sterne and M. Egger (2001) Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. pp. 10. Cited by: §3.4.2.
  • [38] J. A. C. Sterne and R. M. Harbord (2004-06) Funnel plots in meta-analysis. 4 (2), pp. 127–141. External Links: ISSN 1536-867X, 1536-8734, Link, Document Cited by: §3.4.2.
  • [39] A. Thornton (2000-02) Publication bias in meta-analysis its causes and consequences. 53 (2), pp. 207–216. External Links: ISSN 08954356, Link, Document Cited by: §3.4.2.
  • [40] A. A. Veroniki, D. Jackson, W. Viechtbauer, R. Bender, J. Bowden, G. Knapp, O. Kuss, J. P. Higgins, D. Langan, and G. Salanti (2016-03) Methods to estimate the between-study variance and its uncertainty in meta-analysis. 7 (1), pp. 55–79. External Links: ISSN 17592879, Link, Document Cited by: §3.3.2.
  • [41] P. T. von Hippel (2015-12) The heterogeneity statistic i2 can be biased in small meta-analyses. 15 (1), pp. 35. External Links: ISSN 1471-2288, Link, Document Cited by: §3.3.3.
  • [42] WHO (2008) WHO global report on falls prevention in older age. pp. 47. Note: OCLC: ocn226291980 External Links: ISSN 978-92-4-156353-6 Cited by: §2.2.