Impact Factors and the Central Limit Theorem: Why Citation Averages Are Scale Dependent

03/06/2018
by   Manolis Antonoyiannakis, et al.

Citation averages, and Impact Factors (IFs) in particular, are sensitive to sample size. We apply the Central Limit Theorem (CLT) to IFs to understand their scale-dependent behavior. For a journal of n randomly selected papers from a population of all papers, we expect from the CLT that its IF fluctuates around the population average μ, and spans a range of values proportional to σ/√(n), where σ^2 is the variance of the population's citation distribution. The 1/√(n) dependence has profound implications for IF rankings: The larger a journal, the narrower the range around μ where its IF lies. IF rankings therefore allocate an unfair advantage to smaller journals in the high IF ranks, and to larger journals in the low IF ranks. We expect a scale-dependent stratification of journals in IF rankings, whereby small journals occupy top, middle, and bottom ranks; mid-sized journals occupy middle ranks; and very large journals have IFs that asymptotically approach μ. We confirm these predictions by analyzing (i) 166,498 IF & journal-size data pairs in the 1997--2016 Journal Citation Reports of Clarivate Analytics, (ii) the top-cited portion of 276,000 physics papers published in 2014--2015, and (iii) the citation distributions of an arbitrarily sampled list of physics journals. We conclude that the CLT is a good predictor of the IF range of actual journals, while sustained deviations from its predictions are a mark of true, non-random, citation impact. IF rankings are thus misleading unless one compares like-sized journals or adjusts for these effects. We propose the Φ index, a rescaled IF adjusted for size, which can be generalized to account also for different citation practices across research fields. Our methodology applies also to citation averages used to compare research fields, university departments or countries in various rankings.


1 Introduction

What do crime rates, cancer rates, high-school mean test scores, and Impact Factors have in common? They are all manifestations of the Central Limit Theorem, which explains why small populations (cities, schools, or research journals) score more often than one would expect at the top and bottom places of rankings, while large populations end up in less remarkable positions. But if size affects one's position in a ranking, then rankings of population averages must be misleading. The Impact Factor is an average measure of the citation impact of journals. Therefore, it may seem perfectly justifiable to use it when ranking journals of different sizes, in the same vein that we use averages to rank, say, the class size of schools, the GPAs of students, the fuel efficiency of engines, the life expectancy in countries, or the GDP per capita of various countries. However, underlying such comparisons is the tacit admission (De Veaux, Velleman, & Bock, 2014) that the distributions being compared are (approximately) symmetric and do not contain outliers (i.e., extreme values), or, if they do, that the sample sizes are large enough to absorb extreme values. If the distributions are highly skewed, with outliers, and especially if the populations are small, then rankings by averages can be misleading, because averages are no longer representative of the distributions. Impact Factors qualify for these caveats. So far, several studies have drawn attention to the skewness of citation distributions, or to various other features of the Impact Factor, such as the 'free' citations to front-matter items of journals, the need to normalize for different citation practices among fields, the citation time windows, the lack of verifiability of the citation counts entering Impact Factor calculations, the mixing of document types with disparate citabilities (articles versus reviews), etc. (Seglen, 1992; Seglen, 1997; Redner, 1998; Rossner, Van Epps, & Hill, 2007; Adler, Ewing, & Taylor, 2008; Radicchi, Fortunato, & Castellano, 2008; Wall, 2009; Fersht, 2009; Glänzel & Moed, 2013; Antonoyiannakis, 2015a; Antonoyiannakis, 2015b; San Francisco Declaration on Research Assessment, 2012; Bornmann & Leydesdorff, 2017).
However, little attention has been paid (Amin & Mabe, 2004; Antonoyiannakis & Mitra, 2009) to the effect of journal scale on Impact Factors, which, as we will show, is substantial.

The Impact Factor is defined as

$f = \frac{C_{2Y}}{N_{2Y}}$,   (1)

where $C_{2Y}$ are the citations received in year $y$ to journal content published in years $y-1$ and $y-2$, and $N_{2Y}$ is the biennial publication count, i.e., the number of citable items (articles and reviews) published in years $y-1$ and $y-2$. As can be verified from the Journal Citation Reports (JCR) of Clarivate Analytics, the annual publication count of journals ranges from a few papers to a few tens of thousands of papers. At the same time, individual papers can collect from zero to a few thousand citations in the JCR year. With a span of 4 orders of magnitude in the numerator and 5 orders of magnitude in the denominator, the IF is a quantity with considerable room for wiggle.

In this paper, first, we apply the Central Limit Theorem (the celebrated theorem of statistics) to understand and predict the behavior of Impact Factors. We find that Impact Factor rankings produce a scale-dependent stratification of journals, as follows. (a) Small journals occupy all ranks (top, middle and bottom); (b) mid-sized journals occupy the middle ranks; and (c) very large journals (“megajournals”) converge to a single Impact Factor value—the population mean—almost irrespective of their size. Impact Factors are thus sensitive to journal size, and Impact Factor rankings do not provide a ‘level playing field,’ because size affects a journal’s chances to make it in the top, middle, or bottom ranks. Second, we apply the Central Limit Theorem to arrive at an Impact Factor uncertainty relation: an expression that limits the expected range of Impact Factor values for a journal as a function of journal size and the citation variance of the population of all published papers. Third, we confirm our theoretical results, by analyzing 166,498 IF & journal-size data pairs, the citation-distribution data from 276,000 physics papers, and an arbitrarily sampled list of physics journals. We observe the predicted scale-dependent stratification of journals. We find that the Impact Factor uncertainty relation is a very good predictor of the range of Impact Factors observed in actual journals. And fourth, we argue that sustained deviation from the expected IF range is a mark of non-random citation impact. We thus propose to normalize IFs with regard to the theoretically expected maximum at a given size (using appropriate offsets), as a scale-independent index of citation impact.

Why does all this matter? Because statistically problematic comparisons can lead to misguided decisions, and Impact Factor rankings remain in wide use (and abuse) today (Gaind, 2018; Stephan, Veugelers, & Wang, 2017).

Our analysis shows that Impact Factor comparisons of different-sized journals, even for similar fields and document types, can be misleading. We argue that it is imperative to seek metrics that are immune to, or correct for, this effect.

2 Theoretical Background

2.1 The Central Limit Theorem for citation averages (i.e., Impact Factors)

The Central Limit Theorem is the fundamental theorem of statistics. In a nutshell, it says that for independent and identically distributed data whose variance is finite, the sampling distribution of any mean becomes more nearly normal (i.e., Gaussian) as the sample size grows (De Veaux, Velleman, & Bock, 2014). The sample mean $\bar{x}$ will then approach the population mean $\mu$, in distribution. More formally,

$\bar{x} \,\overset{d}{=}\, \mathcal{N}\!\left(\mu, \frac{\sigma^2}{n}\right)$,   (2)

whence

$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$,   (3)

where $\mathcal{N}(\mu, \sigma^2/n)$ is the normal distribution and the symbol "d" in the equality means "in distribution." $\sigma_{\bar{x}}$ is the standard deviation of the sampling distribution, $\sigma$ is the standard deviation of the entire population we wish to study (and which is often not known), and $n$ the sample size. So, sample means vary less than individual measurements. (The square of the standard deviation is the variance.)

The sampling distribution is a notional (imaginary) distribution from a very large number of samples, each one of size $n$, which approaches a normal distribution in the limit of large $n$. In practice, the Central Limit Theorem holds for $n$ as low as 30, unless there are exceptional circumstances (e.g., when the population distribution is highly skewed), in which case higher $n$ values are needed. So, $\sigma_{\bar{x}}$ measures how widely the sample means of size $n$ vary around the population mean $\mu$ (which is approached in the limit of large $n$).

Before we apply the Central Limit Theorem to IFs, let us comment on the assumptions involved. First, our population consists of all papers (articles and reviews) published in all research fields over a number of previous years (normally two for the IF), and the quantity we are going to average over is the number of citations each paper received in the current year. The population mean $\mu$ is the citation average of all papers, i.e., the IF of the 'megajournal' consisting of all papers in all fields. The sampling involves randomly drawing $n$ papers from the population (i.e., forming a 'journal') and calculating their citation average, which is essentially the IF of the random sample (journal). The Central Limit Theorem then tells us that the IF of all random samples (journals) approaches the population mean $\mu$ in distribution as $n$ becomes large, and Eq. (3) describes how the variance of all IFs of $n$-sized journals depends on $n$ and on the population properties ($\sigma$). Of course, in real journals the papers are not randomly drawn but selected by editors, board members, and referees who are consciously trying to 'bias' the decision process in favor of the better papers for the benefit of their readers. We will revisit this assumption of random samples in §6 and §7 once we present our analysis and data.
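The sampling picture above is easy to check numerically. The sketch below is illustrative only: the lognormal stand-in population is our assumption, not the paper's data. It draws random 'journals' of size $n$ from a skewed citation population and compares the spread of their IFs with the CLT prediction $\sigma/\sqrt{n}$:

```python
import random
import statistics

random.seed(0)

# Skewed stand-in "citation population": most papers get few citations,
# a few get many (lognormal is an illustrative choice, not the paper's data).
population = [int(random.lognormvariate(0.5, 1.2)) for _ in range(100_000)]
mu = statistics.mean(population)
sigma = statistics.pstdev(population)

def journal_ifs(n, trials=1000):
    """IFs of `trials` randomly drawn 'journals' of n papers each."""
    return [statistics.mean(random.sample(population, n)) for _ in range(trials)]

for n in (30, 300, 3000):
    spread = statistics.pstdev(journal_ifs(n))
    print(f"n = {n:4d}   IF spread = {spread:.3f}   "
          f"CLT sigma/sqrt(n) = {sigma / n ** 0.5:.3f}")
```

The observed spread of the simulated IFs tracks $\sigma/\sqrt{n}$, shrinking tenfold as $n$ grows a hundredfold.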

Let us now examine the two quantities ($\sigma$ and $n$) on which the sample standard deviation ($\sigma_{\bar{x}}$) depends in Eq. (3).

2.1.1 Variance effects (dependence on $\sigma$)

Equation (3) shows that the sample variance is proportional to the population variance. High variance (i.e., variability, disparity of values) in the population causes high variance in the sample. This makes sense. For example, imagine that the world’s richest and tallest persons simultaneously move into a neighborhood of a population of 1000 people. Because income disparity (variance) among the population is far greater than height disparity, we would expect the income means (averages) of various random samples drawn from the population to vary more than height means. Note that citation ‘wealth’ is very unevenly distributed, like monetary wealth.

For populations of scientific research papers, the citation distributions have high variances, because the individual papers can be cited from 0 to a few thousand times. Therefore citation means (Impact Factors) have a much higher variance at a given sample size, compared to, say, the height means for adults.

It is the high variance ($\sigma^2$) of citations in populations of scientific papers that makes the Central Limit Theorem relevant in Impact Factor rankings. Had $\sigma^2$ been 100 times smaller for citation distributions, none of the effects described in this paper would be seen—they would be there, of course, but they would be too small to be of interest and would not interfere with rankings of average metrics. Thus, the multiplier $\sigma$ in the numerator of Eq. (3) acts as a 'switch' that turns on the size effects of the denominator.

2.1.2 Size effects (dependence on $n$)

The inverse square root dependence of Eq. (3) on sample size $n$ means that for small journals (small $n$), IFs can fluctuate widely around the population mean $\mu$. Thus, for small journals we expect to see a wide range of IFs, from very low to very high values. Small journals will thus dominate the high ranks of Impact Factor values, but also the low ranks! Actually, small journals will cover the entire range of Impact Factor values. With $n$ increasing to medium-sized journals, the fluctuation around the population mean decreases. Therefore, mid-sized journals will not be able to achieve as high Impact Factors as small journals but they will be spared from really low values too. So, mid-sized journals will do better than small journals in the low ranks but worse in the top ranks. Finally, for large journal sizes $n$, the fluctuation of sample means around the population mean is small, so all Impact Factors of large journals will asymptotically approach $\mu$. Therefore, very large journals have no chance at all to populate any remarkable (i.e., prestigious) ranks. However, they will be ranked higher than many small (and a few mid-sized) journals.

We can codify the above discussion in a simple conceptual diagram. For simplicity, let us use three size classifications: journals with a small biennial publication count $N_{2Y}$ are 'small;' journals with an intermediate $N_{2Y}$ are 'mid-sized;' and journals with a large $N_{2Y}$ are 'large.' We would then expect the scale-dependent stratification of journals in IF rankings that is shown in Table 1.

By the way, such stratification effects have been reported for other average metrics—e.g., crime statistics, school performances, cancer rates, etc.—and explained in terms of the Central Limit Theorem (Wainer & Zwerling, 2006; Wainer, 2007; Gelman & Nolan, 2002). In one spectacular example that made headlines, the Bill and Melinda Gates Foundation “began funding an effort to encourage the breakup of large schools into smaller schools” since it “had been noticed that smaller schools were more common among the best-performing schools than one would expect” (De Veaux, Velleman, & Bock, 2014). However, small schools were also more likely to be among the worst-performing schools, in agreement with the Central Limit Theorem. Ultimately, the Foundation shifted its emphasis away from the small-schools effort.

Impact Factor Journal Size
High Small
Moderate Small — Medium
Average Small — Medium — Large
Below average Small — Medium
Low Small
Table 1: Because of the Central Limit Theorem, we expect a scale-dependent stratification of journals in IF rankings.
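The stratification of Table 1 emerges readily in simulation. Below is a minimal sketch under assumptions of our own: a hypothetical heavy-tailed (Pareto) citation distribution, and illustrative sizes 50/500/5000 standing in for small/mid/large. Equal numbers of random journals of each size are ranked by IF, and the extreme ranks are tallied:

```python
import random
import statistics

random.seed(1)

# Hypothetical heavy-tailed per-paper citation counts (Pareto stand-in,
# shape 2.5 > 2, so the variance is finite and the CLT applies).
def paper_citations():
    return int(random.paretovariate(2.5)) - 1

def impact_factor(n):
    """IF of a random 'journal' of n papers drawn from the population."""
    return statistics.mean(paper_citations() for _ in range(n))

sizes = {"small": 50, "mid": 500, "large": 5000}   # biennial paper counts
journals = [(label, impact_factor(n))
            for label, n in sizes.items() for _ in range(200)]
journals.sort(key=lambda t: t[1], reverse=True)

top    = [label for label, _ in journals[:60]]    # top 10% of IF ranks
bottom = [label for label, _ in journals[-60:]]   # bottom 10% of IF ranks
print("top ranks:   ", {s: top.count(s) for s in sizes})
print("bottom ranks:", {s: bottom.count(s) for s in sizes})
```

Small journals dominate both extremes of the ranking, while large journals are confined to the middle, exactly as Table 1 predicts.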

2.1.3 Why is the Central Limit Theorem relevant for Impact Factors?

Journal sizes range typically from 50–50,000 (biennial count, $N_{2Y}$), so the quantity $1/\sqrt{N_{2Y}}$ ranges from about 0.0045 to 0.14. If the population standard deviation $\sigma$ were no greater than 1, then random fluctuations in Impact Factors (which are a few times the $\sigma_{\bar{x}}$) would be smaller than 0.5, say, and thus irrelevant for journal rankings (except for very low Impact Factors). But if, as we will show later, $\sigma$ is at least 10, then $\sigma_{\bar{x}} = \sigma/\sqrt{N_{2Y}}$ lies in the range 0.05–1.5, and journal rankings are affected significantly, because random fluctuations lie in the range 0.2–5 and are no longer negligible compared to Impact Factors themselves. This is why the Central Limit Theorem is relevant here. (This is a first justification of the applicability of the Theorem for Impact Factors, see §2.1. More reasons will be provided later.)
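These ranges are quick to verify with $\sigma = 10$, the lower estimate quoted above:

```python
# CLT scale factor sigma/sqrt(n) across typical biennial journal sizes,
# for the lower estimate sigma = 10 quoted in the text.
sigma = 10
for n in (50, 500, 5000, 50_000):
    print(f"n = {n:6d}   sigma/sqrt(n) = {sigma / n ** 0.5:.3f}")
```

At $n = 50$ the factor is about 1.41, and at $n = 50{,}000$ about 0.045, matching the quoted range 0.05–1.5.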

2.2 An uncertainty relation for Impact Factors

Consider the population of citations in a certain year to all papers published in the previous two years. Imagine that we draw random samples ("journals") of size $n$ from this population, and calculate their citation average, $\bar{x}$, which for practical purposes is equal to the Impact Factor $f$ of the papers (we ignore the 'free' citations in the numerator, which generally do not play a major role in IF values). Because the sampling distribution of the sample means is normal (i.e., Gaussian), we can expect roughly 68% of $\bar{x}$ values to lie within $\sigma_{\bar{x}}$ of $\mu$, 95% of values to lie within $2\sigma_{\bar{x}}$ of $\mu$, 99.7% of values to lie within $3\sigma_{\bar{x}}$ of $\mu$, and 99.99% of values to lie within $4\sigma_{\bar{x}}$ of $\mu$. So, we can write that for an integer $k$ (where, in practice, $k=3$ or 4), the $\bar{x}$ values are bounded as

$\mu - k\sigma_{\bar{x}} \;\le\; \bar{x} \;\le\; \mu + k\sigma_{\bar{x}}$.   (4)

We invoke the Central Limit Theorem, Eq. (3), to rewrite the above inequality as

$\mu - \frac{k\sigma}{\sqrt{n}} \;\le\; f \;\le\; \mu + \frac{k\sigma}{\sqrt{n}}$.   (5)

The inequality (5) says that the IF values that are statistically available to a randomly formed journal range from the theoretical minimum, $f_{\min}$, to the theoretical maximum, $f_{\max}$, which are defined as

$f_{\min} = \mu - \frac{k\sigma}{\sqrt{n}}, \qquad f_{\max} = \mu + \frac{k\sigma}{\sqrt{n}}$.   (6)

We can recast expression (5) as

$\Delta f \cdot \sqrt{n} \;\le\; k\sigma, \qquad \Delta f \equiv |f - \mu|$,   (7)

whence the term uncertainty relation becomes evident. Indeed, the 'uncertainty' $\Delta f$ (the range of values of $f$ as measured from $\mu$), multiplied by the square root of the journal (biennial) size, cannot exceed $k\sigma$, statistically speaking. Therefore, for small journals $\Delta f$ is large, while for large journals $\Delta f$ is small. Again, the expressions (5) or (7) hold in a statistical sense: roughly in 99.99% of cases for $k=4$, or at a distance of $4\sigma_{\bar{x}}$ from $\mu$.
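A small helper that evaluates the bounds of Eq. (6) makes the scale dependence concrete. The values $\mu = 1.7$, $\sigma = 10$, and $k = 4$ below are illustrative choices of ours, not fitted parameters:

```python
def if_bounds(n, mu=1.7, sigma=10.0, k=4):
    """Statistically available IF range [f_min, f_max] for a random journal
    of biennial size n, per Eq. (6); an IF cannot be negative."""
    half = k * sigma / n ** 0.5
    return max(0.0, mu - half), mu + half

for n in (50, 500, 5000, 50_000):
    lo, hi = if_bounds(n)
    print(f"n = {n:6d}   f_min = {lo:.2f}   f_max = {hi:.2f}")
```

Note how the lower bound lifts off zero only for large $n$ (anticipating Implication #3 below), while the upper bound collapses toward $\mu$.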

The Impact Factor uncertainty relation has important practical implications, as we discuss below.

Implication #1. Expression (5) says that Impact Factor uncertainties have a maximum value that is inversely proportional to the square root of journal size. This scale dependence is rather punitive for large journals: A 100-fold increase in journal size yields a 10-fold decrease in the range of Impact Factor values, as measured from $\mu$. Therefore, large journals cannot have very high (or very low) Impact Factors. Conversely, small journals have a high range of Impact Factor values. Therefore, small journals can reach very high (and very low) Impact Factors. For $\mu$ small compared to $k\sigma/\sqrt{n}$, the term $k\sigma/\sqrt{n}$ is dominant and $\mu$ can be dropped from expression (5), so the highest Impact Factor is inversely proportional to $\sqrt{n}$.

Implication #2. The theoretical maximum $f_{\max}$ increases with the population standard deviation $\sigma$, which is a measure of the disparity (variability) of citations among all papers in the population. The theoretical minimum $f_{\min}$ decreases with $\sigma$ by the same amount. The maximum uncertainty $\Delta f_{\max} = k\sigma/\sqrt{n}$, and hence the range of IF values, both increase then. That is, as $\sigma$ increases in a population, the range of IF values broadens.

But what is the population? We have assumed so far that it consists of all papers in all research fields, and this statement is true in a general sense. However, for research fields that cite each other little or not at all, one can claim that they are distinct populations, each with its own $\mu$ and $\sigma$. In this case, expression (5) says that journals from the population with larger $\mu$ and $\sigma$ can reach higher Impact Factors. This is why, for example, mathematics journals have lower IFs compared to chemistry journals (and why normalizing citation averages to account for the different citedness of research fields makes sense and is standard practice in bibliometrics).

As the readers may have inferred, the parameters $\mu$ and $\sigma$ incorporate everything that makes one population (research field) distinct from another, citations-wise. Differences among fields such as (i) size of reference lists, (ii) chronological distribution of citations, (iii) size of research field, (iv) document types being cited, etc., are all factored in $\mu$ and $\sigma$.

For the remainder of this manuscript, we do not account for different citation practices across research fields, even though such normalization is standard practice in bibliometrics (Moed, 2005; Bornmann & Leydesdorff, 2018; Waltman et al., 2012). This is of course an oversimplification, but to address it here would take us beyond the scope of this paper. We will address this issue in future work.

Implication #3. For large enough $n$ there is an Impact Factor minimum, equal to $f_{\min} = \mu - k\sigma/\sqrt{n}$ (which has to be nonnegative, of course). That is, $f$ is bounded from below.

Let us take a more detailed look at the Impact Factor uncertainty relation, Eq. (5), for two limits of interest.

Case I. Small journals. If there are $k$ values such that $\mu \ll k\sigma/\sqrt{n}$, then $\mu$ can be left out and Eq. (5) simplifies to

$0 \;\le\; f \;\le\; \frac{k\sigma}{\sqrt{n}}$.   (8)

So, for small journals the Impact Factor can range from 0 to a maximum value that is inversely proportional to $\sqrt{n}$, which can become quite large for small enough size. In other words, small journals are highly volatile, and they will populate all positions in Impact Factor ranks, from the lowest to the highest.

Case II. Very large journals, i.e., $k\sigma/\sqrt{n} \ll \mu$.

Here, expression (5) reduces to

$f \approx \mu$,   (9)

and the Impact Factor asymptotically approaches the population mean, $\mu$. This is both good and bad news for very large journals in Impact Factor rankings: They will neither populate the low ranks nor the high ranks. These journals sample the population, so to speak, so they are stable and insensitive to size effects. Their Impact Factors are bounded from above and below.

To recap, the Impact Factor uncertainty relation is merely the result of applying the Central Limit Theorem to citation averages, and using standard properties of the normal distribution.

3 Materials and Methods

3.1 Approximating $N_{2Y}$ for easier data retrieval

We collected data on Impact Factors and citable items from Clarivate Analytics' Journal Citation Reports (JCR), in the 20-year period 1997–2016. (Both the Science Citation Index Expanded (SCIE) and the Social Sciences Citation Index (SSCI) were selected. In the remainder of the paper, unless noted otherwise, all references to JCR include both SCIE and SSCI editions.) The citable items ($N_Y$) data refer to the JCR year: They are the sum of articles and reviews published by a journal in that year. From the original data, we removed those journals whose Impact Factors or citable items were listed as either non-available or zero, as well as duplicate entries. A total of 166,498 journal datapoints were thus obtained. The $N_Y$ values range from 1 to 31,496, while the Impact Factor values range from 0.027 to 187.04.

For the purposes of this paper we need data on the IF and its denominator, $N_{2Y}$, the biennial publication count in the two years prior to the JCR year. A practical difficulty arises here. While the JCR list IF values and yearly publication counts ($N_Y$) in the JCR year, they do not list $N_{2Y}$ values. To obtain $N_{2Y}$ data we must check each journal individually in the Web of Science—a conceptually trivial but nevertheless cumbersome procedure for tens of thousands of journal datapoints. However, it is reasonable to assume that the publication count does not change appreciably over the 3-year window spanned by $N_{2Y}$ and $N_Y$, and write

$N_{2Y} \approx 2 N_Y$.   (10)

If the approximation (10) holds for all journals, we would be justified to use $2N_Y$ data as a substitute for $N_{2Y}$. It would thus suffice that $N_{2Y}$ be no greater (or smaller) than a few times the product $2N_Y$. That is, for a modest factor $\lambda$,

$\frac{1}{\lambda} \;\le\; \frac{N_{2Y}}{2N_Y} \;\le\; \lambda$.   (11)

We test Eq. (10) for the 2016 JCR year and the 8710 journals in the Science Citation Index Expanded (SCIE) list. As we can see from Fig. 1, the quantities $N_{2Y}$ and $2N_Y$ are strongly correlated ($R^2 = 0.82$, Pearson correlation coefficient = 0.90). Of the 8710 journals, all but 5 (or 99.94%) satisfy Eq. (11), while even for the 5 remaining journals, the discrepancy remains modest. Therefore, we are justified to use the approximation $N_{2Y} \approx 2N_Y$, provided we are interested in the broad, overall relationship of Impact Factors with journal size. But when we analyze individual journals, especially with respect to each other (as in ranking), then we must use $N_{2Y}$. Certainly, the only reason we may prefer to use $2N_Y$ instead of $N_{2Y}$ is the ease of data retrieval from JCR, but where and when necessary, the $N_{2Y}$ value should be used.
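This consistency check is easy to reproduce on any paired size data. The sketch below runs it on synthetic journal counts; the ±15% year-to-year fluctuation is our assumption, standing in for the JCR/Web of Science data:

```python
import math
import random

random.seed(2)

# Synthetic journal sizes: annual citable items n_y, and a biennial count
# n_2y that fluctuates mildly (here +/-15%) around 2 * n_y.
n_y = [random.randint(10, 2000) for _ in range(1000)]
n_2y = [round(2 * n * random.uniform(0.85, 1.15)) for n in n_y]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r = pearson(n_2y, [2 * n for n in n_y])
print(f"Pearson r = {r:.3f}, R^2 = {r * r:.3f}")
```

With only mild year-to-year fluctuations, the correlation is close to 1, which is the pattern Fig. 1 displays for the real data.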

Figure 1: How good is the approximation of Eq. (10)? We test this with the 2016 JCR SCIE data. The biennial publication count in the two years prior to the JCR year ($N_{2Y}$) is plotted against twice the annual publication count in the JCR year ($2N_Y$).

4 Data Analysis

4.1 Range of Impact Factor values from citation distribution data

As a first check of the relevance of the Central Limit Theorem—and its underlying sampling distribution—for real distributions, we study the citation distribution of the 276,000 physics papers (articles and reviews) published in 2014–2015, and cited in 2016, in the Web of Science Core Collection. Since we are interested in the range of IF values, we calculate how the highest possible citation average depends on sample size for the top-cited portion of the distribution—that is, for the 2089 papers cited at least 30 times in 2016.

First, we confirm that the citation distribution has finite variance—a key prerequisite for the Central Limit Theorem. Specifically, we find that the probability for a paper among the 2089 papers to be cited more than $x$ times follows a Pareto distribution with a shape parameter $\alpha$, i.e., $P(c > x) \propto x^{-\alpha}$. Because $\alpha > 2$ here, the variance is finite. One could easily follow a similar analysis to show that the tails of citation distributions in subjects other than physics also follow a Pareto distribution with finite variance. For example, of the 12,633 articles and reviews published in the subject Information Science and Library Science in 2015–2016, the citations of the 241 papers that were cited at least 10 times in 2017 form a Pareto distribution whose shape parameter likewise exceeds 2. For shape parameter values $\alpha \le 2$ the variance of a Pareto distribution is infinite and the Central Limit Theorem does not hold (Newman, 2005).
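A standard way to estimate a tail's shape parameter is the Hill (maximum-likelihood) estimator. The sketch below applies it to synthetic citation counts; the true shape 2.8 and the cutoff of 30 citations are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

random.seed(3)

# Hill (maximum-likelihood) estimator of the Pareto shape parameter alpha
# for the tail of a sample above a cutoff xmin.
def hill_shape(samples, xmin):
    tail = [x for x in samples if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic heavy-tailed "citation counts": Pareto tail with true alpha = 2.8.
alpha_true = 2.8
cites = [30.0 * random.paretovariate(alpha_true) for _ in range(20_000)]

alpha_hat = hill_shape(cites, xmin=30.0)
print(f"estimated shape alpha = {alpha_hat:.2f}")
print("variance finite (alpha > 2):", alpha_hat > 2)
```

Recovering $\hat{\alpha} > 2$ is exactly the finite-variance check that licenses the use of the Central Limit Theorem on the tail.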

We calculated the total citations, $C$, as a function of decreasing citation rank, $c$, for the 2089 papers in our set. The dependence of $C$ on $c$ is found to be (see Fig. 2, inset)

$C(c) = \Lambda\, c^{m}$,   (12)

where $\Lambda$ is a scale factor in the order of the number of citations received by the most cited paper in this distribution. Note that $C$ grows much slower than linearly with $c$. The Impact Factor is defined as the ratio $f = C/c$. (We denote results obtained from data fitting with a 'fit' superscript, as opposed to results from theory where we use 'th'.) So we can write

$f^{\mathrm{fit}}(c) = \frac{C(c)}{c} = \Lambda\, c^{\,m-1}$,   (13)

with $\Lambda$ and $m$ the fitted values. Clearly, $C$ grows less than linearly with $c$, which results in a size-dependent citation average, $f^{\mathrm{fit}}$. See Fig. 2.

The fact that Eq. (13) has been deduced from 'only' the top 2089 papers should not distract us from recognizing the generality of the conclusion: Equation (13) is in good agreement with the Impact Factor uncertainty relation (5) and—because the small-journal approximation holds at the high value of $c$ where Eq. (13) is terminated—with its simplified version, Eq. (8). In fact, had we continued the analysis to lower-ranked papers, the exponent $m$ in Eq. (12) would have surely decreased, since the $C(c)$ curve would cave downward to account for lower-cited papers; consequently, the exponent $m-1$ in Eq. (13) would approach $-0.5$. Therefore, the frequency distribution of citation averages for physics articles and reviews agrees with the sampling distribution from the Central Limit Theorem.
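The fit behind Eqs. (12)–(13) is a straight line in log-log space. A minimal sketch, where the generating values $\Lambda = 500$ and $m = 0.6$ are made up for illustration:

```python
import math

# Least-squares fit of a line in log-log space, i.e., C(c) = Lambda * c**m.
def powerlaw_fit(ranks, totals):
    lx = [math.log(c) for c in ranks]
    ly = [math.log(C) for C in totals]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
    return math.exp(my - slope * mx), slope   # (Lambda, m)

# Hypothetical cumulative-citation curve C(c) = 500 * c**0.6 (made-up numbers).
ranks = list(range(1, 2001))
totals = [500.0 * c ** 0.6 for c in ranks]

lam, m = powerlaw_fit(ranks, totals)
print(f"Lambda = {lam:.1f}, m = {m:.3f}")
# The citation average then falls off as f(c) = C(c)/c = Lambda * c**(m-1).
```

Since $m < 1$, the recovered citation average $\Lambda c^{m-1}$ decreases with sample size, which is the size dependence the text describes.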

To further explore the agreement between actual citation distributions and the sampling distribution, we selected an arbitrarily sampled list of 15 physics journals, and compared the citation average, $f(n)$, of their top-$n$ cited papers with the right-hand side of Eq. (5), i.e., $\mu + k\sigma/\sqrt{n}$. The journals were Nat. Physics, New J. Phys., Nucl. Phys. B, Phys. Lett. B, Phys. Rev. X, Phys. Rev. Lett., Phys. Rev. A, Phys. Rev. B, Phys. Rev. C, Phys. Rev. D, Phys. Rev. E, Phys. Rev. Applied, Phys. Rev. Phys. Ed. Res., Phys. Rev. Acc. Beams, and Rev. Mod. Phys. We obtained the citation distributions for papers published in 2014–2015, cited in 2016. Here, the small-journal approximation does not necessarily hold, so we used a nonzero value for the population average $\mu$, a choice that will be justified later. When we plot the quantity $f(n) - \mu$ against $n$ for the top-25% cited portion of each journal, we find an $n^{-\beta}$ behavior. The exponent $\beta$ ranges from 0.38 to 0.56 for the 15 journals, with a median and mean value both equal to 0.47, in reasonable agreement with the expected value of 0.50 from the Central Limit Theorem. The median and mean values of $\beta$ were not particularly sensitive to the specific cutoff point of the top-25%.

To sum up, the agreement of both the empirical curve Eq. (13) from physics papers and the corresponding curves from our sampled list of 15 physics journals with the Impact Factor uncertainty relation of Eq. (5) is another confirmation that the Central Limit Theorem applies here. And the key consequence of this applicability is that the range of IF values that are statistically available to a journal of size $n$ scales as $1/\sqrt{n}$.

Figure 2: Impact Factor, $f^{\mathrm{fit}}(c)$, of the set of top-cited papers among the 276,000 papers published in physics in 2014–2015, versus citation rank, $c$. Inset: Total citations $C$ of top-cited papers versus $c$. Here, $\Lambda$ and $m$ are the fit parameters of Eq. (12).

4.2 Range of Impact Factor values from Impact-Factor & journal-size data

We have analyzed all 166,498 IF and journal-size data pairs with nonzero values for the Impact Factor and number of annual citable items ($N_Y$) in the Clarivate Analytics' Journal Citation Reports in the 1997–2016 period. Before we proceed, let us present two important features of Impact Factors and journal sizes, which may not be widely known.

4.2.1 Small journals are extremely common

In Fig. 3 we plot the frequency distribution of journals vs. their annual size, i.e., the number of citable items (articles and reviews) published in the JCR year, $N_Y$. Small journals are extremely common: The most common journal size is 24 citable items per year. 50% of all journals publish 60 or fewer citable items per year, while 90% of all journals publish 250 or fewer citable items annually.

Figure 3: Frequency (filled dots) and fraction (hollow dots) of journals with annual publication count $N_Y$. Data for 166,498 journal datapoints in the 1997–2016 JCR.

4.2.2 Most journals have low Impact Factors

In Fig. 4 we plot the frequency distribution of journals vs. their IFs. As is evident from the figure, most IFs are quite low: The most commonly occurring value is 0.5. In the low-IF range that covers 85% of all journals, the frequency distribution can be approximated by an exponentially decreasing function (see dotted line).

Figure 4: Frequency distribution of IF values. The dotted line is an exponential fit, valid in the low-IF range. Inset: Same data plotted as a histogram in linear scale over a restricted IF range. Data for 166,498 journal datapoints in the 1997–2016 JCR. The bin width of IF values used in the distribution is 0.1.

We are now ready to analyze Impact-Factor and journal-size data to obtain a boundary curve for Impact Factors.

4.2.3 Impact Factor vs. journal size

In Fig. 5, we plot the Impact Factor versus journal (annual) size for all 166,498 journal datapoints in our set. All values of journal sizes and IFs listed in the 20 years of JCR data are shown in a linear-log plot. (As we noted before, the Central Limit Theorem applies for sample sizes, i.e., biennial journal sizes, that are typically 30 or greater, but for completeness, we show all the data in the figure. More on the range of applicability of the Theorem later.) The first thing to note is that, apart from the smallest sizes (see inset), Fig. 5 has the signature appearance of the Central Limit Theorem at work: The data show high variance at small scales, which gradually decreases at higher scales as the datapoints almost converge to a single value at the highest scales. Compare, for instance, with Fig. 3 of Wainer, 2007, which shows the age-adjusted kidney-cancer rates in US counties, and which is a well-known manifestation of the Central Limit Theorem.

What happens at the smallest sizes? The abrupt peak of extremely high IF values at very small biennial size is a clear break from the gradually diminishing triangular-shaped distribution that is the signature of the Central Limit Theorem. All the datapoints of this peak belong to CA-A Cancer Journal for Clinicians, a journal whose citation distribution has a very high standard deviation σ. To avoid these "problematic" datapoints that challenge the Theorem's validity, we suggest a conservative lower bound for the biennial sample size of citation distributions. Indeed, above this bound the scatter plot of Fig. 5 largely returns to behavior typical of the Central Limit Theorem: the remaining local peaks do not deviate drastically from the triangle-shaped distribution.
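This triangular, shrinking-spread pattern is easy to illustrate with a small simulation. The sketch below draws random "journals" of increasing size n from a single skewed "citation population" and measures the spread of their citation averages; the lognormal population is a hypothetical stand-in for real citation data, not the distribution used in this paper.

```python
import random
import statistics

random.seed(42)

# Hypothetical skewed "citation population" (a stand-in for real citation counts).
population = [random.lognormvariate(0.5, 1.2) for _ in range(200_000)]
sigma = statistics.pstdev(population)

def if_spread(n, trials=400):
    """Standard deviation of the citation averages of many random 'journals' of size n."""
    means = [statistics.fmean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

# The CLT predicts that the spread of the averages shrinks as sigma / sqrt(n):
for n in (30, 120, 480):
    print(n, round(if_spread(n), 3), round(sigma / n ** 0.5, 3))
```

The spread of the averages tracks σ/√n, so quadrupling the journal size roughly halves the available IF range: exactly the convergence seen in Fig. 5.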

Figure 5: IF values versus annual journal size n. 166,498 points shown, corresponding to all journals with a nonzero IF from 1997–2016. Data from Journal Citation Reports, Clarivate Analytics. Inset: Detail of main plot at small journal sizes.

A glance at Fig. 5 confirms the penalizing effect of journal size on the range of IFs. We observe a global (i.e., large-scale) trend whereby large journals do not have high Impact Factors: Higher IF values tend to occur for smaller rather than larger journals, and among the 166,498 journal datapoints the maximum IF attained drops steadily with increasing journal size. As we zoom in at smaller scales, we notice local irregularities, most notably three local peaks (groups of high-IF datapoints) at successively larger journal sizes. The first peak results from very small and selective journals that publish a few mega-cited papers, most notably CA-A Cancer Journal for Clinicians. The second peak results from highly selective monodisciplinary journals such as the New England Journal of Medicine, Lancet, Chemical Reviews, Journal of the American Medical Association, Cell, Nature Reviews Molecular Cell Biology, Nature Materials, Nature Nanotechnology, etc. And the third peak is due to highly selective multidisciplinary journals, such as Nature and Science.

We are now ready to directly demonstrate the relevance of the Central Limit Theorem, or the Impact Factor uncertainty relation, Eq. (5), for IF values. In Fig. 6 we plot the theoretical upper bound μ + kσ/√n for several values of σ, with k = 4 (this choice for k will be justified later). We also plot the lower bound μ − kσ/√n for one value of σ. In the same plot, we show the IF vs. size data (same data as Fig. 5). Clearly, for the smallest value of σ shown, the Central Limit Theorem is practically irrelevant in IF rankings, because the random variations it considers, which are equal to kσ/√n, are much smaller than the vast majority of actual IF values. However, for the other values of σ listed in the figure, random variations from μ are clearly comparable to IF values and we can no longer ignore them. And since, as we will shortly explain, the population of scientific papers has a large standard deviation σ, and hence large random variations kσ/√n, we conclude that accounting for Central Limit Theorem effects in IF rankings is neither a curiosity nor a choice, but a necessity. In fact an even larger σ is probably more realistic, as we discuss below.

Figure 6: Upper and lower bounds of the theoretically expected IFs from the Central Limit Theorem, shown together with actual IF values from JCR data. Four values of σ are shown for the upper bounds, one for the lower bound. For high enough values of σ, the random fluctuations in the IF described by the Central Limit Theorem are large enough to interfere with IF values.
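As a minimal sketch of the uncertainty relation Eq. (5), the statistically available IF window for a journal of size N can be computed directly; the default parameters μ = 3.18 and kσ = 100 are the values adopted later in this section.

```python
def if_window(N, mu=3.18, k_sigma=100.0):
    """IF range statistically available to a journal of biennial size N,
    per the uncertainty relation: mu - k*sigma/sqrt(N) <= IF <= mu + k*sigma/sqrt(N)."""
    half_width = k_sigma / N ** 0.5
    return mu - half_width, mu + half_width

# The window shrinks as 1/sqrt(N): small journals can roam widely, megajournals cannot.
for N in (50, 1000, 20000):
    lo, hi = if_window(N)
    print(N, round(lo, 2), round(hi, 2))
```

For a 50-paper journal the window spans roughly −11 to 17, while for a 20,000-paper megajournal it narrows to about 2.5–3.9, straddling μ.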

Why did we choose these values of μ and σ? First, the population mean μ can be estimated from the Journal Citation Reports (JCR), by summing up all Impact Factor numerators and dividing by all Impact Factor denominators (as these are approximated via Eq. (10)). Doing so for the 1997–2016 JCR yields our estimates of μ. It is reasonable to expect a slight annual inflation of μ, and indeed over the years 2014, 2015, and 2016 we obtain slightly increasing values, reaching μ = 3.18 for 2016. Now, the presence of μ is more clearly seen in the high-n limit, where the term kσ/√n is small. And since all but two of the datapoints in this limit correspond to journals that have rapidly grown in size in recent years (namely, RSC Adv., Sci. Rep., and PLoS ONE), we used the most recent value, μ = 3.18, in Fig. 6. (We will comment on the two outlier datapoints shortly.) Estimating the population standard deviation σ for the 4–5 million articles and reviews published in a two-year period is a little trickier. Our approach is to use the Central Limit Theorem itself to estimate σ: we draw a number of randomly selected samples of (approximately) equal size n, calculate the citation mean of each sample, calculate the standard deviation σ_μ of these sample means, and obtain σ from Eq. (3). We performed several tests of this kind from the Web of Science. One practical difficulty was that the ordering of articles in Web of Science searches is not exactly random (even if one orders by date or author name), and in addition, only 100,000 items can be accessed at any time in a search. To avoid spurious effects we performed several types of selections that we tried to make as random as possible, by searching for random ranges of author identifiers (ORCID), for generic author last names, for author names starting with a sequence of letters, or for a number of generic words in a topic search. The range of values for σ we obtained in this way was 17–148, with a median of 23. For the purposes of this study, we will use the estimate σ = 25, and we will also take k = 4 (that is, we include random variations within 4σ_μ of μ, or at the 99.99% level), which makes kσ = 100. The value σ = 25 seems conservative and is in accord with our experience from analyzing journal citation datasets. But even if kσ were as low as 10, the Central Limit Theorem effects would be relevant in IF rankings, as we discussed above.
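The σ-estimation procedure just described can be sketched in code. The synthetic "papers" pool below is a stand-in for the Web of Science records (which cannot be bundled here); the inversion σ = σ_μ √n is Eq. (3).

```python
import random
import statistics

random.seed(7)

# Synthetic stand-in for the citation counts of a large, highly skewed paper population.
papers = [random.lognormvariate(1.0, 1.0) for _ in range(100_000)]

def estimate_sigma(pool, n=2000, num_samples=80):
    """Estimate the population standard deviation via the CLT (Eq. (3)):
    sigma_mu = stdev of the sample means, hence sigma = sigma_mu * sqrt(n)."""
    means = [statistics.fmean(random.sample(pool, n)) for _ in range(num_samples)]
    sigma_mu = statistics.stdev(means)
    return sigma_mu * n ** 0.5

# The CLT-based estimate should land close to the directly computed value:
print(round(estimate_sigma(papers), 2), round(statistics.pstdev(papers), 2))
```

The appeal of this indirect route is that it needs only sample means, not the full citation record of every paper in the population.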

With the parameter choices μ = 3.18 and kσ = 100 at hand, let us now revisit the IF vs. size plot. See Fig. 7. The solid and dashed curves are the theoretical maximum and minimum, respectively, from the Impact Factor uncertainty relation, while the insets show the low-n and high-n limit behavior. We observe that the theoretical curves envelope the IF data and capture the general trend of IF behavior. From the high range of IF values at small journal sizes, to the gradually diminishing IF range with increasing size, to the concentration of IF values within a narrow band in the large-size limit, the theoretical curves capture well what happens with real journals. Indeed, 98.1% of the journals large enough for the Theorem to apply have IFs within the range [μ − kσ/√n, μ + kσ/√n]. So there is good agreement, both qualitative and quantitative, between theory (Central Limit Theorem) and 'experiment' (IF data).

Note, incidentally, the two very low (outlier) IF values, equal to 0.5 and 0.4, at annual sizes of roughly 16,000 and 19,000 respectively. These datapoints belong to the 2004 and 2005 IF values for Lecture Notes in Computer Science. Because this is a specialized title that publishes conference proceedings, it corresponds to a distinct population with different citation features (different μ and σ) than other large journals; see implication #2 in §2.2. Indeed, this journal is now classified in the Web of Science as a Book Series and does not receive an IF value. So it is a special case and does not invalidate the uncertainty relation (5).

Figure 7: IF values versus annual journal size n. 166,498 datapoints shown, corresponding to all journals with a nonzero IF from 1997–2016. Data from Journal Citation Reports, Clarivate Analytics. Left inset: Detail of main plot at small sizes. Right inset: sizes n from 10,000–35,000. The solid and dashed lines are the theoretical maximum and minimum, respectively, from the Central Limit Theorem; see Eq. (5). The parameters used were μ = 3.18 and kσ = 100.

While in Fig. 7 we approximated the IF denominator by the annual size, some of the very large journals had their annual size fluctuate considerably over the last few years (particularly Scientific Reports and PLoS ONE). To test more accurately the applicability of the Central Limit Theorem in the high-n limit of the IF data, we plot in Fig. 8 the IF vs. its exact denominator for the journals whose biennial size exceeds 8000 in the JCR years 2013–2016. These journals are Applied Physics Letters, Journal of Applied Physics, Physical Review B, RSC Advances, Scientific Reports, and PLoS ONE. Evidently, 17 out of 18 datapoints lie within the theoretically expected range, in full confirmation of the Central Limit Theorem.

To those readers wondering whether "predatory" journals appear in Fig. 7 or Fig. 8, the answer is negative: Predatory journals, which publish papers for money in an open-access model with no regard for the quality or validity of the papers they publish, are generally flagged by gatekeepers at Clarivate Analytics and do not make it into the JCR. Certainly all the megajournals in Fig. 8 are legitimate, respectable journals.

Figure 8: IF values versus biennial journal size for large journals (biennial size above 8000) from the JCR years 2013–2016. The solid and dashed lines are the theoretical maximum and minimum, respectively, from the Central Limit Theorem; see Eq. (5). The parameters used were μ = 3.18 and kσ = 100.

We predicted earlier (§2.1.2) a scale-dependent stratification of journals in IF rankings, and this is indeed what we observe. Again, the (simple) classification system we use is that journals below a certain biennial publication count are 'small'; journals of intermediate count are 'mid-sized'; and journals above a high count are 'large'. For example, if we rank by IF the 8825 journals in the 2016 JCR Science Citation Index Expanded list (SCIE), we find that all the top 100 ranks, as well as all the bottom 1315 ranks, are occupied by small journals. The highest rank occupied by a mid-sized journal is #109, and the lowest rank is #7510. Consistent with the Central Limit Theorem, the 3 very large journals appear in the rather unremarkable ranks #973, #1888, and #2264. Among the top 500 ranks, 482 slots are taken by small journals and 18 by mid-sized journals. Among the top 1000 ranks, 955 slots are taken by small journals, 44 by mid-sized journals, and 1 by a large journal. By rank 5000, 90% of the mid-sized journals have appeared, as well as all 3 large journals. Finally, among the bottom 3825 ranks (i.e., ranks 5001–8825), 3815 slots are taken by small journals and 10 by mid-sized journals. The agreement between the observed (Fig. 9) and expected (Table 1) stratification of journals in IF rankings is yet another justification of the applicability of the Central Limit Theorem here. Even the asymmetrical position of the large and mid-sized journals to the left of the center of the distribution makes sense: Recall that larger journals approach μ in the high-n limit while small journals can 'fall' all the way to the bottom, and that there are many more low-cited than highly-cited papers for small journals to "sample."
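The stratification bookkeeping above is easy to reproduce schematically. The sketch below draws CLT-style random IFs for an illustrative journal mix (the size thresholds, counts, and σ = 25 are our assumptions, not the JCR's actual composition) and tallies the size classes in the top 100 ranks; small journals monopolize the top, as observed.

```python
import random

random.seed(1)
MU, SIGMA = 3.18, 25.0  # population mean and standard deviation, as adopted in the text

def simulated_if(N):
    """Random IF for a journal of biennial size N: normal around MU per the CLT."""
    return random.gauss(MU, SIGMA / N ** 0.5)

# Illustrative journal mix (thresholds and counts are ours, not the JCR's):
journals = [("small", random.randint(30, 500)) for _ in range(3000)]
journals += [("mid", random.randint(2000, 8000)) for _ in range(300)]
journals += [("large", random.randint(10000, 40000)) for _ in range(3)]

ranked = sorted(journals, key=lambda j: simulated_if(j[1]), reverse=True)
top100 = [cls for cls, _ in ranked[:100]]
print("top 100:", top100.count("small"), "small,",
      top100.count("mid"), "mid,", top100.count("large"), "large")
```

Because a small journal's IF has a standard error up to an order of magnitude larger than a megajournal's, pure chance suffices to fill the extreme ranks with small titles.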

Figure 9: Scale-dependent stratification of journals in IF rankings. On the y-axes we plot the frequency of three categories of journals in the ranked positions shown on the x-axis, for the 8825 journals in the 2016 JCR Science Citation Index Expanded list (SCIE). The three journal categories are: small (clear columns; values on the primary y-axis, left), mid-sized (striped blue; values on the secondary y-axis, right), and large (filled blue; values on the secondary y-axis), as explained in §2.1.2. High-IF journals appear in the top ranks (top 100, 101–200, etc.), low-IF journals in the bottom ranks (8001–8500, etc.). This bar plot fully confirms Table 1.

Some readers may wonder whether the Impact Factor uncertainty relation, and its underlying normal distribution, can account for the IF variations of particular journals from year to year. That is, given a certain journal, is its annual IF variation smaller than (or comparable to) the random variations expected by the Central Limit Theorem? The answer is positive. For 97.4% of the 13,099 unique journal titles in our 1997–2016 JCR list whose IF varied at some point while the title remained unchanged over that 20-year period, the maximum uncertainty expected from the Central Limit Theorem is greater than the largest IF variation observed. (However, see also §6, where we discuss cases where the annual IF variations are much smaller than what the Theorem predicts.)

But if IF values fluctuate due to random effects, can we use error bars or confidence intervals on the mean (Impact Factor) to describe these fluctuations? The short answer is yes for the sampling distribution but no for the citation distributions of actual journals. While approximate confidence intervals for means can be constructed even for non-normally distributed data, the use of confidence intervals is based on the premise of a random variable, and actual journals are certainly not random samples of papers drawn from the wider population. The sampling distribution, on the other hand, which we have used to estimate IF ranges based on the Central Limit Theorem, does pertain to a random variable, and we are thus allowed to calculate a standard error or a confidence interval on the mean of §2.2. Fundamentally, this is equivalent to our approach in Eq. (5). We have simply chosen to focus not on the mean and its standard error, σ/√n, but on the range of values, μ ± kσ/√n, which we have found more informative.

4.2.4 Maximum Impact Factor values from Impact-Factor & journal-size data

Let us now check whether the maximum IF values for actual journals have the scale-dependent behavior expected from the Central Limit Theorem. To do that, we extract the dependence of the maximum Impact Factor, IF_max, on journal size n from the data. We bin the journal size data of Fig. 7 in groups (bins) of 10 citable items each (1–10, 11–20, etc.). For the journals in each bin, we calculate IF_max and plot it against n in Fig. 10 (filled dots). The global downward trend is clearly visible: The journal size n has an adverse effect on IF_max. Also shown in Fig. 10 is a best-fit curve calculated from the binned data,

IF_max(n) = μ + a n^(−b),     (14)

where a and b are the fit parameters, with b close to 1/2. The details of binning have some effect on the exponent of Eq. (14). Clearly, the bin size should not be too large, as it can affect what we are trying to measure, which is size-dependent. Since the expression (14) has an exponent close to −1/2, we also plot the product (IF_max − μ)√n in Fig. 10 (hollow dots). Evidently, this product is independent of n (horizontal dotted line), which means that its variance is size-independent (as opposed to the variance of Impact Factors).
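The binning-and-fitting step can be sketched as follows on synthetic CLT-style data (the JCR records themselves are proprietary, so the size–IF pairs below are simulated under our parameter assumptions): bin journals by size, take the maximum IF per bin, and regress log(IF_max − μ) on log(n). The recovered slope should sit near −1/2.

```python
import math
import random

random.seed(3)
MU, K_SIGMA = 3.18, 100.0

# Synthetic (annual size, IF) pairs with CLT-style spread around MU
# (biennial sample size taken as 2n; purely illustrative, not JCR data).
data = [(n, random.gauss(MU, (K_SIGMA / 4) / (2 * n) ** 0.5))
        for _ in range(200) for n in range(10, 2000, 10)]

# Bin sizes in groups of 10 and keep the maximum IF per bin.
max_if = {}
for n, f in data:
    key = (n - 1) // 10
    max_if[key] = max(max_if.get(key, float("-inf")), f)

# Least-squares slope of log(IF_max - MU) vs log(n): the CLT predicts roughly -1/2.
points = [(math.log((key + 1) * 10), math.log(f - MU))
          for key, f in max_if.items() if f > MU]
xbar = sum(x for x, _ in points) / len(points)
ybar = sum(y for _, y in points) / len(points)
slope = (sum((x - xbar) * (y - ybar) for x, y in points)
         / sum((x - xbar) ** 2 for x, _ in points))
print(round(slope, 2))
```

Running the same fit on real binned JCR data is what produces the coefficients of Eq. (14).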

Equation (14) confirms the uncertainty relation (5) from Impact Factor data, just like Eq. (13) did from citation data of physics papers. Once again, we observe that the frequency distribution of actual citation averages (IF data) agrees with the (notional) sampling distribution from the Central Limit Theorem. We have thus identified, for the 166,498 journal datapoints in our set, a global boundary curve for Impact Factors as a function of journal size, in the form of Eq. (8), or, more generally, Eq. (5).

As a by-product of the data-fitting process in Fig. 10, we observe that the n^(−1/2) behavior appears to hold all the way to large sizes. This indicates that the kσ term in the IF uncertainty relation is actually greater than we have assumed, otherwise the constant term μ would dominate at these sizes. Indeed, the coefficient a of Eq. (14) provides a measure of kσ that is greater than the value 100 that we have so far assumed. This indication of a population variance higher than we assumed only enhances the effects of the Central Limit Theorem in Impact Factor rankings.

Figure 10: Maximum Impact Factor data, IF_max (filled dots), for each annual journal size n in the Journal Citation Reports from 1997–2016. The data are drawn from Fig. 7, with binning as described in the text. The dotted line (best fit) is Eq. (14). Also shown (hollow dots) is the product (IF_max − μ)√n, which, evidently, does not depend on journal size.

5 How the Central Limit Theorem tips the balance in IF rankings

The conclusion from the above discussion is unequivocal. The Central Limit Theorem—and its corollary for scholarly journals, the Impact Factor uncertainty relation of Eq. (5)—is an essential tool for explaining and quantifying the scale-dependent behavior of IFs.

The implications of the 1/√n dependence of Eq. (5) for Impact Factors are remarkable. First, the very notion of a constraint implies an unfair advantage: If two athletes are subjected to different constraints on how high they can score, then surely we cannot speak of a level playing field. Likewise for journals: the range of IF values statistically available to a journal depends on its size. So the effect of size interferes with the quantity being measured (citation impact) and, unless removed, it will "tip the balance." Second, the rapidly decaying form of kσ/√n means that the advantage is strongest for small journals, which can thus reach very high Impact Factors. This effect is much stronger than has been previously reported (Amin & Mabe, 2004). It was anticipated, though not fully analyzed, in our previous work (Antonoyiannakis & Mitra, 2009; Antonoyiannakis, 2008). Indeed, the "citation density curve" for Phys. Rev. Letters in Antonoyiannakis & Mitra, 2009, which is identical to our IF_max here, has approximately an inverse-square-root dependence on rank, a behavior we had then observed also for other journals. At the time, we had found this a curious feature, but we now understand its significance.

But another unfair advantage works in favor of very large journals when compared to low-IF small journals, since their IFs are bounded from below; see Eq. (9). First, very large journals (for which the term kσ/√n is small) will never have very low IFs, i.e., IFs much smaller than μ. Second, while among the small journals there are many high-IF titles, there are many more low-IF titles, and the latter dominate the statistical correlation of IF with journal size. (This is also why in Fig. 9 large journals are asymmetrically positioned to the left of the center.) Why this asymmetry? For two reasons. First, citation distributions are highly skewed: Low-cited papers are abundant while highly-cited papers are scarce. A random allocation of papers into journals (in the spirit of the Central Limit Theorem) is thus more likely to yield low-IF than high-IF journals. Second (and going beyond the Central Limit Theorem), the stratification of journals by prestige causes the already prestigious journals to disproportionately attract the few highly cited papers, which limits the number of high-IF journals even further. This behavior is consistent with the findings of Rousseau & Van Hooydonk, 1996, who report a positive correlation between journal production (i.e., journal size) and Impact Factor for non-review journals grouped in bins of IF values. It is also broadly consistent with Huang, 2016, who reports a positive correlation between article number (size) and IF for scholarly journals in the various subject categories of the JCR. (Huang also reports that the correlation is obscure, i.e., not evident, in a direct plot of journal size vs. IF; but because he looks at a very small region of the full spectrum of sizes and IFs, this is not surprising; compare with the full spectrum of IF and journal-size values in Fig. 5. Other concerns with the Huang paper were raised by Rousseau, 2016.) On a similar note, Fang et al., 2018, find that higher-IF journals "publish more papers than expected"; for example, "journals in the top IF quartile publish 44% of all SCI papers," in agreement with our analysis: Since most journals are small (see §4.2.1) and have low IFs (§4.2.2), the bottom quartile of IFs must be populated by small journals, which therefore account for fewer than 25% of all papers. Indeed, when we analyze our data in a similar fashion to Fang et al., we find that the journals in the top quartile of IF values publish 46% of the papers. Clearly, the combined effects of the Central Limit Theorem (large journals have "decent" IFs that approach μ) and the high skewness of citation distributions (plenty of low-cited papers but few highly-cited papers), exacerbated by the tendency of highly cited papers to congregate in more prestigious journals, explain the behavior seen by Rousseau & Van Hooydonk, 1996, Huang, 2016, and Fang et al., 2018.

Our findings are also in agreement with Katz, 2000, who studied the effect of the scale of scientific communities on the citation impact of papers, and reported a power-law behavior that is, in most but not all cases, in favor of larger scientific communities, which he attributed not to the Central Limit Theorem but to the Matthew effect in science ("the rich get richer") (Merton, 1968). In a similar vein, van Raan, 2008, studied the 100 largest European research universities and reported a size-dependent cumulative advantage in the correlation between the number of citations and the number of publications. A closer look at the van Raan study reveals that the advantage (a power-law exponent greater than 1) for larger universities decreases as one moves from the bottom- to the top-performing universities, and in fact becomes a disadvantage (an exponent less than 1) for the top 10% performing universities. This behavior is in qualitative agreement with the Impact Factor uncertainty relation of Eq. (5): The top-performing universities ('samples') have higher citation averages and are thus closer to the upper curve μ + kσ/√n, which decreases with size (negative slope; see the right-hand side of Eq. (5)); while the bottom 10% of universities have something to gain by increasing in size, because they are closer to the lower curve μ − kσ/√n, which increases with size (positive slope; see the left-hand side of Eq. (5)). Paraphrasing the van Raan study, we could say that, statistically speaking, low-IF journals will gain by increasing their size, while high-IF journals will lose. It is thus not necessary to attribute such scale-dependent effects to the Matthew effect, since the Central Limit Theorem provides a more nuanced, elegant, and quantitative explanation.

Crespo et al., 2012, proposed a method to assess the merit of a target set of scientific papers, by calculating the probability that a randomly selected set of articles from a given pool of articles in the same field has a lower citation impact indicator than the target set, where the citation impact indicator can be the mean citations or the h-index. Although their approach is quite different from ours (and they do not include papers published in multidisciplinary journals), the underlying motivation is similar, as it is based on the realization that both the mean citations and the h-index are actually not independent of size.

Thus, perhaps counter-intuitively (Waltman, 2016), the process of averaging does not guarantee size independence. But then, what is the point in using averages? Should we avoid them altogether? Of course not. When there is low disparity (variance) within the population, or when sample sizes are large enough to absorb the fluctuations, averages are fine. But in citation wealth, as in actual wealth, the vast disparity within the population makes plain (i.e., unnormalized) averages misleading, especially when very small samples are involved. After all, we are used to reports of average wealth (GDP per capita) for nations (large populations), but median house prices for neighborhoods (small populations).

6 But actual journals are not random paper selections…

The Central Limit Theorem is based on the sampling distribution, in a process of randomly drawing papers from the entire scientific literature and 'forming' a journal, many times over, and then studying the statistics of the citation averages produced. Even though it may appear counterintuitive at first, we have established in this paper that the Central Limit Theorem successfully explains and quantifies many properties of the scale-dependent behavior of Impact Factors. But surely there are other aspects of Impact Factors that cannot be attributed to randomness and the Theorem? Indeed there are, and to find them we need to look for sustained above-average performance, where "above average" means "with respect to μ." If a journal repeatedly (i.e., year after year) scores a high IF, close to the theoretical maximum μ + kσ/√n or even higher, then this behavior cannot be explained by chance and the Central Limit Theorem. An alternative hypothesis must be at work, where the journal's high IF results from the concerted efforts of authors, editors, editorial boards, and referees to uphold high standards. (At least, this is the case for journals in similar research fields, for, as we stated in implication #2 of §2.2, we do not differentiate here between the citation practices of different research fields.) Indeed, there are many journal datapoints in Fig. 7 whose IFs deviate substantially and repeatedly from the Central Limit Theorem predictions: 2576 of the 136,874 IF & journal-size data pairs in 1997–2016 above the minimum annual size exceed the theoretical maximum, and 509 journal datapoints exceed it by a wide margin. We already encountered such journals in §4.2.3, when we noticed local peaks (groups of high-IF datapoints) at intermediate journal sizes, where titles such as the New England Journal of Medicine, Lancet, Chemical Reviews, Journal of the American Medical Association, Cell, Nature Reviews Molecular Cell Biology, Nature Materials, Nature Nanotechnology, etc., routinely score well above the expected range at the corresponding size.
Another example occurs at the largest of these peaks, where Nature and Science also score well above the expected maximum year after year. In addition, an even larger number of journals repeatedly 'float' below yet near the upper curve, and this is no statistical accident either: Even though the Central Limit Theorem predicts that IFs of randomly selected journals will range between the theoretical minimum and maximum for a given size n, if journals were truly random selections of papers they would each 'scan' the whole range of IF values available to them, given enough time. And yet many journals consistently stay near the top of the range. Again, this can only be the result of a collective effort by several 'actors' toward high-quality content.

At the other end of the spectrum, if a journal has an IF below μ, then it demonstrates below-average performance ("average" with respect to μ). And if its IF is consistently close to (or even lower than) the theoretical minimum, μ − kσ/√n, then we can say that the journal is consistently failing to attract or retain well-cited papers.

7 The Φ index: A scale-adjusted Impact Factor

The above discussion leads us naturally to consider an alternative measure of scale-adjusted citation impact. Let us define the Φ index,

Φ = (IF − μ) / ΔIF_max,     (15)

where ΔIF_max = kσ/√N is the maximum IF uncertainty for a journal of biennial size N. Defined this way, Φ is a measure of a journal's citation impact, offset by the average μ and normalized to the maximum IF uncertainty (see Eq. (7)). Clearly, Φ = 0 for average performance, Φ > 0 for above-average performance, and Φ < 0 for below-average performance. The normalization is chosen so that journals with IF = μ + kσ/√N have Φ = 1, while journals with IF = μ − kσ/√N have Φ = −1 (recall that the two bounds are symmetric about μ). Journals that are "equidistant" from the average μ, where the distance is normalized to ΔIF_max, are ranked equal.

We can substitute ΔIF_max from Eq. (6) into Eq. (15) to obtain

Φ = (IF − μ) √N / (kσ),     (16)

which can be readily calculated for journals once μ and σ are known, or at least estimated. If one is merely interested in the relative ranks of journals, one can drop the constant factor kσ and compare the values (IF − μ)√N, where N is the biennial journal size. (The underlying assumption for such a comparison is that the journals belong to the same population, in the sense described in implication #2 of §2.2. One can make this assumption as a zeroth-order approximation, but for more detailed comparisons the distinct μ and σ of each population should be used as necessary. Indeed, some sort of field normalization, or accounting for citation differences across different populations (research fields) of papers, is important for Impact Factors, and therefore also for the Φ index.)
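Eq. (16) is a one-line computation, which can be spot-checked against Table 2 below; the sketch uses μ = 3.18 and kσ = 100 as in the text.

```python
def phi_index(impact_factor, N, mu=3.18, k_sigma=100.0):
    """Phi = (IF - mu) * sqrt(N) / (k*sigma): the IF offset by the population
    mean and normalized to the maximum IF uncertainty for biennial size N."""
    return (impact_factor - mu) * N ** 0.5 / k_sigma

# Spot checks against Table 2: CA-Cancer J Clin tops the IF ranking, but at
# only 50 biennial papers it falls behind NEJM and Nature once size is factored in.
print(round(phi_index(72.406, 695), 2))    # NEW ENGL J MED
print(round(phi_index(187.040, 50), 2))    # CA-CANCER J CLIN
print(round(phi_index(40.137, 1763), 2))   # NATURE
```

Note how the √N factor rewards journals that sustain a high IF over a large body of papers.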

In Table 2 we list the top 50 journals ranked by their 2016 Φ index. In the table, we also list the IF value, the journal biennial size N, and the IF rank. We observe that 31 journals in the top 50 Φ ranks are also in the top 50 IF ranks. Of the remaining 19, the most interesting new entries (compared to the top 50 IF ranks) are: J. Am. Chem. Soc. (#10 from #111), Nat. Commun. (#12 from #148), Angew. Chem. Int. Edit. (#16 from #150), ACS Nano (#20 from #109), PNAS (#22 from #210), Nano Lett. (#28 from #133), J. Mater. Chem. A (#31 from #249), Blood (#33 from #125), Phys. Rev. Lett. (#38 from #273), Nucleic Acids Res. (#39 from #196), Adv. Funct. Mater. (#44 from #147), and ACS Appl. Mater. Interf. (#46 from #327). All these journals score well above average given their size. As for the three megajournals, their Φ ranks, compared to their IF ranks, are as follows: Sci. Rep., #174 from #973; RSC Adv., #3445 from #1888; and PLoS ONE, #8811 from #2264. So Sci. Rep. is pulled to a higher rank by the Φ index compared to the IF, while RSC Adv. and PLoS ONE are pushed to lower ranks. This happens because the IFs of RSC Adv. and PLoS ONE are below average (at 3.108 and 2.806 respectively, compared to μ = 3.18) while the IF of Sci. Rep. (4.259) is above average.

Thus the definition of the Φ index is an attempt to correct for two unfair advantages that result from the Central Limit Theorem: (a) the advantage of small journals in scoring a high IF, in which case the IF rank is high but the Φ rank is lower; and (b) the advantage of megajournals in scoring an IF close to the population average μ, in which case the IF rank can be respectable but the Φ rank is low. See Eqs. (15, 16). A more detailed analysis of the Φ index will be presented in a forthcoming publication.

Journal (ranked by Φ index) IF Φ N IF rank
1. NEW ENGL J MED 72.406 18.24 695 2
2. NATURE 40.137 15.51 1763 10
3. SCIENCE 37.205 13.84 1656 16
4. CA-CANCER J CLIN 187.040 13.00 50 1
5. LANCET 47.831 10.75 580 5
6. CHEM REV 47.928 10.41 542 4
7. CHEM SOC REV 38.618 9.63 740 14
8. JAMA-J AM MED ASSOC 44.405 8.49 425 7
9. CELL 30.410 8.04 873 22
10. J AM CHEM SOC 13.858 7.56 5030 111
11. ADV MATER 19.791 7.33 1954 58
12. NAT COMMUN 12.124 6.91 6001 148
13. ENERG ENVIRON SCI 29.518 6.86 680 24
14. NAT MATER 39.737 6.57 323 12
15. J CLIN ONCOL 24.008 6.49 973 39
16. ANGEW CHEM INT EDIT 11.994 6.24 5035 150
17. NAT NANOTECHNOL 38.986 6.14 294 13
18. NAT BIOTECHNOL 41.667 5.77 225 8
19. LANCET ONCOL 33.900 5.66 340 19
20. ACS NANO 13.942 5.47 2596 109
21. NAT PHOTONICS 37.852 5.40 243 15
22. P NATL ACAD SCI USA 9.661 5.36 6870 210
23. NAT GENET 27.959 4.87 387 29
24. NAT REV DRUG DISCOV 57.000 4.78 79 3
25. J AM COLL CARDIOL 19.896 4.76 814 56
26. NAT MED 29.886 4.75 317 23
27. NAT REV MOL CELL BIO 46.602 4.71 118 6
28. NANO LETT 12.712 4.62 2363 133
29. CIRCULATION 19.309 4.61 818 61
30. ACCOUNTS CHEM RES 20.268 4.39 663 49
31. J MATER CHEM A 8.867 4.13 5309 249
32. NAT REV IMMUNOL 39.932 3.99 118 11
33. BLOOD 13.164 3.96 1581 125
34. EUR HEART J 19.651 3.95 576 60
35. NAT METHODS 25.062 3.93 323 38
36. BMJ-BRIT MED J 20.785 3.91 494 47
37. NAT REV GENET 40.282 3.89 110 9
38. PHYS REV LETT 8.462 3.83 5287 273
39. NUCLEIC ACIDS RES 10.162 3.69 2813 196
40. NAT REV CANCER 37.147 3.66 116 17
41. CANCER CELL 27.407 3.64 226 30
42. NAT CHEM 25.870 3.61 253 36
43. ADV ENERGY MATER 16.721 3.55 691 79
44. ADV FUNCT MATER 12.124 3.55 1583 147
45. IMMUNITY 22.845 3.53 323 41
46. ACS APPL MATER INTER 7.504 3.36 6112 327
47. GASTROENTEROLOGY 18.392 3.35 485 65
48. NAT PHYS 22.806 3.30 284 43
49. LANCET NEUROL 26.284 3.12 183 35
50. NAT NEUROSCI 17.839 3.08 442 69
Table 2: Journal rankings by Φ index. For each journal we list the IF, the Φ value, the biennial size N, and the IF rank (last column).

8 Conclusions and outlook

In this paper, we have used the Central Limit Theorem of statistics to understand the behavior of citation averages (Impact Factors). We find that Impact Factors are strongly dependent on journal size. We explain the observed stratification of journals in Impact Factor rankings, whereby small journals occupy the top, middle, and bottom ranks; mid-sized journals occupy the middle ranks; and very large journals (“megajournals”) converge to a single Impact Factor value. Further, we applied the Central Limit Theorem to develop an uncertainty relation for Impact Factors in the form of Eq. (5), a mathematical expression for the range of IF values statistically available to journals of a certain size. We confirm the functional form of the upper bound of the IF range by analyzing the complete set of 166,498 IF and journal-size data pairs in the 1997–2016 Journal Citation Reports (JCR) of Clarivate Analytics, the top-cited portion of 276,000 physics papers published in 2014–2015, as well as the citation distributions of an arbitrarily sampled list of physics journals.

The Central Limit Theorem effects are strong enough to interfere with IF values and affect the corresponding rankings. The Impact Factor uncertainty relation quantifies these effects for any journal size, but the two main unfair advantages we have identified are that (a) small journals can attain very high IFs, while (b) very large journals avoid low IFs. The former advantage is unfair towards all mid-sized and large journals; the latter is unfair towards low-IF small and mid-sized journals. We therefore recommend comparing like-sized journals in IF rankings. If one must compare journals of different sizes, it is better to use the rescaled Φ index.

One could analyze the uncertainty relation Eq. (5) by research field, which would yield field-specific values of μ and σ, and hence field-specific upper and lower bounds for Impact Factors. Field normalization (accounting, one way or another, for citation differences across fields) is standard procedure for citation averages. Our objective is to introduce size normalization, expressed via the Φ index, in addition to field normalization of citation averages.
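
To make the idea concrete, here is a minimal sketch of combined field and size normalization, expressing a journal's citation average in units of the standard error σ/√n expected for a random sample of the same size from its field. This z-score-like rescaling is an illustrative assumption only (it is not the definition of the Φ index), and the journals and field parameters below are invented:

```python
import statistics

def size_adjusted_score(citations, field_mu, field_sigma):
    """Deviation of a journal's citation average from the field mean,
    in units of the standard error sigma/sqrt(n) for its size n.
    NOTE: an illustrative rescaling, not the paper's Phi index."""
    n = len(citations)
    avg = statistics.mean(citations)
    return (avg - field_mu) / (field_sigma / n ** 0.5)

# Two invented journals in a field with mu = 4.0, sigma = 10.0:
small = [30, 0, 2, 50, 1, 0, 12, 3, 0, 22]   # n = 10,   average = 12.0
large = [4] * 900 + [40] * 100               # n = 1000, average = 7.6

print(round(size_adjusted_score(small, 4.0, 10.0), 2))  # → 2.53
print(round(size_adjusted_score(large, 4.0, 10.0), 2))  # → 11.38
```

Although the small journal has the higher raw average (12.0 vs 7.6), the large journal's excess over the field mean is far less likely to arise by chance, and the size-adjusted score reflects that.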

It would also be interesting to study the effect of citation inflation, which we have ignored here, on the μ and σ of the citation population of research papers, though it is reasonable to expect this effect to be rather small (Althouse et al., 2009).

We have applied the Central Limit Theorem to understand Impact Factors. But there is nothing exclusive to journals in our analysis, and the same methodology can be applied to other types of citation averages, to describe university departments, entire universities, or even countries, as in various global rankings. The key features to keep in mind when assessing such rankings of average quantities are that (a) the Central Limit Theorem allows small entities to fluctuate much more strongly, and thus reach much higher values, than large entities; and (b) very large entities converge to a single value, characteristic of the population itself.

Before closing, let us briefly comment on the size dependence of indicators other than arithmetic averages, such as percentile-based indicators (Leydesdorff & Bornmann, 2011) or geometric averages (Thelwall & Fairclough, 2015). Because the Central Limit Theorem applies specifically to arithmetic averages, the 1/√n dependence that we observe here does not carry over to percentile-based indicators or geometric averages. An analysis of size effects for such indicators would be interesting but is clearly beyond the scope of this study. Let us simply register an observation, drawn from the physics papers published in 2014–2015 and cited in 2016. For this set of papers, the range of values of the geometric average (determined by its maximum value) decreases with sample size, i.e., with decreasing citation rank, as n^(-1/4). While this decrease is slower than the n^(-1/2) decrease of the corresponding arithmetic average of Eq. (13) (curiously, with exactly half the exponent), it still suggests that some form of size normalization is also necessary for geometric averages of citation distributions.
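
The contrast between the two averages can be probed numerically. The sketch below uses a hypothetical lognormal-like citation population (an assumption for illustration, not the physics dataset) and tracks the largest arithmetic and geometric averages attained by random samples of size n:

```python
import math
import random

random.seed(7)

# Hypothetical lognormal-like citation population (illustrative only).
population = [int(random.lognormvariate(1.0, 1.1)) for _ in range(100_000)]

def max_means(n, trials=1000):
    """Largest arithmetic and geometric averages over random samples of size n."""
    best_arith = best_geom = 0.0
    for _ in range(trials):
        sample = random.sample(population, n)
        arith = sum(sample) / n
        # geometric average computed on (citations + 1) to handle zeros
        geom = math.exp(sum(math.log(c + 1) for c in sample) / n) - 1
        best_arith = max(best_arith, arith)
        best_geom = max(best_geom, geom)
    return best_arith, best_geom

for n in (16, 64, 256):
    a, g = max_means(n)
    print(n, round(a, 2), round(g, 2))
```

Both maxima shrink as n grows, and by the AM–GM inequality the geometric average never exceeds the arithmetic one; the n^(-1/4) exponent quoted above is an empirical property of the rank-ordered physics data, which random sampling alone does not reproduce.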

Acknowledgments: I am grateful to Jerry I. Dadap (Columbia University), Albyn Jones (Reed College), Andrew Gelman (Columbia University), and Amine Triki (Clarivate Analytics) for stimulating discussions, and to Richard Osgood Jr. (Columbia University) for encouragement and hospitality.

Disclaimer: The author is an Associate Editor at the American Physical Society. The opinions expressed here are his own.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References

Adler, R., Ewing, J., Taylor, P. (2008). Citation Statistics. Joint Committee on Quantitative Assessment of Research, available at

Althouse, B. M., West, J. D., Bergstrom, C. T., Bergstrom, T. (2009). Differences in Impact Factor across fields and over time. Journal of the American Society for Information Science & Technology, 60, 27–34.

Amin, M., Mabe, M. (2004). Impact factors: Use and abuse. International Journal of Environmental Science and Technology, 1, 1–6.

Antonoyiannakis, M. (2008). Citation analysis: Beyond the Journal Impact Factor. APS March Meeting.

Antonoyiannakis, M. (2015a). Impact Facts & Impact Factors: A view from the Physical Review. PQE - The 45th Winter Colloquium on the Physics of Quantum Electronics.

Antonoyiannakis, M. (2015b). Median Citation Index vs Journal Impact Factor. APS March Meeting.

Antonoyiannakis, M., Mitra, S. (2009). Editorial: Is PRL too large to have an ‘impact’? Physical Review Letters, 102, 060001.

Bornmann, L., Leydesdorff, L. (2017). Skewness of citation impact data and covariates of citation distributions: A large-scale empirical analysis based on Web of Science data. Journal of Informetrics, 11, 164–175.

Bornmann, L., Leydesdorff, L. (2018). Count highly-cited papers instead of papers with h citations: use normalized citation counts and compare “like with like”! Scientometrics, 115, 1119–1123.

Crespo, J., Ortuño-Ortín, I., Ruiz-Castillo, J. (2012). The citation merit of scientific publications. PLoS ONE, 7, e49156.

De Veaux, R. D., Velleman, P. D., & Bock, D. E. (2014). Stats: Data and Models (3rd Edition). Pearson Education Limited.

Fang, L., Wenbo, G., Chao, Z. (2018). High impact factor journals have more publications than expected. Current Science, 114 (5), 955–956.

Fersht, A. (2009). The most influential journals: Impact Factor and Eigenfactor. Proceedings of the National Academy of Sciences of the United States of America, 106, 6883–6884.

Gaind, N. (2018). Few UK universities have adopted rules against impact-factor abuse. Nature.

Gelman, A., Nolan, D. (2002). Teaching Statistics: A Bag of Tricks. Oxford University Press.

Glänzel, W., Moed, H. F. (2013). Opinion paper: Thoughts and facts on bibliometric indicators. Scientometrics, 96, 381–394.

Huang, D.-w. (2016). Positive correlation between quality and quantity in academic journals. Journal of Informetrics, 10, 329–335.

Katz, J. S. (2000). Scale-independent indicators and research evaluation. Science and Public Policy, 27, 23–36.

Leydesdorff, L., & Bornmann, L. (2011). Integrated Impact Indicators (I3) compared with Impact Factors (IFs): An alternative design with policy implications. Journal of the American Society for Information Science and Technology, 62 (11), 2133–2146.

Merton, R. (1968). The Matthew effect in science. Science, 159, 56–63.

Moed, H. F. (2005). Citation Analysis in Research Evaluation. Springer.

Newman, M. E. J. (2005). Power laws, Pareto distributions and Zipf's law. Contemporary Physics, 46 (5), 323–351.

Radicchi, F., Fortunato, S., Castellano, C. (2008). Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences of the United States of America, 105, 17268–17272.

Redner, S. (1998). How popular is your paper? An empirical study of the citation distribution. European Physical Journal B, 4, 131–134.

Rossner, M., Van Epps, H., Hill, E. (2007). Show me the data. Journal of Cell Biology, 179, 1091–1092.

Rousseau, R. (2016). Positive correlation between journal production and journal impact factor. Journal of Informetrics, 10, 567–568.

Rousseau, R., Van Hooydonk, G. (1996). Journal production and journal impact factors. Journal of the American Society for Information Science, 47, 775–780.

San Francisco Declaration on Research Assessment (2012). Available at

Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628–638.

Seglen, P. O. (1997). Why the Impact Factor of journals should not be used for evaluating research. British Medical Journal, 314, 498–502.

Stephan, P., Veugelers, R., Wang, J. (2017). Reviewers are blinkered by bibliometrics. Nature, 544, 411–412.

Thelwall, M., Fairclough, R. (2015). Geometric journal impact factors correcting for individual highly cited articles. Journal of Informetrics, 9, 263–272.

van Raan, A. F. (2008). Bibliometric statistical properties of the 100 largest European research universities: Prevalent scaling rules in the science system. Journal of the American Society for Information Science & Technology, 59, 461–475.

Wainer, H., Zwerling, H. (2006). Legal and empirical evidence that smaller schools do not improve student achievement. The Phi Delta Kappan, 87, 300–303.

Wainer, H. (2007). The most dangerous equation. American Scientist, May–June, 249–256.

Wall, H. J. (2009). Don't get skewed over by journal rankings. The B.E. Journal of Economic Analysis & Policy, 9, article 34.

Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., van Eck, N. J., van Leeuwen, T. N., van Raan, A. F. J., Visser, M. S., and Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63 (12), 2419–2432.

Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 10, 365–391.