Bibliometric indices are ubiquitous in measuring the scientific impact of journals, institutions, research groups and individual researchers alike. At some stage of their evaluation process, scientific committees, administrators and policy makers often rely on citation data to assess scientific output. Among the variety of measures that can be derived from raw citation data, the h-index [1] has become the most popular bibliometric criterion for ranking research accomplishment [2, 3, 4].
The h-index of an individual is the largest number h of his or her published papers that have each received at least h citations. Owing to its simple definition, its ease of computation from existing bibliographic databases, its robustness against errors in the long tail of the citation-rank distribution [5] and its good properties when quantifying scientific production and its impact, the h-index has been widely adopted as a reliable measure of research output. Nowadays, the automatic calculation of h-indices is a built-in feature of major bibliographic databases such as Google Scholar, ResearchGate, Scopus and Web of Science. Although its precise value depends on the database used, the h-index yields robust rankings of both individuals and research institutions.
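This definition is simple enough to compute in a few lines from a researcher's raw citation counts; a minimal sketch:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Rank papers by decreasing citation count.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank      # this paper still counts toward h
        else:
            break         # every remaining paper has fewer citations than its rank
    return h

# Example: five papers with these citation counts give h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

Note that the result is insensitive to the exact counts of the most-cited papers, which is the robustness against long-tail errors mentioned above.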
Shortcomings of the h-index have been pointed out, although many other available indicators suffer from similar biases. Among the arguments raised against the h-index one can enumerate: it does not allow one to compare scientists from different disciplines [6], it does not take into account multi-authored papers [7, 8, 9, 10, 11, 12], it is a time-dependent quantity [13, 14, 15, 16], it does not highlight the citation scores of top articles, and it does not take into account the context of the citations in the papers. Consequently, several variations of the h-index have been proposed to overcome some of these drawbacks [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. A non-exhaustive list of such bibliometric indices includes the g-index [17], the h(2)-index [18], the A-, R- and AR-indices [20, 21], among others [19, 22, 23, 24]. Interestingly, all of them are based on increasingly sophisticated analyses of raw citation data, which makes them more difficult to compute. Moreover, although designed to surmount the perceived shortcomings of the h-index, they turn out to be strongly positively correlated with it [4, 27]. This has led to the general consensus that no other bibliometric indicator of research output is clearly preferable to the h-index [4, 31].
Since it is desirable to evaluate researchers and/or institutions on more than a single quality, we propose to supplement the h-index with a new quantitative indicator specialized in interdisciplinary research and readership extent. This journal-based index is by construction independent of citation counts and measures different qualities that can be useful to assess, for example, when hiring individuals who are expected to teach and advise, positions for which a high degree of specialization is not required.
A new two-dimensional index
Many refinements of the h-index, on both the theoretical and empirical sides, were based on alternative analyses of citation data. We believe that this is the reason for their strong positive correlation with the h-index itself. Therefore, the quest for other bibliometric indices should aim at complementing the h-index rather than replacing it. A new index should be a quantitative indicator whose core is independent of citation data and that is as easy to compute from existing bibliographic databases as the h-index. It should apply to individuals, research groups and institutions alike, and should highlight achievements other than research output, which is already well quantified by the h-index.
Noticing that the h-index is bounded by the total number of published papers, N, we have looked for another potentially interesting and simple quantity that satisfies the same property. A straightforward one is the number of different journals in which these papers were published. This number, denoted by j, is not directly available in existing bibliographic databases but can easily be extracted from them, either by direct counting or using simple scripts. Obviously j ≤ N, and since j is not based on citation counts one would expect it to be at most weakly correlated with research output. Indeed, common publishing habits show that a paper is submitted to a given journal because of the area of research studied, to reach a specific scientific community, to fit a specific format, or to face a more or less selective refereeing procedure. One could argue that such a choice is mainly dictated by the journal's impact factor, its ranking by a target institution or its interdisciplinary readership. Nevertheless, if the published research is worthwhile it will positively impact the h-index, and consequently research output, regardless of the journal's ranking. Moreover, a paper in a high-impact journal with an interdisciplinary readership is often followed by a series of detailed papers on the same subject intended for a specific scientific community, which thus contributes to increasing j.
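Extracting j from an exported publication list is indeed a one-line script. A minimal sketch, assuming records have already been parsed into dicts with a hypothetical "journal" field (actual field names vary between Web of Science, Scopus and Scholar exports):

```python
def journal_count(records):
    """Count the number of distinct journals in a publication list.

    `records` is a list of dicts carrying a 'journal' field (an assumed
    name; real exports label this field differently).  Names are
    normalized to lowercase, with surrounding whitespace stripped, to
    absorb trivial formatting differences between entries.
    """
    return len({rec["journal"].strip().lower() for rec in records})

papers = [
    {"journal": "Physical Review Letters"},
    {"journal": "physical review letters "},  # same journal, different case
    {"journal": "Journal of Informetrics"},
]
print(journal_count(papers))  # → 2
```

In practice one would also want to merge abbreviated and full journal titles, which requires a lookup table rather than simple normalization.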
However, what type of research achievement is embedded in this new number? On the one hand, publishing in a small set of journals could indicate mono-thematic research interests, while a multidisciplinary researcher is prone to publish in a wide range of journals. On the other hand, a researcher could impact a single field by publishing in the same journal on a single subject, and conversely an author may publish on various subjects in many journals while achieving only moderate impact. These two extreme and antagonistic examples are in favour of taking into account both j and the h-index for more elaborate ranking purposes. Although j may exhibit some flaws, which will be discussed later, one can reasonably assume that it carries information about the multidisciplinary nature of the research, the diversity of interests of the researcher, or the extent of his or her readership. While these features are indicators of research quality, they are clearly not directly provided by the h-index.
To test the relevance of this two-dimensional index, we have chosen as a panel 95 permanent physicists belonging to the same department, namely the Physics Department of ENS Paris, which is divided into three sub-departments (“Laboratoires”) with more or less specific research areas: condensed matter physics (Laboratoire Pierre Aigrain, LPA, 31 researchers), theoretical and statistical physics (Laboratoire de Physique Théorique, LPT, 26 researchers), and statistical physics, biophysics and nonlinear physics (Laboratoire de Physique Statistique, LPS, 38 researchers). The h-index and j of each individual were retrieved from Web of Science in July 2016. All types of publications were taken into account. Fig. 1 shows the results of this survey: the scatter plot of these two quantities, though containing a global trend, is dominated by scatter noise, illustrating the low correlation between the two signals. This confirms that, when the department is taken as a whole, the individual bibliometric indicators h and j carry complementary information. This example justifies the use of both h and j to characterise research accomplishment. In order to sharpen the comparison between individuals, we propose a different representation of these two bibliometric indicators. Since both h and j are bounded by N, one can define a “complex” representation of the two-dimensional index as follows:

z = h + i j = ρ e^{iπθ/2},  with  ρ = √(h² + j²)  and  θ = (2/π) arctan(j/h).  (2)
Obviously, the two-dimensional index satisfies ρ ≤ √2·N and 0 < θ ≤ 1. This representation separates the extensive contribution ρ, characterizing the volume of quality output, from an intensive characterization of its multidisciplinarity: small values of the argument θ indicate strong specialization, while θ close to one indicates thematic dispersion with rather low impact. Fig. 2 reproduces the same set of data as Fig. 1 in these two variables. Interestingly, while ρ is widely distributed, the average of θ discriminates more sharply between the different “Laboratoires”. One can even identify that the overlap between LPT and LPS is ensured by LPT researchers working in statistical physics. The overlap between the LPA and LPS data is due to the fact that LPA benefits from a large body of publication material owing to the applied part of its research area (readership extent), while LPS researchers often work in different fields (multidisciplinarity). Clearly θ differentiates between communities and highlights readership diversity and research multidisciplinarity. The two indices together allow for a better qualification of research achievement than a single evaluator. Moreover, this alternative representation increases the level of independence between the two components: in our dataset the correlation coefficient is 0.7 for h and j, but markedly lower for ρ and θ.
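The conversion from (h, j) to the two components is elementary. A minimal sketch, assuming the complex representation z = h + ij with modulus ρ and argument rescaled to θ = (2/π)·arctan(j/h) so that 0 < θ ≤ 1:

```python
import math

def two_dimensional_index(h, j):
    """Return (rho, theta) for the complex representation z = h + i*j.

    rho   : modulus, an extensive measure of the volume of quality output
    theta : argument rescaled to (0, 1]; small theta signals strong
            specialization, theta near 1 signals dispersion with low impact
    """
    rho = math.hypot(h, j)                    # |h + i*j|
    theta = (2 / math.pi) * math.atan2(j, h)  # rescaled argument
    return rho, theta

# Two researchers with the same rho but opposite profiles,
# plus the balanced case h = j:
print(two_dimensional_index(20, 1))   # specialist: theta close to 0
print(two_dimensional_index(1, 20))   # disperse:   theta close to 1
print(two_dimensional_index(10, 10))  # balanced:   theta = 0.5
```

Note that the first two profiles share the same ρ and are only told apart by θ, which is exactly the degeneracy-lifting role of the second component.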
Using the h-index to compare scientists from different research areas is a quite difficult task due to the inherent differences between research fields. Although this is not an exclusive problem of the h-index, there have been several efforts in the literature to overcome it. Interestingly, our work quantitatively demonstrates a bias associated with any citation-only index, namely its inability to capture research diversity. This argues in favour of defining j as an additional index to supplement the h-index. However, the representation using the two-dimensional index (ρ, θ) is preferable since it yields weakly correlated components. Furthermore, while θ is bounded by 1 and finely separates the different disciplines, ρ, which scales with the number of papers, balances impact and diversity. In the following, we discuss questions that this two-dimensional index may raise and propose refinements to tackle its possible drawbacks.
There are generalist journals that encompass different research areas, and researchers publishing mostly in such journals could be unduly downgraded by the proposed index. To overcome this difficulty, one could for example modify j so as to categorize papers by research topic rather than by journal. Unfortunately, there is no uniform nomenclature across journal publishers (PACS numbers, keywords, subject classifications, …). One would need a standard classification of research topics equivalent to what the Digital Object Identifier (DOI) system provides for publications.
Let us try to compare two individuals with the same ρ. In an extreme case, an individual with j = N and h = 1, though carrying out multidisciplinary research, is not very successful since his research is not followed up by others. On the other side of the spectrum, a scientist with h = N and j = 1, though very exclusive in his publication choices, can be considered much more successful owing to his citation rate. Hence having θ = 1 is never a good thing, while having θ close to 0 can be of interest in some situations. That is why we claim that it would be ideal to have h ≃ j, and hence θ ≃ 1/2, thus balancing recognition and curiosity.
The choice of the “perfect” θ would of course strongly depend on the kind of target one aims at, but it lifts a degeneracy and provides decision makers with an additional lever. The existence of a two-dimensional index refines the judgement depending on what one is looking for. For example, hiring a researcher for a specific research position is not the same as hiring one for his or her teaching abilities: the former would call for a smaller θ than the latter. Furthermore, evaluating an institution through citation metrics alone is not fair, because interdisciplinary research is an indisputable quality of an institution: it illustrates its local and international attractiveness and the diversity of the education offered to its students.
We claim that categorizing individuals along a single axis is too limiting an approach, and that additional, independent indicators should complement it. Our main axiomatic line is that, independently of how “good or bad” a researcher is ranked by the h-index, he or she may exhibit alternative qualities, like being a specialist or, conversely, foraging for results in various scientific areas. We claim that the two-dimensional index we introduced in Eq. (2) is orthogonal to the usually accepted notion of “good or bad”. Finally, we hope our proposition will trigger more bibliometric studies. A full-scale analysis of the two-dimensional index using different databases is an interesting question that would test its robustness and ease of computation. Before being adopted, it must be applied to real evaluation problems and situations: for example, additional studies comparing institutions should be developed. Nevertheless, we believe that combining the two numbers will resolve degenerate situations and balance impact against scope. Ultimately, adding indices as legitimate as the h-index that quantify different facets of research achievement will limit the misuse of the h-index.
-  Hirsch, J.: An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences 102, 16569–16572 (2005)
-  Ball, P.: Index aims for fair ranking of scientists. Nature 436, 900 (2005)
-  Fersht, A.: The most influential journals: Impact factor and eigenfactor. Proceedings of the National Academy of Sciences 106, 6883–6884 (2009)
-  Alonso, S., Cabrerizo, F.J., Herrera-Viedma, E., Herrera, F.: h-index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics 3, 273–289 (2009)
-  Vanclay, J.K.: On the robustness of the h-index. Journal of the American Society for Information Science and Technology 58(10), 1547–1550 (2007)
-  Batista, P.D., Campiteli, M.G., Kinouchi, O., Martinez, A.S.: Is it possible to compare researchers with different scientific interests? Scientometrics 68, 179–189 (2006)
-  Iglesias, J.E., Pecharromán, C.: Scaling the h-index for different scientific ISI fields. Scientometrics 73(3), 303–320 (2007)
-  Egghe, L.: Mathematical theory of the h- and g-index in case of fractional counting of authorship. Journal of the American Society for Information Science and Technology 59, 1608–1616 (2008)
-  Radicchi, F., Fortunato, S., Castellano, C.: Universality of citation distributions: toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences 105, 17268–17272 (2008)
-  Sekercioglu, C.H.: Quantifying coauthor contributions. Science 322, 371–371 (2008)
-  Zhang, C.T.: A proposal for calculating weighted citations based on author rank. EMBO reports 10, 416–417 (2009)
-  Hirsch, J.: An index to quantify an individual’s scientific research output that takes into account the effect of multiple coauthorship. Scientometrics 85, 741–754 (2010)
-  Burrell, Q.L.: On the h-index, the size of the Hirsch core and Jin’s A-index. Journal of Informetrics 1, 170–177 (2007)
-  Eom, Y.H., Fortunato, S.: Characterizing and modeling citation dynamics. PLoS ONE 6(9), e24926 (2011)
-  Acuna, D.E., Allesina, S., Kording, K.P.: Future impact: Predicting scientific success. Nature 489, 201–202 (2012)
-  Wang, D., Song, C., Barabási, A.L.: Quantifying long-term scientific impact. Science 342, 127–132 (2013)
-  Egghe, L.: Theory and practice of the g-index. Scientometrics 69(1), 131–152 (2006)
-  Kosmulski, M.: A new Hirsch-type index saves time and works equally well as the original h-index. ISSI Newsletter 2, 4–6 (2006)
-  Sidiropoulos, A., Katsaros, D., Manolopoulos, Y.: Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 72, 253–280 (2007)
-  Jin, B.H., Liang, L.M., Rousseau, R., Egghe, L.: The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin 52, 855–863 (2007)
-  Jin, B.H.: The AR-index: complementing the h-index. ISSI Newsletter 3, 6–6 (2007)
-  van Eck, N.J., Waltman, L.: Generalizing the h- and g-indices. Journal of Informetrics 2, 263–271 (2008)
-  Rousseau, R., Ye, F.: A proposal for a dynamic h-type index. Journal of the American Society for Information Science and Technology 59, 1853–1855 (2008)
-  Egghe, L., Rousseau, R.: An h-index weighted by citation impact. Information Processing and Management 44, 770–780 (2008)
-  Ruane, F., Tol, R.: Rational (successive) h-indices: An application to economics in the Republic of Ireland. Scientometrics 75, 395–405 (2008)
-  Anderson, T., Hankin, K., Killworth, P.: Beyond the Durfee square: Enhancing the h-index to score total publication output. Scientometrics 76, 577–588 (2008)
-  Bornmann, L., Mutz, R., Daniel, H.D.: Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59, 830–837 (2008)
-  Antonakis, J., Lalive, R.: Quantifying scholarly impact: IQp versus the Hirsch h. Journal of the American Society for Information Science and Technology 59, 956–969 (2008)
-  Schreiber, M.: A case study of the modified hirsch index accounting for multiple coauthors. Journal of the American Society for Information Science and Technology 60, 1274–1282 (2009)
-  Guns, R., Rousseau, R.: Real and rational variants of the h-index and the g-index. Journal of Informetrics 3, 64–71 (2009)
-  Hirsch, J.: Does the h-index have predictive power? Proceedings of the National Academy of Sciences 104, 19193–19198 (2007)