When standard network measures fail to rank journals: A theoretical and empirical analysis

06/29/2021
by Giacomo Vaccario, et al.

Journal rankings are widely used and are often based on citation data combined with a network perspective. We argue that some of these network-based rankings can produce misleading results. From a theoretical point of view, we show that the standard network modelling of citation data at the journal level (i.e., the projection of paper citations onto journals) introduces fictitious relations among journals. To overcome this problem, we propose a citation-path perspective, and we empirically show that rankings based on the network perspective and on the citation-path perspective differ substantially. Based on our theoretical and empirical analysis, we highlight the limitations of standard network metrics and propose a method that overcomes them to compute journal rankings.
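A minimal sketch of the projection problem described above, using toy data of our own construction (not from the paper): two unrelated paper citations that happen to share a journal produce a journal-level two-step path that no actual chain of paper citations supports, i.e., a fictitious relation.

```python
# Toy data (illustrative assumption): four papers and the journals they appear in.
journal_of = {"p1": "J1", "p2": "J2", "p3": "J2", "p4": "J3"}
# Paper-level citations: p1 cites p2, and p3 cites p4. Note p2 cites nothing.
paper_cites = {"p1": ["p2"], "p3": ["p4"]}

# Standard projection: every paper citation becomes a journal-journal link.
journal_links = {(journal_of[src], journal_of[tgt])
                 for src, tgts in paper_cites.items() for tgt in tgts}
print(sorted(journal_links))  # [('J1', 'J2'), ('J2', 'J3')]
# The journal network now contains J1 -> J2 -> J3, so path-based network
# measures would credit J3 with influence flowing from J1 ...

def journals_reachable_from(paper):
    """Journals reachable by following actual chains of paper citations."""
    stack, seen = [paper], set()
    while stack:
        p = stack.pop()
        for q in paper_cites.get(p, []):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return {journal_of[q] for q in seen}

# ... but no chain of paper citations runs from J1 to J3: the two-step
# journal path is an artefact of the projection.
print(journals_reachable_from("p1"))  # {'J2'}
```

The citation-path perspective proposed in the paper avoids exactly this artefact by only counting journal-level paths that are backed by real chains of paper citations.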

