Do disruption index indicators measure what they propose to measure? The comparison of several indicator variants with assessments by peers

by Lutz Bornmann, et al.

Recently, Wu, Wang, and Evans (2019) and Bu, Waltman, and Huang (2019) proposed a new family of indicators that measure whether a scientific publication is disruptive to a field or tradition of research. Such disruptive influences are characterized by later papers that cite a focal paper but not its cited references. In this study, we are interested in the question of convergent validity, i.e., whether these disruption indicators measure what they propose to measure ('disruptiveness'). We used external criteria of newness to examine convergent validity: in the post-publication peer review system of F1000Prime, experts assess whether the research reported in a paper fulfills certain criteria (e.g., whether it reports new findings). This study is based on 120,179 papers from F1000Prime published between 2000 and 2016. In the first part of the study, we discuss the indicators. Based on the insights from this discussion, we propose alternative variants of disruption indicators. In the second part, we investigate the convergent validity of the original indicators and the (possibly) improved variants. Although the results of a factor analysis show that the different variants measure similar dimensions, the results of regression analyses reveal that one variant (DI5) performs slightly better than the others.
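To make the idea concrete, the basic disruption index counts later papers that cite the focal paper but not its references, versus those that cite both. Below is a minimal sketch of this computation in the spirit of Wu, Wang, and Evans (2019); the function name, variable names, and the toy citation data are illustrative assumptions, not taken from the study itself.

```python
def disruption_index(focal_refs, citing_papers):
    """Compute DI = (n_f - n_b) / (n_f + n_b + n_r).

    focal_refs: set of papers cited by the focal paper.
    citing_papers: dict mapping each later paper to the set of
        papers it cites (the focal paper is denoted "focal").
    n_f: papers citing the focal paper but none of its references.
    n_b: papers citing both the focal paper and >= 1 reference.
    n_r: papers citing >= 1 reference but not the focal paper.
    """
    n_f = n_b = n_r = 0
    for cites in citing_papers.values():
        cites_focal = "focal" in cites
        cites_refs = bool(cites & focal_refs)
        if cites_focal and not cites_refs:
            n_f += 1
        elif cites_focal and cites_refs:
            n_b += 1
        elif cites_refs:
            n_r += 1
    denom = n_f + n_b + n_r
    return (n_f - n_b) / denom if denom else 0.0

# Toy example: two papers cite only the focal paper, one cites it
# together with a reference, one cites only a reference:
# DI = (2 - 1) / (2 + 1 + 1) = 0.25
refs = {"r1", "r2"}
citing = {
    "p1": {"focal"},
    "p2": {"focal", "x"},
    "p3": {"focal", "r1"},
    "p4": {"r2"},
}
print(disruption_index(refs, citing))  # 0.25
```

The indicator variants discussed in the paper (e.g., DI5) modify how these citation counts enter the formula, but they all rest on the same three-way partition of citing papers sketched here.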


