
Assessment and support of emerging research groups

by Henk F. Moed, et al.

The starting point of this paper is a desktop research assessment model that does not properly take into account the complexities of research assessment, but instead rests on a series of highly simplifying, questionable assumptions about the availability, validity, and evaluative significance of research performance indicators, and about funding policy criteria. The paper presents a critique of this model and proposes alternative assessment approaches: building on an explicit evaluative framework, focusing on preconditions to performance or communication effectiveness rather than on performance itself, combining metrics with expert knowledge, and using metrics primarily to set minimum standards. Giving special attention to early-career scientists in emerging research groups, the paper discusses the limits of classical bibliometric indicators and altmetrics. Finally, it proposes alternative funding formulas for research institutions aimed at supporting emerging research groups.



