Can We Count on Social Media Metrics? First Insights into the Active Scholarly Use of Social Media

04/08/2018
by Maryam Mehrazar, et al.
ZBW

Measuring research impact is important for ranking publications in academic search engines and for research evaluation. Social media metrics or altmetrics measure the impact of scientific work based on social media activity. Altmetrics are complementary to traditional, citation-based metrics, e.g. allowing the assessment of new publications for which citations are not yet available. Despite the increasing importance of altmetrics, their characteristics are not well understood: Until now it has not been researched what kind of researchers are actively using which social media services and why - important questions for scientific impact prediction. Based on a survey among 3,430 scientists, we uncover previously unknown and significant differences between social media services: We identify services which attract young and experienced researchers, respectively, and detect differences in usage motivations. Our findings have direct implications for the future design of altmetrics for scientific impact prediction.


1. Introduction

The use of the web is an integral part of scientific work. On social media, researchers discover new research, discuss research ideas with fellows and disseminate research results to the public and to the scientific community (Priem et al., 2011; Hadgu and Jäschke, 2014; Jordan, 2015). Additionally, academic search engines support scientists in finding scholarly literature.

In order to improve their performance, academic search engines employ scholarly metrics: citation-based measures of the scientific impact of authors and scientific works (Sugimoto et al., 2017). Scholarly metrics are also important for other applications such as hiring decisions and the evaluation of projects and grant applications (Project, 2014; Beel and Gipp, 2009).

One drawback of traditional, citation-based metrics is that citations are not available for new publications – the first citation of a paper may take years. Additionally, scholarly metrics do not cover the scientific impact on the web. Therefore, social media metrics or altmetrics were introduced as a complement to traditional metrics: By analysing usage patterns on social media, altmetrics evaluate the quality of scholarly products through their impact on the web (Priem et al., 2011). Altmetrics which predict the scientific impact of scholarly work (Zoller et al., 2015) will likely play a central role in many future applications such as scientific literature retrieval.

To the best of our knowledge, current altmetrics data providers such as altmetric.com (Priem et al., 2011; Priem, 2015) or PlumX (Champieux, 2015) use sums or simplistic weightings to aggregate altmetrics (e.g. the number of mentions) from different social media services. For instance, view counts are aggregated across services using weighted sums. It has not yet been investigated whether this practice reflects the complexity of actions on social media. In order to improve altmetrics for scientific impact prediction, it is essential to understand the demographics and motives of scholarly social media users. If social media services differ significantly in the demographics or motives of their users, the mechanisms of altmetrics would have to be improved: one example could be a service-specific correction for the share of postdocs, who are known to be highly productive (Carayol and Matt, 2006) and thus generate more of the citations that are to be predicted.
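The difference between the two aggregation styles discussed above can be sketched in a few lines: a plain weighted sum, and a hypothetical service-specific correction by postdoc share. This is an illustrative toy model, not the actual formula of any provider; the service weights and shares are invented values, and the 0.19 baseline merely echoes the postdoc share reported in Section 3.1.

```python
# Toy comparison of altmetrics aggregation schemes. All weights, mention
# counts and postdoc shares below are invented for illustration.

def weighted_sum(mentions, weights):
    """Plain weighted sum over services, as used by current providers."""
    return sum(weights[s] * n for s, n in mentions.items())

def corrected_sum(mentions, weights, postdoc_share, baseline=0.19):
    """Hypothetical correction: up-weight services whose active users
    include many postdocs, who produce (and later attract) more citations."""
    return sum(
        weights[s] * n * (postdoc_share[s] / baseline)
        for s, n in mentions.items()
    )

mentions = {"twitter": 12, "stackexchange": 3}
weights = {"twitter": 1.0, "stackexchange": 0.5}
postdoc_share = {"twitter": 0.25, "stackexchange": 0.10}

plain = weighted_sum(mentions, weights)                      # 13.5
corrected = corrected_sum(mentions, weights, postdoc_share)  # > plain here
```

In this toy example the correction raises the aggregate score because the postdoc-heavy service dominates the mention counts; a real correction would need the per-service demographics this paper sets out to estimate.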

This paper analyses the results of a survey among 3,430 scientists, providing first insights into the scholarly use of social media by detecting and describing (i) demographic differences of active scholarly users of social media and (ii) variations in the motivation for scholarly use of social media between services.

It is well known that a small share of active users in social media contributes the majority of observed activities, the so-called “90:9:1 rule” (Nielsen, 2006). As a result, active users are responsible for most of the activities measured by altmetrics. Unlike previous analyses of the scholarly use of social media (Jordan, 2015; Zoller et al., 2015), we therefore only consider active users, i.e. researchers who use social media at least weekly for scientific purposes.

Figure 1. Academic experience of active users per service. Programming-related services (StackExchange, StackOverflow, GitHub) are used by young scientists, while networking services (Google+, Twitter, LinkedIn), SlideShare and Wikipedia have the highest percentage of experienced scientists. The legend shows services ordered by their share of young scientists (0-4 years of experience). The services with the highest share of young researchers (StackExchange) and the highest share of experienced academics (Google+) are highlighted. Social media services show substantial differences in the experience level of active scholarly users.

2. Related Work

Social media have become increasingly popular for scholarly communication (Ponte and Simon, 2011; Tenopir et al., 2013). Several metrics based on scholarly social media activities have been shown to correlate with traditional, citation metrics (Zoller et al., 2015; Mohammadi et al., 2015), though previous studies have pointed out that there is wide variability in the social media use of researchers. Differences have been observed in age, academic role, discipline and country, among others (Hadgu and Jäschke, 2014; Procter et al., 2010; Nicholas and Rowlands, 2011; Bowman, 2015; Mikki et al., 2015).

Using metrics based on social media activities comes with various challenges such as the assurance of data quality, the consideration of the heterogeneity of acts, users and motivations on social media, and the prevention of bias (Haustein, 2016; Bornmann, 2014). One key problem of scholarly social media data is the systematic bias towards scholars with certain demographic characteristics such as bias towards younger users (Priem, 2015) and towards users with a professional interest in research (Neylon and Wu, 2009). Several studies state that the lack of accurate user statistics or sample descriptions for social media sites complicates the quantification of these biases (Bornmann, 2014; Sugimoto et al., 2017).

Scholars use social media for various reasons. Van Noorden (Van Noorden, 2014) identified multiple categories of motivations for scholarly social media use: contacting peers, posting content, sharing links to authored content, actively discussing research, commenting on research, following discussions, tracking metrics, discovering jobs, discovering peers, discovering recommended papers, offering a contact possibility and curiosity. Jordan (Jordan, 2015) identified motivation categories by manually coding questions asked by researchers on Academia.edu.

It is well known that a small minority of active social media users is responsible for a large share of activities (Whittaker et al., 2003; Kittur et al., 2007). Russo and Peacock (2009) give an overview of multiple studies on various social media sites, and Kunegis et al. (2012) show statistics for dozens of social networks, all confirming the effect. To the best of our knowledge, there is no study on the demographics and motivations of active scholarly users on social media.

3. Data and Methods

We analyse data from an exploratory online survey on the professional scholarly use of social media (questionnaire: https://github.com/marymm/-metrics/raw/master/questionnaire.pdf), which we conducted as part of the larger *metrics research project. Among other things, we asked participants for their research experience, their academic role, the social media services they use, and how often and why they are using social media services.

3.1. Survey data

Our survey on social media usage was distributed via multiple channels: We directly contacted authors who had at least one publication after 2015 with an email address listed in the Web of Science or RePEc, and we posted the survey to multiple mailing lists related to economics, the social sciences and their subfields. As the survey was conducted as part of an interdisciplinary project involving partners from economics and social sciences, our main target group was economists and social scientists.

More than 3,430 international researchers participated in our survey from March to May 2017 with a response rate of about 6%. More than half of the participants – 1,731 researchers – use at least one functionality (e.g. like, share or post) of a social media service per week. We call these researchers active users.
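The "active user" criterion can be made concrete with a small sketch; the encoding of reported usage frequencies below is invented, not the actual survey coding.

```python
# Sketch of the active-user filter: a respondent counts as active if they
# use at least one functionality of at least one service weekly or more
# often. The frequency labels and example respondents are invented.

WEEKLY_OR_MORE = {"daily", "weekly"}

def is_active(usage_by_service):
    """usage_by_service maps service name -> reported usage frequency."""
    return any(freq in WEEKLY_OR_MORE for freq in usage_by_service.values())

respondents = [
    {"twitter": "monthly", "wikipedia": "weekly"},  # active (Wikipedia)
    {"linkedin": "rarely"},                         # not active
]
active = [r for r in respondents if is_active(r)]   # keeps only the first
```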

Most of the researchers are from the fields of economics (60%) and social sciences (22%). Researchers from 84 countries participated, the majority of them from Germany (51%), followed by the US (10%) and the UK (5%). Ages of participants range from 19 to 89 (median age 38). The distribution of academic roles is as follows: About 44% of the participants are professors, followed by PhD students / research assistants (31%) and postdocs (19%). This is in line with studies showing that professors together with PhD students have the highest share of profiles on academic social media (Mikki et al., 2015; Ortega, 2015).

Though our sample is not representative, the high share of active users in our survey allows us to analyse differences between social media services. If we find significant differences between services in our survey, we also expect to find differences in the parent population.

3.2. Experience differences and motivations

In our survey, we asked participants to state their research experience since graduation using predefined ordinal categories (0-4 years etc.). To detect significant differences in the distribution of research experience, we look at all possible pairs among the twelve most-mentioned services and use pairwise tests on the category counts of the participants' answers. We apply the Benjamini-Hochberg procedure to control the false discovery rate, and only pairs with strong effect sizes were considered.
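The multiple-testing step can be illustrated with a minimal pure-Python implementation of the Benjamini-Hochberg procedure; the p-values and the false discovery rate below are invented, since the concrete values are not reproduced here.

```python
# Minimal sketch of the Benjamini-Hochberg procedure, applied to p-values
# from pairwise tests between services. Input values are hypothetical.

def benjamini_hochberg(p_values, fdr=0.1):
    """Return indices of hypotheses rejected at the given FDR.

    Sort p-values ascending; find the largest rank k with
    p_(k) <= (k / m) * fdr; reject all hypotheses up to that rank.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            threshold_rank = rank  # keep the largest qualifying rank
    return sorted(order[:threshold_rank])

# One p-value per service pair (hypothetical values).
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
rejected = benjamini_hochberg(pvals, fdr=0.1)   # indices of the 4 smallest
```

Note that BH rejects every hypothesis up to the largest qualifying rank, even if an intermediate p-value exceeds its own per-rank threshold.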

Our survey contains a question on reasons for using social media. In order to detect latent motivations for using social media, we ran Latent Dirichlet Allocation (LDA) (Blei et al., 2003), the most common topic model, on the free-text answers. Topic models detect sets of semantically related words using the co-occurrence of words in documents. We set the number of topics to 10 (the lowest number yielding meaningful topics) and used sparse, symmetric document-topic and topic-word Dirichlet priors. Negative answers (e.g. “none”) were manually deleted and stopwords (from NLTK (Loper and Bird, 2002)) were removed from the remaining answers, resulting in 997 answer texts.
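The answer preprocessing described above can be sketched as follows; the tiny stopword list stands in for the NLTK list used in the paper, and the example answers are invented.

```python
# Sketch of the preprocessing pipeline: drop negative answers, lowercase,
# tokenise and remove stopwords before topic modelling. The stopword set is
# a stand-in for NLTK's English list; the answers are invented examples.

NEGATIVE = {"none", "n/a", "-", "no"}
STOPWORDS = {"i", "to", "the", "my", "of", "and", "with", "for"}

def preprocess(answers):
    cleaned = []
    for text in answers:
        if text.strip().lower() in NEGATIVE:
            continue  # negative answers are dropped
        tokens = [t for t in text.lower().split() if t not in STOPWORDS]
        if tokens:
            cleaned.append(tokens)
    return cleaned

answers = ["I share papers with my peers", "none", "To spread interesting results"]
docs = preprocess(answers)   # token lists ready to feed into an LDA model
```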

4. Results

In this section, we look at differences between social media services in terms of demographics and motivations of active users.

Figure 2. Role distribution of active scholarly users for selected services. We found strong differences between services: StackExchange is mostly used by research assistants and PhD students, while in our survey Google+, SlideShare and Wikipedia are mainly used by professors. While the share of professors is roughly the same for Wikipedia and Google+, the share of postdocs is twice as high for Google+, indicating a relationship between role and service use.

4.1. Research experience

To check for demographic differences between the active users of services, we plot the distribution of research experience among active users for the twelve most-frequently named services, shown in Figure 1. We find that services for software development and question answering – StackExchange, StackOverflow and GitHub – have the highest share of young researchers. On the other hand, services for networking like Google+, Twitter and LinkedIn as well as services for spreading research and information to the general public, like SlideShare and Wikipedia, have a far higher share of experienced researchers. We identified multiple pairs of services with significant differences and large effect sizes: The difference between Google+ and StackExchange is significant (effect size 0.55), as are Google+'s differences to LinkedIn, Wikipedia and Academia.edu. Google+ additionally differs significantly from StackOverflow, GitHub and YouTube. Other pairs with significant differences are StackOverflow–LinkedIn and GitHub–Wikipedia. This is the first evidence that research experience influences the active scholarly use of social media: Altmetrics based on social media services with a focus on software development will be biased towards young researchers, while metrics on services mainly used for networking will be biased towards the actions of experienced researchers.

To take a closer look at this finding, we compare the distribution of academic roles between services with significant differences in user experience in Figure 2. Google+ and Wikipedia have the highest share of experienced users. Looking at their distribution of academic roles, we see that Wikipedia has twice as many PhD students as Google+, while the latter has about twice as many postdocs compared to Wikipedia.

Both findings indicate that different social media services fulfill different demands and thus both role and experience distributions of their users vary.

4.2. User motivations

The motivations for using social media are known to vary among scholars (Van Noorden, 2014; Jordan, 2015). In our survey, we asked researchers to name reasons for using social media. We ran LDA (Blei et al., 2003) on their answers to detect these latent motivations.

Table 1. Top-5 words for topics detected in the answers on “What are other common reasons for you to like/retweet/share/… academic research on […] services?”. The topics are interpretable and expose latent motives of researchers active on social media.

Topic 0    Topic 1    Topic 2      Topic 3      Topic 4
share      relevant   find         get          interest
peers      others     interesting  information  interesting
access     research   work         new          results
read       think      share        topic        spread
people     work       knowledge    findings     content

Topic 5      Topic 6    Topic 7    Topic 8   Topic 9
interesting  articles   research   research  make
topics       download   work       share     researchers
show         public     relevance  like      important
article      news       good       academic  work
find         available  friends    retweet   colleagues

Figure 3. Analysis of topics found in user responses to the question on reasons for using social media: (a) average topic probabilities for active users; (b) deviation from the average topic probabilities for selected services. The topics can be interpreted as latent motives and vary between services; the differences in motives could explain the observed variations in research experience and academic roles.

Table 1 shows the detected topics. By looking at the top words of the topics and at answers with high topic probabilities in the corpus, we found that the topics can be interpreted as follows: Topic 0: sharing and accessing papers of peers/other people, Topic 1: users who think that their research is relevant to others, Topic 2: finding and sharing interesting works, Topic 3: getting information on new topics, Topic 4: spreading interesting results, Topic 5: showing interesting topics to the community, Topic 6: downloading articles, Topic 7: sharing relevant research with friends, Topic 9: promoting important work of colleagues. Topic 8 repeats the words from the question, indicating an influence of the question wording on the answers. We therefore ignore this topic in our analysis.

In order to check whether user motivations differ between services, we compare the topic distributions of the active users of each service. A user can be active on multiple services. The global topic distribution is shown in Figure 3(a). To find services with strong differences, Figure 3(b) shows the deviation from the global topic distribution for services with a significant difference in user experience.
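The per-service comparison can be sketched as follows: compute the mean topic distribution over all active users, then subtract it from the per-service mean. The two-topic vectors below are invented; in the paper the vectors come from the fitted 10-topic LDA model.

```python
# Sketch of the deviation-from-global-average comparison. Topic vectors are
# invented two-topic examples standing in for per-user LDA distributions.

def mean_distribution(topic_vectors):
    """Component-wise mean of a list of topic-probability vectors."""
    n = len(topic_vectors)
    k = len(topic_vectors[0])
    return [sum(v[t] for v in topic_vectors) / n for t in range(k)]

def deviation(service_vectors, all_vectors):
    """Per-topic difference between a service's mean and the global mean."""
    global_mean = mean_distribution(all_vectors)
    service_mean = mean_distribution(service_vectors)
    return [s - g for s, g in zip(service_mean, global_mean)]

all_users = [[0.6, 0.4], [0.2, 0.8], [0.4, 0.6]]   # two-topic toy data
one_service = [[0.6, 0.4], [0.4, 0.6]]
dev = deviation(one_service, all_users)   # positive entry = above average
```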

We see that different social media services meet different needs: StackExchange has fewer users who want to find interesting academic works (Topic 2) and more active users who want to share research with friends (Topic 7) and to get new information (Topic 3). In contrast, Wikipedia has a below-average share of users who want to share relevant research with friends or their community (Topics 5 and 7), but its users like to share relevant research and interesting findings with a general audience (Topics 1 and 2). Similarly, SlideShare has an above-average share of users who use social media because they think that their research is relevant for others (Topic 1) and who like to spread interesting results (Topic 4), but a lower probability for sharing content within their community (Topics 5 and 7). Finally, Google+ has a higher share of users who want to share relevant research with friends but a lower probability for promoting the work of colleagues (Topic 9).

These findings contribute to the understanding of the patterns found in Figure 1: Google+ attracts relatively more scholars who want to share their research – and this could explain why we see a higher share of professors / experienced users. StackExchange attracts more users who search for information – and we can assume that this causes a high share of research assistants and PhD students, who have more practical duties and a higher need for question answering services.

5. Discussion and Conclusion

In order to assess differences in demographics and usage motives between social media services, we studied survey responses of 1,731 active scholarly social media users. Our first analysis shows that (i) the distribution of research experience and professional roles per social service varies greatly for active users: Experienced users use social networks and services which make research results available to the general public, young researchers are more dominant in question answering services and platforms for publishing code; and (ii) the motivation of researchers for using social media services varies per service: While services with a higher share of inexperienced researchers may attract users who search for information, services with a high share of professors / experienced researchers attract users who want to share their research results with friends or the general public.

These findings have implications for the future development of altmetrics for scientific impact prediction: The observed variety of experience and of motivations for social media use is likely to influence the meaning of actions per service. While a post mentioning a paper on StackExchange is likely a question of a young researcher (satisfying a need for information), a post mentioning a paper on Google+ is more likely explained by an experienced researcher sharing a relevant publication with friends. This variety should be accounted for when measuring the activities of scholars in social media for scientific impact prediction, e.g. for improving literature search engines or the evaluation of research.

In future work, we will extract author information from social media services and conduct surveys to better approximate the distribution of research experience per service. Combined with the distribution of citation counts depending on the research experience, this allows us to create and evaluate novel altmetrics for scientific impact prediction.

6. Acknowledgement

This work is part of the DFG-funded *metrics project (project number: 314727790). Further information on the project can be found on metrics-project.net.

References

  • Beel and Gipp (2009) Jöran Beel and Bela Gipp. 2009. Google Scholar’s ranking algorithm: an introductory overview. In Proceedings of the 12th International Conference on Scientometrics and Informetrics (ISSI’09), Vol. 1. Rio de Janeiro (Brazil), 230–241.
  • Blei et al. (2003) David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3 (2003), 993–1022.
  • Bornmann (2014) Lutz Bornmann. 2014. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. Journal of informetrics 8, 4 (2014), 895–903.
  • Bowman (2015) Timothy David Bowman. 2015. Investigating the use of affordances and framing techniques by scholars to manage personal and professional impressions on Twitter. Ph.D. Dissertation. Indiana University.
  • Carayol and Matt (2006) Nicolas Carayol and Mireille Matt. 2006. Individual and collective determinants of academic scientists’ productivity. Information Economics and Policy 18, 1 (2006), 55–72.
  • Champieux (2015) Robin Champieux. 2015. PlumX. Journal of the Medical Library Association: JMLA 103, 1 (2015), 63.
  • Hadgu and Jäschke (2014) Asmelash Teka Hadgu and Robert Jäschke. 2014. Identifying and analyzing researchers on twitter. In Proceedings of the 2014 ACM conference on Web science. ACM, 23–32.
  • Haustein (2016) Stefanie Haustein. 2016. Grand challenges in altmetrics: heterogeneity, data quality and dependencies. Scientometrics 108, 1 (2016), 413–423.
  • Jordan (2015) Katy Jordan. 2015. What do academics ask their online networks?: An analysis of questions posed via Academia.edu. In WebSci, David De Roure, Pete Burnap, and Susan Halford (Eds.). ACM, 42:1–42:2. http://dblp.uni-trier.de/db/conf/websci/websci2015.html#Jordan15
  • Kittur et al. (2007) Aniket Kittur, Ed Chi, Bryan A Pendleton, Bongwon Suh, and Todd Mytkowicz. 2007. Power of the few vs. wisdom of the crowd: Wikipedia and the rise of the bourgeoisie. World wide web 1, 2 (2007), 19.
  • Kunegis et al. (2012) Jérôme Kunegis, Steffen Staab, and Daniel Dünker. 2012. KONECT – The Koblenz Network Collection. In Proc. Int. Sch. and Conf. on Netw. Sci.
  • Loper and Bird (2002) Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Association for Computational Linguistics, Philadelphia.
  • Mikki et al. (2015) Susanne Mikki, Marta Zygmuntowska, Øyvind Liland Gjesdal, and Hemed Ali Al Ruwehy. 2015. Digital presence of Norwegian scholars on academic network sites – where and who are they? PloS one 10, 11 (2015), e0142709.
  • Mohammadi et al. (2015) Ehsan Mohammadi, Mike Thelwall, Stefanie Haustein, and Vincent Larivière. 2015. Who reads research articles? An altmetrics analysis of Mendeley user categories. Journal of the Association for Information Science and Technology 66, 9 (2015), 1832–1846.
  • Neylon and Wu (2009) Cameron Neylon and Shirley Wu. 2009. Article-level metrics and the evolution of scientific impact. PLoS Biology 7, 11 (2009), e1000242.
  • Nicholas and Rowlands (2011) David Nicholas and Ian Rowlands. 2011. Social media use in the research workflow. Information Services & Use 31, 1-2 (2011), 61–83.
  • Nielsen (2006) Jakob Nielsen. 2006. Participation inequality: Encouraging more users to contribute. Alertbox, useit.com (2006).
  • Ortega (2015) José Luis Ortega. 2015. How is an academic social site populated? A demographic study of Google Scholar Citations population. Scientometrics 104, 1 (2015), 1–18.
  • Ponte and Simon (2011) Diego Ponte and Judith Simon. 2011. Scholarly communication 2.0: Exploring researchers’ opinions on Web 2.0 for scientific knowledge creation, evaluation and dissemination. Serials review 37, 3 (2011), 149–156.
  • Priem (2015) Jason Priem. 2015. Altmetrics (Chapter from Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact). CoRR abs/1507.01328 (2015). arXiv:1507.01328 http://arxiv.org/abs/1507.01328
  • Priem et al. (2011) Jason Priem, Dario Taraborelli, Paul Groth, and Cameron Neylon. 2011. Altmetrics: a Manifesto. (2011). http://altmetrics.org/manifesto/
  • Procter et al. (2010) Rob Procter, Robin Williams, James Stewart, Meik Poschen, Helene Snee, Alex Voss, and Marzieh Asgari-Targhi. 2010. Adoption and use of Web 2.0 in scholarly communications. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 368, 1926 (2010), 4039–4056.
  • Project (2014) NISO Alternative Assessment Metrics Project. 2014. NISO Altmetrics Standards Project White Paper. https://groups.niso.org/apps/group_public/download.php/16268/NISO%20RP-25-201x-1,%20Altmetrics%20Definitions%20and%20Use%20Cases%20-%20draft%20for%20public%20comment.pdf
  • Russo and Peacock (2009) Angelina Russo and Darren Peacock. 2009. Great expectations: Sustaining participation in social media spaces. In Museums and the Web 2009. Archives & Museum Informatics, 23–36.
  • Sugimoto et al. (2017) Cassidy R. Sugimoto, Sam Work, Vincent Larivière, and Stefanie Haustein. 2017. Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology 68, 9 (2017), 2037–2062. https://doi.org/10.1002/asi.23833
  • Tenopir et al. (2013) Carol Tenopir, Rachel Volentine, and Donald W King. 2013. Social media and scholarly reading. Online Information Review 37, 2 (2013), 193–216.
  • Van Noorden (2014) Richard Van Noorden. 2014. Online collaboration: Scientists and the social network. Nature 512, 7513 (2014), 126–129.
  • Whittaker et al. (2003) Steve Whittaker, Loen Terveen, Will Hill, and Lynn Cherny. 2003. The dynamics of mass interaction. In From Usenet to CoWebs. Springer, 79–91.
  • Zoller et al. (2015) Daniel Zoller, Stephan Doerfel, Robert Jäschke, Gerd Stumme, and Andreas Hotho. 2015. On Publication Usage in a Social Bookmarking System. In Proceedings of the 2015 ACM Conference on Web Science.

5. Discussion and Conclusion

In order to assess differences in demographics and usage motives between social media services, we studied survey responses of 1,731 active scholarly social media users. Our first analysis shows that (i) the distribution of research experience and professional roles per social service varies greatly for active users: Experienced users use social networks and services which make research results available to the general public, young researchers are more dominant in question answering services and platforms for publishing code; and (ii) the motivation of researchers for using social media services varies per service: While services with a higher share of inexperienced researchers may attract users who search for information, services with a high share of professors / experienced researchers attract users who want to share their research results with friends or the general public.

These findings have implications for the future development of altmetrics for scientific impact prediction: The observed variety of experience and of motivations for social media use is likely to influence the meaning of actions per service. While a post mentioning a paper on StackExchange is likely a question of a young researcher (satisfying a need for information), a post mentioning a paper on Google+ is more likely explained by an experienced researcher sharing a relevant publication with friends. This variety should be accounted for when measuring the activities of scholars in social media for scientific impact prediction, e.g. for improving literature search engines or the evaluation of research.

In future work, we will extract author information from social media services and conduct surveys to better approximate the distribution of research experience per service. Combined with the distribution of citation counts depending on the research experience, this allows us to create and evaluate novel altmetrics for scientific impact prediction.

6. Acknowledgement

This work is part of the DFG-funded *metrics project (project number: 314727790). Further information on the project can be found on metrics-project.net.

References

  • (1)
  • Beel and Gipp (2009) Jöran Beel and Bela Gipp. 2009. Google Scholar’s ranking algorithm: an introductory overview. In Proceedings of the 12th International Conference on Scientometrics and Informetrics (ISSI’09), Vol. 1. Rio de Janeiro (Brazil), 230–241.
  • Blei et al. (2003) David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent Dirichlet allocation.

    Journal of Machine Learning Research

    3 (2003), 2003.
  • Bornmann (2014) Lutz Bornmann. 2014. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. Journal of informetrics 8, 4 (2014), 895–903.
  • Bowman (2015) Timothy David Bowman. 2015. Investigating the use of affordances and framing techniques by scholars to manage personal and professional impressions on Twitter. Ph.D. Dissertation. Indiana University.
  • Carayol and Matt (2006) Nicolas Carayol and Mireille Matt. 2006. Individual and collective determinants of academic scientists’ productivity. Information Economics and Policy 18, 1 (2006), 55–72.
  • Champieux (2015) Robin Champieux. 2015. PlumX. Journal of the Medical Library Association: JMLA 103, 1 (2015), 63.
  • Hadgu and Jäschke (2014) Asmelash Teka Hadgu and Robert Jäschke. 2014. Identifying and analyzing researchers on twitter. In Proceedings of the 2014 ACM conference on Web science. ACM, 23–32.
  • Haustein (2016) Stefanie Haustein. 2016. Grand challenges in altmetrics: heterogeneity, data quality and dependencies. Scientometrics 108, 1 (2016), 413–423.
  • Jordan (2015) Katy Jordan. 2015. What do academics ask their online networks?: An analysis of questions posed via Academia.edu. In WebSci, David De Roure, Pete Burnap, and Susan Halford (Eds.). ACM, 42:1–42:2. http://dblp.uni-trier.de/db/conf/websci/websci2015.html#Jordan15
  • Kittur et al. (2007) Aniket Kittur, Ed Chi, Bryan A Pendleton, Bongwon Suh, and Todd Mytkowicz. 2007. Power of the few vs. wisdom of the crowd: Wikipedia and the rise of the bourgeoisie. World wide web 1, 2 (2007), 19.
  • Kunegis et al. (2012) Jérôme Kunegis, Steffen Staab, and Daniel Dünker. 2012. KONECT – The Koblenz Network Collection. In Proc. Int. Sch. and Conf. on Netw. Sci.
  • Loper and Bird (2002) Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In

    In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Philadelphia: Association for Computational Linguistics

    .
  • Mikki et al. (2015) Susanne Mikki, Marta Zygmuntowska, Øyvind Liland Gjesdal, and Hemed Ali Al Ruwehy. 2015. Digital presence of Norwegian scholars on academic network sites – where and who are they? PloS one 10, 11 (2015), e0142709.
  • Mohammadi et al. (2015) Ehsan Mohammadi, Mike Thelwall, Stefanie Haustein, and Vincent Larivière. 2015. Who reads research articles? An altmetrics analysis of Mendeley user categories. Journal of the Association for Information Science and Technology 66, 9 (2015), 1832–1846.
  • Neylon and Wu (2009) Cameron Neylon and Shirley Wu. 2009. Article-level metrics and the evolution of scientific impact. PLoS Biology 7, 11 (2009), e1000242.
  • Nicholas and Rowlands (2011) David Nicholas and Ian Rowlands. 2011. Social media use in the research workflow. Information Services & Use 31, 1-2 (2011), 61–83.
  • Nielsen (2006) Jakob Nielsen. 2006. Participation inequality: Encouraging more users to contribute. Alertbox, useit.com (2006).
  • Ortega (2015) José Luis Ortega. 2015. How is an academic social site populated? A demographic study of Google Scholar Citations population. Scientometrics 104, 1 (2015), 1–18.
  • Ponte and Simon (2011) Diego Ponte and Judith Simon. 2011. Scholarly communication 2.0: Exploring researchers’ opinions on Web 2.0 for scientific knowledge creation, evaluation and dissemination. Serials review 37, 3 (2011), 149–156.
  • Priem (2015) Jason Priem. 2015. Altmetrics (Chapter from Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact). CoRR abs/1507.01328 (2015). arXiv:1507.01328 http://arxiv.org/abs/1507.01328
  • Priem et al. (2011) Jason Priem, Dario Taraborelli, Paul Groth, and Cameron Neylon. 2011. Altmetrics: a Manifesto. (2011). http://altmetrics.org/manifesto/
  • Procter et al. (2010) Rob Procter, Robin Williams, James Stewart, Meik Poschen, Helene Snee, Alex Voss, and Marzieh Asgari-Targhi. 2010. Adoption and use of Web 2.0 in scholarly communications. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 368, 1926 (2010), 4039–4056.
  • Project (2014) NISO Alternative Assessment Metrics Project. 2014. NISO Altmetrics Standards Project White Paper. https://groups.niso.org/apps/group_public/download.php/16268/NISO%20RP-25-201x-1,%20Altmetrics%20Definitions%20and%20Use%20Cases%20-%20draft%20for%20public%20comment.pdf
  • Russo and Peacock (2009) Angelina Russo and Darren Peacock. 2009. Great expectations: Sustaining participation in social media spaces. In Museums and the Web 2009. Archives & Museum Informatics, 23–36.
  • Sugimoto et al. (2017) Cassidy R. Sugimoto, Sam Work, Vincent Larivière, and Stefanie Haustein. 2017. Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology 68, 9 (2017), 2037–2062. https://doi.org/10.1002/asi.23833
  • Tenopir et al. (2013) Carol Tenopir, Rachel Volentine, and Donald W King. 2013. Social media and scholarly reading. Online Information Review 37, 2 (2013), 193–216.
  • Van Noorden (2014) Richard Van Noorden. 2014. Online collaboration: Scientists and the social network. Nature 512, 7513 (2014), 126–129.
  • Whittaker et al. (2003) Steve Whittaker, Loen Terveen, Will Hill, and Lynn Cherny. 2003. The dynamics of mass interaction. In From Usenet to CoWebs. Springer, 79–91.
  • Zoller et al. (2015) Daniel Zoller, Stephan Doerfel, Robert Jäschke, Gerd Stumme, and Andreas Hotho. 2015. On Publication Usage in a Social Bookmarking System. In Proceedings of the 2015 ACM Conference on Web Science.

6. Acknowledgement

This work is part of the DFG-funded *metrics project (project number: 314727790). Further information on the project can be found on metrics-project.net.
