I Introduction
Success in science refers to scientists’ achievements in their academic careers. Quantifying this success has become an important part of bibliometrics and scientometrics. Influential publications and scholars give later researchers a foundation on which to build, so the ability to retrieve bibliographic information, including mining, managing, and examining scholarly big data to identify successful papers and scholars, is essential for researchers [1, 2, 3, 4, 5]. In addition, quantifying scholar impact carries special weight in funding allocation and recruitment decisions, and quantifying the impact of papers and journals helps scientists follow the frontier of scientific development. Quantifying success in science therefore provides useful guidance to the scientific community, for example in recommending candidates to universities, selecting scientists for promotion, and distributing research funds [6, 7].
Quantifying success in science mainly focuses on quantifying the current impact of academic entities, including papers, scholars, journals, scholarly teams, and institutions [8, 9, 10, 11]. Because research on the impact of papers, authors, and journals is especially rich, this paper mainly introduces quantifying success in science from these three aspects. Generally, the number of citations is used as an evaluation indicator because of its easy availability. Many factors influence a paper’s success, such as its visibility [12, 13] and its age [14]. A common method to judge the success of a scholarly paper is to use evaluation indicators, which may take several important factors into account. Counting-based and network-based evaluation methods are frequently used to quantify success in science. Counting-based methods are the most direct form of evaluation, such as citations, an author’s h-index [15], and the Journal Impact Factor (JIF) [16]. Different academic entities form different kinds of academic networks, such as citation networks, co-author networks, and co-citation networks [17]. Currently, HITS-type and PageRank-type algorithms can mine complex scholarly relationships in different scholarly networks and give reasonable evaluations. The features of scholarly networks are also critical to evaluating paper impact. Based on these features, many researchers have adapted the PageRank [18] and HITS [19] algorithms to make them more suitable for measuring paper impact.
Like the impact of papers, scholar impact is influenced by many factors. Many methods and indices to measure scholar impact have been proposed, such as the h-index [20], g-index [21], and hg-index [22]. These indices can be unfair to some young researchers because the quality and quantity of a scholar’s publications are associated with academic age. Network-based methods can avoid this situation to a certain extent.
Evaluating journal impact is an important part of quantifying success in science. Many network-based evaluation methods and indices used to quantify the impact of papers and authors can also be used to evaluate journals [23, 24, 25, 26]. These methods are based on PageRank or HITS, or consider the structural position of a journal in the journal citation network. In addition, Journal Citation Reports (JCR) is very popular for ranking journals.
Even though existing research provides tools to quantify success in science, it still has limitations: every indicator of scientific impact has its shortcomings. In particular, one of the most challenging problems in quantifying scientific success stems from the heterogeneous attributes and dynamic nature of big scholarly data. Consequently, implicit features and implicit relationships have recently attracted the attention of researchers [27].
This paper presents a review of recent developments in quantifying success in science and complements relevant prior work. Wildgaard et al. [28] present a review of author impact evaluation; one limitation of that review is that it does not consider paper and journal impact evaluation. Bai et al. [9] offer a review of the literature on paper impact evaluation, covering key techniques and paper impact metrics; its limitations are that the authors did not consider author and journal impact evaluation, and that factors influencing scholarly impact were not analyzed. Therefore, in this paper, the progress of impact evaluation for papers, authors, and journals is described in detail.
FIGURE 1 shows the framework of quantifying success in science, which includes the following parts: data collection, data preprocessing, relationship analysis, evaluation methods, and evaluation indices. Several publicly accessible data sets are used to quantify success in science, including the American Physical Society (APS) corpus (http://publish.aps.org), the Digital Bibliography & Library Project (DBLP, https://dblp.uni-trier.de/), and the Microsoft Academic Graph (MAG, http://aka.ms/academicgraph). Data preprocessing is very important in quantitative studies of scientific success because it affects the accuracy of the results. Homogeneous and heterogeneous scholarly networks are used to study scholarly relationships such as citation relationships, co-author relationships, and paper-journal relationships. Spearman’s rank correlation coefficient, Discounted Cumulative Gain, and RI can be used as evaluation metrics for quantifying success in science [29, 30]. In particular, heterogeneous scholarly network structures have increased the challenges in scholarly network analysis.
To retrieve the papers on quantifying success in science, we entered search terms such as “success of science”, “paper impact”, “scholar impact”, and “journal impact” in Google Scholar. We first searched for related papers recently published in top journals and conferences, then followed their references and the papers citing them to obtain more related papers. We searched in this step-by-step manner, filtered and classified the results into three aspects (paper impact, scholar impact, and journal impact), and retained the representative papers. We then marked the publication years of these papers, read them year by year, and analyzed and summarized the features that influence scholarly impact, the evaluation methods, and the indices. For example, for paper impact we classify the selected features into statistical, network, explicit, and implicit features. By analyzing and summarizing these evaluation methods, we identify open issues and challenges and suggest possible solutions.
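As a concrete illustration of the ranking-comparison metrics mentioned above, the following is a minimal pure-Python sketch of Spearman’s rank correlation coefficient and Discounted Cumulative Gain. The implementation assumes untied ranks; production code would add tie handling.

```python
from math import log2

def spearman_rho(x, y):
    """Spearman's rank correlation between two score lists (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def dcg(relevances):
    """Discounted Cumulative Gain of a ranked list of relevance scores."""
    return sum(rel / log2(i + 1) for i, rel in enumerate(relevances, start=1))
```

Spearman’s rho compares a computed ranking against a ground-truth ranking (e.g. future citation counts), while DCG rewards placing highly relevant papers near the top of a ranked list.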
The rest of this survey is organized as follows. In Section II, we discuss the evaluation of paper impact. In Section III, we introduce the evaluation of author impact. The evaluation of journal impact is discussed in Section IV. Open issues are discussed in Section V. Finally, we conclude this survey in Section VI.
II Evaluation of Paper Impact
In this section, we give a detailed introduction to the evaluation methods and indices of paper impact. Besides, we discuss the evolution of the existing methods and indices, showing their advantages and shortcomings. We begin with the evaluation of paper impact because many assessment methods and indices for scholars and journals are based on the assessment of their papers; it is therefore of great significance whether the quality of papers can be quantified accurately. Although the value of a paper is mainly based on its content, the evaluation of content is easily influenced by subjective factors, and its efficiency cannot meet the demands of scholarly big data. This phenomenon drives researchers to devise accurate and efficient automatic evaluation methods. One possible solution is to construct a multidimensional metric in which the importance of citations, the social relationships of authors, the relationship between the impact of early citers and paper impact, and citation inflation are explored.
II-A Factors Influencing the Impact of Papers
TABLE I shows examples of selected features for evaluating paper impact, listing for each reference the selected features, whether they are statistical, network, explicit, or implicit features, and how paper impact is evaluated.
References  Selected Features  Statistical Feature  Network Feature  Explicit Feature  Implicit Feature  Evaluating paper impact 
[14]  citation rate of a paper, time  yes  no  yes  no  the citation rate of a paper at a given time 
[27]  a relative citation weight  yes  no  no  yes  applying a relative citation weight to the higher-order quantum PageRank algorithm 
[30]  collaboration times, the time span of collaboration, citing times and the time span of citing  yes  no  yes  no  weakening the relationship of Conflict of Interest (COI) in the citation network 
[31]  number of citations  yes  no  yes  no  using the number of citations 
[32]  citations, authors, journals/conferences and the publication time information  no  yes  yes  yes  integrating the selected features into PageRank and HITS algorithms 
[33]  preferential attachment, aging, fitness  yes  no  no  yes  identifying the three fundamental mechanisms to evaluate long-term impact 
[34]  importance of paper  no  yes  yes  no  applying the Google PageRank algorithm to obtain the relative importance of all publications 
[35]  citation relevance and author contribution  no  yes  yes  no  using the selected features to weight citation network and authorship network to evaluate paper impact 
[36]  Altmetrics  yes  no  yes  no  monitoring citations, blogs, tweets, download statistics and attributions in research articles 
[37]  prestige of a paper, prestige of author, time  yes  yes  no  yes  using the citation network, the authorship network and the publication time of the article for predicting future citations 
[38]  the time-weighted citation count, the citation width, the citation depth  yes  no  no  yes  using entropy to weight the three indices 
The number of citations has long been used as a metric to evaluate paper impact [31]. Because the number of citations is relatively easy to obtain, it is also easily manipulated, for example through self-citation, mutual citation, and citation by friends. Self-citation can be legitimate: a research project often produces output in several stages, and earlier results form the foundation of later ones. But if a self-citation is made only to increase the citation count, it misleads scholarly evaluation and introduces unfairness into the evaluation system. For inappropriate citations, previous researchers proposed methods to weaken the influence of self-citation by relying on the higher-order citation network [27].
Previous research shows that the impact of a paper decays over time, which confirms that a paper’s age is a factor influencing its impact. Generally, an old paper has more citations than a new one, but its contribution has already been absorbed by newer papers, so it tends to receive fewer citations in the future. Parolo et al. [14] showed that the decay of the attention paid to a paper is a universal phenomenon, with a decay rate close to a power law; in some cases papers are forgotten more quickly, and the faster attention decay fits an exponential curve. The time factor, the prestige of a paper, and the prestige of its authors have been used to evaluate scholarly paper impact [37]; based on these three factors, the authors evaluated paper impact by predicting future citation counts. Wang et al. [33] considered the aging factor because it captures the fact that new ideas are integrated into subsequent work. Wang et al. [38] first developed three indices, the time-weighted citation count, the citation width, and the citation depth, and then leveraged entropy to weight these indices to evaluate paper impact. Chan et al. [39] discussed how the impact of authors and affiliations can influence the impact of their papers, arguing that the reputation of authors and the impact of their affiliations can boost paper impact in the early stages after publication, but that this influence decays quickly in later stages. Chen et al. [34] found scientific gems by applying Google’s PageRank algorithm to the citation network. Zhang et al. [35] evaluated the impact of authors and papers based on heterogeneous author-citation academic networks.
In addition to the factors mentioned above, other factors have been used to evaluate paper impact, such as individual, institutional, and international collaboration, reference impact, reference totals, keyword totals, and abstract readability [40]. Preferential attachment, fitness, and aging were used to quantify long-term scientific impact, as these three factors drive the citation history of a scholarly paper [33]. In that research, preferential attachment captures the fact that highly cited papers are more likely to be cited again than less-cited papers, fitness captures the inherent differences between papers, and aging was introduced above. The journal impact factor was also once used as a criterion for assessing the impact of a paper [41]. Altmetrics evaluate scholarly impact based on activities on social media platforms, such as citations, blogs, tweets, download statistics, and attributions in research articles [36], and Altmetrics scores have been used to complement the evaluation of scholarly papers with new insights [42]. Knowing the main factors that influence paper impact, evaluation methods and corresponding indices can be designed.
II-B Counting-Based Evaluation Methods and Indices
TABLE II compares counting-based evaluation methods and indices in terms of method and reference, selected factors, the importance of each citation, advantages, and disadvantages.
Method and Reference  Selected Factors  Importance of Each Citation  Advantage  Disadvantage 
citations [43]  citations  equal  Easy to obtain.  Easy to manipulate. Strong dependence on paper’s age. 
impact factor [41]  number of papers, citations of papers, time  equal  Easy to calculate.  Easy to manipulate. Hard to unify impact factors across different disciplines. 
an SVR model [44]  occurrence times, located section, time interval, self-citation  unequal  Distinguishes the importance of citations.  Hard to calculate. 
a supervised machine learning model [45]  citation location, semantic similarities, cited frequency, number of citations  unequal  Distinguishes the importance of citations.  Hard to calculate. 
paper’s normalized citation distribution and JIF [46]  distribution of citations, JIF  equal  Feasible at the scale of a national evaluation exercise.  Easy to manipulate. 
Garfield et al. [47] first proposed using the number of citations to assess the impact of scholarly papers. Citations are the simplest and most direct counting-based index of paper impact. However, citations as an evaluation metric have drawbacks; for example, the metric relies heavily on the time since publication: the longer a paper has been published, the more citations it tends to have. Considering this drawback, previous research used the journal impact factor to quantify the impact of a paper [41], the reasoning being that journal impact can characterize paper impact to a certain extent. However, Seglen et al. [41] summarized problems associated with the use of journal impact factors and found that the journal impact factor is not representative of individual papers. It has also been recognized that not all citations are of equal importance, and hence the importance of citations needs to be distinguished [45].
To distinguish the importance of citations, researchers have made many attempts. Wan et al. [44] divided citation importance into 5 levels, called citation strength. In their research, the importance of a citation is determined by the following features: occurrence times, located section, time interval, the average length of citing sentences, the average density of citation occurrences, and self-citation. An SVR model was then used to assign every citation an importance level, given some manually labeled data. The impact of a paper is calculated by summing all the citation strengths. Their experimental results showed that ranking papers by citation strength fits the ground truth better. Zhu et al. [45] distinguished the importance of citations by identifying a set of four features useful for determining the impact of a scholarly paper: citation location in the paper, semantic similarity between the title of the cited paper and the content of the citing paper, cited frequency, and the number of citations in the literature.
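To illustrate the citation-strength idea, the sketch below replaces the trained SVR of Wan et al. with a hand-written linear scoring rule; the feature names, weights, and clipping to the 1-5 range are purely illustrative assumptions, not the model from the paper.

```python
def citation_strength(feat):
    """Toy stand-in for an SVR predicting citation strength (1..5) from
    citation features; weights here are illustrative only."""
    score = 1.0
    score += 0.5 * feat.get("occurrences", 1)          # mentions in the citing text
    score += 1.0 if feat.get("section") == "method" else 0.0  # cited in a core section
    score -= 1.0 if feat.get("self_cited", False) else 0.0    # penalize self-citation
    return max(1.0, min(5.0, score))                   # clip to the 5-level scale

def paper_impact(citations):
    """Impact as the sum of per-citation strengths rather than a raw count."""
    return sum(citation_strength(f) for f in citations)
```

Under this scheme two papers with the same raw citation count can receive different impact scores, which is exactly what distinguishing citation importance is meant to achieve.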
Anfossi et al. [46] argued that it is more reasonable to rank papers by combining the information of several indicators than by using only one. In their paper, an evaluation tool was proposed that uses a paper’s normalized citation distribution and the JIF, locating each paper in the (citation, JIF) space as a point in a scatter plot. This space is then divided into regions by drawing threshold lines as weighted linear combinations of the paper’s citations and JIF, as shown in Eq. (1),
(1)  $a \cdot \mathrm{CIT} + (1-a) \cdot \mathrm{JIF} = \mathrm{const}$,
where $a$ is a constant that controls the segmentation of the regions and CIT indicates the paper’s citations. Different calibrations of the segmentation result in different classifications of articles. Before Anfossi’s work, Ancaiani et al. [48] performed an analysis of a large number of research outputs submitted by Italian universities and other research bodies.
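The thresholding scheme can be sketched as follows; the weight `a` and the threshold values are illustrative placeholders, not the calibrations used by Anfossi et al.

```python
def region(cit, jif, a=0.7, thresholds=(2.0, 5.0, 10.0)):
    """Classify a paper in the (CIT, JIF) plane by which threshold line
    a*CIT + (1-a)*JIF = c it falls under. Returns a region index,
    0 (lowest) .. len(thresholds) (highest)."""
    score = a * cit + (1 - a) * jif
    for level, c in enumerate(thresholds):
        if score < c:
            return level
    return len(thresholds)
```

Changing `a` or the threshold values recalibrates the segmentation and hence the classification of articles, mirroring the calibration step described above.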
Nowadays more and more research results are spread on social media, which helps promote a scholar’s impact. The numbers of downloads, shares, and comments that papers receive on online social networks have become a group of metrics for evaluating research outputs, known as Altmetrics [36]. Social network-based Altmetrics are used more and more widely as an emerging class of paper evaluation metrics. Xia et al. [49] analyzed how Twitter and Facebook users affect the influence of papers published in Nature, and found that Twitter users spread the impact of these papers more readily. Although Altmetrics can complement and improve the assessment of paper impact, they are not authoritative as an evaluation indicator, mainly because, like citations, they are easily manipulated. Methods for quantifying academic impact based on Altmetrics need further exploration.
II-C Network-Based Evaluation Methods and Indices
TABLE III compares network-based evaluation methods and indices in terms of method and reference, scholarly network, homogeneous or heterogeneous network, algorithms, advantages, and disadvantages.
Method and Reference  Scholarly Network  Homogeneous Network  Heterogeneous Network  Algorithms  Advantage  Disadvantage 
PageRank [34]  citation network  yes  no  PageRank  It begins to use a structured approach to quantify paper impact.  It does not consider attenuation of paper impact over time. 
CiteRank [50]  citation network  yes  no  PageRank  It promotes the impact of recent publications.  It does not consider the impact of author and journal. 
nonlinear PageRank [51]  citation network  yes  no  PageRank  This method can control the paper’s score accumulation.  It does not consider the impact of author and journal. 
PageRank-type method [52]  co-author network, citation network, author-paper network, paper-text feature network, author-text feature network  yes  yes  PageRank  This method can control the paper’s score accumulation.  It does not consider the impact of author and journal. 
HITS-type method [53]  citation network, co-author network  yes  yes  HITS  This method can evaluate a paper and its author at the same time.  It does not consider the impact of journal. 
TriRank [54]  citation network, co-author network, venue citation network  yes  yes  HITS  This method can rank authors, papers, and venues simultaneously in heterogeneous networks.  It does not consider attenuation of paper impact over time. 
FutureRank [37]  citation network, paper-author network  yes  yes  PageRank, HITS  FutureRank can combine information about citations, authors, and publication time to rank papers.  It does not consider the impact of journal. 
CAJTRank [32]  citation network, paper-author network, paper-journal network  yes  yes  PageRank, HITS  It can combine information about citations, authors, journals, and publication time to rank papers.  Citation weights are equal. 
COIRank [30]  citation network, paper-author network, paper-journal network  yes  yes  PageRank, HITS  It can distinguish the importance of citations in a heterogeneous scholarly network.  The COI relationship involves many factors and is not easy to mine. 
higher-order weighted quantum PageRank [27]  citation network  yes  no  Quantum PageRank  It can reveal the actual impact of papers, including necessary self-citations.  Time cost is high. 
A classical network-based evaluation method is the PageRank algorithm [18]. Another famous algorithm for evaluating the importance of nodes in heterogeneous networks is HITS. Both methods have been used to quantify the impact of papers: PageRank is typically applied to a homogeneous scholarly network, while HITS is applied to a heterogeneous scholarly network. FIGURE 2 shows several typical scholarly networks for paper impact evaluation, such as the citation network, co-author network, paper-author network, and paper-journal network. The four scholarly networks are generated from 10 randomly selected authors in the computer science area of the MAG dataset. Nodes of different colors indicate different types of academic entities, and the lines between them indicate their scholarly relationships.
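As a concrete baseline for the network-based methods discussed below, the following is a minimal pure-Python sketch of PageRank by power iteration on a toy citation graph. The damping factor and iteration count are conventional defaults, not values from any of the surveyed papers.

```python
def pagerank(citations, d=0.85, iters=100):
    """PageRank on a citation graph given as {paper: [cited papers]}.
    Dangling papers (no outgoing references) spread their rank uniformly."""
    nodes = sorted(set(citations) | {q for refs in citations.values() for q in refs})
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in nodes}  # teleport term
        for p in nodes:
            refs = citations.get(p, [])
            if refs:
                share = d * rank[p] / len(refs)
                for q in refs:
                    new[q] += share
            else:  # dangling node: distribute rank uniformly
                for q in nodes:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

A paper cited by many papers, or by a single important paper, accumulates a higher score, which is exactly the behavior described in the text.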
Chen et al. [34] applied the Google PageRank algorithm to all publications in the Physical Review family of journals from 1893 to 2003 to find exceptional papers. PageRank can capture the linear relations among papers in the citation network. Recently, London et al. [55] proposed a local form of PageRank to evaluate paper impact on only a small set of nodes extracted from the whole citation network. A paper that has more citations, or that has been cited by an important paper, is assigned a higher score by the algorithm. However, the classical PageRank algorithm is not time-sensitive, which leads to the unreasonable result that an out-of-date paper may still receive a high impact score because of citations accumulated long ago, even though its contribution has been superseded by many new publications. To overcome this problem, Walker et al. [50] introduced CiteRank, which adds a time-based weighting to PageRank to promote recent publications. The method is given as follows:
(2)  $\mathbf{T} = (\mathbf{I} - \alpha\mathbf{W})^{-1}\,\boldsymbol{\rho}$,
where $\mathbf{T}$ is the vector of final scores of all papers and $\mathbf{W}$ is the transfer probability matrix, with $W_{ij} = 1/k_j^{\mathrm{out}}$ if paper $j$ cites paper $i$ and 0 otherwise, where $k_j^{\mathrm{out}}$ is the out-degree of the $j$th paper. $\rho_i = e^{-a_i/\tau}$ is the initial probability of selecting the $i$th paper in the citation network, where $a_i$ indicates the number of years since the $i$th paper was published.
Many efforts have been made to adapt PageRank to the characteristics of the academic network. Yao et al. [51] introduced nonlinearity into the PageRank algorithm by aggregating the scores from downstream neighboring nodes in a nonlinear way. The iteration function correspondingly changes into the following form:
(3)  $s_i = \sum_{j \rightarrow i} \frac{(s_j)^{\theta}}{k_j^{\mathrm{out}}}$,
By tuning the value of $\theta$, this method can control the paper’s score accumulation and make it more sensitive to the citer’s impact. This nonlinear method considers a citation from a high-impact paper to be more valuable than one from a low-impact paper.
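The nonlinear aggregation above can be sketched as follows. The renormalization step and the small additive floor are our assumptions to keep the iteration stable; Yao et al.’s exact formulation may differ.

```python
def nonlinear_rank(citations, theta=0.5, iters=200):
    """Nonlinear PageRank-style scoring: each paper aggregates
    s_j**theta / k_j_out from its citers, then scores are renormalized.
    theta < 1 damps the influence of high-impact citers; theta > 1
    amplifies it. citations maps {citing paper: [cited papers]}."""
    nodes = sorted(set(citations) | {q for refs in citations.values() for q in refs})
    s = {p: 1.0 for p in nodes}
    for _ in range(iters):
        new = {}
        for p in nodes:
            citers = [j for j, refs in citations.items() if p in refs]
            # small floor keeps uncited papers at a tiny nonzero score
            new[p] = sum(s[j] ** theta / len(citations[j]) for j in citers) + 1e-12
        z = sum(new.values())
        s = {p: v / z for p, v in new.items()}
    return s
```

With `theta = 1` the update reduces to the linear PageRank-style aggregation; tuning `theta` controls how strongly a citer’s own score is passed on.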
Wang et al. [52] proposed a PageRank-type method that uses several scholarly networks to rank papers, including a time-aware co-author network, a time-aware paper citation network, an author-paper network indicating the paper’s authorship, a paper-text feature network indicating the paper’s textual features, and an author-text feature network. The iteration equation couples the score vectors of papers, authors, and text features through a block transition matrix built from these networks:
(4)  $\mathbf{v}^{(t+1)} = \mathbf{M}\,\mathbf{v}^{(t)}$, with $\mathbf{v} = [\mathbf{p};\,\mathbf{a};\,\mathbf{f}]$,
where $\mathbf{M}$ is a block matrix whose blocks are the normalized transition matrices of the five networks, with zero blocks where two entity types are not directly linked, and $\mathbf{p}$, $\mathbf{a}$, and $\mathbf{f}$ are the authority vectors of papers, authors, and text features, respectively.
Jiang et al. [56] took the dynamic evolution of the citation network into account and put forward a method in the same spirit as PageRank. The method integrates three factors in scientific development: knowledge accumulation by individual papers, knowledge diffusion through citation behavior, and knowledge decay as time elapses. It then uses a random walk process on the citation network to describe these three factors. The dynamically evolving process is simulated by dividing all papers according to their publishing time and adding them to the citation network incrementally in time order.
Another type of method is based on HITS [19]. Zhou et al. [53] performed the HITS algorithm on the paper citation network and the co-author network, which were connected by authorship. In both the citation network and the co-author network, nodes’ scores were first calculated by PageRank, and then HITS was performed on the bipartite graph to get the final scores of papers and authors, so this method can evaluate the impact of authors and their papers at the same time. The iteration function is as follows:
(5)  $\mathbf{a} = \lambda\,\tilde{\mathbf{A}}\,\mathbf{a} + (1-\lambda)\,\mathbf{L}\,\mathbf{d}, \qquad \mathbf{d} = \lambda\,\tilde{\mathbf{D}}\,\mathbf{d} + (1-\lambda)\,\mathbf{L}^{\top}\mathbf{a}$,
where matrices $\mathbf{A}$ and $\mathbf{D}$ are the transfer probability matrices of the co-author network and the citation network, respectively, and $\mathbf{L}$ is the bipartite authorship matrix linking authors to papers. $\tilde{\mathbf{A}}$ is the iteration matrix of the PageRank process on the co-author network, given by $\tilde{\mathbf{A}} = (1-\epsilon)\mathbf{A} + (\epsilon/n)\mathbf{E}$, where $\mathbf{E}$ is a matrix with all elements equal to 1; $\tilde{\mathbf{D}}$ has the same meaning for the citation network. Vector $\mathbf{a}$ stores the scores of all authors and vector $\mathbf{d}$ stores the scores of all papers. A similar method is the TriRank algorithm proposed in reference [54] in 2014, which took the paper’s publication information into account and performed a HITS-type method on three linked networks, adding a venue citation network to the two networks used before.
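The inter-class step, HITS-style mutual reinforcement on the author-paper bipartite graph, can be sketched as follows. This is a simplified illustration only: the full Co-Rank iteration also includes the intra-class PageRank steps on the co-author and citation networks.

```python
def hits_bipartite(authorship, iters=100):
    """HITS-style mutual reinforcement on an author-paper bipartite graph:
    authors act as hubs, papers as authorities. authorship maps
    {author: [papers written]}."""
    authors = sorted(authorship)
    papers = sorted({p for ps in authorship.values() for p in ps})
    hub = {a: 1.0 for a in authors}
    auth = {p: 1.0 for p in papers}
    for _ in range(iters):
        # a paper's authority sums the hub scores of its authors
        auth = {p: sum(hub[a] for a in authors if p in authorship[a]) for p in papers}
        # an author's hub score sums the authority of their papers
        hub = {a: sum(auth[p] for p in authorship[a]) for a in authors}
        za, zh = sum(auth.values()), sum(hub.values())
        auth = {p: v / za for p, v in auth.items()}
        hub = {a: v / zh for a, v in hub.items()}
    return hub, auth
```

The mutual reinforcement means a prolific author of strong papers raises the scores of all their papers, and vice versa, which is why such methods can rank authors and papers simultaneously.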
In addition, some methods combine PageRank and HITS to evaluate the impact of papers. A typical one is FutureRank, proposed in reference [37]. Different from other methods, FutureRank ranks the impact of papers and authors by predicting their future PageRank scores. The PageRank algorithm is first used to rank papers via the citation network, and then the HITS algorithm is used to calculate the authority scores of papers and the hub scores of authors based on the hybrid network. After calculating the PageRank score of papers, the authority score of papers, and the hub score of authors, the final evaluation result is obtained by weighting these scores, as shown in Eq. (6).
(6)  $R = \alpha\,R^{\mathrm{PageRank}} + \beta\,R^{\mathrm{authority}} + \gamma\,R^{\mathrm{time}} + (1-\alpha-\beta-\gamma)\,\frac{1}{N}$,
where $N$ is the number of nodes in the network. Wang et al. [32] proposed a similar method that adds a journal/conference network to indicate where each paper was published. The evaluation method has the same form as FutureRank but can rank journals/conferences as well, and the HITS algorithm can likewise evaluate the quality of papers and authors. Based on this work, Bai et al. [30] ranked scholarly papers by investigating citation relationships to weaken the relationship of Conflict of Interest in the citation network; to a certain extent, this method weakens the impact of self-citation. Besides, Bai et al. [27] quantified the impact of scholarly papers based on higher-order weighted citations. In this research, a higher-order weighted quantum PageRank algorithm is developed to reflect multi-step citation behavior. One advantage of the method is that it can weaken the effect of manipulated citation activities.
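The final FutureRank-style combination is straightforward to express in code. The weight values below are placeholders, not the weights tuned by the FutureRank authors, and the three input score vectors are assumed to be precomputed.

```python
def combined_rank(pr, authority, recency, alpha=0.4, beta=0.2, gamma=0.2):
    """FutureRank-style final score: a weighted combination of PageRank,
    HITS authority, and recency scores, plus a uniform term so the
    weights sum to 1. All three inputs map paper -> score."""
    n = len(pr)
    rest = 1 - alpha - beta - gamma  # weight of the uniform term
    return {p: alpha * pr[p] + beta * authority[p] + gamma * recency[p] + rest / n
            for p in pr}
```

Increasing `gamma` shifts the ranking toward recent papers, which is the mechanism FutureRank uses to predict future rather than past impact.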
III Evaluation of Scholar Impact
The evaluation of scholars always relates to their papers. Many methods can evaluate papers together with their authors, such as Co-Rank [53], TriRank [54], FutureRank [37], and the s-index [57]. These network-based methods usually rank several academic entities together because the information provided by a single network is often not enough to give a reasonable evaluation. There are also counting-based evaluation methods for quantifying author impact, like the famous h-index. In this section, we compare different counting-based methods in terms of method and reference, selected factors, the importance of each citation, advantages, and disadvantages. We also compare different network-based methods in terms of method and reference, scholarly network, homogeneous network, heterogeneous network, algorithms, advantages, and disadvantages. Besides, we discuss the evolution of the existing methods and indices and summarize their issues. One possible direction is to explore higher-order academic network analysis, author impact inflation, and the academic success gene.
III-A Factors Influencing the Impact of Scholars
Author impact evaluation has undergone a transition from unstructured measures to structured measures [29]. The factors used by researchers to assess author impact range from simple statistical factors to structural factors, and from explicit factors to implicit factors. Currently, the commonly used factors influencing the impact of scholars can be divided into six categories: paper-related, author-related, venue-related, social-related, reference-related, and time-related factors. TABLE IV shows examples of selected factors for evaluating author impact.
Factors  Factor Category  Explicit Factor  Implicit Factor  References 
number of citations  paper-related  yes  no  [58, 59] 
number of publications  paper-related  yes  no  [59, 60] 
paper scores  paper-related  yes  no  [61] 
shared keywords between author and paper  paper-related  yes  no  [62] 
PageRank  paper-related  no  yes  [63, 64] 
paper authority vector  paper-related  no  yes  [65] 
number of authors  author-related  yes  no  [59] 
Maximum Entropy  author-related  yes  no  [60] 
venue scores  venue-related  yes  no  [61] 
journal impact factor  venue-related  yes  no  [60] 
paper’s references cited by the author before, ratio of the paper’s references cited by the author before, paper’s references in the author’s previous publications  reference-related  yes  no  [62, 66] 
times the author attended the paper’s venue before, ratio of times the author attended the paper’s venue before  reference-related  yes  no  [62] 
time  time-related  yes  no  [67] 
In the scientific community, scholars continuously accumulate academic impact, but to some extent the inherent impact of scholars determines their final research results. Since the papers published by scholars represent their impact, paper-related factors are frequently used to measure scholar impact. These factors are selected primarily to capture the quality and quantity of the papers. However, they can introduce bias: the academic output of scholars is generally related to their academic age, and scholars with a longer academic age may have more output, so simply evaluating scholar impact in terms of output is strongly biased against newcomers. Such biases also exist when evaluating scholar impact across research fields, and scientists have made many attempts to eliminate the imbalance between disciplines. In addition, the allocation of contributions among the co-authors of a scholarly paper may also bias scholar impact evaluation. Shen et al. [68] developed a credit allocation algorithm to capture co-authors’ contributions.
To a certain extent, author-related and venue-related factors can reflect a scholar’s impact. Dong et al. [66] found that two factors, the impact of scholars and that of venues, play a key role in improving the h-index of lead authors. Deville et al. [69] discussed the mobility patterns of scientists at an institutional level and success in their scientific careers. They found that scholars switching from high-impact institutions to low-impact institutions suffer a decline in both research quality and output, suggesting that the academic environment affects academic outcomes. Scholars also use online platforms (Google Scholar, Microsoft Academic Search) and social media to enhance their academic impact. Mas-Bleda et al. [70] found that although most highly cited scholars working in European institutions had institutional web pages, they rarely maintained them; most used other social media instead, which has also accelerated the development of Altmetrics.
In addition, reference-related factors and time-related factors have attracted scholars' attention. Dong et al. [66] researched scholar impact considering two reference-related factors: the ratio of max-h-index citations of references to the total number of references of the paper, and the average number of citations accumulated by the references of the paper. Zhang et al. [67] considered academic innovations and assessed scholar impact with a time-aware ranking algorithm, allocating more credit to newly published papers according to representative time functions. Based on the above factors, many evaluation indices have been proposed to quantify scholar impact. In the following two subsections, we introduce the counting-based evaluation methods and indices, and the network-based evaluation methods and indices, respectively.
III-B Counting-Based Evaluation Methods and Indices
In 2005, Hirsch [20] proposed the famous h-index to evaluate scholar impact, which is the most widely used metric in the whole scientific community. A scholar's h-index of $h$ means that he or she has at least $h$ papers cited at least $h$ times each. The advantages of the h-index are that it is easy to compute and that its definition combines the quantity and quality of a scholar's outputs. But some scholars argue that the h-index has many shortcomings, such as the imbalance between different disciplines, the allocation of co-authors' impact, and the neglect of highly cited papers. To keep the impact of highly cited papers from being ignored, Egghe [21] proposed the g-index. If the citations of all papers published by an author are listed in descending order, the g-index is the largest number $g$ such that the top $g$ papers have received, in total, at least $g^2$ citations. Similar to the g-index, Jin et al. [71] proposed the R-index and AR-index to overcome the shortcomings of the h-index. The R-index is defined as
$R = \sqrt{\sum_{i=1}^{h} c_i}$  (7)

where $h$ is the author's h-index and $c_i$ indicates the citations of the $i$-th of the author's papers that have been cited at least $h$ times, also known as the h-core papers. The AR-index takes the age of publications into account, and is calculated by
$AR = \sqrt{\sum_{i=1}^{h} \frac{c_i}{a_i}}$  (8)

where $a_i$ denotes the $i$-th h-core paper's age.
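All four counting-based indices above operate on nothing more than a list of per-paper citation counts (plus paper ages for the AR-index), so they are easy to sketch. The following is a minimal Python illustration, not any author's reference implementation; function names are ours, and ties among equally cited papers in the AR-index's h-core are broken arbitrarily.

```python
import math

def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """g-index: the largest g such that the top g papers together
    have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def r_index(citations):
    """R-index (Eq. 7): square root of the total citations of the h-core."""
    cites = sorted(citations, reverse=True)
    h = h_index(cites)
    return math.sqrt(sum(cites[:h]))

def ar_index(citations, ages):
    """AR-index (Eq. 8): like the R-index, but each h-core paper's
    citations are divided by its age, rewarding recent impact."""
    papers = sorted(zip(citations, ages), reverse=True)
    h = h_index([c for c, _ in papers])
    return math.sqrt(sum(c / a for c, a in papers[:h]))
```

For example, a scholar with citation counts [10, 8, 5, 4, 3] has h = 4 but g = 5, since the g-index also rewards the excess citations of the top papers.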
For the same purpose, Zhang [72] divided the author's citation distribution into three parts: the h-squared part representing the information of the h-index itself, the excess part representing the information of papers having more citations than the h-index requires, and the h-tail part representing the information of papers with fewer citations. Then, a triangle mapping technique was used to map these three parts onto a regular triangle to ease the analysis. An author's impact was correspondingly mapped to three indices: the excess (e-index) representing research quality, the h-tail (t-index) representing research quantity, and the h-square (h-index) representing the average. This method uses three independent parts to quantify an author's impact. In that work, authors are divided into two types: the first type have published several high-quality papers and thus have a lower h-index but a higher e-index; the second type have published a large number of low-quality papers and thus have a relatively high h-index and t-index but a lower e-index. Dorogovtsev et al. [73] developed the o-index to emphasize the impact of the most cited papers. An author's o-index is defined as $o = \sqrt{mh}$, where $h$ is the author's h-index and $m$ indicates the citations of his/her most cited paper(s).
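Zhang's triangle mapping splits an author's total citations into exactly three non-negative parts, $h^2 + e^2 + t^2$, and the o-index needs only the h-index and the top citation count. A small sketch following these definitions (function names are ours):

```python
import math

def citation_decomposition(citations):
    """Split total citations into the triangle-mapping parts:
    h-square (h**2), excess of the h-core (e**2) and h-tail (t**2)."""
    cites = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)
    h2 = h * h
    e2 = sum(cites[:h]) - h2   # citations of h-core papers beyond h**2
    t2 = sum(cites[h:])        # citations of papers outside the h-core
    return h, e2, t2           # h2 + e2 + t2 equals the total citations

def o_index(citations):
    """o-index: geometric mean of the h-index and the citation count
    m of the most cited paper, o = sqrt(m * h)."""
    h, _, _ = citation_decomposition(citations)
    return math.sqrt(h * max(citations))
```

Two authors with the same h-index can thus still be separated by their excess and tail parts, which is exactly the point of the e- and t-indices.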
Another disadvantage of the h-index is that it weights all authors of a paper equally. The authors of a multi-authored paper usually do not contribute equally to the work; therefore, the h-index leads to bias. Many studies have tried to solve this problem. Wang et al. [74] presented the A-index to quantify the relative contributions of co-authors. Based on the A-index, Stallings et al. [60] developed a collaboration index, the C-index, to quantify author impact. The C-index was defined by
$C = \sum_{i=1}^{N} A_i$  (9)

where $A_i$ was the author's A-index for the $i$-th of his/her $N$ papers. The P-index was proposed to quantify a researcher's impact by considering the quality of publications, which was given by
$P = \sum_{i=1}^{N} A_i \cdot \mathrm{JIF}_i$  (10)

where $\mathrm{JIF}_i$ was the impact factor of the journal where the $i$-th paper was published. Besides, some researchers pointed out that even authors with different citation patterns may get the same h-index. Farooq et al. [75] proposed the DS-index, an extension of the g-index intended to provide a distinctive ranking for authors with similar citation patterns. The DS-index is defined as
$DS = \sqrt{\sum_{i=1}^{N_g} c_i}$  (11)

where $N_g$ is the number of g-core papers and $c_i$ is the $i$-th g-core paper's citations. As with the h-core papers, the g-core papers are the papers used to calculate the g-index of the author.
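Assuming the per-paper co-author credit shares $A_i$ have already been computed by an A-index-style allocation (they are taken as given inputs here, not computed), the C-index and P-index of Eqs. (9) and (10) reduce to plain weighted sums. A hedged sketch with our own function names:

```python
def c_index(credit_shares):
    """C-index (Eq. 9): total of the author's A-index credit shares,
    one share per co-authored paper."""
    return sum(credit_shares)

def p_index(credit_shares, journal_ifs):
    """P-index (Eq. 10): credit shares weighted by the impact factor
    of the journal in which each paper appeared."""
    return sum(a * jif for a, jif in zip(credit_shares, journal_ifs))
```

An author with shares 0.5 and 0.5 in two papers published in journals with impact factors 4.0 and 2.0 thus gets C = 1.0 and P = 3.0, so the P-index rewards placing credited work in higher-impact venues.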
The indices introduced above are all extensions and improvements of the h-index. Using the h-index can partly reflect the publication behavior and the citation distribution of an author. To quantify scholar impact more reasonably, Sinatra et al. [76] explored the citation distributions of physicists and found that the highest-impact work of a scholar is randomly distributed within his or her academic career. Based on this random-impact rule, they proposed a stochastic model in which a unique parameter $Q$ assigned to each scholar is used to predict scholar impact. The Q-value of an author is calculated by
$Q_i = e^{\langle \log c_{i\alpha} \rangle - \mu_p}$  (12)

where $Q_i$ is the Q-value of author $i$, $\langle \log c_{i\alpha} \rangle$ indicates the average logarithmic citations of all papers published by author $i$, $\alpha$ indexes the papers of author $i$, and $\mu_p$ is the average impact of luck on the success of papers.
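Given this model, an author's Q-value depends only on the mean logarithmic citation count of his or her papers and the population-level luck term $\mu_p$. A minimal sketch (we assume $\mu_p$ has been estimated elsewhere; zero-citation papers would need smoothing before taking logarithms):

```python
import math

def q_value(citations, mu_p):
    """Q-value (Eq. 12): exponential of the author's average
    logarithmic citation count minus the luck term mu_p, which is
    estimated over the whole population in the original model."""
    mean_log = sum(math.log(c) for c in citations) / len(citations)
    return math.exp(mean_log - mu_p)
```

Because the average is taken in log space, an author with citation counts [1, 100] gets the same Q-value as one with [10, 10]: a geometric rather than arithmetic notion of typical impact.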
Citation-based author impact evaluation methods show differences among disciplines. Waltman et al. [77] found that using the fractional counting method can give a more suitable result for cross-field scholar evaluation. Radicchi et al. [78] proposed a universal variant of the h-index to solve this problem. In 2013, together with Radicchi, Kaur et al. [79] improved this index and proposed a new method to compare scientific impact across disciplinary boundaries: a normalized h-index obtained by dividing an author's h-index by the average h-index of all authors in the same discipline. Lima et al. [80] considered that a paper can belong to several research areas; the author's impact in an area was calculated from the papers published in that area, measured by the author's percentile rank, and the impact of an author was quantified by summing up the impact across all areas. Although this method can reduce the bias among different disciplines, authors who are active in a rapidly developing area may still get higher scores than those in the basic disciplines.
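Two of the normalization ideas above can be illustrated very compactly: fractional counting divides each paper's citations by its number of co-authors before crediting them, and a field-normalized h-index divides an author's h-index by the discipline average. The sketch below is our own simplification of these ideas, not the exact indicators of Waltman et al. or Kaur et al.:

```python
def fractional_citations(papers):
    """Fractional counting: credit c/n citations to an author for each
    paper with c citations and n co-authors, damping the advantage of
    fields with long author lists."""
    return sum(c / n for c, n in papers)

def normalized_h(h, field_mean_h):
    """h-index rescaled by the average h-index of the author's
    discipline, making scores comparable across fields."""
    return h / field_mean_h
```

Under this scheme, an h-index of 20 in a field whose authors average h = 10 counts the same as an h-index of 40 in a field averaging h = 20.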
III-C Network-Based Evaluation Methods and Indices
Because counting-based evaluation methods are easily manipulated when evaluating scholar impact, scholars have explored structured methods to overcome these shortcomings. The network-based evaluation methods for scholars have evolved from homogeneous scholarly networks to heterogeneous scholarly networks [53, 54, 37, 32, 52, 81, 65, 57]. The scholarly networks are made up of academic entities, including scholars, papers, journals or conferences, and institutions. Ding et al. [82] used the PageRank algorithm to quantify the impact of authors based on the author co-citation network. Yan et al. [83] developed P-Rank, which used three different networks, including the citation network, authorship network, and publish-relationship network, to evaluate the impact of authors, papers, and journals. A HITS-type method was first performed to update the scores of papers, authors, and journals in the authorship and publish-relationship networks. These scores were then used as nodes' initial values to run PageRank on the citation network to get the final scores of papers. Because the HITS-type algorithm is more suitable for heterogeneous academic networks, mining the academic relationships of heterogeneous networks in depth can make the HITS-type algorithm work better. Amjad et al. [84] considered the topic distributions of scholarly entities generated by Latent Dirichlet Allocation (LDA) [85] and proposed a topic-based ranking method called Topic-based Heterogeneous Rank (TH Rank). Because of the network complexity and the cost of computing LDA, TH Rank is not an efficient algorithm. Li et al. [86] put forward a method named QRank to rank authors effectively and efficiently. Nykl et al. [87] used the PageRank algorithm together with several individual evaluation indices, including the h-index, publication count, citation count, and author count of a publication, to rank scholars.
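The first stage of a P-Rank-style scheme can be pictured as a HITS-type mutual reinforcement between authors and papers: a paper's score is the sum of its authors' scores and vice versa, normalized after each pass. The sketch below is a simplified bipartite version (it omits journals and the subsequent PageRank stage, and is not the published algorithm):

```python
def hits_author_paper(authorship, iters=50):
    """HITS-type mutual reinforcement on an authorship network:
    each paper's score is the sum of its authors' scores and each
    author's score is the sum of his or her papers' scores, with L1
    normalization after every pass. `authorship` is a list of
    (author, paper) pairs."""
    authors = sorted({a for a, _ in authorship})
    papers = sorted({p for _, p in authorship})
    a_score = {a: 1.0 for a in authors}
    p_score = {p: 1.0 for p in papers}
    for _ in range(iters):
        p_score = {p: sum(a_score[a] for a, q in authorship if q == p)
                   for p in papers}
        norm = sum(p_score.values())
        p_score = {p: s / norm for p, s in p_score.items()}
        a_score = {a: sum(p_score[q] for b, q in authorship if b == a)
                   for a in authors}
        norm = sum(a_score.values())
        a_score = {a: s / norm for a, s in a_score.items()}
    return a_score, p_score
```

In a P-Rank-style pipeline, the paper scores produced here would then seed a PageRank run on the citation network rather than being used directly.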
Although the existing network-based evaluation methods have achieved certain results, they still have the following problems: (1) most previous studies quantify author impact based on first-order academic networks; (2) citation inflation influences the real impact of the author; (3) the origin of the academic success genes is unknown. Therefore, higher-order academic network analysis, author impact inflation, and the academic success gene need to be explored.
IV Evaluation of Journal Impact
The impact of a journal derives from the papers it publishes. Authors are more willing to publish papers in a journal with a high impact, so the evaluation of journals is associated with the evaluation of papers and authors. There are several famous publishing groups around the world: Elsevier, Springer, Wiley, Wolters Kluwer, and Pearson. It is worth mentioning that the famous journals Lancet and Cell are published by Elsevier, and Nature is published by Macmillan. Since 1975, the Journal Citation Reports (JCR) has provided the previous year's Impact Factor (IF) of journals, together with other evaluation indicators such as the journal's current rank, abbreviated journal title, International Standard Serial Number (ISSN), total cites, immediacy index, total articles, and cited half-life. Since then, the JCR has been taken as an important data resource for quantifying journals. The JCR metrics have become the most popular indices to evaluate journals, and several other metrics have been proposed, such as the Eigenfactor Score (EF) by Thomson Reuters and CiteScore by Scopus (https://www.scopus.com). Nowadays, many other evaluation methods and metrics beyond the JCR metrics have been proposed. In the following subsections, we discuss the evolution of the existing methods and indices and summarize the issues of these journal evaluation methods. One possible solution to those issues is to explore journal impact inflation and higher-order academic network analysis.
IV-A Factors Influencing the Impact of Journals
Some classical high-impact journals, such as Nature and Science, have lasted for many years. A journal's quality is decided by the quality of the papers published in it, and many metrics for evaluating journals are based on citations. The development of the Internet has promoted papers' citations, as well as the impact of journals. Therefore, open access journals may have a higher impact than subscription-based ones.
Journal impact is strongly discipline-dependent; that is, different disciplines have different authoritative journals. Besides, a journal's type may influence its impact factor. Some journals prefer to publish review papers, while others publish long research papers and short papers. Generally, a review journal's impact factor is higher than that of the other journals in the same discipline.
IV-B The Journal Citation Reports
The Journal Citation Reports started in 1975. Now it ranks more than 10,000 high-quality journals every year and is released on the Web of Science (WoS). Its evaluation of journal impact contains several commonly used metrics, such as the journal's total cites, journal impact factor, impact factor without journal self-citations, 5-year impact factor, immediacy index, cited half-life, citing half-life, Eigenfactor score, article influence score, and the number of citable items of the journal. This report is widely seen as the most authoritative assessment of journals.
The journal impact factor, which usually refers to the 2-year impact factor, was proposed by Garfield in 1955 [47]. The JIF of a journal in year $y$ is defined as follows:
$\mathrm{JIF}_y = \frac{C_y}{N_{y-1} + N_{y-2}}$  (13)

where $N_{y-1}$ and $N_{y-2}$ are the numbers of papers published in this journal in years $y-1$ and $y-2$, and $C_y$ is the number of citations received in year $y$ by those papers. The computation of the 5-year journal impact factor is the same as that of the 2-year impact factor, except that it considers the numbers of papers and citations of the journal in the most recent 5 years. The impact factor without journal self-citations eliminates the influence of the journal's self-citations, giving a more objective evaluation of the journal's impact. The cited half-life is the number of years taken to reach half of the total citations of the journal, which indicates the persistence of a journal's impact. The citing half-life is defined as the number of years for the references to accumulate to half of their total, which indicates the novelty of the references.
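Both the 2-year JIF and the cited half-life reduce to small arithmetic routines over yearly counts. A minimal sketch following the definitions above (variable names are ours):

```python
def impact_factor(cites_received, papers_prev1, papers_prev2):
    """2-year JIF (Eq. 13): citations received in year y to items
    published in years y-1 and y-2, divided by the number of items."""
    return cites_received / (papers_prev1 + papers_prev2)

def cited_half_life(cites_by_age):
    """Cited half-life: smallest number of years, counted from the
    most recent year backwards, whose citations reach half of the
    journal's total citations. `cites_by_age[0]` is the newest year."""
    total = sum(cites_by_age)
    running = 0
    for years, c in enumerate(cites_by_age, start=1):
        running += c
        if running >= total / 2:
            return years
    return len(cites_by_age)
```

A journal that received 200 citations in year $y$ to the 100 items it published in $y-1$ and $y-2$ thus has a JIF of 2.0.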
Other metrics, such as the immediacy index, Eigenfactor score, and article influence score, cover shortages of the impact factor. The immediacy index is defined as the average number of citations received in a given year by the papers published in the journal in that same year, which reflects the immediate impact of the journal. The Eigenfactor score is calculated on the journal citation network without self-citations, using a PageRank-type method [88].
IV-C Analysis and Improvement of the JCR
Although the JCR metrics are widely used, using only a single metric to assess journals leads to bias. Many efforts have been made to overcome the shortages, and many other metrics have been proposed, such as the h-index for journals [15], the SCImago Journal Rank (SJR) [89], and the Source Normalized Impact per Paper (SNIP) [90]. In addition to using a single metric, it has been found that the ranking result can be improved by combining these common metrics in some way, such as computing their harmonic mean [91] or using a neural network to find a nonlinear representation [92]. Serenko et al. [93] found that scholars always preferred familiar journals and gave them a higher evaluation, which suggests that introducing personal opinions into the evaluation of journals may be helpful. Tsai et al. [94] studied the correlation between subjective evaluation (scholars' personal opinions) and objective evaluation (journal rank by JIF and h-index) and used the Borda counting method to combine the two ranking results. Beets et al. [95] ranked accounting journals by referencing the departmental journal lists used to evaluate faculty publications in several famous business schools. Many scholars are also concerned with the relationships among the different journal rankings produced by these metrics [96, 97, 98, 99, 100, 101]. Setti [99] argued that it is impossible to capture the real impact of journals with any single indicator. Different evaluation methods quantify journals from different views, so which metrics are more useful always depends on the application scenario. Sometimes it is meaningful to rank journals only by the percentage of highly cited publications of a journal [102]. Besides, the evaluation of journals in different disciplines, or in different fields of the same discipline, also needs discussion [102, 103].
Chatterjee et al. [104] studied the citation distribution and found that a few highly cited papers hold most of the citations in both journals and institutions. Based on extensive research on the citation distributions of journals, Kao et al. [105] proposed a stochastic dominance analysis based method to evaluate journals.
IV-D Network-Based Evaluation Methods and Indices
The most frequently used methods for evaluating nodes in a network are PageRank and HITS. As discussed in the previous sections, the HITS algorithm can be used to rank papers, authors, and journals together. Several PageRank-type methods have been designed for ranking journals, which have a basic form like
$s_i = \frac{1-\alpha}{n} + \alpha \sum_{j \in B_i} \frac{s_j}{d_j}$  (14)

where $\alpha$ indicates the adaptive damping factor and satisfies $0 < \alpha < 1$; generally, the value of $\alpha$ is set as 0.85. $s_i$ represents the importance score of journal $i$, $B_i$ denotes the set of journals citing journal $i$, $d_j$ is the number of journals cited by journal $j$, and $n$ is the total number of journals [106].
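This PageRank-type recursion can be solved by power iteration over the journal citation graph. The sketch below is a generic illustration, with dangling journals spreading their score uniformly (one common convention, not mandated by [106]):

```python
def journal_pagerank(links, alpha=0.85, iters=100):
    """Power iteration for a PageRank-type journal score:
    s_i = (1 - alpha)/n + alpha * sum over citing journals j of s_j/d_j.
    `links` maps each journal to the list of journals it cites."""
    journals = sorted(set(links) | {t for ts in links.values() for t in ts})
    n = len(journals)
    s = {j: 1.0 / n for j in journals}
    for _ in range(iters):
        nxt = {j: (1 - alpha) / n for j in journals}
        for j in journals:
            targets = links.get(j, [])
            if targets:
                share = alpha * s[j] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:
                # dangling journal: spread its score uniformly
                for t in journals:
                    nxt[t] += alpha * s[j] / n
        s = nxt
    return s
```

On a symmetric citation cycle all journals score equally, while a journal cited by everyone accumulates the largest score; the scores always sum to one.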
Based on the PageRank algorithm, Chen [23] added expert judgments to the method as a weight and optimized the function by Particle Swarm Optimization (PSO). In the same way, Lim et al. [107] used the relevance and importance of the citations between journals to design a weighted PageRank. Zhang [108] proposed the HR-PageRank algorithm to evaluate journal impact via a weighted PageRank according to the authors' h-indices and the relevance between citing and cited papers. Bohlin et al. [109] studied the different performances of zero-order (the classical Markov model), first-order, and second-order Markov models for ranking journals and found that higher-order Markov models performed better and were more robust.
Some evaluation methods consider the structural position of journals in the journal citation network. Zhang et al. [24] proposed an indicator named the Quality-Structure Index (QSI), which ranks journals by their intrinsic popularity and structural position. The intrinsic popularity was quantified by some frequently used metrics, such as the JIF, Eigenfactor score, and PageRank score. Similarly, Leydesdorff [25] introduced the betweenness centrality of journals in the journal citation network to the assessment task. Su [26] gave a link-based representation of some frequently used journal metrics, such as the JIF, and proposed a link-based fusion method that fuses several metrics according to the links in and among the paper citation network, authorship network, and paper publishing network. This method opened a new way to consider many metrics together when evaluating academic entities.
Based on the above analysis, the existing journal evaluation methods still have the following problems: (1) most previous studies quantify journal impact based on first-order academic networks; (2) citation inflation influences the real impact of a journal. Therefore, researchers need to explore higher-order academic network analysis and journal impact inflation to resolve these challenging issues of journal evaluation.
V Open Issues and Challenges
In this section, several open issues and challenges for further research in this area are discussed, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation.
V-A Pattern of Collaboration Impact
A significant amount of work has focused on quantifying the impact of scholarly papers, scholars, and journals [27, 76, 108]. However, little is known about how the impact of scientific collaboration evolves over time. Previous researchers measured the impact of co-authors by citations, which are easily manipulated, so structured methods for measuring the impact of co-authors are urgently needed in the scientific community. With the available large-scale datasets on citations and collaborations, it becomes possible to explore the patterns of collaboration impact over scientific careers and their potential relationships with scientists' success. Since structured methods are needed to quantify the impact of co-authors, how to construct the network that measures collaborative impact and how to model it remain the broader challenges. One possible solution is to construct a heterogeneous academic network in which the impact of co-authors is quantified; based on this, researchers can explore the pattern of collaboration impact.
V-B Unified Evaluation Standards
We have mentioned many automatic evaluation methods that try to find high-quality papers among the mass of publications. But these methods can only suggest which papers may be useful; the contents of the recommended papers are not considered by the algorithms. Therefore, substantial effort is still required to find the needed papers in the research process. Moreover, although there are many automatic evaluation methods, there is no unified evaluation standard to determine which method outperforms the others. A widely accepted ground truth is greatly needed in evaluation systems. To solve this problem, the data sets must first be unified.
V-C Implicit Success Factor Mining
In the past, more attention has been given to explicit success factors. In author impact evaluation research, researchers have found some explicit success factors such as academic age, institution, research field, and country [110]. However, little is known about the mechanisms of the temporal evolution of success in science, and uncovering the origin of the success factors in science is a challenging task. Whether success in science depends on exogenous factors, such as the mentor-student relationship, learning habits, and education level, remains unknown. Actively exploring the relationship between exogenous factors and academic success may provide a method for implicit success factor mining.
V-D Dynamic Academic Network Embedding
Many static network embedding methods have been proposed; however, academic networks evolve over time. For example, in citation networks, the citing and cited papers change dynamically, e.g., new citations are continuously added to the citation network when authors cite previous research work. To learn the representations of nodes in dynamic scholarly networks, the existing academic network embedding methods need to be run repeatedly, which is time-consuming. Therefore, further study of dynamic scholarly network embedding algorithms remains an open challenge in this area. To obtain efficient representations, a deep feature learning and representation model supported by dynamic academic data may need to be established.
V-E Scholarly Impact Inflation
Scholarly impact inflation, which arises from the exponential growth of scholarly papers, affects the real value of scholarly impact, thereby affecting the comparative evaluation of papers, scholars, journals, institutions, and country output across different periods [111]. Scholars can increase their citations by relying on their friends and co-authors, indicating that citations are easily manipulated. Much work has focused on unraveling the dynamics of citation inflation [112, 113, 30, 114]. Against the background of citation inflation, how to construct the evaluation network of scholarly impact and how to model it are surprisingly difficult, highlighting the broader challenge of evaluating scholarly impact in the scientific community. One possible solution is to weaken citation inflation through higher-order academic networks.
VI Conclusion
In this paper, we conduct a comprehensive review of the literature on quantifying success in science, focusing on evaluation indices of scholarly impact. Two changes have taken place in research on quantifying success in science: (1) from unstructured evaluation indices to structured evaluation indices; (2) from single-disciplinary impact assessment to interdisciplinary impact assessment. However, the literature-based analysis leads to the conclusion that, despite the large number of evaluation indices that have been used to resolve the problems in quantifying success in science, the solutions to some potential issues remain unknown, such as the pattern of collaboration impact, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. To solve these challenging issues, researchers can explore higher-order scholarly networks, heterogeneous network analysis and modeling, and academic relationship identification.
References
 [1] F. Xia, W. Wang, T. M. Bekele, and H. Liu, “Big scholarly data: A survey,” IEEE Transactions on Big Data, vol. 3, no. 1, pp. 18–35, 2017.

 [2] J. Liu, X. Kong, F. Xia, X. Bai, L. Wang, Q. Qing, and I. Lee, “Artificial Intelligence in the 21st century,” IEEE Access, vol. 6, pp. 34403–34421, 2018.
 [3] W. Wang, J. Liu, F. Xia, I. King, and H. Tong, “Shifu: Deep learning based advisor-advisee relationship mining in scholarly big data,” in Proceedings of the 26th International Conference on World Wide Web Companion. International World Wide Web Conferences Steering Committee, 2017, pp. 303–310.
 [4] W. Wang, J. Liu, Z. Yang, X. Kong, and F. Xia, “Sustainable collaborator recommendation based on conference closure,” IEEE Transactions on Computational Social Systems, vol. 6, no. 2, pp. 311–322, 2019.
 [5] L. He, H. Fang, X. Wang, W. Yuyong, H. Ge, C. Li, C. Chen, Y. Wan, and H. He, “The 100 most-cited articles in urological surgery: A bibliometric analysis,” International Journal of Surgery, vol. 75, pp. 74–79, 2020.
 [6] F. Xia, H. Liu, I. Lee, and L. Cao, “Scientific article recommendation: Exploiting common author relations and historical preferences,” IEEE Transactions on Big Data, vol. 2, no. 2, pp. 101–112, 2016.
 [7] F. Xia, Z. Chen, W. Wang, J. Li, and L. T. Yang, “MVCWalker: Random walk-based most valuable collaborators recommendation exploiting academic factors,” IEEE Transactions on Emerging Topics in Computing, vol. 2, no. 3, pp. 364–375, 2014.
 [8] X. Bai, I. Lee, Z. Ning, A. Tolba, and F. Xia, “The role of positive and negative citations in scientific evaluation,” IEEE Access, vol. 5, pp. 17 607–17 617, 2017.
 [9] X. Bai, H. Liu, F. Zhang, Z. Ning, X. Kong, I. Lee, and F. Xia, “An overview on evaluating and predicting scholarly article impact,” Information, vol. 8, no. 3, p. 73, 2017.
 [10] T. Amjad, Y. Rehmat, A. Daud, and R. Ayaz Abbasi, “Scientific impact of an author and role of selfcitations,” Scientometrics, vol. 122, no. 2, pp. 915–932, 2020.
 [11] X. Bai, F. Zhang, J. Ni, L. Shi, and I. Lee, “Measure the impact of institution and paper via institutioncitation network,” IEEE Access, vol. 8, pp. 17 548–17 555, 2020.
 [12] N. A. Ebrahim, H. Salehi, M. A. Embi, F. H. Tanha, H. Gholizadeh, and S. M. Motahar, “Visibility and citation impact,” Social Science Electronic Publishing, vol. 7, no. 4, pp. 120–125, 2014.

 [13] J. Liu, T. Tang, W. Wang, B. Xu, X. Kong, and F. Xia, “A survey of scholarly data visualization,” IEEE Access, vol. 6, pp. 19205–19221, 2018.
 [14] P. D. B. Parolo, R. K. Pan, R. Ghosh, B. A. Huberman, K. Kaski, and S. Fortunato, “Attention decay in science,” Journal of Informetrics, vol. 9, no. 4, pp. 734–745, 2015.
 [15] T. Braun, W. Glanzel, and A. Schubert, “A Hirsch-type index for journals,” Scientometrics, vol. 69, no. 1, pp. 169–173, 2006.
 [16] E. Garfield, “Citation analysis as a tool in journal evaluation,” Science, vol. 178, no. 4060, pp. 471–479, 1972.
 [17] Y. Chen, Q. Jin, H. Fang, H. Lei, J. Hu, Y. Wu, J. Chen, C. Wang, and Y. Wan, “Analytic network process: Academic insights and perspectives analysis,” Journal of Cleaner Production, vol. 235, pp. 1276–1294, 2019.
 [18] L. Page, S. Brin, R. Motwani, and T. Winograd, “The PageRank citation ranking: Bringing order to the web,” Stanford Digital Libraries Working Paper, vol. 9, no. 1, pp. 1–14, 1998.
 [19] J. M. Kleinberg, “Authoritative sources in a hyperlinked environment,” Journal of the ACM, vol. 46, no. 5, pp. 604–632, 1999.
 [20] J. E. Hirsch, “An index to quantify an individual’s scientific research output,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 46, pp. 16 569–16 572, 2005.
 [21] L. Egghe, “Theory and practise of the gindex,” Scientometrics, vol. 69, no. 1, pp. 131–152, 2006.
 [22] S. Alonso, F. J. Cabrerizo, E. Herrera-Viedma, and F. Herrera, “hg-index: A new index to characterize the scientific output of researchers based on the h- and g-indices,” Scientometrics, vol. 82, no. 2, pp. 391–400, 2010.
 [23] Y. L. Chen and X. H. Chen, “An evolutionary PageRank approach for journal ranking with expert judgements,” Wireless Personal Communications, vol. 54, no. 3, pp. 467–484, 2011.
 [24] C. Zhang, X. Liu, Y. Xu, and Y. Wang, “QualityStructure index: A new metric to measure scientific journal influence,” Journal of the American Society for Information Science and Technology, vol. 62, no. 4, pp. 643–653, 2011.
 [25] L. Leydesdorff, “Betweenness centrality as an indicator of the interdisciplinarity of scientific journals,” Journal of the Association for Information Science & Technology, vol. 58, no. 9, pp. 1303–1319, 2009.
 [26] P. Su and Q. Shen, “Linkbased methods for bibliometric journal ranking,” Soft Computing, vol. 17, no. 12, pp. 2399–2410, 2013.
 [27] X. Bai, F. Zhang, J. Hou, I. Lee, X. Kong, A. Tolba, and F. Xia, “Quantifying the impact of scholarly papers based on higherorder weighted citations,” PloS one, vol. 13, no. 3, p. e0193192, 2018.
 [28] L. Wildgaard, J. W. Schneider, and B. Larsen, “A review of the characteristics of 108 authorlevel bibliometric indicators,” Scientometrics, vol. 101, no. 1, pp. 125–158, 2014.
 [29] F. Zhang, X. Bai, and I. Lee, “Author Impact: Evaluations, predictions, and challenges,” IEEE Access, vol. 7, pp. 38 657–38 669, 2019.
 [30] X. Bai, F. Xia, I. Lee, J. Zhang, and Z. Ning, “Identifying anomalous citations for objective evaluation of scholarly article impact,” Plos One, vol. 11, no. 9, p. e0162364, 2016.
 [31] S. Lehmann, A. D. Jackson, and B. E. Lautrup, “Measures for measures,” Nature, vol. 444, no. 7122, pp. 1003–1004, 2006.
 [32] Y. Wang, Y. Tong, and M. Zeng, “Ranking scientific articles by exploiting citations, authors, journals, and time information,” in AAAI Conference on Artificial Intelligence. AAAI Press, 2013, pp. 933–939.
 [33] D. Wang and A. Barabasi, “Quantifying longterm scientific impact,” Science, vol. 342, no. 6154, pp. 127–132, 2013.
 [34] P. Chen, H. Xie, S. Maslov, and S. Redner, “Finding scientific gems with Google’s PageRank algorithm,” Journal of Informetrics, vol. 1, no. 1, pp. 8–15, 2006.
 [35] Y. Zhang, M. Wang, F. Gottwalt, M. Saberi, and E. Chang, “Ranking scientific articles based on bibliometric networks with a weighting scheme,” Journal of Informetrics, vol. 13, no. 2, pp. 616–634, 2019.
 [36] H. Piwowar, “Altmetrics: Value all research products,” Nature, vol. 493, no. 7431, pp. 159–159, 2013.
 [37] H. Sayyadi and L. Getoor, “FutureRank: Ranking scientific articles by predicting their future PageRank,” in SIAM International Conference on Data Mining. SIAM, 2009, pp. 533–544.
 [38] M. Wang, J. Ren, S. Li, and G. Chen, “Quantifying a paper academic impact by distinguishing the unequal intensities and contributions of citations,” IEEE Access, vol. 7, no. 99, pp. 96 198–96 214, 2019.
 [39] H. F. Chan, M. Guillot, L. Page, and B. Torgler, “The inner quality of an article: Will time tell?” Scientometrics, vol. 104, no. 1, pp. 19–41, 2015.
 [40] F. Didegah and M. Thelwall, “Which factors help authors produce the highest impact research? Collaboration, journal and document properties,” Journal of Informetrics, vol. 7, no. 4, pp. 861–873, 2013.
 [41] P. O. Seglen, “Why the impact factor of journals should not be used for evaluating research,” BMJ British Medical Journal, vol. 314, no. 7079, pp. 498–502, 1997.
 [42] R. Costas, Z. Zahedi, and P. Wouters, “Do Altmetrics correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective,” Journal of the Association for Information Science and Technology, vol. 66, no. 10, pp. 2003–2019, 2015.
 [43] D. J. D. S. Price, “Networks of scientific papers,” Science, vol. 149, no. 3683, pp. 510–515, 1965.

 [44] X. Wan and F. Liu, “Are all literature citations equally important? Automatic citation strength estimation and its applications,” Journal of the Association for Information Science & Technology, vol. 65, no. 9, pp. 1929–1938, 2014.
 [45] X. Zhu, P. Turney, D. Lemire, and A. Vellino, “Measuring academic influence: Not all citations are equal,” Journal of the Association for Information Science & Technology, vol. 66, no. 2, pp. 408–427, 2015.
 [46] A. Anfossi, A. Ciolfi, F. Costa, G. Parisi, and S. Benedetto, “Large-scale assessment of research outputs through a weighted combination of bibliometric indicators,” Scientometrics, vol. 107, no. 2, pp. 671–683, 2016.
 [47] E. Garfield, “Citation indexes for science: A new dimension in documentation through association of ideas,” Science, vol. 122, no. 3159, pp. 108–111, 1955.
 [48] A. Ancaiani, F. A. Anfossi, A. Barbara, S. Benedetto, B. Blasi, V. Carletti, T. Cicero, A. Ciolfi, F. Costa, and G. Colizza, “Evaluating scientific research in Italy: The 2004–10 research evaluation exercise,” Research Evaluation, vol. 24, no. 3, pp. 242–255, 2015.
 [49] F. Xia, X. Su, W. Wang, C. Zhang, Z. Ning, and I. Lee, “Bibliographic analysis of Nature based on Twitter and Facebook Altmetrics data,” PLoS ONE, vol. 11, no. 12, p. e0165997, 2016.
 [50] D. Walker, H. Xie, K. K. Yan, and S. Maslov, “Ranking scientific publications using a simple model of network traffic,” Journal of Statistical Mechanics: Theory and Experiment, vol. 2006, no. 6, p. P06010, 2006.
 [51] L. Yao, W. Tian, A. Zeng, Y. Fan, and Z. Di, “Ranking scientific publications: The effect of nonlinearity,” Scientific Reports, vol. 4, no. 6663, pp. 1–6, 2014.
 [52] S. Wang, S. Xie, X. Zhang, Z. Li, and Y. He, “Co-ranking the future influence of multi-objects in bibliographic network through mutual reinforcement,” ACM Transactions on Intelligent Systems and Technology, vol. 7, no. 4, pp. 1–28, 2016.
 [53] D. Zhou, S. A. Orshanskiy, H. Zha, and C. L. Giles, “Co-ranking authors and documents in a heterogeneous network,” in IEEE International Conference on Data Mining. IEEE, 2007, pp. 739–744.
 [54] Z. Liu, H. Huang, X. Wei, and X. Mao, “TriRank: An authority ranking framework in heterogeneous academic networks by mutual reinforce,” in 2014 IEEE 26th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2014, pp. 493–500.
 [55] A. London, T. Németh, A. Pluhár, and T. Csendes, “A local PageRank algorithm for evaluating the importance of scientific articles,” Annales Mathematicae Et Informaticae, vol. 44, pp. 131–140, 2015.
 [56] X. Jiang, C. Gao, and R. Liang, “Ranking scientific articles in a dynamically evolving citation network,” in International Conference on Semantics, Knowledge and Grids. IEEE, 2017, pp. 154–157.
 [57] N. Shah and Y. Song, “S-index: Towards better metrics for quantifying research impact,” arXiv preprint arXiv:1507.03650, pp. 1–10, 2015.
 [58] D. Bouyssou and T. Marchant, “Ranking authors using fractional counting of citations: An axiomatic approach,” Journal of Informetrics, vol. 10, no. 1, pp. 183–199, 2016.
 [59] T. Marchant, “Score-based bibliometric rankings of authors,” Journal of the American Society for Information Science and Technology, vol. 60, no. 6, pp. 1132–1137, 2009.
 [60] J. Stallings, E. Vance, J. Yang, M. W. Vannier, J. Liang, L. Pang, L. Dai, I. Ye, and G. Wang, “Determining scientific impact using a collaboration index,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 24, pp. 9680–9685, 2013.
 [61] A. Usmani and A. Daud, “Unified author ranking based on integrated publication and venue rank,” International Arab Journal of Information Technology (IAJIT), vol. 14, no. 1, pp. 111–118, 2017.
 [62] C. Zhang, L. Yu, X. Zhang, and N. V. Chawla, “Task-guided and semantic-aware ranking for academic author-paper correlation inference,” in IJCAI, 2018, pp. 3641–3647.
 [63] U. Senanayake, M. Piraveenan, and A. Zomaya, “The PageRank-index: Going beyond citation counts in quantifying scientific impact of researchers,” PLoS ONE, vol. 10, no. 8, p. e0134794, 2015.
 [64] M. Dunaiski, J. Geldenhuys, and W. Visser, “Author ranking evaluation at scale,” Journal of Informetrics, vol. 12, no. 3, pp. 679–702, 2018.
 [65] R. Liang and X. Jiang, “Scientific ranking over heterogeneous academic hypernetwork,” in Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, 2016, pp. 20–26.
 [66] Y. Dong, R. A. Johnson, and N. V. Chawla, “Will this paper increase your h-index?: Scientific impact prediction,” in Eighth ACM International Conference on Web Search and Data Mining. ACM, 2015, pp. 149–158.
 [67] J. Zhang, Z. Ning, X. Bai, X. Kong, J. Zhou, and F. Xia, “Exploring time factors in measuring the scientific impact of scholars,” Scientometrics, vol. 112, no. 3, pp. 1301–1321, 2017.
 [68] H. W. Shen and A. L. Barabási, “Collective credit allocation in science,” Proceedings of the National Academy of Sciences of the United States of America, vol. 111, no. 34, pp. 12325–12330, 2014.
 [69] P. Deville, D. Wang, R. Sinatra, C. Song, V. D. Blondel, and A. L. Barabási, “Career on the move: Geography, stratification, and scientific impact,” Scientific Reports, vol. 4, no. 1, p. 4770, 2014.
 [70] A. Mas-Bleda, M. Thelwall, K. Kousha, and I. F. Aguillo, “Do highly cited researchers successfully use the social web?” Scientometrics, vol. 101, no. 1, pp. 337–356, 2014.
 [71] B. H. Jin, L. M. Liang, R. Rousseau, and L. Egghe, “The R- and AR-indices: Complementing the h-index,” Science Bulletin, vol. 52, no. 6, pp. 855–863, 2007.
 [72] C. T. Zhang, “A novel triangle mapping technique to study the h-index based citation distribution,” Scientific Reports, vol. 3, no. 1, p. 1023, 2013.
 [73] S. N. Dorogovtsev and J. F. F. Mendes, “Ranking scientists,” Nature Physics, vol. 11, no. 11, pp. 882–884, 2015.
 [74] G. Wang and J. Yang, “Axiomatic quantification of coauthors’ relative contributions,” arXiv preprint arXiv:1003.3362, pp. 1–17, 2010.
 [75] M. Farooq, H. U. Khan, S. Iqbal, E. U. Munir, and A. Shahzad, “DS-index: Ranking authors distinctively in an academic network,” IEEE Access, vol. 5, pp. 19588–19596, 2017.
 [76] R. Sinatra, D. Wang, P. Deville, C. Song, and A. L. Barabási, “Quantifying the evolution of individual scientific impact,” Science, vol. 354, no. 6312, p. aaf5239, 2016.
 [77] L. Waltman and N. J. van Eck, “Field-normalized citation impact indicators and the choice of an appropriate counting method,” Journal of Informetrics, vol. 9, no. 4, pp. 872–894, 2015.
 [78] F. Radicchi, S. Fortunato, and C. Castellano, “Universality of citation distributions: Toward an objective measure of scientific impact,” Proceedings of the National Academy of Sciences of the United States of America, vol. 105, no. 45, pp. 17268–17272, 2008.
 [79] J. Kaur, F. Radicchi, and F. Menczer, “Universality of scholarly impact metrics,” Journal of Informetrics, vol. 7, no. 4, pp. 924–932, 2013.
 [80] H. Lima, T. H. P. Silva, M. M. Moro, R. L. T. Santos, W. Meira, and A. H. F. Laender, “Aggregating productivity indices for ranking researchers across multiple areas,” in ACM/IEEE-CS Joint Conference on Digital Libraries. ACM, 2013, pp. 97–106.
 [81] Y. B. Zhou, L. Lü, and M. Li, “Quantifying the influence of scientists and their publications: Distinguish prestige from popularity,” New Journal of Physics, vol. 14, no. 3, pp. 33033–33049, 2012.
 [82] Y. Ding, E. Yan, A. Frazho, and J. Caverlee, “PageRank for ranking authors in co-citation networks,” Journal of the American Society for Information Science and Technology, vol. 60, no. 11, pp. 2229–2243, 2009.
 [83] E. Yan, Y. Ding, and C. R. Sugimoto, “P-Rank: An indicator measuring prestige in heterogeneous scholarly networks,” Journal of the American Society for Information Science and Technology, vol. 62, no. 3, pp. 467–477, 2011.
 [84] T. Amjad, Y. Ding, A. Daud, J. Xu, and V. Malic, “Topic-based heterogeneous rank,” Scientometrics, vol. 104, no. 1, pp. 1–22, 2015.
 [85] D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent Dirichlet allocation,” Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
 [86] L. Li, X. Wang, Q. Zhang, P. Lei, M. Ma, and X. Chen, A quick and effective method for ranking authors in academic social network. Springer, 2014.
 [87] M. Nykl, M. Campr, and K. Ježek, “Author ranking based on personalized PageRank,” Journal of Informetrics, vol. 9, no. 4, pp. 777–799, 2015.
 [88] C. Bergstrom, “Measuring the value and prestige of scholarly journals,” College & Research Libraries News, vol. 68, no. 5, pp. 314–316, 2007.
 [89] B. González-Pereira, V. P. Guerrero-Bote, and F. Moya-Anegón, “A new approach to the metric of journals’ scientific prestige: The SJR indicator,” Journal of Informetrics, vol. 4, no. 3, pp. 379–391, 2010.
 [90] H. F. Moed, “Measuring contextual citation impact of scientific journals,” Journal of Informetrics, vol. 4, no. 3, pp. 265–277, 2010.
 [91] C. Chang and M. Mcaleer, “Ranking journal quality by harmonic mean of ranks: An application to ISI statistics & probability,” Statistica Neerlandica, vol. 67, no. 1, pp. 27–53, 2012.
 [92] S. Papavlasopoulos, M. Poulos, N. Korfiatis, and G. Bokos, “A non-linear index to evaluate a journal’s scientific impact,” Information Sciences, vol. 180, no. 11, pp. 2156–2175, 2010.
 [93] A. Serenko and N. Bontis, “What’s familiar is excellent: The impact of exposure effect on perceived journal quality,” Journal of Informetrics, vol. 5, no. 1, pp. 219–223, 2011.
 [94] C. F. Tsai, Y. H. Hu, and S. W. G. Ke, “A Borda count approach to combine subjective and objective based MIS journal rankings,” Online Information Review, vol. 38, no. 4, pp. 469–483, 2014.
 [95] S. D. Beets, A. S. Kelton, and B. R. Lewis, “An assessment of accounting journal quality based on departmental lists,” Scientometrics, vol. 102, no. 1, pp. 315–332, 2015.
 [96] L. Zarifmahmoudi, J. Jamali, and R. Sadeghi, “Google Scholar journal metrics: Comparison with impact factor and SCImago journal rank indicator for nuclear medicine journals,” Iranian Journal of Nuclear Medicine, vol. 23, no. 1, pp. 8–14, 2015.
 [97] C. F. Tsai, “Citation impact analysis of top ranked computer science journals and their rankings,” Journal of Informetrics, vol. 8, no. 2, pp. 318–328, 2014.
 [98] L. Bornmann, W. Marx, and H. Schier, “Hirsch-type index values for organic chemistry journals: A comparison of new metrics with the Journal Impact Factor,” European Journal of Organic Chemistry, vol. 2009, no. 10, pp. 1471–1476, 2009.
 [99] G. Setti, “Bibliometric indicators: Why do we need more than one?” IEEE Access, vol. 1, pp. 232–246, 2013.
 [100] M. R. Elkins, C. G. Maher, R. D. Herbert, A. M. Moseley, and C. Sherrington, “Correlation between the Journal Impact Factor and three other journal citation indices,” Scientometrics, vol. 85, no. 1, pp. 81–93, 2010.
 [101] P. Jacsó, “Differences in the rank position of journals by Eigenfactor metrics and the five-year impact factor in the Journal Citation Reports and the Eigenfactor Project web site,” Online Information Review, vol. 34, no. 3, pp. 496–508, 2010.
 [102] S. M. Gonzalez-Betancor and P. Dorta-Gonzalez, “An indicator of journal impact that is based on calculating a journal’s percentage of highly cited publications,” arXiv preprint arXiv:1510.03648, pp. 1–30, 2015.
 [103] G. F. Templeton and B. R. Lewis, “Fairness in the institutional valuation of business journals,” MIS Quarterly, vol. 39, no. 3, pp. 523–539, 2015.
 [104] A. Chatterjee, A. Ghosh, and B. K. Chakrabarti, “Universality of citation distributions for academic institutions and journals,” PLoS ONE, vol. 11, no. 2, p. e0148863, 2016.
 [105] E. H. Kao, C. H. Hsu, Y. Lu, and H. G. Fung, “Ranking of finance journals: A stochastic dominance analysis,” Managerial Finance, vol. 42, no. 4, pp. 312–323, 2016.
 [106] J. Bollen, M. A. Rodriguez, and H. Van de Sompel, “Journal status,” Scientometrics, vol. 69, no. 3, pp. 669–687, 2006.
 [107] A. Lim, H. Ma, Q. Wen, Z. Xu, and B. Cheang, “Distinguishing citation quality for journal impact assessment,” Communications of the ACM, vol. 52, no. 8, pp. 111–116, 2008.
 [108] F. Zhang, “Evaluating journal impact based on weighted citations,” Scientometrics, vol. 113, no. 2, pp. 1–15, 2017.
 [109] L. Bohlin, A. Viamontes Esquivel, A. Lancichinetti, and M. Rosvall, “Robustness of journal rankings by network flows with different amounts of memory,” Journal of the Association for Information Science and Technology, vol. 67, no. 10, pp. 2527–2535, 2016.
 [110] L. Cai, J. Tian, J. Liu, X. Bai, I. Lee, X. Kong, and F. Xia, “Scholarly impact assessment: A survey of citation weighting solutions,” Scientometrics, vol. 118, no. 2, pp. 453–478, 2019.
 [111] R. K. Pan, A. M. Petersen, F. Pammolli, and S. Fortunato, “The memory of science: Inflation, myopia, and the knowledge network,” Journal of Informetrics, vol. 12, no. 3, pp. 656–678, 2018.
 [112] K. W. Higham, M. Governale, A. Jaffe, and U. Zülicke, “Unraveling the dynamics of growth, aging and inflation for citations to scientific articles from specific research fields,” Journal of Informetrics, vol. 11, no. 4, pp. 1190–1200, 2017.
 [113] A. M. Petersen, R. K. Pan, F. Pammolli, and S. Fortunato, “Methods to account for citation inflation in research evaluation,” Research Policy, vol. 48, no. 7, pp. 1855–1865, 2019.
 [114] S. Tarkhan-Mouravi, “Traditional indicators inflate some countries’ scientific impact over 10 times,” Scientometrics, vol. 123, no. 1, pp. 337–356, 2020.