A shared latent space matrix factorisation method for recommending new trial evidence for systematic review updates

09/20/2017 ∙ by Didi Surian, et al. ∙ 0

Clinical trial registries can be used to monitor the production of trial evidence and signal when systematic reviews become out of date. However, this use has been limited to date due to the extensive manual review required to search for and screen relevant trial registrations. Our aim was to evaluate a new method that could partially automate the identification of trial registrations that may be relevant for systematic review updates. We identified 179 systematic reviews of drug interventions for type 2 diabetes, which included 537 clinical trials that had registrations in ClinicalTrials.gov. We tested a matrix factorisation approach that uses a shared latent space to learn how to rank relevant trial registrations for each systematic review, comparing its performance to that of document similarity measures. The two approaches were tested on a holdout set of the newest trials from the set of type 2 diabetes systematic reviews, and on an unseen set of 141 clinical trial registrations from 17 updated systematic reviews published in the Cochrane Database of Systematic Reviews. On the holdout set, the matrix factorisation approach outperformed the document similarity approach, with a median rank of 59 and recall@100 of 60.9%, compared to a median rank of 138 and recall@100 of 42.8% for the document similarity baseline. In the second set of systematic reviews and their updates, the highest performing approach used document similarity and gave a median rank of 67 (recall@100 of 62.9%). The results suggest that ranking trial registrations in this way could reduce the manual workload associated with finding relevant trials for systematic review updates, and that the approach could be used as part of a semi-automated pipeline for monitoring potentially new evidence for inclusion in a review update.




1 Background

Systematic reviews of clinical trials are at the foundation of evidence-based medicine and should represent comprehensive, high quality, and up to date syntheses of trial evidence. With the rapid growth of the scientific literature, identifying relevant evidence and keeping systematic reviews up to date is increasingly difficult. Studies examining the timing of systematic reviews suggest that reviews are updated on average every 5.5 years, though a substantial proportion should be updated within 2 years [1, 2, 3, 4]. Performing systematic reviews is time and resource intensive, and even determining when a systematic review needs updating often requires completion of the searching and screening steps of the systematic review process. To facilitate this assessment, a number of tools and guidelines have been developed that aim to identify when new relevant research becomes available, or estimate the risk that the results of the systematic review may have substantially changed due to new evidence [3, 4, 5, 6, 7, 8, 9, 10].

These approaches rely on bibliographic databases, which are limited due to publication and reporting biases that affect the timing and completeness of the results [11]. About half of all trials remain unpublished two years after trial completion, and of those that are published, around half have missing or changed outcomes [12, 13]. As a consequence, bibliographic databases may not provide a complete and timely source of relevant trial evidence for systematic reviews. As various policies and mandates are making prospective trial registration standard practice, clinical trial registries are an increasingly comprehensive and timely source of new research evidence, and in many cases may provide a more complete and less biased record than bibliographic databases [14]. However, the vast majority of methods aiming to support the identification of relevant studies for a systematic review operate over bibliographic databases rather than trial registries [15] and systematic reviews often fail to incorporate any clinical trial registries to identify relevant trials [16]. New methods for identifying relevant trials in clinical trial registries could help determine when systematic reviews need to be updated and support living systematic reviews and automated systematic review updates [17, 18, 19].

Our aim was to evaluate a new method to partially automate the identification of trials that may be relevant for systematic review updates given the existing trials in a systematic review. This process could serve to signal when a systematic review becomes out of date, based on the amount and type of new evidence that is detected.

2 Related Work

A number of semi-automated methods have been proposed to identify relevant trials for inclusion in systematic reviews and improve the efficiency of the searching and screening processes [15, 20, 21]. The methods typically use the words or concepts included in the text of published articles to find similarities that are then used to distinguish relevant from irrelevant articles. Some work has also been done to directly extract information on populations, interventions, comparators, and outcomes [22, 23], which can then be used to match search queries. Several approaches have included the use of active learning [24, 25], while others have examined representations that use neural network based vector space models [26]. Far less work has been performed on identifying trials from the information stored in clinical trial registries or on linking clinical trial registries to bibliographic databases [14, 27, 28, 29]. However, some methods have shown that it is possible to identify meaningful clusters of similar trials within registries [30, 31, 32], especially in relation to populations [33], and ClinicalTrials.gov data has been used in predicting black box warnings [34].

Matrix factorisation has the potential to support the identification of relevant trials for inclusion in systematic reviews. The approach has a long history of use in addressing problems in link prediction [35, 36, 37], for example in building systems that recommend books, music, or new social connections to users. This process, commonly referred to as “the item prediction problem”, aims to predict the presence or absence of links in a graph whose vertices represent users and items, and whose edges are weighted according to preference scores. Matrix factorisation maps users and items into a low-dimensional representation (latent factors) to model user-item affinity in a vector space.

Past work on the use of matrix factorisation for collaborative filtering focused on increasing prediction accuracy by including neighbourhood information [38]. Later, Koren et al. [39] proposed SVD++, a matrix factorisation approach that unified neighbourhood and latent factor models. Guo et al. [40] proposed TrustSVD, an extension of SVD++ that incorporates social trust information to help mitigate data sparsity and the cold start problem. TrustSVD factorises two matrices that share the same latent space: a matrix of user-item preference scores and a matrix that defines trust relationships among users.

Our proposed matrix factorisation approach shares structural similarity with TrustSVD but applies different regularisation methods to adapt to weighted links between systematic reviews and trial registrations. To the best of our knowledge, this is the first study to address the problem of recommending trial registrations for systematic review updates using matrix factorisation with a shared latent space.

3 Materials and Methods

3.1 Study data

We searched PubMed and Embase for systematic reviews of drugs used to treat type 2 diabetes. The final search was performed on 26 March 2017, using a search strategy that included terms for type 2 diabetes, type 2 diabetes interventions, and publication type information. Systematic reviews were considered for inclusion in the experiments if they were focused on populations with type 2 diabetes and included at least one meta-analysis (Figure 1). Reviews that summarised or synthesised meta-analyses from other reviews were excluded, as were reviews that did not include full details of a search strategy and did not include at least one meta-analysis of a safety or efficacy outcome. Reviews that also included other populations were included only if they had sub-group analyses that were specific to type 2 diabetes.

After excluding systematic reviews that did not have links to at least 4 trial registrations, we had 179 systematic reviews with 4,447 links to 537 unique trial registrations. For each systematic review, we ordered the registrations by trial completion date to use older trials for training and newer trials for testing, simulating the way new trials might be added to existing systematic reviews. Systematic reviews had a median of 14 links to trial registrations (ranging from 4 to 147).

Fig. 1: From 2,854 unique articles identified in the search, 230 systematic reviews of type 2 diabetes were identified and 179 were included in the experiments.

We accessed ClinicalTrials.gov on May 16, 2017 to create a static local version of information from 128,392 completed trials in order to produce consistent results across all experiments. Information on the brief and official titles, detailed description, inclusion criteria, and intervention names were extracted for use in the experiments.

To test the methods in a general scenario, we additionally identified 17 systematic reviews that were published in the Cochrane Database of Systematic Reviews on any topic. To be included in the experiments, they must have either had an update published with a new search performed before 25 April 2017, or have listed at least three ongoing studies in ClinicalTrials.gov in the published review. We then used the set of included trials as the training set, and the new trials added to the update (or listed as ongoing trials) as the test set. There were 72 unique trials in the original reviews and 69 unique trials in the updates and lists of ongoing relevant trials. The number of trial registrations per systematic review was between 1 and 14 in the original versions and between 1 and 13 in the updates.

3.2 Feature representations

Each of the trial registrations was considered as a single document and treated as a bag of words (order was not considered). Each word in the text was converted to lowercase, standard stop words were excluded, the Porter stemmer [41] was applied, and words present in fewer than five trial registrations were excluded. Each trial registration was represented by the set of extracted words from the text after the pre-processing step.

We used multiple feature representations for each trial registration. First, a binary vector representation was created to indicate whether a word feature is present or absent in the text of a trial registration. Secondly, a word frequency vector representation was used to capture the number of times a word feature appears in the text of a trial registration. Lastly, we used a term frequency-inverse document frequency (tf-idf) vector representation, where the entry of the vector is the tf-idf score of a word feature. In the experiments that follow, we refer to these three feature representations as full-dimension feature representations.
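As a concrete sketch of these three representations, they can be computed from pre-processed token lists as follows; the toy texts, stemmed tokens, and helper names below are illustrative only, and the study's filter excluding words found in fewer than five registrations is omitted for brevity:

```python
import math
from collections import Counter

# Toy pre-processed registration texts (already lower-cased, stemmed, stop words removed).
docs = [
    ["metformin", "glycaem", "control", "diabet"],
    ["insulin", "glargin", "glycaem", "control", "diabet"],
    ["metformin", "insulin", "combin", "diabet", "trial"],
]

vocab = sorted({w for d in docs for w in d})
df = Counter(w for d in docs for w in set(d))  # document frequency of each word
n_docs = len(docs)

def binary_vec(doc):
    """1 if the word appears in the registration text, else 0."""
    words = set(doc)
    return [1 if w in words else 0 for w in vocab]

def count_vec(doc):
    """Number of times each word appears in the registration text."""
    c = Counter(doc)
    return [c[w] for w in vocab]

def tfidf_vec(doc):
    """Term frequency weighted by inverse document frequency."""
    c = Counter(doc)
    return [c[w] * math.log(n_docs / df[w]) for w in vocab]
```

Note that a word appearing in every document (here, "diabet") receives a tf-idf weight of zero, since its inverse document frequency is log(1) = 0.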

We used Principal Component Analysis (PCA) and Latent Dirichlet Allocation (LDA) to reduce the dimensionality of the feature spaces. PCA transforms data into a lower-dimensional space defined by linearly uncorrelated components that maximise the variance. We used the implementation of incremental PCA from scikit-learn with its default settings, and tested it with 20, 50, 100, 200, 300, and 400 components on the tf-idf and word frequency vector representations. LDA is a technique that was first introduced to model topics across a set of documents [42, 43]. It uses co-occurrence of words to find latent structures, or topics, in a corpus of documents; a topic is represented by a distribution of word probabilities. Using LDA, a trial registration is represented by a distribution of topics, where the number of topics is set by a parameter. For LDA, we used the gensim implementation with standard settings [44], and tested it with 20, 50, 100, 200, 300, and 400 topics. We refer to the feature representations using PCA and LDA as reduced-dimension feature representations.
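The study used scikit-learn's incremental PCA and gensim's LDA. As a dependency-light illustration of the PCA step only, a lower-dimensional projection that maximises variance can be obtained from the singular value decomposition of the centred feature matrix; the data, dimensions, and function name here are toy choices, not study settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 12))  # 6 trial registrations x 12 tf-idf features (toy)

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components
    (the directions of maximal variance)."""
    Xc = X - X.mean(axis=0)                 # centre each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # reduced-dimension representation

Z = pca_reduce(X, 3)
```

The columns of the reduced representation are ordered by decreasing variance, so truncating to the first few components keeps as much of the variance as a linear projection can.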

Given a systematic review, one approach to identify relevant trial registrations is to rank all trial registrations according to their similarity with the set of trial registrations for trials included in the review. To determine similarity, we used three document similarity measures: cosine similarity, Euclidean distance, and squared Euclidean distance. These measures were used as a baseline approach against which the matrix factorisation approach was tested. The overall process is illustrated in Figure 2.


Fig. 2: High-level view of ranking process for trial inclusion in systematic review updates.
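A minimal sketch of this baseline using cosine similarity follows; aggregating by the maximum similarity over a review's included trials is an assumption made for illustration, not a detail taken from the study:

```python
import numpy as np

def rank_candidates(review_vecs, candidate_vecs):
    """Rank candidate registrations by cosine similarity to the registrations
    already included in a systematic review (highest similarity first)."""
    def normalise(M):
        return M / np.linalg.norm(M, axis=1, keepdims=True)
    # Cosine similarity between every candidate and every included trial.
    sims = normalise(candidate_vecs) @ normalise(review_vecs).T
    scores = sims.max(axis=1)       # similarity to the closest included trial
    return np.argsort(-scores), scores
```

Calling `rank_candidates` with the feature vectors of a review's included trials and the full candidate pool yields the ranked list against which the test-set links are scored.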

3.3 Matrix factorisation with a shared latent space

Matrix factorisation decomposes a matrix into products of matrices. In our approach, we combine two available sources of information: the text from each trial registration and the links between the trial registrations and the systematic reviews in which they were included. The approach integrates both sources of information via a shared latent space in the learning process to rank all other trial registrations relative to the systematic reviews.

Given N trial registrations and M systematic reviews, each trial registration is represented by an F-dimensional feature vector extracted from its text and an M-dimensional binary vector denoting the absence or presence of a link between the trial registration and each systematic review. From these we created two matrices: a matrix of trial registrations-features, D ∈ ℝ^(N×F); and a matrix of trial registrations-systematic reviews, R ∈ {0,1}^(N×M). Given K latent factors (where K ≪ F and K ≪ M), matrices D and R are then decomposed into the following:

D ≈ UV^T,  R ≈ UW^T

In these equations, U ∈ ℝ^(N×K) is a matrix of trial registration-latent factors, V ∈ ℝ^(F×K) is a matrix of trial registration feature-latent factors, and W ∈ ℝ^(M×K) is a matrix of systematic review-latent factors (Figure 3).

Fig. 3: The illustration of matrix factorisation for (a) the matrix of trial registrations-features; and (b) the matrix of trial registrations-systematic reviews.

Note that the matrix U is shared and used to connect both D and R in the decomposition process. In other words, the matrix U will describe how trial registrations are associated with systematic reviews based on the features of trial registrations. The goal of matrix factorisation is then to learn matrices U, V, and W. Once the matrices are learnt, their respective products approximate values in matrices D and R; i.e. D ≈ UV^T and R ≈ UW^T. For the unknown values in matrix R, where link information between trial registrations and systematic reviews is not present, the respective approximated values in UW^T are used as a measure of similarity that can then be used to rank unknown trial registrations for each systematic review.

Because the matrix of trial registrations-features, D, is decomposed into the matrix of trial registration-latent factors U and the matrix of trial registration feature-latent factors V, the loss function that we need to minimise is as follows [37, 40]:

L_D = (1/2) ∑_{i=1}^{N} ∑_{j=1}^{F} (d_ij − u_i v_j^T)^2 + (λ/2)(‖U‖_F^2 + ‖V‖_F^2)   (1)

where ‖·‖_F represents the Frobenius norm, λ is the regularisation parameter, d_ij is the entry of matrix D in row i and column j, and u_i and v_j represent the K-dimensional latent vectors of trial registration i and feature j respectively.

In a similar way, the matrix of trial registrations-systematic reviews, R ∈ {0,1}^(N×M), is decomposed into the matrix of trial registration-latent factors U and the matrix of systematic review-latent factors W, and the loss function to minimise is as follows:

L_R = (1/2) ∑_{i=1}^{N} ∑_{j∈R(i)} (r_ij − u_i w_j^T)^2 + (λ/2)(‖U‖_F^2 + ‖W‖_F^2)   (2)

where R(i) represents the set of systematic reviews that trial registration i belongs to, r_ij is the entry of matrix R in row i and column j, and u_i and w_j represent the K-dimensional latent vectors of trial registration i and systematic review j respectively.

Because the matrix U is shared in Equations (1) and (2), the new objective function to minimise is then given as follows:

L = L_D + L_R   (3)

We performed gradient descent on U, V, and W based on the final loss function in Equation (3), using the following update rules:

u_i ← u_i + γ ( ∑_j e^D_ij v_j + ∑_{j∈R(i)} e^R_ij w_j − λ u_i )
v_j ← v_j + γ ( ∑_i e^D_ij u_i − λ v_j )
w_j ← w_j + γ ( ∑_{i: j∈R(i)} e^R_ij u_i − λ w_j )

where γ is the learning rate, e^D_ij = d_ij − u_i v_j^T represents the feature prediction error for trial registration i on feature j, and e^R_ij = r_ij − u_i w_j^T represents the link prediction error for trial registration i and systematic review j.
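The update rules above can be sketched in a few lines of NumPy. This is a toy full-batch illustration, not the authors' Cython implementation; all dimensions, seeds, and hyperparameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

N, F, M, K = 8, 10, 4, 3                       # trials, features, reviews, latent factors
D = rng.random((N, F))                         # trial-feature matrix
R = (rng.random((N, M)) < 0.4).astype(float)   # trial-review link matrix
observed = R > 0                               # only known links enter the link loss

gamma, lam = 0.01, 0.01                        # learning rate and regularisation
U = 0.1 * rng.random((N, K))
V = 0.1 * rng.random((F, K))
W = 0.1 * rng.random((M, K))

rmse_before = np.sqrt(((D - U @ V.T) ** 2).mean())

for _ in range(500):
    E_D = D - U @ V.T                          # feature prediction error
    E_R = np.where(observed, R - U @ W.T, 0.0) # link prediction error (known links only)
    U += gamma * (E_D @ V + E_R @ W - lam * U) # shared latent space gets both signals
    V += gamma * (E_D.T @ U - lam * V)
    W += gamma * (E_R.T @ U - lam * W)

rmse_after = np.sqrt(((D - U @ V.T) ** 2).mean())
scores = U @ W.T   # approximated link scores used to rank unknown registrations
```

Masking unobserved entries of R to zero error is how this sketch restricts the link loss to the known links, mirroring the sum over the set of reviews each trial belongs to.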

We used a maximum of 5,000 iterations for the learning process. We calculated the root-mean-square error (RMSE) at each iteration and returned the results from the iteration where the RMSE was lowest. We implemented our matrix factorisation approach in Cython. We performed the experiments with the learning rate set to 0.001, the regularisation parameter set to 0.01, and the number of latent factors set to 5, 10, 20, 30, 40, and 50. The source code of the proposed matrix factorisation approach has been made available via a repository (https://github.com/dsurian/matfac).

3.4 Experiments and outcome performance measures

For each systematic review, we split the set of links between included trials and systematic reviews into training and test sets with a minimum of 3 links included in the training for each systematic review, and kept this set constant for all experiments. Because systematic reviews had a different number of included trials, the training set included between 3 and 89 links (a median of 9) per systematic review, and a test set with between 1 and 58 links (a median of 5) per systematic review. In the Cochrane systematic reviews, the training set comprised the set of trials included in the systematic reviews (between 1 and 14 links) and the test set comprised the set of trials included in the update of the systematic reviews (between 1 and 13 links).
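The chronological split can be sketched as follows; the trial IDs and dates are hypothetical, and holding out everything beyond the minimum training size is a simplification, since the actual training sets ranged up to 89 links per review:

```python
from datetime import date

# Hypothetical (trial_id, completion_date) pairs for one systematic review.
included = [
    ("NCT001", date(2005, 3, 1)),
    ("NCT002", date(2007, 6, 1)),
    ("NCT003", date(2009, 1, 1)),
    ("NCT004", date(2011, 9, 1)),
    ("NCT005", date(2014, 2, 1)),
]

def split_by_completion(trials, min_train=3):
    """Oldest trials (at least min_train of them) form the training set and the
    newest form the test set, simulating trials arriving after publication."""
    ordered = sorted(trials, key=lambda t: t[1])
    return ordered[:min_train], ordered[min_train:]

train_links, test_links = split_by_completion(included)
```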

In each experiment and for each systematic review we produced a ranking for all 128,392 completed ClinicalTrials.gov registrations, and identified where the test links for that systematic review were found in the rankings. For each experiment, the aim was to rank the links from the test set as high as possible within the full set of candidates. The links corresponding to the training set were not included in the ranking and did not contribute to the performance measures.

We calculated the median rank of the trials in the test set by aggregating the ranks of all of the links in the test set and taking the median (i.e. each trial-systematic review link contributed one rank, and the median was calculated across all of these ranks). Using this approach, if we were to assign a random rank to all of the trials in the test set, the median rank would converge to approximately 64,196 (half of the 128,392 candidates).

We evaluated the proportion of links in the test set that were included in the first 100 candidates in each of the ranked lists (recall@100), and produced a function for the general version of the same measure (recall@N). The recall@N function can be used to determine the number of candidates that need to be examined per systematic review to identify a given proportion of relevant trial registrations. We also included the work saved over sampling (WSS) [20], which measures the reduction in screening workload required to retrieve the relevant trials. We measured WSS at 95% recall (WSS@95%) by calculating the proportion of registrations that did not need to be screened once at least 95% of the registrations in the test set had been identified in the ranked candidate list.
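The three outcome measures can be computed from the ranks of the test-set links with a few stdlib helpers; the toy ranks below are illustrative, and the WSS@95% formula here is a direct reading of the description above rather than the study's exact code:

```python
import math

def median(ranks):
    """Median of a list of ranks."""
    s = sorted(ranks)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

def recall_at(ranks, n):
    """Proportion of relevant trial registrations ranked within the top n."""
    return sum(r <= n for r in ranks) / len(ranks)

def wss_at_recall(ranks, n_candidates, target=0.95):
    """Work saved over sampling: share of candidates that never need screening
    once screening stops at the rank achieving the target recall."""
    s = sorted(ranks)
    cutoff = s[math.ceil(target * len(s)) - 1]  # rank at which target recall is reached
    return 1 - cutoff / n_candidates

# Toy ranks of five test-set links within a list of 1,000 candidates.
ranks = [3, 10, 40, 120, 500]
```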

To evaluate the matrix factorisation and document similarity approaches against an appropriate baseline, we undertook a manual search of ClinicalTrials.gov using the Advanced Search function, which ranks trials based on a relevance score. One investigator (AD) replicated the search strategies of the reviews in the format of population, intervention, and outcome (as required), and the returned search results were used as a ranked set. These results were evaluated using the same outcome measures.

4 Results

We identified 40,049 unique words in the vocabulary extracted from the processed text of trial registrations. For the binary and word frequency vector representations, the number of features that were shared between training and test sets was 2,367, comprising 5.9% of the 40,049 words (Figure 4).

Fig. 4: The distribution of the frequency with which each word feature was found: (a) at least once in any trial registration; (b) at least once in the 537 type 2 diabetes trial registrations; and (c) shared across the training and test sets for each of the 179 systematic reviews.

For the document similarity methods, the full set of word features with tf-idf weights consistently produced higher levels of performance (Table I). In the best performing methods, half of the correct links could be identified by checking 138 candidates (i.e., median rank of 138), and 42.8% of correct links could be identified by checking 100 candidates. Compared to the manual search, the results from the document similarity method were 9.2% lower for the median rank and 33.3% higher for recall@100.

The best performing matrix factorisation approach produced a recall@100 of 60.9% (and a median rank of 59), which was achieved with the LDA feature representation using 200 topics, and 50 latent factors. The median rank of the best-performing matrix factorisation approach was 57.2% lower and the recall@100 was 42.3% higher than the best performing document similarity approach, and both methods gave a similar WSS@95%. Compared to the manual search, the results from the matrix factorisation were 61.2% lower for the median rank and 89.7% higher for the recall@100. The performance of matrix factorisation using the full-dimension feature representations was consistently lower than the performance of document similarity.

Method                                        WSS@95%   Median rank   Recall@100
Manual Search                                  97.0%        152          32.1%
Document Similarity
    Full tf-idf weights
        Euclidean distance                     99.5%        139          42.6%
        Squared Euclidean distance             99.5%        138          42.8%
        Cosine similarity                      99.5%        154          41.7%
    PCA (400 components), tf-idf weights
        Euclidean distance                     95.5%        329.5        29.4%
        Squared Euclidean distance             95.4%        341          27.4%
        Cosine similarity                      99.1%        195          37.0%
    LDA (400 topics)
        Euclidean distance                     94.0%        363          23.4%
        Squared Euclidean distance             93.5%        409.5        22.2%
        Cosine similarity                      98.0%        433          20.4%
Matrix Factorisation
    PCA (300 components), tf-idf weights
        5 latent factors                       99.4%         76          57.6%
        10 latent factors                      99.3%         75          57.6%
        20 latent factors                      99.4%         75          57.7%
        30 latent factors                      99.4%         76          58.0%
        40 latent factors                      99.3%         74          58.1%
        50 latent factors                      99.4%         74          58.6%
    LDA (400 topics)
        5 latent factors                       96.2%         78          57.1%
        10 latent factors                      99.0%         75.5        57.6%
        20 latent factors                      99.2%         77          57.4%
        30 latent factors                      99.2%         74          57.9%
        40 latent factors                      99.2%         59          60.7%
        50 latent factors                      99.2%         59          60.9%
TABLE I: The results from the manual search compared to the highest performing document similarity methods and matrix factorisation.

Using the matrix factorisation approach, a user would need to examine the top 400 candidates to find 90% of the test set, and after examining the top 700 candidates, approximately 95% of the test set would have been found (Figure 5).

Fig. 5: Recall@N for various number of candidates that were needed to be checked before the trial registrations included in the type 2 diabetes systematic reviews were found among the set of 128,392 trial registrations. The horizontal dashed-lines mark the positions where recall 90% and 95% were achieved.

4.1 Tests in a general scenario

In the general scenario, which had fewer training examples per review, the document similarity baseline outperformed the matrix factorisation approach. The best-performing document similarity method produced a median rank of 67 (recall@100: 62.9%) using all word features with tf-idf weights and cosine similarity, while the best-performing matrix factorisation approach produced a median rank of 5,124 (recall@100: 1.4%) using LDA with 20 topics and 30 latent factors.

The performance of the matrix factorisation method was lower in the general scenario than in the type 2 diabetes examples, and lower than the document similarity methods. To investigate possible reasons, we looked at how the results varied with the number of training examples in the type 2 diabetes systematic reviews (Figure 6). The results indicate that the document similarity method degrades as the number of examples decreases, while the matrix factorisation tends to improve or maintain its performance as the number of training examples increases (see Discussion and Figure 7).

Fig. 6: Changes in the recall@100 relative to the number of training examples for (a) the highest-performing matrix factorisation method; and (b) the highest-performing document similarity method when applied to the 179 type 2 diabetes reviews. The size of each black dot reflects the number of systematic reviews at that point, and the moving average (window of 7) with its exact 95% confidence interval is shown in red as a guide.

5 Discussion

Our results indicate that a shared latent space matrix factorisation method can support the identification of trial registrations that are relevant to a systematic review. Both the matrix factorisation and document similarity methods could be used together as part of a process to signal when a systematic review is likely to be out of date.

5.1 Comparisons with existing research

This is the first study to evaluate matrix factorisation for the purpose of finding new relevant trials from ClinicalTrials.gov for systematic review updates. In terms of manual screening requirements, our findings show that the performance of our methods compares favourably to similar methods that operate over bibliographic databases. While the results of studies testing semi-automated screening of published articles for inclusion in systematic reviews are not directly comparable, standard tools in this area use active learning to avoid screening 80% of articles to reach 95% recall [20, 25, 26]. In our experiments, we were able to reach 95% recall after screening approximately 700 candidates (avoiding 99.5% of the candidate registrations).

A recent study using citation links trained on 23 systematic reviews found that to reach 75% recall, the precision dropped to 3.6% [45], retrieving 300 correct citations after screening 8,298 articles. For our approach trained on 179 systematic reviews, 75% recall was reached after screening fewer than 200 trial registrations.

In another recent example, investigators used generalized linear models and gradient boosting machines on 3 systematic reviews, and needed to screen between 192 and 2,112 articles to reach 96% recall 

[46]. The approach is similar to ours in that it requires knowing which studies were included in existing systematic reviews. However, the approach appears to have used between 6,502 and 41,066 negative training examples and between 55 and 356 positive training examples. By comparison, our approach demonstrated that investigators could screen a similar number of trial registrations to reach the same level of recall, but using fewer positive training examples with no negative training examples. This was possible because the matrix factorisation approach we proposed takes advantage of the latent structure of the set of other training examples and is therefore expected to improve with time as more training examples become available.

5.2 Implications and recommendations

As policies and practice around the prospective registration and structured results reporting of clinical trials continue to expand, use of trial registries in the production of systematic reviews will likely increase. An important benefit will include earlier identification of when the evidence summarised in a systematic review has become out of date. Tools that automate the identification of relevant trials, or dramatically reduce the human workload required in trial selection, could be applied to provide rapid estimates of how much of the available evidence is covered in a published systematic review, thus signalling the need for an update. If an update cannot be produced in a specified time, the systematic review could be flagged as potentially out of date with reference to the new research, in order to inform clinicians and consumers using systematic reviews in their care decisions.

Our results suggest that the matrix factorisation approach works best when there are more trials for training from similar systematic reviews. While the document similarity approach only uses included trials from one systematic review to find close examples of other trial registrations, the matrix factorisation method uses information from other nearby systematic reviews to learn how to rank registrations. Figure 7 is a t-SNE visualisation [47] of the 128,392 completed trial registrations, illustrating the differences between the 537 trials included in the 179 type 2 diabetes systematic reviews (orange) and the 141 trials included in the 17 Cochrane systematic reviews (blue). In the figure, the position is determined by mapping the feature space into two dimensions such that similar trials are located close together. The matrix factorisation is better able to learn the features that reproduce the known links available in the orange region of this space, and struggles to learn the important features for the Cochrane reviews because the known links are more sparsely distributed across the 128,392 completed trial registrations from ClinicalTrials.gov.

Fig. 7: The t-SNE visualisation of the trial registration space constructed from the PCA feature representation. The 537 unique trial registrations included in the type 2 diabetes reviews (orange), and the 141 unique trial registrations in the Cochrane reviews (blue) are highlighted among the set of 128,392 trial registrations (grey).

The results of the experiments suggest that the document similarity and matrix factorisation approaches can replace the need to undertake a search of trial registrations as well as improve the efficiency of screening for relevant trials. This suggests that the two could be used as complementary solutions as part of a pipeline of methods for automatically identifying trials that may be relevant to the update of a systematic review, or used to support updates of living systematic reviews [17].

5.3 Limitations

The study has several limitations. First, the set of systematic reviews used in the training and validation of the methods are related to drug interventions in type 2 diabetes, an area with a substantial number of new drugs and a large volume of new trials. This means that the results may not generalise to clinical application domains with fewer trials available [48], or where trial descriptions are more heterogeneous. Second, our results may underestimate the performance of both the baseline document similarity approach and the matrix factorisation approach because we only used known links to test the performance, and other highly-ranked candidates, especially those completed since the systematic reviews were published, may also have been relevant but were not included in the systematic review.

6 Conclusion

The use of clinical trial registries to monitor which systematic reviews require updating, and when, has been limited by the extensive manual processes required to identify relevant trial registrations. To date, semi-automated article screening methods used in bibliographic databases have not been extended to trial registries. We found that a matrix factorisation method can be used to rank trial registrations such that 75% of relevant trial registrations will appear within the top 200 candidates. The matrix factorisation approach for identifying trials relevant to a systematic review update is likely to be most useful in practice if implemented in a pipeline or in combination with other methods designed for scenarios where no or few relevant trial registrations are known.


FB and AD report funding from the Agency for Healthcare Research and Quality (R03HS024798).


  • [1] K. G. Shojania, M. Sampson, M. T. Ansari, J. Ji, S. Doucette, and D. Moher, “How quickly do systematic reviews go out of date? A survival analysis,” Ann Intern Med, vol. 147, pp. 224–233, 2007.
  • [2] W. Jaidee, D. Moher, and M. Laopaiboon, “Time to update and quantitative changes in the results of Cochrane pregnancy and childbirth reviews,” PLoS One, vol. 5, p. e11553, 2010.
  • [3] C. Garritty, A. Tsertsvadze, A. C. Tricco, M. Sampson, and D. Moher, “Updating systematic reviews: An international survey,” PLoS One, vol. 5, p. e9914, 2010.
  • [4] K. Peterson, M. S. McDonagh, and R. Fu, “Decisions to update comparative drug effectiveness reviews vary based on type of new evidence,” Journal of Clinical Epidemiology, vol. 64, pp. 977–984, 2011.
  • [5] M. Chung, S. J. Newberry, M. T. Ansari, W. W. Yu, H. Wu, J. Lee, M. Suttorp, J. M. Gaylor, A. Motala, D. Moher, E. M. Balk, and P. G. Shekelle, “Two methods provide similar signals for the need to update systematic reviews,” Journal of Clinical Epidemiology, vol. 65, pp. 660–668, 2012.
  • [6] P. Pattanittum, M. Laopaiboon, D. Moher, P. Lumbiganon, and C. Ngamjarus, “A comparison of statistical methods for identifying out-of-date systematic reviews,” PLoS One, vol. 7, p. e48894, 2012.
  • [7] Y. Takwoingi, S. Hopewell, D. Tovey, and A. Sutton, “A multicomponent decision tool for prioritising the updating of systematic reviews,” BMJ, vol. 347, p. f7191, 2013.
  • [8] N. Ahmadzai, S. J. Newberry, M. A. Maglione, A. Tsertsvadze, M. T. Ansari, S. Hempel, A. Motala, S. Tsouros, J. J. S. Chafen, R. Shanman, D. Moher, and P. G. Shekelle, “A surveillance system to assess the need for updating systematic reviews,” Systematic Reviews, vol. 2, p. 104, 2013.
  • [9] P. G. Shekelle, A. Motala, B. Johnsen, and S. J. Newberry, “Assessment of a method to detect signals for updating systematic reviews,” Systematic Reviews, vol. 3, p. 13, 2014.
  • [10] P. Garner, S. Hopewell, J. Chandler, H. MacLehose, E. A. Akl, J. Beyene, and et al., “When and how to update systematic reviews: Consensus and checklist,” BMJ, vol. 354, p. i3507, 2016.
  • [11] I. Chalmers and P. Glasziou, “Avoidable waste in the production and reporting of research evidence,” Lancet, vol. 374, pp. 86–89, 2009.
  • [12] K. Dwan, C. Gamble, P. R. Williamson, and J. J. Kirkham, “Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review,” PLoS One, vol. 8, p. e66844, 2013.
  • [13] K. Dickersin, “The existence of publication bias and risk factors for its occurrence,” JAMA, vol. 263, pp. 1385–1389, 1990.
  • [14] R. Bashir, F. T. Bourgeois, and A. G. Dunn, “A systematic review of the processes used to link clinical trial registrations to their published results,” Systematic Reviews, vol. 6, p. 123, 2017.
  • [15] A. O’Mara-Eves, J. Thomas, J. McNaught, M. Miwa, and S. Ananiadou, “Using text mining for study identification in systematic reviews: a systematic review of current approaches,” Systematic Reviews, vol. 4, p. 5, 2015.
  • [16] M. Baudard, A. Yavchitz, P. Ravaud, E. Perrodeau, and I. Boutron, “Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: Methodological systematic review and reanalysis of meta-analyses,” BMJ, vol. 356, p. j448, 2017.
  • [17] J. H. Elliott, T. Turner, O. Clavisi, J. Thomas, J. P. T. Higgins, C. Mavergames, and et al., “Living systematic reviews: An emerging opportunity to narrow the evidence-practice gap,” PLoS Medicine, vol. 11, p. e1001603, 2014.
  • [18] G. Tsafnat, P. Glasziou, M. K. Choong, A. G. Dunn, F. Galgani, and E. Coiera, “Systematic review automation technologies,” Systematic Reviews, vol. 3, p. 74, 2014.
  • [19] G. Tsafnat, A. G. Dunn, P. Glasziou, and E. Coiera, “The automation of systematic reviews,” BMJ, vol. 346, p. f139, 2013.
  • [20] A. M. Cohen, W. R. Hersh, K. Peterson, and P.-Y. Yen, “Reducing workload in systematic review preparation using automated citation classification,” JAMIA, vol. 13, pp. 206–219, 2006.
  • [21] Y. Aphinyanaphongs, I. Tsamardinos, A. Statnikov, D. Hardin, and C. F. Aliferis, “Text categorization models for high-quality article retrieval in internal medicine,” J Am Med Inform Assoc, vol. 12, pp. 207–216, 2005.
  • [22] H. Kim, J. Bian, J. Mostafa, S. Jonnalagadda, and G. D. Fiol, “Feasibility of extracting key elements from ClinicalTrials.gov to support clinicians’ patient care decisions,” AMIA Annual Symposium Proceedings, pp. 705–714, 2016.
  • [23] B. K. Olorisade, P. Brereton, and P. Andras, “Reproducibility of studies on text mining for citation screening in systematic reviews: Evaluation and checklist,” Journal of Biomedical Informatics, vol. 73, pp. 1–13, 2017.
  • [24] M. Miwa, J. Thomas, A. O’Mara-Eves, and S. Ananiadou, “Reducing systematic review workload through certainty-based screening,” Journal of Biomedical Informatics, vol. 51, pp. 242–253, 2014.
  • [25] B. C. Wallace, T. A. Trikalinos, J. Lau, C. Brodley, and C. H. Schmid, “Semi-automated screening of biomedical citations for systematic reviews,” BMC Bioinformatics, vol. 11, p. 55, 2010.
  • [26] K. Hashimoto, G. Kontonatsios, M. Miwa, and S. Ananiadou, “Topic detection using paragraph vectors to support active learning in systematic reviews,” Journal of Biomedical Informatics, vol. 62, pp. 59–65, 2016.
  • [27] V. Huser and J. Cimino, “Linking ClinicalTrials.gov and PubMed to track results of interventional human clinical trials,” PLoS One, vol. 8, p. e68409, 2013.
  • [28] V. Huser and J. Cimino, “Precision and negative predictive value of links between ClinicalTrials.gov and PubMed,” AMIA Annu Symp Proc, pp. 400–408, 2012.
  • [29] R. Bashir and A. G. Dunn, “Systematic review protocol assessing the processes for linking clinical trial registries and their published results,” BMJ Open, vol. 6, p. e013048, 2016.
  • [30] T. Hao, A. Rusanov, M. R. Boland, and C. Weng, “Clustering clinical trials with similar eligibility criteria features,” Journal of Biomedical Informatics, vol. 52, pp. 112–120, 2014.
  • [31] C. Weng, A. Yaman, K. Lin, and Z. He, “Trend and network analysis of common eligibility features for cancer trials in ClinicalTrials.gov,” in Proceedings of the International Conference for Smart Health (ICSH).   Springer International Publishing, 2014, pp. 130–141.
  • [32] M. R. Boland, R. Miotto, J. Gao, and C. Weng, “Feasibility of feature-based indexing, clustering, and search of clinical trials. a case study of breast cancer trials from ClinicalTrials.gov,” Methods of Information in Medicine, vol. 52, pp. 382–394, 2013.
  • [33] Z. He, S. Carini, I. Sim, and C. Weng, “Visual aggregate analysis of eligibility features of clinical trials,” Journal of Biomedical Informatics, vol. 54, pp. 241–255, 2015.
  • [34] H. Ma and C. Weng, “Prediction of black box warning by mining patterns of convergent focus shift in clinical trial study populations using linked public data,” Journal of Biomedical Informatics, vol. 60, pp. 132–144, 2016.
  • [35] M. Jamali and M. Ester, “A matrix factorization technique with trust propagation for recommendation in social networks,” in Proceedings of the fourth ACM conference on Recommender systems.   ACM New York, 2010, pp. 135–142.
  • [36] Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” IEEE Computer, p. 42, 2009.
  • [37] A. K. Menon and C. Elkan, “Link prediction via matrix factorization,” in Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD).   Springer, 2011, pp. 437–452.
  • [38] R. M. Bell, Y. Koren, and C. Volinsky, “Modeling relationships at multiple scales to improve accuracy of large recommender systems,” in Proceedings of the 13th International Conference on Knowledge Discovery and Data Mining (SIGKDD).   ACM, 2007, pp. 95–104.
  • [39] Y. Koren, “Factorization meets the neighborhood: A multifaceted collaborative filtering model,” in Proceedings of the 14th International Conference on Knowledge Discovery and Data Mining (SIGKDD).   ACM, 2008, pp. 426–434.
  • [40] G. Guo, J. Zhang, and N. Yorke-Smith, “TrustSVD: Collaborative filtering with both the explicit and implicit influence of user trust and of item ratings,” in Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015, pp. 123–129.
  • [41] M. F. Porter, “An algorithm for suffix stripping,” Readings in Information Retrieval, pp. 313–316, 1997.
  • [42] S. P. Crain, K. Zhou, S.-H. Yang, and H. Zha, “Dimensionality reduction and topic modeling: From latent semantic indexing to latent dirichlet allocation and beyond,” in Mining Text Data.   Springer US, 2012, pp. 129–161.
  • [43] D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent Dirichlet Allocation,” Journal of Machine Learning Research (JMLR), vol. 3, pp. 993–1022, 2003.
  • [44] Y. Lu, Q. Z. Mei, and C. X. Zhai, “Investigating task performance of probabilistic topic models: An empirical study of PLSA and LDA,” Inform Retrieval, vol. 14, pp. 178–203, 2011.
  • [45] C. W. Belter, “A relevance ranking method for citation-based search results,” Scientometrics, vol. 112, pp. 731–746, 2017.
  • [46] P. G. Shekelle, K. Shetty, S. Newberry, M. Maglione, and A. Motala, “Machine learning versus standard techniques for updating searches for systematic reviews: A diagnostic accuracy study,” Annals of Internal Medicine, vol. 167, pp. 213–215, 2017.
  • [47] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research (JMLR), vol. 9, pp. 2579–2605, 2008.
  • [48] J. Thomas, “Citation analysis may well have a role to play in study identification, but more evaluation and system development are required,” Journal of Clinical Epidemiology, 2017, in press.

7 Appendix

The search strategy for identifying systematic reviews of type 2 diabetes for PubMed (and translated to equivalent terms for Embase):

((type 2[Title/Abstract] OR type II[Title/Abstract] OR adult[Title/Abstract] OR slow[Title/Abstract]) AND (diabete*[Title/Abstract] OR diabetic*[Title/Abstract])) AND ("meta analysis"[Publication Type] OR "review"[Publication Type] OR systematic review[Title/Abstract] OR meta analysis[Title/Abstract]) AND (metformin[Title/Abstract] OR glucophage[Title/Abstract] OR dipeptidyl-peptidase iv inhibitors[Title/Abstract] OR *gliptin[Title/Abstract] OR Januvia[Title/Abstract] OR glucagon-like peptide[Title/Abstract] OR Galvus[Title/Abstract] OR exenatide[Title/Abstract] OR Trajenta[Title/Abstract] OR Byetta[Title/Abstract] OR Onglyza[Title/Abstract] OR Bydureon[Title/Abstract] OR liraglutide[Title/Abstract] OR Victoza[Title/Abstract] OR lixisenatide[Title/Abstract] OR Lyxumia[Title/Abstract] OR thiazolidinedione*[Title/Abstract] OR glitazone*[Title/Abstract] OR *glitazone[Title/Abstract] OR Avandia[Title/Abstract] OR sulfonylurea*[Title/Abstract] OR sulphonylurea*[Title/Abstract] OR tolbutamide[Title/Abstract] OR glibenclamide[Title/Abstract] OR glipizide[Title/Abstract] OR Minidiab[Title/Abstract] OR glimepiride[Title/Abstract] OR Amaryl[Title/Abstract] OR gliclazide[Title/Abstract] OR Diamicron[Title/Abstract])
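A boolean query of this form can also be assembled programmatically from term lists, which makes it easier to keep the PubMed and Embase versions of the strategy in sync. The helper below is a hypothetical sketch using small subsets of the terms above, not part of the study's methods.

```python
def tiab_clause(terms):
    """Join terms into an OR group, each tagged with PubMed's
    [Title/Abstract] field qualifier."""
    return "(" + " OR ".join("%s[Title/Abstract]" % t for t in terms) + ")"


# Illustrative subsets of the full term lists in the strategy above.
population = "(%s AND %s)" % (
    tiab_clause(["type 2", "type II", "adult", "slow"]),
    tiab_clause(["diabete*", "diabetic*"]),
)
interventions = tiab_clause(
    ["metformin", "glucophage", "exenatide", "liraglutide"]
)
query = population + " AND " + interventions
```

Each drug class can be kept as its own Python list, so adding a newly approved agent to the strategy is a one-line change rather than a hand edit of a long query string.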