As reports of clinical trials continue to amass at a rapid pace, staying on top of the current literature to inform evidence-based practice is next to impossible. As of 2010, about seventy clinical trial reports were published daily, on average (Bastian et al., 2010); this has since risen to over one hundred thirty trials per day (see https://ijmarshall.github.io/sote/). Motivated by this rapid growth in clinical trial publications, there now exist a plethora of tools to partially automate the systematic review task (Marshall and Wallace, 2019). However, efforts at fully integrating the PICO framework into this process have been limited (Eriksen and Frandsen, 2018). What if we could build a database of Participants, Interventions, Comparisons, and Outcomes studied in these trials, and the findings reported concerning these? (We omit Participants in this work because we focus on the document-level task of inferring study result directionality, and the Participants are inherent to the study: studies do not typically consider multiple patient populations.) If done accurately, this would provide direct access to which treatments the evidence supports. In the near term, such technologies may mitigate the tedious work necessary for manual synthesis.
Recent efforts in this direction include the EBM-NLP project (Nye et al., 2018) and Evidence Inference (Lehman et al., 2019), both of which comprise annotations collected on reports of Randomized Controlled Trials (RCTs) from PubMed (https://pubmed.ncbi.nlm.nih.gov/). Here we build upon the latter, which tasks systems with inferring findings in full-text reports of RCTs with respect to particular interventions and outcomes, and with extracting evidence snippets supporting these.
We describe the collection of an additional 2,503 unique ‘prompts’ (see Section 2) with matched full-text articles; this is a 25% expansion of the original Evidence Inference dataset, which we will release. We have additionally collected an abstract-only subset of the data intended to facilitate rapid iterative design of models, as working over full texts can be prohibitively time-consuming.
We introduce and evaluate new models, achieving SOTA performance for this task.
We ablate components of these models and characterize the types of errors that they tend to still make, pointing to potential directions for further improving models.
In the Evidence Inference task Lehman et al. (2019), a model is provided with a full-text article describing a randomized controlled trial (RCT) and a ‘prompt’ that specifies an Intervention (e.g., aspirin), a Comparator (e.g., placebo), and an Outcome (e.g., duration of headache). We refer to these as ICO prompts. The task then is to infer whether a given article reports that the Intervention resulted in a significant increase, significant decrease, or produced no significant difference in the Outcome, as compared to the Comparator.
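Concretely, one instance of the task pairs an ICO prompt with an article and a three-way label. The following is a minimal sketch of this structure; the class and field names are purely illustrative, not the dataset's actual schema:

```python
from dataclasses import dataclass

# Label space for the inferred relationship, as described in the task setup.
LABELS = {
    -1: "significantly decreased",
    0: "no significant difference",
    1: "significantly increased",
}

@dataclass
class ICOPrompt:
    intervention: str   # e.g., "aspirin"
    comparator: str     # e.g., "placebo"
    outcome: str        # e.g., "duration of headache"

@dataclass
class Instance:
    prompt: ICOPrompt
    article_text: str   # full-text RCT report
    label: int          # -1, 0, or +1
    evidence_span: str  # supporting snippet extracted from the article

# A hypothetical example instance:
ex = Instance(
    prompt=ICOPrompt("aspirin", "placebo", "duration of headache"),
    article_text="...",
    label=-1,
    evidence_span="Aspirin significantly reduced headache duration "
                  "relative to placebo (p < 0.01).",
)
```

A system is given `prompt` and `article_text`, and must predict both `label` and `evidence_span`.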
Our annotation process largely follows that outlined in Lehman et al. (2019); we summarize it briefly here. Data collection comprises three steps: (1) prompt generation; (2) prompt and article annotation; and (3) verification. All steps are performed by Medical Doctors (MDs) hired through Upwork (http://upwork.com). Annotators were divided into mutually exclusive groups performing these tasks, described below.
Combining this new data with the dataset introduced in Lehman et al. (2019) yields in total 12,616 unique prompts stemming from 3,346 unique articles, increasing the original dataset by 25%. (We use the first release of the data by Lehman et al., which included 10,137 prompts; a subsequent release contained 10,113 prompts, as the authors removed prompts where the answer and rationale were produced by different doctors.) To acquire the new annotations, we hired 11 doctors: 1 for prompt generation, 6 for prompt annotation, and 4 for verification.
2.1 Prompt Generation
In this collection phase, a single doctor is asked to read an article and identify triplets of interventions, comparators, and outcomes; we refer to these as ICO prompts. Each article is assigned to exactly one doctor, so that articles do not overlap between annotators. Doctors were asked to find at most 5 prompts per article as a practical trade-off between the expense of exhaustive annotation and acquiring annotations over a variety of articles; this resulted in our collecting 3.77 prompts per article, on average. We asked doctors to derive at least 1 prompt from the body (rather than the abstract) of each article. Much of the task's difficulty stems from the wide variety of treatments and outcomes studied in trials: 35.8% of interventions, 24.0% of comparators, and 81.6% of outcomes are unique.
In addition to these ICO prompts, doctors were asked to report the relationship between the intervention and comparator with respect to the outcome, and to cite the span from the article that supports their reasoning. We find that 48.4% of the collected prompts can be answered using only the abstract. However, 63.0% of the evidence spans supporting judgments (provided by both the prompt generator and the prompt annotator) come from outside of the abstract. Additionally, 13.6% of evidence spans are more than one sentence long.
2.2 Prompt Annotation
Following the guidelines presented in Lehman et al. (2019), each prompt was assigned to a single doctor, who was asked to report the difference between the specified intervention and comparator with respect to the given outcome. The options for this relationship were: “increase”, “decrease”, “no difference”, or “invalid prompt.” Annotators were also asked to mark a span of text supporting their answer: a rationale. Unlike in Lehman et al. (2019), however, annotators here were not restricted by the annotation platform to initially viewing only the abstract; they were free to search the article as necessary.
Because trials tend to investigate multiple interventions and measure more than one outcome, articles will usually correspond to multiple — potentially many — valid ICO prompts (with correspondingly different findings). In the data we collected, 62.9% of articles comprise at least two ICO prompts with different associated labels (for the same article).
2.3 Verification
Given both the answers and rationales of the prompt generator and the prompt annotator, a third doctor (the verifier) was asked to determine the validity of both of the previous stages. (The verifier can also discard low-quality or incorrect prompts.)
We estimate the accuracy of each task with respect to these verification labels. For prompt generation, answers were 94.0% accurate, and rationales were 96.1% accurate. For prompt annotation, the answers were 90.0% accurate, and accuracy of the rationales was 88.8%. The drop in accuracy between prompt generation answers and prompt annotation answers is likely due to confusion with respect to the scope of the intervention, comparator, and outcome.
We additionally calculated agreement statistics amongst the doctors across all stages, yielding a Krippendorff's α of . In contrast, the agreement between the prompt generator and annotator (excluding the verifier) had a Krippendorff's α of .
2.4 Abstract Only Subset
Restricting articles to their abstracts yields 9,680 of the 24,686 annotations, or approximately 40%. This leaves 6,375 prompts, 50.5% of the total.
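Constructing such a subset amounts to keeping only annotations whose evidence falls entirely within the abstract. A minimal sketch of this filtering step follows; the field names (`article_id`, `evidence_offsets`) and the per-article abstract boundary map are our own illustrative assumptions, not the dataset's actual schema:

```python
def abstract_only_subset(instances, abstract_end_by_article):
    """Keep annotations whose evidence spans lie entirely within the abstract.

    `abstract_end_by_article` maps an article id to the character offset at
    which its abstract ends; `evidence_offsets` is a list of (start, stop)
    character spans. All names here are hypothetical.
    """
    kept = []
    for inst in instances:
        end = abstract_end_by_article[inst["article_id"]]
        if all(stop <= end for (_start, stop) in inst["evidence_offsets"]):
            kept.append(inst)
    return kept
```

For example, with an abstract ending at offset 200, an annotation whose evidence span is (0, 50) is kept while one at (300, 340) is dropped.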
We consider a simple BERT-based (Devlin et al., 2018) pipeline comprising two independent models, as depicted in Figure 1. The first identifies evidence-bearing sentences within an article for a given ICO. The second model then classifies the reported findings for an ICO prompt using the evidence extracted by the first model. These models place a dense layer on top of representations yielded by SciBERT (Beltagy et al., 2019), a variant of BERT pre-trained over scientific corpora (we use the [CLS] representations), followed by a softmax.
Specifically, we first perform sentence segmentation over full-text articles using ScispaCy (Neumann et al., 2019), and use this segmentation to recover evidence-bearing sentences. We train an evidence identifier by learning to discriminate between evidence-bearing sentences and randomly sampled non-evidence sentences (we train via negative sampling because the vast majority of sentences are not evidence-bearing). We then train an evidence classifier over the evidence-bearing sentences to characterize the trial's finding as reporting that the Intervention significantly decreased, did not significantly change, or significantly increased the Outcome compared to the Comparator in an ICO. When making a prediction for an (ICO, document) pair, we use the highest-scoring evidence sentence from the identifier, feeding this to the evidence classifier for a final result. Note that the evidence classifier is conditioned on the ICO frame: we simply prepend the ICO embedding (from SciBERT) to the embedding of the identified evidence snippet. Reassuringly, removing this signal degrades performance (Table 1).
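The two-model pipeline can be sketched as follows. This is a minimal illustration of the architecture, not the authors' implementation: a toy encoder stands in for SciBERT, and ICO conditioning is illustrated by concatenating embedding vectors rather than textually prepending the ICO to the input:

```python
import torch
import torch.nn as nn

HIDDEN = 16  # toy embedding size; SciBERT's is 768

class ToyEncoder(nn.Module):
    """Stand-in for SciBERT: maps an input to a fixed-size '[CLS]' vector."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        return torch.tanh(self.proj(x))

class EvidenceIdentifier(nn.Module):
    """Dense layer + softmax scoring each sentence as evidence vs. not."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.out = nn.Linear(HIDDEN, 2)

    def forward(self, sents):                       # (n_sents, dim_in)
        return torch.softmax(self.out(self.encoder(sents)), dim=-1)

class EvidenceClassifier(nn.Module):
    """Classifies the finding (decrease / no difference / increase),
    conditioned on the ICO via concatenation of embeddings."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.out = nn.Linear(2 * HIDDEN, 3)

    def forward(self, ico, evidence):               # (1, dim_in) each
        joint = torch.cat([self.encoder(ico), self.encoder(evidence)], dim=-1)
        return torch.softmax(self.out(joint), dim=-1)

def pipeline_decode(identifier, classifier, ico, sentences):
    """Keep the highest-scoring evidence sentence; classify it given the ICO."""
    with torch.no_grad():
        p_evidence = identifier(sentences)[:, 1]
        best = sentences[p_evidence.argmax()].unsqueeze(0)
        return classifier(ico, best).argmax(dim=-1).item()  # 0, 1, or 2
```

The key design point survives the simplification: the identifier and classifier are independent models, coupled only at decode time through the top-scoring sentence.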
For all models we fine-tuned the underlying BERT parameters. We trained all models using the Adam optimizer (Kingma and Ba, 2014) with a BERT learning rate of 2e-5. We train these models for 10 epochs, keeping the best-performing version with respect to macro-averaged F-scores on a nested held-out set. When training the evidence identifier, we experiment with different numbers of random samples per positive instance. We used Scikit-Learn (Pedregosa et al., 2011) for evaluation and diagnostics, and implemented all models in PyTorch (Paszke et al., 2019). We additionally reproduce the end-to-end system from Lehman et al. (2019): a gated recurrent unit (GRU) (Cho et al., 2014) to encode the document, attention (Bahdanau et al., 2015) conditioned on the ICO, and the resultant vector (plus the ICO) fed into an MLP for a final significance decision.
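The training and model-selection procedure described above might look like the following sketch. The `model`, `train_batches`, and `dev_batches` arguments are generic placeholders; this is an illustration of the selection-by-macro-F1 loop, not the authors' code:

```python
import torch
from sklearn.metrics import f1_score

def train(model, train_batches, dev_batches, epochs=10, lr=2e-5):
    """Fine-tune with Adam at the paper's BERT learning rate, keeping the
    checkpoint that scores best on held-out macro-averaged F1."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    best_f1, best_state = -1.0, None
    for _ in range(epochs):
        model.train()
        for x, y in train_batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Model selection on the nested held-out set.
        model.eval()
        preds, gold = [], []
        with torch.no_grad():
            for x, y in dev_batches:
                preds.extend(model(x).argmax(dim=-1).tolist())
                gold.extend(y.tolist())
        f1 = f1_score(gold, preds, average="macro")
        if f1 > best_f1:
            best_f1 = f1
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return best_f1
```

The same loop serves both pipeline stages, differing only in the data fed to it.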
4 Experiments and Results
Our main results are reported in Table 1. We make a few key observations. First, the gains over the prior state-of-the-art model, which was not BERT-based, are substantial: 20+ absolute points in F-score, even beyond what one might expect from shifting to large pre-trained models. (To verify the impact of the architecture change, we also experimented with randomly initialized and fine-tuned BERTs; these performed worse than the original models in all instances, and we elide more detailed results.) Second, conditioning on the ICO prompt is key; failing to do so results in substantial performance drops. Finally, we seem to have reached a plateau in the performance of the BERT pipeline model: adding the newly collected training data does not budge performance (evaluated on the augmented test set). This suggests that realizing stronger performance may require a less naive architecture that better models the domain. We next probe specific aspects of our design and training decisions.
Impact of Negative Sampling As negative sampling is a crucial part of the pipeline, we vary the number of samples and evaluate performance. We provide detailed results in Appendix A, but to summarize briefly: we find that two to four negative samples (per positive) performs the best for the end-to-end task, with little change in both AUROC and accuracy of the best fit evidence sentence. This is likely because the model needs only to maximize discriminative capability, rather than calibration.
| Model | Cond.? | P | R | F1 |
| --- | --- | --- | --- | --- |
| BERT Pipeline abs. | ✓ | .803 | .798 | .799 |
| BERT Pipeline 1.0 | ✓ | .749 | .761 | .753 |
| Oracle Spans abs. | ✓ | .866 | .862 | .863 |
| Baseline Oracle 1.0 | ✓ | .740 | .739 | .739 |
Distribution Shift In addition to the comparable Krippendorff's α values computed above, we measure the impact of the new data on pipeline performance. We compare the performance of the pipeline trained on all data (“BERT Pipeline”) vs. just the old data (“BERT Pipeline 1.0”) in Table 1. As performance stays relatively constant, we believe the new data to be well-aligned with the existing release. This also suggests that the performance of the current simple pipeline model may have plateaued; better performance perhaps requires inductive biases via domain knowledge or improved strategies for evidence identification.
Oracle Evidence We report two types of oracle evidence experiments: one using ground-truth evidence spans (“Oracle spans”), the other using ground-truth sentences for classification. In the former, we choose an arbitrary evidence span for each prompt for decoding (evidence classification operates on a single sentence, but an annotator's selection is span-based; furthermore, the prompt annotation stage may produce different evidence spans than prompt generation). In the latter, we arbitrarily choose a sentence contained within a span. In both experiments the classifier is trained on matching input. We find that using a span versus a sentence causes only a marginal change in score. Both diagnostics provide an upper bound for this model type, improving over the original oracle baseline by approximately 10 points. Using oracle evidence as opposed to a trained evidence identifier leaves an end-to-end performance gap of approximately 0.08 F1.
Conditioning As the BERT pipeline can optionally condition on the ICO, we ablate over both the ICO and the actual document text. We find that using the ICO alone performs about as well as an unconditioned end-to-end pipeline, at 0.49 F1 (Table 1). However, when fed oracle sentences, the unconditioned pipeline's performance jumps to 0.77 F1. As shown in Table 3 (Appendix A), this large gap can be attributed to the unconditioned model losing the ability to identify the correct evidence sentence.
| Ev. Cls | ID Acc. | Sig↓ | Sig∅ | Sig↑ |
| --- | --- | --- | --- | --- |

Table 2: Breakdown of the conditioned BERT pipeline model's mistakes and performance by evidence class. ID Acc. is the "identification accuracy", i.e., the percentage of instances for which the top-scoring sentence is a true evidence sentence. To the right is a confusion matrix for end-to-end predictions. ‘Sig↓’ indicates significantly decreased, ‘Sig∅’ indicates no significant difference, ‘Sig↑’ indicates significantly increased.
Mistake Breakdown We further analyze model mistakes in Table 2. We find that the BERT-to-BERT model is somewhat better at identifying significantly decreased spans than spans for the significantly increased or no significant difference evidence classes. Spans for the no significant difference class tend to be classified correctly, and spans for the significantly increased category tend to be confused in a similar pattern to the significantly decreased class. End-to-end mistakes are relatively balanced across all possible confusion classes.
Abstract Only Results We report a full suite of experiments over the abstracts-only subset in Appendix B. This leads to better performance (compared to the full-text dataset), with a pipeline F1 of 0.80. This is not surprising, as abstracts are reasonably concise and likely report findings for only a single ICO in many cases.
The Oracle span performance over abstracts was 0.86 F1, leaving a comparable gap in performance with the end-to-end pipeline.
5 Conclusions and Future Work
We have introduced an expanded version of the Evidence Inference dataset. We have proposed and evaluated BERT-based models for the evidence inference task (which entails identifying snippets of evidence for particular ICO prompts in long documents and then classifying the reported finding on the basis of these), achieving state of the art results on this task.
With this expanded dataset, we hope to support further development of NLP for assisting Evidence-Based Medicine. Our results demonstrate promise for the task of automatically inferring results from Randomized Controlled Trials, but still leave room for improvement. In future work, we intend to jointly automate the identification of ICO triplets and inference concerning them. We are also keen to investigate whether pre-training on related scientific ‘fact verification’ tasks might improve performance (Wadden et al., 2020).
We thank the anonymous BioNLP reviewers.
This work was supported by the National Science Foundation, CAREER award 1750978.
- D. Bahdanau, K. Cho, and Y. Bengio (2015). Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA.
- H. Bastian, P. Glasziou, and I. Chalmers (2010). Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Medicine 7(9): e1000326.
- I. Beltagy, K. Lo, and A. Cohan (2019). SciBERT: pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676.
- K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014, Doha, Qatar, pp. 1724–1734.
- J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova (2018). BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- M. B. Eriksen and T. F. Frandsen (2018). The impact of patient, intervention, comparison, outcome (PICO) as a search strategy tool on literature search quality: a systematic review. Journal of the Medical Library Association 106(4).
- D. P. Kingma and J. Ba (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- E. Lehman, J. DeYoung, R. Barzilay, and B. C. Wallace (2019). Inferring which medical treatments work from reports of clinical trials. In Proceedings of NAACL-HLT 2019, Minneapolis, Minnesota, pp. 3705–3717.
- I. J. Marshall and B. C. Wallace (2019). Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Systematic Reviews 8(1): 163.
- M. Neumann, D. King, I. Beltagy, and W. Ammar (2019). ScispaCy: fast and robust models for biomedical natural language processing. In Proceedings of the 18th BioNLP Workshop.
- B. Nye, J. J. Li, R. Patel, Y. Yang, I. Marshall, A. Nenkova, and B. C. Wallace (2018). A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In Proceedings of ACL 2018, Melbourne, Australia, pp. 197–207.
- A. Paszke, S. Gross, F. Massa, et al. (2019). PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035.
- F. Pedregosa, G. Varoquaux, A. Gramfort, et al. (2011). Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12: 2825–2830.
- A. Vaswani, N. Shazeer, N. Parmar, et al. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- D. Wadden, S. Lin, K. Lo, L. L. Wang, M. van Zuylen, A. Cohan, and H. Hajishirzi (2020). Fact or fiction: verifying scientific claims. In Proceedings of EMNLP 2020.
Appendix A Negative Sampling Results
| Neg. Samples | Cond.? | AUROC | Top-1 Acc. |
| --- | --- | --- | --- |
Appendix B Abstract Only Results
We repeat the experiments described in Section 4. Our primary findings are that the abstract-only task is easier, and that eight negative samples per positive perform better than four. Otherwise, results follow a similar trend to the full-document task.
| Neg. Samples | Cond.? | AUROC | Top-1 Acc. |
| --- | --- | --- | --- |
| Ev. Cls | ID Acc. | Sig↓ | Sig∅ | Sig↑ |
| --- | --- | --- | --- | --- |
| | Train | Dev | Test | Total |
| --- | --- | --- | --- | --- |
| Number of prompts | 10,150 | 1,238 | 1,228 | 12,616 |
| Number of articles | 2,672 | 340 | 334 | 3,346 |
| Label counts (−1 / 0 / 1) | 2465 / 4563 / 3122 | 299 / 544 / 395 | 295 / 516 / 417 | 3059 / 5623 / 3934 |