Electronic Health Records (EHR) contain a wealth of patient data, ranging from diagnoses, problems, treatments and medications to imaging and clinical narratives such as discharge summaries and progress reports. Structured data are important for billing, quality and outcomes. Narrative text, on the other hand, is more expressive and engaging and captures the patient's story more accurately. Narrative notes may also convey the author's level of concern and uncertainty to others reviewing the note. Studies have shown that narrative notes contain more naturalistic prose, are more reliable for identifying patients with a given disease and are more understandable to the healthcare providers reviewing them [34, 19, 30, 45, 9]. Therefore, to gain a clear perspective on a patient's condition, the narrative text should be analyzed. However, manual analysis of massive volumes of narrative text is time consuming, labor intensive and error prone.
Many clinical Natural Language Processing (NLP) tools and systems have been published to help make sense of this valuable narrative text. For instance, the clinical Text Analysis and Knowledge Extraction System (cTAKES)  is an open-source NLP package based on the Unstructured Information Management Architecture (UIMA) framework  and the OpenNLP  natural language processing toolkit. cTAKES uses dictionary look-up, and each mention is mapped to a Unified Medical Language System (UMLS) concept . MetaMap  is another open-source tool that maps mentions in biomedical text to UMLS concepts using dictionary lookup. MetaMap Lite  adds negation detection based on either ConText  or NegEx .
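As a rough illustration of the dictionary-based concept mapping these systems perform, the sketch below does a greedy longest-match lookup against a mini-dictionary. The dictionary entries and concept identifiers are invented for the example, not real UMLS content.

```python
# Minimal sketch of dictionary-based concept mapping, in the spirit of the
# cTAKES/MetaMap lookup step. The mini-dictionary and CUIs are illustrative.
def dictionary_lookup(text, dictionary):
    """Return (mention, concept_id) pairs for the longest dictionary matches."""
    tokens = text.lower().split()
    matches = []
    i = 0
    while i < len(tokens):
        # prefer the longest span starting at token i (greedy longest match)
        for j in range(len(tokens), i, -1):
            span = " ".join(tokens[i:j])
            if span in dictionary:
                matches.append((span, dictionary[span]))
                i = j
                break
        else:
            i += 1
    return matches

mini_umls = {
    "myocardial infarction": "C0027051",   # illustrative CUI
    "aspirin": "C0004057",                 # illustrative CUI
}

print(dictionary_lookup("Patient had a myocardial infarction , given aspirin", mini_umls))
```

Real systems add tokenization, normalization and word-order variants on top of this core lookup.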
The Clinical Language Annotation, Modeling, and Processing (CLAMP)  toolkit is one of the most recent clinical NLP systems. CLAMP is motivated by the fact that existing clinical NLP systems need customization and must be tailored to the task at hand. For NER, CLAMP takes two approaches: a machine learning approach using Conditional Random Fields (CRF), and a dictionary-based approach that maps mentions to standardized ontologies. CLAMP also provides assertion and negation detection, based either on machine learning or on the rule-based NegEx.
Many of the existing NLP systems rely on ConText  and NegEx  to detect assertions such as negation. ConText extracts three contextual features for medical conditions: negation, historical or hypothetical status, and whether the condition was experienced by someone other than the patient. ConText is an extension of NegEx, which is based on regular expressions.
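A toy NegEx-style check can be sketched in a few lines: a finding is flagged as negated if a negation trigger appears within a fixed token window before it. The trigger list and window size below are illustrative, not the actual NegEx rules or trigger lexicon.

```python
import re

# Toy NegEx-style negation check. Real NegEx uses a curated trigger list,
# pre-/post-triggers and pseudo-negations; this only shows the core idea.
NEG_TRIGGERS = re.compile(r"\b(no|denies|without|negative for)\b", re.IGNORECASE)

def is_negated(sentence, finding, window=5):
    tokens = sentence.lower().split()
    f_tokens = finding.lower().split()
    for i in range(len(tokens) - len(f_tokens) + 1):
        if tokens[i:i + len(f_tokens)] == f_tokens:
            # look for a trigger within `window` tokens before the finding
            scope = " ".join(tokens[max(0, i - window):i])
            return NEG_TRIGGERS.search(scope) is not None
    return False

print(is_negated("Patient denies chest pain", "chest pain"))   # True
print(is_negated("Patient reports chest pain", "chest pain"))  # False
```

The fixed scope is exactly what makes such rules brittle on long or misspelled clinical sentences, as discussed in the results section.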
Most of the NLP systems discussed above link mentions to UMLS. They are based on configurable pipelined components, relying on dictionary look-up for NER and on regular expressions for assertion detection.
Recently, neural network models have been proposed to overcome some of the limitations of rule-based techniques. Feedforward and bidirectional Long Short Term Memory (BiLSTM) networks for generic negation scope detection were proposed in . In , gated recurrent units (GRUs) are used to represent clinical relations and their context, along with an attention mechanism. Given text annotated with relations, the model classifies the presence and period of those relations. However, this approach is not end-to-end, as it does not predict the relations themselves. Additionally, such models generally require a large annotated corpus to achieve good performance, but clinical data is scarce.
Kernel-based approaches are also common, especially in the 2010 i2b2/VA assertion-prediction task. The state-of-the-art system in that challenge applied support vector machines (SVM) to assertion prediction as a separate step after entity extraction. It trains classifiers to predict the assertion of each concept word, and a separate classifier to predict the assertion of the whole entity. An Augmented Bag of Words Kernel (ABoW), which generates features based on NegEx rules along with bag-of-words features, was proposed in , and a CRF-based approach for cue classification and scope detection was proposed in . These machine learning based approaches often suffer from poor generalizability.
Once named entities are extracted, it is important to identify the relationships between them. Several end-to-end models have been proposed that jointly learn named entity recognition and relationship extraction [32, 52, 1]. Generally, relationship extraction models consist of an encoder followed by a relationship classification unit [46, 14, 44]. The encoder provides context-aware vector representations for both target entities, which are merged or concatenated before being passed to the relation classification unit, where a two-layer neural network or multi-layer perceptron classifies the pair into one of the relation types.
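A minimal sketch of such a classification unit follows. The dimensions are illustrative and the weights are random and untrained, so this only demonstrates the data flow: concatenate the two entity vectors, apply a two-layer perceptron, and softmax over relation types.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: entity vectors of size d, 4 relation types.
d, n_relations = 8, 4
W1 = rng.normal(size=(2 * d, 16))   # first MLP layer (random, untrained)
W2 = rng.normal(size=(16, n_relations))  # second MLP layer

def classify_relation(head_vec, tail_vec):
    h = np.concatenate([head_vec, tail_vec])  # merge the entity representations
    hidden = np.tanh(h @ W1)                  # hidden layer
    logits = hidden @ W2                      # scores per relation type
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                # softmax over relation types

probs = classify_relation(rng.normal(size=d), rng.normal(size=d))
print(probs.shape, round(float(probs.sum()), 6))
```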
Despite the existence of many clinical NLP systems, automatic information extraction from narrative clinical text has not achieved enough traction yet . As reported by , there is a significant gap between clinical studies using Electronic Health Record (EHR) data and studies using clinical information extraction. This gap can be attributed to the limited expertise of NLP practitioners in the clinical domain, the limited availability of clinical datasets due to HIPAA privacy rules, and the poor portability and generalizability of clinical NLP systems. Rule-based NLP systems require handcrafted rules, while machine learning-based NLP systems require annotated datasets.
To narrow the clinical NLP adoption gap and to address some of the limitations in existing NLP systems, we present Comprehend Medical, a web service for clinical named entity recognition and relationship extraction. Our contributions are as follows:
Named entity recognition, relationship extraction and trait detection service encapsulated in one easy to use API.
Web service that uses deep learning multi-task  approach trained on labeled training data and requires no configurations or customization.
Trait (negation, sign, symptom and diagnosis) detection for medical condition and negation detection for medication.
The rest of the paper is organized as follows: section II presents the methods, section III describes the datasets and experimental settings, section IV contains the results for the NER and RE models, section V discusses the implementation details, section VI gives an overview of the supported entities, traits and relationships, section VII presents some of the use cases and we conclude in section VIII.
In this section we briefly introduce the architectures for named entity recognition and trait detection proposed in  and the relation extraction using explicit context conditioning proposed in .
Ii-A Named Entity Recognition Architecture
A sequence tagging problem such as NER can be formulated as maximizing the conditional probability distribution over tags $y$ given an input sequence $x$ and model parameters $\theta$:

$$P(y \mid x; \theta) = \prod_{t=1}^{T} P(y_t \mid x, y_1, \ldots, y_{t-1}; \theta)$$

where $T$ is the length of the sequence and $y_1, \ldots, y_{t-1}$ are the tags for the previous words. The architecture we use as a foundation is that of [27, 50]. The model consists of three main components: (i) a character encoder, (ii) a word encoder, and (iii) a decoder/tagger.
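The chain-rule factorization can be illustrated numerically: per-step tag distributions (softmaxes over toy logits, not model outputs) multiply into a sequence probability.

```python
import numpy as np

# Toy illustration of P(y | x) = prod_t P(y_t | x, y_{1:t-1}):
# each step contributes one conditional probability to the product.
def sequence_probability(step_logits, tag_sequence):
    prob = 1.0
    for logits, tag in zip(step_logits, tag_sequence):
        p = np.exp(logits - logits.max())
        p /= p.sum()                  # softmax over tags at this step
        prob *= p[tag]                # chain rule: multiply the conditionals
    return prob

# two timesteps, three tags; logits are arbitrary toy values
logits = [np.array([2.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0])]
p = sequence_probability(logits, [0, 1])
print(round(float(p), 4))
```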
Ii-A1 Character and Word Encoders

Given an input sequence $x = (x_1, \ldots, x_T)$ whose coordinates indicate the words in the input vocabulary, we first encode the character-level representation of each word. For each word $x_t$, the corresponding sequence of character embeddings $(c_1, \ldots, c_L)$ is fed into the character encoder, where $L$ is the length of the word and each embedding has the character embedding size. The character encoder employs two LSTM units, which produce $\overrightarrow{h}_L$ and $\overleftarrow{h}_L$, the forward and backward hidden representations, respectively, where $L$ is the last timestep in both sequences. We concatenate the last timestep of each of these as the final character-level encoding of $x_t$: $h^{char}_t = [\overrightarrow{h}_L; \overleftarrow{h}_L]$.
The output of the character encoder is concatenated with a pre-trained word embedding, $w_t$, and the result is used as the input to the word-level encoder.
Using learned character embeddings alongside word embeddings has been shown to be useful for learning word-level morphology, as well as for mitigating the loss of representation for out-of-vocabulary words. As in the character encoder, we use a BiLSTM to encode the sequence at the word level. The word encoder does not lose resolution: the output at each timestep $t$ is the concatenated output of both word-level LSTMs, $u_t = [\overrightarrow{u}_t; \overleftarrow{u}_t]$.
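A shape-level sketch of this encoder input follows, with the character BiLSTM stubbed out by random final states (the stub and the use of the dimensions from the experimental settings are assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions taken from the experimental settings in this paper:
# 25-unit char encoder per direction, 100-dimensional word embeddings.
char_hidden, word_dim = 25, 100

def char_encode(word):
    # stand-in for running a char BiLSTM; returns [h_fwd; h_bwd]
    h_fwd = rng.normal(size=char_hidden)
    h_bwd = rng.normal(size=char_hidden)
    return np.concatenate([h_fwd, h_bwd])

def encoder_input(word, word_embedding):
    # character-level encoding concatenated with the pre-trained embedding
    return np.concatenate([char_encode(word), word_embedding])

x = encoder_input("aspirin", rng.normal(size=word_dim))
print(x.shape)  # 2*25 char dims + 100 word dims
```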
Ii-A2 Decoder and Tagger
Finally, the concatenated output of the word encoder is used as input to the decoder, along with the label embedding of the previous timestep. During training we use teacher forcing  to provide the gold-standard label as part of the input:

$$\hat{y}_t = \mathrm{softmax}(W s_t + b_s)$$

where $s_t$ is the decoder LSTM output at timestep $t$, $W \in \mathbb{R}^{d \times n}$, $d$ is the number of hidden units in the decoder LSTM, and $n$ is the number of tags. The model is trained in an end-to-end fashion using a standard cross-entropy objective.
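A sketch of one decoder step under teacher forcing follows. The LSTM cell is elided and the weights are random and untrained, so this only illustrates the data flow: encoder output plus previous gold label embedding, projected to a softmax over tags. Dimensions follow the experimental settings (200-dimensional word-encoder output, 50-dimensional tag embeddings).

```python
import numpy as np

rng = np.random.default_rng(2)

enc_dim, label_dim, n_tags = 200, 50, 7
label_embeddings = rng.normal(size=(n_tags, label_dim))  # learned in practice
W = rng.normal(size=(enc_dim + label_dim, n_tags))       # output projection

def decode_step(encoder_out, prev_gold_tag):
    # teacher forcing: the gold label of the previous step is embedded
    # and concatenated with the encoder output for the current step
    inp = np.concatenate([encoder_out, label_embeddings[prev_gold_tag]])
    logits = inp @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()               # distribution over the n_tags tags

dist = decode_step(rng.normal(size=enc_dim), prev_gold_tag=0)
print(dist.shape, round(float(dist.sum()), 6))
```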
Ii-A3 Named Entity Recognition Decoder Model
Our decoder model provides more context to trait detection by adding an additional input: the softmax output from entity extraction. We refer to this architecture as the Conditional Softmax Decoder, shown in Fig. 1. The model thus learns both about the input and about the label distribution predicted by entity extraction. For example, in the i2b2 dataset we use negation only for the problem entity. Providing the entity prediction distribution helps the negation model make better predictions: it learns that if the prediction probability is not concentrated on the problem entity, it should not predict negation, irrespective of the word representation.
where $p^{ent}_t$, the softmax output of the entity decoder at timestep $t$, is appended to the trait decoder input.
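To illustrate the conditioning intuition, the sketch below hand-codes the suppression behavior the trained decoder is described as learning. It is a caricature with an arbitrary 0.5 threshold, not the actual model.

```python
import numpy as np

ENTITY_TAGS = ["O", "problem", "test", "treatment"]

def trait_input(word_repr, entity_softmax):
    # conditional softmax decoder: the entity distribution is appended
    # to the usual trait-decoder input
    return np.concatenate([word_repr, entity_softmax])

def caricature_negation(entity_softmax, negation_score):
    # hand-written version of the learned behavior: if probability mass is
    # not on 'problem', do not predict negation regardless of the word
    if entity_softmax[ENTITY_TAGS.index("problem")] < 0.5:
        return False
    return negation_score > 0.5

print(caricature_negation(np.array([0.8, 0.1, 0.05, 0.05]), 0.9))  # False
print(caricature_negation(np.array([0.1, 0.8, 0.05, 0.05]), 0.9))  # True
```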
Readers are referred to  for more detailed discussion on the conditional softmax decoder model.
Ii-B Relationship Extraction Architecture
The extracted entities are not very meaningful by themselves, especially in the healthcare domain. For instance, it is important to know whether a procedure was performed bilaterally, or on the left or right side. Knowing the correct location results in more accurate and reliable billing and reimbursement. Hence, it is important to identify the relationships among the clinical entities.
The RE model architecture is described in , but we reiterate the important details here. A relationship is defined between two entities, which we refer to as the head and tail entities. To extract such relationships we proposed relation extraction using explicit context conditioning, in which the two target entities (head and tail) can be explicitly connected via a context token, yielding second-order relations. Similar to Bi-affine Relation Attention Networks (BRAN) , we first compute representations for the head, $e^{head}_i$, and tail, $e^{tail}_j$, entities, which are passed through a multi-layer perceptron (MLP-1) to obtain first-order relation scores, $A^{(1)}_{ij}$, as shown in Fig. 2. We also pass $e^{head}_i$ and $e^{tail}_j$ through MLP-2 to obtain second-order relation scores, $A^{(2)}_{ij}$, where $i$ and $j$ are the indices of the head and tail entities. The motivation for adding MLP-2 was the need for representations focused on establishing relations with context tokens, as opposed to first-order relations. The final score for the relation between two entities is a weighted sum of the first- and second-order scores.
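One simple way to realize the score combination is to compose context-hop scores with a max and mix the result with the first-order scores. The max-composition, the random score matrices and the weight value below are illustrative assumptions, not the exact formulation in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 5                                # number of candidate tokens
s1 = rng.normal(size=(n, n))         # first-order scores (from MLP-1)
s1_ctx = rng.normal(size=(n, n))     # scores used for context hops (from MLP-2)
alpha = 0.7                          # mixing weight, tuned on a dev set

def final_scores(s1, s1_ctx, alpha):
    # second-order score: best single context hop k between head i and tail j,
    # s2[i, j] = max_k (s1_ctx[i, k] + s1_ctx[k, j])
    s2 = np.max(s1_ctx[:, :, None] + s1_ctx[None, :, :], axis=1)
    # weighted sum of first- and second-order scores
    return alpha * s1 + (1 - alpha) * s2

scores = final_scores(s1, s1_ctx, alpha)
print(scores.shape)
```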
Iii-A Datasets

We evaluated our model on two datasets. The first is the 2010 i2b2/VA challenge dataset for “test, treatment, problem” (TTP) entity extraction and assertion detection, herein referred to as i2b2. Unfortunately, only part of this dataset was made public after the challenge, so we cannot directly compare with the NegEx and ABoW results. We followed the original data split from  of 170 notes for training and 256 for testing. The second dataset is proprietary and consists of 4,200 de-identified clinical notes with medical conditions, herein referred to as DCN.
The i2b2 dataset contains six predefined relations types including TrCP (Treatment Causes Problem), TrIP (Treatment Improves Problem), TrWP (Treatment Worsens Problem) and one negative relation. The DCN dataset contains seven predefined relationship types such as with_dosage, every and one negative relation. A summary of the datasets is presented in Table I.
Iii-B NER Model Settings
Word, character and tag embeddings have 100, 25 and 50 dimensions, respectively. Word embeddings are initialized using GloVe, while character and tag embeddings are learned. The character and word encoders have 50 and 100 hidden units, respectively, while the decoder LSTM has a hidden size of 50. Dropout is used after every LSTM, as well as on the word embedding input. We use Adam as the optimizer. Our model is built using MXNet, and hyperparameters are tuned using Bayesian optimization.
Iii-C RE Model Settings
Our final network had two encoder layers, with 8 attention heads in each multi-head attention sublayer and 256 filters for convolution layers in position-wise feedforward sublayer. We used dropout with probability 0.3 after the embedding layer, head/tail MLPs and the output of each encoder sublayer. We also used a word dropout with probability 0.15 before the embedding layer.
Iv-A NER and Trait Detection Results
We report the results for NER and negation detection on both the i2b2 and DCN datasets in Table II. We observe that our proposed conditional softmax decoder approach outperforms the best model  on the i2b2 challenge.
We compared our models for negation detection against NegEx  and ABoW , which achieved the best results for the negation detection task on the i2b2 dataset. The conditional softmax decoder outperforms both NegEx and ABoW (Table II). The low performance of NegEx and ABoW is mainly attributable to their use of ontology lookup to index findings and of regular-expression search for negation within a fixed scope. A similar trend was observed on the DCN dataset (Table II). Notably, NegEx obtains a low F1 score, which can primarily be attributed to abbreviations and misspellings in clinical notes that rule-based systems cannot handle well.
We also evaluated the conditional softmax decoder in low-resource settings, using a sample of our training data. We observed that the conditional softmax decoder is more robust in low-resource settings than the other approaches, as we reported in .
Iv-B RE Results
To show the benefit of using second-order relations we compared our model's performance to BRAN. The two models differ in the weighted addition of second-order relation scores. We tuned this weight on the dev set and observed an improvement in macro-F1 score from 0.712 to 0.734 on the DCN data and from 0.395 to 0.407 on the i2b2 data. For further comparison, a recently published model called the Hybrid Deep Learning Approach (HDLA)  reported a macro-F1 score of 0.388 on the same i2b2 dataset. It should be noted that HDLA used syntactic parsers for feature extraction, whereas we use no such external tools.
Table III summarizes the performance of our relationship model (+SOR) using second-order relations compared to BRAN and HDLA. We refer the readers to  for more detailed analysis of our relationship extraction model.
V Implementation

Comprehend Medical APIs run in Amazon's proven, high-availability data centers, with service stack replication configured across three facilities in each AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage. Additionally, Comprehend Medical ensures that system artifacts are encrypted in transit and that user data is passed through without being stored in any part of the system.
Comprehend Medical is available through a Graphical User Interface (GUI) within the AWS console and can be accessed using the Java and Python SDKs. Comprehend Medical offers two APIs: 1) the NERe API, which returns all extracted named entities, their traits and the relationships between them, and 2) the PHId API, which returns only the protected health information contained in the text. Developers can easily integrate Comprehend Medical into their data processing pipelines as shown in Fig. 4.
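A minimal sketch of calling the service through the Python SDK (boto3) follows. The client call is commented out and the response fragment is illustrative (following the documented Entities/Traits response shape), so the snippet runs without AWS credentials.

```python
# Sketch of using the Python SDK and filtering negated entities.
# To call the service for real (requires AWS credentials):
# import boto3
# client = boto3.client("comprehendmedical")
# response = client.detect_entities_v2(Text="Patient denies chest pain.")

def negated_entities(response):
    """Return the text of entities carrying a NEGATION trait."""
    return [
        e["Text"]
        for e in response.get("Entities", [])
        if any(t["Name"] == "NEGATION" for t in e.get("Traits", []))
    ]

# Illustrative response fragment in the documented shape (not real output).
sample = {
    "Entities": [
        {"Text": "chest pain", "Category": "MEDICAL_CONDITION",
         "Traits": [{"Name": "NEGATION", "Score": 0.98},
                    {"Name": "SYMPTOM", "Score": 0.91}]},
    ]
}
print(negated_entities(sample))  # ['chest pain']
```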
Vi Entities, Traits and Relationships
Named entity mentions found in narrative notes are tagged with entity types listed in Table IV. The entities are divided into five categories: Anatomy, Medical Condition, Medication, PHI and TTP. Comprehend Medical is HIPAA eligible and therefore it supports HIPAA identifiers. Some of those identifiers are grouped under one identifier. For instance, Contact Point covers phone and fax numbers, and ID covers social security number, medical record number, account number, certificate or license number and vehicle or device number. An example input text is shown in Fig. 3.
Comprehend Medical covers four traits, listed in Table V. Negation asserts the presence or absence of a Dx Name, or whether or not the individual is taking the medication. Dx Name has three additional traits: Diagnosis, Sign and Symptom. Diagnosis identifies an illness or disease. Sign is objective evidence of disease: a phenomenon detected by a physician or nurse. Symptom is subjective evidence of disease: a phenomenon observed by the individual affected by the disease. An example of traits is shown in Fig. 3.
A relationship is defined between a pair of entities in the Medication and TTP categories (Table VI). One of the entities in a relationship is the head entity, and the other is the tail entity. In the Medication category, Generic and Brand Name are head entities, which can have relationships to tail entities such as Strength and Dosage. An example of relations is shown in Fig. 3.
Vii Use Cases
Comprehend Medical reduces the cost, time and effort of processing large amounts of unstructured medical text with high accuracy, making it possible to pursue use cases such as clinical trial management, clinical decision support and revenue cycle management.
Vii-A Clinical Trial Management
It can take about 10-15 years for a treatment to go from discovery to registration with the Food and Drug Administration (FDA). During that time, research organizations can spend six years on clinical trials. Despite the years it takes to design those clinical trials, 90% of all clinical trials fail to enroll patients within the targeted time and are forced to extend the enrollment period, 75% of trials fail to enroll the targeted number of patients, and 27% fail to enroll any subjects .
Life sciences and clinical research organizations can speed up and optimize the process of recruiting patients into a clinical trial as extractions from unstructured text and medical records can expedite the matching process. For instance, indexing patients based on medication, medical condition and treatments can help with quickly identifying the right participants for a lifesaving clinical trial.
Fred Hutchinson Cancer Research Center (FHCRC) utilized Comprehend Medical in their clinical trial management. FHCRC was spending about 1.5 hours annotating a single patient note and about 2.5 hours on manual chart abstraction per patient, processing charts for about three patients per day. By using Comprehend Medical, FHCRC was able to annotate 9,642 patient notes per hour.
Vii-B Patient and Population Health Analytics
Population health focuses on discovering the factors and conditions that shape the health of a population over time. It aims to identify patterns of occurrence and to discover knowledge in order to develop policies and actions that improve the health of a group or population .
Examples of population health analytics include patient stratification, readmission prediction and mortality measurement. Automatically unlocking important information from narrative text is invaluable to organizations participating in value-based healthcare and population health. Structured medical records do not fully identify patients with a medical history of diabetes, which results in an underestimation of disease prevalence . The inability to identify patient cohorts from structured data is a problem for the development of population health and clinical management systems, and it negatively affects the accuracy of identifying high-risk and high-cost patients . Ref.  identified three areas that may have an impact on readmission but are poorly documented in the EMR system, motivating NLP-based solutions to extract such information. Some symptoms and illness characteristics necessary to develop reliable predictors are also missing from coded billing data . Ref.  performed mortality prediction and reported a 2% increase in the Area Under the Curve when using features from both structured data and concepts extracted from narrative notes, and  found that the predictive power of suicide risk factors in EMR systems becomes asymptotic, leading them to analyze clinical notes to predict suicide risk.
As the examples above show, NLP-based approaches can help identify concepts that are incorrectly codified or missing in EMR systems. Population health platforms can expand their risk analytics to leverage unstructured clinical data for predicting high-risk patients and for epidemiologic studies on disease outbreaks.
Vii-C Revenue Cycle Management
In healthcare, Revenue Cycle Management (RCM) is the process of collecting revenue and tracking claims from healthcare providers, including hospitals, outpatient clinics, nursing homes, dental clinics and physician groups .
The RCM process has been inefficient, as most healthcare systems use rule-based approaches and manual audits of documents for billing and coding . Rule-based systems are time consuming, expensive to maintain, and require frequent human intervention. Due to these ineffective processes, data coded at the point of care, which is the source of claims data, can contain errors and inconsistencies.
Coding is the process of encoding the details of patient encounters into standardized terminology . A study by  found 48 errors in 38 of 106 finished consultant episodes in urology, 71% of which were caused by inaccurate coding. Ref.  measured the consistency of coded data and found that some of these errors were significant enough to change the diagnosis-related group.
RCM companies can use Comprehend Medical to enhance existing workflows around computer-assisted coding and to validate codes submitted by providers. In addition, claim audits, which often require finding textual evidence for submitted claims and are currently done manually, could be performed faster and more accurately.
Vii-D Pharmacovigilance

The aim of pharmacovigilance is to monitor, detect and prevent adverse drug events (ADE) caused by medical drugs. An early system used for pharmacovigilance is the spontaneous reporting system (SRS), which provided safety information on drugs . However, SRS databases are incomplete, inaccurate and subject to biased reporting [47, 24]. A newer generation of databases contains clinical information for large patient populations, such as the Intensive Medicines Monitoring Program (IMMP) and the General Practice Research Database (GPRD). Such databases include data from structured fields and forms, but only a small amount of detail is stored in those structured fields. Researchers then started looking to EHR data for pharmacovigilance. However, the most valuable information in patient records is contained in the unstructured text.
Viii Conclusion

Studies have shown that narrative notes are more expressive and engaging and capture the patient's story more accurately than structured EHR data. They also contain more naturalistic prose, are more reliable for identifying patients with a given disease, and are more understandable to the healthcare providers reviewing them, underscoring the need for a more accurate, intuitive and easy-to-use NLP system. In this paper we presented Comprehend Medical, a HIPAA-eligible Amazon Web Service for medical language entity recognition and relationship extraction. Comprehend Medical supports several entity types divided into five categories (Anatomy, Medical Condition, Medication, Protected Health Information, and Treatment, Test and Procedure) and four traits (Negation, Diagnosis, Sign, Symptom). Comprehend Medical uses state-of-the-art deep learning models and provides two APIs, the NERe API and the PHId API. It comes with four interfaces (CLI, Java SDK, Python SDK and GUI) and, unlike many existing clinical NLP systems, requires no dependencies, configuration or customization of pipelined components.
References

- Global normalization of convolutional neural networks for joint entity and relation classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 1723–1729.
- (2010) An overview of MetaMap: historical perspective and recent advances. Journal of the American Medical Informatics Association (JAMIA) 17(3), pp. 229–236.
- (2005) The Apache OpenNLP Project. URL: https://opennlp.apache.org/.
- (2000) Do we do what they say we do? Coding errors in urology. BJU International 85(4), pp. 389–391.
- (2014) Big data in health care: using analytics to identify and manage high-risk and high-cost patients. Health Affairs 33(7), pp. 1123–1131.
- Dynamic transfer learning for named entity recognition. In International Workshop on Health Intelligence, pp. 69–81.
- (2019) Joint entity extraction and assertion detection for clinical text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 954–959.
- (2004) The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research 32(Database issue), pp. 267D–270.
- (1997) Natural language generation in health care. Journal of the American Medical Informatics Association (JAMIA) 4(6), pp. 473–482.
- (2016) Bidirectional LSTM-CRF for clinical concept extraction. In Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP), Osaka, Japan, pp. 7–12.
- (2001) A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics 34(5), pp. 301–310.
- (2017) Automatic negation and speculation detection in veterinary clinical text. In Proceedings of the Australasian Language Technology Association Workshop 2017, pp. 70–78.
- (2018) A hybrid deep learning approach for medical relation extraction. arXiv preprint arXiv:1806.11189.
- (2018) A walk-based model on entity graphs for relation extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia, pp. 81–88.
- (2011) Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010. Journal of the American Medical Informatics Association 18(5), pp. 557–562.
- (2017) MetaMap Lite: an evaluation of a new Java implementation of MetaMap. Journal of the American Medical Informatics Association 24(4), ocw177.
- (2016) Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 495–504.
- (2004) UIMA: an architectural approach to unstructured information processing in the corporate research environment. Natural Language Engineering 10(3-4), pp. 327–348.
- (1998) Accuracy of medical records in hip fracture. Journal of the American Geriatrics Society 46(6), pp. 745–750.
- (2008) EHR's effect on the revenue cycle management coding function. Journal of Healthcare Information Management (JHIM) 22(1), pp. 26–30.
- (2010) Transforming clinical research in the United States: challenges and opportunities: workshop summary. National Academies Press.
- (2017) A novel model for predicting rehospitalization risk incorporating physical function, cognitive status, and psychosocial support using natural language processing. Medical Care 55(3), pp. 261–266.
- (2009) ConText: an algorithm for determining negation, experiencer, and temporal status from clinical reports. Journal of Biomedical Informatics 42(5), pp. 839–851.
- (2015) Identifying adverse drug event information in clinical notes with distributional semantic representations of context. Journal of Biomedical Informatics 57, pp. 333–349.
- (2018) Improving hospital mortality prediction with medical named entities and multimodal learning. Neural Information Processing Systems Workshop on Machine Learning for Health.
- (2001) Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pp. 282–289.
- (2016) Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pp. 260–270.
- (2003) Benchmarking variation in coding accuracy across the United States. Journal of Health Care Finance 29(4), pp. 29–42.
- (2017) Natural language processing for EHR-based pharmacovigilance: a structured review. Drug Safety 40(11), pp. 1075–1089.
- (1999) Prospective, randomized trial of template-assisted versus undirected written recording of physician records in the emergency department. Annals of Emergency Medicine 33(5), pp. 500–509.
- (2015) Contextualist inquiry into IT-enabled hospital revenue cycle management: bridging research and practice. Journal of the Association for Information Systems 16(12), pp. 1016.
- (2016) End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1105–1116.
- (2014) Predicting the risk of suicide by analyzing the text of clinical notes. PLoS ONE 9(1), e85733.
- (2011) Data from clinical notes: a perspective on the tension between structure and flexible documentation. Journal of the American Medical Informatics Association 18(2), pp. 181–186.
- (2017) A hybrid neural network model for joint prediction of presence and period assertions of medical events in clinical notes. In AMIA Annual Symposium Proceedings, Vol. 2017, pp. 1149.
- (2016) Predicting early psychiatric readmission with natural language processing of narrative discharge summaries. Translational Psychiatry 6(10), e921.
- (2010) Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association (JAMIA) 17(5), pp. 507–513.
- (2013) Big data in health care: solving provider revenue leakage with advanced analytics. Healthcare Financial Management 67(2), pp. 40–43.
- (2014) Identifying plausible adverse drug reactions using knowledge extracted from the literature. Journal of Biomedical Informatics 52, pp. 293–310.
- (2015) Extending NegEx with kernel methods for negation detection in clinical text. In Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM 2015), pp. 41–46.
- (2019) Relation extraction using explicit context conditioning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota, USA, pp. 1442–1447.
- (2012) Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951–2959.
- (2018) CLAMP – a toolkit for efficiently building customized clinical natural language processing pipelines. Journal of the American Medical Informatics Association 25(3), pp. 331–336.
- (2018) Global relation embedding for relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 820–830.
- (1996) The physician's flexible narrative. Methods of Information in Medicine 35(2), pp. 98–100.
- (2018) Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 872–884.
- (2009) Active computerized pharmacovigilance using natural language processing, statistics, and electronic health records: a feasibility study. Journal of the American Medical Informatics Association 16(3), pp. 328–337.
- (2018) Clinical information extraction applications: a literature review. Journal of Biomedical Informatics 77, pp. 34–49.
- A learning algorithm for continually running fully recurrent neural networks. Neural Computation 1(2), pp. 270–280.
- (2016) Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270.
- (2016) Web-based real-time case finding for the population health management of patients with diabetes mellitus: a prospective validation of the natural language processing-based algorithm with statewide electronic medical records. JMIR Medical Informatics 4(4), e37.
- (2017) Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, pp. 1227–1236.