Do We Still Need Clinical Language Models?

02/16/2023
by Eric Lehman, et al.

Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models, trained primarily on general web text, are the right tool in highly specialized, safety-critical domains such as clinical text. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC-III and MIMIC-IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement.
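The released code and models are not reproduced here; the sketch below is a minimal, hypothetical illustration (using Hugging Face Transformers) of the kind of setup the abstract contrasts with in-context learning: finetuning a small T5-style model on a handful of annotated clinical examples cast as text-to-text. The checkpoint name, dataset fields, and hyperparameters are illustrative assumptions, not the authors' configuration.

# Hypothetical sketch, not the authors' released pipeline: finetuning a
# small T5-style model on a clinical task framed as text-to-text.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    DataCollatorForSeq2Seq,
)
from datasets import Dataset

# "t5-base" is a stand-in for a clinically pretrained checkpoint.
checkpoint = "t5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy annotated examples; real clinical notes would be used only under the
# appropriate data use agreement (e.g., PhysioNet credentialing for MIMIC).
train = Dataset.from_dict({
    "note": ["patient reports chest pain radiating to left arm ..."],
    "label": ["positive"],
})

def preprocess(batch):
    # Cast the task as text-to-text, as T5 expects: note in, label string out.
    inputs = tokenizer(batch["note"], truncation=True, max_length=512)
    targets = tokenizer(text_target=batch["label"], truncation=True, max_length=8)
    inputs["labels"] = targets["input_ids"]
    return inputs

train = train.map(preprocess, batched=True, remove_columns=["note", "label"])

args = Seq2SeqTrainingArguments(
    output_dir="clinical-t5-finetuned",
    per_device_train_batch_size=8,
    learning_rate=3e-4,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

Even a modest finetuning run of this shape, on limited labels, is the baseline the abstract reports as outperforming in-context learning with far larger general-domain LLMs.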


