Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts

by Dave Van Veen, et al.

Sifting through vast amounts of textual data and summarizing key information imposes a substantial burden on clinicians' time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy across diverse clinical summarization tasks has not yet been rigorously examined. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods, as well as instances where recent advances in LLMs do not lead to improved results. Further, in a clinical reader study with six physicians, we show that summaries from the best adapted LLM are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis delineates challenges shared by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to better understand how these metrics align with physician preferences. Our research provides the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and other irreplaceable human aspects of medicine.
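The last step the abstract describes, correlating automatic NLP metrics with reader study scores, is commonly done with a rank correlation such as Spearman's rho. The sketch below is a minimal, self-contained illustration of that idea; the metric values and physician scores are made up for the example and are not data from the paper.

```python
# Hypothetical sketch: correlating an automatic summarization metric
# (e.g., per-summary ROUGE-L scores) with physician reader-study scores
# via Spearman rank correlation. All numbers below are illustrative.

def ranks(values):
    """Assign 1-based average ranks to values, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative per-summary scores (not real data):
rouge_l = [0.42, 0.31, 0.55, 0.47, 0.38]   # automatic metric
reader  = [4.0, 3.5, 5.0, 4.5, 3.0]        # physician rating, 1-5 scale
print(round(spearman(rouge_l, reader), 3))  # → 0.9
```

A rank correlation is a natural choice here because physician ratings are ordinal: only the ordering of summaries matters, not the absolute gap between scores.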




