RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models

05/02/2023
by Dave Van Veen, et al.

We systematically investigate lightweight strategies to adapt large language models (LLMs) for the task of radiology report summarization (RRS). Specifically, we focus on domain adaptation via pretraining (on natural language, biomedical text, and clinical text) and via prompting (zero-shot, in-context learning) or parameter-efficient fine-tuning (prefix tuning, LoRA). Our results on the MIMIC-III dataset consistently demonstrate best performance by maximally adapting to the task via pretraining on clinical text and parameter-efficient fine-tuning on RRS examples. Importantly, this method fine-tunes a mere 0.32% of model parameters, in contrast to end-to-end fine-tuning (100% of parameters). Additionally, we study the effect of in-context examples and out-of-distribution (OOD) training before concluding with a radiologist reader study and qualitative analysis. Our findings highlight the importance of domain adaptation in RRS and provide valuable insights toward developing effective natural language processing solutions for clinical tasks.
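As a concrete illustration of the parameter-efficient fine-tuning strategy described in the abstract, the sketch below attaches LoRA adapters to a seq2seq language model for findings-to-impression summarization. It is a minimal sketch using the Hugging Face transformers and peft libraries; the checkpoint name, LoRA hyperparameters, and prompt format are illustrative assumptions, not necessarily the configuration used in the paper.

```python
# Minimal sketch: LoRA-based parameter-efficient fine-tuning setup for
# radiology report summarization (findings -> impression).
# The model name and hyperparameters below are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

model_name = "google/flan-t5-base"  # placeholder seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA injects small low-rank matrices into the attention projections and
# trains only those, keeping the base weights frozen; this is why only a
# small fraction of parameters (on the order of tenths of a percent) is tuned.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5-style attention projection names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction

# Example inference call; for fine-tuning, pairs of findings/impression
# sections would be fed through a standard seq2seq training loop instead.
findings = "The lungs are clear. No pleural effusion or pneumothorax."
inputs = tokenizer("summarize the findings: " + findings, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```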

