Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain

07/06/2023
by Aryo Pradipta Gema, et al.

Adapting pretrained language models to novel domains, such as clinical applications, traditionally involves retraining their entire set of parameters. However, this approach is increasingly impractical owing to the substantial computational cost of training such large language models. Parameter-Efficient Fine-Tuning (PEFT) techniques address this issue by fine-tuning only a small set of additional parameters, significantly reducing the computational requirements of domain adaptation. In this study, we propose Clinical LLaMA-LoRA, a PEFT adapter layer built upon the open-sourced LLaMA model. Clinical LLaMA-LoRA is trained on clinical notes from the MIMIC-IV database, yielding an adapter specialised for the clinical domain. Additionally, we propose a two-step PEFT framework that fuses Clinical LLaMA-LoRA with Downstream LLaMA-LoRA, a second PEFT adapter specialised for downstream tasks. We evaluate this framework on multiple clinical outcome prediction datasets, comparing it against clinically trained language models. Our proposed framework achieves a state-of-the-art AUROC score averaged across all clinical downstream tasks, with substantial improvements of 6-9% in large-scale multilabel classification tasks such as diagnosis and procedure classification.
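The LoRA adapters at the core of this framework replace full-weight updates with a trainable low-rank correction to each frozen weight matrix. The following is a minimal numpy sketch of that mechanism under standard LoRA assumptions; the class name, shapes, and hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np

class LoRALinear:
    """Linear layer with a LoRA adapter: y = x W^T + (alpha/r) * x A^T B^T.

    The pretrained weight W stays frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) are trained, shrinking the trainable
    parameter count from d_out*d_in to r*(d_in + d_out).
    """

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.02, (d_out, d_in))  # frozen pretrained weight
        self.A = rng.normal(0.0, 0.02, (r, d_in))      # trainable down-projection
        self.B = np.zeros((d_out, r))                  # trainable up-projection,
        self.scale = alpha / r                         # zero-init => no change at start

    def __call__(self, x):
        # Base path plus scaled low-rank correction.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def merge(self):
        """Fold the adapter into the base weight: W + (alpha/r) * B A."""
        return self.W + self.scale * self.B @ self.A
```

The two-step framework in the abstract can be read through `merge()`: a domain adapter (Clinical LLaMA-LoRA) is trained and folded into the frozen backbone, and a fresh task adapter (Downstream LLaMA-LoRA) is then trained on top for each clinical prediction task.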


