Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints

10/17/2022
by   Omid Rohanian, et al.

Processing information locked within clinical health records is a challenging task that remains an active area of research in biomedical NLP. In this work, we evaluate a broad set of machine learning techniques, ranging from simple RNNs to specialised transformers such as BioBERT, on a dataset containing clinical notes along with a set of annotations indicating whether a sample is cancer-related or not. Furthermore, we employ parameter-efficient fine-tuning methods from NLP, namely bottleneck adapters and prompt tuning, to adapt the models to our specialised task. Our evaluations suggest that fine-tuning a frozen BERT model pre-trained on general natural language using bottleneck adapters outperforms all other strategies, including full fine-tuning of the specialised BioBERT model. Based on our findings, we suggest that using bottleneck adapters in low-resource situations with limited access to labelled data or processing capacity could be a viable strategy in biomedical text mining. The code used in the experiments will be made available at https://github.com/omidrohanian/bottleneck-adapters.
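For readers unfamiliar with the technique, a bottleneck adapter is a small residual module inserted into each layer of a frozen transformer: hidden states are projected down to a low-dimensional bottleneck, passed through a nonlinearity, projected back up, and added to the input, so only the tiny adapter weights are trained. The following is a minimal NumPy sketch of this idea under illustrative assumptions (the dimensions, initialisation, and function names are hypothetical, not the authors' code):

```python
import numpy as np

def bottleneck_adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project the frozen model's hidden states,
    apply a nonlinearity, up-project, and add a residual connection."""
    z = np.maximum(0.0, h @ W_down)   # ReLU inside the low-dim bottleneck
    return h + z @ W_up               # residual: adapter output added to input

# Illustrative sizes: hidden dim 768 (BERT-base), bottleneck dim 64.
rng = np.random.default_rng(0)
d_model, d_bottleneck = 768, 64
W_down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
W_up = np.zeros((d_bottleneck, d_model))  # zero-init so the adapter starts as identity

h = rng.normal(size=(4, d_model))          # a small batch of token representations
out = bottleneck_adapter(h, W_down, W_up)
assert out.shape == h.shape
assert np.allclose(out, h)                 # identity at initialisation
```

Because only `W_down` and `W_up` (roughly 2 × 768 × 64 parameters per adapter here) are updated while the backbone stays frozen, this approach needs far less memory and labelled data than full fine-tuning, which is what makes it attractive in the low-resource setting the paper studies.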


