GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain

09/06/2021
by Milad Moradi, et al.

Deep neural language models have set new breakthroughs in many Natural Language Processing (NLP) tasks. Recent work has shown that deep transformer language models, pretrained on large amounts of text, can achieve task-specific few-shot performance comparable to that of state-of-the-art models. However, the few-shot transfer ability of these large language models has not yet been explored in the biomedical domain. We investigated the performance of two powerful transformer language models, GPT-3 and BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental results showed that, to a large extent, both models underperform a language model fine-tuned on the full training data. Although GPT-3 had already achieved near state-of-the-art results in few-shot knowledge transfer on open-domain NLP tasks, it could not perform as effectively as BioBERT, which is orders of magnitude smaller. Given that BioBERT was pretrained on large biomedical text corpora, our study suggests that language models benefit substantially from in-domain pretraining in task-specific few-shot learning. In-domain pretraining alone, however, does not appear to be sufficient: novel pretraining and few-shot learning strategies are needed for biomedical NLP.
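The abstract does not include the paper's prompting code. As a rough illustration of the in-context few-shot setup it evaluates for GPT-3, the sketch below packs a handful of labeled demonstrations into a single prompt and asks the model to label a new input. It assumes the pre-1.0 `openai` Python client and an API key in the environment; the task, labels, and example sentences are invented placeholders, not the paper's actual benchmarks.

```python
# Minimal sketch of GPT-3 in-context few-shot classification on a
# biomedical sentence-labeling task. Everything task-specific here
# (labels, example sentences) is an illustrative assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A handful of labeled demonstrations: in few-shot prompting the model
# never updates its weights; it only conditions on these k examples.
FEW_SHOT_EXAMPLES = [
    ("Aspirin reduced the risk of myocardial infarction.", "treatment"),
    ("Mutations in BRCA1 increase breast cancer susceptibility.", "genetics"),
    ("Patients reported severe nausea after chemotherapy.", "adverse_effect"),
]

def build_prompt(test_sentence: str) -> str:
    """Concatenate the demonstrations and the query into one prompt."""
    lines = ["Classify each biomedical sentence."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}\nLabel: {label}")
    lines.append(f"Sentence: {test_sentence}\nLabel:")
    return "\n\n".join(lines)

def classify(test_sentence: str) -> str:
    response = openai.Completion.create(
        engine="davinci",        # GPT-3 base model of the 2021 era
        prompt=build_prompt(test_sentence),
        max_tokens=5,
        temperature=0.0,         # deterministic: most likely label only
        stop="\n",
    )
    return response["choices"][0]["text"].strip()

print(classify("Metformin lowered fasting glucose in diabetic patients."))
```

The BioBERT side of the comparison works differently: rather than conditioning on demonstrations in the prompt, the same handful of labeled examples is used to fine-tune the model's weights, which is roughly the contrast behind the abstract's point about in-domain pretraining.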
