CBAG: Conditional Biomedical Abstract Generation

02/13/2020
by   Justin Sybrandt, et al.

Biomedical research papers use significantly different language and jargon than typical English text, which reduces the utility of pre-trained NLP models in this domain. Meanwhile, MEDLINE, a database of biomedical abstracts, adds nearly a million new documents per year. Applications that could benefit from understanding this wealth of publicly available information, such as scientific writing assistants, chatbots, or descriptive hypothesis generation systems, require new domain-centered approaches. A conditional language model, one that learns the probability of words given some a priori criteria, is a fundamental building block in many such applications. We propose a transformer-based conditional language model with a shallow encoder "condition" stack and a deep "language model" stack of multi-headed attention blocks. The condition stack encodes metadata used to alter the output probability distribution of the language model stack. We sample this distribution to generate biomedical abstracts given only a proposed title, an intended publication year, and a set of keywords. Using typical natural language generation metrics, we demonstrate that this proposed approach is more capable of producing non-trivial, relevant entities within the abstract body than the 1.5B-parameter GPT-2 language model.
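The shallow-encoder / deep-decoder design described above can be sketched with PyTorch's stock transformer layers. This is a minimal illustration of the idea, not the authors' implementation: the class name, layer counts, and dimensions below are illustrative assumptions. A small encoder stack embeds the metadata tokens (title, year, keywords), and a deeper causally-masked decoder stack attends to that encoding while modeling the abstract text.

```python
import torch
import torch.nn as nn

class ConditionalAbstractLM(nn.Module):
    """Sketch: shallow "condition" encoder + deep "language model" decoder."""

    def __init__(self, vocab, cond_vocab, d=64, heads=4,
                 enc_layers=1, dec_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)        # abstract-text tokens
        self.cond_emb = nn.Embedding(cond_vocab, d)  # metadata tokens
        enc_layer = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, enc_layers)
        dec_layer = nn.TransformerDecoderLayer(d, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, dec_layers)
        self.out = nn.Linear(d, vocab)               # per-position vocab logits

    def forward(self, text_ids, cond_ids):
        # Encode metadata; the result conditions every decoder layer
        # through cross-attention.
        memory = self.encoder(self.cond_emb(cond_ids))
        # Causal mask so each position sees only earlier text tokens.
        T = text_ids.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.decoder(self.tok_emb(text_ids), memory, tgt_mask=mask)
        return self.out(h)  # (batch, T, vocab) logits to sample from
```

Generation would then proceed autoregressively: feed the title/year/keyword tokens as `cond_ids`, and repeatedly sample the next abstract token from the softmax of the final position's logits.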


