KU-DMIS-MSRA at RadSum23: Pre-trained Vision-Language Model for Radiology Report Summarization

07/10/2023
by Gangwoo Kim, et al.

In this paper, we introduce CheXOFA, a new pre-trained vision-language model (VLM) for the chest X-ray domain. Our model is first pre-trained on various multimodal datasets in the general domain before being transferred to the chest X-ray domain. Following a prominent VLM, we unify the various domain-specific tasks into a simple sequence-to-sequence schema, which enables the model to learn the required knowledge and skills effectively from the limited resources available in the domain. Our model demonstrates superior performance on the benchmark datasets provided by the BioNLP shared task, benefiting from its training across multiple tasks and domains. With additional techniques, including ensembling and factual calibration, our system achieves first place on the RadSum23 leaderboard for the hidden test set.
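The abstract describes the unified schema and the ensemble/factual-calibration step only at a high level. The sketch below is a minimal, hypothetical illustration of the two ideas: every task becomes an (image, instruction, target) triple consumed by one sequence-to-sequence model, and candidate outputs from several checkpoints are re-ranked by a clinical factuality scorer. All function names, prompt strings, and the scorer interface are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (not the authors' code) of the two ideas named in the
# abstract: (1) casting every chest X-ray task as one sequence-to-sequence
# problem, and (2) re-ranking ensemble outputs by factuality.
# All names and prompt strings below are assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Seq2SeqExample:
    """One training/inference instance in the unified schema."""
    image_path: str   # chest X-ray given to the vision encoder
    source_text: str  # task instruction plus any input text
    target_text: str  # expected decoder output


def summarization_example(image: str, findings: str, impression: str) -> Seq2SeqExample:
    # Radiology report summarization: findings section -> impression section.
    return Seq2SeqExample(
        image_path=image,
        source_text=f"Summarize the findings: {findings}",
        target_text=impression,
    )


def report_generation_example(image: str, report: str) -> Seq2SeqExample:
    # Report generation fits the same schema; only the instruction changes.
    return Seq2SeqExample(
        image_path=image,
        source_text="Describe the chest X-ray.",
        target_text=report,
    )


def pick_most_factual(candidates: List[str],
                      factuality: Callable[[str], float]) -> str:
    # Stand-in for "ensembling and factual calibration": among summaries
    # decoded by several checkpoints, keep the one a clinical factuality
    # scorer (e.g., a CheXbert-style label matcher) rates highest.
    return max(candidates, key=factuality)
```

Because every task shares one input/output format, a single encoder-decoder model can be fine-tuned on all of them jointly, which is how limited in-domain data can be stretched across tasks.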

