Progressive Transformer-Based Generation of Radiology Reports

02/19/2021
by   Farhad Nooralahzadeh, et al.

Inspired by curriculum learning, we propose a consecutive (i.e., image-to-text-to-text) generation framework that divides the problem of radiology report generation into two steps. Rather than generating the full radiology report from the image at once, the model first generates global concepts from the image and then refines them into finer, more coherent texts using a transformer-based architecture. We follow the transformer-based sequence-to-sequence paradigm at each step. We improve upon the state-of-the-art on two benchmark datasets.
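The two-step pipeline described above can be sketched schematically as follows. This is a minimal illustration of the control flow only; the function names, concept vocabulary, and template-based refinement here are placeholders, not the authors' actual model components, which are transformer-based sequence-to-sequence networks at each step.

```python
# Schematic of the progressive (image -> concepts -> report) generation
# pipeline. Both steps are stubbed with simple heuristics for illustration;
# in the paper each step is a transformer sequence-to-sequence model.

def extract_global_concepts(image_features):
    """Step 1 (image-to-text): emit high-level concept tokens from image
    features. Stubbed here as a lookup keyed on the dominant feature."""
    concept_vocab = ["cardiomegaly", "pleural effusion", "no acute findings"]
    idx = max(range(len(image_features)), key=image_features.__getitem__)
    return [concept_vocab[idx % len(concept_vocab)]]

def refine_to_report(concepts):
    """Step 2 (text-to-text): rewrite the coarse concepts into a fluent
    report sentence. Stubbed here as a fixed template."""
    return "Findings suggestive of " + ", ".join(concepts) + "."

def generate_report(image_features):
    # Curriculum-style decomposition: solve the easier global-concept task
    # first, then condition the harder report-writing task on its output.
    concepts = extract_global_concepts(image_features)
    return refine_to_report(concepts)

print(generate_report([0.1, 0.8, 0.05]))
```

The key design point is that the second model conditions on the first model's textual output rather than on the raw image, so each sub-task is easier than end-to-end report generation.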


Related research

08/31/2019 · Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
Recent studies on AMR-to-text generation often formalize the task as a s...

09/21/2021 · TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models
Text recognition is a long-standing research problem for document digita...

08/09/2022 · High Recall Data-to-text Generation with Progressive Edit
Data-to-text (D2T) generation is the task of generating texts from struc...

10/19/2021 · Unifying Multimodal Transformer for Bi-directional Image and Text Generation
We study the joint learning of image-to-text and text-to-image generatio...

06/20/2023 · Explicit Syntactic Guidance for Neural Text Generation
Most existing text generation models follow the sequence-to-sequence par...

06/24/2022 · Competence-based Multimodal Curriculum Learning for Medical Report Generation
Medical report generation task, which targets to produce long and cohere...

02/12/2021 · On Efficient Training, Controllability and Compositional Generalization of Insertion-based Language Generators
Auto-regressive language models with the left-to-right generation order ...
