Multimodal Story Generation on Plural Images

01/16/2020
by Jing Jiang, et al.

Traditionally, text generation models take a sequence of text as input and iteratively generate the next most probable word using pre-trained parameters. In this work, we propose StoryGen, an architecture that takes images rather than text as the input to a text generation model. Within this architecture, we design a Relational Text Data Generator algorithm that relates features extracted from multiple images. Output samples from the model demonstrate its ability to generate meaningful paragraphs of text that incorporate the features extracted from the input images.
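The pipeline described above (per-image feature extraction, cross-image relation of those features, then text generation seeded by the relations) can be sketched as follows. This is a minimal toy illustration under assumed names; the functions, the pairwise relation rule, and the templated sentence generator are hypothetical stand-ins, not the paper's actual Relational Text Data Generator or StoryGen implementation.

```python
from itertools import combinations

def extract_features(image_id, detections):
    # Stand-in for a visual feature extractor (e.g. a CNN/object detector):
    # tags each detected feature with the image it came from.
    return [(image_id, label) for label in detections]

def relate_features(per_image_features):
    # Toy "relational" step: pair up features that come from *different*
    # images, so that generated text links content across images.
    flat = [f for feats in per_image_features for f in feats]
    return [(a, b) for a, b in combinations(flat, 2) if a[0] != b[0]]

def generate_story(relations):
    # Stand-in for the text generation model: emits one templated
    # sentence per cross-image relation instead of sampling words.
    sentences = [
        f"The {a[1]} from image {a[0]} appears alongside the {b[1]} from image {b[0]}."
        for a, b in relations
    ]
    return " ".join(sentences)

# Two input images with illustrative detected features
images = {1: ["dog", "ball"], 2: ["child"]}
feats = [extract_features(i, d) for i, d in images.items()]
print(generate_story(relate_features(feats)))
```

Only cross-image pairs survive the relation step, so same-image feature pairs (here, "dog" and "ball" in image 1) produce no sentence; a real model would replace the template with iterative next-word prediction conditioned on the related features.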
