Learning to Imagine: Visually-Augmented Natural Language Generation

05/26/2023
by   Tianyi Tang, et al.

People often imagine relevant scenes to aid in the writing process. In this work, we aim to utilize visual information for composition in the same manner as humans. We propose a method, LIVE, that makes pre-trained language models (PLMs) Learn to Imagine for Visually-augmented natural language gEneration. First, we imagine the scene based on the text: we use a diffusion model to synthesize high-quality images conditioned on the input text. Second, we use CLIP to determine, in a posterior way, whether the text can evoke the imagination. Finally, our imagination is dynamic: we synthesize an image for each sentence rather than generating only one image for an entire paragraph. Technically, we propose a novel plug-and-play fusion layer to obtain visually-augmented representations for each text. Our vision-text fusion layer is compatible with Transformer-based architectures. We have conducted extensive experiments on four generation tasks using BART and T5, and both the automatic results and human evaluation demonstrate the effectiveness of our proposed method. We will release the code, model, and data at https://github.com/RUCAIBox/LIVE.
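The abstract outlines an "imagine, then filter" pipeline: a diffusion model synthesizes one image per sentence, and CLIP then checks, in a posterior way, whether the sentence actually evokes a useful image. The sketch below illustrates that idea with off-the-shelf Stable Diffusion and CLIP checkpoints; the model names, the similarity threshold, and the `imagine_for_sentence` helper are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the "imagine, then filter" step described in the abstract.
# Checkpoints and the threshold are assumptions, not the released LIVE code.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def imagine_for_sentence(sentence: str, threshold: float = 25.0):
    """Synthesize an image for one sentence, then keep it only if CLIP judges
    that the sentence and the image match (the posterior check)."""
    image = sd(sentence, num_inference_steps=25).images[0]
    inputs = clip_proc(text=[sentence], images=image,
                       return_tensors="pt", padding=True).to(device)
    score = clip(**inputs).logits_per_image.item()  # scaled image-text similarity
    return image if score >= threshold else None    # None: no useful imagination

# Dynamic imagination: one (optional) image per sentence of the input paragraph.
paragraph = ["A fisherman pushes his boat into the grey morning sea.",
             "He thinks about the letter he never answered."]
images = [imagine_for_sentence(s) for s in paragraph]
```

The plug-and-play fusion layer itself is only named in the abstract; a minimal guess at how such a layer could inject visual features into a Transformer backbone is sketched below, assuming cross-attention from text hidden states to image features and a learned gate (the dimensions, gating scheme, and class name are assumptions).

```python
import torch
import torch.nn as nn

class VisionTextFusionLayer(nn.Module):
    """Hypothetical plug-and-play fusion: text hidden states cross-attend to
    per-sentence visual features; a zero-initialized gate keeps the backbone
    PLM's behaviour unchanged at the start of training."""
    def __init__(self, d_model: int, d_visual: int, n_heads: int = 8):
        super().__init__()
        self.visual_proj = nn.Linear(d_visual, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # starts closed: output == text_hidden
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_hidden: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # text_hidden: (B, T, d_model); visual_feats: (B, S, d_visual),
        # one visual row per imagined sentence.
        v = self.visual_proj(visual_feats)
        fused, _ = self.cross_attn(query=text_hidden, key=v, value=v)
        return self.norm(text_hidden + torch.tanh(self.gate) * fused)
```

A layer of this kind could be interleaved with existing BART or T5 encoder layers, so that only the fusion parameters learn to use the imagined images while the PLM itself stays (largely) intact.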

