DiffuSIA: A Spiral Interaction Architecture for Encoder-Decoder Text Diffusion

05/19/2023
by Chao-Hong Tan, et al.

Diffusion models have emerged as a new state-of-the-art family of deep generative models, and their promising potential for text generation has recently attracted increasing attention. Existing studies mostly adopt a single-encoder architecture with a partial noising process for conditional text generation, but this design offers limited flexibility for conditional modeling. The encoder-decoder architecture is naturally more flexible: its detachable encoder and decoder modules make it extensible to multilingual and multimodal generation tasks for both conditions and target texts. However, in this architecture the encoding of the conditional text lacks awareness of the target text. To this end, a spiral interaction architecture for encoder-decoder text diffusion (DiffuSIA) is proposed. Concretely, the conditional information from the encoder is captured by the diffusion decoder, while the target information from the decoder is captured by the conditional encoder. These two information flows interleave spirally across multiple interaction layers for deep fusion and understanding. DiffuSIA is evaluated on four text generation tasks, including paraphrase, text simplification, question generation, and open-domain dialogue generation. Experimental results show that DiffuSIA achieves competitive performance against previous methods on all four tasks, demonstrating the effectiveness and generalization ability of the proposed method.
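
The following is a minimal PyTorch sketch of the spiral interaction idea described in the abstract, in which the condition stream and the diffusion (target) stream attend to each other in every layer. All names and settings here (SpiralInteractionBlock, DiffuSIALikeModel, the 1000-step timestep embedding, predicting clean target embeddings) are illustrative assumptions for exposition, not the authors' released implementation.

```python
# A minimal sketch of spiral encoder-decoder interaction, assuming a PyTorch
# implementation; module names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class SpiralInteractionBlock(nn.Module):
    """One 'turn' of the spiral: the target (diffusion) stream attends to the
    condition stream, then the condition stream attends back to the target."""
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.cond_self = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.tgt_self = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.tgt_from_cond = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cond_from_tgt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_tgt = nn.LayerNorm(d_model)
        self.norm_cond = nn.LayerNorm(d_model)

    def forward(self, cond, tgt):
        # Per-stream self-attention first.
        cond = self.cond_self(cond)
        tgt = self.tgt_self(tgt)
        # The diffusion decoder stream captures conditional information ...
        attn_t, _ = self.tgt_from_cond(query=tgt, key=cond, value=cond)
        tgt = self.norm_tgt(tgt + attn_t)
        # ... and the conditional encoder stream captures target information,
        # so the two flows interleave layer by layer ("spiral" interaction).
        attn_c, _ = self.cond_from_tgt(query=cond, key=tgt, value=tgt)
        cond = self.norm_cond(cond + attn_c)
        return cond, tgt


class DiffuSIALikeModel(nn.Module):
    """Stack of spiral blocks; the target stream carries the noisy latent x_t
    plus a diffusion timestep embedding and predicts denoised target embeddings."""
    def __init__(self, vocab_size: int, d_model: int = 512, n_layers: int = 6):
        super().__init__()
        self.cond_embed = nn.Embedding(vocab_size, d_model)
        self.time_embed = nn.Embedding(1000, d_model)   # assumes 1000 diffusion steps
        self.blocks = nn.ModuleList(
            [SpiralInteractionBlock(d_model) for _ in range(n_layers)]
        )
        self.out = nn.Linear(d_model, d_model)          # predicts clean target embeddings

    def forward(self, cond_ids, x_t, t):
        cond = self.cond_embed(cond_ids)
        tgt = x_t + self.time_embed(t).unsqueeze(1)     # broadcast timestep over positions
        for block in self.blocks:
            cond, tgt = block(cond, tgt)
        return self.out(tgt)
```

Under these assumptions, a forward pass takes condition token ids of shape (batch, src_len), a noisy target embedding x_t of shape (batch, tgt_len, d_model), and integer timesteps t of shape (batch,), and returns predicted denoised target embeddings. The key point the sketch illustrates is that both streams are updated in every layer, so the condition encoding becomes target-aware rather than being computed once in isolation.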

