SALSA-TEXT: Self-Attentive Latent Space Based Adversarial Text Generation

09/28/2018
by Jules Gagnon-Marchand, et al.

Inspired by the success of the self-attention mechanism and the Transformer architecture in sequence transduction and image generation, we propose novel self-attention-based architectures to improve the performance of adversarial latent-code-based schemes for text generation. Adversarial latent-code-based text generation has recently gained attention due to its promising results. In this paper, we take a step toward fortifying the architectures used in these setups, specifically AAE and ARAE, and benchmark these two latent-code-based adversarial methods against our proposals. In our experiments, the Google sentence compression dataset is used to compare our method with these baselines under various objective and subjective measures. The experiments demonstrate that the proposed (self-)attention-based models outperform the state of the art in adversarial code-based text generation.
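
The following is a minimal PyTorch sketch of the general idea described above: an ARAE-style latent-code GAN in which the usual recurrent encoder is replaced by a self-attention (Transformer) encoder. All module names, layer sizes, and the single training step shown are illustrative assumptions, not the authors' exact architecture or hyperparameters.

```python
# Illustrative sketch only: self-attentive encoder + latent-code GAN (ARAE-style).
# Sizes, names, and losses are assumptions for demonstration, not the paper's setup.
import torch
import torch.nn as nn

class SelfAttentiveEncoder(nn.Module):
    """Encodes a token sequence into a fixed-size latent code via self-attention."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2, code_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=512,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.to_code = nn.Linear(d_model, code_dim)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))        # (batch, seq_len, d_model)
        return self.to_code(h.mean(dim=1))          # mean-pool -> (batch, code_dim)

class CodeGenerator(nn.Module):
    """Maps Gaussian noise to a synthetic latent code (the GAN generator)."""
    def __init__(self, noise_dim=64, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                                 nn.Linear(256, code_dim))
    def forward(self, z):
        return self.net(z)

class CodeCritic(nn.Module):
    """Scores latent codes; trained to separate encoded (real) from generated codes."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, c):
        return self.net(c).squeeze(-1)

# One illustrative adversarial step on the latent codes (standard GAN loss).
enc, gen, critic = SelfAttentiveEncoder(vocab_size=10000), CodeGenerator(), CodeCritic()
tokens = torch.randint(0, 10000, (8, 20))           # a dummy batch of sentences
real_code = enc(tokens)
fake_code = gen(torch.randn(8, 64))
bce = nn.BCEWithLogitsLoss()
d_loss = bce(critic(real_code), torch.ones(8)) + bce(critic(fake_code.detach()), torch.zeros(8))
g_loss = bce(critic(fake_code), torch.ones(8))
```

In a full AAE/ARAE pipeline a decoder reconstructing the sentence from the latent code would be trained jointly with the encoder; it is omitted here to keep the sketch focused on the self-attentive latent-code component.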

