
Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation

by Ning Dai, et al.
Fudan University

Disentangling content and style in the latent space is a prevalent approach to unpaired text style transfer. However, most current neural models suffer from two major issues: 1) it is difficult to completely strip the style information from the semantics of a sentence; 2) the recurrent neural network (RNN) based encoder and decoder, mediated by a fixed-size latent representation, cannot handle long-term dependencies well, resulting in poor preservation of non-stylistic semantic content. In this paper, we propose the Style Transformer, which makes no assumption about the latent representation of the source sentence and leverages the attention mechanism of the Transformer to achieve better style transfer and better content preservation.
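To make the core idea concrete, here is a minimal PyTorch sketch of a style-conditioned Transformer: a learned style embedding is prepended to the source sequence as an extra token, and the decoder attends to every source position directly, so no fixed-size disentangled latent vector is needed. All class names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
import torch
import torch.nn as nn

class StyleTransformerSketch(nn.Module):
    """Illustrative sketch: Transformer encoder-decoder conditioned on a
    style embedding, with no disentangled latent bottleneck."""

    def __init__(self, vocab_size, num_styles, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.style_emb = nn.Embedding(num_styles, d_model)  # one vector per target style
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=4 * d_model, batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids, style_ids):
        # Prepend the style embedding as an extra "token"; cross-attention
        # lets the decoder see all source tokens, not a single latent vector.
        src = self.tok_emb(src_ids)                     # (B, S, d)
        style = self.style_emb(style_ids).unsqueeze(1)  # (B, 1, d)
        src = torch.cat([style, src], dim=1)            # (B, S+1, d)
        tgt = self.tok_emb(tgt_ids)
        causal = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(h)                              # (B, T, vocab)

model = StyleTransformerSketch(vocab_size=100, num_styles=2)
src = torch.randint(0, 100, (3, 7))   # batch of 3 source sentences
tgt = torch.randint(0, 100, (3, 5))   # shifted target tokens
logits = model(src, tgt, style_ids=torch.tensor([0, 1, 0]))
print(tuple(logits.shape))  # (3, 5, 100)
```

At transfer time one would simply feed the same sentence with a different `style_ids` value; the actual paper trains this setup adversarially with a discriminator, which is omitted here for brevity.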




Enhancing Content Preservation in Text Style Transfer Using Reverse Attention and Conditional Layer Normalization

Text style transfer aims to alter the style (e.g., sentiment) of a sente...

Disentangled Representation Learning for Text Style Transfer

This paper tackles the problem of disentangling the latent variables of ...

Dance Style Transfer with Cross-modal Transformer

We present CycleDance, a dance style transfer system to transform an exi...

GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained Text Style Transfer

Non-parallel text style transfer has attracted increasing research inter...

Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation

In this paper, we focus on a new practical task, document-scale text con...

QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity

The mechanism of existing style transfer algorithms is by minimizing a h...

Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style Transfer

Large-scale text-to-video diffusion models have demonstrated an exceptio...