
Multi-dimensional Style Transfer for Partially Annotated Data using Language Models as Discriminators

by Navita Goyal, et al.

Style transfer has been widely explored in natural language generation with non-parallel corpora, by directly or indirectly extracting a notion of style from source- and target-domain corpora. A common aspect of existing approaches is the prerequisite of joint annotations across all the stylistic dimensions under consideration. The availability of such datasets across a combination of styles is a limiting factor in extending state-of-the-art style transfer setups to multiple style dimensions. While cascading single-dimensional models across multiple styles is a possibility, it suffers from content loss, especially when the style dimensions are not completely independent of each other. In our work, we relax this requirement of jointly annotated data across the multiple styles under inspection and make use of independently acquired data across different style dimensions without any additional annotations. We initialize an encoder-decoder setup with large transformer-based language models pre-trained on a generic corpus and enhance its rewriting capability to multiple styles by employing multiple language models as discriminators. Through quantitative and qualitative evaluation, we show the ability of our model to control styles across multiple style dimensions while preserving the content of the input text, and compare it against baselines that cascade state-of-the-art uni-dimensional style transfer models.
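The core idea of using language models as per-dimension style discriminators can be illustrated in miniature. The following is only a toy sketch, not the paper's method: it replaces the transformer-based discriminators with add-one-smoothed bigram language models, and the corpora, function names, and scores are all hypothetical stand-ins. The point it demonstrates is that each style dimension contributes an independently trained LM, and a candidate rewrite is scored by how likely it is under every style's LM at once (a generator would be trained to maximize this alongside a content-preservation objective).

```python
import math
from collections import Counter

def train_bigram_lm(sentences):
    """Train an add-one-smoothed bigram LM; returns a length-normalized
    log-likelihood scorer that acts as a 'style discriminator'."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.lower().split() + ["</s>"]
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks[:-1], toks[1:]))
    vocab = len(unigrams) + 1  # +1 for unseen tokens

    def log_prob(sentence):
        toks = ["<s>"] + sentence.lower().split() + ["</s>"]
        lp = sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
            for a, b in zip(toks[:-1], toks[1:])
        )
        return lp / (len(toks) - 1)  # normalize by number of transitions

    return log_prob

# Two *independently* collected single-style corpora (toy stand-ins for
# the paper's separate style-dimension datasets; no joint annotation).
formality_corpus = [
    "we would be delighted to assist you",
    "please do not hesitate to contact us",
]
sentiment_corpus = [
    "this is a wonderful and delightful result",
    "we are delighted with the outcome",
]

formality_lm = train_bigram_lm(formality_corpus)
sentiment_lm = train_bigram_lm(sentiment_corpus)

def multi_style_score(candidate):
    """Combined discriminator signal: one term per style dimension."""
    return formality_lm(candidate) + sentiment_lm(candidate)

on_style = "we are delighted to assist you"
off_style = "qqq zzz xxx www"
print(multi_style_score(on_style) > multi_style_score(off_style))  # True
```

In the paper's actual setup the discriminators are large pre-trained transformer language models fine-tuned per style dimension, but the scoring structure is analogous: because each LM is trained on its own corpus, no sentence ever needs annotations for more than one style.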

