Multi-dimensional Style Transfer for Partially Annotated Data using Language Models as Discriminators

10/22/2020
by Navita Goyal, et al.

Style transfer has been widely explored in natural language generation with non-parallel corpora by directly or indirectly extracting a notion of style from source- and target-domain text. A common aspect of existing approaches is the prerequisite of joint annotations across all the stylistic dimensions under consideration. The availability of such datasets across a combination of styles is a limiting factor in extending state-of-the-art style transfer setups to multiple style dimensions. While cascading single-dimensional models across multiple styles is a possibility, it suffers from content loss, especially when the style dimensions are not completely independent of each other. In our work, we attempt to relax the requirement of jointly annotated data across the multiple styles being inspected and instead make use of independently acquired data for each style dimension, without any additional annotations. We initialize an encoder-decoder setup with large transformer-based language models pre-trained on a generic corpus and enhance its re-writing capability across multiple styles by employing multiple language models as discriminators. Through quantitative and qualitative evaluation, we show the ability of our model to control for styles across multiple style dimensions while preserving the content of the input text, and we compare it against baselines that cascade state-of-the-art uni-dimensional style transfer models.
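The core idea above — one shared encoder-decoder rewriter guided by one language-model discriminator per style dimension — can be sketched as a weighted training objective. This is a minimal, hypothetical illustration: the function and parameter names (`rec_loss`, `disc_losses`, `style_weights`) and the linear combination are assumptions for exposition, not the paper's exact formulation.

```python
def multi_style_loss(rec_loss, disc_losses, style_weights):
    """Combine a content-reconstruction loss with one discriminator
    loss per style dimension. Each discriminator is a language model
    trained on independently collected, singly-annotated data for its
    own style dimension, so no jointly annotated corpus is required.

    Hypothetical sketch: the paper's actual objective may differ.
    """
    assert len(disc_losses) == len(style_weights)
    style_term = sum(w * d for w, d in zip(style_weights, disc_losses))
    return rec_loss + style_term

# Example with two style dimensions (e.g. sentiment and formality):
total = multi_style_loss(
    rec_loss=1.2,
    disc_losses=[0.5, 0.8],   # one LM-discriminator loss per dimension
    style_weights=[0.3, 0.7], # relative importance of each dimension
)
```

Because each discriminator sees only its own dimension's data, adding a new style dimension amounts to adding one more weighted term, rather than collecting a corpus annotated jointly across all dimensions.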

