Adapting Language Models for Non-Parallel Author-Stylized Rewriting

09/22/2019
by   Bakhtiyar Syed, et al.

Given the recent progress in language modeling with Transformer-based neural models and the active interest in generating stylized text, we present an approach that leverages the generalization capabilities of a language model to rewrite an input text in a target author's style. Our approach adapts a pre-trained language model to generate author-stylized text by fine-tuning it on an author-specific corpus with a denoising autoencoder (DAE) loss in a cascaded encoder-decoder framework. Optimizing the DAE loss allows the model to learn the nuances of an author's style without relying on parallel data, a severe limitation of prior work in this space. To evaluate the efficacy of our approach, we propose a linguistically motivated framework that quantifies the stylistic alignment of the generated text to the target author at the lexical, syntactic, and surface levels. The evaluation framework is both interpretable, as it yields several insights about the model, and self-contained, as it does not rely on external classifiers (e.g., sentiment or formality classifiers). Qualitative and quantitative assessment indicates that the proposed approach rewrites the input text with better alignment to the target style, while preserving the original content better than state-of-the-art baselines.
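To make the fine-tuning objective concrete, the sketch below corrupts sentences from an author corpus with word drops and local shuffles, then trains a pre-trained sequence-to-sequence model to reconstruct the clean text. This is a minimal illustration only: the model choice (facebook/bart-base), the noising scheme, and the hyperparameters are assumptions, not the authors' exact configuration, which cascades an encoder and decoder initialized from the pre-trained language model.

```python
# Minimal sketch of DAE fine-tuning on an author corpus.
# Model, noising scheme, and hyperparameters are illustrative
# assumptions, not the paper's exact setup.
import random
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def add_noise(tokens, drop_prob=0.1, shuffle_window=3):
    """Corrupt a token list with random drops and local shuffling,
    in the spirit of a DAE objective (exact scheme is an assumption)."""
    kept = [t for t in tokens if random.random() > drop_prob]
    jittered = [(i + random.uniform(0, shuffle_window), t)
                for i, t in enumerate(kept)]
    return [t for _, t in sorted(jittered)]

# Placeholder data standing in for the target author's corpus.
author_corpus = ["An example sentence written in the target author's style."]

model.train()
for sentence in author_corpus:
    noisy = " ".join(add_noise(sentence.split()))
    inputs = tokenizer(noisy, return_tensors="pt", truncation=True)
    labels = tokenizer(sentence, return_tensors="pt", truncation=True).input_ids
    # Reconstruct the clean sentence from its corrupted version.
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```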
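The evaluation side can be pictured in a similarly self-contained way. The sketch below computes simple style features at the three levels the paper evaluates, without any external classifier; the particular features chosen here (type-token ratio, POS-tag distribution, average sentence length) are illustrative stand-ins, not the paper's exact metric definitions.

```python
# Illustrative style features at the lexical, syntactic, and surface
# levels; the feature set is an assumption, not the paper's metrics.
from collections import Counter
import nltk  # requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def style_profile(text):
    sents = nltk.sent_tokenize(text)
    tokens = [t for s in sents for t in nltk.word_tokenize(s)]
    words = [t.lower() for t in tokens if t.isalpha()]
    pos_counts = Counter(tag for _, tag in nltk.pos_tag(tokens))
    total_pos = sum(pos_counts.values()) or 1
    return {
        # Lexical level: vocabulary richness via type-token ratio.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Syntactic level: relative frequency of each POS tag.
        "pos_dist": {tag: c / total_pos for tag, c in pos_counts.items()},
        # Surface level: average sentence length in tokens.
        "avg_sentence_len": len(tokens) / max(len(sents), 1),
    }
```

Stylistic alignment could then be scored by comparing the profile of the generated text against the profile computed over the target author's corpus.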


Related research

01/28/2021 - DRAG: Director-Generator Language Modelling Framework for Non-Parallel Author Stylized Rewriting
Author stylized rewriting is the task of rewriting an input text in a pa...

10/22/2020 - Incorporating Stylistic Lexical Preferences in Generative Language Models
While recent advances in language modeling have resulted in powerful gen...

08/23/2019 - Neural Poetry: Learning to Generate Poems using Syllables
Motivated by the recent progresses on machine learning-based models that...

08/08/2023 - In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning
In this note, we explore inference-time alignment through in-context lea...

09/11/2019 - Learning Dynamic Author Representations with Temporal Language Models
Language models are at the heart of numerous works, notably in the text ...

09/09/2019 - The Trumpiest Trump? Identifying a Subject's Most Characteristic Tweets
The sequence of documents produced by any given author varies in style a...

10/18/2018 - Unsupervised Neural Text Simplification
The paper presents a first attempt towards unsupervised neural text simp...
