Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer

05/14/2021
by   Huiyuan Lai, et al.

The scarcity of parallel data makes it difficult for formality style transfer models to preserve content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. By augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state of the art.
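
The abstract outlines the general recipe: fine-tune a pre-trained sequence-to-sequence model (BART) on parallel informal/formal pairs, then add rewards for style strength and content preservation. The sketch below illustrates that idea using Hugging Face transformers; the reward functions are hypothetical placeholders (a real setup would use a formality classifier and a content-similarity metric), and the REINFORCE-style term is a simplified surrogate without a baseline, not the authors' exact training objective.

```python
# Minimal sketch of reward-augmented fine-tuning for formality transfer.
# Assumes Hugging Face transformers + torch; the reward functions are
# placeholder stand-ins, not the paper's implementation.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)


def style_reward(texts):
    # Placeholder: in practice, a formality classifier's probability
    # that each generated sentence is formal.
    return torch.ones(len(texts))


def content_reward(sources, hypotheses):
    # Placeholder: in practice, a similarity score (e.g. BLEU or
    # embedding cosine) between source and generated text.
    return torch.ones(len(sources))


def training_step(informal, formal, reward_weight=1.0):
    model.train()
    enc = tokenizer(informal, return_tensors="pt", padding=True, truncation=True)
    labels = tokenizer(formal, return_tensors="pt", padding=True, truncation=True).input_ids

    # Supervised term: standard cross-entropy on the parallel pair.
    ce_loss = model(input_ids=enc.input_ids,
                    attention_mask=enc.attention_mask,
                    labels=labels).loss

    # Reward term: sample an output and scale its negative log-likelihood
    # by the combined style/content reward (crude REINFORCE surrogate).
    sampled = model.generate(enc.input_ids, do_sample=True, max_length=64)
    hyps = tokenizer.batch_decode(sampled, skip_special_tokens=True)
    reward = style_reward(hyps) + content_reward(informal, hyps)
    rl_loss = reward.mean() * model(input_ids=enc.input_ids,
                                    attention_mask=enc.attention_mask,
                                    labels=sampled).loss

    loss = ce_loss + reward_weight * rl_loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()


print(training_step(["hey, whats up?"], ["Hello, how are you?"]))
```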

Related research

05/25/2022 · Low Resource Style Transfer via Domain Adaptive Meta Learning
Text style transfer (TST) without parallel data has achieved some practi...

08/09/2022 · HyperNST: Hyper-Networks for Neural Style Transfer
We present HyperNST; a neural style transfer (NST) technique for the art...

01/18/2016 · Content Aware Neural Style Transfer
This paper presents a content-aware style transfer algorithm for paintin...

09/09/2021 · Generic resources are what you need: Style transfer tasks without task-specific parallel training data
Style transfer aims to rewrite a source text in a different target style...

06/01/2021 · Improving Formality Style Transfer with Context-Aware Rule Injection
Models pre-trained on large-scale regular text corpora often do not work...

03/24/2021 · StyleKQC: A Style-Variant Paraphrase Corpus for Korean Questions and Commands
Paraphrasing is often performed with less concern for controlled style c...

08/16/2019 · How Sequence-to-Sequence Models Perceive Language Styles?
Style is ubiquitous in our daily language uses, while what is language s...
