Style Transfer as Unsupervised Machine Translation

08/23/2018
by Zhirui Zhang, et al.
Beihang University
Microsoft
USTC

Language style transfer rephrases text with specific stylistic attributes while preserving the original attribute-independent content. One main challenge in learning a style transfer system is the lack of parallel data in which the source sentence is in one style and the target sentence is in another style. Under this constraint, in this paper, we adapt unsupervised machine translation methods for the task of automatic style transfer. We first take advantage of style-preference information and word embedding similarity to produce pseudo-parallel data with a statistical machine translation (SMT) framework. Then the iterative back-translation approach is employed to jointly train two neural machine translation (NMT) based transfer systems. To control the noise generated during joint training, a style classifier is introduced to guarantee the accuracy of style transfer and penalize bad candidates in the generated pseudo data. Experiments on benchmark datasets show that our proposed method outperforms previous state-of-the-art models in terms of both accuracy of style transfer and quality of input-output correspondence.


Introduction

Language style transfer is an important component of natural language generation (NLG) [Wen et al.2015, Li et al.2016, Sennrich, Haddow, and Birch2016a, Wintner et al.2017], as it enables NLG systems to control not only the topic of the produced utterance but also attributes such as sentiment and gender. As shown in Figure 1, language style transfer aims to convert a sentence with one attribute (e.g., negative sentiment) to another with a different attribute (e.g., positive sentiment), while retaining its attribute-independent content (e.g., the properties of the product being discussed).

Figure 1: Some examples of language style transfer (e.g., from negative sentiment to positive sentiment). The arrow indicates the transformation of different words from the source attribute to the target attribute.

Recently, many methods have made remarkable progress in language style transfer. One line of research [Hu et al.2017, Shen et al.2017, Fu et al.2018] leverages the auto-encoder framework to learn an encoder and a decoder, in which the encoder constructs a latent vector by removing the style information and extracting attribute-independent content from the input sentence, and the decoder generates the output sentence with the desired style. Another line involves a delete-retrieve-generate approach [Li et al.2018, Xu et al.2018], in which attribute-related words are recognized and removed to produce a sentence containing only content information, which is then used as a query to find a similar sentence with the target attribute in the corpus. Based on that, target attribute markers can be extracted and utilized to generate the final output sentence in a generation step.

Language style transfer can be regarded as a special machine translation (MT) task where the source sentence is in one style and the target sentence is in another style (as shown in Figure 1). In this paper, we leverage attention-based neural machine translation (NMT) models [Sutskever, Vinyals, and Le2014, Cho et al.2014, Bahdanau, Cho, and Bengio2014] to change the attribute of the input sentence by translating it from one style to another. Compared with auto-encoder methods, the attention mechanism is better at deciding which content words to preserve and which attribute-related words to transfer. Compared with the delete-retrieve-generate approach, our model is an end-to-end system without error propagation, and target attribute words are generated based on context information rather than a retrieval step.

To train NMT-based systems, a large parallel corpus is required to tune the large number of parameters and learn the correspondence between input and output words. However, for style transfer, sentence pairs with the same content but different attributes are difficult to acquire. Inspired by unsupervised MT approaches [Artetxe et al.2018, Lample, Denoyer, and Ranzato2018, Lample et al.2018], we propose a two-stage joint training method to boost a forward transfer system (source style to target style) and a backward one (target style to source style) using unpaired datasets. In the first stage, we build a word-to-word transfer table based on word-level style-preference information and word embedding similarity learnt from the unpaired datasets. With the inferred transfer tables and pre-trained style specific language models, bidirectional (forward and backward) statistical machine translation (SMT) [Och2003, Chiang2007] transfer systems are built to generate a pseudo-parallel corpus. In the second stage, we initialize bidirectional NMT-based transfer systems with the pseudo corpus from the first stage, which are then boosted with each other in an iterative back-translation framework. During iterative training, a style classifier is introduced to guarantee the accuracy of the style transfer results and penalize bad candidates in the generated pseudo data.

We conduct experiments on three style transfer tasks: altering sentiment of Yelp reviews, altering sentiment of Amazon reviews, and altering image captions between romantic and humorous. Both human and automatic evaluation results show that our proposed method outperforms previous state-of-the-art models in terms of both accuracy of style transfer and quality of input-output correspondence (meaning preservation and fluency). Our contributions can be summarized as follows:

  • Unsupervised MT methods are adapted to style transfer tasks to tackle the lack of a parallel corpus, with a three-step pipeline: building a word transfer table, constructing SMT-based transfer systems, and training NMT-based transfer systems.

  • Our attention-based NMT models can directly model the whole style transfer process, and the attention mechanism is better at deciding which content words to preserve and which attribute-related words to transfer.

  • A style classifier is introduced to control the noise generated during iterative back-translation training, and it is crucial to the success of our methods.

Figure 2: Illustration of the overall training framework of our approach. The framework consists of model initialization and iterative back-translation components, in which $P(s_x \mid w_x)$ and $P(s_y \mid w_y)$ denote the style preference probabilities of words $w_x$ and $w_y$, $sim(w_x, w_y)$ represents word similarity defined in the embedding space, $P(w_y \mid w_x)$ and $P(w_x \mid w_y)$ stand for the transfer probabilities between words, $\theta_{x\to y}$ and $\theta_{y\to x}$ are the source-to-target and target-to-source style transfer models, and $P(s_x \mid \cdot)$ and $P(s_y \mid \cdot)$ denote the probabilities that a sentence belongs to each style; the latter are used to penalize poor pseudo sentence pairs with wrong attributes.

Our Approach

Given two datasets $X$ and $Y$ representing two different styles $s_x$ and $s_y$ respectively (e.g., for sentiment, $s_x$ is negative and $s_y$ is positive), style transfer can be formalized as learning the conditional distribution $P(y \mid x)$, which takes $x \in X$ as input and generates a sentence $y$ that retains the content of $x$ while expressing it in the style $s_y$. To model this conditional distribution, we adopt the attention-based architecture of [Bahdanau, Cho, and Bengio2014]. It is implemented as an encoder-decoder framework with recurrent neural networks (RNN), in which the RNN is usually implemented as a Gated Recurrent Unit (GRU) [Cho et al.2014] or a Long Short-Term Memory (LSTM) network [Hochreiter and Schmidhuber1997]. In our experiments, the GRU is used as our RNN unit.

To learn style transfer using non-parallel text, we design an unsupervised sequence-to-sequence training method as illustrated in Figure 2. In general, our proposed approach can be divided into two stages: model initialization and iterative back-translation. In the first stage, given unaligned sentences in $X$ and $Y$, we first build transfer tables to provide word-to-word transfer information, as well as two style specific language models. With the word-to-word transfer tables and language models, we build two SMT-based transfer systems (a source-to-target model and a target-to-source model), with which we translate the unaligned sentences to construct a pseudo-parallel corpus. In the second stage, we use the pseudo data to pre-train bidirectional NMT-based transfer systems (a source-to-target model $\theta_{x\to y}$ and a target-to-source model $\theta_{y\to x}$). Based on the two initial systems, an iterative back-translation algorithm is employed to sufficiently exploit the unaligned sentences in $X$ and $Y$, with which the bidirectional systems can achieve further improvements.

Model Initialization

Learning style transfer with only non-parallel data is a challenging task, since the associated style expressions cannot be learnt directly. To reduce the complexity of this task, we first learn the transfer knowledge at the word level, and then upgrade it to the sentence level. To achieve this, we first construct a word-level transfer table in an unsupervised way. Many methods [Conneau et al.2017, Artetxe, Labaka, and Agirre2017] have been proposed for a similar task, but they rely on the homogeneity of a cross-lingual word embedding space and have only been applied in the MT field. Since the two style transfer corpora are in one language, cross-lingual word embeddings cannot be used to learn word-level transfer information. To obtain a proper word mapping between different attributes, we propose a new method that leverages word embedding similarity and the style preference of words to construct the word-level transfer table.

The transfer probability between a source word $w_x$ in style $s_x$ and a target word $w_y$ in style $s_y$ can be decomposed into three parts:

$$P(w_y \mid w_x) \propto P(s_x \mid w_x) \cdot sim(w_x, w_y) \cdot P(s_y \mid w_y) \qquad (1)$$

where $P(s_x \mid w_x)$ ($P(s_y \mid w_y)$) denotes the probability that the word $w_x$ ($w_y$) belongs to style $s_x$ ($s_y$), and $sim(w_x, w_y)$ represents the grammatic similarity of $w_x$ and $w_y$. We observe that attribute-related words and their proper expressions in a target attribute typically play the same grammatic role in a sentence. In our implementation, $sim(w_x, w_y)$ is calculated as the normalized cosine similarity of word embeddings [Mikolov et al.2013], and the style preference probabilities are estimated as follows:

$$P(s_x \mid w) = \frac{count_X(w)}{count_X(w) + count_Y(w)}, \qquad P(s_y \mid w) = \frac{count_Y(w)}{count_X(w) + count_Y(w)} \qquad (2)$$

where $count_X(w)$ ($count_Y(w)$) represents the frequency of a word $w$ appearing in the dataset with attribute $s_x$ ($s_y$).

Specifically, as shown in the model initialization part of Figure 2, we learn word embeddings of all the words using the source style corpus $X$ and the target style corpus $Y$, based on which the grammatic similarity model $sim(w_x, w_y)$ can be learnt. Meanwhile, with the style specific corpora $X$ and $Y$, we obtain the style preference models $P(s_x \mid w_x)$ and $P(s_y \mid w_y)$. By combining these three models, we approximate the transfer probability $P(w_y \mid w_x)$, which is used to extract high-confidence word-level style mappings. For instance, "hate" and "love" play similar grammatical roles in a sentence, so their embeddings are close and their cosine similarity is high. Additionally, "hate" is more inclined to appear in negative text, while "love" is more likely to occur in positive text, so the two style preference probabilities are also high, leading to a high transfer probability. The inverse transfer table is generated in the same way.
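To make Equations 1 and 2 concrete, the following is a minimal sketch of building the word-level transfer table from two unpaired corpora. The function name build_transfer_table, the 0.7 style-preference threshold, and the top_k cutoff are illustrative assumptions rather than details from the paper, and the word embeddings are assumed to be pre-trained and supplied as a dict.

```python
import numpy as np
from collections import Counter

def build_transfer_table(corpus_x, corpus_y, embeddings, top_k=5):
    """Sketch of Eq. (1)-(2): combine style preference (word counts in each
    corpus) with embedding cosine similarity to score word-to-word transfers."""
    count_x = Counter(w for sent in corpus_x for w in sent.split())
    count_y = Counter(w for sent in corpus_y for w in sent.split())

    def style_pref(word, counts_own, counts_other):
        # Eq. (2): relative frequency of the word in its own-style corpus.
        total = counts_own[word] + counts_other[word]
        return counts_own[word] / total if total > 0 else 0.0

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Candidate words: strongly preferred by one style (threshold is illustrative).
    source_words = [w for w in count_x if style_pref(w, count_x, count_y) > 0.7]
    target_words = [w for w in count_y if style_pref(w, count_y, count_x) > 0.7]

    table = {}
    for wx in source_words:
        if wx not in embeddings:
            continue
        scores = []
        for wy in target_words:
            if wy not in embeddings:
                continue
            # Eq. (1): P(wy|wx) ∝ P(s_x|wx) · sim(wx, wy) · P(s_y|wy)
            score = (style_pref(wx, count_x, count_y)
                     * cosine(embeddings[wx], embeddings[wy])
                     * style_pref(wy, count_y, count_x))
            scores.append((wy, score))
        table[wx] = sorted(scores, key=lambda t: t[1], reverse=True)[:top_k]
    return table
```

Keeping only the top few scored targets per source word corresponds to extracting the high-confidence word-level style mappings described above.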

To upgrade the transfer knowledge from the word level to the sentence level, we build bidirectional SMT transfer systems with the transfer tables and style specific language models. Our style specific language models are 4-gram language models trained with the modified Kneser-Ney smoothing algorithm over the corresponding corpus. The features of the SMT transfer systems are two word-level translation probabilities, two language model scores, and one word count penalty. For the source-to-target system, all feature weights are set to 1 except the source style language model, whose weight is -1; similarly, for the target-to-source system, the weight of the target style language model is set to -1 and all remaining weights are 1. With these SMT-based systems, we generate translations of the unaligned sentences in $X$ and $Y$ and pair them with their sources to construct pseudo-parallel data.
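As a rough illustration of how the hand-set feature weights above combine, the snippet below scores one candidate transfer as a weighted sum of the five features. The feature names and numeric values are placeholders (the actual systems are built with Moses); only the sign pattern of the weights follows the description above.

```python
import math

def score_candidate(features, weights):
    """Linear SMT model: weighted sum of the feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Source-to-target weights as described above: every feature gets weight 1,
# except the source-style language model, which is penalized with weight -1.
weights_src2tgt = {
    "lex_fwd": 1.0,           # word-level transfer probability P(wy|wx)
    "lex_bwd": 1.0,           # inverse transfer probability P(wx|wy)
    "lm_target_style": 1.0,   # 4-gram LM trained on the target-style corpus
    "lm_source_style": -1.0,  # 4-gram LM trained on the source-style corpus
    "word_penalty": 1.0,
}

# Hypothetical feature values (log probabilities / counts) for one candidate.
candidate_features = {
    "lex_fwd": math.log(0.4),
    "lex_bwd": math.log(0.3),
    "lm_target_style": -12.5,
    "lm_source_style": -25.0,
    "word_penalty": -8.0,
}

print(score_candidate(candidate_features, weights_src2tgt))
```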

Iterative Back-Translation

With the pseudo data generated in the first stage, we pre-train bidirectional NMT-based style transfer systems ($\theta_{x\to y}$ and $\theta_{y\to x}$). In this subsection, we start with our unsupervised training objective, based on which an iterative back-translation method is designed to further improve the initial NMT-based transfer models.

Given two unaligned datasets $X$ and $Y$ labeled with attributes $s_x$ and $s_y$ respectively, the common unsupervised training objective is to maximize the likelihood of the observed data:

$$L(\theta_{x\to y}, \theta_{y\to x}) = \sum_{x \in X} \log P(x) + \sum_{y \in Y} \log P(y) \qquad (3)$$

where $P(x)$ and $P(y)$ denote the language probabilities of sentences $x$ and $y$, and $\theta_{x\to y}$ and $\theta_{y\to x}$ are the model parameters of the source-to-target and target-to-source transfer models respectively. Following the derivation of [Zhang et al.2018], we can obtain a lower bound of the training objective in Equation 3:

$$L^* = \sum_{x \in X} \mathbb{E}_{y' \sim P(y' \mid x;\, \theta_{x\to y})}\big[\log P(x \mid y';\, \theta_{y\to x})\big] + \sum_{y \in Y} \mathbb{E}_{x' \sim P(x' \mid y;\, \theta_{y\to x})}\big[\log P(y \mid x';\, \theta_{x\to y})\big] \qquad (4)$$

This new training objective actually turns the unsupervised problem into a supervised one by generating pseudo sentence pairs via a back-translation method [Sennrich, Haddow, and Birch2016b]: the first term states that pseudo sentence pairs generated by the source-to-target model $\theta_{x\to y}$ are used to update the target-to-source model $\theta_{y\to x}$, and the second term uses the target-to-source model to generate pseudo data for training the source-to-target model $\theta_{x\to y}$. In this way, the two style transfer models ($\theta_{x\to y}$ and $\theta_{y\to x}$) can boost each other in an iterative process, as illustrated in the iterative back-translation part of Figure 2.

In practice, it is intractable to compute Equation 4, since we would need to sum over all candidates in an exponential search space to evaluate the expectations. This problem is usually alleviated by sampling [Shen et al.2016, Kim and Rush2016]. Following previous methods, the top-k translation candidates generated by the beam search strategy are used for the approximation.
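A minimal sketch of this approximation is given below, assuming the transfer model exposes a beam-search routine that returns candidates with log-probabilities (the function names are placeholders, not the paper's code): the expectation over all outputs is replaced by a weighted sum over the top-k beam candidates with renormalized probabilities.

```python
import math

def topk_expectation(candidates, value_fn):
    """Approximate E_{y' ~ P(y'|x)}[ value_fn(y') ] with top-k beam candidates.
    `candidates` is a list of (sentence, log_prob) pairs, e.g. as returned by a
    hypothetical beam_search(model, x, k) call."""
    # Renormalize the candidate probabilities so they sum to one.
    max_lp = max(lp for _, lp in candidates)
    probs = [math.exp(lp - max_lp) for _, lp in candidates]
    z = sum(probs)
    return sum(p / z * value_fn(sent)
               for (sent, _), p in zip(candidates, probs))
```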

In addition, with only the weak supervision of the pseudo corpus, the learnt style transfer models are far from perfect, especially at the beginning of the iteration. The generated pseudo data may contain errors; sometimes the style of the generated output is wrong, and such errors can be amplified during iterative training. To tackle this issue, we introduce an external style classifier that provides a reward to penalize poor pseudo sentence pairs. Specifically, the samples generated by $\theta_{x\to y}$ or $\theta_{y\to x}$ are expected to receive high scores from the style classifier. The objective of this reward-based training is to maximize the expected probability assigned by the pre-trained style classifier:

$$L_c = \sum_{x \in X} \mathbb{E}_{y' \sim P(y' \mid x;\, \theta_{x\to y})}\big[\log P(s_y \mid y')\big] + \sum_{y \in Y} \mathbb{E}_{x' \sim P(x' \mid y;\, \theta_{y\to x})}\big[\log P(s_x \mid x')\big] \qquad (5)$$

where $P(s_y \mid y')$ ($P(s_x \mid x')$) denotes the probability of style $s_y$ ($s_x$) given the generated sentence $y'$ ($x'$). This probability is assigned by a pre-trained style classifier and is subject to $P(s_x \mid \cdot) + P(s_y \mid \cdot) = 1$. For the style classifier, the input sentence is encoded into a vector by a bidirectional GRU with an average pooling layer over the hidden states, and a sigmoid output layer is used to predict the classification probability. The style classifier is trained by maximum likelihood estimation (MLE) using the two datasets $X$ and $Y$.
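For concreteness, a minimal PyTorch sketch of a classifier with this shape (bidirectional GRU, average pooling over hidden states, sigmoid output) is shown below; the embedding and hidden dimensions are illustrative assumptions, not hyper-parameters taken from the paper.

```python
import torch
import torch.nn as nn

class StyleClassifier(nn.Module):
    """Bidirectional GRU encoder + average pooling + sigmoid output,
    predicting P(s_y | sentence); P(s_x | sentence) = 1 - P(s_y | sentence)."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        hidden_states, _ = self.encoder(self.embedding(token_ids))
        pooled = hidden_states.mean(dim=1)                   # average pooling over time
        return torch.sigmoid(self.out(pooled)).squeeze(-1)   # P(s_y | sentence)
```

Training such a classifier with a binary cross-entropy loss on the two labeled corpora corresponds to the MLE training described above.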

Combining Equations 4 and 5, we get the final unsupervised training objective:

$$L = L^* + L_c \qquad (6)$$

The partial derivatives of $L$ with respect to $\theta_{x\to y}$ and $\theta_{y\to x}$ can be written as follows:

$$\frac{\partial L}{\partial \theta_{x\to y}} = \sum_{y \in Y} \mathbb{E}_{x' \sim P(x' \mid y;\, \theta_{y\to x})}\Big[\frac{\partial \log P(y \mid x';\, \theta_{x\to y})}{\partial \theta_{x\to y}}\Big] + \sum_{x \in X} \mathbb{E}_{y' \sim P(y' \mid x;\, \theta_{x\to y})}\Big[\log P(s_y \mid y') \cdot \frac{\partial \log P(y' \mid x;\, \theta_{x\to y})}{\partial \theta_{x\to y}}\Big] \qquad (7)$$

$$\frac{\partial L}{\partial \theta_{y\to x}} = \sum_{x \in X} \mathbb{E}_{y' \sim P(y' \mid x;\, \theta_{x\to y})}\Big[\frac{\partial \log P(x \mid y';\, \theta_{y\to x})}{\partial \theta_{y\to x}}\Big] + \sum_{y \in Y} \mathbb{E}_{x' \sim P(x' \mid y;\, \theta_{y\to x})}\Big[\log P(s_x \mid x') \cdot \frac{\partial \log P(x' \mid y;\, \theta_{y\to x})}{\partial \theta_{y\to x}}\Big] \qquad (8)$$

where $\partial \log P(y \mid x';\, \theta_{x\to y}) / \partial \theta_{x\to y}$ and $\partial \log P(x \mid y';\, \theta_{y\to x}) / \partial \theta_{y\to x}$ are the gradients specified by a standard sequence-to-sequence network. Note that when maximizing the objective function $L$, we do not back-propagate through the reverse model that generates the data, following [Zhang et al.2018] and [Lample et al.2018]. The whole iterative back-translation training is summarized in Algorithm 1.

Input: Unpaired datasets $X$ and $Y$ with different attributes $s_x$ and $s_y$, initial NMT-based models $\theta_{x\to y}$ and $\theta_{y\to x}$, style classifier $P(s \mid \cdot)$;
      Output: Bidirectional NMT-based style transfer models $\theta_{x\to y}$ and $\theta_{y\to x}$;

1:procedure Training Process
2:     while epoch < Max_Epochs do
3:         Use model $\theta_{x\to y}$ to translate dataset $X$, yielding pseudo-parallel data $\{(x, y')\}$;
4:         Use model $\theta_{y\to x}$ to translate dataset $Y$, yielding pseudo-parallel data $\{(x', y)\}$;
5:         Update model $\theta_{x\to y}$ with Equation 7 using pseudo-parallel data $\{(x', y)\}$ and $\{(x, y')\}$;
6:         Update model $\theta_{y\to x}$ with Equation 8 using pseudo-parallel data $\{(x, y')\}$ and $\{(x', y)\}$;
7:     end while
8:end procedure
Algorithm 1 Iterative Back-Translation Training
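The sketch below mirrors Algorithm 1 at a high level. Here, translate, mle_update, and reward_update are hypothetical helpers standing in for beam-search decoding, the standard cross-entropy update on pseudo pairs, and the classifier-reward update of Equations 7 and 8; they are assumptions for illustration, not APIs from the paper.

```python
def iterative_back_translation(X, Y, theta_xy, theta_yx, classifier,
                               translate, mle_update, reward_update,
                               max_epochs=3):
    """High-level sketch of Algorithm 1 (names and signatures are assumptions).

    theta_xy : source-to-target transfer model
    theta_yx : target-to-source transfer model
    classifier(sentence) -> probability that the sentence carries style s_y
    """
    for epoch in range(max_epochs):
        # Back-translate each corpus with the current models.
        pseudo_from_x = [(x, translate(theta_xy, x)) for x in X]   # pairs (x, y')
        pseudo_from_y = [(translate(theta_yx, y), y) for y in Y]   # pairs (x', y)

        # Supervised-style updates on pseudo pairs (Eq. 4 terms).
        mle_update(theta_xy, [(x_prime, y) for x_prime, y in pseudo_from_y])
        mle_update(theta_yx, [(y_prime, x) for x, y_prime in pseudo_from_x])

        # Classifier reward penalizes outputs with the wrong style (Eq. 5 terms).
        reward_update(theta_xy,
                      [(x, y_prime, classifier(y_prime))
                       for x, y_prime in pseudo_from_x])
        reward_update(theta_yx,
                      [(y, x_prime, 1.0 - classifier(x_prime))
                       for x_prime, y in pseudo_from_y])
    return theta_xy, theta_yx
```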

Experiments

Setup

To examine the effectiveness of our proposed approach, we conduct experiments on three datasets, including altering sentiments of Yelp reviews, altering sentiments of Amazon reviews, and altering image captions between romantic and humorous. Following previous work [Fu et al.2018, Li et al.2018], we measure the accuracy of style transfer and the quality of content preservation with automatic and manual evaluations.

Datasets

To compare our work with state-of-the-art approaches, we follow the experimental setups and datasets of Li et al. [Li et al.2018] (https://github.com/lijuncen/Sentiment-and-Style-Transfer):

  • Yelp: This dataset consists of Yelp reviews. We consider reviews with a rating above three as positive samples and those below three as negative ones.

  • Amazon: This dataset consists of product reviews from Amazon [He and McAuley2016]. Similar to Yelp, we label reviews with a rating above three as positive and those below three as negative.

  • Captions: This dataset consists of image captions [Gan et al.2017]. Each example is labeled as either romantic or humorous.

The statistics of the Yelp, Amazon and Captions datasets are shown in Tables 1 and 2. Additionally, Li et al. [Li et al.2018] hired crowd-workers on Amazon Mechanical Turk to write gold outputs for the test sets of the Yelp and Amazon datasets, in which workers are required to edit a sentence to change its sentiment while preserving its content. (The Captions dataset is actually an aligned corpus that contains captions for the same image in different styles, so no gold outputs need to be written for its test set; in our experiments, we do not use these alignments.) With human reference outputs, an automatic evaluation metric such as BLEU [Papineni et al.2002] can be used to evaluate how well meaning is preserved.

Dataset   Attribute  Train  Dev   Test
Yelp      Negative   180K   2000  500
Yelp      Positive   270K   2000  500
Amazon    Negative   278K   1015  500
Amazon    Positive   277K   985   500
Captions  Humorous   6000   300   300
Captions  Romantic   6000   300   300
Table 1: Sentence count in different datasets.
Dataset Yelp Amazon Captions
Vocabulary 10K 20K 8K
Table 2: Vocabulary size of different datasets.
                 Yelp               Amazon             Captions
                 Classifier  BLEU   Classifier  BLEU   Classifier  BLEU
CrossAligned     73.2%       9.06   71.4%       1.90   79.1%       1.82
MultiDecoder     47.0%       14.54  66.4%       9.07   66.8%       6.64
StyleEmbedding   7.6%        21.06  40.3%       15.05  54.3%       8.80
TemplateBased    80.3%       22.62  66.4%       33.57  87.8%       19.18
Del-Retr-Gen     89.8%       16.00  50.4%       29.27  95.8%       11.98
Our Approach     96.6%       22.79  84.1%       33.90  99.5%       12.69
Table 3: Automatic evaluation results on the Yelp, Amazon and Captions datasets. "Classifier" shows the accuracy of sentences labeled by the pre-trained style classifier. "BLEU(%)" measures content similarity between the output and the human reference.
                 Yelp                  Amazon                Captions
                 Att  Con  Gra  Suc    Att  Con  Gra  Suc    Att  Con  Gra  Suc
CrossAligned     3.1  2.7  3.2  10%    2.4  1.8  3.4  6%     3.0  2.2  3.7  14%
MultiDecoder     2.4  3.1  3.2  8%     2.4  2.3  3.2  7%     2.8  3.0  3.4  16%
StyleEmbedding   1.9  3.5  3.3  7%     2.2  2.9  3.4  10%    2.7  3.2  3.3  16%
TemplateBased    2.9  3.6  3.1  17%    2.1  3.5  3.2  14%    3.3  3.8  3.3  23%
Del-Retr-Gen     3.2  3.3  3.4  23%    2.7  3.7  3.8  22%    3.5  3.4  3.8  32%
Our Approach     3.5  3.7  3.6  33%    3.3  3.7  3.9  30%    3.6  3.8  3.7  37%
Table 4: Human evaluation results on the Yelp, Amazon and Captions datasets. We show average human ratings for style transfer accuracy (Att), preservation of meaning (Con), and fluency of sentences (Gra) on a 1 to 5 Likert scale. "Suc" denotes the overall success rate. We consider a generated output "successful" if it is rated 4 or 5 on all three criteria (Att, Con, Gra).

Baselines

We compare our approach with five state-of-the-art baselines: CrossAligned [Shen et al.2017], MultiDecoder [Fu et al.2018], StyleEmbedding [Fu et al.2018], TemplateBased [Li et al.2018] and Del-Retr-Gen (Delete-Retrieve-Generate) [Li et al.2018]. The former three methods are based on auto-encoder neural networks and leverage an adversarial framework to help the systems separate style and content information. TemplateBased is a retrieval-based method that first identifies attribute-related words and then replaces them with target attribute expressions, which are extracted from a sentence with similar content retrieved from the target style corpus. Del-Retr-Gen is a mixed model combining the TemplateBased method with an RNN-based generator, in which the generator produces the final output sentence based on the content and the extracted target attributes.

Training Details

For the SMT model in our approach, we use Moses (https://github.com/moses-smt/mosesdecoder) with a translation table initialized as described in the Model Initialization section. The language model is a default smoothed n-gram language model, and the reordering model is disabled. The hyper-parameters of the different SMT features are assigned as described in the Model Initialization section.

RNNSearch [Bahdanau, Cho, and Bengio2014] is adopted as the NMT model in our approach; it uses a single-layer GRU for the encoder and decoder networks, enhanced with a feed-forward attention network. The dimension of the word embeddings (for both source and target words) and of the hidden layer is set to 300. All parameters are initialized using a zero-mean normal distribution whose variance is determined by the number of rows and columns of the parameter matrix, following [Glorot and Bengio2010]. Each model is optimized using the Adadelta [Zeiler2012] algorithm with a mini-batch size of 32. All gradients are re-normalized if the norm exceeds 2. For the training iterations in Algorithm 1, the best 4 samples generated by the beam search strategy are used for training, and we run 3 epochs for the Yelp and Amazon datasets and 30 epochs for the Captions dataset. At test time, beam search is employed to find the best candidate with a beam size of 12.
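A brief PyTorch sketch of the optimization settings just described (Adadelta, mini-batches of 32, gradient norms clipped at 2) is given below; the model and data loader are placeholders, not the paper's code.

```python
import torch
from torch import nn

def training_step(model: nn.Module, optimizer, batch_loss: torch.Tensor):
    """One update with the reported settings: Adadelta + gradient-norm clipping at 2."""
    optimizer.zero_grad()
    batch_loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    optimizer.step()

# Hypothetical setup mirroring the reported hyper-parameters:
# model = Seq2SeqTransferModel(...)                 # single-layer GRU encoder-decoder
# optimizer = torch.optim.Adadelta(model.parameters())
# for batch in loader:                              # mini-batch size 32
#     training_step(model, optimizer, compute_loss(model, batch))
```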

Automatic Evaluation

In the automatic evaluation, following previous work [Shen et al.2017, Li et al.2018], we measure the accuracy of style transfer for the generated sentences using a pre-trained style classifier (a separate style classifier is trained for our iterative back-translation training process), and adopt a case-insensitive BLEU metric to evaluate the preservation of content. The BLEU score is computed using the Moses multi-bleu.perl script. For each dataset, we train the style classifier on the same training data.
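As an illustration of this evaluation protocol, a small sketch follows: classifier accuracy is the fraction of outputs labeled with the target style, and case-insensitive BLEU is obtained by piping outputs through the Moses multi-bleu.perl script with its -lc flag. The script path and the classify helper are placeholders.

```python
import subprocess

def transfer_accuracy(outputs, target_style, classify):
    """Fraction of generated sentences the pre-trained classifier assigns to the
    target style; `classify(sentence)` is a hypothetical helper returning a label."""
    hits = sum(1 for sent in outputs if classify(sent) == target_style)
    return hits / len(outputs)

def bleu_with_moses(output_file, reference_file,
                    script="mosesdecoder/scripts/generic/multi-bleu.perl"):
    """Case-insensitive BLEU via the Moses multi-bleu.perl script (-lc flag)."""
    with open(output_file) as hyp:
        result = subprocess.run(["perl", script, "-lc", reference_file],
                                stdin=hyp, capture_output=True, text=True)
    return result.stdout.strip()
```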

Table 3 shows the automatic evaluation results of the different models on the Yelp, Amazon and Captions datasets. We can see that CrossAligned obtains high style transfer accuracy but sacrifices content consistency. In addition, MultiDecoder and StyleEmbedding help preserve content but reduce the accuracy of style transfer. Compared with these methods, TemplateBased and Del-Retr-Gen achieve a better balance between transfer accuracy and content preservation. Our approach achieves significant improvements over CrossAligned, MultiDecoder, StyleEmbedding and Del-Retr-Gen in both transfer accuracy and quality of content preservation.

Compared with TemplateBased, our method achieves much higher style transfer accuracy but a lower BLEU score on the Captions dataset. The reason is that there are many more ways to express the romantic and humorous styles than there are to change sentiment, and a BLEU score based on a single human reference cannot precisely measure content consistency. In addition, as argued by Li et al. [Li et al.2018], the BLEU metric, which does not assess fluency, favors systems like TemplateBased that only replace a few words in the sentence; however, grammatical mistakes are easily introduced when inappropriate replacement words are chosen. To reflect grammatical mistakes in the generated sentences, we conduct a human evaluation with fluency as one of the criteria.

Models                            Yelp               Amazon             Captions
                                  Classifier  BLEU   Classifier  BLEU   Classifier  BLEU
SMT-based                         94.6%       8.82   81.2%       7.46   82.8%       5.04
NMT-based (Iteration 0)           70.5%       15.81  68.4%       16.36  66.5%       8.72
NMT-based                         96.6%       22.79  84.1%       33.90  99.5%       12.69
NMT-based (w/o Style Classifier)  80.4%       24.48  75.6%       35.34  79.3%       13.93
Table 5: Automatic evaluation results of each component of our approach on the Yelp, Amazon and Captions datasets.

Human Evaluation

While automatic evaluation provides an indication of style transfer quality, it cannot evaluate the quality of the transferred text accurately. To further verify the effectiveness of our approach, we perform a human evaluation on the test set. For each dataset, we randomly select 200 samples for human evaluation (100 for each attribute). Each sample contains the transformed sentences generated by the different systems given the same source sentence. The samples are then distributed to annotators in a double-blind manner. (We distribute each sample to 5 native speakers and use Fleiss' kappa to judge agreement among them; the Fleiss' kappa score is 0.791 for the Yelp dataset, 0.763 for the Amazon dataset and 0.721 for the Captions dataset.) Annotators are asked to rate each output for three criteria on a Likert scale from 1 to 5: style transfer accuracy (Att), preservation of content (Con) and fluency of sentences (Gra). Finally, a generated sentence is treated as "successful" when it is scored 4 or 5 on all three criteria.

Table 4 shows the human evaluation results. It can be clearly observed that our proposed method achieves the best performance among all systems, with success rates 10, 8 and 5 percentage points higher than Del-Retr-Gen on Yelp, Amazon and Captions respectively, which demonstrates the effectiveness of our proposed method. Compared with Del-Retr-Gen, our method also receives higher or comparable ratings on the three criteria.

Comparing the two sets of results, we find a positive correlation between human and automatic evaluation in terms of both accuracy of style transfer and preservation of content, which indicates the usefulness of automatic evaluation metrics in model development. However, since current automatic evaluation cannot assess the quality of transferred text accurately, human evaluation remains necessary, and more accurate automatic evaluation metrics are still needed.

Analysis

We further investigate the contribution of each component of our method during the training process. Table 5 shows automatic evaluation results of the SMT-based and NMT-based transfer systems in our approach on the Yelp, Amazon and Captions datasets. We find that, with the help of the word-level translation table and style specific language models, the SMT-based transfer model attains high style transfer accuracy but fails to preserve content. Given the pseudo data generated by the SMT-based model, the NMT-based model (Iteration 0) can better integrate the translation and language models, producing sentences with better content consistency. Using our iterative back-translation algorithm, the pre-trained NMT-based model is then significantly improved in terms of both accuracy of style transfer and preservation of content. This result shows that the iterative back-translation algorithm can effectively leverage unaligned sentences. Besides, the style classifier plays a key role in guaranteeing the success of style transfer: without it, the transfer accuracy of the NMT-based model drops markedly due to imperfect pseudo data. We also show some system outputs in Table 6.

Related Work

Language style transfer without a parallel text corpus has attracted more and more attention due to recent advances in text generation tasks. Many approaches have been proposed to build style transfer systems and achieve promising performance [Hu et al.2017, Shen et al.2017, Fu et al.2018, Li et al.2018, Prabhumoye et al.2018]. Hu et al. [Hu et al.2017] leverage an attribute classifier to guide the generator to produce sentences with a desired attribute (e.g., sentiment, tense) in the Variational Auto-encoder (VAE) framework. Shen et al. [Shen et al.2017] first supply a theoretical analysis of language style transfer using non-parallel text; they propose a cross-aligned auto-encoder with a discriminator architecture, in which an adversarial discriminator is used to align different styles.

Instead of only considering the style transfer accuracy as in previous work, Fu et al. [Fu et al.2018] introduce content preservation as another evaluation metric and design two models, which encode the sentence into a latent content representation and leverage an adversarial network to separate style and content information. Similarly, Prabhumoye et al. [Prabhumoye et al.2018] use external NMT models to rephrase the sentence and weaken the effect of style attributes, based on which multiple decoders can better preserve sentence meaning when transferring style. More recently, Li et al. [Li et al.2018] design a delete-retrieve-generate system that combines a retrieval system with a neural text generation model: they first identify attribute-related words and remove them from the input sentence; the modified input sentence is then used to retrieve a sentence with similar content from the target style corpus, from which the corresponding target style expressions are extracted to produce the final output with an RNN-based generator.

Different from previous methods, we treat language style transfer as a special MT task where the source language is in one style and the target language is in another style. Based on this, we adopt an attention-based sequence-to-sequence model to transform the style of a sentence. Further, following the key framework of unsupervised MT methods [Artetxe et al.2018, Lample, Denoyer, and Ranzato2018, Lample et al.2018] to deal with the lack of a parallel corpus, a two-stage joint training method is proposed to leverage unpaired datasets with attribute information.

However, there are two major differences between our proposed approach and existing unsupervised NMT methods: 1) building the word-to-word translation system in unsupervised NMT relies on the homogeneity of the cross-lingual word embedding space, which is not applicable to style transfer, whose input and output are in the same language; to deal with that, we propose a new method that takes advantage of style-preference information and word embedding similarity to build the word-to-word transfer system; 2) we leverage the style classifier to filter out badly generated pseudo sentences, and its score is used as a reward to stabilize model training. Table 5 shows that introducing a style classifier better guarantees the transferred style. In summary, an unsupervised NMT method cannot be directly applied to style transfer tasks, and we modify these two important components to make it work.

Conclusion

In this paper, we have presented a two-stage joint training method to boost source-to-target and target-to-source style transfer systems using non-parallel text. In the first stage, we build bidirectional word-to-word style transfer systems in an SMT framework to generate pseudo sentence pairs, based on which two initial NMT-based style transfer models are constructed. An iterative back-translation algorithm is then employed to better leverage the non-parallel text and jointly improve the bidirectional NMT-based style transfer systems. Empirical evaluations conducted on the Yelp, Amazon and Captions datasets demonstrate that our approach outperforms previous state-of-the-art models in terms of both accuracy of style transfer and quality of input-output correspondence.

In the future, we plan to further investigate the use of our method on other style transfer tasks. In addition, we are interested in designing more accurate and comprehensive automatic evaluation metrics for this task.

References

  • [Artetxe et al.2018] Artetxe, M.; Labaka, G.; Agirre, E.; and Cho, K. 2018. Unsupervised neural machine translation. In ICLR.
  • [Artetxe, Labaka, and Agirre2017] Artetxe, M.; Labaka, G.; and Agirre, E. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In ACL.
  • [Bahdanau, Cho, and Bengio2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.
  • [Chiang2007] Chiang, D. 2007. Hierarchical phrase-based translation. Computational Linguistics.
  • [Cho et al.2014] Cho, K.; van Merrienboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.
  • [Conneau et al.2017] Conneau, A.; Lample, G.; Ranzato, M.; Denoyer, L.; and Jégou, H. 2017. Word translation without parallel data. CoRR abs/1710.04087.
  • [Fu et al.2018] Fu, Z.; Tan, X.; Peng, N.; Zhao, D.; and Yan, R. 2018. Style transfer in text: Exploration and evaluation. In AAAI.
  • [Gan et al.2017] Gan, C.; Gan, Z.; He, X.; Gao, J.; and Deng, L. 2017. StyleNet: Generating attractive visual captions with styles. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 955–964.
  • [Glorot and Bengio2010] Glorot, X., and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In AISTATS.
  • [He and McAuley2016] He, R., and McAuley, J. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW.
  • [Hochreiter and Schmidhuber1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.
  • [Hu et al.2017] Hu, Z.; Yang, Z.; Liang, X.; Salakhutdinov, R.; and Xing, E. P. 2017. Toward controlled generation of text. In ICML.
  • [Kim and Rush2016] Kim, Y., and Rush, A. M. 2016. Sequence-level knowledge distillation. In EMNLP.
  • [Lample et al.2018] Lample, G.; Ott, M.; Conneau, A.; Denoyer, L.; and Ranzato, M. 2018. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755.
  • [Lample, Denoyer, and Ranzato2018] Lample, G.; Denoyer, L.; and Ranzato, M. 2018. Unsupervised machine translation using monolingual corpora only. In ICLR.
  • [Li et al.2016] Li, J.; Galley, M.; Brockett, C.; Spithourakis, G. P.; Gao, J.; and Dolan, W. B. 2016. A persona-based neural conversation model. In ACL.
  • [Li et al.2018] Li, J.; Jia, R.; He, H.; and Liang, P. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In NAACL.
  • [Mikolov et al.2013] Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In NIPS.
  • [Och2003] Och, F. J. 2003. Minimum error rate training in statistical machine translation. In ACL.
  • [Papineni et al.2002] Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.
  • [Prabhumoye et al.2018] Prabhumoye, S.; Tsvetkov, Y.; Salakhutdinov, R.; and Black, A. W. 2018. Style transfer through back-translation. In ACL.
  • [Sennrich, Haddow, and Birch2016a] Sennrich, R.; Haddow, B.; and Birch, A. 2016a. Controlling politeness in neural machine translation via side constraints. In HLT-NAACL.
  • [Sennrich, Haddow, and Birch2016b] Sennrich, R.; Haddow, B.; and Birch, A. 2016b. Improving neural machine translation models with monolingual data. In ACL.
  • [Shen et al.2016] Shen, S.; Cheng, Y.; He, Z.; He, W.; Wu, H.; Sun, M.; and Liu, Y. 2016. Minimum risk training for neural machine translation. In ACL.
  • [Shen et al.2017] Shen, T.; Lei, T.; Barzilay, R.; and Jaakkola, T. S. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS.
  • [Sutskever, Vinyals, and Le2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In NIPS.
  • [Wen et al.2015] Wen, T.-H.; Gasic, M.; Mrksic, N.; Su, P.-H.; Vandyke, D.; and Young, S. J. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
  • [Wintner et al.2017] Wintner, S.; Mirkin, S.; Specia, L.; Rabinovich, E.; and Patel, R. N. 2017. Personalized machine translation: Preserving original author traits. In EACL.
  • [Xu et al.2018] Xu, J.; Sun, X.; Zeng, Q.; Ren, X.; Zhang, X.; Wang, H.; and Li, W. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In ACL.
  • [Zeiler2012] Zeiler, M. D. 2012. Adadelta: An adaptive learning rate method. CoRR abs/1212.5701.
  • [Zhang et al.2018] Zhang, Z.; Liu, S.; Li, M.; Zhou, M.; and Chen, E. 2018. Joint training for neural machine translation models with monolingual data. In AAAI.