Rigid Formats Controlled Text Generation

04/17/2020 ∙ by Piji Li, et al. ∙ Tencent

Neural text generation has made tremendous progress in various tasks. One common characteristic of most of these tasks is that the texts are not restricted to some rigid formats when generating. However, we may confront some special text paradigms such as Lyrics (assuming the music score is given), Sonnet, and SongCi (classical Chinese poetry of the Song dynasty). The typical characteristics of these texts are three-fold: (1) They must comply fully with rigid predefined formats. (2) They must obey some rhyming schemes. (3) Although they are restricted to some formats, sentence integrity must be guaranteed. To the best of our knowledge, text generation based on predefined rigid formats has not been well investigated. Therefore, we propose a simple and elegant framework named SongNet to tackle this problem. The backbone of the framework is a Transformer-based auto-regressive language model. Sets of symbols are tailor-designed to improve the modeling performance, especially on format, rhyme, and sentence integrity. We improve the attention mechanism to impel the model to capture some future information on the format. A pre-training and fine-tuning framework is designed to further improve the generation quality. Extensive experiments conducted on two collected corpora demonstrate that our proposed framework generates significantly better results in terms of both automatic metrics and human evaluation.


Code Repositories

SongNet

Code for ACL 2020 paper "Rigid Formats Controlled Text Generation":https://arxiv.org/abs/2004.08022



1 Introduction

Recent years have seen tremendous progress in the area of natural language generation, especially benefiting from neural network models such as Recurrent Neural Network (RNN) or Convolutional Neural Network (CNN) based sequence-to-sequence (seq2seq) frameworks Bahdanau et al. (2014); Gehring et al. (2017), Transformer and its variants Vaswani et al. (2017); Dai et al. (2019), and pre-trained auto-regressive language models such as XLNet Yang et al. (2019) and GPT2 Radford et al. (2019). Performance has been improved significantly in lots of tasks such as machine translation Bahdanau et al. (2014); Vaswani et al. (2017), dialogue systems Vinyals and Le (2015); Shang et al. (2015); Li (2020), text summarization Rush et al. (2015); Li et al. (2017); See et al. (2017), story telling Fan et al. (2018); See et al. (2019), and poetry writing Zhang and Lapata (2014); Lau et al. (2018); Liao et al. (2019).

Figure 1: Examples of text with rigid formats. In lyrics, the syllables of the lyric words must align with the tones of the notation. In SongCi and Sonnet, there are strict rhyming schemes and the rhyming words are labeled in red color and italic font.

Generally, most of the above-mentioned tasks can be regarded as free text generation, meaning that there are no constraints on format and structure, such as the number of words or rhyming rules. Note that dialogue generation and story telling are almost open-ended generation tasks, as long as the generated content is relevant to the conditional input text. Although there are format constraints on poetry text, previously proposed models just treat the formats as a kind of latent information and let the model capture this feature implicitly during training Liao et al. (2019). A model trained on a five-character quatrain corpus cannot generate seven-character verses. Moreover, it is impossible to trigger these models to generate satisfying results for arbitrarily newly-defined formats.

In practice we may confront some special text paradigms such as Lyrics (assuming the music score is given), Sonnet (say Shakespeare's Sonnets Shakespeare (2000)), and SongCi (a kind of Ci; Ci is a type of lyric poetry in the tradition of Classical Chinese poetry (http://en.wikipedia.org/wiki/Ci_(poetry)), and SongCi is the Ci created during the Song dynasty); some examples are illustrated in Figure 1. The typical characteristics of these texts can be categorized into three folds: (1) The assembling of the text must comply fully with the predefined rigid format. Assuming that the music score is composed, the lyricist must fill in the lyric content strictly in accordance with the schemes lying in the notation. Take the part of the song "Edelweiss" shown in the first row of Figure 1 as an example: the syllables of the lyric words must align with the tones of the notation. The second row of Figure 1 depicts the content of a SongCi created based on the CiPai of "Bu Suan Zi". Given the CiPai, the number of characters and the syntactical structure of the content are also defined (e.g., the number of characters of each clause: 5, 5. 7, 5. 5, 5. 7, 5.). (2) The arrangement of the content must obey the defined rhyming scheme. For example, all the final words (words in red color and italic font) of the SongCi content in Figure 1 are rhyming (the spelling of each word is: "zhu", "yu", "du", and "gu"). The example in the third row of Figure 1 comes from the first four sentences of Shakespeare's "Sonnet 116" Shakespeare (2000). The rhyming scheme of Shakespeare's Sonnets is usually "ABAB CDCD EFEF GG" (http://en.wikipedia.org/wiki/Shakespeare%27s_sonnets). In the example, the rhyming words in scheme "ABAB" are "minds", "love", "finds", and "remove". (3) Even though the format is rigid, sentence integrity must always be guaranteed. An incomplete sentence such as "love is not the" is inappropriate.

To the best of our knowledge, text generation under predefined rigid format constraints has not been well investigated yet. In this work, we propose a simple and elegant framework named SongNet to address this challenging problem. The backbone of the framework is a Transformer-based auto-regressive language model. Considering the three-fold characteristics mentioned above, we introduce sets of tailor-designed indicating symbols to improve the modeling performance, especially the robustness on format, rhyme, and sentence integrity. We improve the attention mechanism to impel the model to capture future format information, which further enhances sentence integrity. Inspired by BERT Devlin et al. (2019) and GPT Radford et al. (2018, 2019), a pre-training and fine-tuning framework is designed to further improve the generation quality. To verify the performance of our framework, we collect two corpora, SongCi and Sonnet, in Chinese and English respectively. Extensive experiments on the collected datasets demonstrate that our proposed framework can generate satisfying results in terms of both the tailor-designed automatic metrics (format accuracy, rhyming accuracy, and sentence integrity) and the human evaluation results on relevance, fluency, and style.

Figure 2: The framework of our proposed model.

In summary, our contributions are as follows:


  • We propose to tackle a new challenging task: rigid formats controlled text generation. A pre-training and fine-tuning framework named SongNet is designed to address the problem.

  • Sets of symbols are tailor-designed to improve the modeling performance. We improve the attention mechanism to impel the model to capture the future information to further enhance the sentence integrity.

  • To verify the performance of our framework SongNet, we collect two corpora, SongCi and Sonnet, in Chinese and English respectively. We design several automatic evaluation metrics and human evaluation metrics to conduct the performance evaluation.

  • Extensive experiments conducted on two collected corpora demonstrate that our proposed framework generates significantly better results given arbitrary formats, including the cold-start formats or even the formats newly defined by ourselves.

2 Task Definition

The task of rigid formats controlled text generation is defined as follows:

Input: a rigid format C:

    C = {c_0, c_1, c_2, c_3, ",", c_4, c_5, c_6, c_7, c_8, c_9, "."}    (1)

where C ∈ 𝒞 and 𝒞 is the set of all possible formats. Note that we can define arbitrary new formats not restricted to the ones pre-defined in the corpus, thus the size of 𝒞 is unbounded. Each format token c_i denotes a place-holder symbol which needs to be translated into a real word token. Format C contains 10 words plus two extra punctuation characters "," and "."

Output: a natural language sequence Y which tallies with the defined format C:

    Y = {love, is, not, love, ",", bends, with, the, remover, to, remove, "."}

where the example sentences are extracted from Shakespeare's Sonnets Shakespeare (2000). From the result we can observe that the count of words is 10, which is consistent with the format C. The punctuation characters "," and "." are also correct. Thus, we claim that it is a format-accurate result. Also, since the two clauses are complete, we can get a good sentence integrity score. If C is defined on the literary genres of SongCi or Sonnet, which have rhyming constraints, the rhyming performance should be evaluated as well. Recall that C can be arbitrary and flexible; thus we can rebuild a new format C' based on the generated result Y by masking partial content, and then we may obtain better results by re-generating based on C'. We name this operation polishing.
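To make the notion of "tallying with a format" concrete, here is a minimal illustrative checker (our own sketch, not the paper's code) that treats "_" as a place-holder token, as in the later Table 6, and requires punctuation in the format to match verbatim:

```python
def tallies_with(format_tokens, output_tokens):
    """Return True if the output tallies with the rigid format:
    each place-holder "_" must be realized as a real word, and every
    punctuation token fixed in the format must be reproduced verbatim."""
    if len(format_tokens) != len(output_tokens):
        return False
    for f, o in zip(format_tokens, output_tokens):
        if f == "_":
            if o in {",", "."}:   # a place-holder must become a real word
                return False
        elif f != o:              # fixed punctuation must match exactly
            return False
    return True

# Format C of the running example: 4 words, ",", 6 words, "."
C = ["_"] * 4 + [","] + ["_"] * 6 + ["."]
Y = "love is not love , bends with the remover to remove .".split()
```

Under this view, `tallies_with(C, Y)` holds for the example above, while dropping a token or moving a punctuation mark breaks the format.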

Finally, the target of this problem is to find a mapping function f to conduct the rigid formats controlled text generation:

    Y = f(C)    (2)

3 Framework Description

3.1 Overview

As shown in Figure 2, the backbone of our framework is a Transformer-based auto-regressive language model. The input can be the whole token sequences of samples from SongCi or Sonnet. We tailor-design several sets of indicating symbols to enhance the performance in terms of accuracy on format, rhyme, and sentence integrity. Specifically, symbols C are introduced for format and rhyming modeling; intra-position symbols P are designed to represent the local positions of the tokens within each sentence, aiming to improve the rhyming performance and the sentence integrity; and segment symbols S are employed to identify the sentence border to further improve the sentence quality. The attention mechanism is improved to impel the model to capture future format information such as the sentence ending markers. Similar to BERT Devlin et al. (2019) and GPT Radford et al. (2018, 2019), a pre-training and fine-tuning paradigm is utilized to boost the performance of the original models.

3.2 Details

We use two sentences (as shown in Figure 1), "love is not love, …, bends with the remover to remove", extracted from Shakespeare's Sonnets Shakespeare (2000), as the running example to describe the details of our framework SongNet. Since our basic model is a Transformer-based auto-regressive language model, during training the input is "⟨bos⟩ love is not love , ⟨/s⟩ …, bends with the remover to remove . ⟨/s⟩", and the corresponding output is the left-shifted version of the input (tokenized; we ignore "…" for convenience and clarity):

    Y = {love, is, not, love, ",", ⟨/s⟩, bends, with, the, remover, to, remove, ".", ⟨/s⟩, ⟨eos⟩}

where ⟨/s⟩ denotes the clause or sentence separator, and ⟨eos⟩ is the ending marker of the whole sequence. The target of our framework is to conduct formats controlled text generation. Therefore, the indicating symbols for format, rhyme, and sentence integrity are designed based on the target output sequence.

Format and Rhyme Symbols:

    C = {c_0, c_0, c_0, c_2, c_1, c_0, c_0, c_0, c_0, c_0, c_2, c_1}    (3)

where we use c_0 to represent the general tokens, c_1 depicts the punctuation characters, and c_2 represents the rhyming tokens "love" and "remove". The separator ⟨/s⟩ and the ending marker ⟨eos⟩ are kept as they are.

Intra-Position Symbols:

    P = {p_4, p_3, p_2, p_1, p_0, p_6, p_5, p_4, p_3, p_2, p_1, p_0}    (4)

where p_i denotes the local position of a token within the same clause or sentence. Note that we align the position symbol indices in a descending order. The aim is to improve sentence integrity by impelling the symbols to capture the dynamic information of the sentence, precisely, the sense of how close the sequence is to its end. For example, p_0 usually denotes a punctuation character, thus p_1 should correspond to the ending word of a sentence.

Segment Symbols:

    S = {s_0, s_0, s_0, s_0, s_0, s_1, s_1, s_1, s_1, s_1, s_1, s_1}    (5)

where s_i is the symbol index for sentence i. The purpose is to enhance the interactions between tokens in different sentences and positions by defining the sentence index features.
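To make the three symbol sets concrete, the following sketch derives them from a tokenized target sequence, assuming (as the example suggests) that the rhyming position is the clause-final word before each punctuation mark; the function name and the string ids are our own illustrative choices:

```python
def build_symbols(tokens, puncts=(",", ".")):
    """Derive format/rhyme (c), intra-position (p), and segment (s) symbol
    ids per token.  c0 = general token, c1 = punctuation, c2 = word in a
    rhyming (clause-final) position.  Intra-positions count DOWN to 0 within
    each clause, so p0 always marks the punctuation and p1 the clause-final
    word.  (Illustrative reconstruction, not the authors' code.)"""
    clauses, cur = [], []
    for t in tokens:                      # split into clauses at punctuation
        cur.append(t)
        if t in puncts:
            clauses.append(cur)
            cur = []
    if cur:
        clauses.append(cur)

    c_sym, p_sym, s_sym = [], [], []
    for i, clause in enumerate(clauses):
        n = len(clause)
        for j, t in enumerate(clause):
            if t in puncts:
                c_sym.append("c1")
            elif j == n - 2 and clause[-1] in puncts:
                c_sym.append("c2")        # rhyming position
            else:
                c_sym.append("c0")
            p_sym.append(f"p{n - 1 - j}") # descending intra-position
            s_sym.append(f"s{i}")         # segment (clause) index
    return c_sym, p_sym, s_sym
```

Applied to the running example "love is not love , bends with the remover to remove .", this reproduces the descending intra-positions and per-clause segment ids shown above.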

During training, all the symbols as well as the input tokens are fed into the Transformer-based language model. In contrast to Transformer Vaswani et al. (2017), BERT Devlin et al. (2019), and GPT2 Radford et al. (2019), we modify the traditional attention strategies slightly to fit our problem.

Specifically, for the input, we first obtain the representations by summing all the embeddings of the input tokens and symbols, as shown in the red solid box of Figure 2:

    h_t^0 = E_{w_t} + E_{c_t} + E_{p_t} + E_{s_t} + E_{g_t}    (6)

where the superscript 0 is the layer index and t is the state index. E_{w_t} is the embedding vector for the input token w_t, the real token at position t. c_t, p_t, and s_t are the three pre-defined symbols at position t. g_t is the global position index, the same as the position symbols used in Transformer Vaswani et al. (2017).

Moreover, the state at time t needs to know some future information to grasp the global sequence dynamics; for example, the model may want to know whether it should close the decoding progress by generating the last word and a punctuation character to end the sentence. To represent this global dynamic information, we introduce another variable F obtained by summing only the embeddings of the pre-defined symbols, as shown in the blue dashed box of Figure 2:

    f_t = E_{c_t} + E_{p_t} + E_{s_t}    (7)
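A toy NumPy sketch of Equations (6) and (7): the token state sums the word embedding with the symbol and position embeddings, while the global state F sums only the pre-defined symbol embeddings, so it carries no real-token information. The tiny dimension and random tables are illustrative; whether F also includes the global position embedding is not fully specified here, so we omit it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size (the paper uses 768)

def table(keys):
    return {k: rng.normal(size=d) for k in keys}

# illustrative embedding tables for words, symbols, and global positions
E_w = table(["love", "is", ","])
E_c = table(["c0", "c1"])
E_p = table(["p0", "p1", "p2"])
E_s = table(["s0"])
E_g = table([0, 1, 2])

words = ["love", "is", ","]
c_sym = ["c0", "c0", "c1"]
p_sym = ["p2", "p1", "p0"]   # descending intra-positions
s_sym = ["s0", "s0", "s0"]

# Eq. (6): sum ALL embeddings, including the real input token
H0 = np.stack([E_w[w] + E_c[c] + E_p[p] + E_s[s] + E_g[t]
               for t, (w, c, p, s) in enumerate(zip(words, c_sym, p_sym, s_sym))])

# Eq. (7): sum only the pre-defined symbol embeddings (no real tokens)
F = np.stack([E_c[c] + E_p[p] + E_s[s]
              for c, p, s in zip(c_sym, p_sym, s_sym)])
```

By construction, H0 minus F leaves exactly the word-plus-position information, which is what the global attention block is meant to exclude.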

After processing the input, two blocks of attention mechanisms are introduced to conduct the feature learning procedure. The first block is a masking multi-head self-attention component, and the second block is named global multi-head attention.

Masking Multi-Head Self-Attention:

    c_t^l = Ln(Slf-Att(h_t^{l-1}, H_{<=t}^{l-1}, H_{<=t}^{l-1}) + h_t^{l-1})    (8)

where Slf-Att(·), Ln(·), and Ffn(·) represent the self-attention mechanism (taking query, key, and value inputs), layer normalization, and feed-forward network respectively. Note that we only use the states whose indices are <= t as the attention context.

After obtaining c_t^l from Equation (8), we feed it into the second attention block to capture the global dynamic information from F.

Global Multi-Head Attention:

    h_t^l = Ln(Ffn(Glb-Att(c_t^l, F, F) + c_t^l))    (9)

We can observe that all the context information from F is considered. This is why we name it "global attention", and why the input real token information is NOT considered in F. This finishes the calculation of one unified model layer. We iteratively apply these two attention blocks over all the model layers until we obtain the final representations H^L. Note that H^l is renewed layer by layer, whereas the global variable F is fixed.
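A simplified single-head sketch of the layer's two blocks (learned projections, multi-head splitting, and the exact LayerNorm/FFN placement omitted) shows the key property: looking at the full F ahead of time does not leak future words, because F contains symbols only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    """Scaled dot-product attention; masked positions get -1e9 scores."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V

def songnet_layer(H, F):
    """One simplified SongNet layer: a masked self-attention block over the
    token states H (indices <= t only), then a global attention block that
    attends over the FULL symbol states F.  Since F carries no real-token
    information, attending to all of F reveals future format, not future
    words.  (Illustrative sketch, not the authors' implementation.)"""
    n = H.shape[0]
    causal = np.tril(np.ones((n, n), dtype=bool))  # positions <= t
    C = attention(H, H, H, mask=causal) + H        # masked block, cf. Eq. (8)
    H_next = attention(C, F, F) + C                # global block, cf. Eq. (9)
    return H_next
```

A quick check: perturbing a later token state must leave earlier output states unchanged, because the masked block blocks that path and F is fixed.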

Finally, the training objective is to minimize the negative log-likelihood over the whole sequence:

    L = - sum_{t=1}^{n} log P(w_t | w_{<t}; C, P, S)    (10)

3.3 Pre-training and Fine-tuning

Although our framework can be trained purely on the training dataset of the target corpus, usually the scale of the corpus is limited. For example, there are only about 150 samples in the corpus of Shakespeare’s Sonnets Shakespeare (2000). Therefore, we also design a pre-training and fine-tuning framework to further improve the generation quality.

Recall that in the task definition in Section 2, we claim that our model owns the ability of refining and polishing. To achieve this goal, we adjust the masking strategy used in BERT Devlin et al. (2019) to our framework according to our definitions. Specifically, we randomly (say with probability 20%) select parts of the original content and keep them unchanged when building the format symbols C. For example, we may get a new symbol set for the example sentences:

    C' = {love, c_0, c_0, c_2, c_1, bends, c_0, c_0, c_0, c_0, remove, c_1}

where "love", "bends", and "remove" are kept in the format C'.

After the pre-training stage, we can conduct the fine-tuning procedure directly on the target corpus without adjusting any model structure.

3.4 Generation

We can assign any format and rhyming symbols C to control the generation. Given C, we obtain P and S automatically, and the model conducts generation iteratively, starting from the special token ⟨bos⟩ until meeting the ending marker ⟨eos⟩. Both the beam-search algorithm Koehn (2004) and the truncated top-k sampling method Fan et al. (2018); Radford et al. (2019) are utilized to conduct the decoding.
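The truncated top-k sampling step can be sketched as follows (an illustrative implementation of the general technique; beam search is omitted):

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Truncated top-k sampling: keep the k highest-scoring tokens,
    renormalize their probabilities, and sample one of them."""
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]               # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))
```

With k = 1 this degenerates to greedy decoding; larger k trades determinism for diversity, which is exactly the trade-off tuned in Section 5.3.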

 

Model                        PPL (Val)  PPL (Test)  Ma-D-1  Mi-D-1  Ma-D-2  Mi-D-2
S2S                          19.61      20.43       75.35   2.48    98.35   36.23
GPT2                         148.11     104.99      -       -       -       -
GPT2 w/ Fine-tuning          18.25      17.00       73.87   2.57    96.07   33.92
SongNet (only Pre-training)  24.41      16.23       74.84   4.59    95.09   54.98
SongNet (only Fine-tuning)   12.75      14.73       75.96   2.69    97.59   37.26
SongNet                      11.56      12.64       75.04   2.66    97.29   36.78

Model                        Format Ma-F1  Format Mi-F1  Rhyme Ma-F1  Rhyme Mi-F1  Integrity
S2S                          44.32         38.16         53.80        52.27        8.30±2.06
GPT2 w/ Fine-tuning          35.70         35.20         53.48        52.50        45.92±20.12
SongNet (only Pre-training)  29.12         29.46         53.77        53.13        30.98±14.06
SongNet (only Fine-tuning)   99.81         99.83         79.23        78.63        2.14±0.10
SongNet                      99.88         99.89         73.21        72.59        1.77±0.16

Table 1: Automatic evaluation results on SongCi.

 

Model                        PPL (Val)  PPL (Test)  Ma-D-1  Mi-D-1  Ma-D-2  Mi-D-2
GPT2 w/ Fine-tuning          31.47      31.03       73.87   2.57    96.07   33.92
SongNet (only Pre-training)  28.56      28.07       49.92   25.14   85.35   65.70
SongNet (only Fine-tuning)   34.62      34.53       42.31   4.96    90.76   47.26
SongNet                      27.46      27.63       43.01   10.43   80.06   56.14

Model                        Format Ma-F1  Format Mi-F1  Rhyme Ma-F1  Rhyme Mi-F1  Integrity
GPT2 w/ Fine-tuning          2.03          1.91          5.20         6.24         15.77±3.63
SongNet (only Pre-training)  99.99         99.99         3.93         4.01         15.28±2.04
SongNet (only Fine-tuning)   99.25         99.99         7.50         7.41         18.86±2.59
SongNet                      98.73         98.73         11.46        11.41        11.86±3.01

Table 2: Automatic evaluation results on Sonnet.

4 Experimental Setup

4.1 Settings

The parameters of our model are fixed in both the pre-training stage and the fine-tuning stage. The number of layers and the hidden size (768) are kept the same in both stages, and we employ 12 heads in both the masking multi-head self-attention block and the global attention block. The Adam Kingma and Ba (2014) optimization method with the Noam learning-rate decay strategy and 10,000 warmup steps is employed to conduct the pre-training.

4.2 Datasets

 

Corpus  #Train  #Dev  #Test  #Vocab
SongCi  19,244  847   962    5,310
Sonnet  100     27    27     2,801

Table 3: Statistics of the SongCi and Sonnet datasets.

We conduct all the experiments on two collected corpora with different literary genres, SongCi and Sonnet, in Chinese and English respectively. The statistics are shown in Table 3. We can see that Sonnet is small since we only utilize the samples from Shakespeare's Sonnets Shakespeare (2000). Since SongCi and Sonnet are in different languages, we conduct the pre-training procedure on two large-scale corpora in the corresponding languages. For Chinese, we collect Chinese Wikipedia (1700M characters) and a merged Chinese news corpus (9200M characters) from the Internet. We did not conduct word segmentation on the Chinese datasets, which means that we just use the characters to build the vocabulary, whose size is 27,681. For English, same as BERT, we employ English Wikipedia (2400M words) and BooksCorpus (980M words) Zhu et al. (2015) to conduct the pre-training. We did not use the BPE operation Sennrich et al. (2015) on this corpus considering the format controlling purpose. We keep the most frequent 50,000 words to build the vocabulary.

 

Model                 PPL (Val)  PPL (Test)  Ma-D-1  Mi-D-1  Ma-D-2  Mi-D-2
SongNet               12.75      14.73       75.96   2.69    97.59   37.26
SongNet-GRU           16.52      20.49       74.73   1.77    98.30   28.98
SongNet w/o C         13.51      15.38       75.42   2.48    97.36   34.85
SongNet w/o P         14.16      17.16       73.73   2.56    97.52   34.82
SongNet w/ inverse-P  13.40      15.13       74.95   2.54    97.76   35.65
SongNet w/o S         13.23      15.44       75.38   2.74    97.31   37.50

Model                 Format Ma-F1  Format Mi-F1  Rhyme Ma-F1  Rhyme Mi-F1  Integrity
SongNet               99.81         99.83         79.23        78.63        2.14±0.10
SongNet-GRU           98.99         98.99         52.13        50.93        3.28±1.67
SongNet w/o C         84.73         85.39         78.59        78.24        1.77±0.53
SongNet w/o P         99.61         99.59         67.85        67.29        3.33±0.18
SongNet w/ inverse-P  99.68         99.69         65.89        65.43        2.24±0.21
SongNet w/o S         99.84         99.86         80.43        80.13        1.99±0.10

Table 4: Ablation analysis on SongCi.

4.3 Evaluation Metrics

Besides PPL and Distinct Li et al. (2016), we also tailor-design several metrics for our task to evaluate format, rhyme, and sentence integrity.

Format Assume that there are M sentences defined in the format C, and the generated result Y contains N sentences. Without loss of generality, we align C and Y from the beginning, and judge the format quality according to the following rules: (1) the length difference between each pair of aligned sentences must be within a small tolerance; (2) the punctuation characters must be the same. For SongCi, we use a strict length tolerance and rule (2) must be conformed to. For Sonnet, we relax the length condition and ignore rule (2). Assume that the number of format-correct sentences is K; then Precision = K/N, Recall = K/M, and F1 follows. We report both the Macro-F1 and Micro-F1 in the results tables.
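The rules above can be sketched as a small scoring helper; the exact length tolerances used per corpus are not fully specified here, so `tol` and `check_punct` are hypothetical parameters:

```python
def format_scores(fmt_sents, gen_sents, tol=0, check_punct=True):
    """Format metric sketch: align format sentences with generated sentences
    from the beginning.  A generated sentence counts as format-correct if its
    length is within `tol` of the format's and (optionally) its trailing
    punctuation matches.  Returns (precision, recall, f1).  Illustrative
    reconstruction, not the authors' evaluation script."""
    M, N = len(fmt_sents), len(gen_sents)
    K = 0
    for f, g in zip(fmt_sents, gen_sents):
        ok = abs(len(f) - len(g)) <= tol
        if check_punct:
            ok = ok and f[-1] == g[-1]
        K += ok
    p = K / N if N else 0.0
    r = K / M if M else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For example, a result matching both clause lengths and punctuation of a two-clause format scores (1.0, 1.0, 1.0), while getting one clause wrong halves all three numbers.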

Rhyme For SongCi, there is usually only one group of rhyming words in one sample. As in the example shown in Figure 1, the pronunciations of the red rhyming words are "zhu", "yü", "du", and "gu" respectively, and the rhyming phoneme is "u". For the generated samples, we first use the tool pinyin (http://github.com/mozillazg/python-pinyin) to get the pronunciations (PinYin) of the words in the rhyming positions, and then conduct the evaluation. For the Shakespeare's Sonnets corpus, the rhyming rule is a clear "ABAB CDCD EFEF GG", and there are 7 groups of rhyming tokens. For the generated samples, we employ the CMU Pronouncing Dictionary (http://www.speech.cs.cmu.edu/cgi-bin/cmudict) Speech@CMU (1998) to obtain the phonemes of the words in the rhyming positions. For example, the phonemes for the words "asleep" and "steep" are ['AH0', 'S', 'L', 'IY1', 'P'] and ['S', 'T', 'IY1', 'P'] respectively. We then conduct the evaluation by counting the overlapping units of the phonemes, group by group. We report the Macro-F1 and Micro-F1 numbers in the results tables as well.
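One common way to compare such phoneme groups is to match the suffix from the last primary-stressed vowel onward; this is a heuristic of ours rather than the paper's exact counting rule, and the tiny phoneme dictionary below is illustrative only (the real evaluation uses CMUdict):

```python
def rhyme_match(phones_a, phones_b):
    """Two words rhyme if their phoneme suffixes from the last
    primary-stressed vowel (marked '1' in CMUdict notation) coincide.
    A heuristic sketch of the group-wise phoneme-overlap evaluation."""
    def tail(ph):
        for i in range(len(ph) - 1, -1, -1):
            if ph[i].endswith("1"):       # primary-stressed vowel
                return tuple(ph[i:])
        return tuple(ph)
    return tail(phones_a) == tail(phones_b)

# toy CMUdict-style entries for the words used in the examples
TOY_DICT = {
    "asleep": ["AH0", "S", "L", "IY1", "P"],
    "steep":  ["S", "T", "IY1", "P"],
    "love":   ["L", "AH1", "V"],
    "minds":  ["M", "AY1", "N", "D", "Z"],
}
```

Under this heuristic, "asleep"/"steep" share the tail (IY1, P) and rhyme, while "love"/"minds" do not.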

Integrity Since the format in our task is strict and rigid, the number of words to be predicted is also pre-defined. Our model must organize the language within the limited positions, thus sentence integrity may become a serious issue. For example, the integrity of "love is not love ." is much better than that of "love is not the .". To evaluate sentence integrity, we design a straightforward method: we calculate the prediction probability of the punctuation character before each ⟨/s⟩ given the prefix tokens:

    Integrity(Y) = (1/M) * sum_{j=1}^{M} -log P(w_punc^j | w_{<punc}^j)    (11)

where Y is the generated sequence containing M sentences. A smaller integrity value indicates higher sentence quality. To compute this metric, we pre-train two GPT2 Radford et al. (2019) models on the large-scale Chinese and English corpora respectively, and then utilize these GPT2 models to conduct the evaluation of sentence integrity.

Human Evaluations For SongCi, we sampled 50 samples covering 25 CiPais. For Sonnet, all 27 samples in the test set are selected for human evaluation. We recruit three helpers to score Relevance, Fluency, and Style with the following criteria. Relevance: +2, all the sentences are relevant to the same topic; +1, partial sentences are relevant; 0, not relevant at all. Fluency: +2, fluent; +1, readable but with some grammar mistakes; 0, unreadable. Style: +2, matches the SongCi or Sonnet genre; +1, partially matches; 0, mismatches.

4.4 Comparison Methods

S2S Sequence-to-sequence framework with attention mechanism Bahdanau et al. (2014). We regard the format and rhyme symbols as the input sequence, and the target as the output sequence.

GPT2 We fine-tune the GPT2 models (the pre-trained versions are the same ones used for the sentence integrity evaluation) on SongCi and Sonnet respectively.

SongNet Our proposed framework with both the pre-training and fine-tuning stages.

We also conduct ablation analysis to verify the performance of the defined symbols as well as the variants of model structures.


  • SongNet (only pre-training) Without the fine-tuning stage.

  • SongNet (only fine-tuning) Without the pre-training stage.

  • SongNet-GRU Employ GRU Cho et al. (2014) to replace Transformer as the core structure.

  • SongNet w/o C Remove the format and rhyme symbols C.

  • SongNet w/o P Remove the intra-position symbols P.

  • SongNet w/o S Remove the sentence segment symbols S.

  • SongNet w/ inverse-P Arrange the intra-position indices in ascending order instead of the descending order.

Figure 3: Parameter tuning of k on the metrics of Rhyme, Integrity, and Micro-Dist-2.
Table 5: Cases of the generated results for SongCi and Sonnet respectively. For SongCi, the numbers in Format (e.g., 3, 5, 7) denote the numbers of tokens in each sentence. The rhyming words are labeled in red color and italic font, followed by their Pinyin. (Since the cases are provided to confirm format consistency, we did not translate the Chinese samples; translation of Chinese poetry is itself a challenging task.)
Table 6: Cases of the generated results given the formats with partial pre-defined content. Format token “_” needs to be translated to real word token.

5 Results and Discussions

5.1 Results

Please note that we mainly employ the truncated top-k sampling method Fan et al. (2018); Radford et al. (2019) to conduct the generation. The parameter tuning of k is described in Section 5.3.

Table 1 and Table 2 depict the experimental results of SongNet as well as the baseline methods S2S and GPT2 on the corpora SongCi and Sonnet respectively. It is obvious that our pre-training and fine-tuning framework SongNet obtains the best performance on most of the automatic metrics. Especially on the Format accuracy metric, SongNet can even reach 98%+, which means that our framework can conduct generation that rigidly matches the pre-defined formats. On the metrics of PPL, Rhyme accuracy, and sentence integrity, SongNet also performs significantly better, by a large margin, than the baseline methods S2S and GPT2 as well as the model variants with only the pre-training or only the fine-tuning stage.

Another observation is that some of the results on the Sonnet corpus are not as good as those on SongCi. The main reason is that Sonnet only contains 100 samples in the training set, as shown in Table 3. Therefore, the model cannot capture sufficient useful features, especially for the rhyming issue.

5.2 Ablation Analysis

We conduct an ablation study on the SongCi corpus, and the experimental results are depicted in Table 4. It should be noted that all the models here are purely trained on the SongCi corpus without any pre-training stage. From the results we can conclude that the introduced symbols C, P, and S indeed play crucial roles in improving the overall performance, especially on the metrics of format, rhyme, and sentence integrity. Even though no single component improves the performance on all the metrics simultaneously, their combination obtains the best overall performance.

5.3 Parameter Tuning

Since we employ top-k sampling as our main decoding strategy, we design several experiments to tune the parameter k. We let k be 1, 5, 10, 20, 50, and 500 respectively. We also provide the beam-search (beam=5) results for comparison and reference.

The parameter tuning results are depicted in Figure 3. From the results we can observe that a large k increases the diversity of the results significantly, but the Rhyme accuracy and the sentence integrity drop simultaneously. Therefore, in the experiments we choose a moderate k to obtain a trade-off between the diversity and the general quality.

5.4 Human Evaluation

 

Model           Relevance  Fluency  Style
SongNet-SongCi  1.36       1.45     2.00
SongNet-Sonnet  0.58       0.42     0.83

Table 7: Human evaluation results.

For human evaluation, we only judge the results generated by our final model SongNet. From Table 7 we can observe that the results on the SongCi corpus are much better than those on the Sonnet corpus, because the corpus scales are different; the small scale of Sonnet leads to a dramatic drop on all the metrics.

5.5 Case Analysis

Table 5 depicts several generated cases for SongCi and Sonnet respectively. For SongCi, the formats (CiPai) are all cold-start samples which are not in the training set, or are even newly defined. Our model can still generate high-quality results in the aspects of format, rhyme, and integrity. However, for the Sonnet corpus, even though the model can generate 14 lines of text, the quality is not as good as on SongCi due to the insufficient training set (only 100 samples). We will address this interesting and challenging few-shot issue in the future.

In addition, we mentioned that our model has the ability of refining and polishing given a format which contains some fixed text information. Examples of generated results under this setting are shown in Table 6, which show that our model SongNet can generate satisfying results, especially on SongCi.

6 Conclusion

We propose to tackle a challenging task called rigid formats controlled text generation. A pre-training and fine-tuning framework SongNet is designed to address the problem. Sets of symbols are tailor-designed to improve the modeling performance for format, rhyme, and sentence integrity. Extensive experiments conducted on two collected corpora demonstrate that our framework generates significantly better results in terms of both automatic metrics and human evaluations given arbitrary cold start formats.

References

  • D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §1, §4.4.
  • K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Cited by: 3rd item.
  • Z. Dai, Z. Yang, Y. Yang, W. W. Cohen, J. Carbonell, Q. V. Le, and R. Salakhutdinov (2019) Transformer-xl: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Cited by: §1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Cited by: §1, §3.1, §3.2, §3.3.
  • A. Fan, M. Lewis, and Y. Dauphin (2018) Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 889–898. Cited by: §1, §3.4, §5.1.
  • J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin (2017) Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1243–1252. Cited by: §1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.1.
  • P. Koehn (2004) Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Conference of the Association for Machine Translation in the Americas, pp. 115–124. Cited by: §3.4.
  • J. H. Lau, T. Cohn, T. Baldwin, J. Brooke, and A. Hammond (2018) Deep-speare: a joint neural model of poetic language, meter and rhyme. arXiv preprint arXiv:1807.03491. Cited by: §1.
  • J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan (2016) A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 110–119. Cited by: §4.3.
  • P. Li, W. Lam, L. Bing, and Z. Wang (2017) Deep recurrent generative decoder for abstractive text summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2091–2100. Cited by: §1.
  • P. Li (2020) An empirical investigation of pre-trained transformer language models for open-domain dialogue generation. arXiv preprint arXiv:2003.04195. Cited by: §1.
  • Y. Liao, Y. Wang, Q. Liu, and X. Jiang (2019) GPT-based generation for classical chinese poetry. arXiv preprint arXiv:1907.00151. Cited by: §1, §1.
  • A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding with unsupervised learning. Technical report, OpenAI. Cited by: §1, §3.1.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI Blog 1 (8). Cited by: §1, §1, §3.1, §3.2, §3.4, §4.3, §5.1.
  • A. M. Rush, S. Chopra, and J. Weston (2015) A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 379–389. Cited by: §1.
  • A. See, P. J. Liu, and C. D. Manning (2017) Get to the point: summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1073–1083. Cited by: §1.
  • A. See, A. Pappu, R. Saxena, A. Yerukola, and C. D. Manning (2019) Do massively pretrained language models make better storytellers?. arXiv preprint arXiv:1909.10705. Cited by: §1.
  • R. Sennrich, B. Haddow, and A. Birch (2015) Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Cited by: §4.2.
  • W. Shakespeare (2000) Shakespeare’s sonnets. Yale University Press. Cited by: §1, §2, §3.2, §3.3, §4.2.
  • L. Shang, Z. Lu, and H. Li (2015) Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1577–1586. Cited by: §1.
  • Speech@CMU (1998) Carnegie-mellon university pronouncing dictionary for american english. Version 0.7b. Available at [http://www.speech.cs.cmu.edu/cgi-bin/cmudict]. Cited by: §4.3.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §1, §3.2, §3.2.
  • O. Vinyals and Q. Le (2015) A neural conversational model. arXiv preprint arXiv:1506.05869. Cited by: §1.
  • Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1.
  • X. Zhang and M. Lapata (2014) Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 670–680. Cited by: §1.
  • Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler (2015) Aligning books and movies: towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19–27. Cited by: §4.2.