Combining Learned Lyrical Structures and Vocabulary for Improved Lyric Generation

11/12/2018 ∙ by Pablo Samuel Castro, et al. ∙ Google

The use of language models for generating lyrics and poetry has received increased interest in the last few years. These tasks pose a unique challenge relative to standard natural language problems: because their ultimate purpose is creative, notions of accuracy and reproducibility are secondary to notions of lyricism, structure, and diversity. In this creative setting, traditional quantitative measures for natural language problems, such as BLEU scores, prove inadequate: a high-scoring model may fail to produce output respecting the desired structure (e.g. song verses), be a terribly boring creative companion, or both. In this work we propose a mechanism for combining two separately trained language models into a framework that is able to produce output respecting the desired song structure, while providing a richness and diversity of vocabulary that renders it more creatively appealing.


1 Introduction

With the increased realism and sophistication of generative models, artists have been increasingly drawn to incorporating these methods into their creative process. The approaches vary, from transferring style from one artist to another (Dumoulin et al., 2016) to adapting a pre-existing process to produce abstract art that maximizes the likelihood of a category under a classification model (White, 2018).

Lyric writing is a particularly challenging artistic endeavour: high-quality lyrics typically require following a specific lyrical structure, the use of a rich vocabulary, a mastery of the language, and the use of poetic techniques such as metaphor and alliteration. Because of this, machine learning models have been slower to find adoption for lyric generation, and the few cases where they have been used have required a substantial amount of human intervention. In our submission to the Machine Learning and Creativity Workshop at NIPS 2017 (Castro et al., 2017) we trained a Recurrent Neural Network (RNN) over a dataset of lyrics. We then manually curated the generated lyrics with renowned Canadian songwriter David Usher to rewrite one of his songs (Sparkle and Shine). Although a successful experiment in human-machine collaboration, the process required more manual intervention than we would have liked. We recently switched to the more sophisticated Transformer Language Models (TLMs) (Vaswani et al., 2017), trained over the same dataset. The results are substantially improved: the generations seem to maintain the general structure of lyrics, but they still suffer from a lack of variety.

2 Proposed Framework

Our approach combines two different TLMs. The first model (M_struct) is trained to capture the structure of lyrics, while the second (M_vocab) is trained to provide a richer vocabulary than what is currently available in the lyrics dataset, while still leveraging the context of the existing lyrics. Given an input lyric line l_i, we combine these models to produce the next line as follows (PoS and ⊕ will be described below): l_{i+1} = M_vocab(l_i ⊕ M_struct(PoS(l_i))).
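The following is a minimal sketch of this generation step. The generate interfaces on the two trained TLMs are hypothetical stand-ins (the serving code is not described in the paper); only the PoS tagging via nltk is grounded in the text.

```python
import nltk  # used here only for PoS tagging

def pos(line: str) -> str:
    """Convert a line of text to a space-separated string of PoS tags."""
    tokens = nltk.word_tokenize(line)
    return " ".join(tag for _, tag in nltk.pos_tag(tokens))

def next_line(line: str, m_struct, m_vocab) -> str:
    """One step of l_{i+1} = M_vocab(l_i ⊕ M_struct(PoS(l_i)))."""
    # M_struct maps the PoS structure of the current line to the PoS
    # structure of a plausible next line.
    next_structure = m_struct.generate(pos(line))  # hypothetical interface
    # M_vocab "materializes" that structure, using the current line as
    # context; ⊕ is plain string concatenation.
    return m_vocab.generate(line + " " + next_structure)
```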

M_struct: Our dataset consists of a large set of lyrics spanning multiple genres and decades (see Appendix A). Our inputs consist of the separate lines of all the song lyrics, while the targets are the same lines shifted by one (e.g. input l_i, target l_{i+1}). We pre-processed the lyrics by converting them into their respective Parts-of-Speech (PoS) tags (we used the pos_tag function from Python's nltk library). This was done to ensure that the lyric model captures only lyric structure, not vocabulary. We will refer to this conversion process as PoS; in other words, our input-to-target mapping becomes PoS(l_i) → PoS(l_{i+1}).
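As a concrete sketch, the structure model's training pairs could be built as follows; `songs` (a list of songs, each a list of lyric lines) is a hypothetical stand-in for our dataset loader:

```python
import nltk

def pos(line: str) -> str:
    # Reduce a line to its PoS-tag sequence so the model sees structure only.
    tokens = nltk.word_tokenize(line)
    return " ".join(tag for _, tag in nltk.pos_tag(tokens))

def structure_pairs(songs):
    pairs = []
    for lines in songs:
        # Targets are the inputs shifted by one line within each song.
        for cur, nxt in zip(lines, lines[1:]):
            pairs.append((pos(cur), pos(nxt)))
    return pairs
```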

M_vocab: We picked a subset of Project Gutenberg's Top 20 books Kaggle dataset (https://www.kaggle.com/currie32/project-gutenbergs-top-20-books; the list of books used is provided in Appendix B). We split each sentence s into two parts, s_1 and s_2; the splitting point was chosen at about halfway through the sentence, without splitting any words (see Appendix C for more details). Denoting ⊕ as string concatenation, a sentence is converted to an input-to-target mapping as: s_1 ⊕ PoS(s_2) → s_2. The intuition behind this approach is that s_1 provides the context, while PoS(s_2) provides the structure to be “materialized”.
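A sketch of this mapping for a single (already split) sentence, under the same nltk-based PoS tagging as above:

```python
import nltk

def vocab_pair(sentence: str, ratio: float = 0.5):
    """Build the input-to-target pair s1 ⊕ PoS(s2) -> s2 for one sentence."""
    words = sentence.split()
    mid = int(len(words) * ratio)  # split on a word boundary, roughly halfway
    s1, s2_words = " ".join(words[:mid]), words[mid:]
    s2_pos = " ".join(tag for _, tag in nltk.pos_tag(s2_words))
    # Input: context (s1) concatenated with the structure to materialize.
    return s1 + " " + s2_pos, " ".join(s2_words)
```

For the example sentence in Appendix C, this produces the input "The quick brown fox VBD IN DT NN" and the target "jumped over the fence".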

3 Empirical Evaluation

In order to evaluate our approach we generated lyric verses using the following procedure. We randomly picked 100 lines from our lyrics dataset as starter lines l_1. Then, for each model M we incrementally built a verse of 5 lines (following the starter) by setting l_{i+1} = M(l_i). We use beam search with a maximum beam size of 3, so each l_i results in up to 3 different l_{i+1}s, and we consider the verse produced by each of these possibilities. This means that for each starter line we produce up to 3^5 = 243 different verses (depending on the input, the beam search may sometimes produce fewer than 3 variants). We compared our approach, RichLyrics, against two baselines: PureLyrics, a TLM trained only on the lyrics dataset, and PureBooks, a TLM trained only on the books dataset.
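A sketch of this procedure for one starter line; `model.beam_candidates` is a hypothetical call returning the (up to 3) beam-search continuations for a given line:

```python
def build_verses(starter: str, model, num_lines: int = 5, beam_size: int = 3):
    """Expand a starter line into all verses reachable via beam candidates."""
    verses = [[starter]]
    for _ in range(num_lines):
        expanded = []
        for verse in verses:
            # Each beam candidate for the next line spawns its own verse,
            # giving up to beam_size ** num_lines verses per starter line.
            for cand in model.beam_candidates(verse[-1], beam_size=beam_size):
                expanded.append(verse + [cand])
        verses = expanded
    return verses
```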

3.1 Quantitative Evaluation

From the verses generated by each model we computed the number of words per line, the average word length per line, the number of line repeats in the verse, and the fraction of words repeated from one line to the next. The results are presented in Table 1 and demonstrate that RichLyrics makes use of a much larger vocabulary, with fewer repeats. Given that the verses are 5 lines long, PureLyrics is repeating lines about half the time! Qualitative examples of the generations are presented in Appendix D and further confirm these quantitative results.

Table 1: Statistics for the different models: words per line, average word length per line, number of line repeats, and fraction of repeated words.
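A sketch of how the four statistics in Table 1 could be computed for a single verse; the precise definitions of "line repeats" and "repeated words" below are our reading of the text, not a published specification:

```python
from statistics import mean

def verse_stats(verse):
    """verse: list of generated lines (the starter line excluded)."""
    tokens = [line.split() for line in verse]
    words_per_line = mean(len(t) for t in tokens)
    avg_word_length = mean(len(w) for t in tokens for w in t)
    # Line repeats: lines identical to a line seen earlier in the verse.
    seen, line_repeats = set(), 0
    for line in verse:
        if line in seen:
            line_repeats += 1
        seen.add(line)
    # Fraction of a line's words already present in the preceding line,
    # averaged over consecutive pairs of lines.
    fracs = [len(set(cur) & set(prev)) / len(cur)
             for prev, cur in zip(tokens, tokens[1:]) if cur]
    frac_repeated = mean(fracs) if fracs else 0.0
    return words_per_line, avg_word_length, line_repeats, frac_repeated
```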

4 Discussion and Future Work

Although we are able to substantially improve the quality of the generated lyrics, there is still much work ahead of us. We would like to train over a larger set of books, including more recent ones, to obtain a more modern vocabulary. An important aspect of lyric structure that we are investigating is having the generation adapt to rhyming structure and phonetic cadence, as songwriters often use these to fit a musical melody. As with most language models, semantic consistency remains challenging, and is something we are actively investigating.

References

Appendix A Lyrics dataset

These are the genres used for the lyric structure model detailed in Section 2. We exclude Children’s Music and Hip-Hop, the latter to reduce the amount of profanity in the generations.

  • Alternative / indie

  • Country

  • Folk

  • Jazz

  • Metal

  • Pop

  • R-and-B / Soul

  • Rock

  • Soundtracks

In total this resulted in over 1 million input/target pairs, and about 91,000 for the test/validation sets.

Appendix B Kaggle books dataset

The books used for training the Lyric Vocabulary model detailed in Section 2 were:

  • A Tale of Two Cities by Charles Dickens

  • Adventures of Huckleberry Finn by Mark Twain

  • Alice's Adventures in Wonderland by Lewis Carroll

  • Dracula by Bram Stoker

  • Emma by Jane Austen

  • Frankenstein by Mary Shelley

  • Great Expectations by Charles Dickens

  • Grimms' Fairy Tales by The Brothers Grimm

  • Metamorphosis by Franz Kafka

  • Pride and Prejudice by Jane Austen

  • The Adventures of Sherlock Holmes by Arthur Conan Doyle

  • The Adventures of Tom Sawyer by Mark Twain

  • The Count of Monte Cristo by Alexandre Dumas

  • The Picture of Dorian Gray by Oscar Wilde

  • The Prince by Niccolò Machiavelli

  • The Yellow Wallpaper by Charlotte Perkins Gilman

In total this resulted in around 155,000 input/target pairs used for training, and around 13,000 for the test/validation sets.

Appendix C Word splitting mechanism

The split assumes that periods denote the end of a sentence (aside from abbreviations typically containing a period, e.g. Mr. or St.), while also taking into account that quotes in books sometimes mark the end of a sentence, e.g. when a quote is followed by an uppercase word. In order to avoid very long sentences skewing the structure of the proposed framework, if the total number of words in a sentence exceeds a threshold, we split it into sub-sentences of 15 words each. Subsequently, we split each sentence approximately in half, taking care not to cut words in half. The split ratio has been parameterized in case we choose to experiment with different ratios.

As an example, if we have the sentence: “The quick brown fox jumped over the fence”, our procedure would produce the following input and target phrases:

  • Input: The quick brown fox VBD IN DT NN

  • Target: jumped over the fence
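A sketch of the full splitting mechanism, with the 15-word sub-sentence length from the text and the split ratio exposed as the parameter mentioned above; sentence-boundary detection around abbreviations and quotes is elided:

```python
def split_into_pairs(sentence: str, max_words: int = 15, ratio: float = 0.5):
    """Split a sentence into (first-part, second-part) word-boundary pairs."""
    words = sentence.split()
    # Chop very long sentences into sub-sentences of at most max_words words
    # so that extreme lengths do not skew the learned structure.
    chunks = ([words[i:i + max_words] for i in range(0, len(words), max_words)]
              if len(words) > max_words else [words])
    pairs = []
    for chunk in chunks:
        mid = int(len(chunk) * ratio)  # approximate halfway, never mid-word
        pairs.append((" ".join(chunk[:mid]), " ".join(chunk[mid:])))
    return pairs
```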

Appendix D Qualitative Evaluation

We present some sample lyrics produced by the different models, using the same starter lines (the first line of each block in the tables below). As discussed in Section 2, we generate lines incrementally: l_{i+1} = M(l_i). In Table 2 we compare PureLyrics with RichLyrics, where the increased variety in the outputs produced by RichLyrics is evident.

PureLyrics | RichLyrics
come on , uh | come on , uh
i'm not gonna write you a love song | you remember the voice of the widow
'cause you tell me it's | i love the girl of the age
i'm not the one you wanted | i have a regard for the whole
i'm not the one you wanted | i have no doubt of the kind
i'm not the one you wanted | i am sitting in the corner of the mantelpiece
i 'm your big and brave and handsome romeo | i 'm your big and brave and handsome romeo
i'm the man of the woods, i'm the man of the woods | you know my secret secret
i'm the man of the woods, i'm the woods | you have my second estate
i'm the man of the woods, i'm the woods | you suit your high origin
i'm the man of the woods, i'm the woods | you have my cursed youth
i'm the man of the woods, i'm the woods | you have my life
you told me you loved nobody else , you never would | you told me you loved nobody else , you never would
you told me you loved nobody else but you | you told me you wanted everything else, you never would
you told me you loved nobody but you | he put his hand on the pillow of the marquis
you told me you loved nobody but you | he put his cap on the ground like a stone
you told me you loved nobody but you | he put his hand on the latch of a door
you told me you loved me but you loved me | he put his key in the lock as a key
Table 2: Comparison between the PureLyrics and RichLyrics models.

In Table 3 we compare PureBooks with RichLyrics, which highlights how our proposal produces output that is more reminiscent of real lyrics, both in terms of phrase structure and length.

PureBooks | RichLyrics
come on , uh | come on , uh
she was a new man | you remember the voice of the widow
but it was not | i love the girl of the age
a thing to be done | i have a regard for the whole
it was | i have no doubt of the kind
a confession | i am sitting in the corner of the mantelpiece
i 'm your big and brave and handsome romeo | i 'm your big and brave and handsome romeo
you, and i'll tell you all about it | you know my secret secret
i don't | you have my second estate
understand you, | you suit your high origin
said the young man, and we | you have my cursed youth
shall be happy to-day | you have my life
you told me you loved nobody else , you never would | you told me you loved nobody else , you never would
have felt that you were coming to know of yourself | you told me you wanted everything else, you never would
i have no | he put his hand on the pillow of the marquis
doubt of that, | he put his cap on the ground like a stone
said the young man, that you have been a | he put his hand on the latch of a door
great fancy for a few minutes, and then another? | he put his key in the lock as a key
Table 3: Comparison between the PureBooks and RichLyrics models.