AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer

08/25/2021
by Jingwei Zhao, et al.

Accompaniment arrangement is a difficult music generation task involving intertwined constraints of melody, harmony, texture, and music structure. Existing models are not yet able to capture all these constraints effectively, especially for long-term music generation. To address this problem, we propose AccoMontage, an accompaniment arrangement system for whole pieces of music through unifying phrase selection and neural style transfer. We focus on generating piano accompaniments for folk/pop songs based on a lead sheet (i.e., melody with chord progression). Specifically, AccoMontage first retrieves phrase montages from a database while recombining them structurally using dynamic programming. Second, chords of the retrieved phrases are manipulated to match the lead sheet via style transfer. Lastly, the system offers controls over the generation process. In contrast to pure learning-based approaches, AccoMontage introduces a novel hybrid pathway, in which rule-based optimization and deep learning are both leveraged to complement each other for high-quality generation. Experiments show that our model generates well-structured accompaniment with delicate texture, significantly outperforming the baselines.


1 Introduction

Accompaniment arrangement refers to the task of reconceptualizing a piece by composing the accompaniment part given a lead sheet (a lead melody with a chord progression). When designing the texture and voicing of the accompaniment, arrangers are simultaneously dealing with the constraints from the original melody, chord progression, and other structural information. This constrained composition process is often modeled as a conditioned generation problem in music automation.

Despite recent promising advances in deep music generative models [yang2019deep, 9031528, huang2018music, dhariwal2020jukebox, ren2020popmag, zhu2018xiaoice, yang2017midinet], existing methods cannot yet generate accompaniment while effectively capturing the aforementioned constraints. Specifically, most algorithms fall short in preserving the fine granularity and structure of the accompaniment in the long run. It is also difficult to explicitly control the generation process. We argue that these limitations stem mainly from the prevailing generation-from-scratch approach. In composition practice, however, arrangers often resort to existing pieces as accompaniment references. For example, a piano accompanist can improvise using off-the-shelf textures while transferring them onto the proper chords, which is essentially re-harmonizing a reference piece to fit a query lead sheet. In this way, the coherence and structure of the accompaniment are inherited from the reference pieces, and musicians retain control over which reference to choose.

To this end, we contribute AccoMontage, a generalized template-based approach that 1) given a lead sheet as the query, searches for proper accompaniment phrases as the reference; and 2) re-harmonizes the selected reference via style transfer to accompany the query. We model the search stage as an optimization problem on a graph, where nodes represent candidate phrases in the dataset and edges represent inter-phrase transitions. Node scores are defined in a rule-based manner to reveal query-reference fitness, while edge scores are learned by contrastive learning to reveal the smoothness of phrase transitions. For the re-harmonization stage, we adopt the chord-texture disentanglement and transfer method in [wang2020learning, wang2020pianotree].

The current system focuses on arranging piano accompaniments for a full-length folk or pop song. Experimental results show that the generated accompaniments not only harmonize well with the melody but also contain more intra-phrase coherence and inter-phrase structure compared to the baselines. In brief, our contributions are:

  • A generalized template-based approach: A novel hybrid approach to generative models, where searching and deep learning are both leveraged to complement each other and enhance the overall generation quality. This strategy is also useful in other domains.

  • The AccoMontage system: A system capable of generating long-term and structured accompaniments for full-length songs. The arranged accompaniments have state-of-the-art quality and are significantly better than existing pure learning-based and template-based baselines.

  • Controllable music generation: Users can control the generation process by pre-filtering on two texture features: rhythm density and voice number.

2 Related Work

We review three topics related to symbolic accompaniment arrangement: conditional music generation, template-based arrangement, and music style transfer.

2.1 Conditional Music Generation

Conditional music generation takes various forms, such as generating chords conditioned on the melody [simon2008mysong, lim2017chord], generating melody on the underlying chords [zhu2018xiaoice, yang2017midinet], and generating melody from metadata and descriptions [zhang2020butter]. In particular, accompaniment arrangement refers to generating accompaniment conditioned on the lead sheet, and this topic has recently drawn much research attention. We even see tailored datasets for piano arrangement tasks [wang2020pop909].

For accompaniment arrangement, existing models that achieve satisfying arrangement quality typically apply only to short clips. GAN- and VAE-based models are used to maintain inter-track music dependency [dong2018musegan, liu2018lead, jia2019impromptu], but limit generation to 4 to 8 bars. Another popular approach is to generate longer accompaniment in a seq2seq manner with attention [wang2020learning, ren2020popmag, zhu2018xiaoice], which can easily collapse into repetitive textural patterns in the long run. On the other hand, models that arrange for complete songs typically rely on a library of fixed elementary textures and often fail to generalize [chen2013automatic, wu2016emotion, liu2012polyphonic]. This paper aims to unite high-quality and long-term accompaniment generation in one system, where “long-term” refers to full songs (32 bars and more) with dependencies on intra-phrase melody and chord progression as well as inter-phrase structure.

2.2 Template-based Accompaniment Arrangement

The use of existing compositions to generate music is not an entirely new idea. Existing template-based algorithms include learning-based unit selection [bretan2016unit, xia_2018], rule-based matching [chen2013automatic, wu2016emotion], and genetic algorithms [liu2012polyphonic]. For accompaniment arrangement, a common problem lies in the difficulty of finding a well-matched reference, especially when the templates contain rich textures with non-chordal tones. Some works use only basic accompaniment patterns to avoid this issue [chen2013automatic, wu2016emotion, liu2012polyphonic]. In contrast, our study addresses this problem by applying style transfer to the selected template to improve the fitness between the accompaniment and the lead sheet. We thus call our approach generalized template matching.

2.3 Music Style Transfer

Music style transfer [dai2018music] is becoming a popular approach to controllable music generation. Through music-representation disentanglement and manipulation, users can transfer various factors of a reference music piece, including pitch contour, rhythm pattern, chord progression, polyphonic texture, etc. [wang2020learning, yang2019deep]. Our approach can be seen as an extension of music style transfer in which the “reference search” step is also automated.

3 Methodology

The AccoMontage system uses a generalized template-based approach for piano accompaniment arrangement. The input to the system is a lead sheet of a complete folk/pop song with phrase labels, which we call a query. The search space of the system is a MIDI dataset of piano arrangements of pop songs. In general, the chord progression and phrase labels of each song in the dataset can be derived by MIR algorithms. In our case, the chords are extracted by [wang2020pop909] and the phrases are labeled manually [dai2020automatic]. We refer to each phrase, together with its associated accompaniment, melody, and chords, as a reference. In the rest of this section, we first introduce the feature representation of the AccoMontage system in Section 3.1, then describe the main pipeline algorithms in Sections 3.2 and 3.3, and finally show how to further control the arrangement process in Section 3.4.

3.1 Feature Representation

Given a lead sheet as the query, we represent it as a sequence of ternary tuples:

X = \{x_i\}_{i=1}^{N} = \{(m_i, c_i, l_i)\}_{i=1}^{N},   (1)

where m_i, the melody feature of query phrase i, is a sequence of 130-D one-hot vectors covering 128 MIDI pitches plus a hold and a rest state [roberts2018hierarchical]; c_i, the chord feature aligned with m_i, is a sequence of 12-D chromagram vectors [yang2019deep, 9031528]; and l_i is a phrase label string denoting within-song repetition and length in bars, such as A8, B8, etc. [dai2020automatic]. N is the number of phrases in lead sheet X.

We represent the accompaniment reference space as a collection of tuples:

\mathcal{Y} = \{\hat{y}_j\}_{j=1}^{M} = \{(\hat{m}_j, \hat{c}_j, \hat{a}_j)\}_{j=1}^{M},   (2)

where \hat{m}_j and \hat{c}_j are the melody and chord features of the j-th reference phrase, represented in the same format as in the query phrases; \hat{a}_j is the accompaniment feature, a 128-D piano-roll representation as in [wang2020learning]. M is the volume of the reference space.
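
To make the representations concrete, the following minimal sketch (in Python; the helper names and toy note format are our own, not the paper's) builds a 130-D melody sequence, a 12-D chromagram sequence, and a phrase label for a one-bar query phrase:

```python
import numpy as np

HOLD, REST = 128, 129          # extra states appended to the 128 MIDI pitches
STEPS_PER_BAR = 16             # 16th-note grid under 4/4 meter

def melody_feature(notes, n_steps):
    """notes: list of (midi_pitch, onset_step, duration_steps). Returns (n_steps, 130)."""
    m = np.zeros((n_steps, 130), dtype=np.float32)
    m[:, REST] = 1.0                                   # default state: rest
    for pitch, onset, dur in notes:
        m[onset] = 0.0; m[onset, pitch] = 1.0          # onset frame
        for t in range(onset + 1, min(onset + dur, n_steps)):
            m[t] = 0.0; m[t, HOLD] = 1.0               # hold frames
    return m

def chord_feature(chords, n_beats):
    """chords: list of sets of pitch classes, one per beat. Returns (n_beats, 12) chromagram."""
    c = np.zeros((n_beats, 12), dtype=np.float32)
    for b, pcs in enumerate(chords):
        for pc in pcs:
            c[b, pc % 12] = 1.0
    return c

# One bar of melody (C4 quarter, E4 quarter, G4 half) over a C-major chord
m_i = melody_feature([(60, 0, 4), (64, 4, 4), (67, 8, 8)], STEPS_PER_BAR)
c_i = chord_feature([{0, 4, 7}] * 4, n_beats=4)
l_i = "A8"                          # phrase label: repetition letter + length in bars
print(m_i.shape, c_i.shape, l_i)    # (16, 130) (4, 12) A8
```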

3.2 Phrase Selection

Assuming there are N phrases in the query lead sheet, we aim to find a reference sequence:

\hat{Y} = (\hat{y}_{k_1}, \hat{y}_{k_2}, \dots, \hat{y}_{k_N}),   (3)

where reference \hat{y}_{k_i} is matched to the i-th phrase x_i = (m_i, c_i, l_i) of the query and has the same length as x_i.

Given the query’s phrase structure, the reference space forms a layered graph, as shown in Figure 1. Each layer consists of equal-length reference phrases, and consecutive layers are fully connected to each other. Each node in the graph describes the fitness between a query phrase x_i and a candidate reference \hat{y}_j, and each edge evaluates the transition from one reference to the next. A complete selection of reference phrases corresponds to a path that traverses all layers. To evaluate a path, we design a fitness model and a transition model as follows.

3.2.1 Phrase Fitness Model

We rely on the phrase fitness model to evaluate how well a reference accompaniment phrase matches a query phrase. Formally, we define the fitness model as follows:

Fit(x_i, \hat{y}_j) = \alpha \, sim(r_i, \hat{r}_j) + \beta \, sim(TIV(c_i), TIV(\hat{c}_j)),   (4)

where sim(\cdot, \cdot) measures the similarity between two inputs; in our work, we use cosine similarity. TIV(\cdot) is the Tonal Interval Vector (TIV) operator that maps a chromagram to a 12-D tonal interval space whose geometric properties concur with harmonic relationships of the tonal system [bernardes2016multi]. r_i and \hat{r}_j are rhythm features, which condense the original 130-D melody feature to 3-D, denoting an onset of any pitch, a hold state, and rest [yang2019deep]. c_i and \hat{c}_j are the chord features (chromagrams) defined in Section 3.1, and we further augment the reference space by transposing phrases to all 12 keys. While computing the similarity, we treat the rhythm feature and the TIV as 2-D matrices with 3 and 12 channels, respectively, and calculate the cosine similarity of both features on their channel-flattened vectors.

Note that in Eq (4), we compare only the rhythm and chord features for query-reference matching. The underlying assumption is that if lead sheet X is similar to another lead sheet X' in rhythm and chord progression, then the accompaniment of X' will very likely fit X as well.
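
The fitness score of Eq (4) can be sketched as below. The rhythm reduction and the channel-flattened cosine similarity follow the description above; the TIV is simplified here to the unweighted DFT coefficients k = 1..6 of the chromagram, whereas [bernardes2016multi] applies specific per-coefficient weights, so `tiv` should be read as an approximation:

```python
import numpy as np

def rhythm_feature(melody):
    """Condense a (T, 130) melody feature to (T, 3): [onset of any pitch, hold, rest]."""
    onset = melody[:, :128].sum(axis=1)
    return np.stack([onset, melody[:, 128], melody[:, 129]], axis=1)

def tiv(chroma):
    """Map each 12-D chroma vector to a 12-D tonal interval vector (6 complex DFT bins).
    Simplification: unweighted DFT coefficients; the original TIV applies fixed weights."""
    spec = np.fft.fft(chroma, axis=1)[:, 1:7]          # coefficients k = 1..6
    return np.concatenate([spec.real, spec.imag], axis=1)

def cos_sim(a, b):
    """Cosine similarity between two channel-flattened feature matrices."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fitness(query, ref, alpha=0.5, beta=0.5):
    """query/ref: dicts with 'melody' (T, 130) and 'chord' (beats, 12) features of equal length."""
    rhythm_term = cos_sim(rhythm_feature(query["melody"]), rhythm_feature(ref["melody"]))
    chord_term = cos_sim(tiv(query["chord"]), tiv(ref["chord"]))
    return alpha * rhythm_term + beta * chord_term
```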

3.2.2 Transition Model

We exploit the transition model to capture inter-phrase transition and structural constraints. Formally, we define the transition score between two reference accompaniment phrases as follows:

T(\hat{y}_j, \hat{y}_{j'}) = h_j^\top W h_{j'} + Form(\hat{y}_j, \hat{y}_{j'}).   (5)

The first term in Eq (5) aims to capture the transition naturalness of the polyphonic texture between two adjacent phrases. Instead of using rule-based heuristics to process texture information, we resort to neural representation learning and contrastive learning. Formally, let h_j denote the feature vector that represents the accompaniment texture of \hat{y}_j. It is computed by:

h_j = E_t(\hat{a}_j; \theta),   (6)

where the design of E_t is adopted from the chord-texture representation disentanglement model in [wang2020learning]. This texture encoder regards piano-roll inputs as images and uses a CNN to compute a rough “sketch” of the polyphonic texture that is not sensitive to mild chord variations.

To reveal whether two adjacent textures follow a natural transition, we use a contrastive loss to simultaneously train the weight matrix W in Eq (5) and fine-tune E_t (with parameters \theta) in Eq (6):

\mathcal{L}(W, \theta) = -\log \frac{\exp(h_j^\top W h_{j^+})}{\sum_{\hat{y}_{j'} \in \mathcal{S}} \exp(h_j^\top W h_{j'})},   (7)

where \hat{y}_j and \hat{y}_{j^+} are supposed to be consecutive pairs, and \mathcal{S} is a collection of samples which contains \hat{y}_{j^+} and other phrases randomly selected from the reference space \mathcal{Y}. Following [bretan2016unit], we keep \mathcal{S} small: it contains the true successor plus four random negatives (see Section 4.3).
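
A sketch of the bilinear transition score and the contrastive objective of Eqs (5)-(7), assuming a texture-encoder module is available; `texture_encoder` is a placeholder for E_t of [wang2020learning], not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionModel(nn.Module):
    """Scores how naturally one accompaniment texture follows another (first term of Eq (5))."""
    def __init__(self, texture_encoder, emb_dim=256):
        super().__init__()
        self.enc = texture_encoder                          # E_t, fine-tuned jointly (parameters theta)
        self.W = nn.Parameter(torch.randn(emb_dim, emb_dim) * 0.01)

    def score(self, roll_a, roll_b):
        h_a, h_b = self.enc(roll_a), self.enc(roll_b)       # Eq (6): h = E_t(a)
        return torch.einsum("bd,de,be->b", h_a, self.W, h_b)

    def contrastive_loss(self, roll, roll_pos, roll_negs):
        """Eq (7): roll_pos is the true successor, roll_negs a list of random phrases."""
        pos = self.score(roll, roll_pos)                                     # (B,)
        negs = torch.stack([self.score(roll, n) for n in roll_negs], dim=1)  # (B, K)
        logits = torch.cat([pos.unsqueeze(1), negs], dim=1)                  # candidate set S
        targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, targets)              # -log softmax at the true successor
```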

Figure 1: Phrase selection on the graph. Based on a lead sheet with a four-phrase structure, the search space forms a graph with four consecutive layers. Graph nodes are assigned fitness (similarity) scores, and edges transition scores. The form term is part of the transition score.

We introduce the form term in Eq (5) to bias the selection toward more well-structured transitions. Concretely, if query phrases x_i and x_{i+1} share the same phrase label, we prefer to also retrieve equal-labeled references, i.e., accompaniments with recapitulated melody themes. To maximize such likelihood, we define the form term as:

Form(\hat{y}_j, \hat{y}_{j'}) = \mathbb{1}[l_i = l_{i+1}] \cdot \mathbb{1}[\hat{m}_j \simeq \hat{m}_{j'}],   (8)

where \mathbb{1}[\cdot] is the indicator function, l_i and l_{i+1} are the labels of the query phrases to which \hat{y}_j and \hat{y}_{j'} are matched, and we define \hat{m}_j \simeq \hat{m}_{j'} if and only if their step-wise cosine similarity is greater than 0.99.
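
One literal reading of Eq (8) in code, with the step-wise criterion interpreted as the average frame-wise cosine similarity exceeding the 0.99 threshold (the function names are ours):

```python
import numpy as np

def stepwise_cos_sim(mel_a, mel_b):
    """Mean cosine similarity between time-aligned melody frames of two equal-length phrases."""
    num = (mel_a * mel_b).sum(axis=1)
    den = np.linalg.norm(mel_a, axis=1) * np.linalg.norm(mel_b, axis=1) + 1e-8
    return float((num / den).mean())

def form_term(label_i, label_j, ref_mel_a, ref_mel_b, threshold=0.99):
    """Eq (8): reward equal-labeled query phrases that retrieve near-identical reference melodies."""
    same_query_label = (label_i == label_j)
    same_ref_melody = stepwise_cos_sim(ref_mel_a, ref_mel_b) > threshold
    return 1.0 if (same_query_label and same_ref_melody) else 0.0
```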

3.2.3 Model Inference

The reference space forms a layered graph with consecutive layers fully connected to each other. As shown in Figure 1, we leverage the transition model to assign weights to the edges and the fitness model to assign weights to the nodes. The phrase selection is thus formulated as:

\hat{Y}^* = \arg\max_{\hat{y}_{k_1}, \dots, \hat{y}_{k_N}} \; \gamma_1 \sum_{i=1}^{N} Fit(x_i, \hat{y}_{k_i}) + \gamma_2 \sum_{i=1}^{N-1} T(\hat{y}_{k_i}, \hat{y}_{k_{i+1}}),   (9)

where Fit(\cdot, \cdot) and T(\cdot, \cdot) are as defined in Eq (4) and Eq (5), and \gamma_1 and \gamma_2 are hyper-parameters.

We optimize Eq (9) by dynamic programming to retrieve the Viterbi path as the optimal solution [forney1973viterbi]. The time complexity is O(NM^2), where N is the number of query phrases and M is the volume of the reference space.
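
The optimization of Eq (9) is a standard Viterbi-style dynamic program over the layered graph. The sketch below assumes pre-computed score matrices and illustrates the O(NM^2) recursion; it is an illustration, not the authors' implementation:

```python
import numpy as np

def select_phrases(fit_scores, trans_scores, gamma1=1.0, gamma2=1.0):
    """
    fit_scores:   list of N arrays, fit_scores[i][j] = Fit(x_i, candidate j in layer i)
    trans_scores: list of N-1 arrays, trans_scores[i][j, k] = T(candidate j in layer i,
                                                               candidate k in layer i+1)
    Returns the index of the chosen candidate in each layer (the Viterbi path).
    """
    N = len(fit_scores)
    dp = gamma1 * fit_scores[0]          # best path score ending at each node of layer 0
    back = []
    for i in range(1, N):
        # scores of extending every layer-(i-1) node j to every layer-i node k
        cand = dp[:, None] + gamma2 * trans_scores[i - 1] + gamma1 * fit_scores[i][None, :]
        back.append(cand.argmax(axis=0))  # best predecessor for each node k
        dp = cand.max(axis=0)
    path = [int(dp.argmax())]
    for bp in reversed(back):             # trace back the optimal path
        path.append(int(bp[path[-1]]))
    return path[::-1]
```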

In summary, the phrase selection algorithm enforces strong structural constraints (song-level form and phrase-level fitness) as well as weak harmonic constraints (the chord term in Eq (4)) on the selection of accompaniment references. We argue that this is a good compromise because strong harmonic constraints could “bury” well-structured references due to unmatched chords when the dataset is limited. To achieve better harmonic fitness, we resort to music style transfer.

3.3 Style Transfer

The essence of the style transfer is to transfer the chord sequence of a selected reference phrase while keeping its texture. To this end, we adopt the chord-texture disentanglement VAE framework of [wang2020learning]. The VAE consists of a chord encoder E_c and a texture encoder E_t. E_c takes in a two-bar chord progression at one-beat resolution and uses a bi-directional GRU to approximate a latent chord representation z_c. E_t is introduced in Section 3.2.2; it extracts a latent texture representation z_t. The decoder D takes the concatenation of z_c and z_t and decodes the music segment using the same architecture as [wang2020pianotree]. By keeping the texture input fixed and varying the chords, the whole model works like a conditional VAE that re-harmonizes the texture based on the chord condition.

In our case, to re-harmonize the selected accompaniments to the query lead sheet X, the style transfer works in a pipeline as follows:

\hat{a}^*_i = D(E_c(c_i), E_t(\hat{a}_{k_i})), \quad i = 1, \dots, N,   (10)

where \hat{a}^*_i is the re-harmonized result for the i-th phrase. The final accompaniment arrangement is \hat{A}^* = (\hat{a}^*_1, \hat{a}^*_2, \dots, \hat{a}^*_N).
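
The pipeline of Eq (10) can be sketched against a hypothetical interface for the pre-trained chord encoder E_c, texture encoder E_t, and decoder D; the two-bar (8-beat) segmentation and the tensor shapes are assumptions for illustration, not the released API:

```python
import torch

@torch.no_grad()
def reharmonize(selected_rolls, query_chords, chord_enc, texture_enc, decoder, seg_beats=8):
    """
    selected_rolls: list of piano-roll tensors (steps, 128), one per selected reference phrase
    query_chords:   list of chord tensors (beats, 12) at one-beat resolution, one per query phrase
    Assumption: the pre-trained model operates on two-bar (8-beat) segments, so each phrase is
    processed segment by segment and the outputs are concatenated along time.
    """
    arrangement = []
    for roll, chords in zip(selected_rolls, query_chords):
        outputs = []
        for b in range(0, chords.size(0), seg_beats):
            z_chd = chord_enc(chords[b:b + seg_beats].unsqueeze(0))              # E_c(c_i)
            z_txt = texture_enc(roll[b * 4:(b + seg_beats) * 4].unsqueeze(0))    # E_t(a_{k_i})
            outputs.append(decoder(torch.cat([z_chd, z_txt], dim=-1)))           # D([z_c; z_t])
        arrangement.append(torch.cat(outputs, dim=1))     # re-harmonized phrase, segments joined
    return arrangement
```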

3.4 Controllability

In the phrase selection stage, we essentially traverse a route on the graph. Intuitively, we can control the generation of the whole route by assigning the first node. In our case, we filter reference candidates for the first phrase based on textural properties. The current design has two filter criteria: rhythm density and voice number. We define three intervals (low, medium, and high) for both properties and mask the references that do not fall into the expected interval (see the sketch following the definitions below).

  • Rhythm Density (RD): the ratio of time steps with note onsets to all time steps;

  • Voice Number (VN): the average number of notes that are simultaneously played.
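
A minimal sketch of the two controls and the pre-filtering, computed from binary piano-rolls of shape (time_steps, 128); the tercile boundaries for low/medium/high are our assumption for illustration:

```python
import numpy as np

def rhythm_density(roll):
    """Fraction of time steps that contain at least one active note."""
    onsets = roll.sum(axis=1) > 0      # simplification: a plain binary roll cannot separate onsets from holds
    return float(onsets.mean())

def voice_number(roll):
    """Average number of simultaneously sounding notes over non-empty steps."""
    active = roll.sum(axis=1)
    return float(active[active > 0].mean()) if (active > 0).any() else 0.0

def prefilter(candidates, rd_level, vn_level):
    """Keep only first-phrase candidates whose RD/VN fall in the requested tercile (low/medium/high)."""
    rds = np.array([rhythm_density(c) for c in candidates])
    vns = np.array([voice_number(c) for c in candidates])
    def level_mask(values, level):
        lo, hi = np.quantile(values, [1 / 3, 2 / 3])
        return {"low": values <= lo,
                "medium": (values > lo) & (values <= hi),
                "high": values > hi}[level]
    return level_mask(rds, rd_level) & level_mask(vns, vn_level)
```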

4 Experiment

4.1 Dataset

We collect our reference space from the POP909 dataset [wang2020pop909] with the phrase segmentation created by [dai2020automatic]. POP909 contains piano arrangements of 909 popular songs created by professional musicians, which is a great source of delicate piano textures. Each song has separated melody, chord, and accompaniment MIDI tracks. We only keep the pieces in 2/4 and 4/4 meters and quantize them at 16th notes (chords at 4th notes). This yields 857 songs segmented into 11032 phrases. As shown in Table 1, four-bar and eight-bar phrases form the majority, which makes sense for popular songs. We also use POP909 to fine-tune our transition model, during which we randomly split the dataset (at the song level) into training (95%) and validation (5%) sets.

Bars      <4     4      5~7    8      >8
Phrases   1338   3591   855    3796   1402
Table 1: Length Distribution of POP909 Phrases

At inference time, the query lead sheets come from the Nottingham Dataset [nottingham], a collection of ~1000 British and American folk tunes. We likewise keep 2/4 and 4/4 pieces quantized at 16th notes (chords at 4th notes). We label their phrase segmentation by hand; four-bar and eight-bar phrases are again the most common.

Figure 2: Accompaniment arrangement for Castles in the Air from the Nottingham Dataset by AccoMontage. The 32-bar song has a recurring phrase structure which is captured during accompaniment arrangement. Secondary melodies and texture variations are also introduced to manifest the music flow. Some texture re-harmonizations of 7th chords are highlighted.

4.2 Architecture Design

We develop our model based on the chord-texture disentanglement model proposed by [wang2020learning], which comprises a texture encoder, a chord encoder, and a decoder. The texture encoder consists of a convolutional layer, followed by a ReLU activation [nair2010rectified] and max-pooling, and a bi-directional GRU encoder [roberts2018hierarchical]. The chord encoder is a bi-directional GRU encoder. The decoder is consistent with PianoTree VAE [wang2020pianotree], a hierarchical architecture for polyphonic representation learning. The architecture of E_t in the proposed transition model is the same as the texture encoder described above. We directly take the chord-texture disentanglement model with pre-trained weights as our style transfer model, and fine-tune the transition model with W and \theta in Eq (7) as trainable parameters.
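
A schematic PyTorch version of the texture encoder (convolution, ReLU, max-pooling, then a bi-directional GRU). The kernel sizes, strides, and hidden dimensions below are placeholders rather than the exact values used in the paper; the block is meant to convey the structure only:

```python
import torch
import torch.nn as nn

class TextureEncoder(nn.Module):
    """Conv front-end over the piano-roll 'image', then a bi-directional GRU over time."""
    def __init__(self, conv_channels=10, hidden=512, emb_dim=256):
        super().__init__()
        self.conv = nn.Conv2d(1, conv_channels, kernel_size=(4, 12), stride=(4, 1))  # placeholder sizes
        self.pool = nn.MaxPool2d(kernel_size=(1, 4), stride=(1, 4))                  # placeholder sizes
        self.gru = nn.GRU(input_size=conv_channels * 29, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, emb_dim)

    def forward(self, roll):                                # roll: (batch, time_steps, 128)
        x = torch.relu(self.conv(roll.unsqueeze(1)))        # (B, C, T/4, 117)
        x = self.pool(x)                                    # (B, C, T/4, 29)
        x = x.permute(0, 2, 1, 3).flatten(2)                # (B, T/4, C * 29)
        _, h = self.gru(x)                                  # h: (2, B, hidden)
        return self.out(torch.cat([h[0], h[1]], dim=-1))    # (B, emb_dim) texture embedding
```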

4.3 Training

Our model is trained with a mini-batch size of 128 piano-roll pairs for 50 epochs using the Adam optimizer [kingma2017adam], with a learning rate exponentially decayed from 1e-4 to 5e-6. Note that each piano-roll pair contains 2 consecutive piano-rolls and 4 randomly sampled ones. We first pre-train a chord-texture disentanglement model and initialize E_t with the weights of the texture encoder in the pre-trained model. We then update all the parameters of the proposed transition model using the contrastive loss in Eq (7). We set both \alpha and \beta in Eq (4) to 0.5, and fix \gamma_1 and \gamma_2 in Eq (9) at inference time.
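
A condensed view of the fine-tuning loop described above (Adam, exponential learning-rate decay from 1e-4 toward 5e-6, batches pairing 2 consecutive piano-rolls with 4 random negatives). Data loading is stubbed out; only the optimization skeleton is meant literally:

```python
import torch

def finetune_transition(model, train_batches, epochs=50, lr_start=1e-4, lr_end=5e-6):
    """model: a transition model as sketched in Section 3.2.2 (W plus the texture encoder's theta)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr_start)
    # per-epoch multiplicative factor so the lr decays from lr_start to lr_end over all epochs
    gamma = (lr_end / lr_start) ** (1.0 / max(epochs - 1, 1))
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=gamma)
    for epoch in range(epochs):
        for roll, roll_next, roll_negs in train_batches():   # 2 consecutive rolls + 4 random ones
            loss = model.contrastive_loss(roll, roll_next, roll_negs)   # Eq (7)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
```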

Figure 3: Pre-Filtering Control on Rhythm Density and Voice Number

4.4 Generated Examples

Here we show two long-term accompaniment arrangement examples by the AccoMontage system. The first one is illustrated in Figure 2, in which we show a whole-piece (32-bar) piano arrangement (the bottom two staves) based on the lead sheet (the top stave). The generated accompaniment matches the melody, has a natural flow in its texture, and follows the structure of the melody.

The second example shows that our controls over rhythm density and voice number are effective. To better illustrate this, we switch to a piano-roll representation in Figure 3, where 9 arranged accompaniments for the same lead sheet are shown in a 3-by-3 grid. The rhythm density control increases from left to right, while the voice number control increases from top to bottom. Both controls have a significant influence on the generated results.

4.5 Evaluation

4.5.1 Baseline Models

The AccoMontage system is a generalized template-based model that leverages both rule-based optimization and deep learning to complement each other. To evaluate, we introduce a hard template-based and a pure learning-based baseline to compare with our model. Specifically, the baseline model architectures are as follows:

Hard Template-Based (HTB): The hard template-based model also retrieves references from existing accompaniment, but directly applies them without any style transfer. It uses the same phrase selection architecture as our model while skipping the style transfer stage.

Pure Learning-Based (PLB): We adopt the accompaniment arrangement model in [wang2020learning], a seq2seq framework combining Transformer [vaswani2017attention] and chord-texture disentanglement. We consider [wang2020learning] the current state-of-the-art algorithm for controllable accompaniment generation due to its tailored design of harmony and texture representations, sophisticated neural structure, and convincing demos. The input to the model is a lead sheet and its first four-bar accompaniment. The model composes the rest by predicting every four bars based on the current lead sheet and previous four-bar accompaniment.

4.5.2 Subjective Evaluation

We conduct a survey to evaluate the musical quality of the arrangements produced by all models. In our survey, each subject listens to 1 to 3 songs randomly selected from a pool of 14. All 14 songs are randomly selected from the Nottingham Dataset; 12 of them have 32 bars, and the other two have 24 and 16 bars, respectively. Each song has three accompaniment versions, generated by our model and the two baselines. The subjects are asked to rate all three accompaniment versions of a song on three metrics: coherence, structure, and musicality. Ratings are given on a 5-point scale from 1 (very poor) to 5 (excellent).

  • Coherence: If the accompaniment matches the lead melody in harmony and texture;

  • Structure: If the accompaniment flows dynamically with the structure of the melody;

  • Musicality: Overall musicality of accompaniment.

Figure 4: Subjective Evaluation Results.

A total of 72 subjects (37 females and 35 males) participated in our survey, and we obtained 67 effective ratings for each metric. As shown in Figure 4, the heights of the bars represent the means of the ratings, and the error bars represent the MSEs computed via within-subject ANOVA [scheffe1999analysis]. Our model performs significantly better than both baselines in coherence and structure, and marginally better in musicality.

4.5.3 Objective Evaluation

In the phrase selection stage, we leverage a self-supervised contrastive loss (Eq (7)) to enforce smooth textural transitions among reference phrases. We expect a lower loss for true adjacent phrase pairs than for other pairings. Meanwhile, true consecutive pairs should share a similar texture pattern, with smaller differences in general textural properties.

We investigate the contrastive loss (CL) and the differences in rhythm density (RD) and voice number (VN) among three types of phrase pairs from the validation set: Random, Same Song, and Adjacent. Sitting between totally random pairing and strict adjacency, Same Song refers to randomly selecting two phrases (not necessarily adjacent) from one song. Results are shown in Figure 5.

Figure 5: Evaluation of Transition Model. The contrastive loss (CL) and differences of RD and VN are calculated for three types of phrase pairs. A consistent decreasing trend illustrates reliable discernment of smooth transition.
Metric   Phrase Acc.   Song Acc.   Rank@50
Value    0.2425        0.3769      5.8003
Table 2: Ranking Accuracy and Mean Rank

For the contrastive loss and each property, we see a consistent decreasing trend from Random to Same Song and to Adjacent. Specifically, the upper quartile of Adjacent is remarkably lower than the lower quartile of Random for CL, which indicates a reliable textural discernment that ensures smooth phrase transitions. This is further supported by the ranking accuracy and mean rank metrics [bretan2016unit], where we calculate the selection rank of the true adjacent phrase among N randomly selected phrases (Rank@N). Following [bretan2016unit], we adopt Rank@50; the results are shown in Table 2. Phrase Acc. and Song Acc. refer to the accuracy with which the top-ranked phrase is the Adjacent one or belongs to the Same Song, respectively. The high rank of adjacent pairs illustrates our model's reliability in finding smooth transitions.
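
For reference, the Rank@N and Phrase Acc. metrics can be computed as in the sketch below: the true successor is scored against N - 1 random candidates with the learned transition score, and its rank is recorded (the function names and scoring callback are ours):

```python
import numpy as np

def rank_at_n(transition_score, query, true_next, random_candidates):
    """Rank of the true adjacent phrase among itself plus N-1 random candidates (1 = best)."""
    scores = [transition_score(query, true_next)]
    scores += [transition_score(query, cand) for cand in random_candidates]
    order = np.argsort(scores)[::-1]               # higher transition score = better
    return int(np.where(order == 0)[0][0]) + 1     # position of the true pair (index 0)

def evaluate(transition_score, triples):
    """triples: iterable of (query, true_next, random_candidates). Returns (mean rank, Phrase Acc.)."""
    ranks = [rank_at_n(transition_score, q, t, cands) for q, t, cands in triples]
    return float(np.mean(ranks)), float(np.mean([r == 1 for r in ranks]))
```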

5 Conclusion

In conclusion, we contribute a generalized template-based algorithm for the accompaniment arrangement problem. The main novelty lies in the methodology that seamlessly combines deep generation and search-based generation. Specifically, searching is used to optimize the high-level structure, while neural style transfer takes charge of local coherence and melody-accompaniment fitness. Such a top-down hybrid strategy is inspired by how human musicians arrange accompaniments in practice. We aim to bring a new perspective not only to music generation, but to long-term sequence generation in general. Experiments show that our AccoMontage system significantly outperforms pure learning-based and template-based methods, rendering well-structured and fine-grained accompaniment for full-length songs.

6 Acknowledgement

The authors wish to thank Yixiao Zhang for his contribution to figure framing and proofreading. We thank Liwei Lin and Junyan Jiang for providing feedback on initial drafts of this paper and additional editing.

References