PopMAG: Pop Music Accompaniment Generation

08/18/2020
by   Yi Ren, et al.

In pop music, accompaniments are usually played by multiple instruments (tracks) such as drum, bass, strings, and guitar, and can make a song more expressive and infectious when arranged together with its melody. Previous works usually generate the multiple tracks separately, so the music notes from different tracks do not explicitly depend on each other, which hurts harmony modeling. To improve harmony, in this paper we propose a novel MUlti-track MIDI representation (MuMIDI), which enables simultaneous multi-track generation in a single sequence and explicitly models the dependency of notes across tracks. While this greatly improves harmony, it also enlarges the sequence length and brings a new challenge of long-term music modeling. We further introduce two techniques to address this challenge: 1) we model the multiple attributes (e.g., pitch, duration, velocity) of a musical note in one step instead of in multiple steps, which shortens a MuMIDI sequence; 2) we introduce an extra long-range context as memory to capture long-term dependencies in music. We call our system for pop music accompaniment generation PopMAG. We evaluate PopMAG on multiple datasets (LMD, FreeMidi, and CPMD, a private dataset of Chinese pop songs) with both subjective and objective metrics. The results demonstrate the effectiveness of PopMAG for multi-track harmony modeling and long-term context modeling. Specifically, PopMAG wins 42%/38%/40% of votes when compared with ground-truth musical pieces on the LMD, FreeMidi, and CPMD datasets respectively, and largely outperforms other state-of-the-art music accompaniment generation models and multi-track MIDI representations on both subjective and objective metrics.
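The core idea of MuMIDI can be illustrated with a minimal sketch (not the authors' implementation; the `NoteToken` type and `to_mumidi_like_sequence` helper below are hypothetical names): notes from all tracks are merged into one time-ordered sequence with explicit track identity, and each note carries its pitch, duration, and velocity in a single step rather than as three separate tokens.

```python
# Illustrative sketch of a MuMIDI-style single-sequence encoding.
# Assumption: each track is a list of (start, pitch, duration, velocity)
# tuples; real MuMIDI additionally uses bar/position and track tokens.
from dataclasses import dataclass


@dataclass(frozen=True)
class NoteToken:
    track: str      # e.g. "bass", "piano" -- track identity is explicit
    start: int      # onset position in ticks
    pitch: int      # MIDI pitch, 0-127
    duration: int   # note length in ticks
    velocity: int   # MIDI velocity, 0-127


def to_mumidi_like_sequence(tracks):
    """Merge per-track note lists into ONE time-ordered sequence, so that
    notes from different tracks can explicitly condition on each other.
    Each note occupies a single step carrying all of its attributes,
    which keeps the sequence short."""
    seq = [
        NoteToken(name, start, pitch, dur, vel)
        for name, notes in tracks.items()
        for (start, pitch, dur, vel) in notes
    ]
    # Sort by onset time, then track name, for a deterministic interleaving.
    seq.sort(key=lambda t: (t.start, t.track))
    return seq


tracks = {
    "bass":  [(0, 36, 480, 80), (480, 38, 480, 80)],
    "piano": [(0, 60, 240, 90), (240, 64, 240, 90)],
}
seq = to_mumidi_like_sequence(tracks)
print([(t.track, t.start, t.pitch) for t in seq])
```

Compare this with a one-token-per-attribute encoding, where the same four notes would need roughly three times as many steps; collapsing the attributes into one step is what keeps the enlarged multi-track sequence manageable.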

Related research

08/13/2020  MMM: Exploring Conditional Multi-Track Music Generation with the Transformer
We propose the Multi-Track Music Machine (MMM), a generative system base...

07/14/2022  Multitrack Music Transformer: Learning Long-Term Dependencies in Music with Diverse Instruments
Existing approaches for generating multitrack music with transformer mod...

09/13/2022  SongDriver: Real-time Music Accompaniment Generation without Logical Latency nor Exposure Bias
Real-time music accompaniment generation has a wide range of application...

04/21/2022  SinTra: Learning an inspiration model from a single multi-track music segment
In this paper, we propose SinTra, an auto-regressive sequential generati...

09/13/2017  On the Complex Network Structure of Musical Pieces: Analysis of Some Use Cases from Different Music Genres
This paper focuses on the modeling of musical melodies as networks. Note...

01/11/2023  WuYun: Exploring hierarchical skeleton-guided melody generation using knowledge-enhanced deep learning
Although deep learning has revolutionized music generation, existing met...

07/11/2021  PocketVAE: A Two-step Model for Groove Generation and Control
Creating a good drum track to imitate a skilled performer in digital aud...
