Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls

07/19/2023
by Lejun Min, et al.

We propose Polyffusion, a diffusion model that generates polyphonic music scores by treating music as image-like piano-roll representations. The model supports controllable music generation under two paradigms: internal control and external control. Internal control lets users pre-define a part of the music and have the model infill the rest, similar to masked music generation (or music inpainting). External control conditions the model on external yet related information, such as chord, texture, or other features, via the cross-attention mechanism. We show that, by combining internal and external controls, Polyffusion unifies a wide range of music creation tasks, including melody generation given accompaniment, accompaniment generation given melody, arbitrary music segment inpainting, and music arrangement given chords or textures. Experimental results show that our model significantly outperforms existing Transformer-based and sampling-based baselines, and that using pre-trained disentangled representations as external conditions yields more effective control.
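The internal-control paradigm described above resembles standard diffusion inpainting: at each reverse step, the user-specified region of the piano roll is re-noised to the current timestep and merged with the model's prediction for the masked-out region. The sketch below illustrates this idea in NumPy under stated assumptions; the names `inpaint_step` and `denoise_fn`, and the exact merging rule, are illustrative and not taken from the paper.

```python
import numpy as np

def inpaint_step(x_t, known, mask, alpha_bar_t, denoise_fn, rng):
    """One reverse-diffusion step with internal control (inpainting).

    mask == 1 marks user-specified (known) piano-roll cells; the model
    only generates the remaining (mask == 0) region. `denoise_fn` stands
    in for the trained denoiser; `alpha_bar_t` is the cumulative noise
    schedule value at the current timestep (assumed DDPM-style process).
    """
    # Noise the known region to the current timestep's noise level,
    # so its statistics match those of x_t under the forward process.
    eps = rng.standard_normal(x_t.shape)
    known_t = np.sqrt(alpha_bar_t) * known + np.sqrt(1.0 - alpha_bar_t) * eps
    # Denoise the full piano roll with the (stand-in) model.
    pred = denoise_fn(x_t)
    # Merge: keep the user-defined content, fill the rest with the prediction.
    return mask * known_t + (1.0 - mask) * pred

# Toy usage: a 4x4 "piano roll" where the left half is user-defined.
rng = np.random.default_rng(0)
known = np.ones((4, 4))          # user-specified notes
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                # left half is fixed by the user
x_t = rng.standard_normal((4, 4))
out = inpaint_step(x_t, known, mask, 1.0, lambda x: np.zeros_like(x), rng)
```

In a full sampler this merge is applied at every timestep, so the generated region stays consistent with the fixed region throughout denoising.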

