Symbolic Music Generation with Diffusion Models

by Gautam Mittal et al.

Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder. Our non-autoregressive method learns to generate sequences of latent embeddings through the reverse diffusion process, offering parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
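The core idea above, treating a sequence of continuous VAE latent embeddings as the data for a standard diffusion model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the schedule constants, shapes, and names (`T`, `betas`, `q_sample`) are assumptions, and the toy latents stand in for the output of a pre-trained VAE encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                               # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)        # cumulative product, \bar{alpha}_t

def q_sample(z0, t, eps):
    """Forward process: z_t = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps."""
    a = np.sqrt(alpha_bars[t])
    s = np.sqrt(1.0 - alpha_bars[t])
    return a * z0 + s * eps

# Toy stand-in for pre-trained VAE latents of a 16-step symbolic sequence,
# one 8-dimensional embedding per step.
z0 = rng.normal(size=(16, 8))

# One training example: sample a timestep t, noise the latents, and a
# denoising network would be trained to predict eps from (z_t, t).
t = int(rng.integers(0, T))
eps = rng.normal(size=z0.shape)
zt = q_sample(z0, t, eps)

# By the final step almost all signal is destroyed, so sampling can start
# from pure Gaussian noise and run the learned reverse process.
print(zt.shape, float(alpha_bars[-1]))
```

Because the diffusion model operates on entire `(seq_len, latent_dim)` arrays at once, every position in the sequence is refined in parallel at each of the `T` reverse steps, which is what makes generation non-autoregressive.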




Related papers:

- Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation
- Latent Normalizing Flows for Discrete Sequences
- WaveGrad: Estimating Gradients for Waveform Generation
- A Classifying Variational Autoencoder with Application to Polyphonic Music Generation
- Structured Denoising Diffusion Models in Discrete State-Spaces
- Autoregressive Diffusion Models
- Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models