Jukebox: A Generative Model for Music

04/30/2020
by Prafulla Dhariwal et al.

We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio by using a multi-scale VQ-VAE to compress it to discrete codes, which we then model with autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non-cherry-picked samples at https://jukebox.openai.com, along with model weights and code at https://github.com/openai/jukebox.
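To make the two-stage design concrete, below is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes: a VQ-VAE level that compresses raw audio into discrete codes via strided convolutions and a vector-quantization bottleneck, plus a small causal Transformer prior over those codes conditioned on artist and genre tokens. All class names, layer sizes, hop lengths, and codebook sizes are illustrative assumptions, not the actual Jukebox architecture (the paper stacks several VQ-VAE levels at different hop lengths and uses far larger Transformers); see https://github.com/openai/jukebox for the real implementation.

```python
# Hypothetical sketch of the two-stage pipeline: compress raw audio to discrete
# codes with a VQ-VAE, then model those codes with a conditioned Transformer.
import torch
import torch.nn as nn


class VQBottleneck(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through gradient."""

    def __init__(self, codebook_size=2048, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z):                                   # z: (batch, time, dim)
        # Squared distance from every latent vector to every codebook entry.
        d = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        codes = d.argmin(dim=-1)                            # (batch, time) discrete tokens
        z_q = self.codebook(codes)
        z_q = z + (z_q - z).detach()                        # straight-through estimator
        return codes, z_q


class AudioVQVAELevel(nn.Module):
    """One level of a multi-scale VQ-VAE: strided-conv encoder -> VQ -> transposed-conv decoder.
    Jukebox stacks several such levels at different hop lengths; this shows a single one."""

    def __init__(self, hop=64, dim=64, codebook_size=2048):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=2 * hop, stride=hop, padding=hop // 2),
            nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
        )
        self.bottleneck = VQBottleneck(codebook_size, dim)
        self.decoder = nn.ConvTranspose1d(dim, 1, kernel_size=2 * hop,
                                          stride=hop, padding=hop // 2)

    def encode(self, audio):                                # audio: (batch, 1, samples)
        z = self.encoder(audio).transpose(1, 2)
        codes, _ = self.bottleneck(z)
        return codes                                        # ~hop x shorter than the waveform

    def decode(self, codes):
        z_q = self.bottleneck.codebook(codes).transpose(1, 2)
        return self.decoder(z_q)


class CodePrior(nn.Module):
    """Causal Transformer over VQ codes, prepended with artist/genre conditioning tokens."""

    def __init__(self, codebook_size=2048, n_artists=100, n_genres=30, dim=256):
        super().__init__()
        self.code_emb = nn.Embedding(codebook_size, dim)
        self.cond_emb = nn.Embedding(n_artists + n_genres, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, codes, cond):                         # codes: (B, T), cond: (B, 2)
        x = torch.cat([self.cond_emb(cond), self.code_emb(codes)], dim=1)
        T = x.size(1)                                       # causal (upper-triangular) mask
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.transformer(x, mask=mask)
        return self.head(h[:, cond.size(1):])               # next-code logits per position


if __name__ == "__main__":
    audio = torch.randn(2, 1, 8192)                         # two dummy audio clips
    vqvae, prior = AudioVQVAELevel(), CodePrior()
    codes = vqvae.encode(audio)                             # compress raw audio to discrete codes
    cond = torch.tensor([[3, 100 + 7], [5, 100 + 2]])       # (artist id, offset genre id)
    logits = prior(codes, cond)                             # conditioned autoregressive prior
    recon = vqvae.decode(codes)                             # decode codes back to a waveform
    print(codes.shape, logits.shape, recon.shape)           # (2, 128) (2, 128, 2048) (2, 1, 8192)
```

The shapes in the final print illustrate why the compression stage matters: the Transformer models a code sequence roughly hop-length times shorter than the raw waveform, which is what makes minute-scale coherence tractable.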

Related research

- It's Raw! Audio Generation with State-Space Models (02/20/2022): Developing architectures suitable for modeling raw audio is a challengin...
- WaveNet: A Generative Model for Raw Audio (09/12/2016): This paper introduces WaveNet, a deep neural network for generating raw ...
- Hierarchical Timbre-Painting and Articulation Generation (08/30/2020): We present a fast and high-fidelity method for music generation, based o...
- Msanii: High Fidelity Music Synthesis on a Shoestring Budget (01/16/2023): In this paper, we present Msanii, a novel diffusion-based model for synt...
- Symbolic Music Loop Generation with Neural Discrete Representations (08/11/2022): Since most of music has repetitive structures from motifs to phrases, re...
- VampNet: Music Generation via Masked Acoustic Token Modeling (07/10/2023): We introduce VampNet, a masked acoustic token modeling approach to music...
- BigVGAN: A Universal Neural Vocoder with Large-Scale Training (06/09/2022): Despite recent progress in generative adversarial network(GAN)-based voc...
