
Generating Music with a Self-Correcting Non-Chronological Autoregressive Model

by Wayne Chi, et al.

We describe a novel approach for generating music using a self-correcting, non-chronological, autoregressive model. We represent music as a sequence of edit events, each of which denotes either the addition or removal of a note—including a note previously generated by the model. During inference, we generate one edit event at a time using direct ancestral sampling. Our approach allows the model to fix previous mistakes, such as incorrectly sampled notes, and prevents the accumulation of errors to which autoregressive models are prone. Another benefit is finer, note-by-note control during human–AI collaborative composition. We show through quantitative metrics and a human survey evaluation that our approach generates better results than orderless NADE and Gibbs sampling approaches.
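To make the edit-event representation concrete, here is a minimal sketch of how a sequence of add/remove events could be replayed into a score. The `Note` fields, event tuple format, and function name are illustrative assumptions, not the paper's actual data structures; the key idea it demonstrates is that a "remove" event can retract a note the model itself emitted earlier, which is what enables self-correction.

```python
from dataclasses import dataclass

# Hypothetical note representation: MIDI pitch, onset and duration in beats.
@dataclass(frozen=True)
class Note:
    pitch: int
    onset: float
    duration: float

def apply_edit_events(events):
    """Replay a sequence of (op, note) edit events into a set of notes.

    Each event either adds a note or removes one -- including a note
    generated earlier in the same sequence, which is how the model can
    fix its own previously sampled mistakes.
    """
    score = set()
    for op, note in events:
        if op == "add":
            score.add(note)
        elif op == "remove":
            score.discard(note)  # self-correction: retract an earlier note
    return score

# Example: the model adds two notes, then retracts one it deems a mistake.
events = [
    ("add", Note(60, 0.0, 1.0)),     # C4
    ("add", Note(61, 0.0, 1.0)),     # C#4 -- suppose this clashes
    ("remove", Note(61, 0.0, 1.0)),  # the model fixes its earlier sample
    ("add", Note(64, 0.0, 1.0)),     # E4
]
final_score = apply_edit_events(events)
```

In the paper's setting each event would be sampled one at a time by the autoregressive model rather than supplied by hand; replaying the full event sequence yields the final score.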

