Music Transformer

09/12/2018
by Cheng-Zhi Anna Huang, et al.

Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to the reuse of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions, since the intermediate memory required grows quadratically with sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long compositions (thousands of steps) with compelling structure, generate continuations that coherently elaborate on a given motif, and, in a seq2seq setup, generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.
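The memory saving comes from what the paper calls a "skewing" procedure: rather than materializing a per-pair tensor of relative-position embeddings of shape (L, L, d), as in Shaw et al. (2018), the queries are multiplied against a single (L, d) table of relative embeddings, and the resulting (L, L) matrix is padded, reshaped, and sliced into place. The following is a minimal NumPy sketch of that idea; the function names, the single-head causal setup, and the zero padding value are illustrative assumptions, not the authors' reference code.

```python
import numpy as np

def skew(qe):
    """Skewing procedure: turn Q @ E.T into relative logits S_rel
    without building an O(L^2 * d) per-pair embedding tensor.

    qe: (L, L) with qe[i, r] = q_i . e_r, where row r of E embeds
        relative distance r - (L - 1) (column L-1 means distance 0).
    Returns S_rel with S_rel[i, j] = q_i . e_{j - i + L - 1} for j <= i;
    entries above the diagonal are meaningless and must be masked.
    """
    L = qe.shape[0]
    padded = np.pad(qe, ((0, 0), (1, 0)))  # prepend a dummy column -> (L, L+1)
    return padded.reshape(L + 1, L)[1:]    # reshape, drop first row -> (L, L)

def relative_self_attention(q, k, v, e):
    """Single-head causal self-attention with relative positions.

    q, k, v: (L, d) queries, keys, values.
    e:       (L, d) learned embeddings, one per relative distance in
             [-(L-1), 0] -- O(L*d) memory instead of O(L^2*d).
    """
    L, d = q.shape
    logits = (q @ k.T + skew(q @ e.T)) / np.sqrt(d)
    future = np.triu(np.ones((L, L), dtype=bool), k=1)  # positions j > i
    logits = np.where(future, -np.inf, logits)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# Smoke test: one context vector per position.
rng = np.random.default_rng(0)
q, k, v, e = (rng.standard_normal((6, 4)) for _ in range(4))
assert relative_self_attention(q, k, v, e).shape == (6, 4)
```

In this sketch the only relative-position intermediates are the (L, d) embedding table and the (L, L) logit matrix; the per-pair formulation instead needs an O(L^2 * d) intermediate, which is the term the skewing trick eliminates (the L-by-L attention matrix itself is required by any full attention).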

Related research

09/12/2018
An Improved Relative Self-Attention Mechanism for Transformer with Application to Music Generation
Music relies heavily on self-reference to build structure and meaning. W...

02/01/2020
Pop Music Transformer: Generating Music with Rhythm and Harmony
The task of automatic music composition entails generative modeling of musi...

10/28/2020
Higher Order Linear Transformer
Following up on the linear transformer part of the article from Katharop...

01/27/2023
On the Connection Between MPNN and Graph Transformer
Graph Transformer (GT) recently has emerged as a new paradigm of graph l...

05/31/2023
Monotonic Location Attention for Length Generalization
We explore different ways to utilize position-based cross-attention in s...

07/05/2019
A Bi-directional Transformer for Musical Chord Recognition
Chord recognition is an important task since chords are highly abstract ...

11/18/2018
Harmonic Recomposition using Conditional Autoregressive Modeling
We demonstrate a conditional autoregressive pipeline for efficient music...
