Instrument Separation of Symbolic Music by Explicitly Guided Diffusion Model

09/05/2022
by Sangjun Han, et al.

Similar to colorization in computer vision, instrument separation is the task of assigning instrument labels (e.g., piano, guitar) to notes in unlabeled mixtures that contain only performance information. To address this problem, we adopt diffusion models and explicitly guide them to preserve consistency between the mixtures and the generated music. Quantitative results show that the proposed model can generate high-fidelity, creative samples of multitrack symbolic music.
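The abstract does not spell out the guidance mechanism, but as a rough illustration of what "explicitly guiding a diffusion model to preserve mixture consistency" can look like, the sketch below applies an inpainting-style projection inside a standard DDPM sampling loop: after each denoised estimate, the per-track piano rolls are projected so that their sum over tracks matches the observed mixture. The `denoiser` network, the tensor shapes, the noise schedule `betas`, and the projection step are all illustrative assumptions, not the authors' exact method.

```python
import torch

# Assumed representation: a piano-roll tensor x of shape
# (batch, tracks, time, pitch), where summing over the track axis
# recovers the unlabeled mixture of shape (batch, time, pitch).

@torch.no_grad()
def guided_sample(denoiser, mixture, betas, num_tracks):
    """Reverse DDPM sampling that re-imposes the known mixture at every step."""
    T = len(betas)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    b, time, pitch = mixture.shape
    x = torch.randn(b, num_tracks, time, pitch)  # start from pure noise

    for t in reversed(range(T)):
        t_batch = torch.full((b,), t, dtype=torch.long)
        eps = denoiser(x, t_batch)               # predicted noise, same shape as x
        ab = alpha_bars[t]
        x0 = (x - torch.sqrt(1 - ab) * eps) / torch.sqrt(ab)  # clean-sample estimate

        # Explicit consistency guidance (an assumption for illustration):
        # orthogonal projection of x0 onto the set of multitrack rolls whose
        # track-wise sum equals the observed mixture.
        residual = mixture.unsqueeze(1) - x0.sum(dim=1, keepdim=True)
        x0 = x0 + residual / num_tracks

        if t > 0:
            # Standard DDPM posterior q(x_{t-1} | x_t, x0).
            ab_prev = alpha_bars[t - 1]
            mean = (torch.sqrt(ab_prev) * betas[t] * x0
                    + torch.sqrt(alphas[t]) * (1 - ab_prev) * x) / (1 - ab)
            var = betas[t] * (1 - ab_prev) / (1 - ab)
            x = mean + torch.sqrt(var) * torch.randn_like(x)
        else:
            x = x0
    return x
```

With a trained denoiser, this could be invoked as, e.g., `guided_sample(denoiser, mixture, torch.linspace(1e-4, 0.02, 1000), num_tracks=4)`; the projection at every step is what keeps the sampled tracks consistent with the input mixture while the diffusion prior decides how the notes are distributed across instruments.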

Related research

- 09/21/2023: Performance Conditioning for Diffusion-Based Multi-Instrument Music Synthesis. Generating multi-instrument music from symbolic music representations is...
- 06/22/2022: Jointist: Joint Learning for Multi-instrument Transcription and Its Applications. In this paper, we introduce Jointist, an instrument-aware multi-instrume...
- 09/18/2019: Cutting Music Source Separation Some Slakh: A Dataset to Study the Impact of Training Data Quality and Quantity. Music source separation performance has greatly improved in recent years...
- 03/24/2022: Data-Driven Visual Reflection on Music Instrument Practice. We propose a data-driven approach to music instrument practice that allo...
- 11/05/2020: From Note-Level to Chord-Level Neural Network Models for Voice Separation in Symbolic Music. Music is often experienced as a progression of concurrent streams of not...
- 08/28/2023: InstructME: An Instruction Guided Music Edit And Remix Framework with Latent Diffusion Models. Music editing primarily entails the modification of instrument tracks or...
- 11/18/2020: Vertical-Horizontal Structured Attention for Generating Music with Chords. In this paper, we propose a lightweight music-generating model based on ...
