Bach2Bach: Generating Music Using A Deep Reinforcement Learning Approach

12/03/2018
by Nikhil Kotecha, et al.

A model of music needs the ability to recall past details and maintain a clear, coherent understanding of musical structure. This paper details a deep reinforcement learning architecture that predicts and generates polyphonic music aligned with musical rules. The underlying probabilistic model is a Bi-axial LSTM trained with a pseudo-kernel reminiscent of a convolutional kernel. To encourage exploration and impose greater global coherence on the generated music, a deep reinforcement learning approach, Deep Q-learning (DQN), is adopted. Analyzed both quantitatively and qualitatively, the approach performs well at composing polyphonic music.
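As a rough illustration of the bi-axial idea only (not the paper's exact architecture), the PyTorch sketch below runs one LSTM along the time axis for each note and a second LSTM along the note axis for each time step, then emits per-pitch play/articulation probabilities. The hidden sizes, the 80-dimensional per-note feature encoding, and the class name BiAxialLSTM are illustrative assumptions; the DQN stage that rewards adherence to musical rules is omitted.

import torch
import torch.nn as nn

class BiAxialLSTM(nn.Module):
    """Sketch of a bi-axial LSTM: a recurrence along the time axis
    (shared across notes) followed by one along the note/pitch axis
    (shared across time steps). All dimensions are illustrative."""
    def __init__(self, feat_dim=80, time_hidden=300, note_hidden=100):
        super().__init__()
        self.time_lstm = nn.LSTM(feat_dim, time_hidden, batch_first=True)
        self.note_lstm = nn.LSTM(time_hidden, note_hidden, batch_first=True)
        self.out = nn.Linear(note_hidden, 2)  # play / articulate logits

    def forward(self, x):
        # x: (batch, time, notes, feat_dim) per-note input features
        b, t, n, f = x.shape
        # run the time-axis LSTM independently for every note
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, f)
        ht, _ = self.time_lstm(xt)                      # (b*n, t, time_hidden)
        # rearrange so the note axis becomes the sequence dimension
        hn = ht.reshape(b, n, t, -1).permute(0, 2, 1, 3).reshape(b * t, n, -1)
        hn, _ = self.note_lstm(hn)                      # (b*t, n, note_hidden)
        logits = self.out(hn).reshape(b, t, n, 2)
        return torch.sigmoid(logits)                    # P(play), P(articulate)

# toy usage: 2 clips, 16 time steps, 78 pitches, 80 input features per note
model = BiAxialLSTM()
probs = model(torch.randn(2, 16, 78, 80))
print(probs.shape)  # torch.Size([2, 16, 78, 2])

In the full approach described in the abstract, a model along these lines would supply the note-level predictions, while the DQN component treats note choices as actions and uses rewards tied to musical rules to encourage exploration and global coherence.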


