Generating Music using an LSTM Network

04/18/2018
by Nikhil Kotecha, et al.

A model of music needs the ability to recall past details and a clear, coherent understanding of musical structure. This paper details a neural network architecture that predicts and generates polyphonic music consistent with musical rules. The probabilistic model presented is a Bi-axial LSTM trained with a kernel reminiscent of a convolutional kernel. Quantitative and qualitative analysis show that this approach performs well at composing polyphonic music. A link to the code is provided.
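The bi-axial architecture named in the abstract stacks two recurrences: a time-axis LSTM run per note over timesteps (with weights shared across notes), followed by a note-axis LSTM run along the pitch axis at each timestep to model harmony. A minimal NumPy sketch of that structure is below; the function names, weight matrices (`Wt`, `Wn`, `Wo`), and sizes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM cell step. W packs the four gate weights,
    shape (4*hidden, input_dim + hidden)."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def biaxial_forward(piano_roll, Wt, Wn, Wo, hidden=16):
    """Illustrative bi-axial pass over a (T, N) binary piano roll.
    Returns per-cell note-on probabilities of shape (T, N)."""
    T, N = piano_roll.shape
    # Time-axis LSTM: recur over timesteps independently for each note,
    # sharing the same weights Wt across all notes.
    time_feats = np.zeros((T, N, hidden))
    for n in range(N):
        h, c = np.zeros(hidden), np.zeros(hidden)
        for t in range(T):
            h, c = lstm_step(piano_roll[t, n:n+1], h, c, Wt)
            time_feats[t, n] = h
    # Note-axis LSTM: at each timestep, recur along the pitch axis so each
    # note's prediction can condition on the notes below it (harmony).
    probs = np.zeros((T, N))
    for t in range(T):
        h, c = np.zeros(hidden), np.zeros(hidden)
        for n in range(N):
            h, c = lstm_step(time_feats[t, n], h, c, Wn)
            probs[t, n] = sigmoid(Wo @ h)
    return probs
```

Because the time-axis weights are shared across notes, the model is translation-invariant in pitch, which is the sense in which the kernel resembles a convolutional one.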


Related research

Bach2Bach: Generating Music Using A Deep Reinforcement Learning Approach (12/03/2018)
A model of music needs to have the ability to recall past details and ha...

LSTM Based Music Generation System (08/02/2019)
Traditionally, music was treated as an analogue signal and was generated...

Tokyo Kion-On: Query-Based Generative Sonification of Atmospheric Data (08/04/2022)
Amid growing environmental concerns, interactive displays of data consti...

Music Generation with Temporal Structure Augmentation (04/21/2020)
In this paper we introduce a novel feature augmentation approach for gen...

Differential Music: Automated Music Generation Using LSTM Networks with Representation Based on Melodic and Harmonic Intervals (08/23/2021)
This paper presents a generative AI model for automated music compositio...

ragamAI: A Network Based Recommender System to Arrange a Indian Classical Music Concert (12/08/2019)
South Indian classical music (Carnatic music) is best consumed through l...

Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm (02/16/2021)
Automatic music generation has become an epicenter research topic for ma...
