Classical Music Generation in Distinct Dastgahs with AlimNet ACGAN

01/15/2019
by Saber Malekzadeh, et al.

In this paper, AlimNet (named in honor of the great musician Alim Qasimov), an auxiliary classifier generative adversarial network (ACGAN) for generating music categorically, is presented. The proposed network is a conditional ACGAN that conditions the generation process on music class labels and has a hybrid architecture composed of different kinds of neural network layers. The employed music dataset is MICM, which contains 1137 music samples (506 violin and 631 straw) labeled with seven classical music Dastgahs. To extract both temporal and spectral features, the Short-Time Fourier Transform (STFT) is applied to convert the input audio signals from the time domain to the time-frequency domain. GANs consist of a generator that produces new samples and a discriminator that pushes the generator toward producing better samples. The time-frequency samples are used to train the discriminator on fourteen classes (the seven Dastgahs for each of the two instruments). The outputs of the conditional ACGAN are likewise artificial music samples in those scales in the time-frequency domain; the generator output is then converted back to the time domain by the Inverse STFT (ISTFT). Finally, ten randomly selected generated samples (five violin and five straw) were given to ten musicians to rate how exact the samples are, and the overall result was 76.5%.
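The abstract describes a pipeline of STFT feature extraction, a conditional ACGAN (generator plus discriminator with an auxiliary class output), and ISTFT reconstruction. The sketch below illustrates how such a pipeline could be wired up in Python with librosa and PyTorch; the 128x128 spectrogram patch size, latent dimension, fully connected layers, and magnitude/phase handling are illustrative assumptions, not the authors' actual AlimNet architecture (which the paper describes as a hybrid of different layer types).

```python
# Minimal sketch of the described pipeline, not the authors' code.
# Assumptions: hypothetical 128x128 magnitude patches, a 100-dim latent
# vector, and 14 classes (7 Dastgahs x 2 instruments).
import numpy as np
import librosa
import torch
import torch.nn as nn

N_FFT, HOP = 1024, 256
N_CLASSES, LATENT = 14, 100

def audio_to_tf(y):
    """Time domain -> time-frequency domain via STFT (magnitude and phase)."""
    S = librosa.stft(y, n_fft=N_FFT, hop_length=HOP)
    return np.abs(S), np.angle(S)          # magnitude is what the GAN sees

def tf_to_audio(mag, phase):
    """Time-frequency domain -> time domain via inverse STFT."""
    return librosa.istft(mag * np.exp(1j * phase), hop_length=HOP)

class Generator(nn.Module):
    """Conditional generator: noise + class label -> fake spectrogram patch."""
    def __init__(self, out_shape=(128, 128)):
        super().__init__()
        self.out_shape = out_shape
        self.embed = nn.Embedding(N_CLASSES, LATENT)
        self.net = nn.Sequential(
            nn.Linear(LATENT, 512), nn.ReLU(),
            nn.Linear(512, out_shape[0] * out_shape[1]), nn.Sigmoid(),
        )

    def forward(self, z, labels):
        h = z * self.embed(labels)          # condition the noise on the class
        return self.net(h).view(-1, *self.out_shape)

class Discriminator(nn.Module):
    """ACGAN discriminator: real/fake score plus a 14-way class prediction."""
    def __init__(self, in_shape=(128, 128)):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_shape[0] * in_shape[1], 512), nn.LeakyReLU(0.2),
        )
        self.adv = nn.Linear(512, 1)          # real vs. fake head
        self.cls = nn.Linear(512, N_CLASSES)  # Dastgah/instrument head

    def forward(self, x):
        h = self.body(x)
        return torch.sigmoid(self.adv(h)), self.cls(h)
```

In a full training loop, the discriminator's adversarial head would be trained with binary cross-entropy and its class head with categorical cross-entropy on both real and generated patches, which is the standard ACGAN objective; generated magnitudes would then be passed through `tf_to_audio` to obtain waveforms for listening tests.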

