Diagonal RNNs in Symbolic Music Modeling

04/18/2017
by Y. Cem Subakan, et al.

In this paper, we propose a new Recurrent Neural Network (RNN) architecture. The novelty is simple: we use diagonal recurrent matrices instead of full ones. This results in better test likelihood and faster convergence than regular full RNNs in most of our experiments. We show the benefits of using diagonal recurrent matrices with the widely used LSTM and GRU architectures, as well as with the vanilla RNN architecture, on four standard symbolic music datasets.
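To illustrate the core idea, here is a minimal sketch (not the authors' code) of a vanilla RNN step where the hidden-to-hidden transform is a diagonal matrix, i.e. a per-unit scalar applied elementwise instead of a full d×d matrix multiply; all names and dimensions below are illustrative assumptions:

```python
import numpy as np

def diagonal_rnn_step(x_t, h_prev, W_xh, w_hh, b):
    # Full RNN would compute h_prev @ W_hh with a d_h x d_h matrix;
    # a diagonal recurrent matrix reduces this to an elementwise
    # product with a length-d_h vector w_hh.
    return np.tanh(x_t @ W_xh + w_hh * h_prev + b)

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W_xh = rng.standard_normal((d_in, d_h)) * 0.1  # input-to-hidden (full)
w_hh = rng.standard_normal(d_h) * 0.1          # recurrent: d_h params vs d_h**2
b = np.zeros(d_h)

h = np.zeros(d_h)
for t in range(5):
    x_t = rng.standard_normal(d_in)
    h = diagonal_rnn_step(x_t, h, W_xh, w_hh, b)
print(h.shape)
```

The same substitution applies inside LSTM and GRU gates: each recurrent weight matrix is replaced by a vector, cutting the recurrent parameter count from quadratic to linear in the hidden size.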


