Improving Minimal Gated Unit for Sequential Data

05/21/2019
by   Kazuki Takamura, et al.

To obtain a model that processes sequential data, such as in machine translation and speech recognition, faster and more accurately, we propose adopting the Chrono Initializer as the initialization method for the Minimal Gated Unit (MGU). We evaluated the method on two benchmarks: the adding task and the copy task. The experimental results confirmed the effectiveness of the proposed method.
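The abstract does not give implementation details, but chrono initialization (Tallec and Ollivier) sets a recurrent gate bias to log(Uniform(1, T_max - 1)), where T_max is the longest dependency the network should capture. Since the MGU has a single (forget-style) gate, applying it to the MGU amounts to drawing that gate's bias this way. The sketch below is illustrative; the function name `chrono_bias` and its parameters are assumptions, not the authors' code.

```python
import numpy as np

def chrono_bias(hidden_size, t_max, rng=None):
    """Chrono initialization for a recurrent gate bias:
    b ~ log(Uniform(1, t_max - 1)).
    A large positive bias keeps the gate mostly open early in
    training, helping the unit remember over long time scales."""
    rng = np.random.default_rng() if rng is None else rng
    return np.log(rng.uniform(1.0, t_max - 1.0, size=hidden_size))

# Example: initialize the gate bias of a 128-unit MGU intended for
# sequences with dependencies up to ~200 steps long.
b_f = chrono_bias(128, t_max=200)
```

Under this scheme the bias values lie in [0, log(t_max - 1)], so different hidden units are biased toward different memory time scales.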
