Thick-Net: Parallel Network Structure for Sequential Modeling

11/19/2019
by Yu-Xuan Li, et al.

Recurrent neural networks have been widely used in sequence learning tasks. In previous studies, performance has typically been improved by making the network either wider or deeper; however, wider structures are more prone to overfitting, while deeper ones are difficult to optimize. In this paper, we propose a simple new model named Thick-Net, which expands the network along another dimension: thickness. Each hidden state computes multiple parallel candidate values using additional sets of parameters, and the maximum of these intermediate outputs is selected as the final output. Notably, Thick-Net efficiently avoids overfitting and is easier to optimize than vanilla structures, since the max-selection acts as a large implicit dropout. Our model is evaluated on four sequential tasks: the adding problem, permuted sequential MNIST, text classification, and language modeling. The results demonstrate that our model not only improves accuracy with faster convergence but also generalizes better.
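To make the max-selection idea concrete, the sketch below shows one possible reading of the abstract, assuming an Elman-style recurrent cell in which every hidden unit gets several parallel candidate values (one per set of parameters) and only the element-wise maximum is kept. The class name ThickRNNCell and the thickness parameter are illustrative placeholders, not the authors' code.

```python
# Minimal sketch of a "thick" recurrent cell, assuming element-wise max over
# K parallel candidate activations. Names here are hypothetical, not from the paper.
import torch
import torch.nn as nn


class ThickRNNCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, thickness: int = 4):
        super().__init__()
        self.thickness = thickness
        # One affine map per "slice" of thickness; together they produce
        # `thickness` parallel candidates for every hidden unit.
        self.input_maps = nn.ModuleList(
            [nn.Linear(input_size, hidden_size) for _ in range(thickness)]
        )
        self.hidden_maps = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size, bias=False) for _ in range(thickness)]
        )

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Parallel candidates, shape (thickness, batch, hidden_size).
        candidates = torch.stack(
            [torch.tanh(wi(x) + wh(h)) for wi, wh in zip(self.input_maps, self.hidden_maps)]
        )
        # Max-selection over the thickness dimension: only the largest candidate
        # per unit survives, so the others are effectively dropped at this step.
        new_h, _ = candidates.max(dim=0)
        return new_h


if __name__ == "__main__":
    cell = ThickRNNCell(input_size=8, hidden_size=16, thickness=4)
    x = torch.randn(2, 8)      # (batch, input_size)
    h = torch.zeros(2, 16)     # (batch, hidden_size)
    print(cell(x, h).shape)    # torch.Size([2, 16])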


Related research

05/03/2017  Going Wider: Recurrent Neural Network With Parallel Cells
Recurrent Neural Network (RNN) has been widely applied for sequence mode...

04/22/2019  Adversarial Dropout for Recurrent Neural Networks
Successful application processing sequential data, such as text and spee...

11/05/2016  Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential da...

02/18/2013  Maxout Networks
We consider the problem of designing models to leverage a recently intro...

05/14/2019  Deep Residual Output Layers for Neural Language Generation
Many tasks, including language generation, benefit from learning the str...

09/14/2023  Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
In recent studies, linear recurrent neural networks (LRNNs) have achieve...
