Are 2D-LSTM really dead for offline text recognition?

11/27/2018
by Bastien Moysset, et al.

There is a recent trend in handwritten text recognition with deep neural networks to replace 2D recurrent layers with 1D ones, and in some cases to remove the recurrent layers altogether, relying on simple feed-forward, convolution-only architectures. The most widely used type of recurrent layer is the Long Short-Term Memory (LSTM). The motivations to do so are many: there are few open-source implementations of 2D-LSTM, and even fewer with GPU support (currently cuDNN only implements 1D-LSTM); 2D recurrences reduce the amount of computation that can be parallelized, and thus possibly increase training/inference time; and recurrences create global dependencies with respect to the input, which may not always be desirable. Yet many recent competitions were won by systems whose networks use 2D-LSTM layers. Most previous work comparing 1D or purely feed-forward architectures to 2D recurrent models has done so on simple datasets, or did not fully optimize the "baseline" 2D model while the challenger model was duly optimized. In this work, we aim at a fair comparison between 2D and competing models, and we also evaluate them extensively on more complex datasets that are more representative of challenging "real-world" data than the "academic" datasets that are more restricted in their complexity. We aim to determine when and why the 1D and 2D recurrent models yield different results. We also compare the results with a language model to assess whether linguistic constraints level out the performance of the different networks. Our results show that for challenging datasets, 2D-LSTM networks still seem to provide the highest performance, and we propose a visualization strategy to explain it.
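
For readers less familiar with the distinction, the sketch below illustrates why a 2D-LSTM recurrence is harder to parallelize than a 1D one: every position of the feature map must wait for both its left and top neighbours before it can be computed. This is a minimal, single-direction illustration in PyTorch; the class and function names, the simplified gating, and the naive Python scan are assumptions made for clarity, not the authors' implementation or an optimized MDLSTM.

    # Minimal single-direction 2D-LSTM (MDLSTM) sketch, assuming PyTorch.
    import torch
    import torch.nn as nn

    class MDLSTMCell(nn.Module):
        def __init__(self, input_size, hidden_size):
            super().__init__()
            self.hidden_size = hidden_size
            # 5 gates: input, output, cell candidate, and one forget gate
            # per incoming direction (left and top).
            self.proj = nn.Linear(input_size + 2 * hidden_size, 5 * hidden_size)

        def forward(self, x, h_left, c_left, h_up, c_up):
            z = self.proj(torch.cat([x, h_left, h_up], dim=-1))
            i, o, g, f_l, f_u = z.chunk(5, dim=-1)
            i, o, f_l, f_u = map(torch.sigmoid, (i, o, f_l, f_u))
            g = torch.tanh(g)
            # The cell state mixes the two incoming recurrences.
            c = f_l * c_left + f_u * c_up + i * g
            h = o * torch.tanh(c)
            return h, c

    def mdlstm_scan(feature_map, cell):
        """Sequential scan over an (H, W, C) feature map; each position depends
        on its left and top neighbours, which is what limits parallelism
        compared with a 1D-LSTM that only scans along one axis."""
        H, W, _ = feature_map.shape
        hs = [[None] * W for _ in range(H)]
        cs = [[None] * W for _ in range(H)]
        zero = feature_map.new_zeros(cell.hidden_size)
        for i in range(H):
            for j in range(W):
                h_up, c_up = (hs[i - 1][j], cs[i - 1][j]) if i > 0 else (zero, zero)
                h_left, c_left = (hs[i][j - 1], cs[i][j - 1]) if j > 0 else (zero, zero)
                hs[i][j], cs[i][j] = cell(feature_map[i, j], h_left, c_left, h_up, c_up)
        return torch.stack([torch.stack(row) for row in hs])

    # Usage: one 2D-LSTM pass over a small hypothetical feature map.
    cell = MDLSTMCell(input_size=16, hidden_size=32)
    out = mdlstm_scan(torch.randn(8, 40, 16), cell)  # -> shape (8, 40, 32)

By contrast, a 1D-LSTM applied to the same feature map only scans along the width, so all rows (or all columns of a collapsed feature map) can be processed as a batch, which is why cuDNN-style 1D implementations parallelize well.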
