
Rethinking Text Line Recognition Models

04/15/2021
by Daniel Hernandez Diaz, et al.

In this paper, we study the problem of text line recognition. Unlike most approaches targeting specific domains such as scene text or handwritten documents, we investigate the general problem of developing a universal architecture that can extract text from any image, regardless of source or input modality. We consider two decoder families (Connectionist Temporal Classification and Transformer) and three encoder modules (Bidirectional LSTMs, Self-Attention, and GRCLs), and conduct extensive experiments to compare their accuracy and performance on widely used public datasets of scene and handwritten text. We find that a combination that has so far received little attention in the literature, namely a Self-Attention encoder coupled with the CTC decoder, when compounded with an external language model and trained on both public and internal data, outperforms all the others in accuracy and computational complexity. Unlike the more common Transformer-based models, this architecture can handle inputs of arbitrary length, a requirement for universal line recognition. Using an internal dataset collected from multiple sources, we also expose the limitations of current public datasets in evaluating the accuracy of line recognizers, as their relatively narrow image width and sequence length distributions do not allow one to observe the quality degradation of the Transformer approach when it is applied to the transcription of long lines.
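For illustration only, the sketch below shows one way the pairing highlighted in the abstract, a Self-Attention encoder feeding a CTC decoder, could be wired up. It is not the authors' implementation: the layer sizes, the per-frame feature dimension, the toy vocabulary, and the random training data are assumptions made for the example. The point it demonstrates is that the encoder operates on a frame sequence of arbitrary length and the CTC head classifies each frame independently, which is what lets lines of any width be transcribed.

```python
# Minimal sketch (assumptions, not the paper's code): Self-Attention encoder + CTC decoder.
import torch
import torch.nn as nn

class SelfAttentionCTCRecognizer(nn.Module):
    def __init__(self, num_classes: int, d_model: int = 256,
                 nhead: int = 4, num_layers: int = 4):
        super().__init__()
        # Project per-frame image features (assumed 32-dim, e.g. from a CNN
        # backbone collapsing the image height) into the model dimension.
        self.input_proj = nn.Linear(32, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # CTC "decoder": a per-frame classifier over the character set plus
        # the blank symbol (index 0 by convention).
        self.classifier = nn.Linear(d_model, num_classes + 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, feature_dim); seq_len may vary per batch,
        # so lines of arbitrary length can be processed.
        x = self.input_proj(frames)
        x = self.encoder(x)
        logits = self.classifier(x)
        # nn.CTCLoss expects (seq_len, batch, classes) log-probabilities.
        return logits.log_softmax(-1).transpose(0, 1)

# Toy training step on random data, just to show how the CTC loss is wired up.
model = SelfAttentionCTCRecognizer(num_classes=80)
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

frames = torch.randn(2, 120, 32)               # two lines, 120 frames each
targets = torch.randint(1, 81, (2, 20))        # character indices 1..80
input_lengths = torch.full((2,), 120, dtype=torch.long)
target_lengths = torch.full((2,), 20, dtype=torch.long)

log_probs = model(frames)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```

At inference time, a greedy or beam-search CTC decode collapses repeated symbols and removes blanks to produce the transcription; the paper additionally rescores hypotheses with an external language model.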

Related research

06/04/2018 · NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition
Scene text recognition has attracted a great deal of research for decades...

09/20/2022 · Relaxed Attention for Transformer Models
The powerful modeling capabilities of all-attention-based transformer ar...

08/20/2021 · Type Anywhere You Want: An Introduction to Invisible Mobile Keyboard
Contemporary soft keyboards possess limitations: the lack of physical fe...

03/24/2023 · MSdocTr-Lite: A Lite Transformer for Full Page Multi-script Handwriting Recognition
The Transformer has quickly become the dominant architecture for various...

05/06/2021 · Handwritten Mathematical Expression Recognition with Bidirectionally Trained Transformer
Encoder-decoder models have made great progress on handwritten mathemati...

10/01/2022 · A Comparison of Transformer, Convolutional, and Recurrent Neural Networks on Phoneme Recognition
Phoneme recognition is a very important part of speech recognition that ...

12/16/2022 · Reducing Sequence Length Learning Impacts on Transformer Models
Classification algorithms using Transformer architectures can be affecte...