Rethinking Text Line Recognition Models

04/15/2021
by Daniel Hernandez Diaz, et al.

In this paper, we study the problem of text line recognition. Unlike most approaches targeting specific domains such as scene text or handwritten documents, we investigate the general problem of developing a universal architecture that can extract text from any image, regardless of source or input modality. We consider two decoder families (Connectionist Temporal Classification and Transformer) and three encoder modules (Bidirectional LSTMs, Self-Attention, and GRCLs), and conduct extensive experiments to compare their accuracy and performance on widely used public datasets of scene and handwritten text. We find that a combination that has so far received little attention in the literature, namely a Self-Attention encoder coupled with the CTC decoder, when combined with an external language model and trained on both public and internal data, outperforms all the others in accuracy and computational complexity. Unlike the more common Transformer-based models, this architecture can handle inputs of arbitrary length, a requirement for universal line recognition. Using an internal dataset collected from multiple sources, we also expose the limitations of current public datasets in evaluating the accuracy of line recognizers: their relatively narrow image width and sequence length distributions do not allow us to observe the quality degradation of the Transformer approach when applied to the transcription of long lines.
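To make the preferred configuration concrete, below is a minimal, illustrative PyTorch sketch of a Self-Attention encoder feeding a CTC decoder. The toy convolutional backbone, layer sizes, and class counts are assumptions chosen for demonstration; this is not the authors' model, which additionally uses an external language model and internal training data.

```python
# Minimal sketch (not the paper's implementation): a Self-Attention encoder
# feeding a CTC decoder for text line recognition. Module names, sizes, and
# the toy backbone are illustrative assumptions.
import torch
import torch.nn as nn


class SelfAttnCTCRecognizer(nn.Module):
    def __init__(self, num_classes, d_model=256, nhead=4, num_layers=6):
        super().__init__()
        # Toy convolutional backbone: collapses the image height so each
        # remaining horizontal position becomes one frame of the sequence.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=3, stride=(2, 1), padding=1),
            nn.ReLU(),
            nn.Conv2d(d_model, d_model, kernel_size=3, stride=(2, 1), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # -> (B, d_model, 1, W)
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # CTC "decoder": a per-frame linear projection to classes plus blank.
        self.classifier = nn.Linear(d_model, num_classes + 1)

    def forward(self, images):
        # images: (B, 1, H, W) grayscale text lines of arbitrary width W.
        feats = self.backbone(images)               # (B, d_model, 1, W)
        frames = feats.squeeze(2).transpose(1, 2)   # (B, W, d_model)
        encoded = self.encoder(frames)              # (B, W, d_model)
        logits = self.classifier(encoded)           # (B, W, num_classes + 1)
        return logits.log_softmax(-1)


# Training step with CTC loss; the blank symbol is the last class index here.
model = SelfAttnCTCRecognizer(num_classes=80)
ctc = nn.CTCLoss(blank=80, zero_infinity=True)
images = torch.randn(2, 1, 32, 400)                # two lines, 400 px wide
log_probs = model(images).transpose(0, 1)          # CTCLoss expects (T, B, C)
targets = torch.randint(0, 80, (2, 20))            # dummy label sequences
input_lengths = torch.full((2,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((2,), 20, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```

Because neither the self-attention encoder nor the per-frame CTC projection fixes the output length in advance, the same model accepts lines of arbitrary width, which is the property the abstract highlights over Transformer decoders.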

Related research

08/30/2023 - DTrOCR: Decoder-only Transformer for Optical Character Recognition
Typical text recognition methods rely on an encoder-decoder structure, i...

06/04/2018 - NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition
Scene text recognition has attracted a great many researches for decades...

08/20/2021 - Type Anywhere You Want: An Introduction to Invisible Mobile Keyboard
Contemporary soft keyboards possess limitations: the lack of physical fe...

03/24/2023 - MSdocTr-Lite: A Lite Transformer for Full Page Multi-script Handwriting Recognition
The Transformer has quickly become the dominant architecture for various...

09/20/2022 - Relaxed Attention for Transformer Models
The powerful modeling capabilities of all-attention-based transformer ar...

10/10/2019 - On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention
Scene text recognition (STR) is the task of recognizing character sequen...

12/16/2022 - Reducing Sequence Length Learning Impacts on Transformer Models
Classification algorithms using Transformer architectures can be affecte...
