
Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features

by Byeonghu Na, et al.
KAIST Department of Mathematical Sciences

Linguistic knowledge has brought great benefits to scene text recognition by providing semantics to refine character sequences. However, since linguistic knowledge has been applied individually to the output sequence, previous methods have not fully utilized the semantics to understand visual clues for text recognition. This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performance. Specifically, MATRN identifies visual and semantic feature pairs and encodes spatial information into semantic features. Based on the spatial encoding, visual and semantic features are enhanced by referring to related features in the other modality. Furthermore, MATRN encourages the fusion of semantic features into visual features by hiding visual clues related to the character during the training phase. Our experiments demonstrate that MATRN achieves state-of-the-art performance on seven benchmarks by large margins, while naive combinations of the two modalities show only marginal improvements. Further ablation studies prove the effectiveness of our proposed components. Our implementation will be publicly available.
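The bidirectional enhancement described above, where each modality attends to related features in the other, can be sketched with a minimal NumPy example. This is an illustrative sketch only, not the paper's implementation: the feature shapes, the sinusoidal positional stand-in for the spatial encoding, and the residual cross-attention updates are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values, d):
    # scaled dot-product attention: queries from one modality,
    # keys/values from the other modality
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

rng = np.random.default_rng(0)
d = 16
visual = rng.standard_normal((48, d))    # e.g. flattened feature-map positions
semantic = rng.standard_normal((25, d))  # e.g. per-character embeddings

# spatial/positional encoding added to semantic features
# (hypothetical sinusoidal stand-in for MATRN's spatial encoding)
pos = np.sin(np.arange(25)[:, None] / 10000 ** (np.arange(d)[None, :] / d))
semantic_sp = semantic + pos

# each modality is enhanced by referring to features in the other
visual_enh = visual + cross_attend(visual, semantic_sp, d)
semantic_enh = semantic_sp + cross_attend(semantic_sp, visual, d)

print(visual_enh.shape, semantic_enh.shape)  # (48, 16) (25, 16)
```

In this sketch the residual form keeps each modality's original information while mixing in attended features from the other; the paper's masking of character-related visual clues during training would act on `visual` before the enhancement step.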


Visual-Semantic Transformer for Scene Text Recognition

Modeling semantic information is helpful for scene text recognition. In ...

From Two to One: A New Scene Text Recognizer with Visual Language Modeling Network

In this paper, we abandon the dominant complex language model and rethin...

CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition

The attention-based encoder-decoder framework is becoming popular in sce...

An Efficient End-to-End Transformer with Progressive Tri-modal Attention for Multi-modal Emotion Recognition

Recent works on multi-modal emotion recognition move towards end-to-end ...

Efficient Multi-Modal Embeddings from Structured Data

Multi-modal word semantics aims to enhance embeddings with perceptual in...

Recurrent neural network transducer for Japanese and Chinese offline handwritten text recognition

In this paper, we propose an RNN-Transducer model for recognizing Japane...

Levenshtein OCR

A novel scene text recognizer based on Vision-Language Transformer (VLT)...

Code Repositories

scene text recognition recommendations