Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution

by Jiapeng Wang, et al.

Visual information extraction (VIE) has attracted considerable attention recently owing to its various advanced applications such as document understanding, automatic marking and intelligent education. Most existing works decouple this problem into several independent sub-tasks of text spotting (text detection and recognition) and information extraction, which completely ignores the high correlation among them during optimization. In this paper, we propose a robust visual information extraction system (VIES) towards real-world scenarios, which is a unified end-to-end trainable framework for simultaneous text detection, recognition and information extraction, taking a single document image as input and outputting the structured information. Specifically, the information extraction branch collects abundant visual and semantic representations from text spotting for multimodal feature fusion and, conversely, provides higher-level semantic clues to contribute to the optimization of text spotting. Moreover, regarding the shortage of public benchmarks, we construct a fully-annotated dataset called EPHOIE, which is the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE consists of 1,494 images of examination paper heads with complex layouts and backgrounds, including a total of 15,771 Chinese handwritten or printed text instances. Compared with the state-of-the-art methods, our VIES shows significantly superior performance on the EPHOIE dataset and achieves a 9.01% F-score gain on the SROIE dataset under the end-to-end scenario.
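To make the multimodal feature fusion idea concrete, the sketch below shows one generic way visual and semantic features from a text-spotting branch could be gated together per text instance. This is only an illustration with hypothetical dimensions and random stand-in weights, not the paper's actual VIES architecture or its learned fusion module.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedFusion:
    """Toy gated fusion of per-instance visual and semantic features.

    All shapes and weights are hypothetical; in a trained system the
    projection matrices would be learned jointly with the extractor.
    """

    def __init__(self, vis_dim, sem_dim, out_dim):
        # Random projections stand in for learned parameters.
        self.W_v = rng.normal(0.0, 0.1, (vis_dim, out_dim))
        self.W_s = rng.normal(0.0, 0.1, (sem_dim, out_dim))
        self.W_g = rng.normal(0.0, 0.1, (vis_dim + sem_dim, out_dim))

    def __call__(self, visual, semantic):
        v = visual @ self.W_v      # project visual features (from detection crops)
        s = semantic @ self.W_s    # project semantic features (from recognition)
        # A sigmoid gate decides, per output channel, how much each
        # modality contributes to the fused representation.
        gate = sigmoid(np.concatenate([visual, semantic], axis=-1) @ self.W_g)
        return gate * v + (1.0 - gate) * s

# 10 text instances with 256-d visual and 128-d semantic features.
fusion = GatedFusion(vis_dim=256, sem_dim=128, out_dim=64)
fused = fusion(rng.normal(size=(10, 256)), rng.normal(size=(10, 128)))
print(fused.shape)  # (10, 64): one fused vector per text instance
```

The fused vectors would then feed an entity-tagging head; the gate lets the model lean on appearance cues (fonts, layout) or textual content as each instance demands.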


TRIE: End-to-End Text Reading and Information Extraction for Document Understanding

Since real-world ubiquitous documents (e.g., invoices, tickets, resumes ...

EATEN: Entity-aware Attention for Single Shot Visual Text Extraction

Extracting entity from images is a crucial part of many OCR applications...

Visual Information Extraction in the Wild: Practical Dataset and End-to-end Solution

Visual information extraction (VIE), which aims to simultaneously perfor...

Multimodal deep networks for text and image-based document classification

Classification of document images is a critical step for archival of old...

TextMatcher: Cross-Attentional Neural Network to Compare Image and Text

We study a novel multimodal-learning problem, which we call text matchin...

Unified Chinese License Plate Detection and Recognition with High Efficiency

Recently, deep learning-based methods have reached an excellent performa...

Spatial Dual-Modality Graph Reasoning for Key Information Extraction

Key information extraction from document images is of paramount importan...
