GMN: Generative Multi-modal Network for Practical Document Information Extraction

07/11/2022
by   Haoyu Cao, et al.

Document Information Extraction (DIE) has attracted increasing attention due to its many advanced applications in the real world. Although recent literature has achieved competitive results, these approaches usually fail on complex documents with noisy OCR results or mutative layouts. This paper proposes the Generative Multi-modal Network (GMN), a robust multi-modal generation method without predefined label categories, to address these problems in real-world scenarios. With a carefully designed spatial encoder and modal-aware mask module, GMN can handle complex documents that are hard to serialize into sequential order. Moreover, GMN tolerates errors in OCR results and requires no character-level annotation, which is vital because fine-grained annotation of numerous documents is laborious and can even require annotators with specialized domain knowledge. Extensive experiments show that GMN achieves new state-of-the-art performance on several public DIE datasets and surpasses other methods by a large margin, especially in realistic scenarios.
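The abstract names a spatial encoder and a modal-aware mask module but gives no implementation details. As a rough illustration only, the following is a minimal PyTorch sketch of what a modal-aware attention mask could look like for a transformer over a mixed sequence of text and visual tokens. The function name, modality-id convention, and blocking rule are illustrative assumptions, not the paper's actual design.

```python
import torch


def modal_aware_attention_mask(modality_ids: torch.Tensor,
                               block_visual_to_text: bool = False) -> torch.Tensor:
    """Additive attention mask for a multi-modal token sequence.

    modality_ids: (batch, seq_len) ints, e.g. 0 = text token, 1 = visual token.
    Returns (batch, seq_len, seq_len) floats: 0.0 where attention is allowed,
    -inf where it is blocked.
    """
    q_mod = modality_ids.unsqueeze(2)  # (B, L, 1) modality of each query
    k_mod = modality_ids.unsqueeze(1)  # (B, 1, L) modality of each key
    allowed = torch.ones(q_mod.shape[0], q_mod.shape[1], k_mod.shape[2],
                         dtype=torch.bool)
    if block_visual_to_text:
        # Example rule (an assumption): visual queries may not attend to text keys.
        allowed &= ~((q_mod == 1) & (k_mod == 0))
    mask = torch.zeros(allowed.shape, dtype=torch.float32)
    mask.masked_fill_(~allowed, float("-inf"))
    return mask


# Usage: two documents, each with 3 text tokens followed by 2 visual tokens.
ids = torch.tensor([[0, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1]])
mask = modal_aware_attention_mask(ids, block_visual_to_text=True)
print(mask.shape)  # torch.Size([2, 5, 5]); add this mask to attention logits
```

In practice such a mask would be added to the attention logits of each transformer layer; how GMN actually combines it with its spatial encoder is described in the full paper, not here.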
