XDoc: Unified Pre-training for Cross-Format Document Understanding

10/06/2022
by Jingye Chen, et al.

The surge of pre-training has driven rapid progress in document understanding. The pre-training-and-fine-tuning framework has been used effectively to tackle texts in various formats, including plain texts, document texts, and web texts. Despite achieving promising performance, existing pre-trained models usually target one specific document format at a time, making it difficult to combine knowledge from multiple document formats. To address this, we propose XDoc, a unified pre-trained model that handles different document formats in a single model. For parameter efficiency, we share backbone parameters across formats, such as the word embedding layer and the Transformer layers. Meanwhile, we introduce adaptive layers with lightweight parameters to enhance the distinction across different formats. Experimental results demonstrate that with only 36.7% of the parameters, XDoc achieves comparable or even better performance on a variety of downstream tasks compared with the individual pre-trained models, which is cost-effective for real-world deployment. The code and pre-trained models will be publicly available at <https://aka.ms/xdoc>.
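The parameter-efficiency argument can be illustrated with a back-of-the-envelope sketch: one shared backbone serves all formats, and only a small adaptive layer is duplicated per format. The class names and parameter counts below are illustrative assumptions, not the actual XDoc implementation or its real sizes.

```python
# Hypothetical sketch of the shared-backbone idea behind XDoc.
# All names and parameter counts are illustrative assumptions.

class SharedBackbone:
    """Stands in for the shared word embedding + Transformer layers."""
    def __init__(self, n_params=100_000_000):
        self.n_params = n_params  # parameters reused by every format

class AdaptiveLayer:
    """Lightweight format-specific parameters (e.g., layout or XPath embeddings)."""
    def __init__(self, n_params=5_000_000):
        self.n_params = n_params

class UnifiedModel:
    """One backbone shared across formats, plus one small adapter per format."""
    def __init__(self, formats):
        self.backbone = SharedBackbone()
        self.adapters = {fmt: AdaptiveLayer() for fmt in formats}

    def total_params(self):
        return self.backbone.n_params + sum(
            a.n_params for a in self.adapters.values()
        )

formats = ["plain", "document", "web"]
unified = UnifiedModel(formats)

# Three separate pre-trained models would each carry a full backbone:
separate = len(formats) * (SharedBackbone().n_params + AdaptiveLayer().n_params)
ratio = unified.total_params() / separate
print(f"unified model uses {ratio:.1%} of the separate models' parameters")
```

With these made-up sizes the unified model needs roughly a third of the parameters of three independent models, which is the same order of saving the abstract reports; the real 36.7% figure comes from the actual XDoc architecture, not this toy count.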


research
05/21/2023

Teaching the Pre-trained Model to Generate Simple Texts for Text Simplification

Randomly masking text spans in ordinary texts in the pre-training stage ...
research
09/14/2021

YES SIR! Optimizing Semantic Space of Negatives with Self-Involvement Ranker

Pre-trained model such as BERT has been proved to be an effective tool f...
research
05/28/2023

Plug-and-Play Document Modules for Pre-trained Models

Large-scale pre-trained models (PTMs) have been widely used in document-...
research
03/06/2023

Angel-PTM: A Scalable and Economical Large-scale Pre-training System in Tencent

Recent years have witnessed the unprecedented achievements of large-scal...
research
11/20/2022

UniMASK: Unified Inference in Sequential Decision Problems

Randomly masking and predicting word tokens has been a successful approa...
research
06/29/2022

Diet Code is Healthy: Simplifying Programs for Pre-Trained Models of Code

Pre-trained code representation models such as CodeBERT have demonstrate...
research
04/21/2023

GeoLayoutLM: Geometric Pre-training for Visual Information Extraction

Visual information extraction (VIE) plays an important role in Document ...
