Multidomain Document Layout Understanding using Few Shot Object Detection

08/22/2018 · Pranaydeep Singh, et al. · ParallelDots, Inc.

We address the problem of document layout understanding using a simple algorithm that generalizes across multiple domains while training on just a few examples per domain. We approach the problem via a supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain-specific (target) dataset. We show that this methodology works for multiple domains with as few as 10 training documents. We demonstrate the effect of each component of the methodology on the end result and show the superiority of this methodology over simple object detectors.


1 Introduction

Understanding document layout, in terms of finding logical components such as titles, paragraphs, etc., is a preliminary step towards retrieving information from images of documents. The amount of variability in real-world data coming from multiple domains (e.g., invoices, resumes) makes it a challenging computer vision problem that has intrigued researchers for decades.

The most basic version of the layout understanding task is to separate text from background and images, but the task has evolved to cover not only these basic structures but also derived structures like paragraphs, lists and tables. Various image processing methodologies [8] [1] [7] have approached the problem of understanding general documents as well as digitizing historical documents. With the onset of deep learning and data-driven approaches, the problem came to be treated as a pixel-wise segmentation task [12], where each pixel is assigned a class based on its surrounding pixels. In this paper, we explore a new tangent, where the problem is approached as a few-shot object detection problem to identify relevant areas in a document. The motivation is to understand document structure with as few as 10 tagged examples, since digitization tasks generally do not have an abundance of tagged data at hand. However, understanding documents is a complicated task, and a dataset of just 10 examples is not enough to train an object detector, which is a fully supervised network requiring large amounts of training data, to understand various structures like tables or lists.

Hence, we use a transfer learning based approach where we first give the network a general understanding of the basic features and structures contained in a document, and then train on a few-shot task for understanding specific document types like invoices, resumes, academic papers, journals, etc. A few-shot task is widely described as training the model using just a handful of tagged examples.

The initial network, which is later used for fine-tuning, needs a wide understanding of document structures and substructures, and needs to be trained extensively for it to yield good results when fine-tuned with very few samples. No existing dataset accommodated these needs, so we artificially generated a simple dataset using HTML. We refer to this dataset as the Source Dataset, and train the described model on it. This trained model serves as the backbone of all the models we subsequently fine-tuned. Using as few as 10, and up to 50, images, we demonstrate that the resulting model learns to understand document structures. We also show that the methodology can be extended to any number of domains with a few examples from each. In this paper, we demonstrate the methodology and its application to invoice and resume images. We call these domains Target Domains and the corresponding datasets Target Datasets.

Our contributions consist of the following points:

  • Applying state-of-the-art object detection techniques to Document Layout Understanding.

  • Introducing a generalized algorithm which can perform layout understanding in multiple domains using just a few tagged images (e.g., 10).

Figure 1: Sample images from the Artificial Dataset

2 Related Work

There are two sub-parts to the Document Layout Analysis problem:

  • Geometric Layout Analysis

  • Logical Layout Analysis

Geometric Layout Analysis (GLA) is centred around understanding the basic geometric layout of a document, such as skew, page decomposition, text detection, etc. Logical Layout Analysis (LLA) focuses on understanding the implied semantic labels in a document, like captions, subheadings, table headings, etc. GLA has been addressed mainly by image processing methods like Hough transforms and binarization. While the GLA problem is as old as image processing itself, LLA is a more recent problem, and the one we attempt to solve. Approaches employed in LLA are mainly bottom-up: they find the smallest entities, like words or characters, and attempt to aggregate them using a distance metric and an aggregation algorithm like K-Nearest Neighbors or K-D trees. These approaches [8] [1] [7] have the advantage of being mostly unsupervised, but involve tuning a lot of heuristics. They also do not scale to document layouts which differ from those the algorithm is tuned on. Comparisons of such approaches are covered by [11] [6]. The most popular and widely used of these approaches is the Docstrum algorithm [8]. It uses KNN to aggregate minute structures into lines, and then employs heuristics like the perpendicular distance and angle between lines to combine them into text blocks. While deep learning approaches to LLA also exist, these approaches [12] [3] require vast amounts of training data, only learn a fixed set of labels, and are thus not useful for few-shot tasks with a wide variety of labels. We explore an object detection based approach to LLA which can be fine-tuned on as few as 10 images to understand semantic labels like address, total bill amount, skills, education, etc.

Few-shot object detection is a task where the tagged training set is very small (say, 1-50 images in total). Previous work has explored this on the PASCAL VOC, COCO and ImageNet datasets. [2] introduce the Low-Shot Transfer Detector (LSTD), a model which is pretrained on a huge Source dataset and fine-tuned on a small (low-shot) target dataset. The LSTD model is based on the Single Shot Detector (SSD) [5] and Faster-RCNN (FRCNN) [10]. Broadly, they use the SSD network to detect foreground segments, and a classifier which takes ROI-pooled features from the SSD feature maps to classify the detected regions. [2] introduce two regularizations, Background Regularization (BGR) and Tk-Regularization (Tk-R), which help in learning from just a few examples in the target dataset. The Source dataset in our case is more basic, whereas [2] assume the Source dataset to be very large and comprehensive.

Figure 2: Overview of the proposed method
Figure 3: Sample tagged images from the Invoice Dataset

3 Architecture

Our architecture is a two-step object detector. The first step is the detector (inspired by LSTD), which detects the foreground regions; the second step is the ML classifier, which predicts the domain-specific layout class.

For the first step, we leverage a better feature extractor for the object detector: Feature Pyramid Networks [4]. This FPN-based SSD achieves state-of-the-art single-model performance on the PASCAL VOC object detection dataset, as shown in the torchcv benchmarks (https://github.com/kuangliu/torchcv).

On the Target dataset, many of the target classes cannot be distinguished by visual features alone. Hence, we resort to a separate classifier for the detected boxes (as opposed to the FRCNN-based LSTD classifier), which additionally takes text-based features; while fine-tuning, this better alternative replaces the default classifier in our system. Learning the target domain is made easier and faster by the background regularization constraint.
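As a minimal sketch of this two-step flow (hypothetical function and class names, not the paper's actual code), the detector stands in for the FPN-based SSD and the classifier for the downstream ML classifier:

    # Sketch: a detector proposes class-agnostic foreground boxes, and a
    # separate classifier assigns a domain-specific layout label to each box.
    def detect_layout(image, detector, classifier, score_thresh=0.5):
        # Step 1: foreground detection (the FPN-based SSD in our system).
        boxes, scores = detector(image)   # boxes: [N, 4], scores: [N]
        boxes = [b for b, s in zip(boxes, scores) if s > score_thresh]

        # Step 2: per-box classification from visual, textual and spatial cues.
        labels = [classifier.predict_box(image, box) for box in boxes]
        return list(zip(boxes, labels))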

4 Methodology

The task can be described as few-shot document layout understanding. Our methodology consists of the following parts:

  1. Creating the artificial (Source) dataset.

  2. Pretraining the model on the Source dataset.

  3. Fine-tuning the model on the domain-specific (Target) dataset.

  4. Training the ML classifier on the Target dataset (combined with Step 3).

4.1 Dataset Generation

Our artificial dataset contains 160,000 images spanning multiple scales and sizes, accommodating asymmetrically placed structures and elements. The dataset contains 8 basic layout classes:

  1. Title

  2. Heading

  3. Sub-Heading

  4. Text Block

  5. List

  6. Table

  7. Image Content

  8. Image/Table Caption

The textual content in the dataset was taken from a text dump consisting of a variety of online sources. The images were taken from a small dataset collected from Google Images. Apart from random images, the image dataset contained specific images collected using relevant keywords like graphs, tables, charts, etc. A few examples from the artificial dataset, along with their taggings, are shown in Figure 1.
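A minimal sketch of how such a page might be generated (the template and helper names are illustrative; the real generator covers many more layouts and records a bounding box per element):

    # Sketch: compose a random single-page HTML document from layout elements.
    import random

    TEMPLATE = """<html><body style="width:{width}px">
    <h1>{title}</h1>
    <h2>{heading}</h2>
    <p>{text}</p>
    {table}
    </body></html>"""

    def make_table(rows, cols):
        cells = "".join("<tr>" + "<td>cell</td>" * cols + "</tr>"
                        for _ in range(rows))
        return "<table border='1'>" + cells + "</table>"

    def random_document(text_pool):
        return TEMPLATE.format(
            width=random.choice([600, 800, 1000]),      # vary page scale
            title=random.choice(text_pool),
            heading=random.choice(text_pool),
            text=" ".join(random.choices(text_pool, k=40)),
            table=make_table(random.randint(2, 5), random.randint(2, 4)),
        )

Ground-truth boxes for each element can then be read back from the rendered page (for instance via JavaScript's getBoundingClientRect) before exporting it to an image.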

Figure 4: Overview of the ML Classifier

4.2 Training

We train the LSTD model as-is on the Source Dataset. Once the model is trained on the Source Dataset, we move on to fine-tuning it on the Target Datasets; here we apply BGR. As mentioned earlier, the inbuilt classifier in LSTD was not performing to our satisfaction, so we decided to pass the foreground detections from the network through a separate classifier.

Target Classification:

To tackle the domain-specific layout classes, we employ several ways of extracting good features on which to train a classifier. We extract the text from each detected box and use a bag-of-words approach to obtain textual features. We also use features related to the spatial configuration of the detected box. We use these features to train a machine learning algorithm that classifies the detected bounding box into one of the classes. This is described in Figure 4.
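A minimal sketch of this feature construction, assuming scikit-learn and using logistic regression as a stand-in for the classifier that tpot eventually selects (Section 4.3); the names and feature choices are illustrative:

    # Sketch: combine bag-of-words text features with simple spatial features
    # (normalized position and size) of each detected box.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def spatial_features(box, img_w, img_h):
        x1, y1, x2, y2 = box
        return [x1 / img_w, y1 / img_h,
                (x2 - x1) / img_w, (y2 - y1) / img_h]

    def fit_box_classifier(texts, boxes, image_sizes, labels):
        # texts: OCR output per box; boxes: (x1, y1, x2, y2); labels: classes
        vec = CountVectorizer(max_features=2000)
        x_text = vec.fit_transform(texts).toarray()
        x_spatial = np.array([spatial_features(b, w, h)
                              for b, (w, h) in zip(boxes, image_sizes)])
        x = np.hstack([x_text, x_spatial])
        clf = LogisticRegression(max_iter=1000).fit(x, labels)
        return vec, clf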

4.3 Implementation Details

For creating the artificial dataset, we generated HTML files corresponding to web documents and exported them to images using a webdriver. For the layout detection step, we implemented the LSTD network in the PyTorch library, using the FPNSSD from the torchcv library (https://github.com/kuangliu/torchcv). For all experiments, we use the SGD optimizer with a learning rate of 0.0001, momentum of 0.9 and an L2 penalty of 0.0005.
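As a minimal sketch of the export step (assuming Selenium with headless Chrome; the file path and window size are placeholders):

    # Sketch: render a generated HTML file and save it as a training image.
    from selenium import webdriver

    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless")
    driver = webdriver.Chrome(options=opts)
    driver.set_window_size(1024, 1448)         # page size is an assumption
    driver.get("file:///tmp/doc_00001.html")   # placeholder path
    driver.save_screenshot("doc_00001.png")
    driver.quit()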

For the layout classification step, we extract text from each detected box using the open-source, LSTM-based Tesseract 4.0 (https://github.com/tesseract-ocr/tesseract). We obtain our classifier using the tpot toolkit [9], which uses genetic programming to optimize machine learning pipelines.
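A minimal sketch of these two steps, assuming the pytesseract bindings for Tesseract; the TPOT hyperparameters shown are illustrative, not the paper's:

    # Sketch: OCR a detected region, then let TPOT search for a pipeline over
    # precomputed features (X) and layout labels (y).
    import pytesseract
    from PIL import Image
    from tpot import TPOTClassifier

    def ocr_box(image_path, box):
        # box = (x1, y1, x2, y2) in pixel coordinates
        region = Image.open(image_path).crop(box)
        return pytesseract.image_to_string(region)

    def search_pipeline(X, y):
        tpot = TPOTClassifier(generations=5, population_size=50, random_state=0)
        tpot.fit(X, y)
        return tpot.fitted_pipeline_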

While reporting results, we use an IoU threshold of 0.5 for the object detection accuracy metrics.
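For reference, the intersection-over-union between two boxes can be computed as follows (a standard definition, not code from the paper):

    # IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0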

5 Invoice Dataset

5.1 Dataset description

We collected 170 invoices which include variations in structure, domain and template. We refer to this as the Invoice Dataset. We manually tagged this dataset into layouts of 5 main categories:

  1. Logo

  2. Address

  3. Bill/Invoice Information

  4. Tables

  5. (Total) Amount Information

A few example images from the dataset are shown in Figure 3. We use a fixed set of 100 images as our test set. We train our model on different (incremental) numbers of training images (k) and report the results correspondingly.

No. of training images (k) | Mean Precision | Mean Recall | Mean F1 Score
10 | 0.4721 | 0.5188 | 0.4943
20 | 0.4962 | 0.5444 | 0.5192
30 | 0.5012 | 0.5791 | 0.5373
40 | 0.5244 | 0.601 | 0.5601
50 | 0.5316 | 0.6101 | 0.5682
60 | 0.5599 | 0.6214 | 0.589
70 | 0.56 | 0.6354 | 0.5953
Table 1: LSTD end-to-end accuracy on the Invoice Dataset
No. of training images (k) | Precision | Recall | F1 Score
0 | 0.144 | 0.4214 | 0.2147
10 | 0.5992 | 0.6212 | 0.61
20 | 0.611 | 0.7062 | 0.655
30 | 0.6203 | 0.7755 | 0.6893
40 | 0.6767 | 0.7901 | 0.729
50 | 0.6742 | 0.7992 | 0.7314
60 | 0.7017 | 0.8001 | 0.7484
70 | 0.7292 | 0.8132 | 0.7689
Table 2: LSTD foreground detection accuracy on the Invoice Dataset
No. of training images (k) | Precision | Recall | F1 Score
10 | 0.1078 | 0.1991 | 0.1399
20 | 0.1377 | 0.235 | 0.1736
30 | 0.1744 | 0.2768 | 0.214
40 | 0.1957 | 0.2998 | 0.2368
50 | 0.3018 | 0.3036 | 0.3027
60 | 0.3738 | 0.315 | 0.3419
70 | 0.3888 | 0.3445 | 0.3653
Table 3: Foreground detection accuracy on the Invoice Dataset for LSTD without Source pretraining
No. of training images (k) | Precision | Recall | F1 Score
70 | 0.7718 | 0.8135 | 0.7921
Table 4: ML Classifier accuracy on the Invoice Dataset
Figure 5: Sample tagged images from the Resume Dataset

6 Resume Dataset

6.1 Dataset description

The Resume Dataset is a set of 100 images collected from various sources, containing resumes from different domains and layouts. As with the Invoice Dataset, this was manually tagged into 6 main categories:

  1. Education

  2. Experience

  3. Bio

  4. Skills

  5. Summary

  6. Other

Example images are shown in Figure 5. A fixed set of 50 images is used as the test set, and training is done on an incremental number of training images ranging from 10 to 50.

No. of training images (k) | Mean Precision | Mean Recall | Mean F1 Score
10 | 0.6144 | 0.5888 | 0.6013
20 | 0.6398 | 0.6011 | 0.6198
30 | 0.6587 | 0.6218 | 0.6397
40 | 0.6712 | 0.6325 | 0.6513
50 | 0.6946 | 0.634 | 0.6629
Table 5: LSTD end-to-end accuracy on the Resume Dataset
No. of training images (k) | Precision | Recall | F1 Score
0 | 0.035 | 0.4311 | 0.06
10 | 0.8228 | 0.821 | 0.8219
20 | 0.8542 | 0.8224 | 0.838
30 | 0.8655 | 0.8291 | 0.8469
40 | 0.9123 | 0.8363 | 0.8726
50 | 0.8977 | 0.8343 | 0.8659
Table 6: LSTD foreground detection accuracy on the Resume Dataset
No. of training images (k) | Precision | Recall | F1 Score
10 | 0.3797 | 0.3571 | 0.368
20 | 0.3859 | 0.3928 | 0.3893
30 | 0.5238 | 0.5238 | 0.5238
40 | 0.5178 | 0.7532 | 0.6137
50 | 0.60946 | 0.61309 | 0.61037
Table 7: Foreground detection accuracy on the Resume Dataset for LSTD without Source pretraining
No. of training images (k) | Precision | Recall | F1 Score
50 | 0.804 | 0.8946 | 0.8469
Table 8: ML Classifier accuracy on the Resume Dataset

7 Results

Dataset | Precision | Recall | F1 Score
Invoice | 0.0547 | 0.1935 | 0.0853
Resume | 0.2415 | 0.2559 | 0.2485
Table 9: Baseline (Docstrum) accuracy
Figure 6: Sample predictions of the baseline method on both Datasets

Baselines: The Docstrum algorithm [8] serves as our baseline. The algorithm converts images to grayscale and binarizes them, then finds the connected components and their centroids. It then looks for the K nearest neighbours (K=5) of each component. Vectors are drawn from each centroid to its neighbours, and the angles of these vectors aid skew correction. The nearest-neighbour distance histogram has several peaks, which typically represent between-character, between-word and between-line spacing. These values are then used to construct lines, words and text blocks, with some predetermined tolerance for each spacing value.

We use Docstrum to construct blocks and then determine accuracy against the manually annotated ground truth on both target datasets, i.e., Invoices and Resumes. Sample outputs are shown in Figure 6.
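A minimal sketch of the first stages of this baseline, assuming OpenCV and scikit-learn (the aggregation of components into lines and blocks via the spacing histograms is omitted):

    # Sketch: binarize the page, extract connected components, and link each
    # component centroid to its K nearest neighbours.
    import cv2
    from sklearn.neighbors import NearestNeighbors

    def docstrum_neighbours(image_path, k=5):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Invert so ink is foreground, then binarize with Otsu's method.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        _, _, _, centroids = cv2.connectedComponentsWithStats(binary)
        centroids = centroids[1:]   # drop the background component
        nn = NearestNeighbors(n_neighbors=k + 1).fit(centroids)
        dists, idx = nn.kneighbors(centroids)
        # dists[:, 1:] feeds the spacing histograms; the angles between
        # neighbouring centroids drive skew estimation in the full algorithm.
        return centroids, dists[:, 1:], idx[:, 1:]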

Tables 1-4 show the various results on the Invoice Dataset, while Tables 5-8 show the results on the Resume Dataset.

Tables 2 and 6 show the accuracy of just the foreground detections (LSTD detections), while Tables 4 and 8 show the accuracy of just the ML Classifier on the foreground ROIs. Tables 1 and 5 show the end-to-end accuracy of the foreground detection and ML Classifier combined. It is evident from the results that the method works well even with only 10 training examples.

The importance of Source pretraining is shown by the results in Tables 3 and 7, which come from models trained from scratch instead of fine-tuning the model pretrained on the Source dataset. One can notice an improvement of at least 40% in F1 score on the Target Domain layout detection task.

These results also demonstrate the superiority of the methodology over simple object detectors.

8 Discussion

An interesting observation with regard to zero-shot transfer learning is that performance depends mainly on how varied the Source Dataset is. Upon qualitative analysis, we found that the gap between the Resume Dataset and our artificial dataset is larger than the gap between the Invoice Dataset and our artificial dataset. This explains why, even with 0 samples, directly applying the Source-trained model to the test images of the Target Datasets works reasonably well in the case of the Invoice Dataset. Further improvements in Source Dataset generation, making it more generalized and varied, would help narrow this gap.

There are two regularizations introduced by [2]: Background Regularization (BGR) and Tk-Regularization (Tk-R). We use BGR to make learning the Target domain easier and faster, as this constraint eases the learning of the background portion of the Target domain. Tk-R tries to bridge the gap between the classifier's predictions on the Source and Target domains; it was not useful in our case, since the default (FRCNN-based) classifier does not perform well, for the reasons mentioned earlier.
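A hedged sketch of the BGR idea, following our reading of [2] rather than their exact formulation: penalize feature-map activations in background regions (outside ground-truth boxes), encouraging the fine-tuned detector to suppress target-domain background.

    # Sketch: L2 penalty on feature activations outside ground-truth boxes.
    import torch

    def bgr_loss(feature_map, fg_mask):
        # feature_map: [B, C, H, W]; fg_mask: [B, 1, H, W], 1 inside
        # ground-truth boxes (downsampled to feature-map resolution).
        background = feature_map * (1.0 - fg_mask)
        return (background ** 2).mean()

The total fine-tuning objective would then add a weighted bgr_loss term to the usual detection loss.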

Figure 7: Sample predictions from our system on the test images of Resume Dataset
Figure 8: Sample predictions from our system on the test images of Invoice Dataset

9 Conclusion

In this work, we have shown that object detection techniques can be used for document layout understanding. We have also shown that the proposed methodology can be scaled across multiple domains with just a few tagged examples needed per domain. The results also demonstrate the superiority of the methodology over existing object detection techniques.

Document layout analysis techniques assume great importance in the information age, as more and more documents are digitized and need to be retrieved by understanding their content, just like born-digital content. Such techniques are useful in automating manually intensive business processes such as processing KYC documents or invoices. They also open up the possibility for businesses to mine documents such as paper receipts and extract valuable insights from them for market research purposes. Getting a large annotated corpus can be time-consuming and expensive in practical use-cases, which further demonstrates the practical utility of our approach.

References