1.1 Background and Motivation
The background of the research described in this thesis is scholarly communication in the digital era and the problems it encounters. Our main objective is to facilitate communication among scientists and improve knowledge propagation in the scientific world. These goals are accomplished by equipping digital libraries and research infrastructures with means of supporting researchers and scientists in consuming the growing volume of scientific literature.
In the scientific world, communicating ideas, describing planned, ongoing and completed research, and finally reporting discoveries and project results is typically realized by publishing and reading scientific literature, mostly in the form of articles published in journals or conference proceedings. Originally scientific literature was distributed in printed form, but within the last 30 years we have witnessed a digital revolution which has moved this aspect of scientific communication to electronic media.
Along with the change of media we have also observed a huge increase in the volume of available scientific literature. The exact total number of existing scientific articles is not known, but the statistics gathered from popular electronic databases show the scale we are dealing with. For example the DBLP database (http://dblp.uni-trier.de/), which provides bibliographic information on scientific literature from the computer science discipline only, currently contains approximately 3 million records. PubMed Central (http://www.ncbi.nlm.nih.gov/pmc/) is a free full-text archive of 3.6 million biomedical and life sciences journal articles. PubMed (http://www.ncbi.nlm.nih.gov/pubmed), a freely available index of biomedical abstracts, including the entire MEDLINE database, contains 25 million references. Finally, the Scopus database (http://www.scopus.com/), which collects publications from a much wider range of disciplines than DBLP or PubMed, currently contains 57 million records.
There have also been a number of attempts to estimate the total number of scientific articles or a specific subset of them. For example Björk et al. [BjorkRL09] estimated the number of peer-reviewed journal articles published by 2006 to be about 1,350,000 using data from the ISI citation database (http://ip-science.thomsonreuters.com). Jinha [Jinha2010258] used this result together with a number of assumptions related to a steady increase in the number of researchers, journals and articles, and arrived at an estimate of more than 50 million journal articles ever published as of 2009. Finally, Khabsa and Giles [Khabsa2014] studied the volume of scholarly documents written in English and available on the web by analysing the coverage of two popular academic search engines: Google Scholar (https://scholar.google.pl/) and Microsoft Academic Search (http://academic.research.microsoft.com/). Their estimates show that at least 114 million documents are accessible on the web, with at least 27 million available without any subscription or payment.
In addition to the already huge total volume of published scientific literature, we are also observing a substantial increase in the number of new articles published every year. According to Larsen and von Ins [LarsenI10], there are no indications that the growth rate of published peer-reviewed journal articles has decreased within the last 50 years, while at the same time publication through new channels, such as conference proceedings, open archives and web pages, is growing fast. The statistical data obtained from the DBLP and PubMed databases show similar trends (Figures 1.1 and 1.2).
Writing and publishing articles is only one side of scholarly communication. At the other end there are the consumers of the literature, usually also scientists and researchers, interested in new ideas and discoveries in their own field, or trying to get familiar with the state of the art in new fields. Keeping track of the latest scientific findings and achievements published in journals or conference proceedings is a crucial aspect of their work. Ignoring this task results in deficiencies in the knowledge related to the latest discoveries and trends, which in turn can lower the quality of their own research, make results assessment much harder and significantly limit the possibility to find new interesting research areas and challenges.
Unfortunately, due to the huge and still growing volume of scientific literature, keeping up with the latest achievements is a major challenge for researchers. Scientific information overload is a severe problem that slows down scholarly communication and knowledge propagation across academia.
The digital era resulted not only in moving the literature from paper to digital media, but in fact changed the way modern research is conducted. Research infrastructures equip the users with the resources and services supporting all stages of the research in many disciplines. Digital libraries provide means for storing, organizing and accessing digital collections of research-related data of all kinds, such as documents, datasets or tools.
These modern infrastructures support the process of studying scientific literature by providing intelligent search tools, proposing similar and related documents (Figure 1.3), building and visualizing interactive citation and author networks (Figure 1.4), providing various citation-based statistics, and so on. This enables users to effectively explore the map of science, quickly become familiar with the current state of the art of a given problem and reduce the volume of articles to read by retrieving only the most relevant and interesting items.
Unfortunately, building services that support readers is not a trivial task. Such intelligent, high-quality services and tools require reliable, machine-readable metadata of the digital library resources. In practice, however, a large portion of the resources is typically available only as unstructured text, intended for human readers but poorly understood by machines. Good quality metadata is not always available: it is sometimes missing, fragmentary or full of errors, even for fairly recently published articles.
There are two complementary solutions to this problem. The easiest way to provide high quality metadata for scientific documents is to gather this information directly from the author when a document is submitted to the system for the first time. Since we are interested in a wide range of metadata, possibly including the metadata of all the references placed in the document and its full text, inputting the metadata even for a single document can be tedious and time-consuming, and thus error-prone. Therefore it would be very helpful to assist the user by providing the metadata extracted from the document automatically, which can be then verified and corrected manually. Such solutions result in a substantial time saving and much better metadata quality. An example of such an intelligent interface from Mendeley is shown in Figure 1.5.
On the other hand, digital libraries already have to deal with a huge number of existing documents with missing or fragmentary metadata records. Since processing this huge volume by human experts would be extremely inefficient, we have to rely on automatic tools able to process large collections and provide reliable metadata for the documents in an unsupervised manner. Unfortunately, existing metadata extraction tools are not accurate, flexible or comprehensive enough.
1.2 Problem Statement
The main goal of our research is to solve the problem of missing metadata information by providing an automatic, accurate and flexible algorithm for extracting a wide range of metadata directly from scientific articles.
Even limited to analysing scientific literature only, the problem of extracting the document’s metadata remains difficult and challenging, mainly due to the vast diversity of possible layouts and styles used in articles. In different documents the same type of information can be displayed in different places using a variety of formatting styles and fonts. For instance, a random subset of 125,000 documents from PubMed Central contains publications from nearly 500 different publishers, many of which use original layouts and styles in their articles.
In general, solving the metadata extraction problem requires addressing two major tasks: the analysis of the layout of the document, the difficulty of which varies with the input document format, and understanding the roles played by all the fragments of the document.
The result of the research is an accurate automatic algorithm for extracting rich metadata directly from a scientific publication. The proposed algorithm takes a single publication in PDF format as input, performs a thorough analysis of the document and outputs a structured machine-readable metadata record containing:
a rich set of document’s basic metadata, such as title, abstract, keywords, authors’ full names, their affiliations and email addresses, journal name, volume, issue, year of publication, etc.,
a list of references to other documents given in the article along with their metadata such as the document’s authors, title, journal name or year,
structured full text with sections and subsections hierarchy.
Designed as a universal solution, the algorithm is able to handle a vast variety of scientific articles reasonably well, instead of being perfect in processing a limited number of document layouts only. We achieved this by employing supervised and unsupervised machine-learning algorithms trained on large, diverse datasets. This decision made the method well-suited for analysing heterogeneous document collections, and also resulted in increased maintainability of the system, as well as its ability to adapt to new, previously unseen document layouts.
Since our main objective was to provide a useful, accurate solution to a practical problem, the machine learning-based solutions are accompanied by a number of rules and heuristics. This approach proved to work very well in practice, although it perhaps lacks the simplicity and elegance of algorithms based purely on machine learning.
The evaluation we conducted showed good performance of the proposed metadata extraction algorithm. The comparison with similar systems also showed that our algorithm outperforms the competition for most metadata types.
The proposed algorithm is very useful in the context of digital libraries, both for automatic extraction of reliable metadata from large heterogeneous document collections and for assisting users in the process of providing metadata for submitted documents.
1.3 Key Contributions
The extraction algorithm we developed is based to a great extent on well-known supervised and unsupervised machine-learning techniques accompanied by heuristics. Nevertheless, the research contains the following innovative ideas and extensions:
One of the key contributions is the architecture of the entire extraction workflow and the decomposition of the problem into smaller, well-defined tasks.
The page segmentation algorithm was enhanced with a few modifications increasing its accuracy.
We developed a large set of numeric features for text fragments of the document, capturing all aspects of the content and appearance of the text and allowing fragments to be classified with high accuracy.
We also developed a set of features for citation and affiliation tokens, which allow affiliations and citations to be parsed with high accuracy.
A clustering-based approach was proposed for extracting reference strings from the document.
We also proposed an algorithm based on normal scores of various statistics for selecting section header lines from the text content of the document.
Finally, we developed an efficient, scalable method of building gold standard publication datasets.
1.4 Thesis Structure
The thesis is structured as follows. In Chapter 2 we describe the current state of the art with respect to scientific document analysis and automatic metadata extraction. Chapter 3 provides all the details related to the overall algorithm architecture, its internal decomposition into individual tasks and the approaches employed for solving them. In Chapter 4 we thoroughly describe the datasets and methodology used to assess the quality of the algorithm and report the evaluation results, including a comparison with similar systems. Chapter 5 summarizes the research. Appendix A provides the detailed results of the evaluation and all the tests performed, and finally Appendix B covers the practical aspects of the available algorithm implementation.
2.1 Metadata and Content Formats
In this section we present a number of document formats useful for creating and storing academic articles, focusing on the most popular ones. The described formats are optimized for different purposes, and as a result they differ a lot in the type of information they are able to store and the stage of the document's life in which they are most useful. In the context of automatic document analysis, the most important feature of a format is its machine-readability, which determines the ability of automatic tools to extract information from documents.
In general we deal with three types of formats:
formats optimized for creating and editing the documents, such as MS Word formats or LaTeX,
formats optimized for presentation, mostly used for exchanging and storing the documents, but not for manipulating them, such as PDF,
modern, machine-readable formats storing various aspects of documents, such as the content and physical and/or logical structure.
Among the most popular formats used for creating and editing documents are of course those related to Microsoft Word (https://products.office.com/en-us/word), a widely used word processor. Microsoft Word uses several file formats, and the default one varies with the version of the software.
In the 1990s and early 2000s the default format was .DOC (https://msdn.microsoft.com/en-us/library/office/cc313153%28v=office.12%29.aspx). It is a very complex binary format, where a document is in fact a hierarchical file system within a file. The format was optimized for the performance of the software during editing and viewing, and not for machine understanding. For many years the .DOC format specification was closed. Some specifications for Microsoft Office 97 were first published in 1997 under a restrictive license, and remained available until 1999. From 2006 the specification was available under a restrictive license on request. In 2008 Microsoft released the .DOC format specification under the Microsoft Open Specification Promise. Unfortunately, due to the format's complexity and missing descriptions of some features, automatic analysis of .DOC files still requires some amount of reverse engineering.
Starting from Microsoft Office 2007, the default format is Office Open XML (http://officeopenxml.com/), which comprises formats for word processing documents, spreadsheets and presentations, as well as specific formats for mathematical formulae, graphics, bibliographies, etc. The format uses WordprocessingML as the markup language for word processing documents. In comparison to .DOC, OOXML is much more machine-readable thanks to the use of XML and open specifications.
Another format used for creating and editing documents, popular especially in academia, is LaTeX (https://www.latex-project.org/). As opposed to Microsoft Word users, writers using LaTeX write in plain text and use markup tagging to define styles, the document structure, mathematical formulae, citations, and so on. LaTeX uses the TeX typesetting program for formatting its output, and is itself written in the TeX macro language. LaTeX documents can be processed by machines, although the format is often used as an intermediate format only.
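For illustration, a minimal LaTeX source marks up logical structure rather than appearance (the content and citation key below are invented for the example):

```latex
\documentclass{article}
\begin{document}
\section{Introduction}  % logical structure, not physical layout
Scholarly communication in the digital era~\cite{example2015}
relies on markup for formulae such as $E = mc^2$ and references.
\end{document}
```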
Portable Document Format (PDF) [pdfref] is currently the most popular format for exchanging and storing documents, including the contents of scientific publications. The format is optimized for presentation: PDF documents look the same no matter what application software, operating system or hardware is used for creating or viewing them.
A PDF document is in fact a collection of objects that together specify the appearance of a list of pages and their content. A single page contains a PDF content stream which is a sequence of text, graphics and image objects printed on the page, along with all the information related to the position and appearance of all the objects.
A text object in a PDF stream specifies the text to be painted on the page, as well as the font, size, position, and other geometric features used to print the text. Listing 1 shows an example text object, which results in writing a string ”PDF” using 10-point font identified by F13 font source (typically Helvetica), 360 typographic points from the bottom of the page and 288 typographic points from its left edge.
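Based on the description above, the referenced text object can be sketched as follows (this is a reconstruction matching the canonical example from the PDF specification, not a reproduction of the original listing):

```
BT              % begin text object
/F13 10 Tf      % select font resource F13 at 10 points
288 360 Td      % move 288 points from the left edge, 360 from the bottom
(PDF) Tj        % paint the string "PDF"
ET              % end text object
```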
A text object can contain three types of operators:
text state operators, used to set and modify text state parameters, such as character spacing, word spacing, horizontal scaling, text font and text font size,
text positioning operators, which control the placement of chunks that are subsequently painted, for example they can be used to move the current position to the next line with or without an offset,
text showing operators, used to paint the text according to the current state and position parameters.
Depending on the software and method used to create a PDF file, a single text-showing operator can be used to print a single character, word, line, or any other chunk of continuous text without line breaks. Spaces may be included in the text strings painted on the pages, or may be a result of moving the current cursor to a different position. Some text decorations, such as underline or strikethrough, can be produced using specialized fonts or printed independently on top of the text as geometric objects.
What is more, the PDF format does not preserve any information related to the logical structure of the text, such as words, lines, paragraphs, enumerations, column layout, sections, section titles or even the reading order of text chunks. This information has to be deduced from the geometric features of the text chunks. The text in a PDF file may also be present not in the form of text operators, but as images of scanned pages. In such cases only optical character recognition can be used to extract the text content from a file. All these issues make the PDF format very difficult for machines to understand.
Another format specifying the precise positions and the appearance of the text in a document is TrueViz [LeeK03], an XML-based, machine-readable format. TrueViz stores the geometric structure of the document containing pages, zones, lines, words and characters, along with their positions, dimensions, font information and the reading order.
Modern XML-based machine-readable formats can be used for storing both structured metadata and the content of documents, preserving various characteristics related to the appearance and meaning of the text. For example NLM JATS (http://jats.nlm.nih.gov/, Journal Article Tag Suite) defines a rich set of XML elements and attributes for describing scientific publications. Documents in JATS format can store a wide range of structured metadata of the document (title, authors, affiliations, abstract, journal name, identifiers, etc.), the full text (the hierarchy of sections, headers and paragraphs, structured tables, equations, etc.), the document's bibliography in the form of a list of references along with their identifiers and metadata, and also information related to the text formatting.
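As an illustration, a heavily abridged JATS-like document might look as follows (the element names follow JATS; the content is invented):

```xml
<article>
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Example Journal</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>An Example Article</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name><surname>Smith</surname><given-names>Jane</given-names></name>
          <xref ref-type="aff" rid="aff1"/>
        </contrib>
      </contrib-group>
      <aff id="aff1">Example University</aff>
      <abstract><p>...</p></abstract>
    </article-meta>
  </front>
  <body>
    <sec><title>Introduction</title><p>...</p></sec>
  </body>
  <back>
    <ref-list>...</ref-list>
  </back>
</article>
```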
Other similar XML-based formats include the format developed by the Text Encoding Initiative (TEI, http://www.tei-c.org), which is semantic rather than presentational, and the Dublin Core Schema (http://dublincore.org/schemas/), a small set of vocabulary terms that can be used to describe documents.
In our algorithm we use three formats described above. PDF, as the most popular format for storing the documents in digital libraries, is the input format to the entire algorithm. TrueViz is used as an intermediate format to serialize the geometric model of the input document inferred from the PDF file. The output format is NLM JATS, as a widely used machine-readable format able to store both the metadata of the document as well as structured full text in hierarchical form.
2.2 Relevant Machine Learning Techniques
This section briefly describes machine learning tasks and techniques related to extracting metadata from documents. We focus mainly on the algorithms used in our work.
2.2.1 General Classification
Classification is one of the most useful techniques in the context of extracting information from documents. It can be used to determine the roles played in the document by fragments of various granularity.
Classification refers to the problem of assigning a category (a label from a known label set) to an instance. In supervised machine learning this is achieved by learning a model (a classification function) from a set of instances with known labels, called the training set, and applying the learned function to new instances with unknown labels. Instances are typically represented by features of various types (binary, numerical, categorical).
There are many known classification algorithms, for example: linear classifiers (including LDA, naive Bayes, logistic regression and Support Vector Machines), which make classification decisions based on a linear combination of instance features; the k-Nearest Neighbors algorithm, in which the decision is based on the labels of instances close to the input instance according to some metric; or decision trees, which make a decision based on a sequence of "questions" related to the values of individual features.
Our extraction algorithm makes extensive use of Support Vector Machines. SVM [BoserGV92, Vapnik98, Cristianini10] is a powerful classification technique able to handle a large variety of input and work effectively even with training data of a small size. SVM is a binary classifier (able to handle label sets containing exactly two elements) based on finding the optimal separation hyperplane between the observations of two classes. It is not very prone to overfitting, generalizes well, does not require a lot of parameters and can deal with highly dimensional data. SVM is widely used for content classification and achieves very good results in practice.
Let the classification instances be represented by vectors of real-valued features. Let us also assume that the label set contains exactly two elements. SVM is a linear model of the form
$$y(\mathbf{x}) = \mathbf{w}^T \phi(\mathbf{x}) + b$$
where:
$\mathbf{x}$ is a feature vector representing the classification instance,
$\phi$ denotes a fixed feature-space transformation,
$\mathbf{w}$ and $b$ are parameters determined during the training based on the training instances,
new instances are classified according to the sign of $y(\mathbf{x})$.
The training set contains vectors $\mathbf{x}_1, \dots, \mathbf{x}_N$ with corresponding target values (labels) $t_1, \dots, t_N$, where $t_i \in \{-1, 1\}$. If we assume the training set is linearly separable, then there exists at least one choice of the parameters $\mathbf{w}$ and $b$ such that the function satisfies $y(\mathbf{x}_i) > 0$ for points having $t_i = +1$ and, similarly, $y(\mathbf{x}_i) < 0$ for points having $t_i = -1$. In short, we have $t_i \, y(\mathbf{x}_i) > 0$ for all training data points. There might of course exist many choices of the parameters separating the classes entirely. The objective of the training phase is to find the parameters resulting in the best separation.
In SVM we are interested in finding the parameters which maximize the margin in the training set, which is the smallest distance between the decision boundary and any of the points of a given class. Formally, the task of the training is to find
$$\arg\max_{\mathbf{w}, b} \left\{ \frac{1}{\|\mathbf{w}\|} \min_i \left[ t_i \left( \mathbf{w}^T \phi(\mathbf{x}_i) + b \right) \right] \right\}$$
Since the direct solution of this problem would be very complex, we often convert it into an equivalent problem that is easier to solve by rescaling $\mathbf{w}$ and $b$, so that we have $t_i \left( \mathbf{w}^T \phi(\mathbf{x}_i) + b \right) = 1$ for the point that is the closest to the decision boundary. For all data points we then have $t_i \left( \mathbf{w}^T \phi(\mathbf{x}_i) + b \right) \ge 1$ and the optimization problem now becomes the equivalent
$$\arg\min_{\mathbf{w}, b} \frac{1}{2} \|\mathbf{w}\|^2 \quad \text{subject to} \quad t_i \left( \mathbf{w}^T \phi(\mathbf{x}_i) + b \right) \ge 1$$
If the feature space is not linearly separable, then there is no hyperplane separating the training data points of the two classes. To deal with this, we allow some data points to be on the wrong side of the hyperplane, but we use a penalty which increases with the distance to the decision boundary. We introduce slack variables $\xi_i \ge 0$, one per training instance, and the condition becomes $t_i \, y(\mathbf{x}_i) \ge 1 - \xi_i$. Our new optimization problem now becomes the following:
$$\arg\min_{\mathbf{w}, b, \boldsymbol{\xi}} \frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^{N} \xi_i \quad \text{subject to} \quad t_i \, y(\mathbf{x}_i) \ge 1 - \xi_i, \quad \xi_i \ge 0$$
where $C > 0$ is the regularization parameter (the penalty parameter of the error term).
For practical reasons, we usually do not operate on the function $\phi$ directly, but on a kernel function $K(\mathbf{x}_i, \mathbf{x}_j) = \phi(\mathbf{x}_i)^T \phi(\mathbf{x}_j)$. The most popular kernel functions are:
linear: $K(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{x}_i^T \mathbf{x}_j$,
polynomial: $K(\mathbf{x}_i, \mathbf{x}_j) = \left( \gamma \, \mathbf{x}_i^T \mathbf{x}_j + r \right)^d$,
radial basis function (RBF): $K(\mathbf{x}_i, \mathbf{x}_j) = \exp\left( -\gamma \, \|\mathbf{x}_i - \mathbf{x}_j\|^2 \right)$,
sigmoid: $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh\left( \gamma \, \mathbf{x}_i^T \mathbf{x}_j + r \right)$.
The kernel function, as well as its parameters $\gamma$, $d$ and/or $r$, are typically set prior to the training. Usually some procedure is adopted in order to determine the best kernel function and its parameters. One of the most widely used is a grid search, in which various combinations of parameters are used to assess the classifier performance on a validation set and the parameters giving the best scores are chosen.
Since SVM is a binary classifier, we usually need a strategy for dealing with multiple target classes. The two most popular strategies are one-vs.-all and one-vs.-one. In the one-vs.-all strategy we train a single classifier per class, with the instances of the given class as positive samples and all other samples as negatives. In the one-vs.-one approach [Knerr90] we train a single classifier for each pair of classes using only the samples of those classes, resulting in $k(k-1)/2$ classifiers for $k$ classes. During the classification a voting strategy might be used. Early works applying this strategy to SVM-based classification include, for example, [Kreel99].
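The one-vs.-one voting step can be sketched as follows; the pairwise classifiers below are trivial stand-ins for trained binary SVMs, and all names and thresholds are invented for illustration:

```python
def one_vs_one_predict(classifiers, x):
    """Predict a label by majority vote over pairwise binary classifiers.

    `classifiers` maps a class pair (a, b) to a function that returns
    either a or b for a given instance x.
    """
    votes = {}
    for clf in classifiers.values():
        winner = clf(x)
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# Toy stand-ins for trained binary SVMs: k(k-1)/2 = 3 classifiers
# for the three classes "a", "b", "c".
classifiers = {
    ("a", "b"): lambda x: "a" if x < 5 else "b",
    ("a", "c"): lambda x: "a" if x < 3 else "c",
    ("b", "c"): lambda x: "b" if x < 8 else "c",
}
print(one_vs_one_predict(classifiers, 2))  # → a
```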
2.2.2 Sequence Classification
A special case of classification, sequence classification, is also often encountered in the document analysis domain. In sequence classification we are interested in analysing sequences of instances rather than independent instances.
More formally, the input is a sequence of instances and we are interested in finding a sequence of corresponding class labels from a known label set. We would like to predict a vector $\mathbf{y} = (y_1, y_2, \dots, y_T)$ of class labels given an observed feature vector $\mathbf{x}$, which is typically divided into feature vectors $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_T$. Each $\mathbf{x}_t$ contains various information about the instance at position $t$.
Sequence classification can be approached like any other classification problem by simply treating sequence elements as independent classification instances, where the successor and/or predecessor relations might be reflected in the instances' features. On the other hand, sequences can also be seen as special cases of graphs and analysed with graphical modelling tools. In graphical modelling we probabilistically model arbitrary graphs, which represent the conditional dependence structure between random variables (labels and features).
A lot of effort in learning with graphical models has focused on generative models that explicitly model a joint probability distribution over both features and output labels, which usually has the form $p(\mathbf{y}, \mathbf{x}) = p(\mathbf{y}) \, p(\mathbf{x} \mid \mathbf{y})$. One very popular approach from this family is Hidden Markov Models (HMM).
HMM models a chain of variables, where every variable can be in a certain state (states correspond to the labels) and emit observations (observations correspond to the features). In HMM we assume that each state depends only on its immediate predecessor, and each observation variable depends only on the current variable's state. The model comprises the initial probability distribution for the state of the first variable in a sequence, the transition probability distribution from one variable's state to the next, and the emission probability distributions. The classification is performed with the use of the Viterbi algorithm, which infers the most probable label sequence based on the observed features.
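The Viterbi decoding step described above can be sketched as follows; the states, observations and probabilities are invented for illustration (a toy model that labels text lines as "title" or "body" from a "bold"/"plain" feature):

```python
def viterbi(states, init_p, trans_p, emit_p, observations):
    """Infer the most probable state sequence for an observation sequence."""
    # best[t][s]: probability of the most likely path ending in state s at time t
    best = [{s: init_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prev, p = max(
                ((r, best[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda rp: rp[1],
            )
            best[t][s] = p * emit_p[s][observations[t]]
            back[t][s] = prev
    # Backtrack from the most probable final state.
    state = max(best[-1], key=best[-1].get)
    path = [state]
    for t in range(len(observations) - 1, 0, -1):
        state = back[t][state]
        path.append(state)
    return list(reversed(path))

states = ("title", "body")
init_p = {"title": 0.6, "body": 0.4}
trans_p = {"title": {"title": 0.3, "body": 0.7},
           "body": {"title": 0.1, "body": 0.9}}
emit_p = {"title": {"bold": 0.8, "plain": 0.2},
          "body": {"bold": 0.1, "plain": 0.9}}

print(viterbi(states, init_p, trans_p, emit_p, ["bold", "plain", "plain"]))
# → ['title', 'body', 'body']
```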
Apart from generative models, another family of approaches are discriminative models, which instead of modelling the joint probability focus only on the conditional distribution $p(\mathbf{y} \mid \mathbf{x})$. This approach does not include modelling $p(\mathbf{x})$, which is not needed for classification and often hard to model because it may contain many highly dependent features. Modelling the dependencies among inputs can lead to complex and unmanageable models, but ignoring them can result in reduced performance. Because dependencies that involve only variables in $\mathbf{x}$ play no role in the discriminative models, these models are better suited to including rich, overlapping features.
A very popular model in the discriminative family is Conditional Random Fields (CRF) [LaffertyMP01]. CRF combines the advantages of classification and graphical modeling, bringing together the ability to model multivariate, highly dependent data with the ability to leverage a large number of input features for prediction. CRF can be seen as a discriminative variant of HMM.
In general CRF can be used to model arbitrary graphs. A special case, a linear-chain CRF, models sequences of variables and is a distribution of the form
$$p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})} \exp \left\{ \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_t, y_{t-1}, \mathbf{x}, t) \right\}$$
where:
$\{f_k\}$ is a set of real-valued feature functions, which typically are based on two consecutive class labels and the entire observation sequence,
$\boldsymbol{\lambda} = (\lambda_1, \lambda_2, \dots)$ is a vector of feature weights which are learned during the training phase,
$Z(\mathbf{x})$ is a normalization function making $p(\mathbf{y} \mid \mathbf{x})$ a valid probability:
$$Z(\mathbf{x}) = \sum_{\mathbf{y}} \exp \left\{ \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_t, y_{t-1}, \mathbf{x}, t) \right\}$$
After we have trained the model, we can predict the labels of a new input by calculating the most likely labeling
$$\mathbf{y}^* = \arg\max_{\mathbf{y}} \, p(\mathbf{y} \mid \mathbf{x})$$
In the case of a linear-chain CRF, finding the most probable label sequence can be performed efficiently and exactly by variants of the standard dynamic programming algorithm for HMM, the Viterbi algorithm.
CRF is trained by a maximum likelihood estimation, that is, the parameters are chosen such that the training data has the highest probability under the given model.
2.2.3 Clustering
Clustering is another very useful technique in document analysis. It can be employed whenever we wish to group a set of objects into disjoint subsets, called clusters, such that the objects in the same cluster have similar characteristics. Two widely used clustering techniques are hierarchical clustering and k-means clustering.
Hierarchical clustering not only groups objects into clusters, but also results in a hierarchy of clusters. In a bottom-up approach each object starts as a single-element cluster, and the clusters are iteratively merged according to a certain strategy. In a top-down approach we start with a single cluster containing the entire set, and the clusters are then iteratively split. Both approaches result in a tree-like hierarchy, where the root is a cluster containing the entire set, the leaves represent the individual elements and the remaining nodes are clusters of various granularity.
In order to decide which clusters should be combined or where a cluster should be split, we need a measure of distance between sets of observations $A$ and $B$. This is typically based on a distance metric $d$ between individual points. Some commonly used measures are:
minimum distance between pairs of observations (single linkage clustering): $\min \{ d(a, b) : a \in A, \, b \in B \}$,
maximum distance between pairs of observations (complete linkage clustering): $\max \{ d(a, b) : a \in A, \, b \in B \}$,
average distance between pairs of observations (average linkage clustering): $\frac{1}{|A| \, |B|} \sum_{a \in A} \sum_{b \in B} d(a, b)$.
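The bottom-up variant with single linkage can be sketched as follows (a naive cubic-time implementation on one-dimensional points, for illustration only):

```python
def agglomerative(points, target_k, dist=lambda a, b: abs(a - b)):
    """Merge clusters bottom-up by single linkage until target_k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        # Find the pair of clusters with the smallest single-linkage distance,
        # i.e. the minimum distance over all cross-cluster point pairs.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

print(agglomerative([1, 2, 10, 11, 20], target_k=3))  # → [[1, 2], [10, 11], [20]]
```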
In k-means clustering the number of target clusters has to be known in advance. The clusters are represented by the means of the observations and each data point belongs to the closest mean according to a given distance metric.
More formally, given a set of observations $(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n)$, where each observation is a real vector, k-means clustering aims to partition the observations into $k$ sets $S = \{S_1, S_2, \dots, S_k\}$ so that the within-cluster sum of squares is minimized. In other words, its objective is to find:
$$\arg\min_{S} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \|\mathbf{x} - \boldsymbol{\mu}_i\|^2$$
where $\boldsymbol{\mu}_i$ is the mean of the vectors in $S_i$.
The k-means algorithm works in iterations. At the beginning we choose $k$ vectors as the initial centroids of the clusters. Then every point in the data set is assigned to the nearest cluster centroid and the centroids are recalculated as the means of their assigned points. This is repeated until there are no more changes in the point assignments. The algorithm converges to a local minimum, but there is no guarantee that a global minimum will be found. To obtain better results the algorithm can be repeated several times with different initial centroids.
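The iteration described above can be sketched as follows (one-dimensional points for brevity; the vector case is analogous):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: recompute each centroid as the mean of its cluster.
        updated = [sum(c) / len(c) if c else centroids[i]
                   for i, c in enumerate(clusters)]
        if updated == centroids:  # assignments stable: local minimum reached
            break
        centroids = updated
    return sorted(centroids)

print(kmeans([1.0, 2.0, 0.0, 9.0, 10.0, 8.0], k=2))  # → [1.0, 9.0]
```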
2.3 Document Analysis
In this section we describe the state of the art in the area of scientific literature analysis. The section covers a number of tasks related to the problem, including layout analysis and information extraction.
Extracting metadata and content from scientific articles and other documents is a well-studied problem. Older approaches expected scanned documents as input and performed full digitization from bitmap images. Nowadays we have to deal with a growing number of born-digital documents, which do not require individual character recognition.
Extracting information from documents is a complex problem and usually has to be divided into smaller subtasks. Typical tasks related to the extraction problem include:
Preprocessing, which can be understood as parsing the input document and preparing a model of it for further analysis. The difficulty depends heavily on the input format. In the case of scanned documents, OCR has to be performed. For PDFs, the input text objects need to be parsed. Highly machine-readable formats, such as NLM JATS or TrueViz, are comparatively easy to process.
Page segmentation, in which we detect basic objects on the pages of the document, for example text lines or blocks (zones). As before, depending on the format, it might be sufficient to parse the input XML-based file, or a more complicated analysis of the mutual positions of text chunks or characters may be required.
Reading order resolving, in which we determine the order in which all the text elements should be read. In Western scripts the text is usually read from top to bottom and from left to right, but the resolver has to take into account the column layout, various decorations such as page numbers, headers or footers, text elements floating around images and tables, etc.
Region classification, in which we detect the roles played by different regions in the document. The classification may be performed on the instances of various kinds (such as zones, lines or text chunks, images, other graphical objects) and can be based on many different features related to both text content and the way the objects are displayed on the document’s pages.
Parsing, which assigns sequences of labels to text strings. Parsing is typically used to detect metadata in shorter fragments of text, such as citation or affiliation strings, author full names, etc.
In the following subsections we discuss the state of the art in the following tasks: page segmentation (Section 2.3.1), reading order resolving (Section 2.3.2), document content classification (Section 2.3.3) and sequence parsing (Section 2.3.4). Finally, in Section 2.3.5 we describe available systems and tools for processing scientific publications and extracting useful metadata and content from them.
2.3.1 Page Segmentation
Page segmentation refers to the task of detecting objects of various kinds in a document's pages. The objects we are interested in can be zones (regions separated geometrically from other parts, such as blocks of text or images), text lines and/or words. Most approaches assume an image of the page as input and require an additional OCR phase, as well as noise removal. Some of the algorithms can be adapted to analyse born-digital documents, where the input is rather a bag of characters or text chunks appearing on the page.
One of the most famous and widely used page segmentation algorithms is XY-cut, proposed by Nagy et al. [NagySV92]. XY-cut is a top-down algorithm which recursively divides the input page into blocks. The result of the algorithm is a tree, in which the root represents the entire document page and the leaf nodes are the final blocks. The tree is built from the top by recursively dividing the current rectangular region into two rectangular parts by cutting it horizontally or vertically. The place of a cut is determined by detecting valleys (empty horizontal or vertical stripes touching the top and bottom, or the left and right, edges of the current region). By default the widest valley is chosen as the cut, and the entire process stops when there are no more valleys wider than a predefined threshold. XY-cut is a simple and efficient algorithm, though sensitive to the skew of the page.
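The recursive cutting scheme can be sketched as follows; this is a simplified illustration operating on bounding boxes rather than a page image, and the valley search and threshold handling are simplifications, not the original implementation:

```python
def xycut(boxes, min_gap=10):
    """Simplified XY-cut over bounding boxes (x0, y0, x1, y1).

    Splits a set of boxes along the widest empty horizontal or vertical
    valley wider than min_gap and recurses; returns the leaf groups
    (zones). Since a valley is empty by definition, no box straddles it.
    """
    def widest_valley(intervals):
        # intervals: sorted (start, end) projections of boxes on one axis;
        # returns (gap_width, cut_position) for the widest empty gap
        best = (0, None)
        covered_to = intervals[0][1]
        for start, end in intervals[1:]:
            gap = start - covered_to
            if gap > best[0]:
                best = (gap, (start + covered_to) / 2)
            covered_to = max(covered_to, end)
        return best

    xs = sorted((b[0], b[2]) for b in boxes)
    ys = sorted((b[1], b[3]) for b in boxes)
    (gx, cx), (gy, cy) = widest_valley(xs), widest_valley(ys)
    gap, axis, cut = max((gx, 0, cx), (gy, 1, cy))
    if gap <= min_gap:
        return [boxes]                     # no valley wide enough: leaf zone
    left = [b for b in boxes if b[axis + 2] <= cut]
    right = [b for b in boxes if b[axis] >= cut]
    return xycut(left, min_gap) + xycut(right, min_gap)
```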
The run-length smearing algorithm (RLSA) proposed by Wong et al. [WongCW82] expects an image of the page as input and analyses the bitmap of white and black pixels. It is based on the simple observation that zones are typically dense and contain many black pixels separated only by small numbers of white pixels. The first phase of the algorithm is called smearing: the sequences of pixels (rows or columns of the bitmap) are analysed, and black pixels separated by only a small number (less than a predefined threshold) of white pixels are joined together by turning the separating white pixels black. Smearing is performed vertically and horizontally separately, with different thresholds, and the resulting bitmaps are then combined in a logical AND operation. Then one additional horizontal smearing is performed using a smaller threshold, resulting in a smoothed final bitmap. Next, connected component analysis is performed on the pixels to obtain document zones. Finally, each block's mean height and mean run-length of black pixels are compared to the mean values calculated over all blocks on the page. Based on this, each block is classified into one of four classes: text, horizontal black line, vertical black line or image.
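The smearing step can be illustrated on a single bitmap row; a simplified sketch, not the original implementation:

```python
def smear_row(row, threshold):
    """Horizontal RLSA smearing of one bitmap row (1 = black, 0 = white).

    Runs of white pixels shorter than `threshold` that lie between
    black pixels are turned black, joining nearby components.
    Leading white pixels (no black pixel before them) are left alone.
    """
    out = list(row)
    run_start = None                 # start index of the current white run
    seen_black = False
    for i, px in enumerate(row):
        if px == 1:
            if seen_black and run_start is not None and i - run_start < threshold:
                for j in range(run_start, i):
                    out[j] = 1       # fill the short white gap
            run_start = None
            seen_black = True
        elif run_start is None:
            run_start = i
    return out
```

In the full algorithm this is applied to every row and every column, with different thresholds, and the two smeared bitmaps are combined pixel-wise with a logical AND.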
The whitespace analysis algorithm proposed by Baird [Baird94] is based on analysing the structure of the white background in document images. First, the algorithm finds a set of maximal white rectangles called covers, the union of which completely covers the background. Then, the covers are sorted using a sorting key based on the rectangle area combined with a weighting function, which assigns higher weight to tall and long rectangles (as meaningful separators of text blocks). Next, we gradually construct the union of the covers in the resulting order, covering more and more of the white background. At each step, connected components within the remaining uncovered parts are considered candidates for text blocks. This process stops at some point, determined by a stopping rule, which results in the final segmentation. The stopping rule is defined as a predicate function of two numerical properties of segmentations: the sorting key of the covers and the fraction of the cover set used so far.
Breuel [breuel02] describes two geometric algorithms for solving layout analysis-related problems: finding a set of maximal empty rectangles covering the background whitespace of a document page image and finding constrained maximum likelihood matches of geometric text lines in the presence of obstacles. The combination of these algorithms can be used to find text lines in a document in the following manner: after finding the background rectangles, they are evaluated as candidates for column separators (called gutters or obstacles) based on their aspect ratio, width, text columns width and proximity to text-sized connected components, and finally, the whitespace rectangles representing the gutters are used as obstacles in a robust least square text-line detection algorithm. This approach is not sensitive to font size, font style, or scan resolution.
The Docstrum algorithm proposed by O'Gorman [OGorman93] is a bottom-up page segmentation approach based on the analysis of the nearest-neighbor pairs of connected components extracted from the document image. After noise removal, nearest neighbors are found for each connected component. Then, histograms of the distances and angles between nearest-neighbor pairs are constructed. The peak of the angle histogram gives the dominant skew (the text line orientation angle) in the document image. This skew estimate is used to compute within-line nearest-neighbor pairs. Next, text lines are found by computing the transitive closure on within-line nearest-neighbor pairs using a threshold. Finally, text lines are merged to form text blocks using a parallel distance threshold and a perpendicular distance threshold. The algorithm uses a significant number of threshold values and performs best when they are tuned for a particular document collection.
The Voronoi-diagram based segmentation algorithm proposed by Kise et al. [KiseSI98] is also a bottom-up approach. It is based on a generalization of the Voronoi diagram called the area Voronoi diagram, where the regions are generated by a set of non-overlapping figures of any shape rather than individual points, and the distance between a point and a figure is defined as the minimal distance between the point and any point belonging to the figure. At the beginning the algorithm computes the connected components and samples points from their boundaries. Then, a Voronoi diagram is generated from the sample points, and the Voronoi edges that pass through any connected component are deleted to obtain an area Voronoi diagram. Finally, superfluous Voronoi edges are deleted to obtain the boundaries of document components. An edge is considered superfluous if the minimum distance between its associated connected components is small, if the area ratio of the two connected components is above a certain threshold, or if at least one of its terminals is neither shared by another Voronoi edge nor lies on the edge of the document image. The algorithm works well even for non-Manhattan layouts and is not sensitive to line skew or text orientation.
The above six algorithms were evaluated and compared by Shafait et al. [ShafaitKB08]. They propose a pixel-accurate representation of a document’s page along with several performance measures to identify and analyze different classes of segmentation errors made by page segmentation algorithms. The algorithms were evaluated using a well-known University of Washington III (UW-III) database [Guyon97], which consists of 1,600 English document images with Manhattan layouts scanned from different archival journals with manually edited ground-truth of entity bounding boxes, including text and non-text zones, text lines and words. On average, Docstrum along with the Voronoi-based algorithm achieved the lowest error rates in most categories. Docstrum is also the only algorithm, which by default detects both text lines and zones.
In addition, International Conference on Document Analysis and Recognition (ICDAR) hosted a number of page segmentation competitions starting from 2001. The last competition for tools and systems of general segmentation purpose took place in 2009 [AntonacopoulosPBP09]. Its aim was to evaluate new and existing page segmentation methods using a realistic dataset and objective performance measures. The dataset used comprised both technical articles and magazine pages and was selected from the expanded PRImA dataset [AntonacopoulosBPP09].
In 2009 four systems were submitted to the competition. The DICE (Document Image Content Extraction) system is based on classifying individual pixels into machine-printed text, handwritten text and photograph [BairdMAC07], followed by a post-classification methodology [AnBX07] which enforces local uniformity without imposing a restricted class of region shapes. The Fraunhofer Newspaper Segmenter system is based on white [breuel02] and black [ZhengLDP01] separator detection followed by a hybrid approach to page segmentation [JainY98]. The REGIM-ENIS method is designed primarily for degraded multi-script, multi-lingual complex official documents, which also contain tabular structures, logos, stamps, handwritten text and images. Finally, Tesseract [Smith09] is an extension of the Tesseract OCR system whose page layout analysis uses bottom-up methods, including binary morphology and connected component analysis, to estimate the type (text, image, separator, or unknown) of connected components.
According to the competition results, the Fraunhofer Newspaper Segmenter method performed the best, improving on both the state-of-the-art methods (ABBYY FineReader and OCRopus) and the best methods of the ICDAR2007 page segmentation competition.
The segmentation algorithms can be adapted to process born-digital documents, although in some cases this might be non-trivial. In the case of born-digital documents we often deal with characters, their dimensions and positions, but we lack a pixel-accurate representation of the pages, and thus the algorithms analysing individual pixels, such as RLSA or the Voronoi-based method, would require an additional preprocessing step.
2.3.2 Reading Order Resolving
Another task related to document layout analysis is reading order resolving, which aims at determining the order in which all the elements on a given page should be read. One might be interested in the order of elements of various types, such as zones, lines or words. An accurate solution has to take into account many different aspects of the document layout, such as column layout, the presence of images and other text fragments not belonging to the main text flow, various language scripts, etc.
The XY-cut algorithm, described in the previous section, can be naturally extended to output the extracted zones in their natural reading order. In XY-cut every cut divides the current page fragment into two blocks positioned left-right or top-bottom relative to each other. If we assume that the text should be read from left to right and from top to bottom, it is enough to always assign the left or top part to the left child of the constructed tree, and the right or bottom part to the right child. After the tree is complete, an in-order traversal of the leaves gives the resulting reading order of the extracted zones.
There are, however, a few serious problems with this approach. First of all, the algorithm is not able to process non-Manhattan layouts (such as pages containing L-shaped zones). This is not a big problem in the case of scientific publications, since most of them use a Manhattan layout. There is also the issue of choosing the right threshold for the minimum width of a valley, which might vary from one document to another. Most problematic is the issue of choosing the best cut when there are a number of possible valleys to choose from. The default decision in XY-cut is to cut the region along the widest valley, which works well for page segmentation but often results in an incorrect reading order. For example, a multi-column page might get cut horizontally in the middle, dividing all the columns, because the gaps between paragraphs or sections happen to align in one horizontal line, creating a valley wider than the gap between the columns.
Various approaches addressing these issues have been proposed. For example, Ishitani et al. [Ishitani03] describe a bottom-up approach in which some objects on the page are merged prior to applying XY-cut, using three heuristics based on local geometric features, text orientation and the distance between vertically adjacent layout objects. As observed by Meunier [Meunier05], this reduces the probability of dealing with multiple cutting alternatives, but does not entirely prevent them from occurring. Meunier proposes a different approach, in which an optimal sequence of XY cuts is determined using dynamic programming and a score function that prefers column-based reading order. This results in a cutting strategy which favors vertical cuts over horizontal ones, based on the heights of blocks.
Another example of a reading order algorithm is the approach proposed by Breuel [Breuel03]. It is based on topological sorting and can be used to determine the reading order of text lines. At the beginning, four simple rules are used to determine the order between a subset of line pairs on a page, giving a partial order. The rules are based on mutual coordinate positions and overlap; for example, line segment a comes before line segment b if their ranges of x-coordinates overlap and a is above b on the page. Finally, a topological sorting algorithm is applied to find a global order consistent with the previously determined partial order.
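The idea can be sketched as follows; this simplified illustration uses only two ordering rules (a above b in a shared column; a left of b on the same line) and Kahn's topological sorting algorithm, whereas Breuel's method uses four rules and handles ambiguities more carefully:

```python
from collections import defaultdict, deque

def reading_order(lines):
    """Topological-sort reading order for text lines given as
    (x0, y0, x1, y1) boxes, with the y axis growing downwards.
    Returns a list of line indices in reading order."""
    n = len(lines)
    succ = defaultdict(list)
    indeg = [0] * n
    for a in range(n):
        ax0, ay0, ax1, ay1 = lines[a]
        for b in range(n):
            if a == b:
                continue
            bx0, by0, bx1, by1 = lines[b]
            x_overlap = ax0 < bx1 and bx0 < ax1
            y_overlap = ay0 < by1 and by0 < ay1
            # rule 1: a is fully above b and they share a column;
            # rule 2: a and b are on the same line and a is to the left
            if (x_overlap and ay1 <= by0) or (y_overlap and ax1 <= bx0):
                succ[a].append(b)
                indeg[b] += 1
    # Kahn's algorithm over the resulting partial order
    order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
    while queue:
        a = queue.popleft()
        order.append(a)
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)
    return order
```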
As opposed to the previous methods, Aiello et al. [AielloMT02] propose to employ linguistic features in addition to geometric hints. In their approach the input is first divided into two parts: metadata and body. The reading order is determined separately for these subsets, and finally the two orders are combined using rules. Each reading order is determined in two steps performed by the following modules: a spatial reasoning module, based on spatial relations, and a natural language processing module, based on lexical analysis. The modules are applied in order: first, the spatial reasoning module identifies a number of possible reading orders by solving a constraint-satisfaction problem, where the constraints correspond to rules such as "documents are usually read from top to bottom and from left to right". Then the natural language processing module identifies the linguistically most probable reading orders among those returned by the first module, using part-of-speech tagging and assessing the probabilities of the POS sequences of the possible reading orders.
Finally, Malerba et al. [MalerbaCB08] propose a learning-based method for reading order detection. In their approach the domain-specific knowledge required for this task is automatically acquired from a set of training examples by applying logic programming techniques. The input of the learning algorithm is a description of chains of layout components defined by the user, and the output is a logical theory defining two predicates: "first to read" and "successor". The algorithm uses only the spatial information of the page elements. In the recognition phase the learned rules are used to reconstruct the reading order, which in this case consists of reading chains and may not define a total ordering.
2.3.3 Content Classification
Content classification, the purpose of which is to determine the roles played by different objects in the document, is a crucial task in document analysis. The problem has been addressed by numerous researchers. Proposed solutions differ a lot in the approach used (usually rule-based or machine learning-based), the classified objects (zones, lines or text chunks), the features and characteristics used (geometrical, formatting, textual, etc.) and the target labels. Examples of target labels include: title, authors, affiliation, address, email, abstract, keywords, but also header, footer, page number, body text and citation.
Rule-based systems were more popular among older algorithms. Such an approach does not require building a training set or performing training, but since the rules are usually constructed manually, it does not generalize well and is not easily maintainable. Rule-based approaches are well-suited for homogeneous and stable document sets with only a few different document layouts.
An example of a rule-based classification is the PDFX system described by Constantin et al. [ConstantinPV13]. In this approach page elements are converted to geometric and textual features, and hand-made rules are used to label them. The target label set contains front matter labels (title, abstract, author, author footnote), body matter labels (body text, h1 title, h2 title, h3 title, image, table, figure caption, table caption, figure reference, table reference, bibliographic item, citation) and others (header, footer, side note, page number, email, URI).
Flynn et al. [FlynnZMZZ07] describe a system which can also be seen as a variant of the rule-based approach. Their algorithm uses a set of templates associated with document layouts, where a template can be understood as a set of rules for labelling page elements. A document is first assigned to a group of documents with a similar layout, and then the corresponding template is used to assign labels to elements. The target label set depends on the layout; some examples include: title, author, date.
Also in the algorithm proposed by Giuffrida et al. [GiuffridaSY00], hand-made rules are used to label text chunks. In this approach, text strings annotated with spatial and visual properties, such as position, page number and font metrics, are used as "facts" in a knowledge base. Basic document metadata, including the title, authors, affiliations and author-affiliation relations, is extracted by a set of hand-made rules that reason upon those facts. Example rules include: "the title is written in the top half of the first page with the biggest font" and "the authors' list immediately follows the title".
Mao et al. [MaoKT04] also propose a rule-based system for extracting basic metadata, including the title, authors, affiliations and abstract, from scanned medical journals. The system is used for the MEDLINE database. Its iterative process includes human intervention to correct the zone labelling obtained from the previous rules. The corrected results are then used to develop specialized geometric and contextual features and new rules for a set of issues of each journal.
Rule-based approaches are especially popular for locating the regions containing references. This is related to the fact that the fairly consistent and clear differences between these sections and other document parts can be easily expressed by hand-made rules and heuristics.
For example, the Pdf-extract888http://labs.crossref.org/pdfextract/ system uses a combination of visual cues and content traits to detect references sections. The rules are to a great extent based on the observation that the references section of a scientific document tends to have a significantly higher ratio of proper names, initials, years and punctuation in comparison to other regions.
In the approach proposed by Gao et al. [GaoTL09], a rule-based method is used to locate citation regions in electronic books. The rules are based on the percentage of text lines on a page containing certain Chinese words, such as "reference" and "bibliography", years and family names.
Also in the system described by Gupta et al. [GuptaMCS09], the reference blocks are found by estimating the probability that each paragraph belongs to the references, using parameters based on paragraph length and the presence of keywords, author names, years and other text clues.
Kern and Klampfl [KernK13] also propose a heuristics-based approach for locating the references sections. Their algorithm first iterates over all blocks in reading order and uses regular expressions and a dictionary of references section titles to find the references header. Then all subsequent lines are collected until another section heading (for example "Acknowledgement", "Autobiographical", "Table", "Appendix", "Exhibit", "Annex", "Fig" or "Notes") or the end of the document is found. Header and footer lines are recognized by comparing blocks across neighboring pages based on their content and geometric position, and are not added to the references content.
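A heading-based location heuristic of this kind can be sketched with two regular expressions; the patterns below are illustrative assumptions, not the exact rules of Kern and Klampfl:

```python
import re

# heading that opens the references section
REF_HEADER = re.compile(r'^\s*(references|bibliography|literature cited)\s*$', re.I)
# headings that typically follow the references and close it
STOP_HEADER = re.compile(r'^\s*(acknowledg\w*|appendix|annex|notes|table|fig)\b', re.I)

def reference_lines(lines):
    """Return the lines between the references heading and the next
    non-reference section heading (or the end of the document)."""
    collected, in_refs = [], False
    for line in lines:
        if not in_refs:
            in_refs = bool(REF_HEADER.match(line))   # heading itself is skipped
        elif STOP_HEADER.match(line):
            break
        else:
            collected.append(line)
    return collected
```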
Supervised machine learning-based approaches are far more popular for classifying document fragments. They are more flexible and generalize better, in particular when we have to deal with diverse document collections. Proposed methods differ in the classification algorithms used, the document fragments that undergo classification (text chunks, lines or blocks) and the extracted features. Examples of classification algorithms used for this task include Hidden Markov Models, Support Vector Machines, neural classifiers, Maximum Entropy and Conditional Random Fields.
For example, Cui and Chen [CuiC10] propose a classification approach in which text blocks (small pieces of text, often smaller than one logical line) are classified with an HMM classifier using features based on location and the font information. The target labels include: title, author, affiliation, address, email and abstract. A straightforward HMM-based approach would just label the stream of text blocks, but the authors modified it to take into account the structure of the lines containing the classified blocks. Based on the location of the text chunks, the HMM state transition matrix is divided into two separate matrices: one for the state transition probability within the same line and the other for the state transition probability between lines. A modified Viterbi algorithm uses these new matrices to find the most probable label sequence.
Han et al. [HanGMZZF03] perform a two-stage classification of document header text lines with the use of Support Vector Machines and only text-related features. They use a rich set of labels: title, author, affiliation, address, note, email, date, abstract, introduction, phone, keyword, web, degree, pubnum and page. In the first step the lines are classified independently of each other using features related to text and dictionaries. The second step makes use of the sequential information among lines by extending the feature vectors with the classes of a number of preceding and following lines. A new classifier is iteratively trained using the extended feature vectors and the lines are reclassified, until the process converges (only a few changes occur in the class assignments).
Another example of an SVM-based approach is described by Kovacevic et al. [Kovacevic2011]. In their method the lines of text on the first page of a document are classified into the following classes: title, authors, affiliation, address, email, abstract, keywords and publication note, using both geometric (formatting, position) and text-related (lexical, NER) features. The authors experimented with different models (decision trees, Naive Bayes, k-Nearest Neighbours and Support Vector Machines) and different strategies for multi-class classification. Based on the results obtained during the classification experiments, an SVM model with a one-vs.-all strategy was chosen, as it gave the best performance on a manually produced test set.
Lu et al. [LuKWG08] also use SVM to classify the lines of text in scanned scientific journals. They use the following classes: title, author, volume, issue, start page, end page, start page index and start page image, together with geometric, formatting and textual features of the text lines. The approach was tested on scanned historical documents nearly two centuries old.
Another approach employs a Multi-Layer Perceptron (MLP) classifier to identify regions that could contain the title and the authors of the paper by classifying text blocks. The features include: graphical features (related to the position on the page, the width and height of the region, and which page it is on), textual features (the number of characters, bold or italic characters) and neighbor features (such as the number of neighboring regions and their distance).
The Team-Beam algorithm proposed by Kern et al. [KernJHG12] uses an enhanced Maximum Entropy classifier for assigning labels to document fragments. The approach works in two stages: first the blocks are classified as title, subtitle, journal, abstract, author, email, affiliation, author-mixed or other, and then the tokens within blocks related to author metadata are classified as given name, middle name, surname, index, separator, email, affiliation-start, affiliation or other. The enhanced classifier takes the classification decisions of preceding instances into account to improve performance and to eliminate unlikely label sequences. The features used for classification are derived from the layout, the formatting, the words within and around a text block, and common name lists.
In the approach proposed by Lopez [Lopez09] the regions of the document are classified using 11 different CRF models cooperating together at various levels of a document’s structure. Each specialized model aims at solving a concrete classification task. The main model classifies the fragments of the entire document into header, body, references, etc. Other models are used for classifying the header fragments, parsing affiliation, author and dates strings, classifying body parts into titles, paragraphs and figures, parsing references and so on. Each model has its own set of features and training sets. The features are based on position-, lexical- and layout-related information.
Cuong et al. [CuongCKL15] also use CRF for labelling the fragments of documents. In their approach the input is a document in plain text, and therefore they do not use the geometric hints present, for example, in PDF files. They describe methods for solving three tasks: reference parsing (where the reference tokens are labelled as title, author, year, etc.), section labelling (where the document's sections are given functional labels, such as abstract, acknowledgement, background, categories, conclusions, discussions, evaluation, general terms, introduction, methodology, references or related works) and assigning labels such as author and affiliation to the lines of the document's header. The instances are classified using higher-order semi-Markov Conditional Random Fields to model long-distance label sequences, improving upon the performance of the linear-chain CRF model.
Finally, Zou et al. [ZouLT10] propose a binary SVM classifier for locating the references sections in the document. The text zones are classified using both geometric and textual features.
2.3.4 Sequence Parsing
Parsing refers to extracting metadata from strings by annotating their fragments with labels from a particular label set. In the context of scientific document analysis, parsing can be used, for example, to extract metadata such as the title, authors, source or date from citation strings, to divide authors' full names into given names and surnames, to recognize days, months and years in date strings, or to extract the institution name, address and country from an affiliation string.
As in the case of content classification, there are two widely used families of approaches to parsing. One popular family of methods is based on regular expressions or knowledge bases. The advantage of these techniques is that they can usually be implemented in a straightforward manner, without gathering any training data or performing training.
For example, Gupta et al. [GuptaMCS09] propose a simple regexp-based approach for classifying fragments of citation strings into particular metadata classes: authors, title, publication and year of publication. The regular expressions are hand-made, and the algorithm is additionally enhanced with a publication database for a domain of interest (zoology), used to look up the title in case the default approach fails.
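A minimal sketch of such a regexp-based splitter; the patterns and the assumed "Authors. Title. Venue" segment ordering are illustrative, not the actual rules of Gupta et al.:

```python
import re

def parse_citation(citation):
    """Naive regexp-based citation field extraction (a sketch).

    Picks out a parenthesized four-digit year and splits the string
    into "Authors. Title. Venue" segments on sentence-like periods.
    """
    fields = {}
    year = re.search(r'\((19|20)\d{2}\)', citation)
    if year:
        fields['year'] = year.group(0).strip('()')
    # naive segmentation: authors before the first period, title next
    parts = [p.strip() for p in citation.split('. ') if p.strip()]
    if len(parts) >= 2:
        fields['authors'], fields['title'] = parts[0], parts[1]
    return fields
```

Real systems need many more patterns (initials, page ranges, editions) and, as noted above, often fall back to a bibliographic database lookup when the patterns fail.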
Jonnalagadda and Topham [Jonnalagadda11] describe the NEMO system, which is able to parse affiliations using rules and a number of dictionaries. The parsing includes extracting fragments related to the country, email address, URL, state, city, street address and organization name. The fragments are extracted in consecutive steps using 30 different manually verified dictionaries, such as the dictionaries in the Geoworldmap database999http://www.geobytes.com/geoworldmap/, the mapping between internet domains and countries, a stop words list, organization-related keywords, address-related keywords and a zip code dictionary.
Day et al. [DayTSHLWWOH07] propose a knowledge-based approach for parsing citations in order to extract the following metadata: author, title, journal, volume, issue, year, and page. The method is based on a hierarchical knowledge representation framework called INFOMAP, a tree-like scheme that organizes knowledge of reference concepts hierarchically and contains characteristic patterns occurring in citations. First, data from the Journal Citation Reports (JCR) indexed by ISI and from digital libraries is collected and fed into the knowledge base. To extract metadata from a citation, the template matching engine uses dynamic programming to match the citation against the syntax templates.
Vilarinho et al. [VilarinhoSGMM07] also propose a knowledge-based approach to citation parsing. In their method, the knowledge base stores words typical for each citation metadata type, which are then used to assign labels to the citation tokens. After that, tokens left unassociated in the previous step are further analyzed and labels are assigned to them based on rules related to their neighbourhood and relative position in the citation string.
Unfortunately, as in the case of general classification, rule-based approaches are poorly adaptable and do not generalize well. For this reason, machine learning-based approaches, which are much more flexible, are far more popular for sequence parsing. These methods typically leverage sequence-related information in addition to the tokens themselves, either by encoding it in the features or by using dedicated sequence labelling algorithms.
For example, Zhang et al. [ZhangZLT11] propose an SVM for classifying the reference tokens into the following classes: citation number, author names, article title, journal title, volume, pagination and publication year. Their method uses structural SVM [TsochantaridisHJA04], an extension of SVM designed for predicting complex structured outputs, such as sequences, trees and graphs. The features are related to dictionaries of author names, article titles and journal titles, patterns for name initials or years, the presence of digits and letters, and the position of the token. Additionally, two kinds of contextual features are used: the features of the neighboring tokens and the labels assigned to those tokens.
Hetzner [Hetzner08] proposes to parse citation strings using an HMM in order to extract: author, booktitle, date, editor, institution, journal title, location, note, number, pages, publisher, techtitle, title and volume. The model includes two HMM states for each metadata class: a "first" state for the first token of the subsequence and a "rest" state for the remaining tokens, along with a set of separator states (representing words and punctuation that are not part of metadata fields) for every pair of metadata classes, and a terminating "end" state. The tokens are mapped to a small alphabet of emission symbols composed of symbols representing punctuation, particular words, classes of words and word features.
Yin et al. [YinZDY04] propose to parse citations using a bigram HMM whose emission symbols are token words. Unlike the traditional HMM, which typically uses only word frequency, this model also considers the bigram sequential relation between words and their position within text fields. In particular, a modified model is used for computing the emission probability, while the structure of the HMM is kept unchanged. In the new model, the probability of emitting a symbol in a given state is composed of the beginning emission probability (the probability that the state emits the word as the first word) and the inner emission probability (the probability that the state emits the word as an inner word).
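One plausible way to write this combined emission model (the notation is ours; Yin et al.'s exact formulation may differ) is:

```latex
b_s(w_t) =
\begin{cases}
  P_{\mathrm{begin}}(w_t \mid s), & \text{if } w_t \text{ is the first word emitted in state } s,\\
  P_{\mathrm{inner}}(w_t \mid w_{t-1}, s), & \text{otherwise,}
\end{cases}
```

where the inner emission probability conditions on the previous word, capturing the bigram relation within a field.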
Ojokoh et al. [OjokohZT11] propose an even more advanced approach based on a trigram HMM, where the state of the current token depends on the states of the two preceding tokens instead of one. A modified Viterbi algorithm is used to infer the most probable sequence of token labels. Only 20 symbols are used for the emission alphabet; they are based on specific characters (for example a comma, a dot, a hyphen), regular expressions (for example checking whether the token is a number), a dictionary of state names and a list of common words found in specific metadata fields.
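For reference, the standard first-order Viterbi decoding that these HMM-based parsers build on can be sketched as follows (a generic textbook version, far simpler than the trigram variant of Ojokoh et al.):

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most probable state sequence for an observation sequence.

    log_start[s], log_trans[s][s2] and log_emit[s][o] are log-probabilities.
    """
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        scores, ptrs = {}, {}
        for s in states:
            prev, score = max(((p, V[-1][p] + log_trans[p][s]) for p in states),
                              key=lambda x: x[1])
            scores[s] = score + log_emit[s][o]
            ptrs[s] = prev
        V.append(scores)
        back.append(ptrs)
    best = max(states, key=lambda s: V[-1][s])  # best final state
    path = [best]
    for ptrs in reversed(back):                 # follow the back-pointers
        path.append(ptrs[path[-1]])
    return list(reversed(path))
```

A trigram model replaces the single previous state with a pair of preceding states, which enlarges the transition table accordingly.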
Definitely the most popular technique for citation parsing is the linear-chain CRF. In practice, it achieves better results than HMM and is more flexible, as it is able to handle many overlapping features of the tokens, whereas in an HMM the tokens have to be mapped to a dictionary of emission symbols.
ParsCit, described by Councill et al. [CouncillGK08], is an open-source library for citation parsing based on CRF. The labels assigned to the citation tokens include: author, booktitle, date, editor, institution, journal, location, note, pages, publisher, tech, title and volume. The features are related to the token identity, punctuation, numbers, letters, cases and dictionaries (for example dictionaries of publisher names, place names, surnames, female and male names, and months).
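A token feature map of the kind fed to a linear-chain CRF might look as follows; the concrete feature set here is ours, for illustration, and ParsCit's real feature templates differ in detail:

```python
# Illustrative feature extraction for one token of a citation string.
MONTHS = {"january", "february", "march", "april", "may", "june", "july",
          "august", "september", "october", "november", "december"}

def token_features(tokens, i):
    """Features of token i, in the style typically fed to a linear-chain CRF."""
    t = tokens[i]
    feats = {
        "lower": t.lower(),
        "is_digit": t.isdigit(),
        "is_capitalized": t[:1].isupper(),
        "all_caps": t.isupper(),
        "is_punct": all(not c.isalnum() for c in t),
        "in_month_dict": t.lower() in MONTHS,   # dictionary feature
    }
    # Contextual features: the identity of the neighbouring tokens.
    feats["prev"] = tokens[i - 1].lower() if i > 0 else "<BOS>"
    feats["next"] = tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>"
    return feats
```

Because a CRF can handle arbitrarily many such overlapping features, no mapping to a small emission alphabet is needed, in contrast to the HMM-based approaches above.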
Gao et al. [GaoTL09] also use CRF to parse citations in Chinese electronic books in order to extract: author, editor, title, publisher, date, page number, issue, volume, journal, conference, book, note, location and URL. The parsing is supported by a knowledge base storing the most common words in citation strings, the punctuation marks used to separate fields, Chinese family names, English names, publishing houses in China, journal names, conference names, places, dates, and so on. Apart from textual features, layout-related features are also used. Finally, the tool takes advantage of document layout consistency to enhance citation parsing through clustering techniques: the main citation format used in the book is detected and used to correct minor mistakes that occurred during parsing.
Kern and Klampfl [KernK13] also propose a citation parsing algorithm based on CRF. The model uses the following token labels: author given name, author surname, author other, editor, title, date, publisher, issue, book, pages, location, conference, source, volume, edition, url, note, and other. In order to integrate sequence-related information, the algorithm takes the classification decisions for the four preceding instances into account. In addition to the typical text-related features, the model also incorporates layout and formatting information using a set of binary features specifying whether the font of the tokens inside a sliding window from -2 to +2 tokens is equal to the font of the current token.
Another example of a CRF-based approach is the citation parser proposed by Zhang et al. [ZhangCY11]. The algorithm extracts: author (further separated into surname and given name), title, source (for example journal, conference, or other source of publication), volume, pages (further separated into first page and last page), and year. The features are based on traits such as whether the token contains a capital letter, all capital letters, a digit, all digits, and other symbols (such as Roman and Greek characters, hyphens, etc.), as well as the length of the token.
In the Enlil system described by Do et al. [DoCCK13], a linear-chain CRF classifier is used to parse both author names and affiliation strings in order to recognize names (author or organization names), symbols (characters marking the relations between authors and affiliations) and separators. The features are both text-related (such as token identity, punctuation, numbers) and layout-related (fonts, subscript, superscript).
CRF is also used extensively in the GROBID system described by Lopez [Lopez09] to parse various entities, for example citations, affiliations, author names or date strings. Each task has its own CRF model, training set and set of features, which are based on position, lexical and layout information.
The approach of Cuong et al. [CuongCKL15] is another example of a CRF-based citation parser. The citation tokens are labelled as: author, booktitle, date, editor, institution, journal, location, note, pages, publisher, tech, title or volume. The tokens are classified using higher order semi-Markov Conditional Random Fields, which model long-distance label sequences, improving upon the performance of the linear-chain CRF.
Finally, Zou et al. [ZouLT10] compared two algorithms for citation parsing. One relies on sequence statistics and trains a CRF. The other focuses on local feature statistics and trains an SVM to classify each individual word, which is followed by a search algorithm that systematically corrects low confidence labels if the label sequence violates a set of predefined rules. The approaches achieved very similar high accuracies.
2.3.5 Extraction Systems
This section describes tools and systems able to extract various types of metadata and content from scientific literature. The approaches differ in the scope of extracted information, methods used, input and output formats, availability and licenses.
Typically, some kind of layout analysis is performed at the beginning of document processing, and then the regions of the document are classified using various algorithms. The metadata extracted from documents usually contains the title, authors, affiliations, emails, abstract, keywords, and so on. These fragments are usually located in the document using rules or machine learning. Extracting bibliography-related information typically includes locating the references sections in the document using rules or machine learning, splitting their content into individual references and parsing them. The analysis of the middle part of the document might require locating the paragraphs, tables, figures and section titles, and sometimes also determining the hierarchy of sections or their roles.
For example, Flynn et al. [FlynnZMZZ07] propose a metadata extraction approach which can be seen as a variant of a rule-based approach. First, input PDF documents are OCRed using a commercial tool, ScanSoft's OmniPage Pro, which results in an XML-based representation containing the layout and the text organized into pages, regions, paragraphs, lines and words, accompanied by information such as font face, size and style, alignment and spacing. The metadata is then extracted using independent templates, each of which is a set of simple rules associated with a particular document layout. A processed document is first assigned to a group of documents with a similar layout, and then the corresponding template is used to extract the document's metadata.
Mao et al. [MaoKT04] propose a rule-based system for extracting the title, author, affiliation, and abstract from scanned medical journals. The system is used for the MEDLINE database. The documents are first OCRed and then undergo an iterative process which includes human intervention for correcting the zone labelling resulting from the applied rules. The corrected results are then used to develop geometric and contextual features and rules optimized for the set of issues of a given journal.
Hu et al. [HuLCMZ05] describe a machine learning-based approach for extracting titles from general documents, including presentations, book chapters, technical papers, brochures, reports and letters. As a case study, Word and PowerPoint documents are used. During pre-processing, units (text chunks with a uniform format) are extracted from the top region of the first page of a document. These units are then transformed into features and classified as title_begin, title_end or other. Two types of features were used: format features (font size, alignment, boldface, the presence of blank lines) and linguistic features (keywords specific to titles and other document parts, the number of words). Four models were employed (Maximum Entropy Model, Perceptron with Uneven Margins, Maximum Entropy Markov Model, and Voted Perceptron), and the authors observed that the Perceptron-based models perform better in terms of extraction accuracy.
Cui and Chen [CuiC10] describe a system for extracting the title, author, affiliation, address, email and abstract from PDF documents. In this approach, text extraction and page segmentation are done with the use of pdftohtml, a third-party open-source tool. The resulting HTML document contains a set of text blocks (small pieces of text, often less than one logical line) along with their location and font information. These blocks are labelled with the target metadata classes with the use of an enhanced HMM classifier.
Han et al. [HanGMZZF03] extract metadata (title, author, affiliation, address, note, email, date, abstract, introduction, phone, keyword, web, degree and page) from the headers of scientific papers in plain text format. The metadata is extracted by classifying the text lines with the use of a two-stage SVM classification based on text-related features.
Another example of an SVM-based approach is the metadata extractor used in CRIS systems described by Kovacevic et al. [Kovacevic2011]. In this approach, PDF articles are first converted to HTML, which preserves the formatting and layout-related information. Then, the lines of text on the first page of the document are classified using both geometric and text-related features. The extracted metadata contains: title, authors, affiliation, address, email, abstract, keywords and publication note.
Lu et al. [LuKWG08] analyse scanned scientific journals in order to obtain volume metadata (such as name and number), issue metadata (volume number, issue number, etc.) and article metadata (title, authors, volume, issue and page range). In their approach, scanned pages are first converted to text using OCR techniques. Then, rule-based pattern matching on the feature vectors of the text lines is used to recognize and analyze volume and issue title pages, while article metadata is extracted using SVM with geometric, formatting, distance, layout and textual features of text lines. The approach was tested on scanned historical documents nearly two centuries old.
Marinai [Marinai09] first extracts characters from PDF documents using the JPedal package, which results in a set of basic objects on each page accompanied by additional information such as their position and font size. Then, the blocks are merged in the horizontal and vertical directions using simple rule-based heuristics, avoiding joining separate columns or paragraphs. Each region is then transformed into a feature vector and a Multi-Layer Perceptron (MLP) classifier is used to identify regions that could contain the title and the authors of the paper. The classifier uses features related to both the layout and the text. Additionally, information gathered from the DBLP citation database is used to assist the tool by checking the extracted metadata.
Enlil, described by Do et al. [DoCCK13], is a tool able to extract authors, affiliations and the relations between them from scientific publications in PDF format. In this approach a PDF file is first OCRed with the use of OmniPage, which results in an XML version of the document that stores both the textual and spatial information for each word appearing on each page. The system is built on top of the SectLabel module from ParsCit [LuongNK10], which is used to detect author and affiliation blocks in the text by classifying text lines using CRF. The lines classified as author or affiliation are then tokenized into chunks and the tokens are labelled using a linear-chain CRF classifier with the following classes: name, symbol and separator. The model uses both text- and layout-related features. Finally, a binary SVM classifier is applied to author-affiliation pairs to extract the relations between them. The features used in this model are related to the information provided by the parsing module and the distances between the author and affiliation fragments.
In the citation extraction system described by Gupta et al. [GuptaMCS09], the documents are first scanned, OCRed and converted into PDF format. The PDF documents are then converted into HTML using Abby PDF Reader. The reference block is then found by estimating the probability that each paragraph belongs to the references, using parameters based on the paragraph length, the presence of keywords, author names, the presence of a year and other textual clues. Regular expressions are then used to extract metadata from the citation strings. The algorithm also uses an external publication database to correct the extraction results.
Zou et al. [ZouLT10] propose a two-step process using statistical machine learning algorithms for extracting bibliography data from medical articles in HTML format. The algorithm first locates the references with a binary SVM classifier using geometric and text features of the text zones. For reference parsing, two algorithms were used: CRF, and SVM followed by a search algorithm that systematically corrects low-confidence labels if the label sequence violates a set of predefined rules.
Gao et al. [GaoTLLQW11] describe CEBBIP, a tool able to extract the chapter and section hierarchy from Chinese books. The overall approach is based on the observation that, within a book, some features related to formatting and fonts are shared among elements of the same type, such as headings, footnotes or citations. At the beginning the tool performs page layout analysis by merging small page objects (e.g. characters, words, lines) into bigger ones in a bottom-up manner using position- and font-related heuristics. Then global typographic characteristics, such as columns, header and footer, page body area, text line directions, line spacing of the body text, and the fonts used in various components (headings, paragraphs, etc.), are extracted. For example, to detect headers and footers, similarities of the text and the geometric position between the top/bottom lines on neighboring pages are exploited. Columns are identified by detecting the recurring white spaces on multiple pages. Then the page objects are clustered based on their general typesetting, and the output clusters serve as prototypes of similar blocks. After that, the system uses a learning-based classification method to label the blocks in each cluster as headings, figure/table captions or footnotes. The table of contents hierarchy is extracted from the "Table of contents" section with the use of heuristics and associated with the headings extracted from the text.
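The header/footer idea, detecting text that recurs at the same position on neighbouring pages, can be sketched like this (a simplified, hypothetical version, not CEBBIP's actual implementation):

```python
import re
from collections import Counter

def detect_decorations(pages, min_ratio=0.6):
    """Return normalized line texts that recur on most pages.

    pages: a list of per-page line lists (in practice, only the topmost and
    bottommost lines of each page need to be considered).
    Digits are masked so that 'Page 3' and 'Page 4' count as the same line.
    """
    norm = lambda line: re.sub(r"\d+", "#", line.strip().lower())
    counts = Counter()
    for page in pages:
        counts.update({norm(line) for line in page})  # count each text once per page
    return {text for text, c in counts.items() if c >= min_ratio * len(pages)}
```

A production version would additionally compare the geometric positions of the candidate lines, as described above.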
CEBBIP is also able to extract bibliographic data [GaoTL09]. In this approach a rule-based method is used to locate the citation data in a book, and the data is segmented into individual reference strings with the use of heuristics based on citation markers and spaces. A learning-based approach (CRF) is employed to parse the citation strings. Finally, the tool takes advantage of document layout consistency to enhance citation data segmentation and parsing through clustering techniques: the main citation format used in the book is detected and used to correct the parsing results.
Giuffrida et al. [GiuffridaSY00] extract the content from PostScript files using a tool based on pstotext, which results in a set of text strings annotated with spatial/visual properties, such as position, page number and font metrics. These strings become "facts" in a knowledge base. Basic document metadata, including the title, authors, affiliations and author-affiliation relations, is extracted by a set of hand-made rules that reason upon those facts.
The same system also uses rules to extract section titles from the text of the document. The algorithm first determines whether the section titles are numbered. If they are, various numbering schemes are examined; if not, heuristics based on the text size and line spacing are used. Additionally, the algorithm looks for titles commonly appearing in documents, such as "Introduction", "Overview", "Motivation" or "References", to find hints about the font size typical for the titles in a given document.
PDFX, described by Constantin et al. [ConstantinPV13], is a rule-based system able to extract basic metadata, structured full text and unparsed reference strings from scientific publications. PDFX can be used for converting scholarly articles in PDF format to their XML representation by annotating fragments of the input documents. The analysis comprises two main stages. During the first one, the geometric model of the article's content is constructed to determine the organization of textual and graphical units on every page, using a library from the Utopia Documents PDF reader. The model comprises pages, words and bitmap images, along with features such as the bounding box, orientation, textual content or font information. During the second stage, different logical units are identified by rules based on their discriminative features. The following block types are used: title, author, abstract, author footnote, reference, body, (sub)section, (sub)section heading, figure, table, caption, and figure/table reference. PDFX is a closed-source system, available only as a web service (http://pdfx.cs.man.ac.uk/).
Pdf-extract (http://labs.crossref.org/pdfextract/) is an open-source tool for identifying and extracting semantically significant regions of scholarly articles in PDF format. Pdf-extract can be used to extract the title and a list of unparsed bibliographic references of a document. The tool uses a combination of visual cues and content traits to perform structural analysis in order to determine columns, headers, footers and sections, detect references sections and finally extract individual references. Locating the references section is based on the observation that it tends to have a significantly higher ratio of proper names, initials, years and punctuation than other sections. The references section is divided into individual references, also based on heuristics.
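That observation can be turned into a simple density score; the patterns and the equal weighting below are our illustrative choices, not Pdf-extract's actual implementation:

```python
import re

def reference_likeness(text):
    """Score a text fragment by the density of citation-like cues."""
    tokens = text.split()
    if not tokens:
        return 0.0
    # Tokens that look like years, possibly wrapped in citation punctuation.
    years = sum(bool(re.fullmatch(r"\(?(?:19|20)\d{2}[).,;]?", t)) for t in tokens)
    # Tokens that look like name initials, e.g. "J." or "A.,".
    initials = sum(bool(re.fullmatch(r"[A-Z]\.,?", t)) for t in tokens)
    # Tokens ending in field-separating punctuation.
    punct = sum(t[-1] in ",.;:" for t in tokens)
    return (years + initials + punct) / len(tokens)
```

The section with the highest score would then be taken as the references section.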
The Team-Beam algorithm proposed by Kern et al. [KernJHG12] is able to extract a basic set of metadata (title, subtitle, the name of the journal, conference or venue, abstract, the names of the authors, their e-mail addresses and affiliations) from PDF documents using an enhanced Maximum Entropy classifier. In this approach, the PDF file is first processed by the PDFBox library (https://pdfbox.apache.org/). Then, clustering techniques are used to build words, lines and text blocks from the PDFBox output. The structure is built bottom-up, each level in two steps: merging (done by hierarchical clustering) and splitting (k-means clustering for splitting incorrectly merged objects). Then, the reading order is determined using the approach described in [AielloMT02]. Next, a machine learning approach is employed to extract the metadata in two stages: first the blocks of text are classified into metadata types, and then the tokens within blocks related to author metadata are classified in order to extract given names, middle names, surnames, etc.
Team-Beam also contains a bibliography extraction component described by Kern and Klampfl [KernK13]. In this approach, the references section is located using heuristics based on a list of typical section titles. The individual references are extracted using a simple version of the k-means clustering algorithm applied to the text lines, with the minimal x-coordinate of a line's bounding box as the only feature. The algorithm clusters the line set into two subsets, representing the first lines of the references and the remaining lines. Finally, the references are parsed using a CRF token classifier.
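A minimal sketch of this idea, 2-means on the lines' minimal x-coordinates, might look as follows (our simplified reimplementation, not the actual Team-Beam code):

```python
def split_reference_lines(min_xs, iters=20):
    """Mark each line as the first line of a reference (True) or a
    continuation (False), clustering on the minimal x-coordinate alone.

    Assumes references use a hanging indent, so first lines sit further left.
    """
    lo, hi = min(min_xs), max(min_xs)
    c = [lo, hi]                          # initial centroids: the extreme values
    for _ in range(iters):
        groups = [[], []]
        for x in min_xs:
            groups[abs(x - c[0]) > abs(x - c[1])].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    first = min(c)                        # leftmost centroid = reference starts
    return [abs(x - first) <= abs(x - max(c)) for x in min_xs]
```

With the first lines identified, each reference string is simply the concatenation of a first line and the continuation lines that follow it.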
Team-Beam also provides the functionality of extracting the body text of the article along with the table of contents hierarchy [KlampflGJK14], based on an unsupervised method. After performing the segmentation and detecting the reading order, the text blocks are categorized by a sequential pipeline of detectors, each of which labels a specific type of block: decorations such as page numbers, headers and footers (by analysing the similarity between blocks on neighbouring pages), figure and table captions (based on heuristics), main text (hierarchical clustering applied to blocks based on alignment-, font- and width-related features), section headings (heuristics based on fonts, distances and regular expressions), and sparse blocks and tables (again heuristics). Each of these detectors is completely model-free and unsupervised.
Lopez [Lopez09] describes GROBID, a system for analysing scientific publications in PDF format. GROBID uses pdf2xml/Xpdf for processing PDF files and CRF in order to extract a rich set of document metadata, the full text with section titles, paragraphs and figures, and a list of parsed bibliographic references with their metadata. The extraction is performed by a cascade of 11 specialized CRF models, which start by labelling the parts of the document as header, body, references, etc., and then focus on each part. There are separate models specializing in classifying the header parts, parsing affiliations, author data and dates, selecting the header title lines, paragraphs and figures, and extracting individual references and parsing them. Each model has its own set of features and training data. The features are based on position, lexical and layout information. The system is available as open source (https://github.com/kermitt2/grobid).
The ParsCit system, described by Councill et al. [CouncillGK08], is an open-source system for extracting parsed references from plain text files. Reference sections are identified using heuristics, which search for particular section titles such as "References" or "Bibliography". If such a label is found too early in the document, the search continues. The references sections are then split into individual strings, also using heuristics. Regular expressions are used for detecting characteristic markers indicating the beginning of a reference, and if no such markers are found, the system splits the lines based on their length and other indicators, such as whether the line ends with a dot and whether it contains tokens characteristic of author names. Reference parsing is realized by a CRF model labelling the token sequence in the reference string. The token features are related to punctuation, numbers, letters, cases and dictionaries (such as publisher names, place names, surnames, female and male names, and months).
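The marker-based splitting step can be sketched as follows; the patterns are illustrative, covering only "[n]" and "n." style markers:

```python
import re

def split_references(lines):
    """Split the lines of a references section into individual reference
    strings, using leading '[n]' or 'n.' markers as reference boundaries."""
    marker = re.compile(r"^\s*(\[\d+\]|\d+\.)\s+")
    refs = []
    for line in lines:
        if marker.match(line) or not refs:
            refs.append(line.strip())          # marker: start a new reference
        else:
            refs[-1] += " " + line.strip()     # otherwise: continuation line
    return refs
```

When no markers are present, a fallback based on line length and author-name cues, as described above, would take over.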
ParsHed, described by Cuong et al. [CuongCKL15], is a ParsCit module able to extract the basic metadata (title, authors, abstract, etc.) from the document's plain text. The extraction is done by classifying the header's text lines using higher order semi-Markov Conditional Random Fields, which model long-distance label sequences, improving upon the performance of the linear-chain CRF model.
SectLabel, described by Luong et al. [LuongNK10], is also a ParsCit module, useful for extracting the logical structure of a document in either PDF or plain text format. In this approach, PDFs are first converted to XML using Nuance OmniPage; the output XML includes the coordinates of paragraphs, lines and words within a page, alignment, font size, font face and format. SectLabel then performs two tasks: logical structure classification and generic section classification. For both tasks, CRF with lexical and formatting features is used. In the first task, the ordered sequence of the document's text lines is labelled with the following categories: address, affiliation, author, bodyText, categories, construct, copyright, email, equation, figure, figureCaption, footnote, keywords, listItem, note, page, reference, sectionHeader, subsectionHeader, subsubsectionHeader, table, tableCaption and title. In the second task, the sequence of section headers is labelled with a class denoting the purpose of the section, including: abstract, categories, general terms, keywords, introduction, background, related work, methodology, evaluation, discussion, conclusions, acknowledgments, and references. ParsCit is also an open-source system (http://aye.comp.nus.edu.sg/parsCit/).
Unfortunately, as of now, there has not been any competition for evaluating tools extracting rich metadata from scientific publications. The Semantic Publishing Challenge (http://2015.eswc-conferences.org/program/semwebeval, https://github.com/ceurws/lod/wiki/SemPub2015), hosted in 2015 by the European Semantic Web Conference (http://2015.eswc-conferences.org/), contained some tasks related to analysing scientific articles. Our algorithm [TkaczykB15] was the winner of Task 2, which included the problems of extracting authors, affiliations and citations from PDF documents [DiIorioLDV15].
The extraction systems available online, namely GROBID, ParsCit, PDFX and Pdf-extract, are the most similar to our work in terms of potential applications in the context of digital libraries and research infrastructures. Unfortunately, they all have major drawbacks. Pdf-extract is rule-based and focuses almost exclusively on extracting the references, without parsing them. PDFX is also rule-based and closed-source, available only as a web service. The open-source version of ParsCit processes only plain text documents, ignoring the layout, and does not output a structured document metadata record, but rather the annotated input text along with confidence scores. Finally, GROBID does not extract the section hierarchy and uses the same machine learning method for all tasks, without taking into account the specifics of particular problems.
Our research brings together the advantages of the previous works, resulting in one accurate solution. The algorithm is comprehensive and analyses the entire document, including the document's header, body and references. It processes born-digital PDF documents and focuses not only on textual features, but also on the layout and appearance of the text, which carries a lot of valuable hints for classification. The extensive use of machine learning-based algorithms makes the system well-suited for heterogeneous document collections and increases its flexibility and ability to adapt to new, previously unseen document layouts. A careful decomposition of the extraction problem allows for treating each task independently, with each solution selected and tuned for the task's specific needs. The system is open-source and available as a Java library (https://github.com/CeON/CERMINE) and a web service (http://cermine.ceon.pl). In Section 4 we report the results of the comparison of our method with four similar systems available online: GROBID, ParsCit, PDFX and Pdf-extract.
3.1 Algorithm Overview
The extraction algorithm accepts a single scientific publication as input, inspects the entire content of the document and outputs a single record containing the document's metadata, parsed bibliography and structured body content.
The input document format is PDF [pdfref]. The algorithm is optimized for processing born-digital documents and does not perform optical character recognition. As a result, PDF documents containing scanned pages in the form of images are not properly processed.
The output format is NLM JATS (http://jats.nlm.nih.gov/). The output contains all the information the algorithm is able to extract from a given document, structured into three parts: front (the document's metadata), body (the middle part of the document, its proper text in a structured form containing the hierarchy of sections) and back (the bibliography section).
The algorithm is composed of the following stages (Figure 3.1):
Layout analysis (Section 3.2) — The initial part of the entire algorithm. During layout analysis the input PDF file is analysed in order to extract all text fragments and their geometric characteristics.
Document region classification (Section 3.3) — The goal of the classification is to assign a single label to each text fragment of the document. The labels denote the function a given fragment plays in the document.
Metadata extraction (Section 3.4) — In this stage structured metadata is extracted from the previously labelled document.
Bibliography extraction (Section 3.5) — The purpose of this stage is to extract the parsed bibliography in a structured form from the previously labelled document.
Body extraction (Section 3.6) — The goal of this stage is to extract the full text and section hierarchy from the labelled document.
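The five stages can be pictured as a simple pipeline; the function names below are ours and merely stand in for the stages described above, not the actual API of the implementation:

```python
# Hypothetical stage functions -- trivial stand-ins for the real stages.
def layout_analysis(pdf_path):        return {"pages": []}   # Section 3.2
def classify_regions(model):          return model           # Section 3.3
def extract_metadata(labelled):       return {}              # Section 3.4
def extract_bibliography(labelled):   return []              # Section 3.5
def extract_body(labelled):           return []              # Section 3.6

def extract(pdf_path):
    """End-to-end sketch: one PDF in, one structured record out."""
    model = layout_analysis(pdf_path)
    labelled = classify_regions(model)
    return {
        "front": extract_metadata(labelled),
        "body": extract_body(labelled),
        "back": extract_bibliography(labelled),
    }
```

The three keys of the returned record mirror the front/body/back structure of the NLM JATS output format.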
3.2 Document Layout Analysis
Document layout analysis is the initial phase of the extraction algorithm. Its goal is to detect all the text fragments in the input PDF document, compute their geometric characteristics and produce a geometric hierarchical model of the document.
The input is a single file in PDF format and the output is a geometric hierarchical model of the document. The model holds the entire text content of the article, while also preserving the information related to various aspects of the way elements are displayed in the input PDF file.
Intuitively, the output model represents the document as a list of pages, each page contains a list of text zones (blocks), each zone contains a list of lines, each line contains a list of words, and finally each word contains a list of characters. Each element in this hierarchical structure can be described by its text content and its position on the page. The order of the elements in the lists corresponds to the natural reading order of the text, that is, the order in which the fragments should be read. In this tree structure every text element belongs to exactly one element of the higher level.
A single page of a given document is a rectangle-shaped area, where the text elements are placed. The position of any point on the page is defined by two coordinates: x (the horizontal distance to the left edge of the page) and y (the vertical distance to the top edge of the page). The origin of the coordinate system is the left upper corner of the page, and the coordinates are given in typographic points (1 typographic point equals 1/72 of an inch). The positions of all the text elements are defined with respect to this coordinate system.
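Since PDF itself places the origin in the lower left corner of the page with the y axis pointing up, coordinates have to be translated into the model’s system. A minimal sketch of this translation (the function name is illustrative):

```python
def pdf_to_model(x, y, page_height):
    """Translate a point from the PDF coordinate system (origin in the
    lower left corner of the page, y axis pointing up) to the model's
    system (origin in the left upper corner, y axis pointing down).
    All values are expressed in typographic points (1 pt = 1/72 inch)."""
    return (x, page_height - y)
```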
The model stores text elements of various granularity: characters, words, lines and zones. Every text element belongs to exactly one document page and represents a fragment of a text written on the page. The position of the element on its page is defined by two points: left upper and right lower corner of its bounding box, which is a rectangle with edges parallel to the page’s edges enclosing a given text element.
Formally, the levels in the model can be defined in terms of sets. For any set S let us denote a partition of S as P(S). In other words, P(S) is any set of subsets of S meeting the following conditions: every element of P(S) is a non-empty subset of S, the elements of P(S) are pairwise disjoint, and the union of all the elements of P(S) is equal to S.
For a given document D let us define the following sets:
Characters. Let C be the set of all characters visible in the document. For every character c ∈ C we define its text t(c) ∈ Σ, where Σ is the alphabet used within the document D, and its bounding box given by two points: the left upper corner (x1(c), y1(c)) and the right lower corner (x2(c), y2(c)).
Words. Let W be the set of all words in the document; formally, W is a partition of the character set C. Intuitively, a word is a continuous sequence of characters placed in one line with no spaces between them. Punctuation marks and typographical symbols can be separate words or parts of adjacent words, depending on the presence of spaces.
Lines. Let L be the set of all lines in the document; formally, L is a partition of the word set W. Intuitively, a line is a sequence of words that forms a consistent fragment of the document’s text. Words placed geometrically in the same line of the page, which are parts of neighbouring columns, should not belong to the same line in the model. Hyphenated words that are divided into two lines should appear in the model as two separate words that belong to different lines.
Zones. Let Z be the set of all zones in the document; formally, Z is a partition of the line set L. Intuitively, a zone is a consistent fragment of the document’s text, geometrically separated from surrounding fragments and not divided into columns.
Pages. Finally, let P be the set of all pages in the document.
We can also define a parent function parent: C ∪ W ∪ L ∪ Z → W ∪ L ∪ Z ∪ P, which for any character, word, line or zone returns the element’s parent in the structure: the word containing a given character, the line containing a given word, the zone containing a given line, and the page containing a given zone.
The sets C, W, L, Z and P are totally ordered. The order corresponds to the natural reading order of the elements in the document, that is, the order in which the text should be read. The order of the elements respects the set hierarchy; in particular, if an element e1 precedes an element e2 of the same level, then all the elements contained in e1 precede all the elements contained in e2.
For every word, line and zone we also define a bounding box as the minimal rectangle enclosing all the contained elements.
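This definition can be illustrated with a short sketch computing the bounding box of a composite element from the boxes of its children:

```python
def bounding_box(boxes):
    """Compute the minimal rectangle enclosing a non-empty collection of
    bounding boxes. Each box is a pair of points ((x1, y1), (x2, y2)):
    the left upper and the right lower corner, in page coordinates."""
    x1 = min(b[0][0] for b in boxes)  # leftmost left edge
    y1 = min(b[0][1] for b in boxes)  # topmost top edge
    x2 = max(b[1][0] for b in boxes)  # rightmost right edge
    y2 = max(b[1][1] for b in boxes)  # bottommost bottom edge
    return ((x1, y1), (x2, y2))
```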
The model of a document described in this section is built incrementally by three steps executed in a sequence: character extraction (Section 3.2.1), page segmentation (Section 3.2.2) and reading order resolving (Section 3.2.3). Each step updates the structure with new information. Table 3.1 summarizes the basic information about the steps.
| Step | Goal | Implementation |
|---|---|---|
| 1. Character extraction | Extracting individual characters along with their page coordinates and dimensions from the input PDF file. | iText library |
| 2. Page segmentation | Constructing the document’s geometric hierarchical structure containing (from the top level) pages, zones, lines, words and characters, along with their page coordinates and dimensions. | enhanced Docstrum |
| 3. Reading order resolving | Determining the reading order for all structure elements. | bottom-up heuristics |
3.2.1 Character Extraction
Character extraction is the first step of the entire extraction process. Its purpose is to parse the input PDF file and build an initial, simple geometric model of the input document, which stores only the pages and individual characters.
Let D be the given input document. The purpose of character extraction is to:
determine P — the set of the document’s pages along with their order,
determine C — the set of characters visible in the document,
assign characters to pages, that is, find a function page: C → P, which for a given character returns the page the character is displayed on.
Character extraction does not find other elements of the model, and does not determine the order of the characters. The output of character extraction is a list of pages, each of which contains a set of characters.
The implementation of character extraction is based on the open-source PDF parsing library iText (http://itextpdf.com/). The document’s pages and their order are explicitly given in the source of the input file. To extract characters, we iterate over all text-related PDF operators, keeping track of the current text state and text positioning parameters. During the iteration we extract text strings from text-showing operators along with their bounding boxes. The strings are then split into individual characters and their individual widths and positions are calculated. Finally, all the coordinates are translated from the PDF coordinate system to the system used in our geometric model. The mapping between characters and pages is determined directly by the position of the text-showing operators in the input PDF file.
Due to the features of the PDF format and the iText library, the resulting bounding boxes are in fact not the smallest rectangles enclosing the characters, but are often slightly bigger, depending on the font and size used. In particular, the bounding boxes of the characters printed in the same line using the same font usually have the same vertical position and height. Figure 3.2 shows an example fragment of a page from a scientific publication with characters’ bounding boxes, as returned by the iText library. Fortunately, these approximate coordinates are sufficient for the further steps of the algorithm.
For performance reasons we enhanced character extraction with an additional cleaning phase, which in some rare cases reduces the number of extracted characters. In general the PDF text stream can contain text-showing operators which do not result in any text visible to the document’s reader. For example a text string might be printed in a position outside of the current page, or text fragments can be printed in the same place, causing one fragment to cover the other. We also encountered PDF files in which text-showing operators were used for printing image fragments, which resulted in tens of thousands of tiny characters on one page that do not contribute to the proper text content of the document. In such rare cases it is very difficult to extract a logical, useful text from the PDF stream. What is more, the number of characters is a significant factor in the algorithm’s performance (more details are given in Section 4.7). The algorithm attempts to detect such problems during the character extraction step and, if needed, reduce the number of characters by removing suspicious ones.
The cleaning phase comprises the following steps. First, we remove those characters that would not be visible on the page because their coordinates are outside of the page’s limits. Then, we detect and remove duplicates, that is, characters with the same text and bounding boxes. Finally, we check whether the density of the characters on each page is within a predefined threshold. If the overall density exceeds the limit, we use a small sliding window to detect highly dense regions, and all the characters from these regions are removed.
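The first two cleaning steps can be sketched as follows; the density-based sliding-window filter is omitted for brevity, and the character representation is illustrative:

```python
def clean_characters(chars, page_width, page_height):
    """Remove characters placed outside the page and exact duplicates
    (same text and bounding box). Each character is represented as a
    (text, (x1, y1), (x2, y2)) tuple in page coordinates."""
    seen, result = set(), []
    for ch in chars:
        text, (x1, y1), (x2, y2) = ch
        # skip characters that would not be visible on the page
        if x2 < 0 or y2 < 0 or x1 > page_width or y1 > page_height:
            continue
        # skip duplicates of already seen characters
        key = (text, x1, y1, x2, y2)
        if key in seen:
            continue
        seen.add(key)
        result.append(ch)
    return result
```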
Individual characters extracted in this step are the input for the page segmentation step.
3.2.2 Page Segmentation
The goal of page segmentation is to extract the remaining levels of the model described previously: words, lines and zones. It is achieved by grouping characters into larger objects in a bottom-up manner.
Let D be the given document, P — the list of the document’s pages, and C — the set of extracted characters. From the character extraction step we also have the function page: C → P, which assigns a parent page to every character.
The purpose of page segmentation is to find:
W: a partition of the set C corresponding to the words of the document,
L: a partition of the set W corresponding to the text lines of the document,
Z: a partition of the set L corresponding to the text zones of the document.
The parent function, which is defined by the partitions, should be consistent with the page assignment: any two characters c1, c2 belonging to the same zone must satisfy page(c1) = page(c2).
Page segmentation does not determine the order of the elements. The result of page segmentation is a partial model, in which the analysed document is represented by a list of pages, each of which contains a set of zones, each of which contains a set of lines, each of which contains a set of words, each of which contains a set of characters.
Figure 3.3 shows a group of words with their bounding boxes printed on a page of a scientific publication. As the picture shows, punctuation marks as well as hyphenation characters usually belong to the word preceding them. In general, words in the model should be understood geometrically rather than logically — as a continuous sequence of characters without a space or other white character.
Figure 3.4 shows a group of lines with their bounding boxes. As shown in the picture, the lines respect the multi-column document layout.
Figure 3.5 shows a fragment of a scientific publication with example zones and their bounding boxes. In general a zone contains lines that are close to each other, even if they play a different role in the document (for example section title and paragraph).
Page segmentation is implemented with the use of a bottom-up Docstrum algorithm [OGorman93]. Docstrum is an accurate algorithm able to recognize both text lines and zones. The algorithm can be fairly easily adapted to process born-digital documents: it is sufficient to treat individual characters as connected components, which in the original algorithm are calculated from a page image.
In our case the algorithm’s input is a single page containing a set of characters, which are clustered hierarchically based on geometric traits. The algorithm is based to a great extent on the analysis of the nearest-neighbor pairs of individual characters:
First, the five nearest components of every character on the page are identified (red dotted lines in Figure 3.6). The distance between two characters is the Euclidean distance between the centers of their bounding boxes.
In order to calculate the text orientation (the skew angle) we analyze the histogram of the angles between the elements of all nearest-neighbor pairs. The peak value is assumed to be the angle of the text. Since in the case of born-digital documents the skew is almost always horizontal, this step would be more useful for documents in the form of scanned pages. All the histograms used in Docstrum are smoothed to avoid detecting local abnormalities. An example of a smoothed histogram is shown in Figure 3.7.
Next, within-line spacing is estimated by detecting the peak of the histogram of distances between the nearest neighbors. For this histogram we use only those pairs, in which the angle between components is similar to the estimated text orientation angle (blue solid lines in Figure 3.6).
Similarly, between-line spacing is also estimated with the use of a histogram of the distances between the nearest-neighbor pairs. In this case we include only those pairs, that are placed approximately in the line perpendicular to the text line orientation (green dashed lines in Figure 3.6).
Next, line segments are found by performing a transitive closure on within-line nearest-neighbor pairs. To prevent joining line segments belonging to different columns, the components are connected only if the distance between them is sufficiently small.
The zones are then constructed by grouping the line segments on the basis of heuristics related to spatial and geometric characteristics. Each line segment pair is examined and the decision is made whether they should be in the same zone. If both horizontal and vertical distance are within predefined limits, the current zones of the line segments are merged.
Finally, line segments belonging to the same zone and placed in one line horizontally are merged into final text lines.
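The histogram-with-peak estimation used above for spacing (and, with angles instead of distances, for text orientation) can be sketched as follows; the resolution and smoothing parameters are illustrative, not the thesis’ actual settings:

```python
import math

def histogram_peak(values, resolution=1.0, sigma=2.0):
    """Estimate the dominant value in a list of measurements (e.g.
    nearest-neighbor distances) by building a histogram, smoothing it
    with a Gaussian window and returning the center of the highest bin."""
    if not values:
        return None
    bins = [0.0] * (int(max(values) / resolution) + 1)
    for v in values:
        bins[int(v / resolution)] += 1.0
    # Gaussian smoothing window to avoid detecting local abnormalities
    radius = int(3 * sigma)
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    smoothed = []
    for i in range(len(bins)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - radius
            if 0 <= j < len(bins):
                acc += w * bins[j]
        smoothed.append(acc)
    best = max(range(len(smoothed)), key=smoothed.__getitem__)
    return (best + 0.5) * resolution
```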
All the threshold values used in the algorithm have been obtained by manual experiments performed on a validation dataset. The experiments also resulted in adding a few improvements to the Docstrum-based implementation of page segmentation:
the distance between connected components, which is used for grouping components into line segments, has been split into horizontal and vertical distance (based on estimated text orientation angle),
fixed maximum distance between lines that belong to the same zone has been replaced with a value scaled relatively to the line height,
merging of lines belonging to the same zone has been added,
rectangular smoothing window has been replaced with Gaussian smoothing window,
merging of highly overlapping zones has been added,
words determination based on within-line spacing has been added.
Section 4.2 compares the performance of the original Docstrum with that of the enhanced version used in our algorithm.
The resulting hierarchical structure is the input for the next step, reading order resolving.
3.2.3 Reading Order Resolving
The purpose of reading order resolving is to determine the right sequence, in which all the structure elements should be read. More formally, its task is to find a total order for the sets of zones, lines, words and characters. The order of the pages is explicitly given in the input PDF file.
An example document page with a reading order of the zones is shown in Figure 3.8. The reading order is very important in the context of the body of the document and bibliography sections, but much less meaningful for the areas of the document containing metadata.
Algorithm 3.1 shows the pseudocode of reading order resolving step. The algorithm is based on a bottom-up strategy:
At the beginning the characters are sorted within words horizontally, from left to right (line 6 in Algorithm 3.1).
Similarly, the words are sorted within lines also horizontally, from left to right (line 8 in Algorithm 3.1).
Next, the lines are sorted vertically within zones, from top to bottom (line 10 in Algorithm 3.1).
In the final step we sort zones. Sorting zones is done with the use of simple heuristics similar to those used in the PDFMiner tool (http://www.unixuser.org/euske/python/pdfminer/). We make use of the observation that the natural reading order in most modern languages descends from top to bottom, if successive zones are aligned vertically, and otherwise traverses from left to right. There are a few exceptions to this rule, for example Arabic script, and such cases would currently not be handled properly by the algorithm.
The zones are sorted in the following steps:
The distance between any two zones on a given page is calculated with a formula combining the following components:
area(z) — for any zone z, the area of the zone’s bounding box,
area(S) — for any zone set S placed on the same page, the area of the smallest rectangle containing all the zones in S,
cos_l — the cosine of the slope of the vector connecting the centers of the left edges of the zones,
cos_c — the cosine of the slope of the vector connecting the centers of the zones.
We use the angle of the slope of the vector connecting zones to make sure that in general zones aligned vertically are closer than those aligned horizontally.
Using this distance we apply a hierarchical clustering algorithm, repeatedly joining the closest zones and zone sets. This results in a binary tree, where the leaves represent individual zones, other nodes can be understood as groups of zones and the root represents the set of all zones on the page (line 12 in Algorithm 3.1).
Next, we visit every node in the tree and swap the children if needed (lines 13-17 in Algorithm 3.1). The decision process for every node is based on a sequence of rules. The first matched rule determines the decision result:
if two groups can be separated by a vertical line, their order is determined by the x-coordinate (case 1 in Figure 3.9),
if two groups can be separated by a horizontal line, their order is determined by the y-coordinate (case 2 in Figure 3.9),
if the groups overlap, we calculate dx and dy, the horizontal and vertical distances between the centers of the right and the left child of the node. The children are swapped if the sum dx + dy is negative (case 3 and 4 in Figure 3.9).
Finally, an in-order tree traversal gives the desired zones order (line 18 in Algorithm 3.1).
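The rule sequence applied at each tree node can be sketched as follows; the groups are represented by their bounding boxes, and the exact swap condition for overlapping groups (cases 3 and 4) is an assumption:

```python
def zone_order(a, b):
    """Decide the reading order of two zone groups, each given as a
    bounding box ((x1, y1), (x2, y2)). Returns -1 if group a should be
    read first, 1 otherwise; rules are tried in sequence and the first
    matching rule wins."""
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    # case 1: separable by a vertical line -> order by x-coordinate
    if ax2 <= bx1:
        return -1
    if bx2 <= ax1:
        return 1
    # case 2: separable by a horizontal line -> order by y-coordinate
    if ay2 <= by1:
        return -1
    if by2 <= ay1:
        return 1
    # cases 3 and 4: the groups overlap; compare the centers
    # (assumed condition: keep the order unless dx + dy is negative)
    dx = (bx1 + bx2) / 2 - (ax1 + ax2) / 2
    dy = (by1 + by2) / 2 - (ay1 + ay2) / 2
    return -1 if dx + dy >= 0 else 1
```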
Reading order resolving concludes the layout extraction stage of the extraction algorithm. The result is a fully featured geometric model of the document, containing the entire text content of the input file as well as the geometric characteristics related to the way the text is displayed in the input PDF file.
3.3 Document Region Classification
The goal of content classification is to determine the role played by every zone in the document by assigning a general category to it. We use the following classes: metadata (document’s metadata, containing title, authors, abstract, keywords, and so on), references (the bibliography section), body (publication’s text, sections, section titles, equations, figures and tables, captions) and other (acknowledgments, conflicts of interests statements, page numbers, etc.).
Formally, the goal of document region classification is to find a function label: Z → {metadata, references, body, other}, where Z is the set of the document’s zones.
The classification is performed by a Support Vector Machine classifier using a large set of zone features of various nature. SVM is a very powerful classification technique able to handle a large variety of input and work effectively even with training data of a small size. The algorithm is not very prone to overfitting. It does not require a lot of parameters and can deal with highly dimensional data. SVM is widely used for content classification and achieves very good results in practice.
The features we developed capture various aspects of the content and surroundings of the zones and can be divided into the following categories:
geometric — based on geometric attributes, some examples include: zone’s height and width, height to width ratio, zone’s horizontal and vertical position, the distance to the nearest zone, empty space below and above the zone, mean line height, whether the zone is placed at the top, bottom, left or right side of the page;
sequential — based on sequence-related information, some examples include: the label of the previous zone (according to the reading order), the presence of the same text blocks on the surrounding pages, whether the zone is placed in the first/last page of the document;
formatting — related to text formatting in the zone, examples include: font size in the current and adjacent zones, the amount of blank space inside zones, mean indentation of text lines in the zone;
lexical — based upon keywords characteristic for different parts of narration, such as: affiliations, acknowledgments, abstract, keywords, dates, references, or article type; these features typically check whether the text of the zone contains any of the characteristic keywords;
heuristics — based on heuristics of various nature, such as the count and percentage of lines, words, uppercase words, characters, letters, upper/lowercase letters, digits, whitespaces, punctuation, brackets, commas, dots, etc; also whether each line starts with enumeration-like tokens, or whether the zone contains only digits.
The features used by the classifier were selected semi-automatically from a set of 103 features with the use of the zone validation dataset. The final version of the classifier uses 54 features. More details about the selection procedure and results can be found in Section 4.3.1.
The best SVM parameters were also estimated automatically using the zone validation dataset. More detailed results can be found in Section 4.3.2.
Since our problem is a multiclass classification problem, it is reduced to a number of binary classifiers with the use of the "one vs. one" strategy.
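A minimal sketch of the "one vs. one" reduction: one binary classifier is trained per unordered pair of classes, and at prediction time the pairwise winners vote. The function names are illustrative, not the actual implementation:

```python
from collections import Counter
from itertools import combinations

def class_pairs(labels):
    """Enumerate the unordered class pairs; k classes yield
    k * (k - 1) / 2 binary classifiers."""
    return list(combinations(sorted(labels), 2))

def one_vs_one_predict(classifiers, x):
    """classifiers maps each class pair to a decision function that
    returns one of the two classes for a sample x; the class returned
    most often wins the vote."""
    votes = Counter(decide(x) for decide in classifiers.values())
    return votes.most_common(1)[0][0]
```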
Document region classification makes it possible to split the content of the input file into three areas of interest: metadata, body and references, which are later analysed in three parallel specialized extraction paths.
3.4 Metadata Extraction
The geometric model of the input document enhanced with zone categories is the input to the metadata extraction stage, the part of the algorithm specializing in extracting the proper metadata of the document. During metadata extraction only zones labelled as metadata are analysed.
The algorithm is able to extract the following information:
title (string): the title of the document,
authors (a list of strings): the full names of all the authors, in the order given in the document,
affiliations (a list of tuples): a list of parsed affiliations of the authors of the document, in the order given in the document; a single affiliation contains:
raw text of the affiliation (string),
organization name (string),
country (string and two-character country ISO code).
emails (a list of strings): a list of emails of the authors of the document,
abstract (string): the abstract provided by the authors,
keywords (a list of strings): the article’s keywords listed in the document,
journal (string): the name of the journal in which the article was published,
volume (string): the volume in which the article was published,
issue (string): the issue in which the article was published,
year (string): the year of publication,
pages (string): the pages range of the published article,
DOI (string): DOI identifier of the document.
The algorithm analyses only the content of the input document, and only the information explicitly given in the document is extracted. No information is acquired from external sources or inferred based on the text of the document. All information listed above is optional, and there is no guarantee that it will appear in the resulting metadata record.
The default output format is NLM JATS. Listing 3 shows an example metadata record.
| Step | Goal | Implementation |
|---|---|---|
| 1. Metadata zone classification | Classifying the zones labelled previously as metadata into specific metadata classes. | SVM |
| 2. Authors and affiliations extraction | Extracting individual author names, affiliation strings and determining the relations between them. | heuristics |
| 3. Affiliation parsing | Extracting organization, address and country from affiliation strings. | CRF |
| 4. Metadata cleaning | Extracting atomic metadata information from labelled zones, cleaning and forming the final record. | simple rules |
Table 3.2 lists the steps executed during the metadata extraction stage. The details of the implementations are provided in the following sections: metadata zone classification (Section 3.4.1), authors and affiliations extraction (Section 3.4.2), affiliation parsing (Section 3.4.3) and metadata cleaning (Section 3.4.4).
3.4.1 Metadata Classification
Metadata classification is the first step in the metadata extraction stage. Its goal is to classify all zones labelled previously as metadata into specific metadata classes: title (the title of the document), author (the names of the authors), affiliation (authors’ affiliations), editor (the names of the editors), correspondence (addresses and emails), type (the type specified in the document, such as ”research article”, ”editorial” or ”case study”), abstract (document’s abstract), keywords (keywords listed in the document), bib_info (for zones containing various bibliographic information, such as journal name, volume, issue, DOI, etc.), dates (the dates related to the process of publishing the article).
Formally, the goal of metadata zone classification is to find a function label_meta: Z_metadata → {title, author, affiliation, editor, correspondence, type, abstract, keywords, bib_info, dates}, where Z_metadata is the set of zones labelled previously as metadata.
The metadata classifier is based on Support Vector Machines and is implemented in a similar way as the category classifier. The classifiers differ in the target zone labels, the features and the SVM parameters used. The features, as well as the SVM parameters, were selected using the same procedure, described in Sections 4.3.1 and 4.3.2. The final classifier uses 53 features.
The decision of splitting zone classification into two separate classification steps, as opposed to implementing only one classification step, was based mostly on aspects related to the workflow architecture and maintenance. In fact both tasks have different characteristics and needs. The goal of the category classifier is to divide the article’s content into three general areas of interest, which can be then analysed independently in parallel, while metadata classifier focuses on far more detailed analysis of only a small subset of all zones.
The implementation of the category classifier is more stable: the target label set does not change, and once trained on a reasonably large and diverse dataset, the classifier performs well on other layouts as well. On the other hand, metadata zones have much more variable characteristics across different layouts, and from time to time there is a need to tune the classifier or retrain it using a wider document set. What is more, in the future the classifier might be extended to be able to capture new labels, not considered before (for example a special label for zones containing both author and affiliation, a separate label for categories or general terms).
For these reasons we decided to implement content classification in two separate steps. As a result the two tasks can be maintained independently, and for example adding another metadata label to the algorithm does not change the performance of recognizing the bibliography sections. It is also possible that in the future the metadata classifier will be reimplemented using a different technique, allowing to add new training cases incrementally, for example using a form of online machine learning.
As a result of metadata classification the zones labelled previously as metadata have specific metadata labels assigned, which gives the algorithm valuable hints where different metadata types are located in the document.
3.4.2 Affiliation-Author Relation Determination
As a result of classifying the document’s fragments, we usually obtain a few regions labelled as author or affiliation. In this step individual author names and affiliation strings are extracted and the relations between them are determined.
More formally, the goal of author-affiliation relation extraction is to determine for a given document D:
A — the list of the document’s author full names,
F — the set of the document’s affiliation strings,
R ⊆ A × F — the author-affiliation relation, where (a, f) ∈ R if and only if the affiliation string f represents an affiliation of the author a.
In general the implementation is based on heuristics and regular expressions, and the details depend on article’s layout. There are two main styles used: (1) author names are grouped together in a form of a list, and affiliations are also placed together below the author’s list, at the bottom of the first page or even just before the bibliography section (an example is shown in Figure 3.10), and (2) each author is placed in a separate zone along with its affiliation and email address (an example is shown in Figure 3.11).
The first step is to recognize the type of layout of a given document. If the document contains at least two zones labelled as affiliation placed approximately in the same horizontal line, the algorithm treats it as type (2), otherwise as type (1).
In the case of a layout of the first type (Figure 3.10), at the beginning the authors’ lists are split using predefined lists of separators. Then we detect affiliation indexes based on predefined lists of symbols and also geometric features, in particular the y-position of the characters. The detected indexes are then used to split affiliation lists and assign affiliations to authors.
In the case of a layout of the second type (Figure 3.11), each author is already assigned to his or her affiliation by being placed in the same zone. It is therefore enough to detect the author name, affiliation and email address. We assume the first line of such a zone is the author name, the email address is detected based on regular expressions, and the rest is treated as the affiliation string.
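The heuristic for a type-(2) zone can be sketched as follows; the e-mail pattern and the zone representation are illustrative assumptions, not the actual rules:

```python
import re

# illustrative e-mail pattern, not the algorithm's actual expression
EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def parse_author_zone(zone_lines):
    """Parse a zone grouping one author with his or her affiliation and
    e-mail: the first line is assumed to be the author name, e-mail
    addresses are found by a regular expression, and the remaining text
    is treated as the affiliation string."""
    author = zone_lines[0].strip()
    rest = " ".join(zone_lines[1:])
    emails = EMAIL.findall(rest)
    affiliation = EMAIL.sub("", rest).strip(" ,;")
    return author, affiliation, emails
```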
3.4.3 Affiliation Parsing
Extracted affiliation strings are the input to affiliation parsing step [TkaczykTB15], the goal of which is to recognize affiliation fragments related to institution, address and country. Additionally, country names are decorated with their ISO codes. Figure 3.12 shows an example of a parsed affiliation string.
More formally, let Σ be the alphabet used in the document and a ∈ Σ+ — the non-empty affiliation string. Let us also denote by Sub(a) the set of all (possibly empty) substrings of a.
The goal of affiliation parsing is to find:
inst ∈ Sub(a) — the name of the institution,
addr ∈ Sub(a) — the address of the institution,
country ∈ Sub(a) — the name of the country,
such that inst, addr and country are pairwise non-overlapping substrings of a.
The first step of affiliation parsing is tokenization. The input string a is divided into a list of tokens (t1, t2, ..., tn), such that the concatenation of the tokens, possibly interleaved with spaces, gives a, and each ti is a maximal continuous substring containing only letters, only digits, or a single other character.
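The tokenization rule can be sketched with a single regular expression; this is a sketch, and the actual implementation may differ in whitespace and Unicode handling:

```python
import re

def tokenize(affiliation):
    """Split an affiliation string into maximal runs of letters,
    maximal runs of digits, and single other (non-whitespace)
    characters."""
    return re.findall(r"[^\W\d_]+|\d+|[^\w\s]|_", affiliation)
```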
After tokenization each token is classified as institution, address, country or other. The classification is done by a linear-chain Conditional Random Fields classifier, which is a state-of-the-art technique for sequence classification able to model sequential relationships and handle a lot of overlapping features.
The classifier uses the following binary features:
WORD — every word (the token itself) corresponds to a feature.
RARE — whether the word is rare, that is, whether the training set contains fewer than a predefined threshold of occurrences of it.
NUMBER — whether the token is a number.
ALLUPPER — whether the token is an all-uppercase word.
ALLLOWER — whether the token is an all-lowercase word.
STARTUPPER — whether the token is a word that starts with an uppercase letter followed by lowercase letters.
COUNTRY — whether the token is contained in the dictionary of country words.
INSTITUTION — whether the token is contained in the dictionary of institution words.
ADDRESS — whether the token is contained in the dictionary of address words.
The dictionaries were compiled by hand using the resources from [Jonnalagadda11]. All the features exist in five versions: for the current token, for the two preceding tokens, and for the two following tokens.
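A feature function in the spirit of this feature set can be sketched as follows; the dictionary handling is simplified and the RARE feature is omitted, so the names and details are illustrative:

```python
def token_features(tokens, i, countries=frozenset(),
                   institutions=frozenset(), addresses=frozenset()):
    """Binary features for token i of an affiliation token list, with
    WORD features additionally produced for a window of two tokens
    before and after the current one."""
    t = tokens[i]
    feats = {
        "WORD=" + t.lower(): True,
        "NUMBER": t.isdigit(),
        "ALLUPPER": t.isalpha() and t.isupper(),
        "ALLLOWER": t.isalpha() and t.islower(),
        "STARTUPPER": t.isalpha() and t[0].isupper() and t[1:].islower(),
        "COUNTRY": t.lower() in countries,
        "INSTITUTION": t.lower() in institutions,
        "ADDRESS": t.lower() in addresses,
    }
    # window features, prefixed with the offset of the neighbour token
    for off in (-2, -1, 1, 2):
        j = i + off
        if 0 <= j < len(tokens):
            feats[str(off) + ":WORD=" + tokens[j].lower()] = True
    return feats
```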
After the classification the neighbouring tokens with the same label are concatenated. The resulting inst, addr and country are the first occurrences of the substrings labelled accordingly. Theoretically, an affiliation can contain multiple fragments with a given label; in practice, however, as a result of the training data we used, one affiliation usually contains at most one substring of each kind: institution, address and country.
3.4.4 Metadata Cleaning
The purpose of the final step of metadata extraction stage is to gather the information from labelled zones, extracted author names, parsed affiliations and relations between them, clean the metadata and export the final record.
The cleaning is done with a set of heuristic-based rules. The algorithm performs the following operations:
removing the ligatures from the text,
concatenating zones labelled as abstract,
removing hyphenation from the abstract based on regular expressions,
since the document type is often placed just above the title, it is removed from the title zone if needed (based on a small dictionary of types),
extracting email addresses from correspondence and affiliation zones using regular expressions,
associating email addresses with authors based on author names,
page ranges placed directly in bib_info zones are parsed using regular expressions,
if there is no page range given explicitly in the document, we also try to retrieve it from the page numbers printed on each page,
parsing dates using regular expressions,
journal, volume, issue and DOI are extracted from bib_info zones based on regular expressions.
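Two of the regular-expression rules above, dehyphenation and page-range parsing, might look roughly like this; the exact patterns are illustrative:

```python
import re

def remove_hyphenation(text):
    # join words broken across line ends, e.g. "informa-\ntion" -> "information"
    return re.sub(r"(\w)-\s*\n\s*(\w)", r"\1\2", text)

def parse_page_range(bib_info):
    # the first "number dash number" occurrence is taken as the page range;
    # the character class also accepts en and em dashes
    m = re.search(r"(\d+)\s*[-\u2013\u2014]\s*(\d+)", bib_info)
    return (int(m.group(1)), int(m.group(2))) if m else None
```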
Metadata cleaning is the final step of the metadata extraction stage. It results in the final metadata record of the document, containing the proper document metadata and exported as front section of the resulting NLM JATS file.
3.5 Bibliography Extraction
Bibliography extraction is, next to metadata extraction, another specialized extraction stage of the algorithm. During bibliography extraction, zones labelled previously as references are analyzed in order to extract the parsed bibliographic references listed in the document.
The result of bibliography extraction is a list of bibliographic references, each of which is a tuple that can contain the following information:
raw reference (string): raw text of the reference, as it was given in the input document,
type (string): type of the referenced document; possible values are: journal paper, conference paper, technical report,
title (string): the title of the referenced document,
authors (a list of pairs of given name and surname): the full names of all the authors,
source (string): the name of the journal in which the article was published or the name of the conference,
volume (string): the volume in which the article was published,
issue (string): the issue in which the article was published,
year (string): the year of publication,
pages (a pair of first and last page): the range of pages of the article,
DOI (string): DOI identifier of the referenced document.
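The tuple described above can be sketched as a Python dataclass (the field names are illustrative); only the raw text and the type are mandatory, all other fields are optional:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Reference:
    raw: str                                  # raw reference text
    type: str                                 # journal paper / conference paper / technical report
    title: Optional[str] = None
    authors: List[Tuple[str, str]] = field(default_factory=list)  # (given name, surname)
    source: Optional[str] = None              # journal or conference name
    volume: Optional[str] = None
    issue: Optional[str] = None
    year: Optional[str] = None
    pages: Optional[Tuple[str, str]] = None   # (first page, last page)
    doi: Optional[str] = None
```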
Each reference on the output contains the raw text and type; other information is optional. The output of bibliography extraction corresponds to the back section of the resulting NLM JATS record. Listing 3 shows an example of such a section.
|Step||Goal||Implementation|
|1. Reference strings extraction||Dividing the content of references zones into individual reference strings.||k-means clustering|
|2. Reference parsing||Extracting metadata information from reference strings.||CRF|
|3. Reference cleaning||Cleaning and exporting the final record.||heuristics|
Table 3.3 lists the steps executed during the bibliography extraction stage. The detailed descriptions are provided in the following sections: references extraction (Section 3.5.1), references parsing (Section 3.5.2) and references cleaning (Section 3.5.3).
3.5.1 References Extraction
Zones labelled as references by category classifier contain a list of reference strings, each of which can span over one or more text lines. The goal of reference strings extraction is to split the content of those zones into individual reference strings.
Let us denote by $Z_R$ the set of all zones in the document labelled as references:

$$Z_R = \{z \in Z : \mathit{label}(z) = \mathit{references}\}$$

Let also $L_R$ be the set of all lines from the references zones:

$$L_R = \{l \in L : \exists_{z \in Z_R} \; l \in z\}$$

The goal of reference extraction is to find a partition $R = \{R_1, R_2, \ldots, R_k\}$ of $L_R$ such that each set $R_i$, along with the order inherited from the set $L_R$, represents a single reference string. Let us denote by $r(l)$ the reference of a given line $l \in L_R$, that is $l \in r(l)$. The partition should respect the reading order in the line set, that is

$$\forall_{l_1, l_2, l_3 \in L_R} \; \left( l_1 \le l_2 \le l_3 \wedge r(l_1) = r(l_3) \right) \Rightarrow r(l_1) = r(l_2)$$
Each line belongs to exactly one reference string; some of them are first lines of their reference, others are inner or last ones. The sequence of all text lines belonging to the bibliography section can be represented by the following regular expression:

( <first line of a reference> ( <inner line of a reference>* <last line of a reference> )? )*
The task of grouping text lines into consecutive references can be solved by determining which lines are the first lines of their references. A set of such lines is shown in Figure 3.13. More formally, we are interested in finding a set $F \subseteq L_R$, such that

$$F = \{l \in L_R : l \text{ is the first line of } r(l)\}$$
Finding the set $F$ is equivalent to finding the partition $R$, since every set $R_i$ can be constructed by taking a first line and adding all the following lines until the next first line or the end of the line sequence is reached.
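This construction of the partition from the set of first lines can be sketched as:

```python
def group_reference_lines(lines, is_first):
    # lines: text lines in reading order; is_first[i]: whether lines[i]
    # starts a new reference. Lines are accumulated into the current
    # reference until the next first line is reached.
    references = []
    for line, first in zip(lines, is_first):
        if first or not references:
            references.append([line])
        else:
            references[-1].append(line)
    return [" ".join(ref) for ref in references]
```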
The pseudocode of the algorithm is presented in Algorithm 3.2. To find the set $F$, we transform all lines to feature vectors and cluster them into two disjoint subsets. Ideally one of them is the set of all first lines ($F$) and the other is equal to $L_R \setminus F$. The cluster containing the first line in $L_R$ (the smallest with respect to the order) is assumed to be equal to $F$.
For clustering lines we use the k-means algorithm with the Euclidean distance metric. In this case $k = 2$, since the line set is clustered into two subsets. As initial centroids we choose the first line's feature vector and the vector with the largest distance to the first one. We use the following features:
whether the line starts with an enumeration pattern — this feature activates only if there exists a preceding line with the same pattern, but labelled with the previous number, and if there exists a following line with the same pattern, but labelled with the next number,
whether the previous line ends with a dot,
the ratio of the length of the previous line to the width of the previous line’s zone,
whether the indentation of the current line within its zone is above a certain threshold,
whether the vertical distance between the line and the previous one is above a certain threshold (calculated based on the minimum distance between references lines in the document).
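A minimal sketch of this clustering step is given below: plain k-means with $k = 2$ and the centroid initialization described above, assuming the lines have already been turned into numeric feature vectors:

```python
import math

def find_first_lines(vectors, max_iter=100):
    # Initial centroids: the first line's vector and the vector farthest from it.
    c = [list(vectors[0]),
         list(max(vectors, key=lambda v: math.dist(v, vectors[0])))]
    assignment = None
    for _ in range(max_iter):
        new = [0 if math.dist(v, c[0]) <= math.dist(v, c[1]) else 1
               for v in vectors]
        if new == assignment:          # converged
            break
        assignment = new
        for k in (0, 1):               # recompute centroids
            members = [v for v, a in zip(vectors, assignment) if a == k]
            if members:
                c[k] = [sum(xs) / len(members) for xs in zip(*members)]
    # The cluster containing the first line is assumed to be the first-line cluster.
    return [a == assignment[0] for a in assignment]
```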
The result of the references extraction step is a list of bibliographic references in the form of raw strings, which undergo parsing in the next step.
3.5.2 References Parsing
Reference strings extracted previously contain important reference metadata. During parsing this metadata is extracted from the reference strings, and the result is the list of the document's parsed bibliographic references. The information we extract from the strings includes: authors (given names and surnames), title, source, volume, issue, pages (the first and last page number of the range) and year. An example of a parsed reference is shown in Figure 3.14.
Formally, the task can be defined similarly to the task of affiliation parsing described in Section 4.4. The implementation is also similar: first a reference string is tokenized into a sequence of tokens. The tokens are then transformed into vectors of features and classified by a linear-chain CRF classifier. The two classifiers differ in their target labels and the features used.
The token classifier uses the following token labels: first_name (author’s first name or initial), surname (author’s surname), title, source (journal or conference name), volume, issue, page_first (the lower bound of pages range), page_last (the upper bound of pages range), year and text (for separators and other tokens without a specific label).
The main feature is the token (the word) itself. This feature activates only if the number of its occurrences in the validation dataset exceeds a certain threshold. We also developed 34 additional binary features:
features checking whether all characters in the token are: digits, letters, letters or digits, lowercase letters, uppercase letters, Roman numerals;
whether the token starts with an uppercase letter;
whether the token is: a single digit, a lowercase letter, an uppercase letter;
whether the token is present in the dictionaries of: cities, publisher words, series words, source words, number/issue words, pages words, volume words;
whether the token is: an opening/closing parenthesis, an opening/closing square bracket, a comma, a dash, a dot, a quotation mark, a slash;
whether the token is equal to "and" or "&";
whether the token is a dash placed between words;
whether the token is a single quote placed between words;
whether the token is a year.
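A few of the binary features above, sketched in Python; the year range used in the last feature is an assumption:

```python
import re

ROMAN_RE = re.compile(r"^[ivxlcdmIVXLCDM]+$")

def some_ref_features(token):
    return {
        "DIGITS": token.isdigit(),
        "LETTERS": token.isalpha(),
        "ROMAN": bool(ROMAN_RE.match(token)),
        "STARTUPPER": token[:1].isupper(),
        "COMMA": token == ",",
        "DASH": token == "-",
        "AND": token in ("and", "&"),
        "YEAR": token.isdigit() and len(token) == 4 and 1500 <= int(token) <= 2099,
    }
```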
It is worth noticing that the token's label depends not only on its feature vector, but also on the features of the surrounding tokens. To reflect this in the classifier, the token's feature vector contains not only the features of the token itself, but also the features of the two preceding and the two following tokens, similarly to the affiliation parser.
After token classification, fragments labelled as first_name and surname are joined together based on their order to form consecutive authors, and similarly fragments labelled as page_first and page_last are joined together to form the page range. Additionally, in the case of the title and source labels, the neighbouring tokens with the same label are concatenated.
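The pairing of name fragments into consecutive authors can be sketched as follows, assuming each author contributes one given-name fragment and one surname fragment (in either order); real reference strings can be messier:

```python
def assemble_authors(spans):
    # spans: (label, text) fragments in reading order, with neighbouring
    # same-labelled tokens already concatenated.
    authors, pending = [], {}
    for label, text in spans:
        if label not in ("first_name", "surname"):
            continue
        if label in pending:  # a repeated label starts the next author
            authors.append((pending.get("first_name", ""),
                            pending.get("surname", "")))
            pending = {}
        pending[label] = text
    if pending:
        authors.append((pending.get("first_name", ""),
                        pending.get("surname", "")))
    return authors
```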
As a result of the reference parsing step, we have a list of the document's bibliographic references, each of which is a tuple containing the raw reference string as well as the metadata extracted from it.
3.5.3 References Cleaning
Similarly to metadata cleaning, references cleaning is the last step of the bibliography extraction stage. Its purpose is to clean previously extracted data and export the final record.
During references cleaning the following operations are performed:
The ligatures are removed from the text.
Hyphenation is removed from the strings based on regular expressions.
DOI is recognized in the reference strings by a regular expression. The reference parser is not responsible for extracting this information, because the dataset used for training the token classifier does not contain enough references with DOI.
Finally, the type of the reference (journal paper, conference proceedings or technical report) is detected by searching for specific keywords in the reference string.
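The DOI recognition and type detection might be sketched like this; the DOI pattern follows the commonly used Crossref-style regular expression, and the keyword lists are illustrative placeholders:

```python
import re

DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

TYPE_KEYWORDS = [  # checked in order; the default type is journal paper
    ("technical report", ("tech. rep.", "technical report")),
    ("conference proceedings", ("proc.", "proceedings", "conference",
                                "symposium", "workshop")),
]

def clean_reference(raw):
    match = DOI_RE.search(raw)
    lowered = raw.lower()
    ref_type = "journal paper"
    for type_name, keywords in TYPE_KEYWORDS:
        if any(kw in lowered for kw in keywords):
            ref_type = type_name
            break
    return {"doi": match.group(0).rstrip(".,;") if match else None,
            "type": ref_type}
```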
Reference cleaning is the last step of bibliography extraction. The entire stage results in a list of parsed bibliographic references, corresponding to the back section of the output NLM JATS record.
3.6 Structured Body Extraction
Structured body extraction is, next to metadata extraction and bibliography extraction, another specialized extraction stage of the algorithm. The purpose of structured body extraction is to obtain the main text of the document in a hierarchical form composed of sections, subsections and subsubsections, by analyzing the middle region of the document, labelled previously as body.
Intuitively, the result of structured body extraction is the full text of the document represented by a list of sections, each of which might contain a list of subsections, each of which might contain a list of subsubsections. Each structure part (section, subsection and subsubsection) has the title and the text content.
More formally, for a given document we denote by $P$ the set of all structure parts. We have $P = S \cup S' \cup S''$, where $S$ is the set of the sections of the document, $S'$ is a (possibly empty) set of subsections and $S''$ is a (possibly empty) set of subsubsections. The following statements are also true for the structure parts:

$$S \cap S' = \emptyset \qquad S \cap S'' = \emptyset \qquad S' \cap S'' = \emptyset$$
The hierarchical structure of the document parts is defined by a parent function $\mathit{parent} : S' \cup S'' \to P$, which maps the elements to their parents in the structure, in particular:

$$\forall_{s' \in S'} \; \mathit{parent}(s') \in S \qquad \forall_{s'' \in S''} \; \mathit{parent}(s'') \in S'$$
All the sets $S$, $S'$ and $S''$ are totally ordered sets, where the order corresponds to the natural reading order of the parts of the document. The order of the elements also respects the section hierarchy; in particular, the children of a given element form a contiguous block that follows their parent and precedes the next element of the parent's level.
Every structure part $p \in P$ has its title $\mathit{title}(p)$ and its text content $\mathit{content}(p)$. The text content is understood as the text associated directly with the given element; in particular, the text contents of the children of a given element are not part of its text content. In order to obtain the full content of a given element, one has to recursively iterate over its descendants and concatenate their contents. The text content of every element precedes the text contents of its descendants with respect to the document's reading order.
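The structure described above can be sketched as a small recursive data type, where obtaining the full content of an element concatenates its own text with its descendants' texts in reading order:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BodyPart:
    title: str
    text: str                                   # text associated directly with this part
    children: List["BodyPart"] = field(default_factory=list)

def full_content(part):
    # the element's own text precedes the texts of its descendants
    pieces = [part.text] + [full_content(child) for child in part.children]
    return "\n".join(p for p in pieces if p)
```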
The output of body extraction corresponds to the body section of the resulting NLM JATS record. Listing 4 shows an example of such a section. The paragraphs are shortened for conciseness.
|Step||Goal||Implementation|
|1. Text content filtering||Filtering out fragments related to the tables, images and equations from body parts of the document.||SVM|
|2. Section headers detection||Detecting the body lines containing the titles of sections, subsections and subsubsections.||heuristics|
|3. Section hierarchy determination||Dividing the section headers into levels and building the section hierarchy.||heuristic clustering|
|4. Structured body cleaning||Cleaning and exporting the final structured body content.||heuristics|
Table 3.4 lists the steps executed during the body extraction stage. The detailed descriptions are provided in the following sections: text content filtering (Section 3.6.1), section headers detection (Section 3.6.2), section hierarchy determination (Section 3.6.3) and structured body cleaning (Section 3.6.4).
3.6.1 Text Content Filtering
Text content filtering is the first step in the body extraction stage. The purpose of this step is to locate all the relevant (containing section titles and paragraphs) parts in the body of the document. The task is accomplished by classifying the body zones into one of the two classes: body_content (the parts we are interested in) and body_other (all non-relevant fragments, such as tables, table captions, the text belonging to images, image captions, equations, etc).
More formally, the goal of text content filtering is to find a function

$$f : Z_B \to \{\mathit{body\_content}, \mathit{body\_other}\}$$

where $Z_B$ denotes the set of the document's body zones.
The classifier is based on Support Vector Machines and is implemented in a similar way as the category and metadata classifiers. It differs from them in the target zone labels, the features and the SVM parameters used. The features, as well a