Modern methods in natural language processing and information retrieval are heavily dependent on large collections of text. The World Wide Web is an inexhaustible source of content for such applications. However, a common problem is that Web pages include not only main content, but also ads, hyperlink lists, navigation, previews of other articles, banners, etc. This boilerplate/template content has often been shown to have negative effects on the performance of derived applications [15, 24].
The task of separating main text in a Web page from the remaining content is known in the literature as “boilerplate removal”, “Web page segmentation” or “content extraction”. Established popular methods for this problem use rule-based or machine learning algorithms. The most successful approaches first perform a splitting of an input Web page into text blocks, followed by a binary labeling of each block as either main content or boilerplate.
In this paper, we propose a hidden Markov model on top of neural potentials for the task of boilerplate removal. We leverage the representational power of convolutional neural networks (CNNs) to learn unary and pairwise potentials over blocks in a page, based on complex non-linear combinations of traditional DOM-based features. At prediction time, we find the most likely block labeling by maximizing the joint probability of a label sequence using the Viterbi algorithm. The effectiveness of our method is demonstrated on standard benchmarking datasets.
The remainder of this document is structured as follows. Section 2 gives an overview of related work. Section 3 formally defines the main-content extraction problem, introduces the block segmentation procedure and details our model. Section 4 empirically demonstrates the merit of our method on several benchmark datasets for content extraction and document retrieval.
2 Related Work
Early approaches to HTML boilerplate removal use a range of heuristics and rule-based methods. Finn et al. [7] design an effective system called Body Text Extractor (BTE). It relies on the observation that the main content contains longer paragraphs of uninterrupted text, where HTML tags occur less frequently compared to the rest of the Web page. Looking at the cumulative distribution of tags as a function of the position in the document, Finn et al. identify a flat region in the middle of this distribution graph as the main content of the page. While simple, their algorithm has two drawbacks: (1) it only makes use of the location of HTML tags and not of their structure, thus losing potentially valuable information, and (2) it can only identify one continuous stretch of main content, which is unrealistic for a considerable percentage of modern Web pages.
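The single-window heuristic behind BTE can be sketched as follows. This is a simplified illustration under our own formulation (a token is either 'text' or 'tag', and we search the contiguous window maximizing text tokens minus tag tokens), not Finn et al.'s exact implementation:

```python
def bte_window(tokens):
    """Return the half-open (i, j) span maximising the count of 'text'
    tokens minus 'tag' tokens inside the window -- BTE's single stretch
    of candidate main content.  O(n^2) brute-force sketch."""
    best, best_span = float('-inf'), (0, 0)
    n = len(tokens)
    for i in range(n):
        score = 0
        for j in range(i, n):
            score += 1 if tokens[j] == 'text' else -1
            if score > best:
                best, best_span = score, (i, j + 1)
    return best_span
```

Because the objective is over a single contiguous window, the sketch also exhibits BTE's second drawback: two content regions separated by a tag-dense stretch can never both be selected.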
To address these issues, several other algorithms have been designed to operate on DOM trees, thus leveraging the semantics of the HTML structure [11, 19, 6]. The problem with these early methods is that they make intensive use of the fact that pages used to be partitioned into sections by <table> tags, which is no longer a valid assumption.
In the next line of work, the DOM structure is used to jointly process multiple pages from the same domain, relying on their structural similarities. This approach was pioneered by Yi et al. [24] and improved by various others. These methods are very suitable for detecting template content that is present in all pages of a website, but perform poorly on websites that consist of a single Web page only. In this paper we focus on single-page content extraction without exploiting the context of other pages from the same site.
Gottron [10] proposes the Document Slope Curves and Content Code Blurring methods, which are able to identify multiple disconnected content regions. The latter method parses the HTML source code as a vector of 1’s, representing pieces of text, and 0’s, representing tags. This vector is then smoothed iteratively, such that eventually it exhibits active regions where text dominates (content) and inactive regions where tags dominate (boilerplate). This idea of smoothing was extended to also deal with the DOM structure [4, 21]. Chakrabarti et al. [3] assign a likelihood of being content to each leaf of the DOM tree, using isotonic smoothing to combine the likelihoods of neighbors with the same parents. In a similar direction, Sun et al. [21] use both the tag/text ratio and DOM tree information to propagate DensitySums through the tree.
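The smoothing idea behind Content Code Blurring can be sketched as follows. This is a toy illustration with a plain 3-element moving average and an assumed threshold of 0.5; Gottron's method uses its own kernel and parameters:

```python
def blur(signal, passes=5):
    """Iteratively smooth a 0/1 tag-vs-text vector with a 3-element
    moving average (edges are clamped)."""
    s = [float(x) for x in signal]
    for _ in range(passes):
        s = [(s[max(i - 1, 0)] + s[i] + s[min(i + 1, len(s) - 1)]) / 3.0
             for i in range(len(s))]
    return s

def active_regions(signal, threshold=0.5, passes=5):
    """Return (start, end) index ranges where the blurred signal stays
    above the threshold -- candidate main-content regions."""
    blurred = blur(signal, passes)
    regions, start = [], None
    for i, v in enumerate(blurred):
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            regions.append((start, i))
            start = None
    if start is not None:
        regions.append((start, len(blurred)))
    return regions
```

Unlike BTE, this formulation naturally yields multiple disconnected regions when several text-dense stretches survive the smoothing.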
Machine learning methods offer a convenient way to combine various indicators of “contentness”, automatically weighting hand-crafted features according to their relative importance. The FIASCO system by Bauer et al. [2] uses Support Vector Machines (SVMs) to classify an HTML page as a sequence of blocks that are generated through a DOM-based segmentation of the page and are represented by linguistic, structural and visual features. Similar works of Kohlschütter et al. [15, 17] also employ SVMs to independently classify blocks. Spousta et al. [20] extend this approach by reformulating the classification problem as a case of sequence labeling in which all blocks are jointly tagged. They use conditional random fields to take advantage of correlations between the labels of neighboring content blocks. This method was the most successful in the CleanEval competition [1].
In this paper, we propose an effective set of block features that capture information from adjacent neighbors in the DOM tree. Additionally, we employ a deep learning framework to automatically learn non-linear features combinations, giving the model an advantage over traditional linear approaches. Finally, we jointly optimize the labels for the whole Web page according to local potentials predicted by the neural networks.
3 Method
Boilerplate removal is the problem of labeling sections of the text of a Web page as main content or boilerplate (anything else). In the following, we discuss the various steps of our method. The complete pipeline is also illustrated in Figure 1.
3.1 Preprocessing
We expect raw Web page input to be written in (X)HTML markup. Each document is parsed into a Document Object Model tree (DOM tree) using Jsoup [12]. We preprocess this DOM tree by i) removing empty nodes and nodes containing only whitespace, and ii) removing nodes from which no content can be extracted, e.g. <br>, <checkbox>, <head>, <hr>, <iframe>, <img>, <input>.
We make use of the parent and grandparent DOM tree relations. In a raw DOM tree, however, these relationships are not always meaningful. Figure 2 shows a typical fragment of a DOM tree where two neighboring nodes share the same semantic parent (<ul>) but not the same DOM parent. To improve the expressiveness of tree-based features (such as “the number of children of a node’s parent”), we recursively merge single-child parent nodes with their respective child. We call the resulting tree structure the Collapsed DOM (CDOM).
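The recursive merging that produces the CDOM can be sketched as follows. This is a minimal illustration on a toy tree class; how Web2Text represents the merged tag chain internally is an assumption here (we simply join the tag names):

```python
class Node:
    def __init__(self, tag, children=None):
        self.tag = tag
        self.children = children or []

def collapse(node):
    """Recursively merge chains of single-child parent nodes with their
    child, so e.g. a <div> wrapping only a <ul> becomes one 'div/ul'
    node whose children are the <ul>'s children."""
    node.children = [collapse(c) for c in node.children]
    if len(node.children) == 1:
        child = node.children[0]
        return Node(node.tag + "/" + child.tag, child.children)
    return node
```

After collapsing, features such as “the number of children of a node’s parent” refer to the semantic parent rather than to an intermediate wrapper node.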
3.2 Block Segmentation
Our content extraction algorithm is based on sequence labeling. A Web page is treated as a sequence of blocks that are labeled main content or boilerplate. There are multiple ways to split a Web page into blocks, the most popular being i) lines in the HTML file, ii) DOM leaves, iii) block-level DOM leaves. We opt for the most flexible DOM leaves strategy, described as follows. Sections on a page that require different labels are usually separated by at least one HTML tag. Therefore, it is safe to consider DOM leaves (#text nodes) as the blocks of our sequence. A potential disadvantage of this approach is that a hyperlink in a text paragraph can receive a different label than its neighboring text. In practice, an empirical evaluation of Web2Text shows no cases where part of a textual paragraph is wrongly labeled as boilerplate while the rest is marked as main content.
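Collecting the DOM-leaf block sequence amounts to gathering #text leaves in document order. The sketch below uses a hypothetical minimal DOM representation (a namedtuple, not Jsoup's API); note how the hyperlink text becomes its own block, as discussed above:

```python
from collections import namedtuple

# hypothetical minimal DOM node: tag name, text (for #text leaves), children
DomNode = namedtuple("DomNode", "tag text children")

def text_blocks(node):
    """Depth-first collection of #text leaves: the block sequence that
    will be labeled main content vs. boilerplate."""
    if node.tag == "#text":
        return [node.text]
    out = []
    for child in node.children:
        out.extend(text_blocks(child))
    return out
```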
3.3 Feature Extraction
Features are properties of a node that may be indicative of it being content or boilerplate. Such features can be based on the node’s text, CDOM structure or a combination thereof. We distinguish between block features and edge features.
Block features capture information on each block of text on a page. They are statistics collected based on the block’s CDOM node, parent node, grandparent node and the root of the CDOM tree. In total, we collect 128 features for each text block, e.g. “the node is a <p> element”, “average word length”, “relative position in the source code”, “the parent node’s text contains an email address”, “ratio of stopwords in the whole page”, etc.
We clip and standardize all non-binary features to be approximately Gaussian with zero mean and unit variance across the training set. For a full overview of all 128 features, please refer to Appendix 0.A.
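The clip-and-standardize step can be sketched as follows. The clipping threshold of 3 standard deviations is an assumption for illustration; the paper does not restate the value used:

```python
import numpy as np

def fit_standardizer(train):
    """Compute per-feature mean and standard deviation on the training
    set; these statistics are reused at prediction time."""
    mean = train.mean(axis=0)
    std = train.std(axis=0) + 1e-8   # guard against constant features
    return mean, std

def standardize(x, mean, std, clip=3.0):
    """Rescale to zero mean / unit variance, then clip extreme outliers
    to +/- `clip` standard deviations (assumed threshold)."""
    z = (x - mean) / std
    return np.clip(z, -clip, clip)
```

Reusing the training-set statistics at test time keeps the feature distribution consistent between training and prediction.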
Edge features capture information on each pair of neighboring text blocks. We collect 25 features for each such pair. Define the tree distance of two nodes as the sum of the number of hops from both nodes to their first common ancestor. The first edge features we use are binary features corresponding to a tree distance of 2, 3, 4 and greater. Another feature signifies whether there is a line break between the nodes in an unstyled HTML page. Finally, we collect features b70–b89 from Appendix 0.A for the common ancestor CDOM node of the two text blocks.
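The tree distance defined above can be computed from a child-to-parent map as sketched below (an illustrative helper, not part of the published code; e.g. two sibling leaves have distance 2):

```python
def tree_distance(parents, a, b):
    """Sum of hops from nodes a and b up to their first common ancestor,
    given a child -> parent dictionary."""
    def ancestors(n):
        chain = [n]                 # chain[k] is n's k-th ancestor
        while n in parents:
            n = parents[n]
            chain.append(n)
        return chain

    chain_a = ancestors(a)
    chain_b = ancestors(b)
    set_b = set(chain_b)
    for hops_a, node in enumerate(chain_a):
        if node in set_b:           # first common ancestor found
            return hops_a + chain_b.index(node)
    raise ValueError("nodes are in different trees")
```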
3.4 CNN Unary and Pairwise Potentials
We assign unary potentials to each text block to be labeled and pairwise potentials to each pair of neighboring text blocks. In our case, potentials are probabilities, as explained below. The unary potentials $p_i(1)$ and $p_i(0)$ are the probabilities that the label of text block $i$ is content or boilerplate, respectively. The two potentials sum to one. The pairwise potentials $\lambda_i(0,0)$, $\lambda_i(0,1)$, $\lambda_i(1,0)$ and $\lambda_i(1,1)$ are the transition probabilities of the labels of the neighboring text blocks $i$ and $i+1$. These pairwise potentials also sum to one for each text block pair.
The two sets of potentials are modeled using CNNs with 5 layers and ReLU non-linearities between layers; all filters have a stride of 1. The unary CNN receives the sequence of block features corresponding to the sequence of text blocks to be labeled and outputs unary potentials for each block. The pairwise CNN receives the sequence of edge features corresponding to the sequence of edges and outputs the pairwise potentials for each pair. We use zero padding to make sure that each layer produces a sequence of the same length as its input sequence. The outputs of the unary network are sequences of 2 values per block that are normalized using a softmax. The outputs of the pairwise network are sequences of 4 values per block pair that are normalized in the same way. Thus, the output for a given block depends indirectly on a range of blocks around it. We employ dropout regularization with rate 0.2 and weight decay.
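The shape bookkeeping of these networks can be illustrated with a single-channel toy: same-length 1-D convolution via zero padding, followed by a softmax over the output values per position. This is only a sketch; the actual networks are multi-channel CNNs whose filter and kernel sizes are not reproduced here:

```python
import numpy as np

def conv1d_same(seq, kernel):
    """1-D convolution (stride 1) with zero padding so the output has
    the same length as the input, as in the unary/pairwise CNNs."""
    k = len(kernel)
    pad = k // 2
    padded = np.concatenate([np.zeros(pad), seq, np.zeros(pad)])
    return np.array([padded[i:i + k] @ kernel for i in range(len(seq))])

def softmax(x, axis=-1):
    """Normalize raw scores into probabilities that sum to one
    (2 values per block for unary, 4 per pair for pairwise)."""
    z = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)
```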
For the unary potentials, we minimize the cross-entropy

$L_u(\theta_u) = -\sum_{i=1}^{n} \log p_i(l_i; \theta_u),$

where $l_i$ is the true label of block $i$, $\theta_u$ are the parameters of the unary network and $n$ is the index of the last text block in the sequence.
For the pairwise network, we minimize the cross-entropy

$L_p(\theta_p) = -\sum_{i=1}^{n-1} \log \lambda_i(l_i, l_{i+1}; \theta_p),$

where $\theta_p$ are the parameters of the pairwise network.
The joint prediction of the most likely sequence of labels given an input Web page works as follows. We denote the sequence of text blocks on the page as $b_1, \dots, b_n$ and write the log-probability of a corresponding labeling $l_1, \dots, l_n$ being the correct one as

$\log p(l_1, \dots, l_n \mid b_1, \dots, b_n) = (1-\alpha) \sum_{i=1}^{n} \log p_i(l_i) + \alpha \sum_{i=1}^{n-1} \log \lambda_i(l_i, l_{i+1}) + \mathrm{const},$

where $\alpha \in [0, 1]$ is an interpolation factor between the unary and pairwise terms, kept fixed in our experiments. This expression describes a hidden Markov model, and it is maximized using the Viterbi algorithm [23] to find the optimal labeling given the predicted CNN potentials.
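The Viterbi decoding over the predicted potentials can be sketched as follows. This is a toy implementation for binary labels; the interpolation weight `alpha` defaults to a placeholder value of 0.5, not the value used in the paper:

```python
import math

def viterbi(unary, pairwise, alpha=0.5):
    """Most likely 0/1 label sequence maximizing
    (1-alpha) * sum(log unary) + alpha * sum(log pairwise).
    unary: list of (p0, p1) per block;
    pairwise: list of {(prev, cur): prob} dicts, one per consecutive pair."""
    n = len(unary)
    score = [[(1 - alpha) * math.log(unary[0][l]) for l in (0, 1)]]
    back = []
    for i in range(1, n):
        row, ptr = [], []
        for l in (0, 1):
            # best previous label for current label l
            cand = [score[i - 1][p] + alpha * math.log(pairwise[i - 1][(p, l)])
                    for p in (0, 1)]
            p_best = 0 if cand[0] >= cand[1] else 1
            row.append(cand[p_best] + (1 - alpha) * math.log(unary[i][l]))
            ptr.append(p_best)
        score.append(row)
        back.append(ptr)
    # backtrack from the best final label
    labels = [0 if score[-1][0] >= score[-1][1] else 1]
    for ptr in reversed(back):
        labels.append(ptr[labels[-1]])
    return list(reversed(labels))
```

With "sticky" transition probabilities, a weakly content-like middle block is pulled toward the label of its confident neighbors, which is exactly the benefit of joint decoding over independent block classification.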
4 Experiments
Our experiments are grouped into two stages. We begin by assessing Web2Text’s performance at boilerplate removal on a high-quality, manually annotated corpus of Web pages. In a second step, we turn to a much larger collection and investigate how improved content extraction results in superior information retrieval quality. Both experiments highlight the benefits of Web2Text over state-of-the-art alternatives.
4.1 Training Data
CleanEval 2007 [1] is the largest publicly available dataset for this task, containing 188 text blocks per Web page on average. It comes with an original split into development (60 pages) and test (676 pages) sets. We divide the development set into a training set (55 pages) and a validation set (5 pages). Since our model has more than 10,000 parameters, the original training set is likely too small for our method. We therefore created a second split of the CleanEval data: training (531 pages), validation (58 pages) and test (148 pages).
4.1.1 Automatic Block Labeling.
To our knowledge, the existing corpora (including CleanEval) for boilerplate detection pose an additional difficulty. These datasets consist only of pairs of Web pages and corresponding cleaned text (manually extracted). As a consequence, the alignment between the source text and the cleaned text, as well as the block labeling, have to be recovered. Some methods (e.g. [20]) rely on expensive manual block annotations. One of our contributions is the following automatic recovery procedure of the aligned (block, label) pairs from the original (Web page, clean text) pairs. This allows us to leverage more training data compared to previous methods.
We first linearly scan the cleaned text of a Web page using windows of 10 consecutive characters. Each such snippet is checked for uniqueness in the original Web page (after whitespace trimming). If a unique match is found, it can be used to divide both the cleaned text and the original Web page into two parts, on which the same matching method is applied recursively in a divide-and-conquer fashion. After all unique snippets are processed, we use dynamic programming to align the remaining split parts of the clean text with the corresponding split parts of the original Web page blocks. In the end, in the rare case that the content of a block is only partially matched with the cleaned text, we mark it as content if and only if at least 2/3 of its text is aligned.
4.2 Training Details
The unary and pairwise potential-predicting networks are trained separately with the Adam optimizer [14] for 5000 iterations. Each iteration processes a mini-batch of Web page excerpts that are 9 text blocks long. We perform early stopping, observing no improvement after this number of steps, and pick the model corresponding to the lowest error on the validation set.
4.3 Baselines
We compare Web2Text to a range of methods described in the literature or deployed in popular libraries. BTE [7] and Unfluff [8] are heuristic methods. Boilerpipe [17, 16] is a popular machine learning system that offers various content extraction settings, which we used in our experiments (see Table 1); we were not able to find code for re-training this system. CRF [20] achieves one of the best results on CleanEval. This machine learning model trains a Conditional Random Field on top of block features in order to perform block classification. However, as explained in Section 4.1.1, CRF relies on a different Web page block splitting and on expensive manual block annotations. As a consequence, we were not able to re-train it and only used the out-of-the-box model pre-trained on the original CleanEval split. For a fair comparison, we also train on the original CleanEval split, but note below that our neural network has many more parameters and suffers from using so few training instances.
4.3.1 Model Sizes.
The CRF model [20] contains 9,705 parameters. In comparison, our unary CNN contains 17,960 parameters and our pairwise CNN contains 12,870 parameters, for a total of 30,830 parameters in the joint structured model. This explains why the original training set is too small for our model.
4.4 Content Extraction Results
Table 1 shows the results of this experiment. All metrics are block-based, with all blocks weighted equally. We note that Web2Text obtains state-of-the-art accuracy, recall and F1 scores compared to popular baselines, including previous CleanEval winners. Note that these numbers are obtained by evaluating each method with the same block segmentation procedure, namely the DOM leaves strategy described in Section 3.2. We additionally note that, compared to using Web2Text with the unary CNN only, the gains of the hidden Markov model are marginal in this experiment.
4.4.1 Running Times.
Web2Text takes 54ms per Web page on average: 35ms for DOM parsing and feature extraction, and 19ms for the neural network forward pass and the Viterbi algorithm. These measurements were made on a MacBook with a 2.8 GHz Intel Core i5 processor.
4.5 Impact on Retrieval Performance
Besides the previously presented intrinsic evaluation of text extraction accuracy, we are interested in the performance gains that other derived tasks experience when operating on the output of boilerplate removal systems of varying quality. To this end, our extrinsic evaluation studies the task of ad hoc document retrieval. Search engines that index high-quality output of text extraction systems should be better able to answer a given user-formulated query than systems indexing raw HTML or naïvely cleaned content. Our experiments are based on the well-known ClueWeb12 collection of Web pages (http://lemurproject.org/clueweb12/). It is organized in two well-defined document sets: the full CW12-A corpus of 733M organic Web documents (27.3 TB of uncompressed text) and the smaller, randomly sampled subset CW12-B of 52M documents (1.95 TB of uncompressed text). The collection is indexed using the Indri search engine and retrieval runs are conducted with two state-of-the-art probabilistic retrieval models, the query likelihood model [13] (QL) and a relevance-based language model [18] (RM). Our 50 test queries alongside their relevance judgments originate from the 2013 edition of the TREC Web Track [5].
Table 2 highlights the quality of each combination of retrieval model and collection when indexing either raw or cleaned Web content. Within each combination, statistical significance of performance differences between raw and cleaned HTML content is denoted by an asterisk; models that significantly outperform all other text extraction methods are marked separately. We note that, in general, retrieval systems indexing CW12-A deliver stronger results than those operating only on the CW12-B subset. Due to the random sampling process, many potentially relevant documents are missing from this smaller collection. Similarly, across all comparable settings, the query likelihood model (QL) performs significantly better than the relevance model (RM). As hypothesized earlier, text extraction can influence the quality of subsequent document retrieval. We note that low-recall methods (BTE, article-ext, largest-ext, Unfluff) cause losses in retrieval performance, as relevant pieces of content are incorrectly removed as boilerplate. At the same time, the most accurate models (CRF, Web2Text) introduce improvements across all metrics. Web2Text, in particular, significantly outperformed all baselines. We note that, for this experiment, Web2Text was trained on our CleanEval split as explained in Section 4.1.
5 Conclusion
This paper presents Web2Text (our source code is publicly available at https://github.com/dalab/web2text), a novel algorithm for main content extraction from Web pages. The method combines the virtues of popular sequence labeling approaches such as CRFs [20] with deep learning methods that leverage the DOM structure as a source of information. Our experimental evaluation on CleanEval benchmarking data shows significant performance gains over all state-of-the-art methods. In a second set of experiments, we demonstrate how highly accurate boilerplate removal can significantly increase the performance of derived tasks such as ad hoc retrieval.
Acknowledgments
This research is funded by the Swiss National Science Foundation (SNSF) under grant agreement numbers 167176 and 174025.
References
-  Marco Baroni, Francis Chantree, Adam Kilgarriff, and Serge Sharoff. CleanEval: a competition for cleaning web pages. In LREC, 2008.
-  Daniel Bauer, Judith Degen, Xiaoye Deng, Priska Herger, Jan Gasthaus, Eugenie Giesbrecht, Lina Jansen, Christin Kalina, Thorben Kräger, Robert Märtin, Martin Schmidt, Simon Scholler, Johannes Steger, Egon Stemle, and Stefan Evert. FIASCO: Filtering the internet by automatic subtree classification, Osnabrück. In Building and Exploring Web Corpora: Proceedings of the 3rd Web as Corpus Workshop, incorporating CleanEval, volume 4, pages 111–121, 2007.
-  Deepayan Chakrabarti, Ravi Kumar, and Kunal Punera. Page-level template detection via isotonic smoothing. In Proceedings of the 16th international conference on World Wide Web, pages 61–70. ACM, 2007.
-  Deepayan Chakrabarti, Ravi Kumar, and Kunal Punera. A graph-theoretic approach to webpage segmentation. In Proceedings of the 17th international conference on World Wide Web, pages 377–386. ACM, 2008.
-  Kevyn Collins-Thompson, Paul Bennett, Fernando Diaz, Charlie Clarke, and Ellen Voorhees. Overview of the TREC 2013 web track. In Proceedings of the 22nd Text Retrieval Conference (TREC’13), 2013.
-  Sandip Debnath, Prasenjit Mitra, Nirmal Pal, and C Lee Giles. Automatic identification of informative sections of web pages. IEEE transactions on knowledge and data engineering, 17(9):1233–1246, 2005.
-  Aidan Finn, Nicholas Kushmerick, and Barry Smyth. Fact or fiction: Content classification for digital libraries. Unrefereed, 2001.
-  Adam Geitgey. Unfluff – an automatic web page content extractor for node.js!, 2014.
-  John Gibson, Ben Wellner, and Susan Lubar. Adaptive web-page content identification. In Proceedings of the 9th annual ACM international workshop on Web information and data management, pages 105–112. ACM, 2007.
-  Thomas Gottron. Content code blurring: A new approach to content extraction. In Database and Expert Systems Application, 2008. DEXA’08. 19th International Workshop on, pages 29–33. IEEE, 2008.
-  Suhit Gupta, Gail Kaiser, David Neistadt, and Peter Grimm. DOM-based content extraction of HTML documents. In Proceedings of the 12th international conference on World Wide Web, pages 207–214. ACM, 2003.
-  Jonathan Hedley. Jsoup HTML parser, 2009.
-  Rong Jin, Alex G Hauptmann, and ChengXiang Zhai. Language model for information retrieval. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 42–48. ACM, 2002.
-  Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  Christian Kohlschütter. A densitometric analysis of web template content. In Proceedings of the 18th international conference on World wide web, pages 1165–1166. ACM, 2009.
-  Christian Kohlschütter et al. Boilerpipe – boilerplate removal and fulltext extraction from HTML pages. Google Code, 2010.
-  Christian Kohlschütter, Peter Fankhauser, and Wolfgang Nejdl. Boilerplate detection using shallow text features. In Proceedings of the third ACM international conference on Web search and data mining, pages 441–450. ACM, 2010.
-  Victor Lavrenko and W Bruce Croft. Relevance based language models. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 120–127. ACM, 2001.
-  Shian-Hua Lin and Jan-Ming Ho. Discovering informative content blocks from web documents. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 588–593. ACM, 2002.
-  Miroslav Spousta, Michal Marek, and Pavel Pecina. Victor: the web-page cleaning tool. In 4th Web as Corpus Workshop (WAC4)-Can we beat Google, pages 12–17, 2008.
-  Fei Sun, Dandan Song, and Lejian Liao. Dom based content extraction via text density. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 245–254. ACM, 2011.
-  Karane Vieira, Altigran S Da Silva, Nick Pinto, Edleno S De Moura, Joao Cavalcanti, and Juliana Freire. A fast and robust method for web page template detection and removal. In Proceedings of the 15th ACM international conference on Information and knowledge management, pages 258–267. ACM, 2006.
-  Andrew J Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. In The Foundations Of The Digital Wireless World: Selected Works of AJ Viterbi, pages 41–50. World Scientific, 2010.
-  Lan Yi, Bing Liu, and Xiaoli Li. Eliminating noisy information in web pages for data mining. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 296–305. ACM, 2003.