Chronicling America is a product of the National Digital Newspaper Program, a partnership between the Library of Congress and the National Endowment for the Humanities to digitize historic newspapers. Over 16 million pages of historic American newspapers have been digitized for Chronicling America to date, complete with high-resolution images and machine-readable METS/ALTO OCR. Of considerable interest to Chronicling America users is a semantified corpus, complete with extracted visual content and headlines. To accomplish this, we introduce a visual content recognition model trained on bounding box annotations of photographs, illustrations, maps, comics, and editorial cartoons collected as part of the Library of Congress’s Beyond Words crowdsourcing initiative and augmented with additional annotations including those of headlines and advertisements. We describe our pipeline that utilizes this deep learning model to extract 7 classes of visual content: headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements, complete with textual content such as captions derived from the METS/ALTO OCR, as well as image embeddings for fast image similarity querying. We report the results of running the pipeline on 16.3 million pages from the Chronicling America corpus and describe the resulting Newspaper Navigator dataset, the largest dataset of extracted visual content from historic newspapers ever produced. The Newspaper Navigator dataset, finetuned visual content recognition model, and all source code are placed in the public domain for unrestricted re-use.
Chronicling America, an initiative of the National Digital Newspaper Program (itself a partnership of the Library of Congress and the National Endowment for the Humanities), is an invaluable resource for academic, local, and public historians; educators and students; genealogists; journalists; and members of the public to explore American history through the uniquely rich content preserved within historic local newspapers. Over 16 million pages of newspapers published between 1789 and 1963 are publicly available online through a search portal, as well as via a public API. Among the page-level data are 400 DPI images, as well as METS/ALTO OCR, a standard maintained by the Library of Congress that includes text localization [about_chronam].
The 16.3 million Chronicling America pages included in the Newspaper Navigator cover 174 years of American history, inclusive of 47 states, Washington, D.C., and Puerto Rico. In Figure 1, we show choropleth maps displaying the geographic coverage of the 16.3 million Chronicling America newspaper pages included in the Newspaper Navigator dataset. In Figure 2, we show the temporal coverage of these pages. The coverage reflects the selection process for determining which newspapers to include in Chronicling America; for an in-depth examination, please refer to [quoth, chronicling_america_guidelines]. The selection process should be considered in the methodology of any research performed using the Newspaper Navigator dataset.
While the images and OCR in Chronicling America provide a wealth of information, users interested in extracted visual content, including headlines, are currently restricted to general keyword searches or manual searches over individual pages in Chronicling America. For example, staff at the Library of Congress have produced a collection of Civil War maps in historic newspapers to date, but the collection is far from complete due to the difficulty of manually searching over the hundreds of thousands of Chronicling America pages from 1861 to 1865 [civil_war]
. A complete dataset would be of immense value to historians of the Civil War. Likewise, collecting all of the comic strips from newspapers published in the early 20th century would provide comic researchers with a corpus of unprecedented scale. In addition, users currently have no reliable method of determining what disambiguated articles appear on each page, presenting challenges for natural language processing (NLP) approaches to studying the corpus. A dataset of extracted headlines not only gives researchers insight into the individual articles that appear on each page but also enables users to ask questions such as, “Which news topics appeared above the fold versus below the fold in what newspapers?” Indeed, the digital humanities questions that could be asked with such a dataset abound. And yet, the possibilities extend beyond the digital humanities to include public history, creative computing, educational use within the classroom, and public engagement with the Library of Congress’s collections.
To begin the construction of larger datasets of visual content within Chronicling America and to engage the American public, the Library of Congress Labs launched a crowdsourcing initiative called Beyond Words (https://labs.loc.gov/work/experiments/beyond-words/) in 2017. With this initiative, volunteers were asked to draw bounding boxes around photographs, illustrations, comics, editorial cartoons, and maps in World War 1-era newspapers in Chronicling America; they were also asked to transcribe captions by correcting the OCR within each bounding box annotation, as well as record the content creator. Approximately 10,000 verified Beyond Words annotations have been collected to date.
Our research builds on the crowdsourced Beyond Words annotations by utilizing the bounding boxes drawn around photographs, illustrations, comics, editorial cartoons, and maps, as well as additional annotations including ones marking headlines and advertisements, to finetune a pre-trained Faster-RCNN implementation from Detectron2’s Model Zoo [ren_faster_2015, detectron2]
. Our visual content recognition model predicts bounding boxes around these 7 different classes of visual content in historic newspapers. This paper presents our work training this visual content recognition model and constructing a pipeline for automating the identification of this visual content in Chronicling America newspaper pages. Drawing inspiration from the Beyond Words workflow, we extract corresponding textual content such as headlines and captions by identifying text from the METS/ALTO OCR that falls within each predicted bounding box. This method is effective at captioning because Beyond Words volunteers were asked to include captions and relevant textual content within their bounding box annotations. Lastly, in order to enable fast similarity querying for search and recommendation tasks, we generate image embeddings for the extracted visual content using ResNet-18 and ResNet-50 models pre-trained on ImageNet. This resulting dataset, which we call the Newspaper Navigator dataset, is the largest collection of extracted visual content from historic newspapers ever produced.
Our contributions are as follows:
We present a publicly available pipeline for extracting visual and textual content from historic newspaper pages, designed to run at scale over terabytes of image data. Visual content categories include headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements.
We release into the public domain a finetuned Faster-RCNN model for this task that achieves 63.4% bounding box mean average precision (mAP) on a validation set of World War 1-era Chronicling America pages. (Mean average precision is the standard metric used for benchmarking object detection models, incorporating intersection over union to assess precision and recall. We describe the metric in more detail in Section 6.)
We present the Newspaper Navigator dataset, a new public dataset of extracted headlines and visual content, as well as corresponding textual content such as titles and captions, produced by running the pipeline over 16.3 million historic newspaper pages in Chronicling America. This corpus represents the largest dataset of its kind ever produced.
3 Related Work
3.1 Corpora & Datasets
Over the past 15 years, efforts across the world to digitize historic newspapers have been remarkably successful [digitalturn]. In addition to Chronicling America, examples of large repositories of digitized newspapers include Trove [trove], Europeana [pekarek_europeana_2012, willems_europeana_2015], Delpher [delpher], The British Newspaper Archive [british], OurDigitalWorld [ourdigitalworld], Papers Past [paperspast], NewspaperSG [newspapersg], newspapers.com [newspapersdotcom] and Google Newspaper Search [chaudhury_google_2009]
. The availability of newspapers at the scale of millions of digitized pages has inspired the construction of datasets for supervised learning tasks related to digitized newspapers. In addition to Beyond Words, datasets for historic newspaper recognition include the National Library of Luxembourg’s historic newspaper datasets [BnL] that include segmented articles and advertisements; CHRONIC, a dataset of 452,543 images in historic Dutch newspapers [chronic]; and Europeana’s SIAMESET, a dataset of 426,777 advertisements in historic Dutch newspapers [siameset]
. Datasets for machine learning tasks with historical documents include READ-BAD[readbad] and DIVA-HisDB [diva-hisdb]. However, all of these datasets are designed to serve as training sets rather than as comprehensive datasets of extracted content from full corpora. Our work instead seeks to use the Beyond Words dataset to train a visual content recognition model in order to process the visual content in the Chronicling America corpus comprising 16+ million historic newspaper pages.
3.2 Visual Content Extraction
Other researchers have built tools and pipelines for extracting and analyzing visual content from historic documents, including newspapers, using deep learning. (For approaches to historic document classification that do not utilize deep learning, see for example [lee_ITS].) PageNet utilizes a Fully Convolutional Network for pixel-wise page boundary extraction for historic documents [pagenet]. dhSegment is a deep learning framework for historical document processing, including pixel-wise segmentation and extraction tasks [dhsegment]. Liebl and Burghardt benchmarked 11 different deep learning backbones for the pixel-wise segmentation of historic newspapers, including the separation of layout features such as text and tables [liebl2020evaluation]. The AIDA collaboration at the University of Nebraska-Lincoln has applied deep learning techniques to newspaper corpora including Chronicling America and the Burney Collection of British Newspapers [lorang_patterns, lorang_application, lorang_using] for tasks such as poetic content recognition [DLIB_AIDA, AAAI_AIDA], as well as visual content recognition using dhSegment [UNL_report]. Instead of a pixel-wise approach, we utilize bounding boxes, resulting in higher performance. In addition, our pipeline recognizes 7 different classes of visual content (headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements), extracts corresponding OCR, and generates image embeddings. Lastly, we deploy our visual content recognition pipeline at scale.
3.3 Article Disambiguation
Article disambiguation for historic newspaper pages has long been of interest to researchers. Groups that have studied this task include the IMPRESSO project [impresso], the NewsEye project [newseye], and Google Newspaper Search [chaudhury_google_2009]. (Related work has focused on content segmentation for books [bamman_books].) Of particular note is the approach taken by Google Newspaper Search, which extracted headline blocks using OCR font size and area-perimeter ratio as features and utilized the extracted headlines to segment each page into individual articles [chaudhury_google_2009]; to our knowledge, the extraction and classification of visual content was outside the scope of that project. We, too, focus on headline extraction because it serves as its own form of article disambiguation. However, unlike previous approaches, we treat headline extraction as a visual task at the image level, rather than a textual task at the OCR level. Our novel approach is to leverage the visual distinctiveness of headlines and train a model to predict bounding boxes around headlines on the page. The headline text within each bounding box is then extracted from the underlying METS/ALTO OCR.
Lastly, it should be noted that proper article disambiguation requires the ability to filter out text from advertisements, given their ubiquity on newspaper pages. As with headlines, we treat advertisement identification as a visual task rather than a textual task because advertisements are most naturally identified by their visual features. Because our visual content recognition model robustly identifies advertisements, we are able to disambiguate newspaper text from advertisement text.
3.4 Image Embeddings for Cultural Heritage Collections
In recent years, researchers have utilized image embeddings for visualizing and exploring visual content in cultural heritage collections. The Yale Digital Humanities Lab’s PixPlot interface [pixplot] and the National Neighbors project [nationalneighbors] utilize Inception v3 embeddings [inceptionv3]. Google Arts & Culture’s t-SNE Map utilizes embeddings produced by the Google search pipeline [tsnemap]. The Norwegian National Museum’s Principal Components project [principalcomponents]
uses finetuned Caffe image embeddings [caffe]. Olivia Vane utilizes VGG-16 embeddings to visualize the Royal Photographic Society Collection [vane]. Likewise, Brian Foo has created a visualization of The American Museum of Natural History’s image collection [amnh] using VGG-16 embeddings [vgg]. Refik Anadol uses embeddings to visualize the SALT Research collection [anadol]. Regarding visual content in historic newspapers in particular, Wevers and Smits have utilized Inception v3 embeddings to perform large-scale analysis of the CHRONIC and SIAMESET datasets derived from historic Dutch newspapers [visualturn]. Their work includes the deployment of SIAMESE, a recommender system for advertisements in historic newspapers, as well as an analysis of training a new classification layer on top of the Inception embeddings to predict according to custom categories [visualturn].
Indeed, in addition to supporting visualizations of latent spaces that capture semantic similarity, image embeddings are desirable for visual search and recommendation tasks due to the ability to perform fast similarity querying with them. Using ResNet-18 and ResNet-50 [resnet] models pre-trained on ImageNet, we generate image embeddings for the extracted visual content, which are included in the Newspaper Navigator dataset in order to support a range of visual search and recommendation tasks for the Chronicling America corpus.
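As a sketch of why embeddings enable fast similarity querying, the following ranks stored vectors by cosine similarity to a query vector. The toy 3-dimensional vectors stand in for the dataset's 512-dimensional ResNet-18 embeddings, which are stored as plain lists of floats:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, embeddings, top_k=3):
    """Rank stored embeddings by descending similarity to a query embedding."""
    scored = [(i, cosine_similarity(query, e)) for i, e in enumerate(embeddings)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy 3-dimensional stand-ins for the 512-dimensional ResNet-18 embeddings:
embeddings = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(most_similar(query, embeddings, top_k=2))
```

In practice, a nearest-neighbor index (rather than the brute-force scan above) would be built over the embeddings, which is why the lower-dimensional ResNet-18 vectors are attractive for interactive search.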
All code discussed in this paper can be found in the public GitHub repository https://github.com/LibraryOfCongress/newspaper-navigator
and is open source, placed in the public domain for unrestricted re-use. In addition, included in the repository are the finetuned visual content recognition model, the training set on which the model was finetuned, a Jupyter notebook for experimenting with the visual content recognition model, and a slideshow of predictions.
5 Constructing the Training Set
5.1 Repurposing the Beyond Words Annotations
To create a training set for our visual content recognition model, we repurposed the publicly available annotations of photographs, illustrations, maps, comics, and editorial cartoons derived from Beyond Words, a crowdsourcing initiative launched by the Library of Congress to engage the American public with the visual content in World War 1-era newspapers in Chronicling America. The Beyond Words platform itself was built using Scribe [scribe]. The crowdsourcing workflow consisted of three different tasks that volunteers could choose to perform:
Mark, in which users were asked to “draw a rectangle around each unmarked illustration or photograph excluding those in advertisements [and] enclose any caption or text describing the picture and the illustrator or photographer” [mark].
Transcribe, in which users were asked to correct the OCR of the caption for each marked illustration or photograph, transcribe the author’s name, and note the category (“Editorial Cartoon,” “Comics/Cartoon,” “Illustration,” “Photograph,” or “Map”) [transcribe].
Verify, in which users were asked to select the transcription of another volunteer that most closely matches the printed caption. Users were also able to filter out bad regions or provide their own transcriptions in the event that neither transcription was of good quality [verify].
Up to 6 different individuals may have interacted with each annotation during this process. Verification required at least 51% agreement among volunteers at the Transcribe and Verify steps.
In order to finetune the visual content recognition model, it was first necessary to reformat the crowdsourced Beyond Words annotations into a proper data format for training a deep learning model. We chose the Common Objects in Context (COCO) dataset format [coco], a standard data format for object detection, segmentation, and captioning tasks adopted by Facebook AI Research’s Detectron2 deep learning platform for object detection [detectron2].
The verified Beyond Words annotations used as training data were downloaded from the Beyond Words public website on December 1, 2019. To convert the JSON file available for download into a deep learning training set, we wrote a Python script to pull down the Chronicling America newspaper images utilized by Beyond Words and format the annotations according to the COCO standard. The script is available in the Newspaper Navigator GitHub repository.
We reiterate that the instructions for the “Mark” step asked users to “enclose any caption or text describing the picture and the illustrator or photographer” [mark]; therefore, a model trained on these annotations learns to include relevant text within the bounding boxes for visual content, which can then be extracted from the corresponding METS/ALTO OCR in an automated fashion.
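As a rough sketch of this conversion, the following maps a single hypothetical Beyond Words-style region annotation into the COCO format; the Beyond Words field names here are assumptions for illustration, not the actual JSON schema:

```python
import json

# Hypothetical Beyond Words-style annotation: a region plus a category label.
beyond_words = [
    {"image_url": "https://chroniclingamerica.loc.gov/.../seq-1.jpg",
     "width": 5000, "height": 6500,
     "region": {"x": 100, "y": 200, "width": 800, "height": 600},
     "category": "Photograph"},
]

# The five categories marked by Beyond Words volunteers (our augmented
# training set adds headlines and advertisements on top of these).
CATEGORIES = ["Photograph", "Illustration", "Map", "Comics/Cartoon",
              "Editorial Cartoon"]

coco = {"images": [], "annotations": [],
        "categories": [{"id": i, "name": name}
                       for i, name in enumerate(CATEGORIES)]}

for ann_id, ann in enumerate(beyond_words):
    image_id = len(coco["images"])
    coco["images"].append({"id": image_id, "file_name": ann["image_url"],
                           "width": ann["width"], "height": ann["height"]})
    r = ann["region"]
    coco["annotations"].append({
        "id": ann_id, "image_id": image_id,
        "category_id": CATEGORIES.index(ann["category"]),
        # COCO boxes are [x, y, width, height] in absolute pixel coordinates:
        "bbox": [r["x"], r["y"], r["width"], r["height"]],
        "area": r["width"] * r["height"],
        "iscrowd": 0})

print(json.dumps(coco)[:60])
```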
5.2 Adding Annotations
Because headlines and advertisements were not included in the Beyond Words workflow, we added annotations for headlines and advertisements for all images in the dataset. These annotations are not verified, as each page was annotated by only one person. In addition, due to the low number of annotated maps in the Beyond Words data (79 in total), we added annotations of 122 pages containing maps, which were retrieved by performing a keyword search of “map” on the Chronicling America search portal restricted to the years 1914-1918. We then downloaded the pages on which we identified maps, and we annotated all 7 categories of visual content on each page. Like the headline and advertisement annotations, these annotations are not verified.
5.3 Training Set Statistics
The augmented Beyond Words dataset in COCO format can be found in the Newspaper Navigator repository and is available for unrestricted re-use in the public domain. The dataset contains World War 1-era Chronicling America pages with annotations in total. The category breakdown of annotations appears in Table 1.
[Table 1: Training/Validation Set Statistics — number of annotations per category]
6 Training the Visual Content Recognition Model
To train a visual content recognition model for identifying the 7 classes of different newspaper content, we chose to finetune a pre-trained Faster-RCNN object detection model from Detectron2’s Model Zoo using Detectron2 [detectron2]
and PyTorch [pytorch]. Because model inference was the bottleneck on runtime in our pipeline, we chose the Faster-RCNN R50-FPN backbone, the fastest such backbone by inference time. Though we could have utilized the highest performing Faster-RCNN backbone, which achieved approximately 5% higher mean average precision on the COCO [coco] pre-training task at the expense of 2.5x the inference time, qualitative evaluation of predictions with the finetuned R50-FPN backbone indicated that the model was performing sufficiently for our purposes. Furthermore, we conjecture that the performance of our visual content recognition model is limited by noise in the training data, rather than by model architecture and selection, for two reasons. First, the ground-truth Beyond Words labels were not complete because volunteers were only required to draw one bounding box per page (though more could be added). Second, there was non-trivial disagreement between Beyond Words annotators for the bounding box marking task due to the heterogeneity of visual content layouts and the resulting ambiguities in the annotation task. (In regard to the accuracy of the annotations, it is worth noting that Beyond Words was launched as an experiment; consequently, there were no interventions in workflow or community management after its launch, and the accuracy of the resulting annotations should be assessed accordingly.)
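For illustration, finetuning a Model Zoo Faster-RCNN backbone on a COCO-formatted dataset follows the pattern below. The dataset names and paths are placeholders, and this sketch omits the learning-rate, batch-size, and early-stopping settings we actually used:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the COCO-formatted training and validation sets (paths are placeholders).
register_coco_instances("beyond_words_train", {}, "annotations/train.json", "images/")
register_coco_instances("beyond_words_val", {}, "annotations/val.json", "images/")

cfg = get_cfg()
# Start from the pre-trained Faster-RCNN R50-FPN backbone in the Model Zoo.
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("beyond_words_train",)
cfg.DATASETS.TEST = ("beyond_words_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 7  # the 7 classes of newspaper visual content

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```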
[Table 2: Per-category average precision (AP) and number of instances in the validation set]
All finetuning was performed using PyTorch [pytorch]
on a g4dn.2xlarge Amazon EC2 instance with a single NVIDIA T4 GPU. Finetuning the R50-FPN backbone was evaluated on a held-out validation set according to an 80%-20% split; the JSON files containing the training and validation splits are available for download with the GitHub repository. We used the following hyperparameters: a base learning rate of, a batch size of , and proposals per image. RESIZE_SHORTEST_EDGE and RANDOM_FLIP were utilized as data augmentation techniques. (These are the only two data augmentation methods currently supported by Detectron2.)
Using early stopping, the model was finetuned for 77 epochs, which required 17 hours of runtime on the NVIDIA T4 GPU. The model weights file is publicly available and can be found in the GitHub repository for this project.
We report a mean average precision on the validation set of 63.4%; average precision for each category, as well as the number of validation instances in each category, can be found in Table 2
. We chose average precision as our evaluation metric because it is the standard metric utilized in the computer vision community for benchmarking object detection tasks. Given a fixed intersection over union (IoU) threshold to evaluate whether a prediction is correct, average precision is computed by sorting all classifications according to prediction score, generating the corresponding precision-recall curve, and modifying it by drawing the smallest-area curve containing it that is also monotonically decreasing. According to the COCO standard, average precision is then computed by averaging the precision interpolated over 101 different recall values and 10 IoU thresholds from 50% to 95%. For our calculations, we utilized all predictions with confidence scores greater than 0.05 and discarded predictions with confidence scores below this threshold. (A confidence score of 0.05 is the default threshold cut for retaining predictions in Detectron2.)
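The AP computation just described can be sketched in pure Python for a single IoU threshold. Matching predictions to ground-truth boxes via IoU is assumed to have happened already, and the toy detections below are illustrative:

```python
def average_precision(detections, num_ground_truth):
    """COCO-style 101-point interpolated AP at a single IoU threshold.

    `detections` is a list of (confidence_score, is_true_positive) pairs;
    IoU matching against ground truth is assumed to be done already.
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    precisions, recalls = [], []
    tp = 0
    for i, (_, is_tp) in enumerate(detections, start=1):
        tp += is_tp
        precisions.append(tp / i)
        recalls.append(tp / num_ground_truth)
    # Replace the precision-recall curve with its smallest monotonically
    # decreasing envelope:
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Average the interpolated precision over 101 evenly spaced recall values:
    total = 0.0
    for r in (i / 100 for i in range(101)):
        total += max((prec for prec, rec in zip(precisions, recalls) if rec >= r),
                     default=0.0)
    return total / 101

# Three predictions, two of which match one of the 2 ground-truth boxes:
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)],
                       num_ground_truth=2)
```

The full COCO metric then averages this quantity over the 10 IoU thresholds from 50% to 95% and over all categories.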
7 The Pipeline
7.1 Building the Manifest
In order to create a full index of digitized pages for the pipeline to process, we used a forked version of the AIDA collaboration’s chronam-get-images repository (https://github.com/bcglee/chronam-get-images) to generate a manifest for each newspaper batch consisting of filepaths for each page in the batch. (More information on the batches can be found at https://chroniclingamerica.loc.gov/batches.) Manifests consisting of 16,368,424 Chronicling America pages were compiled in total on March 17, 2020.
7.2 Steps of the Pipeline
In Figure 3, we present a diagram showing the pipeline workflow. Each manifest was processed in series by our pipeline. The pipeline code consists of six distinct steps:
Downloading the image and METS/ALTO XML for each page and downsampling the image by a factor of 6 to produce a lower resolution JPEG. Downsampling was performed to reduce I/O and memory consumption, as well as to avoid the overhead introduced by the downsampling that Detectron2 would have to perform before each forward pass during model inference. This step was run in parallel across all 48 CPU cores on each EC2 instance. The files were pulled down directly from the Library of Congress’s public AWS S3 buckets.
Running the visual content recognition model inference on each image to produce bounding box predictions complete with coordinates, predicted classes, and confidence scores. This step was run in parallel across all 4 GPUs on each EC2 instance. Predictions with confidence scores greater than 0.05 were saved. We chose to save predictions with low confidence scores in order to allow a user to select a threshold cut based on the user’s ideal tradeoff between precision and recall.
Extracting the OCR within each predicted bounding box. This step required parsing the METS/ALTO XML and was run in parallel across all 48 CPU cores on each EC2 instance.
Cropping and saving the extracted visual content as downsampled JPEGs (for all classes other than headlines). This step was run in parallel across all 48 CPU cores on each EC2 instance.
Generating ResNet-18 and ResNet-50 embeddings for the cropped and saved images with confidence scores of greater than or equal to 0.5. This step was implemented using a forked version of img2vec (https://github.com/bcglee/img2vec) [img2vec_2019]. This step was run in parallel across all 4 GPUs on each EC2 instance. The ResNet-18 and ResNet-50 embeddings were extracted from the penultimate layer of each respective architecture after being trained on ImageNet (the models themselves were downloaded from torchvision.models in PyTorch [pytorch]). The 2,048-dimensional ResNet-50 embeddings were selected due to ResNet-50’s high performance and fast inference time relative to other image recognition models [benchmark]. The 512-dimensional ResNet-18 embeddings were also generated due to their lower dimensionality, enabling faster computation for search and recommendation tasks.
Saving the extracted metadata and cropped images. The format of the saved metadata is described in the next section.
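The OCR extraction step above can be sketched as follows. The ALTO snippet, namespace version, and coordinates are illustrative (real Chronicling America files are far larger, and coordinates must also be rescaled to match the downsampled images); here, a word is kept if its center falls inside the predicted box:

```python
import xml.etree.ElementTree as ET

ALTO_NS = "{http://www.loc.gov/standards/alto/ns-v2#}"

# Minimal illustrative ALTO snippet with three OCR'd words.
alto_xml = """<alto xmlns="http://www.loc.gov/standards/alto/ns-v2#">
  <Layout><Page><PrintSpace>
    <TextLine>
      <String CONTENT="CIVIL" HPOS="100" VPOS="100" WIDTH="50" HEIGHT="20"/>
      <String CONTENT="WAR" HPOS="160" VPOS="100" WIDTH="40" HEIGHT="20"/>
      <String CONTENT="NEWS" HPOS="900" VPOS="900" WIDTH="60" HEIGHT="20"/>
    </TextLine>
  </PrintSpace></Page></Layout>
</alto>"""

def ocr_within_box(alto, box):
    """Return OCR words whose centers fall inside a predicted (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    words = []
    for s in ET.fromstring(alto).iter(ALTO_NS + "String"):
        cx = float(s.get("HPOS")) + float(s.get("WIDTH")) / 2
        cy = float(s.get("VPOS")) + float(s.get("HEIGHT")) / 2
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            words.append(s.get("CONTENT"))
    return words

print(ocr_within_box(alto_xml, (50, 50, 400, 400)))  # → ['CIVIL', 'WAR']
```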
7.3 Running the Pipeline at Scale
All pipeline processing was performed on 2 g4dn.12xlarge Amazon EC2 instances, each with 48 Intel Cascade Lake vCPUs and 4 NVIDIA T4 GPUs. All pipeline code was written in Python 3. In total, the pipeline successfully processed 16,368,041 pages in 19 days of wall-clock time. The manifests of pages that were successfully processed, as well as the 383 pages that failed, can be found in the Newspaper Navigator GitHub repository.
[Table 3: Newspaper Navigator Dataset Statistics — counts per category at confidence score thresholds of 0.5, 0.7, and 0.9]
8 The Newspaper Navigator Dataset
8.1 Statistics & Visualizations
A statistical breakdown of extracted content in the Newspaper Navigator dataset is presented in Table 3. Because the choice of threshold cut on confidence score affects the amount of visual content included in the Newspaper Navigator dataset, we include statistics for three different threshold cuts of 0.5, 0.7, and 0.9.
In Figure 4, we show visualizations of the number of photographs, illustrations, maps, comics, editorial cartoons, headlines, and advertisements in the Newspaper Navigator dataset according to year of publication. These visualizations show the average number of appearances per page of each of the seven classes over time, as well as the average fraction of the page covered by each of the seven classes over time. As in Table 3, we show three different threshold cuts: in each plot, the middle line corresponds to a cut of 0.7 on confidence score, and the upper and lower bounds of the band in light blue correspond to cuts of 0.5 and 0.9, respectively. Note that the y-axis scales vary per category in both plots. The fraction of each page covered is included because it is a more consistent metric for complicated visual content layouts (such as photo montages and classified ads): predicted bounding boxes can vary greatly in number while still remaining correct and covering the same regions in aggregate. With these visualizations, we can observe trends such as the rise of photographs at the turn of the 20th century and the gradual increase in the amount of page space covered by headlines from 1880 to 1920.
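Aggregations like those underlying Figure 4 can be sketched as follows; the page records, and the assumption that class 0 denotes photographs, are illustrative stand-ins for the dataset's metadata:

```python
from collections import defaultdict

# Hypothetical per-page records: publication year, predicted classes, scores.
pages = [
    {"year": 1905, "pred_classes": [0, 0, 5], "scores": [0.95, 0.6, 0.8]},
    {"year": 1905, "pred_classes": [0],       "scores": [0.55]},
    {"year": 1910, "pred_classes": [6, 6],    "scores": [0.92, 0.4]},
]

def avg_per_page(pages, target_class, threshold):
    """Average appearances per page of one class, above a confidence cut."""
    counts = defaultdict(lambda: [0, 0])  # year -> [total appearances, total pages]
    for page in pages:
        n = sum(1 for c, s in zip(page["pred_classes"], page["scores"])
                if c == target_class and s >= threshold)
        counts[page["year"]][0] += n
        counts[page["year"]][1] += 1
    return {year: total / n_pages
            for year, (total, n_pages) in sorted(counts.items())}

# Photographs (assumed to be class 0) per page at two threshold cuts:
print(avg_per_page(pages, target_class=0, threshold=0.5))  # {1905: 1.5, 1910: 0.0}
print(avg_per_page(pages, target_class=0, threshold=0.9))  # {1905: 0.5, 1910: 0.0}
```

Repeating this for each class and threshold yields the three curves per category shown in the plots.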
To demonstrate questions that we can begin to answer with the Newspaper Navigator dataset, we have included Figure 5, a visualization showing maps of the Civil War identified by searching the visual content for all 278,094 pages published between 1861 and 1865 in the dataset. (The visualization was created using [collagemaker].) From this collection alone, researchers can study Civil War battles, the history of cartography, differences in print trends for northern and southern newspapers, and map reproduction patterns (“virality”).
8.2 Dataset Access and Format
The Newspaper Navigator dataset can be accessed via the Newspaper Navigator GitHub repository. We introduce the data format below, but more detailed instructions for use can be found in the repository. For each processed page, an associated JSON file contains the following metadata:
filepath [str]: the path to the image, relative to the Chronicling America file structure (see, for example, https://chroniclingamerica.loc.gov/data/batches/).
pub_date [str]: the publication date of the page, in the format YYYYMMDD.
boxes [list:list]: a list containing the normalized coordinates of predicted boxes, indexed according to [x1, y1, x2, y2], where (x1, y1) is the top-left corner of the box relative to the standard image origin, and (x2, y2) is the bottom-right corner.
scores [list:float]: a list containing the confidence score associated with each box (only predicted boxes with a confidence score greater than 0.05 were retained).
pred_classes [list:int]: a list containing the predicted class for each box using the following mapping of integers to classes:
0 Photograph
1 Illustration
2 Map
3 Comics/Cartoon
4 Editorial Cartoon
5 Headline
6 Advertisement
ocr [list:str]: a list containing the OCR of white space-separated strings identified within each box.
visual_content_filepaths [list:str]: a list containing the filepath for each cropped image (except headlines, which were not cropped and saved).
Another JSON file with the same file name with the suffix “_embeddings” includes the image embeddings in the following format; any prediction with a confidence score of less than 0.5 does not have a corresponding embedding:
resnet_50_embeddings [list:list]: a list containing the 2,048-dimensional ResNet-50 embedding for each image (except headlines, for which embeddings were not generated).
resnet_18_embeddings [list:list]: a list containing the 512-dimensional ResNet-18 embedding for each image (except headlines, for which embeddings were not generated).
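A minimal sketch of consuming this metadata: load a page's JSON and apply a user-chosen threshold cut on confidence score. The filepath and values below are illustrative, not taken from the dataset:

```python
import json

# Illustrative page metadata in the format described above.
page_json = json.loads("""{
  "filepath": "dlc/batch_dlc_alpha_ver01/data/sn12345678/0001.jp2",
  "pub_date": "19180704",
  "boxes": [[0.1, 0.05, 0.4, 0.2], [0.5, 0.5, 0.9, 0.95]],
  "scores": [0.97, 0.31],
  "pred_classes": [5, 6],
  "ocr": [["LOCAL", "HEADLINES"], ["SALE", "TODAY"]]
}""")

def filter_predictions(page, threshold):
    """Keep only predictions at or above the chosen confidence threshold."""
    keep = [i for i, s in enumerate(page["scores"]) if s >= threshold]
    return [{"box": page["boxes"][i], "score": page["scores"][i],
             "pred_class": page["pred_classes"][i], "ocr": page["ocr"][i]}
            for i in keep]

high_confidence = filter_predictions(page_json, threshold=0.9)
print(high_confidence)  # only the higher-confidence prediction survives
```

Because low-confidence predictions are retained in the dataset, each user can pick the threshold matching their preferred precision-recall tradeoff.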
8.3 Pre-packaged Datasets
In order to make the Newspaper Navigator dataset accessible to those without coding experience, we have also packaged smaller datasets derived from the Newspaper Navigator dataset that can be downloaded in bulk from our GitHub repository. These derivative datasets are grouped geographically and temporally and cover both visual content and textual content (machine-readable headlines, captions of visual content, etc.). One such example is the collection of Civil War maps shown in Figure 5. We will continue to add derivative datasets as Newspaper Navigator evolves.
[Table 4: Performance for 19th Century Newspaper Pages — per-category AP for 1850-1875 and 1875-1900]
9.1 Generalization to 19th Century Newspaper Pages
Given that the visual content recognition model has been trained on World War 1-era newspapers, it is natural to question the generalization ability of the model to 19th century newspapers. Though Figure 4 reveals trends consistent with intuition, such as the emergence of photographs in historic newspapers at the turn of the 20th century, it is still worthwhile to quantify generalization. To do so, we randomly selected 500 newspaper pages from 1850 to 1875 and 500 newspaper pages from 1875 to 1900 and annotated these pages. In Table 4, we present the average precision for headlines, advertisements, and illustrations in the test sets using our annotations as the ground truth (other classes were omitted due to their rarity in the annotated pages). Comparing the results in Table 4 to the results on the validation data in Table 2, we observe a moderate dropoff in performance for pages published between 1875 and 1900, as well as a more major dropoff in performance for pages published between 1850 and 1875. However, the extracted visual content from the pre-1875 pages in the Newspaper Navigator dataset is still of sufficient quality to enable novel analysis, as evidenced by the extracted collection of Civil War maps shown in Figure 5.
9.2 Partnering with Volunteer Crowdsourcing Initiatives
Our work is also a case study in partnering machine learning projects with volunteer crowdsourcing initiatives, a promising paradigm in which annotators are volunteers who learn about a new topic by participating. With the growing efforts of cultural heritage crowdsourcing initiatives such as the Library of Congress’s By the People [bythepeople], Smithsonian’s Digital Volunteers [smithsonian], the United States Holocaust Memorial Museum’s History Unfolded [historyunfolded], Zooniverse [zooniverse], the New York Public Library’s Emigrant City [emigrantcity], The British Library’s LibCrowds [libcrowds], The Living with Machines project [livingwithmachines], and Trove’s newspaper crowdsourcing initiative [trove_crowd], there are more opportunities than ever to utilize crowdsourced data for machine learning tasks relevant to cultural heritage, from handwriting recognition to botany taxonomic classification [botany]. These partnerships also have the potential to provide insight into project design, decisions, workflows, and the context of the materials for which crowdsourcing contributions are sought. Along with Dieleman et al.’s work training a neural network to classify galaxies using crowdsourced data from GalaxyZoo [galaxyzoo], we hope that our project encourages more machine learning researchers to partner with volunteer crowdsourcing projects, especially to study topics pertinent to cultural heritage.
10 Conclusion
In this paper, we have described our pipeline for extracting, categorizing, and captioning visual content, including headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements in historic newspapers. We present the Newspaper Navigator dataset, a dataset of these 7 types of extracted visual content from 16.3 million pages from Chronicling America. This is the largest dataset of its kind ever produced. In addition to releasing the Newspaper Navigator dataset, we have released our visual content recognition model for historic newspapers, as well as a new training dataset for this task based on annotations from Beyond Words, the Library of Congress Labs’s crowdsourcing initiative for annotating and captioning visual content in World War 1-era newspapers in Chronicling America. All code has been placed in the public domain for unrestricted re-use.
11 Future Work
Future work on the pipeline itself includes improving the visual content recognition model’s generalization ability for pre-20th century newspaper pages, especially for the 10.4% of the pages in the Newspaper Navigator dataset published before 1875. This could be accomplished by finetuning on a more diverse training set, which could be constructed by partnering with another volunteer crowdsourcing initiative such as the Living with Machines project [livingwithmachines]. One could also imagine training an ensemble of visual content recognition models on different date ranges. Given that only 10.4% of pages in the Newspaper Navigator dataset were published before 1875, it is straightforward to re-run the pipeline with an improved visual content recognition model on these pages (this simply requires replacing the model weights file and filtering the pages for processing by date range).
To improve the textual content extracted from the OCR, future work includes training an NLP pipeline to correct systematic OCR errors. In the second step of the Beyond Words pipeline, volunteers were asked to correct or enter the OCR that appears over each marked bounding box, resulting in approximately 10,000 corrected textual annotations to date. It is straightforward to construct training pairs of input and output in order to train a supervised model to correct OCR. Other approaches to OCR postprocessing include utilizing existing post-hoc OCR correction pipelines [impresso_ocr, nguyen_2019_b, nguyen_2019_a, datamunging], all of which could be benchmarked on the aforementioned Beyond Words training pairs.
The future work that excites us most, however, consists of the many ways that the Newspaper Navigator dataset can be used. Our immediate future work consists of building a new search user interface called Newspaper Navigator that will be user tested in order to evaluate new methods of exploratory search. However, future work also includes investigating a range of digital humanities questions. For example, the Viral Texts [vt5, vt6, vt3, vt1, vt4, vt2] and Oceanic Exchanges [oceanicexchanges] projects have studied text reproduction patterns in 19th century newspapers, including newspapers in Chronicling America; the Newspaper Navigator dataset allows us to study photograph reproduction in 20th century newspapers. In addition, using the headlines in Newspaper Navigator, we can study which news cycles appeared in different regions of the United States at different times. These examples are just a few of many that we hope will be examined with the Newspaper Navigator dataset. We hope to inspire a wide range of digital humanities, public humanities, and creative computing projects.
The authors would like to thank Laurie Allen, Leah Weinryb Grohsgal, Abbey Potter, Robin Butterhof, Tong Wang, Mark Sweeney, and the entire National Digital Newspaper Program staff at the Library of Congress; Molly Hardy at the National Endowment for the Humanities; Stephen Portillo, Daniel Gordon, and Tim Dettmers at the University of Washington; Michael Haley Goldman, Eric Schmalz, and Elliott Wrenn at the United States Holocaust Memorial Museum; and Gabriel Pizzorno at Harvard University for their invaluable advice with this project. Lastly, the authors would like to thank everyone who has contributed to Chronicling America and Beyond Words, without whom none of this work would be possible.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant DGE-1762114, the Library of Congress Innovator-in-Residence Position, and the WRF/Cable Professorship.