
The Newspaper Navigator Dataset: Extracting And Analyzing Visual Content from 16 Million Historic Newspaper Pages in Chronicling America

05/04/2020
by Benjamin Charles Germain Lee, et al.
University of Washington

Chronicling America is a product of the National Digital Newspaper Program, a partnership between the Library of Congress and the National Endowment for the Humanities to digitize historic newspapers. Over 16 million pages of historic American newspapers have been digitized for Chronicling America to date, complete with high-resolution images and machine-readable METS/ALTO OCR. Of considerable interest to Chronicling America users is a semantified corpus, complete with extracted visual content and headlines. To accomplish this, we introduce a visual content recognition model trained on bounding box annotations of photographs, illustrations, maps, comics, and editorial cartoons collected as part of the Library of Congress's Beyond Words crowdsourcing initiative and augmented with additional annotations including those of headlines and advertisements. We describe our pipeline that utilizes this deep learning model to extract 7 classes of visual content: headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements, complete with textual content such as captions derived from the METS/ALTO OCR, as well as image embeddings for fast image similarity querying. We report the results of running the pipeline on 16.3 million pages from the Chronicling America corpus and describe the resulting Newspaper Navigator dataset, the largest dataset of extracted visual content from historic newspapers ever produced. The Newspaper Navigator dataset, finetuned visual content recognition model, and all source code are placed in the public domain for unrestricted re-use.
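The detection stage of the pipeline described above can be illustrated with a short sketch. The snippet below shows how a single digitized page might be run through a fine-tuned Faster R-CNN detector to recover bounding boxes for the seven content classes. It assumes a Detectron2-style setup; the checkpoint path (model_final.pth), page image name (page.jpg), score threshold, and class ordering are placeholders for illustration, not the authors' exact configuration.

import torch
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Illustrative class list covering the 7 categories named in the abstract;
# the ordering here is an assumption, not the released model's label map.
CLASSES = ["photograph", "illustration", "map", "comic",
           "editorial_cartoon", "headline", "advertisement"]

def build_predictor(weights_path, score_thresh=0.5):
    """Configure a Faster R-CNN predictor for the 7 visual-content classes."""
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(CLASSES)
    cfg.MODEL.WEIGHTS = weights_path                    # fine-tuned checkpoint (placeholder)
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = score_thresh
    cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    return DefaultPredictor(cfg)

def extract_visual_content(predictor, page_image_path):
    """Run detection on one newspaper page and return box/class/score records."""
    image = cv2.imread(page_image_path)
    instances = predictor(image)["instances"].to("cpu")
    records = []
    for box, cls, score in zip(instances.pred_boxes.tensor.tolist(),
                               instances.pred_classes.tolist(),
                               instances.scores.tolist()):
        records.append({
            "box": [round(v, 1) for v in box],          # [x1, y1, x2, y2] in pixels
            "class": CLASSES[cls],
            "score": round(score, 3),
        })
    return records

if __name__ == "__main__":
    predictor = build_predictor("model_final.pth")      # hypothetical local checkpoint
    for record in extract_visual_content(predictor, "page.jpg"):
        print(record)

In the full pipeline, the detected regions would additionally be cropped, paired with captions and other textual content taken from the METS/ALTO OCR overlapping each box, and embedded for fast image similarity querying; those downstream stages are omitted from this sketch.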

