
Can Common Crawl reliably track persistent identifier (PID) use over time?

by Henry S. Thompson et al.

We report here on the results of two studies, using respectively two and four monthly web crawls from the Common Crawl (CC) initiative between 2014 and 2017, whose initial goal was to provide empirical evidence for changing patterns of use of so-called persistent identifiers. This paper focusses on the tooling needed for working with CC data, and the problems we found with it. The first study is based on over 10^12 URIs from over 5 * 10^9 pages crawled in April 2014 and April 2017; the second study adds a further 3 * 10^9 pages from the April 2015 and April 2016 crawls. We conclude with suggestions for specific actions needed to enable studies based on CC to give reliable longitudinal information.
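A study of this kind must classify each crawled URI by persistent-identifier scheme before any longitudinal tallying can happen. The following is a minimal sketch of that per-URI classification step; the scheme names and URL patterns here are illustrative assumptions, not the paper's actual rules.

```python
import re
from collections import Counter

# Illustrative PID scheme patterns (assumed, not taken from the paper):
# DOI resolver, Handle resolver, and PURL hosts.
PID_PATTERNS = {
    "doi": re.compile(r"^https?://(dx\.)?doi\.org/10\.\d+/"),
    "handle": re.compile(r"^https?://hdl\.handle\.net/"),
    "purl": re.compile(r"^https?://purl\.org/"),
}

def classify(uri):
    """Return the PID scheme a URI matches, or 'other'."""
    for scheme, pattern in PID_PATTERNS.items():
        if pattern.match(uri):
            return scheme
    return "other"

def count_pid_uses(uris):
    """Tally PID scheme usage over an iterable of URIs."""
    return Counter(classify(u) for u in uris)

sample = [
    "https://doi.org/10.1000/182",
    "http://hdl.handle.net/2027/xyz",
    "https://example.com/page",
]
print(count_pid_uses(sample))
# Counter({'doi': 1, 'handle': 1, 'other': 1})
```

Running the same tally over crawls from different years, as the two studies do, would then give per-scheme counts that can be compared longitudinally, subject to the reliability caveats the paper raises.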



