Even though web archives hold billions of archived web pages, or mementos, obtaining a sample of mementos can be difficult. We describe the steps we took to create a data set of 16,627 mementos of 3,698 unique live web URIs (Uniform Resource Identifiers) from 17 public web archives. We use this collection in our study of identifying changes and transformations in the content of mementos over time (our preliminary work can be found in [6, 7, 8]).
To obtain a memento, lookup by URI-R (a URI-R identifies an original resource from the live web, as described in Section 2) is supported by most web archives, but it requires a user to know the URI of an original page. For instance, we expect to find mementos of well-known URI-Rs (e.g., www.cnn.com) in massive web archives, such as the Internet Archive (web.archive.org), as these archives try to capture the entire web by employing large-scale web crawlers. Other web archives focus on preserving special collections. For instance, the UK Web Archive (webarchive.org.uk/ukwa/) was established with the objective of archiving only UK websites (e.g., www.parliament.uk/). Other web archives, such as perma.cc, webcitation.org, and archive.is, capture web pages on demand, so they only preserve pages submitted by users rather than pages discovered by crawling the web. Table 1 shows a list of 17 public web archives:
General: Archives preserve any web page discovered through large-scale web crawlers.
On-demand: In general, only web pages (URIs) submitted by users are captured, but the archive might also create archived collections or obtain a copy of collections captured by other archives.
National: Archives preserve a government’s or country’s web content. They might capture web pages with one or more specific top-level domains (TLDs).
Organizational: Archives preserve web pages that are about specific organizations, such as the European Union.
| Archive URI | Archive Name | Purpose |
| swap.stanford.edu | Stanford Web Archive Portal | General |
| web.archive.org | The Internet Archive | General and on-demand |
| archive.bibalex.org | Bibliotheca Alexandrina’s Internet Archive | National |
| arquivo.pt | The Portuguese Web Archive (PWA) | National |
| collectionscanada.gc.ca | Library and Archives Canada | National |
| digar.ee | The Estonian Web Archive | National |
| nationalarchives.gov.uk | The National Archives | National |
| vefsafn.is | The Icelandic Web Archive | National |
| webarchive.loc.gov | Library of Congress Web Archives | National |
| webarchive.org.uk | The UK Web Archive (UKWA) | National |
| webarchive.proni.gov.uk | Public Record Office of Northern Ireland (PRONI) | National |
| webharvest.gov | Congressional & Federal Government Web Harvests | National |
| archive-it.org | Archive-It - Web Archiving Services for Libraries and Archives | On-demand |
| archive.is | archive.today | On-demand |
| perma.cc | Perma.cc | On-demand |
| webcitation.org | WebCite | On-demand |
| europarchive.org | The European Archive | Organizational |
Table 2 shows the actual number of collected mementos, denoted by URI-M (a URI-M identifies an archived version, or memento, of an original resource, as described in Section 2), per archive, as well as the distribution of selected mementos over time. We explain in Section 3 how we obtained this set of 16,627 mementos (illustrated in Table 2).
During the process of collecting mementos, we obtained many archived pages, but because of the requirements for our target study, we only selected 16,627 mementos. The requirements include:
Downloading mementos is a slow operation, and since the bottleneck is the archives themselves, parallelization will not help. We chose a target of completing the download of all mementos from all the archives within 40 hours. We also planned to do no more than two such downloads per week in order to limit the load on the archives.
Since we want to study changes in the playback of mementos over time, we chose 200 as the minimum number of URI-Rs per archive.
The number of selected mementos from each web archive should not exceed 1,600. This condition helps reduce the difference between large archives and small archives in terms of the number of sampled mementos.
The main purpose of this paper is to document how the dataset of mementos was created so it can be reused by other studies.
In order to automatically collect portions of the web, some web archives employ web crawling software, such as the Internet Archive’s Heritrix [23, 21]. Starting with a set of seed URIs placed in a queue, Heritrix fetches the web pages identified by those URIs; each time a web page is downloaded, Heritrix writes the page to a WARC file, extracts any URIs from the page, places those newly discovered URIs in the queue, and repeats the process.
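The crawl loop can be sketched as follows. This is a minimal illustration, not Heritrix itself: `fetch` stands in for an HTTP GET, and a real crawler would write each response to a WARC file and obey politeness and scoping rules.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, fetch, max_pages=100):
    """Breadth-first crawl loop in the style of Heritrix: pop a URI from
    the frontier, fetch it, record it (a real crawler would write a WARC
    record here), extract links, and enqueue unseen ones."""
    frontier = deque(seeds)
    seen = set(seeds)
    archived = []                # stands in for writing WARC records
    while frontier and len(archived) < max_pages:
        uri = frontier.popleft()
        html = fetch(uri)        # stand-in for an HTTP GET
        archived.append(uri)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return archived
```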
Memento [25, 26] is an HTTP protocol extension that uses time as a dimension to access the web by relating the current web resources to their prior states. The Memento protocol is supported by most public web archives including the Internet Archive. The protocol introduces two HTTP headers for content negotiation. First, Accept-Datetime is an HTTP Request header through which a client can request a prior state of a web resource by providing the preferred datetime, for example,
Accept-Datetime: Mon, 09 Jan 2017 11:21:57 GMT
Second, the Memento-Datetime HTTP Response header is sent by a server to indicate the datetime at which the resource was captured, for instance,
Memento-Datetime: Sun, 08 Jan 2017 09:15:41 GMT
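In Python, such a datetime-negotiation request might be constructed as follows. This is a sketch: the TimeGate URI shown is the Wayback Machine's, and the request is only built here, not sent.

```python
from datetime import datetime, timezone
from email.utils import format_datetime
import urllib.request

# Build the RFC 1123 datetime string that the Memento headers use.
wanted = datetime(2017, 1, 9, 11, 21, 57, tzinfo=timezone.utc)
header_value = format_datetime(wanted, usegmt=True)
# header_value is now "Mon, 09 Jan 2017 11:21:57 GMT"

# The Wayback Machine acts as a TimeGate at web.archive.org/web/<URI-R>.
req = urllib.request.Request("http://web.archive.org/web/http://example.com/")
req.add_header("Accept-Datetime", header_value)
```

Sending this request to a Memento-compliant TimeGate would redirect the client to the memento whose creation datetime is closest to the requested one.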
The Memento protocol also defines the following terminology:
URI-R - identifies an original resource from the live web
URI-M - identifies an archived version (memento) of the original resource at a particular point in time
URI-T - a resource (TimeMap) that provides a list of mementos (URI-Ms) for a particular original resource (URI-R)
URI-G - a resource (TimeGate) that supports content negotiation based on datetime to access prior versions of an original resource (URI-R)
Figure 1 shows an example of requesting a TimeMap of www.cnn.com from the Internet Archive. This TimeMap has a list of over 227,000 URI-Ms of the original page www.cnn.com captured between June 20, 2000 and March 07, 2019.
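A returned link-format TimeMap can be reduced to (URI-M, Memento-Datetime) pairs with a short parser. This sketch assumes one link entry per line, which is how Wayback-based archives typically serialize TimeMaps; the sample entries in the test are illustrative.

```python
import re

def parse_timemap(text):
    """Extract (URI-M, Memento-Datetime) pairs from a link-format TimeMap.
    Assumes one entry per line; entries with rel values such as
    "original", "self", and "timegate" are skipped, while "memento",
    "first memento", and "last memento" are kept."""
    pairs = []
    for line in text.splitlines():
        uri = re.search(r"<([^>]+)>", line)
        dt = re.search(r'datetime="([^"]+)"', line)
        rel = re.search(r'rel="[^"]*memento[^"]*"', line)
        if uri and dt and rel:
            pairs.append((uri.group(1), dt.group(1)))
    return pairs
```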
A Memento aggregator can be used to retrieve TimeMaps aggregated from multiple web archives. The Memento Aggregator from Los Alamos National Laboratory (LANL)  is one implementation of a Memento aggregator that provides TimeMaps across different web archives both with (a) native support of the Memento protocol and (b) by proxy support of the Memento protocol. MemGator [5, 4] is another implementation of a Memento aggregator and an open source project that provides a variety of customization options, such as allowing users to specify a list of web archives to retrieve TimeMaps from, but it only aggregates TimeMaps from archives that natively support the Memento protocol.
On the playback of a memento, archives rewrite or transform the original content so that the memento is rendered appropriately in the user’s browser. The transformation process includes adding HTML tags to the original content to indicate when the memento was created and retrieved, and rewriting all URI-Rs of embedded resources so they point to the archive, not to the live web. Archives also add banners which provide information about both the memento being viewed and the original page.
In addition to the rewritten content, most archives allow accessing unaltered, raw, archived content (i.e., retrieving the archived version of the original content without any type of transformation by the archive). The most common mechanism to retrieve the raw content, which is supported by different Wayback Machine implementations [15, 19], is by adding id_ after the timestamp in the requested URI-M.
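For Wayback-style URI-Ms, the raw memento URI can be derived by inserting id_ after the 14-digit timestamp. The helper below is illustrative and assumes that common URI-M layout; archives with other URI-M schemes need different handling.

```python
import re

def to_raw_urim(urim):
    """Rewrite a Wayback-style URI-M to request the raw (unaltered)
    memento by appending id_ to the 14-digit timestamp."""
    return re.sub(r"(/\d{14})/", r"\1id_/", urim, count=1)
```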
We collected URI-Rs from four different sources. The first 500 URI-Rs were from Moz, which provides a list of the top 500 domains on the web. The second set consists of 1,535 URI-Rs from a previous study investigating memento damage. The third set contains 6,657,856 URI-Rs that are publicly available in the HTTP Archive. The final set of URI-Rs (8,774,352) is from the Web Archives for Historical Research group (WAHR).
We included the first two sources even though they are relatively small compared to the other two because (1) we wanted our final selected set of URI-Rs to include some top/well-known web pages (i.e., URI-Rs from Moz), and (2) the URI-Rs from the memento damage study contain a mixture of path lengths (e.g., www.example.com/path/to/file.html). The main characteristic of the URI-Rs from the first and third sources (i.e., Moz and HTTP Archive) is that each URI-R consists of a domain name only (e.g., www.example.com).
The URI-Rs from WAHR are extracted from tweets containing the hashtags #climatemarch, #MarchForScience, #porteouverte, #paris, #Bataclan, #parisattacks, #WomensMarch, and #YMMfire, collected between December 11, 2015 and May 3, 2017. Table 3 shows the number of collected URI-Rs, including the number of URI-Rs by hashtag for the WAHR source. The total number of unique URI-Rs from all four sources is 8,220,606 after removing duplicates.
| Source | Hashtag | Access time | URI-Rs | URI-Rs after removing duplicates |
| WAHR | #climatemarch | 2017-04-19 – 2017-05-03 | 175,278 | 41,674 |
| WAHR | #MarchForScience | 2017-04-12 – 2017-04-26 | 299,124 | 90,318 |
| WAHR | #porteouverte #paris #Bataclan #parisattacks | 2015-12-11 | 5,561,037 | 857,490 |
| WAHR | #WomensMarch | 2017-01-12 – 2017-01-28 | 2,403,637 | 526,903 |
We merged all 8,220,606 unique URI-Rs from the four sources into a single list. The order in which URI-Rs are placed on the list is as follows:
Moz’s URI-Rs were placed on the top of this list followed by URI-Rs from our Memento damage study.
We repeatedly selected 10 URI-Rs from HTTP Archive and 10 URI-Rs from WAHR, choosing 10 from a different hashtag each round.
The order of URI-Rs in the list is important because we decided to work with a smaller number of URI-Rs for our study. Thus, out of 8,220,606 URI-Rs, we only selected the first 10,000 URI-Rs that fulfill the conditions explained in Section 3.1.
3.1 Method 1: selecting the first 10,000 URI-Rs from the initial set of 8,220,606 URI-Rs
URI-Rs must be canonicalized to determine whether or not a URI-R with a particular domain name and file path length has already been selected. We used the canonicalization function that is part of PyWb. The function treats http://www.example.com, http://www.example.com:80, and www.EXAMPLE.com as the same URI, as shown in Figure 2. The output of this canonicalization function is in Sort-friendly URI Reordering Transform (SURT) format.
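A simplified canonicalizer along these lines can be sketched in Python. This is an illustration, not PyWb's actual implementation; real SURT canonicalization covers many more cases (query normalization, session-id removal, and so on).

```python
from urllib.parse import urlsplit

def surt_key(uri):
    """Toy canonicalizer in the spirit of SURT output: add a default
    scheme, lowercase, drop the port and a leading "www.", and reverse
    the host labels so related hosts sort together."""
    if "://" not in uri:
        uri = "http://" + uri
    parts = urlsplit(uri.lower())
    host = parts.hostname or ""
    if host.startswith("www."):
        host = host[4:]
    path = parts.path or "/"
    return ",".join(reversed(host.split("."))) + ")" + path
```

With this sketch, the three example URIs above all map to the same key, `com,example)/`.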
In addition to the canonicalization function, we issued an HTTP HEAD request to discover whether two URI-Rs redirect to the same web resource. As Figure 3 shows, sending an HTTP HEAD request to www.fb.com or facebook.com results in a “301” redirect to https://www.facebook.com/, which is the URI-R we select, rather than the first two URI-Rs.
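The redirect-resolution step can be sketched as follows; `head` is a stand-in for issuing a real HTTP HEAD request (a hypothetical helper, not one of the paper's scripts).

```python
def resolve_redirects(uri, head, max_hops=5):
    """Follow 3xx redirects until a final URI is reached. `head` stands
    in for an HTTP HEAD request: it returns a (status, location) pair,
    with location None when there is no redirect."""
    for _ in range(max_hops):
        status, location = head(uri)
        if status in (301, 302, 303, 307, 308) and location:
            uri = location
        else:
            break
    return uri
```

Two URI-Rs are then considered duplicates when they resolve to the same final URI, as in the www.fb.com / facebook.com example above.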
Also, the selected URI-Rs must contain a variety of file path lengths, which we group into the following five sets, each containing 2,000 URI-Rs:
Path length of zero
Path length of one
Path length of two
Path length of three
Path length of four or more
The final two conditions for selecting the first 10,000 URI-Rs are:
URI-Rs with the same file path length should not have the same domain name. For example, if www.youtube.com/watch?v=cpPG0bKHYKc has already been selected, then www.youtube.com/watch?v=hFhiV5X5QM4 will not be selected. This may help to collect more unique URI-Rs and vary the content we plan to study.
The TimeMaps of selected URI-Rs must contain at least one memento, since our further goal is to study changes and transformations in the content of mementos over time.
To retrieve TimeMaps, we used the LANL Memento Aggregator. Once a TimeMap was downloaded, we reduced the number of mementos in the TimeMap to one memento per year from each archive. TimeMaps returned from LANL’s aggregator have more information and metadata than we need for our further study. Therefore, we wrote two Python scripts, available on GitHub (https://github.com/oduwsdl/mementos-fixity). The script timemap.py extracts only URI-Ms and their Memento-Datetime from the returned TimeMaps, while the second script, yearly-filter.py, filters TimeMaps by selecting one memento (the first) per year per archive. Figure 4 shows an example of a TimeMap with 64 mementos of the URI-R http://www.futureofmusic.org/about/positions.cfm, and Figure 5 shows the corresponding TimeMap after filtering, which contains only 10 mementos.
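The per-year filtering step can be reimplemented in a few lines. This is a sketch in the spirit of yearly-filter.py, not the published script itself; it approximates the archive by the URI-M's host.

```python
from urllib.parse import urlsplit

def yearly_filter(mementos):
    """Keep the first memento per (archive, year). `mementos` is a list
    of (URI-M, Memento-Datetime) pairs in TimeMap order, with datetimes
    like "Sun, 08 Jan 2017 09:15:41 GMT"."""
    seen = set()
    kept = []
    for urim, dt in mementos:
        # RFC 1123 dates are "Day, DD Mon YYYY HH:MM:SS GMT",
        # so the fourth whitespace-separated token is the year.
        key = (urlsplit(urim).hostname, dt.split()[3])
        if key not in seen:
            seen.add(key)
            kept.append((urim, dt))
    return kept
```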
Table 4 shows the number of selected URI-Rs per source and path length and Table 5 shows that 13% of the selected URI-Rs currently have either the HTTP status code 4xx or 5xx. Even though these URI-Rs are no longer live, they are archived.
| Hashtags | Path 0 | Path 1 | Path 2 | Path 3 | Path 4+ | Total |
| #porteouverte #paris #Bataclan #parisattacks | 8 | 758 | 716 | 855 | 711 | 3,048 |
Table 6 (column Method 1) shows the list of 16 web archives from which the mementos of our 10,000 URI-Rs are collected (one archive, nationalarchives.gov.uk, has not been counted yet because it has not contributed any mementos). The total number of URI-Rs in the table exceeds 10,000 because a URI-R often has mementos in multiple archives, resulting in some URI-Rs being counted multiple times, but the total number of unique URI-Rs is still 10,000. The total number of URI-Ms in all TimeMaps is 12,988,039. This number drops to 48,199 URI-Ms after applying the one-memento-per-year filter.
From Table 6, we notice that several archives have a small number of URI-Rs and URI-Ms. Since we want to study the playback fidelity of the web archives, we chose 200 as the minimum number of URI-Rs per archive. After applying Method 1, we used the three methods (Sections 3.2, 3.3, and 3.4) to discover additional mementos from web archives that have fewer than 200 URI-Rs.
3.2 Method 2: Discovering additional URI-Rs from the HTML of already collected mementos
For each archive that had not satisfied the 200 URI-Rs condition, we downloaded the raw content of its already collected mementos and extracted all URI-Rs found in the HTML. Using the LANL Memento Aggregator, we requested the TimeMap of each URI-R that had not already been selected. We applied this method for the following archives:
Three archives are not included in the list above. First, Method 2 cannot be applied to nationalarchives.gov.uk because the archive has not yet provided any mementos. Second, the archives europarchive.org and digar.ee satisfied the 200 URI-R condition after Method 2 was applied to swap.stanford.edu and vefsafn.is, respectively.
As shown in Table 7, any newly discovered URI-Rs/URI-Ms from this method caused the information for all archives to be updated, even for archives that already had more than 200 URI-Rs. Figure 6 shows an example of URI-Rs extracted from the HTML of the memento:
The URI-Rs are extracted from the href attribute of the <a> tags (using the Python script extract_urirs.py, available at https://github.com/oduwsdl/mementos-fixity). We downloaded the TimeMap of the URI-R www.inria.fr/, which had not previously been selected. As Figure 7 shows, the TimeMap contains mementos not only from vefsafn.is but from eight archives: web.archive.org, archive.bibalex.org, webcitation.org, webarchive.loc.gov, archive-it.org, archive.is, vefsafn.is, and digar.ee.
With Method 2, we now have 200 URI-Rs for the following additional archives:
Table 6 (column Method 2) shows the new archives that satisfy the condition of 200 URI-Rs and the four archives which still did not satisfy the condition.
3.3 Method 3: URI-Rs discovered in archives’ published lists
Some archives make lists of URI-Rs they collect available on the web. Archives may also publish lists of URI-Ms associated with each URI-R. We found these published collections for three archives (Table 8) that had not met the 200 URI-R minimum.
| collectionscanada.gc.ca | collectionscanada.gc.ca/webarchives/url-list/index-e.html | 2,613 | 27,232 |
We downloaded the published list of URI-Rs only (URI-Ms were not included in this list) from the archive webarchive.org.uk. Then, using the LANL Memento Aggregator, we retrieved, for the first 192 URI-Rs, TimeMaps that contain at least one memento in the UK Web Archive. Table 6 (column Method 3) shows that this method helps two archives reach 200 URI-Rs (i.e., webarchive.proni.gov.uk and webarchive.org.uk), but at the same time, a new web archive appears in the TimeMaps, nationalarchives.gov.uk, raising the total number of archives to 17.
Next, we downloaded lists of URI-Rs and URI-Ms made available by the two web archives collectionscanada.gc.ca and nationalarchives.gov.uk. We only extracted the number required to reach 200 URI-Rs per archive. With this method, we did not need a Memento aggregator since the archives already provide a list of mementos, but for the sake of consistency, we used LANL’s Aggregator to download TimeMaps so we could update information for the other archives. Table 6 (column Method 3) shows that for perma.cc we only needed to discover 89 additional URI-Rs to reach 200 URI-Rs. Table 9 shows how the number of URI-Rs/URI-Ms increased after applying Method 3 to each of the three archives.
3.4 Method 4: Sending TimeMap requests directly to an archive
The LANL Memento Aggregator may serve cached TimeMaps, which may not contain recently created mementos. For this reason, we decided to request TimeMaps for the already selected URI-Rs directly from perma.cc. Figure 8 shows an example of requesting the TimeMap of the URI-R www.whitehouse.gov from perma.cc (the archive also uses other domain names, such as perma-archives.org); it contains 57 mementos. With this method, we were able to obtain the additional 89 URI-Rs for perma.cc shown in Table 6 (column Method 4).
3.5 Filtering by download time, the maximum number of mementos, and HTTP status codes
At this point, the selected set contained 86,387 URI-Ms and 20,490 URI-Rs (11,222 unique) from 17 different web archives. For our target study, we downloaded the rewritten and raw mementos 10 times, running 17 parallel processes where each process downloaded mementos from a specific archive. We found that download time varies between web archives; for example, it took about 40 hours to download 733 mementos from webharvest.gov but only 12 hours to download 1,011 mementos from nationalarchives.gov.uk. Thus, we decided to limit the number of mementos per archive to what could be downloaded within 40 hours, with a maximum of 1,600 mementos per archive. This produced 18,472 mementos. Unfortunately, when selecting mementos we did not check the HTTP status to make sure they were “200 OK” or archival 4xx/5xx responses (i.e., that they have the HTTP response header Memento-Datetime, for archives that support the Memento protocol). After selecting the 18,472 mementos, we found that about 10% (1,975) of them had a non-archival 4xx or 5xx HTTP status code (1,498 are from archive.bibalex.org), as the example in Figure 9 shows. Thus, we removed most of the 4xx/5xx mementos, keeping only 130 (out of 1,975) in order to keep track of these mementos. We could not replace the removed mementos because by this time we had already used the selected dataset in our study, and it was not possible to recover any excluded mementos. This resulted in 16,627 remaining mementos.
3.6 Final set
Table 10 shows the final numbers of selected URI-Rs and URI-Ms per archive (available on GitHub at https://github.com/oduwsdl/mementos-fixity). The table shows that three archives have fewer than 200 URI-Rs, for the following reasons:
perma.cc: It took about 40 hours to download 182 mementos from perma.cc, including the raw mementos.
archive.bibalex.org: We removed 1,498 mementos because they returned the “503 Service Unavailable” HTTP response code.
collectionscanada.gc.ca: We removed mementos of two URI-Rs that returned the “503 Service Unavailable” HTTP response code.
Figure 10 shows the distribution of URI-Ms between 1996 and 2017. The main reason for having fewer mementos in the years 1996-2005 is that most web archives did not exist during those early years [13, 9]. Figure 11 shows the number of URI-Rs per path length. The number of distinct URI-Rs is 3,698; of those, 1,996 (54%) have a path length of zero and the remaining 1,702 URI-Rs (46%) have a path length greater than or equal to one.
In this paper we describe four methods to discover 16,627 mementos from 17 public web archives. We use the LANL Memento Aggregator to look up mementos by submitting the URI-Rs of original web pages (Method 1). For archives that have fewer than 200 URI-Rs, we collect additional mementos by extracting URI-Rs from the HTML of already discovered mementos (Method 2). As our third method, we use published lists of original web pages and their associated mementos made available by several web archives (Method 3). Finally, we request TimeMaps directly from the archive perma.cc (Method 4). Even though the process of discovering mementos resulted in a total of 80,387 mementos (after applying the one-memento-per-year filter), we downsampled this number to 16,627 due to our constraints: limiting to 1,600 URI-Ms per archive, being able to download all the mementos in less than 40 hours, and requiring at least 200 URI-Rs per archive.
This work is supported in part by The Andrew W. Mellon Foundation (AMF) grant 11600663.
-  The HTTP Archive Tracks How the Web is Built. https://httparchive.org/downloads.php (4 2017), accessed on 2017 April 15
-  The Moz Top Pages. https://moz.com/top500 (6 2017), accessed on 2017 June 8
-  WARC file format (ISO 28500:2017) (2017)
-  Alam, S.: A Memento Aggregator CLI and Server in Go. https://github.com/oduwsdl/MemGator (2016)
-  Alam, S., Nelson, M.L.: MemGator-A portable concurrent memento aggregator: Cross-platform CLI and server binaries in Go. In: Proceedings of the 16th ACM/IEEE Joint Conference on Digital Libraries (JCDL). pp. 243–244 (2016)
-  Aturban, M., Alam, S., Nelson, M.L., Weigle, M.C.: Archive Assisted Archival Fixity Verification Framework. In: Proceedings of the 19th ACM/IEEE Joint Conference on Digital Libraries (JCDL) (2019)
-  Aturban, M., Kelly, M., Alam, S., Berlin, J.A., Nelson, M.L., Weigle, M.C.: ArchiveNow: Simplified, Extensible, Multi-Archive Preservation. In: Proceedings of the 18th ACM/IEEE Joint Conference on Digital Libraries (JCDL). pp. 321–322 (2018)
-  Aturban, M., Nelson, M.L., Weigle, M.C.: Difficulties of Timestamping Archived Web Pages. Tech. Rep. arXiv:1712.03140 (December 2017)
-  Bailey, J., Grotke, A., McCain, E., Moffatt, C., Taylor, N.: Web Archiving in the United States: A 2016 Survey. http://ndsa.org/documents/WebArchivingintheUnitedStates_A2016Survey.pdf (February 2017)
-  Bailey, S., Thompson, D.: Building the UK’s First Public Web Archive. D-Lib Magazine 12(1), 1082–9873 (2006)
-  Bornand, N.J., Balakireva, L., Van de Sompel, H.: Routing Memento requests using binary classifiers. In: Proceedings of the 16th ACM/IEEE Joint Conference on Digital Libraries (JCDL). pp. 63–72 (2016)
-  Brunelle, J.F., Kelly, M., SalahEldeen, H., Weigle, M.C., Nelson, M.L.: Not all Mementos are created equal: Measuring the impact of missing resources. International Journal on Digital Libraries 16(3-4), 283–301 (2015)
-  Costa, M., Gomes, D., Silva, M.J.: The evolution of web archiving. International Journal on Digital Libraries 18(3), 191–205 (2017)
-  International Internet Preservation Consortium (IIPC): OpenWayback. https://github.com/iipc/openwayback/wiki (October 2005)
-  International Internet Preservation Consortium (IIPC): OpenWayback. https://iipc.github.io/openwayback/2.1.0.RC.1/administrator_manual.html (2015)
-  Kahle, B.: Wayback Rising! now 731,667,951,000 web objects (counting images and pages) active on https://web.archive.org . 731 billion! Thank you for all the support, it makes a difference. go @internetarchive. https://twitter.com/brewster_kahle/status/1118172506777509890 (April 2019)
-  Kreymer, I.: PyWb - Web Archiving Tools for All. https://github.com/ikreymer/pywb (December 2013)
-  Kreymer, I.: Webrecorder - a web archiving platform and service for all (2015), https://webrecorder.io
-  Kreymer, I.: Rewriter. https://github.com/webrecorder/pywb/blob/master/docs/manual/rewriter.rst (2018)
-  Kumar, R.: Sort-friendly URI Reordering Transform (SURT) python module. https://github.com/internetarchive/surt (2017)
-  Mohr, G., Stack, M., Ranitovic, I., Avery, D., Kimpton, M.: An Introduction to Heritrix An open source archival quality web crawler. In: Proceedings of the 4th International Web Archiving Workshop (IWAW) (2004)
-  Ruest, N., Milligan, I., Deschamps, R., Lin, J., Library and Archives Canada: Web Archives for Historical Research Group Dataverse (WAHR). https://dataverse.scholarsportal.info/dataverse/wahr (5 2017), accessed on 2017 May 3
-  Sigurdsson, K.: Incremental crawling with Heritrix. In: Proceedings of the 5th International Web Archiving Workshop (IWAW) (2005)
-  Tofel, B.: Wayback for accessing web archives. In: Proceedings of the 7th International Web Archiving Workshop (IWAW). pp. 27–37 (2007)
-  Van de Sompel, H., Nelson, M.L., Sanderson, R.: HTTP framework for time-based access to resource states – Memento, Internet RFC 7089. http://tools.ietf.org/html/rfc7089 (2013)
-  Van de Sompel, H., Nelson, M.L., Sanderson, R., Balakireva, L.L., Ainsworth, S., Shankar, H.: Memento: Time Travel for the Web. Tech. Rep. arXiv:0911.1112 (2009)