The CENDARI Infrastructure

12/15/2016 · by Nadia Boukhelifa, et al.

The CENDARI infrastructure is a research-supporting platform designed to provide tools for transnational historical research, focusing on two topics: medieval culture and World War I. It exposes modern web-based tools to end users, relying on a sophisticated infrastructure to collect, enrich, annotate, and search through large document corpora. Supporting researchers in their daily work is a novel concern for infrastructures. We describe how we gathered requirements through multiple methods to understand the historians' needs and derived an abstract workflow to support them. We then outline the tools we have built, tying their technical descriptions to the user requirements. The main tools are the Note-Taking Environment and its faceted search capabilities; the data integration platform, including the Data API, supporting semantic enrichment through entity recognition; and the environment supporting the software development processes throughout the project to keep both technical partners and researchers in the loop. The outcomes are technical, together with new resources developed and gathered, and a research workflow that has been described and documented.




1 Introduction

The CENDARI infrastructure is the technical result of the CENDARI project CENDARI (2015), a European infrastructure project funded by the EU for 2012–2016. The infrastructure is designed to provide and support tools for historians and archivists. The tools are web-based, using modern web-oriented technologies and standards. The CENDARI infrastructure is innovative because it is designed to address multiple scenarios with two types of actors: researchers, and cultural heritage institutions providing data. Both benefit from the platform, although the novelty concerns the researchers more. To address researchers’ needs, the platform provides a note-taking environment for the initial steps of historical research, such as gathering sources, writing summaries, elaborating ideas, planning, or transcribing. CENDARI integrates resources available online, initially curated by cultural heritage institutions, with the explicit goal of integrating them into the infrastructure to support the research process. Researchers from the project visited these institutions and negotiated data-sharing agreements. For archivists, an Archival Directory was implemented, which allows the description of material based on the international Encoded Archival Description (EAD) standard. The resulting infrastructure not only provides access to more than 800,000 archival and historical sources, but integrates them into a collection of tools and services developed by the project as a digital resource for supporting historians in their daily work.

The researchers’ primary entry point into CENDARI is the Note-Taking Environment (NTE). It enables the curation of notes and documents prepared by researchers within individual research projects. Each project can be shared among colleagues and published once finished. These notes and documents can be linked to the data existing in the CENDARI Repository; this comprises both the linking of entities to standard references such as DBpedia and the connection to archival descriptions. A faceted search feature is part of the NTE and provides access to all data; in addition, it connects to the TRAME service Trame (2016) for extended search in distributed medieval databases. The Repository, based on the CKAN software, manages the data and provides a browser-based user interface to access it.

To support the creation of curated directories of institutions holding relevant material for both research domains, and to raise awareness of the “hidden” archives that do not have a digital presence or are less known but relevant, we integrated the AtoM software, enabling historians and archivists to add new archival descriptions and enrich existing ones in a standardised way, following the EAD.
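EAD finding aids are XML documents; as a minimal sketch of what such a standardised description looks like, the fragment below uses element names from EAD 2002, but the collection it describes is invented for illustration, and the parsing code is only an example using Python's standard library, not part of the CENDARI tooling:

```python
import xml.etree.ElementTree as ET

# Minimal EAD 2002 fragment; the collection described is invented.
EAD_SAMPLE = """
<ead>
  <eadheader>
    <eadid>example-0001</eadid>
  </eadheader>
  <archdesc level="collection">
    <did>
      <unittitle>Correspondence of an early 20th-century physicist</unittitle>
      <unitdate normal="1907/1919">1907-1919</unitdate>
      <physdesc>3 boxes</physdesc>
    </did>
  </archdesc>
</ead>
"""

def collection_summary(ead_xml: str) -> dict:
    """Extract a few descriptive fields from an EAD finding aid."""
    root = ET.fromstring(ead_xml)
    did = root.find("./archdesc/did")
    return {
        "id": root.findtext("./eadheader/eadid"),
        "title": did.findtext("unittitle"),
        "date": did.find("unitdate").get("normal"),
    }

summary = collection_summary(EAD_SAMPLE)
```

Because descriptions follow a shared standard, any tool in the platform can extract the same fields regardless of which institution produced the record.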

Once data is collected, an internal component called the Litef Conductor (Litef) processes it for further semantic enrichment. It sends the text extracted from the documents to the ElasticSearch search engine, invokes the high-quality Named Entity Recognition and Disambiguation service (NERD) Lopez (2009), and generates semantic data inferred from several document formats, such as archival descriptions or XML-encoded texts.
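The faceted search that this indexing feeds can be pictured as an ElasticSearch query combining full-text matching with one terms aggregation per facet. The sketch below only builds the query body; the field names (`content`, `persons`, `places`, `organisations`) are assumptions for illustration, not the actual CENDARI index schema:

```python
def faceted_query(text: str, facet_fields) -> dict:
    """Build an ElasticSearch query body: full-text match on the
    document content plus a terms aggregation for each facet."""
    return {
        "query": {"match": {"content": text}},
        "aggs": {
            field: {"terms": {"field": field, "size": 10}}
            for field in facet_fields
        },
    }

# Hypothetical facets mirroring the entity types discussed in the text.
body = faceted_query("Rutherford correspondence",
                     ["persons", "places", "organisations"])
```

A real client would POST this body to the index's `_search` endpoint; the aggregation buckets in the response populate the facet lists shown beside the result set.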

The connection to the semantic web is further extended through ontologies developed specifically for the intended use cases and connected through the Knowledge Base (KB). Researchers can upload their own ontologies to the Repository through the Ontology Uploader tool. To explore the semantic data collected, the Pineapple Resource Browser provides a search and browse web interface.

To summarise, the CENDARI technical and scientific contributions are:

  • The use of participatory design sessions as a method to collect users’ needs,

  • The design of the infrastructure as a research support tool,

  • The deep integration of multiple resources through a data integration and enrichment process to further support the researchers’ workflow.

CENDARI has generated several benefits and outcomes:

  • An integrated system to conduct the early stages of historical research,

  • A catalogue of hidden archives in Europe (more than 5,000 curated institutional and archival descriptions, with special attention given to archives without a digital presence of their own),

  • A rich set of Archival Research Guides to help and guide historians by providing transnational access to information Edmond et al. (2015a),

  • Integrated access to heterogeneous data from multiple data sources and providers,

  • A rich set of resources accessible and searchable from the web.

2 Related Work

CENDARI is related to several other Digital Humanities (DH) projects which tackle aspects of the humanities research workflow from either a generalist or topic-specific perspective. The project is directly connected to the Digital Research Infrastructure for the Arts and Humanities (DARIAH), which harmonises national DH initiatives across Europe, with the goal of building and sustaining an infrastructure that supports and enables the research of humanities scholars in the digital age. From its inception, CENDARI was designed to rely on DARIAH and ultimately hand over the resulting infrastructure to it for sustainability and reuse.

The European Holocaust Research Infrastructure (EHRI) project aims to assist the process of conducting transnational and comparative studies of the Holocaust by bringing together, in an online portal, archival material from a geographically dispersed and fragmented archival landscape Blanke and Kristel (2013). Its principal goal relates to data collection, integration, and curation.

The TextGrid project (Virtuelle Forschungsumgebung für Geisteswissenschaften, a virtual research environment for the humanities) developed a research environment targeted specifically at the humanities and textual research. Its open-source components are implemented as a desktop client to work with TEI-encoded documents and to create and publish scholarly editions. Following the end of the project, the infrastructure components and technology were integrated into DARIAH.

To understand the needs of historians in the context of transnational history, we have used two key methods: the development of use cases and participatory design sessions. Participatory design has been adopted by many disciplines where stakeholders cooperate to ensure that the final product meets their needs. For interactive software design, which is the focus of this work, the aim is to benefit from different expertise: designers know about the technology and users know their workflow and its context. In a similar vein, Muller and Druin Muller (2003) describe participatory design as belonging to the in-between domain of end-users and technology developers which is characterised by reciprocal learning and the creation of new ideas through negotiation, co-creation, and polyvocal dialogues across and through differences.

Until recently, participatory design was not a common practice in humanities projects. Claire Warwick Warwick (2012) offers an explanation for this paucity: “It was often assumed that the resources created in digital humanities would be used by humanities scholars… there was little point asking them what they needed, because they would not know, or their opinion about how a resource functioned, because they would not care. It was also assumed that technical experts were the people who knew what digital resources should look like… their opinion was the one that counted, since they understood the details of programming, databases, XML and website building. The plan, then, was to provide good resources for users, tell them what to do and wait for them to adopt digital humanities methods”. Digital humanities has since changed these perceptions, and we can now see a number of projects adopting participatory design to learn about users and their requirements Mattern et al. (2015); Wessels et al. (2015); Visconti (2016).

3 Design and Development Process

Assisting historians and archivists in their research on transnational history is an important but abstract goal. To design relevant tools, CENDARI first had to understand the related needs and requirements of historians and archivists. It turned out that the work process of these researchers is not well described in books or research articles, so the initial design phase of the project consisted of obtaining information about the needs and practised research processes of our target researchers.

Figure 1: The Integrated NTE for Historians: (A) the search and browse panel, (B) the resources panel, (C) the central panel dedicated to editing, tagging and resolution, and (D) the visualisation panel.

3.1 Use Case

Let us start with an example of a historian’s workflow, acting as a use case for historical research carried out in CENDARI. It was developed alongside the Archival Research Guide (ARG) on Science and Technology written by one of the historians in the project. This ARG examines the shift within the transnational scientific network during and after the First World War (WWI), which resulted in the isolation of the German Empire within the international scientific community. The research question is whether this shift can also be found in the correspondence of individual scientists. There was an intense international exchange between physicists before the war; a particularly prominent example is the case of the British scientist Ernest Rutherford, later Nobel Prize winner and president of the British Royal Society.

As a first step, a new project named “Ernest Rutherford Correspondence” was created in the NTE (see Fig. 1). Afterwards, available resources were searched through the faceted search integrated in the NTE, and selected results (consisting of descriptions of archival material) were saved into a note.

The next step was a visit to an archive in Berlin, which holds the biggest collection of Rutherford’s correspondence in Germany. Several notes were taken there and added to the project space in the NTE. Photographs of the letters were taken, uploaded to the private research space and later transcribed in the working space, using the provided image viewer.

The transcription of each document was processed with the NERD service integrated within the NTE. This service produces many results; since entities cannot always be disambiguated automatically, the user might have to resolve them manually, for example by selecting the appropriate place or person from the list of alternatives provided in the resources panel of the NTE. The visualisations on the right-hand panel show Ernest Rutherford’s correspondents in a histogram, a map of the places from where he wrote these letters (such as Montreal and Manchester, where he lived from 1907 onwards), and a timeline showing that Rutherford’s German correspondents did not receive any letters from him after 1914. In this way, the research hypothesis, an abrupt ending of exchanges between German and Anglo-Saxon scientists from the beginning of the First World War onwards, is substantiated.

From the list of resources available in archives and libraries, it can be discovered that Cambridge University Library holds the correspondence and letters of Ernest Rutherford, and thus the most important and comprehensive part of his heritage. If the researcher is not able to visit Cambridge, for example because of a lack of financial support, they can ask a colleague abroad to contribute to the endeavour by visiting Cambridge and checking the correspondence in the archive. To enable a researcher abroad to contribute, the project can be shared with them by ticking a box on the project edit page. That does not mean the project is publicly visible, as all content is inherently private in the NTE; the project is simply shared with one colleague based in another country. In this manner, collaborative and transnational research becomes possible. Once the material in Cambridge has been explored and partially described, more complex interpretations and deeper layers of the research question can be pursued: in this case, for example, the correspondence and interaction between Ernest Rutherford and Otto Hahn in the early 1930s, and Rutherford’s influence on Hahn’s discovery of nuclear fission.

3.2 Participatory Design Workshops

There are many participatory design methods, including sitings, workshops, stories, photography, dramas, and games Muller (2003). We were inspired by low-fidelity prototyping methods Beaudouin-Lafon and Mackay (2002) because they provide concrete artifacts that serve as an effective medium for communication between users and designers. In particular, to explore the CENDARI virtual research environment design space, we used brainstorming and video prototyping. Together, these two techniques can help explore new ideas and simulate or experience new technology.

For brainstorming, a group of people, ideally three to seven, is given a topic and a limited amount of time. Brainstorming has two phases: an idea-generation phase and an evaluation phase. The aim of the first stage is to generate as many ideas as possible; here quantity is more important than quality. In the second stage, the ideas are reflected upon and only a few are selected for further work (e.g. video prototyping). The selection criterion could be a group vote where each person picks their favourite three ideas.

Video prototyping is a collaborative design activity that involves demonstrating ideas for interaction with a system in front of a video camera. Instead of describing the idea in words, participants demonstrate what it would be like to interact with the system. The goal is to be quick and to capture the important features of the system that participants want to be implemented. A video prototype is like a storyboard: participants describe the whole task including the user’s motivation and context at the outset and the success of the task at the end. In this way, the video prototype is a useful way to determine the minimum viable system to achieve the task successfully.

We organised three participatory design sessions Boukhelifa et al. (2015) with three different user groups: WWI historians, medievalists, and archivists and librarians. The aim of these sessions was to understand how different user groups would want to search, browse, and visualise (if at all) information from archival research. In total there were 49 participants (14 in the first, 15 in the second and 20 in the third). Each session was run as a one-day workshop and was divided into two parts. The morning began with presentations of existing interfaces for access and visualisation. In order to brainstorm productively in the afternoon, participants needed to have a clear idea of the technical possibilities currently available. In the afternoon, participants divided into groups of four and brainstormed ideas for searching, browsing, and visualisation functions, and then they created paper and video prototypes for their top three ideas. There were 30 video prototypes in total, each consisting of a brief (30 seconds to 4 minutes) mock-up and demonstration of a key function. Everyone then met to watch and discuss the videos.

Figure 2: The three participatory design sessions held with WWI historians, librarians and medievalists.

Findings: These participatory workshops served as an initial communication link between the researchers and the technical developers, and they were an opportunity for researchers to reflect on their processes and tasks, both those they perform using current resources and tools and those they would like to be able to perform. Even though different groups of users were involved in these workshops, common themes emerged from the discussions and the prototypes. The outcomes of the participatory sessions were three-fold: a set of functional requirements common to all user groups, high-level recommendations to the project, and a detailed description of the historians’ workflow (Section 3.3).

In terms of functional requirements, participants expressed a need for: networking facilities (e.g. to share resources or to ask for help); robust search interfaces (e.g. search for different types of documents and entities or by themes such as language, period or concept); versatile note-taking tools that take into consideration paper-based and digital workflows and support transcription of documents and annotations; and interactive visualisations to conceptualise information in ways that are difficult in text-based forms.

The participatory design workshops concluded with the following high-level recommendations to CENDARI: (i) combine existing workflows with new digital methods in ways that save researchers’ time; in particular, notes can be augmented over time, and researchers’ willingness to share them might depend on the note-taking stage and their motivation for sharing; (ii) all researchers have data that they do not use after publication or the end of their projects; if the project can offer benefits for making this data available with proper citations, such an initiative could encourage researchers to release their data, which could change working practices and bring more transparency to historical research. Indeed, linking the management of research data to publication and presenting tangible benefits for researchers are important factors in attracting contributors; and (iii) work closely with researchers to develop testbeds early in the project rather than seek feedback at the end of software development. Researchers who are currently working on a project are likely to have useful data and an interest in sharing it; their projects could be implemented as use cases demonstrating the utility of our virtual research environment.

In order to create a technical system that can adequately support these functional requirements and recommendations, the information collected from the participatory workshops was translated into precise descriptions of functions, which were then evaluated by technical experts and used as the basis for technical development. As part of the follow-up from these workshops, it was decided to translate important functions demonstrated in the video prototypes into written use cases. An example of such a use case is described at the beginning of Section 3.

3.3 Historian Research Workflow

Through the discussions and exchanges gathered during the participatory design sessions, and standard references Iggers (2005), CENDARI technical partners identified a workflow that seems to match the work of a broad range of historians in the early phases of their research, although we do not claim that every historian follows it. We summarise it here so that we can later refer to the tools and mechanisms with which the infrastructure supports it.

  1. Research preparation: all historians start a new project by gathering questions, hypotheses, and possible ways of answering or validating them. This phase is usually informal and carried out using a notebook, either paper-based or computer-based. During the participatory design sessions, researchers complained that the notes they take are hardly organised, often spread over many computer files or notebook pages.

  2. Sources selection: to answer the questions and validate the hypotheses, researchers gather books, articles, web-accessible resources, and make a list of primary and secondary sources to read. This stage is repeated each time new information is collected.

  3. Planning of visits to Archives and Libraries: relevant sources are often accessible only from specific archives or libraries because the sources are unique; even the catalogues are not always accessible online, or are not precise enough to answer questions. Physical visits to the institutions are then necessary.

  4. Archive and Library visit: working at archives or libraries involves note-taking, transcribing, collecting scans and photos of documents for later exploitation. Sometimes, even more work is involved when archive boxes have never been opened before and some exploration and sorting is needed, hopefully (but not always) improving the catalogues.

  5. Taking Notes: during their visits in archives and libraries, historians take notes or annotate copies of archival records. These notes and annotations generally follow the main research interests of the researchers, but also contain side glances to related topics or possible research areas. Most of the time they can be understood as in-depth descriptions of the sources and thus as a complement to the metadata available in finding aids.

  6. Transcription: primary sources consulted are often transcribed, either partially or exhaustively to facilitate their reading, but also to enhance their searchability. These transcriptions serve as the basis for literal citations in publications.

  7. Research refinement and annotation: from the information gathered, some questions are answered and some hypotheses are validated or invalidated, but new questions and new hypotheses arise as well. In particular, a list of questions comes up repeatedly: (a) Who are the persons mentioned in the documents? Some are well known, others are not and yet are frequently mentioned; understanding who these persons are and why they were involved is a recurring question in the research process. (b) Where are the places mentioned? Finding them on a map is also a recurring issue. (c) What are the organisations mentioned in the documents, and what is their relationship with the persons and places? (d) Clarifying the temporal order of events is also essential, and dates often appear with a high level of uncertainty.

  8. Knowledge organisation and structuring: after enough facts have been gathered, historians try to organise them into high-level structures, with causalities, interconnections, or goals related to persons and organisations. This phase also consists of taking notes, but in addition involves referring to other notes and transcriptions, and organising the chunks of information from previous notes in multiple ways, related not to the documents they come from but to higher-level classifications or structures.

  9. Refinement and writing: at some point, the information gathered and organised is sufficient for writing an article or a book. All the bibliographic references are gathered, as well as the list of documents consulted in archives and libraries, to be referenced in a formal publication.

  10. Continuation and Expansion: often a research work is reused later either as a continuation of the initial research, or with variations reusing some of the structures (other places, other organisations, other times).

  11. Collaboration support: while collaborations are possible and realised in the physical world, the sharing of gathered material is limited by the difficulty of copying all physical information between collaborators. In contrast, a web-based setting allows the sharing of all the resources related to a research project. During all the participatory design sessions, historians and archivists expressed the desire to experiment with digital collaboration.

Archivists do not follow the same workflow but they share some of the concerns, in particular the need to identify persons, places, organisations, and temporal order, since this information is essential to their cataloguing activity. They also have a higher-level engagement with the material that is useful in making sense of institutional logic or just understanding the contents and provenance of particular boxes and collections. We note that the workflow described above is non-linear as reported in other studies (e.g. Mattern et al. (2015)). Researchers can at any stage acquire new information, reorganise their data, refine their hypotheses or even plan new archive or library visits.

The CENDARI platform has been built to support this workflow with web-based tools, and to augment it with collaboration, sharing, and faceted search through the gathered and shared documents. The main functional requirements can be summarised as: taking notes, transcribing, annotating, searching, visualising, and collaborating.

3.4 Iterative Approach and Support Technologies

The software development in the project was carried out using agile development methods. The software components were developed in short iterative release cycles, and direct user feedback was encouraged and incorporated. The software is released as open source, and the code is hosted on and maintained through GitHub CENGitHub (2016). Where possible, CENDARI used existing solutions provided by DARIAH, such as the JIRA ticketing system. Researchers who contacted relevant institutions for the inclusion of their holdings into the platform used it to support and document the workflow from first contact, through negotiations and agreements for data sharing, to the final ingestion. Following the positive experience of historians using the tool, JIRA remained the obvious choice for bug and issue tracking during software development. By tightly integrating the applications with the DARIAH development platform and establishing close communication between historians and developers, all parties involved in the development process were able to engage in direct and problem-oriented development cycles. Ultimately, the release cycles were reduced to one week and included dedicated test and feedback sessions, as well as development focusing on small numbers of issues that were identified and prioritised collaboratively in the group.

One of the decisions that enabled these rapid cycles was the adoption of a DevOps model to manage and develop the CENDARI infrastructure. By applying methods of agile development to system administration, and simultaneously combining the development and management of the applications and infrastructure, as discussed e.g. in Kim et al. (2013), the effort and amount of work required from code commit to deployment was dramatically reduced. This was achieved by implementing automated code build and deployment using the Jenkins CI platform and Puppet configuration management on dedicated staging and production servers.

4 The CENDARI Infrastructure

The CENDARI infrastructure technically combines the integration of existing components and tools, the extension of others, and the tailored development of the missing pieces. The overall goal was to offer a great user experience to the researchers, parts of which were already provided by existing tools, while avoiding the development of an infrastructure from scratch.

Finding a set of tools that can be either developed or combined to form an integrated environment was a challenge. We realised that in order to address the user requirements it was necessary to provide multiple tools to support several stages of the researchers’ workflow, and the production of research outputs and formats, as no single tool could offer all the required features. The challenging part was to select and decide upon a limited number of components, technologies and tools which users could use intuitively and without extensive training.

To implement this infrastructure, a modular approach was undertaken. In using existing tools and services, we were able to offer many tools for some of the major features. At the same time, several important parts of the infrastructure (see Fig. 3) were developed especially for CENDARI.

At the data persistence level we take a polyglot approach: relational databases such as PostgreSQL and MySQL for most web-based tools, ElasticSearch for our faceted search, and the Virtuoso triple store to manage the generated semantic data and the triples created by the NTE.

In the application layer, central data exchange and processing is carried out by the Data API, a component implemented within the project (see Section 6.2). The data is stored in the Repository, based on CKAN, and can be accessed through a web browser. Within the project, several extensions to CKAN were developed to support the data harvesting and user authentication mechanisms. The development of the Data API and the Litef Conductor focused on providing the specific data services required by the project.

Several user applications in CENDARI support the historians’ workflows (see Section 3.3): the NTE, the Archival Directory CENArch (2015), and the Pineapple Resource Browser tool.

The NTE, described in detail in Section 5, combines access to the faceted search and the repository data with individual research data. The NTE is an adaptation and extension of the EditorsNotes system. The Archival Directory, based on the AtoM software, is used for the manual creation and curation of archival descriptions. It has a strong transnational focus and includes “hidden” archives and institutions, “little known or rarely used by researchers” CENArch (2015). At present it offers more than 5,000 institutional and archival descriptions curated by CENDARI researchers. Pineapple provides free-text search and faceted browsing through our Knowledge Base (KB), containing resources and ontologies from both domains (WWI and medieval). Technically, Pineapple is a SPARQL client, sending predefined parametrised queries to Virtuoso to extract and render information about available resources; related entities such as people, places, events, or organisations; and resources from different data sources that share the same entity mentions. These resources are generated by the semantic processing services (see Section 6.2.4) or are integrated from the medieval knowledge base TRAME (see below). Pineapple provides a web-based user interface and uses content negotiation to provide a REST-based (non-SPARQL) interface to the KB, delivering JSON-formatted data. Advanced and ontology-savvy users can use the Ontology Viewer tool or upload their own ontologies via the Ontology Uploader.
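A predefined parametrised query of the kind Pineapple sends to Virtuoso can be sketched as a template filled in before being POSTed to the SPARQL endpoint. The vocabulary below (`schema:mentions`, `schema:name`) is a placeholder, not the actual CENDARI Knowledge Base schema:

```python
# Template for "resources sharing an entity mention"; double braces
# escape the SPARQL block braces for str.format.
QUERY_TEMPLATE = """
PREFIX schema: <http://schema.org/>
SELECT ?resource ?title WHERE {{
  ?resource schema:mentions <{entity_uri}> ;
            schema:name ?title .
}} LIMIT {limit}
"""

def resources_mentioning(entity_uri: str, limit: int = 20) -> str:
    """Fill in the parametrised query; the result would be sent to the
    triple store's SPARQL endpoint."""
    return QUERY_TEMPLATE.format(entity_uri=entity_uri, limit=limit)

q = resources_mentioning("http://dbpedia.org/resource/Ernest_Rutherford")
```

Keeping the queries predefined and parametrised is what lets Pineapple expose them through a plain REST interface to users who never write SPARQL themselves.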

All CENDARI services authenticate users through the DARIAH AAI, a federated authentication solution based on Shibboleth, for which dedicated plugins for AtoM and CKAN were also created. The DARIAH AAI handles the verification of users and the creation of user accounts, including support for password management. From an end user’s perspective, this is a single sign-on experience, where the CENDARI components are visible as one of many DARIAH services. While Shibboleth is used to authenticate users, access permissions to the individual resources within the CENDARI infrastructure are handled by the Repository.

In addition to the main applications listed above, two other components were developed or adapted: the NERD service Lopez (2009) and the TRAME application for searching distributed medieval resources Trame (2016). These are hosted and maintained by the developing partner institutions and provided through defined and documented interfaces to CENDARI and other third parties.

The setup of the main applications was implemented using configuration management, as shown in Fig. 3, which provides a high-level overview of the infrastructure design. The figure vertically distinguishes the internal Application and Data layers from the user-facing Presentation layer. The model also shows the distinction between the Front Office and the Back Office, split over two machines, separating the common user applications from the components used by power users only. During the development of the infrastructure, two instances of both servers were used to allow for integration tests between components in a dedicated staging environment before deployment to the production servers.

Figure 3: CENDARI Infrastructure Model, originally by Stefan Buddenbohm

The modular approach to designing this infrastructure is exemplified by the role of the Data API as the central component for data exchange between the user-facing applications and the Repository, including user authorisation at the resource level. All user-facing applications communicate through the Data API; thus, by adopting the API, any underlying repository solution could be used.

CENDARI was designed from the start as an infrastructure that can ultimately be sustained and reused through DARIAH. To achieve this, we used tools and solutions offered by DARIAH or other parties, and integrated the services into the existing ecosystem where possible, such as the development platform, the AAI, CKAN, AtoM. Additionally, the technical design of the infrastructure was aligned with the efforts undertaken in parallel to migrate the TextGrid infrastructure into the DARIAH ecosystem.

5 Note-Taking Environment

Most historians collect notes in files using standard text editors. The files are sometimes well sorted in folders, but most of the time they are difficult to find due to the loose organisation of early research projects. The information related to the notes is also scattered across multiple locations. Historians sometimes take pictures at archives or libraries; these pictures end up stored in whatever location their camera or system chooses, with file names unrelated to the notes or research projects. Even for well-organised historians, it takes considerable time and discipline to organise a virtual working space on their own computer. And even with the strongest discipline, it is almost impossible to link the multiple documents together, connecting notes to pictures, scans, PDF files, spreadsheets, or the other kinds of documents they use. With all these problems in mind, and to facilitate collaboration between researchers, we designed and implemented the NTE in tight integration with the whole CENDARI infrastructure. The NTE implements the historian’s workflow described in Section 3.3.

5.1 Overview

The NTE is designed to manage documents and notes gathered for a project. Typically, a project is a thesis or a journal article, but more generally it is a container for gathering and linking information, refining it, collaborating, and preparing publications. The final publication itself is not produced inside the NTE, since many editing environments already exist for that task.

The main user interface of the NTE has four main panels coordinated using brushing and linking Becker and Cleveland (1987): the search and browse panel (Fig. 1(A)); a library where the user can manage projects and browse allocated resources (Fig. 1(B)); a central space for editing, linking and tagging resources (Fig. 1(C)); and a visualisation space for showing trends and relationships in the data (Fig. 1(D)).

The resources panel: resources are organised per project into three main folders, corresponding roughly to the way historians organise their material on their own machines. The notes folder contains files; each file is a note describing archival material related to a project. The user can select a note, whose content is then shown in the central panel. The user can edit the note and tag words to create entities such as events, organisations, persons, places and tags. She can add references to documents, which can be letters, newspaper articles, contracts or any text that acts as evidence for an observation or a statement in a note. Documents can contain named entities, a transcript, references to other documents and resources, as well as scanned images, which are displayed in the high-resolution web-based image viewer at the bottom of the central panel.

The central panel: acts as a viewing space for any type of resource and mimics the working desk of a historian. This is where entity tagging and resolution take place. The user may not be entirely sure about the true identity of an entity, for instance in the case of a city name that exists in different countries. She then has the option to resolve it manually by assigning a Uniform Resource Identifier (URI), e.g. a unique Wikipedia entry.

The visualisation panel: provides charts showing an overview of entities, distributions, frequencies and outliers in the resources of the project, and a map showing the location of any place entity.

In summary, the NTE supports the following features: (1) editing and annotation, through a rich set of editing, formatting and tagging options provided by RDFaCE; (2) faceted search, for thematic search and access to resources; (3) visualisation, showing histograms of three entity types (names, places and events) and a geographical map with support for aggregation (other visualisations could easily be integrated); (4) automatic entity recognition, in the form of a NERD service integrated in the editor; and (5) interaction: the visualisations support selection, highlighting, and pan & zoom for the map. Brushing and linking is implemented with a single flow direction, from left to right, for consistency. This supports the workflow in the NTE: select a resource, view and update it, then tag and visualise. In addition, referencing, collaboration and privacy settings are available in the NTE. Regarding privacy, notes are private and entities are public by default, but users can change these permissions.

5.2 Technologies

The NTE implements a client-server architecture, the client side relying on modern web technologies (HTML5, JavaScript, D3.js) and the server on the Django web framework. Faceted browsing and search are implemented using ElasticSearch, which unifies the exploration of all resources provided by the project. Django offers a rich ecosystem of extensions that we experimented with to support, e.g., ElasticSearch and authentication. As we needed tight integration to communicate precisely between services, we ended up implementing our own Django extensions for faceted-search support, indexing and searching with ElasticSearch, access to semantic web platforms, and support for very large images through an image server. Although the principles behind all these services are well known, the implementation is always unexpectedly difficult and time-consuming when it comes to interoperability and large datasets.

One example of scalability relates to the faceted search, which allows searching a large set of documents through important types of information, or “facets”. A search query is made of two parts: a textual query, as in standard search engines, and a set of facet names and values that restrict the search to these particular values. For example, a search can be related to the date range 1914–1915 (the date facet with an interval value) and a person such as “Wilhelm Röntgen” (the person facet restricted to one name), plus a query term such as “Columbia”. The result of a faceted search is a set of matching documents, showing snippets of text where the searched terms occur, and a list of all facets and facet values appearing in the matching documents (or the 10 most frequent facet values, to avoid saturating the user interface). We defined 10 facet types: the application where the document was created, the person who is the document’s creator, the language used, the name of the research project holding the document, and mentions of dates/periods (date), organisation names (org), historical persons (person), geo-coordinates (location), places (place) and document identifiers (ref), e.g. ISBN or URL.

ElasticSearch allows defining a structure for searching (called a mapping) and provides powerful aggregation functions to support scalability. For example, for each query we receive the list of matching geographical locations and dates; if we showed one point per result, the visualisations would be over-plotted and the communication between the server and the web client would take minutes to complete. A typical historical project easily references 10,000 locations and thousands of names; searching over many projects multiplies these numbers. Furthermore, CENDARI also provides reference datasets, such as DBpedia, that define millions of locations and dates. Therefore, our faceted search relies on ElasticSearch aggregation mechanisms, returning ranges and aggregates. Locations are returned as geohash identifiers with a count of matches in each geohash area, which enables visualising results quickly at any zoom level. Dates are likewise returned as intervals with the number of documents in each interval.
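The mechanism described above can be sketched as an ElasticSearch request body. The index mapping and field names below are illustrative assumptions, not CENDARI’s actual schema; `geohash_grid` and `date_histogram` are the standard ElasticSearch aggregations matching the geohash-area and date-interval behaviour described.

```python
def faceted_query(text, facets, date_range=None, geohash_precision=4):
    """Build an ElasticSearch request body combining a full-text query,
    facet filters, and aggregations that return buckets instead of hits."""
    filters = [{"term": {field: value}} for field, value in facets.items()]
    if date_range:
        filters.append({"range": {"date": {"gte": date_range[0],
                                           "lte": date_range[1]}}})
    return {
        "query": {"bool": {"must": [{"match": {"text": text}}],
                           "filter": filters}},
        "aggs": {
            # one bucket per geohash cell: fast to plot at any zoom level
            "locations": {"geohash_grid": {"field": "location",
                                           "precision": geohash_precision}},
            # document counts per interval rather than individual dates
            "dates": {"date_histogram": {"field": "date",
                                         "calendar_interval": "year"}},
            # the 10 most frequent values, as shown in the facet lists
            "persons": {"terms": {"field": "person", "size": 10}},
        },
    }

body = faceted_query("Columbia", {"person": "Wilhelm Röntgen"},
                     date_range=("1914", "1915"))
```

The client then renders the returned buckets directly, so the payload size is bounded by the number of buckets, not by the number of matching documents.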

The NTE fulfils its role of editor, image viewer, and search interface at scale. It currently serves about 3 million entities and approximately 800,000 documents, with a latency of around 1–10 seconds depending on the number of users and the complexity of the search queries.

6 Data Integration and Semantic Services

The primary objectives of the Data Integration Platform (DIP) were to integrate relevant archival and historical content from disparate sources into a curated repository, to provide tools for describing and integrating hidden archives, and to implement semantic services for the enquiry and interlinking of content. The DIP directly or indirectly supports several stages of the historian’s research workflow (from Section 3.3): selection of sources (2), planning to visit/visiting archives and libraries (3), knowledge organisation and structuring (8), research refinement and annotation (9), and searching through relevant material. It contributes to CENDARI’s enquiry environment by offering new ways to discover meaning and perform historical research CENDARI (2015). It ensures that data from external sources remains available in the exact format and version in which it was used for the research in the first place, thus contributing to the reproducibility of the research. It preserves the original data and their provenance information and sustains the derivatives of data processing, transformation or modification operations, ensuring data traceability.

To create a representative and rich pool of resources related to modern and medieval history, the team identified and contacted more than 250 cultural heritage institutions. We encountered significant diversity among institutions in their level of digitisation and digital presence. Some institutions provide excellent digital access, while others are still in the process of digitising their original analogue finding aids. It should be noted that neither a digital presence in itself, nor the existence of digitised material, guarantees that the material is publicly available and accessible outside of the institution’s own internal system. Furthermore, institutions differ in their data provision protocols (when these are available at all).

To address these challenges and to access and harvest the content from different institutions, we had to establish a flexible and robust data acquisition workflow, confronting legal, social and technical challenges at the same time, as described in detail in Edmond et al. (2015a). Our process is consistent with the FAIR data principles, designed to make data Findable, Accessible, Interoperable, and Re-usable Wilkinson et al. (2016). Harvested data is preserved in its original form, enriched during processing (see 6.2), and further interlinked based on the enriched and extracted information. Data is retrievable by a CENDARI identifier, along with its data origin, e.g. the providing institution or the original identifiers. The NTE and Pineapple provide search and browse functionality for end users, while the Data API exposes data in a machine-readable fashion. When data is processed or enriched, the DIP ensures that a full log of the applied transformations and the final outputs is preserved and FAIR. An exception to some of the principles concerns private research data, since we decided to balance transparency and confidentiality for metadata extracted from researchers’ notes. Additionally, the FAIRness of the integrated data at large also depends on the original data sources.

6.1 Incremental Approach to Data Integration

Data in CENDARI originate from archives, libraries, research institutions, researchers and other contributors of original content, including data created in CENDARI itself. They have the following characteristics in common: a variety of data licenses and usage policies; heterogeneous formats, conventions and standards used to structure the data; multilinguality; and diverse granularity and content quality. In addition, initial prospects suggested that CENDARI would have to accommodate data beyond the purely textual, including audio and video materials, requiring an infrastructure with a high tolerance for such heterogeneity.

This situation led to the term data soup, defined as “a hearty mixture of objects, descriptions, sources and XML formats, database exports, PDF files and RDF-formatted data” Edmond et al. (2015a). From a higher-level perspective, the data soup comprises raw data, knowledge base data, connected data and CENDARI-produced data, which require different data management approaches Edmond et al. (2015b). A mapping into a common data model (as applied in most data integration approaches) was neither possible nor preferable, for several reasons: the lack of a priori knowledge about the data, which came in plenty of standard or custom formats; the fact that standard formats often arrived in a multitude of flavours, sometimes even with contradictory semantics (e.g. “creator” was used both for the author of an archival description and for the person who wrote a letter to another person); and the absence of a widely accepted domain data model, since the WWI and medievalist groups had different requirements and perspectives on data granularity. The development of a single, widely accepted new schema (or an extension of an existing one) takes time and does not guarantee flexibility for future modifications of the schema and the tools. New data acquisitions may impose changes to the metadata schema which, apart from requiring the tools to be modified, cause further delays to the data integration scenarios. A single schema also increases the risk of incompleteness, as data would be limited to the structure it supports: a resource not fitting the schema would have to be omitted.

Even if CENDARI had established an all-inclusive new metadata schema, this would still not have guaranteed that it would serve researchers’ needs, ensure their transnational and interdisciplinary engagement, and provide an enquiry-savvy environment. Such a schema would either be highly generic and comprehensive, thus defeating the purpose of having a schema, or too specific, thus failing to fulfil the needs of both current and future researcher groups.

Consequently, it was necessary to adopt a lean model of data integration, avoiding thorough domain modelling until a better understanding of user requirements and scenarios had developed. The resulting system should enable the integration of data processing components in a pay-as-you-go fashion Hedeler et al. (2013), deliver early results and corrective feedback on the quality and consistency of data, and perform refinement and data enrichment in an incremental rather than prescribed way.

For this purpose, we adopted an approach combining two conceptual frameworks: the dataspace Franklin et al. (2005) and the blackboard Hayes-Roth (1985). This allowed us to separate the data integration process from the data interpretation and from the development of domain-specific application models and tools Edmond et al. (2015b); Edmond et al. (2015a). The dataspace promotes the coexistence of data from heterogeneous sources and a data unification agnostic to domain specifics. Such a system contributes to creating and inferring new knowledge, and benefits from the “dirt in data” Yoakum-Stover (2010) by preserving the information in its original form, enabling serendipitous discoveries in the data. The blackboard architecture is often illustrated with the metaphor of a group of specialists working on a problem Hayes-Roth (1985), sharing their solutions on the “blackboard” and reusing them to develop further solutions until the problem is solved. We applied a slightly modified blackboard model, in which a specialist is a data processing service (agent) triggered by a data insertion or update; the processing service produces an output that may be used by other services and components; additionally, it specifies a certainty score for its output, which is used to filter out the weaker results.
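A minimal sketch of this modified blackboard model, with illustrative specialists and an assumed 0.5 certainty threshold (the real services and scoring differ):

```python
class Blackboard:
    """Shared store to which specialist services post scored results."""
    def __init__(self, threshold=0.5):
        self.results = {}           # service name -> accepted output
        self.services = []          # registered specialists
        self.threshold = threshold

    def register(self, service):
        self.services.append(service)

    def insert(self, resource):
        # a data insertion (or update) triggers every registered specialist
        for service in self.services:
            output, certainty = service(resource, self.results)
            if certainty >= self.threshold:   # filter out weak results
                self.results[service.__name__] = output

def plain_text(resource, results):
    return resource["raw"].strip(), 1.0

def entity_mentions(resource, results):
    # builds on another specialist's output found on the blackboard
    text = results.get("plain_text", "")
    mentions = [w for w in text.split() if w.istitle()]
    return mentions, 0.8 if mentions else 0.2

board = Blackboard()
board.register(plain_text)
board.register(entity_mentions)
board.insert({"raw": "  Wilhelm visited Columbia  "})
```

Note how the second specialist reuses the first one’s output, which is the defining feature of the blackboard metaphor.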

6.2 Data Integration and Processing Components

CENDARI workflows are grouped around three major processes: collection, indexing and enquiry (Fig. 4) Edmond et al. (2015a). The DIP is essential to the data collection and indexing, and integrates services that transform and process data for searching and semantic enquiries by end users CENTools (2016).

Figure 4: Data integration workflows within the CENDARI infrastructure Edmond et al. (2015a)

The Repository is a central storage for harvested data and CENDARI-created content. It assigns basic provenance information to the data and ensures that data is versioned. In addition, it keeps the overall authorisation information at a single place. The Repository implements the dataspace model and has no understanding of the data semantics.

The Data acquisition components support the collection process by harvesting and ingesting content into the Repository. They range from dedicated harvesters for content providers’ APIs to a Scraper service, which extracts structured data directly from the web pages of selected providers in the absence of other data interfaces. Additionally, a dedicated data synchronisation component was developed to transfer the Archival Directory data into the Repository.

The Knowledge Base (KB) is the central node for CENDARI’s domain knowledge, based on the Virtuoso triple store. While the Repository data are file-based resources, the KB is where historical knowledge, formalised through ontologies, vocabularies or user annotations, is persisted in a schema-free but still structured form. The KB is populated from several strands: the acquisition of existing structured domain ontologies, the acquisition of knowledge from collected non-structured/non-RDF resources through semantic data processing, and user annotations made via the NTE. The KB supports the dataspace model, implements an RDF named graph for each resource, and provides authorisation features during the research refinement, annotation and knowledge organisation processes. The KB is accessed through the NTE and Pineapple.

The Data API is a REST interface that provides machine-readable access to all data in the repository according to the user’s permissions. It is agnostic towards semantically rich descriptions and is primarily aware of the dataspace, provenance, authorisation and data format. In addition, the Data API provides unified access to all derivatives of the original data in the repository generated through the processing services.
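As an illustration only, a client might construct an authorised request as follows; the endpoint path, the `plaintext` derivative name and the bearer-token header are hypothetical, since the actual Data API routes are not reproduced here:

```python
from urllib import parse, request

def data_api_request(base, resource_id, token, derivative=None):
    """Build (but do not send) an authorised GET for a repository
    resource or one of its processing derivatives."""
    path = f"{base}/resources/{parse.quote(resource_id)}"
    if derivative:
        path += f"/{derivative}"          # e.g. extracted plain text
    req = request.Request(path)
    req.add_header("Authorization", f"Bearer {token}")  # per-user permissions
    req.add_header("Accept", "application/json")
    return req

req = data_api_request("https://example.org/v1", "ead:123", "s3cret",
                       derivative="plaintext")
```

The point is that one URL scheme covers both the original resource and every derivative produced by the processing services.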

The Data processing components comprise a document dispatching service and several internal or external services for resource indexing, which together act as a modified blackboard system. These are the Litef Conductor, the TikaIndexer, the Named Entity Recognition and Disambiguation (NERD) services, the semantic processing services, and the VirtuosoFeeder and ElasticFeeder services. The following sections provide a short overview of these services.

6.2.1 Litef Conductor (Litef)

Originally named LIve TExt Framework, Litef implements a dispatching mechanism for the invocation of integrated data processing services, developed either by third parties or by CENDARI. It separates (general or domain-specific) data processing from the internal organisation of data, authentication and access permissions, and avoids deep internal dependencies between the participating data processing services. Litef reacts to the addition or update of a resource in the Repository and passes it to all interested data processing services. The results of the processing and their logs are stored in the file system and are available via the Data API in read-only mode.

For each data processing service, Litef implements a small plug-in, an indexer, which informs Litef about the types of resources it is interested in and the type of result it produces. This is a simple concept, but it is expressive enough to define even complex processing flows. The plugin-based indexing architecture ensures that the system can be extended with new processing services, either to support additional formats or to perform other specific data processing in the future.
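The plug-in contract can be sketched as follows (in Python for brevity; the class names and resource types are our own illustration, not Litef’s API). Because each result may itself be a resource that other plug-ins accept, chaining plug-ins yields processing flows:

```python
class Indexer:
    """A plug-in declares what it consumes and what it produces."""
    accepts = set()        # resource types this plug-in is interested in
    produces = None        # type of the result it generates

    def run(self, resource):
        raise NotImplementedError

class PlainTextIndexer(Indexer):
    accepts = {"tei/xml", "ead/xml"}
    produces = "text/plain"

    def run(self, resource):
        # stand-in for real text extraction
        return {"type": self.produces, "body": resource["body"].upper()}

def dispatch(indexers, resource):
    """Pass a new or updated resource to every interested plug-in; each
    result may itself trigger further plug-ins, forming a flow."""
    results, pending = [], [resource]
    while pending:
        current = pending.pop()
        for indexer in indexers:
            if current["type"] in indexer.accepts:
                out = indexer.run(current)
                results.append(out)
                pending.append(out)
    return results

outputs = dispatch([PlainTextIndexer()], {"type": "tei/xml", "body": "abc"})
```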

6.2.2 Plain-Text Indexing, Metadata Extraction, Indexing for Search

Common processing steps for most resources are plain-text extraction and metadata extraction. These have been integrated into Litef as a standalone Java library named TikaExtensions, based on the Apache Tika toolkit. The library implements dedicated parsers for the most frequent and preferred formats (EAD/XML, EAG/XML, EDM RDF, METS/XML, MODS/XML, OAI-PMH records, TEI/XML). For other formats, default parsers are used, providing less precise extraction. Litef transforms the parsed output from the library and generates several resources: a plain-text file containing the extracted textual content, a key-value pairs file serialising the output of the library, and a search index document in JSON format, in accordance with the defined search facets (see 5.2). Where possible, a link to the original location of the resource is provided. The ElasticFeeder service then recognises the newly generated index document and sends it to the ElasticSearch service integrated by the NTE, enabling search across all resources, independently of the tool in which a resource was originally created.

This approach allows us to separate the metadata extraction logic from Litef internals and to iteratively improve it in a pay-as-you-go fashion, as our knowledge about data developed. Furthermore, it allows us to reuse a wide variety of already available Tika-based metadata extraction plugins, to customise them or integrate new parsers. Note that the library can be easily reused outside of the CENDARI context.
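The three derivatives can be illustrated with a short sketch; the metadata keys and their mapping to facets below are assumptions for illustration, not the TikaExtensions output format:

```python
import json

def derivatives(parsed):
    """parsed: metadata key/value pairs plus the extracted text."""
    text = parsed.pop("text", "")
    # key-value pairs file serialising the parser output
    keyvalues = "\n".join(f"{k}={v}" for k, v in sorted(parsed.items()))
    # search index document following the facets of Section 5.2
    index_doc = json.dumps({
        "text": text,
        "language": parsed.get("language"),
        "person": parsed.get("creator"),      # the document's creator facet
        "ref": parsed.get("identifier"),      # e.g. ISBN or URL
    })
    return text, keyvalues, index_doc

text, kv, doc = derivatives({"text": "Dear Sir, ...",
                             "creator": "A. Smith",
                             "language": "en",
                             "identifier": "ISBN 0-000-00000-0"})
```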

6.2.3 NERD Services

The participatory design workshops showed that historians are mostly interested in identifying places, persons, events, dates and institutions in archival material, annotating them as entities, and linking them to the resources where these terms originally appeared. To support entity identification over a large corpus of data, NERD services were used for automatic entity extraction from pre-processed plain-text content. Two NERD services were developed in the project: one for English Lopez (2009) and another for multiple languages (Bulgarian, German, Greek, English, Spanish, Finnish, French, Italian, Ripuarisch Platt, Latin, Dutch, Serbian (Cyrillic) and Swedish) Meyer (2016). Both services expose very similar REST APIs and provide JSON-formatted outputs listing the recognised entities and the confidence of each result. Although their entity recognition methods vary, both use Wikipedia for entity disambiguation.
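A consumer of these services might filter the output as sketched below; the exact JSON shape (an `entities` list with `mention`, `type` and `confidence` fields) is an assumption based on the description above, not the documented API:

```python
def accepted_entities(nerd_response, min_confidence=0.5):
    """Keep recognised entities above a confidence threshold,
    grouped by entity type."""
    grouped = {}
    for ent in nerd_response.get("entities", []):
        if ent["confidence"] >= min_confidence:
            grouped.setdefault(ent["type"], []).append(ent["mention"])
    return grouped

response = {"entities": [
    {"mention": "Somme", "type": "place", "confidence": 0.92},
    {"mention": "Front", "type": "place", "confidence": 0.31},
]}
entities = accepted_entities(response)
```

Thresholding on the reported confidence mirrors how the blackboard model filters out the weaker results before they are persisted.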

6.2.4 Semantic data extraction and processing

For each resource in the repository, Litef creates a semantic representation as a named document graph, with extracted metadata added as properties of the resource within that graph. Depending on the processing results of the NERD services, semantic representations are created for entity resources of type person, place, event and organisation, following the CENDARI ontology structures. For the most common formats, such as EAD and EAG, a more accurate semantic mapping is performed. All outputs of the semantic processing services are persisted in the KB.
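The idea of a named document graph can be illustrated with a minimal TriG serialisation; the namespace and property names here are placeholders, not the CENDARI ontology:

```python
def named_graph(resource_uri, properties):
    """Serialise one resource's metadata as a TriG named graph whose
    graph name is the resource URI itself."""
    triples = "\n".join(
        f'  <{resource_uri}> <http://example.org/prop/{p}> "{v}" .'
        for p, v in sorted(properties.items()))
    return f"<{resource_uri}> {{\n{triples}\n}}"

graph = named_graph("http://example.org/doc/42",
                    {"title": "Letter from the Somme",
                     "creator": "Unknown"})
```

Keeping one graph per resource makes it easy to attach provenance and authorisation to the graph as a whole, and to replace it wholesale when the resource is reprocessed.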

6.3 The development of CENDARI Knowledge Base (KB)

A number of ontologies (see conceptualisations in (Gruber, 1993)) were used within the CENDARI Knowledge Organisation Framework developments. These range from metadata schemas for describing archival institutions, through controlled vocabularies and gazetteers, to domain ontologies structuring knowledge about both supported research domains.

We created extensive ontology development guidelines (CENOnt, 2014), focusing on the reuse of existing ontology element sets and suitable instances for each domain. In a joint workshop, researchers from both domains identified similar types of concepts and entities to be represented, broadly fitting within the Europeana Data Model (EDM) classes: Agent, Place, Timespan, Event, and Concept. Domain differences could be accommodated through EDM extensions, allowing a finer level of granularity while enabling unification at a coarser level and fostering data interoperability. For example, to coincide with the anniversary of the start of WWI, many research projects published WWI data; however, the format and quality of this data varied considerably: Excel spreadsheets, data scraped from web pages, and badly formed single large RDF files. We implemented custom solutions to transform relevant existing vocabularies into the appropriate EDM extensions. The transformed ontologies were aligned to provide better integrated coverage of the domain than any single ontology provided on its own. Due to the nature of the data, we used a terminological approach to the alignment (for other approaches see (Shvaiko and Euzenat, 2013)), more specifically a character-based similarity measure and the I-SUB technique (Stoilos et al., 2005). The concepts in the ontologies also made use of MACS (Clavel-Merrin, 2004) to facilitate multilingual access to the resources in English, German and French.
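As an illustration of terminological alignment, the sketch below uses a generic character-based ratio (Python’s difflib) as a stand-in for the I-SUB technique, which we do not reproduce here; the 0.85 threshold is arbitrary:

```python
from difflib import SequenceMatcher

def align(labels_a, labels_b, threshold=0.85):
    """Return candidate equivalences between two label sets, scored by
    a character-based similarity ratio."""
    pairs = []
    for a in labels_a:
        for b in labels_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

matches = align(["Battle of the Somme", "Verdun"],
                ["battle of the somme", "Ypres"])
```

Candidate pairs above the threshold would then be reviewed before being asserted as alignments between the ontologies.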

Transformed ontologies form a large part of the KB, along with the data automatically generated by the DIP. The data can be browsed and searched through Pineapple, which provides a unified view over the transformed ontologies, the semantic data extracted from heterogeneous archival resources, and the medieval manuscript ontologies, including visual navigation through the text and structure of the medieval manuscripts. As an example, for an entry about “Somme”, Pineapple will deliver entries from the DBpedia, WWI Linked Open Data and Encyclopedia 1914-1918 ontologies. By navigating to one of them (e.g. the “Battle of the Somme” event), more information about the event and potentially associated archival resources is displayed. For the latter, more details are available, such as the text extracted from the resource’s raw data and the generated mentions of organisations, places, periods, persons or other semantically related archival resources.

Transformed domain ontologies were published in the Repository with the Ontology Uploader, which creates additional provenance metadata based on the OAI-ORE resource maps model, expressing the relationships between the original and transformed data.

Element-set ontologies developed or adapted within the project are published in raw format on GitHub CENGitHub (2016) and are available for visual exploration through the WebVowl tool. A smaller portion of the KB was created by researchers through the NTE. They primarily focused on identifying and tagging entities within notes and uploaded documents, and on resolving them against DBpedia (see 5). This knowledge was captured according to the general-purpose ontology built into the original software used for the NTE. Providing a richer semantic editor or annotator tool to support user-friendly ontology development by researchers, along with relations between entities, notes and other archival resources, proved to be very challenging. Further development would have required a careful balance between the flexibility of the tool and the simplicity of the user interface, which was deemed beyond the scope of the CENDARI project.

7 Discussion

The goal of the CENDARI project was to support historical researchers in their daily work, and all of the infrastructure described here has been designed with that goal in mind. This goal is relatively original in the digital humanities landscape and required many experiments and a complex infrastructure. Although the goal has been reached technically, only at the end of the project do we fully understand which components are essential and what the capabilities and digital literacy of historians in general are. We summarise here the positive outcomes of the project and some of the pitfalls.

It is very clear at the end of the project that historians benefit immensely from supporting tools such as those offered by CENDARI. Without proper support, data management at the level of the individual historian has become a technical nightmare. As discussed in Section 3, CENDARI benefited from the feedback of historians expressing their workflows and needs; to our knowledge, these had never before been stated so explicitly and constructively. Even if our workflow does not support all the research steps performed by historians, it supports a large portion of them.

The infrastructure to support faceted search and semantic enrichment is very complex. Is it worth the effort when large companies such as Google and Microsoft are investing in search engines? Our answer is positive: historians need more specific support than search engines currently offer. It may sound like a banality, but it bears stating that not all data is digital; rather, the opposite is true when cultural heritage is taken into account. Unique historical data exists only in paper form. The largest part of historical material is neither digitised nor available via metadata. While search engines target modern-life resources, historians want information about past time ranges. They also have to deal with places that changed their names or even disappeared. Modern search tools are not meant for these goals and should be supplemented by more focused tools like the ones we designed. Our ingestion and enrichment tools are complex because they need to deal with multiple types of data and extract useful information from them so that historians can use it. Offering a faceted search engine is very useful to historians because it helps them contextualise the search results through time, location, and other facets. However, extracting good-quality entities has a cost, which explains the complexity of CENDARI. In addition to the automatic extraction of entities from existing documents and data, it is now clear that a large portion of archive and library data will never be digitised, as acknowledged by all the institutions the CENDARI project interacted with. Therefore, CENDARI decided to use the data gathered by researchers as a historical resource; we believe this decision is essential to better support historical research in general.
We believe that the “political” decision to keep the historians’ notes private by default but to publish the tagged entities publicly is an effective way to spread and share historical knowledge with little risk of disclosing historians’ work.

We realised that there is a tension between the capabilities technology can provide and the digital training historians need to understand and use them. Our user interface exposes some powerful functions in a simple way, but there are limits. For example, manual and automatic tagging of notes and transcriptions was perceived as very useful and important, but we support only seven types of entities, because the user interface would become too complicated with more. Richer enrichment is possible with the RDFace editor that we provide, but it has seldom been used because of the complexity of manually tagging text through ontologies.
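The trade-off of a small, fixed entity vocabulary can be illustrated with a toy tagger. This is a hedged sketch: the type names and the gazetteer below are our own illustrative choices, not the actual CENDARI vocabulary or tagging pipeline, and real entity recognition is of course far more involved than substring lookup.

```python
# Toy gazetteer-based tagger with a small, fixed set of entity types,
# mirroring the UI trade-off discussed above. Type names are hypothetical.
ENTITY_TYPES = {"person", "place", "organisation", "event",
                "date", "reference", "tag"}       # an illustrative set of seven

GAZETTEER = {                 # tiny illustrative lookup table
    "Foch": "person",
    "Somme": "place",
    "Red Cross": "organisation",
}

def tag(text):
    """Return (surface form, entity type) pairs for known entities in text."""
    found = []
    for surface, kind in GAZETTEER.items():
        assert kind in ENTITY_TYPES   # reject types the UI cannot display
        if surface in text:
            found.append((surface, kind))
    return found

print(tag("Foch crossed the Somme in 1918."))
```

Capping the vocabulary keeps both the tagger and the interface simple; every tag is guaranteed to map onto a facet the user interface already knows how to render.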

We gathered knowledge about how researchers work and about some typical workflows. We documented all of this and started a co-evolution between our tools and the historians’ capabilities and needs, though more iterations became necessary. Our agile development processes, outlined in section 3.4, allowed us to perform short iterations and to deploy code to production within hours before evaluation workshops; conversely, this added complexity at every step. Some level of simplification is always needed to reach a new target audience such as research historians. Early integration of the infrastructural components is essential to ensure timely feedback and reduce friction later, but in a setup of parallel and distributed development with several teams on individual iteration schedules, efficient and clear communication among all participants is crucial to align the work into a common, collaborative development effort.

By mixing agile methods and long-term planning, CENDARI built a reproducible infrastructure that has since been taken over by DARIAH, in an effort to ensure its sustainability and availability for future use by historians and scholars from other disciplines alike.

8 Conclusion

Through the functional requirements identified during the Participatory Design Workshops, a firm basis could be laid for supporting historians in their specific workflow. The tools and services provided by the CENDARI research infrastructure support ordering, organising, searching, note-taking, annotating and transcribing, as well as the analysis of networks and time–space relationships and the sharing of resources. Researchers are thus supported in drawing together resources and facts about persons, organisations and structures in the timeframe under consideration, making visible patterns that classical historiographical methods cannot easily bring into focus.

An interesting result of the CENDARI project was the formulation by historians of requirements that support not just their specific research workflow but the research process as a whole. The visualisations and built-in collaboration features of the CENDARI infrastructure – sharing resources, setting up collaborative projects, writing articles collaboratively – may at first glance seem secondary to the research process, but they enhance the analysis of search results and benefit the community of historians in general. This can be seen as the “pedagogical” offer of the infrastructure: while historians are generally trained to work on their own, the infrastructure offers a range of possibilities for collaborative information collection and writing. It thus lays the basis for a truly collaborative and transnational historiography.

Furthermore, the examples resulting from the collaboration of information engineers, historians and archivists are very promising beyond the achievements of the CENDARI project. The development of ontologies, and the possibility of their collaborative extension by users, can be regarded as a potentially fruitful domain of interaction between the disciplines involved. Another example is the enrichment of metadata by researchers in international standard formats, and its hand-over to the cultural heritage institutions that established and provided the metadata. Quite obviously, an important part of historians’ work in archives consists of describing archival material in more depth than archivists and librarians do, since the latter aim at a much more formal level of description. Given the interoperability of data through the infrastructure, enriched metadata shared between users and cultural heritage institutions is a win-win situation for all sides involved. To achieve a cross-domain level of interoperability of data and services, however, syntactic conformance and semantic data per se are not sufficient. Enabling tools for researchers to structure their knowledge and map it across different domains calls for joint efforts in domain modelling, technical implementation across research infrastructures, training and communication with researchers, and strong research-community participation. Could a medieval archaeologist working with, e.g., ARIADNE benefit from CENDARI? Our answer is positive, but we are aware that additional service-level integration, as well as alignment at the level of semantic data and ontologies, would be needed for a flawless user experience.
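The metadata round-trip described above can be sketched in a few lines. This is a deliberately simplified illustration: the record below uses only a handful of EAD elements (`<c>`, `<did>`, `<unittitle>`, and `<odd>`, EAD's "other descriptive data" element), and the `enrich` function is our own hypothetical helper, not part of any CENDARI or EAD tooling.

```python
# Illustrative sketch of merging researcher annotations into a simplified,
# EAD-like record that could be handed back to the providing institution.
import xml.etree.ElementTree as ET

def enrich(ead_xml, annotations):
    """Append researcher notes as <odd><p> elements; return the new XML."""
    root = ET.fromstring(ead_xml)
    for note in annotations:
        odd = ET.SubElement(root, "odd")   # EAD: "other descriptive data"
        p = ET.SubElement(odd, "p")
        p.text = note
    return ET.tostring(root, encoding="unicode")

record = "<c><did><unittitle>Regimental diary</unittitle></did></c>"
print(enrich(record, ["Mentions the Somme offensive, July 1916."]))
```

Because the enrichment stays inside a standard container format, the institution's original description is preserved verbatim while the researcher's deeper description travels alongside it.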

The infrastructure built by the CENDARI project does not support several steps of the classical hermeneutic interpretation that is typical of historians. It can be questioned whether there will ever be tools supporting humanists in the specific practices and competences that mark this profession: the observation and interpretation of ambivalence and polysemy, of ambiguity and contradiction, and the differentiated analysis of cultural artifacts. The broad range of tools, services and resources offered by the CENDARI infrastructure underlines the fact that not every need formulated by historians can be satisfied, and that mission creep with respect to requirements has to be avoided.

9 Acknowledgments

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n°284432.

We are thankful to all CENDARI data providers who contributed with their content and made it available for research.

We would like to express our sincere gratitude to all CENDARI partners for their great contributions to the development and setup of the infrastructure. The fusion of researchers, archivists, librarians and IT experts has made the CENDARI project a unique learning experience for all of us.

