
Jascha Schmitz, Anna Grönig: Report on the conference – “Data for History 2021” – Day 7 (30 June 2021)

Session 13 – Integrating Different Medialities

Øyvind Eide and Zoe Schubert (University of Cologne) opened with a presentation on their project Kompakkt, entitled “Space, time, and agents in theatre: Digital documentation of the transience of performances through theatrical agents in time and space”.
First, Øyvind Eide introduced the role of space in theatre: while in a text spatiality is represented by linguistic expressions, in a performance space is represented both by the physical spatiality of the stage (and beyond) and by linguistic expressions. Each of these categories therefore requires its own form of documentation. Many decisions have to be made, and ways have to be found, to encircle the non-existent centre of documentation, the performance itself, in order to make it part of a (theatre) collection.
Secondary objects and materials are used for this, such as notes, drawings, audio and video recordings. Digital methods now offer the possibility to make these objects/artifacts more available, describable and linkable.
To this end, they developed Kompakkt, an online repository in which artifacts used in performances are made digitally available and explorable, drawing on a selection of artifacts from the theatre collection of the University of Cologne.

In the second half of the lecture, Zoe Schubert demonstrated how these artifacts are presented. She showed that the interactive media interface of Kompakkt helps to convey the value of these artifacts as well as their intended use. Kompakkt also provides standardised and accessible metadata, including annotations. Through these metadata an interactive system is established in which the artifacts can be linked together, used as sources with reference to performances, and thus also drawn on for research.
The data modelling thus enables storytelling through time and space and creates an interactive system that does not replicate but represents performances and their production.
This presents a (digital) possibility to model theatre and also theatre history.

The second presentation in this session was given by Zakiya Collier (Weeksville Heritage Center, Brooklyn, NY) and Sarah Adams (Semantic Lab at Pratt, Pratt Institute School of Information, New York, NY), who introduced their project “Jazz History as Linked Data: The Linking Lost Jazz Shrines Project”, a collaboration between the Weeksville Heritage Center in Brooklyn, NY, and the Semantic Lab at the Pratt Institute.
Zakiya Collier explained that the aim of the project is to make the forgotten history of jazz in Weeksville visible again and researchable through digital methods and representation. The project will also extend the existing jazz ontology of the Linked Jazz Project, developed by the Semantic Lab. The underlying sources are mostly digitised transcripts of contemporary African-American testimonies, created between the 1930s and the 1960s, which were collected in a previous project called Weeksville Lost Jazz Shrines of Brooklyn (WLJSB).

Sarah Adams then elaborated on data creation and integration into the existing Linked Jazz Project.
They started by transferring the original Linked Jazz data from a traditional RDF triple store into Wikibase, as the latter offers better opportunities to add qualifiers and references and thus to contextualise the statements in the model.
It also gave them the opportunity to adapt and extend the Linked Jazz data model for data from the Lost Jazz Shrines: new classes and properties were added to define the relationships between individual musicians, the new class “music group”, and the new place class “music venue”. Furthermore, instead of always going through the frequently used intermediate step of events, they intend to take a shortcut and connect people directly to places through properties (see the sketch below).
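To make the move to Wikibase more concrete, the following is a minimal sketch, not project code, of how a flat triple from the original store can be restated as a qualified and referenced statement using the Wikibase RDF mapping; the base URI, item IDs and property IDs are hypothetical.

```python
# Sketch only: a flat RDF triple vs. a qualified, referenced Wikibase statement.
# The Wikibase instance, items (Q...) and properties (P...) below are invented.
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import XSD

BASE = "https://semlab.example/"           # hypothetical Wikibase instance
WD = Namespace(BASE + "entity/")           # items
WDT = Namespace(BASE + "prop/direct/")     # "truthy" properties
P = Namespace(BASE + "prop/")              # property -> statement node
PS = Namespace(BASE + "prop/statement/")   # statement value
PQ = Namespace(BASE + "prop/qualifier/")   # qualifiers
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
for prefix, ns in [("wd", WD), ("wdt", WDT), ("p", P), ("ps", PS), ("pq", PQ), ("prov", PROV)]:
    g.bind(prefix, ns)

musician, venue = WD.Q11, WD.Q42            # e.g. a musician and a "music venue" item
performed_at, point_in_time = "P7", "P8"    # hypothetical property IDs

# Flat triple, as it might have lived in the original triple store:
g.add((musician, WDT[performed_at], venue))

# Qualified statement with a time qualifier and a reference node (e.g. a WLJSB transcript):
stmt, ref = BNode(), BNode()
g.add((musician, P[performed_at], stmt))
g.add((stmt, PS[performed_at], venue))
g.add((stmt, PQ[point_in_time], Literal("1952", datatype=XSD.gYear)))
g.add((stmt, PROV.wasDerivedFrom, ref))

print(g.serialize(format="turtle"))
```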

Sarah Adams also presented the workflow for creating the data model in more detail. First, they identified the entities they considered important, using Sélavy, a document-processing tool that reads text and exports triples. Once identified, the entities were entered into an entity checklist and added to Wikibase wherever they were not yet represented in the model. Next, the RDF triples were prepared. Finally, the data in Sélavy are to be checked for errors and then exported to Wikibase.
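As an illustration of the checklist step, the following rough sketch checks whether an extracted entity already exists as an item in a Wikibase instance before it is added; the API endpoint and the example labels are assumptions, not the project's actual setup.

```python
# Sketch of an "entity checklist" lookup against a Wikibase instance (assumed endpoint).
import requests

API = "https://semlab.example/w/api.php"   # hypothetical Wikibase API endpoint

def existing_item(label):
    """Return the ID of an item matching the label, or None if it is not yet in the model."""
    params = {
        "action": "wbsearchentities",  # standard Wikibase search module
        "search": label,
        "language": "en",
        "type": "item",
        "format": "json",
    }
    hits = requests.get(API, params=params, timeout=10).json().get("search", [])
    return hits[0]["id"] if hits else None

# Illustrative labels only; in practice these would come from the Sélavy export.
for entity in ["Example Musician", "Example Music Venue"]:
    match = existing_item(entity)
    print(entity, "->", match if match else "add to entity checklist")
```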

The third and final paper, “Project Omega: Modelling an Archive Catalogue to Support Future History”, was presented by Faith Lawrence and Jone Garmendia (The National Archives, Kew) and Adam Retter (Evolved Binary). The National Archives preserves 1,000 years of British and global history in diverse content and formats, while its catalogue includes more than 16 million records and nine million digitised and born-digital assets.
Since 2019, the “Archives for Everyone” project has been in place, a big part of which is the drive to make the archive more inclusive and accessible. As part of this, Project Omega was set up with the aim of building a new catalogue system to manage the catalogue data.
Faith Lawrence highlighted the manifold problems in the current system, ranging from multiple, overlapping databases to a back-end editorial system that can no longer be supported and therefore cannot be improved.

As the catalogue system is outdated, a new editorial back end is necessary to improve the reliability and record-keeping of information, especially provenance, and to better support linkage between related internal and external resources. It would also make the information more accessible to the public, especially through better linkage of documents.
Different vocabularies and data models were examined for their suitability for the project, as it includes a wide variety of born-digital, born-physical and hybrid data.
Ultimately, the Matterhorn RDF data model was chosen: it draws on a variety of existing, internationally standardised ontologies, operates under an open-world assumption, and thus meets the required conditions. The model can also implement RiC-CM (Records in Contexts), a standard for the description of archival records (a small sketch follows after this paragraph).
Challenges are posed by validation and by old errors in data and identifiers, as these legacy identifiers are not as unique and machine-processable as needed.
The latest version (6.01) of the data model dates from March 2021. Further data modelling and data validation have been conducted, and the work is being documented in blog posts on Medium.
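As a purely illustrative sketch of the direction such a catalogue description could take, the snippet below expresses a record and its series with terms from RiC-O, the ontology accompanying RiC-CM; the class and property names are approximations chosen for illustration, and the identifiers and mapping are invented rather than Project Omega's actual model.

```python
# Illustrative only: a catalogue entry expressed with (approximate) RiC-O terms.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")  # RiC-O namespace (assumed)
EX = Namespace("https://catalogue.example/")                     # hypothetical identifier space

g = Graph()
g.bind("rico", RICO)

g.add((EX["record/1"], RDF.type, RICO.Record))
g.add((EX["record/1"], RICO.title, Literal("Example digitised file")))   # invented values
g.add((EX["record/1"], RICO.identifier, Literal("EX 1/2/3")))

g.add((EX["series/1"], RDF.type, RICO.RecordSet))
g.add((EX["record/1"], RICO.isOrWasIncludedIn, EX["series/1"]))          # record belongs to a series

print(g.serialize(format="turtle"))
```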

The following discussion focused on the possibilities and insights offered by the individual projects' approaches, and on how their models can be referenced by other standardised models.
For Zakiya Collier and Sarah Adams, the focus was on the model's suitability for representing particular relationships between people and places in the transcripts, rather than on interlinkability with other ontologies. Since they are working with Wikibase, they can also access Wikidata and link their model to it.
Zoe Schubert and Øyvind Eide regard their project as an important extension of legacy databases. Both the origin of the data and their context are crucial. The project is a building block for showing annotations and different perspectives. They have started to map their data to CIDOC CRM. Expanding from textual annotations to three-dimensional annotations, however, represents a challenge. The annotation model of the Europeana Data Model or the Web Annotation Data Model could possibly meet these requirements (see the sketch after this paragraph).
This issue was also of essence to the Omega project and one of the reasons to choose the Matterhorn model, as it allows them to share data more easily with other institutions.
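To illustrate what such an annotation might look like, here is a rough sketch of a Web Annotation Data Model annotation targeting a 3D object in a repository like Kompakkt; the envelope follows the W3C model, but the target URI and especially the 3D point selector are invented for illustration, since selecting regions of three-dimensional objects is precisely the open question.

```python
# Sketch of a W3C Web Annotation on a 3D object; the selector type is invented.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "Prop used in the second act",   # invented annotation text
        "format": "text/plain",
    },
    "target": {
        "source": "https://repository.example/objects/stage-prop-42",  # hypothetical 3D model URI
        "selector": {
            "type": "PointSelector3D",   # invented selector type; no standard 3D selector exists
            "x": 0.41, "y": 1.02, "z": -0.37,
        },
    },
}

print(json.dumps(annotation, indent=2))
```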

Session 14 – Building and Visualising Semantic Networks

The penultimate session of the D4H 2021, titled “Building and Visualising Semantic Networks”, started with Toby Burrows (Oxford e-Research Center) speaking about “Modelling the History of Medieval and Renaissance Manuscripts for the Mapping Manuscript Migrations Portal”, a work that was done together with Kevin Page, David Lewis and Emma Thomson (Oxford e-Research Center), Mikko Koho, Jouni Tuominen and Eero Hyvönen (Aalto University), Doug Emery and Lynn Ransom (University of Pennsylvania) as well as Hanno Wijnsman (Centre National de la Recherche Scientifique). Burrows presented challenges and results of the project “Mapping Manuscript Migrations” (MMM), which aims at providing an aggregated, browsable database for manuscripts based on a Linked Open Data (LOD) framework. This goal was achieved in January 2020 when the MMM Portal was launched.

One of the main challenges for the project was making the most important source datasets, which provided the manuscript data, interoperable. The team worked on fitting different available ontologies, most notably CIDOC CRM and FRBROO, to integrate the three different data models of the source datasets into the project’s own data model. A key task was also to reconcile entities in the model, using the existing references in the source datasets to the Getty Thesaurus of Geographic Names (TGN) and the Virtual International Authority File (VIAF) for places and agents, respectively. This proved difficult, however, since neither authority file is always ideal for medieval and early modern contexts.
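The core of that reconciliation idea can be sketched very simply: entities coming from different source datasets are merged when they carry the same VIAF (agents) or TGN (places) reference. The dataset names, labels and authority URIs below are placeholders, not MMM data.

```python
# Sketch: group source records that point at the same authority-file URI.
from collections import defaultdict

source_records = [  # invented examples; real records carry VIAF/TGN URIs from the source datasets
    {"dataset": "A", "label": "Phillipps, Thomas", "authority": "https://viaf.org/viaf/0000001"},
    {"dataset": "B", "label": "Sir Thomas Phillipps", "authority": "https://viaf.org/viaf/0000001"},
    {"dataset": "C", "label": "Oxford", "authority": "http://vocab.getty.edu/tgn/0000002"},
]

merged = defaultdict(list)
for rec in source_records:
    merged[rec["authority"]].append((rec["dataset"], rec["label"]))

for authority, variants in merged.items():
    print(authority)
    for dataset, label in variants:
        print(f"  {dataset}: {label}")
```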

Especially challenging was reconciling the data into one of the model's central pieces, provenance events. Many of these problems could be traced back to the different modelling approaches of the source datasets, resulting in the challenge of integrating them all into a single model. The project team dealt with this by keeping the data model generic and concluded that this might be necessary for any attempt to integrate pre-existing source datasets that were not modelled with interoperability in mind. Such a generic modelling approach might nevertheless be sufficient for many of the questions historians pursue in their research.

The second presentation in this session was held by Pavlos Fafalios (Centre for Computer Science – FORTH-ICS) and Athina Kritsotaki (Centre for Computer Science – FORTH-ICS) on “Challenges and solutions towards creating a semantic network of historical maritime data”, in collaboration with Korina Doerr, Kostas Petrakis, Giorgos Samaritakis, Anastasia Axarudou, Yanni Tzitzikas and Martin Doerr (Centre for Computer Science – FORTH-ICS) as well as George Bruseker (Takin.solutions). They presented the current state of the ongoing SeaLiT project, which explores processes and effects of the transition from sail to steam navigation in the Mediterranean and Black Sea between the 1850s and the 1920s. It aims at establishing a digital platform for exploring different sources and data concerning the research goals.

The central challenge that Fafalios addressed in his talk was managing the vast array of different types of information and sources the project wants to integrate into the platform (e.g. crew lists, logbooks, census data, ship registers, …). Two of the main problems in modelling such diverse data concerned the different languages the sources are written in and the fact that many come from different authorities. This means that they can be structured quite differently even when concerning similar information for the same period.

The team's solution to this problem is a structured workflow that adheres to principles of provenance awareness, high recursivity and the use of established documentation and publication standards. Fafalios presented the concrete workflow and toolset used in the SeaLiT project, including transcription and curation of sources via FAST CAT and FAST CAT Team into record templates, data modelling based on CIDOC CRM, and transformation of the data via 3M / X3ML into a semantic network. They stressed the importance of such an approach for achieving data sustainability and semantic interoperability, but also highlighted some problems with the current workflow. At the moment, the process is time-consuming and difficult for the researchers, especially on a technical level, e.g. regarding the configuration of the FAST CAT templates and the expertise required in CIDOC CRM.
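As a condensed illustration of what such a transformation produces, the following sketch turns a transcribed crew-list row into CIDOC CRM-style triples; the row, URIs and mapping choices are invented for illustration and greatly simplified compared with the project's actual X3ML mappings.

```python
# Sketch: one crew-list row, as it might leave a record template, mapped to CRM-style triples.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")  # CIDOC CRM namespace
EX = Namespace("https://sealit.example/")                # hypothetical resource namespace

row = {"sailor": "Giovanni Rossi", "port": "Genoa"}      # invented transcription values

g = Graph()
g.bind("crm", CRM)

person = EX["person/giovanni-rossi"]
place = EX["place/genoa"]
voyage = EX["voyage/1871-03"]

g.add((person, RDF.type, CRM.E21_Person))
g.add((person, RDFS.label, Literal(row["sailor"])))
g.add((place, RDF.type, CRM.E53_Place))
g.add((place, RDFS.label, Literal(row["port"])))
g.add((voyage, RDF.type, CRM.E7_Activity))          # the voyage modelled as an activity
g.add((voyage, CRM.P11_had_participant, person))    # the sailor took part in the voyage
g.add((voyage, CRM.P7_took_place_at, place))        # the voyage is tied to the port

print(g.serialize(format="turtle"))
```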

The third and final talk of this session was held by Christopher Pollin (Austrian Centre for Digital Humanities – University of Graz) on “Mapping Semantic Constructs in Historical Domains to Visual Structures as Basis for Resource Discovery. Using the Example of Historical Financial Records”. He presented the Digital Edition Publishing Cooperative for Historical Accounts (DEPCHA), which aims to build a research environment (in the form of a web-based dashboard) that provides integrated access to, and visualised information about, different datasets in the domain of historical financial records. A key requirement for this is interoperability between the diverse source datasets, which the project addresses by building a Bookkeeping Ontology based on CIDOC CRM and by devising a workflow that can (semi-)automatically produce useful visualisations of these data.

The main argument of Pollin's talk was that there is a need for more customised visualisations of semantic networks that meet user- and domain-specific needs, in contrast to many existing, overgeneralised approaches. However, interoperability of datasets in the accounting domain is challenging and constrained by historical specificities – like different and changing units of measurement or currencies – which cannot yet be modelled sufficiently with existing ontologies. For visualising different types of information, Pollin shared an early version of a decision tree that could guide modelling decisions.
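As a pointer to how such specificities can at least be made explicit, the sketch below records a monetary value with CIDOC CRM's monetary-amount construct, so that the currency is a modelled entity rather than an implicit assumption; the entry, URIs and values are invented, and this shows only one of the CRM building blocks a bookkeeping model could draw on, not DEPCHA's actual Bookkeeping Ontology.

```python
# Sketch: a monetary amount with an explicit currency entity (invented data).
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, XSD

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("https://depcha.example/")   # hypothetical resource namespace

g = Graph()
g.bind("crm", CRM)

amount = EX["entry/1/amount"]
currency = EX["currency/pound-sterling"]

g.add((currency, RDF.type, CRM.E98_Currency))
g.add((currency, RDFS.label, Literal("pound sterling")))
g.add((amount, RDF.type, CRM.E97_Monetary_Amount))
g.add((amount, CRM.P181_has_amount, Literal("12.5", datatype=XSD.decimal)))
g.add((amount, CRM.P180_has_currency, currency))    # the currency is a linked entity, not a bare string

print(g.serialize(format="turtle"))
```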

The ensuing discussion focused firstly on how the projects dealt with deficiencies of existing ontologies (especially CIDOC CRM) when adapting them for their domains, and secondly on ways to share possible solutions for ontology building with the wider community.
All three projects experienced the need to find individual solutions to the specific requirements and problems of their sources and domains. In the case of the MMM project, for example, the provenance of manuscripts was difficult to model with existing approaches, especially because of the sometimes very different data models of the source datasets.

All three projects (notably MMM) are also already publishing, or plan to publish, their solutions, models and data in one way or another, although the need to do so in an accessible and centralised manner was stressed during the discussion. The participants strongly agreed that sharing modelling endeavours, and community building in general, will be crucial in making sure that lessons learned in projects like the ones presented can be used productively and sustainably. In the end, Francesco Beretta proposed a future workshop event for sharing experiences and solutions for modelling and ontology building.


About the authors:

Jascha Schmitz has a bachelor’s degree in history and social sciences and is now studying for a master’s degree in history with a focus on Digital History at the Humboldt-Universität zu Berlin.

Anna Grönig has a bachelor’s degree in German language and literature and history and is now studying for a master’s degree in history with a focus on Digital History at the Humboldt-Universität zu Berlin.



