
Julia Pabst, Anna Grönig: Report on the conference – “Data for History 2021” – Day 4 (09 June 2021)

Session 8 – Providing Data on Persons

The fourth day of the conference started with the lecture "IPIF – pragmatic modelling decisions". Contributors were Matthias Schlögl (Austrian Academy of Sciences, Vienna), Georg Vogeler, Gunter Vasold (University of Graz) and Richard Hadden (Austrian Academy of Sciences, Vienna). The idea of IPIF is to take a pragmatic approach to modelling based on the factoid model and to provide a RESTful API to query data structured in this way. In the model, a factoid aggregates information on a person, the source, and the statements extracted from the source by the creator of the factoid, together with metadata on its creation and modification. Dating is often difficult in history, since sources give time ranges or broad terms. To model time and dating, the IPIF model uses a simple approach, distinguishing a date label from a sort date, which makes it possible to keep the original information in textual form while still filtering by date. The whole idea is to simplify the way data is queried: SPARQL is richer, but because of its complexity you always have to know very specific details of the implementation. IPIF takes a less formalised approach to make it easier to work with different data sources.
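To make the date handling concrete, here is a minimal sketch of what querying an IPIF-style endpoint could look like; the base URL, the parameter names and the sample data are illustrative assumptions, not taken from a concrete IPIF deployment.

```python
import requests

# A minimal sketch of querying an IPIF-style endpoint. The base URL,
# the parameter names and the sample data are illustrative assumptions,
# not a concrete deployment.
BASE = "https://example.org/ipif"

# The two-part date representation described above: the original wording
# survives as a label, while the sort date makes filtering possible.
example_date = {
    "label": "shortly after Easter 1482",  # original textual form from the source
    "sortdate": "1482-04-10",              # normalised form used for sorting/filtering
}

# Fetch statements in a date range (parameter names are assumptions).
resp = requests.get(
    f"{BASE}/statements",
    params={"from": "1480-01-01", "to": "1490-12-31"},
)
resp.raise_for_status()
for st in resp.json().get("statements", []):
    date = st.get("date", {})
    print(date.get("label"), "->", date.get("sortdate"))
```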

Bärbel Kröger and Christian Popp (Göttingen Academy of Sciences and Humanities) presented their research project "WIAG – A dataHub for Medieval and Early Modern Research" in the following lecture. WIAG creates an editorial system embedded in a domain-specific knowledge platform and will establish a technical framework for structuring, standardising and providing research data. Based on extensive data collections from the research projects Germania Sacra and Deutsche Inschriften des Mittelalters und der frühen Neuzeit, a reliable domain-specific knowledge hub for Medieval and Early Modern research will be created. WIAG features data for the identification of persons, ecclesiastical institutions, objects and places, as well as other classifying data (types of artefacts, functions, religious orders etc.). The presentation showed the general approach, but also the challenges present in the data sources WIAG draws from: in Wikidata, for example, a person may be linked to an institution that did not yet exist at the time in question.
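That consistency problem can be illustrated with a small sketch; all names, dates and field names below are invented for illustration and do not come from WIAG or Wikidata.

```python
from datetime import date

# Invented example data illustrating the problem described above: an
# office held at an institution that did not yet exist at the time.
institution = {
    "name": "Example Abbey",
    "founded": date(1150, 1, 1),
    "dissolved": date(1540, 1, 1),
}
office = {
    "person": "Example Person",
    "institution": "Example Abbey",
    "from": date(1100, 1, 1),
    "to": date(1120, 1, 1),
}

def link_is_plausible(office: dict, institution: dict) -> bool:
    """The office term must fall within the institution's existence."""
    return institution["founded"] <= office["from"] and office["to"] <= institution["dissolved"]

# False here: the office predates the institution's foundation.
print(link_is_plausible(office, institution))
```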

The discussion focused mainly on the technical component of the data workflow. In this context, the difficulty of understanding the structure of a project (its technical architecture) without diagrams and similar aids was discussed. The question of public access to the projects' data was also raised. In addition, the process of aggregating persons and multiple statements was a subject of the discussion.

Session 9 – Integrating Data from Sources

The second session of the day started with the project "Linked Histories: Police-Ordinances as an Information-Hub for Early Modern History" by Annemieke Romein (Huygens ING, Amsterdam), Andreas Wagner (Max Planck Institute for European Legal History, Frankfurt/M.), Saskia Limbach (Johannes Gutenberg University Mainz), Klaas Van Gelder (Ghent University), Jørgen Mührmann-Lund (Aarhus University), Nicolas Simon (Casa de Velázquez, Madrid) and Margo De Koster (Ghent University, Vrije Universiteit Amsterdam, Vrije Universiteit Brussel).
In their talk, they outlined their approach to developing an ontology collaboratively across the research community of their domain. The project aims to create an ontology to link existing resources and information on police ordinances in Europe and its former colonies. The starting point was the idea of connecting the multitude of projects and research in this field by preserving their data and information digitally and making it available beyond the end of the individual projects. In the future, research projects from related areas and fields are to be included as well.
They want to encourage everyone researching police ordinances in the early modern period to contribute to the project by sharing information on the data they work with via GitHub. There, it is also possible to contribute user stories for a common system for processing such data.
For modelling the ontology, they chose an agile approach, as it seemed the most appropriate for their type of project, which involves different datasets and scientific questions. Their workflow is to model smaller parts of the available data first, then extend the model, and only finally map it to already existing ontologies such as CIDOC CRM; a sketch of that final mapping step follows below.
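As an illustration of that mapping step, the following sketch uses rdflib to declare a small project-specific class first and only then link it to CIDOC CRM; the project namespace and the choice of crm:E31_Document as a mapping target are assumptions for illustration, not the project's published ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Illustrative only: the project namespace and the choice of
# crm:E31_Document as a mapping target are assumptions, not the
# project's published ontology.
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
PO = Namespace("https://example.org/police-ordinances#")

g = Graph()
g.bind("crm", CRM)
g.bind("po", PO)

# Step 1: model a small, project-specific part of the data first ...
g.add((PO.Ordinance, RDF.type, OWL.Class))
g.add((PO.Ordinance, RDFS.label, Literal("Police ordinance", lang="en")))

# Step 2 (done last in the workflow): map it to an existing ontology.
g.add((PO.Ordinance, RDFS.subClassOf, CRM.E31_Document))

print(g.serialize(format="turtle"))
```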

In the second contribution, David Zbíral (Masaryk University, Brno), Adam Mertel (Center for Advanced Systems Understanding, Görlitz), Robert L. J. Shaw, and Tomáš Hampejs (Masaryk University, Brno) presented their project “An ontology for modeling the social, spatial, and semantic relations in pre-modern written sources: Takeaways from data model development in the Dissident Networks Project (DISSINET)”.
The talk reported on experiences from the DISSINET project regarding source-driven and hypothesis-driven data model creation. DISSINET deals with the study of medieval inquisition sources. In particular, it focused on developing a comprehensive data model and ontology with a strong link to the sources, rather than collecting and modelling data according to pre-established questions.
The goal was also to be able to represent the sources as structured data without sacrificing the complexity of the information given in and by the sources. It was considered particularly important to ensure that critical examination of the sources could be carried out directly in the data model, for example in the case of unclear or divergent statements on the contextualisation of the source. They tried to preserve much of this complexity, such as the hierarchical structure of documents and their metadata, and also to capture uncertainties in the sources, such as uncertain claims.
The project started with a simple Google Sheet and is now in the process of transitioning to a more complex structure and a more user-friendly interface linked to a database of JSON files. The ultimate goal is to build a graph database for data projections and research questions.
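To give an idea of what such a structured statement could look like, here is a minimal sketch; the field names and the example itself are invented for illustration and do not reflect DISSINET's actual schema.

```python
import json

# Invented field names and content: a sketch of how a statement from an
# inquisition record could keep both its hierarchical source locator and
# an explicit certainty flag, as described above. Not DISSINET's schema.
statement = {
    "id": "st-0001",
    "source": {            # hierarchical locator within the document
        "document": "Register X",
        "folio": "12r",
        "paragraph": 3,
    },
    "subject": "deponent-42",
    "predicate": "attended_gathering",
    "object": "gathering-7",
    "certainty": "uncertain",  # hedged wording in the deposition itself
    "note": "The deponent says they 'may have' been present.",
}
print(json.dumps(statement, indent=2))
```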
Their takeaways from the first steps of the project: model the sources first; do not relegate source-critical reflection to the introduction or the closing sections, but make it part of the analysis and of the data itself; separate data collection, which involves little selection and interpretation, from data projections, which are hypothesis-driven; and never simply accept the limitations imposed by infrastructure, but try to expand it.

In the discussion, questions arose about how the projects are documented and how this documentation is made available. It was also discussed how specific the domains are that the ontologies are meant to represent, and whether it would be possible to extend them to other domains and periods.
Finally, the participants discussed how the projects can be made available to a wider audience and how the ontologies created can be linked to already available ontologies, such as UFO (the Unified Foundational Ontology).


About the authors:

Julia Pabst studies history in the master's programme with a focus on Digital History at the Humboldt-Universität zu Berlin.

Anna Grönig has a bachelor’s degree in German language and literature and history and is now studying for a master’s degree in history with a focus on Digital History at the Humboldt-Universität zu Berlin.

