The conference was opened with a short introduction by Torsten Hiltmann (HU Berlin), who organized the event in cooperation with Francesco Beretta and Vincent Alamercery (LARHRA Lyon). He offered a brief overview of the still young history of the consortium and introduced the overall aim of the conference: to promote exchange within the broader community and to create more stable structures for the future development of the consortium. Even though the conference could only take place online this year due to the Corona pandemic, it was important to preserve the core idea of the meeting. All changes from the initial plans, such as a new schedule spread over seven weeks and Airmeet as the conference platform, were therefore chosen to ensure exactly that: communication, exchange and discussion among like-minded people facing the same challenges when it comes to the modelling of data in historical research.
The conference started with the first keynote by George Bruseker (Takin.solutions, Plovdiv/Athens), one of the co-founders and important drivers of the consortium. In his presentation, “The uses and limits of formal ontologies (Cidoc CRM) for history: addressing and going past the ‘digital’ in digital history”, Bruseker made a strong case for the use of formal ontologies such as CIDOC CRM, which he regards as of fundamental interest to the historical community and to historical research. He argued that combining the practices of knowledge engineering (in the form of ontologies) with historical research, with its focus on deriving facts from (digital) documents, offers a great opportunity to give historians better tools in the digital age. For him, the integration of knowledge engineering techniques into historical research is simply an extension of the tradition of historiographic methods into a new medium. Bruseker concluded by pointing out the current limitations of CIDOC CRM, which he believes are still a hurdle to making it more widely usable: its expressive power needs to be expanded, a community needs to be built and maintained, and the foundation needs continuous revision to allow for openness and customization.
In the ensuing discussion, particular attention was paid to the problem that the precise understanding of a term always depends on the context in which it is used, and to the question of to what extent CIDOC CRM attempts to solve this problem. For Bruseker, clear documentation is particularly important here, in order to be able to trace the work and translation processes at any time.
The first session of the conference was dedicated to Conceptual Models and addressed, in particular, the modelling of different activities in historical perspective. In the first paper, “The Reading experiences ontology: a Use-case for OntoMe”, François Vignale (Université du Maine), Francesco Beretta (CNRS/Université de Lyon, LARHRA) and Vincent Alamercery (ENS de Lyon, LARHRA) introduced the READ-IT project. The exploration tools developed there are intended to explore digital sources from the 18th century to the present on the fundamental question of why and how people read. This is being done by aligning READ-IT’s data model with CIDOC CRM and creating about 40 classes. Problems that arise when researching reading experiences include tracking reading activities and the change in the meaning of keywords over time. To address these problems, a source-based approach was chosen and combined with other approaches. The result is a theoretical model with three main categories (Reading Agent, Reading Resource, and Reading Process) that proposes a description of the reading experience.
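To give an impression of what such an alignment can look like, the following is a minimal sketch in Python with rdflib. Only the CIDOC CRM classes and properties are real; the URIs and the mapping of the three categories onto CRM terms are our own illustrative assumptions, not READ-IT’s actual model.

```python
from rdflib import Graph, Namespace, RDF, URIRef

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")

g = Graph()
g.bind("crm", CRM)

# A reading experience sketched as an activity (the Reading Process,
# here CIDOC CRM E7_Activity) carried out by a Reading Agent on a
# Reading Resource; all URIs below are invented for illustration.
experience = URIRef("https://example.org/reading-experience/1")
reader = URIRef("https://example.org/person/rousseau")
resource = URIRef("https://example.org/work/plutarch-lives")

g.add((experience, RDF.type, CRM["E7_Activity"]))
g.add((reader, RDF.type, CRM["E21_Person"]))               # Reading Agent
g.add((resource, RDF.type, CRM["E33_Linguistic_Object"]))  # Reading Resource
g.add((experience, CRM["P14_carried_out_by"], reader))
g.add((experience, CRM["P16_used_specific_object"], resource))

print(g.serialize(format="turtle"))
```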
In the second paper of this session, “Petitioning, meeting, negotiating: towards a conceptual model for communication in early modern parliamentary systems”, Roman Bleier, Florian Zeilinger, Georg Vogeler, Gabriele Haug-Moritz and Eva Ortlieb (University of Graz) presented a pilot project that attempts to enhance older digital editions with a new ontology-based approach. It deals with early modern material, namely the documents of the Imperial Diet of Regensburg 1567, and is part of a bigger editorial endeavour to examine Imperial Diet proceedings between 1556 and 1662. The project starts from the observation that early parliaments were not yet very formalised, which is why it is important to pay attention to the different forms of interaction. The project’s goal is thus mainly to propose an ontology model that captures as much of these interactions between participants of the diet as possible and which, by doing so, may also be applied to other parliamentary meetings. The current research question is: to what extent are these actions specific to this event, or can they also be found in relation to other parliamentary meetings?
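As a rough illustration of the kind of interaction such an ontology model might capture, here is a short sketch of a single communication act in Python with rdflib; the URIs and the mapping onto CIDOC CRM are invented for illustration and do not reproduce the project’s model.

```python
from rdflib import Graph, Namespace, RDF, URIRef

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
g = Graph()
g.bind("crm", CRM)

# A petition at the 1567 diet sketched as an activity with its
# participants; all URIs are invented, and the property choice is
# our guess, not the Graz project's actual modelling.
petition = URIRef("https://example.org/diet1567/petition/12")
petitioner = URIRef("https://example.org/actor/estates")
addressee = URIRef("https://example.org/actor/emperor")

g.add((petition, RDF.type, CRM["E7_Activity"]))
g.add((petition, CRM["P14_carried_out_by"], petitioner))  # who petitions
g.add((petition, CRM["P11_had_participant"], addressee))  # who is addressed
```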
In the following discussion, the question was raised to what extent the project’s model can be transferred to other periods, which was and is an important issue in the modelling. Furthermore, the advantages of this approach to (historical) data and of the source selection were discussed: the gain in knowledge lies not only in the application of the models but also, considerably, in their creation.
After dealing with modelling as such, the second session addressed specifics that should be taken into account during the modelling process, namely the importance of context and transparency. In this perspective, Katrin Moeller and Georg Fertig (Martin-Luther-Universität Halle-Wittenberg), in their paper “The challenge of contextualization and data transparency in structured research data!”, presented a project that grew out of an earlier paper and addresses the challenges of representing and using genealogical research data, and the need to contextualise existing data in order to make it transparent and reusable. Since the 1990s there has been an alternative way of thinking about data models in genealogy: it starts with a classical data model but places assertions or claims – possible sentences that may be true or not – at the centre, connecting the search process and the data. The project uses a model by Jesper Zedlitz called the Gedbas4all data model, which uses graphs to connect subjects via assertions. This is important when, for example, entries are not identified correctly and may give rise to alternative assertions. The goal is to make these processes transparent in the data. The second challenge is to prepare already existing data in such a way that it becomes shareable, because a lot of existing genealogical data is not linkable and so cannot yet be analysed in a broader context.
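The following minimal Python sketch illustrates the assertion-centred idea: claims are first-class objects that can contradict each other without overwriting one another. The class and field names are our own invention, not the Gedbas4all schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Assertion:
    """A claim that may be true or not, linking a source to a statement.
    Field names are invented for illustration, not Gedbas4all's schema."""
    source: str                  # e.g. a church register entry
    subject: str                 # the entity the claim is about
    statement: str               # the claim itself
    disputed_by: list[Assertion] = field(default_factory=list)

# A possibly wrong identification is not overwritten but kept next to
# the alternative assertion, so the research process stays transparent.
a1 = Assertion("register A, fol. 12", "person/42", "born 1712 in Halle")
a2 = Assertion("register B, fol. 3", "person/42", "born 1713 in Halle")
a1.disputed_by.append(a2)
```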
Gioele Barabucci (Norwegian University of Science and Technology, Trondheim) and Fabio Vitali (Università di Bologna), in their presentation “Context is all: Guidelines for context characterization in knowledge modeling and data formats”, addressed a central challenge, not only in history but in science generally: topics are often reduced to a single fact, and that is also how most data models are built today. They showed this with the example of many years of research on the Mona Lisa, which ends up being broken down into a few key facts such as artist, date of creation, title, etc. To challenge this over-simplified approach, their research project has identified several contexts that future data models would need to take into account: temporal relationships, spatial/jurisdictional relationships, part-whole relationships, derivation relationships and confidence relationships. The resulting structures should be comprehensible, and new information should add to, rather than overwrite, current information. This is a promising starting point for a conversation about how to build richer data models for historical studies that better integrate the complex nature of human endeavours, including the whole process of knowledge production within the different projects. How exactly this can be done, however, is still open to debate.
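Since the authors left the concrete implementation open, the following is only one conceivable sketch, in Python, of how the five context dimensions could travel with a bare fact; all names and values are illustrative assumptions.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextualizedStatement:
    """Carries the five context dimensions named in the talk alongside
    a bare fact; the concrete shape is our assumption, as the authors
    left the implementation open."""
    fact: str
    temporal: str | None = None      # when the statement holds
    spatial: str | None = None       # where / under which jurisdiction
    part_of: str | None = None       # part-whole relationship
    derived_from: str | None = None  # derivation / provenance
    confidence: float | None = None  # degree of belief, 0..1

# New information is appended, never overwritten:
record: list[ContextualizedStatement] = []
record.append(ContextualizedStatement(
    "Mona Lisa painted by Leonardo da Vinci",
    temporal="c. 1503-1506", derived_from="Vasari 1550", confidence=0.9))
```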
The following discussion focused on the problems that still exist between theoretical models and practical implementation: these problems have been discussed for quite a while and addressed many times with different solutions, but none of these solutions has found general acceptance and thus made its way into application. Another question addressed in the discussion was how to map multiple interpretations in data models.
In the third and final session, we turned to Integration and Interoperability. Heikki Rantala, Esko Ikkala (Aalto University) and Eero Hyvönen (Aalto University/University of Helsinki) presented, in their talk “Creating the HISTO Ontology of Finnish History Events”, one approach to building an integrated model for history. The project creates an ontology of important events in Finnish history, the HISTO Ontology, using the semantic structure of a timeline created by a project of Finnish historians. The data model is based on CIDOC CRM. Since events are very good at connecting different sources, the four categories of events, people, places and periods are used as linking points. In this way, possible causal chains can be identified more quickly.
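The linking role of events can be sketched in a few lines of Python with rdflib; the CIDOC CRM terms are real, while the URIs and the exact mapping are our own illustration, not the HISTO data model itself.

```python
from rdflib import Graph, Namespace, RDF, URIRef

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
g = Graph()
g.bind("crm", CRM)

# An event as the hub linking people and places; sources that refer
# to the same event become connected through it. All URIs are invented
# for illustration.
event = URIRef("https://example.org/event/1")
person = URIRef("https://example.org/person/1")
place = URIRef("https://example.org/place/tampere")

g.add((event, RDF.type, CRM["E5_Event"]))
g.add((event, CRM["P11_had_participant"], person))
g.add((event, CRM["P7_took_place_at"], place))

# Two independent sources pointing at the same event are now linked:
for source in ("https://example.org/source/newspaper",
               "https://example.org/source/diary"):
    g.add((URIRef(source), CRM["P129_is_about"], event))
```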
In the last presentation of the day, “Connecting the dots: the case of Omnipot”, Matthias Schlögl, Matej Durco, Ingo Börner, Peter Andorfer and Klaus Illmayer (Austrian Academy of Sciences, Vienna) presented a comprehensive approach to knowledge management. Omnipot is a single knowledge platform for multiple projects, built to establish connections, find overlaps and create new standards. It works only with references, not with direct content: data from various projects of the Austrian Academy of Sciences are combined into a triple store. They illustrated this with the example of the correspondence between Bahr and Schnitzler, addressing six problems they had to deal with: missing identifiers, conflicting schemas, unnormalized identifiers, keeping the link between original resource and harmonized entity, updates at the sources, and conflicting conceptualizations. They have already found solutions to a large part of these problems; for example, the problem of unnormalized identifiers can be solved with preprocessing. The biggest challenge, however, is that of the different conceptualizations, which can only be solved by talking and understanding each other. The discussion included problems of data processing: how does the researcher deal with things being coded differently and data being processed differently? Besides this, the question of documenting and publishing ontologies to enable reuse was discussed.
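As an illustration of the preprocessing idea, the following Python sketch normalizes a few common spellings of a GND identifier to one canonical URI; the accepted patterns are our assumption, not Omnipot’s actual code, and the identifier is merely an example.

```python
from __future__ import annotations
import re

def normalize_gnd(identifier: str) -> str | None:
    """Map different spellings of a GND identifier to one canonical URI.
    A sketch of the kind of preprocessing mentioned in the talk; the
    accepted patterns are an assumption, not Omnipot's actual code."""
    match = re.search(r"(?:gnd[:/]|d-nb\.info/gnd/)([0-9X-]+)", identifier)
    return f"https://d-nb.info/gnd/{match.group(1)}" if match else None

# Three variants of the same identifier end up at the same entity:
variants = ["gnd:118609807",
            "http://d-nb.info/gnd/118609807",
            "https://d-nb.info/gnd/118609807"]
assert len({normalize_gnd(v) for v in variants}) == 1
```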
The first day provided a comprehensive overview of the current status, challenges and solutions in modelling and managing historical data, which could be addressed in more detail during the discussions. The networking opportunities provided via Airmeet made it possible to deepen these exchanges even further. We are now looking forward to further days full of interesting presentations and exciting discussions.
About the authors:
Julia Pabst studies history in the master’s programme at the Humboldt-Universität zu Berlin with a focus on Digital History.
Anna Grönig has a bachelor’s degree in German language and literature and history and is now studying for a master’s degree in history with a focus on Digital History at the Humboldt-Universität zu Berlin.