
Jascha Schmitz: Report on the conference – “Data for History 2021” – Day 7 (30 June 2021)

Panel Discussion

The last session of the Data for History Conference 2021 was a panel discussion with Arianna Ciula (King’s College London), Francesca Tomasi (University of Bologna), Harald Sack (Leibniz Institute for Information Infrastructure, Karlsruhe) and Francesco Beretta (University of Lyon) on ‘Historical data and interoperability / Quo vadis interoperability’. The opening remarks were given by one of the conference’s co-hosts, Torsten Hiltmann (Humboldt University of Berlin), who also took on the role of moderator for the discussion that followed.

Hiltmann began by tying many of the conference’s presentations together under the topic of interoperability. For him, interoperability poses an essential problem for the field, one that comes up in many projects in many different ways and admits many possible solutions. In that regard, the conference presented the state of the art of data modelling in the historical sciences. He stressed that the modelling decisions of today’s projects will lay much of the groundwork for historical research in the coming years and perhaps decades. Interoperability, as part of FAIR data modelling, is a centrepiece of this development. Hiltmann then put four questions to the panellists, and to historical researchers in general, that he deemed crucial for the debate surrounding interoperability:

1)  Do the benefits of interoperability actually justify the costs and is it always desirable?

2)  Is complete interoperability for historical data – and the heterogeneous conceptualisations historians come up with – even possible?

3)  If interoperability is desirable and possible, how exactly can it be achieved on a practical level?

4)  What methodological and conceptual research is needed for interoperability?

Before the discussion started, each panellist had the opportunity to give a short input or response to the panel’s topics. Beretta was the first to respond. For him, the main challenge does not lie in the data itself but rather in the conceptualisation of models, and thus in building common ontologies. The key to making a common ontology and interoperable data possible is, for Beretta, to build a community and infrastructure for collaborative research. Ciula also spoke on what makes data interoperable in the first place. She shared some of King’s College’s experiences in managing and sustaining data, but emphasised that more work has to be done in establishing an infrastructure ecosystem between experts, institutions and possible partners, as well as in designing better project lifecycles. Sack offered a number of propositions for ensuring useful interoperability. He especially highlighted the crucial need to contextualise data properly in order to make it understandable, and stressed the need to semantically annotate data to make automated interoperability possible. For Sack, coordinated infrastructure projects like the NFDI (Germany’s National Research Data Infrastructure) are a basis for better practices and standards. Tomasi closed the first round of inputs with remarks on the differences between syntactic (technical) interoperability and conceptual interoperability: the two require different solutions and are not desirable or possible in the same way. For Tomasi, key problems of common conceptual interoperability are modelling the certainty of assertions, reliability and trust.
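To give a concrete sense of what Sack’s point about semantic annotation can look like in practice, here is a minimal, hypothetical sketch in Python using rdflib and CIDOC CRM-style classes; the URIs, the sample data and the choice of vocabulary are assumptions made purely for illustration and were not part of the panel.

```python
# A minimal, hypothetical sketch of semantic annotation with rdflib.
# All URIs, the sample data and the choice of CIDOC CRM-style classes
# are assumptions made for illustration.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/history/")  # hypothetical project namespace

g = Graph()
g.bind("crm", CRM)
g.bind("ex", EX)

# Instead of a bare table row ("Albrecht Duerer, Nuremberg, 1471"),
# the information is expressed as typed, linked resources:
g.add((EX.duerer, RDF.type, CRM.E21_Person))
g.add((EX.duerer, RDFS.label, Literal("Albrecht Dürer")))

g.add((EX.birth_duerer, RDF.type, CRM.E67_Birth))
g.add((EX.birth_duerer, CRM.P98_brought_into_life, EX.duerer))
g.add((EX.birth_duerer, CRM.P7_took_place_at, EX.nuremberg))

g.add((EX.nuremberg, RDF.type, CRM.E53_Place))
g.add((EX.nuremberg, RDFS.label, Literal("Nuremberg")))

print(g.serialize(format="turtle"))
```

The point of such annotation is that a statement like ‘Dürer was born in Nuremberg’ becomes machine-readable structure rather than a free-text note, which is what makes automated interoperability between datasets possible in the first place.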

Prompted by Hiltmann’s question of how far the field has progressed in terms of conceptual interoperability, both Tomasi and Sack agreed that there is still a lot of work to do regarding historical data modelling. They especially highlighted the identification of the actual conceptual needs both of the domain-specific data and of the research objectives a project might have. Ciula responded that she sees progress in the development of interoperability and data management in recent years, but that there is a need for more involvement of historians in the actual data modelling processes. This holds especially true for authoritative databases like Wikidata, which could be more valuable for historians if there were collaborations between them and the platform. According to Ciula, minimum common denominators for the field of modelling are necessary, while the diversity of the modelling landscape should also be respected.

Following that, the panel discussed different practical approaches for tackling some of the methodological challenges in modelling identified so far. Hiltmann proposed that there might be two possible approaches: first, starting modelling from a common general ontology and trying to fit it onto the specific domain, and second, starting modelling from the specificities of the domain and trying to link these to general ontologies. He posed the question of what the costs and challenges of designing an interoperable model in one or the other way would be. Beretta emphasised the need to try to model objectively, meaning not solely concentrating on the unique research question at hand but also keeping in mind that different questions might be put to the model in the future. Otherwise, he later explained, only non-reusable throw-away data will be produced. He proposed keeping the modelling process separate from the research agenda as a general principle, which, Hiltmann commented, would establish the need for a proper methodology for historical research in this context. Tomasi later objected that there are some research questions for which one’s own point of view or the specific research question are indeed paramount to the modelling process, for example in philological approaches to documents. Objectivity of both data and models is difficult to reach from this point of view, since there is a need to respond to the researcher’s own questions and standpoints. Striving to produce data not only for oneself is usually the best option, according to Tomasi, but modelling does not always have to be made ever more objective.
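As an illustration of the second of the two routes Hiltmann describes, the following hypothetical rdflib sketch defines a project-specific class first and then links it to a class from a general ontology via rdfs:subClassOf; the project namespace and the class name are invented for the example and are not taken from the discussion.

```python
# Hypothetical sketch of the second route: define a project-specific class
# first, then link it to a general ontology. Namespaces and the class name
# are invented for this example.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
MYPROJ = Namespace("http://example.org/myproject/")  # hypothetical namespace

g = Graph()
g.bind("crm", CRM)
g.bind("myproj", MYPROJ)

# A domain-specific class, tailored to the project's own research question ...
g.add((MYPROJ.MarketTown, RDF.type, RDFS.Class))
g.add((MYPROJ.MarketTown, RDFS.label, Literal("Market town (project-specific)")))

# ... linked afterwards to a class from a general ontology, so that projects
# which only know crm:E53_Place can still find and reuse these resources.
g.add((MYPROJ.MarketTown, RDFS.subClassOf, CRM.E53_Place))

print(g.serialize(format="turtle"))
```

The link in the last triple is what keeps the domain-specific detail intact while still letting projects that only know the general ontology query the data.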

Sack pointed out the need to keep a balance between objectivisation and domain-specific detail, which are to some extent at odds. He also brought up his experience of collaborating with different scientific fields, emphasising that there are many different modelling objectives, which also means that interoperability is only one possible route to take. The natural or material sciences have completely different modelling requirements; interoperability, for example, is usually of comparatively minor importance to them. Sack expanded on this later in the discussion, explaining that the natural sciences and engineering differ from the humanities because in the latter there is no pre-given agreement over at least some natural laws. Many opinions are to some extent equally valid in the humanities, which precludes the often more straightforward, results-based modelling that is common in the natural sciences. To Sack, the modelling process as well as a proper methodology hinge on the aim of an individual model. Thus, methodologies might always be quite domain-specific. It is crucial, though, according to Sack, that there is a mutual understanding between the data engineering side and the historical sciences, so that the requirements, objectives and concepts of any model can be explained.

Expanding on the topic of methodology, Sack proposed including more general questions that might be posed to data alongside the competency questions that are already part of modelling. This set of historical competency questions might be formulated collectively in the wider scientific community. That way, not everything would have to be re-thought in every new project, yet there would still be a foundation for interoperability. Hiltmann added that there might be a number of questions on a foundational level that would need to be answered first (some of which also featured in the conference) before thinking about the practical level of modelling, especially since, as he pointed out, many semantic web technologies were actually conceived for a different kind of application, more akin to the facts- and results-based requirements and goals of the natural sciences that Sack had alluded to earlier. For Ciula, the current main challenge for digital history is that researchers often have no choice but to build models completely from scratch: there is not enough infrastructure and expertise in the technical aspects of digital history to rely upon. On the one hand, this is a matter of building commonly accepted authority files and institutions with different roles for the community. On the other hand, for Ciula it is about project architecture, which needs to have a more central place in discussions about and presentations of research projects. This would lead to a better general understanding of what sensible modelling and research practices are.
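To illustrate what a competency question can look like once it is made operational, here is a small, hypothetical sketch that expresses the question ‘which persons in the dataset were born in Nuremberg?’ as a SPARQL query with rdflib; the tiny sample graph and its vocabulary are again assumptions made for the sake of the example, not something proposed by the panellists.

```python
# Hypothetical sketch: the competency question "Which persons in the dataset
# were born in Nuremberg?" expressed as a SPARQL query with rdflib.
# The tiny sample graph and its vocabulary are assumptions for illustration.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix crm:  <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:   <http://example.org/history/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:duerer a crm:E21_Person ; rdfs:label "Albrecht Dürer" .
ex:birth_duerer a crm:E67_Birth ;
    crm:P98_brought_into_life ex:duerer ;
    crm:P7_took_place_at ex:nuremberg .
""", format="turtle")

query = """
PREFIX crm:  <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX ex:   <http://example.org/history/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?personLabel WHERE {
    ?birth a crm:E67_Birth ;
           crm:P98_brought_into_life ?person ;
           crm:P7_took_place_at ex:nuremberg .
    ?person rdfs:label ?personLabel .
}
"""

for row in g.query(query):
    print(row.personLabel)  # -> Albrecht Dürer
```

A collectively maintained set of such questions, as Sack suggests, would give projects a shared test of whether their models can answer the queries the community actually cares about.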

Following on from Ciula’s comments, Hiltmann asked what is needed most for modelling in digital history right now. Regarding authority files and general standards, Sack pointed out that there needs to be more knowledge and more widespread usage of those that already exist. One aspect of this is that there are still projects which, even when they try to apply modelling standards, interpret them in different ways, leading to misunderstandings and errors. Beretta emphasised, though, that there still needs to be a conclusive debate about common conceptions of what sits at the level of classes in models (in contrast to instances), such as ‘place’ or ‘time’. He proposed that these general concepts must have a shared understanding in the research community, since the problems attached to them are shared by every historical research project. Sack later responded by pointing out the historical variability of these concepts over time, which also needs to be modelled. Tomasi answered the question of what digital history needs in two parts: firstly, digital history simply needs more and more data to work with, and secondly, there needs to be more intense cooperation with different cultural and governmental institutions to develop common standards and infrastructure.

The last topic Hiltmann raised was how to teach modelling practices and prepare students to deal with the challenges discussed in the course of this conference. A key problem for him is the need for future generations of scholars to actually be able to use the knowledge graphs being built by the current generation of researchers. He asked the panellists what standards are needed in that regard to enable future historians to get an even better grasp of these technologies and challenges.

Tomasi shared her experiences at the University of Bologna, where there is a dedicated master’s programme in ‘Digital Humanities and Digital Knowledge’. It focuses especially on semantic web technologies for the humanities and tries to couple theoretical and conceptual training with practical application on a project basis. Beretta agreed that, while it can be tough to introduce historians to the formalised world of ontologies, a lot can already be done in classical teaching formats. He stressed, though, the need for new approaches to teaching and learning, such as a digital platform where courses and other materials can be shared with the whole community. Ciula added that the most valuable experience for students is to be exposed to the processes of actual digital history projects, for example by collaborating with external partners. Sack agreed with the previous speakers, saying that students need to ‘get their hands dirty’ in actual projects and development processes. However, students usually do not get a deeper insight into the underlying philosophical problems of semantic web technologies and of modelling in general. It is especially important for humanities researchers to understand the consequences and possible epistemological limitations of modelling a domain through classes and instances.

Hiltmann ended the panel discussion, and with it the Data for History Conference 2021, by concluding that digital history, and especially ontologies, make historians think much more about the theoretical foundations of history than is usual in the field. Contrary to what some colleagues might assume, digital history is not just about ‘pushing around numbers’ but about asking what history actually is and how it can be conceived, structured and modelled.


About the author:

Jascha Schmitz has a bachelor’s degree in history and social sciences and is now studying for a master’s degree in history with a focus on Digital History at the Humboldt-Universität zu Berlin.



