
Lars-Erik Brandt, Diana Vegner: Report on the conference – “Data for History 2021” – Day 2

Session 4 – Modelling the Analyses (Lars-Erik Brandt)

The first presentation of Session 4: Modelling the Analyses was given by Helen Mair Rawsthorne from the Université Gustave Eiffel under the title “Analysing 18th century hydrographic data: a campaign in the Bay of Biscay, 1750-1751”. The approach focused on maritime maps from the early modern period in France. At that time, a surveying campaign set out to measure the depth of the Bay of Biscay at various points and to map the entire bay with information on the depth of the sea and certain seabed characteristics, such as sediment types. Helen Rawsthorne examined not only the maps created during this campaign but also the written notes on the various measuring points. The sources are well preserved, since the maps were used by the people of that time in everyday life. The research question was to compare the quality of the results of the historical measurements with modern ones and thus to show how accurate the historical approach was. To achieve this, she processed the measurement data from the historical sources and created interactive digital maps that merge the historical data with modern depth measurements.
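A minimal sketch of how such a comparison could be set up is given below; the file names, column names and the nearest-point matching are assumptions made for illustration, not details from the presentation.

```python
# Hypothetical sketch: compare 18th-century soundings with modern reference depths.
# File names, column names and units are assumptions for illustration only.
import csv
import math

def nearest_modern_depth(lat, lon, modern_points):
    """Return the modern depth measured closest to a historical sounding point."""
    def planar_distance(p):
        # Rough planar approximation, adequate for closely spaced points.
        return math.hypot(lat - p["lat"], lon - p["lon"])
    return min(modern_points, key=planar_distance)["depth_m"]

with open("modern_bathymetry.csv", newline="") as f:
    modern = [{k: float(v) for k, v in row.items()} for row in csv.DictReader(f)]

with open("historical_soundings_1750.csv", newline="") as f:
    for row in csv.DictReader(f):
        hist_depth = float(row["depth_m"])
        modern_depth = nearest_modern_depth(float(row["lat"]), float(row["lon"]), modern)
        print(f"{row['point_id']}: historical {hist_depth:.1f} m, "
              f"modern {modern_depth:.1f} m, difference {hist_depth - modern_depth:+.1f} m")
```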

At the beginning of the project, it was particularly noticeable that no existing data model was directly applicable to this study, a challenge which Helen Mair Rawsthorne described as relatively common for digital history research projects. Overall, the study showed how accurate the historical methods of surveying ocean topography were. Furthermore, it yielded insights into the history of science, namely into how hydrographic data was produced in 18th-century France. The method Helen Rawsthorne developed to process the hydrographic data from the historical sources was described by her as applicable to other, similar historical contexts.

The subsequent discussion focused on the underlying database of the project, how the information was semantically linked to the interactive maps, and how this was achieved technically. Another question concerned historical knowledge per se and how it was developed and used. Due to the limited time, it was not possible to discuss interoperability and possible interfaces, but the presented research approach offers many possibilities to expand the developed data set or to apply the methodology to another region in order to make further comparative statements.

The second part of Session 4 was dedicated to a presentation by Dirk Wintergrün and Roberto Lalli from the Max Planck Institute for the History of Science in Berlin entitled “Coding and analysing socio-epistemic networks – an approach to combine modelling and network analysis”. The basic motivation of this project was the representation of dynamic knowledge systems, interdisciplinary research based on data analysis, and the algorithmic analysis of large data sets. The approach was to semantically link large data sets of historical knowledge in order to analyse them. The initiators characterized this approach as a bridge between computer science and the humanities in an interdisciplinary environment. In their view, knowledge is represented either as scientific knowledge or as general knowledge. The sources to be examined therefore come in quite different formats, and the texts are, in addition, written in different languages. Knowledge is structured by three layers: the social layer of actors, who are responsible for structuring and restructuring the knowledge; the semiotic-material layer, modelling the physical and formal representations of knowledge; and the semantic layer, depicting the structure of knowledge itself. All three layers have been formalized as networks, so that they can be linked semantically and investigated in more detail with the help of network analysis methods. The resulting data models are based on the CIDOC CRM ontology, since it overlaps substantially with the objects of study. In summary, the project aims to formalize different source formats as networks and to evaluate them automatically. In the follow-up project ModelSEN, the approach is to be modelled semantically. This system will also be able to analyse changes over time and their impact on the different parts of the network.
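A minimal sketch of how such a multi-layer network could be encoded, here using the Python library networkx, is given below; the layer labels, example nodes and relations are assumptions for illustration and do not reproduce the project's actual CIDOC CRM-based data model.

```python
# Hypothetical sketch of a three-layer socio-epistemic network.
# Layer labels, example nodes and relations are assumptions, not project data.
import networkx as nx

G = nx.MultiDiGraph()

# Social layer: actors responsible for structuring and restructuring knowledge.
G.add_node("Albert Einstein", layer="social")
# Semiotic-material layer: physical and formal representations of knowledge.
G.add_node("Annalen der Physik article, 1905", layer="semiotic-material")
# Semantic layer: the structure of knowledge itself.
G.add_node("special relativity", layer="semantic")

# Cross-layer links connect actors, documents and concepts.
G.add_edge("Albert Einstein", "Annalen der Physik article, 1905", relation="authored", year=1905)
G.add_edge("Annalen der Physik article, 1905", "special relativity", relation="expresses")

# Standard network-analysis methods can then be applied per layer or across layers,
# e.g. extracting the subgraph of a single layer or computing centralities.
social_layer = G.subgraph([n for n, d in G.nodes(data=True) if d["layer"] == "social"])
print(list(social_layer.nodes))
print(nx.degree_centrality(G))
```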

In the ensuing discussion, the speakers emphasised that interoperability and cooperation are explicitly sought and desired in this project. The aim is a large-scale approach that allows the network to continue to grow through data from other projects. According to the speakers, such an overarching approach is needed, as many digital projects are too specific in their research approach. It is also possible to join the project if one follows the developed models.

 

 

Session 5 – Dating the Uncertainty (Diana Vegner)

In the second session of the day, Mateusz Fafinski (Université de Lausanne) gave his presentation on “Challenges for visualizing spatial and chronological distribution of medieval manuscripts: towards new ontologies”. Manuscripts are handwritten texts and by nature complicated sources. According to Fafinski, manuscripts may feature different indications for dating and localization, which makes it difficult to map them geographically, e.g. by their place of creation.

Uncertainty arises from differing scholarly opinions regarding the possible date of a document, from later annotations, as well as from the actual arrangement of the contents within the manuscripts themselves. Nevertheless, these sources are an important tool. The following questions focused on how to deal with uncertainty. Fafinski clearly emphasized that historians should not simply reduce uncertainty or discard false or uncertain datings and localizations; rather, they should think about how these flaws can be factored into data models. Historiographical interpretation thus means accepting that uncertainty is introduced at every stage of visualization and analysis.
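One way such uncertainty might be kept explicit in a data model, rather than silently resolved, is sketched below; the field names and the example record are invented for illustration and do not represent Fafinski's proposed ontologies.

```python
# Hypothetical sketch: a manuscript record that keeps uncertainty explicit
# instead of collapsing it into a single date and place of origin.
from dataclasses import dataclass, field

@dataclass
class DatingOpinion:
    source: str    # the scholar or catalogue proposing the dating
    earliest: int  # earliest plausible year
    latest: int    # latest plausible year

@dataclass
class ManuscriptRecord:
    shelfmark: str
    datings: list[DatingOpinion] = field(default_factory=list)
    possible_origins: list[str] = field(default_factory=list)  # competing localizations

# Invented example record: both dating opinions and both localizations are retained.
ms = ManuscriptRecord(
    shelfmark="Example Library, MS 1",
    datings=[DatingOpinion("Catalogue A", 780, 820),
             DatingOpinion("Scholar B", 800, 850)],
    possible_origins=["Tours", "Fleury"],
)
print(ms)
```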

In the fourth and final paper of this day, we turn to Andreas Kuczera (Academy of Sciences and Literature, Mainz) and his presentation “Uncertainty as a Challenge – Normalization of Dates in Regesta Imperii (RI)”. The RI chronologically records all activities of the Roman-German emperors in the Middle Ages, evidenced mainly by charters. The digitization of the RI started in the early 2000s. The speaker demonstrated how uncertainty concerning dates was expressed in the printed versions of the RI and emphasized that the digitization project already faced the challenge of fitting the partially imprecise date information into a common schema.
Kuczera mentioned that uncertainty mainly occurs in summarized narrative sources. Looking for a better solution, he presented the Extended Date/Time Format (EDTF) for uncertain dates, as proposed by the Library of Congress, and evaluated how it could be applied to the data of the RI.
In the second part of the presentation, the speaker addressed the question of why the Regesta Imperii did not adopt the EDTF date format but preferred a strict range structure with an explicit start and end date. In a nutshell, EDTF is not reliable for uncertain time ranges, e.g. if the starting date of a time range is missing. Furthermore, data formatted this way is complicated to compute with. In the context of the RI, it has proved more helpful to always work with time ranges, i.e. not to give an approximate date, but rather the time span in which this date can fall. Nevertheless, according to Kuczera, the EDTF format is well suited to recording the points in time which define such a time range, since they can always be converted into a simple date-time format. In the last part of his presentation, Kuczera made clear that modelling uncertainty always involves a large degree of subjectivity. Asking scholars to always indicate a time range instead of an uncertain point in time is quite demanding for them as well, and everyone has their own approach to this challenge.
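To make the contrast concrete, the sketch below maps a few EDTF-style date strings onto explicit start and end dates; the conversion function and its handling of open boundaries are assumptions made for illustration and do not reflect the RI's actual implementation.

```python
# Hypothetical sketch: mapping a few EDTF-style date strings onto the explicit
# start/end ranges preferred in the Regesta Imperii context.
from datetime import date

def edtf_to_range(edtf: str) -> tuple[date, date]:
    """Illustrative subset of EDTF only; not a full parser."""
    if edtf.endswith("?") or edtf.endswith("~"):   # uncertain or approximate year
        year = int(edtf[:-1])
        return date(year, 1, 1), date(year, 12, 31)
    if "/" in edtf:                                # interval, e.g. "1245/1246"
        start, end = edtf.split("/")
        if start == ".." or end == "..":           # open boundary: no closed range possible
            raise ValueError("open-ended interval cannot be converted to a closed range")
        return date(int(start), 1, 1), date(int(end), 12, 31)
    year = int(edtf)                               # plain year
    return date(year, 1, 1), date(year, 12, 31)

print(edtf_to_range("1245?"))      # uncertain year -> full-year range
print(edtf_to_range("1245/1246"))  # interval -> explicit start and end dates
```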

These two papers were followed by a very insightful discussion on the possibility of modelling and representing uncertainty when it comes to time and date. In this context, Mateusz Fafinski highlighted that assumed time ranges are also valid. It would be more important to make the methodological aspects explicit and to explain which conceptual models were used for the assumptions. Francesco Beretta, for his part, stressed that the use of Semantic Web technologies is essential for this.


The second day provided a comprehensive overview of the current aspects of modelling scientific analyses and the current discussion on modelling uncertainty, which will certainly continue to play an important role in the remaining days of the conference.

 


About the authors:

Diana Vegner has a bachelor’s degree in history and political science and is now studying for a master’s degree in history with a focus on Economic History at the Humboldt-Universität zu Berlin.

Lars-Erik Brandt has a bachelor’s degree in history and is now studying for a master’s degree in history with a focus on Digital History at the Humboldt-Universität zu Berlin.

