With the advent of ChatGPT, the potential of Large Language Models (LLMs) to facilitate various academic, industrial and even everyday processes became evident to the general public. In its wake, publications proposing custom-made integrations of LLMs have proliferated in a wide range of disciplines far beyond IT, from genetics and finance to the humanities. What they share is the hope that LLMs can facilitate advances in their respective disciplines. But how can a historian turn the models to their advantage?
This master's thesis investigates the applicability of LLMs in historical research. In doing so, it critically considers the roles of knowledge, reasoning and language in historical hermeneutics and contrasts them with what can reasonably be expected from a transformer-based LLM. Starting from the cornerstone of any historical research – the historical source – the thesis reflects on how its content can be made available to the model. In particular, retrieval-augmented generation (RAG) emerges as a viable option for customising a pre-trained LLM for historical research purposes. In short, RAG relies on embedding-based vector representations to identify the relevant (piece of the) source and pass it to the LLM. The technique thus not only enriches the model with historical knowledge, but also – crucially for historical hermeneutics – allows the researcher to know exactly where the information in the model's output has come from.
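The following minimal sketch illustrates, under simplifying assumptions, what such embedding-based retrieval can look like: source passages and a question are embedded into the same vector space, the passage closest to the question is selected, and it is handed to the LLM as context. The embedding model, the placeholder passages and the prompt wording are illustrative assumptions, not the setup used in the thesis.

```python
# Minimal RAG retrieval sketch: embed source passages and a question,
# then return the most similar passage as context for the LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical embedding model, chosen only for illustration.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "Placeholder excerpt from the 1779/1780 edition ...",
    "Placeholder excerpt from the 1830 edition ...",
]
# Normalised vectors so that a dot product equals cosine similarity.
passage_vecs = embedder.encode(passages, normalize_embeddings=True)

def retrieve(question: str) -> str:
    """Return the passage whose embedding lies closest to the question's."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = passage_vecs @ q_vec
    return passages[int(np.argmax(scores))]

context = retrieve("How does Blumenbach classify the human species?")
prompt = (
    "Answer using only the following source passage:\n"
    f"{context}\n\nQuestion: How does Blumenbach classify the human species?"
)
```

Because the retrieved passage is known, the researcher can always trace an answer back to the exact part of the source that produced it.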
From there, the thesis follows an experimental approach in which the first (1779/1780) and the last (1830) edition of J. F. Blumenbach's Handbuch der Naturgeschichte are divided into short passages described by metadata and integrated with a chat version of Llama 2 (developed by Meta) through the RAG architecture. The objective is simple: to ask the model questions and have it answer them on the basis of the historical knowledge in the source. However, in order to interpret the generated responses accurately, two further hurdles must be confronted: a) LLMs' inability to perform formal reasoning and b) the semantic shift between the contemporary language used to interact with the model and the historical language of the source. So-called prompt engineering, that is, the pursuit of better answers through careful design of the prompt, is a low-cost technique with the potential to mitigate these challenges and is explored thoroughly throughout the thesis. Chatting with the model and critically comparing the initial prompts with the responses generated from the retrieved excerpts of the source thus becomes a novel research method for investigating the overall capacity of LLMs to facilitate historical research.
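To make these two steps concrete, the sketch below shows, again under stated assumptions, how an edition might be split into short, metadata-tagged chunks and how a retrieved chunk could be wrapped in an engineered prompt that names the historical context of the source. The Chunk fields, the naive splitting strategy and the prompt wording are hypothetical illustrations, not the pipeline actually used in the thesis.

```python
# Illustrative chunking and prompt-engineering sketch (not the thesis pipeline).
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    edition: str   # e.g. "1779/1780" or "1830"
    page: str      # page reference within the Handbuch

def split_into_chunks(text: str, edition: str, max_chars: int = 500) -> list[Chunk]:
    """Naive fixed-length splitter; an actual pipeline may segment differently."""
    return [
        Chunk(text=text[i:i + max_chars], edition=edition, page="n/a")
        for i in range(0, len(text), max_chars)
    ]

def build_prompt(question: str, chunk: Chunk) -> str:
    """Prompt engineering example: state the historical context explicitly
    and restrict the model to the retrieved excerpt."""
    return (
        "You are assisting a historian. The excerpt below comes from "
        f"J. F. Blumenbach's Handbuch der Naturgeschichte ({chunk.edition} edition) "
        "and uses late 18th- and early 19th-century German terminology.\n"
        f"Excerpt (p. {chunk.page}): {chunk.text}\n\n"
        f"Question: {question}\n"
        "Answer only on the basis of the excerpt, and flag any term whose "
        "historical meaning may differ from modern usage."
    )
```

Varying the wording of such a prompt, and comparing the resulting answers against the retrieved excerpts, is exactly the kind of low-cost experimentation the abstract describes.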
Talk as part of the "Digital History" research colloquium
Time: Wednesday, 15 November 2023, 4–6 pm
Venue: Zoom conference (access on request or via the mailing list)
Featured image
Left: Detail from Blumenbach, Johann Friedrich: Handbuch der Naturgeschichte. Bd. 1. Göttingen, 1779, S. 81. In: Deutsches Textarchiv https://www.deutschestextarchiv.de/blumenbach_naturgeschichte_1779/103, last accessed 13.11.2023.
Right: AI-generated through perchance.org.
OpenEdition suggests citing this post as follows:
Digital History Berlin (editorial team) (13 November 2023). Olga Mlynarczyk: Learning to think like the past. Applicability of retrieval augmented LLMs in historical research. Digital History Berlin. Retrieved 15 October 2024 from https://doi.org/10.58079/nl4x