Georeferencing complex relative locality descriptions with large language models

Publisher

Taylor and Francis Group

Rights

CC BY 4.0
(c) 2026 The Author(s)

Abstract

Georeferencing text documents has typically relied either on gazetteer-based methods that assign geographic coordinates to place names, or on language modelling approaches that associate textual terms with geographic locations. However, many locality descriptions specify positions relative to other places using spatial relationships, making geocoding based solely on place names or geo-indicative words inaccurate. This issue frequently arises in biological specimen collection records, where locations, particularly in records that pre-date GPS, are often described through narratives rather than coordinates. Accurate georeferencing is vital for biodiversity studies, yet the process remains labour-intensive, creating demand for automated georeferencing solutions. This paper explores the potential of Large Language Models (LLMs) to georeference complex locality descriptions automatically, focusing on the biodiversity collections domain. We first identified effective prompting patterns, then fine-tuned an LLM using Quantized Low-Rank Adaptation (QLoRA) on biodiversity datasets from multiple regions and languages. For a fixed amount of training data, our approach outperforms existing baselines, placing an average of 65% of records across datasets within a 10 km radius of the true coordinates. The best results, on the New York State dataset, were 85% within 10 km and 67% within 1 km. The selected LLM performs well on lengthy, complex descriptions, highlighting its potential for georeferencing intricate locality descriptions.
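
As a concrete illustration of the fine-tuning approach summarised above, the sketch below loads a causal LLM with 4-bit NF4 quantization and attaches trainable low-rank adapters, the two ingredients of QLoRA, using the Hugging Face transformers and peft libraries. This is a minimal sketch, not the paper's pipeline: the base model name, adapter hyperparameters, and prompt format are illustrative assumptions, since the actual configuration is not given on this page.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base model, for illustration only

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only parameters updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hypothetical training prompt: the model learns to complete a relative
# locality description with decimal latitude/longitude coordinates.
prompt = (
    "Georeference the following locality description as decimal latitude, longitude.\n"
    "Locality: 5 km NW of the river mouth, along the main highway\n"
    "Coordinates:"
)

At inference time, a model fine-tuned in this way would be prompted with a new locality description and its generated completion parsed as a coordinate pair.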

Citation

Fernando A, Ranathunga S, Stock K, Prasanna R, Jones CB. (2026). Georeferencing complex relative locality descriptions with large language models. International Journal of Geographical Information Science. Ahead of Print.

