Improving Named Entity Recognition for Low-Resource Languages Using Large Language Models: A Ukrainian Case Study

Vladyslav Radchenko, Nazarii Drushchak


Abstract
Named Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP), yet achieving high performance for low-resource languages remains challenging due to limited annotated data and linguistic complexity. Ukrainian exemplifies these issues with its rich morphology and scarce NLP resources. Recent advances in Large Language Models (LLMs) demonstrate their ability to generalize across diverse languages and domains, offering promising solutions without extensive annotations. This research explores adapting state-of-the-art LLMs to Ukrainian through prompt engineering, including chain-of-thought (CoT) strategies, and model refinement via Supervised Fine-Tuning (SFT). Our best model achieves 0.89 F1 on the NER-UK 2.0 benchmark, matching the performance of advanced encoder-only baselines. These findings highlight practical pathways for improving NER in low-resource contexts, promoting more accessible and scalable language technologies.
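The abstract names the two ingredients of the approach: prompt engineering with chain-of-thought reasoning, and supervised fine-tuning. As an illustration only, below is a minimal Python sketch of what a CoT-style NER prompt for Ukrainian could look like; the label set and the `call_llm` helper are hypothetical stand-ins, not the authors' actual prompts or the NER-UK 2.0 schema.

```python
import json

# Illustrative label subset only: NER-UK 2.0 defines its own, larger tag
# inventory, so do not treat this list as the benchmark's actual schema.
LABELS = ["PERS", "ORG", "LOC", "MISC"]

COT_PROMPT = """You are a named entity recognition system for Ukrainian.
First, reason step by step: list candidate spans, then assign each span
one of these labels: {labels}.
Finally, output a JSON list of objects with "text" and "label" keys.

Sentence: {sentence}
"""

def extract_entities(sentence: str, call_llm) -> list[dict]:
    """Run a chain-of-thought NER prompt through an arbitrary LLM client.

    `call_llm` is a caller-supplied function (hypothetical here) that takes
    a prompt string and returns the model's raw text completion.
    """
    prompt = COT_PROMPT.format(labels=", ".join(LABELS), sentence=sentence)
    raw = call_llm(prompt)
    # The model is asked to end with a JSON list, so take the last
    # bracket-delimited chunk; the reasoning text before it is discarded.
    start = raw.rfind("[")
    try:
        return json.loads(raw[start:])
    except ValueError:  # covers json.JSONDecodeError
        return []  # fail closed: no entities on malformed output
```

The sketch covers only the prompting side; the paper's second direction, SFT, would instead adapt a model on gold NER-UK 2.0 annotations rendered in a comparable task format.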
Anthology ID: 2025.unlp-1.3
Volume: Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
Month: July
Year: 2025
Address: Vienna, Austria (online)
Editor: Mariana Romanyshyn
Venues: UNLP | WS
Publisher: Association for Computational Linguistics
Pages: 27–35
URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.unlp-1.3/
Cite (ACL): Vladyslav Radchenko and Nazarii Drushchak. 2025. Improving Named Entity Recognition for Low-Resource Languages Using Large Language Models: A Ukrainian Case Study. In Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025), pages 27–35, Vienna, Austria (online). Association for Computational Linguistics.
Cite (Informal): Improving Named Entity Recognition for Low-Resource Languages Using Large Language Models: A Ukrainian Case Study (Radchenko & Drushchak, UNLP 2025)
PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.unlp-1.3.pdf