LLM’s Weakness in NER Doesn’t Stop It from Enhancing a Stronger SLM

Weilu Xu, Renfei Dang, Shujian Huang


Abstract
Large Language Models (LLMs) demonstrate strong semantic understanding and extensive knowledge, but struggle with Named Entity Recognition (NER) due to hallucination and high training costs. Meanwhile, supervised Small Language Models (SLMs) efficiently provide structured predictions but lack adaptability to unseen entities and complex contexts. In this study, we investigate how a relatively weaker LLM can effectively support a supervised model in NER tasks. We first improve the LLM using LoRA-based fine-tuning and similarity-based prompting, achieving performance comparable to an SLM baseline. To further improve results, we propose a fusion strategy that integrates both models: prioritising the SLM’s predictions while using LLM guidance in low-confidence cases. Our hybrid approach outperforms both baselines on three classic Chinese NER datasets.
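The abstract’s fusion rule — keep the SLM’s predictions and consult the LLM only when the SLM is uncertain — can be sketched as follows. This is a minimal illustrative sketch, not the authors’ implementation: the names (Span, slm_spans, llm_spans, fuse_predictions) and the 0.5 confidence threshold are assumptions introduced here for clarity.

```python
from dataclasses import dataclass


@dataclass
class Span:
    """A predicted entity span with its label and model confidence."""
    start: int
    end: int
    label: str
    confidence: float


def fuse_predictions(slm_spans, llm_spans, threshold=0.5):
    """Prefer the SLM's spans; fall back to LLM guidance for low-confidence ones.

    Hypothetical sketch of the confidence-based fusion described in the abstract;
    the threshold and matching criterion are illustrative assumptions.
    """
    fused = []
    for span in slm_spans:
        if span.confidence >= threshold:
            # High confidence: trust the supervised SLM prediction.
            fused.append(span)
        else:
            # Low confidence: look for an LLM span over the same characters
            # and use its label if one exists, otherwise keep the SLM span.
            match = next(
                (s for s in llm_spans if s.start == span.start and s.end == span.end),
                None,
            )
            fused.append(match or span)
    return fused
```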
Anthology ID:
2025.alp-1.21
Volume:
Proceedings of the Second Workshop on Ancient Language Processing
Month:
May
Year:
2025
Address:
The Albuquerque Convention Center, Laguna
Editors:
Adam Anderson, Shai Gordin, Bin Li, Yudong Liu, Marco C. Passarotti, Rachele Sprugnoli
Venues:
ALP | WS
Publisher:
Association for Computational Linguistics
Pages:
170–175
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.alp-1.21/
Cite (ACL):
Weilu Xu, Renfei Dang, and Shujian Huang. 2025. LLM’s Weakness in NER Doesn’t Stop It from Enhancing a Stronger SLM. In Proceedings of the Second Workshop on Ancient Language Processing, pages 170–175, The Albuquerque Convention Center, Laguna. Association for Computational Linguistics.
Cite (Informal):
LLM’s Weakness in NER Doesn’t Stop It from Enhancing a Stronger SLM (Xu et al., ALP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.alp-1.21.pdf