Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models

Tingyu Xie, Qi Li, Yan Zhang, Zuozhu Liu, Hongwei Wang


Abstract
Exploring the application of powerful large language models (LLMs) to the named entity recognition (NER) task has drawn much attention recently. This work pushes the performance boundary of zero-shot NER with LLMs by proposing a training-free self-improving framework, which utilizes an unlabeled corpus to stimulate the self-learning ability of LLMs. First, we use the LLM to make predictions on the unlabeled corpus via self-consistency, obtaining a self-annotated dataset. Second, we explore various strategies for selecting reliable annotations to form a reliable self-annotated dataset. Finally, for each test input, we retrieve demonstrations from the reliable self-annotated dataset and perform inference via in-context learning. Experiments on four benchmarks show substantial performance improvements achieved by our framework. Through comprehensive experimental analysis, we find that increasing the size of the unlabeled corpus or the number of self-improving iterations does not guarantee further gains, but performance may be boosted by more advanced strategies for reliable annotation selection.
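The abstract describes a three-step pipeline: self-annotation with self-consistency voting, reliable-annotation selection, and demonstration retrieval for in-context inference. Below is a minimal Python sketch of that control flow; the callables llm_predict, llm_predict_with_demos, and embed, along with the vote thresholds, are hypothetical placeholders and not the authors' implementation.

```python
# Minimal sketch of the three-step self-improving framework from the abstract.
# llm_predict, llm_predict_with_demos, embed, and the thresholds are
# hypothetical placeholders for illustration only.
from collections import Counter

import numpy as np


def self_annotate(llm_predict, unlabeled_texts, num_samples=5):
    """Step 1: self-annotate the unlabeled corpus via self-consistency.

    llm_predict(text) is assumed to return a list of (entity_span, type)
    pairs; we sample it several times and count votes for each pair.
    """
    dataset = []
    for text in unlabeled_texts:
        votes = Counter()
        for _ in range(num_samples):
            for pair in llm_predict(text):
                votes[pair] += 1
        dataset.append((text, votes))
    return dataset


def select_reliable(dataset, min_votes=4):
    """Step 2: keep only annotations that most samples agreed on."""
    reliable = []
    for text, votes in dataset:
        entities = [pair for pair, n in votes.items() if n >= min_votes]
        if entities:  # drop sentences with no confident annotation
            reliable.append((text, entities))
    return reliable


def predict_with_demos(llm_predict_with_demos, embed, reliable, test_text, k=8):
    """Step 3: retrieve the k most similar self-annotated examples as
    in-context demonstrations, then run inference on the test input."""
    test_vec = embed(test_text)
    ranked = sorted(
        reliable,
        key=lambda ex: float(np.dot(embed(ex[0]), test_vec)),
        reverse=True,
    )
    return llm_predict_with_demos(test_text, demos=ranked[:k])
```

The paper explores multiple strategies for reliable annotation selection; the single vote-count threshold above merely stands in for whichever strategy is chosen.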
Anthology ID: 2024.naacl-short.49
Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 583–593
URL: https://aclanthology.org/2024.naacl-short.49
Cite (ACL): Tingyu Xie, Qi Li, Yan Zhang, Zuozhu Liu, and Hongwei Wang. 2024. Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 583–593, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models (Xie et al., NAACL 2024)
PDF: https://preview.aclanthology.org/ingestion-checklist/2024.naacl-short.49.pdf