PICLe: Pseudo-annotations for In-Context Learning in Low-Resource Named Entity Detection

Sepideh Mamooler, Syrielle Montariol, Alexander Mathis, Antoine Bosselut


Abstract
In-context learning (ICL) enables Large Language Models (LLMs) to perform tasks using few demonstrations, facilitating task adaptation when labeled examples are hard to come by. However, ICL is sensitive to the choice of demonstrations, and it remains unclear which demonstration attributes enable in-context generalization. In this work, we conduct a perturbation study of in-context demonstrations for low-resource Named Entity Detection (NED). Our surprising finding is that in-context demonstrations with partially correct annotated entity mentions can be as effective for task transfer as fully correct demonstrations. Based on our findings, we propose Pseudo-annotated In-Context Learning (PICLe), a framework for in-context learning with noisy, pseudo-annotated demonstrations. PICLe leverages LLMs to annotate large quantities of demonstrations in a zero-shot first pass. We then cluster these synthetic demonstrations, sample specific sets of in-context demonstrations from each cluster, and predict entity mentions using each set independently. Finally, we use self-verification to select the final set of entity mentions. We extensively evaluate PICLe on five biomedical NED datasets and show that, with zero human annotation, PICLe outperforms ICL in low-resource settings where few gold examples can be used as in-context demonstrations.
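The pipeline described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `zero_shot_annotate`, `cluster`, and `llm_predict` are trivial stubs standing in for real LLM calls and embedding-based clustering, and all parameter names (`k`, `n_demos`, `min_votes`) are assumptions.

```python
import random
from collections import Counter

def zero_shot_annotate(sentence):
    # Stub for the zero-shot pseudo-annotation pass: here we simply
    # tag capitalized tokens as entity mentions in place of an LLM.
    return [tok for tok in sentence.split() if tok[0].isupper()]

def cluster(demos, k):
    # Stub for clustering the pseudo-annotated demonstrations:
    # round-robin bucketing in place of embedding-based clustering.
    buckets = [[] for _ in range(k)]
    for i, demo in enumerate(demos):
        buckets[i % k].append(demo)
    return buckets

def llm_predict(test_sentence, demo_set):
    # Stub for the in-context prediction call; a real system would
    # prompt the LLM with the sampled demonstrations.
    return zero_shot_annotate(test_sentence)

def picle(unlabeled, test_sentence, k=3, n_demos=2, min_votes=2):
    # 1) Pseudo-annotate a pool of unlabeled sentences zero-shot.
    demos = [(s, zero_shot_annotate(s)) for s in unlabeled]
    # 2) Cluster the resulting synthetic demonstrations.
    clusters = cluster(demos, k)
    # 3) Sample a demonstration set per cluster; predict independently.
    votes = Counter()
    for c in clusters:
        demo_set = random.sample(c, min(n_demos, len(c)))
        for mention in set(llm_predict(test_sentence, demo_set)):
            votes[mention] += 1
    # 4) Self-verification: keep mentions agreed on by enough sets.
    return {m for m, v in votes.items() if v >= min_votes}
```

With the stubs above, `picle(["Aspirin helps pain", "Take Ibuprofen daily"], "Metformin treats diabetes")` returns `{"Metformin"}`, since every per-cluster prediction votes for the same mention and the vote count clears `min_votes`.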
Anthology ID:
2025.naacl-long.518
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
10314–10331
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.518/
Cite (ACL):
Sepideh Mamooler, Syrielle Montariol, Alexander Mathis, and Antoine Bosselut. 2025. PICLe: Pseudo-annotations for In-Context Learning in Low-Resource Named Entity Detection. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10314–10331, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
PICLe: Pseudo-annotations for In-Context Learning in Low-Resource Named Entity Detection (Mamooler et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.518.pdf