Ardavan Saeedi
2025
LLMs are Better Than You Think: Label-Guided In-Context Learning for Named Entity Recognition
Fan Bai | Hamid Hassanzadeh | Ardavan Saeedi | Mark Dredze
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
In-context learning (ICL) enables large language models (LLMs) to perform new tasks using only a few demonstrations. In Named Entity Recognition (NER), demonstrations are typically selected based on semantic similarity to the test instance, ignoring training labels and resulting in suboptimal performance. We introduce DEER, a new method that leverages training labels through token-level statistics to improve ICL performance. DEER first enhances example selection with a label-guided, token-based retriever that prioritizes tokens most informative for entity recognition. It then prompts the LLM to revisit error-prone tokens, which are also identified using label statistics, and make targeted corrections. Evaluated on five NER datasets using four different LLMs, DEER consistently outperforms existing ICL methods and approaches the performance of supervised fine-tuning. Further analysis shows its effectiveness on both seen and unseen entities and its robustness in low-resource settings.
2016
Nonparametric Spherical Topic Modeling with Word Embeddings
Kayhan Batmanghelich | Ardavan Saeedi | Karthik Narasimhan | Sam Gershman
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Co-authors
- Fan Bai 1
- Kayhan Batmanghelich 1
- Mark Dredze 1
- Sam Gershman 1
- Hamid Hassanzadeh 1