Improving LLMs’ Learning of Coreference Resolution
Yujian Gan, Yuan Liang, Yanni Lin, Juntao Yu, Massimo Poesio
Abstract
Coreference Resolution (CR) is crucial for many NLP tasks, but existing LLMs struggle with hallucination and under-performance. In this paper, we investigate the limitations of two existing LLM-based approaches to CR, the Question-Answering (QA) Template and the Document Template methods, and propose two novel techniques: Reversed Training with Joint Inference and Iterative Document Generation. Our experiments show that Reversed Training improves the QA Template method, while Iterative Document Generation eliminates hallucinations in the generated source text and boosts coreference resolution. Integrating these techniques offers an effective and robust solution to LLM-based coreference resolution.
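To make the QA Template idea concrete, here is a minimal Python sketch of how coreference can be framed as question answering over a document. The prompt wording, the example document, and the helper function are illustrative assumptions for this page, not the exact templates or code used in the paper.

```python
# Hypothetical sketch of a QA-template prompt for coreference resolution.
# The wording and helper below are assumptions, not the paper's templates.

def build_qa_prompt(document: str, mention: str) -> str:
    """Frame coreference as a question about a single target mention."""
    return (
        "Read the document and answer the question.\n\n"
        f"Document: {document}\n\n"
        f"Question: Which earlier mention, if any, does \"{mention}\" "
        "refer to? Answer with the exact span, or \"none\"."
    )

doc = "Alice met Bob at the station. She was early."
# The prompt would be sent to an LLM; here we just print it.
print(build_qa_prompt(doc, "She"))
```

In contrast, a Document Template approach would ask the model to regenerate the whole document with coreference links marked inline, which is where hallucinated source text can arise and where the paper's Iterative Document Generation is aimed.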
- Anthology ID: 2025.sigdial-1.25
- Volume: Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
- Month: August
- Year: 2025
- Address: Avignon, France
- Editors: Frédéric Béchet, Fabrice Lefèvre, Nicholas Asher, Seokhwan Kim, Teva Merlin
- Venue: SIGDIAL
- SIG: SIGDIAL
- Publisher: Association for Computational Linguistics
- Pages: 311–321
- URL: https://preview.aclanthology.org/corrections-2025-10/2025.sigdial-1.25/
- Cite (ACL): Yujian Gan, Yuan Liang, Yanni Lin, Juntao Yu, and Massimo Poesio. 2025. Improving LLMs’ Learning of Coreference Resolution. In Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 311–321, Avignon, France. Association for Computational Linguistics.
- Cite (Informal): Improving LLMs’ Learning of Coreference Resolution (Gan et al., SIGDIAL 2025)
- PDF: https://preview.aclanthology.org/corrections-2025-10/2025.sigdial-1.25.pdf