Few-Shot Multilingual Coreference Resolution Using Long-Context Large Language Models

Moiz Sajid, Muhammad Fraz, Seemab Latif, Zuhair Zafar


Abstract
In this work, we present our system, which ranked second in the CRAC 2025 Shared Task on Multilingual Coreference Resolution (LLM Track). Our system relies primarily on long-context large language models (LLMs) used in a few-shot in-context learning setting. Among the approaches we explored, few-shot prompting proved the most effective, given the complexity of the task and the high-quality data with referential relationships provided as part of the competition. We employed Gemini 2.5 Pro, one of the strongest closed-source long-context LLMs available at the time of submission. Our system achieved a CoNLL F1 score of 61.74 on the mini-testset, and performance improved significantly as more few-shot examples were provided, which the model’s extended context window makes possible. While this approach involves trade-offs in inference cost and response latency, it highlights the potential of long-context LLMs for tackling multilingual coreference without task-specific fine-tuning. Although direct comparisons with traditional supervised systems are not straightforward, our findings offer valuable insights and open avenues for future work, particularly in expanding support for low-resource languages.
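To make the described setup concrete, below is a minimal sketch of few-shot in-context prompting for coreference with a long-context model. The prompt format, the bracketed cluster-ID markup, the example pairs, and the helper function are illustrative assumptions (the paper's exact prompts are not given here); only the general technique, few-shot prompting of Gemini 2.5 Pro, comes from the abstract.

```python
# Sketch of few-shot coreference prompting, assuming the google-generativeai
# client library (pip install google-generativeai). All prompt text and
# examples below are hypothetical, not the authors' actual prompts.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-2.5-pro")

# Hypothetical demonstrations: raw text paired with the same text where
# coreferring mentions are wrapped as "(ID mention )", so mentions of the
# same entity share a numeric cluster ID.
FEW_SHOT_EXAMPLES = [
    ("Mary saw John. She waved at him.",
     "(1 Mary ) saw (2 John ). (1 She ) waved at (2 him )."),
    ("The cat sat down because it was tired.",
     "(1 The cat ) sat down because (1 it ) was tired."),
]

INSTRUCTION = (
    "Resolve coreference in the input text. Wrap every mention in "
    "parentheses with a numeric cluster ID, so that mentions of the same "
    "entity share an ID. Output only the annotated text.\n"
)

def build_prompt(document: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Concatenate the instruction, the few-shot demonstrations, and the
    target document. A long context window lets many demonstrations fit,
    which is what drives the reported gains from adding examples."""
    parts = [INSTRUCTION]
    for text, annotated in examples:
        parts.append(f"Input: {text}\nOutput: {annotated}\n")
    parts.append(f"Input: {document}\nOutput:")
    return "\n".join(parts)

if __name__ == "__main__":
    doc = "Anna lost her keys, but she found them later."
    response = model.generate_content(build_prompt(doc))
    print(response.text)  # expected: text annotated with cluster IDs
```

In practice the annotated output would still need to be parsed back into mention spans and clusters for CoNLL F1 scoring; that post-processing step is omitted here.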
Anthology ID:
2025.crac-1.14
Volume:
Proceedings of the Eighth Workshop on Computational Models of Reference, Anaphora and Coreference
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Maciej Ogrodniczuk, Michal Novák, Massimo Poesio, Sameer Pradhan, Vincent Ng
Venue:
CRAC
Publisher:
Association for Computational Linguistics
Pages:
154–162
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.crac-1.14/
DOI:
10.18653/v1/2025.crac-1.14
Cite (ACL):
Moiz Sajid, Muhammad Fraz, Seemab Latif, and Zuhair Zafar. 2025. Few-Shot Multilingual Coreference Resolution Using Long-Context Large Language Models. In Proceedings of the Eighth Workshop on Computational Models of Reference, Anaphora and Coreference, pages 154–162, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Few-Shot Multilingual Coreference Resolution Using Long-Context Large Language Models (Sajid et al., CRAC 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.crac-1.14.pdf