A Dataset for Expert Reviewer Recommendation with Large Language Models as Zero-shot Rankers
Vanja M. Karan, Stephen McQuistin, Ryo Yanagida, Colin Perkins, Gareth Tyson, Ignacio Castro, Patrick G.T. Healey, Matthew Purver
Abstract
The task of reviewer recommendation is increasingly important, with current techniques relying on general models of text relevance. However, state-of-the-art (SotA) systems still have relatively high error rates. Two possible reasons for this are the lack of large datasets and the fact that large language models (LLMs) have not yet been applied to the task. To fill these gaps, we first create a substantial new dataset in the domain of Internet specification documents; we then introduce the use of LLMs and evaluate their performance. We find that LLMs with prompting can improve on the SotA in some cases, but they are not a cure-all: this task provides a challenging setting for prompt-based methods.
- Anthology ID:
- 2025.coling-main.756
- Volume:
- Proceedings of the 31st International Conference on Computational Linguistics
- Month:
- January
- Year:
- 2025
- Address:
- Abu Dhabi, UAE
- Editors:
- Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
- Venue:
- COLING
- Publisher:
- Association for Computational Linguistics
- Pages:
- 11422–11427
- URL:
- https://preview.aclanthology.org/jlcl-multiple-ingestion/2025.coling-main.756/
- Cite (ACL):
- Vanja M. Karan, Stephen McQuistin, Ryo Yanagida, Colin Perkins, Gareth Tyson, Ignacio Castro, Patrick G.T. Healey, and Matthew Purver. 2025. A Dataset for Expert Reviewer Recommendation with Large Language Models as Zero-shot Rankers. In Proceedings of the 31st International Conference on Computational Linguistics, pages 11422–11427, Abu Dhabi, UAE. Association for Computational Linguistics.
- Cite (Informal):
- A Dataset for Expert Reviewer Recommendation with Large Language Models as Zero-shot Rankers (Karan et al., COLING 2025)
- PDF:
- https://preview.aclanthology.org/jlcl-multiple-ingestion/2025.coling-main.756.pdf