YESciEval: Robust LLM-as-a-Judge for Scientific Question Answering

Jennifer D’Souza, Hamed Babaei Giglou, Quentin Münch


Abstract
Large Language Models (LLMs) drive scientific question answering on modern search engines, yet their evaluation robustness remains underexplored. We introduce YESciEval, an open-source framework that combines fine-grained rubric-based assessment with reinforcement learning to mitigate optimism bias in LLM evaluators. We release multidisciplinary science Q&A datasets, including adversarial variants, with evaluation scores from multiple LLMs. Independent of proprietary models and human feedback, our approach enables scalable, cost-free evaluation. By advancing reliable LLM-as-a-judge models, this work supports AI alignment and fosters the robust, transparent evaluation essential for scientific inquiry.
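For readers unfamiliar with the general pattern the abstract refers to, the following is a minimal illustrative sketch of rubric-based LLM-as-a-judge scoring. It is not the YESciEval implementation: the rubric names, prompt wording, and the score_answer/call_llm functions are assumptions chosen for illustration only.

# Illustrative sketch of rubric-based LLM-as-a-judge scoring (hypothetical;
# not the YESciEval API). A judge model is prompted with a fixed rubric and
# asked to return a 1-5 score per criterion for a candidate answer.
from typing import Callable, Dict
import json

# Hypothetical rubric dimensions, for illustration only.
RUBRIC = ["relevance", "correctness", "coherence", "conciseness"]

PROMPT_TEMPLATE = """You are a strict evaluator of scientific answers.
Question: {question}
Answer: {answer}
Rate the answer from 1 (poor) to 5 (excellent) on each criterion:
{criteria}
Respond with a JSON object mapping each criterion to an integer score."""

def score_answer(question: str, answer: str,
                 call_llm: Callable[[str], str]) -> Dict[str, int]:
    """Build the rubric prompt, query a judge LLM, and parse its JSON scores."""
    prompt = PROMPT_TEMPLATE.format(
        question=question,
        answer=answer,
        criteria="\n".join(f"- {c}" for c in RUBRIC),
    )
    raw = call_llm(prompt)      # any text-in / text-out judge model
    scores = json.loads(raw)    # expected form: {"relevance": 4, ...}
    return {c: int(scores[c]) for c in RUBRIC}

if __name__ == "__main__":
    # Stub judge that returns middling scores, so the sketch runs offline.
    stub = lambda prompt: json.dumps({c: 3 for c in RUBRIC})
    print(score_answer("Why is the sky blue?", "Rayleigh scattering.", stub))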
Anthology ID:
2025.acl-long.675
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
13749–13783
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.675/
Cite (ACL):
Jennifer D’Souza, Hamed Babaei Giglou, and Quentin Münch. 2025. YESciEval: Robust LLM-as-a-Judge for Scientific Question Answering. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13749–13783, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
YESciEval: Robust LLM-as-a-Judge for Scientific Question Answering (D’Souza et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.675.pdf