FrontierScience Bench: Evaluating AI Research Capabilities in LLMs

Matthew Li, Santiago Torres-Garcia, Shayan Halder, Phani Kuppa, Sean O’Brien, Vasu Sharma, Kevin Zhu, Sunishchal Dev


Abstract
Large language models (LLMs) have shown remarkable capabilities across various tasks, yet their potential to reason about and construct scientific methodologies remains underexplored. This work introduces a novel benchmark evaluating LLMs’ capacity to predict methodological details in AI research papers. We construct a dataset of 88 papers with redacted methodology sections and zero-shot prompt several state-of-the-art LLMs to generate methodology predictions. Our evaluation framework then employs an LLM-as-judge system with multiple LLM judges, majority voting, and self-omission techniques to minimize biases. We validate our LLM judge scores against human judgments. We then briefly analyze the judging results of our zero-shot prediction pipeline; the results suggest that even state-of-the-art LLMs struggle with methodology generation without more advanced techniques. This benchmark lays the groundwork for future research into evaluating LLMs’ potential to aid in AI research.
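The abstract describes an evaluation protocol combining multiple LLM judges, self-omission, and majority voting. The snippet below is a minimal sketch of that idea, not the authors’ implementation; the helper `query_judge`, the verdict labels, and the model-name arguments are all hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of an LLM-as-judge pipeline with
# self-omission and majority voting, as described in the abstract.
from collections import Counter
from typing import Callable, List

def query_judge(judge_model: str, prediction: str, reference: str) -> str:
    """Hypothetical call to a single judge LLM; returns a verdict label such
    as 'match' or 'no_match'. Replace with a real API call in practice."""
    raise NotImplementedError

def judge_prediction(
    prediction: str,
    reference: str,
    predictor_model: str,
    judge_models: List[str],
    ask_judge: Callable[[str, str, str], str] = query_judge,
) -> str:
    # Self-omission: a model never judges its own prediction, which reduces
    # self-preference bias.
    eligible_judges = [m for m in judge_models if m != predictor_model]

    # Collect one verdict per remaining judge.
    verdicts = [ask_judge(m, prediction, reference) for m in eligible_judges]

    # Majority vote over the judges' verdicts decides the final label.
    return Counter(verdicts).most_common(1)[0][0]
```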
Anthology ID:
2025.realm-1.31
Volume:
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Ehsan Kamalloo, Nicolas Gontier, Xing Han Lu, Nouha Dziri, Shikhar Murty, Alexandre Lacoste
Venues:
REALM | WS
Publisher:
Association for Computational Linguistics
Pages:
428–453
URL:
https://preview.aclanthology.org/display_plenaries/2025.realm-1.31/
Cite (ACL):
Matthew Li, Santiago Torres-Garcia, Shayan Halder, Phani Kuppa, Sean O’Brien, Vasu Sharma, Kevin Zhu, and Sunishchal Dev. 2025. FrontierScience Bench: Evaluating AI Research Capabilities in LLMs. In Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025), pages 428–453, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
FrontierScience Bench: Evaluating AI Research Capabilities in LLMs (Li et al., REALM 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.realm-1.31.pdf