Santiago Torres-Garcia


2025

FrontierScience Bench: Evaluating AI Research Capabilities in LLMs
Matthew Li | Santiago Torres-Garcia | Shayan Halder | Phani Kuppa | Sean O’Brien | Vasu Sharma | Kevin Zhu | Sunishchal Dev
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)

Large language models (LLMs) have shown remarkable capabilities across various tasks, yet their potential to reason about and construct scientific methodologies remains underexplored. This work introduces a novel benchmark evaluating LLMs’ capacity to predict methodological details in AI research papers. We construct a dataset of 88 papers with redacted methodology sections and zero-shot prompt several state-of-the-art LLMs to generate methodology predictions. Our evaluation framework then employs an LLM-as-judge system with multiple LLM judges, majority voting, and self-omission techniques to minimize biases. We validate our LLM judge scores against human judgments. We then briefly analyze the judging results of our zero-shot prediction pipeline; the results suggest that even state-of-the-art LLMs struggle with methodology generation without more advanced techniques. This benchmark lays the groundwork for future research into evaluating LLMs’ potential for aiding in AI research.
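The abstract's judging protocol (multiple LLM judges, majority voting, and self-omission) can be illustrated with a minimal sketch. The function names, the family-matching heuristic, and the discrete score scale below are assumptions for illustration only, not details taken from the paper:

```python
from collections import Counter

def same_family(judge: str, candidate: str) -> bool:
    """Crude self-omission check: treat models sharing a vendor prefix
    (e.g. 'gpt', 'claude') as the same family. Heuristic assumption."""
    return judge.split("-")[0] == candidate.split("-")[0]

def aggregate_scores(scores_by_judge: dict[str, int], candidate: str) -> int:
    """Majority vote over judge scores, omitting any judge from the
    candidate model's own family to reduce self-preference bias."""
    eligible = {j: s for j, s in scores_by_judge.items()
                if not same_family(j, candidate)}
    counts = Counter(eligible.values())
    return counts.most_common(1)[0][0]

# Example: three hypothetical judges score a methodology prediction
# produced by a GPT-family model; the GPT judge is omitted.
print(aggregate_scores({"gpt-4o": 4, "claude-3": 3, "gemini-1.5": 3}, "gpt-4o"))  # -> 3
```

In this sketch, ties and the exact scoring rubric are left unspecified; the paper's actual judging framework may aggregate scores differently.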