RoSE: Round-robin Synthetic Data Evaluation for Selecting LLM Generators without Human Test Sets

Jan Cegin, Branislav Pecher, Ivan Srba, Jakub Simko


Abstract
LLMs are powerful generators of synthetic data for training smaller, task-specific models. This is especially valuable for low-resource languages, where human-labelled data is scarce but LLMs can still produce high-quality text. However, LLMs differ in how useful their outputs are for training, and selecting the best LLM as a generator is challenging: extrinsic evaluation requires costly human annotations (often unavailable for low-resource languages), while intrinsic metrics correlate poorly with downstream performance. We introduce Round-robin Synthetic data Evaluation (RoSE), a proxy metric for selecting the best LLM generator without human test sets. RoSE trains a small model on the outputs of a candidate generator (LLM) and then evaluates it on synthetic examples generated by all other candidate LLMs; the final RoSE score is the mean performance of this small model. Across six LLMs, eleven languages, and three tasks (sentiment, topic, intent), RoSE identifies the optimal generator more often than any intrinsic heuristic and comes within 0.76 percentage points of the optimal-generator baseline. This result is measured in terms of downstream performance, obtained by training a small model on the chosen generator's outputs (optimal vs. proxy-metric-selected) and evaluating it on human-labelled test data. Additionally, RoSE is the only metric to achieve a positive correlation with performance on human test data.
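The round-robin scoring described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`train`, `accuracy`, `rose_score`, `select_generator`) are hypothetical, and a trivial majority-class classifier stands in for the small model trained on each generator's synthetic data.

```python
# Hypothetical sketch of RoSE scoring: train a small model on one
# candidate generator's synthetic data, evaluate it on every *other*
# candidate's synthetic data, and average the resulting accuracies.

from collections import Counter
from statistics import mean

def train(examples):
    """Toy stand-in for the small model: a majority-class classifier."""
    labels = [y for _, y in examples]
    majority = Counter(labels).most_common(1)[0][0]
    return lambda x: majority

def accuracy(model, examples):
    """Fraction of examples the model labels correctly."""
    return mean(1.0 if model(x) == y else 0.0 for x, y in examples)

def rose_score(generator, synthetic_data):
    """Mean accuracy of a model trained on `generator`'s synthetic data,
    evaluated on the synthetic data of all other candidate generators."""
    model = train(synthetic_data[generator])
    return mean(
        accuracy(model, examples)
        for other, examples in synthetic_data.items()
        if other != generator
    )

def select_generator(synthetic_data):
    """Pick the candidate LLM with the highest RoSE score."""
    return max(synthetic_data, key=lambda g: rose_score(g, synthetic_data))
```

In the paper's setting, `train` would fine-tune a small classifier on the candidate's synthetic examples, and `synthetic_data` would map each candidate LLM to its generated, labelled examples; the selection rule itself is just the argmax over mean cross-generator performance.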
Anthology ID:
2026.eacl-long.258
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
5530–5545
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.258/
Cite (ACL):
Jan Cegin, Branislav Pecher, Ivan Srba, and Jakub Simko. 2026. RoSE: Round-robin Synthetic Data Evaluation for Selecting LLM Generators without Human Test Sets. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5530–5545, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
RoSE: Round-robin Synthetic Data Evaluation for Selecting LLM Generators without Human Test Sets (Cegin et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.258.pdf