Nvidia-Nemo’s WMT 2025 Metrics Shared Task Submission

Brian Yan, Shuoyang Ding, Kuang-Da Wang, Siqi Ouyang, Oleksii Hrinchuk, Vitaly Lavrukhin, Boris Ginsburg


Abstract
This paper describes Nvidia-Nemo’s WMT 2025 Metrics Shared Task submission. We investigated two strategies for extending Machine Translation (MT) evaluation to unsegmented documents: 1) first segmenting into sentences and then applying regression-based metrics, and 2) directly utilizing the long-context capabilities of LLMs. A baseline comparison of the segmentation-based and LLM-based metrics on the WMT 2023-24 evaluation sets indicated that the former performs more robustly across language pairs. Thus we sought to improve the LLM-based approach by incorporating relative evaluation: this setting jointly evaluates all candidate translations at once and relative to each other, rather than evaluating each separately. Our experiments using the open-source Qwen3 LLM show that relative evaluation improves score correlations with human judgment, but only if the task is structured as a 2-stage evaluate-then-refine problem.
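
To make the 2-stage evaluate-then-refine idea from the abstract concrete, the sketch below shows one plausible way to structure the prompting loop in Python. The prompt wording, the 0-100 scoring scale, and the `call_llm` helper (a generic wrapper around an LLM such as Qwen3) are illustrative assumptions, not the authors' exact setup described in the paper.

```python
from typing import Callable, Dict

def relative_evaluate_then_refine(
    source_doc: str,
    candidates: Dict[str, str],      # system name -> candidate translation
    call_llm: Callable[[str], str],  # assumed LLM wrapper (e.g. around Qwen3)
) -> str:
    """Jointly score all candidate translations, then refine the draft scores."""
    numbered = "\n\n".join(
        f"[{name}]\n{translation}" for name, translation in candidates.items()
    )

    # Stage 1: evaluate all candidates at once, relative to each other.
    stage1_prompt = (
        "You are an expert translation evaluator.\n"
        f"Source document:\n{source_doc}\n\n"
        f"Candidate translations:\n{numbered}\n\n"
        "Compare the candidates against each other and draft a score (0-100) "
        "for each, with a brief justification."
    )
    draft = call_llm(stage1_prompt)

    # Stage 2: refine the draft into a final, internally consistent set of scores.
    stage2_prompt = (
        "Here is a draft relative evaluation of several translations:\n"
        f"{draft}\n\n"
        "Revise any inconsistent scores and output the final score for each "
        "candidate, one per line as 'name: score'."
    )
    return call_llm(stage2_prompt)
```

The key design point, as described in the abstract, is that all candidates appear in a single prompt so the model scores them relative to each other, and that a second pass revises the draft scores rather than emitting them in one shot.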
Anthology ID: 2025.wmt-1.66
Volume: Proceedings of the Tenth Conference on Machine Translation
Month: November
Year: 2025
Address: Suzhou, China
Editors: Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue: WMT
Publisher: Association for Computational Linguistics
Pages: 920–925
URL: https://preview.aclanthology.org/ingest-emnlp/2025.wmt-1.66/
Cite (ACL): Brian Yan, Shuoyang Ding, Kuang-Da Wang, Siqi Ouyang, Oleksii Hrinchuk, Vitaly Lavrukhin, and Boris Ginsburg. 2025. Nvidia-Nemo’s WMT 2025 Metrics Shared Task Submission. In Proceedings of the Tenth Conference on Machine Translation, pages 920–925, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Nvidia-Nemo’s WMT 2025 Metrics Shared Task Submission (Yan et al., WMT 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.wmt-1.66.pdf