Vitaly Lavrukhin


2025

Extending Automatic Machine Translation Evaluation to Book-Length Documents
Kuang-Da Wang | Shuoyang Ding | Chao-Han Huck Yang | Ping-Chun Hsieh | Wen-Chih Peng | Vitaly Lavrukhin | Boris Ginsburg
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Despite Large Language Models (LLMs) demonstrating superior translation performance and long-context capabilities, evaluation methodologies remain constrained to sentence-level assessment due to dataset limitations, token number restrictions in metrics, and rigid sentence boundary requirements. We introduce SEGALE, an evaluation scheme that extends existing automatic metrics to long-document translation by treating documents as continuous text and applying sentence segmentation and alignment methods. Our approach enables previously unattainable document-level evaluation, handling translations of arbitrary length generated with document-level prompts while accounting for under-/over-translations and varied sentence boundaries. Experiments show our scheme significantly outperforms existing long-form document evaluation schemes, while being comparable to evaluations performed with ground-truth sentence alignments. Additionally, we apply our scheme to book-length texts and newly demonstrate that many open-weight LLMs fail to effectively translate documents at their reported maximum context lengths.
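As a rough illustration of the segment-then-align idea behind this scheme, the sketch below treats hypothesis and reference documents as continuous text, splits them into sentences, aligns the segments, and averages an existing sentence-level metric over the aligned pairs. The segmentation, alignment, and scoring functions here are simplified placeholders, not the actual SEGALE implementation.

    # Hypothetical sketch of document-level scoring via segment-then-align.
    import re

    def segment(doc: str) -> list[str]:
        # Naive sentence segmentation; a real system would use a trained segmenter.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc.strip()) if s.strip()]

    def align(hyp_sents: list[str], ref_sents: list[str]) -> list[tuple[str, str]]:
        # Placeholder 1-to-1 alignment by position; the actual scheme uses a proper
        # alignment method and handles under-/over-translation via empty sides.
        pairs = []
        for i in range(max(len(hyp_sents), len(ref_sents))):
            hyp = hyp_sents[i] if i < len(hyp_sents) else ""
            ref = ref_sents[i] if i < len(ref_sents) else ""
            pairs.append((hyp, ref))
        return pairs

    def score_document(hyp_doc: str, ref_doc: str, sentence_metric) -> float:
        # sentence_metric is any existing segment-level metric, e.g. a COMET-style scorer.
        pairs = align(segment(hyp_doc), segment(ref_doc))
        scores = [sentence_metric(hyp, ref) for hyp, ref in pairs]
        return sum(scores) / len(scores) if scores else 0.0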

Anticipating Future with Large Language Model for Simultaneous Machine Translation
Siqi Ouyang | Oleksii Hrinchuk | Zhehuai Chen | Vitaly Lavrukhin | Jagadeesh Balam | Lei Li | Boris Ginsburg
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Simultaneous machine translation (SMT) takes streaming input utterances and incrementally produces target text. Existing SMT methods use only the partial utterance that has already arrived and the hypothesis generated so far. Motivated by human interpreters’ technique of forecasting future words before hearing them, we propose Translation by Anticipating Future (TAF), a method that improves translation quality while retaining low latency. Its core idea is to use a large language model (LLM) to predict future source words and translate opportunistically without introducing too much risk. We evaluate TAF and multiple SMT baselines on four language directions. Experiments show that TAF achieves the best translation quality-latency trade-off and outperforms the baselines by up to 5 BLEU points at the same latency (three words).
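One way to picture the anticipation idea is the loop below: an LLM guesses a few upcoming source words, the system translates both with and without the guess, and only the target prefix on which the two hypotheses agree is committed. The agreement-based risk control and all function names here are illustrative assumptions, not necessarily the policy used in the paper.

    # Illustrative anticipate-then-translate loop; predict_future and translate
    # are hypothetical callables supplied by the caller.

    def longest_common_prefix(a: list[str], b: list[str]) -> list[str]:
        prefix = []
        for x, y in zip(a, b):
            if x != y:
                break
            prefix.append(x)
        return prefix

    def simultaneous_translate(source_stream, predict_future, translate):
        committed: list[str] = []   # target words already emitted
        received: list[str] = []    # source words that have arrived so far
        for word in source_stream:
            received.append(word)
            # The LLM anticipates a few future source words before they arrive.
            anticipated = received + predict_future(received)
            # Translate with and without the anticipated continuation and commit
            # only the stable target prefix, limiting the risk of wrong guesses.
            hyp_ahead = translate(anticipated)
            hyp_safe = translate(received)
            stable = longest_common_prefix(hyp_ahead, hyp_safe)
            new_words = stable[len(committed):]
            committed.extend(new_words)
            yield new_words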

Nvidia-Nemo’s WMT 2025 Metrics Shared Task Submission
Brian Yan | Shuoyang Ding | Kuang-Da Wang | Siqi Ouyang | Oleksii Hrinchuk | Vitaly Lavrukhin | Boris Ginsburg
Proceedings of the Tenth Conference on Machine Translation

This paper describes Nvidia-Nemo’s WMT 2025 Metrics Shared Task submission. We investigated two strategies for extending Machine Translation (MT) evaluation to unsegmented documents: 1) first segmenting into sentences and then applying regression-based metrics, and 2) directly utilizing the long-context capabilities of LLMs. A baseline comparison of the segmentation-based and LLM-based metrics on the WMT 2023-24 evaluation sets indicated that the former performs more robustly across language pairs. We therefore sought to improve the LLM-based approach by incorporating relative evaluation: this setting jointly evaluates all candidate translations at once, relative to each other, rather than evaluating each separately. Our experiments with the open-source Qwen3 LLM show that relative evaluation improves score correlations with human judgment, but only if the task is structured as a two-stage evaluate-then-refine problem.
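A minimal sketch of the two-stage evaluate-then-refine setting described above: each candidate is first scored independently, then all candidates are shown to the model together with their initial scores and re-scored relative to each other. The prompts and the llm() callable are hypothetical stand-ins, not the submitted system.

    # Hypothetical two-stage evaluate-then-refine relative scoring with an LLM.

    def evaluate_then_refine(source: str, candidates: list[str], llm) -> list[float]:
        # Stage 1: score each candidate translation independently (0-100).
        initial = [
            float(llm(f"Source: {source}\nTranslation: {cand}\n"
                      "Rate the translation quality from 0 to 100. Answer with a number only."))
            for cand in candidates
        ]
        # Stage 2: present all candidates with their initial scores and ask the
        # model to refine the scores jointly, relative to each other.
        listing = "\n".join(
            f"[{i}] initial_score={score:.0f}: {cand}"
            for i, (cand, score) in enumerate(zip(candidates, initial))
        )
        refined = llm(
            f"Source: {source}\nCandidate translations with initial scores:\n{listing}\n"
            "Refine the scores so they rank the candidates correctly relative to each other. "
            "Return one number per line, in the same order."
        )
        return [float(line) for line in refined.splitlines()[:len(candidates)]]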

2018

OpenSeq2Seq: Extensible Toolkit for Distributed and Mixed Precision Training of Sequence-to-Sequence Models
Oleksii Kuchaiev | Boris Ginsburg | Igor Gitman | Vitaly Lavrukhin | Carl Case | Paulius Micikevicius
Proceedings of Workshop for NLP Open Source Software (NLP-OSS)

We present OpenSeq2Seq, an open-source toolkit for training sequence-to-sequence models. The main goal of the toolkit is to let researchers explore different sequence-to-sequence architectures as efficiently as possible. This efficiency is achieved through full support for distributed and mixed-precision training. OpenSeq2Seq provides building blocks for training encoder-decoder models for neural machine translation and automatic speech recognition. We plan to extend it to other modalities in the future.
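For readers unfamiliar with mixed-precision training, the generic PyTorch loop below illustrates the pattern: forward passes run in float16 where safe, and the loss is dynamically scaled to avoid underflow. This is only an illustration of the technique; OpenSeq2Seq itself is TensorFlow-based and exposes mixed precision and distributed training through its own configuration, not this API.

    # Generic mixed-precision training step (PyTorch AMP), for illustration only.
    import torch

    def train_step(model, optimizer, scaler, inputs, targets, loss_fn):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():      # run the forward pass in float16 where safe
            loss = loss_fn(model(inputs), targets)
        scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
        scaler.step(optimizer)               # unscale gradients, then take the optimizer step
        scaler.update()                      # adjust the loss scale dynamically
        return loss.item()

    # scaler = torch.cuda.amp.GradScaler()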